Accelerate Deep Learning Workloads with Amazon SageMaker
Train, deploy, and scale deep learning models effectively using Amazon SageMaker
Author: Vadim Dabravolski
Publisher: Packt Publishing Limited
Publication date: 10/2022
Pages: 278
Format: Paperback
Language: English
ISBN: 9781801816441
Description not available.
Table of Contents
Introducing Deep Learning with Amazon SageMaker
Deep Learning Frameworks and Containers on SageMaker
Managing SageMaker Development Environment
Managing Deep Learning Datasets
Considering Hardware for Deep Learning Training
Engineering Distributed Training
Operationalizing Deep Learning Training
Considering Hardware For Inference
Implementing Model Servers
Operationalizing Inference Workloads
Subjects:
deep learning; Amazon SageMaker; machine learning; TensorFlow; PyTorch; distributed training