Deploy Serverless Machine Learning Models to AWS Lambda

Use the Serverless Framework to quickly deploy different ML models to the scalable and cost-effective AWS Lambda service.

Ratings: 4.27 / 5.00

Description

In this course you will discover a scalable, cost-effective, and fast way to deploy various machine learning models to production using the principles of serverless computing. Once you deploy your trained ML model to the cloud, the service provider (AWS in this course) takes care of managing the server infrastructure, automatic scaling, monitoring, security updates, and logging.

The free-tier AWS resources are sufficient for completing the entire course. If you exceed them, which is very unlikely, you will pay only for what you use.

By following the course lectures, you will learn about Amazon Web Services, especially Lambda, API Gateway, S3, and CloudWatch. You will walk through real-life use cases that deploy different kinds of machine learning models, such as NLP, deep learning computer vision, and regression models. We will use different ML frameworks (scikit-learn, spaCy, and Keras/TensorFlow) and show how to prepare them for AWS Lambda. You will also be introduced to the easy-to-use Serverless Framework, which makes creating and deploying Lambda functions straightforward.
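As a taste of what such a deployment looks like, here is a minimal sketch of a Lambda handler that loads a scikit-learn model from S3 and serves predictions behind API Gateway. The bucket name, object key, and request format are illustrative assumptions, not the course's exact code:

```python
# handler.py -- a minimal sketch of a Lambda function serving a scikit-learn model.
# The bucket name, object key, and feature layout are illustrative placeholders.
import json
import os

import boto3
import joblib

s3 = boto3.client("s3")
MODEL_BUCKET = os.environ.get("MODEL_BUCKET", "my-ml-models")   # hypothetical bucket
MODEL_KEY = os.environ.get("MODEL_KEY", "regressor.joblib")     # hypothetical key
_model = None  # cached between invocations of the same Lambda container


def _load_model():
    """Download the serialized model from S3 once and keep it in memory."""
    global _model
    if _model is None:
        local_path = "/tmp/model.joblib"   # /tmp is the only writable path in Lambda
        s3.download_file(MODEL_BUCKET, MODEL_KEY, local_path)
        _model = joblib.load(local_path)
    return _model


def predict(event, context):
    """API Gateway entry point: expects a JSON body with a 'features' list."""
    body = json.loads(event.get("body") or "{}")
    features = body.get("features", [])
    prediction = _load_model().predict([features]).tolist()
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": prediction}),
    }
```

Caching the model in a module-level variable means repeated invocations of a warm Lambda container skip the S3 download, which keeps response times low.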

Although this course does not focus much on techniques for training and fine-tuning machine learning models, it includes examples of training models in Jupyter notebooks and of using pre-trained models.
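For example, the notebook side of such a workflow might look roughly like the sketch below, which trains a small scikit-learn regressor and uploads the serialized model to S3 for the Lambda function to load. The dataset, bucket name, and key are illustrative assumptions:

```python
# A sketch of the notebook side: train a small scikit-learn regressor,
# serialize it, and upload it to S3 so a Lambda function can load it.
import boto3
import joblib
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

joblib.dump(model, "regressor.joblib")                      # serialize the trained model
boto3.client("s3").upload_file(
    "regressor.joblib", "my-ml-models", "regressor.joblib"  # hypothetical bucket and key
)
```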

What You Will Learn!

  • Deploy regression, NLP, and computer vision machine learning models to the scalable AWS Lambda environment
  • How to effectively prepare scikit-learn, spaCy, and Keras/TensorFlow models for deployment
  • How to use the basics of AWS and the Serverless Framework
  • How to monitor usage and secure access to deployed ML models and their APIs (a monitoring sketch follows this list)
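
As a rough illustration of the monitoring side, the sketch below pulls Lambda invocation counts from CloudWatch with boto3; the function name is a hypothetical placeholder, not one defined in the course:

```python
# A minimal sketch of checking how often a deployed model has been invoked,
# using CloudWatch metrics via boto3.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "predict"}],  # hypothetical function name
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one data point per hour
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))
```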

Who Should Attend!

  • Beginner Machine Learning and DevOps Engineers, Data Scientists, or Solution Architects
  • All Data Scientists and ML practitioners who need to deploy their trained ML models to production, quickly and at scale, without worrying much about infrastructure