Deployment of Machine Learning Models

Ratings: 4.95 / 5.00

Description

This course is for AI and ML Engineers, Practitioners, and Researchers who have already built an awesome Deep Learning model and have a great idea for an app, only to discover that deploying that model in a production app is not straightforward. The same holds if you want to build a robot that uses a camera sensor to perceive its surrounding environment, build a map of it, and eventually navigate it: even when your model performs great on your training machine, there is still a long journey ahead. Finally, Software Engineers, whose primary job is to build a working system or app, often find themselves needing to integrate an AI model into their software, which happens a lot today with the expansion of AI applications. They might get this model from a research team in their company, or use an API or a pre-trained model from the internet.

We cover all of those deployment scenarios, following the journey from a working trained model to an optimized deployed model. Our focus is mainly on computer vision (CV) deployment. We cover Mobile deployment on Android devices, Edge deployment on embedded boards like the Raspberry Pi, and Browser deployment, where your AI model runs in Chrome, Edge, Safari, or any other browser. We also cover Server deployment scenarios, which are common in highly scalable apps and systems with millions of users, as well as in industrial settings like AI visual inspection in factories.

While the course is mostly practical, focusing on “how” things are done and the best way of doing them, we also cover some theory about “what” those techniques are and “why” they are used.

This sometimes requires understanding new types of convolution operations that are optimized for speed and memory, or model compression techniques that make models suitable for Embedded and Edge deployment, concerns that were out of scope when the initial, already well-performing model was built.
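
As a concrete example of such an operation, the depthwise separable convolution used in mobile architectures like MobileNet factorizes a standard convolution into a depthwise and a pointwise step, cutting parameters and compute. Here is a minimal Keras sketch of the parameter savings (the layer sizes are illustrative):

```python
import tensorflow as tf

# Compare a standard convolution with a depthwise separable one
# on the same (illustrative) feature map: 224x224 with 32 channels.
inputs = tf.keras.Input(shape=(224, 224, 32))
standard = tf.keras.layers.Conv2D(64, 3, padding="same")(inputs)
separable = tf.keras.layers.SeparableConv2D(64, 3, padding="same")(inputs)

print(tf.keras.Model(inputs, standard).count_params())   # 18,496 parameters
print(tf.keras.Model(inputs, separable).count_params())  # 2,400 parameters
```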

What You Will Learn!

  • Define and understand the different deployment scenarios, whether Edge or Server deployment
  • Understand the constraints on each deployment scenario
  • Be able to choose the scenario suited to your practical case and design the proper system architecture for it
  • Deploy ML models to Edge and Mobile devices using TFLite tools (see the conversion sketch after this list)
  • Deploy ML models into Browsers using TFJS (see the browser conversion sketch after this list)
  • Define the different model serving qualities and understand their settings for production-level systems
  • Define the landscape of model serving options and be able to choose the proper one based on the needed qualities
  • Build a server model using model hubs and APIs like TFHub, TorchHub, or TF-API, customize it on your own data, or even build it from scratch
  • Serve a model using Flask, Django, or TF Serving, on custom infrastructure or in the Cloud (e.g., AWS EC2), using Docker containers (a minimal Flask example follows this list)
  • Convert models built in different frameworks to a common runtime format using ONNX (see the export sketch after this list)
  • Understand the full ML development cycle and phases
  • Be able to define MLOps, model drift and monitoring
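
To give a taste of the TFLite workflow from the list above, here is a minimal conversion sketch, assuming a trained model exported as a TensorFlow SavedModel; the paths are hypothetical placeholders:

```python
import tensorflow as tf

# Minimal sketch of TFLite post-training conversion.
# "saved_model_dir" is a hypothetical path to a trained SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default post-training quantization
tflite_model = converter.convert()

# Write the flatbuffer that an Android app or embedded board can load.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```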
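For browser deployment with TFJS, a trained Keras model is typically converted on the Python side and then loaded in the browser. A minimal sketch, assuming a trained Keras model saved at a hypothetical path:

```python
import tensorflow as tf
import tensorflowjs as tfjs

# Load a trained Keras model ("model.h5" is a hypothetical path)...
model = tf.keras.models.load_model("model.h5")

# ...and write the model.json plus weight shards that TFJS loads in the browser.
tfjs.converters.save_keras_model(model, "tfjs_model")
```

In the browser, the exported model.json is then loaded with TFJS and the model runs fully client-side.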
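On the serving side, here is a minimal Flask sketch of the kind covered in the course; the model path and the JSON request contract are assumptions for illustration, not a production setup:

```python
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the model once at startup ("model.h5" is a hypothetical path).
model = tf.keras.models.load_model("model.h5")

@app.route("/predict", methods=["POST"])
def predict():
    # Assumed request contract: {"inputs": <nested list matching the model's input shape>}
    data = np.asarray(request.get_json()["inputs"], dtype=np.float32)
    preds = model.predict(data)
    return jsonify({"outputs": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```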
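Finally, a sketch of the ONNX workflow: exporting a PyTorch model to the common ONNX format and sanity-checking it with ONNX Runtime. The model choice, file names, and tensor names are illustrative:

```python
import torch
import torchvision
import onnxruntime as ort

# Export an (untrained, illustrative) MobileNetV2 to ONNX.
model = torchvision.models.mobilenet_v2(weights=None)
model.eval()
dummy = torch.randn(1, 3, 224, 224)  # example input used to trace the graph
torch.onnx.export(model, dummy, "mobilenet_v2.onnx",
                  input_names=["input"], output_names=["logits"],
                  opset_version=13)

# Run the exported graph with ONNX Runtime to sanity-check the export.
session = ort.InferenceSession("mobilenet_v2.onnx", providers=["CPUExecutionProvider"])
logits = session.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 1000)
```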

Who Should Attend!

  • Software Engineers
  • Data Scientists
  • Computer Vision Engineers
  • Machine Learning Engineers