AI Quality Workshop: How to Test and Debug ML Models

Supercharge your ability to drive ML performance with ML testing, drift detection, debugging, and AI bias minimization.

Ratings: 4.42 / 5.00




Description

Want to skill up your ability to test and debug machine learning models? Ready to be a powerful contributor to the AI era, the next great wave in software and technology?

Learn from leading instructors who have taught at Carnegie Mellon University and Stanford University, and who have trained thousands of students around the globe, from hot startups to major global corporations:

  • You will learn the analytics that you need to drive model performance

  • You will understand how to create an automated test harness for easier, more effective ML testing

  • You will learn why AI explainability is the key to understanding the inner mechanics of your model and to rapid debugging

  • You will understand what Shapley values are, why they are so important, and how to make the most of them

  • You will be able to identify the types of drift that can derail model performance

  • You will learn how to debug model performance challenges

  • You will learn how to evaluate model fairness, identify when bias is occurring, and then address it

  • You will get access to some of the most powerful ML testing and debugging software tools available, for FREE
    (after signing up for the course, terms and conditions apply)
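To make the Shapley values bullet above concrete, here is a minimal sketch of exact Shapley value computation for feature attribution. Everything in it (the toy additive model, the baseline of zeros, the `shapley_values` helper) is illustrative and not taken from the course materials; real workflows typically use an approximation library such as SHAP, since exact enumeration is only feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_players):
    """Exact Shapley values by enumerating all coalitions (feasible for small n)."""
    phi = [0.0] * n_players
    for i in range(n_players):
        others = [p for p in range((n_players)) if p != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                s = len(coalition)
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(s) * factorial(n_players - s - 1) / factorial(n_players)
                phi[i] += weight * (value_fn(set(coalition) | {i}) - value_fn(set(coalition)))
    return phi

# Hypothetical additive model f(x) = 3*x0 + 2*x1; absent features fall back to a
# baseline of 0, so v(S) is the model output with only features in S "turned on".
instance = (1.0, 2.0)

def v(coalition):
    x = [instance[j] if j in coalition else 0.0 for j in range(2)]
    return 3 * x[0] + 2 * x[1]

print(shapley_values(v, 2))  # for an additive model, attributions are [3.0, 4.0]
```

For an additive model the attributions simply recover each feature's contribution, and they always sum to the gap between the model's output on the instance and on the baseline — the efficiency property that makes Shapley values useful for debugging.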

Testimonials from the live, virtual version of the course:

  • "This is what you would pay thousands of dollars for at a university." - Mike

  • "Excellent course!!! Super thanks to Professor Datta, Josh, Arri, and Rick!! :D" - Trevia

  • "Thank you so very much. I learned a ton. Great job!" - K. M.

  • "Fantastic series. Great explanations and great product. Thank you." - Santosh

  • "Thank you, everyone, for making this course available... wonderful sessions!" - Chris


What You Will Learn!

  • Rapidly evaluate machine learning models for performance
  • Identify and address model drift
  • Debug production ML models
  • Identify and address possible ML bias issues
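As a taste of the drift-detection topic above, here is a minimal sketch of one common monitoring statistic, the Population Stability Index (PSI), which compares a live feature distribution against the training distribution. The helper names, bin count, and data are illustrative assumptions, not the course's implementation; a rule of thumb often cited in industry is that PSI above roughly 0.2 signals meaningful drift.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample."""
    lo, hi = min(expected), max(expected)

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Map value to a bin; clamp out-of-range live values to the edge bins.
            idx = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        # Floor at a tiny fraction so empty bins don't blow up the log term.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]  # reference (training) data
same  = [random.gauss(0, 1) for _ in range(5000)]  # production data, no drift
drift = [random.gauss(1, 1) for _ in range(5000)]  # shifted mean: covariate drift

print(round(psi(train, same), 3))   # near 0: distribution is stable
print(round(psi(train, drift), 3))  # well above 0.2: flag for investigation
```

Running a check like this per feature on each scoring batch is one simple way to catch covariate drift before it degrades model performance.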

Who Should Attend!

  • Data Scientists and ML Engineers who are looking to improve their ability to test, evaluate, and debug machine learning models.