Machine learning model evaluation in Python

A practical course on how to evaluate the performance of a machine learning model using Python

Ratings: 5.00 / 5.00




Description

In this practical course, we focus on the performance evaluation of supervised machine learning models using the Python programming language.

After a model has been trained, or during hyperparameter tuning, we have to check its performance to assess whether it overfits. Performance metrics must therefore be chosen carefully, according to the project and its needs: the wrong metric can make an unreliable model look acceptable, while the right indicators add real value to a project.

With this course, you are going to learn:

  1. Performance metrics for regression models (R-squared, Mean Absolute Error, Mean Absolute Percentage Error)

  2. Performance metrics for binary classification models (confusion matrix, precision, recall, accuracy, balanced accuracy, ROC curve and its area)

  3. Performance metrics for multi-class classification models (accuracy, balanced accuracy, macro averaged precision)
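As a taste of what the lessons cover, the regression and binary-classification metrics above can all be computed with scikit-learn's `metrics` module. This is a minimal sketch on small, made-up arrays (the data is purely illustrative, not from the course):

```python
import numpy as np
from sklearn.metrics import (
    r2_score, mean_absolute_error, mean_absolute_percentage_error,
    confusion_matrix, precision_score, recall_score,
    accuracy_score, balanced_accuracy_score, roc_auc_score,
)

# Hypothetical regression targets and predictions
y_true_reg = np.array([3.0, 5.0, 2.5, 7.0])
y_pred_reg = np.array([2.8, 5.3, 2.9, 6.5])

print(r2_score(y_true_reg, y_pred_reg))                      # R-squared
print(mean_absolute_error(y_true_reg, y_pred_reg))           # MAE
print(mean_absolute_percentage_error(y_true_reg, y_pred_reg))  # MAPE

# Hypothetical binary labels, hard predictions, and probability scores
y_true_clf = np.array([0, 0, 1, 1, 1, 0])
y_pred_clf = np.array([0, 1, 1, 1, 0, 0])
y_scores   = np.array([0.2, 0.6, 0.8, 0.9, 0.4, 0.1])

print(confusion_matrix(y_true_clf, y_pred_clf))      # rows: true, cols: predicted
print(precision_score(y_true_clf, y_pred_clf))
print(recall_score(y_true_clf, y_pred_clf))
print(accuracy_score(y_true_clf, y_pred_clf))
print(balanced_accuracy_score(y_true_clf, y_pred_clf))
print(roc_auc_score(y_true_clf, y_scores))           # area under the ROC curve
```

Note that the ROC AUC is computed from probability scores, not from the thresholded 0/1 predictions.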

All the lessons of this course start with a brief introduction and end with a practical example written in Python using its powerful scikit-learn library. The environment used is Jupyter, a standard in the data science industry, and all the Jupyter notebooks are downloadable.

This course is part of my Supervised Machine Learning in Python online course, so you'll find some lessons that are already included in the larger course.

What You Will Learn!

  • Regression metrics (R-squared, MAE, MAPE)
  • Confusion matrix
  • ROC curve and its area
  • Precision, Recall, F1 score
  • Accuracy, balanced accuracy
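The multi-class versions of these metrics follow the same scikit-learn API; per-class metrics such as precision and F1 just need an averaging strategy. A minimal sketch on hypothetical three-class labels (illustrative data, not from the course):

```python
from sklearn.metrics import (
    accuracy_score, balanced_accuracy_score, f1_score, precision_score,
)

# Hypothetical three-class true labels and predictions
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 2, 1, 1, 0, 2]

acc = accuracy_score(y_true, y_pred)
# Balanced accuracy: the unweighted mean of per-class recall
bal_acc = balanced_accuracy_score(y_true, y_pred)
# "macro" averaging computes the metric per class, then averages the results,
# so each class counts equally regardless of its size
macro_precision = precision_score(y_true, y_pred, average="macro")
macro_f1 = f1_score(y_true, y_pred, average="macro")
```

With `average="macro"`, a rare class influences the score as much as a frequent one, which is often what you want on imbalanced data.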

Who Should Attend!

  • Python developers
  • Data Scientists
  • Computer engineers
  • Researchers
  • Students