Metrics | Evaluating and Comparing Models
Metrics

When building a model, it is important to measure its performance.
We need a score that describes how well the model fits the data. Such a score is called a metric, and there are many metrics to choose from.
In this chapter, we will focus on the most commonly used ones.

We are already familiar with one metric, SSR (Sum of Squared Residuals), which we minimized to identify the optimal parameters.
Using our notation, we can express the formula for SSR in two equivalent ways, as shown below.
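Using standard linear regression notation (assumed here: $y_i$ for the actual target values, $\hat{y}_i$ for the model's predictions, $n$ for the number of instances, and $\beta_0, \dots, \beta_m$ for the parameters), SSR is conventionally written as:

$$\mathrm{SSR} = \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2$$

or equally, substituting the regression equation for the prediction:

$$\mathrm{SSR} = \sum_{i=1}^{n} \left(y_i - \left(\beta_0 + \beta_1 x_{i1} + \dots + \beta_m x_{im}\right)\right)^2$$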

This metric was useful for comparing models with an equal number of instances. However, it does not provide a comprehensive understanding of the model's performance. Here's why:

Consider the scenario where you have two models trained on different training sets (illustrated in the image below).

You can observe that the first model fits well but still has a higher SSR than the second model, which visually fits the data worse. This happens solely because the first model has many more data points, so the sum is larger even though, on average, its residuals are smaller.
Thus, taking the average of the squared residuals as a metric would describe the model better.
This is precisely what the Mean Squared Error (MSE) represents.

MSE

MSE can be expressed in two equivalent ways, as shown below.
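Using the same assumed notation, MSE is typically defined as:

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2$$

or equally, since it is just the average of the squared residuals:

$$\mathrm{MSE} = \frac{\mathrm{SSR}}{n}$$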

To calculate the MSE metric using Python, you can use NumPy's functions:
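A minimal sketch, assuming y_true and y_pred are NumPy arrays holding the actual and predicted target values (the numbers below are illustrative only):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])   # actual target values (example data)
y_pred = np.array([2.5, 5.5, 7.5, 8.0])   # predicted target values (example data)

# MSE is the mean of the squared residuals
mse = np.mean((y_true - y_pred) ** 2)
print(mse)
```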

Or you can use Scikit-learn's mean_squared_error() function:
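A sketch of that call; mean_squared_error() accepts array-likes of actual and predicted values:

```python
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 7.0, 9.0]   # actual target values (example data)
y_pred = [2.5, 5.5, 7.5, 8.0]   # predicted target values (example data)

# Returns the average of the squared differences between actual and predicted values
print(mean_squared_error(y_true, y_pred))
```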

Here, y_true is an array of actual target values, and y_pred is an array of predicted target values for the same features.

The issue with MSE is that it presents the error in a squared form.
For instance, let's assume the MSE of a model predicting house prices is 49 dollars squared. However, we are interested in the actual price, not the squared price as indicated by MSE.
Therefore, to obtain a metric with the same unit as the predicted value, we can take the square root of MSE, resulting in 7 dollars. This metric is known as Root Mean Squared Error (RMSE).

RMSE
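
As described above, RMSE is simply the square root of MSE:

$$\mathrm{RMSE} = \sqrt{\mathrm{MSE}} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(y_i - \hat{y}_i\right)^2}$$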

To calculate the RMSE metric using Python, you can use NumPy's functions:
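A minimal sketch, again with assumed example arrays:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])   # actual target values (example data)
y_pred = np.array([2.5, 5.5, 7.5, 8.0])   # predicted target values (example data)

# RMSE is the square root of the mean of the squared residuals
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)
```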

Or you can use Scikit-learn's mean_squared_error() function with squared=False:
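A sketch of that call; note that depending on your scikit-learn version, the squared parameter may be deprecated in favor of the dedicated root_mean_squared_error() function (added in version 1.4):

```python
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 7.0, 9.0]   # actual target values (example data)
y_pred = [2.5, 5.5, 7.5, 8.0]   # predicted target values (example data)

# squared=False makes the function return the square root of MSE, i.e. RMSE
print(mean_squared_error(y_true, y_pred, squared=False))
```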

MAE

In SSR, we squared the residuals to get rid of their sign. Another approach is to take the absolute values of the residuals instead of squaring them. That is the idea behind the Mean Absolute Error (MAE), which can be written in two equivalent ways, as shown below.
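Using the same assumed notation as above:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left|y_i - \hat{y}_i\right|$$

or equally, substituting the regression equation for the prediction:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left|y_i - \left(\beta_0 + \beta_1 x_{i1} + \dots + \beta_m x_{im}\right)\right|$$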

It is the same as the MSE, but instead of squaring residuals, we take their absolute values.

To calculate the MAE metric using Python, you can use NumPy's functions:
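A minimal sketch with the same assumed example arrays:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])   # actual target values (example data)
y_pred = np.array([2.5, 5.5, 7.5, 8.0])   # predicted target values (example data)

# MAE is the mean of the absolute residuals
mae = np.mean(np.abs(y_true - y_pred))
print(mae)
```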

Or you can use Scikit-learn's mean_absolute_error() function:
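A sketch of that call; mean_absolute_error() takes the actual and predicted values in the same way as mean_squared_error():

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, 5.0, 7.0, 9.0]   # actual target values (example data)
y_pred = [2.5, 5.5, 7.5, 8.0]   # predicted target values (example data)

# Returns the average of the absolute differences between actual and predicted values
print(mean_absolute_error(y_true, y_pred))
```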

For choosing the parameters, we used the SSR metric because it is convenient to work with mathematically and allowed us to derive the Normal Equation.
But for comparing models afterwards, you can use any of the other metrics.

Note

For comparing models, SSR, MSE, and RMSE will always agree on which model is better and which is worse. MAE, however, can sometimes prefer a different model than SSR/MSE/RMSE, since those metrics penalize large residuals much more heavily. Usually, you want to choose one metric a priori and focus on minimizing it.

Now it is evident that the second model is superior since all its metrics are lower.
However, it's important to note that lower metrics do not always indicate a better model. The next chapter will explain why this is the case.
