Learn Evaluation Metrics for Forecasting | Classical ML Models for Time Series
Machine Learning for Time Series Forecasting

Evaluation Metrics for Forecasting

Evaluating the performance of your time series forecasting models is a crucial step in the modeling workflow. Unlike classification problems, forecasting tasks require you to measure how close your predicted values are to the actual values over time. To do this, you can use several regression metrics that are especially meaningful in the context of time series data. The most common metrics include Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE).

Mean Absolute Error (MAE) measures the average magnitude of the errors in a set of predictions, without considering their direction. It is calculated as the mean of the absolute differences between predictions and actual values. MAE is easy to interpret because it gives the average error in the same units as the data.
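As a quick sketch, MAE can be computed directly with NumPy. The values below are illustrative, not real data:

```python
import numpy as np

# Illustrative actuals and forecasts (made-up values)
y_true = np.array([100.0, 120.0, 130.0, 125.0])
y_pred = np.array([98.0, 123.0, 128.0, 130.0])

# Mean of absolute differences: same units as the series itself
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # absolute errors are 2, 3, 2, 5 -> MAE = 3.0
```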

Root Mean Squared Error (RMSE) also measures the average magnitude of the error, but it squares the errors before averaging and then takes the square root. This means that RMSE penalizes larger errors more strongly than MAE. RMSE is sensitive to outliers and is useful when you want to emphasize large errors in your evaluation.
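Continuing the same toy example, RMSE squares each error before averaging, so the single largest error (5) pulls it above the MAE:

```python
import numpy as np

y_true = np.array([100.0, 120.0, 130.0, 125.0])
y_pred = np.array([98.0, 123.0, 128.0, 130.0])

errors = y_true - y_pred
rmse = np.sqrt(np.mean(errors ** 2))  # square root of the mean squared error
mae = np.mean(np.abs(errors))
print(rmse > mae)  # True: squaring gives extra weight to the largest error
```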

Mean Absolute Percentage Error (MAPE) expresses the prediction error as a percentage, making it easy to interpret across different scales. However, MAPE can be problematic when your actual values are close to zero, as it involves division by the actual value.
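A minimal MAPE sketch (same toy values as above) also shows the zero-denominator pitfall; masking out zero actuals is one common workaround, though it discards those points:

```python
import numpy as np

y_true = np.array([100.0, 120.0, 130.0, 125.0])
y_pred = np.array([98.0, 123.0, 128.0, 130.0])

# MAPE: average of |error / actual|, expressed as a percentage
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print(f"MAPE: {mape:.2f}%")

# Pitfall: a zero in y_true makes the ratio undefined.
# One workaround: evaluate only where the actual value is nonzero.
y_true_z = np.array([0.0, 120.0, 130.0])
y_pred_z = np.array([2.0, 123.0, 128.0])
mask = y_true_z != 0
safe_mape = np.mean(np.abs((y_true_z[mask] - y_pred_z[mask]) / y_true_z[mask])) * 100
print(f"MAPE on nonzero actuals: {safe_mape:.2f}%")
```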

Understanding these metrics and their interpretation will help you judge the accuracy of your forecasts and compare different models fairly.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated actual vs predicted values
y_true = np.array([100, 120, 130, 125, 140, 135, 150])
y_pred = np.array([98, 123, 128, 130, 138, 132, 155])

# Compute errors
errors = y_true - y_pred

# Metrics
mae = np.mean(np.abs(errors))
rmse = np.sqrt(np.mean(errors**2))

metrics = {"MAE": mae, "RMSE": rmse}

# Plot
plt.figure(figsize=(8, 4))
plt.barh(list(metrics.keys()), list(metrics.values()),
         color=["tab:blue", "tab:orange"], alpha=0.8)

# Annotate numeric values
for i, (name, value) in enumerate(metrics.items()):
    plt.text(value + 0.5, i, f"{value:.3f}", va="center", fontsize=10)

plt.title("Forecast Error Metrics")
plt.xlabel("Error Value")
plt.grid(axis="x", linestyle="--", alpha=0.4)
plt.tight_layout()
plt.show()
```
Note

Study more: Each metric has strengths and weaknesses. MAE is robust and interpretable, but does not penalize large errors as much as RMSE. RMSE is more sensitive to outliers, making it useful when large errors are especially undesirable. MAPE gives a percentage error, but can be misleading or undefined when actual values are zero or near zero; consider using MAE or RMSE in such cases. Always match the metric to your project's goals and data characteristics.
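The outlier sensitivity described above can be illustrated with two made-up error vectors that share the same total absolute error: MAE treats them identically, while RMSE flags the one containing a single large miss.

```python
import numpy as np

# Two error vectors with the same total absolute error (12)
errs_even = np.array([3.0, 3.0, 3.0, 3.0])    # errors spread evenly
errs_spike = np.array([0.0, 0.0, 0.0, 12.0])  # one large outlier

for name, e in [("even", errs_even), ("spike", errs_spike)]:
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    print(f"{name}: MAE={mae:.1f}, RMSE={rmse:.1f}")
# MAE is 3.0 in both cases; RMSE rises from 3.0 to 6.0 for the spiky errors
```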

1. Which metric is most robust to outliers in time series forecasting?

2. Why is MAPE problematic for series with zero values?


SectionΒ 2. ChapterΒ 3

