Visualizing and Logging Metrics | Monitoring and Continuous Delivery
MLOps for Machine Learning Engineers

Visualizing and Logging Metrics

import matplotlib.pyplot as plt
import numpy as np

# Simulate model metric logging over 12 weeks
weeks = np.arange(1, 13)
accuracy = np.array([0.89, 0.90, 0.91, 0.91, 0.92, 0.91, 0.90, 0.89, 0.87, 0.85, 0.86, 0.86])
precision = np.array([0.88, 0.88, 0.89, 0.90, 0.89, 0.89, 0.88, 0.87, 0.86, 0.84, 0.85, 0.85])
recall = np.array([0.87, 0.88, 0.90, 0.89, 0.91, 0.90, 0.88, 0.86, 0.85, 0.83, 0.84, 0.84])

plt.figure(figsize=(10, 6))
plt.plot(weeks, accuracy, marker='o', label='Accuracy')
plt.plot(weeks, precision, marker='s', label='Precision')
plt.plot(weeks, recall, marker='^', label='Recall')
plt.axhline(0.88, color='red', linestyle='--', label='Alert Threshold')
plt.title('Model Metrics Over Time')
plt.xlabel('Week')
plt.ylabel('Metric Value')
plt.ylim(0.8, 1.0)
plt.legend()
plt.grid(True)
plt.show()

When you monitor model metrics such as accuracy, precision, and recall over time, you gain insight into your model's ongoing performance. Consistent values suggest stable behavior, while noticeable drops—especially below a predefined threshold—can signal underlying issues. A sudden decline in accuracy, for instance, may indicate data drift, changes in user behavior, or upstream data quality problems.
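The plot above assumes the weekly metrics have already been collected somewhere. In practice, each evaluation run should also append its results to a durable log that dashboards and alerts can read later. Below is a minimal sketch of such a logger using a timestamped CSV file; the file name, column layout, and metric values are illustrative, and a production setup would more likely write to an experiment tracker or metrics store.

import csv
from datetime import datetime, timezone

def log_metrics(path, accuracy, precision, recall):
    """Append one timestamped row of model metrics to a CSV log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            round(accuracy, 4),
            round(precision, 4),
            round(recall, 4),
        ])

# Illustrative call after a weekly evaluation run
log_metrics("model_metrics.csv", accuracy=0.86, precision=0.85, recall=0.84)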

To proactively maintain model reliability, you should set up alerts that trigger when metrics fall below critical thresholds. These alerts can be as simple as email notifications or as sophisticated as automated retraining jobs. The key is to respond quickly to performance changes, minimizing any negative impact on users or business outcomes.
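As a rough sketch of that idea, the check below compares the latest metric values against the same 0.88 threshold drawn in the plot. The notify function is a placeholder you would replace with an email, chat, or retraining-pipeline integration, and the metric values are illustrative.

ALERT_THRESHOLD = 0.88  # matches the dashed alert line in the plot above

def check_metrics(latest, threshold=ALERT_THRESHOLD):
    """Return the names of metrics that fell below the threshold."""
    return [name for name, value in latest.items() if value < threshold]

def notify(message):
    # Placeholder: swap in email, Slack, or an automated retraining trigger.
    print(f"[ALERT] {message}")

latest_metrics = {"accuracy": 0.86, "precision": 0.85, "recall": 0.84}  # illustrative
breached = check_metrics(latest_metrics)
if breached:
    notify(f"Metrics below {ALERT_THRESHOLD}: {', '.join(breached)}")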

Note

Monitoring should include both model and data quality metrics.
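To make the note concrete, here is a minimal sketch of a few data quality indicators computed on an incoming batch; the column names and sample data are hypothetical, and dedicated data validation tools offer far more thorough checks.

import numpy as np
import pandas as pd

def data_quality_metrics(df, expected_columns):
    """Compute simple data quality indicators for an incoming batch."""
    return {
        "missing_rate": float(df.isna().mean().mean()),
        "duplicate_rate": float(df.duplicated().mean()),
        "missing_columns": [c for c in expected_columns if c not in df.columns],
    }

# Illustrative batch with one missing value and one duplicated row
batch = pd.DataFrame({
    "feature_a": [1.0, 2.0, 2.0, np.nan],
    "feature_b": [0, 1, 1, 0],
})
print(data_quality_metrics(batch, expected_columns=["feature_a", "feature_b", "label"]))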


Why is it important to monitor both model and data quality metrics in production machine learning systems?

