Visualizing and Logging Metrics | Monitoring and Continuous Delivery
MLOps for Machine Learning Engineers

Visualizing and Logging Metrics

import matplotlib.pyplot as plt
import numpy as np

# Simulate model metric logging over 12 weeks
weeks = np.arange(1, 13)
accuracy = np.array([0.89, 0.90, 0.91, 0.91, 0.92, 0.91, 0.90, 0.89, 0.87, 0.85, 0.86, 0.86])
precision = np.array([0.88, 0.88, 0.89, 0.90, 0.89, 0.89, 0.88, 0.87, 0.86, 0.84, 0.85, 0.85])
recall = np.array([0.87, 0.88, 0.90, 0.89, 0.91, 0.90, 0.88, 0.86, 0.85, 0.83, 0.84, 0.84])

plt.figure(figsize=(10, 6))
plt.plot(weeks, accuracy, marker='o', label='Accuracy')
plt.plot(weeks, precision, marker='s', label='Precision')
plt.plot(weeks, recall, marker='^', label='Recall')
plt.axhline(0.88, color='red', linestyle='--', label='Alert Threshold')
plt.title('Model Metrics Over Time')
plt.xlabel('Week')
plt.ylabel('Metric Value')
plt.ylim(0.8, 1.0)
plt.legend()
plt.grid(True)
plt.show()

When you monitor model metrics such as accuracy, precision, and recall over time, you gain insight into your model's ongoing performance. Consistent values suggest stable behavior, while noticeable drops—especially below a predefined threshold—can signal underlying issues. A sudden decline in accuracy, for instance, may indicate data drift, changes in user behavior, or upstream data quality problems.
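
Using the same simulated arrays as the plot above, here is a minimal sketch of how you might flag the weeks in which accuracy dipped below the alert line. The 0.88 threshold is the same illustrative value drawn in the chart, not a recommended setting:

import numpy as np

weeks = np.arange(1, 13)
accuracy = np.array([0.89, 0.90, 0.91, 0.91, 0.92, 0.91, 0.90, 0.89, 0.87, 0.85, 0.86, 0.86])

ALERT_THRESHOLD = 0.88  # illustrative threshold, matching the plot above

# Boolean mask of weeks where accuracy fell below the threshold
below = accuracy < ALERT_THRESHOLD
flagged_weeks = weeks[below]

print(f"Weeks below the alert threshold: {flagged_weeks.tolist()}")
# For this simulated data: [9, 10, 11, 12]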

To proactively maintain model reliability, you should set up alerts that trigger when metrics fall below critical thresholds. These alerts can be as simple as email notifications or as sophisticated as automated retraining jobs. The key is to respond quickly to performance changes, minimizing any negative impact on users or business outcomes.
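
A lightweight way to implement such an alert is a check that runs after each metric logging job. The sketch below is only illustrative: the check_metric helper and the threshold values are assumptions, and in a real pipeline the warning branch would also post to your team's alerting channel or trigger an automated retraining job.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-monitoring")

# Illustrative thresholds; tune these to your own service-level targets
THRESHOLDS = {"accuracy": 0.88, "precision": 0.85, "recall": 0.85}

def check_metric(name, value, thresholds=THRESHOLDS):
    """Log a warning when a metric falls below its threshold."""
    threshold = thresholds[name]
    if value < threshold:
        # In production, also notify on-call or start a retraining job here
        logger.warning("%s=%.3f is below threshold %.2f", name, value, threshold)
        return False
    logger.info("%s=%.3f is within the acceptable range", name, value)
    return True

# Example: latest weekly values from the simulated data above
check_metric("accuracy", 0.86)
check_metric("recall", 0.84)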

Note

Monitoring should include both model and data quality metrics.
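
On the data side, even simple checks catch many problems before they show up in model metrics. Below is a minimal sketch on synthetic data; the 10% missing-value limit and the mean-shift limit of 5 are illustrative assumptions, not recommended values:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative batches: reference (training-time) data vs. current serving data
reference = rng.normal(40, 10, 1_000)
current = rng.normal(46, 10, 1_000)
current[rng.choice(current.size, size=120, replace=False)] = np.nan  # simulate missing values

# Data quality metric 1: fraction of missing values
missing_rate = np.isnan(current).mean()

# Data quality metric 2: shift of the mean relative to the reference data
mean_shift = abs(np.nanmean(current) - reference.mean())

print(f"Missing rate: {missing_rate:.2%}, mean shift: {mean_shift:.2f}")
if missing_rate > 0.10 or mean_shift > 5:  # illustrative limits
    print("Data quality alert: investigate the upstream pipeline")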


Why is it important to monitor both model and data quality metrics in production machine learning systems?

