Model Calibration with Python

Overconfidence and Underconfidence Explained

When a model makes predictions in the form of probabilities, you expect those probabilities to reflect the true likelihood of an event. However, models can be overconfident or underconfident in their predictions. Overconfidence means the model assigns probabilities that are too high compared to the actual frequency of correct predictions. For example, if a model predicts a 90% chance of the positive class but is only right 70% of the time when it says 90%, it is overconfident. Underconfidence is the opposite: the model's predicted probabilities are too conservative, lower than its actual chance of being correct. For instance, if the model predicts a 60% chance of the positive class but is actually correct 80% of the time at that level, it is underconfident. Recognizing these patterns is crucial for interpreting model outputs and improving calibration.
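As a quick, hypothetical illustration of the 90% example above (the numbers here are made up for demonstration and are not part of the lesson's exercise), you can gather every prediction where the model reported about 90% confidence and measure how often those predictions were actually correct; a hit rate well below 0.9 means that confidence level is overconfident.

import numpy as np

# Hypothetical data: ten predictions where the model reported ~90% confidence
predicted_probs = np.array([0.9] * 10)
true_labels = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])  # correct 7 out of 10 times

# Predicted class is positive whenever the probability is at least 0.5
predicted_classes = (predicted_probs >= 0.5).astype(int)

# Observed accuracy among the ~90%-confidence predictions
hit_rate = np.mean(predicted_classes == true_labels)

print(f"Stated confidence: 0.90, observed accuracy: {hit_rate:.2f}")  # 0.70 -> overconfident

The snippet below takes a coarser view of the same idea, comparing the model's average confidence with its overall accuracy.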

import numpy as np

# Example predicted probabilities and true labels
predicted_probs = np.array([0.95, 0.8, 0.7, 0.4, 0.3, 0.1])
true_labels = np.array([1, 1, 0, 1, 0, 0])

# Compute average confidence (mean of predicted probabilities)
average_confidence = np.mean(predicted_probs)

# Compute accuracy (fraction of correct predictions using 0.5 threshold)
predicted_classes = (predicted_probs >= 0.5).astype(int)
accuracy = np.mean(predicted_classes == true_labels)

print(f"Average confidence: {average_confidence:.2f}")
print(f"Accuracy: {accuracy:.2f}")

Looking at the results, if the average confidence is significantly higher than the accuracy, this suggests the model is overconfident: it is more certain than it should be. If the average confidence is noticeably lower than the accuracy, the model is underconfident, meaning it is less certain than it should be. Properly calibrated models have average confidence close to their actual accuracy, ensuring the probabilities are trustworthy for decision-making.
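A single overall comparison can hide mixed behavior, so a common follow-up (not part of this lesson's snippet; the bucket edges below are an arbitrary illustrative choice) is to group predictions by their confidence in the predicted class and compare average confidence with observed accuracy inside each group. Groups where confidence exceeds accuracy point to overconfidence, and the reverse points to underconfidence. A minimal sketch using the same toy arrays:

import numpy as np

predicted_probs = np.array([0.95, 0.8, 0.7, 0.4, 0.3, 0.1])
true_labels = np.array([1, 1, 0, 1, 0, 0])

# Predicted class and whether each prediction was correct
predicted_classes = (predicted_probs >= 0.5).astype(int)
correct = (predicted_classes == true_labels).astype(float)

# Confidence in the predicted class: p for positive predictions, 1 - p for negative ones
confidence = np.where(predicted_classes == 1, predicted_probs, 1 - predicted_probs)

# Bucket predictions by confidence and compare confidence with accuracy per bucket
bin_edges = np.array([0.5, 0.7, 0.9, 1.0])
bin_ids = np.digitize(confidence, bin_edges, right=True)

for b in range(1, len(bin_edges)):
    mask = bin_ids == b
    if mask.any():
        print(f"Confidence ({bin_edges[b - 1]:.1f}, {bin_edges[b]:.1f}]: "
              f"avg confidence = {confidence[mask].mean():.2f}, "
              f"accuracy = {correct[mask].mean():.2f}")

With only six predictions each bucket is tiny, so the numbers jump around; on a real validation set, with many predictions per bucket, this per-bucket comparison gives a much more reliable picture than the overall averages alone.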

1. How would overconfidence appear in a model's probability outputs?

2. What does underconfidence imply about a model's predictions?

3. Fill in the blank: If a model's average predicted probability is 0.85 but its accuracy is only 0.65, this is a sign of ____. (Answer: overconfidence.)




Section 1. Chapter 2
