Modeling Summary

Congratulations on getting this far! You already know how to build a model, use it in a pipeline, and fine-tune its hyperparameters. You have also learned two ways to evaluate a model: the train-test split and the cross-validation score.

Let's talk about combining model evaluation with the hyperparameter tuning performed by GridSearchCV (or RandomizedSearchCV).

In general, we aim to achieve the best cross-validation score on our dataset, since cross-validation is more stable and less sensitive to how the data happens to be split than a single train-test split.
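
To make the difference concrete, here is a minimal sketch comparing the two evaluation strategies. The KNeighborsClassifier and the built-in iris data are illustrative stand-ins, not part of this course's penguins example:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
model = KNeighborsClassifier()

# Train-test split: the score depends on which rows land in the test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print("Hold-out score:", model.score(X_test, y_test))

# Cross-validation: averaging over 5 folds smooths out the split's randomness.
scores = cross_val_score(model, X, y, cv=5)
print("CV scores:", scores, "mean:", scores.mean())
```

Rerunning the split with a different `random_state` changes the hold-out score noticeably, while the mean cross-validation score stays comparatively steady.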

Our goal is to identify the hyperparameters that yield the best cross-validation score, which is precisely what GridSearchCV is designed to do. This process results in a fine-tuned model that performs optimally on the training dataset. GridSearchCV also provides a .best_score_ attribute, reflecting the highest cross-validation score achieved during the hyperparameter tuning process.
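
Below is a minimal sketch of this in action; the `n_neighbors` grid and the iris stand-in data are illustrative assumptions, not the course's actual setup:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try each candidate value of n_neighbors with 5-fold cross-validation.
param_grid = {"n_neighbors": [1, 3, 5, 7, 9]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X, y)

print("Best params:", grid.best_params_)
print("Best CV score:", grid.best_score_)  # highest mean cross-validation score
```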

Typically, the dataset is first divided into train and test sets. We then fine-tune the model on the entire training set using cross-validation to identify the best model. Finally, we assess the model’s performance on the test set, which consists of completely unseen data, to estimate its real-world applicability.

Let's sum it all up. We need to do the following (an end-to-end sketch follows the list):

  1. Preprocess the data;
  2. Do a train-test split;
  3. Find the model with the best cross-validation score on the training set;
  4. Evaluate the best model on the test set.
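
Here is a minimal end-to-end sketch of these four steps, assuming a scaling + k-NN pipeline and an illustrative hyperparameter grid (the iris data again stands in for a real dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Steps 1-2: preprocessing lives inside the pipeline; split off a test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scaler", StandardScaler()),
    ("knn", KNeighborsClassifier()),
])

# Step 3: find the hyperparameters with the best cross-validation score,
# using the training set only.
param_grid = {"knn__n_neighbors": [1, 3, 5, 7, 9]}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)

# Step 4: evaluate the best model on the held-out test set.
print("Best CV score (training set):", grid.best_score_)
print("Test set score:", grid.score(X_test, y_test))
```

Note that GridSearchCV refits the best estimator on the whole training set by default (`refit=True`), so `grid.score(X_test, y_test)` evaluates that refitted model.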

Before moving on to the final challenge, it's important to note that cross-validation isn't the only method for fine-tuning models. As datasets grow larger, computing cross-validation scores becomes increasingly time-consuming, while a single train-test split becomes more stable because the larger dataset yields a larger test set.

Consequently, large datasets are often divided into three sets: a training set, a validation set, and a test set. The model is trained on the training set and evaluated on the validation set to select the model or hyperparameters that perform best.

This selection uses the validation set scores instead of cross-validation scores. Finally, the chosen model is assessed on the test set, which consists of completely unseen data, to verify its performance.
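
Here is a minimal sketch of this three-set workflow, assuming 60/20/20 proportions and an illustrative set of `n_neighbors` candidates:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# First carve off the test set, then split the rest into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0  # 0.25 * 0.8 = 0.2 overall
)

# Pick the hyperparameter with the best validation score (no cross-validation).
best_score, best_k = -1.0, None
for k in [1, 3, 5, 7, 9]:
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_k = score, k

# Final check on completely unseen data.
final_model = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print("Chosen k:", best_k, "test score:", final_model.score(X_test, y_test))
```

Unlike cross-validation, each candidate is trained and scored only once here, which is why this approach scales better to large datasets.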

Our penguins dataset is not large. In fact, it is tiny (342 instances), so we will use the cross-validation score approach in the next chapter.

Why is cross-validation particularly valuable for hyperparameter tuning in smaller datasets, as opposed to larger ones where train-test splits might be preferred?
