ML Introduction with scikit-learn
Modeling Summary
Congratulations on making it this far! You already know how to build a model, use it in a pipeline, and fine-tune its hyperparameters. You have also learned two ways to evaluate a model: the train-test split and the cross-validation score.
Let's talk about combining model evaluation with the hyperparameter tuning performed by GridSearchCV (or RandomizedSearchCV).
In general, we aim to achieve the best cross-validation score on our dataset, as cross-validation is more stable and less sensitive to how the data is split compared to the train-test split.
Our goal is to identify the hyperparameters that yield the best cross-validation score, which is precisely what GridSearchCV is designed to do. This process results in a fine-tuned model that performs optimally on the training dataset. GridSearchCV also provides a .best_score_ attribute, reflecting the highest cross-validation score achieved during hyperparameter tuning.
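Here is a minimal sketch of that idea; the dataset and the grid values are placeholders for illustration, not the course's actual setup:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset for illustration

# Search a small hyperparameter grid using cross-validation
param_grid = {'n_neighbors': [3, 5, 7]}
grid_search = GridSearchCV(KNeighborsClassifier(), param_grid)
grid_search.fit(X, y)

print(grid_search.best_params_)  # hyperparameters with the best CV score
print(grid_search.best_score_)   # highest mean cross-validation score
```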
Typically, the dataset is first divided into train and test sets. We then fine-tune the model on the entire training set using cross-validation to identify the best model. Finally, we assess the model’s performance on the test set, which consists of completely unseen data, to estimate its real-world applicability.
Let's sum it all up (a short sketch putting these steps together follows the list). We need to:
- Preprocess the data;
- Do a train-test split;
- Find the model with the best cross-validation score on the training set;
- Evaluate the best model on the test set.
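The sketch below walks through all four steps. Loading the penguins data through seaborn, the chosen feature columns, and the grid values are assumptions made for illustration; the point is the workflow, not the specifics:

```python
import seaborn as sns
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# 1. Preprocess: keep rows with complete numeric measurements
features = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g']
df = sns.load_dataset('penguins').dropna(subset=features)
X, y = df[features], df['species']

# 2. Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 3. Find the model with the best cross-validation score on the training set
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {'kneighborsclassifier__n_neighbors': [3, 5, 7, 9]}
grid_search = GridSearchCV(pipe, param_grid).fit(X_train, y_train)
print(grid_search.best_score_)  # best cross-validation score

# 4. Evaluate the best model on the unseen test set
print(grid_search.score(X_test, y_test))
```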
Before moving on to the final challenge, it's important to note that cross-validation isn't the only method for fine-tuning models. As datasets grow larger, computing cross-validation scores becomes increasingly time-consuming, and a regular train-test split becomes more stable on its own, since a larger test set yields a more reliable score estimate.
Consequently, large datasets are often divided into three sets: a training set, a validation set, and a test set. The model is trained on the training set and evaluated on the validation set to select the model or hyperparameters that perform best.
This selection uses the validation set scores instead of cross-validation scores. Finally, the chosen model is assessed on the test set, which consists of completely unseen data, to verify its performance.
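As a minimal sketch of this three-way split, using a synthetic stand-in for a large dataset (the split proportions and the candidate n_neighbors values are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a larger dataset
X, y = make_classification(n_samples=10_000, random_state=42)

# Two successive splits yield train / validation / test sets (here 60/20/20)
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

# Compare hyperparameters using validation-set scores instead of cross-validation
best_score, best_k = 0.0, None
for k in [3, 5, 7]:
    score = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_val, y_val)
    if score > best_score:
        best_score, best_k = score, k

# Verify the chosen model on the completely unseen test set
# (kept simple here: the final model is refit on the training set only)
final_model = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print(final_model.score(X_test, y_test))
```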
Our penguins dataset is not large. It is actually tiny (342 instances), so we will use the cross-validation score approach in the next chapter.