1. How do you handle overfitting in a model?
2. Explain the bias-variance trade-off.
3. What is early stopping in the context of training a model? (See the early-stopping sketch after this list.)
4. How would you handle imbalanced datasets? (See the class-imbalance sketch below.)
5. What is the difference between data normalization and scaling? (See the scaling sketch below.)
6. How does cross-validation work? (See the cross-validation sketch below.)
7. What is the difference between precision and recall? (See the precision/recall sketch below.)
8. What kind of models does the bagging ensemble method use?
9. How does the Random Forest algorithm work? (See the bagging vs. Random Forest sketch below.)
10. Give an example of a commonly used algorithm that is not an ensemble method.
11. In which scenario is high recall more important than high precision?
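
A minimal early-stopping sketch for question 3, assuming scikit-learn and a synthetic dataset; the `MLPClassifier` and all parameter values are illustrative choices, not part of the original question:

```python
# Illustrative early-stopping example (assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold out 10% of the training data as a validation set and stop training
# once the validation score has not improved for 10 consecutive epochs.
model = MLPClassifier(
    hidden_layer_sizes=(64,),
    early_stopping=True,
    validation_fraction=0.1,
    n_iter_no_change=10,
    max_iter=500,
    random_state=0,
)
model.fit(X, y)
print("epochs actually run:", model.n_iter_)
```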
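
One possible answer to question 4 is class weighting; the sketch below assumes scikit-learn and synthetic data, and resampling, threshold tuning, or choosing different evaluation metrics are equally valid directions:

```python
# Illustrative class-imbalance handling via class weighting (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic dataset with a roughly 9:1 class ratio.
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" penalises errors on the minority class more heavily,
# inversely to its frequency in the training data.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```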
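
For question 5, the sketch below contrasts min-max normalization with standard (z-score) scaling, assuming scikit-learn; note that the terminology varies between sources:

```python
# Illustrative contrast between min-max normalization and z-score scaling
# (assumes scikit-learn and NumPy).
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

# Min-max normalization: rescale each feature to the [0, 1] range.
print(MinMaxScaler().fit_transform(X))

# Standard scaling: centre each feature to zero mean and unit variance.
print(StandardScaler().fit_transform(X))
```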
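
A minimal k-fold cross-validation sketch for question 6; scikit-learn, the logistic-regression estimator, and the synthetic data are all illustrative choices:

```python
# Illustrative 5-fold cross-validation (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

# Train on 4 folds, evaluate on the held-out fold, rotate through all 5 folds,
# and summarise the scores.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print("mean accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```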
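
For questions 7 and 11, a tiny worked example of precision and recall, assuming scikit-learn's metric functions and made-up labels:

```python
# Tiny worked example of precision and recall (assumes scikit-learn).
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]  # 2 true positives, 1 false positive, 2 false negatives

# Precision: of the 3 predicted positives, 2 are correct -> 2/3.
print(precision_score(y_true, y_pred))
# Recall: of the 4 actual positives, 2 were found -> 2/4.
print(recall_score(y_true, y_pred))
```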
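
For questions 8 and 9, the sketch below contrasts a bagging ensemble of decision trees with a Random Forest, which additionally samples a random subset of features at each split; scikit-learn and the synthetic dataset are illustrative choices:

```python
# Illustrative comparison of plain bagging (bootstrapped decision trees) with a
# Random Forest (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)

print("bagged trees :", cross_val_score(bagging, X, y, cv=5).mean())
print("random forest:", cross_val_score(forest, X, y, cv=5).mean())
```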