YOLO Model Overview

The YOLO (You Only Look Once) algorithm is a fast and efficient object detection model. Unlike traditional approaches like R-CNN that use multiple steps, YOLO processes the entire image in a single pass, making it ideal for real-time applications.

How YOLO Differs from R-CNN Approaches

Traditional object detection methods, such as R-CNN and its variants, rely on a two-stage pipeline: first generating region proposals, then classifying each proposed region. While effective, this approach is computationally expensive and slows down inference, making it less suitable for real-time applications.

YOLO (You Only Look Once) takes a radically different approach. It divides the input image into a grid and predicts bounding boxes and class probabilities for each cell in a single forward pass. This design frames object detection as a single regression problem, allowing YOLO to achieve real-time performance.

Unlike R-CNN-based methods, which reason only about cropped local regions, YOLO processes the entire image at once, enabling it to capture global contextual information. This global view produces fewer background false positives than region-based methods, while maintaining high speed and competitive accuracy.

YOLO Architecture and Grid-Based Predictions

YOLO splits an input image into an S × S grid, where each grid cell is responsible for detecting objects whose center falls within it. Each cell predicts bounding box coordinates (x, y, width, height), an object confidence score, and class probabilities. Since YOLO processes the entire image in one forward pass, it is highly efficient compared to earlier object detection models.
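To make the grid layout concrete, here is a minimal NumPy sketch of decoding a YOLOv1-style output tensor; the shapes (S = 7 grid, B = 2 boxes per cell, C = 20 classes) match the original paper's PASCAL VOC setup, and the random tensor stands in for real network output. It also shows how class confidence scores are formed: each box's objectness score multiplied by the cell's class probabilities.

```python
import numpy as np

# Hypothetical YOLOv1-style configuration: 7x7 grid, 2 boxes per cell,
# 20 classes. The random tensor stands in for a real network's output.
S, B, C = 7, 2, 20
pred = np.random.rand(S, S, B * 5 + C)  # per cell: B * (x, y, w, h, conf) + C class probs

boxes = pred[..., : B * 5].reshape(S, S, B, 5)  # per-box (x, y, w, h, objectness)
class_probs = pred[..., B * 5 :]                # per-cell Pr(class | object)

# Class confidence score for every box: objectness * class probability.
objectness = boxes[..., 4:5]                      # shape (S, S, B, 1)
scores = objectness * class_probs[:, :, None, :]  # shape (S, S, B, C)

best_class = scores.argmax(axis=-1)  # most likely class per box
best_score = scores.max(axis=-1)     # its class confidence score
print(best_class.shape, best_score.shape)  # (7, 7, 2) (7, 7, 2)
```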

Loss Function and Class Confidence Scores

YOLO optimizes detection accuracy using a custom loss function with three components (a simplified sketch follows this list):

  • Localization loss: measures bounding box accuracy;

  • Confidence loss: ensures predictions indicate object presence correctly;

  • Classification loss: evaluates how well the predicted class matches the true class.
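
Below is a simplified NumPy sketch of how these three terms combine, loosely following the original YOLOv1 formulation; the weighting factors λ_coord = 5 and λ_noobj = 0.5 come from the paper, while the tensor layout and names are illustrative assumptions.

```python
import numpy as np

def yolo_v1_style_loss(pred, target, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Simplified YOLOv1-style loss over flattened cell predictions.

    pred, target: (N, 5 + C) arrays holding (x, y, w, h, conf, class probs),
    with w and h assumed non-negative; obj_mask: (N,) boolean array that is
    True where a cell is responsible for an object.
    """
    noobj = ~obj_mask

    # Localization loss: squared error on box centers, and on square roots of
    # width/height so large boxes don't dominate (object cells only).
    xy_err = np.sum((pred[obj_mask, :2] - target[obj_mask, :2]) ** 2)
    wh_err = np.sum((np.sqrt(pred[obj_mask, 2:4]) - np.sqrt(target[obj_mask, 2:4])) ** 2)
    loc_loss = lambda_coord * (xy_err + wh_err)

    # Confidence loss: object cells should score high, empty cells low
    # (empty cells are down-weighted so they don't overwhelm training).
    conf_loss = (np.sum((pred[obj_mask, 4] - target[obj_mask, 4]) ** 2)
                 + lambda_noobj * np.sum((pred[noobj, 4] - target[noobj, 4]) ** 2))

    # Classification loss: squared error on class probabilities for object cells.
    cls_loss = np.sum((pred[obj_mask, 5:] - target[obj_mask, 5:]) ** 2)

    return loc_loss + conf_loss + cls_loss
```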

To refine its raw predictions, YOLO applies non-max suppression (NMS) to remove redundant, overlapping detections, and later versions (from YOLOv2 onward) add anchor boxes as bounding-box priors.
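
NMS itself is simple to express. Here is a minimal, framework-free sketch of greedy NMS on corner-format boxes; the 0.5 IoU threshold is a common default, not a YOLO-specific value.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop overlaps."""
    order = scores.argsort()[::-1]  # box indices, best score first
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return keep

# Example: two overlapping detections of the same object plus one separate box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the weaker overlapping box is suppressed
```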

Advantages of YOLO: Speed vs. Accuracy Trade-Off

YOLO's main advantage is speed. Since detection happens in a single pass, YOLO is much faster than R-CNN-based methods, making it suitable for real-time applications like autonomous driving and surveillance. However, early YOLO versions struggled with small object detection, which later versions improved upon.
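
To see this single-pass inference in practice, here is a minimal sketch using the ultralytics Python package (assuming it is installed via pip install ultralytics); the model file yolov8n.pt and the image path are placeholders.

```python
from ultralytics import YOLO

# Load a small pretrained detector (weights are downloaded on first use).
model = YOLO("yolov8n.pt")

# One forward pass over the image; the path is a placeholder.
results = model("bus.jpg")

for r in results:
    for box in r.boxes:
        cls_id = int(box.cls[0])               # predicted class index
        conf = float(box.conf[0])              # confidence score
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # corner coordinates
        print(model.names[cls_id], f"{conf:.2f}", (x1, y1, x2, y2))
```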

YOLO: A Brief History

YOLO, developed by Joseph Redmon and Ali Farhadi in 2015, transformed object detection with its single-pass processing.

  • YOLOv2 (2016): added batch normalization, anchor boxes, and dimension clusters;

  • YOLOv3 (2018): introduced a more efficient backbone (Darknet-53), predictions at multiple scales, and spatial pyramid pooling;

  • YOLOv4 (2020): added Mosaic data augmentation, the CSPDarknet53 backbone, and an improved (CIoU) localization loss;

  • YOLOv5 (2020): enhanced usability with hyperparameter optimization, experiment tracking, and automatic export features;

  • YOLOv6 (2022): open-sourced by Meituan and used in its autonomous delivery robots;

  • YOLOv7 (2022): expanded capabilities to include tasks such as pose estimation;

  • YOLOv8 (2023): improved speed, flexibility, and efficiency across vision AI tasks;

  • YOLOv9 (2024): introduced Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN);

  • YOLOv10 (2024): developed by Tsinghua University, eliminating Non-Maximum Suppression (NMS) with an end-to-end detection head;

  • YOLOv11 (2024): the most recent model in the series at the time of writing, offering state-of-the-art performance across object detection, segmentation, and classification.
