Publication:
Real-time detection and recognition of Malaysian traffic signs using YOLOV8

Date
2024-08
Authors
Be, Tein Pin
Abstract
The rise of AI and autonomous vehicle technology, including in Malaysia, underscores the potential of traffic sign detection and recognition models to enhance traffic safety, assist drivers, and be deployed in autonomous vehicles. Current datasets for Malaysian traffic signs are imbalanced and insufficient for training. Previous research using YOLOv4 achieved an mAP50 of 59.88% and 35 FPS, suggesting room for improvement with YOLOv8. This study aims to create a balanced Malaysian Traffic Sign Dataset with 45 classes, each containing 100 to 200 instances, and to develop a high-accuracy detection model with an mAP50-95 above 0.85 and over 45 FPS using YOLOv8. The balanced dataset was formed by combining existing datasets with custom-collected data in Roboflow, improving quality by correcting errors, adding null images, removing low-quality images, and excluding minority classes. Data augmentation and selection balanced the dataset, with Python scripts aiding the process. The final Dataset_v2 has class instances ranging from 175 to 200. The dataset was split into train-validation-test sets (70:20:10 for Dataset_v2). All images were resized to fit the model input size, and hyperparameters, including image size and architecture variant, were selected using Ultralytics HUB. Training was conducted in Google Colab on a Tesla T4 GPU. Accuracy was evaluated through mAP50-95, and speed by running a three-minute example video and calculating the average FPS. The best model (version 5), using the YOLOv8s architecture trained at a 960×960 image size, achieved an mAP50-95 of 0.859, an mAP50 of 95.20%, and 50.36 FPS, meeting all objectives and surpassing similar models.
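The workflow described in the abstract (training YOLOv8s at a 960×960 input size, reporting mAP50-95 and mAP50, and measuring average FPS over an example video) can be illustrated with the Ultralytics Python package. The sketch below is not the author's exact pipeline; the dataset YAML name, video file name, and epoch count are placeholder assumptions.

```python
# Minimal sketch of a YOLOv8s training/evaluation run with Ultralytics.
# "dataset_v2.yaml", "example_video.mp4", and epochs=100 are assumptions,
# not values taken from the thesis.
import time
from ultralytics import YOLO

# Train YOLOv8s on the balanced traffic-sign dataset at 960x960 input size.
model = YOLO("yolov8s.pt")
model.train(data="dataset_v2.yaml", imgsz=960, epochs=100)

# Validation reports mAP50-95 and mAP50 on the held-out split.
metrics = model.val()
print(f"mAP50-95: {metrics.box.map:.3f}, mAP50: {metrics.box.map50:.3f}")

# Estimate average FPS by streaming inference over an example video.
start = time.time()
n_frames = 0
for _ in model.predict(source="example_video.mp4", imgsz=960,
                       stream=True, verbose=False):
    n_frames += 1
print(f"Average FPS: {n_frames / (time.time() - start):.2f}")
```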