Publication:
Chinese traffic sign detection and recognition based on lightweight you only look once (YOLO) models

Date
2024-08
Authors
Song, Wei Zhen
Abstract
Detecting and recognizing traffic signs is crucial for intelligent driving systems, providing essential real-time guidance to drivers. Challenges such as adverse weather, poor lighting, and occlusion hinder traffic sign detection, and conventional algorithms struggle to balance accuracy with real-time performance; lightweight deep learning detectors are therefore favoured for their automatic feature extraction and low computational cost. Building on the classic YOLOv4-tiny and YOLOv5s object detection algorithms, this study proposes several improvement strategies aimed at developing a more robust traffic sign detection model. The Tsinghua-Tencent 100K (TT100K) dataset and the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB and CCTSDB2021) datasets are used for training and evaluation. Enhancements include an improved lightweight Better Efficient Channel Attention (BECA) mechanism, an upgraded Dense Spatial Pyramid Pooling (Dense SPP) network, an additional detection head, and optimized anchor boxes. The resulting TSR-YOLO model achieved significant improvements in precision (96.62%), recall (79.73%), F1 score (87.37%), and mAP (92.72%) while sustaining a stable frame rate of around 81 FPS. However, its complexity makes it unsuitable for embedded devices. The study therefore developed Sign-YOLO, which incorporates a Coordinate Attention (CA) module, a High Bidirectional Feature Pyramid Network (High-BiFPN), and the Better Ghost Module to reduce model size. Evaluated on the CCTSDB2021 and TT100K datasets, Sign-YOLO reduces the parameter count by 0.13M compared with YOLOv5s and achieves a good balance between accuracy and speed for traffic sign detection.
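For context, the sketch below illustrates the standard Efficient Channel Attention (ECA) block that the proposed BECA mechanism builds on; it is a minimal PyTorch illustration assuming the usual ECA formulation, not the thesis implementation, and the "Better" modifications described in the abstract are not reproduced here.

import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: reweights channels with a 1D convolution."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive 1D kernel size derived from the channel count (ECA heuristic).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map from a YOLO backbone or neck stage.
        y = self.pool(x)                      # (N, C, 1, 1) channel descriptor
        y = y.squeeze(-1).transpose(1, 2)     # (N, 1, C) for 1D convolution
        y = self.conv(y)                      # local cross-channel interaction
        y = y.transpose(1, 2).unsqueeze(-1)   # back to (N, C, 1, 1)
        return x * self.sigmoid(y)            # scale channels by attention weights

# Example: attach to a hypothetical 256-channel feature map.
if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)
    print(ECA(256)(feat).shape)  # torch.Size([1, 256, 40, 40])

Because the attention weights come from a single 1D convolution over pooled channel statistics, the block adds only a handful of parameters, which is consistent with the lightweight design goals stated in the abstract.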