S&M 3797 Research Paper of Special Issue
https://doi.org/10.18494/SAM5114
Published: October 11, 2024

Autonomous Multitask Driving Systems Using Improved You Only Look Once Based on Panoptic Driving Perception

Chun-Jung Lin, Cheng-Jian Lin, and Yi-Chen Yang

(Received April 30, 2024; Accepted September 17, 2024)

Keywords: panoptic driving perception, YOLO, Taguchi method, drivable area detection, lane detection, vehicle detection
With the continuous development of science and technology, automatic assisted driving is becoming a trend that cannot be ignored. The You Only Look Once (YOLO) model is usually used to detect roads and drivable areas. Since YOLO is typically applied to a single task and a suitable parameter combination is difficult to determine, we propose a Taguchi-based YOLO for panoptic driving perception (T-YOLOP) model to improve the accuracy and computing speed of the model in detecting drivable areas and lanes, making it a more practical panoptic driving perception system. In the T-YOLOP model, the Taguchi method is used to determine the appropriate parameter combination. Our experiments use the BDD100K database to verify the performance of the proposed T-YOLOP model. Experimental results show that the accuracies of the proposed T-YOLOP model in detecting drivable areas and lanes are 97.9 and 73.9%, respectively, and these results are better than those of the traditional YOLOP model. Therefore, the proposed T-YOLOP model successfully provides a more reliable solution for the application of panoptic driving perception systems.
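The abstract states that the Taguchi method is used to determine the appropriate parameter combination. As a rough illustration of that general idea only (not the paper's actual implementation), the sketch below selects training hyperparameter levels with an L9 orthogonal array and a larger-the-better signal-to-noise (S/N) ratio; the factor names, level values, and accuracy model are all hypothetical.

```python
import math

# Hypothetical example: L9 orthogonal array for three factors at three
# levels each. Each row gives the level index chosen for each factor
# in one experimental run.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]

# Illustrative hyperparameter levels (not taken from the paper).
levels = {
    "learning_rate": [1e-3, 5e-4, 1e-4],
    "batch_size":    [16, 32, 64],
    "epochs":        [100, 200, 300],
}
factors = list(levels)

def sn_larger_is_better(ys):
    # Larger-the-better S/N ratio: -10 * log10(mean(1 / y^2)).
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

def best_levels(run_accuracies):
    """run_accuracies: one list of accuracy measurements per L9 row.
    Returns the level of each factor with the highest mean S/N ratio."""
    sn = [sn_larger_is_better(ys) for ys in run_accuracies]
    best = {}
    for f, name in enumerate(factors):
        # Average the S/N ratio over the runs in which this factor
        # was held at each level, then keep the best level.
        means = []
        for lv in range(3):
            vals = [sn[r] for r, row in enumerate(L9) if row[f] == lv]
            means.append(sum(vals) / len(vals))
        best[name] = levels[name][max(range(3), key=means.__getitem__)]
    return best
```

Only 9 of the 27 possible level combinations are evaluated; the orthogonal array balances the levels so that per-factor S/N averages still identify a good combination, which is the efficiency the Taguchi method offers over a full grid search.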
Corresponding author: Cheng-Jian Lin

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Chun-Jung Lin, Cheng-Jian Lin, and Yi-Chen Yang, Autonomous Multitask Driving Systems Using Improved You Only Look Once Based on Panoptic Driving Perception, Sens. Mater., Vol. 36, No. 10, 2024, pp. 4239-4252.