S&M3647 Research Paper of Special Issue, pp. 1959-1982
https://doi.org/10.18494/SAM4827
Published: May 24, 2024

Indoor Mobile Robot Path Planning and Navigation System Based on Deep Reinforcement Learning

Neng-Sheng Pai, Xiang-Yan Tsai, Pi-Yun Chen, and Hsu-Yung Lin
(Received December 15, 2023; Accepted May 13, 2024)

Keywords: deep reinforcement learning, behavior cloning, YOLO-v7-tiny, A* algorithm, DWA algorithm
In this paper, we propose an autonomous navigation system architecture for indoor mobile robots that combines the advantages of end-to-end (E2E) autonomous driving and traditional navigation algorithms. The architecture addresses two challenges: traditional navigation algorithms rely heavily on high-precision localization, whereas E2E approaches struggle to make good decisions when the target object cannot be detected. A neural network is trained using deep reinforcement learning in a simulated environment, and behavior cloning is introduced to stabilize the training process. With this approach, the trained network makes action decisions based solely on 2D LiDAR data and camera images, eliminating the reliance on a high-precision localization system and thus overcoming the main limitation of traditional navigation algorithms. In real-world environments, the YOLO-v7-tiny model is used for object detection in indoor settings. When the target object is far away, the A* and dynamic window approach (DWA) algorithms are employed for path planning: A* finds the globally optimal path and DWA performs local obstacle avoidance, ensuring safe and efficient navigation. Together, these components achieve autonomous navigation in indoor environments.
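To make the global planning step concrete, the following Python sketch shows a minimal A* search on a binary occupancy grid with a Manhattan-distance heuristic. It is an illustration only, not the paper's implementation; the grid layout, 4-connected neighborhood, and function name are assumptions for the example.

```python
# Minimal A* sketch on a 2D occupancy grid (illustrative, not the authors' code).
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable.

    grid: 2D list where 0 = free cell and 1 = obstacle.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
    came_from = {}
    g_cost = {start: 0}

    while open_heap:
        _, g, current = heapq.heappop(open_heap)
        if current == goal:                      # reconstruct path by walking parents
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if g > g_cost.get(current, float("inf")):
            continue                             # stale heap entry, skip
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1                       # unit cost per grid step
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                  # no path found

if __name__ == "__main__":
    occupancy = [[0, 0, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0],
                 [0, 1, 1, 0]]
    print(astar(occupancy, (0, 0), (3, 3)))
```

In a full system of the kind described in the abstract, a path like this would serve as the global reference, with a local planner such as DWA generating velocity commands that follow it while avoiding obstacles detected online.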
Corresponding author: Pi-Yun Chen

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Neng-Sheng Pai, Xiang-Yan Tsai, Pi-Yun Chen, and Hsu-Yung Lin, Indoor Mobile Robot Path Planning and Navigation System Based on Deep Reinforcement Learning, Sens. Mater., Vol. 36, No. 5, 2024, pp. 1959-1982.