pp. 3743–3755
S&M 3763 Research Paper of Special Issue
https://doi.org/10.18494/SAM4853
Published: September 5, 2024

Exploring Learning Strategies for Training Deep Neural Networks Using Multiple Graphics Processing Units

Nien-Tsu Hu, Ching-Chien Huang, Chih-Chieh Mo, and Chien-Lin Huang
(Received January 7, 2024; Accepted February 28, 2024)

Keywords: learning strategy, multiple GPUs, minibatch, learning rate, deep neural networks, speech recognition
Neural network algorithms are increasingly used to model big data such as images and speech. Although they often deliver superior performance, they require more training time than traditional approaches. Graphics processing units (GPUs) are an excellent means of reducing training time, and using multiple GPUs rather than a single GPU further increases computing power. However, even with algorithmic and hardware support, selecting an appropriate learning strategy for training deep neural networks (DNNs) remains challenging. In this work, we investigate various learning strategies for training DNNs on multiple GPUs. Experimental results show that with the suggested approach, six GPUs achieve a speedup of approximately four times over a single GPU, while the precision remains similar to that of single-GPU training.
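The learning strategies studied here build on synchronous data-parallel training: each GPU processes a slice of the minibatch, per-device gradients are averaged, and the model takes one shared update step. A minimal sketch of that idea in plain Python, simulating GPUs as in-process workers on a toy one-parameter regression (all function and variable names below are our own illustration, not from the paper):

```python
# Data-parallel SGD sketch: shard a minibatch across "workers" (simulated
# GPUs), compute per-worker gradients, then average them. With equal-sized
# shards, the averaged gradient equals the full-minibatch gradient, so the
# synchronous multi-worker step matches the single-worker step exactly.

def grad_mse(w, batch):
    """Gradient of mean squared error for the 1-D model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(w, minibatch, n_workers, lr):
    """One synchronous update: shard the minibatch, average worker gradients."""
    shards = [minibatch[i::n_workers] for i in range(n_workers)]
    grads = [grad_mse(w, shard) for shard in shards if shard]
    avg_grad = sum(grads) / len(grads)          # stand-in for an all-reduce
    return w - lr * avg_grad

# Toy data drawn from y = 3x; start at w = 0.
data = [(x, 3 * x) for x in range(1, 9)]
w_single = data_parallel_step(0.0, data, n_workers=1, lr=0.01)
w_multi = data_parallel_step(0.0, data, n_workers=4, lr=0.01)
print(w_single, w_multi)  # identical updates: one GPU vs. four simulated GPUs
```

In practice, adding GPUs usually enlarges the effective minibatch, so the learning rate must be retuned (for example, scaled with the batch size); the interaction of minibatch size and learning rate is exactly the kind of strategy choice this paper investigates.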
Corresponding authors: Ching-Chien Huang and Chien-Lin Huang

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Nien-Tsu Hu, Ching-Chien Huang, Chih-Chieh Mo, and Chien-Lin Huang, Exploring Learning Strategies for Training Deep Neural Networks Using Multiple Graphics Processing Units, Sens. Mater., Vol. 36, No. 9, 2024, pp. 3743–3755.