S&M2155 Research Paper of Special Issue
Sens. Mater., Vol. 32, No. 3 (2020), pp. 1005-1013
https://doi.org/10.18494/SAM.2020.2634
Published in advance: February 7, 2020
Published: March 19, 2020

Detection of Head Motion from Facial Feature Points Using Deep Learning for Tele-operation of Robot

Masahiko Minamoto, Shigeki Hori, Hideyuki Kobayashi, Toshihiro Kawase, Tetsuro Miyazaki, Takahiro Kanno, and Kenji Kawashima

(Received September 27, 2019; Accepted October 28, 2019)

Keywords: visual interface, tele-operation, deep learning, laparoscope holder
We propose an interface for the tele-operation of a laparoscope-holder robot via head
movement using facial feature point detection. Fourteen feature points on the operator’s face
are detected using a camera. The vertical and horizontal rotation angles and the distance
between the face and the camera are estimated from the points using deep learning. The
training data for deep learning are obtained using a dummy face. The root-mean-square error
(RMSE) between the estimated and directly measured values is calculated for different numbers
of nodes, layers, and epochs, and suitable numbers are determined from the RMSE values.
The trained network is then evaluated with four human subjects, and the effectiveness of the
proposed method is demonstrated experimentally.
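For concreteness, the sketch below illustrates the kind of regression the abstract describes: a small fully connected network that maps the 2D coordinates of the 14 facial feature points to the vertical and horizontal rotation angles and the face-camera distance, with the RMSE used to compare different node, layer, and epoch settings. This is a minimal sketch, not the authors' implementation; the layer widths, activation, optimizer, learning rate, and epoch count are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (assumed architecture, not the paper's): a fully connected
# regressor mapping 14 facial feature points (x, y) to head pitch, yaw, and
# face-camera distance, with RMSE used to compare hyperparameter settings.
import torch
import torch.nn as nn

N_POINTS = 14                # feature points detected on the operator's face
IN_DIM = 2 * N_POINTS        # (x, y) coordinates per point
OUT_DIM = 3                  # vertical angle, horizontal angle, distance

model = nn.Sequential(       # hidden-layer sizes are illustrative assumptions
    nn.Linear(IN_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, OUT_DIM),
)

def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Root-mean-square error between estimated and directly measured values."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

def train(model, inputs, targets, epochs=500, lr=1e-3):
    """Train on dummy-face data (inputs: feature points, targets: measured
    angles and distance) and return the RMSE on the training set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return rmse(model(inputs), targets).item()
```

In this kind of setup, the hyperparameter search the abstract mentions amounts to repeating the training run over a grid of node counts, layer counts, and epoch numbers and keeping the configuration with the lowest RMSE.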
Corresponding author: Masahiko Minamoto

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article as: Masahiko Minamoto, Shigeki Hori, Hideyuki Kobayashi, Toshihiro Kawase, Tetsuro Miyazaki, Takahiro Kanno, and Kenji Kawashima, Detection of Head Motion from Facial Feature Points Using Deep Learning for Tele-operation of Robot, Sens. Mater., Vol. 32, No. 3, 2020, pp. 1005-1013.