Sensors and Materials, Vol. 32, No. 9 (2020), pp. 2981-2998
S&M2317, Research Paper of Special Issue
https://doi.org/10.18494/SAM.2020.2878
Published in advance: June 13, 2020; Published: September 18, 2020

Acoustic-sensing-based Gesture Recognition Using Hierarchical Classifier
Miki Kawato and Kaori Fujinami
(Received March 23, 2020; Accepted May 25, 2020)

Keywords: gesture recognition, acoustic sensing, machine learning, hierarchical classifier, feature engineering
Gestural input for controlling artifacts and accessing the digital world is an essential part of highly usable systems. In this article, we propose a gesture recognition method that leverages the sound generated by the friction between a surface such as a table and a finger or pen, in which 17 different gestures are defined. The gesture recognition process is regarded as a 17-class classification problem; 89 classification features are defined to represent the envelope of each input sound, while a hierarchical classifier structure is employed to increase the accuracy for confusable classes. Offline experiments show that the highest accuracy is 0.954 under a condition where the classifiers are customized for each user, while an accuracy of 0.854 is obtained under a condition where the classifiers are trained without using the data from test users. We also confirm the effectiveness of the hierarchical classifier approach over a single flat-classifier approach, and of a feature engineering approach over a feature learning approach. Information on individual features is also presented.
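As a rough illustration of the hierarchical classification idea described in the abstract, the sketch below first assigns an input to a coarse group of confusable gestures and then applies a fine-grained classifier within that group. The grouping of gestures, the use of scikit-learn random forests at both levels, and the 89-dimensional envelope feature matrix X are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a two-level (hierarchical) gesture classifier.
# Assumptions (not taken from the paper): scikit-learn random forests at both
# levels, a hand-specified grouping of confusable gestures, and an input matrix
# X of shape (n_samples, 89) holding the envelope features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class HierarchicalGestureClassifier:
    def __init__(self, groups):
        # groups: dict mapping a coarse group id to the gesture labels it contains
        self.groups = groups
        self.label_to_group = {lbl: g for g, lbls in groups.items() for lbl in lbls}
        self.top = RandomForestClassifier(n_estimators=100, random_state=0)
        self.sub = {}  # one fine-grained classifier per multi-gesture group

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # Level 1: predict which group of confusable gestures a sample belongs to.
        y_group = np.array([self.label_to_group[lbl] for lbl in y])
        self.top.fit(X, y_group)
        # Level 2: within each group, discriminate the individual gestures.
        for g, labels in self.groups.items():
            if len(labels) > 1:
                mask = np.isin(y, labels)
                clf = RandomForestClassifier(n_estimators=100, random_state=0)
                clf.fit(X[mask], y[mask])
                self.sub[g] = clf
        return self

    def predict(self, X):
        X = np.asarray(X)
        group_pred = self.top.predict(X)
        out = np.empty(len(X), dtype=object)
        for g in np.unique(group_pred):
            idx = np.where(group_pred == g)[0]
            if g in self.sub:
                out[idx] = self.sub[g].predict(X[idx])
            else:
                # Singleton group: the group corresponds to exactly one gesture.
                out[idx] = self.groups[g][0]
        return out


# Hypothetical usage with the 17 gesture labels partitioned into coarse groups:
# groups = {0: ["tap", "double_tap"], 1: ["swipe_left", "swipe_right"], ...}
# clf = HierarchicalGestureClassifier(groups).fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```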
Corresponding author: Kaori Fujinami
This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Miki Kawato and Kaori Fujinami, Acoustic-sensing-based Gesture Recognition Using Hierarchical Classifier, Sens. Mater., Vol. 32, No. 9, 2020, pp. 2981-2998.