S&M2885 Research Paper of Special Issue
https://doi.org/10.18494/SAM3566
Published: March 24, 2022
pp. 1221-1227

Development of a Deep-learning-based Pet Video Editor
Chun-Cheng Lin, Cheng-Yu Yeh, and Kuan-Chun Hsu
(Received July 22, 2021; Accepted November 4, 2021)

Keywords: pet video editing system, deep learning, convolutional neural network (CNN), object detection, you only look once (YOLO), pets’ body movement recognition
Nowadays, a growing number of people keep animals, particularly dogs and cats, as pets. Many pet owners spend considerable time caring for their beloved pets and capture images of them in daily life and at memorable moments. Edited video clips can also be widely shared with others via the Internet. However, editing the captured pet videos is time-consuming. Accordingly, our team aimed to develop a pet video editor based on an object detection model and a body movement recognition model, so that pet videos can be captured and edited automatically using AI techniques. For simplicity, the target was narrowed down to recognizing three fundamental movements of dogs, namely, eating, tail raising, and yawning. As the first step, input videos were saved automatically once a dog was detected using a pretrained YOLOv4 object detection model, making video recording easy and efficient. Subsequently, the three types of body movement were recognized using a self-designed recognition model, so that close-up images of dogs containing any of the three movements can be instantly recognized, saved, and then shared with others. In this study, the presented body movement recognition model was experimentally validated to give a recognition accuracy of up to 98.84%. We are currently working on increasing the number of movements that can be recognized by our system.
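As a concrete illustration of the detection-triggered recording step, the sketch below loads a pretrained YOLOv4 network through OpenCV's DNN module and writes frames to an output clip whenever a dog is detected. The file names (yolov4.cfg, yolov4.weights, input.mp4, dog_clip.avi), the 416 x 416 input size, and the confidence thresholds are illustrative assumptions, not values taken from the paper.

```python
import cv2

# Load a pretrained YOLOv4 model (assumed Darknet config/weight files).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

DOG_CLASS_ID = 16  # "dog" in the 80-class COCO label set used by YOLOv4

cap = cv2.VideoCapture("input.mp4")  # or a camera index for live capture
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection; keep boxes above a confidence threshold after NMS.
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5,
                                            nmsThreshold=0.4)
    # Save the frame only when at least one dog is present.
    if any(int(c) == DOG_CLASS_ID for c in class_ids):
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter("dog_clip.avi",
                                     cv2.VideoWriter_fourcc(*"XVID"),
                                     fps, (w, h))
        writer.write(frame)

cap.release()
if writer is not None:
    writer.release()
```

Gating the video writer on the detector's output is what makes the recording automatic: frames without a dog are simply discarded rather than edited out afterwards.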
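For the body movement recognition step, the abstract does not specify the architecture of the self-designed model, so the following is only a minimal stand-in sketch: a three-class CNN classifier in Keras over cropped dog images, with the input size and layer widths chosen arbitrarily for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # eating, tail raising, yawning


def build_movement_classifier(input_shape=(224, 224, 3)):
    """Small CNN: stacked conv/pool blocks and a dense softmax head."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a pipeline like the one described, each dog region cropped from the YOLOv4 bounding box would be resized to the classifier's input shape and assigned one of the three movement labels.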
Corresponding author: Cheng-Yu Yeh

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Chun-Cheng Lin, Cheng-Yu Yeh, and Kuan-Chun Hsu, Development of a Deep-learning-based Pet Video Editor, Sens. Mater., Vol. 34, No. 3, 2022, pp. 1221-1227.