Vision-based Robotic Arm in Defect Detection and Object Classification Applications
Cheng-Jian Lin, Jyun-Yu Jhang, Yi-Jyun Gao, and Hsiu-Mei Huang
S&M3553, Research Paper of Special Issue, https://doi.org/10.18494/SAM4683
Published: February 29, 2024 (Received June 15, 2023; Accepted January 12, 2024)
Keywords: robotic arm, deep learning network, object measurement, You Only Look Once (YOLO), defect detection
Robotic arms have been widely used in industrial fields. However, researchers have seldom considered the conditions of actual factory environments. For example, when objects are conveyed in a factory, conveyor belts are often used to dynamically organize the overall production line. In addition, each object must pass multiple checkpoints for repeated audits and inspections to ensure its quality. In this study, a vision-based robotic arm system with multiple functionalities was developed. The development process consisted of three steps: detecting multiple dynamic objects, determining the size of each object, and identifying object defects. In the first step, You Only Look Once (YOLO) was used to detect multiple dynamic objects on a conveyor belt in real time. In the second step, the original image of the object was converted into a grayscale image, and the edge contour of the object was extracted using the Canny edge detection algorithm; the object was then rotated to obtain vertical and horizontal projections, and an artificial neural network (ANN) was used to calculate the size of each object. In the third step, a convolutional fuzzy neural network (CFNN) was used to identify object defects. This network consists of an input layer, a convolution pooling layer, a feature fusion layer, a fuzzy layer, a rule layer, and a defuzzification layer. According to the experimental results, the standard error of the mean between the object sizes obtained by the ANN and the actual sizes was 0.009. In addition, the accuracy, recall, precision, and F1-score obtained by the CFNN in object defect detection were 0.9580, 0.9535, 0.9535, and 0.9535, respectively. Compared with other deep neural network models, such as AlexNet and LeNet, the proposed CFNN has fewer parameters and achieves higher performance.
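As a rough illustration of the size-measurement step described above (grayscale conversion, Canny edge extraction, rotation to an upright pose, and vertical/horizontal projections that feed an ANN), the following minimal Python/OpenCV sketch may be helpful. The function name, the Canny thresholds, and the regressor mentioned at the end are illustrative assumptions and not the authors' implementation.

# Minimal sketch (assumed, not the authors' code) of the size-measurement step:
# grayscale conversion, Canny edge extraction, rotation to an axis-aligned pose,
# and vertical/horizontal projections whose profiles can feed a small ANN regressor.
import cv2
import numpy as np

def projection_features(image_bgr, canny_low=50, canny_high=150):
    # 1. Grayscale conversion and Canny edge contour extraction
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)

    # 2. Estimate the object's orientation from the largest edge contour
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), _, angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))

    # 3. Rotate the edge map so the object is upright before projecting
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    aligned = cv2.warpAffine(edges, rot, (edges.shape[1], edges.shape[0]))

    # 4. Vertical and horizontal projections: edge-pixel counts per column/row
    v_proj = aligned.sum(axis=0) / 255.0
    h_proj = aligned.sum(axis=1) / 255.0
    return np.concatenate([v_proj, h_proj]).astype(np.float32)

# A small fully connected regressor could then map the projection features to a
# physical size, e.g. (hypothetically) sklearn.neural_network.MLPRegressor(hidden_layer_sizes=(64, 32)).

The exact feature encoding and ANN architecture used by the authors are described in the full text; the sketch above only mirrors the processing order stated in the abstract.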
Corresponding author: Jyun-Yu Jhang
This work is licensed under a Creative Commons Attribution 4.0 International License.
Cite this article: Cheng-Jian Lin, Jyun-Yu Jhang, Yi-Jyun Gao, and Hsiu-Mei Huang, Vision-based Robotic Arm in Defect Detection and Object Classification Applications, Sens. Mater., Vol. 36, No. 2, 2024, pp. 655-670.