
Print: ISSN 0914-4935
Online: ISSN 2435-0869
Sensors and Materials is an international peer-reviewed open access journal that provides a forum for researchers working in multidisciplinary fields of sensing technology. Sensors and Materials is covered by Science Citation Index Expanded (Clarivate Analytics), Scopus (Elsevier), and other databases.


Publisher
 MYU K.K.
 Sensors and Materials
 1-23-3-303 Sendagi,
 Bunkyo-ku, Tokyo 113-0022, Japan
 Tel: 81-3-3827-8549
 Fax: 81-3-3827-8547




Sensors and Materials, Volume 34, Number 1(3) (2022)
Copyright (C) MYU K.K.
pp. 251-260
S&M2808 Research Paper of Special Issue
https://doi.org/10.18494/SAM3732
Published: January 31, 2022

Object Detection of Road Facilities Using YOLOv3 for High-definition Map Updates

Tae-Young Lee, Myeong-Hun Jeong, and Almirah Peter

(Received November 15, 2021; Accepted January 4, 2022)

Keywords: high-definition (HD) map, object detection, autonomous driving, deep learning, YOLOv3

Autonomous driving technology relies heavily on the fusion of high-definition (HD) maps and sensor data. The construction and updating of HD maps must therefore be emphasized to achieve full driving automation. Herein, a method is proposed to detect road facilities in images for HD map updates using the You Only Look Once version 3 (YOLOv3) algorithm. The proposed deep-learning-based object detection method employs transfer learning, detecting road facilities and recording road sections that require maintenance. To test the effectiveness of the method, we analyze video footage captured in the Korean road environment. The experimental results show that the method achieves a mean average precision (mAP) of 58 and can support HD map updates through a crowdsourcing framework.
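The mAP figure quoted in the abstract is the standard object detection metric: detections are matched to ground-truth boxes by intersection over union (IoU), and precision is averaged over recall per class. As an illustration only (not the authors' evaluation code), a minimal sketch of IoU matching and average precision for a single class, with boxes given as (x1, y1, x2, y2) tuples:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truths, iou_thresh=0.5):
    """AP for one class. detections: list of (score, box); ground_truths: list of box.
    Each ground truth may match at most one detection (greedy, by score)."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = set()
    hits = []  # 1 = true positive, 0 = false positive, in score order
    for score, box in detections:
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truths):
            if j in matched:
                continue
            v = iou(box, gt)
            if v > best_iou:
                best_iou, best_j = v, j
        if best_iou >= iou_thresh:
            matched.add(best_j)
            hits.append(1)
        else:
            hits.append(0)
    # Area under the precision-recall curve (simple rectangle rule).
    ap, cum_tp, prev_recall = 0.0, 0, 0.0
    for i, h in enumerate(hits):
        cum_tp += h
        recall = cum_tp / len(ground_truths)
        precision = cum_tp / (i + 1)
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap
```

Averaging this AP over all road-facility classes yields mAP. Production evaluations typically interpolate the precision-recall curve (e.g., COCO-style 101-point interpolation); the rectangle rule above is kept for brevity.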

Corresponding author: Myeong-Hun Jeong


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article
Tae-Young Lee, Myeong-Hun Jeong, and Almirah Peter, Object Detection of Road Facilities Using YOLOv3 for High-definition Map Updates, Sens. Mater., Vol. 34, No. 1, 2022, pp. 251-260.





Forthcoming Special Issues

Special Issue on Collection, Processing, and Applications of Measured Sensor Signals
Guest editor: Hsiung-Cheng Lin (National Chin-Yi University of Technology)


Special Issue on Advanced Materials and Sensing Technologies on IoT Applications: Part 4-3
Guest editors: Teen-Hang Meen (National Formosa University), Wenbing Zhao (Cleveland State University), and Cheng-Fu Yang (National University of Kaohsiung)


Special Issue on IoT Wireless Networked Sensing for Life and Safety
Guest editors: Toshihiro Itoh (The University of Tokyo) and Jian Lu (National Institute of Advanced Industrial Science and Technology)


Special Issue on the International Multi-Conference on Engineering and Technology Innovation 2021 (IMETI2021)
Guest editor: Wen-Hsiang Hsieh (National Formosa University)


Special Issue on Biosensors and Biofuel Cells for Smart Community and Smart Life
Guest editors: Seiya Tsujimura (University of Tsukuba), Isao Shitanda (Tokyo University of Science), and Hiroaki Sakamoto (University of Fukui)


Special Issue on Novel Sensors and Related Technologies on IoT Applications: Part 1
Guest editors: Teen-Hang Meen (National Formosa University), Wenbing Zhao (Cleveland State University), and Cheng-Fu Yang (National University of Kaohsiung)

