
Print: ISSN 0914-4935
Online: ISSN 2435-0869
Sensors and Materials is an international peer-reviewed open access journal providing a forum for researchers working in multidisciplinary fields of sensing technology. It is covered by Science Citation Index Expanded (Clarivate Analytics), Scopus (Elsevier), and other databases.


Publisher
 MYU K.K.
 Sensors and Materials
 1-23-3-303 Sendagi,
 Bunkyo-ku, Tokyo 113-0022, Japan
 Tel: 81-3-3827-8549
 Fax: 81-3-3827-8547




Sensors and Materials, Volume 36, Number 6(5) (2024)
Copyright (C) MYU K.K.
pp. 2569-2583
S&M3687 Research Paper of Special Issue
https://doi.org/10.18494/SAM4822
Published: June 28, 2024

Monocular Depth Estimation of 2D Images Based on Optimized U-net with Transfer Learning

Ming-Tsung Yeh, Tsung-Chi Chen, Neng-Sheng Pai, and Chi-Huan Cheng

(Received December 13, 2023; Accepted May 22, 2024)

Keywords: depth estimation, transfer-learning-based U-net, convolutional autoencoder, depth classification

Estimating depth from 2D images is vital in applications such as object recognition, scene reconstruction, and navigation, and offers significant advantages in augmented reality, image refocusing, and segmentation. In this paper, we propose an optimized U-net based on a transfer learning encoder and advanced decoder structures to estimate depth from a single 2D image. The encoder–decoder architecture uses ResNet152v2 as the encoder and an improved U-net-based decoder to achieve accurate depth predictions. The ResNet152v2 network was pretrained on the extensive ImageNet dataset, whose weights extract rich, generalizable features for large-scale image classification. This prior knowledge reduces training time and improves object position recognition. The proposed composite up-sampling block (CUB) in the decoder applies 2× and 4× bilinear interpolation combined with one-stride transpose convolution to expand the low-resolution feature maps obtained from the encoder, enabling the network to recover finer details. Skip connections enhance the representation power of the decoder: the output of each up-sampling block is concatenated with the corresponding pooling layer. This fusion of features from different scales helps capture local and global context information, contributing to more accurate depth predictions. The method uses RGB images and depth maps from the NYU Depth Dataset V2 as training inputs. The experimental results demonstrate that the transfer-learning-based encoder, coupled with the proposed decoder and data augmentation techniques, enables the transformation of complex RGB images into accurate depth maps. The system accurately classifies different depth ranges on the basis of depth data ranging from 0.4 to 10 m. By mapping depths to corresponding colors using gradational color scales, precise depth classification can be performed on 2D images.
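The up-sampling and depth-binning steps described in the abstract can be sketched as follows. This is a minimal single-channel NumPy illustration under assumed details (3×3 kernel, 2× path only, fusion order, and a 10-bin color scale are not specified in the abstract); it is not the authors' implementation.

```python
import numpy as np

def bilinear_upsample(x, scale):
    """Bilinear interpolation of a 2D feature map (H, W) by an integer scale."""
    h, w = x.shape
    ys = np.linspace(0, h - 1, h * scale)   # sample positions in the input grid
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def transpose_conv3x3_stride1(x, k):
    """One-stride transpose convolution: with stride 1 and 'same' padding it
    reduces to an ordinary 3x3 convolution with the flipped kernel."""
    h, w = x.shape
    p = np.pad(x, 1)
    kf = k[::-1, ::-1]                      # kernel flip
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kf)
    return out

def composite_upsample_block(feat, skip, k):
    """Hypothetical CUB step: upsample the encoder feature 2x, refine it with a
    one-stride transpose convolution, then concatenate the same-resolution
    skip feature from the encoder's pooling path (channel-wise)."""
    refined = transpose_conv3x3_stride1(bilinear_upsample(feat, 2), k)
    return np.stack([refined, skip])

def classify_depth(d, n_bins=10, d_min=0.4, d_max=10.0):
    """Map a depth in [0.4, 10] m to one of n_bins gradational color bins."""
    t = (d - d_min) / (d_max - d_min)
    return min(int(t * n_bins), n_bins - 1)
```

With a 4×4 encoder feature and an 8×8 skip map, the block returns a 2×8×8 fused tensor; the 4× interpolation path of the CUB would follow the same pattern at one scale higher.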

Corresponding author: Neng-Sheng Pai


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article
Ming-Tsung Yeh, Tsung-Chi Chen, Neng-Sheng Pai, and Chi-Huan Cheng, Monocular Depth Estimation of 2D Images Based on Optimized U-net with Transfer Learning, Sens. Mater., Vol. 36, No. 6, 2024, pp. 2569–2583.





Forthcoming Special Issues

Special Issue on Applications of Novel Sensors and Related Technologies for Internet of Things
Guest editors: Teen-Hang Meen (National Formosa University), Wenbing Zhao (Cleveland State University), and Cheng-Fu Yang (National University of Kaohsiung)
Call for papers


Special Issue on Advanced Sensing Technologies for Green Energy
Guest editor: Yong Zhu (Griffith University)
Call for papers


Special Issue on Room-temperature-operation Solid-state Radiation Detectors
Guest editor: Toru Aoki (Shizuoka University)
Call for papers


Special Issue on International Conference on Biosensors, Bioelectronics, Biomedical Devices, BioMEMS/NEMS and Applications 2023 (Bio4Apps 2023)
Guest editors: Dzung Viet Dao (Griffith University) and Cong Thanh Nguyen (Griffith University)
Conference website
Call for papers


Special Issue on Advanced Sensing Technologies and Their Applications in Human/Animal Activity Recognition and Behavior Understanding
Guest editor: Kaori Fujinami (Tokyo University of Agriculture and Technology)
Call for papers


Special Issue on Piezoelectric Thin Films and Piezoelectric MEMS
Guest editor: Isaku Kanno (Kobe University)
Call for papers


Copyright (C) MYU K.K. All Rights Reserved.