
Print: ISSN 0914-4935
Online: ISSN 2435-0869

Sensors and Materials is an international peer-reviewed open access journal that provides a forum for researchers working in multidisciplinary fields of sensing technology. Sensors and Materials is covered by Science Citation Index Expanded (Clarivate Analytics), Scopus (Elsevier), and other databases.


Publisher
 MYU K.K.
 Sensors and Materials
 1-23-3-303 Sendagi,
 Bunkyo-ku, Tokyo 113-0022, Japan
 Tel: +81-3-3827-8549
 Fax: +81-3-3827-8547


Sensors and Materials, Volume 38, Number 1(4) (2026)
Copyright (C) MYU K.K.
pp. 477-494
S&M4311 Technical paper
https://doi.org/10.18494/SAM6112
Published: January 29, 2026

C2 Block + Parallel Spatial Attention Module-Ghost Convolution-Feature Diffusion Pyramid Network-You Only Look Once (YOLO)-v11n: An Efficient and Real-time Small Object Detection Algorithm Based on YOLOv11n

Yu Fan, Junchao Lin, Chinta Chen, Mingkun Xu, and Cheng-Fu Yang

(Received December 7, 2025; Accepted January 7, 2026)

Keywords: YOLOv11 algorithm, deep learning, feature extraction, attention mechanism, small target detection

Small object detection plays a critical role in applications such as security surveillance, autonomous driving, and remote sensing. However, conventional detection methods often struggle with high annotation costs, low resolution, and heavy computational requirements. To address these challenges, we propose CGF-YOLOv11n, short for C2 block + parallel spatial attention module (C2PLUS)-Ghost Convolution (GhostConv)-Feature Diffusion Pyramid Network (FDPN)-You Only Look Once (YOLO)v11n, an efficient real-time small object detection algorithm built upon the YOLOv11n framework. First, we introduce the C2PLUS module, which effectively enhances fine-grained feature extraction for small targets. Second, we design a plug-and-play Ghost-Residual Field-Aware Convolution module to strengthen the feature extraction capability of the backbone network. Finally, the FDPN module is incorporated to promote balanced fusion between semantic features and spatial information. Experimental results on the VisDrone2019 dataset demonstrate that the proposed method achieves improvements of 3.5% and 3.1% in mAP@0.5 on the validation and test sets, respectively, outperforming the baseline YOLOv11n model. In addition, CGF-YOLOv11n achieves 34 frames per second on the Orange Pi 5 platform, confirming its suitability for real-time deployment and advancing the performance of small object detection systems. The related implementation details, including code and datasets, are available through the authors' public project repository. In this study, we primarily contribute an efficient modular enhancement strategy for real-time small object detection by integrating C2PLUS, Ghost-based convolution, and FDPN into a lightweight YOLOv11n framework.
While the proposed CGF-YOLOv11n demonstrates notable accuracy gains and real-time performance on an embedded platform, the current evaluation is limited to a single aerial benchmark dataset and does not fully explore robustness under extremely dense scenes or severe resolution degradation. Future work will focus on extending validation to more diverse datasets, improving generalization in complex real-world environments, and further optimizing the model for ultralow-power edge devices.
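The abstract does not detail the Ghost Convolution used in the backbone. As a rough, hedged illustration of why Ghost-style convolutions lighten a network, the sketch below compares the weight count of a standard convolution against the GhostNet-style split: a primary convolution produces 1/s of the output channels, and (s - 1) cheap depthwise operations generate the remaining "ghost" channels. The layer sizes, the ratio s = 2, and the depthwise kernel size d = 3 are illustrative assumptions, not values taken from the paper.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_conv_params(c_in, c_out, k, s=2, d=3):
    """GhostNet-style convolution: a primary k x k convolution
    produces c_out // s channels; (s - 1) cheap d x d depthwise
    operations generate the remaining 'ghost' channels."""
    primary = c_in * (c_out // s) * k * k
    cheap = (s - 1) * (c_out // s) * d * d
    return primary + cheap

# Illustrative backbone layer: 128 -> 256 channels, 3 x 3 kernel.
std = conv_params(128, 256, 3)          # 294912 weights
ghost = ghost_conv_params(128, 256, 3)  # 147456 + 1152 = 148608 weights
print(std, ghost, round(std / ghost, 2))
```

With s = 2 the parameter count roughly halves, which is consistent with the lightweight, real-time goal reported above (34 frames per second corresponds to a per-frame budget of about 29 ms).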

Corresponding authors: Yu Fan and Cheng-Fu Yang


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article
Yu Fan, Junchao Lin, Chinta Chen, Mingkun Xu, and Cheng-Fu Yang, C2 Block + Parallel Spatial Attention Module-Ghost Convolution-Feature Diffusion Pyramid Network-You Only Look Once (YOLO)-v11n: An Efficient and Real-time Small Object Detection Algorithm Based on YOLOv11n, Sens. Mater., Vol. 38, No. 1, 2026, pp. 477-494.



Forthcoming Special Issues

Special Issue on Novel Sensors, Materials, and Related Technologies on Artificial Intelligence of Things Applications
Guest editors: Teen-Hang Meen (National Formosa University), Wenbing Zhao (Cleveland State University), and Cheng-Fu Yang (National University of Kaohsiung)


Special Issue on Mobile Computing and Ubiquitous Networking for Smart Society
Guest editors: Akira Uchiyama (The University of Osaka) and Jaehoon Paul Jeong (Sungkyunkwan University)


Special Issue on Advanced Materials and Technologies for Sensor and Artificial-Intelligence-of-Things Applications (Selected Papers from ICASI 2026)
Guest editor: Sheng-Joue Young (National United University)


Special Issue on Innovations in Multimodal Sensing for Intelligent Devices, Systems, and Applications
Guest editors: Jiahui Yu (Research Scientist, Zhejiang University), Kairu Li (Professor, Shenyang University of Technology), Yinfeng Fang (Professor, Hangzhou Dianzi University), Chin Wei Hong (Professor, Tokyo Metropolitan University), and Zhiqiang Zhang (Professor, University of Leeds)


Special Issue on Advanced Materials and Technologies for Sensor and Artificial-Intelligence-of-Things Applications (Selected Papers from ICASI 2025)
Guest editor: Sheng-Joue Young (National United University)


Special Issue on Multisource Sensors for Geographic Spatiotemporal Analysis and Social Sensing Technology Part 5
Guest editors: Prof. Bogang Yang (Beijing Institute of Surveying and Mapping) and Prof. Xiang Lei Liu (Beijing University of Civil Engineering and Architecture)

