
Regular paper


Journal of information and communication convergence engineering 2023; 21(4): 329-336

Published online December 31, 2023

https://doi.org/10.56977/jicce.2023.21.4.329

© Korea Institute of Information and Communication Engineering

A Scene-Specific Object Detection System Utilizing the Advantages of Fixed-Location Cameras

Jin Ho Lee 1, In Su Kim 1, Hector Acosta 1, Hyeong Bok Kim 2, Seung Won Lee2, and Soon Ki Jung1*

1School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
2Testworks, Inc., Seoul 01000, Korea

Correspondence to : Soon Ki Jung (E-mail: skjung@knu.ac.kr)
Department of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea

Received: April 27, 2023; Revised: September 13, 2023; Accepted: September 27, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

This paper introduces an edge AI-based scene-specific object detection system for long-term traffic management, focusing on analyzing congestion and movement via cameras. It aims to balance fast processing and accuracy in traffic flow data analysis using edge computing. We adapt the YOLOv5 model, with four heads, to a scene-specific model that utilizes the fixed camera’s scene-specific properties. This model selectively detects objects based on scale by blocking nodes, ensuring only objects of certain sizes are identified. A decision module then selects the most suitable object detector for each scene, enhancing inference speed without significant accuracy loss, as demonstrated in our experiments.

Keywords: Scene-specific System, You Only Look Once Version 5 (YOLOv5), Edge AI, Embedded System

I. INTRODUCTION

In urban planning, analyzing traffic congestion and movement is vital for effective long-term traffic control [1]. Traditional methods, like monitoring traffic flow through CCTV cameras, are costly and labor-intensive [2]. To address these challenges, AI cameras are increasingly being used as intelligent traffic detection systems [3,4]. These cameras, functioning as edge computing devices with embedded GPUs, can run lightweight deep learning models. Such an Edge AI system enables efficient monitoring of crossroads, providing high-level traffic data, including flow and congestion insights, thus offering a more resource-efficient solution to traffic management [5].

This research focuses on employing Edge AI systems in urban planning and traffic control to analyze traffic congestion and the movement of people and vehicles. Utilizing AI cameras as edge computing devices, equipped with embedded GPUs, we can run lightweight deep learning models for real-time traffic flow analysis. This approach addresses the high cost and resource intensity of monitoring traffic through CCTV cameras. Our Edge AI system analyzes high-level traffic data such as flow and congestion at crossroads, avoiding the use of cloud computing due to concerns over personal information leakage, transmission delays, and increased network traffic [6,7]. Edge computing offers a solution to these issues, making it a more suitable choice for our study.

Edge computing, however, often struggles to match the performance of cloud computing due to limited resources impacting inference speed and accuracy [8]. Examples of AI services applied to edge computing include traffic surveillance and monitoring research using the Faster R-CNN network [9], a study on enhancing power efficiency and security in Intelligent Transportation Systems (ITS) [10], and research on FD-YOLOv5, a YOLOv5 network-based system for detecting safety helmets in operators [4]. However, these studies primarily emphasize detection accuracy over inference speed. While these methods enhance accuracy and object detection efficacy, considering inference speed is crucial for integrating these models with CCTV for an embedded system.

In our research, we’ve enhanced an Edge AI system for analyzing traffic flow by developing a scene-specific model based on YOLOv5, specifically designed to address slow inference speed in embedded systems and move beyond traditional lightweight model approaches. We made significant modifications to the existing model architecture to detect smaller objects more effectively. Our main contributions are as follows:

  • In addition to the existing structure, we’ve added layers to the backbone, neck, and head of the model, enabling it to detect smaller objects more efficiently than the standard model.

  • We designed the scene-specific model by customizing object size detection for each image grid, and selectively deactivating certain layers in the head and corresponding neck modules. This strategy enhances computational speed while ensuring minimal loss in accuracy for specific object sizes.

  • A bespoke decision module was developed to adapt the scene-specific system model to different CCTV environments, further enhancing its applicability and effectiveness.

The structure of this paper is organized as follows: Section II provides an overview of the YOLOv5 model, highlighting its status as a cutting-edge lightweight model. In Section III, we delve into the detailed implementation process specific to scene-based systems. Section IV discusses our experimental results, offering in-depth interpretations. The paper concludes with Section V, which presents our final thoughts and potential future research directions.

II. RELATED WORK

A. YOLOv5 model [16]

In object detection based on deep learning, there are two primary methods. The first method, known as two-stage detection, includes techniques like R-CNN, Fast R-CNN, Faster R-CNN [11], and Mask R-CNN [12]. These methods initially extract region proposals using selective search algorithms or region proposal networks (RPN), followed by object detection based on these proposals. While two-stage detectors are highly accurate, they are characterized by slower inference speeds. The second method involves one-stage detectors like the YOLO series [13-16]. These algorithms employ regression to simplify learning the target's generalized characteristics, effectively addressing the challenge of inference speed. In this context, YOLOv5 has been chosen for its suitability in embedded system environments. Among the various models offered by YOLOv5, the lightweight versions are particularly apt for our research, providing satisfactory performance even on embedded hardware.

YOLOv5 encompasses five models: YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x, each varying in parameters of depth and width. Among these, YOLOv5x offers the highest accuracy but at the slowest speed, while YOLOv5n provides the fastest inference with the least accuracy. As depicted in Fig. 1, the fundamental architecture of YOLOv5 consists of three main components: the backbone, neck, and head. The backbone, using the SPPF layer and CSPDarknet53 [17], primarily extracts features to create a feature map. The neck, utilizing PANet [18], forms a feature pyramid at various scales, linking the backbone and head. The head component is responsible for image classification and bounding box location regression [19]. Our analysis and modifications focus on the neck and head modules of the network.

Fig. 1. Structure of the modified YOLOv5 network
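For reference, the five variants share one architecture and differ only in two scaling factors. The sketch below shows how the depth and width multiples (the values published in the Ultralytics YOLOv5 model configuration files) scale a module's repeat count and channel width; the helper names here are ours, not part of the YOLOv5 codebase.

```python
import math

# Depth/width multiples of the five YOLOv5 variants, as listed in the
# Ultralytics model configs (yolov5n.yaml ... yolov5x.yaml).
VARIANTS = {
    "n": (0.33, 0.25),
    "s": (0.33, 0.50),
    "m": (0.67, 0.75),
    "l": (1.00, 1.00),
    "x": (1.33, 1.25),
}

def make_divisible(x: float, divisor: int = 8) -> int:
    # Channel counts are rounded up to a multiple of 8 for GPU efficiency.
    return math.ceil(x / divisor) * divisor

def scale_module(base_repeats: int, base_channels: int, variant: str):
    """Scale one module of the base architecture for a given variant."""
    depth, width = VARIANTS[variant]
    repeats = max(round(base_repeats * depth), 1)   # depth multiple
    channels = make_divisible(base_channels * width)  # width multiple
    return repeats, channels
```

For example, a block with 3 repeats and 256 output channels scales to (1, 128) in YOLOv5s and (4, 320) in YOLOv5x, which is why the Nano and Small variants fit embedded compute budgets while XLarge does not.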

B. Edge AI system

Given the constraints of computing resources in embedded systems, such as limited memory and processing capacity, optimizing deep learning models for these environments is essential. To deploy models on resource-constrained embedded devices, they need to be made faster and lighter through processes like model light-weighting or model compression. Common methods include knowledge distillation [20], where knowledge from a larger teacher model is transferred to a smaller student model; TensorRT [21], which optimizes the model on embedded GPUs to enhance speed; quantization [22], a technique for minimizing redundant bits in model parameters; and pruning [23], which involves eliminating superfluous parameters from the original network.
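To make the pruning idea concrete, the toy sketch below zeroes the smallest-magnitude entries of a flattened weight tensor. It is our own illustrative helper, not a call into any framework; note in particular that the zeroed values are still stored, so the dense parameter count is unchanged, a point that matters for the file-size observation in Section IV.

```python
def prune_by_magnitude(weights, fraction):
    """Zero out the `fraction` of weights with the smallest magnitude.

    The zeroed entries remain in the list, so a dense weight file
    written from the result is no smaller than the original.
    """
    if not 0.0 <= fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    k = int(len(weights) * fraction)
    # Magnitude threshold at or below which weights are zeroed
    # (ties at the threshold may prune slightly more than k entries).
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]
```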

Our research introduces a scene-specific method, distinct from the aforementioned techniques. This approach retains submodules related to the head in the model and increases inference speed by selecting a model tailored to each specific scene.

III. SCENE-SPECIFIC SYSTEM

Before implementing our system, we established hypotheses to simplify the model for a specific scene with a stationary camera. These hypotheses are:

  • Road images will not contain objects larger than cars or trucks, implying a limit on the object size in input images.

  • Due to the geometric relationship between the camera and the ground, objects of the same class appear smaller when they are farther away or positioned higher in the image.

  • The variety of object classes detectable in a given environment is limited. For instance, in urban road settings, focusing only on cars and pedestrians simplifies the model, reducing the likelihood of detecting false classes.

Our scene-specific approach is designed in line with these hypotheses, aiming to streamline the detection process by focusing on relevant object sizes and classes.

A. Overview of Scene-specific System

The system comprises two main components: the Server block and the Edge AI block. The Server block handles training and designing the model, while the Edge AI block is responsible for inference. An overview of the scene-specific system is depicted in Fig. 2. In the Server block, the model undergoes training and is subsequently tailored for a specific scene. In the Edge AI block, the scene-specific model received from the server processes the test image. The model's prediction is then sent to the Comparison block along with the model itself. The Comparison block determines the most effective object detector (OD) by evaluating both the input model and its prediction results. The chosen model then receives the test image, initiating the inference process.

Fig. 2. Overview of Scene-specific System

B. Integration of an Additional Tiny Head with Corresponding Layers

In our experiments, we observed a challenge in detecting distant, small-sized objects using a stationary camera. To address this, inspired by successful instances of enhanced small object detection [24,25], we not only added an extra tiny head for detecting these small objects but also incorporated corresponding layers in the backbone and neck of the model. This comprehensive update, integrating the tiny head with aligned layers in the backbone and neck, results in a 4-head structure. This structure efficiently handles variations in object scales and improves the detection of smaller objects. However, it’s important to note that these enhancements in detection capabilities are balanced against an increase in computational requirements and memory consumption.

C. Design of Scene-specific Model

In YOLOv5, prediction heads are differentiated by their roles as tiny, small, medium, and large, based on the object scale they detect. We propose that in certain scenes, not all four prediction heads are necessary due to the limited scale of objects in the input image. For instance, in high-altitude environments where objects appear smaller, only the tiny and small prediction heads might be required. Consequently, in such scenarios, the medium and large prediction heads, along with their corresponding neck nodes, are disabled.

Our model is designed to offer four modes of access through a single trained model. Fig. 3 visualizes this redesigned Scene-specific model. It consists of four detectors— each specialized for different object sizes (micro, mini, middle, and big)—determined by the deactivated nodes in the neck and prediction head. Each Object Detector (OD) is a hierarchically structured detector that accumulates itself and smaller variations, except for the OD-Micro. For clarity, we will use the term Accumulated Object Detector (AOD) to denote this feature. AOD-Big corresponds to the complete, unmodified base model. OD-Micro utilizes only the tiny prediction heads, AOD-Mini employs both tiny and small heads, and AOD-Middle combines tiny, small, and medium heads. Each detector activates only its defined prediction heads, blocking others and their related neck nodes to optimize performance for specific scene requirements.

Fig. 3. Scene-specific model
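The accumulation rule can be stated compactly: each detector enables a prefix of the head list, ordered from tiny to large, and blocks the rest together with their neck nodes. A minimal configuration sketch mirroring Fig. 3 (the names are ours):

```python
# The four prediction heads, ordered by the object scale they cover.
HEADS = ("tiny", "small", "medium", "large")

# Each accumulated object detector (AOD) keeps its own head plus all
# smaller ones; OD-Micro uses the tiny head alone, and AOD-Big is the
# complete, unmodified 4-head model.
DETECTORS = {
    "OD-Micro": 1,
    "AOD-Mini": 2,
    "AOD-Middle": 3,
    "AOD-Big": 4,
}

def active_heads(detector: str):
    """Prediction heads left enabled for a given detector."""
    return HEADS[: DETECTORS[detector]]

def blocked_heads(detector: str):
    """Heads (and hence neck nodes) that are deactivated."""
    return HEADS[DETECTORS[detector]:]
```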

D. Selection of Optimized Inference Model

To enhance the selection of scene-specific model detectors in terms of efficiency and accuracy, our system incorporates an automated decision module. This module evaluates each scene to identify the most fitting detector, taking into account both inference speed and accuracy, with accuracy benchmarked against the original XLarge (AOD-Big) model, treated as ground truth. The selection process compares both the inference speed and the accuracy of each detector against the AOD-Big model. The aim is to select a detector that maintains high inference speed while keeping the accuracy loss within an acceptable threshold, preferably less than a specified percentage relative to the original XLarge model. This approach, grounded in experimental findings, ensures a balanced consideration of speed and accuracy for each scene-specific application. The detailed procedure and its outcomes are presented in Section IV.
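As a simplified sketch of this selection rule, the function below picks the fastest admissible detector given per-detector (mAP, FPS) measurements and a relative accuracy-loss budget against AOD-Big. The 1% budget and the sample numbers (mAP_0.5 and FPS of the Small model from Table 3 in Section IV) come from the paper; the function itself is our own condensation of the decision module, not its actual implementation.

```python
def select_detector(stats, baseline="AOD-Big", max_rel_loss=0.01):
    """Pick the fastest detector whose accuracy stays within a relative
    loss budget of the baseline's accuracy (treated as ground truth).

    `stats` maps detector name -> (mAP, FPS).
    """
    base_map = stats[baseline][0]
    admissible = [
        (fps, name)
        for name, (m, fps) in stats.items()
        if (base_map - m) / base_map <= max_rel_loss
    ]
    return max(admissible)[1]  # highest FPS among admissible detectors

# Small-model measurements for one scene: detector -> (mAP_0.5, FPS)
small = {
    "OD-Micro": (0.904, 35),
    "AOD-Mini": (0.945, 32),
    "AOD-Middle": (0.970, 29),
    "AOD-Big": (0.970, 27),
}
```

With a 1% budget, OD-Micro (about 6.8% loss) and AOD-Mini (about 2.6% loss) are rejected, and AOD-Middle beats AOD-Big on FPS, matching the choice reported in Section IV.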

IV. EXPERIMENTAL RESULTS

A. Training and Environments

In the initial phase of our study, we designed 12 different models by applying scene-specific detectors to three base models: YOLOv5 Nano, YOLOv5 Small, and YOLOv5 Medium. This approach, however, proved to be challenging due to the necessity of training 12 distinct models and the limited reusability resulting from the removal of nodes related to the model's neck and head. To address these issues, we revised our strategy by partially blocking nodes instead of removing them, thus reducing the number to only three models. This adjustment increased the models' flexibility, eased the training burden, and enhanced reusability.

For our training and experimental data, we used the 2021 AI City Challenge dataset (www.aicitychallenge.org), which includes data on vehicle and pedestrian movement captured by stationary cameras at various altitudes. We processed this dataset by extracting video segments, converting them into images frame by frame, and then using these images as input for a pre-trained YOLOv5 Xlarge model to establish ground truth data. In extracting Ground Truth data, we focused on two labels—pedestrian and vehicle—to minimize false positives. The dataset comprised 9,266 images for training, 2,926 for validation, and 1,030 for testing.

The training of the models was conducted on a server in a PyTorch 1.10.2 and Torchvision 0.11.3 environment, utilizing an Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz and an NVIDIA GeForce RTX 2080 Super GPU. Key training parameters included a momentum of 0.937, an initial learning rate of 0.01, a batch size of 4, and 300 epochs. We also employed an auto-anchor algorithm during training to ensure the anchors were optimally suited to the current data set.

Inference experiments were performed on embedded systems, simulated using a Jetson AGX Xavier. The GPU experiment environment, set up with JetPack 4.6.2 version through SDKmanager, included PyTorch 1.8 and Torchvision 0.9.

B. Result Analysis

Our experiments aimed to assess various model optimization methods, focusing on their effects on model size, inference speed, and accuracy. Initially, we experimented with unstructured pruning but found that it did not significantly reduce the model file size. This lack of size reduction can be attributed to the pruned (zeroed) filters still being stored in the weight file, and without acceleration techniques like skipping zeros during computation, no inference speed improvement was observed. As Table 1 shows, while we conducted accuracy assessments, no notable speed improvements were recorded.

Table 1 . Comparison of accuracy after pruning of each model

| Model  | Pruned (%) | Precision | Recall | mAP_0.5 | mAP_0.5:0.95 |
|--------|------------|-----------|--------|---------|--------------|
| Nano   | 0          | 0.913     | 0.897  | 0.949   | 0.828        |
| Nano   | 10         | 0.913     | 0.897  | 0.948   | 0.813        |
| Nano   | 20         | 0.894     | 0.864  | 0.933   | 0.754        |
| Nano   | 30         | 0.837     | 0.767  | 0.857   | 0.593        |
| Nano   | 40         | 0.832     | 0.396  | 0.644   | 0.348        |
| Small  | 0          | 0.931     | 0.909  | 0.957   | 0.867        |
| Small  | 10         | 0.931     | 0.909  | 0.957   | 0.867        |
| Small  | 20         | 0.931     | 0.903  | 0.956   | 0.816        |
| Small  | 30         | 0.911     | 0.883  | 0.948   | 0.696        |
| Small  | 40         | 0.859     | 0.772  | 0.886   | 0.516        |
| Medium | 0          | 0.944     | 0.917  | 0.962   | 0.890        |
| Medium | 10         | 0.943     | 0.917  | 0.963   | 0.889        |
| Medium | 20         | 0.943     | 0.914  | 0.962   | 0.855        |
| Medium | 30         | 0.941     | 0.906  | 0.960   | 0.761        |
| Medium | 40         | 0.921     | 0.872  | 0.926   | 0.580        |


Subsequently, we conducted TFlite quantization experiments on the same model, initially testing on a server. Results in Table 2 reveal that post-quantization with FP16 and INT8 types, the model's speed actually decreased compared to the baseline. This reduced speed was likely due to the server's Intel CPU, as TFlite quantization is optimized for ARM CPUs, rendering it less effective for our setup.

Table 2 . Evaluated results of each model after quantization

| Model  | Type | Precision | Recall | mAP_0.5 | mAP_0.5:0.95 | Speed, CPU (ms) |
|--------|------|-----------|--------|---------|--------------|-----------------|
| Nano   | Base | 0.915     | 0.916  | 0.969   | 0.844        | 57.9            |
| Nano   | FP16 | 0.910     | 0.916  | 0.968   | 0.835        | 99.4            |
| Nano   | INT8 | 0.814     | 0.883  | 0.922   | 0.645        | 99.3            |
| Small  | Base | 0.932     | 0.928  | 0.977   | 0.883        | 118.6           |
| Small  | FP16 | 0.932     | 0.926  | 0.976   | 0.875        | 318.3           |
| Small  | INT8 | 0.846     | 0.892  | 0.942   | 0.683        | 242.1           |
| Medium | Base | 0.946     | 0.938  | 0.980   | 0.906        | 257.0           |
| Medium | FP16 | 0.943     | 0.937  | 0.979   | 0.899        | 892.6           |
| Medium | INT8 | 0.828     | 0.890  | 0.938   | 0.658        | 591.3           |
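For context on what the INT8 conversion does, here is a minimal affine (asymmetric) quantizer in plain Python. It is our own toy, unrelated to the TFLite implementation, but it shows the mapping real ≈ scale · (q − zero_point) and where the rounding error that degrades accuracy comes from.

```python
def quantize_int8(values):
    """Affine INT8 quantization: real ~= scale * (q - zero_point)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0       # avoid zero scale for constants
    zero_point = round(-128 - lo / scale)  # maps `lo` onto q = -128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from the 8-bit codes."""
    return [scale * (qi - zero_point) for qi in q]
```

The accuracy drop for INT8 in Table 2 stems from exactly this rounding; the speed regression on the server is a separate matter of the TFLite kernels being tuned for ARM rather than Intel CPUs.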


Before experimenting with the scene-specific method, we set a target inference speed of 30 frames per second (FPS) for analyzing the movement of people and vehicles on the embedded system, the rate generally perceived as real time by humans. Table 3 presents the results of applying the scene-specific model method to the Nano, Small, and Medium models. In these experiments, accuracy and inference speed were measured using scene (d) from Fig. 4. When comparing the models to identify the most suitable one for our current embedded system, there was little difference in total accuracy (mAP_0.5 and mAP_0.5:0.95) across the three models. In terms of inference speed (FPS), however, both the Nano and Small models approached our target, while the Medium model did not. Therefore, considering both accuracy and inference speed, the Small model is the most suitable for our current embedded system.

Table 3 . Comparison of accuracy and inference speed for each model

| Model  | Metric       | OD-Micro | AOD-Mini | AOD-Middle | AOD-Big |
|--------|--------------|----------|----------|------------|---------|
| Nano   | mAP_0.5      | 0.899    | 0.938    | 0.966      | 0.966   |
| Nano   | mAP_0.5:0.95 | 0.766    | 0.811    | 0.842      | 0.843   |
| Nano   | FPS          | 37       | 33       | 30         | 27      |
| Small  | mAP_0.5      | 0.904    | 0.945    | 0.970      | 0.970   |
| Small  | mAP_0.5:0.95 | 0.802    | 0.847    | 0.875      | 0.874   |
| Small  | FPS          | 35       | 32       | 29         | 27      |
| Medium | mAP_0.5      | 0.904    | 0.945    | 0.971      | 0.980   |
| Medium | mAP_0.5:0.95 | 0.802    | 0.847    | 0.874      | 0.874   |
| Medium | FPS          | 19       | 17       | 16         | 15      |


Fig. 4. Experimental detection results for each scene

In Section III, we established a 1% accuracy-loss threshold as the selection criterion for detectors. This decision is based on the accuracy analysis of each detector in the Small model, as seen in Table 3, where the scene-specific model was applied. The accuracy difference between the AOD-Mini and AOD-Middle of the Small model was approximately 2.58% for mAP_0.5 (0.945 vs. 0.97) and 3.2% for mAP_0.5:0.95 (0.847 vs. 0.875). However, because some objects were not properly detected in the actual prediction results at that gap, we set the selection criterion to within 1% to mitigate this issue. Furthermore, we compared the accuracy of AOD-Middle and AOD-Big to determine the most suitable detector for the current scene. Since their accuracy was comparable, AOD-Middle was chosen.
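The percentages above are relative differences measured against the larger detector's score; the quick check below reproduces them from the Table 3 values (the helper name is ours):

```python
def rel_loss_pct(smaller, larger):
    """Relative accuracy difference, in percent, against the larger score."""
    return 100.0 * (larger - smaller) / larger

# AOD-Mini vs. AOD-Middle of the Small model (Table 3)
map_05 = rel_loss_pct(0.945, 0.970)     # ~2.58% for mAP_0.5
map_05_95 = rel_loss_pct(0.847, 0.875)  # ~3.2% for mAP_0.5:0.95
```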

To validate our proposed method, experiments were conducted across six different scenes. Fig. 4 illustrates these results using the scene-specific model method with images captured by a fixed camera. Images (a) to (f) in Figure 4 display prediction results ranging from OD-Micro (left) to AOD-Big (right). The experiments revealed that in scenes (a), (c), and (e), middle-scale objects were not present, suggesting the suitability of using AOD-Mini. Conversely, in scenes (b), (d), and (f), where big-scale objects were absent, AOD-Middle was deemed appropriate. This confirmed that applying the scene-specific mode across various scenes can increase inference speed while maintaining model accuracy.

V. CONCLUSION

In tackling the complexities of real-time traffic analysis with edge AI devices, this study introduced a scene-specific system designed for cameras in fixed environments. We adapted this system to the YOLO network in three variants, enabling the use of four distinct detectors via a single model. A specialized module was developed to select the most appropriate detector for each scene. Our experimental evaluation, conducted across six different scenes, demonstrated that AOD-Mini is the optimal choice for three scenes, while AOD-Middle is more suitable for the other three. This approach successfully improved inference speeds without sacrificing accuracy. The positive results from the scene-specific system indicate its potential for broader application and its benefits to other neural network architectures. We anticipate that our research will contribute to the development of innovative lightweight methods, offering alternatives to traditional approaches to model lightweighting.

ACKNOWLEDGMENTS

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2023-RS-2022-00156389) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation) and also was supported by the BK21 FOUR project (AI-driven Convergence Software Education Research Program) funded by the Ministry of Education, School of Computer Science and Engineering, Kyungpook National University, Korea (4199990214394).

REFERENCES

  1. D. B. Nguyen, C. R. Dow, and S. F. Hwang, “An efficient traffic congestion monitoring system on internet of vehicles,” Wireless Communications and Mobile Computing, vol. 2018, pp. 1-17, 2018. DOI: 10.1155/2018/9136813.
  2. G. P. Rocha Filho, R. I. Meneguette, J. R. Torres Neto, A. Valejo, L. Weigang, J. Ueyama, G. Pessin, and L. A. Villas, “Enhancing intelligence in traffic management systems to aid in vehicle traffic congestion problems in smart cities,” Ad Hoc Networks, vol. 107, p. 102265, Oct. 2020. DOI: 10.1016/j.adhoc.2020.102265.
  3. K. H. N. Bui, H. Yi, and J. Cho, “A multi-class multi-movement vehicle counting framework for traffic analysis in complex areas using CCTV systems,” Energies, vol. 13, no. 8, p. 2036, Apr. 2020. DOI: 10.3390/en13082036.
  4. M. Sadiq, S. Masood, and O. Pal, “FD-YOLOv5: A fuzzy image enhancement based robust object detection model for safety helmet detection,” International Journal of Fuzzy Systems, vol. 24, no. 5, pp. 2600-2616, Jul. 2022. DOI: 10.1007/s40815-022-01267-2.
  5. G. W. Chen, Y. H. Lin, M. T. Sun, and T. U. İk, “Managing edge AI cameras for traffic monitoring,” in 2022 23rd Asia-Pacific Network Operations and Management Symposium (APNOMS), Takamatsu, Japan, pp. 1-4, 2022. DOI: 10.23919/APNOMS56106.2022.9919965.
  6. K. Y. Cao, Y. F. Liu, G. J. Meng, and Q. M. Sun, “An overview on edge computing research,” IEEE Access, vol. 8, pp. 85714-85728, 2020. DOI: 10.1109/Access.2020.2991734.
  7. M. Gusev and S. Dustdar, “Going back to the roots: the evolution of edge computing, an IoT perspective,” IEEE Internet Computing, vol. 22, no. 2, pp. 5-15, Mar. 2018. DOI: 10.1109/Mic.2018.022021657.
  8. X. F. Wang, Y. W. Han, V. C. M. Leung, D. Niyato, X. Q. Yan, and X. Chen, “Convergence of edge computing and deep learning: A comprehensive survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 2, pp. 869-904, 2020. DOI: 10.1109/Comst.2020.2970550.
  9. A. Mhalla, T. Chateau, S. Gazzah, and N. E. Ben Amara, “An embedded computer-vision system for multi-object detection in traffic surveillance,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 11, pp. 4006-4018, Nov. 2019. DOI: 10.1109/Tits.2018.2876614.
  10. Y. Jeong, H. W. Oh, S. Kim, and S. E. Lee, “An edge AI device based intelligent transportation system,” Journal of Information and Communication Convergence Engineering, vol. 20, no. 3, pp. 166-173, Sep. 2022. DOI: 10.56977/jicce.2022.20.3.166.
  11. S. Q. Ren, K. M. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, Jun. 2017. DOI: 10.1109/Tpami.2016.2577031.
  12. K. M. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 2980-2988, 2017. DOI: 10.1109/Iccv.2017.322.
  13. J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, pp. 6517-6525, 2017. DOI: 10.1109/Cvpr.2017.690.
  14. J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, Apr. 2018. DOI: 10.48550/arXiv.1804.02767.
  15. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, Apr. 2020. DOI: 10.48550/arXiv.2004.10934.
  16. G. Jocher, YOLOv5, 2020. [Online]. Available: https://github.com/ultralytics/yolov5.
  17. C. Y. Wang, H. Y. M. Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh, “CSPNet: A new backbone that can enhance learning capability of CNN,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, USA, pp. 1571-1580, 2020. DOI: 10.1109/Cvprw50498.2020.00203.
  18. S. Liu, L. Qi, H. F. Qin, J. P. Shi, and J. Y. Jia, “Path aggregation network for instance segmentation,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, pp. 8759-8768, 2018. DOI: 10.1109/Cvpr.2018.00913.
  19. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, pp. 779-788, 2016. DOI: 10.1109/Cvpr.2016.91.
  20. L. Wang and K. J. Yoon, “Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 6, pp. 3048-3068, Jun. 2022. DOI: 10.1109/Tpami.2021.3055564.
  21. H. Vanholder, “Efficient inference with TensorRT,” in GPU Technology Conference, vol. 1, p. 2, 2016.
  22. Y. Gong, L. Liu, M. Yang, and L. Bourdev, “Compressing deep convolutional networks using vector quantization,” arXiv preprint arXiv:1412.6115, Dec. 2014. DOI: 10.48550/arXiv.1412.6115.
  23. S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems, vol. 28, 2015.
  24. X. K. Zhu, S. C. Lyu, X. Wang, and Q. Zhao, “TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios,” in 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, Canada, pp. 2778-2788, 2021. DOI: 10.1109/Iccvw54120.2021.00312.
  25. Y. Li, X. Y. Bai, and C. L. Xia, “An improved YOLOv5 based on triplet attention and prediction head optimization for marine organism detection on underwater mobile platforms,” Journal of Marine Science and Engineering, vol. 10, no. 9, p. 1230, Sep. 2022. DOI: 10.3390/jmse10091230.

Jin Ho Lee

Jin Ho Lee received his bachelor's degree in mechanical engineering from Dong-A University, South Korea. He is currently pursuing a master's degree at the Virtual Reality Lab in the Department of Computer Science at Kyungpook National University. His research interest is model compression for object detection.


In Su Kim

In Su Kim received his bachelor's and master's degrees from the Department of Computer Science at Kyungpook National University. He is currently pursuing a Ph.D. at the Virtual Reality Lab in the Department of Computer Science at Kyungpook National University. His research interests include machine learning, computer vision, and virtual reality.


Hector Acosta

Hector Acosta is currently pursuing a master's degree at the Virtual Reality Lab in the Department of Computer Science at Kyungpook National University. His research interest is model compression for object detection.


Hyeong Bok Kim

Hyeong Bok Kim is currently working as a researcher at Testworks, Inc., Seoul, Korea.


Seung Won Lee

Seung Won Lee is currently working as a researcher at Testworks, Inc., Seoul, Korea.


Soon Ki Jung

Soon Ki Jung is a full-time professor in the School of Computer Science and Engineering at Kyungpook National University, Korea. His research spans a range of topics including augmented reality (AR), 3D computer graphics, computer vision, human-computer interaction (HCI), mobile application development, wearable computing, and other related fields.


Article

Regular paper

Journal of information and communication convergence engineering 2023; 21(4): 329-336

Published online December 31, 2023 https://doi.org/10.56977/jicce.2023.21.4.329

Copyright © Korea Institute of Information and Communication Engineering.

A Scene-Specific Object Detection System Utilizing the Advantages of Fixed-Location Cameras

Jin Ho Lee 1, In Su Kim 1, Hector Acosta 1, Hyeong Bok Kim 2, Seung Won Lee2, and Soon Ki Jung1*

1School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
2Testworks, Inc., Seoul 01000, Korea

Correspondence to:Soon Ki Jung (E-mail: skjung@knu.ac.kr)
Department of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Republic of Korea

Received: April 27, 2023; Revised: September 13, 2023; Accepted: September 27, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper introduces an edge AI-based scene-specific object detection system for long-term traffic management, focusing on analyzing congestion and movement via cameras. It aims to balance fast processing and accuracy in traffic flow data analysis using edge computing. We adapt the YOLOv5 model, with four heads, to a scene-specific model that utilizes the fixed camera’s scene-specific properties. This model selectively detects objects based on scale by blocking nodes, ensuring only objects of certain sizes are identified. A decision module then selects the most suitable object detector for each scene, enhancing inference speed without significant accuracy loss, as demonstrated in our experiments.

Keywords: Scene-specific System, You Only Look Once Version 5 (YOLOv5), Edge AI, Embedded System

I. INTRODUCTION

In urban planning, analyzing traffic congestion and movement is vital for effective long-term traffic control [1]. Traditional methods, like monitoring traffic flow through CCTV cameras, are costly and labor-intensive [2]. To address these challenges, AI cameras are increasingly being used as intelligent traffic detection systems [3,4]. These cameras, functioning as edge computing devices with embedded GPUs, can run lightweight deep learning models. Such an Edge AI system enables efficient monitoring of crossroads, providing high-level traffic data, including flow and congestion insights, thus offering a more resource-efficient solution to traffic management [5].

This research focuses on employing Edge AI systems in urban planning and traffic control to analyze traffic congestion and the movement of people and vehicles. Utilizing AI cameras as edge computing devices, equipped with embedded GPUs, we can run lightweight deep learning models for real-time traffic flow analysis. This approach addresses the high cost and resource intensity of monitoring traffic through CCTV cameras. Our Edge AI system analyzes high-level traffic data such as flow and congestion at crossroads, avoiding the use of cloud computing due to concerns over personal information leakage, transmission delays, and increased network traffic [6,7]. Edge computing offers a solution to these issues, making it a more suitable choice for our study.

Edge computing, however, often struggles to match the performance of cloud computing due to limited resources impacting inference speed and accuracy [8]. Examples of AI services applied to edge computing include traffic surveillance and monitoring research using the Faster R-CNN network [9], a study on enhancing power efficiency and security in Intelligent Transportation Systems (ITS) [10], and research on FD-YOLOv5, a YOLOv5 network-based system for detecting safety helmets in operators [4]. However, these studies primarily emphasize detection accuracy over inference speed. While these methods enhance accuracy and object detection efficacy, considering inference speed is crucial for integrating these models with CCTV for an embedded system.

In our research, we enhanced an Edge AI system for analyzing traffic flow by developing a scene-specific model based on YOLOv5, specifically designed to address slow inference speeds in embedded systems and to move beyond traditional lightweight-model approaches. We made significant modifications to the existing model architecture to detect smaller objects more effectively. Our main contributions are as follows:

  • In addition to the existing structure, we added layers to the backbone, neck, and head of the model, enabling it to detect smaller objects more efficiently than the standard model.

  • We designed the scene-specific model by customizing object-size detection for each image grid and selectively deactivating certain layers in the head and corresponding neck modules. This strategy enhances computational speed while ensuring minimal loss in accuracy for specific object sizes.

  • A bespoke decision module was developed to adapt the scene-specific model to different CCTV environments, further enhancing its applicability and effectiveness.

The structure of this paper is organized as follows: Section II provides an overview of the YOLOv5 model, highlighting its status as a cutting-edge lightweight model. In Section III, we delve into the detailed implementation process specific to scene-based systems. Section IV discusses our experimental results, offering in-depth interpretations. The paper concludes with Section V, which presents our final thoughts and potential future research directions.

II. RELATED WORK

A. YOLOv5 model [16]

In deep-learning-based object detection, there are two primary approaches. The first, two-stage detection, includes techniques such as R-CNN, Fast R-CNN, Faster R-CNN [11], and Mask R-CNN [12]. These methods first extract region proposals using selective search algorithms or region proposal networks (RPN), and then detect objects based on these proposals. While two-stage detectors are highly accurate, they suffer from slower inference speeds. The second approach comprises one-stage detectors such as the YOLO series [13-16]. These algorithms frame detection as regression, simplifying the learning of the target's generalized characteristics and effectively addressing the challenge of inference speed. In this context, YOLOv5 was chosen for its suitability in embedded system environments. Among the various models offered by YOLOv5, the lightweight versions are particularly apt for our research, providing satisfactory performance even on embedded hardware.

YOLOv5 encompasses five models: YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x, each varying in parameters of depth and width. Among these, YOLOv5x offers the highest accuracy but at the slowest speed, while YOLOv5n provides the fastest inference with the least accuracy. As depicted in Fig. 1, the fundamental architecture of YOLOv5 consists of three main components: the backbone, neck, and head. The backbone, using the SPPF layer and CSPDarknet53 [17], primarily extracts features to create a feature map. The neck, utilizing PANet [18], forms a feature pyramid at various scales, linking the backbone and head. The head component is responsible for image classification and bounding box location regression [19]. Our analysis and modifications focus on the neck and head modules of the network.

Figure 1. Structure of the modified YOLOv5 network

B. Edge AI system

Given the constraints of computing resources in embedded systems, such as limited memory and processing capacity, optimizing deep learning models for these environments is essential. To deploy models on resource-constrained embedded devices, they need to be made faster and lighter through processes like model light-weighting or model compression. Common methods include knowledge distillation [20], where knowledge from a larger teacher model is transferred to a smaller student model; TensorRT [21], which optimizes the model on embedded GPUs to enhance speed; quantization [22], a technique for minimizing redundant bits in model parameters; and pruning [23], which involves eliminating superfluous parameters from the original network.
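As a concrete illustration of one such technique, the sketch below applies PyTorch's post-training dynamic quantization to a toy network. This is a generic example of the method, not the pipeline used in this paper; the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Toy network standing in for a detector sub-module.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

# Dynamic quantization: weights are stored as INT8, while activations
# are quantized on the fly at inference time.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
out = qmodel(x)
print(out.shape)  # torch.Size([1, 2])
```

Note that, as with TFLite quantization discussed later, the actual speedup from such quantization depends heavily on the target CPU's low-precision kernels.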

Our research introduces a scene-specific method, distinct from the aforementioned techniques. This approach retains submodules related to the head in the model and increases inference speed by selecting a model tailored to each specific scene.

III. SYSTEM MODEL AND METHODS

Before implementing our system, we established hypotheses to simplify the model for a specific scene with a stationary camera. These hypotheses are:

  • Road images will not contain objects larger than cars or trucks, implying a limit on object size in the input images.

  • Due to the geometric relationship between the camera and the ground, objects of the same class appear smaller when they are farther away or positioned higher in the image.

  • The variety of object classes detectable in a given environment is limited. For instance, in urban road settings, focusing only on cars and pedestrians simplifies the model, reducing the likelihood of detecting false classes.

Our scene-specific approach is designed in line with these hypotheses, aiming to streamline the detection process by focusing on relevant object sizes and classes.

A. Overview of Scene-specific System

The system comprises two main components: the Server block and the Edge AI block. The Server block handles training and designing the model, while the Edge AI block is responsible for inference. An overview of the scene-specific system is depicted in Fig. 2. In the Server block, the model undergoes training and is subsequently tailored for a specific scene. In the Edge AI block, the scene-specific model received from the server processes the test image. The model's prediction is then sent to the Comparison block along with the model itself. The Comparison block determines the most effective object detector (OD) by evaluating both the input model and its prediction results. The chosen model then receives the test image, initiating the inference process.

Figure 2. Overview of Scene-specific System

B. Integration of an Additional Tiny Head with Corresponding Layers

In our experiments, we observed a challenge in detecting distant, small-sized objects using a stationary camera. To address this, inspired by successful instances of enhanced small object detection [24,25], we not only added an extra tiny head for detecting these small objects but also incorporated corresponding layers in the backbone and neck of the model. This comprehensive update, integrating the tiny head with aligned layers in the backbone and neck, results in a 4-head structure. This structure efficiently handles variations in object scales and improves the detection of smaller objects. However, it is important to note that these gains in detection capability come at the cost of increased computation and memory consumption.
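To make the 4-head layout concrete, the following sketch (our illustration, not the paper's exact code; the channel widths and strides are assumptions) attaches a 1x1 prediction convolution to neck feature maps at strides 4, 8, 16, and 32, with the stride-4 map serving the added tiny head:

```python
import torch
import torch.nn as nn

class FourHeadDetector(nn.Module):
    """YOLO-style prediction heads at four scales, finest (tiny) first."""
    def __init__(self, num_classes=2, num_anchors=3):
        super().__init__()
        out_ch = num_anchors * (5 + num_classes)  # box(4) + objectness + classes
        in_chs = (64, 128, 256, 512)              # assumed neck output widths
        self.heads = nn.ModuleList(
            nn.Conv2d(c, out_ch, kernel_size=1) for c in in_chs
        )

    def forward(self, feats):
        # feats: neck feature maps at strides 4, 8, 16, 32
        return [head(f) for head, f in zip(self.heads, feats)]

# A 640x640 input yields grids of 160, 80, 40, and 20 cells per side.
feats = [torch.randn(1, c, 640 // s, 640 // s)
         for c, s in zip((64, 128, 256, 512), (4, 8, 16, 32))]
preds = FourHeadDetector()(feats)
print([p.shape[-1] for p in preds])  # [160, 80, 40, 20]
```

The stride-4 grid is what gives the tiny head its resolution advantage on small objects, and also what drives the extra compute cost noted above.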

C. Design of Scene-specific Model

In YOLOv5, prediction heads are differentiated by their roles as tiny, small, medium, and large, based on the object scale they detect. We propose that in certain scenes, not all four prediction heads are necessary due to the limited scale of objects in the input image. For instance, in high-altitude environments where objects appear smaller, only the tiny and small prediction heads might be required. Consequently, in such scenarios, the medium and large prediction heads, along with their corresponding neck nodes, are disabled.

Our model is designed to offer four modes of access through a single trained model. Fig. 3 visualizes this redesigned scene-specific model. It consists of four detectors, each specialized for different object sizes (micro, mini, middle, and big), determined by the deactivated nodes in the neck and prediction head. Each Object Detector (OD) is hierarchically structured, accumulating itself and all smaller variants, except for OD-Micro. For clarity, we use the term Accumulated Object Detector (AOD) to denote this feature. AOD-Big corresponds to the complete, unmodified base model. OD-Micro utilizes only the tiny prediction head, AOD-Mini employs both tiny and small heads, and AOD-Middle combines tiny, small, and medium heads. Each detector activates only its defined prediction heads, blocking the others and their related neck nodes to optimize performance for specific scene requirements.
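The accumulation scheme can be summarized in a few lines. The mapping below mirrors the text; the stand-in head functions are placeholders for the real prediction heads:

```python
# Head indices: 0 = tiny, 1 = small, 2 = medium, 3 = large.
AOD_HEADS = {
    "OD-Micro":   [0],           # tiny only
    "AOD-Mini":   [0, 1],        # tiny + small
    "AOD-Middle": [0, 1, 2],     # tiny + small + medium
    "AOD-Big":    [0, 1, 2, 3],  # full, unmodified base model
}

def run_detector(head_fns, feats, mode):
    """Evaluate only the prediction heads active in the chosen mode;
    blocked heads (and their neck nodes) are never computed."""
    return [head_fns[i](feats[i]) for i in AOD_HEADS[mode]]

# Stand-in heads that just scale their input, for illustration.
heads = [lambda f, s=s: f * s for s in (1, 2, 3, 4)]
feats = [10, 10, 10, 10]
print(run_detector(heads, feats, "AOD-Mini"))  # [10, 20]
```

Because the modes differ only in which nodes are blocked, all four detectors share one set of trained weights, which is what allows a single model to serve every scene.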

Figure 3. Scene-specific model

D. Selection of Optimized Inference Model

To enhance the selection of scene-specific model detectors in terms of efficiency and accuracy, our system incorporates an automated decision module. This module evaluates each scene to identify the most fitting detector, considering both inference speed and accuracy, with accuracy benchmarked against the original XLarge (AOD-Big) model, treated as the ground truth. The selection process compares both the inference speed and the accuracy of each detector against the AOD-Big model. The aim is to select the detector that maximizes inference speed while keeping the accuracy loss within an acceptable threshold, preferably less than a specified percentage relative to the original XLarge model. This approach, grounded in experimental findings, ensures a balanced consideration of speed and accuracy for each scene-specific application. Detailed procedures and outcomes of this evaluation are presented in Section IV.
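A minimal sketch of this selection rule (our formulation for illustration; the paper's module evaluates per-scene predictions rather than a summary table) picks the fastest detector whose mAP stays within the tolerance of the AOD-Big reference:

```python
def select_detector(stats, tolerance=0.01):
    """stats: mode -> (mAP, fps); AOD-Big serves as the accuracy reference.
    Returns the fastest mode whose relative mAP loss is within tolerance."""
    ref_map, _ = stats["AOD-Big"]
    eligible = {mode: fps for mode, (ap, fps) in stats.items()
                if ref_map - ap <= tolerance * ref_map}
    return max(eligible, key=eligible.get)

# Small-model numbers from Table 3 (mAP_0.5, FPS):
stats = {"OD-Micro": (0.904, 35), "AOD-Mini": (0.945, 32),
         "AOD-Middle": (0.970, 29), "AOD-Big": (0.970, 27)}
print(select_detector(stats))  # AOD-Middle
```

With a 1% tolerance, only AOD-Middle and AOD-Big qualify on these numbers, and AOD-Middle wins on speed, consistent with the choice reported in Section IV.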

IV. RESULTS

A. Training and Environments

In the initial phase of our study, we designed 12 different models by applying scene-specific detectors to three base models: YOLOv5 Nano, YOLOv5 Small, and YOLOv5 Medium. This approach, however, proved to be challenging due to the necessity of training 12 distinct models and the limited reusability resulting from the removal of nodes related to the model's neck and head. To address these issues, we revised our strategy by partially blocking nodes instead of removing them, thus reducing the number to only three models. This adjustment increased the models' flexibility, eased the training burden, and enhanced reusability.

For our training and experimental data, we used the 2021 AI City Challenge dataset (www.aicitychallenge.org), which includes vehicle and pedestrian movement captured by stationary cameras at various altitudes. We processed this dataset by extracting video segments, converting them into images frame by frame, and then feeding these images to a pre-trained YOLOv5 XLarge model to establish ground-truth data. In extracting the ground truth, we focused on two labels, pedestrian and vehicle, to minimize false positives. The dataset comprised 9,266 images for training, 2,926 for validation, and 1,030 for testing.
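The class-filtering step of this pseudo-labeling can be sketched as follows; the class names and tuple format here are assumptions for illustration, not the paper's exact data format:

```python
# Keep only the two target classes and remap them to compact ids.
KEEP = {"pedestrian": 0, "vehicle": 1}

def filter_labels(detections):
    """detections: (class_name, x, y, w, h) tuples in normalized coords.
    Detections of any other class are discarded to reduce false positives."""
    return [(KEEP[c], x, y, w, h) for c, x, y, w, h in detections if c in KEEP]

dets = [("pedestrian", 0.5, 0.5, 0.1, 0.2),
        ("dog",        0.2, 0.2, 0.05, 0.05),
        ("vehicle",    0.7, 0.6, 0.2, 0.1)]
print(filter_labels(dets))  # the "dog" detection is dropped
```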

The training of the models was conducted on a server in a PyTorch 1.10.2 and Torchvision 0.11.3 environment, utilizing an Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz and an NVIDIA GeForce RTX 2080 Super GPU. Key training parameters included a momentum of 0.937, an initial learning rate of 0.01, a batch size of 4, and 300 epochs. We also employed an auto-anchor algorithm during training to ensure the anchors were optimally suited to the current data set.

Inference experiments were performed on embedded systems, simulated using a Jetson AGX Xavier. The GPU experiment environment, set up with JetPack 4.6.2 through SDK Manager, included PyTorch 1.8 and Torchvision 0.9.

B. Result Analysis

Our experiments aimed to assess various model optimization methods, focusing on their effects on model size, inference speed, and accuracy. Initially, we experimented with unstructured pruning but found that it did not significantly reduce the model file size. This lack of size reduction can be attributed to the pruned (zeroed) filters still being stored in the weight file, and without acceleration techniques like skipping zeros during computation, no inference speed improvement was observed. As Table 1 shows, while we conducted accuracy assessments, no notable speed improvements were recorded.

Table 1. Comparison of accuracy after pruning of each model.

| Model  | Percent | Precision | Recall | mAP_0.5 | mAP_0.5:0.95 |
|--------|---------|-----------|--------|---------|--------------|
| Nano   | 0%      | 0.913     | 0.897  | 0.949   | 0.828        |
| Nano   | 10%     | 0.913     | 0.897  | 0.948   | 0.813        |
| Nano   | 20%     | 0.894     | 0.864  | 0.933   | 0.754        |
| Nano   | 30%     | 0.837     | 0.767  | 0.857   | 0.593        |
| Nano   | 40%     | 0.832     | 0.396  | 0.644   | 0.348        |
| Small  | 0%      | 0.931     | 0.909  | 0.957   | 0.867        |
| Small  | 10%     | 0.931     | 0.909  | 0.957   | 0.867        |
| Small  | 20%     | 0.931     | 0.903  | 0.956   | 0.816        |
| Small  | 30%     | 0.911     | 0.883  | 0.948   | 0.696        |
| Small  | 40%     | 0.859     | 0.772  | 0.886   | 0.516        |
| Medium | 0%      | 0.944     | 0.917  | 0.962   | 0.890        |
| Medium | 10%     | 0.943     | 0.917  | 0.963   | 0.889        |
| Medium | 20%     | 0.943     | 0.914  | 0.962   | 0.855        |
| Medium | 30%     | 0.941     | 0.906  | 0.960   | 0.761        |
| Medium | 40%     | 0.921     | 0.872  | 0.926   | 0.580        |
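This behavior is easy to reproduce with PyTorch's pruning utilities: the zeroed weights remain in the dense tensor, so neither the file size nor the dense-computation cost shrinks. A small sketch with toy layers (not our detector):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Zero the 30% of weights with the smallest L1 magnitude...
        prune.l1_unstructured(module, name="weight", amount=0.3)
        # ...and fold the mask back into the weight tensor.
        prune.remove(module, "weight")

# The zeros are still stored densely: the parameter count is unchanged,
# so serializing the model yields the same file size as before pruning.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"params: {total}, zeroed: {zeros / total:.0%}")
```

Realizing a speedup would additionally require sparse kernels or structured (filter-level) pruning that actually removes channels.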


Subsequently, we conducted TFLite quantization experiments on the same model, initially testing on a server. The results in Table 2 reveal that after quantization to FP16 and INT8 types, the model's speed actually decreased compared to the baseline. This slowdown was likely due to the server's Intel CPU: TFLite quantization is optimized for ARM CPUs, rendering it less effective in our setup.

Table 2. Evaluated results of each model after quantization.

| Model  | Type | Precision | Recall | mAP_0.5 | mAP_0.5:0.95 | Speed CPU (ms) |
|--------|------|-----------|--------|---------|--------------|----------------|
| Nano   | Base | 0.915     | 0.916  | 0.969   | 0.844        | 57.9           |
| Nano   | FP16 | 0.910     | 0.916  | 0.968   | 0.835        | 99.4           |
| Nano   | INT8 | 0.814     | 0.883  | 0.922   | 0.645        | 99.3           |
| Small  | Base | 0.932     | 0.928  | 0.977   | 0.883        | 118.6          |
| Small  | FP16 | 0.932     | 0.926  | 0.976   | 0.875        | 318.3          |
| Small  | INT8 | 0.846     | 0.892  | 0.942   | 0.683        | 242.1          |
| Medium | Base | 0.946     | 0.938  | 0.980   | 0.906        | 257.0          |
| Medium | FP16 | 0.943     | 0.937  | 0.979   | 0.899        | 892.6          |
| Medium | INT8 | 0.828     | 0.890  | 0.938   | 0.658        | 591.3          |


Before experimenting with the scene-specific method, we set a target inference speed of 30 frames per second (FPS) for analyzing people and vehicle movement on the embedded system, the rate generally perceived as real time by humans. Table 3 presents the results of applying the scene-specific model method to the Nano, Small, and Medium models. In these experiments, accuracy and inference speed were measured using scene (d) from Fig. 4. When comparing the models to identify the most suitable one for our embedded system, there was little difference in total accuracy (mAP_0.5 and mAP_0.5:0.95) across the three models. However, in terms of inference speed (FPS), both the Nano and Small models approached our target, while the Medium model did not. Therefore, considering both accuracy and inference speed, the Small model is the most suitable for our current embedded system.

Table 3. Comparison of accuracy and inference speed for each model.

| Model  | Metric       | OD-Micro | AOD-Mini | AOD-Middle | AOD-Big |
|--------|--------------|----------|----------|------------|---------|
| Nano   | mAP_0.5      | 0.899    | 0.938    | 0.966      | 0.966   |
| Nano   | mAP_0.5:0.95 | 0.766    | 0.811    | 0.842      | 0.843   |
| Nano   | FPS          | 37       | 33       | 30         | 27      |
| Small  | mAP_0.5      | 0.904    | 0.945    | 0.970      | 0.970   |
| Small  | mAP_0.5:0.95 | 0.802    | 0.847    | 0.875      | 0.874   |
| Small  | FPS          | 35       | 32       | 29         | 27      |
| Medium | mAP_0.5      | 0.904    | 0.945    | 0.971      | 0.980   |
| Medium | mAP_0.5:0.95 | 0.802    | 0.847    | 0.874      | 0.874   |
| Medium | FPS          | 19       | 17       | 16         | 15      |


Figure 4. Experimental detection results for each scene

In Section III, we established a 1% accuracy-loss threshold as the selection criterion for detectors. This decision is based on the accuracy analysis of each detector in the Small model, as seen in Table 3, where the scene-specific model was applied. The accuracy difference between the AOD-Mini and AOD-Middle of the Small model was approximately 2.58% for mAP_0.5 (0.945 vs. 0.970) and 3.2% for mAP_0.5:0.95 (0.847 vs. 0.875). However, because some objects were not properly detected in the actual prediction results, we set the selection criterion at 1% to mitigate this issue. Furthermore, we compared the accuracy of AOD-Middle and AOD-Big to determine the most suitable detector for the current scene. Since their accuracy was comparable, AOD-Middle was chosen.
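The quoted gaps follow directly from the Small-model rows of Table 3, computed as relative differences:

```python
# Relative mAP gaps between AOD-Mini and AOD-Middle (Small model, Table 3).
gap_05 = (0.970 - 0.945) / 0.970   # mAP_0.5
gap_095 = (0.875 - 0.847) / 0.875  # mAP_0.5:0.95
print(f"{gap_05:.2%}, {gap_095:.2%}")  # 2.58%, 3.20%
```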

To validate the proposed method, experiments were conducted across six different scenes. Fig. 4 illustrates the results of the scene-specific model method on images captured by a fixed camera. Images (a) to (f) in Fig. 4 display prediction results ranging from OD-Micro (left) to AOD-Big (right). The experiments revealed that in scenes (a), (c), and (e), middle-scale objects were absent, suggesting the suitability of AOD-Mini. Conversely, in scenes (b), (d), and (f), where big-scale objects were absent, AOD-Middle was appropriate. This confirms that applying the scene-specific model across various scenes can increase inference speed while maintaining accuracy.

V. DISCUSSION AND CONCLUSIONS

In tackling the complexities of real-time traffic analysis with edge AI devices, this study introduced a scene-specific system designed for cameras in fixed environments. We adapted this system to the YOLO network in three variants, enabling the use of four distinct detectors via a single model. A specialized module was developed to select the most appropriate detector for each scene. Our experimental evaluation, conducted across six different scenes, demonstrated that AOD-Mini is the optimal choice for three scenes, while AOD-Middle is more suitable for the other three. This approach successfully improved inference speeds without sacrificing accuracy. The positive results of the scene-specific system indicate its potential for broader application and its benefits to other neural network architectures. We anticipate that our research will contribute to the development of innovative lightweight methods, offering alternatives to traditional model-lightweighting approaches.

ACKNOWLEDGEMENTS

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2023-RS-2022-00156389) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation) and also was supported by the BK21 FOUR project (AI-driven Convergence Software Education Research Program) funded by the Ministry of Education, School of Computer Science and Engineering, Kyungpook National University, Korea (4199990214394).


References

  1. D. B. Nguyen, C. R. Dow, and S. F. Hwang, “An efficient traffic congestion monitoring system on internet of vehicles,” Wireless Communications and Mobile Computing, vol. 2018, pp. 1-17, 2018. DOI: 10.1155/2018/9136813.
  2. G. P. Rocha Filho, R. I. Meneguette, J. R. Torres Neto, A. Valejo, L. Weigang, J. Ueyama, G. Pessin, and L. A. Villas, “Enhancing intelligence in traffic management systems to aid in vehicle traffic congestion problems in smart cities,” Ad Hoc Networks, vol. 107, p. 102265, Oct. 2020. DOI: 10.1016/j.adhoc.2020.102265.
  3. K. H. N. Bui, H. Yi, and J. Cho, “A multi-class multi-movement vehicle counting framework for traffic analysis in complex areas using CCTV systems,” Energies, vol. 13, no. 8, p. 2036, Apr. 2020. DOI: 10.3390/en13082036.
  4. M. Sadiq, S. Masood, and O. Pal, “FD-YOLOv5: A fuzzy image enhancement based robust object detection model for safety helmet detection,” International Journal of Fuzzy Systems, vol. 24, no. 5, pp. 2600-2616, Jul. 2022. DOI: 10.1007/s40815-022-01267-2.
  5. G. W. Chen, Y. H. Lin, M. T. Sun, and T. U. İk, “Managing edge AI cameras for traffic monitoring,” in 2022 23rd Asia-Pacific Network Operations and Management Symposium (APNOMS), Takamatsu, Japan, pp. 1-4, 2022. DOI: 10.23919/APNOMS56106.2022.9919965.
  6. K. Y. Cao, Y. F. Liu, G. J. Meng, and Q. M. Sun, “An overview on edge computing research,” IEEE Access, vol. 8, pp. 85714-85728, 2020. DOI: 10.1109/Access.2020.2991734.
  7. M. Gusev and S. Dustdar, “Going back to the roots-the evolution of edge computing, an IoT perspective,” IEEE Internet Computing, vol. 22, no. 2, pp. 5-15, Mar. 2018. DOI: 10.1109/Mic.2018.022021657.
  8. X. F. Wang, Y. W. Han, V. C. M. Leung, D. Niyato, X. Q. Yan, and X. Chen, “Convergence of edge computing and deep learning: A comprehensive survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 2, pp. 869-904, 2020. DOI: 10.1109/Comst.2020.2970550.
  9. A. Mhalla, T. Chateau, S. Gazzah, and N. E. Ben Amara, “An embedded computer-vision system for multi-object detection in traffic surveillance,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 11, pp. 4006-4018, Nov. 2019. DOI: 10.1109/Tits.2018.2876614.
  10. Y. Jeong, H. W. Oh, S. Kim, and S. E. Lee, “An edge AI device based intelligent transportation system,” Journal of Information and Communication Convergence Engineering, vol. 20, no. 3, pp. 166-173, Sep. 2022. DOI: 10.56977/jicce.2022.20.3.166.
  11. S. Q. Ren, K. M. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, Jun. 2017. DOI: 10.1109/Tpami.2016.2577031.
  12. K. M. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, pp. 2980-2988, 2017. DOI: 10.1109/Iccv.2017.322.
  13. J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, pp. 6517-6525, 2017. DOI: 10.1109/Cvpr.2017.690.
  14. J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, Apr. 2018. DOI: 10.48550/arXiv.1804.02767.
  15. A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, Apr. 2020. DOI: 10.48550/arXiv.2004.10934.
  16. G. Jocher, YOLOv5, 2020. [Online]. Available: https://github.com/ultralytics/yolov5.
  17. C. Y. Wang, H. Y. M. Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh, “CSPNet: A new backbone that can enhance learning capability of CNN,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, USA, pp. 1571-1580, 2020. DOI: 10.1109/Cvprw50498.2020.00203.
  18. S. Liu, L. Qi, H. F. Qin, J. P. Shi, and J. Y. Jia, “Path aggregation network for instance segmentation,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, pp. 8759-8768, 2018. DOI: 10.1109/Cvpr.2018.00913.
  19. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, pp. 779-788, 2016. DOI: 10.1109/Cvpr.2016.91.
  20. L. Wang and K. J. Yoon, “Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 6, pp. 3048-3068, Jun. 2022. DOI: 10.1109/Tpami.2021.3055564.
  21. H. Vanholder, “Efficient inference with TensorRT,” in GPU Technology Conference, vol. 1, p. 2, 2016.
  22. Y. Gong, L. Liu, M. Yang, and L. Bourdev, “Compressing deep convolutional networks using vector quantization,” arXiv preprint arXiv:1412.6115, Dec. 2014. DOI: 10.48550/arXiv.1412.6115.
  23. S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural networks,” Advances in Neural Information Processing Systems, vol. 28, 2015.
  24. X. K. Zhu, S. C. Lyu, X. Wang, and Q. Zhao, “TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios,” in 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, Canada, pp. 2778-2788, 2021. DOI: 10.1109/Iccvw54120.2021.00312.
  25. Y. Li, X. Y. Bai, and C. L. Xia, “An improved YOLOv5 based on triplet attention and prediction head optimization for marine organism detection on underwater mobile platforms,” Journal of Marine Science and Engineering, vol. 10, no. 9, p. 1230, Sep. 2022. DOI: 10.3390/jmse10091230.