
Regular paper


Journal of information and communication convergence engineering 2023; 21(1): 24-31

Published online March 31, 2023

https://doi.org/10.56977/jicce.2023.21.1.24

© Korea Institute of Information and Communication Engineering

SSD PCB Component Detection Using YOLOv5 Model

Pyeoungkee Kim, Xiaorui Huang*, and Ziyu Fang

Department of Computer Software Engineering, Silla University, Busan 46958, South Korea

Correspondence to : Xiaorui Huang (E-mail: awber12138@gmail.com, Tel: +82-999-5066)
Department of Computer Software Engineering, Silla University, Busan 46958, South Korea

Received: November 7, 2022; Revised: January 12, 2023; Accepted: January 30, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The solid-state drive (SSD) offers higher input and output speeds, greater resistance to physical shock, and lower latency than regular hard disks; hence, it is an increasingly popular storage device. However, the tiny components on its internal printed circuit board (PCB) hinder the manual detection of malfunctioning components. With the rapid development of artificial intelligence technologies, automatic component detection using convolutional neural networks (CNNs) offers a sound solution for this area. This study proposes applying the YOLOv5 model to SSD PCB component detection, which is the first step in detecting defective components, and achieves pioneering state-of-the-art results on the SSD PCB dataset. Contrast experiments were conducted with YOLOX, a model neck-and-neck with YOLOv5; YOLOv5 obtained an mAP@0.5 of 99.0%, clearly outperforming YOLOX. These experiments show that the YOLOv5 model is effective for tiny object detection and can be used in future work on the second step of detecting defective components.

Keywords: CNN, PCB, solid-state disk, tiny object detection, YOLOv5

I. INTRODUCTION

A solid-state drive (SSD), built from silicon memory chips, is the storage component of a computer. Compared with electromechanical drives, SSDs are more resistant to physical shock; they run more silently and have higher input and output speeds and lower latency [1]. With reliable data retention even without power [2], flash memory, an almost instantaneous start-up time, a resilient operating temperature range, and a small, lightweight form factor, SSDs are more convenient and popular. Since their price fell from $1.56/GB in 2011 [3] to $0.11/GB in 2022 [4], SSDs have become increasingly prominent in the electronic storage market. However, bad blocks in new SSDs, caused primarily by chip failure, are fairly common: in the first four years of deployment, 30-80% of SSDs develop at least one bad block, and 2-7% develop at least one bad chip [5]. Therefore, detecting defective components is a common need for most users.

This study proposes a method for the first step of defect detection on SSD printed circuit boards (PCBs): recognizing and classifying the components. These experiments lay the foundation for the second step of defect detection in SSD PCBs. The mass of tiny, dense components on each SSD PCB makes manual detection of faulty components burdensome and inefficient; hence, automatic detection using machine learning and deep learning algorithms appears to be a reliable solution.

Previous research has focused on PCB flaw detection; however, reports on SSD PCBs are lacking. Because PCBs likewise have masses of tiny, dense components, PCB component detection was studied further to learn its associated challenges. Methods for PCB defect detection have evolved from traditional machine learning to deep learning. Traditional methods compare a flawless template with a test image to classify and locate missing and defective components by extracting and analyzing the sparse pixel and texture information of good components on PCBs [6]. Purva et al. [10] used contour analysis calculations to increase the precision of PCB detection. Fang et al. [11] adopted the AdaBoost classifier to detect capacitors only. However, these machine-learning-based methods are time-consuming, particularly when applied to assembly lines. Thus, the development of deep learning algorithms has introduced a new perspective to PCB defect detection. Kuo et al. [12] used a graph-based neural network to recognize multiple PCB component categories; however, it was barely satisfactory for smaller and similar components, such as capacitors and resistors. An improved YOLOv3 [13], which includes the 53-layer Darknet53 feature extractor, was also applied to this area and achieved a mean average precision (mAP) of 93.07% for PCB component detection.

Herein, the YOLOv5 model, which is built from deep convolutional neural networks (CNNs), was utilized for SSD PCB component detection. Evolving from the YOLOv1 [14], YOLOv2 [15], YOLOv3 [13], and YOLOv4 [16] models, YOLOv5 [17] makes several improvements that benefit tiny object detection. This study shows that the YOLOv5 model is sufficient to recognize tiny SSD PCB components, achieving an mAP@0.5 of 99.0%, compared with 87.9% for YOLOX [34]. YOLOX is an advanced and popular computer-vision algorithm that is competitive with YOLOv5. This study presents the framework of the YOLOv5 model and the experimental results on the SSD PCB dataset used, which was contributed by Ziyu Fang, our second author. The remainder of this paper is organized as follows.

Section II presents related work. Section III presents the details of the YOLOv5 model and the recognition of YOLOX and YOLOv5. Section IV describes the experimental results and analysis. Finally, Section V presents the conclusions, discussion, and future scope.

II. RELATED WORKS

Studies on PCBs, which similarly have tiny and unevenly distributed components, are limited, whereas those on SSD PCB component detection are lacking. Thus, herein, the recognition of components on PCBs was studied, and a model that can lessen the difficulties and improve the performance of SSD PCB component detection was determined. Some commonly encountered challenges in SSD PCB detection were learned from related studies on PCB component detection. Reza et al. [18] highlighted that the tiny and densely cluttered nature of electronic components is a significant challenge; therefore, in this study, focus was placed on tiny object detection. Kang et al. [19] found that sparse pixel and texture information leads to challenges in tiny object detection; they therefore used the unit statistical curvature feature algorithm to recognize tiny objects. However, it is inefficient owing to its complex computation. Wang et al. [20] focused on IoU-based metrics and proposed the normalized Wasserstein distance to compare the similarity between the ground-truth and predicted bounding boxes. In [21], a multiple-center-point-based learning network was proposed to improve the localization performance of tiny object detection. Nevertheless, accurately locating the densely distributed components of PCBs poses a challenge. YOLOv5 selects the GIoU loss [22], thus allowing the optimization of nonoverlapping bounding boxes. In addition, CNNs are effective for tiny object detection. To avoid excessive computation in tiny object detection, [23] decreased the depth of the convolutional layers in YOLOv3, thereby increasing its speed. In addition, [24] showed that the feature pyramid network (FPN) [25] as the backbone for the extractor performs well on tiny PCB component detection. However, [26] showed that the FPN technique alone is insufficient for excellent tiny object detection and leaves room for advancement.
YOLOv5 adopts a path aggregation network (PAN) [27] + FPN and spatial pyramid pooling (SPP) [28] framework to ensure the pixel completeness of the feature maps. Shi et al. [29] proposed a single-shot detector for tiny object detection, and Li et al. [30] used a single-shot multibox detector to recognize PCB defects, obtaining a best mAP of 0.951. Cheong et al. [31] applied transfer learning based on CNNs to PCB defect and component recognition and achieved an mAP of 0.9654 in recognizing 25 different PCB components.

In addition to their small size, the imbalanced distribution of components on PCBs is another challenge for PCB component detection. Using dataset statistics, Mahalingam et al. [32] reported that the components on a single board are imbalanced, extremely small, and densely distributed; they used machine-learning algorithms for PCB component recognition and obtained a best mAP of 0.833. Reza et al. [18] explained why the uneven distribution of different components in a single microelectronic image causes problems in their detection. They also proposed a solution called loss boosting (LB), which balances the loss between hard samples (components too tiny and dense to be easily detected) and easy samples through automatic loss weight adjustment, and obtained a highest accuracy of 0.9231 for PCB IC detection. LB increases the loss of hard samples, which detectors often otherwise ignore. YOLOv5 is embedded with the focal loss [33], which is functionally similar to LB: it down-weights easy examples and focuses training on hard samples. The mosaic data augmentation adopted by YOLOv5 also contributes to tiny object detection. The experimental results in later sections prove its effectiveness.
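For a single binary prediction, the focal loss [33] described above can be sketched as follows. This is the textbook formulation rather than YOLOv5's exact multi-class implementation; the parameter defaults (gamma = 2, alpha = 0.25) are the commonly cited ones and are assumptions here.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for one prediction: the (1 - pt)^gamma factor
    down-weights easy, well-classified examples so training focuses
    on hard samples. p is the predicted probability of the positive
    class; y is the ground-truth label (0 or 1)."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(pt)

# An easy, confidently correct example contributes far less loss
# than a hard one, mirroring the LB idea of emphasizing hard samples.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
assert hard > 100 * easy
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) the ordinary cross-entropy, which treats easy and hard samples alike.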

Similar to PCBs, SSD PCBs have numerous tiny components spread thickly across the board, so the problems and solutions mentioned above also hold for SSD PCB detection. To date, no existing methods have been applied to SSD PCB component detection; ours is the first. To prove the merit of the proposed approach, it was compared with YOLOX, another advanced model neck-and-neck with YOLOv5, and the experimental results showed that our method performs better. Thus, this study demonstrates that the YOLOv5 model suits SSD PCB component detection.

A. SSD Components

Experiments were conducted on six primary components of the SSD PCB, namely capacitors, resistors, ICs, crystals, inductors, and transistors. As shown in Fig. 1, capacitors and resistors far outnumber ICs, crystals, inductors, and transistors. Only one crystal is present on a board, and some boards have none. Moreover, many capacitors and resistors have similar shapes, which often causes incorrect detection.

Fig. 1. Image of an SSD PCB with capacitors, resistors, ICs, crystals, inductors, and transistors.

B. Structure of YOLOv5 Model

The structure of the YOLOv5 model, comprising four parts, is shown in Fig. 2. Although no official paper on YOLOv5 has been published, its structure was learned from the source code [17].

Fig. 2. Structure of YOLOv5 model with four parts: input, backbone, neck, and head (prediction).

First, the input part utilizes mosaic data augmentation, mixing four images into one so that each training sample carries more pixel information while the required mini-batch size is reduced. Fig. 3 shows a mosaic sample image.

Fig. 3. SSD PCB training sample after Mosaic data augmentation.
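The mosaic step can be sketched as follows. This is a minimal illustration with a fixed center, assuming four equally sized input images; the actual YOLOv5 implementation also jitters the mosaic center and remaps the box labels accordingly, which is omitted here.

```python
import numpy as np

def mosaic(imgs, size=640):
    """Tile crops of four images into one training canvas:
    top-left, top-right, bottom-left, bottom-right."""
    half = size // 2
    canvas = np.zeros((size, size, 3), dtype=imgs[0].dtype)
    canvas[:half, :half] = imgs[0][:half, :half]
    canvas[:half, half:] = imgs[1][:half, :half]
    canvas[half:, :half] = imgs[2][:half, :half]
    canvas[half:, half:] = imgs[3][:half, :half]
    return canvas

# Four flat-colored dummy images make the tiling easy to verify.
imgs = [np.full((640, 640, 3), i, dtype=np.uint8) for i in range(4)]
m = mosaic(imgs)
assert m.shape == (640, 640, 3)
assert m[0, 0, 0] == 0 and m[350, 350, 0] == 3
```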

Furthermore, auto-learned bounding-box anchors and letterbox adaptive image scaling are embedded in the YOLOv5 model. These improve the speed and accuracy of the model; the anchors are derived using k-means clustering and refined with a genetic algorithm.
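The k-means anchor step can be illustrated with a toy sketch over synthetic box sizes. This is an assumption-laden simplification: real auto-anchoring clusters the labeled box dimensions of the training set and, as noted above, refines the centers with a genetic algorithm, which is omitted here.

```python
import numpy as np

def kmeans_anchors(wh, k=2, iters=50, seed=0):
    """Toy k-means over (width, height) pairs to derive k anchor shapes."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest anchor center, then re-center.
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)
    return centers

# Synthetic data: tiny components cluster near small sizes, ICs larger.
wh = np.vstack([
    np.random.default_rng(1).normal([10, 10], 2, (100, 2)),
    np.random.default_rng(2).normal([60, 40], 5, (100, 2)),
])
anchors = kmeans_anchors(wh, k=2)
assert anchors.shape == (2, 2)
```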

Second, in the backbone, YOLOv5 uses the Focus module to preserve pixel information to the greatest extent possible. It operates through slicing and concat operations, as shown in Fig. 4. Taking the default input as an example, the 640×640×3 input image is sliced into four 320×320×3 feature maps by sampling every other pixel, and these are subsequently concatenated into a 320×320×12 feature map. A convolution-batch normalization-LeakyReLU (CBL) operation then transforms it into a 320×320×32 feature map without losing pixel information.

Fig. 4. Focus: slicing and concat.
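The slicing-and-concat step shown in Fig. 4 can be sketched in NumPy as follows. This is an illustration rather than the original PyTorch implementation, and the subsequent CBL convolution to 32 channels is omitted.

```python
import numpy as np

def focus_slice(img):
    """Slice an HxWxC image into four half-resolution maps by taking
    every other pixel (even/odd row and column offsets), then
    concatenate them along the channel axis. No pixels are discarded."""
    return np.concatenate(
        [img[0::2, 0::2], img[1::2, 0::2], img[0::2, 1::2], img[1::2, 1::2]],
        axis=2,
    )

x = np.random.rand(640, 640, 3)
y = focus_slice(x)
# 640x640x3 -> 320x320x12: spatial size halves, channels quadruple.
assert y.shape == (320, 320, 12)
```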

Subsequently, the feature map is passed through the cross-stage partial network (CSPNet) [35], as shown in Fig. 5. The CSP layer first copies the feature map, passing one copy through a CBL-ResNet-1×1 convolution branch and the other through a 1×1 convolution layer; the two branches are finally concatenated and passed through a convolutional layer. This decreases redundant feature information while supplementing omitted features, thereby simplifying the network.

Fig. 5. CSPNet (Cross stage partial network) operation.

Next, in the neck, the YOLOv5 model includes PAN + FPN and SPP. This part collects feature maps from different stages to extract complete feature information. Feature maps of arbitrary size are passed into the SPP layer to obtain size-unified maps, which are then conveyed to the following FPN + PAN layers. The framework is illustrated in Fig. 6.

Fig. 6. Spatial pyramid pooling (SPP) structure.
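The pooling-and-concatenation idea behind the SPP block can be sketched as follows. This is a simplified single-channel NumPy version in the spirit of YOLOv5's SPP layer; the kernel sizes (5, 9, 13), stride 1, and "same" padding are assumptions, and the real layer operates on multi-channel tensors with surrounding convolutions.

```python
import numpy as np

def max_pool_same(x, k):
    """Max-pool a 2D map with kernel k, stride 1, and 'same' padding,
    so the output keeps the input's spatial size."""
    p = k // 2
    xp = np.pad(x, p, mode="constant", constant_values=-np.inf)
    h, w = x.shape
    return np.array(
        [[xp[i:i + k, j:j + k].max() for j in range(w)] for i in range(h)]
    )

def spp(x, kernels=(5, 9, 13)):
    """Concatenate the input with max-pooled copies at several kernel
    sizes, stacking along a channel axis."""
    maps = [x] + [max_pool_same(x, k) for k in kernels]
    return np.stack(maps, axis=0)

x = np.random.rand(16, 16)
out = spp(x)
# One input map plus three pooled maps, all spatially size-unified.
assert out.shape == (4, 16, 16)
```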

Fig. 7, reproduced from the original PAN paper, illustrates the workflow of PAN + FPN. FPN's down-sampling path alone tends to lose some marginal pixel information when detecting tiny components, whereas combining the down-sampling path (red dotted line) with the bottom-up path-augmentation structure (green dotted line) of FPN + PAN supplements the lost feature information, thereby enhancing feature extraction. The down-sampling and bottom-up structures yield smaller-scale and larger-scale feature maps, respectively, which are then combined through lateral connections to complete the feature information.

Fig. 7. Path aggregation network (PAN) structure [27]. (a) Feature pyramid network (FPN) backbone. (b) Bottom-up path augmentation.

Finally, the feature maps are fed into the detection head, which predicts the categories and the bounding boxes, the latter adjusted through the GIoU loss [22]. The GIoU loss makes the optimization of nonoverlapping bounding boxes feasible.
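A minimal computation of GIoU [22] for axis-aligned boxes might look like the sketch below (the loss is then 1 − GIoU). The key property for this paper's setting is that, unlike plain IoU, GIoU remains informative when predicted and ground-truth boxes do not overlap at all, which is common for tiny, densely packed components.

```python
def giou(box_a, box_b):
    """Generalized IoU for boxes in (x1, y1, x2, y2) form.
    Returns a value in (-1, 1]; negative for well-separated boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union as in ordinary IoU.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box: the GIoU penalty term.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c_area - union) / c_area

assert giou((0, 0, 1, 1), (0, 0, 1, 1)) == 1.0   # perfect overlap
g = giou((0, 0, 1, 1), (2, 0, 3, 1))              # disjoint boxes
assert g < 0  # IoU alone would be 0 and give no gradient signal
```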

C. Recognition of SSD PCB Components

1) SSD PCB Dataset

Our dataset was self-made by photographing SSD PCBs and labeling them with the LabelImg program. Approximately 78% of the dataset was contributed by our second author, Ziyu Fang. A total of 232 images were compiled from almost 20 brands, including Samsung, Intel, SanDisk, and ADATA. The training, validation, and test sets contained 158, 39, and 35 images, respectively. For training and validation, the numbers of capacitors, resistors, integrated circuits (ICs), inductors, crystals, and transistors were 9679, 5588, 952, 256, 72, and 282, respectively; for the test set, they were 1089, 620, 117, 14, 7, and 25. The proportion of each component is listed in Table 1.

Table 1. Quantity and proportion of components of each class in the dataset

Class        Train set               Test set
             Quantity   Proportion   Quantity   Proportion
Capacitor    9679       57.51%       1089       58.17%
Resistor     5588       33.20%       620        33.12%
IC           952        5.66%        117        6.25%
Inductor     256        1.52%        14         0.75%
Crystal      72         0.43%        7          0.37%
Transistor   282        1.68%        25         1.34%
Total        16829      100.00%      1872       100.00%


2) Experimental Environment

The experiments were conducted on Ubuntu 20.04 LTS with PyTorch 1.7; the CPU was an Intel(R) Core(TM) i7-10700, and the GPU was an NVIDIA GeForce RTX 2060 SUPER. The dataset was trained on the YOLOv5 and YOLOX models. For each model, 1024×1024 images were input, the batch size was set to eight, and training ran for 1500 steps on a single GPU. Parameters such as the initial learning rate, momentum, weight decay, mosaic, and mixup degree were left at their defaults. YOLOv5 achieved an extraordinary result, better than that of YOLOX, and good results were also obtained on the test set.

A. Training and Validation Results

Herein, YOLOv5 achieved an mAP@0.5 of 0.990, whereas YOLOX achieved an mAP@0.5 of 0.879. The mAP@0.5 of each individual component class was also prominent: 0.992 for capacitors, 0.975 for resistors, 0.989 for ICs, 0.995 for inductors, 0.995 for crystals, and 0.995 for transistors. The precision and recall (PR) curves are shown in Fig. 8.

Fig. 8. Precision and recall (PR) curve of SSD PCB component detection.
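As a quick arithmetic consistency check on the numbers above (not part of the original evaluation code), the overall mAP@0.5 is simply the mean of the six per-class AP@0.5 values:

```python
# Per-class AP@0.5 values reported above
aps = {"capacitor": 0.992, "resistor": 0.975, "IC": 0.989,
       "inductor": 0.995, "crystal": 0.995, "transistor": 0.995}

map50 = sum(aps.values()) / len(aps)
assert round(map50, 3) == 0.990  # matches the reported overall mAP@0.5
```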

As shown in Fig. 9, YOLOv5 exhibited better generalization ability than YOLOX and remained stable after convergence. YOLOv5, embedded with the Focus module and the FPN + PAN + SPP structure, performed better on tiny object detection than YOLOX, which does not use Focus and uses FPN + SPP; its backbone also differs from that of YOLOv5. Both theory and experiment show that YOLOv5 performs well in tiny object detection.

Fig. 9. Comparison of mAP@0.5 curves between YOLOv5 and YOLOX.

Additionally, an analysis was conducted using a confusion matrix. As Fig. 10 shows, no capacitors or transistors were detected incorrectly. Only 2% of resistors were incorrectly detected as background, 2% of ICs were incorrectly detected as transistors, and 15% of inductors were incorrectly detected as ICs; for crystals, 10% were incorrectly detected as inductors and 20% as ICs. The background-class results reveal that smaller components, such as capacitors and resistors, are more easily missed or recognized as background, whereas larger ones tend to be confused with one another. Increasing the number of such samples can improve performance.

Fig. 10. Confusion matrix of YOLOv5 on SSD PCB.

B. Test Results and Analysis

Herein, 35 images were tested and a detailed analysis of the results was performed. In Fig. 11, under the same test conditions, the accuracies of YOLOv5 on capacitors were all higher than 0.80, whereas those of YOLOX were only approximately 0.50. The number beside each class label indicates the accuracy.

Fig. 11. Capacitor accuracy comparison between YOLOv5 and YOLOX.

In Fig. 12, under the same test conditions, the accuracies of YOLOv5 on the resistors and other components were higher than 0.85, whereas those of YOLOX were only around 0.40. Moreover, one resistor was missed by YOLOX but recognized by YOLOv5.

Fig. 12. Resistor accuracy comparison between YOLOv5 and YOLOX.

In Fig. 13, some ICs were cropped and combined into one image for easy reference. YOLOv5 outperformed YOLOX in IC detection.

Fig. 13. IC and transistor accuracy comparison between YOLOv5 and YOLOX.

As shown in Fig. 14, YOLOv5 also performed better than YOLOX in detecting inductors and other components. The accuracies of YOLOv5 were all approximately 0.90, whereas those of YOLOX were below 0.50. A resistor and a transistor were missed by YOLOX but recognized by YOLOv5.

Fig. 14. Inductor accuracy comparison between YOLOv5 and YOLOX.

These test samples show that YOLOv5 has a high rate of correct recognition regardless of the dense distribution and tiny shape of the samples. These experimental results demonstrate that YOLOv5 is sufficient for SSD PCB component detection and tiny object detection.

Research on SSD PCB component detection has been lacking, and our work shows that the YOLOv5 model is sufficiently good for this domain in terms of mAP@0.5. Our test results, particularly the detection of capacitors and resistors, demonstrate high precision in tiny object recognition, thereby showing that the YOLOv5 model works well for tiny object detection and SSD PCB component detection. Through a contrast test with YOLOX, an advanced model neck-and-neck with YOLOv5, our work demonstrates that the YOLOv5 model performs better on tiny object recognition. The next step is to apply our trained model to the study of the second step of defect detection on SSD PCBs.

This study found that the larger samples could be confused with each other. This can be improved by increasing the number of such samples in the training set. In the future, we will apply the proposed model to an assembly line, thereby saving human costs and increasing efficiency.

  1. V. Kasavajhala, Solid state drive vs. hard disk drive price and performance study, Dell Technical White Paper, pp. 8-9, May 2011. [Internet]. Available: https://www.profesorweb.es/wp-content/uploads/2012/11/ssd_vs_hdd_price_and_performance_study.pdf.
  2. K. Vättö, The truth about SSD data retention. [Internet]. Available: https://www.anandtech.com/show/9248/the-truth-about-ssd-dataretention.
  3. P. Hernandez, SSDs sales rise, prices drop below $1 per GB in 2012, Jan. 2012. [Internet]. Available: https://www.ecoinsite.com/2012/01/ssd-salesprice-1-dollar-per-gb-2012.html.
  4. S. Downing, Best SSDs 2022: From budget SATA to blazing-fast NVMe. [Internet]. Available: https://www.tomshardware.com/reviews/best-ssds,3891.html.
  5. B. Schroeder, R. Lagisetty, and A. Merchant, Flash reliability in production: The expected and the unexpected, in 14th USENIX Conference on File and Storage Technologies (FAST 16), Santa Clara, USA, pp. 67-80, 2016. Available: https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder.
  6. D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004. DOI: 10.1023/B:VISI.0000029664.99615.94.
  7. H. Bay, T. Tuytelaars, and L. V. Gool, SURF: Speeded up robust features, in European Conference on Computer Vision, vol. 3951, Springer, Berlin, Heidelberg, pp. 404-417, 2006. DOI: 10.1007/11744023_32.
  8. A. I. M. Hassanin, F. E. Abd El-Samie, and E. B. Ghada, A real-time approach for automatic defect detection from PCBs based on SURF features and morphological operations, Multimedia Tools and Applications, vol. 78, no. 24, pp. 34437-34457, Oct. 2019. DOI: 10.1007/s11042-019-08097-9.
  9. S. U. Rehman, K. F. Thang, and N. S. Lai, Automated PCB identification and defect-detection system (APIDS), International Journal of Electrical and Computer Engineering, vol. 9, no. 1, pp. 297-306, Feb. 2019. DOI: 10.47760/ijcsmc.2021.v10i02.008.
  10. A. Salunke Purva, N. Sherkar Shubhangi, and C. S. Arya, PCB (printed circuit board) fault detection using machine learning, International Journal of Computer Science and Mobile Computing, vol. 10, no. 2, pp. 54-56, Feb. 2021. DOI: 10.47760/ijcsmc.2021.
  11. J. Fang, L. Shang, G. Gao, K. Xiong, and C. Zhang, Capacitor detection on PCB using AdaBoost classifier, Journal of Physics: Conference Series, vol. 1631, no. 1, 012185, Jul. 2020. DOI: 10.1088/1742-6596/1631/1/012185.
  12. C. W. Kuo, J. D. Ashmore, D. Huggins, and Z. Kira, Data-efficient graph embedding learning for PCB component detection, in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, USA, pp. 551-560, 2019. DOI: 10.1109/WACV.2019.00064.
  13. J. Redmon and A. Farhadi, YOLOv3: An incremental improvement, arXiv preprint arXiv:1804.02767, Apr. 2018. DOI: 10.48550/arXiv.1804.02767.
  14. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, You only look once: Unified, real-time object detection, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, pp. 779-788, 2016. DOI: 10.1109/CVPR.2016.91.
  15. J. Redmon and A. Farhadi, YOLO9000: Better, faster, stronger, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 6517-6525, 2017. DOI: 10.1109/CVPR.2017.690.
  16. A. Bochkovskiy, C. Y. Wang, and H. Y. M. Liao, YOLOv4: Optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934, Apr. 2020. DOI: 10.48550/arXiv.2004.10934.
  17. G. Jocher, YOLOv5 source code. [Online]. Available: https://github.com/ultralytics/yolov5.
  18. M. A. Reza, Z. Chen, and D. J. Crandall, Deep neural network-based detection and verification of microelectronic images, Journal of Hardware and Systems Security, vol. 4, no. 1, pp. 44-54, Jan. 2020. DOI: 10.1007/s41635-019-00088-4.
  19. Y. Kang and X. Li, A novel tiny object recognition algorithm based on unit statistical curvature feature, in European Conference on Computer Vision, vol. 9909, Amsterdam, The Netherlands, pp. 762-777, 2016. DOI: 10.1007/978-3-319-46454-1_46.
  20. J. Wang and C. Xu, A normalized Gaussian Wasserstein distance for tiny object detection, arXiv preprint arXiv:2110.13389, Oct. 2021. DOI: 10.48550/arXiv.2110.13389.
  21. J. Wang, W. Yang, H. Guo, R. Zhang, and G. S. Xia, Tiny object detection in aerial images, in 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, pp. 3791-3798, 2021. DOI: 10.1109/ICPR48806.2021.9413340.
  22. H. Rezatofighi, N. Tsoi, J. Y. Gwak, A. Sadeghian, I. Reid, and S. Savarese, Generalized intersection over union: A metric and a loss for bounding box regression, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, pp. 658-666, 2019. DOI: 10.1109/CVPR.2019.00075.
  23. P. Adarsh, P. Rathi, and M. Kumar, YOLO v3-Tiny: Object detection and recognition using one stage improved model, in 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, pp. 687-694, 2020. DOI: 10.1109/ICACCS48705.2020.9074315.
  24. B. Hu and J. Wang, Detection of PCB surface defects with improved faster-RCNN and feature pyramid network, IEEE Access, vol. 8, pp. 108335-108345, Jun. 2020. DOI: 10.1109/ACCESS.2020.3001349.
  25. T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, Feature pyramid networks for object detection, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 936-944, 2017. DOI: 10.1109/CVPR.2017.106.
  26. Y. Gong, X. Yu, Y. Ding, X. Peng, J. Zhao, and Z. Han, Effective fusion factor in FPN for tiny object detection, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, USA, pp. 1160-1168, 2021. DOI: 10.1109/WACV48630.2021.00120.
  27. S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, Path aggregation network for instance segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, pp. 8759-8768, 2018. DOI: 10.1109/CVPR.2018.00913.
  28. K. He, X. Zhang, S. Ren, and J. Sun, Spatial pyramid pooling in deep convolutional networks for visual recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, 2015. DOI: 10.1109/TPAMI.2015.2389824.
  29. W. Shi, Z. Lu, W. Wu, and H. Liu, Single-shot detector with enriched semantics for PCB tiny defect detection, The Journal of Engineering, vol. 2020, no. 13, pp. 366-372, 2020. DOI: 10.1049/joe.2019.1180.
  30. D. Li, L. Xu, G. Ran, and Z. Guo, Computer vision based research on PCB recognition using SSD neural network, Journal of Physics: Conference Series, vol. 1815, no. 1, 012005, 2021. DOI: 10.1088/1742-6596/1815/1/012005.
  31. L. K. Cheong, S. A. Suandi, and S. Rahman, Defects and components recognition in printed circuit boards using convolutional neural network, in 10th International Conference on Robotics, Vision, Signal Processing and Power Applications, Springer, Singapore, pp. 75-81, 2019. DOI: 10.1007/978-981-13-6447-1_10.
  32. G. Mahalingam, K. M. Gay, and K. Ricanek, PCB-METAL: A PCB image dataset for advanced computer vision machine learning component analysis, in 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, pp. 1-5, 2019. DOI: 10.23919/MVA.2019.8757928.
  33. T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, Focal loss for dense object detection, in Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988, 2017. DOI: 10.1109/TPAMI.2018.2858826.
  34. Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, YOLOX: Exceeding YOLO series in 2021, arXiv preprint arXiv:2107.08430, Jul. 2021. DOI: 10.48550/arXiv.2107.08430.
  35. C. Y. Wang, H. Y. M. Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh, CSPNet: A new backbone that can enhance learning capability of CNN, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 390-391, 2020. DOI: 10.1109/CVPRW50498.2020.00203.

Pyeoungkee Kim

He has been a professor in the Computer Software Engineering department at Silla University since March 1995. He has also served as Executive Director of the Korean Smart Media Society, Chairman of the International Society for Convergence Science and Technology (IACST), and President of the Busan Makers Association. His academic achievements can be seen at: https://computer.silla.ac.kr/computer2016/index.php?pCode=staff&pg=1&mode=view&idx=816.


Xiaorui Huang

She was born in 1996 and is a postgraduate student majoring in Artificial Intelligence at Silla University in Busan, Korea, where she focuses on computer vision. She is from China and received her bachelor's degree in engineering from Bohai University in China.


Ziyu Fang

He was born on March 27, 1990. He graduated from Silla University in February 2022 with a Ph.D. in the computer vision field. He is from China, and his research focuses on computer vision.


Article

Regular paper

Journal of information and communication convergence engineering 2023; 21(1): 24-31

Published online March 31, 2023 https://doi.org/10.56977/jicce.2023.21.1.24

Copyright © Korea Institute of Information and Communication Engineering.

SSD PCB Component Detection Using YOLOv5 Model

Pyeoungkee Kim , Xiaorui Huang *, and Ziyu Fang

Department of Computer Software Engineering, Silla University, Busan 46958, South Korea

Correspondence to:Xiaorui Huang (E-mail: awber12138@gmail.com, Tel: +82-999-5066)
Department of Computer Software Engineering, Silla University, Busan 46958, South Korea

Received: November 7, 2022; Revised: January 12, 2023; Accepted: January 30, 2023

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The solid-state drive (SSD) possesses higher input and output speeds, more resistance to physical shock, and lower latency compared with regular hard disks; hence, it is an increasingly popular storage device. However, tiny components on an internal printed circuit board (PCB) hinder the manual detection of malfunctioning components. With the rapid development of artificial intelligence technologies, automatic detection of components through convolutional neural networks (CNN) can provide a sound solution for this area. This study proposes applying the YOLOv5 model to SSD PCB component detection, which is the first step in detecting defective components. It achieves pioneering state-of-the-art results on the SSD PCB dataset. Contrast experiments are conducted with YOLOX, a neck-and-neck model with YOLOv5; evidently, YOLOv5 obtains an mAP@0.5 of 99.0%, essentially outperforming YOLOX. These experiments prove that the YOLOv5 model is effective for tiny object detection and can be used to study the second step of detecting defective components in the future.

Keywords: CNN, PCB, solid-state disk, tiny object detection, YOLOv5

I. INTRODUCTION

A solid-state drive (SSD), built from silicon memory chips, is a storage component of a computer. Compared with electromechanical drives, SSDs are more resistant to physical shock, run more silently, and offer higher input and output speeds with lower latency [1]. With reliable data retention even without power [2], almost instantaneous start-up, a resilient operating temperature range, and a small, lightweight form factor, flash-based SSDs are convenient and increasingly popular. Since their price fell from $1.56/GB in 2011 [3] to $0.11/GB in 2022 [4], SSDs have taken a growing share of the electronic storage market. However, bad blocks in new SSDs, caused primarily by chip failure, are fairly common: in the first four years of deployment, 30-80% of SSDs develop at least one bad block, and 2-7% develop at least one bad chip [5]. Detecting defective components is therefore a common need for most users. This study proposes a method for the first step of defect detection on SSD printed circuit boards (PCBs), namely recognizing and classifying the components. These experiments lay the foundation for the second step, detecting defective components on SSD PCBs.

The mass of tiny, densely packed components on each SSD PCB makes manual detection of faulty components burdensome and inefficient. Hence, automatic detection using machine learning and deep learning algorithms is a promising solution. Previous research has focused on PCB flaw detection, whereas reports on SSD PCBs are lacking. Because PCBs likewise carry masses of tiny, dense components, PCB component detection was studied further to understand the associated challenges. Methods for PCB defect detection have evolved from traditional machine learning to deep learning. Traditional methods compare a flawless template with a test board to classify and locate missing and defective components, extracting and analyzing the sparse pixel and texture information of good components on the PCB [6]. Purva et al. [10] used contour analysis to increase the precision of PCB detection. Fang et al. [11] adopted the AdaBoost classifier to detect capacitors only. However, these machine learning-based methods are time-consuming, particularly when applied to assembly lines. The development of deep learning algorithms has thus introduced a new perspective on PCB defect detection. Kuo et al. [12] used a graph-based neural network to recognize multiple PCB component categories; however, it was barely satisfactory for smaller and similar components, such as capacitors and resistors. An improved YOLOv3 [13], which includes the 53-layer Darknet-53 feature extractor [8], was also applied to this area and achieved a mean average precision (mAP) of 93.07% for PCB component detection.

Herein, the YOLOv5 model, built on deep convolutional neural networks (CNNs), was applied to SSD PCB component detection. Evolving from the YOLOv1 [14], YOLOv2 [15], YOLOv3 [13], and YOLOv4 [16] models, YOLOv5 [17] introduces several improvements that benefit tiny object detection. This study shows that YOLOv5 recognizes tiny SSD PCB components with an mAP@0.5 of 99.0%, compared with 87.9% for YOLOX [34], an advanced and popular computer vision model competitive with YOLOv5. This study presents the framework of the YOLOv5 model and the experimental results on an SSD PCB dataset contributed largely by Ziyu Fang, our second author. The remainder of this paper is organized as follows.

Section II presents related work. Section III details the YOLOv5 model and the recognition of SSD PCB components. Section IV describes the experimental results and analysis. Finally, Section V presents the conclusions, discussion, and future scope.

II. RELATED WORKS

Studies on PCBs, which similarly carry tiny and unevenly distributed components, are limited, and studies on SSD PCB component detection are lacking. Thus, herein, prior work on recognizing PCB components was reviewed to identify a model that can ease the difficulties of, and improve performance on, SSD PCB component detection. Several commonly encountered challenges were learned from related studies on PCB component detection. Reza et al. [18] highlighted that the tiny and densely cluttered nature of electronic components is a significant challenge; therefore, this study focuses on tiny object detection. Kang and Li [19] found that the sparse pixel and texture information of tiny objects makes them hard to detect and used a unit statistical curvature feature algorithm to recognize them; however, it is inefficient owing to its complex computation. Wang et al. [20] focused on IoU-based metrics and proposed the normalized Wasserstein distance to compare the similarity between ground-truth and predicted bounding boxes. In [21], a multiple-center-point-based learning network was proposed to improve the localization performance of tiny object detection. Nevertheless, accurately locating the densely distributed components of PCBs remains a challenge. YOLOv5 adopts the GIoU loss [22], which allows the optimization of nonoverlapping bounding boxes. In addition, CNNs are effective for tiny object detection. To avoid excessive computation, [23] decreased the depth of the convolutional layers in YOLOv3, thereby increasing its speed. Furthermore, [24] showed that a feature pyramid network (FPN) [25] backbone performs well on tiny PCB component detection, although [26] showed that the FPN technique alone is insufficient for excellent tiny object detection and leaves room for improvement.
YOLOv5 adopts a path aggregation network (PAN) [27] + FPN and spatial pyramid pooling (SPP) [28] framework to preserve the pixel completeness of the feature maps. Shi et al. [29] proposed a single-shot detector for tiny object detection, and Li et al. [30] used a single-shot multibox detector to recognize PCB defects, obtaining a best mAP of 0.951. Cheong et al. [31] applied CNN-based transfer learning to PCB defect and component recognition and achieved an mAP of 0.9654 across 25 different PCB components.

In addition to their small size, the imbalanced distribution of components on PCBs is another challenge for PCB component detection. Using dataset statistics, Mahalingam et al. [32] reported that the components on a single board are imbalanced, extremely small, and densely distributed; they applied machine learning algorithms to PCB component recognition and obtained a best mAP of 0.833. Reza et al. [18] explained why the uneven distribution of different components in a single microelectronic image hinders their detection. They proposed a solution called loss boosting (LB), which balances the loss between hard samples (components too tiny and dense to be detected easily) and easy samples through automatic loss-weight adjustment, and obtained the highest accuracy of 0.9231 for PCB IC detection. LB raises the loss of hard samples, which detectors would otherwise tend to ignore. YOLOv5 embeds the focal loss [33], which is functionally similar to LB: it down-weights easy examples and focuses training on hard samples. The mosaic data augmentation adopted by YOLOv5 also contributes to tiny object detection. The experimental results in later sections prove its effectiveness.
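The down-weighting behavior of the focal loss can be illustrated with a minimal, framework-free sketch (a simplified binary form; YOLOv5's actual implementation operates on batched logits in PyTorch):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.
    p: predicted probability of the positive class; y: label (1 or 0).
    The modulating factor (1 - p_t)**gamma shrinks the loss of easy,
    well-classified examples so training concentrates on hard ones."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)   # confident correct positive: tiny loss
hard = focal_loss(0.1, 1)   # badly missed positive: dominates training
```

With gamma = 2, the confidently correct prediction contributes orders of magnitude less loss than the hard one, which is the same effect LB achieves by explicit loss-weight adjustment.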

Similar to ordinary PCBs, SSD PCBs carry numerous tiny, densely packed components, so the problems and solutions discussed above also apply to SSD PCB detection. To date, no existing method has been applied to SSD PCB component detection; ours is the first. To assess the proposed method, it was compared with YOLOX, another advanced model neck-and-neck with YOLOv5, and the experimental results show that our method performs better. Thus, this study demonstrates that the YOLOv5 model suits SSD PCB component detection.

III. SSD COMPONENTS AND THEIR RECOGNITION

A. SSD Components

Experiments were conducted on six primary components of the SSD PCB: capacitors, resistors, ICs, crystals, inductors, and transistors. As shown in Fig. 1, capacitors and resistors are far more numerous than ICs, crystals, inductors, and transistors. Each board carries at most one crystal, and some boards carry none. By contrast, many capacitors and resistors with similar shapes coexist on a board, which often causes incorrect detection.

Figure 1. Image of an SSD PCB with capacitors, resistors, ICs, crystals, inductors, and transistors.

B. Structure of YOLOv5 Model

The structure of the YOLOv5 model, comprising four parts, is shown in Fig. 2. Although no official paper on YOLOv5 has been published, its structure can be learned from the code [17].

Figure 2. Structure of YOLOv5 model with four parts: input, backbone, neck, and head (prediction).

First, the input part uses mosaic data augmentation to mix four images into one, enriching the pixel information seen per training sample and reducing the required mini-batch size. Fig. 3 shows a mosaic sample image.

Figure 3. SSD PCB training sample after Mosaic data augmentation.
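A minimal sketch of the mosaic idea follows. It is illustrative only: the quadrant split is fixed at the centre here, whereas YOLOv5 jitters the centre randomly and also remaps each source image's bounding-box labels onto the combined canvas.

```python
import numpy as np

def mosaic(images, out_size=640):
    """Paste four images into the four quadrants of one canvas."""
    half = out_size // 2
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)  # grey fill
    offsets = [(0, 0), (0, half), (half, 0), (half, half)]
    for img, (y, x) in zip(images, offsets):
        canvas[y:y + half, x:x + half] = img[:half, :half]  # crop to quadrant size
    return canvas

four = [np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8) for _ in range(4)]
combined = mosaic(four)   # one 640x640 training sample built from four images
```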

Furthermore, auto-learned bounding-box anchors and letterbox adaptive image scaling are embedded in the YOLOv5 model. They improve the speed and accuracy of the model using k-means clustering and a genetic algorithm.
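The anchor-learning step can be sketched as a toy k-means over box widths and heights. This is a deliberately simplified version: YOLOv5's autoanchor uses a fitness metric and refines the result with a genetic algorithm, whereas this sketch uses plain Euclidean distance and deterministic initialization.

```python
def kmeans_anchors(boxes, k=3, iters=20):
    """Cluster (width, height) pairs into k anchor shapes."""
    # deterministic, spread-out initialization for the sketch
    centers = [boxes[i * len(boxes) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:  # assign each box to its nearest center
            nearest = min(range(k),
                          key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[nearest].append((w, h))
        # recompute each center as its cluster mean
        centers = [(sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
                   if cl else centers[i] for i, cl in enumerate(clusters)]
    return sorted(centers)

# three obvious size groups yield three anchors near (10,10), (50,50), (100,100)
boxes = [(9, 9), (10, 10), (11, 11), (49, 49), (50, 50), (51, 51),
         (99, 99), (100, 100), (101, 101)]
anchors = kmeans_anchors(boxes)
```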

Second, in the backbone, YOLOv5 uses the Focus module to preserve pixel information to the greatest extent possible. It operates through slicing and concat operations, as shown in Fig. 4. Taking the default input as an example, a 640×640×3 input image is sliced into four 320×320×3 feature maps by extracting every other pixel and is subsequently concatenated into a 320×320×12 feature map. Through a convolution-batch normalization-LeakyReLU (CBL) operation, this is transformed into a 320×320×32 feature map without losing pixel information.

Figure 4. Focus: slicing and concat.
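The slicing step in Fig. 4 can be reproduced with plain array indexing (a NumPy sketch of the Focus rearrangement only; the actual module then applies the CBL convolution):

```python
import numpy as np

def focus_slice(x):
    """Focus rearrangement: gather every other pixel in a 2x2 grid and
    stack the four sub-images on the channel axis. Spatial size halves,
    channels quadruple, and no pixel information is lost."""
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]], axis=2)

img = np.arange(640 * 640 * 3).reshape(640, 640, 3)  # stand-in for an input image
out = focus_slice(img)   # shape (320, 320, 12): same pixels, rearranged
```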

Subsequently, the feature map is passed through the cross-stage partial network (CSPNet) [35], as shown in Fig. 5. The CSP layer first copies the feature map, feeding one copy into a CBL-ResNet-1×1 convolution structure and the other into a 1×1 convolution layer, and finally concatenates the two before a further convolutional layer. This decreases redundant feature information while supplementing what would otherwise be omitted, thereby simplifying the network.

Figure 5. CSPNet (Cross stage partial network) operation.

Next, in the neck, the YOLOv5 model includes PAN + FPN and SPP. This part collects feature maps from different stages to extract complete feature information. Feature maps of arbitrary size are passed through the SPP layer to obtain size-unified maps, which are then conveyed to the following FPN + PAN layers. The SPP framework is illustrated in Fig. 6.

Figure 6. Spatial pyramid pooling (SPP) structure.
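The SPP layer can be sketched as stride-1 max pooling at several window sizes followed by channel concatenation. This is a single-channel NumPy illustration of the idea; YOLOv5 applies 5×5, 9×9, and 13×13 kernels to multi-channel maps.

```python
import numpy as np

def maxpool_same(x, k):
    """Stride-1 max pooling with 'same' padding on a (H, W) map."""
    pad = k // 2
    xp = np.pad(x, pad, mode="constant", constant_values=-np.inf)
    h, w = x.shape
    return np.array([[xp[i:i + k, j:j + k].max() for j in range(w)] for i in range(h)])

def spp(x, kernels=(5, 9, 13)):
    """Concatenate the input with pooled copies; spatial size is unchanged."""
    return np.stack([x] + [maxpool_same(x, k) for k in kernels], axis=0)

fmap = np.random.rand(20, 20)
out = spp(fmap)   # 4 channels (identity + three pooling scales), same 20x20 grid
```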

Fig. 7, reproduced from the original paper [27], illustrates the workflow of PAN + FPN. The down-sampling of tiny components by FPN alone tends to lose some marginal pixel information, whereas the down-sampling path (red dotted line) and the bottom-up side-complement structure (green dotted line) of FPN + PAN supplement the lost feature information, thereby enhancing feature extraction. Down-sampling yields a smaller-scaled feature map and the bottom-up path a larger-scaled one; the two are then combined through the side complements to complete the feature information.

Figure 7. Path aggregation network (PAN) structure [27]. (a) Feature pyramid network (FPN) backbone. (b) Bottom-up path augmentation.

Finally, the feature maps are passed to the detection head, which predicts the bounding boxes, refined through the GIoU loss [22], and the component categories. The GIoU loss makes the optimization of nonoverlapping bounding boxes feasible.
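The GIoU score can be written directly from its definition: IoU minus the fraction of the smallest enclosing box not covered by the union. A minimal sketch for axis-aligned (x1, y1, x2, y2) boxes:

```python
def giou(a, b):
    """Generalized IoU for two axis-aligned boxes (x1, y1, x2, y2).
    Unlike plain IoU, GIoU stays informative (negative but graded) for
    non-overlapping boxes, so 1 - GIoU still provides a useful loss."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # smallest box enclosing both a and b
    c = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (c - union) / c
```

For disjoint boxes the IoU term is zero, but the enclosing-box term still grows with separation, so a nearer box scores higher than a farther one and gradients remain meaningful.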

C. Recognition of SSD PCB Components

1) SSD PCB Dataset

Our dataset was self-made by photographing SSD PCBs and labeling them with the LabelImg program. Approximately 78% of the dataset was contributed by our second author, Ziyu Fang. A total of 232 images were compiled from almost 20 brands, including Samsung, Intel, SanDisk, and ADATA. The training, validation, and test sets contained 158, 39, and 35 images, respectively. For training and validation, the numbers of capacitors, resistors, integrated circuits (ICs), inductors, crystals, and transistors were 9679, 5588, 952, 256, 72, and 282, respectively; for testing, they were 1089, 620, 117, 14, 7, and 25. The proportion of each component is listed in Table 1.

Table 1. Quantity and proportion of components of each class in the dataset

Class        | Train set            | Test set
             | Quantity  Proportion | Quantity  Proportion
Capacitor    | 9679      57.51%     | 1089      58.17%
Resistor     | 5588      33.20%     | 620       33.12%
IC           | 952       5.66%      | 117       6.25%
Inductor     | 256       1.52%      | 14        0.75%
Crystal      | 72        0.43%      | 7         0.37%
Transistor   | 282       1.68%      | 25        1.34%
Total        | 16829     100.00%    | 1872      100.00%
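The proportions in Table 1 follow directly from the raw class counts; a quick check:

```python
# class counts from the dataset (train + validation, and test)
train = {"Capacitor": 9679, "Resistor": 5588, "IC": 952,
         "Inductor": 256, "Crystal": 72, "Transistor": 282}
test = {"Capacitor": 1089, "Resistor": 620, "IC": 117,
        "Inductor": 14, "Crystal": 7, "Transistor": 25}

def shares(counts):
    """Percentage share of each class, rounded to two decimals."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 2) for k, v in counts.items()}

train_pct = shares(train)   # capacitors make up 57.51% of the train set
test_pct = shares(test)
```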


2) Experimental Environment

The experiments were conducted on Ubuntu 20.04 LTS with PyTorch 1.7; the CPU was an Intel(R) Core(TM) i7-10700 and the GPU an NVIDIA GeForce RTX 2060 SUPER. The dataset was trained on the YOLOv5 and YOLOX models. For each model, 1024×1024 images were input, the batch size was set to eight, and training ran for 1500 steps on a single GPU. Parameters such as the initial learning rate, momentum, weight decay, mosaic, and mixup degree were left at their defaults. YOLOv5 achieved an extraordinary result, better than that of YOLOX, and good results were also obtained on the test set.
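For reference, a training run of this kind would be launched roughly as follows with the YOLOv5 repository [17]. This is only an illustration: the dataset YAML name and the pretrained weight file are placeholders, not the authors' actual configuration.

```shell
# clone the repository and install its dependencies
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

# train on 1024x1024 inputs with batch size 8 on a single GPU;
# ssd_pcb.yaml is a hypothetical dataset description file
python train.py --img 1024 --batch-size 8 --data ssd_pcb.yaml \
                --weights yolov5s.pt --device 0
```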

IV. EXPERIMENTAL RESULT AND ANALYSIS

A. Training and Validation Results

Herein, an mAP@0.5 of 0.990 was achieved, whereas YOLOX achieved an mAP@0.5 of 0.879. The per-component mAP@0.5 scores were also prominent: 0.992 for capacitors, 0.975 for resistors, 0.989 for ICs, 0.995 for inductors, 0.995 for crystals, and 0.995 for transistors. The precision and recall (PR) curves are shown in Fig. 8.

Figure 8. Precision and recall (PR) curve of SSD PCB component detection.

As shown in Fig. 9, YOLOv5 exhibited better generalization than YOLOX and remained stable after convergence. YOLOv5, embedded with the Focus and FPN + PAN + SPP structures, performed better on tiny object detection than YOLOX, which does not use Focus, adopts FPN + SPP, and has a different backbone. Both theory and experiment thus show that YOLOv5 performs well in tiny object detection.

Figure 9. Comparison of mAP@0.5 curves between YOLOv5 and YOLOX.

Additionally, an analysis was conducted using a confusion matrix. As Fig. 10 shows, no capacitors or transistors were detected incorrectly. Only 2% of resistors were incorrectly detected as background, 2% of ICs were wrongly detected as transistors, and 15% of inductors were incorrectly detected as ICs; for crystals, 10% were confused with inductors and 20% with ICs. The background-class results reveal that smaller components, such as capacitors and resistors, are more easily missed or recognized as background, whereas larger ones tend to be confused with one another. Increasing the number of such samples can improve performance.

Figure 10. Confusion matrix of YOLOv5 on SSD PCB.

B. Test Results and Analysis

Herein, 35 images were tested and the results were analyzed in detail. As shown in Fig. 11, under the same test conditions, the accuracies of YOLOv5 on capacitors were all higher than 0.80, whereas those of YOLOX were only approximately 0.50. The number beside each class label indicates the accuracy.

Figure 11. Capacitor accuracy comparison between YOLOv5 and YOLOX.

In Fig. 12, under the same test conditions, the accuracies of YOLOv5 on the resistors and other components were higher than 0.85, whereas the accuracies of YOLOX were only around 0.40. Moreover, one resistor was missed by YOLOX but recognized by YOLOv5.

Figure 12. Resistor accuracy comparison between YOLOv5 and YOLOX.

In Fig. 13, we crop some ICs and combine them into one image for easy reference. YOLOv5 outperformed YOLOX in IC detection.

Figure 13. IC and transistor accuracy comparison between YOLOv5 and YOLOX.

As shown in Fig. 14, YOLOv5 again performed better than YOLOX in detecting inductors and other components. The accuracies of YOLOv5 were all approximately 0.90, whereas those of YOLOX were below 0.50. A resistor and a transistor were missed by YOLOX but recognized by YOLOv5.

Figure 14. Inductor accuracy comparison between YOLOv5 and YOLOX.

These test samples show that YOLOv5 has a high rate of correct recognition regardless of the dense distribution and tiny shape of the samples. These experimental results demonstrate that YOLOv5 is sufficient for SSD PCB component detection and tiny object detection.

V. DISCUSSION AND CONCLUSION

Research on SSD PCB component detection has been lacking, and our work shows that the YOLOv5 model is sufficiently good for this domain in terms of mAP@0.5. Our test results, particularly the detection of capacitors and resistors, demonstrate the high precision of tiny object recognition, proving that the YOLOv5 model works well for tiny object detection and SSD PCB component detection. Through a contrast test with YOLOX, an advanced model neck-and-neck with YOLOv5, our work demonstrates that YOLOv5 performs better on tiny object recognition. The next step is to apply our trained model to the second step: defect detection on SSD PCBs.

This study found that the larger samples could be confused with each other. This can be improved by increasing the number of such samples in the training set. In the future, we will apply the proposed model to an assembly line, thereby saving human costs and increasing efficiency.



References

1. V. Kasavajhala, "Solid state drive vs. hard disk drive price and performance study," Dell Technical White Paper, pp. 8-9, May 2011. [Online]. Available: https://www.profesorweb.es/wp-content/uploads/2012/11/ssd_vs_hdd_price_and_performance_study.pdf.
2. K. Vättö, "The truth about SSD data retention." [Online]. Available: https://www.anandtech.com/show/9248/the-truth-about-ssd-dataretention.
3. P. Hernandez, "SSDs sales rise, prices drop below $1 per GB in 2012," Jan. 2012. [Online]. Available: https://www.ecoinsite.com/2012/01/ssd-salesprice-1-dollar-per-gb-2012.html.
4. S. Downing, "Best SSDs 2022: From budget SATA to blazing-fast NVMe." [Online]. Available: https://www.tomshardware.com/reviews/best-ssds,3891.html.
5. B. Schroeder, R. Lagisetty, and A. Merchant, "Flash reliability in production: The expected and the unexpected," in 14th USENIX Conference on File and Storage Technologies (FAST 16), Santa Clara, USA, pp. 67-80, 2016. [Online]. Available: https://www.usenix.org/conference/fast16/technical-sessions/presentation/schroeder.
6. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004. DOI: 10.1023/B:VISI.0000029664.99615.94.
7. H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in European Conference on Computer Vision, vol. 3951, Springer, Berlin, Heidelberg, pp. 404-417, 2006. DOI: 10.1007/11744023_32.
8. A. I. M. Hassanin, F. E. Abd El-Samie, and E. B. Ghada, "A real-time approach for automatic defect detection from PCBs based on SURF features and morphological operations," Multimedia Tools and Applications, vol. 78, no. 24, pp. 34437-34457, Oct. 2019. DOI: 10.1007/s11042-019-08097-9.
9. S. U. Rehman, K. F. Thang, and N. S. Lai, "Automated PCB identification and defect-detection system (APIDS)," International Journal of Electrical and Computer Engineering, vol. 9, no. 1, pp. 297-306, Feb. 2019. DOI: 10.47760/ijcsmc.2021.v10i02.008.
10. A. Salunke Purva, N. Sherkar Shubhangi, and C. S. Arya, "PCB (printed circuit board) fault detection using machine learning," International Journal of Computer Science and Mobile Computing, vol. 10, no. 2, pp. 54-56, Feb. 2021. DOI: 10.47760/ijcsmc.2021.
11. J. Fang, L. Shang, G. Gao, K. Xiong, and C. Zhang, "Capacitor detection on PCB using AdaBoost classifier," Journal of Physics: Conference Series, vol. 1631, no. 1, 012185, Jul. 2020. DOI: 10.1088/1742-6596/1631/1/012185.
12. C. W. Kuo, J. D. Ashmore, D. Huggins, and Z. Kira, "Data-efficient graph embedding learning for PCB component detection," in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, USA, pp. 551-560, 2019. DOI: 10.1109/WACV.2019.00064.
13. J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, Apr. 2018.
14. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, pp. 779-788, 2016. DOI: 10.1109/CVPR.2016.91.
15. J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 6517-6525, 2017. DOI: 10.1109/CVPR.2017.690.
16. A. Bochkovskiy, C. Y. Wang, and H. Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, Apr. 2020.
17. G. Jocher, YOLOv5 code. [Online]. Available: https://github.com/ultralytics/yolov5.
18. M. A. Reza, Z. Chen, and D. J. Crandall, "Deep neural network-based detection and verification of microelectronic images," Journal of Hardware and Systems Security, vol. 4, no. 1, pp. 44-54, Jan. 2020. DOI: 10.1007/s41635-019-00088-4.
19. Y. Kang and X. Li, "A novel tiny object recognition algorithm based on unit statistical curvature feature," in European Conference on Computer Vision, vol. 9909, Amsterdam, The Netherlands, pp. 762-777, 2016. DOI: 10.1007/978-3-319-46454-1_46.
20. J. Wang and C. Xu, "A normalized Gaussian Wasserstein distance for tiny object detection," arXiv preprint arXiv:2110.13389, Oct. 2021.
21. J. Wang, W. Yang, H. Guo, R. Zhang, and G. S. Xia, "Tiny object detection in aerial images," in 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, pp. 3791-3798, 2021. DOI: 10.1109/ICPR48806.2021.9413340.
22. H. Rezatofighi, N. Tsoi, J. Y. Gwak, A. Sadeghian, I. Reid, and S. Savarese, "Generalized intersection over union: A metric and a loss for bounding box regression," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, pp. 658-666, 2019. DOI: 10.1109/CVPR.2019.00075.
23. P. Adarsh, P. Rathi, and M. Kumar, "YOLO v3-Tiny: Object detection and recognition using one stage improved model," in 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, pp. 687-694, 2020. DOI: 10.1109/ICACCS48705.2020.9074315.
24. B. Hu and J. Wang, "Detection of PCB surface defects with improved faster-RCNN and feature pyramid network," IEEE Access, vol. 8, pp. 108335-108345, Jun. 2020. DOI: 10.1109/ACCESS.2020.3001349.
25. T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, pp. 936-944, 2017. DOI: 10.1109/CVPR.2017.106.
26. Y. Gong, X. Yu, Y. Ding, X. Peng, J. Zhao, and Z. Han, "Effective fusion factor in FPN for tiny object detection," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, USA, pp. 1160-1168, 2021. DOI: 10.1109/WACV48630.2021.00120.
27. S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, "Path aggregation network for instance segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, pp. 8759-8768, 2018. DOI: 10.1109/CVPR.2018.00913.
28. K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904-1916, 2015. DOI: 10.1109/TPAMI.2015.2389824.
29. W. Shi, Z. Lu, W. Wu, and H. Liu, "Single-shot detector with enriched semantics for PCB tiny defect detection," The Journal of Engineering, vol. 2020, no. 13, pp. 366-372, 2020. DOI: 10.1049/joe.2019.1180.
30. D. Li, L. Xu, G. Ran, and Z. Guo, "Computer vision based research on PCB recognition using SSD neural network," Journal of Physics: Conference Series, vol. 1815, no. 1, 012005, 2021. DOI: 10.1088/1742-6596/1815/1/012005.
31. L. K. Cheong, S. A. Suandi, and S. Rahman, "Defects and components recognition in printed circuit boards using convolutional neural network," in 10th International Conference on Robotics, Vision, Signal Processing and Power Applications, Springer, Singapore, pp. 75-81, 2019. DOI: 10.1007/978-981-13-6447-1_10.
32. G. Mahalingam, K. M. Gay, and K. Ricanek, "PCB-METAL: A PCB image dataset for advanced computer vision machine learning component analysis," in 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, pp. 1-5, 2019. DOI: 10.23919/MVA.2019.8757928.
33. T. Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988, 2017. DOI: 10.1109/TPAMI.2018.2858826.
34. Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: Exceeding YOLO series in 2021," arXiv preprint arXiv:2107.08430, Jul. 2021.
35. C. Y. Wang, H. Y. M. Liao, Y. H. Wu, P. Y. Chen, J. W. Hsieh, and I. H. Yeh, "CSPNet: A new backbone that can enhance learning capability of CNN," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 390-391, 2020. DOI: 10.1109/CVPRW50498.2020.00203.