Improving Vehicle Identification Number Detection Accuracy with YOLOv5 and Histogram Equalization

Hasan Alkhadafe
Zahiyah Khalleefah
Ibrahim Nasir

Abstract

This study examines the effectiveness of different image preprocessing techniques for object detection models, using a dataset of vehicle identification number (VIN) images from Roboflow. The dataset was split into training, validation, and testing subsets and covers a range of conditions, including noise, rain, varying lighting, and reflections. Model performance was evaluated with precision, recall, average precision (AP), mean average precision (mAP), error rate reduction, and frames per second (FPS). The baseline model, trained on the original dataset, achieved a precision of 97.9% and a recall of 95.7%, with an mAP@0.5 of 99.1% but a lower mAP@0.5:0.95 of 62.3%. Applying Histogram Equalization (HE) improved recall but reduced precision, with mAP@0.5:0.95 values remaining comparable to those of the original dataset. The HE+RGB preprocessing produced only minor, inconsistent changes in recall and precision. Adaptive Histogram Equalization (AHE) notably improved model performance, reaching a precision of 98.8% and a recall of 99.6%, with mAP@0.5 and mAP@0.5:0.95 values of 74.3% and 77.0%, respectively. Contrast Limited Adaptive Histogram Equalization (CLAHE) outperformed all other techniques, achieving the highest precision (99.4%), recall (98.6%), and mAP@0.5:0.95 (75.2% in training, 77.9% in validation, and 75.2% in testing), demonstrating the best balance of accuracy and generalization with minimal misclassifications. Overall, CLAHE emerged as the most effective preprocessing method, offering superior performance across all evaluation metrics.
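To make the comparison concrete, the minimal sketch below illustrates how the evaluated preprocessing variants (HE, HE applied per RGB channel, AHE, and CLAHE) could be applied to a VIN image before YOLOv5 training. The choice of libraries (OpenCV and scikit-image), the file names, and the CLAHE parameters are illustrative assumptions and are not taken from the paper.

import cv2
from skimage import exposure

# Hypothetical input image; the study's images come from the Roboflow VIN dataset.
img = cv2.imread("vin_sample.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Global Histogram Equalization (HE) on the grayscale image.
he = cv2.equalizeHist(gray)

# HE+RGB variant: equalize each colour channel independently (assumed interpretation).
he_rgb = cv2.merge([cv2.equalizeHist(c) for c in cv2.split(img)])

# Adaptive Histogram Equalization (AHE): clip_limit=1.0 disables contrast clipping.
ahe = exposure.equalize_adapthist(gray, clip_limit=1.0)  # float image in [0, 1]

# Contrast Limited AHE (CLAHE), the best-performing method in the study;
# clipLimit and tileGridSize are assumed values, not the paper's settings.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_img = clahe.apply(gray)

# The preprocessed images would then replace the originals in the YOLOv5 training set.
cv2.imwrite("vin_clahe.jpg", clahe_img)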

Article Details

How to Cite
Alkhadafe H., Khalleefah Z., & Nasir I. (2024). Improving Vehicle Identification Number Detection Accuracy with YOLOv5 and Histogram Equalization. Sebha University Conference Proceedings, 3(2), 423–428. https://doi.org/10.51984/sucp.v3i2.3409
Section
Conference Proceedings
