Efficient Object Detection in Autonomous Driving Systems Using YOLOv5 and CARLA Simulator
Abstract
One of the primary challenges in autonomous driving is the high cost of electronic components, which can hinder the widespread adoption and experimentation necessary for advances in this field. The open-source CARLA simulator provides a cost-effective and realistic environment for conducting autonomous driving experiments, allowing precise and efficient testing without the need for expensive hardware. In this study, we focus on object detection within autonomous driving systems using the CARLA simulator. The deep learning model YOLOv5 was employed to detect ten different object classes: bike, motorcycle, person, traffic light green, traffic light orange, traffic light red, traffic sign 30, traffic sign 60, traffic sign 90, and vehicle. The model was trained for 150 epochs on a dataset of 1864 images, divided into 1600 images for training and 264 images for testing. Training results across all classes were a precision (P) of 0.934, recall (R) of 0.908, mAP@50 of 0.935, and mAP@50-95 of 0.689. Test results across all classes were a precision (P) of 0.93, recall (R) of 0.892, mAP@50 of 0.93, and mAP@50-95 of 0.675. These results demonstrate the model's ability to detect objects accurately. In addition, external testing of the model on new images showed good performance, with objects recognized successfully in a variety of scenarios. This research highlights the potential of combining the CARLA simulator with the YOLOv5 model for efficient and effective object detection in autonomous driving systems, paving the way for further advances in this critical field.
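For readers who want to reproduce a comparable setup, the sketch below illustrates how such an experiment is typically wired together with the official ultralytics/yolov5 tooling. Only the ten class names, the 1600/264 image split, and the 150 training epochs come from the study; the dataset layout, image size, batch size, model variant (yolov5s), and all file names are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a YOLOv5 training and inference workflow of the kind described above.
# Dataset layout, hyperparameters other than epochs, and file names are assumed for illustration.

# carla.yaml -- dataset definition with the ten classes used in the study:
#   train: datasets/carla/images/train   # 1600 images
#   val:   datasets/carla/images/test    # 264 images
#   nc: 10
#   names: [bike, motorcycle, person, traffic light green, traffic light orange,
#           traffic light red, traffic sign 30, traffic sign 60, traffic sign 90, vehicle]

# Training with the ultralytics/yolov5 repository (150 epochs as in the paper; the image
# size, batch size, and yolov5s variant are assumptions):
#   python train.py --data carla.yaml --weights yolov5s.pt --epochs 150 --img 640 --batch 16

import torch

# Load the resulting custom weights for the kind of external test on new images mentioned
# in the abstract (the weights path and sample image name are placeholders).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/train/exp/weights/best.pt')
results = model('carla_street_scene.jpg')  # run inference on a new image
results.print()                            # print per-class detections and confidences
results.save()                             # save the annotated image to runs/detect/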
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.