Publication: Visual object detection for autonomous transport vehicles in smart factories
Abstract
Autonomous transport vehicles (ATVs) are among the most essential components of the smart factories of Industry 4.0. They are primarily intended to transfer goods or perform certain navigation tasks in the factory autonomously. Recent developments in computer vision allow such vehicles to visually perceive the environment and the objects in it. There are numerous applications, especially for smart traffic networks in outdoor environments, but there is a lack of applications and datasets for autonomous transport vehicles in indoor industrial environments. Smart factories contain essential safety and direction signs, and these signs play an important role in safety; the detection of these signs by ATVs is therefore crucial. In this study, a visual dataset containing important indoor safety signs is created to simulate a factory environment. The dataset has been used to train several fast-responding, popular deep learning object detection methods: Faster R-CNN, YOLOv3, YOLOv4, SSD, and RetinaNet. These methods can be executed in real time to enhance the visual understanding of the ATV, which in turn helps the agent navigate safely and reliably in smart factories. The trained network models were compared in terms of accuracy on our created dataset, and YOLOv4 achieved the best performance among all the tested methods.
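Comparing trained detectors "in terms of accuracy", as the abstract describes, conventionally relies on matching predicted boxes to ground-truth boxes by intersection-over-union (IoU), with a prediction counted correct when IoU meets a threshold such as 0.5. A minimal illustrative sketch of that matching criterion (not code from the paper; the box format and threshold are assumptions):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes in [x1, y1, x2, y2] pixel format."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical detection of a safety sign: a prediction is typically
# counted as a true positive when IoU with the ground truth >= 0.5.
pred = [10, 10, 50, 50]   # predicted sign box (assumed values)
gt = [20, 20, 60, 60]     # annotated ground-truth box (assumed values)
print(iou(pred, gt) >= 0.5)  # False: boxes overlap too little
```

Per-class precision and recall computed from such matches are what aggregate into the mAP figures detection papers usually report.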
Citation
Yazici, A., Çevikal, H., Eker, O., Yavuz, H., Gengeç, N. (2021). Visual object detection for autonomous transport vehicles in smart factories. Turkish Journal of Electrical Engineering and Computer Sciences, 29(4), 2101-2115.
