Publication:
Visual object detection for autonomous transport vehicles in smart factories

dc.contributor.author: GENGEÇ, Nazlıcan
dc.contributor.author: EKER, Onur
dc.contributor.author: ÇEVİKALP, Hakan
dc.contributor.author: YAZICI, Ahmet
dc.contributor.author: YAVUZ, Hasan Serhan
dc.date.accessioned: 2026-01-04T15:35:48Z
dc.date.issued: 2021-07-26
dc.description.abstract: Autonomous transport vehicles (ATVs) are among the most important components of Industry 4.0 smart factories. They are primarily used to transfer goods or to perform certain navigation tasks in the factory autonomously. Recent developments in computer vision allow these vehicles to visually perceive the environment and the objects in it. Numerous applications exist, especially for smart traffic networks in outdoor environments, but applications and datasets for autonomous transport vehicles in indoor industrial environments are lacking. Smart factories contain essential safety and direction signs, and these signs play an important role in safety; therefore, their detection by ATVs is crucial. In this study, a visual dataset of important indoor safety signs is created to simulate a factory environment. The dataset is used to train several fast, popular deep learning object detection methods: Faster R-CNN, YOLOv3, YOLOv4, SSD, and RetinaNet. These methods can be executed in real time to enhance the visual understanding of the ATV, which in turn helps the agent navigate safely and reliably in smart factories. The trained network models were compared in terms of accuracy on the created dataset, and YOLOv4 achieved the best performance among all tested methods.
dc.description.uri: https://doi.org/10.3906/elk-2008-62
dc.description.uri: https://dx.doi.org/10.3906/elk-2008-62
dc.description.uri: https://aperta.ulakbim.gov.tr/record/230902
dc.identifier.doi: 10.3906/elk-2008-62
dc.identifier.eissn: 1303-6203
dc.identifier.endpage: 2115
dc.identifier.openaire: doi_dedup___::7ae4e1bf55a5a94bf5fa16f0c3658ffb
dc.identifier.orcid: 0000-0002-0250-1082
dc.identifier.orcid: 0000-0003-4040-6438
dc.identifier.orcid: 0000-0002-1708-8817
dc.identifier.orcid: 0000-0001-5589-2032
dc.identifier.orcid: 0000-0002-4944-1013
dc.identifier.scopus: 2-s2.0-85112720101
dc.identifier.startpage: 2101
dc.identifier.uri: https://hdl.handle.net/20.500.12597/38933
dc.identifier.volume: 29
dc.identifier.wos: 000679322900004
dc.publisher: The Scientific and Technological Research Council of Turkey (TUBITAK-ULAKBIM) - DIGITAL COMMONS JOURNALS
dc.relation.ispartof: TURKISH JOURNAL OF ELECTRICAL ENGINEERING & COMPUTER SCIENCES
dc.rights: OPEN
dc.title: Visual object detection for autonomous transport vehicles in smart factories
dc.type: Article
dspace.entity.type: Publication
local.import.source: OpenAire
local.indexed.at: WOS
local.indexed.at: Scopus

Files

Collections