Real-time object detection for autonomous vehicles using deep learning
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology.
2019 (English). Independent thesis, advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Self-driving systems are commonly categorized into three subsystems: perception, planning, and control. In this thesis, the perception problem is studied in the context of real-time object detection for autonomous vehicles. The problem is studied by implementing a cutting-edge real-time object detection deep neural network, the Single Shot MultiBox Detector, which is trained and evaluated on both real and virtual driving-scene data. The results show that modern real-time-capable object detection networks achieve their fast performance at the expense of detection rate and accuracy. The Single Shot MultiBox Detector network is capable of processing images at over fifty frames per second, but achieved a relatively low mean average precision on a diverse driving-scene dataset provided by the University of California, Berkeley. Further development in both hardware and software technologies will presumably result in a better trade-off between run-time and detection rate. However, as the technologies stand today, general real-time object detection networks do not seem suitable for high-precision tasks such as visual perception for autonomous vehicles. Additionally, a comparison is made between two versions of the Single Shot MultiBox Detector network: one trained on a virtual driving-scene dataset from the Ford Center for Autonomous Vehicles, and one trained on a subset of the previously used Berkeley dataset. These results show that synthetic driving-scene data could possibly be an alternative to real-life data when training object detection networks.
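The mean average precision (mAP) metric discussed in the abstract rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As a minimal sketch of how such an evaluation works (not the thesis's own evaluation code; all function and variable names here are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def average_precision(detections, ground_truth, iou_threshold=0.5):
    """Average precision for one object class (hypothetical helper).

    detections: list of (image_id, score, box), ground_truth: dict mapping
    image_id -> list of boxes. A detection counts as a true positive if it
    overlaps an as-yet-unmatched ground-truth box with IoU >= threshold.
    """
    # Process detections in order of decreasing confidence.
    detections = sorted(detections, key=lambda d: -d[1])
    matched = {img: [False] * len(b) for img, b in ground_truth.items()}
    n_gt = sum(len(b) for b in ground_truth.values())
    tp = fp = 0
    precisions, recalls = [], []
    for img, _, box in detections:
        best, best_j = 0.0, -1
        for j, gt_box in enumerate(ground_truth.get(img, [])):
            overlap = iou(box, gt_box)
            if overlap > best and not matched[img][j]:
                best, best_j = overlap, j
        if best >= iou_threshold and best_j >= 0:
            matched[img][best_j] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / n_gt)
    # Area under the precision-recall curve (rectangular approximation).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

mAP is then the mean of this per-class AP over all object classes; benchmark suites such as Pascal VOC and COCO differ mainly in the IoU thresholds and interpolation scheme they apply on top of this basic matching procedure.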

Place, publisher, year, edition, pages
2019, p. 111
Series
IT ; 19007
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:uu:diva-393999
OAI: oai:DiVA.org:uu-393999
DiVA id: diva2:1356309
Educational program
Master Programme in Computer Science
Available from: 2019-10-01. Created: 2019-10-01. Last updated: 2019-10-01. Bibliographically approved.

Open Access in DiVA

File information
File name: FULLTEXT01.pdf
File size: 8533 kB
Checksum (SHA-512): f96dba1e62e5ed70fcc5cc80b2ef81accc1173725d7032741f65da705fd0374a75a67e96fa7af5a16df4975785b1bf16d633bf46d63398ed34f7ddf08c527de9
Type: fulltext
Mimetype: application/pdf

