Navigability Assessment for Autonomous Systems Using Deep Neural Networks
Linköping University, Department of Electrical Engineering, Computer Vision.
2017 (English). Independent thesis, Advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Automated navigability assessment based on image sensor data is an important concern in the design of autonomous robotic systems. The problem consists in finding a mapping from input data to the navigability status of different areas of the surrounding world. Machine learning techniques are often applied to this problem. This thesis investigates an approach to navigability assessment in the image plane, based on offline learning using deep convolutional neural networks, applied to RGB and depth data collected using a robotic platform. Training outputs were generated by manually marking out instances of near collision in the sequences and tracing the location of the near-collision frame back through the previous frames. Several combinations of network inputs were tried out, including grayscale gradient versions of the RGB frames, depth maps, image coordinate maps, and motion information in the form of a previous RGB frame or heading maps. Some improvement over simple depth thresholding was demonstrated, mainly in the handling of noise and missing pixels in the depth maps. The resulting networks appear to depend mostly on depth information: an attempt to train a network without the depth frames was unsuccessful, and a network trained using the depth frames alone performed similarly to networks trained with additional inputs. An unsuccessful attempt was also made at training a network towards a more motion-dependent navigability concept, by including training frames captured as the robot was moving away from the obstacle, with the corresponding training outputs marked as obstacle-free.
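The "simple depth thresholding" baseline mentioned above can be illustrated with a minimal sketch: every pixel closer than a fixed distance is marked non-navigable. The array shapes, the threshold value, and the convention that a depth of 0 marks a missing pixel are assumptions for illustration, not details taken from the thesis; the sketch also shows why missing depth pixels are a weakness of this baseline.

```python
import numpy as np

def navigability_by_depth_threshold(depth_map, threshold_m=1.0):
    """Return a boolean mask over the image plane: True where navigable.

    depth_map: 2-D array of depths in metres; 0 marks missing pixels
    (a common convention for consumer RGB-D sensors).
    """
    depth = np.asarray(depth_map, dtype=float)
    valid = depth > 0                        # missing pixels carry no information
    return valid & (depth >= threshold_m)    # far enough away -> navigable

# Toy 2x3 depth map: one missing pixel (0.0) and one close obstacle (0.4 m).
depth = np.array([[0.0, 0.4, 2.5],
                  [1.2, 3.0, 0.9]])
mask = navigability_by_depth_threshold(depth, threshold_m=1.0)
# Missing and close pixels come out non-navigable; distant pixels navigable.
```

Note that the missing pixel is treated as non-navigable here; a learned model can instead infer its status from surrounding context, which is the kind of improvement the abstract reports.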

Place, publisher, year, edition, pages
2017, p. 34
Keywords [en]
autonomous systems, autonomous robots, deep learning, convolutional neural networks
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:liu:diva-138356
ISRN: LiTH-ISY-EX--17/5045--SE
OAI: oai:DiVA.org:liu-138356
DiVA, id: diva2:1110839
Subject / course
Computer Vision Laboratory
Presentation
2017-05-30, 14:15 (Swedish)
Supervisors
Examiners
Available from: 2017-06-16. Created: 2017-06-16. Last updated: 2017-06-19. Bibliographically approved.

Open Access in DiVA

fulltext (4712 kB), 102 downloads
File information
File name: FULLTEXT01.pdf
File size: 4712 kB
Checksum (SHA-512):
fdf13fe75ccad4b2fb8c5f4aae8165bd92d908d854f442a7792c23ee8bb8f03a6b4ae2b91d006db003b288d468e39863dd259f7fb8c73d9a39708f051bfac0b4
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor
Wimby Schmidt, Ebba
By organisation
Computer Vision
Engineering and Technology

Search outside of DiVA

Google, Google Scholar
Total: 102 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 725 hits