Autonomous meshing, texturing and recognition of object models with a mobile robot
Ambrus, Rares. KTH, School of Computer Science and Communication (CSC), Robotics, Perception and Learning (RPL); Centre for Autonomous Systems (CAS). ORCID iD: 0000-0002-3111-3812
Bore, Nils. KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems (CAS). ORCID iD: 0000-0003-1189-6634
Folkesson, John. KTH, School of Computer Science and Communication (CSC), Robotics, Perception and Learning (RPL); Centre for Autonomous Systems (CAS). ORCID iD: 0000-0002-7796-1438
Jensfelt, Patric. KTH, School of Computer Science and Communication (CSC), Robotics, Perception and Learning (RPL); Centre for Autonomous Systems (CAS). ORCID iD: 0000-0002-1170-7162
2017 (English). Conference paper, Published paper (Refereed)
Abstract [en]

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
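The meshing step summarized above (fuse the spatially registered RGB-D views into a single point cloud, then approximate the geometry with a Poisson surface) can be illustrated with a short sketch. This is not the authors' code; it is a minimal sketch assuming the open-source Open3D library, views whose camera poses are already known, and hypothetical inputs. The paper's second optimization (aligning the RGB images for photometric consistency before texturing) and the CNN trained on rendered views are not shown here.

```python
# Illustrative sketch only, not the paper's implementation: fuse already
# registered RGB-D views of an object into one point cloud and fit a
# Poisson surface to it. Assumes Open3D; rgbd_frames, poses and intrinsic
# are hypothetical inputs (frames, 4x4 camera-to-object poses, intrinsics).
import numpy as np
import open3d as o3d

def fuse_views(rgbd_frames, poses, intrinsic):
    """Back-project each registered RGB-D frame and merge the points."""
    merged = o3d.geometry.PointCloud()
    for rgbd, pose in zip(rgbd_frames, poses):
        pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
        pcd.transform(pose)              # move the view into the common object frame
        merged += pcd
    merged = merged.voxel_down_sample(voxel_size=0.002)
    # Poisson reconstruction needs consistently oriented normals.
    merged.estimate_normals()
    merged.orient_normals_consistent_tangent_plane(30)
    return merged

def poisson_mesh(point_cloud, depth=9):
    """Approximate the underlying geometry with a Poisson surface."""
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        point_cloud, depth=depth)
    # Trim vertices supported by few observations (low reconstruction density).
    densities = np.asarray(densities)
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
    return mesh
```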

Place, publisher, year, edition, pages
Vancouver, Canada, 2017.
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-215232
OAI: oai:DiVA.org:kth-215232
DiVA, id: diva2:1147195
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Note

QC 20171009

Available from: 2017-10-05. Created: 2017-10-05. Last updated: 2018-01-13. Bibliographically approved.

Open Access in DiVA

fulltext (2356 kB)
File information
File name: FULLTEXT01.pdf
File size: 2356 kB
Checksum (SHA-512): 717e18694b28e2a6e39ffd7f442c8a2a7748af63651c384714de5addd79732ef09b04975deab36fde79a9837f662c5c01f9cef9244e919ddecb6d4f597a05404
Type: fulltext
Mimetype: application/pdf

