Object-RPE: Dense 3D Reconstruction and Pose Estimation with Convolutional Neural Networks for Warehouse Robots
Hoang, Dinh-Cuong (Örebro University, School of Science and Technology, Centre for Applied Autonomous Sensor Systems (AASS))
Stoyanov, Todor (Örebro University, School of Science and Technology, Centre for Applied Autonomous Sensor Systems (AASS); ORCID iD: 0000-0002-6013-4874)
Lilienthal, Achim (Örebro University, School of Science and Technology, Centre for Applied Autonomous Sensor Systems (AASS); ORCID iD: 0000-0003-0217-9326)
2019 (English). In: 2019 European Conference on Mobile Robots, ECMR 2019: Proceedings, IEEE, 2019, p. 1-6, article id 152970. Conference paper, published paper (refereed).
Abstract [en]

We present a system for accurate 3D instance-aware semantic reconstruction and 6D pose estimation, using an RGB-D camera. Our framework couples convolutional neural networks (CNNs) and a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, to achieve both high-quality semantic reconstruction and robust 6D pose estimation for relevant objects. The method presented in this paper extends a high-quality instance-aware semantic 3D mapping system from previous work [1] by adding a 6D object pose estimator. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach explores performing pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system is capable of producing high-quality object-aware semantic reconstructions of room-sized environments, as well as accurately detecting objects and their 6D poses. The developed method has been verified through experimental validation on the YCB-Video dataset and a newly collected warehouse object dataset. Experimental results confirmed that the proposed system achieves improvements over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.
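To make the multi-view fusion idea concrete, the sketch below shows one simple way such per-view predictions could be combined: each CNN pose prediction is mapped from its camera frame into a common world frame using the SLAM camera trajectory, and the resulting poses are averaged. This is a minimal illustration, not the authors' implementation; the function names, the unweighted averaging scheme, and the use of NumPy/SciPy are all assumptions for the sake of the example.

import numpy as np
from scipy.spatial.transform import Rotation

def fuse_pose_predictions(cam_poses_world, obj_poses_cam):
    """Fuse per-view 6D object pose predictions into one world-frame pose.

    cam_poses_world: list of 4x4 camera-to-world transforms, e.g. from the
                     SLAM trajectory (ElasticFusion in the paper).
    obj_poses_cam:   list of 4x4 object-in-camera transforms, one CNN
                     prediction per viewpoint.
    Returns a single fused 4x4 object-to-world transform.
    """
    # Map each per-view prediction into the common world frame.
    world_poses = [Twc @ Tco for Twc, Tco in zip(cam_poses_world, obj_poses_cam)]
    # Translations: component-wise mean across views.
    t = np.mean([T[:3, 3] for T in world_poses], axis=0)
    # Rotations: chordal L2 mean, equivalent to quaternion eigen-averaging.
    rot = Rotation.from_matrix(np.stack([T[:3, :3] for T in world_poses])).mean()
    fused = np.eye(4)
    fused[:3, :3] = rot.as_matrix()
    fused[:3, 3] = t
    return fused

In practice one would presumably also weight each view by its detection confidence and reject outlier predictions before averaging; the unweighted mean above is the simplest instance of the multi-view combination the abstract describes.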

Place, publisher, year, edition, pages
IEEE, 2019. p. 1-6, article id 152970
National Category
Robotics
Identifiers
URN: urn:nbn:se:oru:diva-78295
DOI: 10.1109/ECMR.2019.8870927
Scopus ID: 2-s2.0-85074398548
ISBN: 978-1-7281-3605-9 (electronic)
OAI: oai:DiVA.org:oru-78295
DiVA, id: diva2:1374210
Conference
2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4-6 September 2019
Available from: 2019-11-29. Created: 2019-11-29. Last updated: 2019-12-17. Bibliographically approved.

Open Access in DiVA

fulltext (768 kB), 26 downloads
File information
File name: FULLTEXT01.pdf
File size: 768 kB
Checksum (SHA-512): 21a96484cd4c58e31b606b65e859b094c11eea1c72c3639a6604c6b2fe1fbb24ed060033dd1af30424effe3a236f70e011a57bda2ebc727edf147df8906c80ff
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Hoang, Dinh-Cuong; Stoyanov, Todor; Lilienthal, Achim
By organisation
School of Science and Technology
Robotics

Search outside of DiVA

Google
Google Scholar
Total: 26 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
