Detection, Tracking and 3D Modeling of Objects with Sparse RGB-D SLAM and Interactive Perception
KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL. ORCID iD: 0000-0003-3252-715X
Wayfair, Boston, MA 02116, USA.
Mitsubishi Electric Research Labs (MERL), Cambridge, MA 02139, USA.
2019 (English). In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2019. Conference paper, Published paper (Refereed)
Abstract [en]

We present an interactive perception system that enables an autonomous agent to deliberately interact with its environment and produce 3D object models. Our system verifies object hypotheses through interaction and simultaneously maintains 3D SLAM maps for each rigidly moving object hypothesis in the scene. We rely on depth-based segmentation and a multigroup registration scheme to classify features into various object maps. Our main contribution lies in the employment of a novel segment classification scheme that allows the system to handle incorrect object hypotheses, common in cluttered environments due to touching objects or occlusion. We start with a single map and initiate further object maps based on the outcome of depth segment classification. For each existing map, we select a segment to interact with and execute a manipulation primitive with the goal of disturbing it. If the resulting set of depth segments has at least one segment that did not follow the dominant motion pattern of its respective map, we split the map, thus yielding updated object hypotheses. We show qualitative results with a Fetch manipulator and objects of various shapes, which showcase the viability of the method for identifying and modelling multiple objects through repeated interactions.
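
The interaction-and-split loop described in the abstract can be summarised in Python-style pseudocode. The sketch below is an illustrative reconstruction, not the authors' implementation: the ObjectMap class and every helper (segment_depth, register_segments, select_segment, execute_push, estimate_motions, follows_dominant_motion) are hypothetical placeholders for the steps named above.

# Illustrative sketch of the interaction loop from the abstract.
# All helpers and the ObjectMap class are hypothetical placeholders;
# the paper's actual implementation may differ.

def interactive_object_modeling(robot, camera, max_interactions=10):
    maps = [ObjectMap()]                                     # start with a single object map
    for _ in range(max_interactions):
        segments = segment_depth(camera.capture())           # depth-based segmentation
        assignments = register_segments(segments, maps)      # multigroup registration

        for obj_map in list(maps):
            target = select_segment(obj_map, assignments)    # segment to disturb
            if target is None:
                continue
            execute_push(robot, target)                      # manipulation primitive

            segments_after = segment_depth(camera.capture())
            motions = estimate_motions(segments_after, obj_map)

            # Segments that did not follow the map's dominant motion indicate an
            # incorrect object hypothesis (e.g. touching objects): split the map.
            outliers = [s for s, motion in motions.items()
                        if not follows_dominant_motion(motion, obj_map)]
            if outliers:
                maps.append(obj_map.split(outliers))
    return maps                                              # one SLAM map per object hypothesis

In this reading, each ObjectMap would correspond to one of the sparse RGB-D SLAM maps maintained per rigidly moving object hypothesis.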

Place, publisher, year, edition, pages
2019.
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-259617
OAI: oai:DiVA.org:kth-259617
DiVA, id: diva2:1352526
Conference
IEEE-RAS International Conference on Humanoid Robots
Note

QC 20190930

Available from: 2019-09-19 Created: 2019-09-19 Last updated: 2019-09-30 Bibliographically approved

Open Access in DiVA

fulltext (4574 kB), 6 downloads
File information
File name: FULLTEXT01.pdf
File size: 4574 kB
Checksum (SHA-512): cc0c3b10001ce4c6cc0b6b168e220ce36e49b53e176ef5523f703c7102784c891d1f8e9e237fcf846bb750d0eab9dbfc1c523223cced5317c97df10a60a7077b
Type: fulltext
Mimetype: application/pdf

Other links

Conference webpage

Search in DiVA

By author/editor
Almeida, Diogo
By organisation
Robotics, Perception and Learning, RPL
Robotics
