Contextual Modeling with Labeled Multi-LDA
2013 (English). In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, pp. 2264-2271. Conference paper (Refereed).
Learning about activities and object affordances from human demonstration is an important cognitive capability for robots functioning in human environments, for example, being able to classify objects and to know how to grasp them for different tasks. To achieve such capabilities, we propose Labeled Multi-modal Latent Dirichlet Allocation (LM-LDA), a generative classifier trained on two different data cues; for instance, one cue can be a traditional visual observation and the other can be contextual information. The novel aspects of the LM-LDA classifier, compared to other methods for encoding contextual information, are that I) even when only one of the cues is present at execution time, classification is better than single-cue classification, since cue correlations are encoded in the model, and II) one of the cues (e.g., common grasps for the observed object class) can be inferred from the other cue (e.g., the appearance of the observed object). This makes the method suitable for online and transfer learning in robots, a capability highly desirable in cognitive robotic applications. Our experiments show a clear improvement in classification and a reasonable inference of the missing data.
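The cross-cue inference in aspect II) can be illustrated with a toy sketch. This is not the authors' implementation; the topic count, the cue vocabularies ("visual words" and grasp labels), and the per-topic distributions below are all invented for illustration. The key idea it demonstrates is shared between LDA-style multimodal topic models: both cues are generated from the same latent topic mixture, so a topic posterior estimated from the observed cue alone can predict the distribution of the missing cue.

```python
import numpy as np

# Hypothetical per-topic distributions for two cues (two topics here).
# Cue 1: visual words; Cue 2: grasp labels [power, precision].
phi_visual = np.array([[0.8, 0.1, 0.1],   # topic 0: mostly "round" features
                       [0.1, 0.1, 0.8]])  # topic 1: mostly "elongated" features
phi_grasp = np.array([[0.9, 0.1],         # topic 0: power grasp likely
                      [0.2, 0.8]])        # topic 1: precision grasp likely

def infer_topics(visual_counts, alpha=0.1, iters=50):
    """Fixed-point estimate of the document-topic mixture theta
    from the visual cue only (a simple EM-style update, not the
    full collapsed Gibbs sampler typically used for LDA)."""
    n_topics = phi_visual.shape[0]
    theta = np.full(n_topics, 1.0 / n_topics)
    for _ in range(iters):
        # responsibility of each topic for each observed visual word
        resp = theta[:, None] * phi_visual            # K x V
        resp /= resp.sum(axis=0, keepdims=True)
        theta = alpha + (resp * visual_counts).sum(axis=1)
        theta /= theta.sum()
    return theta

# Observe only the visual cue of a predominantly "round" object ...
theta = infer_topics(np.array([9.0, 1.0, 0.0]))
# ... and predict the unobserved grasp cue through the shared topics.
grasp_pred = theta @ phi_grasp
```

Because the "round" visual evidence concentrates the topic posterior on topic 0, the predicted grasp distribution leans toward the power grasp, which is exactly the kind of missing-cue inference the paper exploits for online and transfer learning.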
Place, publisher, year, edition, pages
IEEE, 2013, pp. 2264-2271.
Series: IEEE International Conference on Intelligent Robots and Systems, ISSN 2153-0858
Keywords: LDA, Topic Model, Contextual Modeling
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:kth:diva-134127
DOI: 10.1109/IROS.2013.6696673
ISI: 000331367402063
Scopus ID: 2-s2.0-84893758693
ISBN: 978-146736358-7
OAI: oai:DiVA.org:kth-134127
DiVA: diva2:664981
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 3-8, 2013 at Tokyo Big Sight, Japan
QC 20131217. Available from: 2013-11-18. Created: 2013-11-18. Last updated: 2014-04-10. Bibliographically approved.