Digitala Vetenskapliga Arkivet

The more you learn, the less you store: Memory-controlled incremental SVM for visual place recognition
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-1396-0102
2010 (English). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 28, no. 7, p. 1080-1097. Article in journal (Refereed). Published.
Abstract [en]

The capability to learn from experience is a key property for autonomous cognitive systems working in realistic settings. To this end, this paper presents an SVM-based algorithm capable of learning model representations incrementally while keeping memory requirements under control. We combine an incremental extension of SVMs [43] with a method that reduces the number of support vectors needed to build the decision function without any loss in performance [15], introducing a parameter which permits a user-set trade-off between performance and memory. The resulting algorithm achieves the same recognition results as the original incremental method while reducing memory growth. Our method is especially suited for autonomous systems in realistic settings. We present experiments on two common scenarios in this domain: adaptation in the presence of dynamic changes, and transfer of knowledge between two different autonomous agents, focusing in both cases on the problem of visual place recognition applied to mobile robot topological localization. Experiments in both scenarios clearly show the power of our approach.
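The memory-controlled scheme the abstract describes can be illustrated with a minimal sketch (not the authors' implementation): each update retrains on the retained support vectors plus the new batch of data, then prunes the stored support set down to a user-set budget, trading recognition performance for bounded memory. The class name, the `budget` parameter, and the heuristic of ranking support vectors by the magnitude of their dual coefficients are assumptions made here for illustration only.

```python
# Hypothetical sketch of a memory-controlled incremental SVM.
# Assumptions: pruning by |dual coefficient| and the `budget` knob
# stand in for the paper's controlled support-vector reduction.
import numpy as np
from sklearn.svm import SVC


class MemoryControlledSVM:
    def __init__(self, budget=50, C=1.0):
        self.budget = budget  # max number of stored training vectors
        self.C = C
        self.X_mem = None     # retained (pruned) support vectors
        self.y_mem = None

    def partial_fit(self, X_new, y_new):
        # Retrain on retained vectors + new batch (incremental step).
        if self.X_mem is None:
            X, y = X_new, y_new
        else:
            X = np.vstack([self.X_mem, X_new])
            y = np.concatenate([self.y_mem, y_new])
        self.clf = SVC(kernel="rbf", C=self.C).fit(X, y)
        # Rank support vectors by total |dual coefficient| and keep
        # at most `budget` of them (memory-control step).
        sv_idx = self.clf.support_
        weights = np.abs(self.clf.dual_coef_).sum(axis=0)
        keep = sv_idx[np.argsort(weights)[::-1][: self.budget]]
        self.X_mem, self.y_mem = X[keep], y[keep]
        return self

    def predict(self, X):
        return self.clf.predict(X)


# Usage: two incremental updates; stored memory stays within budget.
rng = np.random.default_rng(0)
model = MemoryControlledSVM(budget=20)
X1 = rng.normal(size=(40, 2))
model.partial_fit(X1, (X1[:, 0] > 0).astype(int))
X2 = rng.normal(size=(40, 2))
model.partial_fit(X2, (X2[:, 0] > 0).astype(int))
```

With a larger `budget` the pruning step discards fewer support vectors and behavior approaches the plain incremental method; a smaller budget bounds memory at the cost of accuracy, which is the trade-off the paper's parameter exposes.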

Place, publisher, year, edition, pages
2010. Vol. 28, no 7, p. 1080-1097
Keywords [en]
Incremental learning, Knowledge transfer, Support vector machines, Place recognition, Visual robot localization
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-27548
DOI: 10.1016/j.imavis.2010.01.015
ISI: 000278233900003
Scopus ID: 2-s2.0-77950866368
OAI: oai:DiVA.org:kth-27548
DiVA, id: diva2:378875
Funder
EU, FP7, Seventh Framework Programme, 215181
Swedish Research Council, 2005-3600-Complex
Note
QC 20101216. Available from: 2010-12-16. Created: 2010-12-13. Last updated: 2022-06-25. Bibliographically approved.
In thesis
1. Semantic Mapping with Mobile Robots
2011 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

After decades of unrealistic predictions and expectations, robots have finally escaped from industrial workplaces and made their way into our homes, offices, museums and other public spaces. These service robots are increasingly present in our environments, and many believe that it is in the area of service and domestic robotics that we will see the largest growth within the next few years. In order to realize the dream of robot assistants performing human-like tasks together with humans in a seamless fashion, we need to provide them with the fundamental capability of understanding complex, dynamic and unstructured environments. More importantly, we need to enable them to share our understanding of space to permit natural cooperation. To this end, this thesis addresses the problem of building internal representations of space for artificial mobile agents, populated with human spatial semantics, as well as means for inferring those semantics from sensory information. More specifically, an extensible approach to place classification is introduced and used for mobile robot localization as well as for categorization and extraction of spatial semantic concepts from general place appearance and geometry. The models can be incrementally adapted to dynamic changes in the environment and employ efficient methods for cue integration, sensor fusion and confidence estimation. In addition, a system and representational approach to semantic mapping is presented. The system incorporates and integrates semantic knowledge from multiple sources, such as the geometry and general appearance of places, the presence of objects, the topology of the environment, as well as human input. A conceptual map is designed and used for modeling and reasoning about spatial concepts and their relations to spatial entities and their semantic properties.
Finally, the semantic mapping algorithm is built into an integrated robotic system and shown to substantially enhance the performance of the robot on the complex task of active object search. The presented evaluations show the effectiveness of the system and its underlying components and demonstrate applicability to real-world problems in realistic human settings.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2011. p. xiii, 52
Series
Trita-CSC-A, ISSN 1653-5723 ; 2011:10
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-34171
ISBN: 978-91-7501-039-7
Public defence
2011-06-10, Sal F3, Lindstedtsvägen 26, KTH, Stockholm, 13:00 (English)
Note
QC 20110527. Available from: 2011-05-27. Created: 2011-05-27. Last updated: 2022-06-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Pronobis, Andrzej
By organisation
Computer Vision and Active Perception, CVAP
In the same journal
Image and Vision Computing
Computer and Information Sciences
