Digitala Vetenskapliga Arkivet

A realistic benchmark for visual indoor place recognition
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-1396-0102
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-1170-7162
2010 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no. 1, p. 81-96. Article in journal (Refereed). Published.
Abstract [en]

An important competence for a mobile robot system is the ability to localize and perform context interpretation. This is required to perform basic navigation and to facilitate location-specific services. Recent advances in vision have made this modality a viable alternative to traditional range sensors, and visual place recognition algorithms have emerged as a useful and widely applied tool for obtaining information about a robot's position. Several place recognition methods have been proposed, using vision alone or combined with sonar and/or laser. This line of research calls for standard benchmark datasets for developing, evaluating and comparing solutions. To this end, this paper presents two carefully designed and annotated image databases, augmented with an experimental procedure and an extensive baseline evaluation. The databases were gathered in an uncontrolled indoor office environment using two mobile robots and a standard camera. The acquisition spanned several months and covered different illumination and weather conditions. The databases are therefore well suited for evaluating the robustness of algorithms with respect to the broad range of variations that often occur in real-world settings. We thoroughly assessed the databases with a purely appearance-based place recognition method based on support vector machines and two types of rich visual features (global and local).
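The appearance-based setup described in the abstract — an SVM classifier trained on per-image feature vectors, one class per place — can be sketched as follows. This is an illustrative toy only, not the authors' implementation or data: the place names, the 64-dimensional descriptors and the synthetic feature clusters are all fabricated stand-ins for descriptors that would, in practice, be extracted from the benchmark images.

```python
# Illustrative sketch (not the paper's implementation): place recognition
# as multi-class classification over global image descriptors with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
places = ["corridor", "office", "printer_area"]  # hypothetical place labels
dim = 64  # dimensionality of the (hypothetical) global image descriptor

# One synthetic feature cluster per place: a fixed prototype plus noise,
# standing in for descriptors extracted from real training images.
prototypes = rng.normal(size=(len(places), dim))

def descriptors(place_idx, n):
    return prototypes[place_idx] + 0.1 * rng.normal(size=(n, dim))

X_train = np.vstack([descriptors(i, 30) for i in range(len(places))])
y_train = np.repeat(np.arange(len(places)), 30)

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# A query descriptor drawn from the "office" cluster should come back
# labeled as office.
query = descriptors(1, 1)
predicted = places[clf.predict(query)[0]]
print(predicted)
```

In the paper's actual setting the descriptors would come from real images (global and local visual features), and robustness is what the benchmark measures: the classifier is trained under one illumination/weather condition and tested under others.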

Place, publisher, year, edition, pages
2010. Vol. 58, no 1, p. 81-96
Keywords [en]
Visual place recognition, Robot topological localization, Standard robotic benchmark, localization, appearance, map
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-19070
DOI: 10.1016/j.robot.2009.07.025
ISI: 000272968200008
Scopus ID: 2-s2.0-70450221694
OAI: oai:DiVA.org:kth-19070
DiVA, id: diva2:337117
Funder
Swedish Research Council, 2005-3600-Complex
Note
QC 20100525. Available from: 2010-08-05. Created: 2010-08-05. Last updated: 2022-06-25. Bibliographically approved.
In thesis
1. Semantic Mapping with Mobile Robots
2011 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

After decades of unrealistic predictions and expectations, robots have finally escaped from industrial workplaces and made their way into our homes, offices, museums and other public spaces. These service robots are increasingly present in our environments, and many believe that it is in the area of service and domestic robotics that we will see the largest growth within the next few years. In order to realize the dream of robot assistants performing human-like tasks together with humans in a seamless fashion, we need to provide them with the fundamental capability of understanding complex, dynamic and unstructured environments. More importantly, we need to enable them to share our understanding of space to permit natural cooperation. To this end, this thesis addresses the problem of building internal representations of space for artificial mobile agents, populated with human spatial semantics, as well as means for inferring those semantics from sensory information. More specifically, an extensible approach to place classification is introduced and used for mobile robot localization as well as for categorization and extraction of spatial semantic concepts from general place appearance and geometry. The models can be incrementally adapted to dynamic changes in the environment and employ efficient ways for cue integration, sensor fusion and confidence estimation. In addition, a system and representational approach to semantic mapping is presented. The system incorporates and integrates semantic knowledge from multiple sources, such as the geometry and general appearance of places, the presence of objects, the topology of the environment, as well as human input. A conceptual map is designed and used for modeling and reasoning about spatial concepts and their relations to spatial entities and their semantic properties.
Finally, the semantic mapping algorithm is built into an integrated robotic system and shown to substantially enhance the performance of the robot on the complex task of active object search. The presented evaluations show the effectiveness of the system and its underlying components and demonstrate applicability to real-world problems in realistic human settings.
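The cue integration mentioned in the abstract — combining evidence from several cues (e.g. appearance and geometry) into one place decision — can be illustrated with a minimal confidence-weighted voting sketch. This is an assumption-laden toy, not the thesis's method: the cue names, scores and weights below are invented for illustration only.

```python
# Illustrative sketch only (not the thesis implementation): cue integration
# for place classification as a confidence-weighted sum of per-cue scores.
# Each cue (e.g. global appearance, laser geometry) produces a score per
# place class; a weight per cue reflects how much that cue is trusted.
def integrate_cues(cue_scores, weights):
    """cue_scores: {cue_name: {place: score}}; weights: {cue_name: float}.
    Returns the place with the highest combined weighted score."""
    combined = {}
    for cue, scores in cue_scores.items():
        w = weights[cue]
        for place, score in scores.items():
            combined[place] = combined.get(place, 0.0) + w * score
    return max(combined, key=combined.get)

# Hypothetical example: appearance favors "office", geometry mildly favors
# "corridor"; with appearance weighted higher, "office" wins (0.64 vs 0.36).
decision = integrate_cues(
    {"appearance": {"office": 0.7, "corridor": 0.3},
     "geometry":   {"office": 0.4, "corridor": 0.6}},
    {"appearance": 0.8, "geometry": 0.2},
)
print(decision)
```

In a full system the weights would come from per-cue confidence estimation rather than being fixed constants, which is the role the abstract assigns to confidence estimation alongside cue integration and sensor fusion.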

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2011. p. xiii, 52
Series
Trita-CSC-A, ISSN 1653-5723 ; 2011:10
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-34171
ISBN: 978-91-7501-039-7
Public defence
2011-06-10, Sal F3, Lindstedtsvägen 26, KTH, Stockholm, 13:00 (English)
Note
QC 20110527. Available from: 2011-05-27. Created: 2011-05-27. Last updated: 2022-06-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Search in DiVA

By author/editor
Pronobis, Andrzej; Jensfelt, Patric
By organisation
Computer Vision and Active Perception, CVAP
In the same journal
Robotics and Autonomous Systems
Computer and Information Sciences
