Multi-scale conditional transition map: Modeling spatial-temporal dynamics of human movements with local and long-term correlations
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-1170-7162
KTH, School of Computer Science and Communication (CSC), Centre for Autonomous Systems, CAS, and Computer Vision and Active Perception, CVAP. ORCID iD: 0000-0002-7796-1438
2015 (English). In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, IEEE conference proceedings, 2015, pp. 6244-6251. Conference paper, published paper (refereed).
Abstract [en]

This paper presents a novel approach to modeling the dynamics of human movements with a grid-based representation. The proposed model, termed the Multi-scale Conditional Transition Map (MCTMap), is an inhomogeneous HMM process that describes transitions of the human location state in space and time. Unlike existing work, our method captures both local correlations and long-term dependencies on faraway initiating events. This enables the learned model to incorporate more information and to generate an informative representation of human presence probabilities across the grid map and along the temporal axis, supporting intelligent robot interaction such as avoiding or meeting the human. Our model consists of two levels. For each grid cell, we formulate the local dynamics using a variant of the left-to-right HMM, thus explicitly modeling the exit direction from the current cell. The dependency of this process on the entry direction is captured by employing the Input-Output HMM (IOHMM). On the higher level, we introduce the place where the whole trajectory originated as an additional input to the IOHMM, forming a hierarchical input structure that captures long-term dependencies. The capabilities of our method are verified by experimental results on 10 hours of data collected in an office corridor environment.
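The abstract's two-level conditioning, each grid cell holding transition distributions over exit directions, selected by IOHMM-style inputs (the entry direction locally, and the trajectory's place of origin as the long-range input), can be sketched as a toy lookup table. This is a minimal illustration under our own assumptions: the direction set, the origin places, and the use of random pseudo-counts in place of counts learned from real trajectories are all hypothetical, not the paper's actual implementation.

```python
import random

random.seed(0)

# Illustrative inputs (our assumptions, not the paper's):
DIRS = ["N", "E", "S", "W"]                   # entry/exit directions of a cell
ORIGINS = ["lab", "kitchen", "entrance"]      # hypothetical trajectory origins


def normalize(weights):
    """Turn positive weights into a probability distribution."""
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}


# One distribution over exit directions per (origin, entry-direction) input
# pair. Random pseudo-counts stand in for statistics learned from data.
cell_model = {
    (origin, entry): normalize({d: random.random() + 1.0 for d in DIRS})
    for origin in ORIGINS
    for entry in DIRS
}


def exit_distribution(origin, entry_dir):
    """P(exit direction | entry direction, trajectory origin) for one cell."""
    return cell_model[(origin, entry_dir)]


p = exit_distribution("lab", "N")
```

A full MCTMap would maintain such a conditioned table for every grid cell and chain them along a trajectory; the sketch only shows the input-conditioned lookup that distinguishes an IOHMM from a plain HMM transition matrix.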

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015, pp. 6244-6251.
National Category
Robotics
Identifiers
URN: urn:nbn:se:kth:diva-183495
DOI: 10.1109/IROS.2015.7354268
ISI: 000371885406055
Scopus ID: 2-s2.0-84958153492
OAI: oai:DiVA.org:kth-183495
DiVA: diva2:911782
Conference
Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, Sept. 28 - Oct. 2, 2015, Hamburg, Germany
Funder
EU, FP7, Seventh Framework Programme, 600623
Note

QC 20160406

Available from: 2016-03-14. Created: 2016-03-14. Last updated: 2016-11-23. Bibliographically approved.

Open Access in DiVA

Fulltext (404 kB)
File name: FULLTEXT02.pdf
File size: 404 kB
Checksum (SHA-512): 05b242dac60b46dcf53d86aee78dd97561dc522118cbb907bd62195401b3457d4f9ef3ef199718f7e6dbd8289417c8198f0423997c937a09831ec7a7724afc3b
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus
IEEEXplore

By author/editor: Wang, Zhan; Jensfelt, Patric; Folkesson, John
By organisation: Computer Vision and Active Perception, CVAP; Centre for Autonomous Systems, CAS
