Human-Centric Partitioning of the Environment
Karaoguz, Hakan. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
Bore, Nils. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0003-1189-6634
Folkesson, John. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0002-7796-1438
Jensfelt, Patric. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. ORCID iD: 0000-0002-1170-7162
2017 (English). In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present an object-based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state-of-the-art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data, and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that although a small set of perceptual data is used, the regions are generated at densely occupied locations.
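
The abstract outlines a three-step pipeline: detect objects that are commonly associated with human presence, store the detections with spatio-temporal information in the robot's memory, and generate regions from the accumulated observations. The Python sketch below is only an illustration of that pipeline under assumed data structures; the ObjectObservation record, the greedy clustering in cluster_regions, the radius parameter, and the example data are all hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of the pipeline described in the abstract: accumulate
# object detections with spatio-temporal information, then cluster them into
# human-centric regions. Detector output is faked with hand-made data.
from dataclasses import dataclass
from typing import List, Tuple
import math

@dataclass
class ObjectObservation:
    label: str                      # e.g. "monitor": objects tied to human presence
    position: Tuple[float, float]   # (x, y) in the map frame
    timestamp: float                # seconds since the start of data collection

def cluster_regions(observations: List[ObjectObservation],
                    radius: float = 1.5) -> List[List[ObjectObservation]]:
    """Greedy single-linkage clustering: an observation joins the first
    cluster that has a member within `radius`; otherwise it starts a new
    cluster. A stand-in for the paper's actual region-generation step."""
    clusters: List[List[ObjectObservation]] = []
    for obs in observations:
        for cluster in clusters:
            if any(math.dist(obs.position, o.position) <= radius for o in cluster):
                cluster.append(obs)
                break
        else:
            clusters.append([obs])
    return clusters

# Hypothetical spatial memory standing in for real detector output.
memory = [
    ObjectObservation("monitor",  (2.0, 1.1),  10.0),
    ObjectObservation("keyboard", (2.3, 1.0),  12.0),
    ObjectObservation("mug",      (2.1, 1.4), 310.0),
    ObjectObservation("monitor",  (8.5, 4.0),  55.0),
]
for i, region in enumerate(cluster_regions(memory)):
    labels = {o.label for o in region}
    print(f"region {i}: {len(region)} observations, objects={labels}")
```

Note that people never need to appear in the data: the regions emerge purely from where human-associated objects accumulate over time, which matches the abstract's claim that region generation does not require people to be present.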

Place, publisher, year, edition, pages
IEEE, 2017. p. 844-850
Keyword [en]
human-robot interaction, perception, AI
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-215941
ISI: 000427262400131
Scopus ID: 2-s2.0-85045847052
OAI: oai:DiVA.org:kth-215941
DiVA id: diva2:1150027
Conference
IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN
Funder
Swedish Foundation for Strategic Research
Note

QC 20171018

Available from: 2017-10-17. Created: 2017-10-17. Last updated: 2018-04-11. Bibliographically approved.

Open Access in DiVA

fulltext (2532 kB), 69 downloads
File information
File name: FULLTEXT01.pdf
File size: 2532 kB
Checksum: SHA-512
b93c95f6228777cc96c4820fd5bfc039fb548521cd763a075a9286e8721bc3812ca7524391f03d4f9f1bad7019fe6bde20edbfc46691ad8c8741738ac61efd2f
Type: fulltext
Mimetype: application/pdf
