1 - 4 of 4
1. Kruijff, G.-J. M., Zender, H. (Language Technology Lab., DFKI GmbH), Jensfelt, P. (KTH CSC, CVAP and CAS), & Christensen, H. I. (KTH CSC, CVAP and CAS). Clarification dialogues in human-augmented mapping. In: HRI 2006: Proceedings of the 2006 ACM Conference on Human-Robot Interaction, 2006, pp. 282-289. Conference paper (refereed).

   Abstract [en]:
   An approach to dialogue-based interaction for resolving ambiguities encountered as part of Human-Augmented Mapping (HAM) is presented. The paper focuses on issues related to spatial organisation and localisation. The dialogue pattern arises naturally as robots are introduced to novel environments. The paper discusses an approach based on the notion of Questions under Discussion (QUD). The approach has been implemented on a mobile platform that has dialogue capabilities and methods for metric SLAM. Experimental results from a pilot study demonstrate that the system can resolve problematic situations.
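The QUD mechanism the abstract describes lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of a QUD-style clarification loop for labelling map areas: an ambiguous assertion raises a question rather than committing to a guess. All class and method names are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a QUD-style clarification loop for
# human-augmented mapping. All names are illustrative; this is
# not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Area:
    node_id: int
    label: str | None = None      # e.g. "kitchen"; None while unknown

@dataclass
class QUD:
    """A Question under Discussion raised by an ambiguous assertion."""
    prompt: str
    label: str
    candidates: list[int]         # map nodes the question is about

class ClarificationDialogue:
    def __init__(self, areas: dict[int, Area]):
        self.areas = areas
        self.qud_stack: list[QUD] = []

    def on_assertion(self, label: str, candidate_nodes: list[int]) -> str | None:
        """The human asserts e.g. 'this is the kitchen'. If the referent
        is ambiguous, raise a QUD and return a clarification question
        instead of committing to a guess."""
        if len(candidate_nodes) == 1:
            self.areas[candidate_nodes[0]].label = label
            return None
        qud = QUD(prompt=f"Which area do you mean by '{label}'?",
                  label=label, candidates=candidate_nodes)
        self.qud_stack.append(qud)
        return qud.prompt

    def on_answer(self, chosen_node: int) -> None:
        """An answer resolves the topmost QUD and updates the map."""
        if self.qud_stack and chosen_node in self.qud_stack[-1].candidates:
            qud = self.qud_stack.pop()
            self.areas[chosen_node].label = qud.label

dialogue = ClarificationDialogue({1: Area(1), 2: Area(2)})
print(dialogue.on_assertion("kitchen", [1, 2]))  # ambiguous: asks back
dialogue.on_answer(1)                            # area 1 is now "kitchen"
```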

2. Kruijff, G.-J., Zender, H. (Language Technology Lab., DFKI GmbH), Jensfelt, P. (KTH CSC, CVAP and CAS), & Christensen, H. I. (KTH CSC, CVAP and CAS). Situated dialogue and understanding spatial organization: Knowing what is where and what you can do there. In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2006, pp. 328-333. Conference paper (refereed).

   Abstract [en]:
   The paper presents an HRI architecture for human-augmented mapping. Through interaction with a human, the robot can augment its autonomously learnt metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independent Wizard-of-Oz studies. The paper discusses an ontology-based approach to representing and inferring 2.5D spatial organization, and presents how knowledge of spatial organization can be acquired autonomously or through spoken dialogue interaction.
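As an illustration of the ontology-based inference the abstract mentions, here is a minimal, hypothetical sketch of inferring a room category from observed objects. The ontology entries and function names are assumptions made for illustration, not taken from the paper.

```python
# Hypothetical sketch of ontology-based inference of room categories
# from observed objects; the ontology entries are illustrative, not
# from the paper.
ROOM_ONTOLOGY = {
    "kitchen": {"stove", "fridge", "coffee_machine"},
    "office": {"desk", "monitor", "bookshelf"},
    "living_room": {"couch", "tv"},
}

def infer_room_category(observed_objects: set[str]) -> str | None:
    """Return the category whose typical objects best match what was
    observed, or None if nothing matches."""
    best, best_score = None, 0
    for category, typical in ROOM_ONTOLOGY.items():
        score = len(observed_objects & typical)
        if score > best_score:
            best, best_score = category, score
    return best

# A stove and a fridge seen in one area suggest "kitchen".
print(infer_room_category({"stove", "fridge", "plant"}))  # kitchen
```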

3. Pronobis, A. (KTH CSC, CVAP), Jensfelt, P. (KTH CSC, CVAP), Sjöö, K. (KTH CSC, CVAP), Zender, H., Kruijff, G.-J. M., Mozos, O. M., & Burgard, W. Semantic modelling of space. In: Cognitive Systems (Cognitive Systems Monographs, vol. 8), ed. H. I. Christensen, G.-J. M. Kruijff, & J. L. Wyatt. Springer Berlin/Heidelberg, 2010, pp. 165-221. Chapter in book (refereed).

   Abstract [en]:
   A cornerstone for robotic assistants is their understanding of the space they are to operate in: an environment built by people for people to live and work in. The research questions we address in this chapter concern spatial understanding and its connection to acting and interacting in indoor environments. Comparing the way robots typically perceive and represent the world with findings from cognitive psychology about how humans do it, a large discrepancy is evident. If robots are to understand humans and vice versa, robots need to use the same concepts to refer to things and phenomena as a person would. Bridging the gap between human and robot spatial representations is thus of paramount importance.

   A spatial knowledge representation for robotic assistants must address the issues of human-robot communication. However, it must also provide a basis for spatial reasoning and efficient planning. Finally, it must ensure safe and reliable navigation control. Only then can robots be deployed in semi-structured environments, such as offices, where they have to interact with humans in everyday situations.

   In order to meet the aforementioned requirements, i.e. robust robot control and human-like conceptualization, in CoSy we adopted a spatial representation that contains maps at different levels of abstraction. This stepwise abstraction from raw sensory input not only produces maps that are suitable for reliable robot navigation, but also yields a level of representation that is similar to a human conceptualization of spatial organization. Furthermore, this model provides a richer semantic view of an environment that permits the robot to do spatial categorization rather than only instantiation.

   This approach is at the heart of the Explorer demonstrator, a mobile robot capable of creating a conceptual spatial map of an indoor environment. In the present chapter, we describe how we use multi-modal sensory input provided by a laser range finder and a camera to build increasingly abstract spatial representations.
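To make the layered abstraction concrete, the following is a minimal, hypothetical sketch of a metric/topological/conceptual map stack in the spirit the abstract describes. All class and field names are illustrative assumptions, not the chapter's actual data structures.

```python
# Hypothetical sketch of a layered spatial representation
# (metric -> topological -> conceptual); names are illustrative,
# not from the chapter.
from dataclasses import dataclass, field

@dataclass
class MetricMap:
    """Lowest layer: geometry from SLAM, e.g. an occupancy grid."""
    occupancy_grid: list[list[float]]

@dataclass
class TopoNode:
    """Middle layer: places and their connectivity."""
    node_id: int
    pose: tuple[float, float, float]          # x, y, theta in the metric map
    neighbours: list[int] = field(default_factory=list)

@dataclass
class ConceptualArea:
    """Highest layer: human-like concepts grounded in topology."""
    node_ids: list[int]            # topological nodes in this area
    category: str | None = None    # e.g. "kitchen" (categorization)
    name: str | None = None        # e.g. "Anna's office" (instantiation)

@dataclass
class LayeredMap:
    metric: MetricMap
    topo: dict[int, TopoNode]
    conceptual: list[ConceptualArea]

    def area_of(self, node_id: int) -> ConceptualArea | None:
        """Ground a topological node in its conceptual area."""
        return next((a for a in self.conceptual if node_id in a.node_ids), None)
```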

4. Sjöö, K. (KTH CSC, CVAP), Zender, H., Jensfelt, P. (KTH CSC, CVAP), Kruijff, G.-J. M., Pronobis, A. (KTH CSC, CVAP), Hawes, N., & Brenner, M. The explorer system. In: Cognitive Systems (Cognitive Systems Monographs, vol. 8), ed. H. I. Christensen, G.-J. M. Kruijff, & J. L. Wyatt. Springer Berlin/Heidelberg, 2010, pp. 395-421. Chapter in book (refereed).

   Abstract [en]:
   In the Explorer scenario we deal with the problems of modeling space, acting in this space, and reasoning about it. Spatial models are built using input from sensors such as laser scanners and cameras, but, equally importantly, also based on human input. It is this combination that enables the creation of a spatial model that can support low-level tasks such as navigation, as well as interaction. Even combined, the inputs provide only a partial description of the world. By combining this knowledge with a reasoning system and a common-sense ontology, further information can be inferred to make the description of the world more complete. Unlike the PlayMate system, not all the information needed to build the spatial models is available to the Explorer's sensors at all times. The Explorer needs to move around, i.e. explore space, to gather information and integrate it into the spatial models.

   Two main modes for this exploration of space have been investigated within the Explorer scenario. In the first mode, the robot explores space together with a user in a home-tour fashion: the user shows the robot around their shared environment. This is what we call the Human-Augmented Mapping paradigm. The second mode is fully autonomous exploration, where the robot moves with the purpose of covering space. In practice, the two modes would be used interchangeably to get the best trade-off between autonomy, shared representation, and speed.

   The focus in the Explorer is not on performing a particular task to perfection, but rather on acting within a flexible framework that alleviates the need for scripting and hardwiring. We want to investigate two problems within this context: what information must be exchanged by different parts of the system to make this possible, and how the current state of the world should be represented during such exchanges. One particular interaction that encompasses many of the aforementioned issues is giving the robot the ability to talk about space. This interaction raises questions such as: how can we design models that allow the robot and human to talk about where things are, and how do we link the dialogue and mapping systems?
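The fully autonomous mode the abstract describes is commonly realized with frontier-based exploration: the robot repeatedly drives toward free cells that border unexplored space. The following self-contained sketch shows that standard idea under the assumption of a simple occupancy grid; it is an illustration, not necessarily how the Explorer itself selects goals.

```python
# Hypothetical, self-contained sketch of frontier-based exploration
# on an occupancy grid; the grid encoding and helper names are
# illustrative assumptions, not from the chapter.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid: list[list[int]]) -> list[tuple[int, int]]:
    """A frontier cell is free and adjacent to at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

def nearest_frontier(grid, pose):
    """Greedy goal choice: closest frontier by Manhattan distance."""
    fs = find_frontiers(grid)
    return min(fs, key=lambda f: abs(f[0] - pose[0]) + abs(f[1] - pose[1]),
               default=None)

grid = [
    [FREE, FREE, UNKNOWN],
    [FREE, OCCUPIED, UNKNOWN],
    [FREE, FREE, FREE],
]
print(nearest_frontier(grid, (0, 0)))  # (0, 1): free cell bordering unknown
```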
