Digitala Vetenskapliga Arkivet

1 - 6 of 6
  • 1. Abeywardena, D.
    et al.
    Wang, Zhan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dissanayake, G.
    Waslander, S. L.
    Kodagoda, S.
    Model-aided state estimation for quadrotor micro air vehicles amidst wind disturbances (2014). Conference paper (Refereed).
    Abstract [en]

    This paper extends the recently developed Model-Aided Visual-Inertial Fusion (MA-VIF) technique for quadrotor Micro Air Vehicles (MAVs) to deal with wind disturbances. The wind effects are explicitly modelled in the quadrotor dynamic equations, excluding the unobservable wind velocity component. This is achieved by a nonlinear observability analysis of the dynamic system with wind effects. We show that, using the developed model, the vehicle pose and two components of the wind velocity vector can be simultaneously estimated with a monocular camera and an inertial measurement unit. We also show that MA-VIF is reasonably tolerant to wind disturbances even without explicit modelling of wind effects, and explain the reasons for this behaviour. Experimental results using a Vicon motion capture system are presented to demonstrate the effectiveness of the proposed method and validate our claims.
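The mechanism behind the observability claim can be sketched as follows. In model-aided quadrotor estimation, the accelerometer's horizontal reading is dominated by rotor drag, which is approximately linear in airspeed (ground velocity minus wind), so the horizontal wind components enter the measurement model and become estimable. The drag coefficient and numbers below are purely illustrative, not the paper's values.

```python
import numpy as np

K_DRAG = 0.25  # lumped rotor-drag coefficient [1/s]; illustrative value

def predicted_specific_force(v_body, wind_body):
    """Horizontal specific force predicted by the drag model.

    v_body, wind_body: 2-vectors (x, y) in the body frame. Because the
    measurement depends on the airspeed (v - w), the horizontal wind
    components appear in the measurement model and become estimable.
    """
    airspeed = v_body - wind_body
    return -K_DRAG * airspeed

# flying east at 2 m/s into a 0.5 m/s headwind
f = predicted_specific_force(np.array([2.0, 0.0]), np.array([-0.5, 0.0]))
```

A filter comparing this prediction with the measured accelerometer output can correct its wind estimate; with zero wind assumed, the same residual is absorbed into the velocity estimate, which is one reason the method degrades gracefully without explicit wind modelling.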

  • 2.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Modeling motion patterns of dynamic objects by IOHMM (2014). In: Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, Chicago, IL: IEEE conference proceedings, 2014, p. 1832-1838. Conference paper (Refereed).
    Abstract [en]

    This paper presents a novel approach to modeling the motion patterns of dynamic objects, such as people and vehicles, in the environment with the occupancy grid map representation. To reflect the ever-changing nature of the motion patterns of dynamic objects, we model each occupancy grid cell by an IOHMM, an inhomogeneous variant of the HMM. This distinguishes our work from existing methods that use the conventional HMM and assume motion evolves according to a stationary process. By introducing observations of neighboring cells in the previous time step as input to the IOHMM, the transition probabilities in our model depend on the occurrence of events in the cell's neighborhood. This enables our method to model the spatial correlation of dynamics across cells. A sequence processing example is used to illustrate the advantage of our model over conventional HMM-based methods. Results from experiments in an office corridor environment demonstrate that our method is capable of capturing the dynamics of such human living environments.

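The per-cell IOHMM idea can be sketched as follows (the state and input discretisation and the probability values are hypothetical, not the paper's learned parameters): the transition matrix applied at each forward-filtering step is selected by an input symbol derived from neighbor occupancy at the previous time step, which makes the chain inhomogeneous rather than stationary.

```python
import numpy as np

N_STATES = 2   # 0 = free, 1 = occupied
N_INPUTS = 2   # 0 = no occupied neighbor, 1 = some occupied neighbor

# One transition matrix per input symbol: A[u][i, j] = P(s_t = j | s_{t-1} = i, u)
A = np.array([
    [[0.95, 0.05],   # no occupied neighbor: the cell likely stays free
     [0.60, 0.40]],
    [[0.70, 0.30],   # occupied neighbor: entry into this cell is more likely
     [0.20, 0.80]],
])

def filter_step(belief, u, likelihood):
    """One forward-filtering step of the IOHMM for a single cell.

    belief:     P(s_{t-1} | observations so far), shape (2,)
    u:          input symbol from neighbor occupancy at the previous step
    likelihood: P(o_t | s_t) for each state, shape (2,)
    """
    predicted = belief @ A[u]            # input-dependent transition
    posterior = predicted * likelihood   # fold in the cell's own observation
    return posterior / posterior.sum()

# a cell known free, with an occupied neighbor and an uninformative observation
posterior = filter_step(np.array([1.0, 0.0]), u=1, likelihood=np.array([0.5, 0.5]))
```

With a conventional HMM a single matrix A would be used regardless of the neighborhood, so the spatial spread of occupancy could not be represented.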
  • 3.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Building a human behavior map from local observations (2016). In: Robot and Human Interactive Communication (RO-MAN), 2016 25th IEEE International Symposium on, IEEE, 2016, p. 64-70, article id 7745092. Conference paper (Refereed).
    Abstract [en]

    This paper presents a novel method for classifying regions from human movements in service robots' working environments. The entire space is segmented by class type according to the functionality or affordance of each place, where each place accommodates a typical human behavior. This is achieved on a grid map in two steps. First, a probabilistic model is developed to capture human movements for each grid cell by using a non-ergodic HMM. Then the learned transition probabilities corresponding to these movements are used to cluster all cells with the K-means algorithm. The knowledge of typical human movements for each location, represented by the K-means prototypes and summarized in a 'behavior-based map', enables a robot to adjust its strategy for interacting with people according to where they are located, and thus greatly enhances its capability to assist people. The performance of the proposed classification method is demonstrated by experimental results from 8 hours of data collected in a kitchen environment.

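The two-step procedure can be sketched as follows, with synthetic per-cell feature vectors standing in for the learned HMM transition probabilities (the feature dimension, values, and region names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(features, k, iters=50):
    """Plain K-means: features is (n_cells, d); returns labels and prototypes."""
    # deterministic init: spread the initial prototypes across the data
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].copy()
    for _ in range(iters):
        # assign each cell to its nearest prototype
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each prototype to the mean of its assigned cells
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# two synthetic region types: "corridor" cells (strong directed transitions)
# and "standing" cells (mostly self-transitions)
corridor = rng.normal([0.8, 0.1], 0.02, size=(20, 2))
standing = rng.normal([0.1, 0.8], 0.02, size=(20, 2))
labels, prototypes = kmeans(np.vstack([corridor, standing]), k=2)
```

The resulting label per cell is the region class, and the prototypes summarize the typical movement statistics of each class.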
  • 4.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC).
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Modeling Spatial-Temporal Dynamics of Human Movements for Predicting Future Trajectories (2015). Conference paper (Refereed).
    Abstract [en]

    This paper presents a novel approach to modeling the dynamics of human movements with a grid-based representation. For each grid cell, we formulate the local dynamics using a variant of the left-to-right HMM, and thus explicitly model the exiting direction from the current cell. The dependency of this process on the entry direction is captured by employing the Input-Output HMM (IOHMM). On a higher level, we introduce the place where the whole trajectory originated as an additional input to the IOHMM framework, forming a hierarchical input structure. We thereby capture both local spatial-temporal correlations and the long-term dependency on faraway initiating events, enabling the developed model to incorporate more information and to generate more informative predictions of future trajectories. Experimental results in an office corridor environment verify the capabilities of our method.

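A minimal sketch of the hierarchical input idea (the direction and origin discretisations are hypothetical, and the table below is random rather than learned): the distribution over exit directions is indexed jointly by the entry direction (local input) and the trajectory's origin (long-term input), so both scales influence the prediction.

```python
import numpy as np

DIRS = 4      # discretised entry/exit directions: N, E, S, W (illustrative)
ORIGINS = 2   # trajectory origins, e.g. two different doorways (illustrative)

rng = np.random.default_rng(1)
logits = rng.random((ORIGINS, DIRS, DIRS))
# P[o, i, j] = P(exit = j | entry = i, origin = o); normalise over exits
P = logits / logits.sum(axis=2, keepdims=True)

def predict_exit(entry_dir, origin):
    """Most likely exit direction given entry direction and trajectory origin."""
    return int(P[origin, entry_dir].argmax())

e = predict_exit(entry_dir=0, origin=0)
```

A model conditioned only on the entry direction would collapse the origin axis and lose the long-term dependency; in a trained model the two origins would typically yield different exit distributions for the same entry direction.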
  • 5.
    Wang, Zhan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Multi-scale conditional transition map: Modeling spatial-temporal dynamics of human movements with local and long-term correlations (2015). In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, IEEE conference proceedings, 2015, p. 6244-6251. Conference paper (Refereed).
    Abstract [en]

    This paper presents a novel approach to modeling the dynamics of human movements with a grid-based representation. The model we propose, termed the Multi-scale Conditional Transition Map (MCTMap), is an inhomogeneous HMM process that describes transitions of the human location state in space and time. Unlike existing work, our method is able to capture both local correlations and long-term dependencies on faraway initiating events. This enables the learned model to incorporate more information and to generate an informative representation of human existence probabilities across the grid map and along the temporal axis for intelligent interaction of the robot, such as avoiding or meeting the human. Our model consists of two levels. For each grid cell, we formulate the local dynamics using a variant of the left-to-right HMM, and thus explicitly model the exiting direction from the current cell. The dependency of this process on the entry direction is captured by employing the Input-Output HMM (IOHMM). On the higher level, we introduce the place where the whole trajectory originated as an additional input to the IOHMM framework, forming a hierarchical input structure that captures long-term dependencies. The capabilities of our method are verified by experimental results from 10 hours of data collected in an office corridor environment.

  • 6.
    Xiao, Shuang
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Wang, Zhan
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Unsupervised robot learning to predict person motion (2015). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no. June, p. 691-696. Conference paper (Refereed).
    Abstract [en]

    Socially interacting robots will need to understand the intentions and recognize the behaviors of people they come in contact with. In this paper we look at how a robot can learn to recognize and predict people's intended path based on its own observations of people over time. Our approach uses people tracking on the robot from either RGBD cameras or LIDAR. The tracks are separated into homogeneous motion classes using a pre-trained SVM. Then the individual classes are clustered and prototypes are extracted from each cluster. These are then used to predict a person's future motion based on matching to a partial prototype and using the rest of the prototype as the predicted motion. Results from experiments in a kitchen environment in our lab demonstrate the capabilities of the proposed method.

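The prediction step can be sketched as follows, with toy straight-line tracks standing in for the cluster prototypes extracted from real observations (the matching criterion shown, prefix distance, is one simple choice, not necessarily the paper's exact one):

```python
import numpy as np

def predict_rest(partial, prototypes):
    """Match the observed prefix against each prototype; return the best match's tail.

    partial:    (t, 2) observed positions
    prototypes: list of (T, 2) full trajectories, with T >= t
    """
    t = len(partial)
    errors = [np.linalg.norm(p[:t] - partial) for p in prototypes]
    best = prototypes[int(np.argmin(errors))]
    return best[t:]  # the rest of the prototype is the predicted motion

# toy prototypes: one eastbound and one northbound straight track
east = np.stack([np.arange(6.0), np.zeros(6)], axis=1)
north = np.stack([np.zeros(6), np.arange(6.0)], axis=1)

# a slightly noisy eastbound prefix of three observations
pred = predict_rest(east[:3] + 0.01, [east, north])
```

Because the prototypes come from clustering the robot's own tracking data, the whole pipeline needs no manually labeled trajectories, which is what makes the learning unsupervised.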