201 - 250 of 437
  • 201.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    Dept of Mechanical Engineering, Massachusetts Institute of Technology.
    Bearing-Only Vision SLAM with Distinguishable Image Feature, 2007. In: Vision Systems Applications / [ed] Goro Obinata and Ashish Dutta, InTech, 2007. Chapter in book (Refereed)
  • 202.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A framework for vision based bearing only 3D SLAM, 2006. In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida - May 2006: Vols 1-10, IEEE, 2006, p. 1944-1950. Conference paper (Refereed)
    Abstract [en]

    This paper presents a framework for 3D vision based bearing only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching and loop detection. For matching image features to map landmarks a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity in the map estimation while maintaining matching performance, only a few high-quality image features are used for map landmarks. The rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. These experiments demonstrate the validity and effectiveness of the approach. In particular they show how the robot is able to successfully match current image features to the map when revisiting an area.

  • 203.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kristensen, S
    Active global localisation for a mobile robot using multiple hypothesis tracking, 2001. In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, Vol. 17, no 5, p. 748-760. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, which we have termed multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.
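
A minimal sketch of the multi-hypothesis idea described above (illustrative only, not the paper's MHL implementation): each Gaussian pose hypothesis is updated with a Kalman step and re-weighted by its measurement likelihood, and unlikely hypotheses are pruned. All names and the pruning threshold are assumptions.

```python
import numpy as np

def update_hypotheses(hyps, z, H, R, prune_below=1e-3):
    """hyps: list of dicts {'x': mean, 'P': covariance, 'w': probability}."""
    updated = []
    for h in hyps:
        v = z - H @ h["x"]                          # innovation
        S = H @ h["P"] @ H.T + R                    # innovation covariance
        K = h["P"] @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = h["x"] + K @ v
        P = (np.eye(len(x)) - K @ H) @ h["P"]
        # Gaussian measurement likelihood re-weights the hypothesis.
        lik = np.exp(-0.5 * v @ np.linalg.solve(S, v)) / np.sqrt(np.linalg.det(2 * np.pi * S))
        updated.append({"x": x, "P": P, "w": h["w"] * lik})
    total = sum(h["w"] for h in updated) or 1.0     # renormalise the probabilities
    for h in updated:
        h["w"] /= total
    return [h for h in updated if h["w"] > prune_below]
```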

  • 204.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Wijk, O
    Austin, D
    Andersson, M
    Experiments on augmenting CONDENSATION for mobile robot localization, 2000. Conference paper (Refereed)
    Abstract [en]

    In this paper we study some modifications of the CONDENSATION algorithm. The case studied is feature based mobile robot localization in a large scale environment. The sample set size required to make the CONDENSATION algorithm converge properly can in many cases demand too much computation. This is often the case when observing features in symmetric environments, for instance doors in long corridors. In such areas a large sample set is required to resolve the generated multi-hypotheses problem. To cope with a sample set size that would normally cause the CONDENSATION algorithm to break down, we study two modifications. The first strategy, called "CONDENSATION with random sampling", takes part of the sample set and spreads it randomly over the environment the robot operates in. The second strategy, called "CONDENSATION with planned sampling", places part of the sample set at planned positions based on the detected features. From the experiments we conclude that the second strategy is the best and can reduce the sample set size by at least a factor of 40.
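
As a hedged illustration of the two modifications described above (not the authors' implementation), the sketch below resamples a particle set and then replaces a fraction of it either with poses drawn uniformly over the map ("random sampling") or with poses placed near feature-predicted locations such as detected doors ("planned sampling"). Names, the fraction and noise levels are assumptions.

```python
import numpy as np

def condensation_step(particles, weights, mode, frac, map_lo, map_hi,
                      planned_poses, rng=np.random.default_rng()):
    """particles: (N, d) array of poses; weights: (N,) importance weights."""
    # Standard multinomial resampling of the CONDENSATION algorithm.
    idx = rng.choice(len(particles), size=len(particles), p=weights / weights.sum())
    particles = particles[idx]
    n_new = int(frac * len(particles))
    if mode == "random":
        # Spread part of the sample set uniformly over the environment.
        particles[:n_new] = rng.uniform(map_lo, map_hi, size=(n_new, particles.shape[1]))
    elif mode == "planned":
        # Place part of the sample set at poses predicted from detected
        # features (e.g. in front of each door that could explain the observation).
        picks = rng.choice(len(planned_poses), size=n_new)
        particles[:n_new] = planned_poses[picks] + rng.normal(0.0, 0.1, (n_new, particles.shape[1]))
    return particles
```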

  • 205.
    Johansson, Ronnie M.
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Suzic, R.
    Particle filter-based information acquisition for robust plan recognition, 2005. In: 2005 7th International Conference on Information Fusion (FUSION), Vols 1 and 2, 2005, p. 183-190. Conference paper (Refereed)
    Abstract [en]

    Plan recognition generates high-level information about opponents' plans, typically a probability distribution over a set of plausible plans. In our work, plans are estimated at different decision levels, both the company level and the subsumed platoon level. Naturally, successful plan recognition is heavily dependent on the data that is supplied, and, hence, sensor management is a necessity. A key feature of the sensor management discussed here is that it is driven by the information need of the plan recognition process. In our research, we have presented a general framework for connecting information need to sensor management. In our framework implementation, an essential part is the prioritization of sensing tasks, which is necessary to efficiently utilize limited sensing resources. In our first implementation, the priorities were calculated from, for instance, the estimated threats of opponents (as a function of plan estimates), the distance to the opponent, and the uncertainty in its position. In this article, we add a particle filter method to more carefully represent the uncertainty in the opponent state estimate, to make prioritization better founded and, ultimately, to achieve robust plan recognition. By using the particle filter we can obtain more reliable state estimates (through the particle filter's ability to represent complex probability distributions) and also a statistically based threat variation (through Monte-Carlo simulation). The state transition model of the particle filter can also be used to predict future states to direct sensors with a time delay (a common property of large-scale sensing systems), such as sensors mounted on UAVs that have to travel some distance to make a measurement.

  • 206.
    Johnson-Roberson, Matthew
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Attention-based Active 3D Point Cloud Segmentation, 2010. In: IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 2010, p. 1165-1170. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical model-based energy minimization and these methods are compared over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.

  • 207.
    Johnson-Roberson, Matthew
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Skantze, Gabriel
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Gustafson, Joakim
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Carlson, Rolf
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH, Speech Communication and Technology.
    Enhanced visual scene understanding through human-robot dialog, 2010. In: Dialog with Robots: AAAI 2010 Fall Symposium, 2010, p. -144. Conference paper (Refereed)
  • 208.
    Johnson-Roberson, Matthew
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Skantze, Gabriel
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Gustafsson, Joakim
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Carlson, Rolf
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Rasolzadeh, Babak
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Enhanced Visual Scene Understanding through Human-Robot Dialog, 2011. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2011, p. 3342-3348. Conference paper (Refereed)
    Abstract [en]

    We propose a novel human-robot-interaction framework for robust visual scene understanding. Without any a priori knowledge about the objects, the task of the robot is to correctly enumerate how many of them are in the scene and segment them from the background. Our approach builds on top of state-of-the-art computer vision methods, generating object hypotheses through segmentation. This process is combined with a natural dialog system, thus including a ‘human in the loop’ where, by exploiting the natural conversation of an advanced dialog system, the robot gains knowledge about ambiguous situations. We present an entropy-based system allowing the robot to detect the poorest object hypotheses and query the user for arbitration. Based on the information obtained from the human-robot dialog, the scene segmentation can be re-seeded and thereby improved. We present experimental results on real data that show an improved segmentation performance compared to segmentation without interaction.
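
A minimal sketch of the entropy-based arbitration idea (names are illustrative, not the paper's system): compute the entropy of each object hypothesis' label distribution and query the user about the most uncertain one.

```python
import numpy as np

def hypothesis_entropy(p):
    """Shannon entropy of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def pick_hypothesis_to_query(hypotheses):
    """hypotheses: dict mapping hypothesis id -> label probability distribution."""
    return max(hypotheses, key=lambda k: hypothesis_entropy(hypotheses[k]))

# Example: hypothesis "B" is the most ambiguous, so the robot would ask about it.
print(pick_hypothesis_to_query({"A": [0.9, 0.1], "B": [0.5, 0.5], "C": [0.8, 0.2]}))
```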

  • 209.
    Karaoguz, Hakan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Human-Centric Partitioning of the Environment, 2017. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, p. 844-850. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present an object based approach for human-centric partitioning of the environment. Our approach for determining the human-centric regions is to detect the objects that are commonly associated with frequent human presence. In order to detect these objects, we employ state of the art perception techniques. The detected objects are stored with their spatio-temporal information in the robot's memory to be later used for generating the regions. The advantages of our method are that it is autonomous, requires only a small set of perceptual data and does not even require people to be present while generating the regions. The generated regions are validated using a 1-month dataset collected in an indoor office environment. The experimental results show that although a small set of perceptual data is used, the regions are generated at densely occupied locations.

  • 210. Karasalo, Maja
    et al.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    An optimization approach to adaptive Kalman filtering, 2011. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 47, no 8, p. 1785-1793. Article in journal (Refereed)
    Abstract [en]

    In this paper, an optimization-based adaptive Kalman filtering method is proposed. The method produces an estimate of the process noise covariance matrix Q by solving an optimization problem over a short window of data. The algorithm recovers the observations h(x) from a system ẋ = f(x), y = h(x) + v without a priori knowledge of system dynamics. Potential applications include target tracking using a network of nonlinear sensors, servoing, mapping, and localization. The algorithm is demonstrated in simulations on a tracking example for a target with coupled and nonlinear kinematics. Simulations indicate superiority over a standard MMAE algorithm for a large class of systems.
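
A hedged sketch of the general idea, not the paper's optimisation problem: choose a scalar process-noise level q (with Q = q·I) over a short window of measurements by making the average normalised innovation squared (NIS) of a linear Kalman filter match the measurement dimension, a common consistency heuristic. The matrices F, H, R and all names are illustrative assumptions.

```python
import numpy as np

def window_nis(q, F, H, R, x0, P0, ys):
    """Average normalised innovation squared over the window for Q = q * I."""
    n = x0.size
    Q = q * np.eye(n)
    x, P, nis = x0.copy(), P0.copy(), []
    for y in ys:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        v = y - H @ x                                  # innovation
        S = H @ P @ H.T + R
        nis.append(float(v @ np.linalg.solve(S, v)))
        K = P @ H.T @ np.linalg.inv(S)                 # update
        x, P = x + K @ v, (np.eye(n) - K @ H) @ P
    return float(np.mean(nis))

def estimate_q(F, H, R, x0, P0, ys, candidates=np.logspace(-4, 1, 40)):
    """Pick the q whose average NIS is closest to the measurement dimension."""
    m = np.atleast_1d(ys[0]).size
    costs = [abs(window_nis(q, F, H, R, x0, P0, ys) - m) for q in candidates]
    return float(candidates[int(np.argmin(costs))])
```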

  • 211.
    Karasalo, Maja
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Gustavi, Tove
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Robust Formation Control using Switching Range Sensors, 2010. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no 8, p. 1003-1016. Article in journal (Refereed)
    Abstract [en]

    In this paper, control algorithms are presented for formation keeping and path following for non-holonomic platforms. The controls are based on feedback from onboard directional range sensors, and a switching Kalman filter is introduced for active sensing. Stability is analyzed theoretically and robustness is demonstrated in experiments and simulations.

  • 212.
    Karasalo, Maja
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Martin, Clyde R.
    Contour reconstruction and matching using recursive smoothing splines, 2007. In: MODELING, ESTIMATION AND CONTROL: FESTSCHRIFT IN HONOR OF GIORGIO PICCI ON THE OCCASION OF THE SIXTY-FIFTH BIRTHDAY / [ed] Chiuso, A; Ferrante, A; Pinzoni, S, BERLIN: SPRINGER-VERLAG BERLIN, 2007, Vol. 364, p. 193-206. Conference paper (Refereed)
    Abstract [en]

    In this paper a recursive smoothing spline approach is used for reconstructing a closed contour. Periodic splines are generated through minimizing a cost function subject to constraints imposed by a linear control system. The filtering effect of the smoothing splines allows for usage of noisy sensor data. An important feature of the method is that several data sets for the same closed contour can be processed recursively, so that the accuracy can be improved while the current mapping can be used for planning the path of the data-collecting robot.

  • 213.
    Karasalo, Maja
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Piccolo, Giacomo
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Contour Reconstruction using Recursive Smoothing Splines - Algorithms and Experimental Validation, 2009. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 57, no 6-7, p. 617-628. Article in journal (Refereed)
    Abstract [en]

    In this paper, a recursive smoothing spline approach for contour reconstruction is studied and evaluated. Periodic smoothing splines are used by a robot to approximate the contour of encountered obstacles in the environment. The splines are generated through minimizing a cost function subject to constraints imposed by a linear control system and accuracy is improved iteratively using a recursive spline algorithm. The filtering effect of the smoothing splines allows for usage of noisy sensor data and the method is robust with respect to odometry drift. The algorithm is extensively evaluated in simulations for various contours and in experiments using a SICK laser scanner mounted on a PowerBot from ActivMedia Robotics.
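
The recursive, control-theoretic spline of the paper is not reproduced here; the sketch below only illustrates the basic ingredient of fitting a periodic (closed) smoothing spline to noisy, ordered contour points with SciPy, where the smoothing factor plays the filtering role. Parameter values are arbitrary assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_closed_contour(points, smoothing=0.5, n_samples=200):
    """points: (N, 2) ordered, noisy samples along a closed contour."""
    x, y = points[:, 0], points[:, 1]
    # per=True treats the data as periodic, so the reconstructed curve closes.
    tck, _ = splprep([x, y], s=smoothing * len(x), per=True)
    u = np.linspace(0.0, 1.0, n_samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```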

  • 214.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multi-agent average consensus control with prescribed performance guarantees, 2012. In: 2012 IEEE 51st Annual Conference on Decision and Control (CDC), IEEE, 2012, p. 2219-2225. Conference paper (Refereed)
    Abstract [en]

    This work proposes a distributed control scheme for the state agreement problem which can guarantee prescribed performance for the system transient. In particular, i) we consider a set of agents that can exchange information according to a static communication graph, ii) we a priori define time-dependent constraints in the edge space (errors between agents that exchange information) and iii) we design a distributed controller to guarantee that the errors between the neighboring agents do not violate the constraints. Following this technique the contributions are twofold: a) the convergence rate of the system and the communication structure of the agents' network, which are otherwise strictly connected, can be decoupled, and b) the connectivity properties of the initially formed communication graph are rendered invariant by appropriately designing the prescribed performance bounds. It is also shown how the structure and the parameters of the prescribed performance controller can be chosen in the case of connected tree graphs and connected graphs with cycles. Simulation results validate the theoretically proven findings while highlighting the merit of the proposed prescribed performance agreement protocol as compared to the linear one.

  • 215.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Doulgeri, Zoe
    Aristotle University of Thessaloniki, Greece.
    Model-free robot joint position regulation and tracking with prescribed performance guarantees, 2012. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 2, p. 214-226. Article in journal (Refereed)
    Abstract [en]

    The problem of robot joint position control with prescribed performance guarantees is considered; the control objective is error evolution within prescribed performance bounds in both the regulation and the tracking problem. The proposed controllers do not utilize either the robot dynamic model or any approximation structures and are composed of simple PID or PD controllers enhanced by a proportional term of a transformed error through a transformation-related gain. Under a sufficient condition for the damping gain, the proposed controllers are able to guarantee (i) a predefined minimum speed of convergence, maximum steady state error and overshoot concerning the position error and (ii) uniform ultimate boundedness (UUB) of the velocity error. The use of the integral term reduces residual errors, allowing the proof of asymptotic convergence of both velocity and position errors to zero for the regulation problem under constant disturbances. Performance is a priori guaranteed irrespective of the selection of the control gain values. Simulation results for a three dof spatial robotic manipulator and experimental results for a one dof manipulator are given to confirm the theoretical findings.
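
A minimal sketch of the prescribed-performance ingredients referred to above: an exponentially shrinking performance funnel, a transformed error that grows without bound as the error approaches the funnel boundary, and a PD law enhanced by a term proportional to the transformed error. Gains, the funnel parameters and the specific transformation are illustrative, not the paper's.

```python
import numpy as np

def perf_bound(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Exponentially shrinking performance funnel rho(t)."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map the error into an unbounded variable that blows up near the funnel edge."""
    xi = np.clip(e / rho, -0.999, 0.999)     # normalised error kept inside (-1, 1)
    return np.log((1.0 + xi) / (1.0 - xi))

def pp_pd_control(q_err, qdot_err, t, kp=20.0, kd=5.0, k_eps=2.0):
    """PD law enhanced by a term proportional to the transformed error."""
    rho = perf_bound(t)
    eps = transformed_error(np.asarray(q_err, dtype=float), rho)
    return -kp * np.asarray(q_err) - kd * np.asarray(qdot_err) - k_eps * eps
```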

  • 216.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Doulgeri, Zoe
    Aristotle University of Thessaloniki.
    Regressor-free prescribed performance robot tracking, 2013. In: Robotica (Cambridge. Print), ISSN 0263-5747, E-ISSN 1469-8668. Article in journal (Refereed)
    Abstract [en]

    Fast and robust tracking against unknown disturbances is required in many modern complex robotic structures and applications, for which knowledge of the full exact nonlinear system is unreasonable to assume. This paper proposes a regressor-free nonlinear controller of low complexity which ensures prescribed performance position error tracking subject to unknown endogenous and exogenous bounded dynamics, assuming that joint position and velocity measurements are available. It is theoretically shown and demonstrated by a simulation study that the proposed controller can guarantee tracking of the desired joint position trajectory with a priori determined accuracy, overshoot and speed of response. Preliminary experimental results on a simplified system are promising for validating the controller on more complex structures.

  • 217.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Doulgeri, Zoe
    Department of Electrical and Computer Eng., Aristotle University of Thessaloniki.
    Regressor-free robot joint position tracking with prescribed performance guarantees, 2011. In: ROBIO 2011: IEEE International Conference on Robotics and Biomimetics, 2011, p. 2312-2317. Conference paper (Refereed)
    Abstract [en]

    Fast and robust tracking against unknown disturbances is required in many modern complex robotic structures and applications for which knowledge of the full exact nonlinear system is unreasonable to assume. This paper proposes a regressor-free nonlinear controller of low complexity which ensures prescribed performance position error tracking subject to unknown endogenous and exogenous bounded dynamics, assuming that joint position and velocity measurements are available. It is theoretically shown and demonstrated by a simulation study that the proposed controller can guarantee tracking of the desired joint position trajectory with a priori determined accuracy, overshoot and speed of response. Preliminary experimental results on a simplified system are promising for validating the controller on more complex structures.

  • 218.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. Chalmers University of Technology, Sweden.
    Droukas, L.
    Doulgeri, Z.
    Operational space robot control for motion performance and safe interaction under Unintentional Contacts, 2017. In: 2016 European Control Conference, ECC 2016, Institute of Electrical and Electronics Engineers Inc., 2017, p. 407-412. Conference paper (Refereed)
    Abstract [en]

    A control law achieving high-quality motion performance and compliant reaction to unintended contacts for robot manipulators is proposed in this work. It achieves prescribed performance evolution of the position error under disturbance forces up to a tunable level of magnitude. Beyond this level, it deviates from the desired trajectory, complying with what is now interpreted as an unintentional contact force, thus achieving enhanced safety by decreasing interaction forces. The controller is a passivity-based, model-based controller utilizing an artificial potential that induces vanishing vector fields. Simulation results with a three degrees of freedom (DOF) robot under the control of the proposed scheme verify the theoretical findings and illustrate motion performance and compliance under an external force of short duration, in comparison with a switched impedance scheme.

  • 219.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden.
    Papageorgiou, D.
    Doulgeri, Z.
    A Model-Free Controller for Guaranteed Prescribed Performance Tracking of Both Robot Joint Positions and Velocities, 2016. In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 1, no 1, p. 267-273, article id 7377028. Article in journal (Refereed)
    Abstract [en]

    The problem of robot joint position and velocity tracking with prescribed performance guarantees is considered. The proposed controller is able to guarantee a prescribed transient and steady state behavior for the position and the velocity tracking errors without utilizing either the robot dynamic model or any approximation structures. Its performance is demonstrated and assessed via experiments with a KUKA LWR4+ arm. 

  • 220.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Mapping Human Intentions to Robot Motions via Physical Interaction Through a Jointly-held Object, 2014. In: Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, 2014, p. 391-397. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of human-robot collaborative manipulation of an object, where the human is active in controlling the motion, and the robot is passively following the human's lead. Assuming that the human grasp of the object only allows for transfer of forces and not torques, there is an ambiguity as to whether the human desires translation or rotation. We analyze different approaches to this problem both theoretically and in experiments. This leads to the proposal of a control methodology that uses switching between two different admittance control modes based on the magnitude of measured force to achieve disambiguation of the rotation/translation problem.
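
A hedged sketch of an admittance law that switches mode on the magnitude of the measured force, in the spirit of the methodology described above; which force level maps to which mode, the threshold and all gains are assumptions, not the paper's tuning.

```python
import numpy as np

def switching_admittance(force, grasp_offset, f_switch=10.0, k_trans=0.02, k_rot=0.5):
    """Return commanded (linear velocity, angular velocity) for the end effector.

    force: measured 3-D force; grasp_offset: vector from the object frame to the grasp point.
    """
    if np.linalg.norm(force) > f_switch:
        # Translation mode: follow the applied force.
        return k_trans * np.asarray(force), np.zeros(3)
    # Rotation mode: rotate about the grasp point, driven by the torque the
    # human's force produces about it (tau = r x f).
    return np.zeros(3), k_rot * np.cross(grasp_offset, force)
```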

  • 221.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Online Contact Point Estimation for Uncalibrated Tool Use, 2014. In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE Robotics and Automation Society, 2014, p. 2488-2493. Conference paper (Refereed)
    Abstract [en]

    One of the big challenges for robots working outside of traditional industrial settings is the ability to robustly and flexibly grasp and manipulate tools for various tasks. When a tool is interacting with another object during task execution, several problems arise: the tool can be partially or completely occluded from the robot's view, and it can slip or shift in the robot's hand, so the robot may lose the information about the exact position of the tool in the hand. Thus, there is a need for online calibration and/or recalibration of the tool. In this paper, we present a model-free online tool-tip calibration method that uses force/torque measurements and an adaptive estimation scheme to estimate the point of contact between a tool and the environment. An adaptive force control component guarantees that interaction forces are limited even before the contact point estimate has converged. We also show how to simultaneously estimate the location and normal direction of the surface being touched by the tool-tip as the contact point is estimated. The stability of the overall scheme and the convergence of the estimated parameters are theoretically proven and the performance is evaluated in experiments on a real robot.
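
The paper uses an adaptive online estimator; as a simpler stand-in that exposes the same geometric model, the sketch below solves tau = r x f for the contact point r in batch least-squares form from stacked force/torque samples. Function names are illustrative.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_contact_point(forces, torques):
    """Least-squares contact point r (sensor frame) from (N, 3) force/torque samples.

    Uses tau = r x f = -skew(f) @ r; needs force directions that are not all parallel.
    """
    forces = np.asarray(forces, dtype=float)
    torques = np.asarray(torques, dtype=float)
    A = np.vstack([-skew(f) for f in forces])
    b = torques.reshape(-1)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r
```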

  • 222.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Online Kinematics Estimation for Active Human-Robot Manipulation of Jointly Held Objects, 2013. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, p. 4872-4878. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a method for estimating the constraints imposed by a human agent on a jointly manipulated object. These estimates can be used to infer knowledge of where the human is grasping an object, enabling the robot to plan trajectories for manipulating the object while subject to the constraints. We describe the method in detail, motivate its validity theoretically, and demonstrate its use in co-manipulation tasks with a real robot.

  • 223.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Model-free robot manipulation of doors and drawers by means of fixed-grasps, 2013. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, p. 4485-4492. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of robot interaction with objects attached to the environment through joints such as doors or drawers. We propose a methodology that requires no prior knowledge of the objects' kinematics, including the type of joint - either prismatic or revolute. The method consists of a velocity controller which relies on force/torque measurements and estimation of the motion direction, rotational axis and the distance from the center of rotation. The method is suitable for any velocity controlled manipulator with a force/torque sensor at the end-effector. The force/torque control regulates the applied forces and torques within given constraints, while the velocity controller ensures that the end effector moves with a task-related desired tangential velocity. The paper also provides a proof that the estimates converge to the actual values. The method is evaluated in different scenarios typically met in a household environment.

  • 224.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    "Open Sesame!" Adaptive Force/Velocity Control for Opening Unknown Doors2012In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE , 2012, p. 4040-4047Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally less structured than industrial environments, several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The method consists of a velocity controller which uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction following the concept of hybrid force/motion control. A force controller acting within the velocity controller regulates the radial force to a desired small value while the velocity controller ensures that the end effector of the robot moves with a desired tangential velocity leading to task completion. This paper also provides a proof that the adaptive estimates of the radial direction converge to the actual radial vector. The performance of the control scheme is demonstrated in both simulation and on a real robot.
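
A minimal planar sketch of the radial/tangential decomposition described above: given an estimate of the hinge position (which in the paper comes from the adaptive estimator, here simply an input), the commanded velocity regulates the radial force towards a small desired value while moving with a constant tangential speed. Gains, signs and names are illustrative assumptions.

```python
import numpy as np

def door_opening_command(p, c_hat, f_meas, v_tan=0.05, f_rad_des=2.0, k_f=0.002):
    """2-D end-effector velocity for opening a door about the estimated hinge c_hat.

    p: end-effector position, c_hat: estimated hinge position, f_meas: measured force.
    """
    radial = (p - c_hat) / np.linalg.norm(p - c_hat)
    tangential = np.array([-radial[1], radial[0]])   # radial rotated by 90 degrees
    f_rad = float(f_meas @ radial)                   # measured radial force component
    # Regulate the radial force towards a small desired value and move tangentially.
    return v_tan * tangential + k_f * (f_rad_des - f_rad) * radial
```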

  • 225.
    Kjellström, Hedvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Black, Michael J.
    Tracking People Interacting with Objects, 2010. In: 2010 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2010, p. 747-754. Conference paper (Refereed)
    Abstract [en]

    While the problem of tracking 3D human motion has been widely studied, most approaches have assumed that the person is isolated and not interacting with the environment. Environmental constraints, however, can greatly constrain and simplify the tracking problem. The most studied constraints involve gravity and contact with the ground plane. We go further to consider interaction with objects in the environment. In many cases, tracking rigid environmental objects is simpler than tracking high-dimensional human motion. When a human is in contact with objects in the world, their poses constrain the pose of the body, essentially removing degrees of freedom. Thus what would appear to be a harder problem, combining object and human tracking, is actually simpler. We use a standard formulation of the body tracking problem but add an explicit model of contact with objects. We find that constraints from the world make it possible to track complex articulated human motion in 3D from a monocular camera.

  • 226.
    Kjellström, Hedvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Romero, Javier
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Visual object-action recognition: Inferring object affordances from human demonstration, 2011. In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 115, no 1, p. 81-90. Article in journal (Refereed)
    Abstract [en]

    This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.

  • 227.
    Kjellström, Hedvig
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Romero, Javier
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Visual Recognition of Grasps for Human-to-Robot Mapping, 2008. In: 2008 IEEE/RSJ International Conference On Robots And Intelligent Systems, Vols 1-3, Conference Proceedings / [ed] Chatila, R; Kelly, A; Merlet, JP, 2008, p. 3192-3199. Conference paper (Refereed)
    Abstract [en]

    This paper presents a vision based method for grasp classification. It is developed as part of a Programming by Demonstration (PbD) system for which recognition of objects and pick-and-place actions represent basic building blocks for task learning. In contrast to earlier approaches, no articulated 3D reconstruction of the hand over time takes place. The input data consists of a single image of the human hand. A 2D representation of the hand shape, based on gradient orientation histograms, is extracted from the image. The hand shape is then classified as one of six grasps by finding similar hand shapes in a large database of grasp images. The database search is performed using Locality Sensitive Hashing (LSH), an approximate k-nearest neighbor approach. The nearest neighbors also give an estimated hand orientation with respect to the camera. The six human grasps are mapped to three Barrett hand grasps. Depending on the type of robot grasp, a precomputed grasp strategy is selected. The strategy is further parameterized by the orientation of the hand relative to the object. To evaluate the potential for the method to be part of a robust vision system, experiments were performed, comparing classification results to a baseline of human classification performance. The experiments showed the LSH recognition performance to be comparable to human performance.

  • 228.
    Kootstra, Geert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Wilming, N.
    Schmidt, N. M.
    Djurfeldt, Mikael
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for High Performance Computing, PDC.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    König, P.
    Learning and adaptation of sensorimotor contingencies: Prism-adaptation, a case study, 2012. In: From Animals to Animats 12, Springer Berlin/Heidelberg, 2012, Vol. 7426 LNAI, p. 341-350. Conference paper (Refereed)
    Abstract [en]

    This paper focuses on learning and adaptation of sensorimotor contingencies. As a specific case, we investigate the application of prism glasses, which change visual-motor contingencies. After an initial disruption of sensorimotor coordination, humans quickly adapt. However, the scope and generalization of that adaptation are highly dependent on the type of feedback and exhibit markedly different degrees of generalization. We apply a model with a specific interaction of forward and inverse models to a robotic setup and subject it to the identical experiments that have been used in previous human psychophysical studies. Our model demonstrates both locally specific adaptation and global generalization in accordance with the psychophysical experiments. These results emphasize the role of the motor system for sensory processes and open an avenue to improve on sensorimotor processing.

  • 229.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Fast and Automatic Detection and Segmentation of Unknown Objects, 2010. In: Proceedings of the 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE, 2010, p. 442-447. Conference paper (Refereed)
    Abstract [en]

    This paper focuses on the fast and automatic detection and segmentation of unknown objects in unknown environments. Many existing object detection and segmentation methods assume prior knowledge about the object or human interference. However, an autonomous system operating in the real world will often be confronted with previously unseen objects. To solve this problem, we propose a segmentation approach named Automatic Detection And Segmentation (ADAS). For the detection of objects, we use symmetry, one of the Gestalt principles for figure-ground segregation, to detect salient objects in a scene. From the initial seed, the object is segmented by iteratively applying graph cuts. We base the segmentation on both 2D and 3D cues: color, depth, and plane information. Instead of using a standard grid-based representation of the image, we use superpixels. Besides being a more natural representation, the use of superpixels greatly improves the processing time of the graph cuts, and provides more noise-robust color and depth information. The results show that both the object-detection and the object-segmentation methods are successful and outperform existing methods.

  • 230.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gestalt Principles for Attention and Segmentation in Natural and Artificial Vision Systems, 2011. In: Semantic Perception, Mapping and Exploration (SPME), ICRA 2011 Workshop, eSMCs, 2011. Conference paper (Refereed)
    Abstract [en]

    Gestalt psychology studies how the human visual system organizes the complex visual input into unitary elements. In this paper we show how the Gestalt principles for perceptual grouping and for figure-ground segregation can be used in computer vision. A number of studies will be shown that demonstrate the applicability of Gestalt principles for the prediction of human visual attention and for the automatic detection and segmentation of unknown objects by a robotic system.

  • 231.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Using Symmetry to Select Fixation Points for Segmentation, 2010. In: Proceedings of the 20th International Conference on Pattern Recognition, IEEE, 2010, p. 3894-3897. Conference paper (Refereed)
    Abstract [en]

    For the interpretation of a visual scene, it is important for a robotic system to pay attention to the objects in the scene and segment them from their background. We focus on the segmentation of previously unseen objects in unknown scenes. The attention model therefore needs to be bottom-up and context-free. In this paper, we propose the use of symmetry, one of the Gestalt principles for figure-ground segregation, to guide the robot's attention. We show that our symmetry-saliency model outperforms the contrast-saliency model proposed in earlier work. The symmetry model performs better in finding the objects of interest and selects a fixation point closer to the center of the object. Moreover, the objects are better segmented from the background when the initial points are selected on the basis of symmetry.

  • 232.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    de Boer, Bart
    Schomaker, Lambert R. B.
    Predicting Eye Fixations on Complex Visual Stimuli Using Local Symmetry, 2011. In: Cognitive Computation, ISSN 1866-9956, E-ISSN 1866-9964, Vol. 3, no 1, p. 223-240. Article in journal (Refereed)
    Abstract [en]

    Most bottom-up models that predict human eye fixations are based on contrast features. The saliency model of Itti, Koch and Niebur is an example of such contrast-saliency models. Although the model has been successfully compared to human eye fixations, we show that it lacks preciseness in the prediction of fixations on mirror-symmetrical forms. The contrast model gives high response at the borders, whereas human observers consistently look at the symmetrical center of these forms. We propose a saliency model that predicts eye fixations using local mirror symmetry. To test the model, we performed an eye-tracking experiment with participants viewing complex photographic images and compared the data with our symmetry model and the contrast model. The results show that our symmetry model predicts human eye fixations significantly better on a wide variety of images including many that are not selected for their symmetrical content. Moreover, our results show that especially early fixations are on highly symmetrical areas of the images. We conclude that symmetry is a strong predictor of human eye fixations and that it can be used as a predictor of the order of fixation.
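
A toy local-symmetry response, not the published operator: each patch of the gradient-magnitude image is correlated with its horizontally mirrored version, so mirror-symmetric regions score high. Patch size and stride are arbitrary assumptions.

```python
import numpy as np

def local_symmetry_map(gray, patch=16, stride=8):
    """Normalised correlation of each gradient-magnitude patch with its mirror image."""
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    rows, cols = (h - patch) // stride + 1, (w - patch) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            p = mag[i * stride:i * stride + patch, j * stride:j * stride + patch]
            p = p - p.mean()
            m = p[:, ::-1]                           # horizontally mirrored patch
            denom = np.sqrt((p ** 2).sum() * (m ** 2).sum()) + 1e-9
            out[i, j] = (p * m).sum() / denom
    return out
```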

  • 233.
    Kootstra, Gert
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Popovic, Mila
    Jorgensen, Jimmy Alison
    Kuklinski, Kamil
    Miatliuk, Konstantsin
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Krueger, Norbert
    Enabling grasping of unknown objects through a synergistic use of edge and surface information, 2012. In: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 31, no 10, p. 1190-1213. Article in journal (Refereed)
    Abstract [en]

    Grasping unknown objects based on visual input, where no a priori knowledge about the objects is used, is a challenging problem. In this paper, we present an Early Cognitive Vision system that builds a hierarchical representation based on edge and texture information which provides a sparse but powerful description of the scene. Based on this representation, we generate contour-based and surface-based grasps. We test our method in two real-world scenarios, as well as on a vision-based grasping benchmark providing a hybrid scenario using real-world stereo images as input and a simulator for extensive and repetitive evaluation of the grasps. The results show that the proposed method is able to generate successful grasps, and in particular that the contour and surface information are complementary for the task of grasping unknown objects. This allows for dealing with rather complex scenes.

  • 234. Kostavelis, I.
    et al.
    Boukas, E.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gasteratos, A.
    Path tracing on polar depth maps for robot navigation2012In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Berlin/Heidelberg, 2012, p. 395-404Conference paper (Refereed)
    Abstract [en]

    In this paper a Cellular Automata-based (CA) path estimation algorithm suitable for safe robot navigation is presented. The proposed method combines well-established 3D vision techniques with CA operations and traces a collision-free route from the foot of the robot to the horizon of a scene. Firstly, the depth map of the scene is obtained and a polar transformation is then applied. A v-disparity image calculation processing step is applied to the initial depth map, separating the ground plane from the obstacles. In the next step, a CA floor field is formed representing all the distances from the robot to the traversable regions in the scene. The target point that the robot should move towards is then identified, and an additional CA routine is applied to the floor field, revealing a traversable route that the robot should follow to reach its target location.
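
    The v-disparity step mentioned above is a common way to separate the ground plane from obstacles: each image row contributes a histogram of the disparities occurring in it, and a planar ground surface then appears as a slanted line in the resulting (row, disparity) image. A minimal version might look like the sketch below; the bin count and the ground/obstacle decision rule are illustrative assumptions.

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Row-wise disparity histogram (the 'v-disparity' image).

    disparity : (H, W) integer disparity map, invalid pixels <= 0.
    Returns an (H, max_disp) array whose row r counts how often each
    disparity value occurs in image row r.
    """
    h, _ = disparity.shape
    vdisp = np.zeros((h, max_disp), dtype=np.int32)
    for r in range(h):
        row = disparity[r]
        valid = row[(row > 0) & (row < max_disp)]
        vdisp[r] = np.bincount(valid.astype(int), minlength=max_disp)[:max_disp]
    return vdisp

# Typical use: extract the dominant slanted line in `vdisp` (e.g. with a Hough
# transform); pixels whose disparity is clearly larger than what the ground line
# predicts for their row are treated as obstacles.
```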

  • 235. Kostavelis, I.
    et al.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gasteratos, A.
    Collision risk assessment for autonomous robots by offline traversability learning2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 11, p. 1367-1376Article in journal (Refereed)
    Abstract [en]

    Autonomous robots should be able to move freely in unknown environments and avoid impacts with obstacles. The overall traversability estimation of the terrain and the subsequent selection of an obstacle-free route are prerequisites of a successful autonomous operation. This work proposes a computationally efficient technique for the traversability estimation of the terrain, based on a machine learning classification method. Additionally, a new method for collision risk assessment is introduced. The proposed system uses stereo vision as a first step in order to obtain information about the depth of the scene. Then, a v-disparity image calculation processing step extracts information-rich features about the characteristics of the scene, which are used to train a support vector machine (SVM) separating the traversable and non-traversable scenes. The ones classified as traversable are further processed exploiting the polar transformation of the depth map. The result is a distribution of obstacle existence likelihoods for each direction, parametrized by the robot's embodiment.
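
    The classification stage described above can be sketched with a standard SVM implementation (scikit-learn is assumed here); flattening a normalised v-disparity image into the feature vector is one plausible reading of the "information-rich features", not necessarily the exact features used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def vdisp_features(vdisp):
    """Flatten and normalise a v-disparity image into one feature vector."""
    v = vdisp.astype(float).ravel()
    return v / (v.sum() + 1e-9)

def train_traversability_classifier(vdisp_images, labels):
    """labels: 1 = traversable scene, 0 = non-traversable (labels defined for this sketch)."""
    X = np.vstack([vdisp_features(v) for v in vdisp_images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, labels)
    return clf

def is_traversable(clf, vdisp):
    """Only scenes classified as traversable would be passed on to the
    polar-depth-map stage that estimates per-direction collision risk."""
    return bool(clf.predict(vdisp_features(vdisp)[None, :])[0])
```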

  • 236. Kostavelis, I.
    et al.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gasteratos, A.
    Object recognition using saliency maps and HTM learning2012In: Imaging Systems and Techniques (IST), 2012 IEEE International Conference on, IEEE , 2012, p. 528-532Conference paper (Refereed)
    Abstract [en]

    In this paper a pattern classification and object recognition approach based on bio-inspired techniques is presented. It exploits the Hierarchical Temporal Memory (HTM) topology, which imitates the human neocortex for recognition and categorization tasks. The HTM comprises a hierarchical tree structure that exploits enhanced spatiotemporal modules to memorize objects appearing in various orientations. In accordance with HTM's biological inspiration, human vision mechanisms can be used to preprocess the input images. Therefore, the input images undergo a saliency computation step, revealing the plausible information of the scene, where a human might fixate. The adoption of the saliency detection module releases the HTM network from memorizing redundant information and augments the classification accuracy. The efficiency of the proposed framework has been experimentally evaluated on the ETH-80 dataset, and the classification accuracy has been found to be greater than that of other HTM systems.
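
    The role of the saliency step (cropping the input to the region a human would plausibly fixate, before classification) can be illustrated with any off-the-shelf saliency model. The sketch below uses the simple spectral-residual saliency of Hou and Zhang purely as a stand-in; the paper's own saliency module and the window size are assumptions for this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(gray):
    """Spectral-residual saliency (Hou & Zhang); a stand-in saliency model."""
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3, mode="wrap")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = uniform_filter(sal, size=9)          # crude post-smoothing of the map
    return sal / (sal.max() + 1e-9)

def crop_most_salient(image, saliency, size=64):
    """Crop a size x size window around the saliency peak (assumes the image is
    larger than the window); this crop, not the full image, goes to the classifier."""
    r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
    r0 = int(np.clip(r - size // 2, 0, image.shape[0] - size))
    c0 = int(np.clip(c - size // 2, 0, image.shape[1] - size))
    return image[r0:r0 + size, c0:c0 + size]
```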

  • 237. Kraft, Dirk
    et al.
    Pugeault, Nicolas
    Baseski, Emre
    Popovic, Mila
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kalkan, Sinan
    Woergoetter, Florentin
    Krueger, Norbert
    Birth Of The Object: Detection Of Objectness And Extraction Of Object Shape Through Object-Action Complexes2008In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 5, no 2, p. 247-265Article in journal (Refereed)
    Abstract [en]

    We describe a process in which the segmentation of objects as well as the extraction of the object shape is realized through active exploration of a robot vision system. In the exploration process, two behavioral modules that link robot actions to the visual and haptic perception of objects interact. First, by making use of an object-independent grasping mechanism, physical control over potential objects can be gained. Once the initial grasping mechanism has been evaluated as successful, a second behavior extracts the object shape by making use of prediction based on the motion induced by the robot. This also leads to the concept of an "object" as a set of features that change predictably over different frames. The system is equipped with a certain degree of generic prior knowledge about the world in terms of a sophisticated visual feature extraction process in an early cognitive vision system, knowledge about its own embodiment as well as knowledge about geometric relationships such as rigid body motion. This prior knowledge allows the extraction of representations that are semantically richer compared to many other approaches.

  • 238. Kraft, Dirk
    et al.
    Pugeault, Nicolas
    Baseski, Emre
    Popovic, Mila
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kalkan, Sinan
    Woergoetter, Florentin
    Krueger, Norbert
    BIRTH OF THE OBJECT: DETECTION OF OBJECTNESS AND EXTRACTION OF OBJECT SHAPE THROUGH OBJECT-ACTION COMPLEXES (vol 5, pg 247, 2008)2009In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 6, no 3, p. 561-561Article in journal (Refereed)
  • 239.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Strategies for object manipulation using foveal and peripheral vision2006In: International Conference on Computer Vision Systems (ICVS), New York, USA, IEEE Computer Society, 2006, p. 50-Conference paper (Refereed)
    Abstract [en]

    Computer vision is gaining significant importance as a cheap, passive, and information-rich sensor in research areas such as unmanned vehicles, medical robotics, human-machine interaction, autonomous navigation, robotic manipulation and grasping. However, a current trend is to build computer vision systems that are used to perform a specific task, which makes it hard to reuse the ideas across different disciplines. In this paper, we concentrate on vision strategies for robotic manipulation tasks in a domestic environment. This work is an extension of our ongoing work on the development of a general vision system for robotic applications. In particular, given fetch-and-carry type of tasks, the issues related to the whole detect-approach-grasp loop are considered.

  • 240.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Eklundh, Jan-Olof
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision for robotic object manipulation in domestic settings2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 85-100Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a vision system for robotic object manipulation tasks in natural, domestic environments. Given complex fetch-and-carry robot tasks, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection to both foveal and peripheral vision to provide depth information, segmentation of the object(s) of interest, object recognition, tracking and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.

  • 241.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Robust Visual Servoing2014In: Household Service Robotics, Elsevier, 2014, p. 397-427Chapter in book (Other academic)
    Abstract [en]

    For service robots operating in domestic environments, it is not enough to consider only control level robustness; it is equally important to consider how image information that serves as input to the control process can be used so as to achieve robust and efficient control. In this chapter we present an effort toward the development of robust visual techniques used to guide robots in various tasks. Given a task at hand, we argue that different levels of complexity should be considered; this also defines the choice of the visual technique used to provide the necessary feedback information. We concentrate on visual feedback estimation where we investigate both two- and three-dimensional techniques. In the former case, we are interested in providing coarse information about the object position/velocity in the image plane. In particular, a set of simple visual features (cues) is employed in an integrated framework where voting is used for fusing the responses from individual cues. The experimental evaluation shows the system performance for three different cases of camera-robot configurations most common for robotic systems. For cases where the robot is supposed to grasp the object, a two-dimensional position estimate is often not enough. Complete pose (position and orientation) of the object may be required. Therefore, we present a model-based system where a wire-frame model of the object is used to estimate its pose. Since a number of similar systems have been proposed in the literature, we concentrate on the particular part of the system usually neglected: automatic pose initialization. Finally, we show how a number of existing approaches can successfully be integrated in a system that is able to recognize and grasp fairly textured, everyday objects. One of the examples presented in the experimental section shows a mobile robot performing tasks in a real-world environment: a living room.
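
    The cue-integration idea in the first part of the chapter, where several weak visual cues each produce a response map that is fused by voting, can be sketched generically as below; the particular cues, weights and normalisation are placeholders rather than the chapter's configuration.

```python
import numpy as np

def fuse_cues_by_voting(cue_maps, weights=None):
    """Fuse per-pixel response maps from several visual cues by weighted voting.

    cue_maps : list of (H, W) arrays, one per cue (e.g. colour, motion, texture),
               higher values meaning 'more likely to belong to the target'.
    Returns the fused map and the image position that collected the most votes.
    """
    if weights is None:
        weights = [1.0] * len(cue_maps)
    fused = np.zeros_like(cue_maps[0], dtype=float)
    for w, m in zip(weights, cue_maps):
        m = m.astype(float)
        rng = m.max() - m.min()
        if rng > 0:
            fused += w * (m - m.min()) / rng      # normalise each cue before it votes
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, peak
```

    The peak gives the coarse 2D position estimate; the chapter's point is that the full pose needed for grasping requires the model-based tracker on top of this.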

  • 242.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Advances in robot vision2005In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 52, no 1, p. 1-3Article in journal (Other academic)
  • 243.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ekvall, Staffan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sensor Integration and Task Planning for Mobile Manipulation2004Conference paper (Refereed)
    Abstract [en]

    Robotic mobile manipulation in unstructured environments requires integration of a number of key research areas such as localization, navigation, object recognition, visual tracking/servoing, grasping and object manipulation. It has been demonstrated that, given the above, and through simple sequencing of basic skills, a robust system can be designed [19]. In order to provide the robustness and flexibility required of the overall robotic system in unstructured and dynamic everyday environments, it is important to consider a wide range of individual skills using different sensory modalities. In this work, we consider a combination of deliberative and reactive control together with the use of multiple sensory modalities for modeling and execution of manipulation tasks. Special consideration is given to the design of a vision system necessary for object recognition and scene segmentation as well as learning principles in terms of grasping.

  • 244.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kyrki, Ville
    Initialization and System Modeling in 3D Pose Tracking2006In: 18th International Conference on Pattern Recognition, Vol 4, Proceedings / [ed] Tang, YY; Wang, SP; Lorette, G; Yeung, DS; Yan, H, IEEE Computer Society, 2006, p. 643-646Conference paper (Refereed)
    Abstract [en]

    Initialization and choice of adequate motion models are two important but seldom discussed problems in 3D model-based pose (position and orientation) tracking. In this paper, we propose an automatic initialization approach suitable for textured objects. In addition, we define, study and experimentally evaluate three motion models commonly used in visual servoing and augmented reality.
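
    One of the motion models typically compared in such work is a constant-velocity model used in the prediction step of a Kalman-style pose tracker; the state layout and noise level below are illustrative, not the models evaluated in the paper.

```python
import numpy as np

def constant_velocity_predict(state, cov, dt, q=1e-3):
    """Prediction step of a constant-velocity motion model.

    state : 12-vector [pose (6), pose rate (6)], pose as x, y, z plus small-angle rotations.
    cov   : 12x12 covariance. Returns the predicted state and covariance.
    """
    F = np.eye(12)
    F[:6, 6:] = dt * np.eye(6)        # pose <- pose + dt * rate
    Q = q * np.eye(12)                # crude process noise (tuning placeholder)
    return F @ state, F @ cov @ F.T + Q
```

    A constant-position model is obtained by dropping the rate block (F = I), which is the usual baseline in such comparisons.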

  • 245.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kyrki, Ville
    Lappeenranta University of Technology.
    Unifying Perspectives in Computational and Robot Vision2008Conference proceedings (editor) (Refereed)
    Abstract [en]

    The proceedings contain 12 papers. The topics discussed include: recent trends in computational and robot vision; extracting planar kinematic models using interactive perception; people detection using multiple sensors on a mobile robot; perceiving objects and movements to generate actions on a humanoid robot; pose estimation and feature tracking for robot assisted surgery with medical imaging; a sliding window filter for incremental slam; topological and metric robot localization through computer vision techniques; more vision for slam; maps, objects and contexts for robots; vision-based navigation strategies; and image-based visual servoing with extra task related constraints in a general framework for sensor-based robot systems.

  • 246.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Marayong, P.
    Li, M.
    Okamura, Allison M.
    Hager, G. D.
    Human-machine collaborative systems for microsurgical applications2005In: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 24, no 9, p. 731-741Article in journal (Refereed)
    Abstract [en]

    Human-machine collaborative systems (HMCSs) are systems that amplify or assist human capabilities during the performance of tasks that require both human judgment and robotic precision. We examine the design and performance of HMCSs in the context of microsurgical procedures such as vitreo-retinal eye surgery. Three specific problems considered are: (1) development of systems tools for describing and implementing HMCSs, (2) segmentation of complex tasks into logical components given sensor traces of human task execution, and (3) measurement and evaluation of HMCS performance. These components can be integrated into a complete workstation with the ability to automatically parse traces of user activities into task models, which are loaded into an execution environment to provide the user with assistance using on-line recognition of task states. The major contributions of this work include an XML task graph modeling framework and execution engine, an algorithm for realtime segmentation of user actions using continuous hidden Markov models, and validation techniques for analyzing the performance of HMCSs.
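
    The task-segmentation component, recognizing which logical subtask a sensor trace is in by means of continuous hidden Markov models, can be illustrated offline with a plain Viterbi decoder over Gaussian emissions; the one-dimensional observations and made-up parameters below are for the sketch only, and a real system would decode incrementally on streaming data.

```python
import numpy as np

def viterbi_gaussian(obs, log_pi, log_A, means, variances):
    """Most likely state sequence for 1-D observations with Gaussian emissions.

    obs : (T,) observations; log_pi : (S,) log initial probabilities;
    log_A : (S, S) log transition matrix; means, variances : (S,) per-state Gaussians.
    """
    T, S = len(obs), len(log_pi)
    # Per-frame log-likelihoods under each state's Gaussian, shape (T, S).
    log_b = -0.5 * (np.log(2 * np.pi * variances)
                    + (obs[:, None] - means) ** 2 / variances)
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_b[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A     # (previous state, next state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_b[t]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                 # backtrack the best path
        states[t] = psi[t + 1, states[t + 1]]
    return states
```

    Each decoded state would correspond to one logical task component of the XML task graph, so the state sequence amounts to a segmentation of the user's execution trace.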

  • 247.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Marayong, Panadda
    Li, Ming
    Okamura, Allison M.
    Hager, Gregory D.
    Human-Machine Collaborative Systems for Microsurgical Applications2005In: Robotics Research, Springer Berlin/Heidelberg, 2005, p. 162-171Conference paper (Refereed)
    Abstract [en]

    We describe our current progress in developing Human-Machine Collaborative Systems (HMCSs) for microsurgical applications such as vitreo-retinal eye surgery. Three specific problems considered here are (1) developing systems tools for describing and implementing an HMCS, (2) segmentation of complex tasks into logical components given sensor traces of a human performing the task, and (3) measuring HMCS performance. Our goal is to integrate these into a full microsurgical workstation with the ability to automatically "parse" traces of user execution into a task model, which is then loaded into the execution environment, providing the user with assistance using online recognition of task state. The major contributions of our work to date include an XML task graph modeling framework and execution engine, an algorithm for real-time segmentation of user actions using continuous Hidden Markov Models, and validation techniques for analyzing the performance of HMCSs.

  • 248.
    Kragic, Danica
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vincze, Markus
    Vision for Robotics2010In: Foundations and Trends in Robotics, ISSN 1935-8253, Vol. 1, no 1, p. 1-78Article in journal (Refereed)
    Abstract [en]

    Robot vision refers to the capability of a robot to visually perceive the environment and use this information for the execution of various tasks. Visual feedback has been used extensively for robot navigation and obstacle avoidance. In recent years, there have also been examples that include interaction with people and manipulation of objects. In this paper, we review some of the work that goes beyond using artificial landmarks and fiducial markers for the purpose of implementing vision-based control in robots. We discuss different application areas, both from the systems perspective and individual problems such as object tracking and recognition.

  • 249. Krueger, Volker
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ude, Ales
    Geib, Christopher
    The meaning of action: a review on action recognition and mapping2007In: Advanced Robotics, ISSN 0169-1864, E-ISSN 1568-5535, Vol. 21, no 13, p. 1473-1501Article, review/survey (Refereed)
    Abstract [en]

    In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning.

  • 250. Kruijff, G.-J. M.
    et al.
    Zender, H.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Situated dialogue and spatial organization: What, where... and why?2007In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, Vol. 4, no 1, p. 125-138Article in journal (Refereed)
    Abstract [en]

    The paper presents an HRI architecture for human-augmented mapping, which has been implemented and tested on an autonomous mobile robotic platform. Through interaction with a human, the robot can augment its autonomously acquired metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independently performed Wizard-of-Oz studies. The paper discusses an ontology-based approach to multi-layered conceptual spatial mapping that provides a common ground for human-robot dialogue. This is achieved by combining acquired knowledge with innate conceptual commonsense knowledge in order to infer new knowledge. The architecture bridges the gap between the rich semantic representations of the meaning expressed by verbal utterances on the one hand and the robot's internal sensor-based world representation on the other. It is thus possible to establish references to spatial areas in a situated dialogue between a human and a robot about their environment. The resulting conceptual descriptions represent qualitative knowledge about locations in the environment that can serve as a basis for achieving a notion of situational awareness.
