101 - 150 of 437
  • 101.
    Detry, Renaud
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Madry, Marianna
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Piater, Justus
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Generalizing grasps across partly similar objects (2012). In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, p. 3791-3797. Conference paper (Refereed)
    Abstract [en]

    The paper starts by reviewing the challenges associated with grasp planning, and previous work on robot grasping. Our review emphasizes the importance of agents that generalize grasping strategies across objects, and that are able to transfer these strategies to novel objects. In the rest of the paper, we then devise a novel approach to the grasp transfer problem, where generalization is achieved by learning, from a set of grasp examples, a dictionary of object parts by which objects are often grasped. We detail the application of dimensionality reduction and unsupervised clustering algorithms for identifying the size and shape of parts that often predict the application of a grasp. The learned dictionary allows our agent to grasp novel objects which share a part with previously seen objects, by matching the learned parts to the current view of the new object, and selecting the grasp associated with the best-fitting part. We present and discuss a proof-of-concept experiment in which a dictionary is learned from a set of synthetic grasp examples. While prior work in this area focused primarily on shape analysis (parts identified, e.g., through visual clustering, or salient structure analysis), the key aspect of this work is the emergence of parts from both object shape and grasp examples. As a result, parts intrinsically encode the intention of executing a grasp.
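
    As a reading aid, here is a minimal sketch of the kind of pipeline the abstract describes: reduce grasped-part descriptors to a low-dimensional space and cluster them into a small dictionary of graspable parts. The feature layout, library choices and parameters are illustrative assumptions, not the authors' implementation.

        # Illustrative sketch only: learn a small "dictionary" of graspable parts
        # by reducing grasped-part descriptors and clustering them.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def learn_part_dictionary(part_descriptors, n_dims=10, n_parts=8):
            """part_descriptors: (N, D) array, one row per observed grasped part."""
            pca = PCA(n_components=n_dims)
            reduced = pca.fit_transform(part_descriptors)
            kmeans = KMeans(n_clusters=n_parts, n_init=10).fit(reduced)
            return pca, kmeans                     # dictionary = cluster centres

        def best_matching_part(pca, kmeans, new_part_descriptor):
            """Index of the dictionary part that best fits a part seen on a new object."""
            z = pca.transform(new_part_descriptor.reshape(1, -1))
            return int(kmeans.predict(z)[0])

    A grasp for a novel object would then be chosen by looking up the grasp associated with the returned dictionary part.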

  • 102.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sufficient Conditions for Decentralized Potential Functions Based Controllers Using Canonical Vector Fields (2012). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 57, no 10, p. 2621-2626. Article in journal (Refereed)
    Abstract [en]

    A combination of dual Lyapunov analysis and properties of decentralized navigation function based controllers is used to check the stability properties of a certain class of decentralized controllers for navigation and collision avoidance in multiagent systems. The derived results yield a less conservative condition than previous approaches, which relates to the negativity of the sum of the minimum eigenvalues of the Hessian matrices at the critical points, instead of requiring each of the eigenvalues to be negative itself. This provides an improved characterization of the reachable set of this class of decentralized navigation function based controllers, which is less conservative than the previous results for the same class of controllers.
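
    In the notation assumed here (not taken from the paper), with \varphi_i the navigation function of agent i and x^* a critical point, the relaxed condition described above reads

        \sum_{i=1}^{N} \lambda_{\min}\!\big(\nabla^{2}\varphi_{i}(x^{*})\big) < 0
        \qquad\text{instead of}\qquad
        \lambda_{\min}\!\big(\nabla^{2}\varphi_{i}(x^{*})\big) < 0 \;\;\text{for every } i .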

  • 103.
    Dimarogonas, Dimos V.
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Frazzoli, Emilio
    Johansson, Karl H.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Distributed Event-Triggered Control for Multi-Agent Systems (2012). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 57, no 5, p. 1291-1297. Article in journal (Refereed)
    Abstract [en]

    Event-driven strategies for multi-agent systems are motivated by the future use of embedded microprocessors with limited resources that will gather information and actuate the individual agent controller updates. The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem. A centralized formulation is considered first and then its distributed counterpart, in which agents require knowledge only of their neighbors' states for the controller implementation. The results are then extended to a self-triggered setup, where each agent computes its next update time at the previous one, without having to keep track of the state error that triggers the actuation between two consecutive update instants. The results are illustrated through simulation examples.
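
    A hedged sketch of the kind of triggering rule the abstract refers to, for a first-order agreement (consensus) problem: each agent applies a consensus control computed from event-sampled states and triggers a new broadcast only when its measurement error grows too large relative to a function of the neighbourhood state. The threshold form, gains and data layout below are illustrative assumptions.

        # Illustrative event-triggered consensus step (assumed form of the rule).
        def event_triggered_step(x, x_hat, neighbors, sigma=0.5, dt=0.01):
            """x: current states; x_hat: states broadcast at each agent's last event;
            neighbors: list of neighbor index lists."""
            x = list(x)
            for i in range(len(x)):
                u_i = -sum(x_hat[i] - x_hat[j] for j in neighbors[i])
                x[i] += dt * u_i
            for i in range(len(x)):
                error = abs(x_hat[i] - x[i])                      # measurement error
                threshold = sigma * abs(sum(x[i] - x[j] for j in neighbors[i]))
                if error >= threshold:                            # event: rebroadcast
                    x_hat[i] = x[i]
            return x, x_hat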

  • 104. Do, Martin
    et al.
    Romero, Javier
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Azad, Pedram
    Asfour, Tamim
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dillmann, Rüdiger
    Grasp recognition and mapping on humanoid robots (2009). In: 9th IEEE-RAS International Conference on Humanoid Robots, HUMANOIDS09, 2009, p. 465-471. Conference paper (Refereed)
  • 105.
    Drimus, Alin
    et al.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bilberg, A.
    Mads Clausen Institute for Product Innovation, University of Southern Denmark.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Classification of Rigid and Deformable Objects Using a Novel Tactile Sensor (2011). In: Proceedings of the 15th International Conference on Advanced Robotics (ICAR), IEEE, 2011, p. 427-434. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel tactile-array sensor for use in robotic grippers based on flexible piezoresistive rubber. We start by describing the physical principles of piezoresistive materials, and continue by outlining how to build a flexible tactile-sensor array using conductive thread electrodes. A real-time acquisition system scans the data from the array which is then further processed. We validate the properties of the sensor in an application that classifies a number of household objects while performing a palpation procedure with a robotic gripper. Based on the haptic feedback, we classify various rigid and deformable objects. We represent the array of tactile information as a time series of features and use this as the input for a k-nearest neighbors classifier. Dynamic time warping is used to calculate the distances between different time series. The results from our novel tactile sensor are compared to results obtained from an experimental setup using a Weiss Robotics tactile sensor with similar characteristics. We conclude by exemplifying how the results of the classification can be used in different robotic applications.
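
    A minimal sketch of the classification scheme described above: tactile time series compared with dynamic time warping (DTW) and labelled by a nearest-neighbour vote. The feature choice (a single scalar per frame) is an assumption for illustration.

        # Illustrative DTW distance + 1-nearest-neighbour classification.
        import numpy as np

        def dtw_distance(a, b):
            """a, b: 1-D feature sequences (e.g., total tactile pressure per frame)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def classify(query, train_series, train_labels):
            dists = [dtw_distance(query, s) for s in train_series]
            return train_labels[int(np.argmin(dists))]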

  • 106. Drimus, Alin
    et al.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bilberg, Arne
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Design of a flexible tactile sensor for classification of rigid and deformable objects (2014). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 62, no 1, p. 3-15. Article in journal (Refereed)
    Abstract [en]

    For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system. We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, being easy to mount, and being simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k-nearest-neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits.

  • 107.
    Ek, Carl Henrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    The importance of structure (2011). Conference paper (Refereed)
  • 108.
    Ek, Carl Henrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Song, Dan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Huebner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exploring affordances in robot grasping through latent structure representation (2010). In: The 11th European Conference on Computer Vision (ECCV 2010), 2010. Conference paper (Refereed)
  • 109.
    Ek, Carl Henrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Song, Dan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Huebner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Task Modeling in Imitation Learning using Latent Variable Models (2010). In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, p. 458-553. Conference paper (Refereed)
    Abstract [en]

    An important challenge in robotic research is learning and reasoning about different manipulation tasks from scene observations. In this paper we present a probabilistic model capable of modeling several different types of input sources within the same model. Our model is capable of inferring the task using only partial observations. Further, our framework allows the robot, given partial knowledge of the scene, to reason about what information streams to acquire in order to disambiguate the state-space the most. We present results for task classification, and also reason about the discriminative power of different features for different classes of tasks.

  • 110.
    Ekekrantz, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Adaptive Iterative Closest Keypoint (2013). In: 2013 European Conference on Mobile Robots, ECMR 2013 - Conference Proceedings, New York: IEEE, 2013, p. 80-87. Conference paper (Refereed)
    Abstract [en]

    Finding accurate correspondences between overlapping 3D views is crucial for many robotic applications, from multi-view 3D object recognition to SLAM. This step, often referred to as view registration, plays a key role in determining the overall system performance. In this paper, we propose a fast and simple method for registering RGB-D data, building on the principle of the Iterative Closest Point (ICP) algorithm. In contrast to ICP, our method exploits both point position and visual appearance and is able to smoothly transition the weighting between them with an adaptive metric. This results in robust initial registration based on appearance and accurate final registration using 3D points. Using keypoint clustering we are able to utilize a non-exhaustive search strategy, reducing the runtime of the algorithm significantly. We show through an evaluation on an established benchmark that the method significantly outperforms current methods in both robustness and precision.
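
    A rough sketch of the adaptive-metric idea described above: keypoint correspondences are scored by a blend of appearance distance and 3D point distance, and the blend shifts from appearance toward geometry as the registration proceeds. All names and the weighting schedule are assumptions, not the published algorithm.

        # Illustrative adaptive metric for RGB-D keypoint matching.
        import numpy as np

        def combined_distance(kp_a, kp_b, w_geom):
            """Blend 3D point distance and appearance-descriptor distance."""
            d_geom = np.linalg.norm(kp_a["xyz"] - kp_b["xyz"])
            d_app = np.linalg.norm(kp_a["desc"] - kp_b["desc"])
            return w_geom * d_geom + (1.0 - w_geom) * d_app

        def match_keypoints(src_kps, dst_kps, iteration, total_iters):
            # weight shifts from appearance (robust initialisation) to geometry (precision)
            w_geom = iteration / max(total_iters - 1, 1)
            return [(a, min(dst_kps, key=lambda b: combined_distance(a, b, w_geom)))
                    for a in src_kps]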

  • 111.
    Ekekrantz, Johan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Thippur, Akshaya
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC).
    Probabilistic Primitive Refinement algorithm for colored point cloud data (2015). In: 2015 European Conference on Mobile Robots (ECMR), Lincoln: IEEE conference proceedings, 2015. Conference paper (Refereed)
    Abstract [en]

    In this work we present the Probabilistic Primitive Refinement (PPR) algorithm, an iterative method for accurately determining the inliers of an estimated primitive (such as planes and spheres) parametrization in an unorganized, noisy point cloud. The measurement noise of the points belonging to the proposed primitive surface is modelled using a Gaussian distribution, and the measurements of points extraneous to the proposed surface are modelled as a histogram. Given these models, the probability that a measurement originated from the proposed surface model can be computed. Our novel technique to model the noisy surface from the measurement data does not require a priori given parameters for the sensor noise model. The absence of sensitive parameter selection is a strength of our method. Using the geometric information obtained from such an estimate, the algorithm then builds a color-based model for the surface, further boosting the accuracy of the segmentation. If used iteratively, the PPR algorithm can be seen as a variation of the popular mean-shift algorithm with an adaptive stochastic kernel function.
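
    A small sketch of the probabilistic split described above: each point's residual to the candidate primitive is explained either by a Gaussian surface-noise model or by a histogram model of extraneous points, and the inlier probability follows from Bayes' rule. Variable names and the prior are assumptions for illustration.

        # Illustrative inlier probability for a candidate primitive.
        import numpy as np

        def inlier_probability(residuals, sigma, outlier_hist, bin_edges, prior_inlier=0.5):
            """residuals: point-to-primitive distances; outlier_hist: normalised histogram
            of residuals for points not on the surface."""
            p_in = np.exp(-0.5 * (residuals / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            bins = np.clip(np.digitize(residuals, bin_edges) - 1, 0, len(outlier_hist) - 1)
            p_out = outlier_hist[bins]
            num = prior_inlier * p_in
            return num / (num + (1.0 - prior_inlier) * p_out + 1e-12)   # Bayes' rule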

  • 112.
    Eklundh, Jan-Olof
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Recognition of Objects in the Real World from a Systems Perspective (2005). In: Kuenstliche Intelligenz, ISSN 0933-1875, Vol. 19, no 2, p. 12-17. Article in journal (Refereed)
    Abstract [en]

    Based on a discussion of the requirements for a vision system operating in the real world we present a real-time system that includes a set of behaviours that makes it capable of handling a series of typical tasks. The system is able to localise objects of interests based on multiple cues, attend to the objects and finally recognise them while they are in fixation. A particular aspect of the system concerns the use of 3D cues. We end by showing the system running in practice and present results highlighting the merits of 3D-based attention and segmentation and multiple cues for recognition.

  • 113. Ekvall, S.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hoffmann, F.
    Object recognition and pose estimation using color cooccurrence histograms and geometric modeling (2005). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 23, no 11, p. 943-955. Article in journal (Refereed)
    Abstract [en]

    Robust techniques for object recognition and pose estimation are essential for robotic manipulation and object grasping. In this paper, a novel approach for object recognition and pose estimation based on color cooccurrence histograms and geometric modelling is presented. The particular problems addressed are: (i) robust recognition of objects in natural scenes, (ii) estimation of partial pose using an appearance based approach, and (iii) complete 6DOF model based pose estimation and tracking using geometric models. Our recognition scheme is based on the color cooccurrence histograms embedded in a classical learning framework that facilitates a 'winner-takes-all' strategy across different views and scales. The hypotheses generated in the recognition stage provide the basis for estimating the orientation of the object around the vertical axis. This prior, incomplete pose information is subsequently made precise by a technique that uses a geometric model of the object to estimate and continuously track the complete 6DOF pose of the object. Major contributions of the proposed system are the ability to automatically initiate an object tracking process, its robustness and invariance towards scaling and translations, as well as the computational efficiency, since both recognition and pose estimation rely on the same representation of the object. The performance of the system is evaluated in a domestic environment with changing lighting and background conditions on a set of everyday objects.
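
    For readers unfamiliar with the representation, a toy sketch of a color cooccurrence histogram: counts of quantised colour pairs that occur within a fixed pixel distance of each other. The quantisation, the square neighbourhood and the radius are illustrative choices, not the paper's.

        # Toy colour cooccurrence histogram (illustrative parameters).
        import numpy as np

        def color_cooccurrence_histogram(image, n_bins=8, radius=5):
            """image: (H, W, 3) uint8 RGB; returns an (n_bins**3, n_bins**3) histogram."""
            q = (image.astype(int) * n_bins) // 256              # quantise each channel
            labels = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
            hist = np.zeros((n_bins ** 3, n_bins ** 3))
            H, W = labels.shape
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dy == 0 and dx == 0:
                        continue
                    a = labels[max(0, dy):H + min(0, dy), max(0, dx):W + min(0, dx)]
                    b = labels[max(0, -dy):H + min(0, -dy), max(0, -dx):W + min(0, -dx)]
                    np.add.at(hist, (a.ravel(), b.ravel()), 1)
            return hist / hist.sum()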

  • 114.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Online task recognition and real-time adaptive assistance for computer-aided machine control (2006). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 22, no 5, p. 1029-1033. Article in journal (Refereed)
    Abstract [en]

    Segmentation and recognition of operator-generated motions are commonly facilitated to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online, thus improving the performance in terms of execution time and overall precision. However, the fixtures are typically inflexible, resulting in a degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we present a method for online task tracking and propose the use of adaptive virtual fixtures that can cope with the above problems. Here, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance, thus providing the online decision of how to fixture the movement.
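
    A very small sketch of the adaptation idea: the estimated probability that the operator is following a known subtask trajectory determines how strongly the commanded motion is constrained toward that trajectory. The linear blending below is an assumption for illustration, not the paper's controller.

        # Illustrative adaptive virtual fixture: blend operator motion with the fixture.
        import numpy as np

        def assisted_velocity(v_operator, fixture_dir, p_follow):
            """p_follow: estimated probability (0..1) that the current subtask is followed.
            Low p_follow leaves the operator free to deviate, e.g. around an obstacle."""
            fixture_dir = fixture_dir / np.linalg.norm(fixture_dir)
            v_fixture = np.dot(v_operator, fixture_dir) * fixture_dir
            return p_follow * v_fixture + (1.0 - p_follow) * v_operator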

  • 115.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Integrating active mobile robot object recognition and SLAM in natural environments (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, p. 5792-5797. Conference paper (Refereed)
    Abstract [en]

    Linking semantic and spatial information has become an important research area in robotics since, for robots interacting with humans and performing tasks in natural environments, it is of foremost importance to be able to reason beyond simple geometrical and spatial levels. In this paper, we consider this problem in a service robot scenario where a mobile robot autonomously navigates in a domestic environment, builds a map as it moves along, localizes its position in it, recognizes objects on its way and puts them in the map. The experimental evaluation is performed in a realistic setting where the main concentration is put on the synergy of object recognition and Simultaneous Localization and Mapping systems.

  • 116.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Grasp recognition for programming by demonstration (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, New York, NY: IEEE, 2005, p. 748-753. Conference paper (Refereed)
    Abstract [en]

    The demand for flexible and re-programmable robots has increased the need for programming by demonstration systems. In this paper, grasp recognition is considered in a programming by demonstration framework. Three methods for grasp recognition are presented and evaluated. The first method uses Hidden Markov Models to model the hand posture sequence during the grasp sequence, while the second method relies on the hand trajectory and hand rotation. The third method is a hybrid method, in which both the first two methods are active in parallel. The particular contribution is that all methods rely on the grasp sequence and not just the final posture of the hand. This facilitates grasp recognition before the grasp is completed. Also, by analyzing the entire sequence and not just the final grasp, the decision is based on more information and increased robustness of the overall system is achieved. The experimental results show that both arm trajectory and final hand posture provide important information for grasp classification. By combining them, the recognition rate of the overall system is increased.
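
    A sketch of the first recognition strategy mentioned above: one HMM per grasp type is trained on hand-posture sequences, and a new sequence is labelled by the model with the highest log-likelihood. The hmmlearn API shown is a common choice but an assumption here, not the paper's implementation.

        # Illustrative grasp classification with one HMM per grasp type.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_grasp_models(sequences_by_class, n_states=4):
            """sequences_by_class: {label: list of (T_i, D) posture sequences}."""
            models = {}
            for label, seqs in sequences_by_class.items():
                X = np.vstack(seqs)
                lengths = [len(s) for s in seqs]
                models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
            return models

        def recognize_grasp(models, sequence):
            # score() is the log-likelihood of the observed posture sequence
            return max(models, key=lambda label: models[label].score(sequence))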

  • 117.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integrating object and grasp recognition for dynamic scene interpretation (2005). In: 2005 12th International Conference on Advanced Robotics, New York, NY: IEEE, 2005, p. 331-336. Conference paper (Refereed)
    Abstract [en]

    Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, Programming by Demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple tasks such as what object is moved, where it is moved, and which grasp type was used to move it.

  • 118.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning and evaluation of the approach vector for automatic grasp generation and planning (2007). In: Proceedings - IEEE International Conference on Robotics and Automation: Vols 1-10, 2007, p. 4715-4720. Conference paper (Refereed)
    Abstract [en]

    In this paper, we address the problem of automatic grasp generation for robotic hands where experience and shape primitives are used in synergy so as to provide a basis not only for grasp generation but also for a grasp evaluation process when the exact pose of the object is not available. One of the main challenges in automatic grasping is the choice of the object approach vector, which is dependent both on the object shape and pose as well as the grasp type. Using the proposed method, the approach vector is chosen not only based on the sensory input but also on experience that some approach vectors will provide useful tactile information that finally results in stable grasps. A methodology for developing and evaluating grasp controllers is presented where the focus lies on obtaining stable grasps under imperfect vision. The method is used in a teleoperation or a Programming by Demonstration setting where a human demonstrates to a robot how to grasp an object. The system first recognizes the object and grasp type which can then be used by the robot to perform the same action using a mapped version of the human grasping posture.

  • 119.
    Ekvall, Staffan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Receptive field cooccurrence histograms for object detection (2005). In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-4, 2005, p. 3969-3974. Conference paper (Refereed)
    Abstract [en]

    Object recognition is one of the major research topics in the field of computer vision. In robotics, there is often a need for a system that can locate certain objects in the environment - the capability which we denote as 'object detection'. In this paper, we present a new method for object detection. The method is especially suitable for detecting objects in natural scenes, as it is able to cope with problems such as complex background, varying illumination and object occlusion. The proposed method uses the receptive field representation where each pixel in the image is represented by a combination of its color and response to different filters. Thus, the cooccurrence of certain filter responses within a specific radius in the image serves as information basis for building the representation of the object. The specific goal in this work is the development of an on-line learning scheme that is effective after just one training example but still has the ability to improve its performance with more time and new examples. We describe the details behind the algorithm and demonstrate its strength with an extensive experimental evaluation.

  • 120. Ekvall, Staffan
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Robot Learning from Demonstration: A Task-level Planning Approach (2008). Article in journal (Refereed)
    Abstract [en]

    In this paper, we deal with the problem of learning by demonstration, task level learning and planning for robotic applications that involve object manipulation. Preprogramming robots for execution of complex domestic tasks such as setting a dinner table is of little use, since the same order of subtasks may not be feasible at run time due to the changed state of the world. In our approach, we aim to learn the goal of the task and use a task planner to reach the goal given different initial states of the world. For some tasks, there are underlying constraints that must be fulfilled, and knowing just the final goal is not sufficient. We propose two techniques for constraint identification. In the first case, the teacher can directly instruct the system about the underlying constraints. In the second case, the constraints are identified by the robot itself based on multiple observations. The constraints are then considered in the planning phase, allowing the task to be executed without violating any of them. We evaluate our work on a real robot performing pick-and-place tasks.

  • 121.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Uchibe, E.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Doya, K.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Biologically Inspired Embodied Evolution of Survival (2005). In: 2005 IEEE Congress on Evolutionary Computation, IEEE CEC 2005, Proceedings, 2005, p. 2210-2216. Conference paper (Refereed)
    Abstract [en]

    Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous and autonomous properties of biological evolution. The evaluation, selection and reproduction are carried out by and between the robots, without any need for human intervention. In this paper we propose a biologically inspired embodied evolution framework, which fully integrates self-preservation, recharging from external batteries in the environment, and self-reproduction, pair-wise exchange of genetic material, into a survival system. The individuals are, explicitly, evaluated for the performance of the battery capturing task, but also, implicitly, for the mating task, by the fact that an individual that mates frequently has a larger probability of spreading its genes in the population. We have evaluated our method in simulation experiments, and the simulation results show that the solutions obtained by our embodied evolution method were able to optimize the two survival tasks, battery capturing and mating, simultaneously. We have also performed preliminary experiments in hardware, with promising results.

  • 122.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Uchibe, E.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Doya, K.
    Neural Computation Unit, Okinawa Institute of Science and Technology, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Co-Evolution of Shaping Rewards and Meta-Parameters in Reinforcement Learning (2008). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 16, no 6, p. 400-412. Article in journal (Refereed)
    Abstract [en]

    In this article, we explore an evolutionary approach to the optimization of potential-based shaping rewards and meta-parameters in reinforcement learning. Shaping rewards is a frequently used approach to increase the learning performance of reinforcement learning, with regard to both initial performance and convergence speed. Shaping rewards provide additional knowledge to the agent in the form of richer reward signals, which guide learning to high-rewarding states. Reinforcement learning depends critically on a few meta-parameters that modulate the learning updates or the exploration of the environment, such as the learning rate alpha, the discount factor of future rewards gamma, and the temperature tau that controls the trade-off between exploration and exploitation in softmax action selection. We validate the proposed approach in simulation using the mountain-car task. We also transfer shaping rewards and meta-parameters, evolutionarily obtained in simulation, to hardware, using a robotic foraging task.
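
    As a concrete reminder of one meta-parameter mentioned above, a standard softmax (Boltzmann) action-selection rule with temperature tau; this is textbook reinforcement learning, not code from the article.

        # Softmax (Boltzmann) action selection with temperature tau.
        import numpy as np

        def softmax_action(q_values, tau=1.0, rng=None):
            """High tau gives near-uniform exploration; low tau gives greedy exploitation."""
            rng = rng or np.random.default_rng()
            prefs = np.asarray(q_values, dtype=float) / tau
            prefs -= prefs.max()                    # numerical stability
            probs = np.exp(prefs) / np.exp(prefs).sum()
            return rng.choice(len(probs), p=probs)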

  • 123.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Uchibe, E.
    ATR Computational Neuroscience Labs, Japan.
    Doya, K.
    ATR Computational Neuroscience Labs, Japan.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multi-Agent Reinforcement Learning: Using Macro Actions to Learn a Mating Task (2004). In: 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, 2004, p. 3164-3169. Conference paper (Refereed)
    Abstract [en]

    Standard reinforcement learning methods are inefficient and often inadequate for learning cooperative multi-agent tasks. For these kinds of tasks the behavior of one agent strongly depends on dynamic interaction with other agents, not only on the interaction with a static environment as in standard reinforcement learning. The success of the learning is therefore coupled to the agents' ability to predict the other agents' behaviors. In this study we try to overcome this problem by adding a few simple macro actions, actions that are extended in time for more than one time step. The macro actions improve the learning by making the search of the state space more effective and thereby making the behavior more predictable for the other agent. In this study we have considered a cooperative mating task, which is the first step towards our aim to perform embodied evolution, where the evolutionary selection process is an integrated part of the task. We show, in simulation and hardware, that in the case of learning without macro actions, the agents fail to learn a meaningful behavior. In contrast, for the learning with macro actions, the agents learn a good mating behavior in reasonable time, in both simulation and hardware.

  • 124.
    Elfwing, Stefan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Uchibe, Eiji
    Doya, Kenji
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Darwinian Embodied Evolution of the Learning Ability for Survival (2011). In: Adaptive Behavior, ISSN 1059-7123, E-ISSN 1741-2633, Vol. 19, no 2, p. 101-102. Article in journal (Refereed)
    Abstract [en]

    In this article we propose a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing in subpopulations of virtual agents hosted in each robot. Within this framework, we explore the combination of within-generation learning of basic survival behaviors by reinforcement learning, and evolutionary adaptations over the generations of the basic behavior selection policy, the reward functions, and metaparameters for reinforcement learning. We apply a biologically inspired selection scheme, in which there is no explicit communication of the individuals' fitness information. The individuals can only reproduce offspring by mating, a pair-wise exchange of genotypes, and the probability that an individual reproduces offspring in its own subpopulation is dependent on the individual's "health," that is, energy level, at the mating occasion. We validate the proposed method by comparing it with evolution using standard centralized selection, in simulation, and by transferring the obtained solutions to hardware using two real robots.

  • 125. Faeulhammer, Thomas
    et al.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Burbridge, Christopher
    Zillich, Michael
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hawes, Nick
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vincze, Markus
    Autonomous Learning of Object Models on a Mobile Robot (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no 1, p. 26-33, article id 7393491. Article in journal (Refereed)
    Abstract [en]

    In this article we present and evaluate a system which allows a mobile robot to autonomously detect, model and re-recognize objects in everyday environments. Whilst other systems have demonstrated one of these elements, to our knowledge we present the first system which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modelling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.

  • 126. Fallon, Maurice F.
    et al.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    McClelland, Hunter
    Leonard, John J.
    Relocating Underwater Features Autonomously Using Sonar-Based SLAM (2013). In: IEEE Journal of Oceanic Engineering, ISSN 0364-9059, E-ISSN 1558-1691, Vol. 38, no 3, p. 500-513. Article in journal (Refereed)
    Abstract [en]

    This paper describes a system for reacquiring features of interest in a shallow-water ocean environment, using autonomous underwater vehicles (AUVs) equipped with low-cost sonar and navigation sensors. In performing mine countermeasures, it is critical to enable AUVs to navigate accurately to previously mapped objects of interest in the water column or on the seabed, for further assessment or remediation. An important aspect of the overall system design is to keep the size and cost of the reacquisition vehicle as low as possible, as it may potentially be destroyed in the reacquisition mission. This low-cost requirement prevents the use of sophisticated AUV navigation sensors, such as a Doppler velocity log (DVL) or an inertial navigation system (INS). Our system instead uses the Proviewer 900-kHz imaging sonar from Blueview Technologies, which produces forward-looking sonar (FLS) images at ranges up to 40 m at approximately 4 Hz. In large volumes, it is hoped that this sensor can be manufactured at low cost. Our approach uses a novel simultaneous localization and mapping (SLAM) algorithm that detects and tracks features in the FLS images to renavigate to a previously mapped target. This feature-based navigation (FBN) system incorporates a number of recent advances in pose graph optimization algorithms for SLAM. The system has undergone extensive field testing over a period of more than four years, demonstrating the potential for the use of this new approach for feature reacquisition. In this report, we review the methodologies and components of the FBN system, describe the system's technological features, review the performance of the system in a series of extensive in-water field tests, and highlight issues for future research.

  • 127.
    Fallon, Maurice F.
    et al.
    MIT.
    Johannsson, Hordur
    MIT.
    Kaess, Michael
    MIT.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    McClelland, Hunter
    MIT.
    Englot, Brendan J.
    MIT.
    Hover, Franz S.
    MIT.
    Leonard, John J.
    MIT.
    Simultaneous Localization and Mapping in Marine Environments (2013). In: Marine Robot Autonomy, New York: Springer, 2013, p. 329-372. Chapter in book (Refereed)
    Abstract [en]

    Accurate navigation is a fundamental requirement for robotic systems—marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely low drift rates, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV’s trajectory and the locations of features of interest which can be updated and optimized in real time on board the AUV.

  • 128. Feix, Thomas
    et al.
    Romero, Javier
    Schmiedmayer, Heinz-Bodo
    Dollar, Aaron M.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    The GRASP Taxonomy of Human Grasp Types (2016). In: IEEE Transactions on Human-Machine Systems, ISSN 2168-2291, E-ISSN 2168-2305, Vol. 46, no 1, p. 66-77. Article in journal (Refereed)
    Abstract [en]

    In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed "The GRASP Taxonomy" after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human-computer interaction and tangible user interfaces where an understanding of the human is basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.

  • 129.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Projection of a Markov Process with Neural Networks (2001). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this work we have examined an application from the insurance industry. We first reformulate it into a problem of projecting a markov process. We then develop a method of carrying out the projection many steps into the future by using a combination of neural networks trained using a maximum entropy principle. This methodology improves on current industry standard solution in four key areas: variance, bias, confidence level estimation, and the use of inhomogeneous data. The neural network aspects of the methodology include the use of a generalization error estimate that does not rely on a validation set. We also develop our own approximation to the hessian matrix, which seems to be significantly better than assuming it to be diagonal and much faster than calculating it exactly. This hessian is used in the network pruning algorithm. The parameters of a conditional probability distribution were generated by a neural network, which was trained to maximize the log-likelihood plus a regularization term. In preparing the data for training the neural networks we have devised a scheme to decorrelate input dimensions completely, even non-linear correlations, which should be of general interest in its own right. The results we found indicate that the bias inherent in the current industry-standard projection technique is very significant. This work may be the only accurate measurement made of this important source of error.

  • 130.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Projection of a Markov Process with Neural Networks (2001). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this work we have examined an application from the insurance industry. We first reformulate it into a problem of projecting a markov process. We then develop a method of carrying out the projection many steps into the future by using a combination of neural networks trained using a maximum entropy principle. This methodology improves on current industry standard solution in four key areas: variance, bias, confidence level estimation, and the use of inhomogeneous data. The neural network aspects of the methodology include the use of a generalization error estimate that does not rely on a validation set. We also develop our own approximation to the hessian matrix, which seems to be significantly better than assuming it to be diagonal and much faster than calculating it exactly. This hessian is used in the network pruning algorithm. The parameters of a conditional probability distribution were generated by a neural network, which was trained to maximize the log-likelihood plus a regularization term. In preparing the data for training the neural networks we have devised a scheme to decorrelate input dimensions completely, even non-linear correlations, which should be of general interest in its own right. The results we found indicate that the bias inherent in the current industry-standard projection technique is very significant. This work may be the only accurate measurement made of this important source of error.

  • 131.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robustness of the Quadratic Antiparticle Filter for Robot Localization (2011). In: European Conference on Mobile Robots / [ed] Achim J. Lilienthal and Tom Duckett, 2011, p. 297-302. Conference paper (Refereed)
    Abstract [en]

    Robot localization using odometry and feature measurements is a nonlinear estimation problem. An efficient solution is found using the extended Kalman filter, EKF. The EKF however suffers from divergence and inconsistency when the nonlinearities are significant. We recently developed a new type of filter based on an auxiliary variable Gaussian distribution which we call the antiparticle filter, AF, as an alternative nonlinear estimation filter that has improved consistency and stability. The AF reduces to the iterative EKF, IEKF, when the posterior distribution is well represented by a simple Gaussian. It transitions to a more complex representation as required. We have implemented an example of the AF which uses a parameterization of the mean as a quadratic function of the auxiliary variables, which we call the quadratic antiparticle filter, QAF. We present simulations of robot feature-based localization in which we examine the robustness to bias and disturbances, with comparison to the EKF.

  • 132.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    The Antiparticle Filter: an Adaptive Nonlinear Estimator (2011). In: International Symposium of Robotics Research, 2011. Conference paper (Refereed)
    Abstract [en]

    We introduce the antiparticle filter, AF, a new type of recursive Bayesian estimator that is unlike either the extended Kalman Filter, EKF, unscented Kalman Filter, UKF or the particle filter PF. We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary variable Gaussian which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than a particle filter for the same accuracy. We have simulated comparisons of two types of AF to the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF demonstrating that AF can reduce the error to a consistent accurate value.

  • 133.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    The antiparticle filter—an adaptive nonlinear estimator (2017). In: 15th International Symposium of Robotics Research, 2011, Springer, 2017, p. 219-234. Conference paper (Refereed)
    Abstract [en]

    We introduce the antiparticle filter, AF, a new type of recursive Bayesian estimator that is unlike either the extended Kalman Filter, EKF, unscented Kalman Filter, UKF or the particle filter PF. We show that for a classic problem of robot localization the AF can substantially outperform these other filters in some situations. The AF estimates the posterior distribution as an auxiliary variable Gaussian which gives an analytic formula using no random samples. It adaptively changes the complexity of the posterior distribution as the uncertainty changes. It is equivalent to the EKF when the uncertainty is low while being able to represent non-Gaussian distributions as the uncertainty increases. The computation time can be much faster than a particle filter for the same accuracy. We have simulated comparisons of two types of AF to the EKF, the iterative EKF, the UKF, an iterative UKF, and the PF demonstrating that AF can reduce the error to a consistent accurate value.

  • 134.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    Closing the Loop With Graphical SLAM (2007). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no 4, p. 731-741. Article in journal (Refereed)
    Abstract [en]

    The problem of simultaneous localization and mapping (SLAM) is addressed using a graphical method. The main contributions are a computational complexity that scales well with the size of the environment, the elimination of most of the linearization inaccuracies, and a more flexible and robust data association. We also present a detection criterion for closing loops. We show how multiple topological constraints can be imposed on the graphical solution by a process of coarse fitting followed by fine tuning. The coarse fitting is performed using an approximate system. This approximate system can be shown to possess all the local symmetries. Observations made during the SLAM process often contain symmetries, that is to say, directions of change to the state space that do not affect the observed quantities. It is important that these directions do not shift as we approximate the system by, for example, linearization. The approximate system is both linear and block diagonal. This makes it a very simple system to work with especially when imposing global topological constraints on the solution. These global constraints are nonlinear. We show how these constraints can be discovered automatically. We develop a method of testing multiple hypotheses for data matching using the graph. This method is derived from statistical theory and only requires simple counting of observations. The central insight is to examine the probability of not observing the same features on a return to a region. We present results with data from an outdoor scenario using a SICK laser scanner.
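
    To illustrate the graph-based flavour of such methods, the sketch below relaxes a small 2D pose graph with Gauss-Newton; it is a generic illustration under assumed unit information matrices and does not reproduce the paper's coarse fitting, fine tuning, or observation-counting hypothesis test.

    import numpy as np

    def wrap(a):
        return (a + np.pi) % (2 * np.pi) - np.pi

    def relative(xi, xj):
        """Pose of node j expressed in the frame of node i; poses are (x, y, theta)."""
        dx, dy = xj[0] - xi[0], xj[1] - xi[1]
        c, s = np.cos(xi[2]), np.sin(xi[2])
        return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])

    def optimize(poses, edges, iters=10):
        """edges: list of (i, j, z) with z the measured relative pose of j in i.
        Node 0 is held fixed as the gauge."""
        x, n = poses.copy(), len(poses)
        for _ in range(iters):
            H, b = np.zeros((3 * n, 3 * n)), np.zeros(3 * n)
            for i, j, z in edges:
                e = relative(x[i], x[j]) - z
                e[2] = wrap(e[2])
                # Numerical Jacobians keep the sketch short.
                Ji, Jj, eps = np.zeros((3, 3)), np.zeros((3, 3)), 1e-6
                for k in range(3):
                    d = np.zeros(3)
                    d[k] = eps
                    Ji[:, k] = (relative(x[i] + d, x[j]) - relative(x[i], x[j])) / eps
                    Jj[:, k] = (relative(x[i], x[j] + d) - relative(x[i], x[j])) / eps
                si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
                H[si, si] += Ji.T @ Ji
                H[sj, sj] += Jj.T @ Jj
                H[si, sj] += Ji.T @ Jj
                H[sj, si] += Jj.T @ Ji
                b[si] += Ji.T @ e
                b[sj] += Jj.T @ e
            H[:3, :3] += 1e6 * np.eye(3)          # hold the first pose fixed (gauge)
            x = x + np.linalg.solve(H, -b).reshape(n, 3)
            x[:, 2] = wrap(x[:, 2])
        return x

    poses = np.array([[0, 0, 0], [1, 0, 0.1], [2, 0.1, 0.2]], dtype=float)
    edges = [(0, 1, np.array([1.0, 0.0, 0.0])),
             (1, 2, np.array([1.0, 0.0, 0.0])),
             (0, 2, np.array([2.0, 0.0, 0.0]))]  # a redundant, loop-like constraint
    print(optimize(poses, edges))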

  • 135.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM: a self-correcting map2004In: 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, PROCEEDINGS, 2004, p. 383-390Conference paper (Refereed)
    Abstract [en]

    We describe an approach to simultaneous localization and mapping, SLAM. This approach has the highly desirable property of robustness to data association errors. Another important advantage of our algorithm is that non-linearities are computed exactly, so that global constraints can be imposed even if they result in large shifts to the map. We represent the map as a graph and use the graph to find an efficient map update algorithm. We also show how topological consistency can be imposed on the map, such as closing a loop. The algorithm has been implemented on an outdoor robot and we have experimental validation of our ideas. We also explain how the graph can be simplified leading to linear approximations of sections of the map. This reduction gives us a natural way to connect local map patches into a much larger global map.

  • 136.
    Folkesson, John
    et al.
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, Superseded Departments, Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Outdoor exploration and SLAM using a compressed filter2003In: Proceedings - IEEE International Conference on Robotics and Automation, 2003, p. 419-427Conference paper (Refereed)
    Abstract [en]

    In this paper we describe the use of automatic exploration for autonomous mapping of outdoor scenes. We describe a real-time SLAM implementation along with an autonomous exploration algorithm. We have implemented SLAM with a compressed extended Kalman filter (CEKF) on an outdoor robot. Our implementation uses walls of buildings as features. The state predictions are made by using a combination of odometry and inertial data. The system was tested on a 200 x 200 m site with 18 buildings on variable terrain. The paper helps explain some of the implementation details of the compressed filter, such as how to organize the map, as well as more general issues like how to include the effects of pitch and roll and efficient feature detection.

  • 137.
    Folkesson, John
    et al.
    Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Graphical SLAM for Outdoor Applications2007In: Journal of Field Robotics, ISSN 1556-4959, Vol. 24, no 1-2, p. 51-70Article in journal (Refereed)
    Abstract [en]

    Application of SLAM outdoors is challenged by complexity, handling of non-linearities and flexible integration of a diverse set of features. A graphical approach to SLAM is introduced that enables flexible data-association. The method allows for handling of non-linearities. The method also enables easy introduction of global constraints. Computational issues can be addressed as a graph reduction problem. A complete framework for graphical based SLAM is presented. The framework is demonstrated for a number of outdoor experiments using an ATRV robot equipped with a SICK laser scanner and a CrossBow Inertial Unit. The experiments include handling of large outdoor environments with loop closing. The presented system operates at 5 Hz on an 800 MHz computer.

  • 138.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Graphical SLAM using vision and the measurement subspace2005In: 2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, IEEE conference proceedings, 2005, p. 325-330Conference paper (Refereed)
    Abstract [en]

    In this paper we combine a graphical approach for simultaneous localization and mapping, SLAM, with a feature representation that addresses symmetries and constraints in the feature coordinates, the measurement subspace, M-space. The graphical method has the advantages of delayed linearizations and soft commitment to feature measurement matching. It also allows large maps to be built up as a network of small local patches, star nodes. This local map net is then easier to work with. The formation of the star nodes is explicitly stable and invariant with all the symmetries of the original measurements. All linearization errors are kept small by using a local frame. The construction of this invariant star is made clearer by the M-space feature representation. The M-space allows the symmetries and constraints of the measurements to be explicitly represented. We present results using both vision and laser sensors.

  • 139.
    Folkesson, John
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vision SLAM in the Measurement Subspace2005In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4  Book Series, 2005, p. 30-35Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an approach to feature representation for simultaneous localization and mapping, SLAM. It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows for the features to be added to the map with partial initialization. This is an important property when using oriented vision features where angle information can be used before their full pose is known. The number of the dimensions for a feature can grow with time as more information is acquired. At the same time as the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited to allow SLAM algorithms to be interchanged as well as choice of sensors and features. In other words the SLAM implementation need not be changed at all when changing sensors and features and vice versa. Experimental results both with vision and range data and combinations thereof are presented.

  • 140.
    Folkesson, John
    et al.
    Massachusetts Institute of Technology, Cambridge, MA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    Georgia Institute of Technology, Atlanta, GA.
    The M-space feature representation for SLAM2007In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 23, no 5, p. 1024-1035Article in journal (Refereed)
    Abstract [en]

    In this paper, a new feature representation for simultaneous localization and mapping (SLAM) is discussed. The representation addresses feature symmetries and constraints explicitly to make the basic model numerically robust. In previous SLAM work, complete initialization of features is typically performed prior to introduction of a new feature into the map. This results in delayed use of new data. To allow early use of sensory data, the new feature representation addresses the use of features that initially have been partially observed. This is achieved by explicitly modelling the subspace of a feature that has been observed. In addition to accounting for the special properties of each feature type, the commonalities can be exploited in the new representation to create a feature framework that allows for interchanging of SLAM algorithms, sensor and features. Experimental results are presented using a low-cost Web-cam, a laser range scanner, and combinations thereof.
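
    The subspace idea can be illustrated with a plain Kalman update restricted to the directions of a feature that have actually been observed; the projection matrix, covariances and measurement below are illustrative assumptions rather than the paper's M-space parameterization.

    import numpy as np

    # Full feature coordinates of a point landmark: (x, y).
    feature = np.array([4.0, 2.0])
    cov = np.eye(2) * 1e3                 # essentially uninitialized

    # A single view constrains the landmark along one direction only;
    # B projects the full coordinates onto the observed 1-D subspace.
    B = np.array([[0.707, 0.707]])

    def subspace_update(feature, cov, B, z, R):
        """Kalman update restricted to the observed subspace z = B @ feature."""
        S = B @ cov @ B.T + R
        K = cov @ B.T @ np.linalg.inv(S)
        feature = feature + K @ (z - B @ feature)
        cov = (np.eye(len(feature)) - K @ B) @ cov
        return feature, cov

    feature, cov = subspace_update(feature, cov, B,
                                   z=np.array([4.3]), R=np.eye(1) * 0.01)
    # As further observations arrive, B can grow row by row until the
    # feature is fully initialized.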

  • 141. Frintrop, Simone
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Gaze Control for Attentional Visual SLAM2008In: 2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, 2008, p. 3690-3697Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce an approach to active camera control for visual SLAM. Features, detected by a biologically motivated attention system, are tracked over several frames to determine stable landmarks. Matching of features to database entries enables global loop closing. The focus of this paper is the active camera control module, which supports the system with three behaviours: (i) a tracking behaviour tracks promising landmarks and prevents them from leaving the field of view; (ii) a redetection behaviour directs the camera actively to regions where landmarks are expected and thus supports loop closing; (iii) finally, an exploration behaviour investigates regions without landmarks and enables a more uniform distribution of landmarks. Several real-world experiments show that the active camera control outperforms the passive system considerably.
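
    The three behaviours can be pictured as a simple arbitration loop; the state fields and thresholds in this sketch are invented for illustration and are not the paper's actual scoring.

    from dataclasses import dataclass

    @dataclass
    class GazeState:
        tracked_landmark_near_edge: bool   # a promising landmark may leave the view
        expected_landmark_nearby: bool     # the map predicts a landmark for redetection
        local_landmark_density: float      # landmarks per unit area around the robot

    def select_gaze_behaviour(s: GazeState) -> str:
        if s.tracked_landmark_near_edge:
            return "tracking"       # keep promising landmarks in the field of view
        if s.expected_landmark_nearby:
            return "redetection"    # look where the map expects a landmark (loop closing)
        if s.local_landmark_density < 0.1:
            return "exploration"    # look at poorly covered regions
        return "tracking"

    print(select_gaze_behaviour(GazeState(False, True, 0.5)))   # -> "redetection"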

  • 142.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bütepage, Judith
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A sensorimotor reinforcement learning framework for physical human-robot interaction2016In: IEEE International Conference on Intelligent Robots and Systems, IEEE, 2016, p. 2682-2688Conference paper (Refereed)
    Abstract [en]

    Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty in the interaction is modeled using Gaussian processes (GP) to implement a forward model and an action-value function. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty and equal role sharing between the partners.
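
    As a rough illustration of GP-based action selection under uncertainty, the sketch below fits a Gaussian process to past action-return pairs and picks the next action with an upper-confidence rule; the one-dimensional action space, kernel and synthetic data are assumptions, not the paper's setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    actions = rng.uniform(-1, 1, size=(15, 1))                       # past actions
    returns = -(actions[:, 0] - 0.3) ** 2 + rng.normal(0, 0.05, 15)  # observed returns

    gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(0.01))
    gp.fit(actions, returns)

    candidates = np.linspace(-1, 1, 200).reshape(-1, 1)
    mean, std = gp.predict(candidates, return_std=True)
    best = candidates[np.argmax(mean + 2.0 * std)]    # upper-confidence-style choice
    print("next action:", best)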

  • 143.
    Ghadirzadeh, Ali
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Sensorimotor Approach for Self-Learning of Hand-Eye Coordination2015In: IEEE/RSJ International Conference onIntelligent Robots and Systems, Hamburg, September 28 - October 02, 2015, IEEE conference proceedings, 2015, p. 4969-4975Conference paper (Refereed)
    Abstract [en]

    This paper presents a sensorimotor contingencies (SMC) based method to fully autonomously learn to perform hand-eye coordination. We divide the task into two visuomotor subtasks, visual fixation and reaching, and implement these on a PR2 robot assuming no prior information on its kinematic model. Our contributions are three-fold: i) grounding a robot in the environment by exploiting SMCs in the action planning system, which eliminates the need for prior knowledge of the kinematic or dynamic models of the robot; ii) using a forward model to search for proper actions to solve the task by minimizing a cost function, instead of training a separate inverse model, to speed up training; iii) encoding 3D spatial positions of a target object based on the robot's joint positions, thus avoiding calibration with respect to an external coordinate system. The method is capable of learning the task of hand-eye coordination from scratch using fewer than 20 sensory-motor pairs that are iteratively generated at real-time speed. In order to examine the robustness of the method while dealing with nonlinear image distortions, we apply a so-called retinal mapping image deformation to the input images. Experimental results show the success of the method even under considerable image deformations.
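
    The idea of searching a forward model for a good action, instead of learning an inverse model, can be shown with a toy example; the linear forward model and candidate set here are stand-ins, not the learned sensorimotor mapping of the paper.

    import numpy as np

    def forward_model(joint_pos, action):
        """Predict the next joint-encoded target position (assumed toy dynamics)."""
        return joint_pos + 0.1 * action

    def choose_action(joint_pos, target, candidates):
        """Pick the candidate action whose predicted outcome is closest to the target."""
        costs = [np.linalg.norm(forward_model(joint_pos, a) - target) for a in candidates]
        return candidates[int(np.argmin(costs))]

    candidates = [np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    print(choose_action(np.zeros(2), np.array([0.05, -0.08]), candidates))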

  • 144.
    Gratal, Xavi
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Scene Representation and Object Grasping Using Active Vision2010Conference paper (Refereed)
  • 145.
    Gratal, Xavi
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Romero, Javier
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Visual servoing on unknown objects2012In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 22, no 4, p. 423-435Article in journal (Refereed)
    Abstract [en]

    We study visual servoing in a framework of detection and grasping of unknown objects. Classically, visual servoing has been used for applications where the object to be servoed on is known to the robot prior to the task execution. In addition, most of the methods concentrate on aligning the robot hand with the object without grasping it. In our work, visual servoing techniques are used as building blocks in a system capable of detecting and grasping unknown objects in natural scenes. We show how different visual servoing techniques facilitate a complete grasping cycle.
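
    The building block being composed here is the classic image-based visual-servoing law v = -λ L⁺ (s - s*); a compact sketch with the standard point-feature interaction matrix is given below, with depths and feature positions chosen arbitrarily.

    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction matrix of one normalized image point at depth Z."""
        return np.array([
            [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
            [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(points, desired, depths, lam=0.5):
        """Camera twist (vx, vy, vz, wx, wy, wz) driving features to their goals."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(points, depths)])
        e = (np.asarray(points) - np.asarray(desired)).ravel()
        return -lam * np.linalg.pinv(L) @ e

    v = ibvs_velocity([(0.1, 0.05), (-0.2, 0.1), (0.0, -0.1), (0.15, -0.05)],
                      [(0.0, 0.0), (-0.1, 0.0), (0.0, 0.0), (0.1, 0.0)],
                      [1.0, 1.2, 0.9, 1.1])
    print(v)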

  • 146.
    Gratal, Xavi
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Romero, Javier
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Virtual Visual Servoing for Real-Time Robot Pose Estimation2011Conference paper (Refereed)
    Abstract [en]

    We propose a system for markerless pose estimation and tracking of a robot manipulator. By tracking the manipulator, we can obtain an accurate estimate of its position and orientation necessary in many object grasping and manipulation tasks. Tracking the manipulator also allows for better collision avoidance. The method is based on the notion of virtual visual servoing. We also propose the use of distance transform in the control loop, which makes the performance independent of the feature search window.
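
    A common way to use a distance transform in this kind of loop is to score a candidate pose by reading the transform of the observed edge image at the projected model points; the sketch below shows only that scoring step, with a toy image and hand-made point sets, and does not reproduce the paper's control law.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    edges = np.ones((100, 100), dtype=bool)
    edges[40, 20:80] = False                 # observed edge pixels marked False
    dist = distance_transform_edt(edges)     # distance to the nearest edge pixel

    def pose_error(projected_points):
        """Sum of edge distances at the projected model points (lower is better)."""
        rows = np.clip(projected_points[:, 1].astype(int), 0, 99)
        cols = np.clip(projected_points[:, 0].astype(int), 0, 99)
        return float(dist[rows, cols].sum())

    good = np.column_stack([np.arange(20, 80), np.full(60, 41)])  # near the edge
    bad = good + np.array([0, 15])                                # far from the edge
    print(pose_error(good), pose_error(bad))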

  • 147.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Bottom-up motion and task coordination for loosely-coupled multi-agent systems with dependent local tasks2015In: IEEE International Conference on Automation Science and Engineering, IEEE , 2015, p. 348-355Conference paper (Refereed)
    Abstract [en]

    We propose a bottom-up motion and task coordination scheme for loosely-coupled multi-agent systems under dependent local tasks. Instead of defining a global task for the whole team, each agent is assigned locally a task as syntactically co-safe linear temporal logic formulas that specify both motion and action requirements. Inter-agent dependency is introduced by collaborative actions of which the execution requires multiple agents' collaboration. The proposed solution contains an offline initial plan synthesis, an on-line request-reply message exchange and a real-time plan adaptation algorithm. It is distributed in that any decision is made locally based on local computation and local communication within neighboring agents. It is scalable and resilient to agent failures as the dependency is formed and removed dynamically based on the plan execution status and agent capabilities, instead of pre-assigned agent identities. The overall scheme is demonstrated by a simulated scenario.

  • 148.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Consensus with quantized relative state measurements2013In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 49, no 8, p. 2531-2537Article in journal (Refereed)
    Abstract [en]

    In this paper, cooperative control of multi-agent systems under limited communication between neighboring agents is investigated. In particular, quantized values of the relative states are used as the control parameters. By taking advantage of tools from nonsmooth analysis, explicit convergence results are derived for both uniform and logarithmic quantizers under static and time-varying communication topologies. Compared with our previous work, less conservative conditions that ensure global convergence are provided. Moreover, second order dynamical systems under similar constraints are taken into account. Computer simulations are provided to demonstrate the validity of the derived results.
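
    The flavour of the analysed dynamics can be reproduced with a few lines of simulation: first-order agents driven by uniformly quantized relative states over a fixed path graph. The gains, step size and graph are assumptions for illustration; the exact convergence conditions are in the paper.

    import numpy as np

    def quantize(v, delta=0.1):
        """Uniform quantizer with step size delta."""
        return delta * np.round(v / delta)

    def step(x, neighbours, dt=0.05):
        """One Euler step of x_i' = sum_j q(x_j - x_i) over the neighbour sets."""
        u = np.array([sum(quantize(x[j] - x[i]) for j in nbrs)
                      for i, nbrs in enumerate(neighbours)])
        return x + dt * u

    x = np.array([0.0, 1.0, 3.0, -2.0])
    line_graph = [[1], [0, 2], [1, 3], [2]]     # a fixed path graph on four agents
    for _ in range(400):
        x = step(x, line_graph)
    print(x)   # agents settle within a quantization-dependent neighbourhood of agreement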

  • 149.
    Guo, Meng
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Automatic Control.
    Distributed plan reconfiguration via knowledge transfer in multi-agent systems under local LTL specifications2014In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, p. 4304-4309Conference paper (Refereed)
    Abstract [en]

    We propose a cooperative motion and task planning scheme for multi-agent systems where the agents have independently assigned local tasks, specified as Linear Temporal Logic (LTL) formulas. These tasks contain hard and soft sub-specifications. A least-violating initial plan is synthesized first for the potentially infeasible task and the partially-known workspace. While the system runs, each agent updates its knowledge about the workspace via its sensing capability and shares this knowledge with its neighboring agents. Based on this update, each agent verifies and revises its plan in real time. It is ensured that the hard specification is always fulfilled and the satisfaction for the soft specification is improved gradually. The design is distributed as only local interactions are assumed. The overall framework is demonstrated by a case study.
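
    The synthesis step can be pictured as a search over the product of a workspace graph and an automaton for a co-safe task; the toy workspace, labelling and hand-made DFA below only illustrate that construction and omit the hard/soft violation costs and knowledge-transfer machinery of the paper.

    from collections import deque

    # region -> neighbouring regions
    workspace = {"start": ["c1"], "c1": ["A", "c2", "start"],
                 "c2": ["B", "c1"], "A": ["c1"], "B": ["c2"]}
    label = {"A": "a", "B": "b"}      # atomic propositions holding in each region

    def dfa_step(q, props):
        """Hand-made DFA for the co-safe formula F(a & F b): q0 -a-> q1 -b-> q2."""
        if q == 0 and "a" in props:
            return 1
        if q == 1 and "b" in props:
            return 2
        return q

    def props_of(region):
        return {label[region]} if region in label else set()

    def plan(start):
        """Breadth-first search over the product of the workspace graph and the DFA."""
        init = (start, dfa_step(0, props_of(start)))
        parent, queue = {init: None}, deque([init])
        while queue:
            node = queue.popleft()
            region, q = node
            if q == 2:                               # accepting product state reached
                path = []
                while node is not None:
                    path.append(node[0])
                    node = parent[node]
                return path[::-1]
            for nxt in workspace[region]:
                succ = (nxt, dfa_step(q, props_of(nxt)))
                if succ not in parent:
                    parent[succ] = node
                    queue.append(succ)
        return None

    print(plan("start"))    # e.g. ['start', 'c1', 'A', 'c1', 'c2', 'B']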

  • 150.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Multi-agent plan reconfiguration under local LTL specifications2015In: The international journal of robotics research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 34, no 2, p. 218-235Article in journal (Refereed)
    Abstract [en]

    We propose a cooperative motion and task planning scheme for multi-agent systems where the agents have independently assigned local tasks, specified as linear temporal logic formulas. These tasks contain hard and soft sub-specifications. A least-violating initial plan is synthesized first for the potentially infeasible task and the partially-known workspace. This discrete plan is then implemented by the potential-field-based navigation controllers. While the system runs, each agent updates its knowledge about the workspace via its sensing capability and shares this knowledge with its neighbouring agents. Based on the knowledge update, each agent verifies and revises its motion plan in real time. It is ensured that the hard specification is always fulfilled for safety and the satisfaction for the soft specification is improved gradually. The design is distributed as only local interactions are assumed. The overall framework is demonstrated by a case study and an experiment.
