  • 51.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A tutorial on constraints for positioning on the plane. 2010. In: 2010 IEEE 21st International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), IEEE, 2010, p. 1689-1694. Conference paper (Refereed)
    Abstract [en]

    This paper introduces and surveys a number of determinant constraints on the measurement errors in a variety of positioning scenarios. An algorithm for exploiting the constraints for accurate positioning is introduced and the relationship between the proposed algorithm and a so-called traditional maximum likelihood algorithm is examined.

  • 52.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimality analysis of sensor-target localization geometries. 2010. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no. 3, p. 479-492. Article in journal (Refereed)
    Abstract [en]

    The problem of target localization involves estimating the position of a target from multiple noisy sensor measurements. It is well known that the relative sensor-target geometry can significantly affect the performance of any particular localization algorithm. The localization performance can be explicitly characterized by certain measures, for example, by the Cramér-Rao lower bound (which is equal to the inverse Fisher information matrix) on the estimator variance. In addition, the Cramér-Rao lower bound is commonly used to generate a so-called uncertainty ellipse which characterizes the spatial variance distribution of an efficient estimate, i.e. an estimate which achieves the lower bound. The aim of this work is to identify those relative sensor-target geometries which result in a measure of the uncertainty ellipse being minimized. Deeming such sensor-target geometries to be optimal with respect to the chosen measure, the optimal sensor-target geometries for range-only, time-of-arrival-based and bearing-only localization are identified and studied in this work. The optimal geometries for an arbitrary number of sensors are identified and it is shown that an optimal sensor-target configuration is not, in general, unique. The importance of understanding the influence of the sensor-target geometry on the potential localization performance is highlighted via formal analytical results and a number of illustrative examples.
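
    The geometry effect described in this abstract can be made concrete with a small numeric sketch. Assuming independent Gaussian range noise of standard deviation sigma (our simplification; the paper treats several measurement types), the Fisher information matrix for range-only localization is the sum of outer products of the unit target-to-sensor bearing vectors, and its determinant shrinks as the bearings cluster. The helper name range_only_fim is ours, for illustration only.

```python
import numpy as np

def range_only_fim(angles, sigma=1.0):
    # Fisher information for range-only localization with independent
    # Gaussian noise: FIM = (1/sigma^2) * sum_i u_i u_i^T, where u_i is
    # the unit bearing vector from the target to sensor i.
    F = np.zeros((2, 2))
    for a in angles:
        u = np.array([np.cos(a), np.sin(a)])
        F += np.outer(u, u) / sigma**2
    return F

# Evenly spread bearings vs. a clustered geometry (three sensors each).
even = np.deg2rad([0.0, 120.0, 240.0])
clustered = np.deg2rad([0.0, 10.0, 20.0])
print(np.linalg.det(range_only_fim(even)))       # 2.25: compact uncertainty ellipse
print(np.linalg.det(range_only_fim(clustered)))  # ~0.18: elongated ellipse
```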

  • 53.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Stochastically Stable Solution to the Problem of Robocentric Mapping. 2009. In: ICRA 2009: IEEE International Conference on Robotics and Automation, 2009, p. 1540-1547. Conference paper (Refereed)
    Abstract [en]

    This paper provides a novel solution for robocentric mapping using an autonomous mobile robot. The robot dynamic model is the standard unicycle model and the robot is assumed to measure both the range and relative bearing to the landmarks. The algorithm introduced in this paper relies on a coordinate transformation and an extended Kalman filter-like algorithm. The coordinate transformation considered in this paper has not been previously considered for robocentric mapping applications. Moreover, we provide a rigorous stochastic stability analysis of the filter employed and we examine the conditions under which the mean-square estimation error converges to a steady-state value.

  • 54.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Optimality Analysis of Sensor-Target Geometries for Signal Strength Based Localization. 2009. In: 2009 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2009), New York: IEEE, 2009, p. 127-132. Conference paper (Refereed)
    Abstract [en]

    In this paper we characterize the bounds on localization accuracy in signal strength based localization. In particular, we provide a novel and rigorous analysis of the relative receiver-transmitter geometry and the effect of this geometry on the potential localization performance. We show that uniformly spacing sensors around the target is not optimal if the sensor-target ranges are not identical and is not necessary in any case. Indeed, we show that in general the optimal sensor-target geometry for signal strength based localization is not unique.

  • 55.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Stochastically convergent localization of objects by mobile sensors and actively controllable relative sensor-object. 2015. In: 2009 European Control Conference, ECC 2009, 2015, p. 2384-2389. Conference paper (Refereed)
    Abstract [en]

    The problem of object (network) localization using a mobile sensor is examined in this paper. Specifically, we consider a set of stationary objects located in the plane and a single mobile nonholonomic sensor tasked with estimating their relative position from range and bearing measurements. We derive a coordinate transform and a relative sensor-object motion model that leads to a novel problem formulation where the measurements are linear in the object positions. We then apply an extended Kalman filter-like algorithm to the estimation problem. Using stochastic calculus we provide an analysis of the convergence properties of the filter. We then illustrate that it is possible to steer the mobile sensor to achieve a relative sensor-object pose using a continuous control law. This last fact is significant since we circumvent Brockett's theorem and control the relative sensor-source pose using a simple controller.
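
    The linearizing change of coordinates can be illustrated with a minimal sketch (our own simplification, not the paper's exact transform): mapping each range-bearing pair to robocentric Cartesian coordinates makes the measurement the identity in the object's relative position, so a plain linear Kalman update applies. How the noise transforms is ignored here for brevity, and the helper names are hypothetical.

```python
import numpy as np

def range_bearing_to_xy(r, theta):
    # Change of variables: a (range, bearing) measurement becomes a point
    # in the robot-centred Cartesian frame. In these coordinates the
    # measurement is linear (identity) in the object's relative position.
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def kalman_update(x, P, z, R):
    # Standard linear Kalman update with H = I.
    K = P @ np.linalg.inv(P + R)
    return x + K @ (z - x), (np.eye(len(x)) - K) @ P

# One noisy range-bearing observation of an object near (2, 1), fused into
# a vague prior on its robocentric position.
z = range_bearing_to_xy(2.24, 0.46)
x, P = kalman_update(np.zeros(2), 10.0 * np.eye(2), z, 0.05 * np.eye(2))
```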

  • 56. Bishop, A.N.
    et al.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Global Robot Localization with Random Finite Set Statistics. 2010. In: Fusion 2010: 13th International Conference on Information Fusion, 2010, article id 5711873. Conference paper (Refereed)
    Abstract [en]

    We re-examine the problem of global localization of a robot using a rigorous Bayesian framework based on the idea of random finite sets. Random sets allow us to naturally develop a complete model of the underlying problem accounting for the statistics of missed detections and of spurious/erroneously detected (potentially unmodeled) features along with the statistical models of robot hypothesis disappearance and appearance. In addition, no explicit data association is required which alleviates one of the more difficult sub-problems. Following the derivation of the Bayesian solution, we outline its first-order statistical moment approximation, the so-called probability hypothesis density filter. We present a statistical estimation algorithm for the number of potential robot hypotheses consistent with the accumulated evidence and we show how such an estimate can be used to aid in re-localization of kidnapped robots. We discuss the advantages of the random set approach and examine a number of illustrative simulations.

  • 57.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Högman, Virgile
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Enhancing Visual Perception of Shape through Tactile Glances. 2013. In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, IEEE conference proceedings, 2013, p. 3180-3186. Conference paper (Refereed)
    Abstract [en]

    Object shape information is an important parameter in robot grasping tasks. However, it may be difficult to obtain accurate models of novel objects due to incomplete and noisy sensory measurements. In addition, object shape may change due to frequent interaction with the object (cereal boxes, etc.). In this paper, we present a probabilistic approach for learning object models based on visual and tactile perception through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape. The robot starts by using only visual features to form an initial hypothesis about the object shape, then gradually adds tactile measurements to refine the object model. Our experiments involve ten objects of varying shapes and sizes in a real setup. The results show that our method is capable of choosing a small number of touches to construct object models similar to real object shapes and to determine similarities among acquired models.

  • 58.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detecting, segmenting and tracking unknown objects using multi-label MRF inference. 2014. In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 118, p. 111-127. Article in journal (Refereed)
    Abstract [en]

    This article presents a unified framework for detecting, segmenting and tracking unknown objects in everyday scenes, allowing for inspection of object hypotheses during interaction over time. A heterogeneous scene representation is proposed, with background regions modeled as combinations of planar surfaces and uniform clutter, and foreground objects as 3D ellipsoids. Recent energy minimization methods based on loopy belief propagation, tree-reweighted message passing and graph cuts are studied for the purpose of multi-object segmentation and benchmarked in terms of segmentation quality, as well as computational speed and how easily methods can be adapted for parallel processing. One conclusion is that the choice of energy minimization method is less important than the way scenes are modeled. Proximities are more valuable for segmentation than similarity in colors, while the benefit of 3D information is limited. It is also shown through practical experiments that, with implementations on GPUs, multi-object segmentation and tracking using state-of-the-art MRF inference methods is feasible, despite the computational costs typically associated with such methods.

  • 59.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Eklundh, Jan-Olof
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Attending, Foveating and Recognizing Objects in Real World Scenes. 2004. In: British Machine Vision Conference (BMVC), London, UK / [ed] Andreas Hoppe, Sarah Barman, Tim Ellis, BMVA Press, 2004, p. 227-236. Conference paper (Refereed)
    Abstract [en]

    Recognition in cluttered real world scenes is a challenging problem. To find a particular object of interest within a reasonable time, a wide field of view is preferable. However, as we will show with practical experiments, robust recognition is easier if the object is foveated and subtends a considerable part of the visual field. In this paper a binocular system able to overcome these two conflicting requirements will be presented. The system consists of two sets of cameras, a wide field pair and a foveal one. From disparities a number of object hypotheses are generated. An attentional process based on hue and 3D size guides the foveal cameras towards the most salient regions. With the object foveated and segmented in 3D, recognition is performed using scale invariant features. The system is fully automated and runs at real-time speed.

  • 60.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active 3D scene segmentation and detection of unknown objects. 2010. In: IEEE International Conference on Robotics and Automation (ICRA), Anchorage, USA / [ed] Antonio Bicchi, IEEE Robotics and Automation Society, 2010, p. 3114-3120. Conference paper (Refereed)
    Abstract [en]

    We present an active vision system for segmentation of visual scenes based on integration of several cues. The system serves as a visual front end for generation of object hypotheses for new, previously unseen objects in natural scenes. The system combines a set of foveal and peripheral cameras where, through a stereo-based fixation process, object hypotheses are generated. In addition to considering the segmentation process in 3D, the main contribution of the paper is integration of different cues in a temporal framework and improvement of initial hypotheses over time.

  • 61.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active 3D Segmentation through Fixation of Previously Unseen Objects. 2010. In: British Machine Vision Conference (BMVC), Aberystwyth, UK / [ed] Frédéric Labrosse, Reyer Zwiggelaar, Yonghuai Liu, and Bernie Tiddeman, BMVA Press, 2010, p. 119.1-119.11. Conference paper (Refereed)
    Abstract [en]

    We present an approach for active segmentation based on integration of several cues. It serves as a framework for generation of object hypotheses of previously unseen objects in natural scenes. Using an approximate Expectation-Maximisation method, the appearance, 3D shape and size of objects are modelled in an iterative manner, with fixation used for unsupervised initialisation. To better cope with situations where an object is hard to segregate from the surface it is placed on, a flat surface model is added to the typical two hypotheses used in classical figure-ground segmentation. The framework is further extended to include modelling over time, in order to exploit temporal consistency for better segmentation and to facilitate tracking.

  • 62.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Combination of foveal and peripheral vision for object recognition and pose estimation. 2004. In: 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004, p. 5135-5140. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a real-time vision system that integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings, for tasks such as object recognition, tracking and pose estimation. The system consists of two sets of binocular cameras: a peripheral set for disparity-based attention and a foveal one for higher-level processes. Thus the conflicting requirements of a wide field of view and high resolution can be overcome. One important property of the system is that the step from task specification through object recognition to pose estimation is completely automatic, combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, changing lighting and background conditions.

  • 63.
    Boberg, Anders
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robocentric Mapping and Localization in Modified Spherical Coordinates with Bearing Measurements. 2009. In: 2009 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2009), New York: IEEE, 2009, p. 139-144. Conference paper (Refereed)
    Abstract [en]

    In this paper, a new approach to robotic mapping is presented that uses modified spherical coordinates in a robot-centered reference frame and a bearing-only measurement model. The algorithm provided in this paper permits robust delay-free state initialization and is computationally more efficient than the current standard in bearing-only (delay-free initialized) simultaneous localization and mapping (SLAM). Importantly, we provide a detailed nonlinear observability analysis which shows the system is generally observable. We also analyze the error convergence of the filter using stochastic stability analysis. We provide an explicit bound on the asymptotic mean state estimation error. A comparison of the performance of this filter is also made against a standard world-centric SLAM algorithm in a simulated environment.

  • 64.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multi-Modal Scene Understanding for Robotic Grasping. 2011. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. Especially, household robots are far away from being deployable as general purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. Configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can also quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps both in a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.

  • 65.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Barck-Holst, Carl
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hübner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ralph, Maria
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Rasolzadeh, Babak
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Song, Dan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Towards Grasp-Oriented Visual Perception for Humanoid Robots. 2009. In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 6, no. 3, p. 387-434. Article in journal (Refereed)
    Abstract [en]

    A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. This vision system is firstly targeted at the interaction with the world through recognition and grasping of objects and secondly at being an interface for the reasoning and planning module to the real world. The latter provides the vision system with a certain task that drives it and defines a specific context, i.e. search for or identify a certain object and analyze it for potential later manipulation. We deal with cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system based on the idea of affordances. All three cases are also related to the state of the art and the terminology in the neuroscientific area.

  • 66.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Acting and Interacting in the Real World. 2011. Conference paper (Refereed)
  • 67. Bohg, Jeannette
    et al.
    Hausman, Karol
    Sankaran, Bharath
    Brock, Oliver
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Schaal, Stefan
    Sukhatme, Gaurav S.
    Interactive Perception: Leveraging Action in Perception and Perception in Action. 2017. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no. 6, p. 1273-1291. Article in journal (Refereed)
    Abstract [en]

    Recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment. These approaches are subsumed under the term Interactive Perception (IP). This view of perception provides the following benefits. First, interaction with the environment creates a rich sensory signal that would otherwise not be present. Second, knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal. In this survey, we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area. We also provide an overview of the most important applications of IP. We close this survey by discussing remaining open questions. With this survey, we hope to help define the field of Interactive Perception and to provide a valuable resource for future research.

  • 68.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Johnson-Roberson, Matthew
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Strategies for Multi-Modal Scene Exploration. 2010. In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, p. 4509-4515. Conference paper (Refereed)
    Abstract [en]

    We propose a method for multi-modal scene exploration where initial object hypotheses formed by active visual segmentation are confirmed and augmented through haptic exploration with a robotic arm. We update the current belief about the state of the map with the detection results and predict yet unknown parts of the map with a Gaussian Process. We show that through the integration of different sensor modalities, we achieve a more complete scene model. We also show that the prediction of the scene structure leads to a valid scene representation even if the map is not fully traversed. Furthermore, we propose different exploration strategies and evaluate them both in simulation and on our robotic platform.
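
    A minimal sketch of the Gaussian Process prediction step mentioned in this abstract, using a hand-rolled squared-exponential kernel: observed cells constrain the posterior, and the posterior variance at unvisited cells suggests where to explore next. All names and the toy data below are ours, not the paper's.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    # Squared-exponential kernel between two sets of 2D locations.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, Xq, noise=1e-2):
    # Vanilla GP regression: posterior mean and variance at query points.
    K = rbf(X, X) + noise * np.eye(len(X))
    Kq = rbf(Xq, X)
    mu = Kq @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mu, var

# Surface heights measured at three visited cells; two candidate cells.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([0.02, 0.05, 0.00])
Xq = np.array([[0.5, 0.5], [2.0, 2.0]])
mu, var = gp_predict(X, y, Xq)
next_cell = Xq[np.argmax(var)]  # explore where the prediction is most uncertain
```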

  • 69.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Johnson-Roberson, Matthew
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Leon, Beatriz
    Universitat Jaume I, Castellon, Spain.
    Felip, Javier
    Universitat Jaume I, Castellon, Spain.
    Gratal, Xavi
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Morales, Antonio
    Universitat Jaume I, Castellon, Spain.
    Mind the Gap - Robotic Grasping under Incomplete Observation. 2011. In: 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13, 2011, New York: IEEE, 2011, p. 686-693. Conference paper (Refereed)
    Abstract [en]

    We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned. The proposed approach is based on the observation that many objects commonly in use in a service robotic scenario possess symmetries. We search for the optimal parameters of these symmetries given visibility constraints. Once found, the point cloud is completed and a surface mesh reconstructed. Quantitative experiments show that the predictions are valid approximations of the real object shape. By demonstrating the approach on two very different robotic platforms its generality is emphasized.
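
    The core completion step, mirroring the observed points about a hypothesised symmetry plane, is easy to sketch. The paper additionally searches for the optimal symmetry parameters under visibility constraints and reconstructs a surface mesh; the sketch below assumes the plane is already given, and the names are ours.

```python
import numpy as np

def mirror_points(points, n, d):
    # Reflect each point about the plane {x : n . x = d} (n a unit normal):
    # p' = p - 2 (n . p - d) n. Appending the reflection fills in the
    # unobserved, symmetric half of the cloud.
    n = n / np.linalg.norm(n)
    dist = points @ n - d
    return points - 2.0 * dist[:, None] * n

visible = np.random.rand(100, 3)       # stand-in for the observed partial cloud
n, d = np.array([1.0, 0.0, 0.0]), 0.5  # hypothesised symmetry plane
completed = np.vstack([visible, mirror_points(visible, n, d)])
```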

  • 70.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Grasping Familiar Objects using Shape Context. 2009. In: ICAR 2009: 14th International Conference on Advanced Robotics, IEEE, 2009, p. 50-55. Conference paper (Refereed)
    Abstract [en]

    We present work on vision based robotic grasping. The proposed method relies on extracting and representing the global contour of an object in a monocular image. A suitable grasp is then generated using a learning framework where prototypical grasping points are learned from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labeled synthetic images. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects. Furthermore, we will show how our representation supports the inference of a full grasp configuration.

  • 71.
    Bohg, Jeannette
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Learning grasping points with shape context. 2010. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 58, no. 4, p. 362-377. Article in journal (Refereed)
    Abstract [en]

    This paper presents work on vision based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.
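
    The descriptor at the heart of both of these grasping-point papers is the shape context: a log-polar histogram of the remaining contour points relative to a reference point. A minimal sketch follows, without the radius normalisation typically used in practice; the function name and parameter defaults are ours.

```python
import numpy as np

def shape_context(points, idx, n_r=5, n_theta=12, r_min=0.1, r_max=2.0):
    # Log-polar histogram of all other contour points relative to
    # points[idx]; rows are log-radius bins, columns are angle bins.
    diff = np.delete(points, idx, axis=0) - points[idx]
    r = np.linalg.norm(diff, axis=1)
    theta = np.mod(np.arctan2(diff[:, 1], diff[:, 0]), 2.0 * np.pi)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    t_edges = np.linspace(0.0, 2.0 * np.pi, n_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist.ravel() / max(hist.sum(), 1.0)

contour = np.random.rand(200, 2)            # stand-in for an extracted contour
descriptor = shape_context(contour, idx=0)  # 60-dimensional feature vector
```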

  • 72. Bohg, Jeannette
    et al.
    Morales, Antonio
    Asfour, Tamim
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Data-Driven Grasp Synthesis - A Survey. 2014. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 30, no. 2, p. 289-309. Article in journal (Refereed)
    Abstract [en]

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

  • 73.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Multiple Object Detection, Tracking and Long-Term Dynamics Learning in Large 3D Maps. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this work, we present a method for tracking and learning the dynamics of all objects in a large scale robot environment. A mobile robot patrols the environment and visits the different locations one by one. Movable objects are discovered by change detection, and tracked throughout the robot deployment. For tracking, we extend our previous Rao-Blackwellized particle filter with birth and death processes, enabling the method to handle an arbitrary number of objects. Target births and associations are sampled using Gibbs sampling. The parameters of the system are then learnt using the Expectation Maximization algorithm in an unsupervised fashion. The system therefore enables learning of the dynamics of one particular environment, and of its objects. The algorithm is evaluated on data collected autonomously by a mobile robot in an office environment during a real-world deployment. We show that the algorithm automatically identifies and tracks the moving objects within 3D maps and infers plausible dynamics models, significantly decreasing the modeling bias of our previous work. The proposed method represents an improvement over previous methods for environment dynamics learning as it allows for learning of fine-grained processes.

  • 74.
    Bore, Nils
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Efficient retrieval of arbitrary objects from long-term robot observations. 2017. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 91, p. 139-150. Article in journal (Refereed)
    Abstract [en]

    We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state-of-the-art methods further supports the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.

  • 75.
    Bore, Nils
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Querying 3D Data by Adjacency Graphs. 2015. In: Computer Vision Systems / [ed] Nalpantidis, Lazaros and Krüger, Volker and Eklundh, Jan-Olof and Gasteratos, Antonios, Springer Publishing Company, 2015, p. 243-252. Chapter in book (Refereed)
    Abstract [en]

    The need for robots to search the 3D data they have saved is becoming more apparent. We present an approach for finding structures in 3D models such as those built by robots of their environment. The method extracts geometric primitives from point cloud data. An attributed graph over these primitives forms our representation of the surface structures. Recurring substructures are found with frequent graph mining techniques. We investigate whether a model that is invariant to changes in size and reflection, and that uses only the geometric information of and between primitives, can be discriminative enough for practical use. Experiments confirm that it can be used to support queries of 3D models.

  • 76.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Design of a Control Strategy for Teleoperation of a Platform with Significant Dynamics. 2006. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York, NY: IEEE, 2006, p. 1700-1705. Conference paper (Refereed)
    Abstract [en]

    A teleoperation system for controlling a robot with fast dynamics over the Internet has been constructed. It employs a predictive control structure with an accurate dynamic model of the robot to overcome problems caused by varying delays. The operator interface uses a stereo virtual reality display of the robot cell, and a haptic device for force feedback, including virtual obstacle-avoidance forces.

  • 77.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Minimum jerk based prediction of user actions for a ball catching task. 2007. In: IEEE International Conference on Intelligent Robots and Systems: Vols 1-9, IEEE conference proceedings, 2007, p. 2716-2722. Conference paper (Refereed)
    Abstract [en]

    The present paper examines minimum jerk models for human kinematics as a tool to predict user input in teleoperation with significant dynamics. Predictions of user input can be a powerful tool to bridge time-delays and to trigger autonomous sub-sequences. In this paper an example implementation is presented, along with the results of a pilot experiment in which a virtual reality simulation of a teleoperated ball-catching scenario is used to test the predictive power of the model. The results show that delays up to 100 ms can potentially be bridged with this approach.
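
    The minimum jerk model referred to here has a well-known closed form for point-to-point reaching between rest states: Flash and Hogan's fifth-order polynomial. Below is a small sketch of how such a model can extrapolate the operator's hand motion a delay-length into the future; the parameter values are invented for illustration.

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    # Fifth-order minimum jerk profile between rest states: zero velocity
    # and acceleration at both ends, minimal integrated squared jerk.
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Extrapolate the hand position 100 ms past the current time to bridge
# the network delay (all values invented).
t_now = 0.30
predicted = minimum_jerk(x0=0.0, xf=0.5, T=0.8, t=t_now + 0.1)
```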

  • 78. Brooks, A.
    et al.
    Kaupp, T.
    Makarenko, A.
    Williams, S.
    Orebäck, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Towards component-based robotics. 2005. In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, 2005, p. 3567-3572. Conference paper (Refereed)
    Abstract [en]

    This paper gives an overview of Component-Based Software Engineering (CBSE), motivates its application to the field of mobile robotics, and proposes a particular component model. CBSE is an approach to system-building that aims to shift the emphasis from programming to composing systems from a mixture of off-the-shelf and custom-built software components. This paper argues that robotics is particularly well-suited for and in need of component-based ideas. Furthermore, now is the right time for their introduction. The paper introduces Orca - an open-source component-based software engineering framework proposed for mobile robotics with an associated repository of free, reusable components for building mobile robotic systems.

  • 79.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming. 2017. In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
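
    For readers unfamiliar with the waypoint-selection half of this combination, a minimal 2D RRT is sketched below. The formation-level CBP controller that moves between the returned waypoints is not shown, and all parameter values and names are invented.

```python
import numpy as np

def rrt_waypoints(start, goal, obstacles, step=0.3, iters=2000, seed=0):
    # Minimal 2D RRT over a 10 x 10 workspace; obstacles are (centre, radius)
    # discs. Returns a waypoint list for a downstream controller to track.
    rng = np.random.default_rng(seed)
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else rng.uniform(0.0, 10.0, 2)
        i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - sample))
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)
        if any(np.linalg.norm(new - c) < r for c, r in obstacles):
            continue  # node would land inside an obstacle: reject
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if np.linalg.norm(new - goal) < step:  # close enough: backtrack the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

path = rrt_waypoints([0, 0], [9, 9], [(np.array([5.0, 5.0]), 1.5)])
```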

  • 80.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Wang, Yuquan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Adaptive object centered teleoperation control of a mobile manipulator. 2016. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 455-461. Conference paper (Refereed)
    Abstract [en]

    Teleoperation of a mobile robot manipulating and exploring an object shares many similarities with the manipulation of virtual objects in 3D design software such as AutoCAD. The user interfaces are, however, quite different, mainly for historical reasons. In this paper we aim to change that, and draw inspiration from the 3D design community to propose a teleoperation interface control mode that is identical to the ones used to locally navigate the virtual viewpoint of most Computer Aided Design (CAD) software.

    The proposed mobile manipulator control framework thus allows the user to focus on the 3D objects being manipulated, using control modes such as orbit object and pan object, supported by data from the wrist-mounted RGB-D sensor. The gripper of the robot performs the desired motions relative to the object, while the manipulator arm and base move in a way that realizes the desired gripper motions. The system redundancies are exploited in order to take additional constraints, such as obstacle avoidance, into account, using a constraint-based programming framework.

  • 81.
    Caccamo, Sergio
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Exploration Using Gaussian Random Fields and Gaussian Process Implicit Surfaces. 2016. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 582-589. Conference paper (Refereed)
    Abstract [en]

    In this work we study the problem of exploring surfaces and building compact 3D representations of the environment surrounding a robot through active perception. We propose an online probabilistic framework that merges visual and tactile measurements using Gaussian Random Field and Gaussian Process Implicit Surfaces. The system investigates incomplete point clouds in order to find a small set of regions of interest which are then physically explored with a robotic arm equipped with tactile sensors. We show experimental results obtained using a PrimeSense camera, a Kinova Jaco2 robotic arm and Optoforce sensors on different scenarios. We then demonstrate how to use the online framework for object detection and terrain classification.
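
    The Gaussian Process Implicit Surface idea can be sketched compactly: the unknown surface is the zero level set of a GP-regressed function, and the posterior variance at candidate touch points indicates where tactile exploration pays off most. Everything below (kernel scale, training points, variance criterion) is an invented toy, not the paper's setup.

```python
import numpy as np

def k(A, B, ell=0.2):
    # Squared-exponential kernel between two sets of 3D points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Implicit-surface GP: f = 0 at observed surface points, f = +1 at a point
# known to lie outside the object. The surface estimate is the zero level
# set of the posterior mean.
X = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],  # on surface
              [0.0, 0.0, 0.5]])                                   # outside
y = np.array([0.0, 0.0, 0.0, 1.0])
Xq = np.array([[0.05, 0.05, 0.0], [0.3, 0.3, 0.1]])  # candidate touch points

K = k(X, X) + 1e-6 * np.eye(len(X))
mu = k(Xq, X) @ np.linalg.solve(K, y)
var = 1.0 - np.einsum('ij,ji->i', k(Xq, X), np.linalg.solve(K, k(Xq, X).T))
roi = Xq[np.argmax(var)]  # most uncertain point: send the tactile probe here
```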

  • 82.
    Caccamo, Sergio
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Güler, Püren
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active perception and modeling of deformable surfaces using Gaussian processes and position-based dynamics. 2016. In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, p. 530-537. Conference paper (Refereed)
    Abstract [en]

    Exploring and modeling heterogeneous elastic surfaces requires multiple interactions with the environment and a complex selection of physical material parameters. The most common approaches model deformable properties from sets of offline observations using computationally expensive force-based simulators. In this work we present an online probabilistic framework for autonomous estimation of a deformability distribution map of heterogeneous elastic surfaces from few physical interactions. The method takes advantage of Gaussian Processes for constructing a model of the environment geometry surrounding a robot. A fast Position-based Dynamics simulator uses focused environmental observations in order to model the elastic behavior of portions of the environment. Gaussian Process Regression maps the local deformability on the whole environment in order to generate a deformability distribution map. We show experimental results using a PrimeSense camera, a Kinova Jaco2 robotic arm and an Optoforce sensor on different deformable surfaces.

  • 83.
    Caccamo, Sergio
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Parasuraman, Ramviyas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Båberg, Fredrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Extending a UGV Teleoperation FLC Interface with Wireless Network Connectivity Information. 2015. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, p. 4305-4312. Conference paper (Refereed)
    Abstract [en]

    Teleoperated Unmanned Ground Vehicles (UGVs) are expected to play an important role in future search and rescue operations. In such tasks, two factors are crucial for a successful mission completion: operator situational awareness and robust network connectivity between operator and UGV. In this paper, we address both these factors by extending a new Free Look Control (FLC) operator interface with a graphical representation of the Radio Signal Strength (RSS) gradient at the UGV location. We also provide a new way of estimating this gradient using multiple receivers with directional antennas. The proposed approach allows the operator to stay focused on the video stream providing the crucial situational awareness, while controlling the UGV to complete the mission without moving into areas with dangerously low wireless connectivity. The approach is implemented on a KUKA youBot using commercial-off-the-shelf components. We provide experimental results showing how the proposed RSS gradient estimation method performs better than a difference approximation using omnidirectional antennas and verify that it is indeed useful for predicting the RSS development along a UGV trajectory. We also evaluate the proposed combined approach in terms of accuracy, precision, sensitivity and specificity.
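
    As a rough illustration of what an RSS gradient estimate involves, the sketch below fits a local linear model to signal strength readings taken at several nearby positions. Note that the paper's actual method uses multiple receivers with directional antennas rather than spatially separated samples, and all numbers and names here are invented.

```python
import numpy as np

def rss_gradient(positions, rss):
    # Least-squares fit of the local plane rss ~ g . p + b; the returned
    # vector g (dBm per metre) points towards increasing signal strength.
    A = np.hstack([positions, np.ones((len(positions), 1))])
    coeffs, *_ = np.linalg.lstsq(A, rss, rcond=None)
    return coeffs[:2]

positions = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])
rss = np.array([-62.0, -60.5, -63.0, -61.4])  # dBm readings (invented)
g = rss_gradient(positions, rss)              # here roughly along +x
```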

  • 84.
    Caccamo, Sergio
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Parasuraman, Ramviyas
    Purdue Univ, W Lafayette, IN 47907 USA.
    Freda, Luigi
    Sapienza Univ Rome, DIAG, ALCOR Lab, Rome, Italy.
    Gianni, Mario
    Sapienza Univ Rome, DIAG, ALCOR Lab, Rome, Italy.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    RCAMP: A Resilient Communication-Aware Motion Planner for Mobile Robots with Autonomous Repair of Wireless Connectivity2017In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A Okamura, A, IEEE , 2017, p. 2010-2017Conference paper (Refereed)
    Abstract [en]

    Mobile robots, whether autonomous or teleoperated, require stable communication with the base station to exchange valuable information. Given the stochastic elements in radio signal propagation, such as shadowing and fading, and the possibilities of unpredictable events or hardware failures, communication loss often presents a significant mission risk, both in terms of probability and impact, especially in Urban Search and Rescue (USAR) operations. Depending on the circumstances, disconnected robots are either abandoned, or attempt to autonomously back-trace their way to the base station. Although recent results in Communication-Aware Motion Planning can be used to effectively manage connectivity with robots, there are no results focusing on autonomously re-establishing the wireless connectivity of a mobile robot without back-tracing or using detailed a priori information of the network. In this paper, we present a robust and online radio signal mapping method using Gaussian Random Fields, and propose a Resilient Communication-Aware Motion Planner (RCAMP) that integrates the above signal mapping framework with a motion planner. RCAMP considers both the environment and the physical constraints of the robot, based on the available sensory information. We also propose a self-repair strategy using RCAMP, which takes both connectivity and the goal position into account when driving to a connection-safe position in the event of a communication loss. We demonstrate the proposed planner in a set of realistic simulations of an exploration task in single- or multi-channel communication scenarios.
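    As a minimal sketch of the signal-mapping idea (assuming scikit-learn; this is an illustration, not the paper's RCAMP implementation, and all names are hypothetical), a Gaussian-process radio map can be refit as RSS samples arrive and queried to rank candidate recovery positions:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern, WhiteKernel

    positions, samples = [], []  # filled online while driving

    def add_sample(xy, rss_dbm):
        positions.append(xy)
        samples.append(rss_dbm)

    def connection_safe_goal(candidates, rss_threshold_dbm=-75.0):
        """Refit the radio map (assumes a few samples already exist) and
        return the candidate nearest the origin, taken here as the robot's
        reference position, whose predicted RSS clears the threshold."""
        gp = GaussianProcessRegressor(Matern(nu=1.5) + WhiteKernel())
        gp.fit(np.asarray(positions), np.asarray(samples))
        pred = gp.predict(np.asarray(candidates))
        ok = [c for c, p in zip(candidates, pred) if p > rss_threshold_dbm]
        return min(ok, key=lambda c: np.hypot(*c)) if ok else None
    ```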

  • 85. Caputo, B.
    et al.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Overview of the CLEF 2009 robot vision track2009In: CLEF2009 Working Notes: Working Notes for CLEF 2009 Workshop, co-located with the 13th European Conference on Digital Libraries (ECDL 2009), Corfù, Greece, September 30 - October 2, 2009 / [ed] Carol Peters, Nicola Ferro, CEUR-WS , 2009Conference paper (Refereed)
    Abstract [en]

    The robot vision task was proposed to the ImageCLEF participants for the first time in 2009. The task attracted considerable attention, with 19 registered research groups, 7 groups eventually participating and a total of 27 submitted runs. The task addressed the problem of visual place recognition applied to robot topological localization. Specifically, participants were asked to classify rooms on the basis of image sequences, captured by a perspective camera mounted on a mobile robot. The sequences were acquired in an office environment, under varying illumination conditions and across a time span of almost two years. The training and validation set consisted of a subset of the IDOL2 database. The test set consisted of sequences similar to those in the training and validation set, but acquired 20 months later and imaging also additional rooms. Participants were asked to build a system able to answer the question "where are you?" (I am in the kitchen, in the corridor, etc.) when presented with a test sequence imaging rooms seen during training, or additional rooms that were not imaged in the training sequence. The system had to assign each test image to one of the rooms present in the training sequence, or indicate that the image came from a new room. We asked all participants to solve the problem separately for each test image (obligatory task). Additionally, results could also be reported for algorithms exploiting the temporal continuity of the image sequences (optional task). Of the 27 runs, 21 were submitted to the obligatory task, and 6 to the optional task. The best result in the obligatory task was obtained by the Multimedia Information Retrieval Group of the University of Glasgow, UK, with an approach based on local feature matching. The best result in the optional task was obtained by the Intelligent Systems and Data Mining Group (SIMD) of the University of Castilla-La Mancha, Albacete, Spain, with an approach based on local features and a particle filter.

  • 86.
    Carvalho, J. Frederico
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pequito, S.
    Aguiar, A. P.
    Kar, S.
    Johansson, Karl Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Composability and controllability of structural linear time-invariant systems: Distributed verification2017In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 78, p. 123-134Article in journal (Refereed)
    Abstract [en]

    Motivated by the development and deployment of large-scale dynamical systems, often comprised of geographically distributed smaller subsystems, we address the problem of verifying their controllability in a distributed manner. Specifically, we study controllability in the structural system theoretic sense, structural controllability, in which rather than focusing on a specific numerical system realization, we provide guarantees for equivalence classes of linear time-invariant systems on the basis of their structural sparsity patterns, i.e., the location of zero/nonzero entries in the plant matrices. Towards this goal, we first provide several necessary and/or sufficient conditions that ensure that the overall system is structurally controllable on the basis of the subsystems' structural patterns and their interconnections. The proposed verification criteria are shown to be efficiently implementable (i.e., with polynomial time-complexity in the number of the state variables and inputs) in two important subclasses of interconnected dynamical systems: similar (where every subsystem has the same structure) and serial (where every subsystem outputs to at most one other subsystem). Second, we provide an iterative distributed algorithm to verify structural controllability for general interconnected dynamical systems; it is based on communication among (physically) interconnected subsystems, and requires only local model and interconnection knowledge at each subsystem.
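    For the centralized baseline that the distributed algorithm improves upon, structural controllability can be tested directly from the sparsity patterns via Lin's classical graph conditions: every state must be accessible from an input, and [A B] must have full generic rank (no dilation). A minimal sketch of that check (illustrative names; this is not the paper's distributed verification procedure):

    ```python
    from collections import deque

    def structurally_controllable(A_pat, B_pat):
        """Lin's conditions on zero/nonzero patterns (True = nonzero entry):
        (i) every state is accessible from some input, and
        (ii) the generic rank of [A B] equals the number of states.
        """
        n, m = len(A_pat), len(B_pat[0])

        # (i) accessibility: BFS over states, seeded by input-driven states.
        adj = [[i for i in range(n) if A_pat[i][j]] for j in range(n)]
        seen = {i for i in range(n) if any(B_pat[i][k] for k in range(m))}
        queue = deque(seen)
        while queue:
            j = queue.popleft()
            for i in adj[j]:
                if i not in seen:
                    seen.add(i)
                    queue.append(i)
        if len(seen) < n:
            return False

        # (ii) generic rank n: perfect matching of rows onto columns of
        # [A B], found with Kuhn's augmenting-path algorithm.
        cols = [[j for j in range(n) if A_pat[i][j]] +
                [n + k for k in range(m) if B_pat[i][k]] for i in range(n)]
        match = {}  # column -> row

        def augment(i, visited):
            for c in cols[i]:
                if c not in visited:
                    visited.add(c)
                    if c not in match or augment(match[c], visited):
                        match[c] = i
                        return True
            return False

        return all(augment(i, set()) for i in range(n))
    ```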

  • 87. Cedervall, Simon
    et al.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Nonlinear observers for unicycle robots with range sensors2007In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 52, no 7, p. 1325-1329Article in journal (Refereed)
    Abstract [en]

    For nonlinear mobile systems equipped with exteroceptive sensors, observability depends not only on the initial conditions but also on the control and the environment. This raises an interesting issue: how to design an observer together with the exciting control. In this note, the problem of designing an observer based on range sensor readings is studied. A design method based on periodic excitations is proposed for unicycle robotic systems.
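    For concreteness, the setting can be written in standard unicycle form with a range measurement to a known landmark (generic notation, not copied from the note; (x_L, y_L) denotes the landmark position):

    ```latex
    \begin{aligned}
    \dot{x} &= v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,\\
    r &= \sqrt{(x - x_L)^2 + (y - y_L)^2}.
    \end{aligned}
    ```

    Whether (x, y, θ) can be reconstructed from r depends on the chosen inputs (v, ω), which is what makes co-designing the observer and the exciting control necessary and motivates the periodic-excitation approach.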

  • 88.
    Charalambous, Themistoklis
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hadjicostis, C. N.
    Average consensus in the presence of dynamically changing directed topologies and time delays2014In: Proceedings of the IEEE Conference on Decision and Control, IEEE conference proceedings, 2014, no February, p. 709-714Conference paper (Refereed)
    Abstract [en]

    We have recently proposed a robustified ratio consensus algorithm which achieves asymptotic convergence to the global average in a distributed fashion in static strongly connected digraphs, despite the possible presence of bounded but otherwise arbitrary delays. In this work, we propose a protocol which reaches asymptotic convergence to the global average in a distributed fashion under possible changes in the underlying interconnection topology (e.g., due to component mobility), as well as time-varying delays that might affect transmissions at different times. More specifically, we extend our previous work to also account for the case where, in addition to arbitrary but bounded delays, we may have time-varying communication links. The proposed protocol requires that each component has knowledge of the number of its outgoing links, perhaps with some bounded delay, and that the digraphs formed by the switching communication topologies over a finite time window are jointly strongly connected.
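    The ratio-consensus mechanism behind the protocol is easy to state in code. The delay-free, fixed-topology special case is sketched below (a minimal sketch with hypothetical names, not the robustified protocol of the paper): each node scales its values by one over one plus its out-degree, so only the out-degree is needed locally, matching the knowledge assumption above, and the ratio of the two iterates converges to the global average.

    ```python
    import numpy as np

    def ratio_consensus(adj, x0, iters=200):
        """Delay-free ratio consensus on a fixed, strongly connected digraph.
        adj[j] lists the out-neighbors of node j."""
        n = len(x0)
        x = np.array(x0, dtype=float)  # numerator: initial values
        y = np.ones(n)                 # denominator: all ones
        for _ in range(iters):
            nx, ny = np.zeros(n), np.zeros(n)
            for j in range(n):
                w = 1.0 / (1 + len(adj[j]))  # column-stochastic weights
                nx[j] += w * x[j]            # node j keeps one share itself
                ny[j] += w * y[j]
                for i in adj[j]:             # and sends one share per link
                    nx[i] += w * x[j]
                    ny[i] += w * y[j]
            x, y = nx, ny
        return x / y  # every entry approaches mean(x0)

    # Directed ring of four nodes: every ratio converges to the average 2.5.
    print(ratio_consensus({0: [1], 1: [2], 2: [3], 3: [0]}, [1.0, 2.0, 3.0, 4.0]))
    ```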

  • 89. Cheng, Daizhan
    et al.
    Wang, Jinhuan
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    An extension of LaSalle's invariance principle and its application to multi-agent consensus2008In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 53, no 7, p. 1765-1770Article in journal (Refereed)
    Abstract [en]

    In this paper, an extension of LaSalle's Invariance Principle to a class of switched linear systems is studied. One of the motivations is the consensus problem in multi-agent systems. Unlike most existing results, in which each switching mode in the system needs to be asymptotically stable, this paper allows the switching modes to be only Lyapunov stable. Under certain ergodicity assumptions, an extension of LaSalle's Invariance Principle for global asymptotic stability is obtained. It is then used to solve the consensus-reaching problem of certain multi-agent systems in which each agent is modeled by a double integrator, and the associated interaction graph is switching and assumed to be only jointly connected.
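    As a concrete instance of the application, a standard double-integrator consensus protocol of the kind treated here reads (generic notation; a_{ij}(t) are the weights of the switching, jointly connected interaction graph):

    ```latex
    \ddot{x}_i = u_i, \qquad
    u_i = -\sum_{j \in \mathcal{N}_i(t)} a_{ij}(t)\left[(x_i - x_j) + \gamma\,(\dot{x}_i - \dot{x}_j)\right],
    \qquad \gamma > 0.
    ```

    Each individual switching mode of the closed loop is only Lyapunov stable, which is precisely why an extension of LaSalle's Invariance Principle, rather than the classical version, is needed to conclude consensus.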

  • 90. Christalin, B.
    et al.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Murray, R. M.
    Synthesis of reactive control protocols for switch electrical power systems for commercial application with safety specifications2017In: 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016, IEEE, 2017, article id 7849873Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for the reactive synthesis of fault-tolerant optimal control protocols for a finite deterministic discrete event system subject to safety specifications. A Deterministic Finite State Machine (DFSM) and a Behavior Tree (BT) were used to model the system. The synthesis procedure involves formulating the policy problem as a shortest-path dynamic programming problem. The procedure evaluates all possible states when applied to the DFSM, or all possible actions when applied to the BT. The resulting strategy minimizes the number of actions performed to meet operational objectives without violating safety conditions. The effectiveness of the procedure on DFSMs and BTs is demonstrated through three examples of switched electrical power systems for commercial application, and analyzed using run-time complexity analysis. The results demonstrate that, for large-order systems, BTs provide a tractable model for synthesizing an optimal control policy.
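    The shortest-path dynamic program at the core of the synthesis can be sketched in a few lines (illustrative names and cost structure; the paper's encoding of faults and safety may differ). Unsafe states are removed up front, so any policy the recursion returns respects the safety specification by construction:

    ```python
    import math

    def min_action_policy(states, actions, step, goal, unsafe):
        """Bellman value iteration counting actions: V(s) = 1 + min_a V(step(s, a)).
        step(s, a) returns the deterministic successor state."""
        V = {s: (0.0 if s in goal else math.inf)
             for s in states if s not in unsafe}
        policy = {}
        for _ in range(len(V)):  # at most |S| sweeps to converge
            for s in V:
                if s in goal:
                    continue
                for a in actions:
                    t = step(s, a)
                    if t in V and 1 + V[t] < V[s]:  # skips unsafe successors
                        V[s], policy[s] = 1 + V[t], a
        return policy, V
    ```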

  • 91.
    Christensen, Henrik I.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sandberg, F
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Computational Vision for Interaction with People and RobotsManuscript (preprint) (Other academic)
    Abstract [en]

    Facilities for sensing and modification of the environment are crucial to the delivery of robotic systems that can interact with humans and objects in the environment. Both for recognition of objects and for interpretation of human activities (for instruction and avoidance), the by far most versatile sensory modality is computational vision. The use of vision for interpretation of human gestures and for manipulation of objects is outlined in this paper. It is described how the combination of multiple visual cues can be used to achieve robustness, and the tradeoff between models and cue integration is illustrated. The described vision competences are demonstrated in the context of an intelligent service robot that operates in a regular domestic setting.

  • 92.
    Christensen, Henrik I.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nagel, Hans Hellmut
    Introductory remarks2006In: COGNITIVE VISION SYSTEMS: SAMPLING THE SPECTRUM OF APPROACHES, 2006, p. 1+Conference paper (Refereed)
  • 93.
    Christensen, Henrik I.
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pacchierotti, Elena
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Embodied social interaction for robots2005In: AISB'05 Convention: Social Intelligence and Interaction in Animals, Robots and Agents: Proceedings of the Symposium on Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction, 2005, p. 40-45Conference paper (Refereed)
    Abstract [en]

    A key aspect of service robotics for everyday use is the motion of systems in close proximity to humans. It is essential here that the robot exhibits a behaviour that signals safe motion and awareness of the other actors in its environment. To facilitate this, there is a need to endow the system with facilities for detection and tracking of objects in the vicinity of the platform, and to design a control law that enables motion generation which is considered socially acceptable. We present a system for indoor navigation in which the rules of proxemics are used to define interaction strategies for the platform.

  • 94. Chrysostomou, Dimitrios
    et al.
    Gasteratos, Antonios
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sirakoulis, Georgios C.
    Multi-view 3D scene reconstruction using ant colony optimization techniques2012In: Measurement science and technology, ISSN 0957-0233, E-ISSN 1361-6501, Vol. 23, no 11, p. 114002-Article in journal (Refereed)
    Abstract [en]

    This paper presents a new method performing high-quality 3D object reconstruction of complex shapes derived from multiple, calibrated photographs of the same scene. The novelty of this research is found in two basic elements, namely: (i) a novel voxel dissimilarity measure, which accommodates the elimination of the lighting variations of the models, and (ii) the use of an ant colony approach for further refinement of the final 3D models. The proposed reconstruction procedure employs a volumetric method based on a novel projection test for the production of a visual hull. While the presented algorithm shares certain aspects with the space carving algorithm, it is first enhanced with the lightness-compensating image comparison method, and then refined using ant colony optimization. The algorithm is fast, computationally simple and results in accurate representations of the input scenes. In addition, compared to previous publications, the particular nature of the proposed algorithm allows accurate 3D volumetric measurements under demanding lighting conditions, since the characteristics of the applied voxel dissimilarity measure let it cope with unevenly lit scenes. Moreover, the intelligent behavior of the ant colony framework provides the opportunity to formulate the process as a combinatorial optimization problem, which can then be solved by a colony of cooperating artificial ants, yielding very promising results. The method is validated with several real datasets, along with qualitative comparisons with other state-of-the-art 3D reconstruction techniques, following the Middlebury benchmark.
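    The role of the lighting-tolerant dissimilarity can be illustrated with a simple stand-in (not the paper's exact measure): normalize each projected patch to zero mean and unit variance before comparison, so uniform brightness and contrast changes cancel out.

    ```python
    import numpy as np

    def lightness_compensated_dissimilarity(patch_a, patch_b, eps=1e-6):
        """Compare two patches a voxel projects to after removing per-patch
        brightness (mean) and contrast (std). Returns a value in [0, 2];
        photo-consistent voxels score near 0."""
        a = (patch_a - patch_a.mean()) / (patch_a.std() + eps)
        b = (patch_b - patch_b.mean()) / (patch_b.std() + eps)
        return 1.0 - float(np.mean(a * b))  # 1 - normalized cross-correlation
    ```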

  • 95. Civera, Javier
    et al.
    Ciocarlie, Matei
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekris, Kostas
    Sarma, Sanjay
    Special Issue on Cloud Robotics and Automation2015In: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 12, no 2, p. 396-397Article in journal (Other academic)
    Abstract [en]

    The articles in this special section focus on the use of cloud computing in the robotics industry. The Internet and the availability of vast computational resources, ever-growing data and storage capacity have the potential to define a new paradigm for robotics and automation. An intelligent system connected to the Internet can expand its onboard local data, computation and sensors with huge data repositories from similar and very different domains, massive parallel computation from server farms, and sensor/actuator streams from other robots and automata. This potential, together with the research challenges of the field, is the focus of this special section. The goal is to bring together and show the state of the art of this newly emerged field, identify the relevant advances and topics, point out the current lines of research and potential applications, and discuss the main research challenges and future work directions.

  • 96.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Obstacle avoidance in formation using navigation-like functions and constraint based programming2013In: Proceedings of the International Conference on Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ, IEEE conference proceedings, 2013, p. 5234-5239Conference paper (Refereed)
    Abstract [en]

    In this paper, we combine navigation-function-like potential fields and constraint-based programming to achieve obstacle avoidance in formation. Constraint-based programming was developed in robotic manipulation as a technique to take several constraints into account when controlling redundant manipulators. The approach has also been generalized and applied to other control systems, such as dual-arm manipulators and unmanned aerial vehicles. Navigation functions are an elegant way to design controllers with provable properties for navigation problems. By combining these tools, we take advantage of the redundancy inherent in a multi-agent control problem and are able to concurrently address features such as formation maintenance and goal convergence, even in the presence of moving obstacles. We show how the user can specify a priority ordering of the objectives, and how to clearly see which objectives are currently addressed and which are postponed. We also analyze the theoretical properties of the proposed controller. Finally, we use a set of simulations to illustrate the approach.
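    Schematically, the combination amounts to solving, at each control instant, a small optimization of the following form (generic symbols; a sketch rather than the paper's exact formulation), where u_des is the descent direction of the navigation-function-like potential and each c_i(x) ≤ 0 encodes an objective such as obstacle clearance or formation maintenance:

    ```latex
    \min_{u}\ \lVert u - u_{\mathrm{des}} \rVert^{2}
    \quad \text{s.t.} \quad
    \nabla c_i(x)^{\top} u \le -\alpha_i\, c_i(x), \qquad i = 1, \dots, k.
    ```

    The priority ordering mentioned above then corresponds to which constraints are kept hard and which are relaxed if the problem becomes infeasible.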

  • 97.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Performance Analysis of Stochastic Behavior Trees2014In: ICRA 2014, 2014Conference paper (Refereed)
    Abstract [en]

    This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs) for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of an FSM, modularity and reusability are improved.

    In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.
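    In the simplest case, with children assumed independent and executed once, the aggregation from one level of the tree to the next has a closed form (a toy version of the analysis; the full framework also propagates execution times and handles re-execution):

    ```python
    from functools import reduce

    def sequence_success(ps):
        """A Sequence node succeeds only if every child succeeds."""
        return reduce(lambda acc, p: acc * p, ps, 1.0)

    def fallback_success(ps):
        """A Fallback node fails only if every child fails."""
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ps, 1.0)

    # Root = Fallback(Sequence(0.9, 0.8), 0.5): try the plan, else a backup.
    p_plan = sequence_success([0.9, 0.8])     # 0.72
    p_root = fallback_success([p_plan, 0.5])  # 1 - 0.28 * 0.5 = 0.86
    ```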

  • 98.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    How Behavior Trees Modularize Robustness and Safety in Hybrid Systems2014In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2014), IEEE , 2014, p. 1482-1488Conference paper (Refereed)
    Abstract [en]

    Behavior Trees (BTs) have become a popular framework for designing controllers of in-game opponents in the computer gaming industry. In this paper, we formalize and analyze the reasons behind the success of the BTs using standard tools of robot control theory, focusing on how properties such as robustness and safety are addressed in a modular way. In particular, we show how these key properties can be traced back to the ideas of subsumption and sequential compositions of robot behaviors. Thus BTs can be seen as a recent addition to a long research effort towards increasing modularity, robustness and safety of robot control software. To illustrate the use of BTs, we provide a set of solutions to example problems.

  • 99. Comport, Andrew I.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Marchand, E.
    Chaumette, F.
    Robust Real-Time Visual Tracking: Comparison, Theoretical Analysis and Performance Evaluation2005In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, 2005, p. 2841-2846Conference paper (Refereed)
    Abstract [en]

    In this paper, two real-time pose tracking algorithms for rigid objects are compared. Both methods are 3D-model based and are capable of calculating the pose between the camera and an object with a monocular vision system. Here, special consideration has been put into defining and evaluating different performance criteria such as computational efficiency, accuracy and robustness. Both methods are described and a unifying framework is derived. The main advantage of both algorithms lies in their real-time capabilities (on standard hardware) while being robust to mis-tracking, occlusion and changes in illumination.

  • 100.
    Detry, Renaud
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Madry, Marianna
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning a dictionary of prototypical grasp-predicting parts from grasping experience2013In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE , 2013, p. 601-608Conference paper (Refereed)
    Abstract [en]

    We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points onto novel objects. We focus our report on the definition of a similarity measure that reflects whether the shapes of two parts resemble each other, and whether their associated grasps are applied near one another. We present an experiment in which our agent extracts five prototypical parts from thirty-two real-world grasp examples, and we demonstrate the applicability of the prototypical parts for grasping novel objects.
