1-50 of 417 hits
  • 1.
    Aarno, Daniel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ekvall, Staffan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Adaptive virtual fixtures for machine-assisted teleoperation tasks (2005). In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, 2005, pp. 1139-1144. Conference paper (Refereed)
    Abstract [en]

    It has been demonstrated in a number of robotic areas how the use of virtual fixtures improves task performance both in terms of execution time and overall precision [1]. However, the fixtures are typically inflexible, resulting in degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we propose the use of adaptive virtual fixtures that enable us to cope with the above problems. A teleoperative or human-machine collaborative setting is assumed, with the core idea of dividing the task that the operator is executing into several subtasks. The operator may remain in each of these subtasks as long as necessary and switch freely between them. Hence, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. In our system, the probability that the user is following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance. Thus, an on-line decision of how to fixture the movement is provided.
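    The core idea above, blending guidance stiffness by the estimated probability that the operator is following a fixtured subtask, can be illustrated with a minimal sketch. The function name, gains and numbers here are hypothetical, not taken from the paper:

```python
def fixture_stiffness(p_follow, k_stiff=1.0, k_compliant=0.1):
    """Blend the virtual-fixture guidance stiffness by the probability
    that the operator is following the fixtured trajectory.
    Gains are hypothetical placeholder values."""
    if not 0.0 <= p_follow <= 1.0:
        raise ValueError("p_follow must be a probability in [0, 1]")
    # High confidence in the subtask model -> stiff guidance;
    # low confidence -> nearly free (compliant) motion.
    return p_follow * k_stiff + (1.0 - p_follow) * k_compliant
```

    With `p_follow = 1.0` the fixture is fully stiff; with `p_follow = 0.0` the operator moves almost freely, which is how an unexpected obstacle can be avoided without abandoning the fixture.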

  • 2.
    Aarno, Daniel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Layered HMM for motion intention recognition (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 5130-5135. Conference paper (Refereed)
    Abstract [en]

    Acquiring, representing and modeling human skills is one of the key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One of the common approaches is to divide the task that the operator is executing into several subtasks in order to provide manageable modeling. In this paper we consider the use of a Layered Hidden Markov Model (LHMM) to model human skills. We evaluate a gesteme classifier that classifies motions into basic action-primitives, or gestemes. The gesteme classifiers are then used in a LHMM to model a simulated teleoperated task. We investigate the online and offline classification performance with respect to noise, number of gestemes, type of HMM and the available number of training sequences. We also apply the LHMM to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the LHMM is suitable for modeling teleoperative trajectory-tracking tasks and that the difference in classification performance between one- and multi-dimensional HMMs for gesteme classification is small. It can also be seen that the LHMM is robust w.r.t. misclassifications in the underlying gesteme classifiers.

  • 3.
    Aarno, Daniel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Motion intention recognition in robot assisted applications (2008). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 0921-8830, Vol. 56, no. 8, pp. 692-705. Journal article (Refereed)
    Abstract [en]

    Acquiring, representing and modelling human skills is one of the key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. The problems are challenging mainly because of the lack of a general mathematical model to describe human skills. One of the common approaches is to divide the task that the operator is executing into several subtasks or low-level subsystems in order to provide manageable modelling. In this paper we consider the use of a Layered Hidden Markov Model (LHMM) to model human skills. We evaluate a gesteme classifier that classifies motions into basic action-primitives, or gestemes. The gesteme classifiers are then used in a LHMM to model a teleoperated task. The proposed methodology uses three different HMM models at the gesteme level: one-dimensional HMM, multi-dimensional HMM and multidimensional HMM with Fourier transform. The online and off-line classification performance of these three models is evaluated with respect to the number of gestemes, the influence of the number of training samples, the effect of noise and the effect of the number of observation symbols. We also apply the LHMM to data recorded during the execution of a trajectory tracking task in 2D and 3D with a mobile manipulator in order to provide qualitative as well as quantitative results for the proposed approach. The results indicate that the LHMM is suitable for modelling teleoperative trajectory-tracking tasks and that the difference in classification performance between one and multidimensional HMMs for gesteme classification is small. It can also be seen that the LHMM is robust with respect to misclassifications in the underlying gesteme classifiers.
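    The gesteme-level classification described in the abstract above can be sketched with the standard HMM forward algorithm: each gesteme has its own HMM, and a motion segment is labelled with the gesteme whose model assigns the observation sequence the highest likelihood. All model parameters below are invented for illustration, not taken from the paper:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | HMM) with initial distribution pi,
    transition matrix A and emission matrix B (rows = states,
    columns = observation symbols). Parameters are hypothetical."""
    alpha = pi * B[:, obs[0]]           # initialise with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate and weight by emission
    return alpha.sum()

# Two toy 2-state gesteme models over a binary observation alphabet:
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B1 = np.array([[0.9, 0.1], [0.8, 0.2]])  # gesteme 1 mostly emits symbol 0
B2 = np.array([[0.2, 0.8], [0.1, 0.9]])  # gesteme 2 mostly emits symbol 1
seq = [0, 0, 1, 0]
best = max((1, 2), key=lambda k: forward_likelihood(pi, A, B1 if k == 1 else B2, seq))
```

    In the layered model, the sequence of gesteme labels produced this way becomes the observation sequence for a higher-level HMM over subtasks.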

  • 4.
    Aarno, Daniel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Lingelbach, F.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Constrained path planning and task-consistent path adaptation for mobile manipulators (2005). In: 2005 12th International Conference on Advanced Robotics, 2005, pp. 268-273. Conference paper (Refereed)
    Abstract [en]

    This paper presents our ongoing research in the design of a versatile service robot capable of operating in a home or office environment. Ideas presented here cover architectural issues and possible applications for such a robot system, with focus on tasks requiring constrained end-effector motions. Two key components of such a system are a path planner and a reactive behavior capable of force relaxation and path adaptation. These components are presented in detail along with an overview of the software architecture they fit into.

  • 5.
    Aarno, Daniel
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sommerfeld, Johan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pugeault, Nicolas
    Kalkan, Sinan
    Woergoetter, Florentin
    Krüger, Norbert
    Early reactive grasping with second order 3D feature relations (2008). In: Recent Progress In Robotics: Viable Robotic Service To Human / [ed] Lee, S; Suh, IH; Kim, MS, 2008, Vol. 370, pp. 91-105. Conference paper (Refereed)
    Abstract [en]

    One of the main challenges in the field of robotics is to make robots ubiquitous. To intelligently interact with the world, such robots need to understand the environment and situations around them and react appropriately: they need context-awareness. But how can robots be equipped with the capability of gathering and interpreting the necessary information for novel tasks through interaction with the environment, given only some minimal knowledge in advance? This has been a long-term question and one of the main drives in the field of cognitive system development. The main idea behind the work presented in this paper is that the robot should, like a human infant, learn about objects by interacting with them, forming representations of the objects and their categories that are grounded in its embodiment. For this purpose, we study an early object grasping process in which the agent acts on a set of innate reflexes and knowledge about its embodiment. We stress that this is not work on grasping as such; it is a system that interacts with the environment based on relations of 3D visual features generated through a stereo vision system. We show how geometry, appearance and spatial relations between the features can guide early reactive grasping, which can later be used in a more purposive manner when interacting with the environment.

  • 6. Abeywardena, D.
    et al.
    Wang, Zhan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dissanayake, G.
    Waslander, S. L.
    Kodagoda, S.
    Model-aided state estimation for quadrotor micro air vehicles amidst wind disturbances (2014). Conference paper (Refereed)
    Abstract [en]

    This paper extends the recently developed Model-Aided Visual-Inertial Fusion (MA-VIF) technique for quadrotor Micro Air Vehicles (MAV) to deal with wind disturbances. The wind effects are explicitly modelled in the quadrotor dynamic equations, excluding the unobservable wind velocity component. This is achieved by a nonlinear observability analysis of the dynamic system with wind effects. We show that, using the developed model, the vehicle pose and two components of the wind velocity vector can be simultaneously estimated with a monocular camera and an inertial measurement unit. We also show that the MA-VIF is reasonably tolerant to wind disturbances, even without explicit modelling of wind effects, and explain the reasons for this behaviour. Experimental results using a Vicon motion capture system are presented to demonstrate the effectiveness of the proposed method and validate our claims.

  • 7.
    Alberti, Marina
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för kemivetenskap (CHE).
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Relational approaches for joint object classification and scene similarity measurement in indoor environments (2014). In: Proc. of 2014 AAAI Spring Symposium Qualitative Representations for Robots 2014, Palo Alto, California: The AAAI Press, 2014. Conference paper (Refereed)
    Abstract [en]

    The qualitative structure of objects and their spatial distribution,to a large extent, define an indoor human environmentscene. This paper presents an approach forindoor scene similarity measurement based on the spatialcharacteristics and arrangement of the objects inthe scene. For this purpose, two main sets of spatialfeatures are computed, from single objects and objectpairs. A Gaussian Mixture Model is applied both onthe single object features and the object pair features, tolearn object class models and relationships of the objectpairs, respectively. Given an unknown scene, the objectclasses are predicted using the probabilistic frameworkon the learned object class models. From the predictedobject classes, object pair features are extracted. A fi-nal scene similarity score is obtained using the learnedprobabilistic models of object pair relationships. Ourmethod is tested on a real world 3D database of deskscenes, using a leave-one-out cross-validation framework.To evaluate the effect of varying conditions on thescene similarity score, we apply our method on mockscenes, generated by removing objects of different categoriesin the test scenes.
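    The probabilistic scoring of object-pair features described above can be sketched with a single Gaussian (a one-component stand-in for the paper's mixture model): fit the Gaussian to training pair features by maximum likelihood, then score a new pair by its log-likelihood. The feature values and their interpretation below are hypothetical:

```python
import numpy as np

def gaussian_loglik(x, mean, cov):
    """Log-likelihood of feature vector x under a multivariate Gaussian,
    used here as a one-component stand-in for a mixture model."""
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)           # log-determinant of covariance
    maha = diff @ np.linalg.solve(cov, diff)     # squared Mahalanobis distance
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

# Hypothetical object-pair features (e.g. centroid distance, height difference)
# observed in training scenes:
pairs = np.array([[0.30, 0.05], [0.35, 0.02], [0.28, 0.04], [0.33, 0.06]])
mean = pairs.mean(axis=0)
cov = np.cov(pairs, rowvar=False)

# A pair arrangement close to the training data scores higher than an outlier:
typical = gaussian_loglik(np.array([0.31, 0.04]), mean, cov)
atypical = gaussian_loglik(np.array([1.50, 0.80]), mean, cov)
```

    Summing such log-likelihoods over all object pairs in a scene yields a crude version of the scene similarity score the paper describes.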

  • 8.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Karayiannidis, Yiannis
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folding Assembly by Means of Dual-Arm Robotic Manipulation (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, pp. 3987-3993. Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly, that can be integrated in a higher level assembly strategy. The system composed of two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force-torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive.

  • 9.
    Almeida, Diogo
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Viña, Francisco E.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Karayiannidis, Yiannis
    Bimanual Folding Assembly: Switched Control and Contact Point Estimation (2016). In: IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, 2016, Cancun: IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Robotic assembly in unstructured environments is a challenging task, due to the added uncertainties. These can be mitigated through the employment of assembly systems, which offer a modular approach to the assembly problem via the conjunction of primitives. In this paper, we use a dual-arm manipulator in order to execute a folding assembly primitive. When executing a folding primitive, two parts are brought into rigid contact and subsequently translated and rotated. A switched controller is employed in order to ensure that the relative motion of the parts follows the desired model, while regulating the contact forces. The control is complemented with an estimator based on a Kalman filter, which tracks the contact point between parts based on force and torque measurements. Experimental results are provided, and the effectiveness of the control and contact point estimation is shown.
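    The Kalman-filter contact-point tracking mentioned above can be illustrated in a deliberately reduced planar setting: for a point contact, the measured torque relates to the contact-point coordinate x and the contact force f as tau = x * f, so a scalar Kalman filter can track x from torque measurements. This sketch and its noise levels are hypothetical, not the paper's estimator:

```python
def kalman_contact_estimate(torques, forces, q=1e-4, r_noise=1e-2):
    """Scalar Kalman filter tracking a contact-point coordinate x from
    torque measurements tau = x * f (planar point contact).
    q = process noise, r_noise = measurement noise; both hypothetical."""
    x, p = 0.0, 1.0                          # initial estimate and variance
    for tau, f in zip(torques, forces):
        p += q                               # predict: contact point nearly static
        h = f                                # measurement model: tau = f * x
        k = p * h / (h * h * p + r_noise)    # Kalman gain
        x += k * (tau - h * x)               # correct with the innovation
        p *= (1.0 - k * h)                   # update variance
    return x

# True contact point at 0.2 m with a constant 5 N force gives tau = 1.0 N*m:
est = kalman_contact_estimate([1.0] * 20, [5.0] * 20)
```

    With noisy torque readings the same update simply converges more slowly, the variance p quantifying the remaining uncertainty.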

  • 10.
    Ambrus, Rares
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Unsupervised construction of 4D semantic maps in a long-term autonomy scenario (2017). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Robots are operating for longer times and collecting much more data than just a few years ago. In this setting we are interested in exploring ways of modeling the environment, segmenting out areas of interest and keeping track of the segmentations over time, with the purpose of building 4D models (i.e. space and time) of the relevant parts of the environment.

    Our approach relies on repeatedly observing the environment and creating local maps at specific locations. The first question we address is how to choose where to build these local maps. Traditionally, an operator defines a set of waypoints on a pre-built map of the environment which the robot visits autonomously. Instead, we propose a method to automatically extract semantically meaningful regions from a point cloud representation of the environment. The resulting segmentation is purely geometric, and in the context of mobile robots operating in human environments, the semantic label associated with each segment (i.e. kitchen, office) can be of interest for a variety of applications. We therefore also look at how to obtain per-pixel semantic labels given the geometric segmentation, by fusing probabilistic distributions over scene and object types in a Conditional Random Field.

    For most robotic systems, the elements of interest in the environment are the ones which exhibit some dynamic properties (such as people, chairs, cups, etc.), and the ability to detect and segment such elements provides a very useful initial segmentation of the scene. We propose a method to iteratively build a static map from observations of the same scene acquired at different points in time. Dynamic elements are obtained by computing the difference between the static map and new observations. We address the problem of clustering together dynamic elements which correspond to the same physical object, observed at different points in time and in significantly different circumstances. To address some of the inherent limitations in the sensors used, we autonomously plan, navigate around and obtain additional views of the segmented dynamic elements. We look at methods of fusing the additional data and we show that both a combined point cloud model and a fused mesh representation can be used to more robustly recognize the dynamic object in future observations. In the case of the mesh representation, we also show how a Convolutional Neural Network can be trained for recognition by using mesh renderings.

    Finally, we present a number of methods to analyse the data acquired by the mobile robot autonomously and over extended time periods. First, we look at how the dynamic segmentations can be used to derive a probabilistic prior which can be used in the mapping process to further improve and reinforce the segmentation accuracy. We also investigate how to leverage spatial-temporal constraints in order to cluster dynamic elements observed at different points in time and under different circumstances. We show that by making a few simple assumptions we can increase the clustering accuracy even when the object appearance varies significantly between observations. The result of the clustering is a spatial-temporal footprint of the dynamic object, defining an area where the object is likely to be observed spatially as well as a set of time stamps corresponding to when the object was previously observed. Using this data, predictive models can be created and used to infer future times when the object is more likely to be observed. In an object search scenario, this model can be used to decrease the search time when looking for specific objects.

  • 11.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL.
    Autonomous meshing, texturing and recognition of object models with a mobile robot (2017). Conference paper (Refereed)
    Abstract [en]

    We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.

  • 12.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bore, Nils
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Meta-rooms: Building and Maintaining Long Term Spatial Models in a Dynamic World (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE conference proceedings, 2014, pp. 1854-1861. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for re-creating the static structure of cluttered office environments - which we define as the "meta-room" - from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and it is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations, thus mapping their spatial behaviour over extended periods of time.

  • 13.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Claici, Sebastian
    Wendt, Axel
    Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments (2017). In: IEEE Robotics and Automation Letters, ISSN 2377-3766, E-ISSN 1949-3045, Vol. 2, no. 2, pp. 749-756. Journal article (Refereed)
    Abstract [en]

    We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.

  • 14.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ekekrantz, Johan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario (2015). In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 5678-5685. Conference paper (Refereed)
    Abstract [en]

    We present a novel method for clustering segmented dynamic parts of indoor RGB-D scenes across repeated observations by performing an analysis of their spatial-temporal distributions. We segment areas of interest in the scene using scene differencing for change detection. We extend the Meta-Room method and evaluate the performance on a complex dataset acquired autonomously by a mobile robot over a period of 30 days. We use an initial clustering method to group the segmented parts based on appearance and shape, and we further combine the clusters we obtain by analyzing their spatial-temporal behaviors. We show that using the spatial-temporal information further increases the matching accuracy.

  • 15.
    Ambrus, Rares
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Robotik, perception och lärande, RPL. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Unsupervised object segmentation through change detection in a long term autonomy scenario (2016). In: IEEE-RAS International Conference on Humanoid Robots, IEEE, 2016, pp. 1181-1187. Conference paper (Refereed)
    Abstract [en]

    In this work we address the problem of dynamic object segmentation in office environments. We make no prior assumptions on what is dynamic and static, and our reasoning is based on change detection between sparse and non-uniform observations of the scene. We model the static part of the environment, and we focus on improving the accuracy and quality of the segmented dynamic objects over long periods of time. We address the issue of adapting the static structure over time and incorporating new elements, for which we train and use a classifier whose output gives an indication of the dynamic nature of the segmented elements. We show that the proposed algorithms improve the accuracy and the rate of detection of dynamic objects by comparing with a labelled dataset.

  • 16.
    Andersson, Sofie
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Nikou, Alexandros
    KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Control Synthesis for Multi-Agent Systems under Metric Interval Temporal Logic Specifications (2017). In: IFAC-PapersOnLine, Elsevier, 2017, Vol. 50, pp. 2397-2402. Conference paper (Refereed)
    Abstract [en]

    This paper presents a framework for automatic synthesis of a control sequence for multi-agent systems governed by continuous linear dynamics under timed constraints. First, the motion of the agents in the workspace is abstracted into individual Transition Systems (TS). Second, each agent is assigned an individual formula given in Metric Interval Temporal Logic (MITL) and, in parallel, the team of agents is assigned a collaborative team formula. The proposed method is based on a correct-by-construction control synthesis method, and hence guarantees that the resulting closed-loop system will satisfy the desired specifications. The specifications consider Boolean-valued properties under real-time bounds. Extended simulations have been performed in order to demonstrate the efficiency of the proposed methodology.

  • 17.
    Andreasson, Martin
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Johansson, Karl H.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Sandberg, Henrik
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Distributed vs. centralized power systems frequency control. 2013. In: 2013 European Control Conference, ECC 2013, 2013, pp. 3524-3529. Conference paper (Refereed)
    Abstract [en]

    This paper considers a distributed control algorithm for frequency control of electrical power systems. We propose a distributed controller which retains the reference frequency of the buses under unknown load changes, while asymptotically minimizing a quadratic cost of power generation. For comparison, we also propose a centralized controller that retains the reference frequency while minimizing the same cost of power generation. We derive sufficient stability criteria for the parameters of both controllers. The controllers are evaluated by simulation on the IEEE 30 bus test network, where their performance is compared.
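The key point of such distributed schemes, that buses need only agree on network-wide averages to reproduce the cost-optimal generation dispatch, can be sketched in a few lines. This is an illustrative toy under invented costs, graph and gains, not the paper's controller: each bus runs average consensus on its local load change and on its inverse cost, then generates in proportion to 1/c_i, which matches the total load while minimizing the quadratic cost sum c_i u_i^2 / 2.

```python
def consensus_average(values, neighbors, steps=200, eps=0.2):
    """Plain average consensus: x_i += eps * sum over neighbors j of (x_j - x_i).
    Converges to the network average for eps below 1/(max degree)."""
    x = list(values)
    for _ in range(steps):
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

def distributed_dispatch(load_changes, costs, neighbors):
    """Each bus agrees on the average load change and average inverse cost,
    then generates u_i proportional to 1/c_i: the total matches the load."""
    avg_dp = consensus_average(load_changes, neighbors)
    avg_ic = consensus_average([1.0 / c for c in costs], neighbors)
    return [(avg_dp[i] / avg_ic[i]) / costs[i] for i in range(len(costs))]
```

On a three-bus path graph with costs 1, 2, 4, the cheapest generator picks up the largest share and the shares sum exactly to the total load change.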

  • 18.
    Andreasson, Martin
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Sandberg, Henrik
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Johansson, Karl H.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Distributed controllers for multiterminal HVDC transmission systems. 2017. In: IEEE Transactions on Control of Network Systems, ISSN 2325-5870, Vol. 4, no. 3, pp. 564-574. Journal article (Refereed)
    Abstract [en]

    High-voltage direct current (HVDC) is a commonly used technology for long-distance electric power transmission, mainly due to its low resistive losses. In this paper the voltage droop method (VDM) is reviewed, and three novel distributed controllers for multi-terminal HVDC (MTDC) transmission systems are proposed. Sufficient conditions for when the proposed controllers render the closed-loop system asymptotically stable are provided. These conditions give insight into suitable controller architecture, e.g., that the communication graph should be identical to the graph of the MTDC system, including edge weights. Provided that the closed-loop systems are asymptotically stable, it is shown that the voltages asymptotically converge to within predefined bounds. Furthermore, a quadratic cost of the injected currents is asymptotically minimized. The proposed controllers are evaluated on a four-bus MTDC system.

  • 19. Anisi, David A.
    et al.
    Ögren, Petter
    Swedish Defence Research Agency (FOI), Sweden.
    Hu, Xiaoming
    KTH, Skolan för teknikvetenskap (SCI), Matematik (Inst.), Optimeringslära och systemteori. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Cooperative Minimum Time Surveillance With Multiple Ground Vehicles. 2010. In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 55, no. 12, pp. 2679-2691. Journal article (Refereed)
    Abstract [en]

    In this paper, we formulate and solve two different minimum time problems related to unmanned ground vehicle (UGV) surveillance. The first problem is the following. Given a set of surveillance UGVs and a polyhedral area, find waypoint-paths for all UGVs such that every point of the area is visible from a point on a path and such that the time for executing the search in parallel is minimized. Here, the sensors' fields of view are assumed to have a limited coverage range and to be occluded by the obstacles. The second problem extends the first by additionally requiring the induced information graph to be connected at the time instants when the UGVs perform the surveillance mission, i.e., when they gather and transmit sensor data. In the context of the second problem, we also introduce and utilize the notion of recurrent connectivity, which is a significantly more flexible connectivity constraint than, e.g., 1-hop connectivity constraints, and use it to discuss consensus filter convergence for the group of UGVs.

  • 20.
    Annergren, Mariette
    et al.
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Larsson, Christian A.
    Hjalmarsson, Håkan
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik.
    Bombois, Xavier
    Wahlberg, Bo
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för elektro- och systemteknik (EES), Centra, ACCESS Linnaeus Centre.
    Application-Oriented Input Design in System Identification: Optimal Input Design for Control. 2017. In: IEEE Control Systems Magazine, ISSN 1066-033X, Vol. 37, no. 2, pp. 31-56. Journal article (Refereed)
  • 21.
    Axelsson, Unnar
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Underwater feature extraction and pillar mapping. 2015. Report (Other academic)
    Abstract [en]

    A mechanically scanned imaging sonar (MSIS) produces a 2D image of the range and bearing of return intensities. The pattern produced in this image depends on the environmental feature that caused it. These features are very useful for underwater navigation, but the inverse mapping of sonar image pattern to environmental feature can be ambiguous. We investigate problems associated with using MSIS for navigation. In particular, we show that support vector machines can be used to classify the existence and types of features in a sonar image. We develop a sonar processing pipeline that can be used for navigation. This is tested on two sonar datasets collected from ROVs.
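The feature-classification step can be illustrated with a minimal linear SVM trained by Pegasos-style stochastic sub-gradient descent. The 2D points below are an invented toy stand-in for sonar-image features; the paper's actual classifier, kernel and feature extraction are not reproduced here:

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic sub-gradient descent for a linear SVM.
    labels must be +1 / -1; returns weight vector w with bias folded in."""
    rng = random.Random(seed)
    dim = len(data[0]) + 1                      # +1 for the bias feature
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            x = data[i] + [1.0]                 # append bias feature
            margin = labels[i] * sum(wj * xj for wj, xj in zip(w, x))
            w = [(1 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                      # hinge-loss sub-gradient
                w = [wj + eta * labels[i] * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    """Sign of the decision function for a single feature vector."""
    s = sum(wj * xj for wj, xj in zip(w, x + [1.0]))
    return 1 if s >= 0 else -1
```

A multi-class feature classifier would wrap several such binary machines (one-vs-rest), and a kernel would replace the inner product for non-linearly separable sonar patterns.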

  • 22.
    Aydemir, Alper
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bishop, Adrian N.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Simultaneous Object Class and Pose Estimation for Mobile Robotic Applications with Minimalistic Recognition. 2010. In: 2010 IEEE International Conference on Robotics and Automation (ICRA) / [ed] Rakotondrabe M; Ivan IA, 2010, pp. 2020-2027. Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of simultaneous object class and pose estimation using nothing more than object class label measurements from a generic object classifier. We detail a method for designing a likelihood function over the robot configuration space. This function provides a likelihood measure of an object being of a certain class given that the robot (from some position) sees and recognizes an object as being of some (possibly different) class. Using this likelihood function in a recursive Bayesian framework allows us to achieve a kind of spatial averaging and determine the object pose (up to certain ambiguities to be made precise). We show how inter-class confusion from certain robot viewpoints can actually increase the ability to determine the object pose. Our approach is motivated by the idea of minimalistic sensing, since we use only class label measurements although we attempt to estimate the object pose in addition to the class.
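The recursive update at the heart of such a scheme is a plain Bayes filter over joint (class, pose) hypotheses. A minimal sketch with invented numbers; the actual design of the likelihood over the robot configuration space is the paper's contribution and is not reproduced here:

```python
def bayes_update(prior, likelihood, observed_label):
    """One recursive Bayesian update over joint (class, pose) hypotheses.
    prior: dict mapping (cls, pose) -> probability.
    likelihood: dict mapping (cls, pose) -> {label: P(classifier says label)}."""
    post = {h: p * likelihood[h].get(observed_label, 0.0)
            for h, p in prior.items()}
    z = sum(post.values())
    if z == 0.0:
        return dict(prior)     # observation impossible under the model
    return {h: p / z for h, p in post.items()}
```

The example in the test shows the effect described in the abstract: if a cup seen from the side is often confused with a bowl, then an observed "bowl" label shifts posterior mass toward the side pose, so confusion itself carries pose information.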

  • 23.
    Aydemir, Alper
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Exploiting and modeling local 3D structure for predicting object locations. 2012. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, pp. 3885-3892. Conference paper (Refereed)
    Abstract [en]

    In this paper, we argue that there is a strong correlation between local 3D structure and object placement in everyday scenes. We call this the 3D context of the object. In previous work, this is typically hand-coded and limited to flat horizontal surfaces. In contrast, we propose to use a more general model for 3D context and learn the relationship between 3D context and different object classes. This way, we can capture more complex 3D contexts without implementing specialized routines. We present extensive experiments with both qualitative and quantitative evaluations of our method for different object classes. We show that our method can be used in conjunction with an object detection algorithm to reduce the rate of false positives. Our results support that the 3D structure surrounding objects in everyday scenes is a strong indicator of their placement and that it can give significant improvements in the performance of, for example, an object detection system. For evaluation, we have collected a large dataset of Microsoft Kinect frames from five different locations, which we also make publicly available.

  • 24.
    Aydemir, Alper
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    What can we learn from 38,000 rooms?: Reasoning about unexplored space in indoor environments. 2012. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, pp. 4675-4682. Conference paper (Refereed)
    Abstract [en]

    Many robotics tasks require the robot to predict what lies in the unexplored part of the environment. Although much work focuses on building autonomous robots that operate indoors, indoor environments are neither well understood nor analyzed enough in the literature. In this paper, we propose and compare two methods for predicting both the topology and the categories of rooms given a partial map. The methods are motivated by the analysis of two large annotated floor plan data sets corresponding to the buildings of the MIT and KTH campuses. In particular, utilizing graph theory, we discover that local complexity remains unchanged for growing global complexity in real-world indoor environments, a property which we exploit. In total, we analyze 197 buildings, 940 floors and over 38,000 real-world rooms. Such a large set of indoor places has not been investigated in previous work. We provide extensive experimental results and show the degree of transferability of spatial knowledge between two geographically distinct locations. We also contribute the KTH data set and the software tools to work with it.

  • 25.
    Aydemir, Alper
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Gobelbecker, Moritz
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active Visual Object Search in Unknown Environments Using Uncertain Semantics. 2013. In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 29, no. 4, pp. 986-1002. Journal article (Refereed)
    Abstract [en]

    In this paper, we study the problem of active visual search (AVS) in large, unknown, or partially known environments. We argue that by making use of uncertain semantics of the environment, a robot tasked with finding an object can devise efficient search strategies that can locate everyday objects at the scale of an entire building floor, which is previously unknown to the robot. To realize this, we present a probabilistic model of the search environment, which allows for prioritizing the search effort to those parts of the environment that are most promising for a specific object type. Further, we describe a method for reasoning about the unexplored part of the environment for goal-directed exploration with the purpose of object search. We demonstrate the validity of our approach by comparing it with two other search systems in terms of search trajectory length and time. First, we implement a greedy coverage-based search strategy that is found in previous work. Second, we let human participants search for objects as an alternative comparison for our method. Our results show that AVS strategies that exploit uncertain semantics of the environment are a very promising idea, and our method pushes the state-of-the-art forward in AVS.
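A common baseline for prioritizing search effort is the probability-to-cost greedy rule (Smith's rule from scheduling theory), which minimizes the expected time to find the object when the per-room location probabilities are mutually exclusive. The rooms and numbers below are invented for illustration; this is a baseline of the kind the paper compares against, not the paper's probabilistic planner:

```python
def plan_search_order(rooms):
    """Visit rooms in decreasing probability-to-cost ratio.
    rooms: list of (name, p_object_here, search_cost) tuples, with the
    probabilities assumed mutually exclusive."""
    return sorted(rooms, key=lambda r: r[1] / r[2], reverse=True)

def expected_search_time(order):
    """Expected time to find the object when searching rooms in the given
    order: each room's probability weights the cumulative cost so far."""
    t, expect = 0.0, 0.0
    for _, p, cost in order:
        t += cost
        expect += p * t
    return expect
```

A cheap, moderately likely room can beat the most probable room outright: in the test, the office (p = 0.3, cost 3) is searched before the kitchen (p = 0.6, cost 10).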

  • 26.
    Aydemir, Alper
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Sjöö, Kristoffer
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pronobis, Andrzej
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Search in the real world: Active visual object search based on spatial relations. 2011. In: IEEE International Conference on Robotics and Automation (ICRA), 2011, IEEE, 2011, pp. 2818-2824. Conference paper (Refereed)
    Abstract [en]

    Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions or manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the ready sensory reach of the robot. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. Presenting spatial relations that describe topological relationships between objects, we then show how to use these to create potential search actions. We introduce a method for efficiently selecting search strategies given probabilities for those relations. Finally we perform experiments to verify the feasibility of our approach.

  • 27.
    Barck-Holst, Carl
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ralph, Maria
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Holmar, Fredrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning Grasping Affordance Using Probabilistic and Ontological Approaches. 2009. In: 2009 International Conference on Advanced Robotics, ICAR 2009, IEEE, 2009, pp. 96-101. Conference paper (Refereed)
    Abstract [en]

    We present two approaches to modeling affordance relations between objects, actions and effects. The first approach we present focuses on a probabilistic approach which uses a voting function to learn which objects afford which types of grasps. We compare the success rate of this approach to a second approach which uses an ontological reasoning engine for learning affordances. Our second approach employs a rule-based system with axioms to reason on grasp selection for a given object.

  • 28.
    Basiri, Meysam
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bishop, Adrian N.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Distributed control of triangular formations with angle-only constraints. 2010. In: Systems & Control Letters (Print), ISSN 0167-6911, E-ISSN 1872-7956, Vol. 59, no. 2, pp. 147-154. Journal article (Refereed)
    Abstract [en]

    This paper considers the coupled, bearing-only formation control of three mobile agents moving in the plane. Each agent has only local inter-agent bearing knowledge and is required to maintain a specified angular separation relative to both neighbor agents. Assuming that the desired angular separation of each agent relative to the group is feasible, a triangle is generated. The control law is distributed and accordingly each agent can determine its own control law using only the locally measured bearings. A convergence result is established in this paper which guarantees global asymptotic convergence of the formation to the desired formation shape.
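The flavor of such a bearing-only law can be sketched by moving each agent along its internal angle bisector in proportion to its angle error: only the bearings to the two neighbors are used, and both the bisector direction and the subtended angle are computable from those bearings alone. This is an illustrative discretized simulation with invented gain and positions, not the paper's proven controller:

```python
import math

def unit(dx, dy):
    """Unit bearing vector from a displacement."""
    d = math.hypot(dx, dy)
    return (dx / d, dy / d)

def angles(pos):
    """Internal angle at each vertex of the triangle pos[0..2]."""
    out = []
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        uj = unit(pos[j][0] - pos[i][0], pos[j][1] - pos[i][1])
        uk = unit(pos[k][0] - pos[i][0], pos[k][1] - pos[i][1])
        dot = max(-1.0, min(1.0, uj[0] * uk[0] + uj[1] * uk[1]))
        out.append(math.acos(dot))
    return out

def step(pos, desired, gain=0.2):
    """One synchronous update: each agent moves along its internal bisector,
    toward its neighbors if its angle is too small and away if too large."""
    ang = angles(pos)
    new = []
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        uj = unit(pos[j][0] - pos[i][0], pos[j][1] - pos[i][1])
        uk = unit(pos[k][0] - pos[i][0], pos[k][1] - pos[i][1])
        e = gain * (desired[i] - ang[i])
        new.append((pos[i][0] + e * (uj[0] + uk[0]),
                    pos[i][1] + e * (uj[1] + uk[1])))
    return new
```

Note that the desired internal angles must sum to pi for the target shape to be a valid triangle, and that angle-only control fixes the shape but not the scale of the formation.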

  • 29.
    Basiri, Meysam
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bishop, Adrian N.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Distributed Control of Triangular Sensor Formations with Angle-Only Constraints. 2009. In: 2009 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2009), New York: IEEE, 2009, pp. 121-126. Conference paper (Refereed)
    Abstract [en]

    This paper considers the coupled formation control of three mobile agents moving in the plane. Each agent has only local inter-agent bearing knowledge and is required to maintain a specified angular separation relative to its neighbors. The problem considered in this paper differs from similar problems in the literature since no inter-agent distance measurements are employed and the desired formation is specified entirely by the internal triangle angles. Each agent's control law is distributed and based only on its locally measured bearings. A convergence result is established which guarantees global convergence of the formation to the desired formation shape.

  • 30. Bayro-Corrochano, Eduardo
    et al.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Advances in theory and applications of pattern recognition, image processing and computer vision. 2011. In: Pattern Recognition Letters, ISSN 0167-8655, Vol. 32, no. 16, pp. 2143-2144. Journal article (Refereed)
  • 31.
    Behere, Sagar
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A Generic Framework for Robot Motion Planning and Control. 2010. Independent thesis, Advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
    Abstract [en]

    This thesis deals with the general problem of robot motion planning and control. It proposes the hypothesis that it should be possible to create a generic software framework capable of dealing with all robot motion planning and control problems, independent of the robot being used, the task being solved, the workspace obstacles or the algorithms employed. The thesis work then consisted of identifying the requirements and creating a design and implementation of such a framework. This report motivates and documents the entire process. The framework developed was tested on two different robot arms under varying conditions. The testing method and results are also presented. The thesis concludes that the proposed hypothesis is indeed valid.

  • 32. Bekiroglu, Y.
    et al.
    Damianou, A.
    Detry, Renaud
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. University of Liège.
    Stork, Johannes A.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. University of Bristol.
    Probabilistic consolidation of grasp experience. 2016. In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE Conference Proceedings, 2016, pp. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 33.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, Renaud
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Joint Observation of Object Pose and Tactile Imprints for Online Grasp Stability Assessment. 2011. Conference paper (Refereed)
    Abstract [en]

    This paper studies the viability of concurrent object pose tracking and tactile sensing for assessing grasp stability on a physical robotic platform. We present a kernel logistic-regression model of pose- and touch-conditional grasp success probability. Models are trained on grasp data which consist of (1) the pose of the gripper relative to the object, (2) a tactile description of the contacts between the object and the fully-closed gripper, and (3) a binary description of grasp feasibility, which indicates whether the grasp can be used to rigidly control the object. The data is collected by executing grasps demonstrated by a human on a robotic platform composed of an industrial arm, a three-finger gripper equipped with tactile sensing arrays, and a vision-based object pose tracking system. The robot is able to track the pose of an object while it is grasping it, and it can acquire grasp tactile imprints via pressure sensor arrays mounted on its gripper's fingers. We consider models defined on several subspaces of our input data, using tactile perceptions or gripper poses only. Models are optimized and evaluated with f-fold cross-validation. Our preliminary results show that stability assessments based on both tactile and pose data can provide better rates than assessments based on tactile data alone.

  • 34.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, Renaud
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning Tactile Characterizations of Object- and Pose-Specific Grasps. 2011. Conference paper (Refereed)
    Abstract [en]

    Our aim is to predict the stability of a grasp from the perceptions available to a robot before attempting to lift up and transport an object. The percepts we consider consist of the tactile imprints and the object-gripper configuration read before and until the robot’s manipulator is fully closed around an object. Our robot is equipped with multiple tactile sensing arrays and it is able to track the pose of an object during the application of a grasp. We present a kernel-logistic-regression model of pose- and touch-conditional grasp success probability which we train on grasp data collected by letting the robot experience the effect on tactile and visual signals of grasps suggested by a teacher, and letting the robot verify which grasps can be used to rigidly control the object. We consider models defined on several subspaces of our input data – e.g., using tactile perceptions or pose information only. Our experiment demonstrates that joint tactile and pose-based perceptions carry valuable grasp-related information, as models trained on both hand poses and tactile parameters perform better than the models trained exclusively on one perceptual input.

  • 35.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integrating Grasp Planning with Online Stability Assessment using Tactile Sensing. 2011. In: IEEE International Conference on Robotics and Automation, IEEE Conference Proceedings, 2011, pp. 4750-4755. Conference paper (Refereed)
    Abstract [en]

    This paper presents an integration of grasp planning and online grasp stability assessment based on tactile data. We show how the uncertainty in grasp execution posterior to grasp planning can be dealt with using tactile sensing and machine learning techniques. The majority of the state-of-the-art grasp planners demonstrate impressive results in simulation. However, these results are mostly based on perfect scene/object knowledge allowing for analytical measures to be employed. It is questionable how well these measures can be used in realistic scenarios where the information about the object and robot hand may be incomplete and/or uncertain. Thus, tactile and force-torque sensory information is necessary for successful online grasp stability assessment. We show how a grasp planner can be integrated with a probabilistic technique for grasp stability assessment in order to improve the hypotheses about suitable grasps on different types of objects. Experimental evaluation with a three-fingered robot hand equipped with tactile array sensors shows the feasibility and strength of the integrated approach.

  • 36.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kyrki, Ville
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Learning grasp stability based on tactile data and HMMs. 2010. Conference paper (Refereed)
    Abstract [en]

    In this paper, the problem of learning grasp stability in robotic object grasping based on tactile measurements is studied. Although grasp stability modeling and estimation has been studied for a long time, there are few robots today capable of demonstrating extensive grasping skills. The main contribution of the work presented here is an investigation of probabilistic modeling for inferring grasp stability based on learning from examples. The main objective is classification of a grasp as stable or unstable before applying further actions on it, e.g. lifting. The problem cannot be solved by visual sensing, which is typically used to execute an initial robot hand positioning with respect to the object. The output of the classification system can trigger a regrasping step if an unstable grasp is identified. An off-line learning process is implemented and used for reasoning about grasp stability for a three-fingered robotic hand using hidden Markov models. To evaluate the proposed method, experiments are performed both in simulation and on a real robot system.
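The HMM classification step can be illustrated with the forward algorithm: a tactile sequence is labeled by whichever model (one trained on stable grasps, one on unstable grasps) assigns it the higher likelihood. The two hand-specified toy models below, over a binary "pressure" symbol (0 = low, 1 = high), are invented for illustration and not learned from the paper's data:

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | HMM) via the forward algorithm for discrete observations.
    start[s]: initial state probability; trans[sp][s]: transition prob;
    emit[s][o]: probability of emitting symbol o in state s."""
    alpha = [p * emit[s][obs[0]] for s, p in enumerate(start)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(len(alpha)))
                 * emit[s][o] for s in range(len(alpha))]
    return sum(alpha)

def classify_grasp(obs, stable_hmm, unstable_hmm):
    """Label a tactile sequence by the higher-likelihood model.
    Each model is a (start, trans, emit) tuple."""
    return ("stable" if forward_likelihood(obs, *stable_hmm)
            >= forward_likelihood(obs, *unstable_hmm) else "unstable")
```

For the longer sequences of real tactile streams the forward recursion should be run in log space (or with per-step scaling) to avoid numerical underflow.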

  • 37.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Laaksonen, Janne
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Jorgensen, Jimmy Alison
    The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Denmark.
    Kyrki, Ville
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Assessing Grasp Stability Based on Learning and Haptic Data (2011). In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 27, no. 3, pp. 616-629. Journal article (Refereed)
    Abstract [en]

    An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams on grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the learning approach is applicable in realistic scenarios, which opens a number of interesting avenues for future research.
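
AdaBoost, one of the classifiers named in this abstract, can be sketched in a few lines with decision stumps as weak learners. The toy "haptic" feature vectors and labels below are invented for illustration and do not reproduce the paper's data or features.

```python
import numpy as np

def train_stump(X, y, w):
    """Find the weighted-error-minimizing threshold stump
    (feature index, threshold, polarity)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=15):
    """AdaBoost: reweight examples toward those current stumps misclassify."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        j, t, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(score)

# Toy stand-in for haptic features: stable grasps (+1) show higher
# summed fingertip pressure than unstable ones (-1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.3, size=(20, 4)),
               rng.normal(0.2, 0.3, size=(20, 4))])
y = np.concatenate([np.ones(20), -np.ones(20)])
model = adaboost(X, y)
acc = (predict(model, X) == y).mean()
```

The paper additionally compares SVMs and HMMs on the same stability-labeling task; the stump-based booster here is just the simplest of the three to show end to end.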

  • 38.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Laaksonen, Janne
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Jorgensen, Jimmy
    The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Denmark.
    Kyrki, Ville
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning grasp stability based on haptic data (2010). Conference paper (Refereed)
  • 39.
    Bekiroglu, Yasemin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Wang, Lu
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A probabilistic framework for task-oriented grasp stability assessment (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2013, pp. 3040-3047. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic framework for grasp modeling and stability assessment. The framework facilitates assessment of grasp success in a goal-oriented way, taking into account both geometric constraints for task affordances and stability requirements specific for a task. We integrate high-level task information introduced by a teacher in a supervised setting with low-level stability requirements acquired through a robot's self-exploration. The conditional relations between tasks and multiple sensory streams (vision, proprioception and tactile) are modeled using Bayesian networks. The generative modeling approach both allows prediction of grasp success, and provides insights into dependencies between variables and features relevant for object grasping.
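
The Bayesian-network modeling of conditional relations between a task and sensory variables can be illustrated with a toy discrete network and enumeration inference. All variable names and probability tables below are hypothetical, chosen only to show the mechanics of marginalization and Bayes' rule; the paper's networks relate tasks to vision, proprioception and tactile streams.

```python
# Toy network: Task -> GraspFeature, (Task, GraspFeature) -> Success.
P_task = {"pour": 0.5, "handover": 0.5}
P_feat_given_task = {                 # P(grasp location | task)
    "pour":     {"top": 0.2, "side": 0.8},
    "handover": {"top": 0.7, "side": 0.3},
}
P_success = {                         # P(grasp succeeds | task, feature)
    ("pour", "top"): 0.3, ("pour", "side"): 0.9,
    ("handover", "top"): 0.8, ("handover", "side"): 0.5,
}

def p_success_given_task(task):
    """Predict grasp success for a task by marginalizing out the feature."""
    return sum(P_feat_given_task[task][f] * P_success[(task, f)]
               for f in ("top", "side"))

def p_task_given_success(task):
    """Bayes' rule: which task does an observed success suggest?"""
    joint = {t: P_task[t] * p_success_given_task(t) for t in P_task}
    return joint[task] / sum(joint.values())
```

The generative direction is what gives the framework its dual use: the same tables support both predicting success and inspecting which variables a task constrains.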

  • 40.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Interactive Perception: From Scenes to Objects (2012). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis builds on the observation that robots, like humans, do not have enough experience to handle all situations from the start. Therefore they need tools to cope with new situations, unknown scenes and unknown objects. In particular, this thesis addresses objects. How can a robot realize what objects are if it looks at a scene and has no knowledge about objects? How can it recover from situations where its hypotheses about what it sees are wrong? Even if it has built up experience in the form of learned objects, there will be situations where it will be uncertain or mistaken, and will therefore still need the ability to correct errors. Much of our daily lives involves interactions with objects, and the same will be true for robots existing among us. Apart from being able to identify individual objects, the robot will therefore need to manipulate them.

    Throughout the thesis, different aspects of how to deal with these questions are addressed. The focus is on the problem of a robot automatically partitioning a scene into its constituent objects. It is assumed that the robot does not know about specific objects, and is therefore considered inexperienced. Instead, a method is proposed that generates object hypotheses given visual input, and then enables the robot to recover from erroneous hypotheses. This is done both by the robot drawing on a human's experience and by enabling it to interact with the scene itself, monitoring whether the observed changes are in line with its current beliefs about the scene's structure.

    Furthermore, the task of object manipulation for unknown objects is explored. This also serves as motivation for why the scene partitioning problem is essential to solve. Finally, aspects of monitoring the outcome of a manipulation are investigated by observing the evolution of flexible objects in both static and dynamic scenes. All methods developed for this thesis have been tested and evaluated on real robotic platforms. These evaluations show the importance of having a system capable of recovering from errors, and that the robot can take advantage of human experience using just simple commands.

  • 41.
    Bergström, Niklas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Modeling of Natural Human-Robot Encounters (2008). Independent thesis, advanced level (Master's degree), 20 credits / 30 HE credits. Student thesis (Degree project)
  • 42.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Roberson-Johnson, Matthew
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kootstra, Gert
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active Scene Analysis (2010). Conference paper (Refereed)
  • 43.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Generating Object Hypotheses in Natural Scenes through Human-Robot Interaction (2011). In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Amato, Nancy M., San Francisco: IEEE, 2011, pp. 827-833. Conference paper (Refereed)
    Abstract [en]

    We propose a method for interactive modeling of objects and object relations based on real-time segmentation of video sequences. In interaction with a human, the robot can perform multi-object segmentation through principled modeling of physical constraints. The key contribution is an efficient multi-labeling framework that allows object modeling and disambiguation in natural scenes. Object modeling and labeling is done in real-time, and hypotheses and constraints denoting relations between objects can be added incrementally. Through instructions such as key presses or spoken words, a scene can be segmented into regions corresponding to multiple physical objects. The approach solves some of the difficult problems related to disambiguation of objects merged due to their direct physical contact. Results show that even a limited set of simple interactions with a human operator can substantially improve segmentation results.

  • 44.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integration of Visual Cues for Robotic Grasping (2009). In: Computer Vision Systems, Proceedings / [ed] Fritz M, Schiele B, Piater JH, Berlin: Springer-Verlag, 2009, Vol. 5815, pp. 245-254. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods that are advantageous in predicting either how to grasp an object or where to apply a grasp. The first reconstructs a wire-frame object model through curve matching; elementary grasping actions can be associated with parts of this model. The second predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations of good quality. We demonstrate our approach integrated in a vision system, for complex-shaped objects as well as in cluttered scenes.

  • 45.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Scene Understanding through Autonomous Interactive Perception (2011). In: Computer Vision Systems: Lecture Notes in Computer Science / [ed] Crowley James L., Draper Bruce, Thonnat Monique, Springer Verlag, 2011, pp. 153-162. Conference paper (Refereed)
    Abstract [en]

    We propose a framework for detecting, extracting and modeling objects in natural scenes from multi-modal data. Our framework is iterative, exploiting different hypotheses in a complementary manner. We employ the framework in realistic scenarios, based on visual appearance and depth information. Using a robotic manipulator that interacts with the scene, object hypotheses generated using appearance information are confirmed through pushing. Each generated hypothesis feeds into the subsequent one, continuously refining the predictions about the scene. We show results that demonstrate the synergistic effect of applying multiple hypotheses for real-world scene understanding. The method is efficient and performs in real-time.

  • 46.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Yamakawa, Yuji
    Senoo, Taku
    Ishikawa, Masatoshi
    On-line learning of temporal state models for flexible objects (2012). In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE, 2012, pp. 712-718. Conference paper (Refereed)
    Abstract [en]

    State estimation and control are intimately related processes in robot handling of flexible and articulated objects. While for rigid objects we can generate a CAD model beforehand, so that state estimation boils down to estimating the pose or velocity of the object, for flexible and articulated objects such as cloth, the representation of the object's state is heavily dependent on the task and its execution. For example, when folding a cloth, the representation will mainly depend on the way the folding is executed.

  • 47.
    Bergström, Niklas
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Yamakawa, Yuji
    Tokyo University.
    Senoo, Taku
    Tokyo University.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ishikawa, Masatoshi
    Tokyo University.
    State Recognition of Deformable Objects Using Shape Context (2011). In: The 29th Annual Conference of the Robotics Society of Japan, 2011. Conference paper (Other academic)
  • 48.
    Bertolli, Federico
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    SLAM using visual scan-matching with distinguishable 3D points (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York: IEEE, 2006, pp. 4042-4047. Conference paper (Refereed)
    Abstract [en]

    Scan-matching based on data from a laser scanner is frequently used for mapping and localization. This paper presents a scan-matching approach based instead on visual information from a stereo system. The Scale Invariant Feature Transform (SIFT) is used together with epipolar constraints to get high matching precision between the stereo images. Calculating the 3D position of the corresponding points in the world results in a visual scan where each point has a descriptor attached to it. These descriptors can be used when matching scans acquired from different positions. Just as in laser-based scan matching, a map can be defined as a set of reference scans and their corresponding acquisition points. In essence, this reduces each visual scan, which can consist of hundreds of points, to a single entity for which only the corresponding robot pose has to be estimated in the map, reducing the overall complexity of the map. The SIFT descriptor attached to each of the points in the reference scans allows for robust matching and detection of loop-closing situations. The paper presents real-world experimental results from an indoor office environment.
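
The epipolar-constraint filtering step mentioned in this abstract can be sketched as follows: a putative match between left and right image points is kept only if it satisfies x_r^T F x_l ~ 0 for the rig's fundamental matrix F. The rectified-stereo F and the point coordinates below are illustrative, not taken from the paper.

```python
import numpy as np

def epipolar_filter(pts_left, pts_right, F, tol=1e-3):
    """Keep putative matches whose homogeneous points satisfy
    |x_r^T F x_l| < tol (the epipolar constraint)."""
    xl = np.hstack([pts_left, np.ones((len(pts_left), 1))])
    xr = np.hstack([pts_right, np.ones((len(pts_right), 1))])
    residual = np.abs(np.einsum('ij,jk,ik->i', xr, F, xl))
    return residual < tol

# Fundamental matrix of an ideal rectified stereo rig: corresponding
# points must lie on the same image row (x_r^T F x_l = y_l - y_r).
F_rect = np.array([[0.0, 0.0,  0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0,  0.0]])

left  = np.array([[10.0, 5.0], [30.0, 8.0], [50.0, 2.0]])
right = np.array([[ 4.0, 5.0], [25.0, 9.5], [44.0, 2.0]])
mask = epipolar_filter(left, right, F_rect)
# mask -> [True, False, True]: the middle match violates the row constraint
```

In the paper's system F would come from the stereo calibration, and the surviving SIFT matches are triangulated into the 3D "visual scan" points.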

  • 49.
    Bishop, Adrian N.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A tutorial on constraints for positioning on the plane (2010). In: 2010 IEEE 21st International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), IEEE, 2010, pp. 1689-1694. Conference paper (Refereed)
    Abstract [en]

    This paper introduces and surveys a number of determinant constraints on the measurement errors in a variety of positioning scenarios. An algorithm for exploiting the constraints for accurate positioning is introduced and the relationship between the proposed algorithm and a so-called traditional maximum likelihood algorithm is examined.

  • 50.
    Bishop, Adrian N.
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimality analysis of sensor-target localization geometries (2010). In: Automatica, ISSN 0005-1098, Vol. 46, no. 3, pp. 479-492. Journal article (Refereed)
    Abstract [en]

    The problem of target localization involves estimating the position of a target from multiple noisy sensor measurements. It is well known that the relative sensor-target geometry can significantly affect the performance of any particular localization algorithm. The localization performance can be explicitly characterized by certain measures, for example, by the Cramer-Rao lower bound (which is equal to the inverse Fisher information matrix) on the estimator variance. In addition, the Cramer-Rao lower bound is commonly used to generate a so-called uncertainty ellipse which characterizes the spatial variance distribution of an efficient estimate, i.e. an estimate which achieves the lower bound. The aim of this work is to identify those relative sensor-target geometries which result in a measure of the uncertainty ellipse being minimized. Deeming such sensor-target geometries to be optimal with respect to the chosen measure, the optimal sensor-target geometries for range-only, time-of-arrival-based and bearing-only localization are identified and studied in this work. The optimal geometries for an arbitrary number of sensors are identified and it is shown that an optimal sensor-target configuration is not, in general, unique. The importance of understanding the influence of the sensor-target geometry on the potential localization performance is highlighted via formal analytical results and a number of illustrative examples.
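
The geometry dependence described in this abstract can be made concrete for the range-only case: with i.i.d. Gaussian range noise of variance sigma^2, the Fisher information matrix is (1/sigma^2) times the sum of u_i u_i^T, where u_i is the unit vector from the target to sensor i, and a larger determinant of the FIM means a smaller Cramer-Rao uncertainty ellipse. The sensor placements below are illustrative, not the paper's examples.

```python
import numpy as np

def range_only_fim(target, sensors, sigma=1.0):
    """Fisher information matrix for 2D range-only localization:
    FIM = (1/sigma^2) * sum_i u_i u_i^T, u_i the target-to-sensor
    unit bearing vector."""
    diffs = sensors - target
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    return units.T @ units / sigma**2   # sum of outer products

target = np.array([0.0, 0.0])
# Two sensors at right angles vs. nearly collinear bearings, same ranges.
orthogonal = np.array([[10.0, 0.0], [0.0, 10.0]])
collinear  = np.array([[10.0, 0.0], [10.0, 0.5]])
det_orth = np.linalg.det(range_only_fim(target, orthogonal))
det_coll = np.linalg.det(range_only_fim(target, collinear))
# det_orth > det_coll: the orthogonal geometry yields a smaller
# uncertainty ellipse, matching the intuition the paper formalizes.
```

For range-only measurements the information depends only on the bearing angles, not the distances, which is why the comparison above fixes the ranges and varies only the angular spread.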
