Results 251-300 of 683
  • 251.
    Hamid Muhammed, Hamed
    et al.
    KTH, School of Technology and Health (STH), Medical Engineering.
    Bergholm, Fredrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Sensitivity Analysis of Multichannel Images Intended for Instantaneous Imaging Spectrometry Applications (2010). In: SIAM Journal on Imaging Sciences, ISSN 1936-4954, E-ISSN 1936-4954, Vol. 3, no. 1, pp. 79-109. Article in journal (Refereed)
    Abstract [en]

    This paper presents a sensitivity analysis of using instantaneous multichannel two-dimensional (2D) imaging to achieve instantaneous 2D imaging spectroscopy. A simulated multiple-filter mosaic was introduced and used to acquire multichannel data which were transformed into spectra. The feasibility of two different transformation approaches (the concrete pseudoinverse approach and a statistical approach) was investigated through extensive experimental tasks. A promising statistical method was identified to be used for accurate estimation of spectra from multichannel data. Comparison between estimated and measured spectra shows that higher estimation accuracy can be achieved when using a larger number of usable multiple-filter combinations in the mosaic.

  • 252.
    Hang, Kaiyu
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. CVAP/CAS/CSC, KTH Royal Institute of Technology.
    Dexterous Grasping: Representation and Optimization (2016). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Many robot-object interactions require that an object is firmly held and that the grasp remains stable during the whole manipulation process. Based on grasp wrench space, this thesis addresses the problems of measuring a grasp's sensitivity to friction changes, planning contacts and hand configurations on mesh and point cloud representations of arbitrary objects, planning adaptable grasps and finger gaiting for keeping a grasp stable under various external disturbances, as well as learning grasping manifolds for more accurate reachability and inverse kinematics computation for multifingered grasping.

    Firstly, we propose a new concept called friction sensitivity, which measures how susceptible a specific grasp is to changes in the underlying friction coefficients. We develop algorithms for the synthesis of stable grasps with low friction sensitivity and for the synthesis of stable grasps in the case of small friction coefficients.

    Secondly, for fast planning of contacts and hand configurations for dexterous grasping, as well as keeping the stability of a grasp during execution, we present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. 

    Lastly, to improve the efficiency and accuracy of dexterous grasping and in-hand manipulation, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. During execution the system plans and executes fingertip grasps using Canny’s grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution.

  • 253.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Haustein, Joshua
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    On the Evolution of Fingertip Grasping Manifolds (2016). In: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, pp. 2022-2029, article id 7487349. Conference paper (Refereed)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution the system plans and executes fingertip grasps using Canny’s grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a SchunkSDH dexterous hand mounted on a KUKA-KR5 arm. We show that, as the grasping manifold is adapted to the system’s experiences, the heuristic becomes more accurate, which results in an improved performance of the execution module. The improvement is not only observed for experienced objects, but also for previously unknown objects of similar sizes.

  • 254.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps (2014). Conference paper (Refereed)
  • 255.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 4, pp. 960-972, article id 7530865. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

  • 256.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Friction Coefficients and Grasp Synthesis (2013). In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), IEEE, 2013, pp. 3520-3526. Conference paper (Refereed)
    Abstract [en]

    We propose a new concept called friction sensitivity which measures how susceptible a specific grasp is to changes in the underlying friction coefficients. We develop algorithms for the synthesis of stable grasps with low friction sensitivity and for the synthesis of stable grasps in the case of small friction coefficients. We describe how grasps with low friction sensitivity can be used when a robot has an uncertain belief about friction coefficients and study the statistics of grasp quality under changes in those coefficients. We also provide a parametric estimate for the distribution of grasp qualities and friction sensitivities for a uniformly sampled set of grasps.

  • 257.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space for Multi-fingered Precision Grasping (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE, 2014, pp. 1641-1648. Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose a concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and it establishes a basis for the grasp search space. We propose a model for a hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand, considering also noisy and incomplete point cloud data.

  • 258.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Stork, Johannes Andreas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP.
    Combinatorial optimization for hierarchical contact-level grasping (2014). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, pp. 381-388. Conference paper (Refereed)
    Abstract [en]

    We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.

  • 259. Hanheide, Marc
    et al.
    Gretton, Charles
    Dearden, Richard
    Hawes, Nick
    Wyatt, Jeremy
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Zender, Hendrik
    Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour (2011). In: 22nd International Joint Conference on Artificial Intelligence, 2011. Conference paper (Refereed)
    Abstract [en]

    Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.

  • 260.
    Hanheide, Marc
    et al.
    University of Lincoln.
    Göbelbecker, Moritz
    University of Freiburg.
    Horn, Graham S.
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. krsj@kth.se.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gretton, Charles
    University of Birmingham.
    Dearden, Richard
    University of Birmingham.
    Janicek, Miroslav
    DFKI, Saarbrücken.
    Zender, Hendrik
    DFKI, Saarbrücken.
    Kruijff, Geert-Jan
    DFKI, Saarbrücken.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Robot task planning and explanation in open and uncertain worlds (2015). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921. Article in journal (Refereed)
    Abstract [en]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.

  • 261.
    Hanheide, Marc
    et al.
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    A Framework for Goal Generation and Management (2010). In: Proceedings of the AAAI Workshop on Goal-Directed Autonomy, 2010. Conference paper (Refereed)
    Abstract [en]

    Goal-directed behaviour is often viewed as an essential characteristic of an intelligent system, but mechanisms to generate and manage goals are often overlooked. This paper addresses this by presenting a framework for autonomous goal generation and selection. The framework has been implemented as part of an intelligent mobile robot capable of exploring unknown space and determining the category of rooms autonomously. We demonstrate the efficacy of our approach by comparing the performance of two versions of our integrated system: one with the framework, the other without. This investigation leads us to conclude that such a framework is desirable for an integrated intelligent system because it reduces the complexity of the problems that must be solved by other behaviour-generation mechanisms, it makes goal-directed behaviour more robust in the face of dynamic and unpredictable environments, and it provides an entry point for domain-specific knowledge in a more general system.

  • 262. Hawes, N.
    et al.
    Brenner, M.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Planning as an architectural control mechanism (2008). Conference paper (Refereed)
    Abstract [en]

    We describe recent work on PECAS, an architecture for intelligent robotics that supports multi-modal interaction.

  • 263. Hawes, N.
    et al.
    Hanheide, M.
    Hargreaves, J.
    Page, B.
    Zender, H.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Home alone: Autonomous extension and correction of spatial representations (2011). Conference paper (Refereed)
    Abstract [en]

    In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques work in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.

  • 264.
    Hawes, Nick
    et al.
    University of Birmingham.
    Hanheide, Marc
    University of Birmingham.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Zender, Hendrik
    Lison, Pierre
    DFKI Saarbrücken.
    Kruijff-Korbayova, Ivana
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Zillich, Michael
    Vienna University of Technology.
    Dora The Explorer: A Motivated Robot (2009). In: Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010) / [ed] van der Hoek, Kaminka, Lespérance, Luck, Sen, 2009, pp. 1617-1618. Conference paper (Refereed)
    Abstract [en]

    Dora the Explorer is a mobile robot with a sense of curiosity and a drive to explore its world. Given an incomplete tour of an indoor environment, Dora is driven by internal motivations to probe the gaps in her spatial knowledge. She actively explores regions of space which she hasn't previously visited but which she expects will lead her to further unexplored space. She will also attempt to determine the categories of rooms through active visual search for functionally important objects, and through ontology-driven inference on the results of this search.

  • 265.
    Hawes, Nick
    et al.
    University of Birmingham.
    Zender, Hendrik
    DFKI Saarbrücken.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Planning and Acting with an Integrated Sense of Space (2009). In: Proceedings of the 1st International Workshop on Hybrid Control of Autonomous Systems: Integrating Learning, Deliberation and Reactive Control (HYCAS), 2009. Conference paper (Refereed)
    Abstract [en]

    The paper describes PECAS, an architecture for intelligent systems, and its application in the Explorer, an interactive mobile robot. PECAS is a new architectural combination of information fusion and continual planning. PECAS plans, integrates and monitors the asynchronous flow of information between multiple concurrent systems. Information fusion provides a suitable intermediary to robustly couple the various reactive and deliberative forms of processing used concurrently in the Explorer. The Explorer instantiates PECAS around a hybrid spatial model combining SLAM, visual search, and conceptual inference. This paper describes the elements of this model, and demonstrates on an implemented scenario how PECAS provides means for flexible control.

  • 266. Hirsch, D.
    et al.
    Markström, Ingemar
    KTH, Skolan för datavetenskap och kommunikation (CSC), Teoretisk datalogi, TCS.
    Patterson, M. L.
    Sandberg, A.
    Vejdemo-Johansson, Mikael
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. Jožef Štefan Institute, Slovenia.
    More ties than we thought (2015). In: PeerJ, ISSN 2167-8359, E-ISSN 2167-8359, Vol. 2015, no. 1, article id e2. Article in journal (Refereed)
    Abstract [en]

    We extend the existing enumeration of neck tie-knots to include tie-knots with a textured front, tied with the narrow end of a tie. These tie-knots have gained popularity in recent years, based on reconstructions of a costume detail from The Matrix Reloaded, and are explicitly ruled out in the enumeration by Fink & Mao (2000). We show that the relaxed tie-knot description language that comprehensively describes these extended tie-knot classes is context free. It has a regular sub-language that covers all the knots that originally inspired the work. From the full language, we enumerate 266,682 distinct tie-knots that seem tie-able with a normal neck-tie. Out of these 266,682, we also enumerate 24,882 tie-knots that belong to the regular sub-language.

  • 267.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, R.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Representations for cross-task, cross-object grasp transfer (2014). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, pp. 5699-5704. Conference paper (Refereed)
    Abstract [en]

    We address the problem of transferring grasp knowledge across objects and tasks. This means dealing with two important issues: 1) the induction of possible transfers, i.e., whether a given object affords a given task, and 2) the planning of a grasp that will allow the robot to fulfill the task. The induction of object affordances is approached by abstracting the sensory input of an object as a set of attributes that the agent can reason about through similarity and proximity. For grasp execution, we combine a part-based grasp planner with a model of task constraints. The task constraint model indicates areas of the object that the robot can grasp to execute the task. Within these areas, the part-based planner finds a hand placement that is compatible with the object shape. The key contribution is the ability to transfer task parameters across objects while the part-based grasp planner allows for transferring grasp information across tasks. As a result, the robot is able to synthesize plans for previously unobserved task/object combinations. We illustrate our approach with experiments conducted on a real robot.

  • 268.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Robotics, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sparse Summarization of Robotic Grasping Data (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1082-1087. Conference paper (Refereed)
    Abstract [en]

    We propose a new approach for learning a summarized representation of high dimensional continuous data. Our technique consists of a Bayesian non-parametric model capable of encoding high-dimensional data from complex distributions using a sparse summarization. Specifically, the method marries techniques from probabilistic dimensionality reduction and clustering. We apply the model to learn efficient representations of grasping data for two robotic scenarios.

  • 269.
    Hjelm, Martin
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ek, Carl Henrik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, Renaud
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Learning Human Priors for Task-Constrained Grasping2015Ingår i: COMPUTER VISION SYSTEMS (ICVS 2015), Springer Berlin/Heidelberg, 2015, s. 207-217Konferensbidrag (Refereegranskat)
    Abstract [en]

    An autonomous agent using manmade objects must understand how task conditions the grasp placement. In this paper we formulate task based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation, such that similarity in task is reflected by similarity in feature. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the relevant features for the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching to learned instances without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task specific grasps on novel objects.

  • 270.
    Huebner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    BADGr-A toolbox for box-based approximation, decomposition and GRasping2012Ingår i: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, nr 3, s. 367-376Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    In this paper, we conclude our work on shape approximation by box primitives for the goal of simple and efficient grasping. As a main product of our research, we present the BADGr toolbox for Box-based Approximation, Decomposition and Grasping of objects. The contributions of the work presented here are twofold: in terms of shape approximation, we provide an algorithm for creating a 3D box primitive representation to identify object parts from 3D point clouds. We motivate and evaluate this choice particularly towards the task of grasping. As a contribution in the field of grasping, we further provide a grasp hypothesis generation framework that utilizes the chosen box representation in a flexible manner.

  • 271.
    Hyttinen, Emil
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Detry, R.
    Learning the tactile signatures of prototypical object parts for robust part-based grasping of novel objects2015Ingår i: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, nr June, s. 4927-4932Konferensbidrag (Refereegranskat)
    Abstract [en]

    We present a robotic agent that learns to derive object grasp stability from touch. The main contribution of our work is the use of a characterization of the shape of the part of the object that is enclosed by the gripper to condition the tactile-based stability model. As a result, the agent is able to express that a specific tactile signature may for instance indicate stability when grasping a cylinder, while cuing instability when grasping a box. We proceed by (1) discretizing the space of graspable object parts into a small set of prototypical shapes, via a data-driven clustering process, and (2) learning a touch-based stability classifier for each prototype. Classification is conducted through kernel logistic regression, applied to a low-dimensional approximation of the tactile data read from the robot's hand. We present an experiment that demonstrates the applicability of the method, yielding a success rate of 89%. Our experiment also shows that the distribution of tactile data differs substantially between grasps collected with different prototypes, supporting the use of shape cues in touch-based stability estimators.
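The per-prototype classifier named in this abstract is kernel logistic regression over a low-dimensional tactile signature. A minimal dual-form sketch follows; the RBF kernel, learning rate, epoch count and the toy "tactile" points are all invented for illustration, not taken from the paper:

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def train(X, y, lr=0.5, epochs=500, gamma=1.0):
    """Fit dual coefficients so p(stable|x) = sigmoid(sum_i alpha_i k(x_i, x))."""
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(epochs):
        for i in range(n):
            score = sum(alpha[j] * K[j][i] for j in range(n))
            p = 1.0 / (1.0 + math.exp(-score))
            g = y[i] - p                      # gradient of the log-likelihood
            for j in range(n):
                alpha[j] += lr * g * K[j][i] / n
    return alpha

def predict(alpha, X, x, gamma=1.0):
    s = sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, X))
    return 1.0 / (1.0 + math.exp(-s))

# Toy "tactile signatures": 2-D points where stable grasps cluster near (1, 1).
X = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (0.0, 0.0), (0.1, -0.1), (-0.1, 0.2)]
y = [1, 1, 1, 0, 0, 0]
alpha = train(X, y)
print(predict(alpha, X, (1.0, 0.95)))  # high probability of stability
print(predict(alpha, X, (0.0, 0.1)))   # low probability of stability
```

In the paper's setup one such classifier would be trained per shape prototype, so the same tactile signature can map to different stability estimates for different enclosed shapes.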

  • 272.
    Högman, Virgile
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Interactive object classification using sensorimotor contingencies2013Ingår i: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE , 2013, s. 2799-2805Konferensbidrag (Refereegranskat)
    Abstract [en]

    Understanding and representing objects and their function is a challenging task. Objects we manipulate in our daily activities can be described and categorized in various ways according to their properties or affordances, depending also on our perception of those. In this work, we are interested in representing the knowledge acquired through interaction with objects, describing these in terms of action-effect relations, i.e. sensorimotor contingencies, rather than static shape or appearance representations. We demonstrate how a robot learns sensorimotor contingencies through pushing using a probabilistic model. We show how functional categories can be discovered and how entropy-based action selection can improve object classification.

  • 273.
    Högman, Virgile
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Maki, Atsuto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A sensorimotor learning framework for object categorization2016Ingår i: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 8, nr 1, s. 15-25Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    This paper presents a framework that enables a robot to discover various object categories through interaction. The categories are described using action-effect relations, i.e. sensorimotor contingencies rather than more static shape or appearance representations. The framework provides a functionality to classify objects and the resulting categories, associating a class with a specific module. We demonstrate the performance of the framework by studying a pushing behavior in robots, encoding the sensorimotor contingencies and their predictability with Gaussian Processes. We show how entropy-based action selection can improve object classification and how functional categories emerge from the similarities of effects observed among the objects. We also show how a multidimensional action space can be realized by parameterizing pushing using both position and velocity.
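The entropy-based action selection mentioned in this abstract can be sketched as picking the action whose outcome is expected to shrink the class entropy the most. The sketch below replaces the paper's Gaussian Process effect models with fixed outcome-likelihood tables, and all names and numbers are illustrative:

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_posterior_entropy(prior, likelihoods):
    """likelihoods[c][o] = P(outcome o | class c) for one candidate action."""
    n_outcomes = len(likelihoods[0])
    h = 0.0
    for o in range(n_outcomes):
        joint = [prior[c] * likelihoods[c][o] for c in range(len(prior))]
        p_o = sum(joint)
        if p_o > 0:
            posterior = [j / p_o for j in joint]
            h += p_o * entropy(posterior)       # entropy weighted by outcome prob.
    return h

def select_action(prior, actions):
    """actions maps an action name to its per-class outcome likelihoods."""
    return min(actions, key=lambda a: expected_posterior_entropy(prior, actions[a]))

prior = [0.5, 0.5]  # two functional categories, e.g. "rolls" vs "slides"
actions = {
    # pushing sideways barely distinguishes the classes...
    "push_side": [[0.5, 0.5], [0.55, 0.45]],
    # ...while pushing forward produces clearly different effects
    "push_forward": [[0.9, 0.1], [0.1, 0.9]],
}
print(select_action(prior, actions))  # -> push_forward
```

The informative push is chosen because its expected posterior entropy is lowest, which is exactly why active action selection speeds up classification.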

  • 274.
    Hübner, Kai
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Rasolzadeh, Babak
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Schmidt, Martina
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Integration of visual and shape attributes for object action complexes2008Ingår i: Computer Vision Systems, Proceedings / [ed] Gasteratos, A; Vincze, M; Tsotsos, JK, 2008, Vol. 5008, s. 13-22Konferensbidrag (Refereegranskat)
    Abstract [en]

    Our work is oriented towards the idea of developing cognitive capabilities in artificial systems through Object Action Complexes (OACs) [7]. The theory makes the claim that objects and actions are inseparably intertwined. Categories of objects are not built by visual appearance only, as is very common in computer vision, but by the actions an agent can perform and by the attributes perceivable. The core of the OAC concept is connecting objects, constituted from a set of attributes which can be manifold in type (e.g. color, shape, mass, material), to actions. This coupling of attributes and actions provides the basis for categories. The work presented here is embedded in the development of an extensible system for providing and evolving attributes, beginning with attributes extractable from visual data.

  • 275.
    Hübner, Kai
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Grasping by parts: Robot grasp generation from 3D box primitives2010Ingår i: 4th International Conference on Cognitive Systems, CogSys 2010, 2010Konferensbidrag (Refereegranskat)
    Abstract [en]

    Robot grasping capabilities are essential for perceiving, interpreting and acting in arbitrary and dynamic environments. While classical computer vision and visual interpretation of scenes focus on the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. Grasping is a central issue of various robot applications, especially when unknown objects have to be manipulated by the system. We present an approach aimed at the object description, but constrain it by performable actions. In particular, we will connect box-like representations of objects with grasping, and motivate this approach in a number of ways. The contributions of our work are two-fold: in terms of shape approximation, we provide an algorithm for a 3D box primitive representation to identify object parts from 3D point clouds. We motivate and evaluate this choice particularly toward the task of grasping. As a contribution in the field of grasping, we present a grasp hypothesis generation framework that utilizes the box representation in a highly flexible manner.

  • 276.
    Hübner, Kai
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Selection of Robot Pre-Grasps using Box-Based Shape Approximation2008Ingår i: 2008 IEEE/RSJ International Conference On Robots And Intelligent Systems, Vols 1-3, Conference Proceedings / [ed] Chatila, R; Kelly, A; Merlet, JP, 2008, s. 1765-1770Konferensbidrag (Refereegranskat)
    Abstract [en]

    Grasping is a central issue of various robot applications, especially when unknown objects have to be manipulated by the system. In earlier work, we have shown the efficiency of 3D object shape approximation by box primitives for the purpose of grasping. A point cloud was approximated by box primitives [1]. In this paper, we present a continuation of these ideas and focus on the box representation itself. To the set of grasp hypotheses generated from box face normals, we apply a heuristic selection integrating task, orientation and shape issues. Finally, an off-line trained neural network is applied to choose the best hypothesis as the final grasp. We motivate how boxes, as one of the simplest representations, can be applied in a more sophisticated manner to generate task-dependent grasps.

  • 277.
    Hübner, Kai
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ruthotto, Steffen
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Minimum Volume Bounding Box decomposition for shape approximation in robot grasping2008Ingår i: 2008 IEEE International Conference on Robotics and Automation, ICRA 2008: Vols 1-9, 2008, s. 1628-1633Konferensbidrag (Refereegranskat)
    Abstract [en]

    Thinking about intelligent robots involves consideration of how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation focus on the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is not to select a good grasp depending on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points into primitive box shapes by a fit-and-split algorithm that is based on an efficient Minimum Volume Bounding Box implementation. Though box shapes are not able to approximate arbitrary data in a precise manner, they give efficient clues for planning grasps on arbitrary objects. We present the algorithm and experiments using the 3D grasping simulator GraspIt! [1].
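The fit-and-split idea described in this abstract can be sketched in a strongly simplified form: here axis-aligned boxes stand in for the paper's minimum-volume bounding boxes, and the split is accepted only when it reduces the summed box volume by more than an (invented) gain threshold:

```python
def volume(points):
    """Volume of the axis-aligned bounding box of a 3-D point set."""
    lo = [min(p[d] for p in points) for d in range(3)]
    hi = [max(p[d] for p in points) for d in range(3)]
    v = 1.0
    for d in range(3):
        v *= max(hi[d] - lo[d], 1e-9)   # guard against zero-thickness boxes
    return v

def fit_and_split(points, gain=0.2, min_pts=4):
    """Recursively split a cloud while the best split shrinks total volume enough."""
    if len(points) < 2 * min_pts:
        return [points]
    parent = volume(points)
    best = None
    for d in range(3):
        ordered = sorted(points, key=lambda p: p[d])
        for cut in range(min_pts, len(points) - min_pts + 1):
            a, b = ordered[:cut], ordered[cut:]
            child = volume(a) + volume(b)
            if child < parent * (1 - gain) and (best is None or child < best[0]):
                best = (child, a, b)
    if best is None:
        return [points]
    return fit_and_split(best[1], gain, min_pts) + fit_and_split(best[2], gain, min_pts)

# An L-shaped cloud decomposes into two boxes; a single bar stays whole.
arm1 = [(x * 0.1, 0.0, 0.0) for x in range(40)]
arm2 = [(0.0, y * 0.1, 0.0) for y in range(1, 40)]
print(len(fit_and_split(arm1 + arm2)))  # -> 2
```

Each resulting box then gives a cheap part hypothesis for grasp planning, which is the motivation the abstract gives for tolerating the coarse approximation.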

  • 278.
    Hübner, Kai
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Welke, Kai
    Przybylski, Markus
    Vahrenkamp, Nikolaus
    Asfour, Tamim
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dillmann, Rudiger
    Grasping Known Objects with Humanoid Robots: A Box-Based Approach2009Ingår i: 2009 International Conference on Advanced Robotics, ICAR 2009, IEEE , 2009, s. 179-184Konferensbidrag (Refereegranskat)
    Abstract [en]

    Autonomous grasping of household objects is one of the major skills that an intelligent service robot necessarily has to provide in order to interact with the environment. In this paper, we propose a grasping strategy for known objects, comprising an off-line, box-based grasp generation technique on 3D shape representations. The complete system is able to robustly detect an object and estimate its pose, flexibly generate grasp hypotheses from the assigned model and execute these hypotheses using visual servoing. We will present experiments implemented on the humanoid platform ARMAR-III.

  • 279.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Austin, D
    Christensen, H.I
    Toward task oriented localization2000Konferensbidrag (Refereegranskat)
    Abstract [en]

    In the course of building a fully autonomous robot platform it is important to look at the computational resources spent by the individual modules. Each of them cannot be greedy, or the overall demand for computational power will be beyond what can be handled on-board. Maintaining an estimate of the pose of a mobile robot is a typical example where we might not always need to run the algorithm at the highest possible rate. This paper deals with the problem of determining how much effort is needed in order to accomplish the localization part of a task. The approach we have taken to the problem is to optimize a cost function that accounts for the cost of sensing and the growth of the uncertainty.
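The cost function described in this abstract can be sketched as a one-line trade-off between sensing cost and pose uncertainty. The growth model, weights and candidate rates below are invented placeholders, not the paper's actual formulation:

```python
def total_cost(rate, c_sense=1.0, drift=4.0, weight=1.0):
    """rate = localization updates per second."""
    sensing = c_sense * rate        # computation spent on localization
    uncertainty = drift / rate      # pose uncertainty grows between updates
    return sensing + weight * uncertainty

rates = [0.5, 1, 2, 4, 8]
best = min(rates, key=total_cost)
print(best)  # -> 2, the analytic optimum of r + 4/r
```

The point of the sketch is that the optimum is interior: updating as fast as possible wastes computation, updating too rarely lets uncertainty dominate, so the task dictates the effort.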

  • 280.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Austin, D
    Wijk, O
    Andersson, M
    Feature based condensation for mobile robot localization2000Konferensbidrag (Refereegranskat)
    Abstract [en]

    Much attention has been given to CONDENSATION methods for mobile robot localization. This has resulted in somewhat of a breakthrough in representing uncertainty for mobile robots. In this paper we use CONDENSATION with planned sampling as a tool for doing feature based global localization in a large and semi-structured environment. This paper presents a comparison of four different feature types: sonar based triangulation points and point pairs, as well as lines and doors extracted using a laser scanner. We show experimental results that highlight the information content of the different features, and point to fruitful combinations. Accuracy, computation time and the ability to narrow down the search space are among the measures used to compare the features. From the comparison of the features, some general guidelines are drawn for determining good feature types.
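The CONDENSATION (particle filter) localization compared in this abstract can be illustrated on a toy 1-D corridor with door features; the map, sensor model and noise levels below are invented stand-ins, and the planned-sampling refinement is omitted:

```python
import math
import random

random.seed(0)
DOORS = [2.0, 5.0, 9.0]   # toy map: door positions along a 1-D corridor

def likelihood(x, d_obs, sigma=0.3):
    """How well pose x explains 'nearest door is d_obs away'."""
    err = min(abs(x - d) for d in DOORS) - d_obs
    return math.exp(-err * err / (2 * sigma * sigma))

def condensation_step(particles, motion, d_obs):
    # predict: apply odometry with noise, then weight and resample
    moved = [x + motion + random.gauss(0, 0.05) for x in particles]
    weights = [likelihood(x, d_obs) for x in moved]
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0, 10) for _ in range(500)]  # global uncertainty
truth = 4.6
for _ in range(30):
    truth += 0.1                                   # robot drives down the corridor
    obs = min(abs(truth - d) for d in DOORS)       # observed distance to a door
    particles = condensation_step(particles, 0.1, obs)

est = sum(particles) / len(particles)
print(round(est, 1))  # close to the true final pose, about 7.6
```

Early on the particle set is multimodal (several poses explain a door at the same distance); the sequence of observations prunes the wrong modes, which is the global-localization behaviour the paper evaluates per feature type.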

  • 281.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, H
    Laser based pose tracking1999Konferensbidrag (Refereegranskat)
    Abstract [en]

    The trend in localization is towards using more and more detailed models of the world. Our aim is to deal with the question of how simple a model can be used to provide and maintain pose information in an indoor setting. In this paper a Kalman filter based method for continuous position updating using a laser scanner is presented. By updating the position at a high frequency the matching problem becomes tractable and outliers can effectively be filtered out by means of validation gates. The experimental results presented show that the method performs very well in an indoor environment.
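The high-rate update with validation gates described in this abstract can be shown in one dimension: predict, compute the innovation, and skip the update when the measurement falls outside a chi-square gate. All numbers (noise levels, gate size, data) are invented for illustration:

```python
def kf_step(x, P, z, Q=0.01, R=0.04, gate=9.0):
    """One predict/update cycle of a 1-D Kalman filter with a validation gate."""
    P = P + Q                        # predict: uncertainty grows (static model)
    innov = z - x                    # innovation
    S = P + R                        # innovation covariance
    if innov * innov / S > gate:     # roughly a 3-sigma gate
        return x, P                  # measurement rejected as an outlier
    K = P / S                        # Kalman gain
    return x + K * innov, (1 - K) * P

x, P = 0.0, 1.0
for z in [0.11, 0.09, 5.0, 0.10, 0.12]:   # 5.0 is a spurious reading
    x, P = kf_step(x, P, z)
print(round(x, 2))  # near 0.1, unaffected by the outlier
```

Because the filter runs at high frequency the innovation stays small for genuine measurements, so the gate cheaply rejects the spurious one, which is the matching argument made in the abstract.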

  • 282.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, H
    Laser based position acquisition and tracking in an indoor environment1998Konferensbidrag (Refereegranskat)
  • 283.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Christensen, Henrik Iskov
    GeorgiaTech.
    Mobile robot2005Patent (Övrig (populärvetenskap, debatt, mm))
    Abstract [en]

    A mobile robot (1) arranged to operate in an environment is described as well as a method for building a map (20). The mobile robot (1) is in an installation mode arranged to store representations of detected objects (19) in a storage means (7) based on detected movement in order to create a map (20). The mobile robot (1) is in a maintenance mode arranged to move in the environment using the map (20) created in the installation mode. The mobile robot (1) comprises editing means for editing, in the installation mode, the map (20) in the storage means (7) based on the map (20) output from the output means (13).

  • 284.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ekvall, Staffan
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Integrating SLAM and Object Detection for Service Robot Tasks2005Konferensbidrag (Övrigt vetenskapligt)
    Abstract [en]

    A mobile robot system operating in a domestic environment has to integrate components from a number of key research areas such as recognition, visual tracking, visual servoing, object grasping, robot localization, etc. There also has to be an underlying methodology to facilitate the integration. We have previously shown that through sequencing of basic skills, provided by the above mentioned competencies, the system has the ability to carry out flexible grasping for fetch and carry tasks in realistic environments. Through careful fusion of reactive and deliberative control and use of multiple sensory modalities a flexible system is achieved. However, our previous work has mostly concentrated on pick-and-place tasks leaving limited room for generalization. Currently, we are interested in more complex tasks such as collaborating and helping humans in their everyday tasks, opening doors and cupboards, building maps of the environment including objects that are automatically recognized by the system. In this paper, we will show some of the current results regarding the above. Most systems for simultaneous localization and mapping (SLAM) build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as point and lines. Here we augment the process with an object recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way the user can command the robot to retrieve a certain object from a certain room.

  • 285.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Ekvall, Staffan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Aarno, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Augmenting slam with object detection in a service robot framework2006Ingår i: Proceedings, IEEE International Workshop on Robot and Human Interactive Communication, 2006, s. 741-746Konferensbidrag (Refereegranskat)
    Abstract [en]

    In a service robot scenario, we are interested in a task of building maps of the environment that include automatically recognized objects. Most systems for simultaneous localization and mapping (SLAM) build maps that are only used for localizing the robot. Such maps are typically based on grids or different types of features such as point and lines. Here, we augment the process with an object recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. During task execution, the robot can use this information to reason about objects, places and their relationships. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve an object from a particular room or get help from a robot when searching for a certain object.
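The object-augmented map described in this abstract can be sketched as a small data structure: a metric map split into rooms, with recognized objects and their poses attached, so a command like "retrieve the cup from the kitchen" becomes a query. The class and all names here are illustrative, not the paper's implementation:

```python
class AugmentedMap:
    """Toy stand-in for a SLAM map augmented with recognized objects."""

    def __init__(self):
        self.rooms = {}                       # room name -> list of (object, pose)

    def add_observation(self, room, obj, pose):
        """Record a recognized object at a metric pose inside a topological room."""
        self.rooms.setdefault(room, []).append((obj, pose))

    def find(self, obj, room=None):
        """Return poses of obj, optionally restricted to one room."""
        scope = [room] if room else list(self.rooms)
        return [pose for r in scope
                for o, pose in self.rooms.get(r, []) if o == obj]

m = AugmentedMap()
m.add_observation("kitchen", "cup", (2.1, 0.4))
m.add_observation("office", "cup", (7.8, 3.2))
m.add_observation("kitchen", "kettle", (2.5, 0.6))
print(m.find("cup", room="kitchen"))  # -> [(2.1, 0.4)]
```

The room restriction is what the topological split buys: the same object class can exist in several rooms, and the user's command disambiguates by place.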

  • 286.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Christensen, Henrik I.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Numerisk Analys och Datalogi, NADA. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Exploiting distinguishable image features in robotic mapping and localization2006Ingår i: European Robotics Symposium 2006 / [ed] Christensen, HI, 2006, Vol. 22, s. 143-157Konferensbidrag (Refereegranskat)
    Abstract [en]

    Simultaneous localization and mapping (SLAM) is an important research area in robotics. Lately, systems that use a single bearing-only sensor have received significant attention and the use of visual sensors has been strongly advocated. In this paper, we present a framework for 3D bearing only SLAM using a single camera. We concentrate on image feature selection in order to achieve precise localization and thus good reconstruction in 3D. In addition, we demonstrate how these features can be managed to provide real-time performance and fast matching, to detect loop-closing situations. The proposed vision system has been combined with an extended Kalman Filter (EKF) based SLAM method. A number of experiments have been performed in indoor environments which demonstrate the validity and effectiveness of the approach. We also show how the SLAM generated map can be used for robot localization. The use of vision features which are distinguishable allows a straightforward solution to the "kidnapped-robot" scenario.

  • 287.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Förell, Erik
    Ljunggren, Per
    Field and service applications - Automating the marking process for exhibitions and fairs - The making of Harry Plotter2007Ingår i: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 14, nr 3, s. 35-42Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    Robot technology is constantly finding new applications. This article presented the design of a system for automating the process of marking the locations for stands in large scale exhibition spaces. It is a true service robot application, with a high level of autonomy. It is also an excellent example of what mobile robot localization can be used for. The robot system solves a real task, adding value for the customer, and has been in operation at the Stockholm International Fairs since August 2003. It has now become an integral part of the standard routines of marking. With its help, the time for a standard job has been cut from 8 h with two people to 4 h with one person and one robot. Using more than one robot further increases the gain in productivity.

  • 288.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gullstrand, Gunnar
    Forell, Erik
    A mobile robot system for automatic floor marking2006Ingår i: Journal of Field Robotics, ISSN 1556-4959, Vol. 23, nr 6-7, s. 441-459Artikel i tidskrift (Refereegranskat)
    Abstract [en]

    This paper describes a patent-awarded system for automatically marking the positions of stands for a trade fair or exhibition. The system has been in operation since August 2003 and has been used for every exhibition in the three main exhibition halls at the Stockholm International Fair since then. The system has speeded up the marking process significantly. What used to be a job for two men over 8 h now takes one robot monitored by one man 4 h to complete. The operators of the robot are from the same group of people that previously performed the marking task manually. Environmental features are much further away than in most other indoor applications and even many outdoor applications. Experiments show that many of the problems that are typically associated with the large beam width of ultrasonic sensors in normal indoor environments manifest themselves here for the laser because of the long range. Reaching the required level of accuracy was only possible by proper modeling of the laser scanner. The system has been evaluated by hand measuring 680 marked points. To make the integration of the robot system into the overall system as smooth as possible, the robot uses information from the existing computer aided design (CAD) model of the environment in combination with a SICK LMS 291 laser scanner to localize the robot. This allows the robot to make use of the same information about changes in the environment as the people administrating the CAD system.

  • 289.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Gullstrand, Gunnar
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Forell, Erik
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    A system for automatic marking of floors in very large spaces2006Ingår i: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, SPRINGER-VERLAG BERLIN: BERLIN , 2006, Vol. 25, s. 93-104Konferensbidrag (Refereegranskat)
    Abstract [en]

    This paper describes a system for automatic marking of floors. Such systems can be used for example when marking the positions of stands for a trade fair or exhibition. Achieving a high enough accuracy in such an environment, characterized by very large open spaces, is a major challenge. Environmental features will be much further away than in most other indoor applications and even many outdoor applications. A SICK LMS 291 laser scanner is used for localization purposes. Experiments show that many of the problems that are typically associated with the large beam width of ultrasonic sensors in normal indoor environments manifest themselves here for the laser because of the long range. The system that is presented has been in operation for almost two years to date and has been used for every exhibition in the three main exhibition halls at the Stockholm International Fair since then. The system has speeded up the marking process significantly. For example, what used to be a job for two men over eight hours now takes one robot monitored by one man four hours to complete.

  • 290.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Folkesson, John
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    A framework for vision based bearing only 3D SLAM (2006). In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida, May 2006: Vols 1-10, IEEE, 2006, pp. 1944-1950. Conference paper (Refereed)
    Abstract [en]

    This paper presents a framework for 3D vision based bearing only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching and loop detection. For matching image features to map landmarks a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity in the map estimation while maintaining matching performance, only a few high-quality image features are used for map landmarks. The rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. These experiments demonstrate the validity and effectiveness of the approach. In particular they show how the robot is able to successfully match current image features to the map when revisiting an area.
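
    The feature-to-landmark matching step described above can be illustrated with a standard nearest-neighbour ratio test. The sketch below is a generic illustration in Python with synthetic descriptors, not the paper's modified rotationally variant SIFT pipeline; all names and data are invented for illustration.

    ```python
    import numpy as np

    # Generic descriptor matching with a ratio test: a query descriptor is
    # matched to a map landmark only when its nearest landmark is clearly
    # closer than the second nearest, which rejects ambiguous matches.

    def match(query, landmarks, ratio=0.8):
        """Return (query_idx, landmark_idx) pairs passing the ratio test."""
        matches = []
        for i, q in enumerate(query):
            d = np.linalg.norm(landmarks - q, axis=1)  # distances to all landmarks
            j, j2 = np.argsort(d)[:2]                  # best and second best
            if d[j] < ratio * d[j2]:                   # unambiguous match only
                matches.append((i, int(j)))
        return matches

    rng = np.random.default_rng(0)
    landmarks = rng.standard_normal((5, 8))            # 5 map landmark descriptors
    query = landmarks[[2, 4]] + 0.01 * rng.standard_normal((2, 8))  # noisy re-observations
    print(match(query, landmarks))
    ```

    With clean synthetic data the two noisy re-observations match back to landmarks 2 and 4; in practice the ratio threshold trades recall against false matches.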

  • 291.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kristensen, S
    Active global localisation for a mobile robot using multiple hypothesis tracking (1999). Conference paper (Refereed)
    Abstract [en]

    In this paper we present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, which we have termed multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.

  • 292.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kristensen, S
    Active global localisation for a mobile robot using multiple hypothesis tracking (2001). In: IEEE Transactions on Robotics and Automation, ISSN 1042-296X, Vol. 17, no. 5, pp. 748-760. Journal article (Refereed)
    Abstract [en]

    In this paper we present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, which we have termed multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.
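
    As a rough illustration of the multi-hypothesis idea described above, the sketch below tracks two scalar Gaussian pose hypotheses, each updated by its own Kalman filter, while a correctness probability per hypothesis is reweighted by the measurement likelihood. This is a toy one-dimensional reduction of the method; all numbers and names are invented for illustration.

    ```python
    import numpy as np

    # Each hypothesis is a Gaussian (mean, variance) with a correctness
    # probability; a scalar Kalman update refines each one, and Bayesian
    # reweighting lets the correct hypothesis dominate over time.

    def kalman_update(mean, var, z, r):
        """Standard scalar Kalman measurement update."""
        k = var / (var + r)              # Kalman gain
        return mean + k * (z - mean), (1.0 - k) * var

    def likelihood(mean, var, z, r):
        """Gaussian likelihood of measurement z under a hypothesis."""
        s = var + r                      # innovation variance
        return np.exp(-0.5 * (z - mean) ** 2 / s) / np.sqrt(2 * np.pi * s)

    def mhl_step(hypotheses, z, r=0.1):
        """Update every hypothesis and renormalize correctness probabilities."""
        updated = []
        for mean, var, p in hypotheses:
            l = likelihood(mean, var, z, r)
            m, v = kalman_update(mean, var, z, r)
            updated.append((m, v, p * l))
        total = sum(p for _, _, p in updated)
        return [(m, v, p / total) for m, v, p in updated]

    # Two competing pose hypotheses, e.g. two similar doors along a corridor.
    hyps = [(0.0, 1.0, 0.5), (5.0, 1.0, 0.5)]
    for z in [4.9, 5.1, 5.0]:            # measurements near the second door
        hyps = mhl_step(hyps, z)
    best = max(hyps, key=lambda h: h[2])
    print(round(best[0], 2), round(best[2], 3))
    ```

    After three measurements near the second door, that hypothesis carries essentially all the probability mass; the full method additionally generates hypotheses online and plans motions that disambiguate them.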

  • 293.
    Jensfelt, Patric
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Wijk, O
    Austin, D
    Andersson, M
    Experiments on augmenting CONDENSATION for mobile robot localization (2000). Conference paper (Refereed)
    Abstract [en]

    In this paper we study some modifications of the CONDENSATION algorithm. The case studied is feature-based mobile robot localization in a large-scale environment. The sample set size required to make the CONDENSATION algorithm converge properly can in many cases demand too much computation. This is often the case when observing features in symmetric environments, such as doors in long corridors. In such areas a large sample set is required to resolve the resulting multi-hypothesis problem. To cope with a sample set size that would normally cause the CONDENSATION algorithm to break down, we study two modifications. The first strategy, called "CONDENSATION with random sampling", takes part of the sample set and spreads it randomly over the environment the robot operates in. The second strategy, called "CONDENSATION with planned sampling", places part of the sample set at planned positions based on the detected features. From the experiments we conclude that the second strategy is the better of the two and can reduce the sample set size by at least a factor of 40.
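
    The planned-sampling idea can be sketched as follows: when a feature such as a door is detected, a fraction of the particle set is placed at the poses consistent with that observation instead of being spread randomly. This is a hypothetical one-dimensional sketch; the door positions, particle count and all parameters are invented for illustration.

    ```python
    import random

    # Planned sampling for a particle filter: most particles survive ordinary
    # weighted resampling, while a reserved fraction is "planted" near every
    # map position from which the just-detected feature would be visible.

    DOORS = [2.0, 10.0, 18.0]        # known door positions along a corridor (m)

    def resample(particles, weights, n):
        """Standard weighted resampling with replacement."""
        return random.choices(particles, weights=weights, k=n)

    def planned_sampling_step(particles, weights, planned_fraction=0.25):
        """Resample most particles; plant the rest at feature-predicted poses."""
        n = len(particles)
        n_planned = int(n * planned_fraction)
        kept = resample(particles, weights, n - n_planned)
        # A door was just observed: plant particles near every known door,
        # covering all pose hypotheses consistent with the observation.
        planted = [random.gauss(d, 0.2)
                   for _ in range(n_planned // len(DOORS) + 1)
                   for d in DOORS][:n_planned]
        return kept + planted

    random.seed(0)
    particles = [random.uniform(0, 20) for _ in range(40)]
    weights = [1.0] * 40
    particles = planned_sampling_step(particles, weights)
    print(len(particles))            # set size is preserved
    ```

    Because every door now holds a cluster of particles, a small total set suffices to represent the symmetric multi-hypothesis situation that would otherwise require many more random samples.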

  • 294.
    Johnson-Roberson, Matthew
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Björkman, Mårten
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Attention-based Active 3D Point Cloud Segmentation (2010). In: IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems (IROS 2010), 2010, pp. 1165-1170. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical model-based energy minimization, and these methods are compared over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.

  • 295.
    Johnson-Roberson, Matthew
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Skantze, Gabriel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Gustafson, Joakim
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Carlson, Rolf
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Enhanced visual scene understanding through human-robot dialog (2010). In: Dialog with Robots: AAAI 2010 Fall Symposium, 2010, pp. -144. Conference paper (Refereed)
  • 296.
    Johnson-Roberson, Matthew
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Skantze, Gabriel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Gustafson, Joakim
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Carlson, Rolf
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Rasolzadeh, Babak
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Enhanced Visual Scene Understanding through Human-Robot Dialog (2011). In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2011, pp. 3342-3348. Conference paper (Refereed)
    Abstract [en]

    We propose a novel human-robot-interaction framework for robust visual scene understanding. Without any a priori knowledge about the objects, the task of the robot is to correctly enumerate how many of them are in the scene and segment them from the background. Our approach builds on top of state-of-the-art computer vision methods, generating object hypotheses through segmentation. This process is combined with a natural dialog system, thus including a 'human in the loop' where, by exploiting the natural conversation of an advanced dialog system, the robot gains knowledge about ambiguous situations. We present an entropy-based system allowing the robot to detect the poorest object hypotheses and query the user for arbitration. Based on the information obtained from the human-robot dialog, the scene segmentation can be re-seeded and thereby improved. We present experimental results on real data that show an improved segmentation performance compared to segmentation without interaction.
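
    The entropy-based arbitration step can be sketched in a few lines: each object hypothesis carries a discrete distribution over interpretations, and the robot queries the human about the hypothesis with the highest Shannon entropy. The hypothesis names and probabilities below are invented for illustration.

    ```python
    import math

    # Entropy-based query selection: the most ambiguous hypothesis (highest
    # entropy over its label distribution) is the one worth asking the human
    # about, since its answer carries the most information.

    def entropy(p):
        """Shannon entropy in bits of a discrete distribution."""
        return -sum(q * math.log2(q) for q in p if q > 0)

    hypotheses = {
        "mug":   [0.95, 0.05],   # confident: almost surely one object
        "stack": [0.55, 0.45],   # ambiguous: maybe two stacked objects
        "book":  [0.80, 0.20],
    }

    query = max(hypotheses, key=lambda h: entropy(hypotheses[h]))
    print(query)                 # the most ambiguous hypothesis is queried
    ```

    The user's answer then fixes the label of that hypothesis, and the segmentation can be re-seeded with the corrected information, as the abstract describes.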

  • 297.
    Johnson-Roberson, Matthew
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Pizarro, Oscar
    Williams, Stefan
    Saliency Ranking for Benthic Survey using Underwater Images (2010). In: 11th International Conference on Control, Automation, Robotics and Vision (ICARCV 2010), New York: IEEE, 2010, pp. 459-466. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel architecture for a classification system based on the visual saliency of images. The work is motivated by the difficulty of reviewing large numbers of images as a human operator in the context of Autonomous Underwater Vehicle (AUV) surveys. We formulate a feature space in which an algorithm operates over color and texture to determine saliency, and illustrate how this can be used to find interesting or unusual images within a large data set. The saliency classification based on these general image features allows for overlays highlighting interesting benthos or geologic structures on large-scale 3D seafloor reconstructions, quickly providing spatial context to human observers. These results are validated using a set of human trials in which images are classified into salient and non-salient categories by a number of test subjects. The trials show good agreement both between subjects and between the human labels and the automated classification system. The results of the automated technique are also compared directly to a more traditional SVM classification system, showing favorable results for our system when generalizing to new environments.

  • 298.
    Karayiannidis, Yiannis
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Dimarogonas, Dimos
    KTH, Skolan för elektro- och systemteknik (EES), Reglerteknik. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Multi-agent average consensus control with prescribed performance guarantees (2012). In: 2012 IEEE 51st Annual Conference on Decision and Control (CDC), IEEE, 2012, pp. 2219-2225. Conference paper (Refereed)
    Abstract [en]

    This work proposes a distributed control scheme for the state agreement problem which can guarantee prescribed performance for the system transient. In particular, (i) we consider a set of agents that can exchange information according to a static communication graph, (ii) we a priori define time-dependent constraints in the edge space (errors between agents that exchange information), and (iii) we design a distributed controller to guarantee that the errors between neighboring agents do not violate the constraints. Following this technique the contributions are twofold: (a) the convergence rate of the system and the communication structure of the agents' network, which are otherwise strictly connected, can be decoupled, and (b) the connectivity properties of the initially formed communication graph are rendered invariant by appropriately designing the prescribed performance bounds. It is also shown how the structure and the parameters of the prescribed performance controller can be chosen in the case of connected tree graphs and connected graphs with cycles. Simulation results validate the theoretically proven findings while highlighting the merit of the proposed prescribed performance agreement protocol as compared to the linear one.
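
    For context, the linear protocol the paper compares against can be simulated in a few lines: with states evolving as x' = -Lx for a graph Laplacian L, all agents converge to the average of the initial states. The prescribed-performance controller additionally keeps each edge error inside decaying bounds; this sketch only shows the baseline, on an invented 4-agent path graph.

    ```python
    import numpy as np

    # Linear average consensus on a static path graph with 4 agents and
    # edges (0,1), (1,2), (2,3); Euler integration of x' = -L x.
    L = np.array([[ 1, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  1]], dtype=float)   # graph Laplacian

    x = np.array([4.0, 0.0, -2.0, 6.0])             # initial states, average = 2
    dt = 0.05
    for _ in range(2000):
        x = x - dt * (L @ x)                        # each agent uses only neighbors

    print(np.round(x, 3))       # all states converge to the average, 2.0
    ```

    The convergence rate of this baseline is tied to the second-smallest Laplacian eigenvalue, i.e. to the graph structure; the point of the prescribed-performance design is precisely to decouple the transient guarantees from that eigenvalue.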

  • 299.
    Karayiannidis, Yiannis
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Doulgeri, Zoe
    Aristotle University of Thessaloniki, Greece.
    Model-free robot joint position regulation and tracking with prescribed performance guarantees (2012). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no. 2, pp. 214-226. Journal article (Refereed)
    Abstract [en]

    The problem of robot joint position control with prescribed performance guarantees is considered; the control objective is error evolution within prescribed performance bounds in both the regulation and tracking problems. The proposed controllers utilize neither the robot dynamic model nor any approximation structures, and are composed of simple PID or PD controllers enhanced by a proportional term of a transformed error through a transformation-related gain. Under a sufficient condition on the damping gain, the proposed controllers are able to guarantee (i) a predefined minimum speed of convergence, maximum steady-state error and overshoot for the position error, and (ii) uniform ultimate boundedness (UUB) of the velocity error. The use of the integral term reduces residual errors, allowing a proof of asymptotic convergence of both velocity and position errors to zero for the regulation problem under constant disturbances. Performance is a priori guaranteed irrespective of the selection of the control gain values. Simulation results for a three-DOF spatial robotic manipulator and experimental results for a one-DOF manipulator are given to confirm the theoretical findings.
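
    The core mechanism in this line of work is an error transformation through a decaying performance function: the normalized error e/rho(t) is mapped through a function that blows up near the funnel boundary, so even a simple proportional term keeps the error inside the prescribed bound. The sketch below demonstrates this on a scalar integrator; the specific rho, transformation and gains are illustrative, not the paper's exact design.

    ```python
    import math

    # Prescribed-performance funnel: rho(t) decays from rho0 to rho_inf, and
    # the transformed error grows unbounded as |e| approaches rho, generating
    # an arbitrarily strong restoring action near the boundary.

    def rho(t, rho0=1.0, rho_inf=0.05, ell=2.0):
        """Exponentially decaying performance bound rho(t) -> rho_inf."""
        return (rho0 - rho_inf) * math.exp(-ell * t) + rho_inf

    def transform(xi):
        """Map normalized error xi in (-1, 1) to the whole real line."""
        return math.log((1 + xi) / (1 - xi))   # unbounded as |xi| -> 1

    # Closed loop on a scalar integrator e' = -k * T(e/rho): the transformed
    # proportional term keeps |e(t)| < rho(t) throughout.
    e, k, dt = 0.8, 2.0, 1e-3
    violated = False
    for i in range(5000):
        t = i * dt
        xi = e / rho(t)
        violated = violated or abs(xi) >= 1.0
        e += dt * (-k * transform(xi))
    print(violated, round(abs(e), 3))
    ```

    The error starts at 0.8 inside the initial bound rho(0) = 1, never touches the shrinking funnel, and ends below the ultimate bound rho_inf; the decay rate ell and the asymptotic width rho_inf are exactly the "a priori guaranteed" convergence speed and steady-state accuracy the abstract refers to.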

  • 300.
    Karayiannidis, Yiannis
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Doulgeri, Zoe
    Aristotle University of Thessaloniki.
    Regressor-free prescribed performance robot tracking (2013). In: Robotica (Cambridge. Print), ISSN 0263-5747, E-ISSN 1469-8668. Journal article (Refereed)
    Abstract [en]

    Fast and robust tracking against unknown disturbances is required in many modern complex robotic structures and applications, for which knowledge of the full exact nonlinear system is unreasonable to assume. This paper proposes a regressor-free nonlinear controller of low complexity which ensures prescribed performance position error tracking subject to unknown endogenous and exogenous bounded dynamics, assuming that joint position and velocity measurements are available. It is theoretically shown and demonstrated by a simulation study that the proposed controller can guarantee tracking of the desired joint position trajectory with a priori determined accuracy, overshoot and speed of response. Preliminary experimental results on a simplified system are promising for extending validation of the controller to more complex structures.
