  • 151.
    Guo, Meng
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Task and Motion Coordination for Heterogeneous Multiagent Systems With Loosely Coupled Local Tasks (2017). In: IEEE Transactions on Automation Science and Engineering, ISSN 1545-5955, E-ISSN 1558-3783, Vol. 14, no 2, p. 797-808. Article in journal (Refereed)
    Abstract [en]

    We consider a multiagent system that consists of heterogeneous groups of homogeneous agents. Instead of defining a global task for the whole team, each agent is assigned a local task specified as a syntactically co-safe linear temporal logic formula that captures both motion and action requirements. Interagent dependence is introduced by collaborative actions, whose execution requires the collaboration of multiple agents. To ensure the satisfaction of all local tasks without central coordination, we propose a bottom-up motion and task coordination strategy that combines an offline initial plan synthesis with an online coordination scheme based on real-time exchange of request and reply messages. It facilitates not only the collaboration among heterogeneous agents but also task swapping between homogeneous agents to reduce the total execution cost. The scheme is distributed, as every decision is made locally by each agent based on local computation and communication with neighboring agents. It is scalable and resilient to agent failures because dependence is formed and removed dynamically based on agent capabilities and plan execution status, rather than preassigned agent identities. The overall scheme is demonstrated by a simulated scenario of 20 agents with loosely coupled local tasks.
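
    Editor's note: a minimal sketch of the request/reply coordination pattern this abstract describes — not the authors' implementation; the class, field names, and the "shortest remaining plan" cost proxy are invented for illustration:

```python
# Illustrative sketch only: agents hold precomputed local plans and resolve
# collaborative actions through request/reply messages.
class Agent:
    def __init__(self, name, plan, capabilities):
        self.name = name
        self.plan = list(plan)              # offline-synthesized action sequence
        self.capabilities = set(capabilities)
        self.busy = False

def coordination_round(agents):
    """Each agent whose next action is collaborative broadcasts a request;
    free agents with the needed capability reply; the cheapest replier helps."""
    for agent in agents:
        if agent.busy or not agent.plan:
            continue
        action = agent.plan[0]
        if not action.get("collaborative"):
            agent.plan.pop(0)               # independent action: just execute
            continue
        repliers = [a for a in agents
                    if a is not agent and not a.busy
                    and action["need"] in a.capabilities]
        if repliers:                        # confirm the cheapest reply
            helper = min(repliers, key=lambda a: len(a.plan))
            helper.busy = True
            agent.plan.pop(0)
            print(f"{agent.name}: {action['name']} done with help of {helper.name}")

agents = [
    Agent("r1", [{"name": "lift_beam", "collaborative": True, "need": "lift"}], {"move"}),
    Agent("r2", [], {"lift"}),
]
coordination_round(agents)
```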

  • 152.
    Guo, Meng
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Tumova, Jana
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Communication-Free Multi-Agent Control Under Local Temporal Tasks and Relative-Distance Constraints (2016). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 61, no 12, p. 3948-3962. Article in journal (Refereed)
    Abstract [en]

    We propose a distributed control and coordination strategy for multi-agent systems where each agent has a local task specified as a Linear Temporal Logic (LTL) formula and is at the same time subject to relative-distance constraints with its neighboring agents. The local tasks capture temporal requirements on individual agents' behaviors, while the relative-distance constraints impose requirements on the collective motion of the whole team. The proposed solution relies only on relative-state measurements among neighboring agents, without the need for explicit information exchange. It is guaranteed that the local tasks given as syntactically co-safe or general LTL formulas are fulfilled and that the relative-distance constraints are satisfied at all times. The approach is demonstrated with computer simulations.

  • 153.
    Guo, Meng
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Automatic Control.
    Tumova, Jana
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Automatic Control.
    Dimarogonas, Dimos V
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Cooperative decentralized multi-agent control under local LTL tasks and connectivity constraints (2014). In: Proceedings of the IEEE Conference on Decision and Control, IEEE conference proceedings, 2014, no February, p. 75-80. Conference paper (Refereed)
    Abstract [en]

    We propose a framework for the decentralized control of a team of agents that are assigned local tasks expressed as Linear Temporal Logic (LTL) formulas. Each local LTL task specification captures both the requirements on the respective agent's behavior and the requests for the other agents' collaboration needed to accomplish the task. Furthermore, the agents are subject to communication constraints. The presented solution follows the automata-theoretic approach to LTL model checking; however, it avoids the computationally demanding construction of a synchronized product system between the agents. A decentralized coordination scheme based on dynamic leader selection is proposed to guarantee low-level connectivity maintenance and progress toward the satisfaction of each agent's task.

  • 154.
    Gustavi, Tove
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Dimarogonas, Dimos V.
    Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139, United States.
    Egerstedt, Magnus
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sufficient conditions for connectivity maintenance and rendezvous in leader-follower networks (2010). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no 1, p. 133-139. Article in journal (Refereed)
    Abstract [en]

    In this paper we derive a set of constraints that are sufficient to guarantee maintained connectivity in a leader-follower multi-agent network with a proximity-based communication topology. In the scenario we consider, only the leaders are aware of the global mission, which is to converge to a known destination point. Thus, the followers need to stay in contact with the group of leaders in order to reach the goal. We show that we can maintain the initial network structure, and thereby connectivity, by setting up bounds on the ratio of leaders to followers and on the magnitude of the goal attraction force experienced by the leaders. The results are first established for an initially complete communication graph and then extended to an incomplete graph. The results are illustrated by computer simulations.
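
    Editor's note: a toy numerical version of this leader-follower setting — single-integrator agents, consensus attraction between agents within communication range, and a bounded goal force on the leaders. All gains, agent counts, and the 2-D layout are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_leaders, n_followers = 3, 6
radius, beta, dt = 2.0, 0.3, 0.01           # comm. radius, goal-force bound, step
goal = np.array([10.0, 0.0])
x = rng.uniform(-0.5, 0.5, size=(n_leaders + n_followers, 2))
is_leader = np.arange(len(x)) < n_leaders

for _ in range(3000):
    dx = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j and np.linalg.norm(x[i] - x[j]) <= radius:
                dx[i] += x[j] - x[i]                    # neighbour attraction
        if is_leader[i]:
            to_goal = goal - x[i]
            dx[i] += beta * to_goal / max(np.linalg.norm(to_goal), 1e-9)
    x += dt * dx

# connectivity proxy: the largest pairwise distance should stay <= radius
print("max pairwise distance:",
      round(max(np.linalg.norm(a - b) for a in x for b in x), 3))
```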

  • 155. Gustavi, Tove
    et al.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Observer-Based Leader-Following Formation Control Using Onboard Sensor Information (2008). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 24, no 6, p. 1457-1462. Article in journal (Refereed)
    Abstract [en]

    In this paper, leader-following formation control for mobile multiagent systems with limited sensor information is studied. The control algorithms developed require only information available from onboard sensors; in particular, the measurement of the leader (neighbor) speed is not needed. Instead, an observer is designed to estimate this speed. With the proposed control algorithms as building blocks, many complex formations can be obtained.

  • 156.
    Gálvez López, Dorian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Paul, Chandana
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hybrid Laser and Vision Based Object Search and Localization (2008). In: 2008 IEEE International Conference on Robotics and Automation: Vols 1-9, 2008, p. 2636-2643. Conference paper (Refereed)
    Abstract [en]

    We describe a method for an autonomous robot to efficiently locate one or more distinct objects in a realistic environment using monocular vision. We demonstrate how to efficiently subdivide acquired images into interest regions for the robot to zoom in on, using receptive field co-occurrence histograms. Objects are recognized through SIFT feature matching and the positions of the objects are estimated. Assuming a 2D map of the robot's surroundings and a set of navigation nodes between which it is free to move, we show how to compute an efficient sensing plan that allows the robot's camera to cover the environment, while obeying restrictions on the different objects' maximum and minimum viewing distances. The approach has been implemented on a real robotic system and results are presented showing its practicability and the quality of the position estimates obtained.
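
    Editor's note: the sensing-plan computation could be approximated by a greedy set-cover over navigation nodes, as in this illustrative sketch. The node/cell coordinates and distance bounds are made up; the paper's planner is more elaborate:

```python
import numpy as np

# Greedy view planning: repeatedly visit the navigation node whose camera
# covers the most still-uncovered map cells, subject to min/max viewing
# distance constraints.
nodes = np.array([[0, 0], [4, 0], [8, 0], [4, 4]], dtype=float)
cells = np.array([[x, y] for x in range(9) for y in range(6)], dtype=float)
d_min, d_max = 1.0, 4.0

uncovered = set(range(len(cells)))
plan = []
while uncovered:
    def gain(k):
        d = np.linalg.norm(cells[list(uncovered)] - nodes[k], axis=1)
        return int(np.sum((d >= d_min) & (d <= d_max)))
    best = max(range(len(nodes)), key=gain)
    if gain(best) == 0:
        break                      # remaining cells not viewable from any node
    d = np.linalg.norm(cells - nodes[best], axis=1)
    uncovered -= {i for i in uncovered if d_min <= d[i] <= d_max}
    plan.append(best)
print("visit nodes in order:", plan)
```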

  • 157. Göbelbecker, M.
    et al.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A planning approach to active visual search in large environments (2011). In: AAAI Workshop Tech. Rep., 2011, p. 8-13. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a principled planner-based approach to the active visual object search problem in unknown environments. We make use of a hierarchical planner that combines the strengths of decision theory and heuristics. Furthermore, our object search approach leverages conceptual spatial knowledge in the form of object co-occurrences and semantic place categorisation. A hierarchical model for representing object locations is presented, with which the planner is able to perform indirect search. Finally, we present real-world experiments to show the feasibility of the approach.

  • 158.
    Göbelbecker, Moritz
    et al.
    University of Freiburg.
    Hanheide, Marc
    University of Lincoln.
    Gretton, Charles
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI, Saarbruecken.
    Dora: A Robot that Plans and Acts Under Uncertainty (2012). In: Proceedings of the 35th German Conference on Artificial Intelligence (KI’12), 2012. Conference paper (Refereed)
    Abstract [en]

    Dealing with uncertainty is one of the major challenges when constructing autonomous mobile robots. The CogX project addressed key aspects of this by developing and implementing mechanisms for self-understanding and self-extension -- i.e. awareness of gaps in knowledge, and the ability to reason and act to fill those gaps. We discuss Dora, a showcase outcome of that project: a robot that can perform a variety of search tasks in unexplored environments by exploiting probabilistic knowledge representations while retaining efficiency through a fast planning system.

  • 159.
    Göransson, Rasmus
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, A.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kinect@home: A crowdsourced RGB-D dataset (2016). In: 13th International Conference on Intelligent Autonomous Systems, IAS 2014, Springer, 2016, Vol. 302, p. 843-858. Conference paper (Refereed)
    Abstract [en]

    Algorithms for 3D localization, mapping, and reconstruction are getting increasingly mature. It is time to also make the datasets on which they are tested more realistic, to reflect the conditions in the homes of real people. Today, algorithms are tested on data gathered in the lab, or at best in a few places, and almost always by the people who designed the algorithm. In this paper, we present the first RGB-D dataset from the crowdsourced data collection project Kinect@Home and perform an initial analysis of it. The dataset contains 54 recordings with a total of approximately 45 min of RGB-D video. We present a comparison of two different pose estimation methods, the Kinfu algorithm and a keypoint-based method, to show how this dataset can be used even though it lacks ground truth. In addition, the analysis highlights the different characteristics and error modes of the two methods and shows how challenging data from the real world is.

  • 160.
    Güler, Rezan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Biotechnology (BIO), Protein Technology.
    Pauwels, Karl
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pieropan, Alessandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Estimating the Deformability of Elastic Materials using Optical Flow and Position-based Dynamics (2015). In: Humanoid Robots (Humanoids), 2015 IEEE-RAS 15th International Conference on, IEEE conference proceedings, 2015, p. 965-971. Conference paper (Refereed)
    Abstract [en]

    Knowledge of the physical properties of objects is essential in a wide range of robotic manipulation scenarios. A robot may not always be aware of such properties prior to interaction. If an object is incorrectly assumed to be rigid, it may exhibit unpredictable behavior when grasped. In this paper, we use vision-based observation of the behavior of an object the robot is interacting with as the basis for estimating its elastic deformability. This is estimated in a local region around the interaction point using a physics simulator. We use optical flow to estimate the parameters of a position-based dynamics simulation using meshless shape matching (MSM). MSM has been widely used in computer graphics due to its computational efficiency, which is also important for closed-loop control in robotics. In a controlled experiment we demonstrate that our method can qualitatively estimate the physical properties of objects with different degrees of deformability.
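
    Editor's note: the core step of meshless shape matching, which the abstract builds on, is a best-fit rigid transform between the rest shape and the tracked points, with a stiffness parameter pulling the points toward the rigidly transformed shape. A minimal NumPy version (editorial sketch; the stiffness value and random data are illustrative) is:

```python
import numpy as np

def shape_matching_goals(rest, current, alpha=0.5):
    """One shape-matching step: fit a rotation from the rest shape to the
    current points (Kabsch via SVD) and blend toward the rigid goal."""
    c0, c = rest.mean(axis=0), current.mean(axis=0)
    P, Q = rest - c0, current - c
    A = Q.T @ P                        # covariance of deformed vs rest shape
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:           # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (R @ P.T).T + c            # rigidly transformed rest shape
    return current + alpha * (goals - current)

rest = np.random.default_rng(1).normal(size=(20, 3))
deformed = rest + 0.1 * np.random.default_rng(2).normal(size=(20, 3))
print(shape_matching_goals(rest, deformed).shape)   # (20, 3)
```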

  • 161.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Haustein, Joshua
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    On the Evolution of Fingertip Grasping Manifolds (2016). In: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, p. 2022-2029, article id 7487349. Conference paper (Refereed)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution the system plans and executes fingertip grasps using Canny’s grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a SchunkSDH dexterous hand mounted on a KUKA-KR5 arm. We show that, as the grasping manifold is adapted to the system’s experiences, the heuristic becomes more accurate, which results in an improved performance of the execution module. The improvement is not only observed for experienced objects, but also for previously unknown objects of similar sizes.

  • 162.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space for Synthesizing Adaptable Fingertip Grasps (2014). Conference paper (Refereed)
  • 163.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Billard, Aude
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no 4, p. 960-972, article id 7530865. Article in journal (Refereed)
    Abstract [en]

    We present a unified framework for grasp planning and in-hand grasp adaptation using visual, tactile and proprioceptive feedback. The main objective of the proposed framework is to enable fingertip grasping by addressing problems of changed weight of the object, slippage and external disturbances. For this purpose, we introduce the Hierarchical Fingertip Space (HFTS) as a representation enabling optimization for both efficient grasp synthesis and online finger gaiting. Grasp synthesis is followed by a grasp adaptation step that consists of both grasp force adaptation through impedance control and regrasping/finger gaiting when the former is not sufficient. Experimental evaluation is conducted on an Allegro hand mounted on a Kuka LWR arm.

  • 164.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Friction Coefficients and Grasp Synthesis (2013). In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), IEEE, 2013, p. 3520-3526. Conference paper (Refereed)
    Abstract [en]

    We propose a new concept called friction sensitivity which measures how susceptible a specific grasp is to changes in the underlying friction coefficients. We develop algorithms for the synthesis of stable grasps with low friction sensitivity and for the synthesis of stable grasps in the case of small friction coefficients. We describe how grasps with low friction sensitivity can be used when a robot has an uncertain belief about friction coefficients and study the statistics of grasp quality under changes in those coefficients. We also provide a parametric estimate for the distribution of grasp qualities and friction sensitivities for a uniformly sampled set of grasps.
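
    Editor's note: the friction-sensitivity idea can be mimicked by Monte Carlo perturbation of the friction coefficient. The toy quality metric below is an editorial stand-in chosen for illustration, not the paper's grasp quality measure:

```python
import numpy as np

def grasp_quality(mu, contact_angle):
    # stand-in metric: clipped margin between the friction cone half-angle
    # and the contact normal's deviation from the antipodal axis
    return np.maximum(0.0, np.arctan(mu) - contact_angle)

def friction_sensitivity(mu, contact_angle, sigma=0.05, n=5000, seed=0):
    """Perturb mu and report the mean and spread of the resulting quality;
    a large spread means the grasp is sensitive to friction uncertainty."""
    rng = np.random.default_rng(seed)
    mus = np.clip(mu + sigma * rng.standard_normal(n), 1e-3, None)
    q = grasp_quality(mus, contact_angle)
    return q.mean(), q.std()

for angle in (0.10, 0.37):   # a comfortable grasp vs. one near the cone edge
    mean_q, sens = friction_sensitivity(mu=0.4, contact_angle=angle)
    print(f"contact angle {angle:.2f} rad: mean quality {mean_q:.3f}, "
          f"sensitivity {sens:.3f}")
```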

  • 165.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hierarchical Fingertip Space for Multi-fingered Precision Grasping (2014). In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), IEEE, 2014, p. 1641-1648. Conference paper (Refereed)
    Abstract [en]

    Dexterous in-hand manipulation of objects benefits from the ability of a robot system to generate precision grasps. In this paper, we propose the concept of Fingertip Space and its use for precision grasp synthesis. Fingertip Space is a representation that takes into account both the local geometry of the object surface and the fingertip geometry. As such, it is directly applicable to object point cloud data and establishes a basis for the grasp search space. We propose a model for hierarchical encoding of the Fingertip Space that enables multilevel refinement for efficient grasp synthesis. The proposed method works at the grasp contact level while neglecting neither object shape nor hand kinematics. Experimental evaluation is performed for the Barrett hand, considering also noisy and incomplete point cloud data.

  • 166.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Stork, Johannes Andreas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Pokorny, Florian T.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Combinatorial optimization for hierarchical contact-level grasping (2014). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, p. 381-388. Conference paper (Refereed)
    Abstract [en]

    We address the problem of generating force-closed point contact grasps on complex surfaces and model it as a combinatorial optimization problem. Using a multilevel refinement metaheuristic, we maximize the quality of a grasp subject to a reachability constraint by recursively forming a hierarchy of increasingly coarser optimization problems. A grasp is initialized at the top of the hierarchy and then locally refined until convergence at each level. Our approach efficiently addresses the high dimensional problem of synthesizing stable point contact grasps while resulting in stable grasps from arbitrary initial configurations. Compared to a sampling-based approach, our method yields grasps with higher grasp quality. Empirical results are presented for a set of different objects. We investigate the number of levels in the hierarchy, the computational complexity, and the performance relative to a random sampling baseline approach.
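
    Editor's note: the multilevel refinement pattern described here — coarsen, solve at the coarsest level, then refine level by level — can be sketched as below. The surrogate quality function, the random "surface", and the crude index carry-over between levels are editorial simplifications:

```python
import numpy as np

def coarsen(points, factor=2):
    # group points and represent each group by its mean (one coarsening level)
    k = max(len(points) // factor, 3)
    idx = np.array_split(np.arange(len(points)), k)
    return np.array([points[i].mean(axis=0) for i in idx])

def quality(contacts):
    # toy surrogate: spatial spread of the three contacts (larger = better)
    return np.linalg.det(np.cov(contacts.T) + 1e-9 * np.eye(3))

def local_refine(points, contact_idx, iters=50):
    best = list(contact_idx)
    for _ in range(iters):
        cand = list(best)
        cand[np.random.randint(3)] = np.random.randint(len(points))
        if quality(points[cand]) > quality(points[best]):
            best = cand
    return best

surface = np.random.default_rng(3).normal(size=(400, 3))
levels = [surface]
while len(levels[-1]) > 20:                 # build the hierarchy
    levels.append(coarsen(levels[-1]))

contacts = [0, 1, 2]                        # initialise at the coarsest level
for pts in reversed(levels):                # refine down the hierarchy
    contacts = local_refine(pts, contacts)  # indices carry over as a crude projection
print("chosen contact indices on full surface:", contacts)
```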

  • 167. Hanheide, Marc
    et al.
    Gretton, Charles
    Dearden, Richard
    Hawes, Nick
    Wyatt, Jeremy
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Zender, Hendrik
    Exploiting probabilistic knowledge under uncertain sensing for efficient robot behaviour (2011). In: 22nd International Joint Conference on Artificial Intelligence, 2011. Conference paper (Refereed)
    Abstract [en]

    Robots must perform tasks efficiently and reliably while acting under uncertainty. One way to achieve efficiency is to give the robot common-sense knowledge about the structure of the world. Reliable robot behaviour can be achieved by modelling the uncertainty in the world probabilistically. We present a robot system that combines these two approaches and demonstrate the improvements in efficiency and reliability that result. Our first contribution is a probabilistic relational model integrating common-sense knowledge about the world in general with observations of a particular environment. Our second contribution is a continual planning system which is able to plan in the large problems posed by that model, by automatically switching between decision-theoretic and classical procedures. We evaluate our system on object search tasks in two different real-world indoor environments. By reasoning about the trade-offs between possible courses of action with different informational effects, and exploiting the cues and general structures of those environments, our robot is able to consistently demonstrate efficient and reliable goal-directed behaviour.

  • 168.
    Hanheide, Marc
    et al.
    University of Lincoln.
    Göbelbecker, Moritz
    University of Freiburg.
    Horn, Graham S.
    University of Birmingham.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gretton, Charles
    University of Birmingham.
    Dearden, Richard
    University of Birmingham.
    Janicek, Miroslav
    DFKI, Saarbrücken.
    Zender, Hendrik
    DFKI, Saarbrücken.
    Kruijff, Geert-Jan
    DFKI, Saarbrücken.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Robot task planning and explanation in open and uncertain worlds (2015). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921. Article in journal (Refereed)
    Abstract [en]

    A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.

  • 169.
    Hanheide, Marc
    et al.
    University of Birmingham.
    Hawes, Nick
    University of Birmingham.
    Wyatt, Jeremy
    University of Birmingham.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Zender, Hendrik
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    A Framework for Goal Generation and Management (2010). In: Proceedings of the AAAI Workshop on Goal-Directed Autonomy, 2010. Conference paper (Refereed)
    Abstract [en]

    Goal-directed behaviour is often viewed as an essential characteristic of an intelligent system, but mechanisms to generate and manage goals are often overlooked. This paper addresses this by presenting a framework for autonomous goal generation and selection. The framework has been implemented as part of an intelligent mobile robot capable of exploring unknown space and determining the category of rooms autonomously. We demonstrate the efficacy of our approach by comparing the performance of two versions of our integrated system: one with the framework, the other without. This investigation leads us to conclude that such a framework is desirable for an integrated intelligent system, because it reduces the complexity of the problems that must be solved by other behaviour-generation mechanisms, it makes goal-directed behaviour more robust in the face of dynamic and unpredictable environments, and it provides an entry point for domain-specific knowledge in a more general system.

  • 170. Hashimoto, K.
    et al.
    Adachi, S.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Event-triggered intermittent sampling for nonlinear model predictive control (2017). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 81, p. 148-155. Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a new aperiodic formulation of model predictive control for nonlinear continuous-time systems. Unlike earlier approaches, we provide event-triggered conditions without using the optimal cost as a Lyapunov function candidate. Instead, we evaluate the time interval over which the optimal state trajectory enters a local set around the origin. The obtained event-triggered strategy is more suitable for practical applications than earlier approaches in two respects. First, it does not include parameters (e.g., Lipschitz constants of the stage and terminal costs) which may be a potential source of conservativeness in the event-triggered conditions. Second, the event-triggered conditions need to be checked only at certain sampling instants, instead of continuously. This reduces the sensing cost and makes the scheme more suitable for practical implementation on a digital platform. The proposed event-triggered scheme is also validated through numerical simulations.
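
    Editor's note: the "check only at sampling instants" mechanism can be illustrated on a toy linear plant. Here the "OCP" is just a fixed feedback rollout, and the dynamics, horizon, and threshold are invented for the sketch; a real implementation would solve a nonlinear optimal control problem:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretised double integrator
B = np.array([[0.005], [0.1]])
K = np.array([[1.0, 1.5]])               # stand-in for the MPC solution

def solve_ocp(x, horizon=20):
    """Predict a state/input trajectory from x (stand-in for the real OCP)."""
    xs, us, xk = [x.copy()], [], x.copy()
    for _ in range(horizon):
        u = -K @ xk
        xk = A @ xk + B @ u
        us.append(u); xs.append(xk.copy())
    return np.array(us), np.array(xs)

rng = np.random.default_rng(4)
x = np.array([2.0, 0.0]); threshold = 0.05; solves = 0
us, xs = solve_ocp(x); solves += 1; k = 0
for t in range(200):
    x = A @ x + B @ us[k] + rng.normal(0, 0.005, size=2)   # disturbed plant
    k += 1
    # event check happens only at sampling instants, not continuously
    if k == len(us) or np.linalg.norm(x - xs[k]) > threshold:
        us, xs = solve_ocp(x); solves += 1; k = 0           # event: re-solve
print("OCP solved", solves, "times over 200 steps")
```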

  • 171.
    Haustein, Joshua
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hang, Kaiyu
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integrating motion and hierarchical fingertip grasp planning (2017). In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 3439-3446, article id 7989392. Conference paper (Refereed)
    Abstract [en]

    In this work, we present an algorithm that simultaneously searches for a high quality fingertip grasp and a collision-free path for a robot hand-arm system to achieve it. The algorithm combines a bidirectional sampling-based motion planning approach with a hierarchical contact optimization process. Rather than tackling these problems in a decoupled manner, the grasp optimization is guided by the proximity to collision-free configurations explored by the motion planner. We implemented the algorithm for a 13-DoF manipulator and show that it is capable of efficiently planning reachable high quality grasps in cluttered environments. Further, we show that our algorithm outperforms a decoupled integration in terms of planning runtime.

  • 172. Hawes, N.
    et al.
    Brenner, M.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Planning as an architectural control mechanism (2008). Conference paper (Refereed)
    Abstract [en]

    We describe recent work on PECAS, an architecture for intelligent robotics that supports multi-modal interaction.

  • 173. Hawes, N.
    et al.
    Hanheide, M.
    Hargreaves, J.
    Page, B.
    Zender, H.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Home alone: Autonomous extension and correction of spatial representations (2011). Conference paper (Refereed)
    Abstract [en]

    In this paper we present an account of the problems faced by a mobile robot given an incomplete tour of an unknown environment, and introduce a collection of techniques which can generate successful behaviour even in the presence of such problems. Underlying our approach is the principle that an autonomous system must be motivated to act to gather new knowledge, and to validate and correct existing knowledge. This principle is embodied in Dora, a mobile robot which features the aforementioned techniques: shared representations, non-monotonic reasoning, and goal generation and management. To demonstrate how well this collection of techniques work in real-world situations we present a comprehensive analysis of the Dora system's performance over multiple tours in an indoor environment. In this analysis Dora successfully completed 18 of 21 attempted runs, with all but 3 of these successes requiring one or more of the integrated techniques to recover from problems.

  • 174.
    Hawes, Nick
    et al.
    University of Birmingham.
    Hanheide, Marc
    University of Birmingham.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Aydemir, Alper
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Göbelbecker, Moritz
    Albert-Ludwigs-Universität.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Zender, Hendrik
    Lison, Pierre
    DFKI Saarbrücken.
    Kruijff-Korbayova, Ivana
    DFKI Saarbrücken.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Zillich, Michael
    Vienna University of Technology.
    Dora The Explorer: A Motivated Robot (2009). In: Proc. of 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2010) / [ed] van der Hoek, Kaminka, Lespérance, Luck, Sen, 2009, p. 1617-1618. Conference paper (Refereed)
    Abstract [en]

    Dora the Explorer is a mobile robot with a sense of curiosity and a drive to explore its world. Given an incomplete tour of an indoor environment, Dora is driven by internal motivations to probe the gaps in her spatial knowledge. She actively explores regions of space which she hasn't previously visited but which she expects will lead her to further unexplored space. She will also attempt to determine the categories of rooms through active visual search for functionally important objects, and through ontology-driven inference on the results of this search.

  • 175.
    Hawes, Nick
    et al.
    University of Birmingham.
    Zender, Hendrik
    DFKI Saarbrücken.
    Sjöö, Kristoffer
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Brenner, Michael
    Albert-Ludwigs-Universität, Freiburg.
    Kruijff, Geert-Jan
    DFKI Saarbrücken.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Planning and Acting with an Integrated Sense of Space (2009). In: Proceedings of the 1st International Workshop on Hybrid Control of Autonomous Systems: Integrating Learning, Deliberation and Reactive Control (HYCAS), 2009. Conference paper (Refereed)
    Abstract [en]

    The paper describes PECAS, an architecture for intelligent systems, and its application in the Explorer, an interactive mobile robot. PECAS is a new architectural combination of information fusion and continual planning. PECAS plans, integrates and monitors the asynchronous flow of information between multiple concurrent systems. Information fusion provides a suitable intermediary to robustly couple the various reactive and deliberative forms of processing used concurrently in the Explorer. The Explorer instantiates PECAS around a hybrid spatial model combining SLAM, visual search, and conceptual inference. This paper describes the elements of this model, and demonstrates on an implemented scenario how PECAS provides means for flexible control.

  • 176. Heshmati-Alamdari, S.
    et al.
    Eqtami, A.
    Karras, G. C.
    Dimarogonas, Dimos V
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kyriakopoulos, K. J.
    A self-triggered visual servoing model predictive control scheme for under-actuated underwater robotic vehicles (2014). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, p. 3826-3831. Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel Vision-based Nonlinear Model Predictive Control (NMPC) scheme for an under-actuated underwater robotic vehicle. In this scheme, the control loop does not close periodically; instead, a self-triggering framework decides when to provide the next control update. Between two consecutive triggering instants, the control sequence computed by the NMPC is applied to the system in an open-loop fashion, i.e., no state measurements are required during that period. This results in a significantly smaller number of measurements requested from the vision system, as well as less frequent computations of the control law, thereby reducing the processing time and the energy consumption. The image constraints (i.e., keeping the target inside the camera's field of view), the external disturbances induced by currents and waves, as well as the vehicle's kinematic constraints due to under-actuation, are considered during the control design. The closed-loop system has analytically guaranteed stability and convergence properties, while the performance of the proposed control scheme is experimentally verified using a small under-actuated underwater vehicle in a test tank.

  • 177. Heshmati-alamdari, S.
    et al.
    Nikou, Alexandros
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kyriakopoulos, K. J.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A Robust Force Control Approach for Underwater Vehicle Manipulator Systems (2017). In: IFAC-PapersOnLine, ISSN 2405-8963, Vol. 50, no 1, p. 11197-11202. Article in journal (Refereed)
    Abstract [en]

    In various interaction tasks using Underwater Vehicle Manipulator Systems (UVMSs) (e.g. sampling of sea organisms, underwater welding), important factors such as: i) uncertainties and complexity of the UVMS dynamic model, ii) external disturbances (e.g. sea currents and waves), iii) imperfections and noise of the measuring sensors, and iv) steady-state performance, as well as v) overshoot of the interaction force error, should be addressed during force control design. Motivated by these factors, this paper presents a model-free protocol for force control of an Underwater Vehicle Manipulator System in contact with an unknown compliant environment, without incorporating any knowledge of the UVMS's dynamic model, the exogenous disturbances, or the sensor noise model. Moreover, the transient and steady-state response, as well as the reduction of the force-error overshoot, are solely determined by certain designer-specified performance functions and are fully decoupled from the UVMS's dynamic model, the control gain selection, and the initial conditions. Finally, a simulation study clarifies the proposed method and verifies its efficiency.

  • 178.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, R.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Representations for cross-task, cross-object grasp transfer (2014). In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, p. 5699-5704. Conference paper (Refereed)
    Abstract [en]

    We address the problem of transferring grasp knowledge across objects and tasks. This means dealing with two important issues: 1) the induction of possible transfers, i.e., whether a given object affords a given task, and 2) the planning of a grasp that will allow the robot to fulfill the task. The induction of object affordances is approached by abstracting the sensory input of an object as a set of attributes that the agent can reason about through similarity and proximity. For grasp execution, we combine a part-based grasp planner with a model of task constraints. The task constraint model indicates areas of the object that the robot can grasp to execute the task. Within these areas, the part-based planner finds a hand placement that is compatible with the object shape. The key contribution is the ability to transfer task parameters across objects, while the part-based grasp planner allows for transferring grasp information across tasks. As a result, the robot is able to synthesize plans for previously unobserved task/object combinations. We illustrate our approach with experiments conducted on a real robot.

  • 179.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sparse Summarization of Robotic Grasping Data (2013). In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, p. 1082-1087. Conference paper (Refereed)
    Abstract [en]

    We propose a new approach for learning a summarized representation of high dimensional continuous data. Our technique consists of a Bayesian non-parametric model capable of encoding high-dimensional data from complex distributions using a sparse summarization. Specifically, the method marries techniques from probabilistic dimensionality reduction and clustering. We apply the model to learn efficient representations of grasping data for two robotic scenarios.

  • 180.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning Human Priors for Task-Constrained Grasping (2015). In: Computer Vision Systems (ICVS 2015), Springer Berlin/Heidelberg, 2015, p. 207-217. Conference paper (Refereed)
    Abstract [en]

    An autonomous agent using manmade objects must understand how the task conditions grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in feature. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the relevant features of the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.

  • 181. Hu, Jiangping
    et al.
    Hu, Xiaoming
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nonlinear filtering in target tracking using cooperative mobile sensors (2010). In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no 12, p. 2041-2046. Article in journal (Refereed)
    Abstract [en]

    Collaborative signal processing and sensor deployment have been among the most important research tasks in target tracking using networked sensors. In this paper, the mathematical model is formulated for single-target tracking using mobile nonlinear scalar range sensors. Then a sensor deployment strategy is proposed for the mobile sensors, and a convergent nonlinear filter is built to estimate the trajectory of the target.
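
    Editor's note: a minimal range-only tracking filter in the spirit of this abstract — an extended Kalman filter with a constant-velocity target model and a scalar range update per sensor. The sensor layout, noise levels, and motion model are invented for the sketch:

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # constant-velocity model
Q = 0.01 * np.eye(4)                            # process noise covariance
R = 0.05                                        # range measurement variance
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

rng = np.random.default_rng(5)
target = np.array([3.0, 4.0, 0.5, -0.2])        # true state [px, py, vx, vy]
xhat, P = np.zeros(4), 10.0 * np.eye(4)         # filter state and covariance

for _ in range(100):
    target = F @ target
    xhat, P = F @ xhat, F @ P @ F.T + Q          # EKF predict
    for s in sensors:                            # one range update per sensor
        z = np.linalg.norm(target[:2] - s) + rng.normal(0, np.sqrt(R))
        r_hat = np.linalg.norm(xhat[:2] - s)
        H = np.zeros(4)
        H[:2] = (xhat[:2] - s) / max(r_hat, 1e-9)   # Jacobian of the range map
        S = H @ P @ H + R
        K = (P @ H) / S                          # Kalman gain (scalar innovation)
        xhat = xhat + K * (z - r_hat)
        P = P - np.outer(K, H @ P)
print("position error:", np.linalg.norm(xhat[:2] - target[:2]))
```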

  • 182.
    Hu, Jiangping
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Optimal target trajectory estimation and filtering using networked sensors (2008). In: Journal of Systems Science and Complexity, ISSN 1009-6124, E-ISSN 1559-7067, Vol. 21, no 3, p. 325-336. Article in journal (Refereed)
    Abstract [en]

    Target tracking using a distributed sensor network is in general a challenging problem because it always needs to deal with real-time processing of noisy information. In this paper, the problem of using nonlinear sensors, such as distance and direction sensors, for estimating a moving target is studied. The problem is formulated as a prudent design of nonlinear filters for a linear system subject to noisy nonlinear measurements and a partially unknown input generated by an exogenous system. In the worst case, where the input is completely unknown, the exogenous dynamics reduces to a random walk model. It can be shown that the nonlinear filter achieves optimal convergence if the number of sensors is large enough, and that the convergence rate is greatly improved if the sensors are deployed appropriately. This raises an interesting issue in active sensing: how should the sensors be moved optimally if they are considered as a mobile multi-agent system? Finally, a simulation example is given to illustrate and validate the construction of our filter.

  • 183.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Non-regular feedback linearization of nonlinear systems via a normal form algorithm2004In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 40, no 3, p. 439-447Article in journal (Refereed)
    Abstract [en]

    In this paper, the problem of non-regular static state feedback linearization of affine nonlinear systems is considered. First, a new canonical form for non-regular feedback linear systems is proposed. Using this form, a recursive algorithm is presented, which yields a condition for single-input linearization. Then the left semi-tensor product of matrices is introduced and several new properties are developed. Using the recursive framework and the new matrix product, a formula is presented for the normal form algorithm. Based on it, a set of conditions for single-input (approximate) linearizability is presented.
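    For reference, the left semi-tensor product mentioned here is the standard one: for $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$, let $t = \operatorname{lcm}(n, p)$; then

    $$A \ltimes B = \left(A \otimes I_{t/n}\right)\left(B \otimes I_{t/p}\right),$$

    which coincides with the ordinary matrix product when $n = p$.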

  • 184.
    Hyttinen, Emil
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Adaptive Grasping Using Tactile Sensing2017Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Grasping novel objects is challenging because of incomplete object data and because of uncertainties inherent in real-world applications. To robustly grasp previously unseen objects, feedback from touch is essential. In our research, we study how information from touch sensors can be used to improve the grasping of novel objects. Since it is not trivial to extract relevant object properties and deduce appropriate actions from touch sensing, we employ machine learning techniques to learn suitable behaviors. We have shown that grasp stability estimation based on touch can be improved by including an approximate notion of object shape. Furthermore, we have devised a method to guide local grasp adaptations based on our stability estimation method: grasp corrections are found by simulating tactile data for grasps in the vicinity of the current grasp. We present several experiments to demonstrate the applicability of our methods. The thesis concludes by discussing our results and suggesting potential topics for further research.

  • 185.
    Hyttinen, Emil
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, R.
    Learning the tactile signatures of prototypical object parts for robust part-based grasping of novel objects2015In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2015, no June, p. 4927-4932Conference paper (Refereed)
    Abstract [en]

    We present a robotic agent that learns to derive object grasp stability from touch. The main contribution of our work is the use of a characterization of the shape of the part of the object that is enclosed by the gripper to condition the tactile-based stability model. As a result, the agent is able to express that a specific tactile signature may for instance indicate stability when grasping a cylinder, while cuing instability when grasping a box. We proceed by (1) discretizing the space of graspable object parts into a small set of prototypical shapes, via a data-driven clustering process, and (2) learning a touch-based stability classifier for each prototype. Classification is conducted through kernel logistic regression, applied to a low-dimensional approximation of the tactile data read from the robot's hand. We present an experiment that demonstrates the applicability of the method, yielding a success rate of 89%. Our experiment also shows that the distribution of tactile data differs substantially between grasps collected with different prototypes, supporting the use of shape cues in touch-based stability estimators.
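    A minimal sketch of the classification stage under stated assumptions: PCA stands in for the low-dimensional tactile approximation, and kernel logistic regression is approximated by an RBF feature map (Nystroem) feeding a linear logistic classifier; one classifier is trained per shape prototype.

    ```python
    # Hypothetical sketch: one touch-based stability classifier per shape
    # prototype. PCA gives the low-dimensional tactile approximation; the
    # Nystroem map + logistic regression approximates kernel logistic regression.
    from sklearn.decomposition import PCA
    from sklearn.kernel_approximation import Nystroem
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_prototype_classifiers(data):
        """data: {prototype_id: (tactile_matrix, stability_labels)}"""
        models = {}
        for proto, (X, y) in data.items():
            models[proto] = make_pipeline(
                PCA(n_components=10),
                Nystroem(kernel="rbf", n_components=50),
                LogisticRegression(),
            ).fit(X, y)
        return models

    def stability(models, proto, tactile):
        # probability that the grasp is stable, conditioned on the prototype
        return models[proto].predict_proba(tactile.reshape(1, -1))[0, 1]
    ```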

  • 186.
    Hyttinen, Emil
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    Estimating Tactile Data for Adaptive Grasping of Novel Objects2017In: 2017 IEEE-RAS 17TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTICS (HUMANOIDS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 643-648Conference paper (Refereed)
    Abstract [en]

    We present an adaptive grasping method that finds stable grasps on novel objects. The main contribution of this paper is the computation of the probability of success for grasps in the vicinity of an already applied grasp. Our method performs adaptations by simulating tactile data for grasps in the vicinity of the current grasp. The simulated data are used to evaluate hypothetical configurations and thereby guide the robot in the right direction. We demonstrate the applicability of our method by constructing a system that can plan, apply and adapt grasps on novel objects. Experiments are conducted on objects from the YCB object set [1], and our method increases the robot's success rate from 71.4% to 88.1%. Our experiments show that the application of our grasp adaptation method improves grasp stability significantly.
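    A hedged sketch of the adaptation loop, with `simulate_tactile` and `stability` as hypothetical stand-ins for the paper's tactile simulation and stability model: small pose offsets around the current grasp are scored by the predicted stability of their simulated tactile readings, and the best one is kept.

    ```python
    # Hypothetical sketch of the grasp adaptation loop: score small pose
    # offsets around the current grasp by simulating their tactile readings
    # and keep the offset with the highest predicted stability.
    import itertools
    import numpy as np

    def adapt_grasp(grasp, simulate_tactile, stability, step=0.005):
        # grasp is a numpy array; offsets perturb it along each axis
        offsets = [np.array(d) * step
                   for d in itertools.product((-1, 0, 1), repeat=3)]
        scored = [(stability(simulate_tactile(grasp + o)), o) for o in offsets]
        best_p, best_o = max(scored, key=lambda t: t[0])
        return grasp + best_o, best_p  # adapted grasp, predicted stability
    ```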

  • 187.
    Högman, Virgile
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Interactive object classification using sensorimotor contingencies2013In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE , 2013, p. 2799-2805Conference paper (Refereed)
    Abstract [en]

    Understanding and representing objects and their function is a challenging task. Objects we manipulate in our daily activities can be described and categorized in various ways according to their properties or affordances, depending also on our perception of those. In this work, we are interested in representing the knowledge acquired through interaction with objects, describing these in terms of action-effect relations, i.e. sensorimotor contingencies, rather than static shape or appearance representations. We demonstrate how a robot learns sensorimotor contingencies through pushing, using a probabilistic model. We show how functional categories can be discovered and how entropy-based action selection can improve object classification.

  • 188.
    Högman, Virgile
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Maki, Atsuto
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A sensorimotor learning framework for object categorization2016In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 8, no 1, p. 15-25Article in journal (Refereed)
    Abstract [en]

    This paper presents a framework that enables a robot to discover various object categories through interaction. The categories are described using action-effect relations, i.e. sensorimotor contingencies, rather than static shape or appearance representations. The framework provides functionality for classifying objects into the resulting categories, associating a class with a specific module. We demonstrate the performance of the framework by studying a pushing behavior in robots, encoding the sensorimotor contingencies and their predictability with Gaussian processes. We show how entropy-based action selection can improve object classification and how functional categories emerge from the similarities of effects observed among the objects. We also show how a multidimensional action space can be realized by parameterizing pushing using both position and velocity.
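    A minimal sketch of entropy-based action selection under stated assumptions: a hypothetical `predict(action)` stands in for the GP-based forward model and returns possible outcomes of a push as (outcome probability, posterior over categories) pairs; the robot picks the action that minimizes the expected entropy of its belief over object categories.

    ```python
    # Hypothetical sketch: pick the push action that minimizes the expected
    # Shannon entropy of the belief over object categories.
    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def select_action(actions, predict):
        # predict(a) -> iterable of (probability_of_outcome, category_posterior)
        def expected_entropy(a):
            return sum(p_out * entropy(post) for p_out, post in predict(a))
        return min(actions, key=expected_entropy)
    ```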

  • 189.
    Hübner, Kai
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Rasolzadeh, Babak
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Schmidt, Martina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integration of visual and shape attributes for object action complexes2008In: Computer Vision Systems, Proceedings / [ed] Gasteratos, A; Vincze, M; Tsotsos, JK, 2008, Vol. 5008, p. 13-22Conference paper (Refereed)
    Abstract [en]

    Our work is oriented towards the idea of developing cognitive capabilities in artificial systems through Object Action Complexes (OACs) [7]. The theory claims that objects and actions are inseparably intertwined. Categories of objects are not built from visual appearance only, as is common in computer vision, but from the actions an agent can perform with them and the attributes that are perceivable. The core of the OAC concept is connecting objects, constituted from a set of attributes that can be manifold in type (e.g., color, shape, mass, material), to actions. This pairing of attributes and actions provides the basis for categories. The work presented here is embedded in the development of an extensible system for providing and evolving attributes, beginning with attributes extractable from visual data.

  • 190.
    Hübner, Kai
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Grasping by parts: Robot grasp generation from 3D box primitives2010In: 4th International Conference on Cognitive Systems, CogSys 2010, 2010Conference paper (Refereed)
    Abstract [en]

    Robot grasping capabilities are essential for perceiving, interpreting and acting in arbitrary and dynamic environments. While classical computer vision and visual scene interpretation focus rather passively on the robot's internal representation of the world, grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. Grasping is a central issue in various robot applications, especially when unknown objects have to be manipulated by the system. We present an approach aimed at object description, but constrain it by performable actions. In particular, we connect box-like representations of objects with grasping, and motivate this approach in a number of ways. The contributions of our work are twofold: in terms of shape approximation, we provide an algorithm for a 3D box primitive representation that identifies object parts from 3D point clouds, and we motivate and evaluate this choice particularly toward the task of grasping. As a contribution in the field of grasping, we present a grasp hypothesis generation framework that utilizes the box representation in a highly flexible manner.
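    A simplified sketch of one step of such a decomposition, under stated assumptions: the paper's algorithm is more elaborate, but a single oriented box for one point-cloud part can be approximated by aligning the box axes with the principal components of the points.

    ```python
    # Hypothetical sketch: fit one oriented box primitive to a point-cloud
    # part by aligning the box axes with the principal components.
    import numpy as np

    def fit_box(points):
        """points: (N, 3) array for one object part."""
        center = points.mean(axis=0)
        # principal axes of the centered points (rows of Vt from the SVD)
        _, _, axes = np.linalg.svd(points - center, full_matrices=False)
        local = (points - center) @ axes.T     # points in box coordinates
        lo, hi = local.min(axis=0), local.max(axis=0)
        return {
            "center": center + 0.5 * (lo + hi) @ axes,  # true box center
            "axes": axes,                      # rows are the box axes
            "extents": hi - lo,                # side lengths along each axis
        }
    ```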

  • 191.
    Hübner, Kai
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Selection of Robot Pre-Grasps using Box-Based Shape Approximation2008In: 2008 IEEE/RSJ International Conference On Robots And Intelligent Systems, Vols 1-3, Conference Proceedings / [ed] Chatila, R; Kelly, A; Merlet, JP, 2008, p. 1765-1770Conference paper (Refereed)
    Abstract [en]

    Grasping is a central issue in various robot applications, especially when unknown objects have to be manipulated by the system. In earlier work, we have shown the efficiency of approximating 3D object shape by box primitives for the purpose of grasping: a point cloud was approximated by box primitives [1]. In this paper, we present a continuation of these ideas and focus on the box representation itself. To the set of grasp hypotheses generated from box face normals, we apply a heuristic selection that integrates task, orientation and shape issues. Finally, an off-line trained neural network is applied to choose the best hypothesis as the final grasp. We motivate how boxes, as one of the simplest representations, can be applied in a more sophisticated manner to generate task-dependent grasps.
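    A hedged sketch of the hypothesis-generation step only (the heuristic selection and the neural network are omitted): each face of a fitted box, as in the box-fitting sketch above, yields one pre-grasp approaching along the inward face normal from a fixed standoff.

    ```python
    # Hypothetical sketch: enumerate pre-grasp hypotheses from the faces of
    # a fitted box. Each face contributes an approach along its inward
    # normal from a fixed standoff distance.
    import numpy as np

    def face_grasp_hypotheses(box, standoff=0.1):
        hypotheses = []
        for i in range(3):                    # three axis directions
            for sign in (+1.0, -1.0):         # two opposite faces each
                normal = sign * box["axes"][i]
                face_center = box["center"] + normal * box["extents"][i] / 2
                hypotheses.append({
                    "position": face_center + normal * standoff,
                    "approach": -normal,      # move toward the face
                })
        return hypotheses                     # six candidate pre-grasps
    ```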

  • 192.
    Hübner, Kai
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Welke, Kai
    Przybylski, Markus
    Vahrenkamp, Nikolaus
    Asfour, Tamim
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dillmann, Rudiger
    Grasping Known Objects with Humanoid Robots: A Box-Based Approach2009In: 2009 International Conference on Advanced Robotics, ICAR 2009, IEEE , 2009, p. 179-184Conference paper (Refereed)
    Abstract [en]

    Autonomous grasping of household objects is one of the major skills that an intelligent service robot necessarily has to provide in order to interact with the environment. In this paper, we propose a grasping strategy for known objects, comprising an off-line, box-based grasp generation technique on 3D shape representations. The complete system is able to robustly detect an object and estimate its pose, flexibly generate grasp hypotheses from the assigned model, and execute these hypotheses using visual servoing. We present experiments implemented on the humanoid platform ARMAR-III.

  • 193. Jacobs, T.
    et al.
    Virk, Gurvinder S.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. University of Gävle.
    ISO 13482 - The new safety standard for personal care robots2014In: Proceedings for the Joint Conference of ISR 2014 - 45th International Symposium on Robotics and Robotik 2014 - 8th German Conference on Robotics, ISR/ROBOTIK 2014, 2014, p. 698-703Conference paper (Refereed)
    Abstract [en]

    In the future, personal care robots will work in close interaction with humans. This poses a great challenge to the manufacturers of such robots, who have to ensure the safety of their systems. Up to now, only general safety standards for machines were available, and the lack of a specialized safety standard with detailed requirements has resulted in uncertainty and relatively high residual risk for manufacturers. This situation changed with the publication of ISO 13482, a safety standard for personal care robots. This paper gives an overview of the contents of the new safety standard, the expected effects for service robot manufacturers, and the way personal care robots will be developed in the future. The scope of the standard and its application in the risk assessment process are described. Special focus lies on intended close interaction and contact between human and robot, and on the possibility to validate that all safety requirements have been met.

  • 194.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Austin, D
    Wijk, O
    Andersson, M
    Feature based condensation for mobile robot localization2000Conference paper (Refereed)
    Abstract [en]

    Much attention has been given to CONDENSATION methods for mobile robot localization. This has resulted in somewhat of a breakthrough in representing uncertainty for mobile robots. In this paper we use CONDENSATION with planned sampling as a tool for feature-based global localization in a large and semi-structured environment. The paper presents a comparison of four different feature types: sonar-based triangulation points and point pairs, as well as lines and doors extracted using a laser scanner. We show experimental results that highlight the information content of the different features and point to fruitful combinations. Accuracy, computation time and the ability to narrow down the search space are among the measures used to compare the features. From the comparison, some general guidelines are drawn for determining good feature types.
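    CONDENSATION is a particle (sequential Monte Carlo) filter; a minimal sketch of one predict-weight-resample cycle for pose localization follows, with a hypothetical `likelihood(pose, features)` measurement model standing in for the feature types compared in the paper, and without the paper's planned sampling.

    ```python
    # Hypothetical sketch: one CONDENSATION (particle filter) cycle for
    # 2D pose localization. Particles are (x, y, heading) hypotheses.
    import numpy as np

    rng = np.random.default_rng(0)

    def condensation_step(particles, odometry, features, likelihood,
                          motion_noise=(0.02, 0.02, 0.01)):
        # 1. predict: apply odometry plus diffusion noise to each particle
        particles = particles + odometry + rng.normal(
            scale=motion_noise, size=particles.shape)
        # 2. weight: score each pose hypothesis against the observed features
        w = np.array([likelihood(p, features) for p in particles])
        w /= w.sum()
        # 3. resample: draw a new particle set proportional to the weights
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]
    ```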

  • 195.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ekvall, Staffan
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Integrating SLAM and Object Detection for Service Robot Tasks2005Conference paper (Other academic)
    Abstract [en]

    A mobile robot system operating in a domestic environment has to integrate components from a number of key research areas such as recognition, visual tracking, visual servoing, object grasping and robot localization. There also has to be an underlying methodology to facilitate the integration. We have previously shown that, through sequencing of basic skills provided by the above-mentioned competencies, the system has the ability to carry out flexible grasping for fetch-and-carry tasks in realistic environments. Through careful fusion of reactive and deliberative control and the use of multiple sensory modalities, a flexible system is achieved. However, our previous work has mostly concentrated on pick-and-place tasks, leaving limited room for generalization. Currently, we are interested in more complex tasks such as collaborating with and helping humans in their everyday tasks, opening doors and cupboards, and building maps of the environment that include objects automatically recognized by the system. In this paper, we show some of our current results in these directions. Most systems for simultaneous localization and mapping (SLAM) build maps that are only used for localizing the robot. Such maps are typically based on grids or on different types of features such as points and lines. Here we augment the process with an object recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. The metric map is also split into topological entities corresponding to rooms. In this way the user can command the robot to retrieve a certain object from a certain room.

  • 196.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ekvall, Staffan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Aarno, Daniel
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Augmenting slam with object detection in a service robot framework2006In: Proceedings, IEEE International Workshop on Robot and Human Interactive Communication, 2006, p. 741-746Conference paper (Refereed)
    Abstract [en]

    In a service robot scenario, we are interested in the task of building maps of the environment that include automatically recognized objects. Most systems for simultaneous localization and mapping (SLAM) build maps that are only used for localizing the robot. Such maps are typically based on grids or on different types of features such as points and lines. Here, we augment the process with an object recognition system that detects objects in the environment and puts them in the map generated by the SLAM system. During task execution, the robot can use this information to reason about objects, places and their relationships. The metric map is also split into topological entities corresponding to rooms. In this way, the user can command the robot to retrieve an object from a particular room, or get help from the robot when searching for a certain object.
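    A hedged sketch of the kind of map augmentation described here (the data structures are illustrative, not the authors'): recognized objects are attached to the metric map with a pose and indexed by the room they fall in, so that "retrieve object X from room Y" becomes a simple lookup.

    ```python
    # Hypothetical sketch: a SLAM map augmented with recognized objects and
    # split into rooms, supporting "retrieve object X from room Y" queries.
    from dataclasses import dataclass, field

    @dataclass
    class ObjectEntry:
        label: str      # e.g. "cup", from the recognition system
        pose: tuple     # (x, y, theta) in the metric map frame

    @dataclass
    class AugmentedMap:
        rooms: dict = field(default_factory=dict)  # room name -> [ObjectEntry]

        def add_object(self, room: str, obj: ObjectEntry) -> None:
            self.rooms.setdefault(room, []).append(obj)

        def find(self, label: str, room: str):
            return [o for o in self.rooms.get(room, []) if o.label == label]

    m = AugmentedMap()
    m.add_object("kitchen", ObjectEntry("cup", (1.2, 0.4, 0.0)))
    print(m.find("cup", "kitchen"))
    ```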

  • 197.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exploiting distinguishable image features in robotic mapping and localization2006In: European Robotics Symposium 2006 / [ed] Christensen, HI, 2006, Vol. 22, p. 143-157Conference paper (Refereed)
    Abstract [en]

    Simultaneous localization and mapping (SLAM) is an important research area in robotics. Lately, systems that use a single bearing-only sensor have received significant attention, and the use of visual sensors has been strongly advocated. In this paper, we present a framework for 3D bearing-only SLAM using a single camera. We concentrate on image feature selection in order to achieve precise localization, and thus good reconstruction in 3D. In addition, we demonstrate how these features can be managed to provide real-time performance and fast matching for detecting loop-closing situations. The proposed vision system has been combined with an extended Kalman filter (EKF) based SLAM method. A number of experiments performed in indoor environments demonstrate the validity and effectiveness of the approach. We also show how the SLAM-generated map can be used for robot localization. The use of distinguishable vision features allows a straightforward solution to the "kidnapped-robot" scenario.
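    For a flavor of the bearing-only measurement model in such an EKF (a generic textbook form, not the paper's exact formulation): a landmark at (lx, ly) observed from planar robot pose (x, y, theta) yields a predicted bearing atan2(ly - y, lx - x) - theta, and the innovation must be wrapped into [-pi, pi).

    ```python
    # Generic bearing-only measurement model for a planar EKF-SLAM sketch
    # (textbook form, not the paper's exact equations).
    import numpy as np

    def bearing_innovation(pose, landmark, z_measured):
        x, y, theta = pose
        lx, ly = landmark
        z_pred = np.arctan2(ly - y, lx - x) - theta
        nu = z_measured - z_pred
        return (nu + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    ```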

  • 198.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Förell, Erik
    Ljunggren, Per
    Field and service applications - Automating the marking process for exhibitions and fairs - The making of Harry Plotter2007In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 14, no 3, p. 35-42Article in journal (Refereed)
    Abstract [en]

    Robot technology is constantly finding new applications. This article presents the design of a system for automating the process of marking the locations of stands in large-scale exhibition spaces. It is a true service robot application with a high level of autonomy, and an excellent example of what mobile robot localization can be used for. The robot system solves a real task, adding value for the customer, and has been in operation at the Stockholm International Fairs since August 2003. It has become an integral part of the standard marking routines. With its help, the time for a standard job has been cut from 8 h with two people to 4 h with one person and one robot. Using more than one robot further increases the gain in productivity.

  • 199.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Gullstrand, Gunnar
    Forell, Erik
    A mobile robot system for automatic floor marking2006In: Journal of Field Robotics, ISSN 1556-4959, Vol. 23, no 6-7, p. 441-459Article in journal (Refereed)
    Abstract [en]

    This paper describes a patent-awarded system for automatically marking the positions of stands for a trade fair or exhibition. The system has been in operation since August 2003 and has been used for every exhibition in the three main exhibition halls at the Stockholm International Fair since then. The system has sped up the marking process significantly: what used to be a job for two men over 8 h now takes one robot monitored by one man 4 h to complete. The operators of the robot are from the same group of people that previously performed the marking task manually. Environmental features are much further away than in most other indoor applications, and even many outdoor applications. Experiments show that many of the problems typically associated with the large beam width of ultrasonic sensors in normal indoor environments manifest themselves here for the laser because of the long range. Reaching the required level of accuracy was only possible by proper modeling of the laser scanner. The system has been evaluated by hand-measuring 680 marked points. To make the integration of the robot system into the overall system as smooth as possible, the robot uses information from the existing computer-aided design (CAD) model of the environment, in combination with a SICK LMS 291 laser scanner, to localize itself. This allows the robot to make use of the same information about changes in the environment as the people administrating the CAD system.

  • 200.
    Jensfelt, Patric
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Gullstrand, Gunnar
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Forell, Erik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A system for automatic marking of floors in very large spaces2006In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, SPRINGER-VERLAG BERLIN: BERLIN , 2006, Vol. 25, p. 93-104Conference paper (Refereed)
    Abstract [en]

    This paper describes a system for automatic marking of floors. Such systems can be used, for example, when marking the positions of stands for a trade fair or exhibition. Achieving high enough accuracy in such an environment, characterized by very large open spaces, is a major challenge: environmental features are much further away than in most other indoor applications, and even many outdoor applications. A SICK LMS 291 laser scanner is used for localization. Experiments show that many of the problems typically associated with the large beam width of ultrasonic sensors in normal indoor environments manifest themselves here for the laser because of the long range. The system presented has been in operation for almost two years to date and has been used for every exhibition in the three main exhibition halls at the Stockholm International Fair. It has sped up the marking process significantly: for example, what used to be a job for two men over eight hours now takes one robot monitored by one man four hours to complete.
