Search results 251 - 300 of 437
  • 251. Kruijff, G.-J. M.
    et al.
    Zender, Hendrik
    Language Technology Lab., DFKI GmbH.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Clarification dialogues in human-augmented mapping. 2006. In: HRI 2006: Proceedings of the 2006 ACM Conference on Human-Robot Interaction, 2006, p. 282-289. Conference paper (Refereed)
    Abstract [en]

    An approach to dialogue-based interaction for resolving ambiguities encountered as part of Human-Augmented Mapping (HAM) is presented. The paper focuses on issues related to spatial organisation and localisation. The dialogue pattern arises naturally as robots are introduced to novel environments. The paper discusses an approach based on the notion of Questions under Discussion (QUD). The presented approach has been implemented on a mobile platform that has dialogue capabilities and methods for metric SLAM. Experimental results from a pilot study clearly demonstrate that the system can resolve problematic situations.

  • 252. Kruijff, G.-J.
    et al.
    Zender, Hendrik
    Language Technology Lab., DFKI GmbH.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Situated dialogue and understanding spatial organization: Knowing what is where and what you can do there. 2006. In: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 2006, p. 328-333. Conference paper (Refereed)
    Abstract [en]

    The paper presents an HRI architecture for human-augmented mapping. Through interaction with a human, the robot can augment its autonomously learnt metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independent Wizard-of-Oz studies. The paper discusses an ontology-based approach to representing and inferring 2.5D spatial organization, and presents how knowledge of spatial organization can be acquired autonomously or through spoken dialogue interaction.

  • 253.
    Kunze, Lars
    et al.
    University of Birmingham.
    Burbridge, Christopher
    University of Birmingham.
    Alberti, Marina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Thippur, Akshaya
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Hawes, Nick
    University of Birmingham.
    Combining Top-down Spatial Reasoning and Bottom-up Object Class Recognition for Scene Understanding. 2014. In: Proc. of 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems 2014, IEEE conference proceedings, 2014, p. 2910-2915. Conference paper (Refereed)
    Abstract [en]

    Many robot perception systems are built to only consider intrinsic object features to recognise the class of an object. By integrating both top-down spatial relational reasoning and bottom-up object class recognition the overall performance of a perception system can be improved. In this paper we present a unified framework that combines a 3D object class recognition system with learned, spatial models of object relations. In robot experiments we show that our combined approach improves the classification results on real world office desks compared to pure bottom-up perception. Hence, by using spatial knowledge during object class recognition perception becomes more efficient and robust and robots can understand scenes more effectively.
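    To make the combination concrete: one simple reading of the above is to rescore each object's bottom-up class log-probabilities with learned pairwise spatial-relation likelihoods. The sketch below illustrates only that general idea; it is not the authors' implementation, and spatial_loglik and the array shapes are assumptions.

```python
# Hypothetical sketch: fusing bottom-up class scores with top-down
# spatial-relation evidence (illustrative, not the paper's code).
import numpy as np

def rescore(bottom_up, spatial_loglik):
    """bottom_up: (n_objects, n_classes) log-probabilities from a
    3D object classifier. spatial_loglik(i, j, ci, cj): assumed
    log-likelihood of the observed spatial relation between objects
    i and j under class hypotheses (ci, cj)."""
    n, k = bottom_up.shape
    scores = bottom_up.copy()
    for i in range(n):
        for ci in range(k):
            for j in range(n):
                if j != i:
                    # add the best-supported relational evidence
                    scores[i, ci] += max(
                        spatial_loglik(i, j, ci, cj) for cj in range(k))
    return scores.argmax(axis=1)  # fused class decision per object
```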

  • 254. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Recent trends in computational and robot vision. 2008. In: Unifying perspectives in computational and robot vision / [ed] Danica Kragic, Ville Kyrki, New York: Springer Science+Business Media B.V., 2008, p. 1-10. Chapter in book (Refereed)
    Abstract [en]

    There are many characteristics in common in computer vision research and vision research in robotics. For example, the Structure-and-Motion problem in vision has its analog of SLAM (Simultaneous Localization and Mapping) in robotics, visual SLAM being one of the current hot topics. Tracking is another area seeing great interest in both communities, in its many variations, such as 2-D and 3-D tracking, single and multi-object tracking, and rigid and deformable object tracking. Other topics of interest for both communities are object and action recognition. Despite these common interests, however, "pure" computer vision has seen significant theoretical and methodological advances during the last decade of which many robotics researchers are not fully aware. On the other hand, the manipulation and control capabilities of robots as well as their range of application areas have developed greatly. In robotics, vision cannot be considered an isolated component; it is instead part of a system resulting in an action. Thus, vision research in robotics should include consideration of the control of the system, in other words, the entire perception-action loop. A holistic system approach would then be useful and could provide significant advances in this application domain.

  • 255. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    Measurement errors in visual servoing. 2006. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 54, no 10, p. 815-827. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the issue of measurement errors in visual servoing. The error characteristics of the vision based state estimation and the associated uncertainty of the control are investigated. The major contribution is the analysis of the propagation of image error through pose estimation and visual servoing control law. Using the analysis, two classical visual servoing methods are evaluated: position-based and 2.5D visual servoing. The evaluation offers a tool to build and analyze hybrid control systems such as switching or partitioning control.
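    The core analysis, propagating image-measurement noise through the pose-estimation map, can be approximated numerically to first order as sigma_pose = J sigma_img J^T. The snippet below is a generic sketch of that propagation under stated assumptions (differentiable map f, small noise); it is not the paper's derivation.

```python
# First-order uncertainty propagation through a pose-estimation map
# (generic sketch; f and sigma_img are placeholders).
import numpy as np

def propagate_covariance(f, x, sigma_img, eps=1e-6):
    """f: maps image measurements (float array) to a pose vector;
    x: nominal measurements; sigma_img: measurement covariance.
    Returns the induced pose covariance J @ sigma_img @ J.T."""
    y0 = np.asarray(f(x), dtype=float)
    J = np.zeros((y0.size, x.size))
    for k in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[k] = eps
        J[:, k] = (np.asarray(f(x + dx)) - y0) / eps  # numerical Jacobian
    return J @ sigma_img @ J.T
```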

  • 256. Kyrki, V.
    et al.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    New shortest-path approaches to visual servoing. 2004. Conference paper (Refereed)
    Abstract [en]

    In recent years, a number of visual servo control algorithms have been proposed. Most approaches try to solve the inherent problems of image-based and position-based servoing by partitioning the control between image and Cartesian spaces. However, partitioning of the control often causes the Cartesian path to become more complex, which might result in operation close to the joint limits. A solution to avoid the joint limits is to use a shortest-path approach, which avoids the limits in most cases. In this paper, two new shortest-path approaches to visual servoing are presented. First, a position-based approach is proposed that guarantees both shortest Cartesian trajectory and object visibility. Then, a variant is presented, which avoids the use of a 3D model of the target object by using homography based partial pose estimation.

  • 257.
    Kyrki, Ville
    et al.
    Lappeenranta University of Technology.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integration of Model-based and Model-free Cues for Visual Object Tracking in 3D. 2005. In: 2005 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2005, p. 1554-1560. Conference paper (Refereed)
    Abstract [en]

    Vision is one of the most powerful sensory modalities in robotics, allowing operation in dynamic environments. One of our long-term research interests is mobile manipulation, where precise location of the target object is commonly required during task execution. Recently, a number of approaches have been proposed for real-time 3D tracking and most of them utilize an edge (wireframe) model of the target. However, the use of an edge model has significant problems in complex scenes due to occlusions and multiple responses, especially in terms of initialization. In this paper, we propose a new tracking method based on integration of model-based cues with automatically generated model-free cues, in order to improve tracking accuracy and to avoid weaknesses of edge based tracking. The integration is performed in a Kalman filter framework that operates in real-time. Experimental evaluation shows that the inclusion of model-free cues offers superior performance.
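    As a rough illustration of the integration step, both cue types can be stacked into a single Kalman measurement update, with the noisier cue down-weighted through its covariance. This is a textbook sketch under assumed direct pose measurements, not the authors' filter design.

```python
# Textbook Kalman update fusing two pose measurements: a model-based
# edge cue and a model-free corner-feature cue (illustrative only).
import numpy as np

def kf_update(x, P, z, R, H):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def fuse_cues(x, P, z_edge, R_edge, z_corner, R_corner):
    n = len(x)
    H = np.vstack([np.eye(n), np.eye(n)])        # both cues observe the pose
    z = np.concatenate([z_edge, z_corner])
    R = np.block([[R_edge, np.zeros((n, n))],
                  [np.zeros((n, n)), R_corner]]) # per-cue noise levels
    return kf_update(x, P, z, R, H)
```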

  • 258.
    Kyrki, Ville
    et al.
    Lappeenranta University of Technology.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Tracking Unobservable Rotations by Cue Integration. 2006. In: 2006 IEEE International Conference on Robotics and Automation (ICRA), 2006, p. 2744-2750. Conference paper (Refereed)
    Abstract [en]

    Model-based object tracking has earned significant importance in areas such as augmented reality, surveillance, visual servoing, robotic object manipulation and grasping. Although an active research area, there are still few systems that perform robustly in realistic settings. The key problems to robust and precise object tracking are outliers caused by occlusion, self-occlusion, cluttered background, and reflections. The two most common solutions to the above problems have been the use of robust estimators and the integration of visual cues. The tracking system considered in this paper achieves robustness by integrating model-based and model-free cues. As model-based cues, we consider a CAD model of the object known a priori and as model-free cues, automatically generated corner features are used. The main idea is to account for relative object motion between consecutive frames using integration of the two cues. The particular contribution of this work is the integration framework where not only polyhedral objects are considered. In particular, we deal with spherical, cylindrical and conical objects for which the complete pose cannot be estimated using only CAD-like models. Using the integration with the model-free features, we show how a full pose estimate can be obtained. Experimental evaluation demonstrates robust system performance in realistic settings with highly textured objects.

  • 259.
    Kyrki, Ville
    et al.
    Lappeenranta University of Technology, Finland.
    Serrano Vicente, Isabel
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Eklundh, Jan-Olof
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Action Recognition and Understanding using Motor Primitives. 2007. In: 2007 RO-MAN: 16th IEEE International Symposium on Robot and Human Interactive Communication, 2007, p. 1113-1118. Conference paper (Refereed)
    Abstract [en]

    We investigate modeling and recognition of arm manipulation actions of different levels of complexity. To model the process, we are using a combination of discriminative support vector machines and generative hidden Markov models. The experimental evaluation, performed with 10 people, investigates both definition and structure of primitive motions as well as the validity of the modeling approach taken.
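    The discriminative/generative split can be pictured as a two-stage pipeline: an SVM labels each frame with a motion primitive, and a per-action sequence model scores the resulting primitive string. The sketch below uses a plain Markov-chain scorer as a stand-in for the paper's HMMs; all names and shapes are assumptions.

```python
# Schematic two-stage action recognizer (not the authors' code):
# SVM maps frame features to primitives; a Markov chain per action
# class scores the primitive sequence.
import numpy as np
from sklearn.svm import SVC

def sequence_loglik(seq, log_pi, log_A):
    """Log-likelihood of a primitive sequence under a Markov chain
    with initial log-probs log_pi and transition log-probs log_A."""
    ll = log_pi[seq[0]]
    for a, b in zip(seq[:-1], seq[1:]):
        ll += log_A[a, b]
    return ll

frame_classifier = SVC()   # fit on (frame_features, primitive_labels)
# primitives = frame_classifier.predict(new_frames)
# action = max(action_models, key=lambda m: sequence_loglik(primitives, *m))
```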

  • 260. Laaksonen, J.
    et al.
    Kyrki, V.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Evaluation of feature representation and machine learning methods in grasp stability learning. 2010. In: 2010 10th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2010, 2010, p. 112-117. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of sensor-based grasping under uncertainty, specifically, the on-line estimation of grasp stability. We show that machine learning approaches can to some extent detect grasp stability from haptic pressure and finger joint information. Using data from both simulations and two real robotic hands, the paper compares different feature representations and machine learning methods to evaluate their performance in determining the grasp stability. A boosting classifier was found to perform the best of the methods tested.

  • 261. Laskey, M.
    et al.
    Mahler, J.
    McCarthy, Z.
    Pokorny, F. T.
    Patil, S.
    Van Den Berg, J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Abbeel, P.
    Goldberg, K.
    Multi-armed bandit models for 2D grasp planning with uncertainty. 2015. In: IEEE International Conference on Automation Science and Engineering, IEEE conference proceedings, 2015, p. 572-579. Conference paper (Refereed)
    Abstract [en]

    For applications such as warehouse order fulfillment, robot grasps must be robust to uncertainty arising from sensing, mechanics, and control. One way to achieve robustness is to evaluate the performance of candidate grasps by sampling perturbations in shape, pose, and gripper approach and to compute the probability of force closure for each candidate to identify a grasp with the highest expected quality. Since evaluating the quality of each grasp is computationally demanding, prior work has turned to cloud computing. To improve computational efficiency and to extend this work, we consider how Multi-Armed Bandit (MAB) models for optimizing decisions can be applied in this context. We formulate robust grasp planning as a MAB problem and evaluate convergence times towards an optimal grasp candidate using 100 object shapes from the Brown Vision 2D Lab Dataset with 1000 grasp candidates per object. We consider the case where shape uncertainty is represented as a Gaussian process implicit surface (GPIS) with Gaussian uncertainty in pose, gripper approach angle, and coefficient of friction. We find that Thompson Sampling and the Gittins index MAB methods converged to within 3% of the optimal grasp up to 10x faster than uniform allocation and 5x faster than iterative pruning.
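    For readers unfamiliar with Thompson sampling in this setting, the sketch below shows the basic loop on a Bernoulli force-closure model with Beta posteriors. It is a simplification: the paper's setup with GPIS shape uncertainty and Gaussian pose/friction perturbations is considerably richer.

```python
# Thompson sampling over grasp candidates with 0/1 force-closure
# outcomes (simplified sketch of the MAB formulation).
import numpy as np

def thompson_best_grasp(sample_outcome, n_grasps, budget, seed=0):
    """sample_outcome(g) -> 0/1 force-closure draw for grasp g under
    perturbed shape, pose and friction (user-supplied simulator)."""
    rng = np.random.default_rng(seed)
    wins = np.ones(n_grasps)           # Beta(1, 1) priors
    losses = np.ones(n_grasps)
    for _ in range(budget):
        theta = rng.beta(wins, losses) # one posterior sample per arm
        g = int(theta.argmax())        # evaluate the most promising grasp
        if sample_outcome(g):
            wins[g] += 1
        else:
            losses[g] += 1
    return int((wins / (wins + losses)).argmax())
```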

  • 262.
    Leon, Beatriz
    et al.
    Universitat Jaume I, Castellon, Spain.
    Ulbrich, Stefan
    Karlsruher Institut für Technologie (KIT) , Institut für Anthropomatik.
    Diankov, Rosen
    School of Computer Science, Carnegie Mellon University, Pittsburgh, USA.
    Puche, Gustavo
    Universitat Jaume I, Castellon, Spain.
    Przybylski, Markus
    Karlsruher Institut für Technologie (KIT) , Institut für Anthropomatik.
    Morales, Antonio
    Universitat Jaume I, Castellon, Spain.
    Asfour, Tamim
    Karlsruher Institut für Technologie (KIT) , Institut für Anthropomatik.
    Moisio, Sami
    Lappeenranta University of Technology, Department of Information Technology, LAPPEENRANTA, Finland .
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kuffner, James
    School of Computer Science, Carnegie Mellon University, Pittsburgh, USA.
    Dillmann, Rüdiger
    Karlsruher Institut für Technologie (KIT) , Institut für Anthropomatik.
    OpenGRASP: A Toolkit for Robot Grasping Simulation. 2010. In: Simulation, Modeling, and Programming for Autonomous Robots: Second International Conference, SIMPAR 2010, Darmstadt, Germany, November 15-18, 2010 / [ed] Ando, Noriaki and Balakirsky, Stephen and Hemker, Thomas and Reggiani, Monica and von Stryk, Oskar, Berlin / Heidelberg: Springer, 2010, p. 109-120. Conference paper (Refereed)
    Abstract [en]

    Simulation is essential for different robotic research fields such as mobile robotics, motion planning and grasp planning. For grasping in particular, there are no software simulation packages that provide a holistic environment able to deal with the variety of aspects associated with this problem. These aspects include the development and testing of new algorithms and the modeling of environments and robots, including the modeling of actuators, sensors and contacts. In this paper, we present a new simulation toolkit for grasping and dexterous manipulation called OpenGRASP, addressing those aspects in addition to extensibility, interoperability and public availability. OpenGRASP is based on a modular architecture that supports the creation and addition of new functionality and the integration of existing and widely-used technologies and standards. In addition, a designated editor has been created for the generation and migration of such models. We demonstrate the current state of OpenGRASP's development and its application in a grasp evaluation environment.

  • 263. Li, W.
    et al.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Orebäck, Anders
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Architecture and its implementation for robots to navigate in unknown indoor environments. 2005. In: Chinese Journal of Mechanical Engineering (English Edition), ISSN 1000-9345, Vol. 18, no 3, p. 366-370. Article in journal (Refereed)
    Abstract [en]

    This paper discusses the design and implementation of an architecture for a mobile robot to navigate in dynamic and unknown indoor environments. The architecture is based on the framework of Open Robot Control Software at KTH (OROCOS@KTH), which is also discussed and evaluated. To navigate indoors efficiently, a new algorithm named door-like-exit detection is proposed, which employs the 2D features of a door and extracts key points of the pathway from the raw data of a laser scanner. As a hybrid architecture, it is decomposed into several basic components, each of which can be classified as either deliberative or reactive. Each component can execute concurrently and communicate with the others. The architecture is extensible and transferable, and its components are reusable.
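    The abstract gives only the outline of door-like-exit detection, so the following is a loose guess at its flavour rather than a reconstruction: scan consecutive laser returns for door-sized free gaps and emit their midpoints as pathway keypoints. The width thresholds are illustrative values, not taken from the paper.

```python
# Loose sketch of finding door-like openings in a 2D laser scan
# (idea only; parameters and method details are assumptions).
import numpy as np

def door_like_exits(ranges, angles, min_w=0.7, max_w=1.2):
    pts = np.column_stack([ranges * np.cos(angles),
                           ranges * np.sin(angles)])
    exits = []
    for i in range(len(pts) - 1):
        gap = np.linalg.norm(pts[i + 1] - pts[i])
        if min_w <= gap <= max_w:                    # doorway-sized gap
            exits.append((pts[i] + pts[i + 1]) / 2)  # pathway keypoint
    return exits
```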

  • 264. Liu, Zhixin
    et al.
    Han, Jing
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    The proportion of leaders needed for the expected consensus. 2011. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 47, no 12, p. 2697-2703. Article in journal (Refereed)
    Abstract [en]

    In order to have a self-organized multi-agent system exhibit some expected collective behavior, it is necessary to add some special agents with information (called leaders) to intervene in the system. A fundamental question is then: how many such leaders are needed? Naturally the answer depends on the model to be studied. In this paper a typical model proposed by Vicsek et al. is used to answer the question. By estimating the characteristics concerning the initial states of all agents and analyzing the system dynamics, we provide lower bounds on the ratio of leaders needed to guarantee the expected consensus.
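    To fix ideas, a toy version of the underlying Vicsek-style dynamics is sketched below: ordinary agents average their neighbours' headings, while leaders hold the desired heading. The neighbourhood radius, speed and update rule are standard textbook choices, not the paper's exact model or bounds.

```python
# Toy leader-follower Vicsek-style update (illustrative only).
import numpy as np

def step(pos, theta, leaders, theta_star, r=1.0, v=0.05):
    """pos: (n, 2) positions; theta: (n,) headings; leaders: index
    array of informed agents; theta_star: desired heading."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    nbr = d < r                                   # neighbours incl. self
    new = np.arctan2((nbr * np.sin(theta)).sum(1),
                     (nbr * np.cos(theta)).sum(1))
    new[leaders] = theta_star                     # leaders keep the goal
    pos = pos + v * np.column_stack([np.cos(new), np.sin(new)])
    return pos, new
```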

  • 265. Lopez-Nicolas, G.
    et al.
    Sagues, C.
    Guerrero, J. J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Switching visual control based on epipoles for mobile robots. 2008. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 56, no 7, p. 592-603. Article in journal (Refereed)
    Abstract [en]

    In this paper, we present a visual control approach consisting of a switching control scheme based on the epipolar geometry. The method facilitates a classical teach-by-showing approach where a reference image is used to control the robot to the desired pose (position and orientation). As a result of our proposal, a mobile robot carries out a smooth trajectory towards the target and the epipolar geometry model is used throughout the whole motion. The control scheme developed considers the motion constraints of the mobile platform in a framework based on the epipolar geometry that does not rely on artificial markers or specific models of the environment. The proposed method is designed to cope with the degenerate estimation case of the epipolar geometry with short baseline. Experimental evaluation has been performed in realistic indoor and outdoor settings.

  • 266. Lundberg, C.
    et al.
    Reinhold, Roger
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, H. I.
    Evaluation of robot deployment in live missions with the military, police, and fire brigade. 2007. In: Sensors, and Command, Control, Communications and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense VI, SPIE - International Society for Optical Engineering, 2007, p. 65380R. Conference paper (Refereed)
    Abstract [en]

    Robots have been successfully deployed within bomb squads all over the world for decades. Recent technical improvements are increasing the prospects of achieving the same benefits for other high-risk professions as well. As the number of applications increases, issues of collaboration and coordination come into question. Can several groups deploy the same type of robot? Can they deploy the same methods? Can resources be shared? What characterizes the different applications? What are the similarities and differences between different groups? This paper reports on a study of four areas in which robots are already, or are about to be, deployed: Military Operations in Urban Terrain (MOUT), Military and Police Explosive Ordnance Disposal (EOD), Military Chemical Biological Radiological Nuclear contamination control (CBRN), and Fire Fighting (FF). The aim of the study has been to achieve a general overview across the four areas to survey and compare their similarities and differences. It has also been investigated to what extent it is possible for them to deploy the same type of robot. It was found that the groups share many requirements, but that they also have a few individual hard constraints. A comparison across the groups showed the demands of man-portability, ability to access narrow premises, and ability to handle objects of different weight to be decisive; two or three different sizes of robots will be needed to satisfy the needs of the four areas.

  • 267. Lundberg, C.
    et al.
    Reinhold, Roger
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, H. I.
    Results from a long term study of a portable field robot in urban terrain. 2007. In: Unmanned Systems Technology IX, SPIE - International Society for Optical Engineering, 2007, p. 65610S. Conference paper (Refereed)
    Abstract [en]

    The military have a considerable amount of experience from using robots for mine clearing and bomb removal. As new technology emerges it is necessary to investigate the possibility of expanding robot use. This study has investigated an Army company, specialized in urban operations, while it fulfilled its tasks with the support of a PackBot Scout. The robot was integrated and deployed as an ordinary component of the company, which included modifying and retraining a number of standard behaviors to incorporate the robot. This paper reports on the following issues: evaluation of missions where the platform can be deployed, which technical improvements are the most desired, and which new risks are introduced by the use of robots. Information was gathered through observation, interviews, and a questionnaire. The results indicate the robot to be useful for reconnaissance and mapping. The users also anticipated that the robot could be used to decrease the risks of IEDs by either triggering or neutralising them with a disruptor. The robot was further considered to be useful for direct combat if armed, and for placing explosive loads against, for example, a door. Autonomous rendering of maps, acquiring images, two-way audio, and improved sensing such as IR were considered important improvements. The robot slowing down the pace of the unit was considered to be the main risk when used in urban operations.

  • 268.
    Lundberg, Carl
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hedström, Andreas
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    The use of robots in harsh and unstructured field applications. 2005. In: 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), New York, NY: IEEE, 2005, p. 143-150. Conference paper (Refereed)
    Abstract [en]

    Robots have the potential to be a significant aid in high-risk, unstructured and stressful situations such as those experienced by police, fire brigades, rescue workers and the military. In this project we have explored the abilities of today's robot technology in the mentioned fields. This was done by studying the users, identifying scenarios where a robot could be used, and implementing a robot system for these cases. We have concluded that highly portable field robots are emerging as an available technology, but that human-robot interaction is currently a major limiting factor of today's systems. Further, we have found that operational protocols, stating how to use the robots, have to be designed in order to make robots an effective tool in harsh and unstructured field environments.

  • 269.
    Lundberg, Carl
    et al.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Evaluation of mapping with a tele-operated robot with video feedback. 2006. In: Proc. IEEE Int. Workshop Robot Human Interact. Commun., 2006, p. 164-170. Conference paper (Refereed)
    Abstract [en]

    This research has examined robot operators' abilities to gain situational awareness while performing teleoperation with video feedback. The research included a user study in which 20 test persons explored and drew a map of a corridor and several rooms, which they had not visited before. Half of the participants did the exploration and mapping using a teleoperated robot (iRobot PackBot) with video feedback but without being able to see or enter the exploration area themselves. The other half fulfilled the task manually by walking through the premises. The two groups were evaluated regarding time consumption, and the rendered maps were evaluated concerning error rate and dimensional and logical accuracy. Dimensional accuracy describes the test person's ability to estimate and reproduce dimensions in the map. Logical accuracy refers to missed, added, misinterpreted, reversed and inconsistent objects or shapes in the depiction. The evaluation showed that fulfilling the task with the robot on average took 96% longer and rendered 44% more errors than doing it without the robot. Robot users overestimated dimensions by an average of 16% while non-robot users made an average overestimation of 1%. Further, the robot users had a 69% larger standard deviation in their dimensional estimations and on average made 23% more logical errors during the test.

  • 270. Luo, J.
    et al.
    Pronobis, Andrzej
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Caputo, B.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Incremental learning for place recognition in dynamic environments. 2007. In: Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, IEEE, 2007, p. 721-728. Conference paper (Refereed)
    Abstract [en]

    Vision-based place recognition is a desirable feature for an autonomous mobile system. In order to work in realistic scenarios, visual recognition algorithms should be adaptive, i.e. they should be able to learn from experience and adapt continuously to changes in the environment. This paper presents a discriminative incremental learning approach to place recognition. We use a recently introduced version of the incremental SVM, which makes it possible to control the memory requirements as the system updates its internal representation. At the same time, it preserves the recognition performance of the batch algorithm. In order to assess the method, we acquired a database capturing the intrinsic variability of places over time. Extensive experiments show the power and the potential of the approach.
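    The memory-controlled incremental SVM itself is specialized, but the surrounding training loop is easy to picture. As a simple stand-in, the sketch below updates a linear hinge-loss classifier online with scikit-learn's partial_fit; this is explicitly not the authors' algorithm.

```python
# Online place-recognition update loop with a linear stand-in for
# the paper's memory-controlled incremental SVM.
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1, 2])      # e.g. corridor, office, kitchen
clf = SGDClassifier(loss="hinge")  # linear SVM-style objective

def update(batch_features, batch_labels):
    """Incorporate a new batch of labeled frames without retraining
    on all past data."""
    clf.partial_fit(batch_features, batch_labels, classes=classes)
```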

  • 271. López-Nicolás, G
    et al.
    Sagüés, C.
    Guerrero, J.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nonholonomic epipolar visual servoing. 2006. In: 2006 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2006, p. 2378-2384. Conference paper (Refereed)
    Abstract [en]

    A significant amount of work has been reported in the area of visual servoing during the last decade. However, most of the contributions are applied in cases of holonomic robots. More recently, the use of visual feedback for control of nonholonomic vehicles has been reported. Some of the examples are docking and parallel parking maneuvers of cars or vision-based stabilization of a mobile manipulator to a desired pose with respect to a target of interest. Still, many of the approaches are mostly concerned with the control part of the visual servoing loop, considering very simple vision algorithms based on artificial markers. In this paper, we present an approach for nonholonomic visual servoing based on epipolar geometry. The method facilitates a classical teach-by-showing approach where a reference image is used to define the desired pose (position and orientation) of the robot. The major contribution of the paper is the design of the control law that considers the nonholonomic constraints of the robot as well as the robust feature detection and matching process based on scale and rotation invariant image features. An extensive experimental evaluation has been performed in a realistic indoor setting and the results are summarized in the paper.

  • 272.
    Madry, Marianna
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hang, Kaiyu
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Improving Generalization for 3D Object Categorization with Global Structure Histograms. 2012. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE conference proceedings, 2012, p. 1379-1386. Conference paper (Refereed)
    Abstract [en]

    We propose a new object descriptor for three dimensional data named the Global Structure Histogram (GSH). The GSH encodes the structure of a local feature response on a coarse global scale, providing a beneficial trade-off between generalization and discrimination. Encoding the structural characteristics of an object allows us to retain low local variations while keeping the benefit of global representativeness. In an extensive experimental evaluation, we applied the framework to category-based object classification in realistic scenarios. We show results obtained by combining the GSH with several different local shape representations, and we demonstrate significant improvements to other state-of-the-art global descriptors.

  • 273.
    Madry, Marianna
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Maboudi Afkham, Heydar
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Carlsson, Stefan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Extracting essential local object characteristics for 3D object categorization. 2013. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE conference proceedings, 2013, p. 2240-2247. Conference paper (Refereed)
    Abstract [en]

    Most object classes share a considerable amount of local appearance and often only a small number of features are discriminative. The traditional approach to represent an object is based on a summarization of the local characteristics by counting the number of feature occurrences. In this paper we propose the use of a recently developed technique for summarizations that, rather than looking into the quantity of features, encodes their quality to learn a description of an object. Our approach is based on extracting and aggregating only the essential characteristics of an object class for a task. We show how the proposed method significantly improves on previous work in 3D object categorization. We discuss the benefits of the method in other scenarios such as robot grasping. We provide extensive quantitative and qualitative experiments comparing our approach to the state of the art to justify the described approach.

  • 274.
    Madry, Marianna
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Song, Dan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    From object categories to grasp transfer using probabilistic reasoning. 2012. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2012, p. 1716-1723. Conference paper (Refereed)
    Abstract [en]

    In this paper we address the problem of grasp generation and grasp transfer between objects using categorical knowledge. The system is built upon i) an active scene segmentation module, capable of generating object hypotheses and segmenting them from the background in real time, ii) an object categorization system using integration of 2D and 3D cues, and iii) a probabilistic grasp reasoning system. Individual object hypotheses are first generated, categorized and then used as the input to a grasp generation and transfer system that encodes task, object and action properties. The experimental evaluation compares individual 2D and 3D categorization approaches with the integrated system, and it demonstrates the usefulness of the categorization in task-based grasping and grasp transfer.

  • 275.
    Markdahl, Johan
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Hoppe, Jens
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematics (Div.).
    Wang, Lin
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Exact solutions to the closed loop kinematics of an almost globally stabilizing feedback law on SO(3). 2012. In: 2012 IEEE 51st Annual Conference on Decision and Control (CDC), IEEE, 2012, p. 2274-2279. Conference paper (Refereed)
    Abstract [en]

    We propose a kinematic control law that solves the problem of stabilizing the attitude of a fully actuated rigid body to a desired rest attitude. The control law is designed on the special orthogonal group SO(3), thereby avoiding complications due to the representational singularities of local parametrizations and the unwinding phenomenon associated with global many-to-one parametrizations. We prove almost global stability, i.e., asymptotic stability from all initial conditions except for a set of measure zero. The proposed control law decouples the closed loop kinematics, allowing us to solve the state equations exactly for the rigid body attitude as a function of time, the initial conditions, and two gain parameters. The exact solutions provide an understanding of the transient behaviour of the system and can, e.g., be used to tune the gain parameters. The geometric flavor of these ideas is illustrated by simulation.
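    For orientation, a standard kinematic attitude law on SO(3) of the kind this paper builds on is sketched below: with attitude error Re = Rd^T R, command the body angular velocity from the skew-symmetric part of Re. The paper's specific two-gain decoupled law and its exact closed-loop solutions are not reproduced here.

```python
# Standard kinematic attitude law on SO(3) (for context; not the
# paper's decoupled control law).
import numpy as np

def vee(S):
    """Inverse of the hat map: skew-symmetric matrix -> R^3."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def attitude_rate(R, Rd, k=1.0):
    """Body angular velocity driving R toward the desired Rd."""
    Re = Rd.T @ R                       # attitude error on SO(3)
    return -k * vee((Re - Re.T) / 2.0)  # proportional to skew part
```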

  • 276.
    Martinez, David
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Modeling and recognition of actions through motor primitives. 2008. In: 2008 IEEE International Conference on Robotics and Automation: Vols 1-9, 2008, p. 1704-1709. Conference paper (Refereed)
    Abstract [en]

    We investigate modeling and recognition of object manipulation actions for the purpose of imitation based learning in robotics. To model the process, we are using a combination of discriminative (support vector machines, conditional random fields) and generative approaches (hidden Markov models). We examine the hypothesis that complex actions can be represented as a sequence of motion or action primitives. The experimental evaluation, performed with five object manipulation actions and 10 people, investigates the modeling approach of the primitive action structure and compares the performance of the considered generative and discriminative models.

  • 277.
    Marzinotto, Alejandro
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Towards a Unified Behavior Trees Framework for Robot Control. 2014. In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE Robotics and Automation Society, 2014, p. 5420-5427. Conference paper (Refereed)
    Abstract [en]

    This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.
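    The two core BT compositions the framework formalizes have simple operational semantics, sketched below: a Sequence returns the first non-Success status among its children, a Fallback the first non-Failure. This minimal rendering follows the common definitions, not the authors' library.

```python
# Minimal Behavior Tree node semantics (common formulation).
SUCCESS, FAILURE, RUNNING = "S", "F", "R"

class Sequence:
    """Ticks children left to right; fails or runs as soon as a
    child does, succeeds only if all succeed."""
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != SUCCESS:
                return s
        return SUCCESS

class Fallback:
    """Ticks children left to right; succeeds or runs as soon as a
    child does, fails only if all fail."""
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != FAILURE:
                return s
        return FAILURE
```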

  • 278.
    Mitsunaga, Noriaki
    et al.
    Osaka Kyoiku University.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Osaka University.
    Hagita, Norihiro
    Advanced Telecommunications Research International.
    Adapting Nonverbal Behavior Parameters to be Preferred by Individuals. 2012. In: Human-Robot Interaction in Social Robotics / [ed] Takayuki Kanda and Hiroshi Ishiguro, Boca Raton, FL, USA: CRC Press, 2012, 1, p. 312-324. Chapter in book (Other academic)
  • 279. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Adapting robot behavior for human-robot interaction. 2008. In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no 4, p. 911-916. Article in journal (Refereed)
    Abstract [en]

    Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human-robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interactions, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just like a human would. However, most previous research works expected the human to consciously give feedback, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study with a humanoid robot that has ten interaction behaviors has been conducted. The results from 12 subjects suggest that the proposed mechanism enables autonomous adaptation to individual preferences. A detailed discussion and conclusions are also presented.

  • 280. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Robot Behavior Adaptation for Human-Robot Interaction based on Policy Gradient Reinforcement Learning. 2005. In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), IEEE, 2005, p. 1594-1601. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an adaptation mechanism for robot behaviors to make robot-human interactions run more smoothly. We propose such a mechanism based on reinforcement learning, which reads minute body signals from a human partner, and uses this information to adjust interaction distances, gaze-meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences by an experiment with twelve subjects.
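    The policy-gradient flavour used here can be sketched as a finite-difference estimate over the interaction parameters (distance, gaze-meeting time, speed, timing): perturb the parameter vector, measure the comfort-based reward, regress a gradient, and step. All constants below are illustrative, and the reward signal is assumed to be supplied by the body-signal reader.

```python
# Finite-difference policy-gradient step over interaction
# parameters (illustrative sketch of the adaptation loop).
import numpy as np

def fd_policy_gradient(reward, theta, step=0.05, n_perturb=8, lr=0.1):
    """reward(theta) -> scalar comfort estimate for one episode."""
    rng = np.random.default_rng()
    D = rng.choice([-1.0, 0.0, 1.0], size=(n_perturb, theta.size))
    R = np.array([reward(theta + step * d) for d in D])
    # least-squares estimate of the local reward gradient
    grad, *_ = np.linalg.lstsq(step * D, R - R.mean(), rcond=None)
    return theta + lr * grad
```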

  • 281. Mozos, O.M.
    et al.
    Triebel, R.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Rottmann, A.
    Burgard, W.
    Supervised semantic labeling of places using information extracted from sensor data. 2007. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 55, no 5, p. 391-402. Article in journal (Refereed)
    Abstract [en]

    Indoor environments can typically be divided into places with different functionalities like corridors, rooms or doorways. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment, facilitating interaction with humans. As an example, natural language terms like "corridor" or "room" can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from sensor range data into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. In this case we additionally use objects extracted from images as features. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation method. Alternatively, we apply associative Markov networks to classify geometric maps and compare the results with the relaxation approach. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.

  • 282. Nalpantidis, L.
    et al.
    Kragic Jensfelt, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kostavelis, I.
    Gasteratos, A.
    Theta-disparity: An efficient representation of the 3D scene structure. 2015. In: 13th International Conference on Intelligent Autonomous Systems, IAS 2014, Springer, 2015, Vol. 302, p. 795-806. Conference paper (Refereed)
    Abstract [en]

    We propose a new representation of 3D scene structure, named theta-disparity. The proposed representation is a 2D angular depth histogram that is calculated using a disparity map. It models the structure of the prominent objects in the scene and reveals their radial distribution relative to a point of interest. The proposed representation is analyzed and used as a basic attention mechanism to autonomously resolve two different robotic scenarios. The method is efficient due to its low computational complexity. We show that the method can be successfully used for the planning of different tasks in the industrial and service robotics domains, e.g., object grasping, manipulation, plane extraction, path detection, and obstacle avoidance.
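    A plausible reading of the descriptor (the details below are assumptions, since the abstract only outlines it) is a 2D histogram over, per pixel, the angle relative to the point of interest and the disparity value:

```python
# Assumed sketch of a theta-disparity style histogram.
import numpy as np

def theta_disparity(disp, poi, n_theta=90, n_disp=32):
    """disp: (h, w) disparity map; poi: (x, y) point of interest.
    Returns an (n_theta, n_disp) angle-vs-disparity histogram."""
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.arctan2(ys - poi[1], xs - poi[0])
    valid = disp > 0                      # ignore invalid matches
    H, _, _ = np.histogram2d(theta[valid], disp[valid],
                             bins=[n_theta, n_disp])
    return H
```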

  • 283.
    Nalpantidis, Lazaros
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    YES - YEt another object Segmentation: exploiting camera movement. 2012. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, p. 2116-2121. Conference paper (Refereed)
    Abstract [en]

    We address the problem of object segmentation in image sequences where no a priori knowledge of objects is assumed. We take advantage of robots' ability to move, gathering multiple images of the scene. Our approach starts by extracting edges, uses a polar domain representation and performs integration over time based on a simple dilation operation. The proposed system can be used for providing reliable initial segmentation of unknown objects in scenes of varying complexity, allowing for recognition, categorization or physical interaction with the objects. The experimental evaluation on both self-captured and a publicly available dataset shows the efficiency and stability of the proposed method.

  • 284.
    Nalpantidis, Lazaros
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kostavelis, I.
    Gasteratos, A.
    Intelligent stereo vision in autonomous robot traversability estimation. 2012. In: Robotic Vision: Technologies for Machine Learning and Vision Applications, IGI Global, 2012, p. 193-209. Chapter in book (Refereed)
    Abstract [en]

    Traversability estimation is the process of assessing whether a robot is able to move across a specific area. Autonomous robots need to have such an ability to automatically detect and avoid non-traversable areas and, thus, stereo vision is commonly used towards this end constituting a reliable solution under a variety of circumstances. This chapter discusses two different intelligent approaches to assess the traversability of the terrain in front of a stereo vision-equipped robot. First, an approach based on a fuzzy inference system is examined and then another approach is considered, which extracts geometrical descriptions of the scene depth distribution and uses a trained support vector machine (SVM) to assess the traversability. The two methods are presented and discussed in detail.

  • 285.
    Nikou, Alexandros
    et al.
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Heshmati-alamdari, Shahab
    Verginis, Christos K.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Decentralized Abstractions and Timed Constrained Planning of a General Class of Coupled Multi-Agent Systems2017In: 2017 IEEE 56th Annual Conference on Decision and Control, CDC 2017, IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    This paper presents a fully automated procedure for controller synthesis for a general class of multi-agent systems under coupling constraints. Each agent is modeled with dynamics consisting of two terms: the first models the coupling constraints and the second is an additional bounded control input. We aim to design these inputs so that each agent meets an individual high-level specification given as a Metric Interval Temporal Logic (MITL) formula. Furthermore, the connectivity of the initially connected agents is required to be maintained. First, assuming a polyhedral partition of the workspace, we design a novel decentralized abstraction that provides controllers guaranteeing the transition of each agent between different regions. The controllers are the solution of a Decentralized Robust Optimal Control Problem (DROCP) for each agent. Second, by utilizing techniques from formal verification, an algorithm that computes the individual runs which provably satisfy the high-level tasks is provided.
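
    To make the abstraction step concrete, here is a toy sketch in which a grid (rather than a general polyhedral) partition of the workspace is turned into a finite transition system; each edge stands for a region-to-region transition that the low-level controllers (the DROCP solutions in the paper) are assumed to realize.

        def grid_transition_system(nx, ny):
            # Regions are grid cells; transitions connect 4-neighbors.
            regions = [(i, j) for i in range(nx) for j in range(ny)]
            transitions = {}
            for (i, j) in regions:
                neigh = [(i + di, j + dj)
                         for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                transitions[(i, j)] = [(a, b) for (a, b) in neigh
                                       if 0 <= a < nx and 0 <= b < ny]
            return regions, transitions

        regions, T = grid_transition_system(3, 3)
        print(T[(0, 0)])   # [(1, 0), (0, 1)] -- reachable neighbor regions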

  • 286.
    Nikou, Alexandros
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Tumova, Jana
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Probabilistic Plan Synthesis for Coupled Multi-Agent Systems2017Conference paper (Refereed)
  • 287.
    Nikou, Alexandros
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Tumova, Jana
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Probabilistic Plan Synthesis for Coupled Multi-Agent Systems2017In: IFAC-PapersOnLine, ISSN 2405-8963, Vol. 50, no 1, p. 10766-10771Article in journal (Refereed)
    Abstract [en]

    This paper presents a fully automated procedure for controller synthesis for multi-agent systems in the presence of uncertainties. We model the motion of each of the N agents in the environment as a Markov Decision Process (MDP) and assign to each agent an individual high-level formula given in Probabilistic Computational Tree Logic (PCTL). Each agent may need to collaborate with other agents in order to achieve a task. The collaboration is imposed by sharing actions between the agents. We aim to design local control policies such that each agent satisfies its individual PCTL formula. The proposed algorithm builds on clustering the agents, constructing MDP products, and designing control policies. We show that our approach has better computational complexity than the centralized case, which traditionally suffers from very high computational demands.
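
    A toy sketch of the shared-action product construction follows; MDPs are encoded as {state: {action: [(next_state, prob), ...]}}, and the encoding and function name are assumptions for illustration, not the paper's formalism.

        from itertools import product

        def shared_action_product(mdp_a, mdp_b, shared):
            # Actions in `shared` must be taken jointly by both agents;
            # all other actions interleave independently.
            prod = {}
            for sa, sb in product(mdp_a, mdp_b):
                prod[(sa, sb)] = {}
                for act, succ in mdp_a[sa].items():
                    if act in shared:
                        if act in mdp_b[sb]:     # joint action: both move
                            prod[(sa, sb)][act] = [
                                ((na, nb), pa * pb)
                                for na, pa in succ
                                for nb, pb in mdp_b[sb][act]]
                    else:                        # local action of agent A
                        prod[(sa, sb)][("A", act)] = [((na, sb), pa)
                                                      for na, pa in succ]
                for act, succ in mdp_b[sb].items():
                    if act not in shared:        # local action of agent B
                        prod[(sa, sb)][("B", act)] = [((sa, nb), pb)
                                                      for nb, pb in succ]
            return prod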

  • 288.
    Nikou, Alexandros
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Verginis, Christos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Robust Distance-Based Formation Control of Multiple Rigid Bodies with Orientation Alignment2017Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of distance- and orientation-based formation control of a class of second-order nonlinear multi-agent systems in 3D space, under static and undirected communication topologies. More specifically, we design a decentralized model-free control protocol in the sense that each agent uses only local information from its neighbors to calculate its own control signal, without incorporating any knowledge of the model nonlinearities and exogenous disturbances. Moreover, the transient and steady-state response is solely determined by certain designer-specified performance functions and is fully decoupled from the agents' dynamic model, the control gain selection, the underlying graph topology, as well as the initial conditions. Additionally, by introducing certain inter-agent distance constraints, we guarantee collision avoidance and connectivity maintenance between neighboring agents. Finally, simulation results verify the performance of the proposed controllers.
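
    For readers unfamiliar with prescribed-performance designs, a commonly used exponential performance envelope has the following form (illustrative; the paper's exact performance functions may differ):

        \rho(t) = (\rho_0 - \rho_\infty)\, e^{-\lambda t} + \rho_\infty,
        \qquad \rho_0 > \rho_\infty > 0,\ \lambda > 0,

    where the tracking error e(t) is kept inside the shrinking funnel |e(t)| < \rho(t), so that \rho_0 bounds the transient overshoot and \rho_\infty the steady-state error.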

  • 289. Okamura, Allison M.
    et al.
    Mataric, Maja J.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis, NA.
    Medical and Health-Care Robotics Achievements and Opportunities2010In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 17, no 3, p. 26-37Article in journal (Refereed)
  • 290.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of an office-guide robot for social interaction studies2006In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE , 2006, p. 4965-4970Conference paper (Refereed)
    Abstract [en]

    In this paper, the design of an office-guide robot for social interaction studies is presented. We are interested in studying the impact of passage behaviours in casual encounters. While the system offers assistance in locating the appropriate office that a visitor wants to reach, it is expected to engage in a passing behaviour that allows free passage for other persons it may encounter. Such an approach makes it possible to study the effect of social interaction in a situation that is much more natural than out-of-context user studies. The system was tested in an early evaluation phase, during which it operated for almost 7 hours. A total of 64 interactions with people were registered and 13 passage behaviours were performed, leading to the conclusion that this framework can be successfully used for the evaluation of passing behaviours in natural contexts of operation.

  • 291.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Embodied social interaction for service robots in hallway environments2006In: Field and Service Robotics / [ed] Corke, P; Sukkarieh, S, BERLIN: SPRINGER-VERLAG BERLIN , 2006, Vol. 25, p. 293-304Conference paper (Refereed)
    Abstract [en]

    A key aspect of service robotics for everyday use is motion in close proximity to humans. It is essential that the robot exhibit a behavior that signals safety of motion and awareness of the persons in the environment. To achieve this, there is a need to define control strategies that are perceived as socially acceptable by users who are not familiar with robots. In this paper, a system for navigation in a hallway is presented, in which the rules of proxemics are used to define the interaction strategies. The experimental results show how the system contributes to establishing effective spatial interaction patterns between the robot and a person.
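
    A minimal sketch of how proxemics rules can be mapped to a lateral-offset decision appears below; the zone thresholds are the standard Hall distances, but the offset values are illustrative assumptions, not the paper's tuned parameters.

        def lateral_shift(person_distance_m):
            # Map Hall's proxemics zones to a lateral offset (meters)
            # the robot adds to its hallway path.
            if person_distance_m < 0.45:    # intimate zone: maximal avoidance
                return 0.8
            elif person_distance_m < 1.2:   # personal zone
                return 0.5
            elif person_distance_m < 3.6:   # social zone: signal intent early
                return 0.3
            return 0.0                      # public zone: no adjustment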

  • 292.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Human-robot embodied interaction in hallway settings: a pilot user study2005In: 2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), 2005, p. 164-171Conference paper (Refereed)
    Abstract [en]

    This paper explores the problem of embodied interaction between a service robot and a person in a hallway setting. For operation in environments with people that have limited experience with robots, a behaviour that signals awareness of the persons and safety of motion is essential. A control strategy based on human spatial behaviour studies is presented that adopts human-robot interaction patterns similar to those used in person-person encounters. The results of a pilot study with human subjects are presented in which the users have evaluated the acceptability of the robot behaviour patterns during passage, with respect to three basic parameters: the robot speed, the signaling distance at which the robot starts the maneuver and the lateral distance from the person for safe passage. The study has shown a good overall user response and has provided some useful indications on how to design a hallway passage behaviour that could be most acceptable to human users.

  • 293.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Evaluation of passing distance for social robots2006In: RO-MAN 2006: The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006, p. 315-320Conference paper (Refereed)
    Abstract [en]

    Casual encounters with mobile robots can be a challenge for non-experts due to the lack of an interaction model. The present work builds on the rules of proxemics, which are used to design a passing strategy. In narrow corridors the lateral distance of passage is a key parameter to consider. An implemented system has been used in a small study to verify the basic parametric design of such a system. In total, 10 subjects evaluated variations in proxemics for encounters with a robot in a corridor setting. The user feedback indicates that entering the intimate sphere of people is less comfortable, while an overly wide avoidance is considered unnecessary. Adequate signaling of avoidance is a behaviour that must be carefully tuned.

  • 294.
    Pacchierotti, Elena
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik Iskov
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Tasking everyday interaction2007In: Autonomous navigation in dynamic environments / [ed] Laugier,Chatila; Raja Chatila, Springer, 2007, p. 151-168Chapter in book (Refereed)
    Abstract [en]

    An important problem in the design of mobile robot systems for operation in natural environments and everyday tasks is the safe handling of encounters with people. Person-to-person encounters follow certain social rules that allow co-existence even in cramped spaces. These social rules are often described according to the classification termed proxemics. In this paper we present an analysis of how physical interaction with people can be modelled using the rules of proxemics, and discuss how the rules of embodied feedback generation can simplify the interaction with novice users. We also provide some guidelines for the design of a control architecture for a mobile robot moving among people. The concepts presented are illustrated by a number of real experiments that verify the overall approach to the design of systems for navigation in human-populated environments.

  • 295. Parasuraman, Ramviyas
    et al.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Båberg, Fredrik
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Neerincx, Mark
    A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings2017In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no 3, p. 48-70Article in journal (Refereed)
    Abstract [en]

    A reliable wireless connection between the operator and the teleoperated unmanned ground vehicle (UGV) is critical in many urban search and rescue (USAR) missions. Unfortunately, as was seen in, for example, the Fukushima nuclear disaster, the networks available in areas where USAR missions take place are often severely limited in range and coverage. Therefore, during mission execution, the operator needs to keep track of not only the physical parts of the mission, such as navigating through an area or searching for victims, but also the variations in network connectivity across the environment. In this paper, we propose and evaluate a new teleoperation user interface (UI) that includes a way of estimating the direction of arrival (DoA) of the radio signal strength (RSS) and integrating the DoA information in the interface. The evaluation shows that using the interface results in more objects found and fewer missions aborted due to connectivity problems, as compared to a standard interface. The proposed interface is an extension of an existing interface centered on the video stream captured by the UGV: instead of just showing the network signal strength as a percentage and a set of bars, the additional DoA information is displayed as a color bar surrounding the video feed. With this information, the operator knows which movement directions are safe, even when moving in regions close to the connectivity threshold.
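
    One simple way to estimate such a DoA from RSS samples is a local linear fit over recent robot positions, with the fitted gradient pointing toward the stronger signal; this sketch is an assumption about how a DoA estimator could work, not the paper's exact method.

        import numpy as np

        def rss_doa_estimate(positions, rss_values):
            # Fit rss ~ g . p + c over recent (x, y) positions; the
            # least-squares gradient g points toward increasing signal.
            P = np.column_stack([positions, np.ones(len(positions))])
            g = np.linalg.lstsq(P, rss_values, rcond=None)[0][:2]
            return np.arctan2(g[1], g[0])   # bearing (rad) of strongest RSS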

  • 296.
    Pauwels, Karl
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    SimTrack: A Simulation-based Framework for Scalable Real-time Object Pose Detection and Tracking2015In: 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), IEEE , 2015, p. 1300-1307Conference paper (Refereed)
    Abstract [en]

    We propose a novel approach for real-time object pose detection and tracking that is highly scalable in terms of the number of objects tracked and the number of cameras observing the scene. Key to this scalability is a high degree of parallelism in the algorithms employed. The method maintains a single 3D simulated model of the scene, consisting of multiple objects together with a robot operating on them. This allows for rapid synthesis of appearance, depth, and occlusion information from each camera viewpoint. This information is used both for updating the pose estimates and for extracting the low-level visual cues. The visual cues obtained from each camera are efficiently fused back into the single consistent scene representation using a constrained optimization method. The centralized scene representation, together with the reliability measures it enables, simplifies the interaction between pose tracking and pose detection across multiple cameras. We demonstrate the robustness of our approach in a realistic manipulation scenario. We publicly release this work as part of a general ROS software framework for real-time pose estimation, SimTrack, which can easily be integrated into different robotic applications.
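
    The fusion step can be illustrated with a toy stand-in in which per-camera pose estimates are combined by reliability-weighted averaging; this is placeholder math for exposition, not the constrained optimizer or API of SimTrack.

        import numpy as np

        def fuse_pose_estimates(estimates, weights):
            # estimates: (n_cams, 6) poses (x, y, z, roll, pitch, yaw);
            # weights: per-camera reliability, e.g. from visibility/occlusion.
            W = np.asarray(weights, dtype=float)
            E = np.asarray(estimates, dtype=float)
            return (W[:, None] * E).sum(axis=0) / W.sum()

        pose = fuse_pose_estimates([[0.10, 0.0, 0.50, 0.0, 0.0, 0.02],
                                    [0.12, 0.0, 0.48, 0.0, 0.0, 0.00]],
                                   [0.7, 0.3])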

  • 297.
    Pedreira Carabel, Carlos Javier
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Terrain Mapping for Autonomous Vehicles2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Autonomous vehicles have moved to the forefront of the automotive industry in the quest for safer and more efficient transportation systems. One of the main issues for every autonomous vehicle is being aware of its position and of the presence of obstacles along its path. The current project addresses the pose and terrain mapping problem by integrating a visual odometry method and a mapping technique. An RGB-D camera, the Kinect v2 from Microsoft, was chosen as the sensor for capturing information from the environment. It was connected to an Intel mini-PC for real-time processing. Both pieces of hardware were mounted on board a four-wheeled research concept vehicle (RCV) to test the feasibility of the solution outdoors. The Robot Operating System (ROS) was used as the development environment, with C++ as the programming language. The visual odometry strategy consisted of a frame registration algorithm called Adaptive Iterative Closest Keypoint (AICK), based on Iterative Closest Point (ICP) and using Oriented FAST and Rotated BRIEF (ORB) as the image keypoint extractor. A grid-based local costmap of the rolling-window type was implemented to obtain a two-dimensional representation of the obstacles close to the vehicle within a predefined area, in order to allow further path planning applications.

    Experiments were performed both offline and in real time to test the system in indoor and outdoor scenarios. The results confirmed the viability of the designed framework for tracking the pose of the camera and detecting objects in indoor environments. However, outdoor environments exposed the limitations of the RGB-D sensor, making the current system configuration infeasible for outdoor purposes.
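
    A hedged sketch of one ORB-based frame-registration step with OpenCV is shown below; it estimates a 3D transform between two RGB-D frames from matched keypoints and is only an illustration of the general idea, not the thesis's AICK implementation.

        import cv2
        import numpy as np

        def register_frames(img_a, img_b, depth_a, depth_b, K):
            # Match ORB keypoints, back-project matches to 3D using the
            # depth maps and intrinsics K, then robustly estimate the
            # 3D motion between the two point sets.
            orb = cv2.ORB_create()
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(des_a, des_b)

            def backproject(kp, depth):
                u, v = int(kp.pt[0]), int(kp.pt[1])
                z = float(depth[v, u])
                return [(u - K[0, 2]) * z / K[0, 0],
                        (v - K[1, 2]) * z / K[1, 1], z]

            src = np.float32([backproject(kp_a[m.queryIdx], depth_a)
                              for m in matches])
            dst = np.float32([backproject(kp_b[m.trainIdx], depth_b)
                              for m in matches])
            retval, T, inliers = cv2.estimateAffine3D(src, dst,
                                                      ransacThreshold=0.02)
            return T   # 3x4 transform taking frame A points to frame B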

  • 298.
    Pereira, Pedro O.
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Family of controllers for attitude synchronization on the sphere2017In: Automatica, ISSN 0005-1098, Vol. 75, p. 271-281Article in journal (Refereed)
    Abstract [en]

    In this paper we study a family of controllers that guarantees attitude synchronization for a network of agents in the unit sphere domain, i.e., S^2. We propose distributed continuous controllers for elements whose dynamics are controllable, i.e., control with torque as command, which can be implemented by each individual agent without the need for a common global orientation frame among the network; i.e., they require only local information that can be measured by each individual agent from its own orientation frame. The controllers are constructed as functions of distance functions on S^2, and we provide conditions on those distance functions that guarantee that i) a synchronized network of agents is locally asymptotically stable for an arbitrary connected network graph; ii) a synchronized network is asymptotically achieved for almost all initial conditions in a tree network graph. When performing synchronization along a principal axis, we propose controllers that do not require full torque, but rather torque orthogonal to that principal axis; for synchronization along other axes, the proposed controllers require full torque. We also study the equilibrium configurations that come with specific types of network graphs. The proposed strategies can be used in attitude synchronization of swarms of underactuated rigid bodies, such as satellites.
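
    As an illustrative member of such a family (not necessarily one analyzed in the paper), take the chordal distance f(x_i, x_j) = 1 - x_i^T x_j on S^2; the corresponding kinematic-level gradient law reads

        \dot{x}_i = -\sum_{j \in \mathcal{N}_i} \nabla_{x_i} f(x_i, x_j)
                  =  \sum_{j \in \mathcal{N}_i} (I - x_i x_i^T)\, x_j,

    where the projection (I - x_i x_i^T) keeps the update tangent to the sphere and \mathcal{N}_i is the neighbor set of agent i; the paper works at the torque (second-order) level, of which this is only the first-order analogue.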

  • 299.
    Petersson, Lars
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Tell, Dennis
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Strandberg, Morten
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, H.I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Systems integration for real–world manipulation tasks2002In: 2002 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, 2002, p. 2500-2505Conference paper (Refereed)
    Abstract [en]

    A system developed to demonstrate the integration of a number of key research areas, such as localization, recognition, visual tracking, visual servoing, and grasping, is presented together with the underlying methodology adopted to facilitate the integration. Through sequencing of basic skills, provided by the above-mentioned competencies, the system has the potential to carry out flexible grasping for fetch-and-carry tasks in realistic environments. Through careful fusion of reactive and deliberative control and the use of multiple sensory modalities, significant flexibility is achieved. Experimental verification of the integrated system is presented.

  • 300.
    Piccolo, Giacomo
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karasalo, Maja
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Contour reconstruction using recursive smoothing splines experimental validation2007In: IEEE International Conference on Intelligent Robots and Systems: Vols 1-9, 2007, p. 2077-2082Conference paper (Refereed)
    Abstract [en]

    In this paper, a recursive smoothing spline approach for contour reconstruction is studied and evaluated. Periodic smoothing splines are used by a robot to approximate the contour of encountered obstacles in the environment. The splines are generated by minimizing a cost function subject to constraints imposed by a linear control system, and accuracy is improved iteratively using a recursive spline algorithm. The filtering effect of the smoothing splines allows for the use of noisy sensor data, and the method is robust to odometry drift. Experimental evaluation is performed for contour reconstruction of three objects using a SICK laser scanner mounted on a PowerBot from ActivMedia Robotics.
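
    Generically, a periodic smoothing-spline problem of this kind can be written as the following optimal control problem (symbols are generic, not the paper's exact formulation):

        \min_{u}\ \sum_{i=1}^{N} \big(y_i - x(t_i)\big)^2
                  + \lambda \int_0^{T} u(t)^2 \, dt
        \quad \text{s.t.}\quad \dot{x} = A x + B u,\quad x(0) = x(T),

    where the y_i are noisy contour measurements, the linear system (A, B) generates the spline, the periodicity constraint x(0) = x(T) closes the contour, and \lambda trades data fidelity against smoothing.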
