1 - 49 of 49
  • 1.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Design of a Control Strategy for Teleoperation of a Platform with Significant Dynamics, 2006. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York, NY: IEEE, 2006, p. 1700-1705. Conference paper (Refereed)
    Abstract [en]

    A teleoperation system for controlling a robot with fast dynamics over the Internet has been constructed. It employs a predictive control structure with an accurate dynamic model of the robot to overcome problems caused by varying delays. The operator interface uses a stereo virtual reality display of the robot cell and a haptic device for force feedback, including virtual obstacle-avoidance forces.

  • 2.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Minimum jerk based prediction of user actions for a ball catching task, 2007. In: IEEE International Conference on Intelligent Robots and Systems: Vols 1-9, IEEE conference proceedings, 2007, p. 2716-2722. Conference paper (Refereed)
    Abstract [en]

    The present paper examines minimum jerk models for human kinematics as a tool to predict user input in teleoperation with significant dynamics. Predictions of user input can be a powerful tool to bridge time-delays and to trigger autonomous sub-sequences. In this paper an example implementation is presented, along with the results of a pilot experiment in which a virtual reality simulation of a teleoperated ball-catching scenario is used to test the predictive power of the model. The results show that delays up to 100 ms can potentially be bridged with this approach.

  • 3.
    Cruciani, Silvia
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    In-hand manipulation using three-stages open loop pivoting, 2017. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

  • 4.
    Cruciani, Silvia
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    In-Hand Manipulation Using Three-Stages Open Loop Pivoting, 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 1244-1251. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

  • 5.
    Gratal, Xavi
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Integrating 3D features and virtual visual servoing for hand-eye and humanoid robot pose estimation, 2015. In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, no. February, p. 240-245. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose an approach for vision-based pose estimation of a robot hand or full-body pose. The method is based on virtual visual servoing using a CAD model of the robot and it combines 2-D image features with depth features. The method can be applied to estimate either the pose of a robot hand or pose of the whole body given that its joint configuration is known. We present experimental results that show the performance of the approach as demonstrated on both a mobile humanoid robot and a stationary manipulator.

  • 6.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Haustein, Joshua
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    On the Evolution of Fingertip Grasping Manifolds, 2016. In: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, p. 2022-2029, article id 7487349. Conference paper (Refereed)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution the system plans and executes fingertip grasps using Canny's grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a Schunk SDH dexterous hand mounted on a KUKA KR5 arm. We show that, as the grasping manifold is adapted to the system's experiences, the heuristic becomes more accurate, which results in an improved performance of the execution module. The improvement is not only observed for experienced objects, but also for previously unknown objects of similar sizes.

  • 7.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Chalmers, Sweden.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Barrientos, Francisco Eli Vina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Adaptive Control Approach for Opening Doors and Drawers Under Uncertainties, 2016. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 1, p. 161-175. Article in journal (Refereed)
    Abstract [en]

    We study the problem of robot interaction with mechanisms that afford one degree of freedom motion, e.g., doors and drawers. We propose a methodology for simultaneous compliant interaction and estimation of constraints imposed by the joint. Our method requires no prior knowledge of the mechanisms' kinematics, including the type of joint, prismatic or revolute. The method consists of a velocity controller that relies on force/torque measurements and estimation of the motion direction, the distance, and the orientation of the rotational axis. It is suitable for velocity controlled manipulators with force/torque sensor capabilities at the end-effector. Forces and torques are regulated within given constraints, while the velocity controller ensures that the end-effector of the robot moves with a task-related desired velocity. We give proof that the estimates converge to the true values under valid assumptions on the grasp, and error bounds for setups with inaccuracies in control, measurements, or modeling. The method is evaluated in different scenarios involving opening a representative set of door and drawer mechanisms found in household environments.

  • 8.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Mapping Human Intentions to Robot Motions via Physical Interaction Through a Jointly-held Object, 2014. In: Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, 2014, p. 391-397. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of human-robot collaborative manipulation of an object, where the human is active in controlling the motion and the robot passively follows the human's lead. Assuming that the human grasp of the object only allows for transfer of forces and not torques, there is an ambiguity as to whether the human desires translation or rotation. In this paper, we analyze different approaches to this problem both theoretically and in experiment. This leads to the proposal of a control methodology that switches between two different admittance control modes, based on the magnitude of the measured force, to disambiguate between rotation and translation.

  • 9.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Online Contact Point Estimation for Uncalibrated Tool Use, 2014. In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE Robotics and Automation Society, 2014, p. 2488-2493. Conference paper (Refereed)
    Abstract [en]

    One of the big challenges for robots working outside of traditional industrial settings is the ability to robustly and flexibly grasp and manipulate tools for various tasks. When a tool interacts with another object during task execution, several problems arise: the tool can be partially or completely occluded from the robot's view, and it can slip or shift in the robot's hand, so the robot may lose track of the tool's exact position in the hand. There is thus a need for online calibration and/or recalibration of the tool. In this paper, we present a model-free online tool-tip calibration method that uses force/torque measurements and an adaptive estimation scheme to estimate the point of contact between a tool and the environment. An adaptive force control component guarantees that interaction forces are limited even before the contact point estimate has converged. We also show how to simultaneously estimate the location and normal direction of the surface being touched by the tool-tip as the contact point is estimated. The stability of the overall scheme and the convergence of the estimated parameters are theoretically proven, and the performance is evaluated in experiments on a real robot.

  • 10.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Online Kinematics Estimation for Active Human-Robot Manipulation of Jointly Held Objects, 2013. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2013, p. 4872-4878. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a method for estimating the constraints imposed by a human agent on a jointly manipulated object. These estimates can be used to infer knowledge of where the human is grasping an object, enabling the robot to plan trajectories for manipulating the object while subject to the constraints. We describe the method in detail, motivate its validity theoretically, and demonstrate its use in co-manipulation tasks with a real robot.

  • 11.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of force-driven online motion plans for door opening under uncertainties, 2012. In: Workshop on Real-time Motion Planning: Online, Reactive, and in Real-time, 2012. Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for household robotic applications. Domestic environments are generally less structured than industrial environments and thus several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The velocity reference is designed by using feedback of force measurements while constraint and motion directions are updated online based on adaptive estimates of the position of the door hinge. The online estimator is appropriately designed in order to identify the unknown directions. The proposed scheme has theoretically guaranteed performance which is further demonstrated in experiments on a real robot. Experimental results additionally show the robustness of the proposed method under disturbances introduced by the motion of the mobile platform.

  • 12.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Interactive perception and manipulation of unknown constrained mechanisms using adaptive control, 2013. In: ICRA 2013 Mobile Manipulation Workshop on Interactive Perception, 2013. Conference paper (Refereed)
  • 13.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Model-free robot manipulation of doors and drawers by means of fixed-grasps, 2013. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, p. 4485-4492. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of robot interaction with objects attached to the environment through joints such as doors or drawers. We propose a methodology that requires no prior knowledge of the objects' kinematics, including the type of joint - either prismatic or revolute. The method consists of a velocity controller which relies on force/torque measurements and estimation of the motion direction, rotational axis and the distance from the center of rotation. The method is suitable for any velocity controlled manipulator with a force/torque sensor at the end-effector. The force/torque control regulates the applied forces and torques within given constraints, while the velocity controller ensures that the end-effector moves with a task-related desired tangential velocity. The paper also provides a proof that the estimates converge to the actual values. The method is evaluated in different scenarios typically met in a household environment.

  • 14.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    "Open Sesame!" Adaptive Force/Velocity Control for Opening Unknown Doors, 2012. In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE, 2012, p. 4040-4047. Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally less structured than industrial environments, several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The method consists of a velocity controller which uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction following the concept of hybrid force/motion control. A force controller acting within the velocity controller regulates the radial force to a desired small value while the velocity controller ensures that the end effector of the robot moves with a desired tangential velocity leading to task completion. This paper also provides a proof that the adaptive estimates of the radial direction converge to the actual radial vector. The performance of the control scheme is demonstrated in both simulation and on a real robot.

  • 15.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Adaptive force/velocity control for opening unknown doors, 2012. In: Robot Control, Volume 10, Part 1, 2012, p. 753-758. Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally unstructured, a robot must deal with several types of uncertainties associated with the dynamics and kinematics of a door to achieve successful opening. The present paper proposes a dynamic force/velocity controller which uses adaptive estimation of the radial direction based on adaptive estimates of the door hinge's position. The control action is decomposed into estimated radial and tangential directions, which are proved to converge to the corresponding actual values. The force controller uses reactive compensation of the tangential forces and regulates the radial force to a desired small value, while the velocity controller ensures that the robot's end-effector moves with a desired tangential velocity. The performance of the control scheme is demonstrated in simulation with a 2 DoF planar manipulator opening a door.

  • 16.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Gutierrez-Farewik, Elena
    KTH Engineering Sciences, Mechanics, Royal Institute of Technology (KTH), Stockholm, Sweden.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    A survey of human shoulder functional kinematic representations, 2018. In: Medical and Biological Engineering and Computing, ISSN 0140-0118, E-ISSN 1741-0444. Article, review/survey (Refereed)
    Abstract [en]

    In this survey, we review the field of human shoulder functional kinematic representations. The central question of this review is whether current approaches to shoulder kinematics can meet the high-reliability computational challenge posed by applications such as robot-assisted rehabilitation. Currently, the role of kinematic representations in such applications has been mostly overlooked. We have therefore systematically searched and summarised the existing literature on shoulder kinematics. The shoulder is an important functional joint, and its large range of motion (ROM) poses several mathematical and practical challenges. In kinematic analysis, the shoulder articulation is frequently approximated as a ball-and-socket joint. In light of the high-reliability computational challenge, our review questions this reductionist approximation. We propose that the challenge can be met by kinematic representations that are redundant, that use an active interpretation, and that emphasise functional understanding.

  • 17.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    How do we plan movements? A geometric answer, 2016. In: School and Symposium on Advanced Neurorehabilitation (SSNR2016): Proceedings, 2016, p. 16-17. Conference paper (Other academic)
    Abstract [en]

    Human movement is essentially a complex phenomenon. When humans work closely with robots, understanding human motion using the robot's sensors is a very challenging problem. This is partially due to the lack of consensus among researchers on which representation to use in such situations. This extended abstract presents a novel kinematic framework for studying human intention using hybrid twists. This is important as the functional aspects of the human shoulder are evaluated using the information embedded in thoraco-humeral kinematics. We successfully demonstrate that our approach is singularity free, and we show how the twist parameters vary according to the movement being performed.

  • 18.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Human shoulder functional kinematics: Are we ready for the high-reliability computational challenge?, 2016. Conference paper (Other academic)
    Abstract [en]

    In this preview talk, I will present a short summary of our ongoing work related to human shoulder functional kinematics. Robot-assisted rehabilitation needs a functional understanding of human kinematics in the design, operation and evaluation of this technology. The human shoulder is an important functional joint that enables fine motor skills for human upper arm manipulation. Due to several mathematical and practical challenges, shoulder kinematics is often oversimplified. Moreover, there is a lack of agreement among different research communities on the suitable kinematic representation when connecting humans to robots. Currently, the computational structure used in such applications is expected to have high reliability. Therefore, we pose the question: are we ready for the high-reliability computational challenge?

  • 19.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH- Royal Institute of Technology.
    Invariant Spatial Parametrization of Human Thoracohumeral Kinematics: A Feasibility Study2016In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Robotics and Automation Society, 2016, p. 4469-4476, article id 7759658Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel kinematic framework using hybrid twists that has the potential to improve the reliability of estimated human shoulder kinematics. This is important as the functional aspects of the human shoulder are evaluated using the information embedded in thoracohumeral kinematics. We successfully demonstrate in our results that our approach is invariant to the body-fixed coordinate definition, is singularity free and has high repeatability, thus resulting in flexible user-specific kinematic tracking not restricted to bony landmarks.

  • 20.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Segmenting humeral submovements using invariant geometric signatures2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (Iros) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 6951-6958, article id 8206619Conference paper (Refereed)
    Abstract [en]

    Discrete submovements are the building blocks of any complex movement. When robots collaborate with humans, extraction of such submovements can be very helpful in applications such as robot-assisted rehabilitation. Our work aims to segment these submovements based on the invariant geometric information embedded in segment kinematics. Moreover, this segmentation is achieved without any explicit kinematic representation. Our work demonstrates the usefulness of this invariant framework in segmenting a variety of humeral movements, which are performed at different speeds across different subjects. Our results indicate that this invariant framework has high computational reliability despite the inherent variability in human motion.

  • 21.
    Marzinotto, Alejandro
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Towards a Unified Behavior Trees Framework for Robot Control2014In: Robotics and Automation (ICRA), 2014 IEEE International Conference on , IEEE Robotics and Automation Society, 2014, p. 5420-5427Conference paper (Refereed)
    Abstract [en]

    This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.
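    The tick-based execution semantics that BT frameworks such as this one build on can be sketched in a few lines. The node names and the toy grasping mission below are illustrative stand-ins, not the unified framework or the BT library described in the paper.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children left to right; returns the status of the first
    child that does not return SUCCESS."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Fallback:
    """Ticks children left to right; returns the status of the first
    child that does not return FAILURE."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

# A toy grasping mission: try to grasp; if the object is not visible,
# fall back to searching for it.
object_visible = Action(lambda: Status.FAILURE)  # perception: not seen yet
search = Action(lambda: Status.RUNNING)          # searching takes time
grasp = Action(lambda: Status.SUCCESS)

tree = Fallback([Sequence([object_visible, grasp]), search])
```

    Ticking this tree while the object is unseen keeps the search branch RUNNING; once the visibility condition succeeds, the same tree switches to the grasp branch without any explicit mode-switching logic, which is the modularity argument usually made for BTs.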

  • 22.
    Masud, Nauman
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL. Univ Gavle, Dept Elect Math & Sci, S-80176 Gavle, Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, perception and learning, RPL.
    Isaksson, Magnus
    Univ Gavle, Dept Elect Math & Sci, S-80176 Gavle, Sweden..
    Disturbance observer based dynamic load torque compensator for assistive exoskeletons2018In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 54, p. 78-93Article in journal (Refereed)
    Abstract [en]

    In assistive robotics applications, the human limb is attached intimately to the robotic exoskeleton. The coupled dynamics of the human-exoskeleton system are highly nonlinear and uncertain, and effectively appear as uncertain load-torques at the joint actuators of the exoskeleton. This uncertainty makes the application of standard computed torque techniques quite challenging. Furthermore, the need for safe human interaction severely limits the gear ratio of the actuators. With small gear ratios, the uncertain joint load-torques cannot be ignored and need to be effectively compensated. A novel disturbance observer based dynamic load-torque compensator is proposed and analysed for the current controlled DC-drive actuators of the exoskeleton, to effectively compensate these uncertain load-torques at the joint level. The feedforward dynamic load-torque compensator is based on the higher-order dynamic model of the current controlled DC-drive. The dynamic load-torque compensator based current controlled DC-drive is then combined with a tailored feedback disturbance observer to further improve the compensation performance in the presence of drive parametric uncertainty. The proposed compensator structure is shown both theoretically and practically to give significantly improved performance compared with the disturbance observer compensator alone and with the classical static load-torque compensator, for rated load-torque frequencies up to 1.6 Hz, a typical joint frequency bound for the normal daily activities of the elderly. It is also shown theoretically that the proposed compensator achieves the improved performance with a comparable reference current requirement for the current controlled DC-drive.

  • 23.
    Mitsunaga, Noriaki
    et al.
    Osaka Kyoiku University.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Osaka University.
    Hagita, Norihiro
    Advanced Telecommunications Research International.
    Adapting Nonverbal Behavior Parameters to be Preferred by Individuals2012In: Human-Robot Interaction in Social Robotics / [ed] Takayuki Kanda and Hiroshi Ishiguro, Boca Raton, FL, USA: CRC Press, 2012, 1, p. 312-324Chapter in book (Other academic)
  • 24. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Adapting robot behavior for human-robot interaction2008In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no 4, p. 911-916Article in journal (Refereed)
    Abstract [en]

    Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human-robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interactions, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just like a human would. However, most previous research works expected the human to consciously give feedback, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study with a humanoid robot that has ten interaction behaviors has been conducted. The results from 12 subjects suggest that the proposed mechanism enables autonomous adaptation to individual preferences. Detailed discussion and conclusions are also presented.
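    The adaptation loop this line of work describes can be illustrated with a generic perturbation-based policy gradient sketch. The parameter names and the reward function below are hypothetical stand-ins: a real system cannot query the person's preference directly, and instead estimates reward from body signals such as gaze and movement distance.

```python
import random

def policy_gradient_step(params, reward_fn, eps=0.05, lr=0.5, n=12):
    """One perturbation-based policy gradient update: probe the reward
    (e.g. aggregated comfort/discomfort signals) at small random
    perturbations of the behavior parameters, estimate the gradient by
    finite differences against the unperturbed baseline, and step
    along it."""
    base = reward_fn(params)
    grad = [0.0] * len(params)
    counts = [0] * len(params)
    for _ in range(n):
        delta = [random.choice((-eps, 0.0, eps)) for _ in params]
        r = reward_fn([p + d for p, d in zip(params, delta)])
        for i, d in enumerate(delta):
            if d:
                grad[i] += (r - base) / d
                counts[i] += 1
    return [p + lr * (g / c if c else 0.0)
            for p, g, c in zip(params, grad, counts)]

# Hypothetical behavior parameters: [interaction distance (m), gaze ratio].
# The quadratic "reward" stands in for the human's hidden preference.
random.seed(0)
preferred = [0.7, 0.8]
reward = lambda p: -sum((a - b) ** 2 for a, b in zip(p, preferred))
params = [1.0, 0.5]
for _ in range(100):
    params = policy_gradient_step(params, reward)
```

    After repeated updates the behavior parameters drift toward the (hidden) preferred setting, which is the sense in which such a mechanism adapts to an individual without explicit feedback.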

  • 25. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Robot Behavior Adaptation for Human-Robot Interaction based on Policy Gradient Reinforcement Learning2005In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005. (IROS 2005)., IEEE , 2005, p. 1594-1601Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an adaptation mechanism for robot behaviors to make robot-human interactions run more smoothly. We propose such a mechanism based on reinforcement learning, which reads minute body signals from a human partner, and uses this information to adjust interaction distances, gaze-meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences by an experiment with twelve subjects.

  • 26. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Robot Behavior Adaptation for Human-Robot Interaction based on Policy Gradient Reinforcement Learning2006In: Journal of the Robotics Society of Japan, ISSN 0289-1824, Vol. 24, no 7, p. 820-829Article in journal (Refereed)
    Abstract [ja]

    (Please note: The main body of this paper is written in Japanese.) When humans interact in a social context, there are many factors apart from the actual communication that need to be considered. Previous studies in the behavioral sciences have shown that there is a need for a certain amount of personal space and that different people tend to meet the gaze of others to different extents. For humans, this is mostly subconscious, but when two persons interact, these factors are adjusted automatically to avoid discomfort. In this paper we propose an adaptation mechanism for robot behaviors to make human-robot interactions run more smoothly. We propose such a mechanism based on policy gradient reinforcement learning, which reads minute body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences by an experiment with twelve subjects.

  • 27.
    Rakesh, Krishnan
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    Björsell, N.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    Segmenting humeral submovements using invariant geometric signatures2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 6951-6958, article id 8206619Conference paper (Refereed)
    Abstract [en]

    Discrete submovements are the building blocks of any complex movement. When robots collaborate with humans, extraction of such submovements can be very helpful in applications such as robot-assisted rehabilitation. Our work aims to segment these submovements based on the invariant geometric information embedded in segment kinematics. Moreover, this segmentation is achieved without any explicit kinematic representation. Our work demonstrates the usefulness of this invariant framework in segmenting a variety of humeral movements, which are performed at different speeds across different subjects. Our results indicate that this invariant framework has high computational reliability despite the inherent variability in human motion.

  • 28.
    Rakesh, Krishnan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Niclas, Björsell
    Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Christian, Smith
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Invariant Spatial Parametrization of Human Thoracohumeral Kinematics: A Feasibility Study2016Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel kinematic framework using hybrid twists that has the potential to improve the reliability of estimated human shoulder kinematics. This is important as the functional aspects of the human shoulder are evaluated using the information embedded in thoracohumeral kinematics. We successfully demonstrate in our results that our approach is invariant to the body-fixed coordinate definition, is singularity free and has high repeatability, thus resulting in flexible user-specific kinematic tracking not restricted to bony landmarks.

  • 29.
    Shi, Chao
    et al.
    Intelligent Robotics and Communications Laboratories, Advanced Telecommunications Research International.
    Shiomi, Masahiro
    Intelligent Robotics and Communications Laboratories, Advanced Telecommunications Research International.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Intelligent Robotics and Communications Laboratories, Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Intelligent Robotics Laboratory, Osaka University.
    A model of distributional handing interaction for a mobile robot2013In: Robotics: Science and Systems IX, 2013Conference paper (Refereed)
    Abstract [en]

    This paper reports our research on developing a model for a robot distributing flyers to pedestrians. The difficulty is that potential receivers are pedestrians who are not necessarily cooperative; thus, the robot needs to appropriately plan its motion, making it easy and non-obstructive for potential receivers to receive the flyers. In order to establish the model, we observed human interactions on distributional handing in the real world. We analyzed and evaluated different handing methods that people perform, and established a model for a robot to perform natural handing. The proposed model is implemented in a humanoid robot and is confirmed as effective in a field experiment.

  • 30.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Input Estimation for Teleoperation: Using Minimum Jerk Human Motion Models to Improve Telerobotic Performance2009Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis treats the subject of applying human motion models to create estimators for the input signals of human operators controlling a telerobotic system. In telerobotic systems, the control signal input by the operator is often treated as a known quantity. However, there are instances where this is not the case. For example, a well-studied problem is teleoperation under time delay, where the robot at the remote site does not have access to current operator input due to time delays in the communication channel. Another is where the hardware sensors in the input device have low accuracy. Both these cases are studied in this thesis. A solution to these types of problems is to apply an estimator to the input signal. There exist several models that describe human hand motion, and these can be used to create a model-based estimator. In the present work, we propose the use of the minimum jerk (MJ) model. This choice of model is based mainly on the simplicity of the MJ model, which can be described as a fifth-degree polynomial in the Cartesian space of the position of the subject's hand. Estimators incorporating the MJ model are implemented and inserted into control systems for a teleoperated robot arm. We perform experiments where we show that these estimators can be used as predictors, increasing task performance in the presence of time delays. We also show how similar estimators can be used to implement direct position control using a handheld device equipped only with accelerometers.
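    The fifth-degree polynomial mentioned in the abstract has a well-known closed form for a single rest-to-rest reach. A minimal one-dimensional sketch of the MJ model and its use for bridging a delay could look like the following; the function names and the delay-bridging wrapper are illustrative, not the thesis implementation.

```python
def minimum_jerk(x0, xf, T, t):
    """Position at time t of the standard minimum jerk (MJ) trajectory:
    a rest-to-rest reach from x0 to xf of duration T, given by the
    fifth-degree polynomial solution."""
    tau = min(max(t / T, 0.0), 1.0)  # normalized time, clamped to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

def predict_input(x0, xf, T, t, delay):
    """Estimate the operator's input a communication delay ahead by
    evaluating the fitted MJ model at t + delay instead of t."""
    return minimum_jerk(x0, xf, T, t + delay)
```

    In a full predictor the parameters x0, xf and T are continuously refitted from recent input samples; once fitted, extrapolating the polynomial by the delay is cheap, which is part of the simplicity argument made for the MJ model.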

  • 31.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bratt, Mattias
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Teleoperation for a ball-catching task with significant dynamics2008In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 21, no 4, p. 604-620Article in journal (Refereed)
    Abstract [en]

    In this paper we present ongoing work on how to incorporate human motion models into the design of a high performance teleoperation platform. A short description of human motion models used for ball-catching is followed by a more detailed study of a teleoperation platform on which to conduct experiments. Also, a pilot study using minimum jerk theory to explain user input behavior in teleoperated catching is presented.

  • 32.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    A Minimum Jerk Predictor for Teleoperation with Variable Time Delay2009In: 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, NEW YORK: IEEE , 2009, p. 5621-5627Conference paper (Refereed)
    Abstract [en]

    In this paper we describe a method for bridging internet time delays in a teleoperation scenario. In this scenario, the time delays are not only stochastic, but also large compared to the task execution time. The proposed method uses minimum jerk motion models to predict the input from the user a time into the future equivalent to the one-way communication delay. We present results from a teleoperated ball-catching experiment with real internet delays, where we show that the proposed method yields a significant improvement over traditional methods for teleoperation over intercontinental distances.

  • 33.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Robot Manipulators Constructing a High-Performance Robot from Commercially Available Parts2009In: IEEE robotics & automation magazine, ISSN 1070-9932, E-ISSN 1558-223X, Vol. 16, no 4, p. 75-83Article in journal (Refereed)
    Abstract [en]

    In this paper we present a design study and technical specifications of a high performance robotic manipulator to be used for ball catching experiments using commercial off-the-shelf (COTS) components. Early evaluation shows that very good performance can be achieved using standardized PowerCube actuator modules from Amtec and a standard workstation using CAN bus communication. Implementation issues of low-level control and software platform are also described, as well as early experimental evaluation of the system.

  • 34.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Using COTS to construct a high performance robot arm2007In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE , 2007, p. 4056-4063Conference paper (Refereed)
    Abstract [en]

    In this paper we present a design study and technical specifications of a high performance robotic manipulator to be used for ball catching experiments using commercial off-the-shelf (COTS) components. Early evaluation shows that very good performance can be achieved using standardized PowerCube actuator modules from Amtec and a standard workstation using CAN bus communication. Implementation issues of low-level control and software platform are also described, as well as early experimental evaluation of the system.

  • 35.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    Wiimote Robot Control Using Human Motion Models2009In: 2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, NEW YORK: IEEE , 2009, p. 5509-5515Conference paper (Refereed)
    Abstract [en]

    As mass-market video game controllers have become more advanced, there has been a recent increase in interest in using them as intuitive and inexpensive control devices. In this paper we examine position control for a robot using a wiimote game controller. We show that human motion models can be used to achieve better precision than traditional tracking approaches, sufficient for simpler tasks. We also present an experiment showing that very intuitive control can be achieved, as novice subjects can control a robot arm through simple tasks after just a few minutes of practice and minimal instructions.

  • 36.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A predictor for operator input for time-delayed teleoperation2010In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 20, no 7, p. 778-786Article in journal (Refereed)
    Abstract [en]

    In this paper we describe a method for bridging Internet time delays in a free-motion type teleoperation scenario in an unmodeled remote environment with video feedback. The method proposed uses minimum jerk motion models to predict the input from the user a time into the future that is equivalent to the round-trip communication delay. The predictions are then used to control a remote robot. Thus the operator can in effect observe the resulting motion of the remote robot with virtually no time delay, even in the presence of a delay on the physical communications channel. We present results from a visually guided teleoperated line-tracing experiment with 100 ms round-trip delays, where we show that the proposed method makes a significant performance improvement for teleoperation with delays corresponding to intercontinental distances.

  • 37.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Ioannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Nalpantidis, Lazaros
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Gratal, Javier
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Qi, Peng
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dual arm manipulation-A survey2012In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 60, no 10, p. 1340-1353Article, review/survey (Refereed)
    Abstract [en]

    Recent advances in both anthropomorphic robots and bimanual industrial manipulators have led to an increased interest in the specific problems pertaining to dual arm manipulation. For the future, we foresee robots performing human-like tasks in both domestic and industrial settings. It is therefore natural to study the specifics of dual arm manipulation in humans and methods for using the resulting knowledge in robot control. The related scientific problems range from low-level control to high-level task planning and execution. This review aims to summarize the current state of the art across the heterogeneous range of fields that study the different aspects of these problems, specifically in dual arm manipulation.

  • 38.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Optimal Command Ordering for Serial Link Manipulators2012In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE Robotics and Automation Society, 2012, p. 255-261Conference paper (Refereed)
    Abstract [en]

    Reducing the number of cables needed for the actuators and sensors of humanoid and other robots with high numbers of degrees of freedom (DoF) is a relevant problem, often solved by using a common bus for all communication, which may result in bandwidth limitation problems. This paper proposes an optimized method to re-order the commands sent to the joint-local controllers of a high DoF serial manipulator. The proposed method evaluates which local controller would benefit the most from an updated command given a cost function, and sends a command to this controller. As is demonstrated in both simulation and in experiments on a real robot, the resulting scheme can significantly improve system performance, equivalent to increasing the communication frequency by up to 3 times.
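    The core of the scheduling idea described above, pick the joint-local controller that would benefit most from the next bus slot, reduces to a greedy argmax over a cost function. The joint names, state layout, and cost function in this sketch are hypothetical stand-ins, not the paper's cost function.

```python
def pick_controller(controllers, cost):
    """Greedy command ordering: each communication slot goes to the
    joint-local controller whose stale command currently has the
    highest cost."""
    return max(controllers, key=cost)

# Hypothetical per-joint state: (cycles since last command update,
# current tracking error in rad).
state = {
    "shoulder": (4, 0.02),
    "elbow":    (1, 0.10),
    "wrist":    (6, 0.01),
}

# Hypothetical cost: staleness weighted by tracking error, so a joint
# that is both out of date and far from its reference is served first.
def cost(joint):
    staleness, error = state[joint]
    return staleness * abs(error)

chosen = pick_controller(state.keys(), cost)
```

    On a shared bus this loop runs once per slot; after a joint is served, its staleness resets and the argmax naturally rotates among joints, which is how such a scheme can emulate a higher effective update frequency.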

  • 39.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Mitsunaga, Noriaki
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Adaptation of an Interactive Robot's Behavior Using Policy Gradient Reinforcement Learning2005In: Proceedings of the 10th Robotics Symposia, 2005, p. 319-324Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an adaptation mechanism for robot behaviors to make robot-human interactions run more smoothly. We propose such a mechanism based on reinforcement learning, which reads minute body signals from a human partner, and uses this information to adjust interaction distances, gaze-meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences by an experiment with twelve subjects.

  • 40.
    Smith, Christian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Shi, Chao
    Advanced Telecommunications Research International.
    Shiomi, Masahiro
    Advanced Telecommunications Research International.
    Kanda, Takayuki
    Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Osaka University.
    A model of handing interaction towards a pedestrian2013In: 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2013, IEEE conference proceedings, 2013, p. 415-416Conference paper (Refereed)
    Abstract [en]

    This video reports our research on developing a model for a robot handing flyers to pedestrians. The difficulty is that potential receivers are pedestrians who are not necessarily cooperative; thus, the robot needs to appropriately plan its motion, making it easy and non-obstructive for potential receivers to receive the flyers. In order to establish a model, we analyzed human interaction and found that (1) a giver approaches a pedestrian from the frontal right/left but not the frontal center, and (2) he simultaneously stops his walking motion and arm-extending motion at the moment when he hands out the flyer. Using these findings, we established a model for a robot to perform natural proactive handing. The proposed model is implemented in a humanoid robot and is confirmed as effective in a field experiment.

  • 41.
    Vina, Francisco
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Pauwels, Karl
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    In-hand manipulation using gravity and controlled slip, 2015. In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, IEEE conference proceedings, 2015, p. 5636-5641. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a sliding mode controller for in-hand manipulation that repositions a tool in the robot's hand by using gravity and controlling the slippage of the tool. In our approach, the robot holds the tool with a pinch grasp and we model the system as a link attached to the gripper via a passive revolute joint with friction, i.e., the grasp only affords rotational motions of the tool around a given axis of rotation. The robot controls the slippage by varying the opening between the fingers in order to allow the tool to move to the desired angular position following a reference trajectory. We show experimentally how the proposed controller achieves convergence to the desired tool orientation under variations of the tool's inertial parameters.
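    The mechanics described above can be sketched in a small simulation. All parameters are hypothetical, and the bang-bang grip law below is only a crude stand-in for the paper's sliding mode controller: the tool is a passive revolute joint with friction, gravity drives the rotation, and the grip force modulates the friction torque so the tool follows a reference angle:

    ```python
    import math

    # Toy model, hypothetical parameters throughout.
    m, l, g = 0.2, 0.15, 9.81       # tool mass (kg), COM distance from grasp (m)
    I = m * l * l                   # point-mass inertia about the grasp axis
    mu_r = 0.01                     # friction coefficient x contact radius (m)
    F_min, F_max = 1.0, 40.0        # grip forces for "loose" and "clamped" (N)
    dt = 1e-3

    theta, omega = 1.2, 0.0         # tool angle from the downward vertical (rad)
    for step in range(2000):
        t = step * dt
        theta_ref = max(1.2 - 1.0 * t, 0.2)     # reference: pivot down to 0.2 rad
        s = (theta - theta_ref) + 0.1 * omega   # sliding-surface-style error
        F = F_min if s > 0 else F_max           # loosen to let it slip, clamp to brake
        tau_g = -m * g * l * math.sin(theta)    # gravity torque, restoring toward 0
        tau_f = mu_r * F                        # max friction torque from the grip
        if abs(omega) < 1e-6 and abs(tau_g) <= tau_f:
            tau = 0.0                           # stiction holds the tool still
        elif abs(omega) < 1e-6:
            tau = tau_g - math.copysign(tau_f, tau_g)   # breakaway
        else:
            tau = tau_g - math.copysign(tau_f, omega)   # kinetic friction
        omega += (tau / I) * dt
        theta += omega * dt

    print(round(theta, 3))
    ```

    The tool settles near the 0.2 rad target by alternately slipping and braking; the published controller replaces this crude switching with a proper sliding mode design and demonstrates convergence experimentally under varying inertial parameters.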

  • 42.
    Vina, Francisco
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Adaptive Contact Point Estimation for Autonomous Tool Manipulation, 2014. Conference paper (Refereed)
    Abstract [en]

    Autonomous grasping and manipulation of tools enables robots to perform a large variety of tasks in unstructured environments such as households. Many common household tasks involve controlling the motion of the tip of a tool while it is in contact with another object. Thus, for these types of tasks the robot requires knowledge of the location of the contact point while it is executing the task in order to accomplish the manipulation objective. In this work we propose an integral adaptive control law that uses force/torque measurements to estimate online the location of the contact point between the tool manipulated by the robot and the surface which the tool touches.
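    The underlying relation is that the wrist torque satisfies tau = r x f, where r is the contact point in the sensor frame. A batch least-squares sketch of that relation is shown below (this is not the paper's integral adaptive law, which estimates online; the numbers are invented). A single wrench leaves the component of r along f unobservable, so several non-parallel forces are needed — the same observability issue an online estimator must handle:

    ```python
    import random

    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

    def skew(v):
        """Matrix S(v) with S(v) @ x == v x x."""
        return [[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]]

    def solve3(A, b):
        """Solve a 3x3 linear system by Cramer's rule."""
        def det(M):
            return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
                  - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
                  + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
        d = det(A)
        xs = []
        for k in range(3):
            Ak = [row[:] for row in A]
            for i in range(3):
                Ak[i][k] = b[i]
            xs.append(det(Ak) / d)
        return xs

    random.seed(1)
    r_true = [0.05, -0.02, 0.30]   # hypothetical contact point in the sensor frame (m)

    # tau = r x f = -skew(f) r, so accumulate normal equations for
    #   min_r  sum_i || skew(f_i) r + tau_i ||^2
    AtA = [[0.0]*3 for _ in range(3)]
    Atb = [0.0]*3
    for _ in range(50):
        f = [random.uniform(-5, 5) for _ in range(3)]   # measured force (N)
        tau = cross(r_true, f)                           # measured torque (Nm)
        S = skew(f)
        for i in range(3):
            for j in range(3):
                AtA[i][j] += sum(S[k][i]*S[k][j] for k in range(3))
            Atb[i] += -sum(S[k][i]*tau[k] for k in range(3))

    r_est = solve3(AtA, Atb)
    print(r_est)
    ```

    With noiseless measurements the estimate recovers the contact point exactly; the adaptive law in the paper achieves the same identification incrementally during task execution.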

  • 43.
    Viña Barrientos, Francisco
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Adaptive Control for Pivoting with Visual and Tactile Feedback, 2016. Conference paper (Refereed)
    Abstract [en]

    In this work we present an adaptive control approach for pivoting, which is an in-hand manipulation maneuver that consists of rotating a grasped object to a desired orientation relative to the robot's hand. We perform pivoting by means of gravity, allowing the object to rotate between the fingers of a one degree of freedom gripper and controlling the gripping force to ensure that the object follows a reference trajectory and arrives at the desired angular position. We use a visual pose estimation system to track the pose of the object and force measurements from tactile sensors to control the gripping force. The adaptive controller employs an update law that compensates for errors in the friction coefficient, which is one of the most common sources of uncertainty in manipulation. Our experiments confirm that the proposed adaptive controller successfully pivots a grasped object in the presence of uncertainty in the object's friction parameters.
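    The role of the update law can be illustrated with a generic normalized-gradient estimator (a stand-in, not the published law; all numbers are hypothetical). The friction coefficient mu relating grip force to friction torque, tau_f = mu * r * F, is unknown, and the estimate is corrected from noisy torque measurements:

    ```python
    import random

    random.seed(2)
    mu_true, r = 0.45, 0.01      # true coefficient and contact radius (hypothetical)
    mu_hat, gamma = 0.1, 0.5     # poor initial estimate, adaptation gain

    for _ in range(400):
        F = random.uniform(5.0, 30.0)                   # commanded grip force (N)
        tau_meas = mu_true * r * F + random.gauss(0, 1e-4)  # noisy torque (Nm)
        phi = r * F                                      # regressor
        e = mu_hat * phi - tau_meas                      # prediction error
        mu_hat -= gamma * phi * e / (1.0 + phi * phi)    # normalized gradient step

    print(round(mu_hat, 3))
    ```

    The normalization keeps the step size bounded regardless of the regressor magnitude; after a few hundred samples the estimate converges to the true coefficient, which is what lets the controller command an appropriate gripping force despite the initial uncertainty.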

  • 44.
    Viña, Francisco
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Predicting Slippage and Learning Manipulation Affordances through Gaussian Process Regression, 2013. In: Proceedings of the 2013 IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2013. Conference paper (Refereed)
    Abstract [en]

    Object grasping is commonly followed by some form of object manipulation – either when using the grasped object as a tool or when actively changing its position in the hand through in-hand manipulation to afford further interaction. In this process, slippage may occur due to inappropriate contact forces, various types of noise, and/or unexpected interaction or collision with the environment. In this paper, we study the problem of identifying continuous bounds on the forces and torques that can be applied to a grasped object before slippage occurs. We model the problem as kinesthetic rather than cutaneous learning, given that the measurements originate from a wrist-mounted force-torque sensor. Given the continuous output, this regression problem is solved using a Gaussian Process approach. We demonstrate a dual-armed humanoid robot that can autonomously learn force and torque bounds and use these to execute actions on objects such as sliding and pushing. We show that the model can be used not only for detecting the maximum allowable forces and torques, but also for potentially identifying what types of tasks, denoted manipulation affordances, a specific grasp configuration allows. The latter can then be used either to avoid specific motions or as a simple step toward achieving in-hand manipulation of objects through interaction with the environment.
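    A tiny one-dimensional Gaussian process regression sketch shows the mechanics behind such a model. The data (a scalar grasp feature mapped to an observed slip force) and all kernel parameters are invented for the demo; the paper works with full wrench measurements:

    ```python
    import math

    def rbf(a, b, ell=0.3, sf=1.0):
        """Squared-exponential kernel; ell and sf are hypothetical hyperparameters."""
        return sf * math.exp(-0.5 * ((a - b) / ell) ** 2)

    def solve(A, b):
        """Gaussian elimination with partial pivoting (dense, small systems)."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    # toy training data: grasp feature -> observed slip force (N), invented
    X = [0.0, 0.3, 0.6, 0.9, 1.2]
    y = [2.0, 2.4, 3.1, 3.0, 2.2]
    noise = 1e-4   # observation-noise jitter on the diagonal

    K = [[rbf(xi, xj) + (noise if i == j else 0.0) for j, xj in enumerate(X)]
         for i, xi in enumerate(X)]
    alpha = solve(K, y)

    def predict(xs):
        """Posterior mean and variance at a query point."""
        k = [rbf(xs, xi) for xi in X]
        mean = sum(ki * ai for ki, ai in zip(k, alpha))
        v = solve(K, k)
        var = rbf(xs, xs) - sum(ki * vi for ki, vi in zip(k, v))
        return mean, var

    m, v = predict(0.6)
    print(m, v)
    ```

    Near the data the predicted slip bound is confident (low variance); far from any observed grasp the variance grows toward the prior, which is exactly what makes the GP usable for deciding which manipulation actions a grasp affords safely.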

  • 45.
    Wang, Yuquan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ogren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Dual Arm Manipulation using Constraint Based Programming, 2014. In: Proceedings of the 19th World Congress, The International Federation of Automatic Control / [ed] Boje, Edward; Xia, Xiaohua, 2014, Vol. 19, p. 311-319. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a technique for online generation of dual arm trajectories using constraint based programming based on bound margins. Using this formulation, we take both equality and inequality constraints into account in a way that incorporates both feedback and feedforward terms, enabling e.g. tracking of timed trajectories in a new way. The technique is applied to a dual arm manipulator performing a bi-manual task. We present experimental validation of the approach, including comparisons between simulations and real experiments of a complex bimanual tracking task. We also show how to add force feedback to the framework, to account for modeling errors in the system. We compare the results with and without feedback, and show how the resulting trajectory is modified to achieve the prescribed interaction forces.
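    The feedforward-plus-feedback tracking with bound margins can be sketched on a single joint (all gains and limits hypothetical): the commanded velocity combines a feedforward term from the timed reference with feedback on the tracking error, and the inequality constraint (a position limit) enters as a velocity bound whose margin shrinks near the limit:

    ```python
    def clamp(u, lo, hi):
        return max(lo, min(hi, u))

    def bound_margin_limits(q, q_min, q_max, k=5.0, v_max=1.0):
        """Velocity bounds that shrink as q nears its limits, so the
        inequality constraint q_min <= q <= q_max is never violated."""
        lo = max(-v_max, -k * (q - q_min))
        hi = min(v_max, k * (q_max - q))
        return lo, hi

    dt, q = 0.01, 0.0
    q_min, q_max = -0.5, 0.4
    Kp = 4.0
    for step in range(300):
        t = step * dt
        q_ref, dq_ref = 0.6 * t, 0.6           # timed reference (it exceeds q_max)
        u = dq_ref + Kp * (q_ref - q)          # feedforward + feedback
        lo, hi = bound_margin_limits(q, q_min, q_max)
        q += clamp(u, lo, hi) * dt

    print(round(q, 4))
    ```

    The joint tracks the timed reference exactly until the bound margin activates, then converges to the limit from below without ever crossing it — the scalar analogue of how the full framework trades equality-constraint tracking against inequality constraints.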

  • 46.
    Wang, Yuquan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Whole Body Control of a Dual-Arm Mobile Robot Using a Virtual Kinematic Chain, 2016. In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 13, no 1, article id 1550047. Article in journal (Refereed)
    Abstract [en]

    Dual-arm manipulators have more advanced manipulation abilities than single-arm manipulators, and manipulators mounted on a mobile base have additional mobility and a larger workspace. Combining these advantages, mobile dual-arm robots are expected to perform a variety of tasks in the future. Kinematically, the configuration of two arms branching from a common mobile base results in a serial-to-parallel kinematic structure. This structure makes inverse kinematic computations non-trivial when responding to external disturbances, as the motion of the base has to take the needs of both arms into account. Instead of using the dual-arm kinematics directly, we propose to use a virtual kinematic chain (VKC) to specify the common motion of the two arms. We formulate a constraint-based programming solution which consists of two parts. In the first part, we use an extended serial kinematic chain, including the mobile base and the VKC, to formulate constraints that realize the desired orientation and translation expressed in the world frame. In the second part, we use the resolved VKC motion to constrain the common motion of the two arms. In order to explore the redundancy of the two arms in an optimization framework, we also provide a VKC-oriented manipulability measure as well as its closed-form gradient. We verify the proposed approach with simulations and experiments performed on a PR2 robot, which has two 7 degrees of freedom (DoF) arms and a 3 DoF mobile base.
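    The manipulability measure that the paper extends to the VKC setting is the classical Yoshikawa measure w = sqrt(det(J J^T)). A minimal sketch for a planar 2-link arm (link lengths hypothetical) shows how it is computed and that it vanishes at the stretched-out singularity, since for this arm w reduces to l1*l2*|sin(q2)|:

    ```python
    import math

    l1, l2 = 0.4, 0.3   # hypothetical link lengths (m)

    def jacobian(q1, q2):
        """Planar 2-link position Jacobian."""
        s1, c1 = math.sin(q1), math.cos(q1)
        s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
        return [[-l1 * s1 - l2 * s12, -l2 * s12],
                [ l1 * c1 + l2 * c12,  l2 * c12]]

    def manipulability(q1, q2):
        """Yoshikawa measure sqrt(det(J J^T)); for square J this is |det(J)|."""
        J = jacobian(q1, q2)
        return abs(J[0][0] * J[1][1] - J[0][1] * J[1][0])

    w = manipulability(0.3, math.pi / 2)
    print(w)
    ```

    Maximizing such a measure (the paper supplies its closed-form gradient for the VKC) keeps the redundant system away from singular configurations while the constraint-based program resolves the remaining degrees of freedom.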

  • 47.
    Wang, Yuquan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Ioannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Cooperative control of a serial-to-parallel structure using a virtual kinematic chain in a mobile dual-arm manipulation application, 2015. In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, Hamburg, Germany: IEEE Robotics and Automation Society, 2015, p. 2372-2379. Conference paper (Refereed)
    Abstract [en]

    In the future, mobile dual-arm robots are expected to perform many tasks. Kinematically, the configuration of two manipulators branching from the same common mobile base results in a serial-to-parallel kinematic structure, which makes inverse kinematic computations non-trivial. The motion of the base has to be decided in a trade-off, taking the needs of both arms into account. We propose to use a Virtual Kinematic Chain (VKC) to specify the common motion of the parallel manipulators, instead of using the two manipulators' kinematics directly. With this VKC, we formulate a constraint based programming solution for the robot to respond to external disturbances during task execution. The proposed approach is experimentally verified both in a noise-free illustrative simulation and in a real human-robot co-manipulation task.

  • 48.
    Wang, Yuquan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Whole body control of a dual-arm mobile robot using a virtual kinematic chain, 2014. Manuscript (preprint) (Other academic)
  • 49.
    Ögren, Petter
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Multi Objective Control Approach to Online Dual Arm Manipulation, 2012. In: Robot Control, International Federation of Automatic Control, 2012, p. 747-752. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way to exploit the redundancy of dual arm mobile manipulators when performing inherently bi-manual tasks using online controllers. Bi-manual tasks are tasks that require motion of both arms in order to be carried out efficiently, such as holding and cleaning an object, or moving an object from one hand to the other. These tasks are often associated with several constraints, such as singularity and collision avoidance, but also with a high degree of redundancy, as the relative positions of the two grippers are far more important than the absolute positions when, for example, handing an object from one arm to the other. By applying a modular multi objective control framework, inspired by earlier work on sub-task control, we exploit this redundancy to form a subset of the joint space that is feasible, i.e. not violating any of the constraints. Earlier approaches added the additional tasks as equality constraints, thereby reducing the dimension of the feasible subset until it was a single point. Here, however, we add the additional tasks as inequalities, removing parts of the feasible set rather than collapsing its dimensionality. Thus, we are able to handle an arbitrary number of constraints, instead of a number corresponding to the dimension of the feasible set (the degree of redundancy). Finally, inside the feasible set we choose controls that stay in the set, while simultaneously minimizing some given objective. The proposed approach is illustrated by several simulation examples.
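    The key idea, inequality constraints carving a feasible set of controls from which one is chosen near the objective-minimizing point, can be sketched in a toy 2-D control space. The constraints and numbers are invented, and the cyclic halfspace projection below is only an approximate stand-in for the optimization the framework would solve (it returns a feasible point, not necessarily the closest one):

    ```python
    def project_halfspace(u, a, b):
        """Project u onto the halfspace {x : a.x <= b}."""
        ax = a[0] * u[0] + a[1] * u[1]
        if ax <= b:
            return u
        aa = a[0] * a[0] + a[1] * a[1]
        s = (ax - b) / aa
        return [u[0] - s * a[0], u[1] - s * a[1]]

    # desired (objective-minimizing) control, and three inequality constraints;
    # note the count of constraints is arbitrary, unlike equality-based schemes
    u = [1.0, 1.0]
    constraints = [([1.0, 0.0], 0.5),    # e.g. a joint-speed limit on axis 1
                   ([0.0, 1.0], 0.8),    # a speed limit on axis 2
                   ([1.0, 1.0], 1.0)]    # e.g. a collision-avoidance bound

    for _ in range(100):                  # cyclic projections reach the feasible set
        for a, b in constraints:
            u = project_halfspace(u, a, b)

    print(u)
    ```

    Each inequality removes a halfspace of controls instead of collapsing a dimension, so the intersection stays a full-dimensional set and any number of such constraints can coexist, which is the contrast the abstract draws against equality-constraint approaches.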
