Digitala Vetenskapliga Arkivet

1 - 50 of 73
  • 1.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
Design of a Control Strategy for Teleoperation of a Platform with Significant Dynamics (2006). In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, New York, NY: IEEE, 2006, p. 1700-1705. Conference paper (Refereed)
    Abstract [en]

A teleoperation system for controlling a robot with fast dynamics over the Internet has been constructed. It employs a predictive control structure with an accurate dynamic model of the robot to overcome problems caused by varying delays. The operator interface uses a stereo virtual-reality display of the robot cell and a haptic device for force feedback, including virtual obstacle-avoidance forces.

    Download full text (pdf)
    Bratt_iros06.pdf
  • 2.
    Bratt, Mattias
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
Minimum jerk based prediction of user actions for a ball catching task (2007). In: IEEE International Conference on Intelligent Robots and Systems: Vols 1-9, IEEE conference proceedings, 2007, p. 2716-2722. Conference paper (Refereed)
    Abstract [en]

    The present paper examines minimum jerk models for human kinematics as a tool to predict user input in teleoperation with significant dynamics. Predictions of user input can be a powerful tool to bridge time-delays and to trigger autonomous sub-sequences. In this paper an example implementation is presented, along with the results of a pilot experiment in which a virtual reality simulation of a teleoperated ball-catching scenario is used to test the predictive power of the model. The results show that delays up to 100 ms can potentially be bridged with this approach.

    Download full text (pdf)
    Bratt_iros07.pdf
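The minimum jerk model used in the abstract above has a well-known closed form for a reach between rest states; the sketch below is illustrative only (function names and the prediction wrapper are assumptions, not the paper's implementation), showing how such a model can extrapolate a hand position a short delay ahead.

```python
def min_jerk_position(x0: float, xf: float, t: float, T: float) -> float:
    """Minimum-jerk position at time t for a point-to-point reach of duration T:
    x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T."""
    s = min(max(t / T, 0.0), 1.0)  # clamp normalised time to [0, 1]
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def predict_ahead(x0: float, xf: float, t: float, T: float, delay: float) -> float:
    """Extrapolate the operator's hand position `delay` seconds into the future,
    which is how a predictor could bridge a teleoperation time delay."""
    return min_jerk_position(x0, xf, t + delay, T)
```

For example, halfway through a reach the model places the hand exactly halfway between start and goal, and the prediction saturates at the goal once `t + delay` exceeds `T`.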
  • 3.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Hang, Kaiyu
    GRAB Lab, Department of Mechanical Engineering and Material Science at Yale University, New Haven, CT USA.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Dual-Arm In-Hand Manipulation Using Visual Feedback (2019). In: IEEE-RAS International Conference on Humanoid Robots, Institute of Electrical and Electronics Engineers (IEEE), 2019, p. 387-394. Conference paper (Refereed)
    Abstract [en]

In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information on the object's shape using a dual-arm robot. From the available information on the object, which can be a complete point cloud but also partial data, our method plans a sequence of rotations and translations to reconfigure the object's pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.

  • 4.
    Cruciani, Silvia
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
In-Hand Manipulation Using Three-Stages Open Loop Pivoting (2017). In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 1244-1251. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a method for pivoting an object held by a parallel gripper, without requiring accurate dynamical models or advanced hardware. Our solution uses the motion of the robot arm for generating inertial forces to move the object. It also controls the rotational friction at the pivoting point by commanding a desired distance to the gripper's fingers. This method relies neither on fast and precise tracking systems to obtain the position of the tool, nor on real-time and high-frequency controllable robotic grippers to quickly adjust the finger distance. We demonstrate the efficacy of our method by applying it on a Baxter robot.

    Download full text (pdf)
    fulltext
  • 5.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Integrating Path Planning and Pivoting (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A. et al., IEEE, 2018, p. 6601-6608. Conference paper (Refereed)
    Abstract [en]

    In this work we propose a method for integrating motion planning and in-hand manipulation. Commonly addressed as a separate step from the final execution, in-hand manipulation allows the robot to reorient an object within the end-effector for the successful outcome of the goal task. A joint achievement of repositioning the object and moving the manipulator towards its desired final pose saves time in the execution and introduces more flexibility in the system. We address this problem using a pivoting strategy (i.e. in-hand rotation) for repositioning the object and we integrate this strategy with a path planner for the execution of a complex task. This method is applied on a Baxter robot and its efficacy is shown by experimental results.

    Download full text (pdf)
    fulltext
  • 6.
    Cruciani, Silvia
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Hang, Kaiyu
    Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China.;Hong Kong Univ Sci & Technol, Inst Adv Study, Hong Kong, Peoples R China..
Dexterous Manipulation Graphs (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) / [ed] Maciejewski, A. A. et al., IEEE, 2018, p. 2040-2047. Conference paper (Refereed)
    Abstract [en]

We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is translated into motions of the robot to move the object held by the end-effector. We use a dual-arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.

    Download full text (pdf)
    fulltext
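The planning step the Dexterous Manipulation Graphs abstract describes — finding a sequence of manipulation primitives that carries the object to a desired pose — reduces to a path search over a graph of discretised grasp configurations. A hedged sketch (node names and graph contents are invented for illustration; the paper's actual graph construction is more involved):

```python
from collections import deque

def plan_regrasp(graph, start, goal):
    """Breadth-first search over a manipulation graph given as an adjacency
    dict {node: [neighbours reachable by one primitive]}.
    Returns the node sequence from start to goal, or None if unreachable."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:      # walk parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parent:        # visit each configuration once
                parent[nxt] = node
                queue.append(nxt)
    return None
```

Each edge in the returned path would then be executed as one manipulation primitive (e.g. a non-prehensile push between the two arms).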
  • 7.
    Gratal, Xavi
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
Integrating 3D features and virtual visual servoing for hand-eye and humanoid robot pose estimation (2015). In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, no. February, p. 240-245. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose an approach for vision-based pose estimation of a robot hand or full-body pose. The method is based on virtual visual servoing using a CAD model of the robot and it combines 2-D image features with depth features. The method can be applied to estimate either the pose of a robot hand or pose of the whole body given that its joint configuration is known. We present experimental results that show the performance of the approach as demonstrated on both a mobile humanoid robot and a stationary manipulator.

  • 8.
    Gustavsson, Oscar
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Iovino, Matteo
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corp Res, Västerås, Sweden..
    Styrud, Jonathan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Robot, Västerås, Sweden..
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Combining Context Awareness and Planning to Learn Behavior Trees from Demonstration (2022). In: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (IEEE RO-MAN 2022), Institute of Electrical and Electronics Engineers Inc., 2022, p. 1153-1160. Conference paper (Refereed)
    Abstract [en]

Fast-changing tasks in unpredictable, collaborative environments are typical for small and medium-sized companies, where robotised applications are increasing. Thus, robot programs should be generated quickly and with little effort, and the robot should be able to react dynamically to the environment. To address this, we propose a method that combines context awareness and planning to learn Behavior Trees (BTs), a reactive policy representation that is becoming more popular in robotics and has been used successfully in many collaborative scenarios. Context awareness allows for inferring from the demonstration the frames in which actions are executed and for capturing relevant aspects of the task, while a planner is used to automatically generate the BT from the sequence of actions in the demonstration. The learned BT is shown to solve non-trivial manipulation tasks where learning the context is fundamental to achieving the goal. Moreover, we collected non-expert demonstrations to study the performance of the algorithm in industrial scenarios.

  • 9.
    Hang, Kaiyu
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Haustein, Joshua
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Li, Miao
    EPFL.
    Billard, Aude
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
On the Evolution of Fingertip Grasping Manifolds (2016). In: IEEE International Conference on Robotics and Automation, IEEE Robotics and Automation Society, 2016, p. 2022-2029, article id 7487349. Conference paper (Refereed)
    Abstract [en]

    Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution the system plans and executes fingertip grasps using Canny’s grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a SchunkSDH dexterous hand mounted on a KUKA-KR5 arm. We show that, as the grasping manifold is adapted to the system’s experiences, the heuristic becomes more accurate, which results in an improved performance of the execution module. The improvement is not only observed for experienced objects, but also for previously unknown objects of similar sizes.

  • 10.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corporate Research, Västerås.
    Dogan, Fethiye Irmak
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Interactive Disambiguation for Behavior Tree Execution (2022). In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper (Refereed)
    Abstract [en]

In recent years, robots have been used in an increasing variety of tasks, especially by small- and medium-sized enterprises. These tasks are usually fast-changing, collaborative, and take place in unpredictable environments with possible ambiguities. It is important to have methods capable of generating robot programs easily, made as general as possible by handling uncertainties. We present a system that integrates a method to learn Behavior Trees (BTs) from demonstration for pick and place tasks with a framework that uses verbal interaction to ask follow-up clarification questions to resolve ambiguities. During the execution of a task, the system asks for user input when there is a need to disambiguate an object in the scene, i.e. when the targets of the task are objects of the same type present in multiple instances. The integrated system is demonstrated on different scenarios of a pick and place task with increasing levels of ambiguity. The code used for this paper is publicly available at https://github.com/matiov/disambiguate-BT-execution.

  • 11.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corporate Research, Västerås, Sweden.
    Forster, Julian
    ETH Zürich, Autonomous Systems Lab, Zürich, Switzerland.
    Falco, Pietro
    ABB Corporate Research, Västerås, Sweden.
    Chung, Jen Jen
    ETH Zürich, Autonomous Systems Lab, Zürich, Switzerland; School of ITEE, The University of Queensland, Australia.
    Siegwart, Roland
    ETH Zürich, Autonomous Systems Lab, Zürich, Switzerland.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
On the programming effort required to generate Behavior Trees and Finite State Machines for robotic applications (2023). In: Proceedings - ICRA 2023: IEEE International Conference on Robotics and Automation, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 5807-5813. Conference paper (Refereed)
    Abstract [en]

In this paper we provide a practical demonstration of how the modularity of a Behavior Tree (BT) decreases the effort of programming a robot task when compared to a Finite State Machine (FSM). In recent years the way to represent a task plan to control an autonomous agent has been shifting from the standard FSM towards BTs. Many works in the literature have highlighted and proven the benefits of such a design compared to standard approaches, especially in terms of modularity, reactivity and human readability. However, these works have often failed to provide a tangible comparison of the implementation of those policies and of the programming effort required to modify them. This is a relevant aspect in many robotic applications, where the design choice is dictated both by the robustness of the policy and by the time required to program it. In this work, we compare backward chained BTs with a fault-tolerant design of FSMs by evaluating the cost to modify them. We validate the analysis with a set of experiments in a simulation environment where a mobile manipulator solves an item fetching task.

  • 12.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Scukins, Edvards
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Styrud, Jonathan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
A survey of Behavior Trees in robotics and AI (2022). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 154, article id 104096. Article in journal (Refereed)
    Abstract [en]

    Behavior Trees (BTs) were invented as a tool to enable modular AI in computer games, but have received an increasing amount of attention in the robotics community in the last decade. With rising demands on agent AI complexity, game programmers found that the Finite State Machines (FSM) that they used scaled poorly and were difficult to extend, adapt and reuse. In BTs, the state transition logic is not dispersed across the individual states, but organized in a hierarchical tree structure, with the states as leaves. This has a significant effect on modularity, which in turn simplifies both synthesis and analysis by humans and algorithms alike. These advantages are needed not only in game AI design, but also in robotics, as is evident from the research being done. In this paper we present a comprehensive survey of the topic of BTs in Artificial Intelligence and Robotic applications. The existing literature is described and categorized based on methods, application areas and contributions, and the paper is concluded with a list of open research challenges.
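The hierarchical transition logic the survey abstract describes can be sketched with the two standard BT composites, Sequence and Fallback (class names here are hypothetical; real frameworks such as py_trees provide equivalent control flow):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children left to right; fails or keeps running on the first
    child that is not SUCCESS (logical AND over the children)."""
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != Status.SUCCESS:
                return s
        return Status.SUCCESS

class Fallback:
    """Ticks children left to right; succeeds or keeps running on the first
    child that is not FAILURE (logical OR over the children)."""
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s != Status.FAILURE:
                return s
        return Status.FAILURE

class Leaf:
    """Stand-in for a condition or action node returning a fixed status."""
    def __init__(self, status): self.status = status
    def tick(self): return self.status
```

Because the transition logic lives in the composites rather than in the leaves, subtrees can be rearranged or reused without editing the states themselves — the modularity advantage over FSMs that the survey highlights.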

  • 13.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corporate Research, Västerås, Sweden.
    Styrud, Jonathan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Robotics, Västerås, Sweden.
    Falco, Pietro
    ABB Corporate Research, Västerås, Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
A Framework for Learning Behavior Trees in Collaborative Robotic Applications. Manuscript (preprint) (Other academic)
    Abstract [en]

In modern industrial collaborative robotic applications, it is desirable to create robot programs automatically, intuitively, and time-efficiently. Moreover, robots need to be controlled by reactive policies to face the unpredictability of the environment they operate in. In this paper we propose a framework that combines a method that learns Behavior Trees (BTs) from demonstration with a method that evolves them with Genetic Programming (GP) for collaborative robotic applications. The main contribution of this paper is to show that by combining the two learning methods we obtain a method that allows non-expert users to semi-automatically, time-efficiently, and interactively generate BTs. We validate the framework with a series of manipulation experiments. The BT is fully learnt in simulation and then transferred to a real collaborative robot.

    Download full text (pdf)
    fulltext
  • 14.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Corporate Research.
    Styrud, Jonathan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. ABB Robotics.
    Falco, Pietro
    ABB Corporporate Research Center Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Learning Behavior Trees with Genetic Programming in Unpredictable Environments (2021). In: 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, p. 459-4597. Conference paper (Refereed)
    Abstract [en]

    Modern industrial applications require robots to operate in unpredictable environments, and programs to be created with a minimal effort, to accommodate frequent changes to the task. Here, we show that genetic programming can be effectively used to learn the structure of a behavior tree (BT) to solve a robotic task in an unpredictable environment. We propose to use a simple simulator for learning, and demonstrate that the learned BTs can solve the same task in a realistic simulator, converging without the need for task specific heuristics, making our method appealing for real robotic applications.

    Download full text (pdf)
    fulltext
  • 15.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Chalmers, Sweden.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Barrientos, Francisco Eli Vina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
An Adaptive Control Approach for Opening Doors and Drawers Under Uncertainties (2016). In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no. 1, p. 161-175. Article in journal (Refereed)
    Abstract [en]

We study the problem of robot interaction with mechanisms that afford one degree of freedom motion, e.g., doors and drawers. We propose a methodology for simultaneous compliant interaction and estimation of the constraints imposed by the joint. Our method requires no prior knowledge of the mechanism's kinematics, including the type of joint, prismatic or revolute. The method consists of a velocity controller that relies on force/torque measurements and estimation of the motion direction, the distance, and the orientation of the rotational axis. It is suitable for velocity-controlled manipulators with force/torque sensing at the end-effector. Forces and torques are regulated within given constraints, while the velocity controller ensures that the end-effector of the robot moves with a task-related desired velocity. We prove that the estimates converge to the true values under valid assumptions on the grasp, and provide error bounds for setups with inaccuracies in control, measurements, or modeling. The method is evaluated in different scenarios involving opening a representative set of door and drawer mechanisms found in household environments.

    Download full text (pdf)
    fulltext
  • 16.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
Mapping Human Intentions to Robot Motions via Physical Interaction Through a Jointly-held Object (2014). In: Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on, 2014, p. 391-397. Conference paper (Refereed)
    Abstract [en]

In this paper we consider the problem of human-robot collaborative manipulation of an object, where the human is active in controlling the motion, and the robot passively follows the human's lead. Assuming that the human grasp of the object only allows for transfer of forces and not torques, there is an ambiguity as to whether the human desires translation or rotation. We analyze different approaches to this problem both theoretically and in experiment. This leads to the proposal of a control methodology that switches between two different admittance control modes, based on the magnitude of the measured force, to disambiguate the rotation/translation problem.

    Download full text (pdf)
    Roman2014Karayiannidis
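The switching idea in the abstract above — pick one of two admittance modes from the magnitude of the measured force — can be sketched as follows. This is a hedged illustration only: the threshold, gains, and function names are assumptions, not the paper's controller.

```python
import math

FORCE_THRESHOLD = 5.0  # [N] hypothetical switching threshold

def admittance_velocity(force_xyz, gain_translate=0.02, gain_rotate=0.1):
    """Map a measured force vector to commanded (linear, angular) velocities.
    A large force magnitude is interpreted as a desired translation of the
    jointly-held object; a small one as a desired rotation about the grasp."""
    magnitude = math.sqrt(sum(f * f for f in force_xyz))
    if magnitude > FORCE_THRESHOLD:
        # translation mode: comply linearly, hold orientation
        v = [gain_translate * f for f in force_xyz]
        w = [0.0, 0.0, 0.0]
    else:
        # rotation mode: hold position, rotate about the grasp point
        v = [0.0, 0.0, 0.0]
        w = [gain_rotate * f for f in force_xyz]
    return v, w
```

In practice such a switch would also need hysteresis around the threshold to avoid chattering between the two modes.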
  • 17.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
Online Contact Point Estimation for Uncalibrated Tool Use (2014). In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE Robotics and Automation Society, 2014, p. 2488-2493. Conference paper (Refereed)
    Abstract [en]

One of the big challenges for robots working outside of traditional industrial settings is the ability to robustly and flexibly grasp and manipulate tools for various tasks. When a tool is interacting with another object during task execution, several problems arise: a tool can be partially or completely occluded from the robot's view, and it can slip or shift in the robot's hand, so the robot may lose information about the exact position of the tool in the hand. Thus, there is a need for online calibration and/or recalibration of the tool. In this paper, we present a model-free online tool-tip calibration method that uses force/torque measurements and an adaptive estimation scheme to estimate the point of contact between a tool and the environment. An adaptive force control component guarantees that interaction forces are limited even before the contact point estimate has converged. We also show how to simultaneously estimate the location and normal direction of the surface being touched by the tool-tip as the contact point is estimated. The stability of the overall scheme and the convergence of the estimated parameters are theoretically proven, and the performance is evaluated in experiments on a real robot.

    Download full text (pdf)
    fulltext
  • 18.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Online Kinematics Estimation for Active Human-Robot Manipulation of Jointly Held Objects2013In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE , 2013, p. 4872-4878Conference paper (Refereed)
    Abstract [en]

    This paper introduces a method for estimating the constraints imposed by a human agent on a jointly manipulated object. These estimates can be used to infer where the human is grasping the object, enabling the robot to plan trajectories for manipulating the object subject to those constraints. We describe the method in detail, motivate its validity theoretically, and demonstrate its use in co-manipulation tasks with a real robot.

    Download full text (pdf)
    iros2013karayiannidis
  • 19.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of force-driven online motion plans for door opening under uncertainties2012In: Workshop on Real-time Motion Planning: Online, Reactive, and in Real-time, 2012Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for household robotic applications. Domestic environments are generally less structured than industrial environments and thus several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The velocity reference is designed by using feedback of force measurements while constraint and motion directions are updated online based on adaptive estimates of the position of the door hinge. The online estimator is appropriately designed in order to identify the unknown directions. The proposed scheme has theoretically guaranteed performance which is further demonstrated in experiments on a real robot. Experimental results additionally show the robustness of the proposed method under disturbances introduced by the motion of the mobile platform.

  • 20.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Interactive perception and manipulation of unknown constrained mechanisms using adaptive control2013In: ICRA 2013 Mobile Manipulation Workshop on Interactive Perception, 2013Conference paper (Refereed)
  • 21.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Model-free robot manipulation of doors and drawers by means of fixed-grasps2013In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE , 2013, p. 4485-4492Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of robot interaction with objects attached to the environment through joints, such as doors or drawers. We propose a methodology that requires no prior knowledge of the objects' kinematics, including the type of joint - either prismatic or revolute. The method consists of a velocity controller which relies on force/torque measurements and estimation of the motion direction, rotational axis and the distance from the center of rotation. The method is suitable for any velocity-controlled manipulator with a force/torque sensor at the end-effector. The force/torque control regulates the applied forces and torques within given constraints, while the velocity controller ensures that the end-effector moves with a task-related desired tangential velocity. The paper also provides a proof that the estimates converge to the actual values. The method is evaluated in different scenarios typically met in a household environment.

    Download full text (pdf)
    icra2013Karayiannidis
  • 22.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    "Open Sesame!" Adaptive Force/Velocity Control for Opening Unknown Doors2012In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE , 2012, p. 4040-4047Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally less structured than industrial environments, several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The method consists of a velocity controller which uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction following the concept of hybrid force/motion control. A force controller acting within the velocity controller regulates the radial force to a desired small value while the velocity controller ensures that the end effector of the robot moves with a desired tangential velocity leading to task completion. This paper also provides a proof that the adaptive estimates of the radial direction converge to the actual radial vector. The performance of the control scheme is demonstrated in both simulation and on a real robot.
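    The radial/tangential decomposition described in the abstract can be sketched in a few lines. This is a minimal 2-D illustration with made-up gains and setpoints; the function and parameter names are hypothetical, not taken from the paper:

```python
import numpy as np

def door_velocity_command(x, hinge_est, f_meas, v_tan=0.05, f_des=2.0, k_f=0.001):
    """Hybrid force/velocity command for door opening (2-D sketch).
    x: end-effector position, hinge_est: current estimate of the hinge
    position, f_meas: measured contact force at the end-effector."""
    r = x - hinge_est
    r_hat = r / np.linalg.norm(r)            # estimated radial direction
    t_hat = np.array([-r_hat[1], r_hat[0]])  # tangential direction (90-degree rotation)
    f_r = f_meas @ r_hat                     # measured radial force component
    # Regulate the radial force to a small desired value while commanding
    # the desired tangential velocity along the estimated arc.
    return v_tan * t_hat + k_f * (f_des - f_r) * r_hat
```

    When the measured radial force already equals the desired value, the command is purely tangential; the adaptive hinge estimate (not shown here) is what makes the decomposition converge to the true radial and tangential directions.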

    Download full text (pdf)
    Iros2012Karayiannidis
  • 23.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Adaptive force/velocity control for opening unknown doors2012In: Robot Control, Volume 10, Part  1, 2012, p. 753-758Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally unstructured, a robot must deal with several types of uncertainties associated with the dynamics and kinematics of a door to achieve successful opening. The present paper proposes a dynamic force/velocity controller which uses adaptive estimation of the radial direction based on adaptive estimates of the door hinge's position. The control action is decomposed into estimated radial and tangential directions, which are proved to converge to the corresponding actual values. The force controller uses reactive compensation of the tangential forces and regulates the radial force to a desired small value, while the velocity controller ensures that the robot's end-effector moves with a desired tangential velocity. The performance of the control scheme is demonstrated in simulation with a 2-DoF planar manipulator opening a door.

  • 24.
    Khanna, Parag
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Multimodal Data Set of Human Handovers with Design Implications for Human-Robot Handovers2023In: 2023 32nd IEEE international conference on robot and human interactive communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE) , 2023, p. 1843-1850Conference paper (Refereed)
    Abstract [en]

    Handovers are basic yet sophisticated motor tasks performed seamlessly by humans, and they are among the most common activities in our daily lives and social environments. This makes mastering the art of handovers critical for a social and collaborative robot. In this work, we present an experimental study of human-human handovers involving 13 pairs, i.e., 26 participants. We record and explore multiple features of handovers between humans, aimed at informing the design of handovers between humans and robots. With this work, we further create and publish a novel data set of 8672 handovers, which includes human motion tracking and handover forces. We further analyze the effect of object weight and the role of visual sensory input in human-human handovers, as well as possible design implications for robots. As a proof of concept, the data set was used to create a human-inspired, data-driven strategy for robotic grip release in handovers, which was demonstrated to result in better robot-to-human handovers.

  • 25.
    Khanna, Parag
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Human Inspired Grip-Release Technique for Robot-Human Handovers2022In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), IEEE Robotics and Automation Society, 2022, p. 694-701Conference paper (Refereed)
    Abstract [en]

    Fluent and natural robot-to-human handovers are essential for human-robot collaborative tasks, and the robot's grip-release action is important for achieving this fluency. This paper describes an experimental study investigating interaction forces during grip release in human-human handovers, comprising 13 participant pairs and a sensor-embedded object. The results from this study were used to create a human-inspired, data-driven strategy for the robot's grip-release technique in robot-to-human handovers. This strategy was then evaluated alongside other grip-release techniques in a robot-to-human handover study involving 20 participants. The data-driven strategy outperformed the other strategies, achieving more natural handovers through faster grip release of the sensor-embedded object.

  • 26.
    Khanna, Parag
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Effects of Explanation Strategies to Resolve Failures in Human-Robot Collaboration2023In: 2023 32nd IEEE international conference on robot and human interactive communication, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE) , 2023, p. 1829-1836Conference paper (Refereed)
    Abstract [en]

    Despite significant improvements in robot capabilities, they are likely to fail in human-robot collaborative tasks due to high unpredictability in human environments and varying human expectations. In this work, we explore the role of explanation of failures by a robot in a human-robot collaborative task. We present a user study incorporating common failures in collaborative tasks with human assistance to resolve the failure. In the study, a robot and a human work together to fill a shelf with objects. Upon encountering a failure, the robot explains the failure and the resolution to overcome the failure, either through handovers or humans completing the task. The study is conducted using different levels of robotic explanation based on the failure action, failure cause, and action history, and different strategies in providing the explanation over the course of repeated interaction. Our results show that the success in resolving the failures is not only a function of the level of explanation but also the type of failures. Furthermore, while novice users rate the robot higher overall in terms of their satisfaction with the explanation, their satisfaction is not only a function of the robot's explanation level at a certain round but also the prior information they received from the robot.

  • 27.
    Khanna, Parag
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Leite, Iolanda
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    How do Humans take an Object from a Robot: Behavior changes observed in a User Study2023In: HAI 2023 - Proceedings of the 11th Conference on Human-Agent Interaction, Association for Computing Machinery (ACM) , 2023, p. 372-374Conference paper (Refereed)
    Abstract [en]

    To facilitate human-robot interaction and gain human trust, a robot should recognize and adapt to changes in human behavior. This work documents different human behaviors observed while taking objects from an interactive robot in an experimental study, categorized across two dimensions: pull force applied and handedness. We also present the changes observed in human behavior upon repeated interaction with the robot to take various objects.

  • 28.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electrical Engineering, Mathematics and Science, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden; BioMEx Center, Royal Institute of Technology (KTH), Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electrical Engineering, Mathematics and Science, Electronics.
    Gutierrez-Farewik, Elena
    KTH Engineering Sciences, Mechanics, Royal Institute of Technology (KTH), Stockholm, Sweden; BioMEx Center, Royal Institute of Technology (KTH), Stockholm, Sweden .
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden; BioMEx Center, Royal Institute of Technology (KTH), Stockholm, Sweden.
    A survey of human shoulder functional kinematic representations2019In: Medical and Biological Engineering and Computing, ISSN 0140-0118, E-ISSN 1741-0444, Vol. 57, no 2, p. 339-367Article, review/survey (Refereed)
    Abstract [en]

    In this survey, we review the field of human shoulder functional kinematic representations. The central question of this review is whether current approaches in shoulder kinematics can meet the high-reliability computational challenge posed by applications such as robot-assisted rehabilitation. Currently, the role of kinematic representations in such applications has been mostly overlooked. We have therefore systematically searched and summarised the existing literature on shoulder kinematics. The shoulder is an important functional joint, and its large range of motion (ROM) poses several mathematical and practical challenges. In kinematic analysis, the shoulder articulation is frequently approximated as a ball-and-socket joint. In light of the high-reliability computational challenge, our review challenges this inappropriate reductionism. We propose that the challenge could be met by kinematic representations that are redundant, that use an active interpretation, and that emphasise functional understanding.

  • 29.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden..
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    How do we plan movements?: A geometric answer2016In: School and symposium on advanced neurorehabilitation (SSNR2016): Proceedings, 2016, , p. 2p. 16-17Conference paper (Other academic)
    Abstract [en]

    Human movement is an essentially complex phenomenon. When humans work closely with robots, understanding human motion using the robot's sensors is a very challenging problem, partially due to the lack of consensus among researchers on which representation to use in such situations. This extended abstract presents a novel kinematic framework for studying human intention using hybrid twists. This is important, as the functional aspects of the human shoulder are evaluated using the information embedded in thoraco-humeral kinematics. We successfully demonstrate that our approach is singularity-free. We also demonstrate how the twist parameters vary according to the movement being performed.

    Download full text (pdf)
    fulltext
  • 30.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Human shoulder functional kinematics: Are we ready for the high-reliability computational challenge?2016Conference paper (Other academic)
    Abstract [en]

    In this preview talk, I will present a short summary of our ongoing work on human shoulder functional kinematics. Robot-assisted rehabilitation needs a functional understanding of human kinematics in the design, operation and evaluation of this technology. The human shoulder is an important functional joint that enables fine motor skills for human upper-arm manipulation. Due to several mathematical and practical challenges, shoulder kinematics is often oversimplified. Moreover, there is a lack of agreement among different research communities on the suitable kinematic representation when connecting humans to robots. Currently, the computational structure used in such applications is expected to have high reliability. Therefore, we pose the question: are we ready for the high-reliability computational challenge?

  • 31.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Computer Vision and Active Perception Lab, School of Computer Science and Communication, KTH- Royal Institute of Technology.
    Invariant Spatial Parametrization of Human Thoracohumeral Kinematics: A Feasibility Study2016In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE Robotics and Automation Society, 2016, p. 4469-4476, article id 7759658Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel kinematic framework using hybrid twists that has the potential to improve the reliability of estimated human shoulder kinematics. This is important, as the functional aspects of the human shoulder are evaluated using the information embedded in thoracohumeral kinematics. Our results successfully demonstrate that our approach is invariant to the body-fixed coordinate definition, is singularity-free and has high repeatability, thus resulting in flexible user-specific kinematic tracking that is not restricted to bony landmarks.

    Download full text (pdf)
    fulltext
  • 32.
    Krishnan, Rakesh
    et al.
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics. Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Björsell, Niclas
    University of Gävle, Faculty of Engineering and Sustainable Development, Department of Electronics, Mathematics and Natural Sciences, Electronics.
    Smith, Christian
    Robotics, Perception and Learning, School of Computer Science and Communication, KTH - Royal Institute of Technology, Stockholm, Sweden.
    Segmenting humeral submovements using invariant geometric signatures2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (Iros) / [ed] Bicchi, A., Okamura, A., IEEE, 2017, p. 6951-6958, article id 8206619Conference paper (Refereed)
    Abstract [en]

    Discrete submovements are the building blocks of any complex movement. When robots collaborate with humans, extraction of such submovements can be very helpful in applications such as robot-assisted rehabilitation. Our work aims to segment these submovements based on the invariant geometric information embedded in segment kinematics. Moreover, this segmentation is achieved without any explicit kinematic representation. Our work demonstrates the usefulness of this invariant framework in segmenting a variety of humeral movements, which are performed at different speeds across different subjects. Our results indicate that this invariant framework has high computational reliability despite the inherent variability in human motion.

    Download full text (pdf)
    fulltext
  • 33.
    Liu, Yixing
    et al.
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics and Engineering Acoustics, Biomechanics. KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    Zhang, Longbin
    KTH, School of Engineering Sciences (SCI), Centres, BioMEx. KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics and Engineering Acoustics, Biomechanics.
    Wang, Ruoli
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics and Engineering Acoustics, Biomechanics. KTH, School of Engineering Sciences (SCI), Centres, BioMEx. Karolinska Inst, Dept Womens & Childrens Hlth, S-17177 Stockholm, Sweden..
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gutierrez-Farewik, Elena
    KTH, School of Engineering Sciences (SCI), Centres, BioMEx. KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics and Engineering Acoustics, Biomechanics. Karolinska Inst, Dept Womens & Childrens Hlth, S-17177 Stockholm, Sweden..
    Weight Distribution of a Knee Exoskeleton Influences Muscle Activities During Movements2021In: IEEE Access, E-ISSN 2169-3536, Vol. 9, p. 91614-91624Article in journal (Refereed)
    Abstract [en]

    Lower extremity powered exoskeletons help people with movement disorders to perform daily activities and are used increasingly in gait retraining and rehabilitation. Studies of powered exoskeletons often focus on technological aspects such as actuators, control methods, energy and effects on gait. Limited research has been conducted on how different mechanical design parameters can affect the user. In this paper, we study the effects of weight distributions of knee exoskeleton components on simulated muscle activities during three functional movements. Four knee exoskeleton CAD models were developed based on actual motor and gear reducer products. Different placements of the motor and gearbox resulted in different weight distributions. One unilateral knee exoskeleton prototype was fabricated and tested on 5 healthy subjects. Simulation results were compared to observed electromyography signals. Muscle activities varied among weight distributions and movements, wherein no one physical design was optimal for all movements. We describe how a powered exoskeleton's core components can be expected to affect a user's ability and performance. Exoskeleton physical design should ideally take the user's activity goals and ability into consideration.

  • 34.
    Marzinotto, Alejandro
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Towards a Unified Behavior Trees Framework for Robot Control2014In: Robotics and Automation (ICRA), 2014 IEEE International Conference on , IEEE Robotics and Automation Society, 2014, p. 5420-5427Conference paper (Refereed)
    Abstract [en]

    This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.
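    For readers unfamiliar with BTs, the two core composite node types can be sketched in a few lines. This minimal Python sketch reflects only the standard Sequence/Fallback semantics, not the authors' framework or BT library:

```python
# A node's tick() returns one of three statuses.
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Action:
    """Leaf node wrapping a callable that returns a status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children in order; returns on the first child that does
    not succeed, and succeeds only if every child succeeds."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Ticks children in order; returns on the first child that does
    not fail, and fails only if every child fails."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE
```

    Because every tick re-traverses the tree from the root, reactive behavior falls out of the structure itself, which is what makes the comparison with hybrid dynamical systems in the paper natural.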

    Download full text (pdf)
    fulltext
  • 35.
    Masud, Nauman
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Mattsson, Per
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Isaksson, Magnus
    Högskolan i Gävle.
    On stability and performance of disturbance observer-based-dynamic load torque compensator for assistive exoskeleton: A hybrid approach2020In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 69Article in journal (Refereed)
    Abstract [en]

    A disturbance-observer-based dynamic load-torque compensator for current-controlled DC drives, used as joint actuators in assistive exoskeletons, has recently been proposed. It has been shown that this compensator can effectively linearize and decouple the coupled nonlinear dynamics of the human-exoskeleton system by compensating the exoskeleton's nonlinear load torques more effectively at the joint level. In this paper, a detailed analysis of the current-controlled DC drive-servo system using this compensator is presented with respect to performance and stability, highlighting the key factors and considerations affecting both. It is shown, both theoretically and through simulation results, that the stability of the compensated servo system is compromised as performance is increased, and vice versa. Based on the saturation state of the servo system, a new hybrid switching control strategy is then proposed to optimally select between the stability-based and the performance-based compensator and controller. The strategy is experimentally verified at both the joint and task-space level using the developed four-active-degree-of-freedom exoskeleton test rig.

    Download full text (pdf)
    fulltext
  • 36.
    Masud, Nauman
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Department of Electronics Mathematics and Science, University of Gävle, 80176, Gävle, Sweden.
    Rafique, Sajid
    Department of Electronics Mathematics and Science, University of Gävle, 80176, Gävle, Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Isaksson, Magnus
    Department of Electronics Mathematics and Science, University of Gävle, 80176, Gävle, Sweden.
    Design control and actuator selection of a lower body assistive exoskeleton with 3-D passive compliant supports2023In: Journal of the Brazilian Society of Mechanical Sciences and Engineering, ISSN 1678-5878, E-ISSN 1806-3691, Vol. 45, no 12, article id 611Article in journal (Refereed)
    Abstract [en]

    Physical human–robotic interaction is a crucial area of concern for robotic exoskeletons. The low-weight requirement for worn exoskeletons limits the number and size of joint actuators, resulting in a low number of active degrees of freedom for exoskeletons whose joint actuators have limited power and bandwidth. This limitation invariably results in reduced physical human–robotic interaction performance for the exoskeleton. Recently, several techniques have been proposed for low-active-degree-of-freedom exoskeletons that improve physical human–robotic interaction performance using better load-torque compensators and improved active compliance. However, effective practical implementation of these techniques requires special hardware and software design considerations. A detailed design of a new lower-body exoskeleton is proposed in this paper that can apply these recently developed techniques to practically improve the physical human–robotic interaction performance of worn exoskeletons. The design presented includes the exoskeleton's structural design, new joint assemblies, and the design of novel 3-D passive compliant supports. A methodology for selecting and verifying the joint actuators and estimating the desired assistive forces at the contact supports, based on human-user joint torque requirements and the degree of assistance, is also thoroughly presented. A new CAN-based master–slave control architecture that supports the implementation of recent techniques for improved physical human–robotic interaction is also fully presented, as is a new control strategy capable of imparting simultaneous impedance-based force-tracking control of the exoskeleton in task space using DOB-based DLTC at joint space. Lastly, simulation verification of the proposed strategy based on actual gait data of elderly subjects is presented.

  • 37.
    Masud, Nauman
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sajid, Rafique
    University of Gävle.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Isaksson, Magnus
    University of Gävle.
    Design, control, and actuator selection of a lower-body assistive exoskeleton with 3-D passive compliant supportsIn: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006Article in journal (Refereed)
    Abstract [en]

    Physical human-robotic interaction is a crucial area of concern for robotic exoskeletons. The low-weight requirement for worn exoskeletons limits the number and size of joint actuators, resulting in a low number of active degrees of freedom for exoskeletons whose joint actuators have limited power and bandwidth. This limitation invariably results in reduced physical human-robotic interaction performance for the exoskeleton. Recently, several techniques have been proposed for low-active-degree-of-freedom exoskeletons that improve physical human-robotic interaction performance using better load-torque compensators and improved active compliance. However, effective practical implementation of these techniques requires special hardware and software design considerations. A detailed design of a new lower-body exoskeleton is presented in this paper that can apply these recently developed techniques to practically improve the physical human-robotic interaction performance of worn exoskeletons. The design presented includes the exoskeleton's structural design, new joint assemblies, and the design of novel 3-D passive compliant supports. A methodology for selecting and verifying the joint actuators and estimating the desired assistive forces at the contact supports, based on human-user joint torque requirements and the degree of assistance, is also thoroughly presented. A new CAN-based master-slave control architecture that supports the implementation of recent techniques for improved physical human-robotic interaction is also fully presented. A new control strategy capable of imparting simultaneous impedance-based force-tracking control of the exoskeleton in task space using DOB-based DLTC at joint space is also thoroughly presented.

  • 38.
    Masud, Nauman
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Department of Electrical Engineering, Mathematics, and Science, University of Gävle, 80176, Sweden.
    Senkic, Dario
    Department of Industrial Design, Management and Mechanical Engineering, University of Gävle, 80176, Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Isaksson, Magnus
    Department of Electrical Engineering, Mathematics, and Science, University of Gävle, 80176, Sweden.
    Modeling and control of a 4-ADOF upper-body exoskeleton with mechanically decoupled 3-D compliant arm-supports for improved-pHRI2021In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 73, p. 102406-Article in journal (Refereed)
    Abstract [en]

    Safe physical human-robotic interaction is a crucial concern for worn exoskeletons, where the low-weight requirement limits the number and size of actuators that can be used. A novel control strategy is proposed in this paper for low-degree-of-freedom exoskeletons that combines the proposed mechanically decoupled passive-compliant arm-supports with active compliance, to achieve improved and safer physical human-robotic interaction performance while considering the practical limitations of low-power actuators. The approach is further improved with a novel vectoral form of disturbance observer-based dynamic load-torque compensator, proposed to effectively linearize and decouple the nonlinear human-machine dynamics. The design of a four-degree-of-freedom exoskeleton test rig that supports the implementation of the proposed strategy is also briefly presented. It is shown through simulation and experimentation that the proposed strategy results in improved and safer physical human-robotic interaction for exoskeletons using limited-power actuators. It is also shown, both through simulation and experimentation, that the proposed vectoral form of disturbance observer-based dynamic load-torque compensator outperforms the other traditional compensators in compensating the load-torques at the joints of the exoskeleton.

    Download full text (pdf)
    fulltext
  • 39.
    Masud, Nauman
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Univ Gävle, Dept Elect Math & Sci, S-80176 Gävle, Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Isaksson, Magnus
    Univ Gävle, Dept Elect Math & Sci, S-80176 Gävle, Sweden..
    Disturbance observer based dynamic load torque compensator for assistive exoskeletons2018In: Mechatronics (Oxford), ISSN 0957-4158, E-ISSN 1873-4006, Vol. 54, p. 78-93Article in journal (Refereed)
    Abstract [en]

    In assistive robotics applications, the human limb is attached intimately to the robotic exoskeleton. The coupled dynamics of the human-exoskeleton system are highly nonlinear and uncertain, and effectively appear as uncertain load-torques at the joint actuators of the exoskeleton. This uncertainty makes the application of standard computed-torque techniques quite challenging. Furthermore, the need for safe human interaction severely limits the gear ratio of the actuators. With small gear ratios, the uncertain joint load-torques cannot be ignored and need to be effectively compensated. A novel disturbance observer-based dynamic load-torque compensator is hereby proposed and analysed for the current-controlled DC-drive actuators of the exoskeleton, to effectively compensate the said uncertain load-torques at the joint level. The feedforward dynamic load-torque compensator is based on a higher-order dynamic model of the current-controlled DC-drive. The compensated current-controlled DC-drive is then combined with a tailored feedback disturbance observer to further improve the compensation performance in the presence of drive parametric uncertainty. The proposed compensator structure is shown, both theoretically and practically, to give significantly improved performance with respect to the disturbance observer compensator alone and the classical static load-torque compensator, for rated load-torque frequencies up to 1.6 Hz, which is a typical joint-frequency bound for the normal daily activities of the elderly. It is also shown theoretically that the proposed compensator achieves this improved performance with a comparable reference-current requirement for the current-controlled DC-drive.

    Download full text (pdf)
    fulltext
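    The disturbance-observer idea in the abstract above can be illustrated with a toy simulation. The sketch below implements only a basic first-order disturbance-observer loop for a current-controlled drive, not the paper's higher-order feedforward compensator; the inertia, torque constant, filter time constant, and load profile are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical drive parameters for illustration only; the paper's identified
# higher-order DC-drive model and compensator gains are not reproduced here.
J_n, Kt_n, dt = 0.01, 0.5, 1e-3   # inertia [kg m^2], torque const [Nm/A], step [s]
tau_f = 0.02                      # DOB low-pass filter time constant [s]

def simulate(i_cmd, load_torque, steps, use_dob=True):
    """Current-controlled DC drive with a basic disturbance-observer loop.

    The observer reconstructs the lumped load torque from the applied torque
    and the measured acceleration, low-pass filters the estimate, and feeds
    an equivalent current correction forward so the drive behaves as if the
    nonlinear load were absent."""
    w, d_hat = 0.0, 0.0               # joint velocity, disturbance estimate
    alpha = dt / (tau_f + dt)         # discrete first-order filter gain
    ws = []
    for k in range(steps):
        i = i_cmd + (d_hat / Kt_n if use_dob else 0.0)
        d = load_torque(k * dt)       # true (unknown) joint load torque
        acc = (Kt_n * i - d) / J_n
        d_hat += alpha * ((Kt_n * i - J_n * acc) - d_hat)  # filtered residual
        w += acc * dt
        ws.append(w)
    return np.array(ws)

# With the observer active, a constant 0.2 Nm load is almost fully rejected:
w_on = simulate(1.0, lambda t: 0.2, 2000, use_dob=True)[-1]
w_off = simulate(1.0, lambda t: 0.2, 2000, use_dob=False)[-1]
```

    With these numbers the uncompensated drive reaches 60 rad/s after two seconds, while the compensated one comes within half a percent of the 100 rad/s no-load response; in the paper the same structure must additionally cope with current-loop dynamics and parametric uncertainty.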
  • 40.
    Mitsunaga, Noriaki
    et al.
    Osaka Kyoiku University.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Osaka University.
    Hagita, Norihiro
    Advanced Telecommunications Research International.
    Adapting Nonverbal Behavior Parameters to be Preferred by Individuals2012In: Human-Robot Interaction in Social Robotics / [ed] Takayuki Kanda and Hiroshi Ishiguro, Boca Raton, FL, USA: CRC Press, 2012, 1, p. 313-324Chapter in book (Other academic)
    Abstract [en]

    A human subconsciously adapts his or her behaviors to a communication partner in order to make interactions run smoothly. In human–robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human–robot interaction, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just as a human would. However, most previous research works expected the human to consciously give feedback, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human–robot interaction. We use gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. An evaluation trial was conducted with a humanoid robot that has ten interaction behaviors. The experimental results from 12 subjects show that the proposed mechanism enables autonomous adaptation to individual preferences. A detailed discussion and conclusions are also presented.

  • 41. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Adapting robot behavior for human-robot interaction2008In: IEEE Transactions on Robotics, ISSN 1552-3098, Vol. 24, no 4, p. 911-916Article in journal (Refereed)
    Abstract [en]

    Human beings subconsciously adapt their behaviors to a communication partner in order to make interactions run smoothly. In human-robot interactions, not only the human but also the robot is expected to adapt to its partner. Thus, to facilitate human-robot interactions, a robot should be able to read subconscious comfort and discomfort signals from humans and adjust its behavior accordingly, just as a human would. However, most previous research works expected the human to consciously give feedback, which might interfere with the aim of the interaction. We propose an adaptation mechanism based on reinforcement learning that reads subconscious body signals from a human partner, and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interactions. The mechanism uses gazing at the robot's face and human movement distance as subconscious body signals that indicate a human's comfort and discomfort. A pilot study was conducted with a humanoid robot that has ten interaction behaviors. The results from the 12-subject study suggest that the proposed mechanism enables autonomous adaptation to individual preferences. A detailed discussion and conclusions are also presented.

    Download full text (pdf)
    Smith_TRO_08.pdf
  • 42. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Robot Behavior Adaptation for Human-Robot Interaction based on Policy Gradient Reinforcement Learning2005In: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005. (IROS 2005)., IEEE , 2005, p. 1594-1601Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an adaptation mechanism for robot behaviors to make robot-human interactions run more smoothly. The mechanism is based on reinforcement learning; it reads minute body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences in an experiment with twelve subjects.

    Download full text (pdf)
    Smith_iros_2005
  • 43. Mitsunaga, Noriaki
    et al.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Robot Behavior Adaptation for Human-Robot Interaction based on Policy Gradient Reinforcement Learning2006In: Journal of the Robotics Society of Japan, ISSN 0289-1824, Vol. 24, no 7, p. 820-829Article in journal (Refereed)
    Abstract [ja]

    (Please note: the main body of this paper is written in Japanese.) When humans interact in a social context, there are many factors apart from the actual communication that need to be considered. Previous studies in the behavioral sciences have shown that there is a need for a certain amount of personal space and that different people tend to meet the gaze of others to different extents. For humans, this is mostly subconscious, but when two persons interact, these factors are automatically adjusted to avoid discomfort. In this paper we propose an adaptation mechanism for robot behaviors to make human-robot interactions run more smoothly. The mechanism is based on policy gradient reinforcement learning; it reads minute body signals from a human partner and uses this information to adjust interaction distances, gaze meeting, and motion speed and timing in human-robot interaction. We show that this enables autonomous adaptation to individual preferences in an experiment with twelve subjects.

    Download full text (pdf)
    fulltext
  • 44.
    Rajabi, Nona
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Khanna, Parag
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kanik, Sumeyra U. Demir
    Ericsson Res, Stockholm, Sweden..
    Yadollahi, Elmira
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Vasco, Miguel
    Univ Lisbon, INESC ID, Lisbon, Portugal.;Univ Lisbon, Inst Super Tecn, Lisbon, Portugal..
    Björkman, Mårten
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Detecting the Intention of Object Handover in Human-Robot Collaborations: An EEG Study2023In: 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, Institute of Electrical and Electronics Engineers (IEEE) , 2023, p. 549-555Conference paper (Refereed)
    Abstract [en]

    Human-robot collaboration (HRC) relies on smooth and safe interactions. In this paper, we focus on the human-to-robot handover scenario, where the robot acts as the taker. We investigate the feasibility of detecting the intention of a human-to-robot handover action through the analysis of electroencephalogram (EEG) signals. Our study confirms that temporal patterns in EEG signals provide information about motor planning and can be leveraged to predict the likelihood of an individual executing a motor task with an average accuracy of 94.7%. Our results also suggest that the time-frequency features of EEG signals in the final second prior to movement are effective for distinguishing handover actions from other actions. Furthermore, we classify human intentions for different tasks based on time-frequency representations of pre-movement EEG signals and achieve an average accuracy of 63.5% when contrasting every two tasks against each other. These results encourage the possibility of using EEG signals to detect human handover intention in HRC tasks.
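    As a loose illustration of the kind of pre-movement spectral pipeline the abstract describes, the sketch below extracts band-power features from a synthetic one-second window and separates "move" from "no-move" trials with a nearest-centroid rule. The sampling rate, the mu/beta band choices, the mu-desynchronization signal model, and the classifier are all assumptions for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 256  # Hz; hypothetical sampling rate, not the study's montage

def bandpower(x, lo, hi):
    """Power of signal x in the [lo, hi) Hz band via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def features(x):
    # mu (8-12 Hz) and beta (13-30 Hz) power: bands tied to motor planning
    return np.array([bandpower(x, 8, 12), bandpower(x, 13, 30)])

def synth_trial(move):
    """Synthetic 1 s pre-movement EEG: motor planning modeled as mu-band
    desynchronization (reduced 10 Hz amplitude) plus white noise."""
    t = np.arange(fs) / fs
    amp = 0.3 if move else 1.0
    return amp * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)

# Nearest-centroid classifier over band-power features, 40 trials per class.
train = {c: np.mean([features(synth_trial(c)) for _ in range(40)], axis=0)
         for c in (True, False)}

def predict(x):
    f = features(x)
    return min(train, key=lambda c: np.linalg.norm(f - train[c]))
```

    On this synthetic data the two classes are cleanly separable in the mu band; real EEG, as the 94.7% and 63.5% figures in the abstract indicate, is far noisier and needs richer time-frequency features and classifiers.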

  • 45.
    Rakesh, Krishnan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Engineering Sciences (SCI), Centres, BioMEx. Univ Gavle, Dept Elect Math & Nat Sci, Gavle, Sweden..
    Bjorsell, Niclas
    Univ Gavle, Dept Elect Math & Nat Sci, Gavle, Sweden..
    Gutierrez-Farewik, Elena
    KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    A survey of human shoulder functional kinematic representations2019In: Medical and Biological Engineering and Computing, ISSN 0140-0118, E-ISSN 1741-0444, Vol. 57, no 2, p. 339-367Article, review/survey (Refereed)
    Abstract [en]

    In this survey, we review the field of human shoulder functional kinematic representations. The central question of this review is whether current approaches in shoulder kinematics can meet the high-reliability computational challenge posed by applications such as robot-assisted rehabilitation. Currently, the role of kinematic representations in such applications has been mostly overlooked. We have therefore systematically searched and summarised the existing literature on shoulder kinematics. The shoulder is an important functional joint, and its large range of motion (ROM) poses several mathematical and practical challenges. In kinematic analysis, the shoulder articulation is frequently approximated as a ball-and-socket joint. In light of the high-reliability computational challenge, our review challenges this inappropriate use of reductionism. We propose instead that the challenge can be met by kinematic representations that are redundant, that use an active interpretation, and that emphasise functional understanding.

  • 46.
    Rakesh, Krishnan
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    Björsell, N.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Engineering Sciences (SCI), Centres, BioMEx.
    Segmenting humeral submovements using invariant geometric signatures2017In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 6951-6958, article id 8206619Conference paper (Refereed)
    Abstract [en]

    Discrete submovements are the building blocks of any complex movement. When robots collaborate with humans, extraction of such submovements can be very helpful in applications such as robot-assisted rehabilitation. Our work aims to segment these submovements based on the invariant geometric information embedded in segment kinematics. Moreover, this segmentation is achieved without any explicit kinematic representation. Our work demonstrates the usefulness of this invariant framework in segmenting a variety of humeral movements, which are performed at different speeds across different subjects. Our results indicate that this invariant framework has high computational reliability despite the inherent variability in human motion.

  • 47.
    Rakesh, Krishnan
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Cruciani, Silvia
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Gutierrez-Farewik, Elena
    KTH, School of Engineering Sciences (SCI), Mechanics.
    Björsell, Niclas
    Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Reliably Segmenting Motion Reversals of a Rigid-IMU Cluster Using Screw-Based Invariants2018Conference paper (Refereed)
    Abstract [en]

    Human-robot interaction (HRI) is moving towards the human-robot synchronization challenge. In robots like exoskeletons, this challenge translates to the reliable motion segmentation problem using wearable devices. Therefore, our paper explores the possibility of segmenting the motion reversals of a rigid-IMU cluster using screw-based invariants. Moreover, we evaluate the reliability of this framework with regard to the sensor placement, speed and type of motion. Overall, our results show that the screw-based invariants can reliably segment the motion reversals of a rigid-IMU cluster.

  • 48.
    Rakesh, Krishnan
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Niclas, Björsell
    Department of Electronics, Mathematics and Natural Sciences, University of Gävle, Gävle, Sweden.
    Christian, Smith
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Invariant Spatial Parametrization of Human Thoracohumeral Kinematics: A Feasibility Study2016Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a novel kinematic framework using hybrid twists that has the potential to improve the reliability of estimated human shoulder kinematics. This is important because the functional aspects of the human shoulder are evaluated using the information embedded in thoracohumeral kinematics. Our results successfully demonstrate that our approach is invariant to the body-fixed coordinate definition, is singularity-free, and has high repeatability, resulting in flexible, user-specific kinematic tracking that is not restricted to bony landmarks.

  • 49.
    Shi, Chao
    et al.
    Intelligent Robotics and Communications Laboratories, Advanced Telecommunications Research International.
    Shiomi, Masahiro
    Intelligent Robotics and Communications Laboratories, Advanced Telecommunications Research International.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kanda, Takayuki
    Intelligent Robotics and Communications Laboratories, Advanced Telecommunications Research International.
    Ishiguro, Hiroshi
    Intelligent Robotics Laboratory, Osaka University.
    A model of distributional handing interaction for a mobile robot2013In: Robotics: Science and Systems IX, 2013Conference paper (Refereed)
    Abstract [en]

    This paper reports our research on developing a model for a robot distributing flyers to pedestrians. The difficulty is that potential receivers are pedestrians who are not necessarily cooperative; thus, the robot needs to plan its motion appropriately, making it easy and non-obstructive for potential receivers to accept the flyers. In order to establish the model, we observed human distributional-handing interactions in the real world. We analyzed and evaluated the different handing methods that people perform, and established a model for a robot to perform natural handing. The proposed model was implemented in a humanoid robot and confirmed to be effective in a field experiment.

  • 50.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Input Estimation for Teleoperation: Using Minimum Jerk Human Motion Models to Improve Telerobotic Performance2009Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis treats the subject of applying human motion models to create estimators for the input signals of human operators controlling a telerobotic system. In telerobotic systems, the control signal input by the operator is often treated as a known quantity. However, there are instances where this is not the case. For example, a well-studied problem is teleoperation under time delay, where the robot at the remote site does not have access to current operator input due to time delays in the communication channel. Another is where the hardware sensors in the input device have low accuracy. Both these cases are studied in this thesis. A solution to these types of problems is to apply an estimator to the input signal. There exist several models that describe human hand motion, and these can be used to create a model-based estimator. In the present work, we propose the use of the minimum jerk (MJ) model. This choice is based mainly on the simplicity of the MJ model, which can be described as a fifth-degree polynomial in the Cartesian space of the position of the subject's hand. Estimators incorporating the MJ model are implemented and inserted into control systems for a teleoperated robot arm. We perform experiments showing that these estimators can be used as predictors, increasing task performance in the presence of time delays. We also show how similar estimators can be used to implement direct position control using a handheld device equipped only with accelerometers.

    Download full text (pdf)
    FULLTEXT01
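    The minimum jerk (MJ) model that the thesis builds on is concrete enough to state in a few lines: hand position follows a fifth-degree polynomial of normalized time, with zero velocity and acceleration at both endpoints. The sketch below gives the standard closed form for a single coordinate; the thesis's estimators, which fit the remaining free parameters (target position and duration) online from partial observations, are not reproduced here.

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position between x0 and xf over duration T.

    Standard closed form: with tau = t / T,
        x(t) = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5),
    which has zero velocity and acceleration at t = 0 and t = T."""
    tau = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# A 0.5 s reach from 0.0 m to 0.3 m, sampled at 100 Hz:
traj = minimum_jerk(0.0, 0.3, 0.5, np.linspace(0.0, 0.5, 51))
```

    Given a few observed samples of an ongoing motion, the same polynomial can be least-squares fitted to predict the remainder of the trajectory, which is the basis of using such estimators as predictors against communication delay.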