  • 1.
    Aarno, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ekvall, Staffan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Adaptive virtual fixtures for machine-assisted teleoperation tasks. 2005. In: 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, 2005, pp. 1139-1144. Conference paper (Refereed)
    Abstract [en]

    It has been demonstrated in a number of robotic areas how the use of virtual fixtures improves task performance both in terms of execution time and overall precision [1]. However, the fixtures are typically inflexible, resulting in degraded performance in cases of unexpected obstacles or incorrect fixture models. In this paper, we propose the use of adaptive virtual fixtures that enable us to cope with the above problems. A teleoperative or human-machine collaborative setting is assumed, with the core idea of dividing the task that the operator is executing into several subtasks. The operator may remain in each of these subtasks as long as necessary and switch freely between them. Hence, rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. In our system, the probability that the user is following a certain trajectory (subtask) is estimated and used to automatically adjust the compliance. Thus, an on-line decision of how to fixture the movement is provided.
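
    The abstract does not give the update equations, but the core idea -- estimate which subtask trajectory the operator is following and stiffen the fixture in proportion to that belief -- can be sketched as below. The Bayes-style belief update, the blending rule and all gains are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def update_subtask_beliefs(beliefs, likelihoods, transition):
    """One recursive Bayes step: predict with the subtask transition model,
    then weight by how well the current motion fits each subtask trajectory."""
    predicted = transition.T @ beliefs
    posterior = predicted * likelihoods
    return posterior / posterior.sum()

def fixtured_velocity(v_operator, fixture_dir, belief, k_max=0.9):
    """Blend the operator's commanded velocity with the fixture direction.
    The fixture gain grows with the belief in the most likely subtask, so an
    uncertain classification leaves the operator in control (soft fixture)."""
    k = k_max * belief                           # fixturing gain in [0, k_max]
    v_along = np.dot(v_operator, fixture_dir) * fixture_dir
    return (1.0 - k) * v_operator + k * v_along

# toy example: three subtasks, current motion matching subtask 1 best
beliefs = np.array([1/3, 1/3, 1/3])
likelihoods = np.array([0.1, 0.8, 0.1])
transition = np.full((3, 3), 0.05) + 0.85 * np.eye(3)
beliefs = update_subtask_beliefs(beliefs, likelihoods, transition)
v = fixtured_velocity(np.array([0.2, 0.1, 0.0]),
                      np.array([1.0, 0.0, 0.0]), beliefs.max())
print(beliefs, v)
```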

  • 2.
    Aarno, Daniel
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Layered HMM for motion intention recognition. 2006. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE, 2006, pp. 5130-5135. Conference paper (Refereed)
    Abstract [en]

    Acquiring, representing and modeling human skills is one of the key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One of the common approaches is to divide the task that the operator is executing into several subtasks in order to provide manageable modeling. In this paper we consider the use of a Layered Hidden Markov Model (LHMM) to model human skills. We evaluate a gestem classifier that classifies motions into basic action primitives, or gestems. The gestem classifiers are then used in an LHMM to model a simulated teleoperated task. We investigate the online and offline classification performance with respect to noise, number of gestems, type of HMM and the available number of training sequences. We also apply the LHMM to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the LHMM is suitable for modeling teleoperative trajectory-tracking tasks and that the difference in classification performance between one-dimensional and multidimensional HMMs for gestem classification is small. It can also be seen that the LHMM is robust w.r.t. misclassifications in the underlying gestem classifiers.
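
    As a rough illustration of the layered idea -- low-level HMMs score short motion windows as gestems, and the recognized gestem sequence can in turn be scored by a task-level HMM -- here is a minimal scaled forward algorithm in NumPy. The toy gestem models and motion alphabet are assumptions for illustration only.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM with initial probs pi, transitions A, emissions B."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); loglik = np.log(c); alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); loglik += np.log(c); alpha = alpha / c
    return loglik

def classify(obs, models):
    """Pick the model (gestem, or task at the upper layer) that explains the sequence best."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

# two toy gestem models over a 3-symbol motion alphabet (illustrative numbers)
sticky = np.array([[0.8, 0.2], [0.2, 0.8]])
gestems = {
    "push": (np.array([0.9, 0.1]), sticky,
             np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]])),
    "pull": (np.array([0.9, 0.1]), sticky,
             np.array([[0.1, 0.2, 0.7], [0.2, 0.6, 0.2]])),
}
window = [0, 0, 1, 0, 0]          # quantised motion samples from the lower layer
print(classify(window, gestems))  # -> 'push'; the upper layer consumes such labels
```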

  • 3.
    Abdul Khaliq, Ali
    et al.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saffiotti, Alessandro
    Örebro University, School of Science and Technology.
    Point-to-point safe navigation of a mobile robot using stigmergy and RFID technology. 2016. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 1497-1504, article no. 7759243. Conference paper (Refereed)
    Abstract [en]

    Reliable autonomous navigation is still a challenging problem for robots with simple and inexpensive hardware. A key difficulty is the need to maintain an internal map of the environment and an accurate estimate of the robot’s position in this map. Recently, a stigmergic approach has been proposed in which a navigation map is stored in the environment, on a grid of RFID tags, and robots use it to optimally reach predefined goal points without the need for internal maps. While effective, this approach is limited to a predefined set of goal points. In this paper, we extend this approach to enable robots to travel to any point on the RFID floor, even if it was not previously identified as a goal location, as well as to keep a safe distance from any given critical location. Our approach produces safe, repeatable and quasi-optimal trajectories without the use of internal maps, self-localization, or path planning. We report experiments run in a real apartment equipped with an RFID floor, in which a service robot either reaches or avoids a user who wears slippers equipped with an RFID tag reader.
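
    The stigmergic idea -- distances written into a grid of floor tags, then followed greedily by the robot -- can be sketched with a wavefront pass and a local descent rule. Grid size, goal and blocked cells below are made up for illustration; the paper's extension to arbitrary goal points and safety margins is not reproduced here.

```python
from collections import deque

def write_distance_field(grid_w, grid_h, goal, blocked=frozenset()):
    """Wavefront (BFS) pass that 'writes' a hop count into every tag of a
    grid_w x grid_h RFID floor: each tag ends up storing its distance to goal."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_w and 0 <= ny < grid_h \
                    and (nx, ny) not in dist and (nx, ny) not in blocked:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

def next_tag(dist, pos):
    """Greedy descent: move to the neighbouring tag with the smallest stored
    distance -- no internal map or self-localisation is needed."""
    x, y = pos
    neigh = [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
             if (nx, ny) in dist]
    return min(neigh, key=dist.get)

field = write_distance_field(10, 6, goal=(9, 5), blocked={(4, y) for y in range(5)})
pos = (0, 0)
while pos != (9, 5):
    pos = next_tag(field, pos)
print("reached", pos)
```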

  • 4.
    Abdullah, Muhammad
    Örebro University, School of Science and Technology.
    Mobile Robot Navigation using potential fields and market based optimization. 2013. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A team of mobile robots moving in a shared area raises the problem of safe and autonomous navigation. While avoiding static and dynamic obstacles, mobile robots in a team can exhibit complicated and irregular movements. Local reactive approaches are used to deal with situations where robots are moving in a dynamic environment; these approaches help in safe navigation of the robots but do not give an optimal solution. In this work a 2-D navigation strategy is implemented, where a potential field method is used for obstacle avoidance. This potential field method is improved using fuzzy rules, traffic rules and market based optimization (MBO). Fuzzy rules are used to deform repulsive potential fields in the vicinity of obstacles. Traffic rules are used to deal with situations where two robots are crossing each other. Market based optimization (MBO) is used to strengthen or weaken repulsive potential fields generated by other robots based on their importance. To verify this strategy on more realistic vehicles, the navigation strategy is implemented and tested in simulation. Issues encountered while implementing this method and limitations of the navigation strategy are also discussed. Extensive experiments are performed to examine the validity of the MBO navigation strategy over the traditional potential field (PF) method.
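
    A minimal version of the underlying potential field navigation, with a per-obstacle weight standing in for the importance assigned by the market-based optimization (the fuzzy and traffic rules are omitted). Gains, distances and weights below are illustrative assumptions.

```python
import numpy as np

def pf_velocity(pos, goal, obstacles, weights, k_att=1.0, d0=1.5):
    """Gradient of a classic attractive/repulsive potential field.
    `weights` scales each obstacle's repulsion, which is where a market-based
    optimisation (or fuzzy rules) could strengthen or weaken individual fields."""
    v = k_att * (goal - pos)                       # attractive term
    for obs, w in zip(obstacles, weights):
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < d0:                          # repulsion only inside range d0
            v += w * (1.0 / d - 1.0 / d0) / d**2 * (pos - obs) / d
    return v

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.0, 2.2]), np.array([3.5, 3.0])]
for _ in range(200):                               # simple Euler integration
    pos = pos + 0.05 * pf_velocity(pos, goal, obstacles, weights=[1.0, 0.3])
print(pos)                                         # ends near the goal
```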

  • 5.
    Adamson, Göran
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Wang, Lihui
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre. Kungliga Tekniska Högskolan, Stockholm (KTH).
    Holm, Magnus
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Moore, Philip
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre. Academy for Innovation & Research, Falmouth University, UK.
    Adaptive Robotic Control in Cloud Environments. 2014. In: Proceedings of the 24th International Conference on Flexible Automation and Intelligent Manufacturing / [ed] F. Frank Chen, The University of Texas at San Antonio, U.S.A., Lancaster, Pennsylvania, USA: DEStech Publications, Inc., 2014, pp. 37-44. Conference paper (Refereed)
    Abstract [en]

    Increasing globalization is a trend which forces today's manufacturing industry to focus on more cost-effective manufacturing systems and on collaboration within global supply chains and manufacturing networks. Cloud Manufacturing (CM) is evolving as a new manufacturing paradigm to match this trend, enabling the mutually advantageous sharing of resources, knowledge and information between distributed companies and manufacturing units. By providing a framework for collaboration within complex and critical tasks, such as manufacturing and design, it increases the companies’ ability to compete successfully in a global marketplace. One of the major, crucial objectives for CM is the coordinated planning, control and execution of discrete manufacturing operations in a collaborative and networked environment. This paper describes the overall concept of adaptive Function Block control of manufacturing equipment in Cloud environments, with a specific focus on robotic assembly operations, and presents Cloud Robotics as “Robot Control-as-a-Service” within CM.

  • 6.
    Adamson, Göran
    et al.
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Wang, Lihui
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre. Kungliga Tekniska Högskolan, Stockholm (KTH).
    Holm, Magnus
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre.
    Moore, Philip
    University of Skövde, School of Engineering Science. University of Skövde, The Virtual Systems Research Centre. Academy for Innovation & Research, Falmouth University, UK.
    Function Block Approach for Adaptive Robotic Control in Virtual and Real Environments. 2014. In: Proceedings of the 14th Mechatronics Forum International Conference / [ed] Leo J. De Vin and Jorge Solis, Karlstad: Karlstads universitet, 2014, pp. 473-479. Conference paper (Refereed)
    Abstract [en]

    Many manufacturing companies are facing an increasing amount of changes and uncertainty, caused by both internal and external factors. Frequently changing customer and market demands lead to variations in manufacturing quantities, product design and shorter product life-cycles, and variations in manufacturing capability and functionality contribute to a high level of uncertainty. The result is unpredictable manufacturing system performance, with an increased number of unforeseen events occurring in these systems. Such events are difficult for traditional planning and control systems to satisfactorily manage. For scenarios like these, with a dynamically changing manufacturing environment, adaptive decision making is crucial for successfully performing manufacturing operations. Relying on real-time information of manufacturing processes and operations, and their enabling resources, adaptive decision making can be realized with a control approach combining IEC 61499 event-driven Function Blocks (FBs) with manufacturing features. These FBs are small decision-making modules with embedded algorithms designed to generate the desired equipment control code. When dynamically triggered by event inputs, parameter values in their data inputs are forwarded to the appropriate algorithms, which generate new events and data output as control instructions. The data inputs also include monitored real-time information which allows the dynamic creation of equipment control code adapted to the actual run-time conditions on the shop-floor. Manufacturing features build on the concept that a manufacturing task can be broken down into a sequence of minor basic operations, in this research assembly features (AFs). These features define atomic assembly operations, and by combining and implementing these in the event-driven FB embedded algorithms, automatic code generation is possible. A test case with a virtual robot assembly cell is presented, demonstrating the functionality of the proposed control approach.
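
    A small sketch of the event-driven function block idea described above: an event input triggers an embedded algorithm that turns data inputs into data outputs and follow-up events. It is written in Python for illustration; real IEC 61499 blocks are defined in dedicated tools, and the 'PickAF' assembly feature and the generated pseudo-code are hypothetical.

```python
class FunctionBlock:
    """Minimal event-driven function block in the spirit of IEC 61499:
    an event input triggers an embedded algorithm, which reads the current
    data inputs and produces data outputs plus follow-up events."""
    def __init__(self, name):
        self.name = name
        self.data_in, self.data_out = {}, {}
        self._algorithms = {}          # event name -> embedded algorithm
        self._listeners = []           # downstream blocks

    def on_event(self, event, algorithm):
        self._algorithms[event] = algorithm

    def connect(self, other):
        self._listeners.append(other)

    def fire(self, event, **data):
        self.data_in.update(data)
        algo = self._algorithms.get(event)
        if algo is None:
            return
        out_event, out_data = algo(self.data_in)
        self.data_out.update(out_data)
        for blk in self._listeners:
            blk.fire(out_event, **out_data)

# hypothetical assembly feature: turn a pick request into robot motion code
def pick_algorithm(inputs):
    code = f"MoveL {inputs['part_pose']}; CloseGripper;"
    return "CODE_READY", {"robot_code": code}

pick_fb = FunctionBlock("PickAF")
pick_fb.on_event("REQ", pick_algorithm)
pick_fb.fire("REQ", part_pose=(0.40, 0.10, 0.02))   # run-time data input
print(pick_fb.data_out["robot_code"])
```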

  • 7.
    Adolfsson, Sebastian
    University West, Department of Engineering Science, Division of Production System.
    RatSLAM with Viso2: Implementation of alternative monocular odometer. 2017. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In this work, a ROS (Robot Operating System) version of OpenRatSLAM, [1] [2], was tested with Viso2 [3] as an alternative monocular odometer. A land-based rover [4] was used to perform data acquisition, and a remote control tool was developed to facilitate this procedure, implemented as ROS nodes on both Ubuntu 16.04 and Android 7.0. An additional requirement that comes from using Viso2 is the need for camera information together with the image stream, which might require camera calibration. A ROS node to manually add this camera information was made, as well as a node to change the generated odometry message from Viso2 to a form that RatSLAM uses. The implemented odometer uses feature tracking to estimate motion, which is fundamentally different from matching intensity profiles, which the original method does, and can hence be used when different properties of the visual odometry function are desired. From experiments, it was seen that the feature tracking method from Viso2 generated a more robust motion estimate in terms of real-world scale, and it was also better able to handle environments with varying illumination or with large continuous surfaces of the same colour. However, the feature tracking may give slight variations in the generated data upon successive runs, due to the random selection of features to track. Since the structure of RatSLAM gives the system the ability to make loop closures even with large differences in position, an alternative odometry does not necessarily give a significant improvement in the performance of the system in environments that the original system operates well in. Even though both algorithms show difficulty with estimating fast rotations, especially when the camera view contains areas with few features, the performance improvement in Viso2, together with its ability to better maintain the real-world scale, motivates its usefulness. The source code, as well as instructions for installation and usage, is public.

  • 8.
    Adolfsson, Sebastian
    University West, Department of Engineering Science, Division of Production System.
    RatSLAM with Viso2: Implementation of alternative monocular odometer. 2017. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    In this work, a ROS (Robot Operating System) version of OpenRatSLAM, [1] [2], was tested with Viso2 [3] as an alternative monocular odometer. A land-based rover [4] was used to perform data acquisition, and a remote control tool was developed to facilitate this procedure, implemented as ROS nodes on both Ubuntu 16.04 and Android 7.0. An additional requirement that comes from using Viso2 is the need for camera information together with the image stream, which might require camera calibration. A ROS node to manually add this camera information was made, as well as a node to change the generated odometry message from Viso2 to a form that RatSLAM uses. The implemented odometer uses feature tracking to estimate motion, which is fundamentally different from matching intensity profiles, which the original method does, and can hence be used when different properties of the visual odometry function are desired. From experiments, it was seen that the feature tracking method from Viso2 generated a more robust motion estimate in terms of real-world scale, and it was also better able to handle environments with varying illumination or with large continuous surfaces of the same colour. However, the feature tracking may give slight variations in the generated data upon successive runs, due to the random selection of features to track. Since the structure of RatSLAM gives the system the ability to make loop closures even with large differences in position, an alternative odometry does not necessarily give a significant improvement in the performance of the system in environments that the original system operates well in. Even though both algorithms show difficulty with estimating fast rotations, especially when the camera view contains areas with few features, the performance improvement in Viso2, together with its ability to better maintain the real-world scale, motivates its usefulness. The source code, as well as instructions for installation and usage, is public.
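
    The thesis mentions a node that converts the Viso2 odometry message into the form RatSLAM expects; a minimal relay of that kind could look roughly like the ROS 1 node below. The topic names, the private ~scale parameter and the idea of copying/rescaling only the planar twist are assumptions, not the thesis' actual implementation.

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry

SCALE = 1.0  # overridden from the parameter server below

def relay(msg, pub):
    out = Odometry()
    out.header = msg.header
    out.child_frame_id = msg.child_frame_id
    out.pose = msg.pose
    # Copy (and optionally rescale) the planar twist that RatSLAM consumes.
    out.twist.twist.linear.x = SCALE * msg.twist.twist.linear.x
    out.twist.twist.angular.z = msg.twist.twist.angular.z
    pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("viso2_to_ratslam")
    SCALE = rospy.get_param("~scale", 1.0)                    # assumed parameter
    pub = rospy.Publisher("/odom", Odometry, queue_size=10)   # assumed output topic
    rospy.Subscriber("/mono_odometer/odometry", Odometry,     # assumed Viso2 topic
                     relay, callback_args=pub)
    rospy.spin()
```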

  • 9. Agarwal, P.
    et al.
    Al Moubayed, Samer
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Alspach, A.
    Kim, J.
    Carter, E. J.
    Lehman, J. F.
    Yamane, K.
    Imitating human movement with teleoperated robotic head. 2016. In: 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, IEEE, 2016, pp. 630-637. Conference paper (Refereed)
    Abstract [en]

    Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator measured by a Microsoft Kinect 2 sensor as reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a pre-filter based on a moving average filter, a neural network-based model for improving the accuracy of the raw pose measurements of Kinect, and a constrained-state Kalman filter that uses a minimum jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.
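
    The processing chain described above (a moving-average pre-filter, then constrained smoothing before the motion is rendered on the head) can be approximated with the toy filters below. This is a simplified stand-in -- a plain rate limiter in place of the paper's neural-network correction and minimum-jerk Kalman filter -- and all limits are illustrative.

```python
import numpy as np
from collections import deque

class MovingAverage:
    """Simple pre-filter over the last `n` raw Kinect-style pose samples."""
    def __init__(self, n=5):
        self.buf = deque(maxlen=n)
    def __call__(self, x):
        self.buf.append(np.asarray(x, dtype=float))
        return np.mean(self.buf, axis=0)

class RateLimiter:
    """Clamp per-step changes in velocity and acceleration before sending a
    target angle to the head servos (a crude stand-in for the constrained
    Kalman filter with a minimum-jerk model described in the abstract)."""
    def __init__(self, dt, v_max, a_max):
        self.dt, self.v_max, self.a_max = dt, v_max, a_max
        self.x = self.v = 0.0
    def step(self, target):
        v_des = np.clip((target - self.x) / self.dt, -self.v_max, self.v_max)
        a = np.clip((v_des - self.v) / self.dt, -self.a_max, self.a_max)
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

prefilter, limiter = MovingAverage(5), RateLimiter(dt=0.01, v_max=2.0, a_max=10.0)
for raw_yaw in [0.0, 0.02, 0.5, 0.48, 0.51, 0.49]:   # noisy yaw measurements (rad)
    smooth = prefilter([raw_yaw])[0]
    print(round(limiter.step(smooth), 4))
```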

  • 10. Agarwal, Priyanshu
    et al.
    Al Moubayed, Samer
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Alspach, Alexander
    Kim, Joohyung
    Carter, Elizabeth J.
    Lehman, Jill Fain
    Yamane, Katsu
    Imitating Human Movement with Teleoperated Robotic Head. 2016. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2016, pp. 630-637. Conference paper (Refereed)
    Abstract [en]

    Effective teleoperation requires real-time control of a remote robotic system. In this work, we develop a controller for realizing smooth and accurate motion of a robotic head with application to a teleoperation system for the Furhat robot head [1], which we call TeleFurhat. The controller uses the head motion of an operator measured by a Microsoft Kinect 2 sensor as reference and applies a processing framework to condition and render the motion on the robot head. The processing framework includes a pre-filter based on a moving average filter, a neural network-based model for improving the accuracy of the raw pose measurements of Kinect, and a constrained-state Kalman filter that uses a minimum jerk model to smooth motion trajectories and limit the magnitude of changes in position, velocity, and acceleration. Our results demonstrate that the robot can reproduce the human head motion in real time with a latency of approximately 100 to 170 ms while operating within its physical limits. Furthermore, viewers prefer our new method over rendering the raw pose data from Kinect.

  • 11.
    Aguilar Gómez, Raquel
    University West, Department of Engineering Science, Division of Automation Systems.
    Investigation of a Flexible Manufacturing Scenario for Production Systems: Case study: GKN Aerospace Company. 2015. Independent thesis, Advanced level (degree of Master (Two Years)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Nowadays, the globalization of the market has a direct impact on companies. On one hand, it gives them the opportunity to expand their business; on the other hand, it increases the competition and the effort needed to succeed. This project investigates and designs a system capable of taking on and optimizing production within functional shops by introducing automation. Although the system would be a standard solution for functional shops, a case study for GKN Aerospace Company with its specifications has been carried out. The task becomes trickier because the aeronautic sector requires a lot of work and effort to produce its products. The items and materials are expensive, as are the processes and treatments, so the introduction of automation is quite complex. This case study has examined the concepts and options for introducing automation into this production process. Taking into consideration the costs and effort that the investment in automation would demand, the project proposes a new and innovative manufacturing system, "Move & Play", suited to the features and requirements GKN Aerospace Company may need. The proposed system has been designed with regard to features such as movability, modularity and flexibility. These features have been considered essential in order to make the system worthwhile and cost effective. The job-shop approach that the company uses in its production involves the continuous change of products and the introduction of new ones. Hence, the idea is to use the "Move & Play" system where it is required along the production. As a consequence, this system will create a "local product flow" at that point in the production, which will organize the rest of the production, or "global flow". Moreover, it is expected to apply the concept of "plugging in" the processes that the system needs at each time, and thereby give flexibility to the system. With that purpose, the work has studied three different designs of the "Move & Play" system, each considering different aspects and dispositions. These designs have been compared according to criteria such as geometry, the capacity and investment needed, and the cycle time spent. Finally, a conclusion on which design is the most suitable is stated, and future work to continue the project is proposed.

  • 12. Aguilar, Luis T.
    et al.
    Boiko, Igor M.
    Fridman, Leonid M.
    Freidovich, Leonid B.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Generating oscillations in inertia wheel pendulum via two-relay controller. 2012. In: International Journal of Robust and Nonlinear Control, ISSN 1049-8923, E-ISSN 1099-1239, Vol. 22, no. 3, pp. 318-330. Article in journal (Refereed)
    Abstract [en]

    The problem of generating oscillations of the inertia wheel pendulum is considered. We combine exact feedback linearization with a two-relay controller, tuned using frequency-domain tools, such as computing the locus of a perturbed relay system. Explicit expressions for the parameters of the controller in terms of the desired frequency and amplitude are derived. Sufficient conditions for orbital asymptotic stability of the closed-loop system are obtained with the help of the Poincare map. Performance is validated via experiments. The approach can be easily applied to a minimum phase system, provided the behavior of the states of the zero dynamics is of no concern. Copyright (C) 2011 John Wiley & Sons, Ltd.
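
    A toy simulation of the two-relay idea: after feedback linearisation, the output is driven by u = c1*sign(y) + c2*sign(y'), which excites a self-sustained oscillation. The plant below (a double integrator with viscous damping) and the gains are illustrative stand-ins; the paper tunes c1 and c2 from the desired amplitude and frequency using the locus of a perturbed relay system.

```python
import numpy as np

def two_relay_oscillation(c1=-1.0, c2=0.5, a=1.0, dt=1e-3, T=30.0):
    """Integrate y'' = -a*y' + u with the two-relay law
    u = c1*sign(y) + c2*sign(y'): the c1 term acts as a bang-bang spring and
    the c2 term injects energy, so a limit cycle appears. The plant and gains
    stand in for the feedback-linearised pendulum dynamics of the paper."""
    y, yd, ys = 0.1, 0.0, []
    for _ in range(int(T / dt)):
        u = c1 * np.sign(y) + c2 * np.sign(yd)
        ydd = -a * yd + u
        yd += ydd * dt
        y += yd * dt
        ys.append(y)
    return np.array(ys)

ys = two_relay_oscillation()
print("steady-state amplitude ~", round(np.abs(ys[len(ys) // 2:]).max(), 3))
```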

  • 13.
    Aguilar, Luis T.
    et al.
    CITEDI, National Polytechnic Institute, Tijuana, BC, Mexico.
    Freidovich, Leonid
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Orlov, Yury
    CICESE Research Center, Ensenada, Baja California, Mexico.
    Merida, Jovan
    CITEDI, National Polytechnic Institute, Tijuana, BC, Mexico.
    Performance Analysis of Relay Feedback Position Regulators for Manipulators with Coulomb Friction. 2013. In: Proc. 12th European Control Conference, NEW YORK, NY 10017 USA: IEEE, 2013, pp. 3754-3759. Conference paper (Refereed)
    Abstract [en]

    The purpose of the paper is to analyze the performance of several global position regulators for robot manipulators with Coulomb friction. All the controllers include a proportional-derivative part and a switched part, whereas the controllers differ in how they compensate the gravitational forces. Stability analysis is also revisited within the nonsmooth Lyapunov function framework for the controllers with and without gravity pre-compensation. Performance issues of the proposed controllers are evaluated in an experimental study of a five degrees-of-freedom robot manipulator. In the experiments, we choose two criteria for performance analysis. In the first set of experiments, we set the same gains for all the controllers. In the second set of experiments, the gains of the controllers were chosen such that the work done by the manipulator is similar.
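
    One illustrative member of the controller family discussed above -- a PD term plus a relay term sized to dominate the Coulomb friction, with optional gravity pre-compensation -- simulated on a single joint. The link model, friction level and gains are assumptions, not the paper's experimental setup.

```python
import numpy as np

def simulate_regulator(qd=1.0, kp=20.0, kd=6.0, ks=1.5, fc=1.0,
                       I=1.0, dt=1e-3, T=5.0, gravity_comp=True):
    """1-DOF joint with Coulomb friction under the regulator
    u = -kp*e - kd*qdot - ks*sign(e) (+ optional gravity pre-compensation).
    Choosing ks > fc lets the relay term dominate the friction near the
    setpoint, which is what gives these regulators global convergence."""
    def g(q):                       # gravity torque of a hypothetical link
        return 4.0 * np.sin(q)
    q = qdot = 0.0
    for _ in range(int(T / dt)):
        e = q - qd
        u = -kp * e - kd * qdot - ks * np.sign(e)
        if gravity_comp:
            u += g(q)
        qddot = (u - fc * np.sign(qdot) - g(q)) / I
        qdot += qddot * dt
        q += qdot * dt
    return q

print("final position:", round(simulate_regulator(), 4))   # settles near qd = 1.0
```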

  • 14.
    Akan, Batu
    Mälardalen University, School of Innovation, Design and Engineering.
    Human Robot Interaction Solutions for Intuitive Industrial Robot Programming. 2012. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Over the past few decades the use of industrial robots has increased the efficiency as well as the competitiveness of many companies. Despite this fact, in many cases, robot automation investments are considered to be technically challenging. In addition, for most small and medium sized enterprises (SME) this process is associated with high costs. Due to their continuously changing product lines, reprogramming costs are likely to exceed installation costs by a large margin. Furthermore, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, thus assistance from a robot programming expert is often needed. We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by technicians or manufacturing engineers rather than robot programming experts. In this thesis we propose a high-level natural language framework for interacting with industrial robots through an instructional programming environment for the user. The ultimate goal of this thesis is to bring robot programming to a stage where it is as easy as working together with a colleague. In this thesis we mainly address two issues. The first issue is to make interaction with a robot easier and more natural through a multimodal framework. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. Interaction with simple voice commands and gestures enables the manufacturing engineer to focus on the task itself, rather than on programming issues of the robot. This approach shifts the focus of industrial robot programming from the coordinate-based programming paradigm, which currently dominates the field, to an object-based programming scheme. The second issue addressed is a general framework for implementing multimodal interfaces. There have been numerous efforts to implement multimodal interfaces for computers and robots, but there is no general standard framework for developing them. The general framework proposed in this thesis is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline, and includes a novel multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.

  • 15.
    Akan, Batu
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ameri E., Afshin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Çürüklü, Baran
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Towards Creation of Robot Programs Through User Interaction. Article in journal (Other academic)
    Abstract [en]

    This paper proposes a novel system for task-level programming of industrial robots. The user interacts with an industrial robot by giving instructions in a structured natural language and by selecting objects through an augmented reality interface. The proposed system consists of two parts. The first is a multimodal framework that provides a natural language interface to the user. This framework performs modality fusion and semantic analysis and helps the user to interact with the system more easily and naturally. The proposed language architecture makes it possible to manipulate, pick or place objects in a scene through high-level commands. The second component is the POPStar planner, which is based on a partial order planner (POP), takes landmarks extracted from user instructions as input, and creates a sequence of actions to operate the robotic cell with minimal makespan. The proposed planner takes advantage of the partial order capabilities of POP to plan execution of actions in parallel, and employs a best-first search algorithm to seek a series of actions that lead to a minimal makespan. The proposed planner can also handle robots with multiple grippers and parallel machines. Using different topologies for the landmark graphs, we show that it is possible to create schedules for changing object types, which are processed in different stages in the robot cell. Results show that the proposed system can create and adapt schedules for robot cells with changing product types in low-volume production based on the user's instructions.
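
    The planning details are in the paper, but the guiding principle -- best-first search over partial plans with a makespan lower bound -- can be shown on a stripped-down scheduling problem. The state representation and bound below are simplifications; they do not capture partial-order causal links, multiple grippers or landmark graphs.

```python
import heapq
from itertools import count

def best_first_schedule(durations, n_machines=2):
    """Best-first search over partial schedules, a much simplified stand-in
    for guiding a planner with a makespan estimate. A state places the first
    k jobs (landmarks) on machines; its priority is a lower bound on the final
    makespan: current makespan or total work spread evenly over machines."""
    tick = count()                                   # tie-breaker for the heap
    heap = [(0.0, next(tick), (0.0,) * n_machines, 0)]
    while heap:
        _, _, free, k = heapq.heappop(heap)
        if k == len(durations):
            return max(free)                         # makespan of a complete schedule
        for m in range(n_machines):
            new_free = list(free)
            new_free[m] += durations[k]
            bound = max(max(new_free),
                        (sum(new_free) + sum(durations[k + 1:])) / n_machines)
            heapq.heappush(heap, (bound, next(tick), tuple(new_free), k + 1))

print(best_first_schedule([4, 2, 7, 3, 1, 5]))       # minimal makespan for the toy jobs
```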

  • 16.
    Akan, Batu
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Ameri E., Afshin
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Curuklu, Baran
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Scheduling for Multiple Type Objects Using POPStar Planner. 2014. In: Proceedings of the 19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'14), Barcelona, Spain, September 2014, Article number 7005148. Conference paper (Refereed)
    Abstract [en]

    In this paper, scheduling of robot cells that produce multiple object types in low volumes is considered. The challenge is to maximize the number of objects produced in a given time window as well as to adapt the schedule to changing object types. The proposed algorithm, POPStar, is based on a partial order planner guided by a best-first search algorithm and landmarks. The best-first search uses heuristics to help the planner create complete plans while minimizing the makespan. The algorithm takes as input landmarks extracted from the user's instructions given in structured English. Using different topologies for the landmark graphs, we show that it is possible to create schedules for changing object types, which will be processed in different stages in the robot cell. Results show that the POPStar algorithm can create and adapt schedules for robot cells with changing product types in low-volume production.

  • 17.
    Akan, Batu
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Çürüklü, Baran
    Mälardalen University, School of Innovation, Design and Engineering.
    Spampinato, Giacomo
    Mälardalen University, School of Innovation, Design and Engineering.
    Asplund, Lars
    Mälardalen University, School of Innovation, Design and Engineering.
    Object Selection using a Spatial Language for Flexible Assembly. 2009. In: 14th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2009), Mallorca, Spain, 2009. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a new simplified natural language that makes use of spatial relations between the objects in the scene to navigate an industrial robot for simple pick and place applications. Developing easy-to-use, intuitive interfaces is crucial for introducing robotic automation to many small and medium-sized enterprises (SMEs). Due to their continuously changing product lines, reprogramming costs are far higher than installation costs. In order to hide the complexities of robot programming we propose a natural language where the user can control and jog the robot based on reference objects in the scene. We used Gaussian kernels to represent spatial regions, such as left or above. Finally we present some dialogues between the user and the robot to demonstrate the usefulness of the proposed system.
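
    A sketch of how a Gaussian kernel can score a spatial relation such as "left of" and resolve which object the user means. The prototype directions, kernel width and the toy scene are illustrative assumptions rather than the paper's actual kernels.

```python
import numpy as np

def relation_score(obj, ref, relation):
    """Score how well `obj` satisfies a spatial relation w.r.t. `ref` using a
    Gaussian kernel centred on the prototypical direction (e.g. 'left of' is
    centred along -x). Offsets and widths here are illustrative."""
    prototypes = {"left of":  np.array([-1.0, 0.0]),
                  "right of": np.array([+1.0, 0.0]),
                  "above":    np.array([0.0, +1.0]),
                  "below":    np.array([0.0, -1.0])}
    d = np.asarray(obj) - np.asarray(ref)
    d = d / (np.linalg.norm(d) + 1e-9)            # direction only
    return float(np.exp(-np.sum((d - prototypes[relation]) ** 2) / (2 * 0.3 ** 2)))

# "pick the object left of the red box": choose the candidate with the best score
red_box = (0.50, 0.20)
candidates = {"bolt": (0.30, 0.22), "nut": (0.52, 0.45), "washer": (0.70, 0.18)}
best = max(candidates,
           key=lambda name: relation_score(candidates[name], red_box, "left of"))
print(best)   # -> 'bolt'
```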

  • 18.
    Al Hayani, Musab
    University West, Department of Engineering Science, Division of Automation and Computer Engineering.
    Offline Programming of Robots in Car Seat Production. 2013. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Company Purtech in Dals-Ed manufactures molded polyurethane (PUR). Examples of products that include polyurethane are car seats. Robots are used to fill the molds with PUR and to apply the release agent (wax) in the empty molds.

    Switching from online programming to graphical offline programming of the release-agent spraying robots will simplify the process by:

    1. Applying less release agent, to avoid polluting the environment, to make removal from the moulds easier, to obtain homogeneous moulds, and to save on the cost of release agent.
    2. Adaptation of spraying paths to variations in production speed.
    3. Programming of complex spraying trajectories to deal with sharp subsurface geometry.
    4. Decreasing on-site programming time (when programming a new workpiece or modifying an old one), so that the robots are free for production.

    Switching to offline programming, however, brought the following challenges:

    1. The impact of variations in production speed.
    2. Lack of 3D models of the workcell's equipment.
    3. Robot joint configurations when paths and robtargets move.
    4. Physical joint limits, singularities and reach limits.
    5. Collisions within the cell space.

    In the end, the following objectives were successfully met:

    1. Adaptation of the spraying programs to variations in production speed, by developing and embedding a method in those programs (a simplified sketch of this idea follows the list).
    2. Graphical offline generation of spraying trajectories, and optimization of those trajectories to Purtech's constraint on the spraying time allowed for each carrier.
    3. Simulation of the release-agent spraying process, and production of a well-structured RAPID program that reflects the simulated process.
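
    A hypothetical illustration of the speed-adaptation objective above: rescale the programmed TCP speed of each spray segment so the path fits the time the carrier actually allows. The waypoint names, limits and proportional rule are invented for the example; the real implementation is embedded in the RAPID programs.

```python
def adapt_spray_speeds(waypoints, nominal_cycle_s, allowed_time_s, v_max=1.2):
    """Rescale the programmed TCP speed of each spray segment so that the
    whole path fits the time window the carrier actually allows. Waypoints
    are (name, path_length_m, nominal_speed_m_s); all values are illustrative."""
    scale = nominal_cycle_s / allowed_time_s        # >1 when the line runs faster
    adapted = []
    for name, length, v in waypoints:
        v_new = min(v * scale, v_max)               # respect the robot's speed limit
        adapted.append((name, length, round(v_new, 3), round(length / v_new, 2)))
    return adapted

path = [("mould_edge", 0.8, 0.25), ("seat_pan", 1.5, 0.30), ("backrest", 1.2, 0.28)]
for segment in adapt_spray_speeds(path, nominal_cycle_s=14.0, allowed_time_s=10.0):
    print(segment)        # name, length, adapted speed, resulting segment time
```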

  • 19.
    Alberti, Marina
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Chemical Science and Engineering (CHE).
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Relational approaches for joint object classification and scene similarity measurement in indoor environments. 2014. In: Proc. of 2014 AAAI Spring Symposium Qualitative Representations for Robots 2014, Palo Alto, California: The AAAI Press, 2014. Conference paper (Refereed)
    Abstract [en]

    The qualitative structure of objects and their spatial distribution, to a large extent, define an indoor human environment scene. This paper presents an approach for indoor scene similarity measurement based on the spatial characteristics and arrangement of the objects in the scene. For this purpose, two main sets of spatial features are computed, from single objects and object pairs. A Gaussian Mixture Model is applied both on the single object features and the object pair features, to learn object class models and relationships of the object pairs, respectively. Given an unknown scene, the object classes are predicted using the probabilistic framework on the learned object class models. From the predicted object classes, object pair features are extracted. A final scene similarity score is obtained using the learned probabilistic models of object pair relationships. Our method is tested on a real world 3D database of desk scenes, using a leave-one-out cross-validation framework. To evaluate the effect of varying conditions on the scene similarity score, we apply our method on mock scenes, generated by removing objects of different categories in the test scenes.
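
    A compressed version of the modelling step -- fit a Gaussian Mixture Model to object-pair features and use the average log-likelihood of a new scene's pairs as a similarity score -- using scikit-learn. The synthetic "desk scene" features are invented for illustration; the paper additionally learns per-class object models and predicts the object classes first.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical object-pair features from "typical" desk scenes, e.g.
# (horizontal distance, vertical offset) between monitor/keyboard-like pairs.
train_pairs = np.vstack([rng.normal([0.30, 0.10], 0.05, size=(200, 2)),
                         rng.normal([0.10, 0.00], 0.03, size=(200, 2))])

pair_model = GaussianMixture(n_components=2, covariance_type="full",
                             random_state=0).fit(train_pairs)

def scene_similarity(pair_features):
    """Average log-likelihood of a scene's object-pair features under the
    learned GMM -- higher means the arrangement looks more like the training
    scenes (a simplified version of the paper's similarity score)."""
    return float(np.mean(pair_model.score_samples(pair_features)))

typical_scene = rng.normal([0.30, 0.10], 0.05, size=(10, 2))
odd_scene = rng.normal([1.50, -0.60], 0.05, size=(10, 2))
print(scene_similarity(typical_scene) > scene_similarity(odd_scene))   # True
```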

  • 20.
    Alenljung, Beatrice
    et al.
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Andreasson, Rebecca
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Billing, Erik A.
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Lindblom, Jessica
    Högskolan i Skövde, Institutionen för informationsteknologi.
    Lowe, Robert
    Högskolan i Skövde, Institutionen för informationsteknologi.
    User Experience of Conveying Emotions by Touch. 2017. In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, pp. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and the design of social and affective robotics in particular.

  • 21.
    Alenljung, Beatrice
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Andreasson, Rebecca
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Department of Information Technology, Visual Information & Interaction. Uppsala University, Uppsala, Sweden.
    Billing, Erik A.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lindblom, Jessica
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Lowe, Robert
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    User Experience of Conveying Emotions by Touch. 2017. In: Proceedings of the 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2017, pp. 1240-1247. Conference paper (Refereed)
    Abstract [en]

    In the present study, 64 users were asked to convey eight distinct emotions to a humanoid Nao robot via touch, and were then asked to evaluate their experiences of performing that task. Large differences between emotions were revealed. Users perceived conveying positive/pro-social emotions as significantly easier than negative emotions, with love and disgust as the two extremes. When asked whether they would act differently towards a human, compared to the robot, the users’ replies varied. A content analysis of interviews revealed a generally positive user experience (UX) while interacting with the robot, but users also found the task challenging in several ways. Three major themes with impact on the UX emerged: responsiveness, robustness, and trickiness. The results are discussed in relation to a study of human-human affective tactile interaction, with implications for human-robot interaction (HRI) and the design of social and affective robotics in particular.

  • 22.
    Alexanderson, Simon
    et al.
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    House, David
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Beskow, Jonas
    KTH, School of Computer Science and Communication (CSC), Speech, Music and Hearing, TMH.
    Automatic annotation of gestural units in spontaneous face-to-face interaction. 2016. In: MA3HMI 2016 - Proceedings of the Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction, 2016, pp. 15-19. Conference paper (Refereed)
    Abstract [en]

    Speech and gesture co-occur in spontaneous dialogue in a highly complex fashion. There is a large variability in the motion that people exhibit during a dialogue, and different kinds of motion occur during different states of the interaction. A wide range of multimodal interface applications, for example in the fields of virtual agents or social robots, can be envisioned where it is important to be able to automatically identify gestures that carry information and discriminate them from other types of motion. While it is easy for a human to distinguish and segment manual gestures from a flow of multimodal information, the same task is not trivial to perform for a machine. In this paper we present a method to automatically segment and label gestural units from a stream of 3D motion capture data. The gestural flow is modeled with a 2-level Hierarchical Hidden Markov Model (HHMM) where the sub-states correspond to gesture phases. The model is trained based on labels of complete gesture units and self-adaptive manipulators. The model is tested and validated on two datasets differing in genre and in method of capturing motion, and outperforms a state-of-the-art SVM classifier on a publicly available dataset.

  • 23.
    Alissandrakis, Aris
    et al.
    Dept. of Comput. Intell. & Syst. Sci., Tokyo Inst. of Technol., Tokyo, Japan.
    Otero, Nuno
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Saunders, Joe
    Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK.
    Dautenhahn, Kerstin
    Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK.
    Nehaniv, Chrystopher
    Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK.
    Helping Robots Imitate: Metrics And Computational Solutions Inspired By Human-Robot Interaction Studies. 2010. In: Advances in Cognitive Systems / [ed] Samia Nefti-Meziani and John Gray, Institution of Engineering and Technology, 2010, pp. 127-167. Chapter in book (Refereed)
    Abstract [en]

    In this chapter we describe three lines of research related to the issue of helping robots imitate people. These studies are based on observed human behaviour, technical metrics and implemented technical solutions. The three lines of research are: (a) a number of user studies that show how humans naturally tend to demonstrate a task for a robot to learn, (b) a formal approach to tackle the problem of what a robot should imitate, and (c) a technology-driven conceptual framework and technique, inspired by social learning theories, that addresses how a robot can be taught. In this merging exercise we will try to propose a way through this problem space, towards the design of a Human-Robot Interaction (HRI) system able to be taught by humans via demonstration.

  • 24.
    Almeida, Diogo
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016. 2017. In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, which is the inverse task, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem, starting from a high-level overview of our system and later delving into the details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

  • 25.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. Chalmers.
    Dexterous manipulation by means of compliant grasps and external contacts. 2017. Conference paper (Refereed)
    Abstract [en]

    We propose a method that allows for dexterous manipulation of an object by exploiting contact with an external surface. The technique requires a compliant grasp, enabling the motion of the object in the robot hand while allowing for significant contact forces to be present on the external surface. We show that under this type of grasp it is possible to estimate and control the pose of the object with respect to the surface, leveraging the trade-off between force control and manipulative dexterity. The method is independent of the object geometry, relying only on the assumptions of the type of grasp and the existence of a contact with a known surface. Furthermore, by adapting the estimated grasp compliance, the method can handle unmodelled effects. The approach is demonstrated and evaluated with experiments on object pose regulation and pivoting against a rigid surface, where a mechanical spring provides the required compliance.

  • 26.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Yiannis
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folding Assembly by Means of Dual-Arm Robotic Manipulation. 2016. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE conference proceedings, 2016, pp. 3987-3993. Conference paper (Refereed)
    Abstract [en]

    In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly that can be integrated into a higher-level assembly strategy. The system composed of two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force-torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts, in order to showcase folding assembly as a viable assembly primitive.

  • 27.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Viña, Francisco E.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Karayiannidis, Yiannis
    Bimanual Folding Assembly: Switched Control and Contact Point Estimation. 2016. In: IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun: IEEE, 2016. Conference paper (Refereed)
    Abstract [en]

    Robotic assembly in unstructured environments is a challenging task, due to the added uncertainties. These can be mitigated through the employment of assembly systems, which offer a modular approach to the assembly problem via the conjunction of primitives. In this paper, we use a dual-arm manipulator in order to execute a folding assembly primitive. When executing a folding primitive, two parts are brought into rigid contact and subsequently translated and rotated. A switched controller is employed in order to ensure that the relative motion of the parts follows the desired model, while regulating the contact forces. The control is complemented with an estimator based on a Kalman filter, which tracks the contact point between the parts based on force and torque measurements. Experimental results are provided, and the effectiveness of the control and contact point estimation is shown.
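
    The contact point estimator can be sketched as a Kalman filter whose measurement model comes from the wrench relation tau = r x F, which is linear in the unknown contact point r. The random-walk model, noise levels and the synthetic force/torque data below are assumptions for illustration, not the paper's exact filter.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

class ContactPointKF:
    """Kalman filter tracking the contact point r between two parts from
    force/torque readings, using tau = r x F  =>  tau = -skew(F) @ r as the
    (force-dependent) linear measurement model. Noise levels are illustrative;
    the component of r along F only becomes observable as F changes direction."""
    def __init__(self, q=1e-5, R=1e-3):
        self.r = np.zeros(3)            # estimated contact point (m)
        self.P = np.eye(3)
        self.Q = np.eye(3) * q          # random-walk process noise
        self.R = np.eye(3) * R          # torque measurement noise
    def update(self, force, torque):
        self.P = self.P + self.Q        # predict (random-walk model)
        H = -skew(force)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.r = self.r + K @ (torque - H @ self.r)
        self.P = (np.eye(3) - K @ H) @ self.P
        return self.r

# toy data: true contact point at (0.10, 0.02, 0.00), forces from varying directions
rng = np.random.default_rng(1)
kf, r_true = ContactPointKF(), np.array([0.10, 0.02, 0.00])
for _ in range(300):
    F = rng.normal(0.0, 5.0, 3)
    tau = np.cross(r_true, F) + rng.normal(0.0, 0.01, 3)
    est = kf.update(F, tau)
print(np.round(est, 3))                 # should approach r_true
```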

  • 28.
    AlNabulsi, Yasan
    University West, Department of Engineering Science, Division of Production System.
    Robot motion control based on 3D mouse tracking. 2017. Independent thesis, Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The manufacturing industry and its associated systems are being developed at an increasing pace to cover market needs, and manufacturing companies are continuously racing and competing to achieve high productivity and better production quality. In this thesis work, an advanced method for motion control of industrial robots has been investigated and implemented. This method is based on motion tracking of a 3D SpaceMouse, which is used by the operator to perform movements. The benefits and disadvantages of this method are discussed in this thesis work. The method mainly showed high accuracy in response to the motion applied with the 3D SpaceMouse, and great stability of the programming environment used to build it. The movements applied with the 3D SpaceMouse were successfully captured and stored in variables on the programming platform. The capturing and storing process was successfully created as a package and prepared to be exported for use by other software. A complete simulation was performed for an industrial robot, and successful communication among the various hardware and software components of this solution was accomplished. This forms a complete integrated solution that also includes a user-friendly HRI, which makes it easy and simple to track the motion control processes and establish connections with the robot controller. Thus, it can be considered a feasible solution for motion control of industrial robots that can be used by manufacturing companies. Several tests and verification processes were carried out to obtain acceptable results and to succeed in implementing a working model. Some errors and unexpected events appeared during the work, which required handling in order to achieve a working integrated system.

  • 29. Alomari, M.
    et al.
    Duckworth, P.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Hawasly, M.
    Hogg, D. C.
    Cohn, A. G.
    Grounding of human environments and activities for autonomous robots. 2017. In: IJCAI International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, 2017, pp. 1395-1402. Conference paper (Refereed)
    Abstract [en]

    With the recent proliferation of human-oriented robotic applications in domestic and industrial scenarios, it is vital for robots to continually learn about their environments and about the humans they share their environments with. In this paper, we present a novel, online, incremental framework for unsupervised symbol grounding in real-world, human environments for autonomous robots. We demonstrate the flexibility of the framework by learning about colours, people's names, usable objects and simple human activities, integrating state-of-the-art object segmentation, pose estimation and activity analysis along with a number of sensory input encodings into a continual learning framework. Natural language is grounded to the learned concepts, enabling the robot to communicate in a human-understandable way. We show, using a challenging real-world dataset of human activities as perceived by a mobile robot, that our framework is able to extract useful concepts, ground natural language descriptions to them, and, as a proof-of-concept, generate simple sentences from templates to describe people and the activities they are engaged in.

  • 30.
    Altergren, Andreé
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Förberedelse till modernisering av styrsystem för produktion av processvatten [Preparation for modernisation of a control system for production of process water]. 2012. Independent thesis, Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The heat and power station Allöverket in Kristianstad produces its process water in a plant called the total desalination plant. In this plant, which consists of two identical lines, the incoming water passes through ion exchangers to replace the undesired ions in the water with more desirable ones. The reason for doing this is that the undesired ions in the water cause coatings on the turbine. When the ion exchangers have become saturated, they must be restored to their original condition. This is done by starting a regeneration program, which consists of a number of steps such as back flushing, intake of chemicals and four different flushes. There are limits on the conductivity and the silicon content of the water to be delivered, but only the conductivity measurement is connected to the control system, and because of this the plant is controlled with different timers. Connected to the control system are a number of centrifugal pumps, solenoid valves and instrumentation for measuring conductivity. The total desalination plant is controlled by a Siemens S5 control system based on the amount of water, the conductivity and the durations of the sequence steps. The control functions are currently located in a control cabinet out in the factory. The Siemens S5 control system is old and outdated and will be replaced by an ABB 800xA control system, which is already used to control other parts of the factory from the control room. In this study I have developed a new functional description of the plant, consisting of function diagrams that describe how the software controls the plant today. In addition to the functional description, I have also produced a new technical description and revised the process scheme so that all documentation of the plant is consistent.

  • 31.
    Alvarado, Cristian
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Ibrahim, Ayad
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE).
    Fjärrstyrning av videokamera [Remote control of a video camera]2012Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    On some occasions it may be inappropriate to have a cameraman behind the camera. In some cases there is no cameraman available, or it is a smaller event, like a family celebration, that is to be filmed. It can also be difficult for a cameraman to shoot from certain angles. One solution for these kinds of situations is to remotely control the camera movements. Such solutions exist on the market today, but they either address professional filmmakers or their functionality is limited, for example by the absence of wireless remote control. This project aims to develop a solution to the problem of missing wireless remote control, and also a solution with more flexibility and less complexity than today's solutions. The main difference between this and existing solutions is the use of Bluetooth technology for communication between the devices. The project resulted in a solution consisting of two units: one operating unit where the camera is mounted and a remote unit for control of the operating unit. The remote unit is managed by the filmmaker and consists of an Android application on a smartphone, which communicates with the operating unit via Bluetooth.

  • 32.
    Amato, Giuseppe
    et al.
    ISTI-CNR, Pisa, Italy.
    Broxvall, Mathias
    Örebro University, School of Science and Technology.
    Chessa, Stefano
    Università di Pisa, Pisa, Italy.
    Dragone, Mauro
    University College Dublin, Dublin, Ireland.
    Gennaro, Claudio
    ISTI-CNR, Pisa, Italy.
    Lopez, Rafa
    Robotnik Automation, Valencia, Spain.
    Maguire, Liam
    University of Ulster, Coleraine, Ireland.
    McGinnity, Martin T.
    University of Ulster, Coleraine, Ireland.
    Micheli, Alessio
    Università di Pisa, Pisa, Italy.
    Renteria, Arantxa
    Tecnalia, Derio, Spain.
    O’Hare, Gregory M. P.
    University College Dublin, Dublin, Ireland.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Robotic UBIquitous COgnitive Network2012In: Ambient Intelligence: Software and Applications / [ed] Paulo Novais, Kasper Hallenborg, Dante I. Tapia, Juan M. Corchado Rodríguez, Springer-Verlag New York, 2012, 191-195 p.Conference paper (Refereed)
    Abstract [en]

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them self-adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The EU FP7 project RUBICON develops self-sustaining learning solutions yielding cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, agent control systems, wireless sensor networks and machine learning. This paper briefly illustrates how these techniques are being extended, integrated, and applied to AAL applications.

  • 33.
    Ambrus, Rares
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bore, Nils
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Folkesson, John
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Meta-rooms: Building and Maintaining Long Term Spatial Models in a Dynamic World2014In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2014), IEEE conference proceedings, 2014, 1854-1861 p.Conference paper (Refereed)
    Abstract [en]

    We present a novel method for re-creating the static structure of cluttered office environments, which we define as the "meta-room", from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and it is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations, thus mapping their spatial behaviour over extended periods of time.
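
    The core differencing step can be illustrated with a small, hypothetical voxel-based sketch; it is not the authors' implementation and ignores their handling of occlusions and object re-identification. Voxels present in every observation approximate the static meta-room, and voxels present only in the current observation are candidate dynamic objects.

        # Hypothetical sketch of observation differencing on a voxel grid.
        import numpy as np

        def voxelise(points, voxel_size=0.05):
            """points: (N, 3) array in metres -> set of integer voxel coordinates."""
            return set(map(tuple, np.floor(points / voxel_size).astype(int)))

        def update_meta_room(meta_voxels, observation_voxels):
            """Static-structure estimate = voxels consistently present across observations."""
            if meta_voxels is None:               # first observation initialises the model
                return set(observation_voxels)
            return meta_voxels & observation_voxels

        def dynamic_clusters(meta_voxels, observation_voxels):
            """Voxels present now but not in the static model -> candidate dynamic objects."""
            return observation_voxels - meta_voxels

        room = np.random.rand(2000, 3) * 4.0                              # synthetic static structure
        obs1 = voxelise(np.vstack([room, np.random.rand(200, 3)]))        # room + object A
        obs2 = voxelise(np.vstack([room, 2.0 + np.random.rand(200, 3)]))  # room + object B
        meta = update_meta_room(None, obs1)
        meta = update_meta_room(meta, obs2)
        print(len(meta), "static voxels,", len(dynamic_clusters(meta, obs2)), "dynamic voxels")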

  • 34.
    Amundin, Mats
    et al.
    Kolmården Wildlife Park.
    Hållsten, Henrik
    Department of Philosophy, Stockholm University.
    Eklund, Robert
    Linköping University, Department of Culture and Communication, Language and Culture. Linköping University, Faculty of Arts and Sciences.
    Karlgren, Jussi
    Kungliga Tekniska Högskolan.
    Molinder, Lars
    Carnegie Investment Bank, Sweden.
    A proposal to use distributional models to analyse dolphin vocalisation2017In: Proceedings of the 1st International Workshop on Vocal Interactivity in-and-between Humans, Animals and Robots, VIHAR 2017 / [ed] Angela Dassow, Ricard Marxer & Roger K. Moore, 2017, 31-32 p.Conference paper (Refereed)
    Abstract [en]

    This paper gives a brief introduction to the starting points of an experimental project to study dolphin communicative behaviour using distributional semantics, with methods implemented for the large scale study of human language.
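
    For readers unfamiliar with distributional semantics, the sketch below shows the basic machinery in miniature: co-occurrence counts within a fixed window, then cosine similarity between the resulting vectors. The call labels are invented placeholders, not dolphin data, and the project's actual models are not reproduced.

        # Toy distributional model: window-based co-occurrence counts + cosine similarity.
        import numpy as np

        def cooccurrence(sequences, window=2):
            vocab = sorted({t for seq in sequences for t in seq})
            index = {t: i for i, t in enumerate(vocab)}
            M = np.zeros((len(vocab), len(vocab)))
            for seq in sequences:
                for i, t in enumerate(seq):
                    lo, hi = max(0, i - window), min(len(seq), i + window + 1)
                    for j in range(lo, hi):
                        if j != i:
                            M[index[t], index[seq[j]]] += 1
            return vocab, M

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        sequences = [["whistleA", "click", "whistleB"], ["whistleA", "click", "buzz"]]
        vocab, M = cooccurrence(sequences)
        a, b = vocab.index("whistleA"), vocab.index("whistleB")
        print(cosine(M[a], M[b]))   # similarity of two call types by shared contexts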

  • 35.
    Anderson, S. J.
    et al.
    MIT.
    Karumanchi, S. B.
    MIT.
    Iagnemma, Karl
    MIT.
    Constraint-based planning and control for safe, semi-autonomous operation of vehicles2012In: 2012 IEEE intelligent vehicles symposium: (IV 2012) : Alcala de Henares, Madrid, Spain, 3-7 June 2012, 2012, 383-388 p.Conference paper (Refereed)
    Abstract [en]

    This paper presents a new approach to semi-autonomous vehicle hazard avoidance and stability control, based on the design and selective enforcement of constraints. This differs from traditional approaches that rely on the planning and tracking of paths. This emphasis on constraints facilitates "minimally-invasive" control for human-machine systems; instead of forcing a human operator to follow an automation-determined path, the constraint-based approach identifies safe homotopies, and allows the operator to navigate freely within them, introducing control action only as necessary to ensure that the vehicle does not violate safety constraints. The method evaluates candidate homotopies based on "restrictiveness", rather than traditional measures of path goodness, and designs and enforces requisite constraints on the human's control commands to ensure that the vehicle never leaves the controllable subset of a desired homotopy. Identification of these homotopic classes in off-road environments is performed using geometric constructs. The goodness of competing homotopies and their associated constraints is then characterized using geometric heuristics. Finally, input limits satisfying homotopy and vehicle dynamic constraints are enforced using threat-based feedback mechanisms to ensure that the vehicle avoids collisions and instability while preserving the human operator's situational awareness and mental models. The methods developed in this work are shown in simulation and experimentally demonstrated in safe, high-speed teleoperation of an unmanned ground vehicle. © 2012 IEEE.
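
    A toy sketch of the constraint-enforcement idea, as opposed to path tracking, is given below; it is not the authors' controller. A crude one-step prediction defines the interval of steering commands that keeps the vehicle inside a lateral corridor, and the operator's command is clamped only when it would leave that interval. The corridor width, prediction horizon and steering gain are invented parameters.

        # Toy constraint enforcement: pass the operator's command through unchanged
        # whenever the predicted lateral position stays inside the corridor, and clamp
        # it to the admissible interval otherwise.
        def admissible_steering(lateral_pos, lateral_vel, corridor=(-2.0, 2.0),
                                horizon=1.0, gain=5.0):
            """Interval of steering commands keeping the crude prediction
            y_next = y + horizon * (v + gain * u) inside the corridor."""
            lo = ((corridor[0] - lateral_pos) / horizon - lateral_vel) / gain
            hi = ((corridor[1] - lateral_pos) / horizon - lateral_vel) / gain
            return lo, hi

        def enforce(u_operator, lateral_pos, lateral_vel):
            u_min, u_max = admissible_steering(lateral_pos, lateral_vel)
            return min(max(u_operator, u_min), u_max)   # intervene only at the boundary

        print(enforce(0.5, lateral_pos=1.8, lateral_vel=0.0))   # clamped near the corridor edge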

  • 36.
    Anderson, S.
    et al.
    MIT.
    Peters, S.
    MIT.
    Iagnemma, Karl
    MIT.
    Overholt, J.
    US Army Tank Automotive RDE Center (TARDEC).
    Semi-Autonomous Stability Control and Hazard Avoidance for Manned and Unmanned Ground Vehicles2010In: Proceedings of the 27th Army Science Conference, 2010, 1-8 p.Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for trajectory planning, threat assessment, and semi-autonomous control of manned and unmanned ground vehicles. A model predictive controller iteratively replans a stability-optimal trajectory through the safe region of the environment while a threat assessor and semi-autonomous control law modulate driver and controller inputs to maintain stability, preserve controllability, and ensure that the vehicle avoids obstacles and hazardous areas. The efficacy of this approach in avoiding hazards while accounting for various types of human error, including errors caused by time delays, is demonstrated in simulation.
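
    The shared-control structure described above can be illustrated with a simplified sketch (not the paper's controller): a planner supplies a best-case command, a scalar threat level is derived from how close the best-case trajectory comes to a stability limit, and the driver's and planner's commands are blended in proportion to that threat. The sideslip limit and numbers are invented for the example.

        # Illustrative threat-based blending of driver and planner commands.
        import numpy as np

        def threat(best_case_peak_sideslip, limit=0.15):
            """Map the peak predicted sideslip of the best-case trajectory to a 0..1 threat."""
            return float(np.clip(best_case_peak_sideslip / limit, 0.0, 1.0))

        def shared_control(u_driver, u_planner, K):
            """K = 0 gives full manual control, K = 1 full autonomy."""
            return (1.0 - K) * u_driver + K * u_planner

        K = threat(best_case_peak_sideslip=0.12)
        u = shared_control(u_driver=0.3, u_planner=-0.1, K=K)
        print(f"threat={K:.2f}, blended command={u:.3f}")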

  • 37.
    Anderson, S.
    et al.
    MIT.
    Peters, S.
    MIT, USA.
    Pilutti, T.
    Ford Research Laboratories, Ford Motor Co., Dearborn, MI 48124, United States.
    Tseng, E.
    Ford Research Laboratories, Ford Motor Co., Dearborn, MI 48124, United States.
    Iagnemma, Karl
    MIT.
    Semi-autonomous Avoidance of Moving Hazards for Passenger Vehicles2010In: Proceedings of the ASME Dynamic Systems and Control Conference--2010: presented at 2010 ASME Dynamic Systems and Control Conference, September 12-15, 2010 Cambridge, Mass., USA, New York: ASME Press, 2010, 141-148 p.Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for semi-autonomous hazard avoidance in the presence of unknown moving obstacles and unpredictable driver inputs. This method iteratively predicts the motion and anticipated intersection of the host vehicle with both static and dynamic hazards and excludes projected collision states from a traversable corridor. A model predictive controller iteratively replans a stability-optimal trajectory through the navigable region of the environment while a threat assessor and semi-autonomous control law modulate driver and controller inputs to maintain stability, preserve controllability, and ensure safe hazard avoidance. The efficacy of this approach is demonstrated through both simulated and experimental results using a semi-autonomously controlled Jaguar S-Type. Copyright © 2010 by ASME.

  • 38.
    Anderson, Sterling J.
    et al.
    Department of Mechanical Engineering Massachusetts Institute of Technology, Cambridge, MA, USA.
    Karumanchi, Sisir B.
    Department of Mechanical Engineering Massachusetts Institute of Technology, Cambridge, MA, USA.
    Iagnemma, Karl
    Department of Mechanical Engineering Massachusetts Institute of Technology, Cambridge, MA, USA.
    Walker, James M.
    Quantum Signal, LLC, Saline, MI, USA.
    The intelligent copilot: A constraint-based approach to shared-adaptive control of ground vehicles2013In: IEEE Intelligent Transportation Systems Magazine, ISSN 1939-1390, Vol. 5, no 2, 45-54 p.Article in journal (Refereed)
    Abstract [en]

    This work presents a new approach to semi-autonomous vehicle hazard avoidance and stability control, based on the design and selective enforcement of constraints. This differs from traditional approaches that rely on the planning and tracking of paths and facilitates minimally-invasive control for human-machine systems. Instead of forcing a human operator to follow an automation-determined path, the constraint-based approach identifies safe homotopies, and allows the operator to navigate freely within them, introducing control action only as necessary to ensure that the vehicle does not violate safety constraints. This method evaluates candidate homotopies based on restrictiveness rather than traditional measures of path goodness, and designs and enforces requisite constraints on the human's control commands to ensure that the vehicle never leaves the controllable subset of a desired homotopy. This paper demonstrates the approach in simulation and characterizes its effect on human teleoperation of unmanned ground vehicles via a 20-user, 600-trial study on an outdoor obstacle course. Aggregated across all drivers and experiments, the constraint-based control system required an average of 43% of the available control authority to reduce collision frequency by 78% relative to traditional teleoperation, increase average speed by 26%, and moderate operator steering commands by 34%. © 2009-2012 IEEE

  • 39.
    Anderson, Sterling J.
    et al.
    MIT.
    Karumanchi, Sisir B.
    MIT.
    Johnson, Bryan
    Quantum Signal LLC..
    Perlin, Victor
    Quantum Signal LLC..
    Rohde, Mitchell
    Quantum Signal LLC..
    Iagnemma, Karl
    MIT.
    Constraint-based semi-autonomy for unmanned ground vehicles using local sensing2012In: UNMANNED SYSTEMS TECHNOLOGY XIV, Bellingham, WA: SPIE - International Society for Optical Engineering, 2012, Article no. 83870K- p.Conference paper (Refereed)
    Abstract [en]

    Teleoperated vehicles are playing an increasingly important role in a variety of military functions. While advantageous in many respects over their manned counterparts, these vehicles also pose unique challenges when it comes to safely avoiding obstacles. Not only must operators cope with difficulties inherent to the manned driving task, but they must also perform many of the same functions with a restricted field of view, limited depth perception, potentially disorienting camera viewpoints, and significant time delays. In this work, a constraint-based method for enhancing operator performance by seamlessly coordinating human and controller commands is presented. This method uses onboard LIDAR sensing to identify environmental hazards, designs a collision-free path homotopy traversing that environment, and coordinates the control commands of a driver and an onboard controller to ensure that the vehicle trajectory remains within a safe homotopy. This system's performance is demonstrated via off-road teleoperation of a Kawasaki Mule in an open field among obstacles. In these tests, the system safely avoids collisions and maintains vehicle stability even in the presence of "routine" operator error, loss of operator attention, and complete loss of communications.

  • 40.
    Anderson, Sterling J.
    et al.
    Massachusetts Institute of Technology.
    Peters, Steven C.
    Massachusetts Institute of Technology.
    Iagnemma, Karl
    Massachusetts Institute of Technology.
    Pilutti, Tom E.
    Ford Research Laboratories, Ford Motor Company, Dearborn, MI, United States.
    A Unified Approach to Semi-Autonomous Control of Passenger Vehicles in Hazard Avoidance Scenarios2009In: IEEE 2009 IEEE International Conference on Systems, Man and Cybernetics, SMC 2009, VOLS 1-9, Piscataway, N.J.: IEEE Press, 2009, 2032-2037 p.Conference paper (Refereed)
    Abstract [en]

    This paper describes the design of a unified active safety framework that combines trajectory planning, threat assessment, and semi-autonomous control of passenger vehicles into a single constrained-optimal-control-based system. This framework allows for multiple actuation modes, diverse trajectory-planning objectives, and varying levels of autonomy. The vehicle navigation problem is formulated as a constrained optimal control problem with constraints bounding a navigable region of the road surface. A model predictive controller iteratively plans the best-case vehicle trajectory through this constrained corridor. The framework then uses this trajectory to assess the threat posed to the vehicle and intervenes in proportion to this threat. This approach minimizes controller intervention while ensuring that the vehicle does not depart from a navigable corridor of travel. Simulated results are presented here to demonstrate the framework's ability to incorporate multiple threat thresholds and configurable intervention laws while sharing control with a human driver. ©2009 IEEE.

  • 41.
    Anderson, Sterling J.
    et al.
    MIT.
    Peters, Steven C.
    MIT.
    Pilutti, Tom E.
    Ford Research Laboratories, Dearborn, MI 48124, United States.
    Iagnemma, Karl
    MIT.
    Design and Development of an Optimal-Control-Based Framework for Trajectory Planning, Threat Assessment, and Semi-autonomous Control of Passenger Vehicles in Hazard Avoidance Scenarios2011In: Robotics Research, Berlin: Springer Berlin/Heidelberg, 2011, 39-54 p.Conference paper (Refereed)
    Abstract [en]

    This paper describes the design of an optimal-control-based active safety framework that performs trajectory planning, threat assessment, and semi-autonomous control of passenger vehicles in hazard avoidance scenarios. This framework allows for multiple actuation modes, diverse trajectory-planning objectives, and varying levels of autonomy. A model predictive controller iteratively plans a best-case vehicle trajectory through a navigable corridor as a constrained optimal control problem. The framework then uses this trajectory to assess the threat posed to the vehicle and intervenes in proportion to this threat. This approach minimizes controller intervention while ensuring that the vehicle does not depart from a navigable corridor of travel. Simulation and experimental results are presented here to demonstrate the framework's ability to incorporate configurable intervention laws while sharing control with a human driver. © 2011 Springer-Verlag.

  • 42.
    Anderson, Sterling J.
    et al.
    MIT.
    Peters, Steven C.
    MIT.
    Pilutti, Tom E.
    Ford Research Laboratories, Ford Motor Company, Dearborn, MI, United States.
    Iagnemma, Karl
    MIT.
    Experimental Study of an Optimal-Control-Based Framework for Trajectory Planning, Threat Assessment, and Semi-Autonomous Control of Passenger Vehicles in Hazard Avoidance Scenarios2010In: FIELD AND SERVICE ROBOTICS, Berlin: Springer Berlin/Heidelberg, 2010, 59-68 p.Conference paper (Refereed)
    Abstract [en]

    This paper describes the design of an optimal-control-based active safety framework that performs trajectory planning, threat assessment, and semi-autonomous control of passenger vehicles in hazard avoidance scenarios. The vehicle navigation problem is formulated as a constrained optimal control problem with constraints bounding a navigable region of the road surface. A model predictive controller iteratively plans an optimal vehicle trajectory through the constrained corridor. Metrics from this "best-case" scenario establish the minimum threat posed to the vehicle given its current state. Based on this threat assessment, the level of controller intervention required to prevent departure from the navigable corridor is calculated and driver/controller inputs are scaled accordingly. This approach minimizes controller intervention while ensuring that the vehicle does not depart from a navigable corridor of travel. It also allows for multiple actuation modes, diverse trajectory-planning objectives, and varying levels of autonomy. Experimental results are presented here to demonstrate the framework's semi-autonomous performance in hazard avoidance scenarios.

  • 43.
    Andersson, Anders
    et al.
    Swedish National Road and Transport Research Institute, Traffic and road users, Vehicle technology and simulation.
    Nyberg, Peter
    Linköpings universitet.
    Sehammar, Håkan
    Swedish National Road and Transport Research Institute, Traffic and road users, Vehicle technology and simulation.
    Öberg, Per
    Linköpings universitet.
    Vehicle Powertrain Test Bench Co-Simulation with a Moving Base Simulator Using a Pedal Robot2013In: SAE International Journal of Passenger Cars - Electronic and Electrical Systems, ISSN 1946-4614, Vol. 6, no 1, 169-179 p.Article in journal (Refereed)
    Abstract [en]

    A moving base simulator is a well-established technique for evaluating driver perception of a vehicle powertrain. We are connecting the moving base simulator Sim III at the Swedish National Road and Transport Research Institute with a newly built chassis dynamometer at Vehicular Systems, Linköping University.

    The purpose of the effort is to enhance fidelity of moving base simulators by letting drivers experience an actual powertrain. At the same time technicians are given a new tool for evaluating powertrain solutions in a controlled environment.

    As a first step, the vehicle model from the chassis dynamometer system has been implemented in Sim III. Interfacing software was developed, and an optical fiber covering the physical distance of 500 m between the facilities is used to connect the systems. Further, a pedal robot has been developed that uses two linear actuators to press the accelerator and brake pedals. The pedal robot uses feedback loops on accelerator position or brake cylinder pressure and is controlled via a UDP interface.

    Results from running the complete setup showed the expected functionality, and we were successful in performing a driving mission based on real road topography data. Vehicle acceleration and general driving feel were perceived as realistic by the test subjects, while braking still needs improvement. The pedal robot construction enables use of a large set of cars available on the market, and except for mounting the brake pressure sensor, the time to switch vehicles is approximately 30 minutes.
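
    As an illustration of the kind of UDP interface mentioned above, the sketch below sends accelerator and brake setpoints to the pedal robot. The packet layout (two little-endian floats), the IP address and the port are assumptions made for the example, not taken from the paper.

        # Hypothetical UDP setpoint sender for the pedal robot.
        import socket
        import struct

        PEDAL_ROBOT_ADDR = ("192.168.0.10", 5005)   # placeholder address and port

        def send_setpoints(sock, accelerator, brake_pressure):
            """accelerator in 0..1, brake cylinder pressure in bar (assumed units)."""
            payload = struct.pack("<ff", accelerator, brake_pressure)
            sock.sendto(payload, PEDAL_ROBOT_ADDR)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_setpoints(sock, accelerator=0.35, brake_pressure=0.0)   # 35 % throttle, no braking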

  • 44.
    Andersson, Karl
    Mid Sweden University, Faculty of Science, Technology and Media, Department of Electronics Design.
    PLC Lab Station: An Implementation of External Monitoring and Control Using OPC2014Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The PLC is frequently used when implementing automated control, which is an important part of many modern industries. This thesis has been carried out in collaboration with ÅF Consult in Sundsvall, who were in need of a PLC lab station for educational purposes. The overall aim of this thesis has been to design and construct such a lab station and also to implement a solution for external monitoring and control possibilities. The methodology of this project has included a literature study, followed by the implementation of the actual solutions and finally an evaluation of the project. The finished lab station includes a conveyor belt and a robotic arm controlled using two PLCs. The conveyor belt is designed to be able to store, transport, differentiate and sort small cubes of various materials, and the robotic arm is designed as a pick-and-place device that can move the cubes between different positions on the lab station. The monitoring and control solution is set up using an OPC client-server connection on a PC, and it provides a graphical user interface where the lab station can be monitored and controlled externally. The lab station offers diverse functionality, but due to some inconsistency in the included equipment it is not entirely reliable. The external monitoring and control solution also provides good functionality, but the time frame of the project resulted in a less extensive implementation than originally intended. The overall solutions are, however, considered to offer a functional and proper platform for educational purposes.
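
    As a rough analogue of the external monitoring and control solution, the sketch below polls and writes tags with the open-source python-opcua (OPC UA) client. This is a substitution, since the thesis does not specify the OPC flavour, and the endpoint URL and node identifiers are invented placeholders.

        # Rough analogue of OPC-based monitoring and control using python-opcua.
        # Endpoint and node ids are placeholders, not the lab station's real tags.
        import time
        from opcua import Client

        ENDPOINT = "opc.tcp://localhost:4840"            # placeholder endpoint
        CONVEYOR_RUN_NODE = "ns=2;s=Conveyor.Run"        # placeholder node ids
        CONVEYOR_SPEED_NODE = "ns=2;s=Conveyor.Speed"

        client = Client(ENDPOINT)
        client.connect()
        try:
            client.get_node(CONVEYOR_RUN_NODE).set_value(True)        # start the belt
            for _ in range(10):                                       # simple monitoring loop
                speed = client.get_node(CONVEYOR_SPEED_NODE).get_value()
                print("conveyor speed:", speed)
                time.sleep(1.0)
        finally:
            client.disconnect()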

  • 45.
    Andersson, Olov
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Wzorek, Mariusz
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Rudol, Piotr
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Doherty, Patrick
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization2016In: IEEE International Conference on Robotics and Automation (ICRA), 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, 4597-4604 p.Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly expected to move out of the controlled environment of research labs and into populated streets and workplaces. Collision avoidance in such cluttered and dynamic environments is of increasing importance as robots gain more autonomy. However, efficient avoidance is fundamentally difficult since computing safe trajectories may require considering both dynamics and uncertainty. While heuristics are often used in practice, we take a holistic stochastic trajectory optimization perspective that merges both collision avoidance and control. We examine dynamic obstacles moving without prior coordination, like pedestrians or vehicles. We find that common stochastic simplifications lead to poor approximations when obstacle behavior is difficult to predict. We instead compute efficient approximations by drawing upon techniques from machine learning. We propose to combine policy search with model-predictive control. This allows us to use recent fast constrained model-predictive control solvers, while gaining the stochastic properties of policy-based methods. We exploit recent advances in Bayesian optimization to efficiently solve the resulting probabilistically-constrained policy optimization problems. Finally, we present a real-time implementation of an obstacle avoiding controller for a quadcopter. We demonstrate the results in simulation as well as with real flight experiments.
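
    One ingredient discussed above, a probabilistic collision constraint under uncertain obstacle motion, can be illustrated with a simple Monte Carlo estimate. The paper's coupling of policy search, Bayesian optimization and fast constrained MPC is not reproduced here, and the dynamics, noise model and thresholds below are assumptions.

        # Monte Carlo estimate of collision probability for a candidate trajectory
        # against an obstacle with uncertain (Gaussian) velocity; usable as a chance
        # constraint inside a trajectory optimizer.
        import numpy as np

        def collision_probability(traj, obstacle_pos, obstacle_vel, vel_std,
                                  dt=0.1, radius=0.5, samples=1000, rng=None):
            """traj: (T, 2) planned positions sampled every dt seconds."""
            rng = rng or np.random.default_rng(0)
            T = traj.shape[0]
            hits = 0
            for _ in range(samples):
                v = obstacle_vel + rng.normal(0.0, vel_std, size=2)   # sampled velocity
                obs = obstacle_pos + np.outer(np.arange(1, T + 1) * dt, v)
                if np.any(np.linalg.norm(traj - obs, axis=1) < radius):
                    hits += 1
            return hits / samples

        traj = np.column_stack([np.linspace(0.0, 2.0, 20), np.zeros(20)])   # straight-line plan
        p = collision_probability(traj, obstacle_pos=np.array([2.0, 0.5]),
                                  obstacle_vel=np.array([-0.5, -0.2]), vel_std=0.2)
        print(f"estimated collision probability: {p:.2f}")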

  • 46.
    Andreasson, Henrik
    et al.
    Örebro University, School of Science and Technology.
    Bouguerra, Abdelbaki
    Örebro University, School of Science and Technology.
    Cirillo, Marcello
    Örebro University, School of Science and Technology.
    Dimitrov, Dimitar Nikolaev
    INRIA - Grenoble, Meylan, France .
    Driankov, Dimiter
    Örebro University, School of Science and Technology.
    Karlsson, Lars
    Örebro University, School of Science and Technology.
    Lilienthal, Achim J.
    Örebro University, School of Science and Technology.
    Pecora, Federico
    Örebro University, School of Science and Technology.
    Saarinen, Jari Pekka
    Örebro University, School of Science and Technology. Aalto University, Aalto, Finland .
    Sherikov, Aleksander
    Centre de recherche Grenoble, Rhône-Alpes, Grenoble, France .
    Stoyanov, Todor
    Örebro University, School of Science and Technology.
    Autonomous transport vehicles: where we are and what is missing2015In: IEEE robotics & automation magazine, ISSN 1070-9932, Vol. 22, no 1, 64-75 p.Article in journal (Refereed)
    Abstract [en]

    In this article, we address the problem of realizing a complete efficient system for automated management of fleets of autonomous ground vehicles in industrial sites. We elicit from current industrial practice and the scientific state of the art the key challenges related to autonomous transport vehicles in industrial environments and relate them to enabling techniques in perception, task allocation, motion planning, coordination, collision prediction, and control. We propose a modular approach based on least commitment, which integrates all modules through a uniform constraint-based paradigm. We describe an instantiation of this system and present a summary of the results, showing evidence of increased flexibility at the control level to adapt to contingencies.

  • 47.
    Andreasson, Henrik
    et al.
    Örebro University, Örebro, Sweden.
    Bouguerra, Abdelbaki
    Örebro University, Örebro, Sweden.
    Åstrand, Björn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Rögnvaldsson, Thorsteinn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Gold-fish SLAM: An application of SLAM to localize AGVs2012In: Field and Service Robotics: Results of the 8th International Conference / [ed] Kazuya Yoshida & Satoshi Tadokoro, Heidelberg: Springer, 2012, 585-598 p.Conference paper (Refereed)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system, running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs. © Springer-Verlag Berlin Heidelberg 2014.
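
    The temporary-landmark bookkeeping can be illustrated with a toy sketch; it covers only the map-maintenance side, not the Gold-fish SLAM estimator itself. Goods entering the environment are added as temporary landmarks and pruned when not re-observed for a while, while static landmarks persist. The identifiers and expiry time are invented.

        # Toy landmark map with static and temporary (expiring) landmarks.
        class LandmarkMap:
            def __init__(self, expiry=300.0):
                self.expiry = expiry
                self.landmarks = {}   # id -> dict(pos=(x, y), static=bool, last_seen=t)

            def observe(self, lid, pos, static, t):
                self.landmarks[lid] = {"pos": pos, "static": static, "last_seen": t}

            def prune(self, t):
                self.landmarks = {lid: lm for lid, lm in self.landmarks.items()
                                  if lm["static"] or t - lm["last_seen"] < self.expiry}

        m = LandmarkMap(expiry=300.0)
        m.observe("reflector_1", (0.0, 0.0), static=True, t=0.0)
        m.observe("paper_roll_17", (5.2, 3.1), static=False, t=0.0)
        m.prune(t=600.0)
        print(sorted(m.landmarks))   # the paper roll has expired, the reflector remains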

  • 48.
    Andreasson, Martin
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Johansson, Karl H.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Undamped Nonlinear Consensus Using Integral Lyapunov Functions2012In: 2012 American Control Conference (ACC), IEEE Computer Society, 2012, 6644-6649 p.Conference paper (Refereed)
    Abstract [en]

    This paper analyzes a class of nonlinear consensus algorithms where the input of an agent can be decoupled into a product of a gain function of the agent's own state, and a sum of interaction functions of the relative states of its neighbors. We prove the stability of the protocol for both single and double integrator dynamics using novel Lyapunov functions, and provide explicit formulas for the consensus points. The results are demonstrated through simulations of a realistic example within the framework of our proposed consensus algorithm.
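
    The protocol class analysed above, with inputs of the form u_i = g(x_i) * sum_j f(x_j - x_i) over the neighbours j of agent i, can be simulated directly for single-integrator agents. The particular g and f below are illustrative choices satisfying the usual positivity and odd-increasing conditions; they are not the paper's examples.

        # Simulation of the nonlinear consensus protocol u_i = g(x_i) * sum_j f(x_j - x_i)
        # for single-integrator agents on an undirected graph.
        import numpy as np

        def simulate(x0, adjacency, g, f, dt=0.01, steps=2000):
            x = np.array(x0, dtype=float)
            n = len(x)
            for _ in range(steps):
                u = np.zeros(n)
                for i in range(n):
                    interaction = sum(f(x[j] - x[i]) for j in range(n) if adjacency[i][j])
                    u[i] = g(x[i]) * interaction
                x += dt * u                    # single-integrator dynamics: x_i' = u_i
            return x

        A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]                    # path graph on 3 agents
        x_final = simulate([0.0, 2.0, 5.0], A,
                           g=lambda xi: 1.0 + 0.5 * np.tanh(xi) ** 2,   # positive gain function
                           f=np.tanh)                                   # odd, increasing interaction
        print(x_final)    # the states approach a common consensus value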

  • 49.
    Andreasson, Martin
    et al.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Sandberg, Henrik
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Johansson, Karl H.
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Distributed PI-Control with Applications to Power Systems Frequency Control2014In: American Control Conference (ACC), 2014, IEEE conference proceedings, 2014, 3183-3188 p.Conference paper (Refereed)
    Abstract [en]

    This paper considers a distributed PI-controller for networked dynamical systems. Sufficient conditions for when the controller is able to stabilize a general linear system and eliminate static control errors are presented. The proposed controller is applied to frequency control of power transmission systems. Sufficient stability criteria are derived, and it is shown that the controller parameters can always be chosen so that the frequencies in the closed loop converge to nominal operational frequency. We show that the load sharing property of the generators is maintained, i.e., the input power of the generators is proportional to a controller parameter. The controller is evaluated by simulation on the IEEE 30 bus test network, where its effectiveness is demonstrated.
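
    A toy simulation of a distributed PI frequency controller of this kind is sketched below: linearised swing dynamics with network coupling, a proportional term on the local frequency deviation, and integral states that are averaged over a communication graph so that frequencies return to nominal while the generators share the imbalance. All parameters are illustrative; this is not the paper's IEEE 30 bus model.

        # Toy distributed PI frequency control on a 3-bus linearised power system.
        import numpy as np

        n = 3
        M = np.array([10.0, 8.0, 6.0])                # generator inertias
        D = np.array([1.0, 1.2, 0.8])                 # damping coefficients
        P = np.array([-1.0, 0.3, 0.2])                # step power imbalance (net deficit 0.5)
        L_net = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # network Laplacian
        L_com = L_net                                 # communication graph = network graph here

        kp, ki, gamma = 2.0, 1.0, 1.0
        theta = np.zeros(n)                           # bus angles
        w = np.zeros(n)                               # frequency deviations
        z = np.zeros(n)                               # distributed integral states
        dt = 0.01
        for _ in range(60000):                        # simulate 600 s with forward Euler
            u = -kp * w - ki * z                      # distributed PI control input
            w_dot = (P - D * w - L_net @ theta + u) / M
            theta += dt * w
            w += dt * w_dot
            z += dt * (w - gamma * (L_com @ z))       # local integration + neighbour averaging
        print(np.round(w, 4), np.round(-ki * z, 3))   # frequencies ~0; near-equal shares of the deficit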

  • 50.
    Andrikopoulos, George
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Nikolakopoulos, George
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    HUmanoid Robotic Leg via pneumatic muscle actuators: implementation and control2017In: Meccanica (Milano. Print), ISSN 0025-6455, E-ISSN 1572-9648Article in journal (Refereed)
    Abstract [en]

    In this article, a HUmanoid Robotic Leg (HURL) via the utilization of pneumatic muscle actuators (PMAs) is presented. PMAs are a pneumatic form of actuation possessing crucial attributes for the implementation of a design that mimics the motion characteristics of a human ankle. HURL acts as a feasibility study in the conceptual goal of developing a 10 degree-of-freedom (DoF) lower-limb humanoid for compliance and postural control, while serving as a knowledge basis for its future alternative use in prosthetic robotics. HURL’s design properties are described in detail, while its 2-DoF motion capabilities (dorsiflexion–plantar flexion, eversion–inversion) are experimentally evaluated via an advanced nonlinear PID-based control algorithm.
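
    To make the control structure concrete, here is a plain discrete PID loop for a single joint angle. The paper's advanced nonlinear PID-based algorithm and the PMA dynamics are not reproduced; the first-order joint model, gains and setpoint below are assumptions.

        # Plain discrete PID loop driving a crude first-order joint model (assumption).
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        pid = PID(kp=4.0, ki=3.0, kd=0.2, dt=0.01)
        angle, a, b = 0.0, 1.0, 0.8       # joint model: angle_dot = -a*angle + b*u
        for _ in range(1000):
            u = pid.step(setpoint=0.3, measurement=angle)   # 0.3 rad dorsiflexion target
            angle += 0.01 * (-a * angle + b * u)
        print(round(angle, 3))            # settles close to the 0.3 rad setpoint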
