Digitala Vetenskapliga Arkivet

1 - 50 of 109
  • 1.
    Almeida, Diogo
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ambrus, Rares
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caccamo, Sergio
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Chen, Xi
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Cruciani, Silvia
    Pinto Basto De Carvalho, Joao F
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Haustein, Joshua
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH.
    Karayiannidis, Yiannis
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016 (2017). In: Warehouse Picking Automation Workshop 2017: Solutions, Experience, Learnings and Outlook of the Amazon Robotics Challenge, 2017. Conference paper (Other (popular science, discussion, etc.))
    Abstract [en]

    In this work we summarize the solution developed by Team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition simulated a warehouse automation scenario and was divided into two tasks: a picking task, where a robot picks items from a shelf and places them in a tote, and a stowing task, the inverse, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem, starting with a high-level overview of our system and then delving into the details of our perception pipeline and our strategy for manipulation and grasping. The solution was implemented using a Baxter robot equipped with additional sensors.

  • 2.
    Almeida, Diogo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ambrus, Rares
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Caccamo, Sergio
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Chen, Xi
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Cruciani, Silvia
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Pinto Basto de Carvalho, Joao Frederico
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Haustein, Joshua
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Marzinotto, Alejandro
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Viña, Francisco
    Karayiannidis, Yiannis
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Jensfelt, Patric
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Kragic, Danica
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Team KTH’s Picking Solution for the Amazon Picking Challenge 2016 (2020). In: Advances on Robotic Item Picking: Applications in Warehousing and E-Commerce Fulfillment, Springer Nature, 2020, p. 53-62. Chapter in book (Other academic)
    Abstract [en]

    In this chapter we summarize the solution developed by team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition, which simulated a warehouse automation scenario, was divided into two parts: a picking task, where the robot picks items from a shelf and places them into a tote, and a stowing task, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem starting with a high-level overview of the system, delving later into the details of our perception pipeline and strategy for manipulation and grasping. The hardware platform used in our solution consists of a Baxter robot equipped with multiple vision sensors.

  • 3.
    Anisi, David A.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Lindskog, Therese
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Algorithms for the connectivity constrained unmanned ground vehicle surveillance problem (2009). In: European Control Conference (ECC), Budapest, Hungary: EUCA, 2009. Conference paper (Refereed)
    Abstract [en]

    The Connectivity Constrained UGV Surveillance Problem (CUSP) considered in this paper is the following. Given a set of surveillance UGVs and a user-defined area to be covered, find waypoint-paths such that: 1) the area is completely surveyed, 2) the time for performing the search is minimized, and 3) the induced information graph is kept recurrently connected. It has previously been shown that the CUSP is NP-hard. This paper presents four different heuristic algorithms for solving the CUSP, namely the Token Station Algorithm, the Stacking Algorithm, the Visibility Graph Algorithm and the Connectivity Primitive Algorithm. These algorithms are then compared by means of Monte Carlo simulations. The conclusions drawn are that the Token Station Algorithm provides the solutions closest to optimal, the Stacking Algorithm has the lowest computational complexity, and the Connectivity Primitive Algorithm provides the best trade-off between optimality and computational complexity for larger problem instances.

  • 4.
    Anisi, David A.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Ögren, Petter
    Minimum time multi-UGV surveillance (2008). In: Optimization and Cooperative Control Strategies / [ed] Hirsch MJ; Commander CW; Pardalos PM; Murphey R, Berlin: Springer Verlag, 2008, p. 31-45. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of concurrent task and path planning for a number of surveillance Unmanned Ground Vehicles (UGVs) such that a user-defined area of interest is covered by the UGVs' sensors in minimum time. We first formulate the problem and show that it is in fact a generalization of the Multiple Traveling Salesmen Problem (MTSP), which is known to be NP-hard. We then propose a solution that decomposes the problem into three subproblems. The first is to find a maximal convex covering of the search area. Most results on static coverage use disjoint partitions of the search area, e.g. triangulation, to convert the continuous sensor positioning problem into a discrete one. However, by a simple example, we show that a highly overlapping set of maximal convex sets is better suited for minimum time coverage. The second subproblem is a combinatorial assignment and ordering of the sets in the cover. Since Tabu search algorithms are known to perform well on various routing problems, we use one as part of our proposed solution. Finally, the third subproblem utilizes a particular shortest path subroutine in order to find the vehicle paths and calculate the overall objective function used in the Tabu search. The proposed algorithm is illustrated by a number of simulation examples.
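
    The Tabu search component used for the assignment and ordering subproblem can be illustrated with a generic skeleton. The cost function, swap neighborhood, and toy instance below are illustrative assumptions, not the paper's implementation:

```python
def tabu_search(cost, initial, neighbors, iterations=100, tenure=5):
    """Generic Tabu search skeleton: always move to the best non-tabu
    neighbor (even if it is worse), keep a short memory of recently
    visited solutions to avoid cycling, and remember the best overall."""
    current = best = initial
    tabu = [initial]
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)          # forget the oldest tabu solution
        if cost(current) < cost(best):
            best = current
    return best

# Toy instance: order four sites to minimize total Manhattan travel,
# standing in for the assignment/ordering of convex sets in the cover.
points = {0: (0, 0), 1: (5, 0), 2: (5, 5), 3: (0, 5)}

def tour_cost(order):
    return sum(abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])
               for a, b in zip(order, order[1:]))

def swap_neighbors(order):
    swaps = []
    for i in range(len(order) - 1):
        o = list(order)
        o[i], o[i + 1] = o[i + 1], o[i]
        swaps.append(tuple(o))
    return swaps

best_order = tabu_search(tour_cost, (0, 2, 1, 3), swap_neighbors)
```

    The Tabu idea visible here is that the search keeps moving, even uphill, while the short tabu memory prevents it from stepping straight back into the local minimum it just left.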

  • 5.
    Anisi, David A.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Ögren, Petter
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.).
    Communication constrained multi-UGV surveillance (2008). In: IFAC World Congress, Seoul, Korea, 2008. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of connectivity constrained surveillance of a given polyhedral area with obstacles using a group of Unmanned Ground Vehicles (UGVs). The considered communication restrictions may involve both line-of-sight constraints and limited sensor range constraints. In this paper, the focus is on dynamic information graphs, G, which are required to be kept recurrently connected. The main motivation for introducing this weaker notion of connectivity is security and surveillance applications where the sentry vehicles may have to split up temporarily in order to complete the given mission efficiently, but are required to establish contact recurrently in order to exchange information or to make sure that all units are intact and well-functioning. From a theoretical standpoint, recurrent connectivity is shown to be sufficient for exponential convergence of consensus filters for the collected sensor data.
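
    The notion of recurrent connectivity can be made concrete with a small sketch. The windowed-union formalization below is an illustrative reading for exposition, not the paper's exact definition:

```python
from collections import deque

def is_connected(nodes, edges):
    """Standard BFS connectivity check on an undirected graph."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        n = queue.popleft()
        for m in adj[n] - seen:
            seen.add(m)
            queue.append(m)
    return seen == nodes

def recurrently_connected(nodes, edge_sequence, window):
    """Illustrative reading: the instantaneous communication graph may be
    disconnected, but the union of the graphs over every `window`
    consecutive time steps must be connected."""
    for t in range(len(edge_sequence) - window + 1):
        union = set().union(*edge_sequence[t:t + window])
        if not is_connected(nodes, union):
            return False
    return True
```

    Under this reading, vehicles may split up at any single instant, as long as the graphs over each window jointly connect the whole team — the kind of joint-connectivity condition that underlies consensus convergence results.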

  • 6. Anisi, David A.
    et al.
    Ögren, Petter
    Swedish Defence Research Agency (FOI), Sweden.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre.
    Cooperative Minimum Time Surveillance With Multiple Ground Vehicles (2010). In: IEEE Transactions on Automatic Control, ISSN 0018-9286, E-ISSN 1558-2523, Vol. 55, no 12, p. 2679-2691. Article in journal (Refereed)
    Abstract [en]

    In this paper, we formulate and solve two different minimum time problems related to unmanned ground vehicle (UGV) surveillance. The first problem is the following. Given a set of surveillance UGVs and a polyhedral area, find waypoint-paths for all UGVs such that every point of the area is visible from a point on a path and such that the time for executing the search in parallel is minimized. Here, the sensors' fields of view are assumed to have a limited coverage range and to be occluded by the obstacles. The second problem extends the first by additionally requiring the induced information graph to be connected at the time instants when the UGVs perform the surveillance mission, i.e., when they gather and transmit sensor data. In the context of the second problem, we also introduce and utilize the notion of recurrent connectivity, which is a significantly more flexible connectivity constraint than, e.g., 1-hop connectivity constraints, and use it to discuss consensus filter convergence for the group of UGVs.

  • 7. Anisi, David A.
    et al.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Hu, Xiaoming
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Lindskog, Therese
    Cooperative Surveillance Missions with Multiple Unmanned Ground Vehicles (UGVs) (2008). In: 47th IEEE Conference on Decision and Control (CDC 2008), 2008, p. 2444-2449. Conference paper (Refereed)
    Abstract [en]

    This paper proposes an optimization based approach to multi-UGV surveillance. In particular, we formulate both the minimum time and connectivity constrained surveillance problems, show that they are NP-hard, and propose decomposition techniques that allow us to solve them efficiently in an algorithmic manner. The minimum time formulation is the following. Given a set of surveillance UGVs and a polyhedral area, find waypoint-paths for all UGVs such that every point of the area is visible from a point on a path and such that the time for executing the search in parallel is minimized. Here, the sensors' fields of view are assumed to be occluded by the obstacles and limited by a maximal sensor range. The connectivity constrained formulation extends the first by additionally requiring that the information graph induced by the sensors is connected at the time instants when the UGVs stop to perform the surveillance task. The second formulation is relevant to situations where mutual visibility is needed either to transmit the sensor data being gathered, or to protect the team from hostile persons trying to approach the stationary UGVs.

  • 8.
    Anisi, David A.
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Ögren, Petter
    Department of Autonomous Systems Swedish Defence Research Agency.
    Robinson, John W. C.
    Department of Autonomous Systems Swedish Defence Research Agency.
    Safe receding horizon control of an aerial vehicle (2006). In: Proceedings of the 45th IEEE Conference on Decision and Control, Vols 1-14, IEEE, 2006, p. 57-62. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of designing a real-time high-performance controller and trajectory generator for air vehicles. The control objective is to use information about terrain and enemy threats to fly low and avoid radar exposure on the way to a given target. The proposed algorithm builds on the well-known approach of Receding Horizon Control (RHC) combined with a terminal cost, calculated from a graph representation of the environment. Using a novel safety maneuver, and under an assumption on the maximal terrain inclination, we are able to prove safety as well as task completion. The safety maneuver is incorporated in the short term optimization, which is performed using Nonlinear Programming (NLP). Some key characteristics of the trajectory planner are highlighted through simulations.
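
    The receding horizon structure described above can be sketched in a few lines. The toy 1-D setup, cost choices, and names below are illustrative assumptions; the terminal cost stands in for the paper's graph-based cost-to-go:

```python
import itertools

def rhc_step(state, horizon, moves, step_cost, terminal_cost):
    """One receding horizon iteration: enumerate all move sequences of
    length `horizon`, score each by accumulated step cost plus a terminal
    cost at the horizon, and return only the first move of the best one."""
    best_seq, best_val = None, float("inf")
    for seq in itertools.product(moves, repeat=horizon):
        s, val = state, 0.0
        for u in seq:
            s = s + u
            val += step_cost(s, u)
        val += terminal_cost(s)
        if val < best_val:
            best_seq, best_val = seq, val
    return best_seq[0]

# Toy 1-D mission: reach position 5. The terminal cost plays the role of
# the precomputed graph-based cost-to-go in the paper.
goal = 5
state = 0
for _ in range(10):
    u = rhc_step(state, horizon=2, moves=(-1, 0, 1),
                 step_cost=lambda s, u: abs(s - goal),
                 terminal_cost=lambda s: abs(s - goal))
    state += u
```

    At each step only the first move of the optimized sequence is applied and the optimization is redone from the new state, which is the defining feature of RHC.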

  • 9.
    Anisi, David
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Robinson, John W.C.
    Dept. of Autonomous Systems, Swedish Defence Research Agency (FOI), Stockholm, Sweden.
    Ögren, Petter
    Dept. of Autonomous Systems, Swedish Defence Research Agency (FOI), Stockholm, Sweden.
    Online Trajectory Planning for Aerial Vehicle: A Safe Approach with Guaranteed Task Completion. Manuscript (Other academic)
    Abstract [en]

    On-line trajectory optimization in three-dimensional space is the main topic of the paper at hand. The high-level framework augments on-line receding horizon control with an off-line computed terminal cost that captures the global characteristics of the environment, as well as any possible mission objectives. The first part of the paper is devoted to the single vehicle case, while the second part considers the problem of simultaneous arrival of multiple aerial vehicles. The main contribution of the first part is two-fold. Firstly, by augmenting a so-called safety maneuver at the end of the planned trajectory, this paper extends previous results by addressing provable safety properties in a 3D setting. Secondly, assuming initial feasibility, the planning method presented is shown to achieve finite-time task completion. Moreover, a quantitative comparison between the two competing objectives of optimality and computational tractability is made. Finally, some other key characteristics of the trajectory planner, such as the ability to minimize threat exposure and robustness, are highlighted through simulations. As for the simultaneous arrival problem considered in the second part, by using a time-scale separation principle, we are able to adapt standard Laplacian control to a consensus problem which is neither unconstrained nor first order.

  • 10.
    Anisi, David
    et al.
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory.
    Robinson, John W.C.
    Swedish Defence Research Agency (FOI), Department of Aeronautics .
    Ögren, Petter
    Swedish Defence Research Agency (FOI).
    On-line Trajectory planning for aerial vehicles: a safe approach with guaranteed task completion (2006). In: Collection of Technical Papers: AIAA Guidance, Navigation, and Control Conference 2006, 2006, p. 914-938. Conference paper (Refereed)
    Abstract [en]

    On-line trajectory optimization in three-dimensional space is the main topic of the paper at hand. The high-level framework augments on-line receding horizon control with an off-line computed terminal cost that captures the global characteristics of the environment, as well as any possible mission objectives. The first part of the paper is devoted to the single vehicle case, while the second part considers the problem of simultaneous arrival of multiple aerial vehicles. The main contribution of the first part is two-fold. Firstly, by augmenting a so-called safety maneuver at the end of the planned trajectory, this paper extends previous results by addressing provable safety properties in a 3D setting. Secondly, assuming initial feasibility, the planning method presented is shown to achieve finite-time task completion. Moreover, a quantitative comparison between the two competing objectives of optimality and computational tractability is made. Finally, some other key characteristics of the trajectory planner, such as the ability to minimize threat exposure and robustness, are highlighted through simulations. As for the simultaneous arrival problem considered in the second part, by using a time-scale separation principle, we are able to adapt standard Laplacian control to a consensus problem which is neither unconstrained nor first order.

  • 11.
    Bhat, Sriharsha
    et al.
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics.
    Torroba, Ignacio
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Özkahraman, Özer
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bore, Nils
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sprague, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Xie, Yiping
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Stenius, Ivan
    KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Vehicle Engineering and Solid Mechanics.
    Severholt, Josefine
    KTH, School of Engineering Sciences (SCI).
    Ljung, Carl
    KTH, School of Engineering Sciences (SCI).
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Cyber-Physical System for Hydrobatic AUVs: System Integration and Field Demonstration (2020). Conference paper (Refereed)
    Abstract [en]

    Cyber-physical systems (CPSs) comprise a network of sensors and actuators that are integrated with a computing and communication core. Hydrobatic Autonomous Underwater Vehicles (AUVs) can be efficient and agile, offering new use cases in ocean production, environmental sensing and security. In this paper, a CPS concept for hydrobatic AUVs is validated in real-world field trials with the hydrobatic AUV SAM developed at the Swedish Maritime Robotics Center (SMaRC). We present system integration of hardware systems, software subsystems for mission planning using Neptus, mission execution using behavior trees, flight and trim control, navigation and dead reckoning. Together with the software systems, we show simulation environments in Simulink and Stonefish for virtual validation of the entire CPS. Extensive field validation of the different components of the CPS has been performed. Results of a field demonstration scenario involving the search and inspection of a submerged Mini Cooper using payload cameras on SAM in the Baltic Sea are presented. The full system including the mission planning interface, behavior tree, controllers, dead-reckoning and object detection algorithm is validated. The submerged target is successfully detected both in simulation and reality, and simulation tools show tight integration with target hardware.
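
    The behavior-tree mission execution mentioned in the abstract can be illustrated with a minimal sketch of the two classic composite nodes. The mission fragment and all names are hypothetical, not code from the SAM system:

```python
# Tick statuses returned by every node.
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

class Sequence:
    """Ticks children left to right; fails or stays running as soon as a
    child does, and succeeds only if all children succeed."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            status = c.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Ticks children left to right; succeeds or stays running as soon as a
    child does, and fails only if all children fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for c in self.children:
            status = c.tick()
            if status != FAILURE:
                return status
        return FAILURE

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Hypothetical mission fragment: inspect the target if detected, else search.
detected = {"flag": False}
def target_detected():
    return SUCCESS if detected["flag"] else FAILURE
def search():
    detected["flag"] = True   # pretend the search finds the target
    return RUNNING
def inspect():
    return SUCCESS

mission = Fallback(Sequence(Action(target_detected), Action(inspect)),
                   Action(search))
first = mission.tick()    # search branch is running
second = mission.tick()   # target now detected, inspection succeeds
```

    Re-ticking the same tree routes execution automatically: once the detection condition flips, the Fallback's first branch takes over without any explicit mode-switching logic.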

  • 12.
    Bhat, Sriharsha
    et al.
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Torroba, Ignacio
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Özkahraman, Özer
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Bore, Nils
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Sprague, Christopher
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Xie, Yiping
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Stenius, Ivan
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Severholt, Josefine
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Ljung, Carl
    KTH, School of Engineering Sciences (SCI), Aeronautical and Vehicle Engineering.
    Folkesson, John
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Cyber-Physical System for Hydrobatic AUVs: System Integration and Field Demonstration (2020). In: 2020 IEEE/OES Autonomous Underwater Vehicles Symposium, AUV 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020. Conference paper (Refereed)
    Abstract [en]

    Cyber-physical systems (CPSs) comprise a network of sensors and actuators that are integrated with a computing and communication core. Hydrobatic Autonomous Underwater Vehicles (AUVs) can be efficient and agile, offering new use cases in ocean production, environmental sensing and security. In this paper, a CPS concept for hydrobatic AUVs is validated in real-world field trials with the hydrobatic AUV SAM developed at the Swedish Maritime Robotics Center (SMaRC). We present system integration of hardware systems, software subsystems for mission planning using Neptus, mission execution using behavior trees, flight and trim control, navigation and dead reckoning. Together with the software systems, we show simulation environments in Simulink and Stonefish for virtual validation of the entire CPS. Extensive field validation of the different components of the CPS has been performed. Results of a field demonstration scenario involving the search and inspection of a submerged Mini Cooper using payload cameras on SAM in the Baltic Sea are presented. The full system including the mission planning interface, behavior tree, controllers, dead-reckoning and object detection algorithm is validated. The submerged target is successfully detected both in simulation and reality, and simulation tools show tight integration with target hardware. 

  • 13.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smets, Nanja
    Neerincx, Mark
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Free Look UGV Teleoperation Control Tested in Game Environment: Enhanced Performance and Reduced Workload (2016). In: International Symposium on Safety, Security and Rescue Robotics, 2016. Conference paper (Refereed)
    Abstract [en]

    Concurrent telecontrol of the chassis and camera of an Unmanned Ground Vehicle (UGV) is a demanding task for Urban Search and Rescue (USAR) teams. The standard way of controlling UGVs is called Tank Control (TC), but there is reason to believe that Free Look Control (FLC), a control mode used in games, could reduce this load substantially by decoupling, and providing separate controls for, camera translation and rotation. The general hypothesis is that FLC (1) reduces robot operators’ workload and (2) enhances their performance for dynamic and time-critical USAR scenarios. A game-based environment was set up to systematically compare FLC with TC in two typical search and rescue tasks: navigation and exploration. The results show that FLC improves mission performance in both exploration (search) and path following (navigation) scenarios. In the former, more objects were found, and in the latter shorter navigation times were achieved. FLC also caused lower workload and stress levels in both scenarios, without inducing a significant difference in the number of collisions. Finally, FLC was preferred by 75% of the subjects for exploration, and 56% for path following.
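
    The decoupling at the heart of FLC can be shown with a toy command mapping. This is an illustrative sketch, not the study's implementation:

```python
import math

def flc_command(stick_x, stick_y, camera_yaw):
    """Free Look Control sketch: the translation input is interpreted in
    the camera frame, so 'forward' always means 'where the operator is
    looking', independent of the chassis heading."""
    vx = math.cos(camera_yaw) * stick_y - math.sin(camera_yaw) * stick_x
    vy = math.sin(camera_yaw) * stick_y + math.cos(camera_yaw) * stick_x
    return vx, vy

# With the camera turned 90 degrees, pushing 'forward' drives along world +y.
vx, vy = flc_command(0.0, 1.0, math.pi / 2)
```

    Under TC, the same stick input would instead be interpreted in the chassis frame, forcing the operator to mentally compose chassis heading and camera pose; FLC removes that extra step.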

  • 14.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL. KTH, School of Electrical Engineering and Computer Science (EECS), Centres, Centre for Autonomous Systems, CAS.
    Formation Obstacle Avoidance using RRT and Constraint Based Programming (2017). In: 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), IEEE conference proceedings, 2017, article id 8088131. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a new way of doing formation obstacle avoidance using a combination of Constraint Based Programming (CBP) and Rapidly Exploring Random Trees (RRTs). RRT is used to select waypoint nodes, and CBP is used to move the formation between those nodes, reactively rotating and translating the formation to pass the obstacles on the way. Thus, the CBP includes constraints for both formation keeping and obstacle avoidance, while striving to move the formation towards the next waypoint. The proposed approach is compared to a pure RRT approach where the motion between the RRT waypoints is done following linear interpolation trajectories, which are less computationally expensive than the CBP ones. The results of a number of challenging simulations show that the proposed approach is more efficient for scenarios with high obstacle densities.
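
    The RRT half of the proposed combination can be sketched in isolation. The obstacle-free toy version below only selects waypoints; the CBP motion between them is not modeled, and all parameters are illustrative:

```python
import math
import random

def rrt(start, goal, steer_dist=1.0, iters=800, goal_tol=1.0, seed=1):
    """Minimal RRT sketch (no obstacles, with simple goal biasing): grow a
    tree by steering the nearest node toward random samples, and return
    the waypoint path once a node lands in the goal region."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for i in range(iters):
        # Every 10th sample is the goal itself: a common RRT heuristic.
        sample = goal if i % 10 == 0 else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        step = min(steer_dist, d) / d
        new = (near[0] + (sample[0] - near[0]) * step,
               near[1] + (sample[1] - near[1]) * step)
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            path = [new]                      # backtrack to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

waypoints = rrt((0.0, 0.0), (9.0, 9.0))
```

    In the paper's combined scheme, each returned waypoint then becomes a target for the CBP layer, which reactively rotates and translates the whole formation between waypoints.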

  • 15.
    Båberg, Fredrik
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Wang, Yuquan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Adaptive object centered teleoperation control of a mobile manipulator (2016). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 455-461. Conference paper (Refereed)
    Abstract [en]

    Teleoperation of a mobile robot manipulating and exploring an object shares many similarities with the manipulation of virtual objects in 3D design software such as AutoCAD. The user interfaces are, however, quite different, mainly for historical reasons. In this paper we aim to change that, and draw inspiration from the 3D design community to propose a teleoperation interface control mode that is identical to the ones used to locally navigate the virtual viewpoint in most Computer Aided Design (CAD) software.

    The proposed mobile manipulator control framework thus allows the user to focus on the 3D objects being manipulated, using control modes such as orbit object and pan object, supported by data from the wrist-mounted RGB-D sensor. The gripper of the robot performs the desired motions relative to the object, while the manipulator arm and base move in a way that realizes the desired gripper motions. The system redundancies are exploited in order to take additional constraints, such as obstacle avoidance, into account, using a constraint based programming framework.
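
    The orbit object control mode can be illustrated with its underlying geometry. This is a hypothetical sketch of the idea, not the paper's controller:

```python
import math

def orbit(camera_pos, target, d_yaw, d_pitch):
    """'Orbit object' sketch: move the camera on a sphere around the
    target while keeping it aimed at the target (illustrative geometry)."""
    dx = camera_pos[0] - target[0]
    dy = camera_pos[1] - target[1]
    dz = camera_pos[2] - target[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    yaw = math.atan2(dy, dx) + d_yaw
    pitch = math.asin(dz / r) + d_pitch
    return (target[0] + r * math.cos(pitch) * math.cos(yaw),
            target[1] + r * math.cos(pitch) * math.sin(yaw),
            target[2] + r * math.sin(pitch))

# Orbiting a quarter turn around an object at the origin keeps the radius.
pos = orbit((2.0, 0.0, 0.0), (0.0, 0.0, 0.0), math.pi / 2, 0.0)
```

    The distance to the target is preserved by construction, so the gripper-mounted sensor circles the object while staying aimed at it; in the paper, the arm and base then realize this commanded gripper motion.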

  • 16.
    Båberg, Fredrik
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Wang, Yuquan
    KTH, School of Industrial Engineering and Management (ITM), Production Engineering, Sustainable Production Systems.
    Caccamo, Sergio
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Extended version of Adaptive Object Centered Teleoperation Control of a Mobile ManipulatorManuscript (preprint) (Other academic)
    Abstract [en]

    Teleoperation of a mobile robot manipulating and exploring an object shares many similarities with the manipulation of virtual objects in 3D design software such as AutoCAD. The user interfaces are, however, quite different, mainly for historical reasons. In this paper we aim to change that, and draw inspiration from the 3D design community to propose a teleoperation interface control mode that is identical to the ones used to locally navigate the virtual viewpoint in most Computer Aided Design (CAD) software.

    The proposed mobile manipulator control framework thus allows the user to focus on the 3D objects being manipulated, using control modes such as orbit object and pan object. The gripper of the robot performs the desired motions relative to the object, while the manipulator arm and base move in a way that realizes the desired gripper motions. The system redundancies are exploited in order to take additional constraints, such as obstacle avoidance, into account, using a constraint based programming framework.

  • 17.
    Caccamo, Sergio
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Parasuraman, Ramviyas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Båberg, Fredrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Extending a UGV Teleoperation FLC Interface with Wireless Network Connectivity Information2015In: 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), IEEE , 2015, p. 4305-4312Conference paper (Refereed)
    Abstract [en]

    Teleoperated Unmanned Ground Vehicles (UGVs) are expected to play an important role in future search and rescue operations. In such tasks, two factors are crucial for a successful mission completion: operator situational awareness and robust network connectivity between operator and UGV. In this paper, we address both these factors by extending a new Free Look Control (FLC) operator interface with a graphical representation of the Radio Signal Strength (RSS) gradient at the UGV location. We also provide a new way of estimating this gradient using multiple receivers with directional antennas. The proposed approach allows the operator to stay focused on the video stream providing the crucial situational awareness, while controlling the UGV to complete the mission without moving into areas with dangerously low wireless connectivity. The approach is implemented on a KUKA youBot using commercial-off-the-shelf components. We provide experimental results showing how the proposed RSS gradient estimation method performs better than a difference approximation using omnidirectional antennas and verify that it is indeed useful for predicting the RSS development along a UGV trajectory. We also evaluate the proposed combined approach in terms of accuracy, precision, sensitivity and specificity.
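One simple way to turn directional-antenna readings into a gradient estimate (an illustrative heuristic, not necessarily the estimator proposed in the paper) is a weighted vector sum of the antenna bearings:

```python
import math

def rss_gradient(readings):
    """Estimate the direction of the Radio Signal Strength (RSS)
    gradient from a list of (bearing_rad, rss_dbm) pairs, one per
    directional antenna. Weights are shifted so the weakest reading
    contributes nothing; the result points toward stronger signal."""
    base = min(rss for _, rss in readings)
    gx = sum((rss - base) * math.cos(th) for th, rss in readings)
    gy = sum((rss - base) * math.sin(th) for th, rss in readings)
    return gx, gy
```

For example, antennas facing east, north, west, and south reading -40, -55, -70, and -55 dBm would yield a gradient pointing east, toward the stronger signal.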

  • 18.
    Caccamo, Sergio
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Parasuraman, Ramviyas
    Purdue Univ, W Lafayette, IN 47907 USA..
    Freda, Luigi
    Sapienza Univ Rome, DIAG, ALCOR Lab, Rome, Italy..
    Gianni, Mario
    Sapienza Univ Rome, DIAG, ALCOR Lab, Rome, Italy..
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    RCAMP: A Resilient Communication-Aware Motion Planner for Mobile Robots with Autonomous Repair of Wireless Connectivity2017In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A Okamura, A, IEEE , 2017, p. 2010-2017Conference paper (Refereed)
    Abstract [en]

    Mobile robots, whether autonomous or teleoperated, require stable communication with the base station to exchange valuable information. Given the stochastic elements in radio signal propagation, such as shadowing and fading, and the possibilities of unpredictable events or hardware failures, communication loss often presents a significant mission risk, both in terms of probability and impact, especially in Urban Search and Rescue (USAR) operations. Depending on the circumstances, disconnected robots are either abandoned, or attempt to autonomously back-trace their way to the base station. Although recent results in Communication-Aware Motion Planning can be used to effectively manage connectivity with robots, there are no results focusing on autonomously re-establishing the wireless connectivity of a mobile robot without back-tracing or using detailed a priori information of the network. In this paper, we present a robust and online radio signal mapping method using Gaussian Random Fields, and propose a Resilient Communication-Aware Motion Planner (RCAMP) that integrates the above signal mapping framework with a motion planner. RCAMP considers both the environment and the physical constraints of the robot, based on the available sensory information. We also propose a self-repair strategy using RCAMP that takes both connectivity and the goal position into account when driving to a connection-safe position in the event of a communication loss. We demonstrate the proposed planner in a set of realistic simulations of an exploration task in single- or multi-channel communication scenarios.

  • 19. Christalin, B.
    et al.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Murray, R. M.
    Synthesis of reactive control protocols for switch electrical power systems for commercial application with safety specifications2017In: 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016, IEEE, 2017, article id 7849873Conference paper (Refereed)
    Abstract [en]

    This paper presents a method for the reactive synthesis of fault-tolerant optimal control protocols for a finite deterministic discrete event system subject to safety specifications. A Deterministic Finite State Machine (DFSM) and a Behavior Tree (BT) were used to model the system. The synthesis procedure involves formulating the policy problem as a shortest-path dynamic programming problem. The procedure evaluates all possible states when applied to the DFSM, or all possible actions when applied to the BT. The resulting strategy minimizes the number of actions performed to meet operational objectives without violating safety conditions. The effectiveness of the procedure on DFSMs and BTs is demonstrated through three examples of switched electrical power systems for commercial application, and analyzed using run-time complexity analysis. The results demonstrate that, for large-order systems, BTs provide a tractable model for synthesizing an optimal control policy.

  • 20.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Almeida, Diogo
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Towards Blended Planning and Acting using Behavior Trees. A Reactive, Safe and Fault Tolerant Approach.Article in journal (Refereed)
  • 21.
    Colledanchise, Michele
    et al.
    Istituto Italiano di Tecnologia - IIT, Genoa, Italy.
    Almeida, Diogo
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Towards Blended Reactive Planning and Acting using Behavior Trees2019In: 2019 International Conference on Robotics And Automation (ICRA), IEEE Robotics and Automation Society, 2019, p. 8839-8845Conference paper (Refereed)
    Abstract [en]

    In this paper, we show how a planning algorithm can be used to automatically create and update a Behavior Tree (BT), controlling a robot in a dynamic environment. The planning part of the algorithm is based on the idea of back chaining. Starting from a goal condition we iteratively select actions to achieve that goal, and if those actions have unmet preconditions, they are extended with actions to achieve them in the same way. The fact that BTs are inherently modular and reactive makes the proposed solution blend acting and planning in a way that enables the robot to effectively react to external disturbances. If an external agent undoes an action, the robot re-executes it without re-planning, and if an external agent helps the robot, it skips the corresponding actions, again without re-planning. We illustrate our approach in two different robotics scenarios.
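The back chaining described above can be sketched as follows; the tuple-based node representation and the `actions` mapping are illustrative assumptions, not the paper's implementation:

```python
def back_chain(goal, actions):
    """Build a BT-like nested structure by back chaining: a Fallback
    over the goal condition and a Sequence of the achieving action's
    (recursively expanded) preconditions followed by the action itself.
    `actions` maps a condition name to (action_name, preconditions)."""
    if goal not in actions:
        return goal  # a condition with no known achieving action stays a leaf
    action_name, preconditions = actions[goal]
    children = [back_chain(p, actions) for p in preconditions] + [action_name]
    return ("Fallback", [goal, ("Sequence", children)])
```

Because the goal condition is checked before the Sequence runs, re-execution after a disturbance and the skipping of already-achieved subgoals both fall out of ordinary BT ticking, as the abstract describes.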

  • 22.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Dimarogonas, Dimos V
    KTH, School of Electrical Engineering (EES), Centres, ACCESS Linnaeus Centre. KTH, School of Electrical Engineering (EES), Automatic Control.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Robot navigation under uncertainties using event based sampling2014In: Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, IEEE conference proceedings, 2014, p. 1438-1445Conference paper (Refereed)
    Abstract [en]

    In many robot applications, sensor feedback is needed to reduce uncertainties in environment models. However, sensor data acquisition also induces costs in terms of the time elapsed to make the observations and the computations needed to find new estimates. In this paper, we show how to use event based sampling to reduce the number of measurements done, thereby saving time, computational resources and power, without jeopardizing critical system properties such as safety and goal convergence. This is done by combining recent advances in nonlinear estimation with event based control using artificial potential fields. The results are particularly useful for real time systems such as high speed vehicles or teleoperated robots, where the cost of taking measurements is even higher, in terms of stops or transmission times. We conclude the paper with a set of simulations to illustrate the effectiveness of the approach and compare it with a baseline approach using periodic measurements.
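The trade-off can be illustrated with a one-dimensional event-triggered estimator sketch (a scalar Kalman-style variance update; the specific trigger rule here is an illustrative assumption, not the paper's exact condition):

```python
def event_based_estimation(prior_var, process_noise, meas_noise, threshold, steps):
    """Propagate a scalar estimate's variance and take a (costly)
    measurement only when the variance crosses `threshold`.
    Returns the final variance and the number of measurements used."""
    var = prior_var
    n_meas = 0
    for _ in range(steps):
        var += process_noise              # prediction: uncertainty grows
        if var > threshold:               # event: estimate too uncertain
            gain = var / (var + meas_noise)
            var = (1.0 - gain) * var      # update: measurement shrinks variance
            n_meas += 1
    return var, n_meas
```

With a threshold of 3.0 and unit noise variances, ten steps trigger only three measurements instead of ten, while the variance stays bounded, which is the saving the abstract refers to.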

  • 23.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Dimarogonas, Dimos
    KTH, School of Electrical Engineering (EES), Automatic Control. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Obstacle avoidance in formation using navigation-like functions and constraint based programming2013In: Proceedings of the International Conference on Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ, IEEE conference proceedings, 2013, p. 5234-5239Conference paper (Refereed)
    Abstract [en]

    In this paper, we combine navigation-function-like potential fields and constraint based programming to achieve obstacle avoidance in formation. Constraint based programming was developed in robotic manipulation as a technique to take several constraints into account when controlling redundant manipulators. The approach has also been generalized, and applied to other control systems such as dual arm manipulators and unmanned aerial vehicles. Navigation functions are an elegant way to design controllers with provable properties for navigation problems. By combining these tools, we take advantage of the redundancy inherent in a multi-agent control problem and are able to concurrently address features such as formation maintenance and goal convergence, even in the presence of moving obstacles. We show how the user can decide a priority ordering of the objectives, and can clearly see which objectives are currently addressed and which are postponed. We also analyze the theoretical properties of the proposed controller. Finally, we use a set of simulations to illustrate the approach.

  • 24.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Dimarogonas, Dimos V.
    KTH, School of Electrical Engineering (EES), Automatic Control.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    The advantages of using behavior trees in multi-robot systems2016In: 47th International Symposium on Robotics, ISR 2016, VDE Verlag GmbH, 2016, p. 23-30Conference paper (Refereed)
    Abstract [en]

    Multi-robot teams offer possibilities of improved performance and fault tolerance, compared to single robot solutions. In this paper, we show how to realize those possibilities when starting from a single robot system controlled by a Behavior Tree (BT). By extending the single robot BT to a multi-robot BT, we are able to combine the fault tolerant properties of the BT, in terms of built-in fallbacks, with the fault tolerance inherent in multi-robot approaches, in terms of a faulty robot being replaced by another one. Furthermore, we improve performance by identifying and taking advantage of the opportunities of parallel task execution, that are present in the single robot BT. Analyzing the proposed approach, we present results regarding how mission performance is affected by minor faults (a robot losing one capability) as well as major faults (a robot losing all its capabilities).

  • 25.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Performance Analysis of Stochastic Behavior Trees2014In: ICRA 2014, 2014Conference paper (Refereed)
    Abstract [en]

    This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs) for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of an FSM, modularity and reusability are improved.

    In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task.
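For the special case of statistically independent children, the per-node aggregation of success probabilities reduces to simple closed forms (a simplification of the DTMC machinery used in the paper):

```python
from functools import reduce

def sequence_success(child_probs):
    """A Sequence node succeeds only if every child succeeds."""
    return reduce(lambda acc, p: acc * p, child_probs, 1.0)

def fallback_success(child_probs):
    """A Fallback node succeeds as soon as any child succeeds."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), child_probs, 1.0)
```

Applying these bottom-up from the leaves to the root mirrors the recursive propagation described in the abstract, though the paper's DTMC treatment also yields execution times and handles repeated ticking.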

  • 26.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Marzinotto, Alejandro
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Stochastic Behavior Trees for Estimating and Optimizing the Performance of Reactive Plan ExecutionsArticle in journal (Refereed)
  • 27.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Murray, Richard M.
    CALTECH, Dept Control & Dynam Syst, Pasadena, CA 91125 USA..
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Synthesis of Correct-by-Construction Behavior Trees2017In: 2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) / [ed] Bicchi, A Okamura, A, IEEE , 2017, p. 6039-6046Conference paper (Refereed)
    Abstract [en]

    In this paper we study the problem of synthesizing correct-by-construction Behavior Trees (BTs) controlling agents in adversarial environments. The proposed approach combines the modularity and reactivity of BTs with the formal guarantees of Linear Temporal Logic (LTL) methods. Given a set of admissible environment specifications, an agent model in form of a Finite Transition System and the desired task in form of an LTL formula, we synthesize a BT in polynomial time, that is guaranteed to correctly execute the desired task. To illustrate the approach, we present three examples of increasing complexity.

  • 28.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Parasuraman, Ramviyas
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Learning of Behavior Trees for Autonomous Agents.Article in journal (Refereed)
  • 29.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    How Behavior Trees Generalize the Teleo-Reactive Paradigm and And-Or-Trees2016Conference paper (Refereed)
    Abstract [en]

    Behavior Trees (BTs) are a way of organizing the switching structure of a control system that was originally developed in the computer gaming industry but is now also being used in robotics. Teleo-Reactive programs (TRs) are a highly cited reactive hierarchical robot control approach suggested by Nilsson, and And-Or-Trees are trees used for heuristic problem solving. In this paper, we show that BTs generalize TRs as well as And-Or-Trees, even though the two concepts are quite different. And-Or-Trees are trees of conditions, and we show that they transform into a feedback execution plan when written as a BT. TRs are hierarchical control structures, and we show how every TR can be written as a BT. Furthermore, we show that so-called Universal TRs, guaranteeing that the goal will be reached, are a special case of so-called Finite Time Successful BTs. This implies that many designs and theoretical results developed for TRs can be applied to BTs.

  • 30.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees2017In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 33, no 2, p. 372-389Article in journal (Refereed)
  • 31.
    Colledanchise, Michele
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    How Behavior Trees Modularize Robustness and Safety in Hybrid Systems2014In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2014), IEEE , 2014, p. 1482-1488Conference paper (Refereed)
    Abstract [en]

    Behavior Trees (BTs) have become a popular framework for designing controllers of in-game opponents in the computer gaming industry. In this paper, we formalize and analyze the reasons behind the success of the BTs using standard tools of robot control theory, focusing on how properties such as robustness and safety are addressed in a modular way. In particular, we show how these key properties can be traced back to the ideas of subsumption and sequential compositions of robot behaviors. Thus BTs can be seen as a recent addition to a long research effort towards increasing modularity, robustness and safety of robot control software. To illustrate the use of BTs, we provide a set of solutions to example problems.
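The subsumption-style switching that the paper traces BTs back to can be seen in a minimal tick function (an illustrative sketch, not the paper's formal definitions):

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

def tick(node):
    """Tick a BT node: a leaf is a callable returning a status,
    an inner node is a tuple (kind, children) with kind either
    'Sequence' or 'Fallback'."""
    if callable(node):
        return node()
    kind, children = node
    for child in children:
        status = tick(child)
        if kind == "Sequence" and status != SUCCESS:
            return status   # Sequence stops at the first non-success
        if kind == "Fallback" and status != FAILURE:
            return status   # Fallback stops at the first non-failure
    return SUCCESS if kind == "Sequence" else FAILURE
```

Robustness comes from Fallback nodes trying alternatives when a behavior fails, and safety from Sequence nodes gating risky behaviors behind conditions, which is the modularity argument made in the abstract.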

  • 32.
    Colledanchise, Michele
    et al.
    Istituto Italiano di Tecnologia, Genova, Liguria, IT.
    Parasuraman, Ramviyas Nattanmai
    Purdue University System, West Lafayette, IN, US.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Learning of Behavior Trees for Autonomous Agents2018In: IEEE Transactions on Games, ISSN 2475-1502Article in journal (Refereed)
    Abstract [en]

    In this paper, we study the problem of automatically synthesizing a successful Behavior Tree (BT) in an a priori unknown dynamic environment. Starting with a given set of behaviors, a reward function, and sensing in terms of a set of binary conditions, the proposed algorithm incrementally learns a switching structure in terms of a BT that is able to handle the situations encountered. Exploiting the fact that BTs generalize And-Or-Trees and also provide very natural chromosome mappings for genetic programming, we combine the long-term performance of Genetic Programming with a greedy element and use the And-Or analogy to limit the size of the resulting structure. Finally, earlier results on BTs enable us to provide certain safety guarantees for the resulting system. Using the testing environment Mario AI we compare our approach to alternative methods for learning BTs and Finite State Machines. The evaluation shows that the proposed approach generated solutions with better performance, and often fewer nodes, than the other two methods.

  • 33. Egerstedt, M
    et al.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Shakernia, O
    Lygeros, J
    Toward Optimal Control of Switched Linear Systems2000Conference paper (Refereed)
    Abstract [en]

    We investigate the problem of driving the state of a switched linear control system between boundary states. We propose tight lower bounds for the minimum energy control problem. Furthermore, we show that the change of the system dynamics across the switching surface gives rise to phenomena that can be treated as a decidability problem of hybrid systems. Applying earlier results on controller synthesis for hybrid systems with linear continuous dynamics, we provide an algorithm for computing the minimum number of switchings of a trajectory from one state to another, and show that this algorithm is computable for a fairly wide class of linear switched systems.

  • 34.
    Iovino, Matteo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Scukins, Edvards
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Styrud, Jonathan
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Smith, Christian
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A survey of Behavior Trees in robotics and AI2022In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 154, article id 104096Article in journal (Refereed)
    Abstract [en]

    Behavior Trees (BTs) were invented as a tool to enable modular AI in computer games, but have received an increasing amount of attention in the robotics community in the last decade. With rising demands on agent AI complexity, game programmers found that the Finite State Machines (FSM) that they used scaled poorly and were difficult to extend, adapt and reuse. In BTs, the state transition logic is not dispersed across the individual states, but organized in a hierarchical tree structure, with the states as leaves. This has a significant effect on modularity, which in turn simplifies both synthesis and analysis by humans and algorithms alike. These advantages are needed not only in game AI design, but also in robotics, as is evident from the research being done. In this paper we present a comprehensive survey of the topic of BTs in Artificial Intelligence and Robotic applications. The existing literature is described and categorized based on methods, application areas and contributions, and the paper is concluded with a list of open research challenges.

  • 35.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Chalmers, Sweden.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Barrientos, Francisco Eli Vina
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Adaptive Control Approach for Opening Doors and Drawers Under Uncertainties2016In: IEEE Transactions on robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 32, no 1, p. 161-175Article in journal (Refereed)
    Abstract [en]

    We study the problem of robot interaction with mechanisms that afford one degree of freedom motion, e.g., doors and drawers. We propose a methodology for simultaneous compliant interaction and estimation of constraints imposed by the joint. Our method requires no prior knowledge of the mechanisms' kinematics, including the type of joint, prismatic or revolute. The method consists of a velocity controller that relies on force/torque measurements and estimation of the motion direction, the distance, and the orientation of the rotational axis. It is suitable for velocity controlled manipulators with force/torque sensor capabilities at the end-effector. Forces and torques are regulated within given constraints, while the velocity controller ensures that the end-effector of the robot moves with a task-related desired velocity. We give proof that the estimates converge to the true values under valid assumptions on the grasp, and error bounds for setups with inaccuracies in control, measurements, or modeling. The method is evaluated in different scenarios involving opening a representative set of door and drawer mechanisms found in household environments.

  • 36.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Design of force-driven online motion plans for door opening under uncertainties2012In: Workshop on Real-time Motion Planning: Online, Reactive, and in Real-time, 2012Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for household robotic applications. Domestic environments are generally less structured than industrial environments and thus several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The velocity reference is designed by using feedback of force measurements while constraint and motion directions are updated online based on adaptive estimates of the position of the door hinge. The online estimator is appropriately designed in order to identify the unknown directions. The proposed scheme has theoretically guaranteed performance which is further demonstrated in experiments on a real robot. Experimental results additionally show the robustness of the proposed method under disturbances introduced by the motion of the mobile platform.

  • 37.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Interactive perception and manipulation of unknown constrained mechanisms using adaptive control2013In: ICRA 2013 Mobile Manipulation Workshop on Interactive Perception, 2013Conference paper (Refereed)
  • 38.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Model-free robot manipulation of doors and drawers by means of fixed-grasps2013In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE , 2013, p. 4485-4492Conference paper (Refereed)
    Abstract [en]

    This paper addresses the problem of robot interaction with objects attached to the environment through joints such as doors or drawers. We propose a methodology that requires no prior knowledge of the objects' kinematics, including the type of joint - either prismatic or revolute. The method consists of a velocity controller which relies on force/torque measurements and estimation of the motion direction, rotational axis and the distance from the center of rotation. The method is suitable for any velocity controlled manipulator with a force/torque sensor at the end-effector. The force/torque control regulates the applied forces and torques within given constraints, while the velocity controller ensures that the end-effector moves with a task-related desired tangential velocity. The paper also provides a proof that the estimates converge to the actual values. The method is evaluated in different scenarios typically met in a household environment.

    Download full text (pdf)
    icra2013Karayiannidis
  • 39.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Vina, Francisco
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    "Open Sesame!" Adaptive Force/Velocity Control for Opening Unknown Doors2012In: Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, IEEE , 2012, p. 4040-4047Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally less structured than industrial environments, several types of uncertainties associated with the dynamics and kinematics of a door must be dealt with to achieve successful opening. This paper proposes a method that can open doors without prior knowledge of the door kinematics. The proposed method can be implemented on a velocity-controlled manipulator with force sensing capabilities at the end-effector. The method consists of a velocity controller which uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction following the concept of hybrid force/motion control. A force controller acting within the velocity controller regulates the radial force to a desired small value while the velocity controller ensures that the end effector of the robot moves with a desired tangential velocity leading to task completion. This paper also provides a proof that the adaptive estimates of the radial direction converge to the actual radial vector. The performance of the control scheme is demonstrated in both simulation and on a real robot.
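    The hybrid force/motion decomposition described in this abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' implementation: the 2-D setting, the gains, and all names are assumptions.

```python
import numpy as np

def door_opening_velocity(ee_pos, hinge_est, f_meas, v_tan=0.05, f_des=2.0, kf=0.001):
    """One control step of a hybrid force/velocity scheme in the spirit of the
    abstract: regulate the radial force toward a small desired value while
    moving the end-effector with a desired tangential velocity.
    ee_pos, hinge_est, f_meas are 2-D numpy arrays; gains are illustrative."""
    r = ee_pos - hinge_est                     # estimated radial direction
    r_hat = r / np.linalg.norm(r)
    t_hat = np.array([-r_hat[1], r_hat[0]])    # tangential: 90-degree rotation
    f_rad = float(np.dot(f_meas, r_hat))       # measured radial force component
    v_rad = kf * (f_des - f_rad)               # force feedback along radial dir
    return v_rad * r_hat + v_tan * t_hat       # commanded end-effector velocity
```

    Keeping the radial force small makes the end-effector comply with the door's constraint while the tangential term drives the opening motion; in the paper, the hinge position itself is estimated adaptively online rather than assumed known.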

    Download full text (pdf)
    Iros2012Karayiannidis
  • 40.
    Karayiannidis, Yiannis
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Adaptive force/velocity control for opening unknown doors2012In: Robot Control, Volume 10, Part  1, 2012, p. 753-758Conference paper (Refereed)
    Abstract [en]

    The problem of door opening is fundamental for robots operating in domestic environments. Since these environments are generally unstructured, a robot must deal with several types of uncertainties associated with the dynamics and kinematics of a door to achieve successful opening. The present paper proposes a dynamic force/velocity controller which uses adaptive estimation of the radial direction based on adaptive estimates of the door hinge's position. The control action is decomposed into estimated radial and tangential directions, which are proved to converge to the corresponding actual values. The force controller uses reactive compensation of the tangential forces and regulates the radial force to a desired small value, while the velocity controller ensures that the robot's end-effector moves with a desired tangential velocity. The performance of the control scheme is demonstrated in simulation with a 2 DoF planar manipulator opening a door.

  • 41.
    Kartasev, Mart
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Salér, Justin
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Ögren, Petter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Improving the Performance of Backward Chained Behavior Trees that use Reinforcement Learning2023In: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Institute of Electrical and Electronics Engineers (IEEE) , 2023, p. 1572-1579Conference paper (Refereed)
    Abstract [en]

    In this paper we show how to improve the performance of backward chained behavior trees (BTs) that include policies trained with reinforcement learning (RL). BTs represent a hierarchical and modular way of combining control policies into higher level control policies. Backward chaining is a design principle for the construction of BTs that combines reactivity with goal directed actions in a structured way. The backward chained structure has also enabled convergence proofs for BTs, identifying a set of local conditions to be satisfied for the convergence of all trajectories to a set of desired goal states. The key idea of this paper is to improve the performance of backward chained BTs by using the conditions identified in a theoretical convergence proof to configure the RL problems for individual controllers. Specifically, previous analysis identified so-called active constraint conditions (ACCs), which should not be violated in order to avoid having to return to work on previously achieved subgoals. We propose a way to set up the RL problems such that they not only achieve each immediate subgoal, but also avoid violating the identified ACCs. The resulting performance improvement depends on how often ACC violations occurred before the change, and how much effort, in terms of execution time, was needed to re-achieve them. The proposed approach is illustrated in a dynamic simulation environment.
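    The backward chaining idea summarized above (work on the deepest unmet condition in a goal-to-subgoal chain) can be illustrated with a minimal tick function. The chain layout and all names are hypothetical, not taken from the paper:

```python
def tick(chain, state):
    """One tick of a backward chained behavior tree, given as a list of
    (condition, action) pairs ordered from the goal condition downwards:
    the action at index i achieves the condition at index i, and requires
    the condition at index i + 1 (if any) as its precondition."""
    if chain[0][0](state):
        return "SUCCESS"         # goal condition already holds
    chosen = None
    for cond, action in chain:
        if cond(state):
            break                # deeper subgoals already achieved
        chosen = action          # descend to the deepest unmet condition
    return chosen
```

    In the paper's terms, the RL problem behind each action would additionally be penalized for violating the active constraint conditions established higher up in the chain, so that achieving a subgoal does not undo earlier ones.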

  • 42. Kruijff-Korbayová, I
    et al.
    Colas, F
    Gianni, M
    Pirri, F
    de Greeff, J
    Hindriks, K
    Neerincx, M
    Ögren, Petter
    KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Optimization and Systems Theory. KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Svoboda, T
    Worst, R
    TRADR Project: Long-Term Human-Robot Teaming for Robot Assisted Disaster Response2015In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 29, no 2, p. 193-201Article in journal (Refereed)
    Abstract [en]

    This paper describes the project TRADR: Long-Term Human-Robot Teaming for Robot Assisted Disaster Response. Experience shows that any incident serious enough to require robot involvement will most likely involve a sequence of sorties over several hours, days and even months. TRADR focuses on the challenges that thus arise for the persistence of environment models, multi-robot action models, and human-robot teaming, in order to allow incremental capability improvement over the duration of a mission. TRADR applies a user centric design approach to disaster response robotics, with use cases involving the response to a medium to large scale industrial accident by teams consisting of human rescuers and several robots (both ground and airborne). This paper describes the fundamentals of the project: the motivation, objectives and approach in contrast to related work. 

  • 43. Lundberg, I.
    et al.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Intrinsic camera and hand-eye calibration for a robot vision system using a point marker2015In: IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2015, p. 59-66Conference paper (Refereed)
    Abstract [en]

    Accurate robot camera calibration is a requirement for vision guided robots to perform precision assembly tasks. In this paper, we address the problem of doing intrinsic camera and hand-eye calibration on a robot vision system using a single point marker. This removes the need for bulky special purpose calibration objects, and also facilitates online accuracy checking and re-calibration when needed, without altering the robot's production environment. The proposed solution provides a calibration routine that produces high quality results on par with the robot accuracy and completes a calibration in 3 minutes without manual intervention. We also present a method for automatic testing of camera calibration accuracy. Results from experimental verification on the dual arm concept robot FRIDA are presented.

  • 44.
    Marzinotto, Alejandro
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Smith, Christian
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Towards a Unified Behavior Trees Framework for Robot Control2014In: Robotics and Automation (ICRA), 2014 IEEE International Conference on , IEEE Robotics and Automation Society, 2014, p. 5420-5427Conference paper (Refereed)
    Abstract [en]

    This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library.

    Download full text (pdf)
    fulltext
  • 45. McGhan, C. L. R.
    et al.
    Wang, Y. -S
    Murray, R. M.
    Vaquero, T.
    Williams, B. C.
    Colledanchise, Michele
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL.
    Towards architecture-wide analysis, verification, and validation for total system stability during goal-seeking space robotics operations2016In: AIAA Space and Astronautics Forum and Exposition, SPACE 2016, American Institute of Aeronautics and Astronautics, 2016Conference paper (Refereed)
    Abstract [en]

    In this paper we discuss the beginnings of an attempt to define and analyze the stability of an entire modular robotic system architecture - one which includes a three-tier (3T) layer breakdown of capabilities, with symbolic, deterministic planning at the highest level. We approach the problem from the standpoint of a control theory outlook, and try to formalize the issues that result from trying to quantitatively characterize the overall performance of a well-defined system without a need for exhaustive testing. We start by discussing the concept of bounded-input bounded-output stability, giving examples where the technique might not be sufficient to guarantee what we term “total system stability” due to complications associated with the levels of abstraction between the modules and components that are being chained together in the architecture. We then go on to discuss necessary conditions that may fall out of this naturally as a result. We further try to better-define the input and output constraints needed to guarantee total system stability, using an assumption-guarantee-like contractual framework that sits alongside the architecture; the requirements then may have influence across multiple modules, in order to keep consistency. We also discuss how the structure of the architectural modules may help or hinder the process of capability characterization and performance analysis of each module and a given architecture configuration as a whole. We then discuss two overlapping methods that, combined, should allow us to analyze the effectiveness of the architecture, and help towards verification and validation of both the components and the system as a whole. Demonstrative examples are given using a specific architectural implementation called the Resilient Spacecraft Executive. In future work, we hope to define both necessary and sufficient conditions for total system stability across such a system architecture for robotics use.

    Download full text (pdf)
    fulltext
  • 46.
    Nimara, Doumitrou Daniil
    et al.
    Ericsson GAIA, Sweden.
    Malek-Mohammadi, Mohammadreza
    Qualcomm, Sweden.
    Wei, Jieqiang
    Ericsson GAIA, Sweden.
    Huang, Vincent
    Ericsson GAIA, Sweden.
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Model-Based Reinforcement Learning for Cavity Filter Tuning2023In: Proceedings of the 5th Annual Learning for Dynamics and Control Conference, L4DC 2023, ML Research Press , 2023, p. 1297-1307Conference paper (Refereed)
    Abstract [en]

    The ongoing development of telecommunication systems like 5G has led to an increase in demand for well calibrated base transceiver station (BTS) components. A pivotal component of every BTS is cavity filters, which provide a sharp frequency characteristic to select a particular band of interest and reject the rest. Unfortunately, their characteristics in combination with manufacturing tolerances make them difficult for mass production and often lead to costly manual post-production fine tuning. To address this, numerous approaches have been proposed to automate the tuning process. One particularly promising approach, which has emerged in the past few years, is to use model free reinforcement learning (MFRL); however, the agents are not sample efficient. This poses a serious bottleneck, as utilising complex simulators or training with real filters is prohibitively time demanding. This work advocates for the usage of model based reinforcement learning (MBRL) and showcases how its utilisation can significantly decrease sample complexity, while maintaining similar levels of success rate. More specifically, we propose an improvement over a state-of-the-art (SoTA) MBRL algorithm, namely the Dreamer algorithm. This improvement can serve as a template for applications in other similar, high-dimensional non-image data problems. We carry out experiments on two complex filter types, and show that our novel modification of the Dreamer architecture reduces sample complexity by a factor of 4 and 10, respectively. Our findings pioneer the usage of MBRL for this task, paving the way for utilising more precise and accurate simulators, which was previously prohibitively time demanding.

  • 47.
    Pallin, Martin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Rashid, Jayedur
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    A Decentralized Asynchronous Collaborative Genetic Algorithm for Heterogeneous Multi-agent Search and Rescue Problems2021In: 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2021, Institute of Electrical and Electronics Engineers (IEEE) , 2021, p. 1-8Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a version of the Genetic Algorithm (GA) for combined task assignment and path planning that is highly decentralized in the sense that each agent only knows its own capabilities and data, and a set of so-called handover values communicated to it from the other agents over an unreliable low bandwidth communication channel. These handover values are used in combination with a local GA involving no other agents, to decide what tasks to execute, and what tasks to leave to others. We compare the performance of our approach to a centralized version of GA, and a partly decentralized version of GA where computations are local, but all agents need complete information regarding all other agents, including position, range, battery, and local obstacle maps. We compare solution performance as well as messages sent for the three algorithms, and conclude that the proposed algorithm has a small decrease in performance, but a significant decrease in required communication.
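    The role of the handover values can be illustrated with a toy decision rule. This is a deliberate simplification (the paper embeds the decision inside a local genetic algorithm over task sequences and paths), and all names are hypothetical:

```python
def assign_tasks(my_costs, handover_values):
    """Keep a task if this agent's own cost of serving it beats the best
    'handover value' (cost) reported by the other agents for that task.
    my_costs: {task: own cost}; handover_values: {task: best reported cost}.
    Tasks nobody else has reported a value for are kept by default."""
    return [t for t, c in my_costs.items()
            if c < handover_values.get(t, float("inf"))]
```

    Because only the scalar handover values cross the network, each agent can run its local optimization without sharing positions, battery levels, or obstacle maps, which is what yields the communication savings reported in the abstract.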

  • 48.
    Pallin, Martin
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Rashid, Jayedur
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Formulation and Solution of the Multi-agent Concurrent Search and Rescue Problem2021In: IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2021, Institute of Electrical and Electronics Engineers (IEEE) , 2021, p. 27-33Conference paper (Refereed)
    Abstract [en]

    In this paper we formulate and solve the concurrent multi-agent search and rescue problem (C-SARP), where a multi-agent system is to concurrently search an area and assist the victims found during the search. It is widely believed that a UAV system can help save lives by locating and assisting victims over large inaccessible areas in the initial stages after a disaster, such as an earthquake, flood, or plane crash. In such a scenario, a natural objective is to minimize the loss of lives. Therefore, two types of uncertainty need to be taken into account: the uncertainty in the positions of the victims, and the uncertainty in their health over time. It is rational to start looking where victims are most likely to be found, such as the reported position of a victim in a life boat with access to a radio, but it is also rational to start looking where loss of lives is most likely to occur, such as the uncertain position of victims swimming in cold water. We show that the proposed C-SARP is NP-hard, and that the two elements of search and rescue should not be decoupled, making C-SARP substantially different from previously studied multi-agent problems, including coverage, multi-agent travelling salesman problems, and earlier studies of decoupled search and rescue. Finally, we provide an experimental comparison between the most promising algorithms used in the literature to address similar problems, and find that the solutions to the C-SARP reproduce the trajectories recommended in search and rescue manuals for simple problems, but outperform those trajectories in terms of expected survivability for more complex scenarios.

  • 49. Parasuraman, Ramviyas
    et al.
    Caccamo, Sergio
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Båberg, Fredrik
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ögren, Petter
    KTH, School of Computer Science and Communication (CSC), Robotics, perception and learning, RPL. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Neerincx, Mark
    A New UGV Teleoperation Interface for Improved Awareness of Network Connectivity and Physical Surroundings2017In: Journal of Human-Robot Interaction, E-ISSN 2163-0364, Vol. 6, no 3, p. 48-70Article in journal (Refereed)
    Abstract [en]

    A reliable wireless connection between the operator and the teleoperated unmanned ground vehicle (UGV) is critical in many urban search and rescue (USAR) missions. Unfortunately, as was seen in, for example, the Fukushima nuclear disaster, the networks available in areas where USAR missions take place are often severely limited in range and coverage. Therefore, during mission execution, the operator needs to keep track of not only the physical parts of the mission, such as navigating through an area or searching for victims, but also the variations in network connectivity across the environment. In this paper, we propose and evaluate a new teleoperation user interface (UI) that includes a way of estimating the direction of arrival (DoA) of the radio signal strength (RSS) and integrating the DoA information in the interface. The evaluation shows that using the interface results in more objects found, and fewer aborted missions due to connectivity problems, as compared to a standard interface. The proposed interface is an extension to an existing interface centered on the video stream captured by the UGV. But instead of just showing the network signal strength in terms of percent and a set of bars, the additional information of DoA is added in terms of a color bar surrounding the video feed. With this information, the operator knows which movement directions are safe, even when moving in regions close to the connectivity threshold.

  • 50.
    Parasuraman, Ramviyas
    et al.
    Univ Georgia, Sch Comp, Athens, GA 30602 USA..
    Min, Byung-Cheol
    Purdue Univ, Comp & Informat Technol, W Lafayette, IN USA..
    Ögren, Petter
    KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
    Rapid prediction of network quality in mobile robots2023In: Ad hoc networks, ISSN 1570-8705, E-ISSN 1570-8713, Vol. 138, article id 103014Article in journal (Refereed)
    Abstract [en]

    Mobile robots rely on wireless networks for sharing sensor data from remote missions. The robot's spatial network quality will vary considerably across a given mission environment and network access point (AP) location, which are often unknown a priori. Therefore, predicting these spatial variations becomes essential and challenging, especially in dynamic and unstructured environments. To address this challenge, we propose an online algorithm to predict wireless connection quality, measured through the well-exploited Radio Signal Strength (RSS) metric, at future positions along a mobile robot's trajectory. We assume no knowledge of the environment or AP positions other than robot odometry and RSS measurements at the previous trajectory points. We propose a discrete Kalman filter-based solution considering path loss and shadowing effects. The algorithm is evaluated with unique real-world datasets from indoor, outdoor, and underground environments, showing prediction accuracy of up to 96% and revealing significant performance improvements over conventional approaches, including Gaussian Process Regression. Having such accurate predictions will help the robot plan its trajectory and task operations in a communication-aware manner, ensuring mission success. Further, we extensively analyze the approach regarding the impacts of localization error, source location, mobility, antenna type, and connection failures on prediction accuracy, providing novel perspectives and observations for performance evaluation.
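    The discrete Kalman filter idea in this abstract can be sketched for the classical log-distance path-loss model. This is a hedged reconstruction, not the paper's algorithm: the model RSS(d) = P0 - 10 n log10(d) plus shadowing noise, the state choice [P0, n], and all noise parameters are assumptions made for illustration.

```python
import numpy as np

def kalman_rss(distances, rss_meas, q=1e-4, r=4.0):
    """Estimate log-distance path-loss parameters x = [P0, n] with a discrete
    Kalman filter from (distance, RSS) pairs. The parameters are treated as a
    nearly static state (identity dynamics, small process noise q); r is the
    shadowing (measurement) variance in dB^2."""
    x = np.array([-30.0, 2.0])      # initial guess: P0 in dBm, exponent n
    P = np.eye(2) * 100.0           # large initial covariance (weak prior)
    Q = np.eye(2) * q
    for d, z in zip(distances, rss_meas):
        H = np.array([1.0, -10.0 * np.log10(d)])  # linear measurement model
        P = P + Q                   # predict step (identity dynamics)
        S = H @ P @ H + r           # innovation variance
        K = P @ H / S               # Kalman gain
        x = x + K * (z - H @ x)     # update with measured RSS z
        P = P - np.outer(K, H) @ P
    return x

def predict_rss(x, d):
    """Predict RSS at a future distance d from the filtered parameters."""
    return x[0] - 10.0 * x[1] * np.log10(d)
```

    Once the path-loss parameters have been filtered from past trajectory points, RSS at candidate future positions follows directly from the model, which is the communication-aware planning use case the abstract describes.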
