51 - 100 of 683
  • 51. Barnes, Nick
    et al.
    Loy, Gareth
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Shaw, David
    The regular polygon detector. 2010. In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 43, no. 3, pp. 592-602. Article in journal (Refereed)
    Abstract [en]

    This paper describes a robust regular polygon detector. Given image edges, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a set of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. We derive a complete formulation for efficiently recovering the remaining dimensions using maximum likelihood at the locations of the most likely polygons. Results show robustness to noise, the ability to find and differentiate different shape types, and to perform real-time sign detection for driver assistance.
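    The probabilistic formulation above is not reproduced here. As a much simpler, hedged baseline for the same task, the sketch below finds near-regular convex polygons by contour approximation with OpenCV; the Canny thresholds, the 2% arc-length tolerance and the side-length regularity test are illustrative assumptions, not values from the paper.

    ```python
    import cv2
    import numpy as np

    def find_regular_polygons(image_bgr, max_sides=8, side_tol=0.2):
        """Crude regular-polygon finder: approximate edge contours and keep
        convex candidates whose side lengths are nearly equal. This is an
        illustrative baseline, not the paper's probabilistic detector."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        hits = []
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            k = len(approx)
            if 3 <= k <= max_sides and cv2.isContourConvex(approx):
                pts = approx.reshape(-1, 2).astype(float)
                sides = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
                if sides.std() / sides.mean() < side_tol:  # near-equal sides
                    hits.append(approx)
        return hits  # e.g. find_regular_polygons(cv2.imread("sign.png"))
    ```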

  • 52.
    Basiri, Meysam
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Distributed Control of Triangular Sensor Formations with Angle-Only Constraints. 2009. In: 2009 INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (ISSNIP 2009), NEW YORK: IEEE, 2009, pp. 121-126. Conference paper (Refereed)
    Abstract [en]

    This paper considers the coupled formation control of three mobile agents moving in the plane. Each agent has only local inter-agent bearing knowledge and is required to maintain a specified angular separation relative to its neighbors. The problem considered in this paper differs from similar problems in the literature since no inter-agent distance measurements are employed and the desired formation is specified entirely by the internal triangle angles. Each agent's control law is distributed and based only on its locally measured bearings. A convergence result is established which guarantees global convergence of the formation to the desired formation shape.
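    As a rough illustration of the angle-only idea, the toy simulation below (plain NumPy, not the authors' code) moves each of three agents along the bisector of its two locally measured bearings, at a speed proportional to the error in its interior angle; the gain, step size and this exact form of the law are assumptions made in the spirit of the abstract.

    ```python
    import numpy as np

    def step(p, desired, k=0.5, dt=0.05):
        """One bearing-only control update for three planar agents.
        p: (3,2) positions; desired: (3,) target interior angles (rad)."""
        v = np.zeros_like(p)
        for i in range(3):
            j, l = (i + 1) % 3, (i + 2) % 3
            b1 = (p[j] - p[i]) / np.linalg.norm(p[j] - p[i])  # bearing to j
            b2 = (p[l] - p[i]) / np.linalg.norm(p[l] - p[i])  # bearing to l
            alpha = np.arccos(np.clip(b1 @ b2, -1.0, 1.0))    # measured angle
            bis = (b1 + b2) / np.linalg.norm(b1 + b2)         # interior bisector
            v[i] = k * (desired[i] - alpha) * bis  # approaching widens the angle
        return p + dt * v

    # drive an arbitrary triangle towards an equilateral shape
    p = np.array([[0.0, 0.0], [2.0, 0.1], [0.5, 3.0]])
    for _ in range(2000):
        p = step(p, desired=np.full(3, np.pi / 3))
    ```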

  • 53. Bekiroglu, Y.
    et al.
    Damianou, A.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. University of Liège.
    Stork, Johannes A.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. University of Bristol.
    Probabilistic consolidation of grasp experience. 2016. In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2016, pp. 193-200. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic model for joint representation of several sensory modalities and action parameters in a robotic grasping scenario. Our non-linear probabilistic latent variable model encodes relationships between grasp-related parameters, learns the importance of features, and expresses confidence in estimates. The model learns associations between stable and unstable grasps that it experiences during an exploration phase. We demonstrate the applicability of the model for estimating grasp stability, correcting grasps, identifying objects based on tactile imprints and predicting tactile imprints from object-relative gripper poses. We performed experiments on a real platform with both known and novel objects, i.e., objects the robot trained with, and previously unseen objects. Grasp correction had a 75% success rate on known objects, and 73% on new objects. We compared our model to a traditional regression model that succeeded in correcting grasps in only 38% of cases.

  • 54.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Learning to Assess Grasp Stability from Vision, Touch and Proprioception. 2012. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Grasping and manipulation of objects is an integral part of a robot’s physical interaction with the environment. In order to cope with real-world situations, sensor-based grasping of objects and grasp stability estimation is an important skill. This thesis addresses the problem of predicting the stability of a grasp from the perceptions available to a robot once the fingers close around the object, before attempting to lift it. A regrasping step can be triggered if an unstable grasp is identified. The percepts considered consist of object features (visual), gripper configurations (proprioceptive) and tactile imprints (haptic) when fingers contact the object. This thesis studies tactile-based stability estimation by applying machine learning methods such as Hidden Markov Models. An approach to integrate visual and tactile feedback is also introduced to further improve the predictions of grasp stability, using Kernel Logistic Regression models.

    Like humans, robots are expected to grasp and manipulate objects in a goal-oriented manner. In other words, objects should be grasped so as to afford subsequent actions: if I am to hammer a nail, the hammer should be grasped so as to afford hammering. Most of the work on grasping commonly addresses only the problem of finding a stable grasp, without considering the task/action a robot is supposed to fulfill with an object. This thesis also studies grasp stability assessment in a task-oriented way, based on a generative approach using probabilistic graphical models, Bayesian Networks. We integrate high-level task information introduced by a teacher in a supervised setting with low-level stability requirements acquired through a robot’s exploration. The graphical model is used to encode probabilistic relationships between tasks and sensory data (visual, tactile and proprioceptive). The generative modeling approach enables inference of appropriate grasping configurations, as well as prediction of grasp stability. Overall, results indicate that the idea of exploiting learning approaches for grasp stability assessment is applicable in realistic scenarios.
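    A minimal sketch of the per-class HMM idea from the first paragraph, using hmmlearn's GaussianHMM: one model is trained per outcome and a new tactile sequence is labeled by the higher log-likelihood. The feature dimension, number of hidden states and the synthetic sequences are placeholders, not the thesis's data or settings.

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def make_sequences(n, offset):
        # placeholder tactile sequences: n trials, 30 frames, 8-d features
        return [offset + rng.normal(size=(30, 8)) for _ in range(n)]

    stable, unstable = make_sequences(40, 0.5), make_sequences(40, -0.5)

    def fit(seqs):
        # one HMM per class, trained on the concatenated sequences
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        return m.fit(X, lengths)

    m_stable, m_unstable = fit(stable), fit(unstable)

    def classify(seq):
        # label by the class model with the higher log-likelihood
        return "stable" if m_stable.score(seq) > m_unstable.score(seq) else "unstable"

    print(classify(0.5 + rng.normal(size=(30, 8))))
    ```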

  • 55.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Detry, Renaud
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Grasp Stability from Vision and Touch. 2012. Conference paper (Refereed)
  • 56.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Joint Observation of Object Pose and Tactile Imprints for Online Grasp Stability Assessment. 2011. Conference paper (Refereed)
    Abstract [en]

    This paper studies the viability of concurrent object pose tracking and tactile sensing for assessing grasp stability on a physical robotic platform. We present a kernel-logistic-regression model of pose- and touch-conditional grasp success probability. Models are trained on grasp data which consist of (1) the pose of the gripper relative to the object, (2) a tactile description of the contacts between the object and the fully-closed gripper, and (3) a binary description of grasp feasibility, which indicates whether the grasp can be used to rigidly control the object. The data is collected by executing grasps demonstrated by a human on a robotic platform composed of an industrial arm, a three-finger gripper equipped with tactile sensing arrays, and a vision-based object pose tracking system. The robot is able to track the pose of an object while it is grasping it, and it can acquire grasp tactile imprints via pressure sensor arrays mounted on its gripper’s fingers. We consider models defined on several subspaces of our input data – using tactile perceptions or gripper poses only. Models are optimized and evaluated with f-fold cross-validation. Our preliminary results show that stability assessments based on both tactile and pose data can provide better rates than assessments based on tactile data alone.
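    The kernel-logistic-regression model can be approximated compactly in scikit-learn by fitting a logistic regression on RBF kernel features; this is a generic sketch under assumed feature dimensions and hyperparameters, not the authors' implementation, and its L2 penalty differs slightly from a true kernelized regularizer.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(1)
    # placeholder inputs: 6-d gripper pose + 10-d tactile summary per grasp
    X_train = rng.normal(size=(200, 16))
    y_train = (X_train[:, :3].sum(axis=1) > 0).astype(int)  # synthetic labels
    X_test = rng.normal(size=(20, 16))

    gamma = 0.1  # assumed RBF bandwidth
    K_train = rbf_kernel(X_train, X_train, gamma=gamma)
    clf = LogisticRegression(C=1.0, max_iter=1000).fit(K_train, y_train)

    # pose- and touch-conditional success probability for new grasps
    K_test = rbf_kernel(X_test, X_train, gamma=gamma)
    print(clf.predict_proba(K_test)[:, 1])
    ```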

  • 57.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning Tactile Characterizations Of Object- And Pose-specific Grasps. 2011. Conference paper (Refereed)
    Abstract [en]

    Our aim is to predict the stability of a grasp from the perceptions available to a robot before attempting to lift up and transport an object. The percepts we consider consist of the tactile imprints and the object-gripper configuration read before and until the robot’s manipulator is fully closed around an object. Our robot is equipped with multiple tactile sensing arrays and it is able to track the pose of an object during the application of a grasp. We present a kernel-logistic-regression model of pose- and touch-conditional grasp success probability which we train on grasp data collected by letting the robot experience the effect on tactile and visual signals of grasps suggested by a teacher, and letting the robot verify which grasps can be used to rigidly control the object. We consider models defined on several subspaces of our input data – e.g., using tactile perceptions or pose information only. Our experiment demonstrates that joint tactile and pose-based perceptions carry valuable grasp-related information, as models trained on both hand poses and tactile parameters perform better than the models trained exclusively on one perceptual input.

  • 58.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Huebner, Kai
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integrating Grasp Planning with Online Stability Assessment using Tactile Sensing. 2011. In: IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2011, pp. 4750-4755. Conference paper (Refereed)
    Abstract [en]

    This paper presents an integration of grasp planning and online grasp stability assessment based on tactile data. We show how the uncertainty in grasp execution posterior to grasp planning can be dealt with using tactile sensing and machine learning techniques. The majority of the state-of-the-art grasp planners demonstrate impressive results in simulation. However, these results are mostly based on perfect scene/object knowledge allowing for analytical measures to be employed. It is questionable how well these measures can be used in realistic scenarios where the information about the object and robot hand may be incomplete and/or uncertain. Thus, tactile and force-torque sensory information is necessary for successful online grasp stability assessment. We show how a grasp planner can be integrated with a probabilistic technique for grasp stability assessment in order to improve the hypotheses about suitable grasps on different types of objects. Experimental evaluation with a three-fingered robot hand equipped with tactile array sensors shows the feasibility and strength of the integrated approach.

  • 59.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kyrki, Ville
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Learning grasp stability based on tactile data and HMMs. 2010. Conference paper (Refereed)
    Abstract [en]

    In this paper, the problem of learning grasp stability in robotic object grasping based on tactile measurements is studied. Although grasp stability modeling and estimation has been studied for a long time, there are few robots today capable of demonstrating extensive grasping skills. The main contribution of the work presented here is an investigation of probabilistic modeling for inferring grasp stability based on learning from examples. The main objective is classification of a grasp as stable or unstable before applying further actions on it, e.g. lifting. The problem cannot be solved by visual sensing alone, which is typically used to execute an initial robot hand positioning with respect to the object. The output of the classification system can trigger a regrasping step if an unstable grasp is identified. An off-line learning process is implemented and used for reasoning about grasp stability for a three-fingered robotic hand using Hidden Markov models. To evaluate the proposed method, experiments are performed both in simulation and on a real robot system.

  • 60.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Laaksonen, Janne
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Jorgensen, Jimmy Alison
    The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Denmark.
    Kyrki, Ville
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Assessing Grasp Stability Based on Learning and Haptic Data. 2011. In: IEEE Transactions on Robotics, ISSN 1552-3098, E-ISSN 1941-0468, Vol. 27, no. 3, pp. 616-629. Article in journal (Refereed)
    Abstract [en]

    An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams on grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting avenues for future research.
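    A generic sketch of the kind of classifier comparison described above (AdaBoost against an RBF SVM on haptic feature vectors), using scikit-learn with synthetic placeholder data; the feature layout and labels are assumptions, not the paper's dataset.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    # placeholder haptic snapshots: joint angles + fingertip pressure summaries
    X = rng.normal(size=(300, 20))
    y = (X[:, :4].mean(axis=1) > 0).astype(int)  # synthetic stability label

    for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=100)),
                      ("SVM-RBF", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```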

  • 61.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Laaksonen, Janne
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Jorgensen, Jimmy
    The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Denmark.
    Kyrki, Ville
    Department of Information Technology, Lappeenranta University of Technology, Finland.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning grasp stability based on haptic data. 2010. Conference paper (Refereed)
  • 62.
    Bekiroglu, Yasemin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Song, Dan
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Wang, Lu
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A probabilistic framework for task-oriented grasp stability assessment. 2013. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), IEEE Computer Society, 2013, pp. 3040-3047. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic framework for grasp modeling and stability assessment. The framework facilitates assessment of grasp success in a goal-oriented way, taking into account both geometric constraints for task affordances and stability requirements specific for a task. We integrate high-level task information introduced by a teacher in a supervised setting with low-level stability requirements acquired through a robot's self-exploration. The conditional relations between tasks and multiple sensory streams (vision, proprioception and tactile) are modeled using Bayesian networks. The generative modeling approach both allows prediction of grasp success, and provides insights into dependencies between variables and features relevant for object grasping.
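    A toy version of a task-conditioned grasp network, assuming the pgmpy library; the structure (Task -> Grasp -> {Stable, Tactile}) and every probability below are illustrative stand-ins, not the learned model from the paper.

    ```python
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    model = BayesianNetwork([("Task", "Grasp"),
                             ("Grasp", "Stable"), ("Grasp", "Tactile")])
    model.add_cpds(
        TabularCPD("Task", 2, [[0.5], [0.5]]),               # pour / hand-over
        TabularCPD("Grasp", 2, [[0.8, 0.3], [0.2, 0.7]],     # top / side grasp
                   evidence=["Task"], evidence_card=[2]),
        TabularCPD("Stable", 2, [[0.9, 0.4], [0.1, 0.6]],
                   evidence=["Grasp"], evidence_card=[2]),
        TabularCPD("Tactile", 2, [[0.7, 0.2], [0.3, 0.8]],
                   evidence=["Grasp"], evidence_card=[2]),
    )
    assert model.check_model()

    infer = VariableElimination(model)
    # probability of a stable grasp given the task and a tactile reading
    print(infer.query(["Stable"], evidence={"Task": 0, "Tactile": 1}))
    ```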

  • 63. Benítez, G. E. F.
    et al.
    Parra, V.
    Huerta, M.
    Marzinotto, Alejandro
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Simon Bolivar University, Venezuela.
    Clotet, R.
    González, R.
    Moreno, A.
    Pinto, K.
    Rivas, D.
    Alvizu, R.
    Sanchez, L. E.
    Smartphone application for quantitative measurement of Parkinson tremors. 2015. In: IFMBE Proceedings, Springer, 2015, pp. 785-788. Conference paper (Refereed)
    Abstract [en]

    One of the most common concerns in the care of patients with Parkinson's disease is an objective evaluation of the illness's progress and the efficacy of treatments, in terms of the intensity and frequency of tremors. This symptom is produced by gradual degradation of the pigmented neurons located in the substantia nigra of the brain. In order to detect such movement levels, this paper proposes a Smartphone application for quantitative detection, measurement and analysis of Parkinson's tremor, motivated by the global use of Smartphones and the affordable cost of some Android platform devices. The intended users are people who suffer from Parkinson's disease, who can download the application to their mobile phone in order to quantitatively measure the intensity and duration of their tremors in any place, and to send reports by email or record them for later use. The application enables remote monitoring of the patients.
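    As a minimal sketch of how such an application could quantify tremor from accelerometer samples, the snippet below extracts a dominant frequency and a band-power intensity measure with an FFT; the sampling rate, the 3-7 Hz search band and the intensity definition are assumptions, since the abstract does not specify the signal processing.

    ```python
    import numpy as np

    def tremor_metrics(acc, fs=100.0):
        """Estimate tremor frequency and intensity from a 3-axis accelerometer
        trace `acc` of shape (N, 3) sampled at fs Hz. Parkinsonian rest tremor
        typically lies around 4-6 Hz; the band limits here are assumptions."""
        mag = np.linalg.norm(acc, axis=1)
        mag -= mag.mean()                          # remove gravity/DC offset
        spec = np.abs(np.fft.rfft(mag)) ** 2
        freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
        band = (freqs >= 3.0) & (freqs <= 7.0)     # tremor search band
        peak = freqs[band][np.argmax(spec[band])]
        power = spec[band].sum() / spec[1:].sum()  # fraction of power in band
        return peak, power

    # simulated 5 Hz tremor riding on gravity and sensor noise
    t = np.arange(0, 10, 0.01)
    acc = np.c_[0.3 * np.sin(2 * np.pi * 5 * t),
                np.zeros_like(t), 9.8 * np.ones_like(t)]
    acc += 0.05 * np.random.default_rng(3).normal(size=acc.shape)
    print(tremor_metrics(acc))  # ~ (5.0, large in-band fraction)
    ```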

  • 64.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Interactive Perception: From Scenes to Objects. 2012. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis builds on the observation that robots, like humans, do not have enough experience to handle all situations from the start. Therefore they need tools to cope with new situations, unknown scenes and unknown objects. In particular, this thesis addresses objects. How can a robot realize what objects are if it looks at a scene and has no knowledge about objects? How can it recover from situations where its hypotheses about what it sees are wrong? Even if it has built up experience in the form of learned objects, there will be situations where it will be uncertain or mistaken, and it will therefore still need the ability to correct errors. Much of our daily lives involves interactions with objects, and the same will be true for robots existing among us. Apart from being able to identify individual objects, the robot will therefore need to manipulate them.

    Throughout the thesis, different aspects of how to deal with these questions is addressed. The focus is on the problem of a robot automatically partitioning a scene into its constituting objects. It is assumed that the robot does not know about specific objects, and is therefore considered inexperienced. Instead a method is proposed that generates object hypotheses given visual input, and then enables the robot to recover from erroneous hypotheses. This is done by the robot drawing from a human's experience, as well as by enabling it to interact with the scene itself and monitoring if the observed changes are in line with its current beliefs about the scene's structure.

    Furthermore, the task of object manipulation for unknown objects is explored. This is also used as a motivation for why the scene partitioning problem is essential to solve. Finally, aspects of monitoring the outcome of a manipulation are investigated by observing the evolution of flexible objects in both static and dynamic scenes. All methods developed for this thesis have been tested and evaluated on real robotic platforms. These evaluations show the importance of having a system capable of recovering from errors, and that the robot can take advantage of human experience using just simple commands.

  • 65.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Modeling of Natural Human–Robot Encounters. 2008. Independent thesis Advanced level (degree of Master), 20 credits / 30 HE credits. Student thesis (Degree project)
  • 66.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Roberson-Johnson, Matthew
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kootstra, Gert
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Active Scene Analysis. 2010. Conference paper (Refereed)
  • 67.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Generating Object Hypotheses in Natural Scenes through Human-Robot Interaction. 2011. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS / [ed] Amato, Nancy M., San Francisco: IEEE, 2011, pp. 827-833. Conference paper (Refereed)
    Abstract [en]

    We propose a method for interactive modeling of objects and object relations based on real-time segmentation of video sequences. In interaction with a human, the robot can perform multi-object segmentation through principled modeling of physical constraints. The key contribution is an efficient multi-labeling framework that allows object modeling and disambiguation in natural scenes. Object modeling and labeling is done in real time, and hypotheses and constraints denoting relations between objects can be added incrementally. Through instructions such as key presses or spoken words, a scene can be segmented into regions corresponding to multiple physical objects. The approach solves some of the difficult problems related to disambiguation of objects merged due to their direct physical contact. Results show that even a limited set of simple interactions with a human operator can substantially improve segmentation results.

  • 68.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bohg, Jeannette
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Integration of Visual Cues for Robotic Grasping. 2009. In: COMPUTER VISION SYSTEMS, PROCEEDINGS / [ed] Fritz M, Schiele B, Piater JH, Berlin: Springer-Verlag Berlin, 2009, Vol. 5815, pp. 245-254. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a method that generates grasping actions for novel objects based on visual input from a stereo camera. We integrate two methods that are advantageous in predicting either how to grasp an object or where to apply a grasp. The first one reconstructs a wire-frame object model through curve matching. Elementary grasping actions can be associated with parts of this model. The second method predicts grasping points in a 2D contour image of an object. By integrating the information from the two approaches, we can generate a sparse set of full grasp configurations that are of good quality. We demonstrate our approach integrated in a vision system for complex-shaped objects as well as in cluttered scenes.

  • 69.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Björkman, Mårten
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Scene Understanding through Autonomous Interactive Perception. 2011. In: Computer Vision Systems: Lecture Notes in Computer Science / [ed] Crowley James L., Draper Bruce, Thonnat Monique, Springer Verlag, 2011, pp. 153-162. Conference paper (Refereed)
    Abstract [en]

    We propose a framework for detecting, extracting and modeling objects in natural scenes from multi-modal data. Our framework is iterative, exploiting different hypotheses in a complementary manner. We employ the framework in realistic scenarios, based on visual appearance and depth information. Using a robotic manipulator that interacts with the scene, object hypotheses generated using appearance information are confirmed through pushing. The framework is iterative: each generated hypothesis feeds into the subsequent one, continuously refining the predictions about the scene. We show results that demonstrate the synergic effect of applying multiple hypotheses for real-world scene understanding. The method is efficient and performs in real-time.

  • 70.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Yamakawa, Yuji
    Senoo, Taku
    Ishikawa, Masatoshi
    On-line learning of temporal state models for flexible objects. 2012. In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids), IEEE, 2012, pp. 712-718. Conference paper (Refereed)
    Abstract [en]

    State estimation and control are intimately related processes in robot handling of flexible and articulated objects. While for rigid objects we can generate a CAD model beforehand and state estimation boils down to estimating the pose or velocity of the object, in the case of flexible and articulated objects, such as a cloth, the representation of the object's state is heavily dependent on the task and execution. For example, when folding a cloth, the representation will mainly depend on the way the folding is executed.

  • 71.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Kanda, Takayuki
    Miyashita, Takahiro
    Ishiguro, Hiroshi
    Hagita, Norihiro
    Modeling of Natural Human-Robot Encounters. 2008. In: 2008 IEEE/RSJ International Conference On Robots And Intelligent Systems, Vols 1-3, Conference Proceedings / [ed] Chatila, R; Kelly, A; Merlet, JP, 2008, pp. 2623-2629. Conference paper (Refereed)
    Abstract [en]

    For a person to feel comfortable when approaching a robot, it is necessary for the robot to behave in an expected way. In a preliminary experiment, we observed people's behavior around a robot that was not aware of them. Based on those observations, people were classified into four groups depending on their interest in the robot. People were tracked with a laser-range-finder-based system, and their positions, directions and velocities were estimated. A second classification based on that information was made, and the relation between the two classifications was mapped. Different actions were created for the robot so it could react naturally to different human behaviors. In this paper we evaluate three different robot behaviors with respect to how natural they appear: one that actively tries to engage people, one that passively indicates that people have been noticed, and a third that makes random gestures. During an experiment, test subjects were instructed to act according to the groups from the interest-based classification, and the robot's performance with regard to naturalness was evaluated. Both first- and third-person evaluations made clear that the active and passive behaviors were considered equally natural, while a robot randomly making gestures was considered much less natural.

  • 72.
    Bergström, Niklas
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Yamakawa, Yuji
    Tokyo University.
    Senoo, Taku
    Tokyo University.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ishikawa, Masatoshi
    Tokyo University.
    State Recognition of Deformable Objects Using Shape Context. 2011. In: The 29th Annual Conference of the Robotics Society of Japan, 2011. Conference paper (Other academic)
  • 73.
    Bertolli, Federico
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Christensen, Henrik I.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    SLAM using visual scan-matching with distinguishable 3D points. 2006. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-12, NEW YORK: IEEE, 2006, pp. 4042-4047. Conference paper (Refereed)
    Abstract [en]

    Scan matching based on data from a laser scanner is frequently used for mapping and localization. This paper presents a scan-matching approach based instead on visual information from a stereo system. The Scale Invariant Feature Transform (SIFT) is used together with epipolar constraints to get high matching precision between the stereo images. Calculating the 3D position of the corresponding points in the world results in a visual scan where each point has a descriptor attached to it. These descriptors can be used when matching scans acquired from different positions. Just like in laser-based scan matching, a map can be defined as a set of reference scans and their corresponding acquisition points. In essence this reduces each visual scan, which can consist of hundreds of points, to a single entity for which only the corresponding robot pose has to be estimated in the map. This reduces the overall complexity of the map. The SIFT descriptor attached to each of the points in the reference scans allows for robust matching and detection of loop-closing situations. The paper presents real-world experimental results from an indoor office environment.
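    A compact OpenCV sketch of the core matching step, SIFT correspondences between a stereo pair filtered by an epipolar constraint (fundamental-matrix RANSAC); the ratio-test threshold and RANSAC parameters are assumptions, and triangulating the surviving matches into 3D scan points is omitted.

    ```python
    import cv2
    import numpy as np

    def visual_scan_matches(img_left, img_right):
        """SIFT matches between a stereo pair, pruned with an epipolar
        (fundamental-matrix) RANSAC test. Descriptors are returned so
        points can be re-matched against other scans, as in the paper."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img_left, None)
        k2, d2 = sift.detectAndCompute(img_right, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.7 * n.distance]       # Lowe ratio test
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        F, inl = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
        keep = inl.ravel().astype(bool)
        desc = d1[[good[i].queryIdx for i in np.flatnonzero(keep)]]
        return p1[keep], p2[keep], desc
    ```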

  • 74.
    Bishop, Adrian
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Stochastically convergent localization of objects and actively controllable sensor-object pose. 2009. In: Proceedings of 10th European Control Conference (ECC 2009), 2009. Conference paper (Refereed)
    Abstract [en]

    The problem of object (network) localization using a mobile sensor is examined in this paper. Specifically, we consider a set of stationary objects located in the plane and a single mobile nonholonomic sensor tasked with estimating their relative position from range and bearing measurements. We derive a coordinate transform and a relative sensor-object motion model that leads to a novel problem formulation where the measurements are linear in the object positions. We then apply an extended Kalman filter-like algorithm to the estimation problem. Using stochastic calculus we provide an analysis of the convergence properties of the filter. We then illustrate that it is possible to steer the mobile sensor to achieve a relative sensor-object pose using a continuous control law. This last fact is significant since we circumvent Brockett's theorem and control the relative sensor-source pose using a simple controller.
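    For reference, one EKF measurement update for a stationary landmark observed with range and bearing, in plain NumPy; this is the standard filter family the paper builds on, not its transformed formulation in which the measurements become linear.

    ```python
    import numpy as np

    def ekf_update(x, P, z, sensor_xy, sensor_heading, R):
        """EKF update for a landmark state x=(px,py) with covariance P,
        given z=(range, bearing) from a known sensor pose and noise cov R."""
        dx, dy = x[0] - sensor_xy[0], x[1] - sensor_xy[1]
        r = np.hypot(dx, dy)
        h = np.array([r, np.arctan2(dy, dx) - sensor_heading])  # predicted z
        H = np.array([[dx / r, dy / r],
                      [-dy / r**2, dx / r**2]])                 # Jacobian dh/dx
        y = z - h
        y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi             # wrap bearing
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(2) - K @ H) @ P
    ```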

  • 75.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    A tutorial on constraints for positioning on the plane. 2010. In: 2010 IEEE 21st International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), IEEE, 2010, pp. 1689-1694. Conference paper (Refereed)
    Abstract [en]

    This paper introduces and surveys a number of determinant constraints on the measurement errors in a variety of positioning scenarios. An algorithm for exploiting the constraints for accurate positioning is introduced and the relationship between the proposed algorithm and a so-called traditional maximum likelihood algorithm is examined.

  • 76.
    Bishop, Adrian N.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Australian National University, Canberra, Australia.
    Gaussian-sum-based probability hypothesis density filtering with delayed and out-of-sequence measurements. 2010. In: 18th Mediterranean Conference on Control and Automation, MED'10 - Conference Proceedings, 2010, pp. 1423-1428. Conference paper (Refereed)
    Abstract [en]

    The problem of multiple-sensor-based multiple-object tracking is studied for adverse environments involving clutter (false positives), missing measurements (false negatives) and random target births and deaths (a priori unknown target numbers). Various (potentially spatially separated) sensors are assumed to generate signals which are sent to the estimator via parallel channels which incur independent delays. These signals may arrive out of order, be corrupted or even lost. In addition, there may be periods when the estimator receives no information. A closed-form, recursive solution to the considered problem is detailed that generalizes the Gaussian-mixture probability hypothesis density (GM-PHD) filter previously detailed in the literature. This generalization allows the GM-PHD framework to be applied in more realistic network scenarios involving not only transmission delays but rather more general irregular measurement sequences where particular measurements from some sensors can arrive out of order with respect to the generating sensor and also with respect to the signals generated by the other sensors in the network.

  • 77.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. Australian National University (ANU), Australia.
    Basiri, M.
    Bearing-only triangular formation control on the plane and the sphere. 2010. In: 18th Mediterranean Conference on Control and Automation, MED'10 - Conference Proceedings, 2010, pp. 790-795. Conference paper (Refereed)
    Abstract [en]

    We consider the problem of distributed bearing-only formation control. Each agent measures the inter-agent bearings in a local coordinate system and is tasked with maintaining a specified angular separation relative to its neighbors. The problem we consider differs from other problems in the literature since no inter-agent distance measurements are employed. Each agent's control law is distributed and based only on its locally measured bearings. A strong convergence result is established which guarantees global convergence of the formation to the desired shape while at the same time ensuring that collisions are avoided naturally. We show that the control scheme is robust to agent motion failures and the presence of additional group motion inputs. Finally, we extend our system to the case where the agents' motion is restricted to a sphere.

  • 78.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimal Range-Difference-Based Localization Considering Geometrical Constraints. 2008. In: IEEE Journal of Oceanic Engineering, ISSN 0364-9059, E-ISSN 1558-1691, Vol. 33, no. 3, pp. 289-301. Article in journal (Refereed)
    Abstract [en]

    This paper proposes a new type of algorithm aimed at finding the traditional maximum-likelihood (TML) estimate of the position of a target given time-difference-of-arrival (TDOA) information contaminated by noise. The novelty lies in the fact that a performance index, akin to but not identical with that in maximum likelihood (ML), is minimized subject to a number of constraints, which flow from geometric constraints inherent in the underlying problem. The minimization is in a higher-dimensional space than for TML, and has the advantage that the algorithm can be very straightforwardly and systematically initialized. Simulation evidence shows that failure to converge to a solution of the localization problem near the true value is less likely to occur with this new algorithm than with TML. This makes it attractive to use in adverse geometric situations.
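    For contrast with the constrained formulation above, a plain nonlinear-least-squares TDOA solver takes only a few lines with SciPy; the sensor layout, noise level and initial guess below are arbitrary assumptions, and this baseline lacks the paper's systematic initialization.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def tdoa_residuals(p, sensors, rd, ref=0):
        """Range-difference residuals w.r.t. a reference sensor:
        rd[i] should equal ||p - s_i|| - ||p - s_ref|| for i != ref."""
        d = np.linalg.norm(sensors - p, axis=1)
        idx = [i for i in range(len(sensors)) if i != ref]
        return d[idx] - d[ref] - rd

    sensors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
    true_p = np.array([30.0, 60.0])
    d = np.linalg.norm(sensors - true_p, axis=1)
    rd = d[1:] - d[0] + 0.5 * np.random.default_rng(4).normal(size=3)

    est = least_squares(tdoa_residuals, x0=np.array([50.0, 50.0]),
                        args=(sensors, rd)).x
    print(est)  # close to true_p
    ```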

  • 79.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Fidan, Baris
    Anderson, Brian D. O.
    Dogancay, Kutluyil
    Pathirana, Pubudu N.
    Optimality analysis of sensor-target localization geometries. 2010. In: Automatica, ISSN 0005-1098, E-ISSN 1873-2836, Vol. 46, no. 3, pp. 479-492. Article in journal (Refereed)
    Abstract [en]

    The problem of target localization involves estimating the position of a target from multiple noisy sensor measurements. It is well known that the relative sensor-target geometry can significantly affect the performance of any particular localization algorithm. The localization performance can be explicitly characterized by certain measures, for example, by the Cramer-Rao lower bound (which is equal to the inverse Fisher information matrix) on the estimator variance. In addition, the Cramer-Rao lower bound is commonly used to generate a so-called uncertainty ellipse which characterizes the spatial variance distribution of an efficient estimate, i.e. an estimate which achieves the lower bound. The aim of this work is to identify those relative sensor-target geometries which result in a measure of the uncertainty ellipse being minimized. Deeming such sensor-target geometries to be optimal with respect to the chosen measure, the optimal sensor-target geometries for range-only, time-of-arrival-based and bearing-only localization are identified and studied in this work. The optimal geometries for an arbitrary number of sensors are identified and it is shown that an optimal sensor-target configuration is not, in general, unique. The importance of understanding the influence of the sensor-target geometry on the potential localization performance is highlighted via formal analytical results and a number of illustrative examples.
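    The geometry effect is easy to demonstrate numerically. For range-only localization with i.i.d. Gaussian noise, the Fisher information matrix is the standard expression FIM = (1/sigma^2) * sum_i u_i u_i^T, with u_i the unit sensor-to-target direction, so clustered sensors give a near-singular FIM while angularly spread sensors do not. A small NumPy check (the sensor placements are arbitrary examples):

    ```python
    import numpy as np

    def range_only_fim(sensors, target, sigma=1.0):
        """Fisher information for range-only localization of `target`
        from sensor positions `sensors` (K,2), i.i.d. Gaussian noise."""
        U = target - sensors
        U /= np.linalg.norm(U, axis=1, keepdims=True)  # unit directions
        return (U.T @ U) / sigma**2

    target = np.array([0.0, 0.0])
    clustered = np.array([[10, 0], [10, 1], [10, -1]], float)
    spread = 10 * np.array([[1.0, 0.0],
                            [np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)],
                            [np.cos(4 * np.pi / 3), np.sin(4 * np.pi / 3)]])
    for name, s in [("clustered", clustered), ("spread", spread)]:
        print(name, np.linalg.det(range_only_fim(s, target)))  # spread wins
    ```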

  • 80.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    A Stochastically Stable Solution to the Problem of Robocentric Mapping. 2009. In: ICRA: 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, 2009, pp. 1540-1547. Conference paper (Refereed)
    Abstract [en]

    This paper provides a novel solution for robocentric mapping using an autonomous mobile robot. The robot dynamic model is the standard unicycle model, and the robot is assumed to measure both the range and the relative bearing to the landmarks. The algorithm introduced in this paper relies on a coordinate transformation and an extended-Kalman-filter-like algorithm. The coordinate transformation considered in this paper has not previously been considered for robocentric mapping applications. Moreover, we provide a rigorous stochastic stability analysis of the filter employed and examine the conditions under which the mean-square estimation error converges to a steady-state value.

  • 81.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS. KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    An Optimality Analysis of Sensor-Target Geometries for Signal Strength Based Localization. 2009. In: 2009 INTERNATIONAL CONFERENCE ON INTELLIGENT SENSORS, SENSOR NETWORKS AND INFORMATION PROCESSING (ISSNIP 2009), NEW YORK: IEEE, 2009, pp. 127-132. Conference paper (Refereed)
    Abstract [en]

    In this paper we characterize the bounds on localization accuracy in signal strength based localization. In particular, we provide a novel and rigorous analysis of the relative receiver-transmitter geometry and the effect of this geometry on the potential localization performance. We show that uniformly spacing sensors around the target is not optimal if the sensor-target ranges are not identical and is not necessary in any case. Indeed, we show that in general the optimal sensor-target geometry for signal strength based localization is not unique.

  • 82.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Jensfelt, Patric
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Stochastically convergent localization of objects by mobile sensors and actively controllable relative sensor-object. 2015. In: 2009 European Control Conference, ECC 2009, 2015, pp. 2384-2389. Conference paper (Refereed)
    Abstract [en]

    The problem of object (network) localization using a mobile sensor is examined in this paper. Specifically, we consider a set of stationary objects located in the plane and a single mobile nonholonomic sensor tasked with estimating their relative position from range and bearing measurements. We derive a coordinate transform and a relative sensor-object motion model that leads to a novel problem formulation where the measurements are linear in the object positions. We then apply an extended Kalman filter-like algorithm to the estimation problem. Using stochastic calculus we provide an analysis of the convergence properties of the filter. We then illustrate that it is possible to steer the mobile sensor to achieve a relative sensor-object pose using a continuous control law. This last fact is significant since we circumvent Brockett's theorem and control the relative sensor-source pose using a simple controller.

  • 83.
    Bishop, Adrian N.
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Savkin, Andrey V.
    Pathirana, Pubudu N.
    Vision-Based Target Tracking and Surveillance With Robust Set-Valued State Estimation. 2010. In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 17, no. 3, pp. 289-292. Article in journal (Refereed)
    Abstract [en]

    Tracking a target from a video stream (or a sequence of image frames) involves nonlinear measurements in Cartesian coordinates. However, the target dynamics, modeled in Cartesian coordinates, result in a linear system. We present a robust linear filter based on an analytical nonlinear to linear measurement conversion algorithm. Using ideas from robust control theory, a rigorous theoretical analysis is given which guarantees that the state estimation error for the filter is bounded, i.e., a measure against filter divergence is obtained. In fact, an ellipsoidal set-valued estimate is obtained which is guaranteed to contain the true target location with an arbitrarily high probability. The algorithm is particularly suited to visual surveillance and tracking applications involving targets moving on a plane.

  • 84.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP.
    Learning to Disambiguate Object Hypotheses through Self-Exploration. 2014. In: 14th IEEE-RAS International Conference on Humanoid Robots, IEEE Computer Society, 2014. Conference paper (Refereed)
    Abstract [en]

    We present a probabilistic learning framework to form object hypotheses through interaction with the environment. A robot learns how to manipulate objects through pushing actions in order to identify how many objects are present in the scene. We use a segmentation system that initializes object hypotheses based on RGB-D data and adopt a reinforcement learning approach to learn the relations between pushing actions and their effects on object segmentations. Trained models are used to generate actions that result in a minimum number of pushes on object groups, until either object separation events are observed or it is ensured that there is only one object acted on. We provide baseline experiments showing that a policy based on reinforcement learning for action selection results in fewer pushes than selecting pushing actions randomly.

  • 85.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bekiroglu, Yasemin
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Högman, Virgile
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Enhancing Visual Perception of Shape through Tactile Glances. 2013. In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, IEEE conference proceedings, 2013, pp. 3180-3186. Conference paper (Refereed)
    Abstract [en]

    Object shape information is an important parameter in robot grasping tasks. However, it may be difficult to obtain accurate models of novel objects due to incomplete and noisy sensory measurements. In addition, object shape may change due to frequent interaction with the object (cereal boxes, etc.). In this paper, we present a probabilistic approach for learning object models based on visual and tactile perception through physical interaction with an object. Our robot explores unknown objects by touching them strategically at parts that are uncertain in terms of shape. The robot starts by using only visual features to form an initial hypothesis about the object shape, then gradually adds tactile measurements to refine the object model. Our experiments involve ten objects of varying shapes and sizes in a real setup. The results show that our method is capable of choosing a small number of touches to construct object models similar to real object shapes and to determine similarities among acquired models.

  • 86.
    Björkman, Mårten
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Bergström, Niklas
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detecting, segmenting and tracking unknown objects using multi-label MRF inference. 2014. In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 118, pp. 111-127. Article in journal (Refereed)
    Abstract [en]

    This article presents a unified framework for detecting, segmenting and tracking unknown objects in everyday scenes, allowing for inspection of object hypotheses during interaction over time. A heterogeneous scene representation is proposed, with background regions modeled as combinations of planar surfaces and uniform clutter, and foreground objects as 3D ellipsoids. Recent energy minimization methods based on loopy belief propagation, tree-reweighted message passing and graph cuts are studied for the purpose of multi-object segmentation and benchmarked in terms of segmentation quality, as well as computational speed and how easily the methods can be adapted for parallel processing. One conclusion is that the choice of energy minimization method is less important than the way scenes are modeled. Proximities are more valuable for segmentation than similarity in colors, while the benefit of 3D information is limited. It is also shown through practical experiments that, with implementations on GPUs, multi-object segmentation and tracking using state-of-the-art MRF inference methods is feasible, despite the computational costs typically associated with such methods.

  • 87.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Dahlgren, F.
    Stenström, P.
    Using Hints to Reduce the Read Miss Penalty for Flat COMA Protocols, 1995. In: Proc. of the 28th Hawaii International Conference on System Sciences, 1995, pp. 242-251. Conference paper (Refereed)
  • 88.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    A Real-Time System for Epipolar Geometry and Ego-Motion Estimation, 2000. In: Proc. IEEE Computer Vision and Pattern Recognition (CVPR’00), 2000, pp. 506-513. Conference paper (Refereed)
  • 89.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Attending, Foveating and Recognizing Objects in Real World Scenes, 2004. In: British Machine Vision Conference (BMVC), London, UK / [ed] Andreas Hoppe, Sarah Barman, Tim Ellis, BMVA Press, 2004, pp. 227-236. Conference paper (Refereed)
    Abstract [en]

    Recognition in cluttered real world scenes is a challenging problem. To find a particular object of interest within a reasonable time, a wide field of view is preferable. However, as we show with practical experiments, robust recognition is easier if the object is foveated and subtends a considerable part of the visual field. In this paper a binocular system able to overcome these two conflicting requirements is presented. The system consists of two sets of cameras, a wide-field pair and a foveal one. From disparities a number of object hypotheses are generated. An attentional process based on hue and 3D size guides the foveal cameras towards the most salient regions. With the object foveated and segmented in 3D, recognition is performed using scale invariant features. The system is fully automated and runs at real-time speed.
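
    A minimal sketch of how such top-down attention could weigh the two cues (hypothetical; the weights, hue metric and data layout below are invented for illustration, not taken from the paper):

        import math

        def most_salient(hypotheses, target_hue_deg, target_size_m,
                         w_hue=0.5, w_size=0.5):
            # Each hypothesis: {'hue': degrees in [0, 360), 'size': metres}.
            def score(h):
                dh = abs(h['hue'] - target_hue_deg)
                dh = min(dh, 360 - dh) / 180.0          # wrap-around hue distance
                ds = abs(math.log(h['size'] / target_size_m))
                return w_hue * (1.0 - dh) + w_size * math.exp(-ds)
            return max(hypotheses, key=score)           # region to foveate next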

  • 90.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    Real-Time Epipolar Geometry Estimation and Disparity, 1999. In: Proc. International Conference on Computer Vision (ICCV’99), 1999, pp. 234-241. Conference paper (Refereed)
  • 91.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    KTH, Tidigare Institutioner, Numerisk analys och datalogi, NADA.
    Real-time epipolar geometry estimation of binocular stereo heads, 2002. In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 24, no. 3, pp. 425-432. Article in journal (Refereed)
    Abstract [en]

    Stereo is an important cue for visually guided robots. While moving around in the world, such a robot can use dynamic fixation to overcome limitations in image resolution and field of view. In this paper, a binocular stereo system capable of dynamic fixation is presented. The external calibration is performed continuously, taking temporal consistency into consideration, which greatly simplifies the process. The essential matrix, which is estimated in real-time, is used to describe the epipolar geometry. It is shown how outliers can be identified and excluded from the calculations. An iterative approach based on a differential model of the optical flow, commonly used in structure from motion, is also presented and tested against the essential matrix approach. The iterative method is shown to be superior in terms of both computational speed and robustness when the vergence angles are less than about 15 degrees. For larger angles, the differential model is insufficient and the essential matrix is preferably used instead.
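
    For reference, the constraint underlying such an estimator: for relative rotation R and translation t between the two cameras, the essential matrix E = [t]_x R relates normalized image points by x2^T E x1 = 0. A minimal sketch (not the paper's real-time estimator) of building E and scoring a correspondence against it:

        import numpy as np

        def essential_matrix(R, t):
            # E = [t]_x R, with [t]_x the skew-symmetric cross-product matrix.
            tx = np.array([[0.0, -t[2], t[1]],
                           [t[2], 0.0, -t[0]],
                           [-t[1], t[0], 0.0]])
            return tx @ R

        def epipolar_residual(E, x1, x2):
            # x1, x2: homogeneous normalized image points; ~0 for true
            # matches, large for outliers, which can then be excluded.
            return float(x2 @ E @ x1)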

  • 92.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Vision in the real world: Finding, attending and recognizing objects, 2006. In: International Journal of Imaging Systems and Technology (Print), ISSN 0899-9457, E-ISSN 1098-1098, Vol. 16, no. 5, pp. 189-208. Article in journal (Refereed)
    Abstract [en]

    In this paper we discuss the notion of a seeing system that uses vision to interact with its environment. The requirements on such a system depend on the tasks it is involved in and should be evaluated with these in mind. Here we consider the task of finding and recognizing objects in the real world. After a discussion of the needed functionalities and design issues, we present an integrated real-time vision system capable of finding, attending and recognizing objects in real settings. The system is based on a dual set of cameras, a wide-field set for attention and a foveal one for recognition. The continuously running attentional process uses top-down object characteristics in terms of hue and 3D size. Recognition is performed with objects of interest foveated and segmented from the background. We describe the system structure as well as the different components in detail and present experimental evaluations of its overall performance.

  • 93.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Eklundh, Jan-Olof
    Visual Cues for a Fixating Active Agent, 2001. In: Proc. Robot Vision, 2001. Conference paper (Refereed)
  • 94.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active 3D scene segmentation and detection of unknown objects, 2010. In: IEEE International Conference on Robotics and Automation (ICRA), Anchorage, USA / [ed] Antonio Bicchi, IEEE Robotics and Automation Society, 2010, pp. 3114-3120. Conference paper (Refereed)
    Abstract [en]

    We present an active vision system for segmentation of visual scenes based on integration of several cues. The system serves as a visual front end for generation of object hypotheses for new, previously unseen objects in natural scenes. The system combines a set of foveal and peripheral cameras where, through a stereo based fixation process, object hypotheses are generated. In addition to considering the segmentation process in 3D, the main contribution of the paper is integration of different cues in a temporal framework and improvement of initial hypotheses over time.

  • 95.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Active 3D Segmentation through Fixation of Previously Unseen Objects, 2010. In: British Machine Vision Conference (BMVC), Aberystwyth, UK / [ed] Frédéric Labrosse, Reyer Zwiggelaar, Yonghuai Liu, and Bernie Tiddeman, BMVA Press, 2010, pp. 119.1-119.11. Conference paper (Refereed)
    Abstract [en]

    We present an approach for active segmentation based on integration of several cues. It serves as a framework for generation of object hypotheses of previously unseen objects in natural scenes. Using an approximate Expectation-Maximisation method, the appearance, 3D shape and size of objects are modelled in an iterative manner, with fixation used for unsupervised initialisation. To better cope with situations where an object is hard to segregate from the surface it is placed on, a flat surface model is added to the typical two hypotheses used in classical figure-ground segmentation. The framework is further extended to include modelling over time, in order to exploit temporal consistency for better segmentation and to facilitate tracking.
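
    A minimal sketch of the E-step in such a three-hypothesis scheme (object, flat surface, background clutter), assuming each hypothesis supplies a per-point likelihood function; an illustrative toy, not the paper's model:

        import numpy as np

        def e_step(points, likelihood_fns, priors):
            # likelihood_fns: one callable per hypothesis, each mapping
            # (N, 3) points to (N,) likelihoods; priors: mixing weights.
            lik = np.stack([p * f(points) for f, p in zip(likelihood_fns, priors)])
            return lik / lik.sum(axis=0, keepdims=True)  # soft assignments

        # The M-step would refit each hypothesis (plane parameters,
        # ellipsoid shape, clutter level) from these responsibilities,
        # then the two steps iterate until the segmentation stabilises.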

  • 96.
    Björkman, Mårten
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Combination of foveal and peripheral vision for object recognition and pose estimation, 2004. In: 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004, pp. 5135-5140. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a real-time vision system that integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings, for tasks such as object recognition, tracking and pose estimation. The system consists of two sets of binocular cameras: a peripheral set for disparity-based attention and a foveal one for higher-level processes. Thus the conflicting requirements of a wide field of view and high resolution can be overcome. One important property of the system is that the step from task specification through object recognition to pose estimation is completely automatic, combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, and changing lighting and background conditions.

  • 97.
    Boberg, Anders
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Bishop, Adrian N.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Jensfelt, Patric
    KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS. KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Robocentric Mapping and Localization in Modified Spherical Coordinates with Bearing Measurements, 2009. In: 2009 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP 2009), New York: IEEE, 2009, pp. 139-144. Conference paper (Refereed)
    Abstract [en]

    In this paper, a new approach to robotic mapping is presented that uses modified spherical coordinates in a robot-centered reference frame and a bearing-only measurement model. The algorithm provided in this paper permits robust delay-free state initialization and is computationally more efficient than the current standard in bearing-only (delay-free initialized) simultaneous localization and mapping (SLAM). Importantly, we provide a detailed nonlinear observability analysis which shows the system is generally observable. We also analyze the error convergence of the filter using stochastic stability analysis. We provide an explicit bound on the asymptotic mean state estimation error. A comparison of the performance of this filter is also made against a standard world-centric SLAM algorithm in a simulated environment.
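
    To illustrate the flavour of such a parameterization (a sketch under assumed conventions; the paper's exact coordinates and filter equations are not reproduced here), a landmark can be stored robocentrically as azimuth, elevation and inverse depth, which keeps bearing-only, delay-free initialization well-behaved even for distant features:

        import numpy as np

        def msc_to_cartesian(azimuth, elevation, inv_depth):
            # Modified-spherical landmark -> robot-frame Cartesian point.
            # Inverse depth lets a bearing-only landmark be initialized
            # immediately, with large distance uncertainty handled gracefully.
            d = 1.0 / inv_depth
            return d * np.array([np.cos(elevation) * np.cos(azimuth),
                                 np.cos(elevation) * np.sin(azimuth),
                                 np.sin(elevation)])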

  • 98. Bodenhagen, L.
    et al.
    Detry, Renaud
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP.
    Piater, J.
    Krüger, N.
    What a successful grasp tells about the success chances of grasps in its vicinity, 2011. Conference paper (Refereed)
    Abstract [en]

    Infants gradually improve their grasping competences, both in terms of motor abilities and in terms of internal grasp representations of shape. Grasp densities [3] provide a statistical model of such an internal learning process. In the grasp-density concept, kernel density estimation is used with a six-dimensional kernel representing grasps at a given position and orientation. So far, an isotropic kernel has been used, whose exact shape has only been weakly justified. In this paper, we instead use an anisotropic kernel that is statistically grounded in measured conditional probabilities of grasp success in the neighborhood of a successful grasp. The anisotropy was determined using a simulation environment that allowed large-scale experiments, and the anisotropic kernel was fitted to the conditional probabilities obtained from these experiments. We then show that convergence is an important problem for the grasp-density approach and propose a measure for the convergence of the densities. In this context, we show that using the statistically grounded anisotropic kernels leads to significantly faster convergence of grasp densities.
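
    A minimal sketch of an anisotropic kernel density estimate over grasp positions (orientation omitted for brevity; the paper's kernel is six-dimensional, and the covariance below merely stands in for the statistically measured anisotropy):

        import numpy as np

        def grasp_density(query, grasp_positions, cov):
            # Anisotropic Gaussian KDE: average of Gaussians centred on the
            # observed successful grasps, with shared covariance `cov`.
            P = np.linalg.inv(cov)
            norm = 1.0 / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(cov))
            d = grasp_positions - query
            return norm * np.mean(np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, P, d)))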

  • 99.
    Bohg, Jeannette
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Multi-Modal Scene Understanding for Robotic Grasping, 2011. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Current robotics research is largely driven by the vision of creating an intelligent being that can perform dangerous, difficult or unpopular tasks. These can for example be exploring the surface of planet Mars or the bottom of the ocean, maintaining a furnace or assembling a car. They can also be more mundane, such as cleaning an apartment or fetching groceries. This vision has been pursued since the 1960s, when the first robots were built. Some of the tasks mentioned above, especially those in industrial manufacturing, are already frequently performed by robots. Others are still completely out of reach. In particular, household robots are far from being deployable as general-purpose devices. Although advancements have been made in this research area, robots are not yet able to perform household chores robustly in unstructured and open-ended environments given unexpected events and uncertainty in perception and execution. In this thesis, we analyze which perceptual and motor capabilities are necessary for the robot to perform common tasks in a household scenario. In that context, an essential capability is to understand the scene that the robot has to interact with. This involves separating objects from the background but also from each other. Once this is achieved, many other tasks become much easier. The configuration of objects can be determined; they can be identified or categorized; their pose can be estimated; free and occupied space in the environment can be outlined. This kind of scene model can then inform grasp planning algorithms to finally pick up objects. However, scene understanding is not a trivial problem and even state-of-the-art methods may fail. Given an incomplete, noisy and potentially erroneously segmented scene model, the questions remain how suitable grasps can be planned and how they can be executed robustly. In this thesis, we propose to equip the robot with a set of prediction mechanisms that allow it to hypothesize about parts of the scene it has not yet observed. Additionally, the robot can quantify how uncertain it is about this prediction, allowing it to plan actions for exploring the scene at specifically uncertain places. We consider multiple modalities including monocular and stereo vision, haptic sensing and information obtained through a human-robot dialog system. We also study several scene representations of different complexity and their applicability to a grasping scenario. Given an improved scene model from this multi-modal exploration, grasps can be inferred for each object hypothesis. Depending on whether the objects are known, familiar or unknown, different methodologies for grasp inference apply. In this thesis, we propose novel methods for each of these cases. Furthermore, we demonstrate the execution of these grasps both in a closed- and open-loop manner, showing the effectiveness of the proposed methods in real-world scenarios.

  • 100.
    Bohg, Jeannette
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Barck-Holst, Carl
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Hübner, Kai
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Ralph, Maria
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Rasolzadeh, Babak
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Song, Dan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Kragic, Danica
    KTH, Skolan för datavetenskap och kommunikation (CSC), Datorseende och robotik, CVAP. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Autonoma System, CAS.
    Towards Grasp-Oriented Visual Perception for Humanoid Robots, 2009. In: International Journal of Humanoid Robotics, ISSN 0219-8436, Vol. 6, no. 3, pp. 387-434. Article in journal (Refereed)
    Abstract [en]

    A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. The vision system is targeted, first, at interaction with the world through recognition and grasping of objects and, second, at serving as an interface between the reasoning and planning module and the real world. The latter provides the vision system with a task that drives it and defines a specific context, i.e. search for or identify a certain object and analyze it for potential later manipulation. We deal with cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system based on the idea of affordances. All three cases are also related to the state of the art and to the terminology used in neuroscience.
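
    The three cases naturally map to a dispatch over prior knowledge. The sketch below is purely schematic; the data structures and the shape-based fallback are invented for illustration, not taken from the paper:

        def infer_grasp(obj, known_grasps, category_grasps):
            # Known object: reuse grasps stored with its object model.
            if obj.get("id") in known_grasps:
                return known_grasps[obj["id"]]
            # Familiar object: transfer grasps from similar (same-category) objects.
            if obj.get("category") in category_grasps:
                return category_grasps[obj["category"]]
            # Unknown object: fall back to a heuristic computed from raw shape.
            return {"type": "top_grasp", "width": obj["bbox_width"]}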
