1 - 4 of 4
  • 1.
    Hjelm, Martin
    KTH, School of Electrical Engineering and Computer Science (EECS), Robotics, Perception and Learning, RPL.
    Holistic Grasping: Affordances, Grasp Semantics, Task Constraints, 2019. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Most of us perform grasping actions over a thousand times per day without giving them much consideration, be it while driving or drinking coffee. Teaching robots the same ease in grasping has been a goal of the robotics research community for decades.

    The reason for the slow progress lies mainly in the inferiority of the robot sensorimotor system. Robotic grippers are often non-compliant and lack the degrees of freedom of human hands, and haptic sensors are rudimentary, offering significantly lower resolution and sensitivity than the human sense of touch.

    Research has therefore focused on engineering solutions that center on the stability of the grasp. This involves specifying complex functions and search strategies detailing the interaction between the digits of the robot and the surface of the object. Given the amount of variation in materials, shapes, and ability to deform, it seems infeasible to formulate such a gripper-to-shape mapping analytically. Many researchers, this thesis included, have instead looked to data-driven methods for learning the gripper-to-shape mapping.

    Humans obviously have a similar mapping capability. However, how we grasp an object is determined foremost by what we are going to do with the object. We have priors on task, material, and the dynamics of objects that help guide the grasping process. We also have a deeper understanding of how shape and material relate to our own embodiment.

    We tie all these aspects together: our understanding of what an object can be used for, how that affects our interaction with it, and how our hand can form to achieve the goal of the manipulation. For us humans, grasping is not just a gripper-to-shape mapping; it is a holistic process in which every part of the chain matters to the outcome. The focus of this thesis is thus on how to incorporate such a holistic process into robotic grasp planning.

    We address the holistic grasping process through three jointly connected modules. The first is affordance detection: learning to infer the common parts of objects that afford an action, a form of conceptualization of the affordance categories. The second is learning grasp semantics: how shape relates to the gripper configuration. The third is learning how the task constrains the grasping process.

    We explore these three parts through the concept of similarity. This translates directly into the idea that we should learn a representation that places similar instances of the entities we are describing, that is, objects, grasps, and tasks, close to each other in that space. We show that similarity-based representations help the robot reason about which parts of an object are important for affordance inference, which grasps and tasks are similar, and how the categories relate to each other. Finally, the similarity-based approach helps us tie all the parts together in a conceptual demonstration of how a holistic grasping process might be realized.
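
    As a toy illustration of this similarity idea, the sketch below learns a space where objects sharing an affordance end up close together. It is a minimal stand-in, not the thesis's models: scikit-learn's NeighborhoodComponentsAnalysis plays the role of the metric learner, and the descriptors and affordance labels are synthetic placeholders.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis

    rng = np.random.default_rng(0)
    y = rng.integers(0, 3, size=60)             # hypothetical affordance categories
    X = rng.normal(size=(60, 10)) + y[:, None]  # toy object descriptors with crude class structure

    # Learn a linear map that pulls same-affordance objects together.
    nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0)
    Z = nca.fit_transform(X, y)

    # In the learned space, affordance inference for a novel object reduces
    # to looking at its nearest neighbours.
    knn = KNeighborsClassifier(n_neighbors=5).fit(Z, y)
    novel = rng.normal(size=(1, 10)) + 1.0
    print("inferred affordance:", knn.predict(nca.transform(novel)))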

  • 2.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, R.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Representations for cross-task, cross-object grasp transfer, 2014. In: Proceedings - IEEE International Conference on Robotics and Automation, IEEE conference proceedings, 2014, pp. 5699-5704. Conference paper (Refereed)
    Abstract [en]

    We address the problem of transferring grasp knowledge across objects and tasks. This means dealing with two important issues: 1) the induction of possible transfers, i.e., whether a given object affords a given task, and 2) the planning of a grasp that will allow the robot to fulfill the task. The induction of object affordances is approached by abstracting the sensory input of an object as a set of attributes that the agent can reason about through similarity and proximity. For grasp execution, we combine a part-based grasp planner with a model of task constraints. The task constraint model indicates areas of the object that the robot can grasp to execute the task. Within these areas, the part-based planner finds a hand placement that is compatible with the object shape. The key contribution is the ability to transfer task parameters across objects while the part-based grasp planner allows for transferring grasp information across tasks. As a result, the robot is able to synthesize plans for previously unobserved task/object combinations. We illustrate our approach with experiments conducted on a real robot.
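
    Schematically, the two-stage pipeline described above reads as: the task-constraint model gates which object regions are admissible, and the part-based planner then ranks placements inside that gate. The sketch below is a hypothetical mock-up of that control flow only; both scoring functions are invented placeholders, not the paper's learned models.

    import numpy as np

    def task_constraint_score(points, task):
        # Hypothetical stand-in: True where grasping still lets the task be
        # executed (e.g., keep the top of a container free for pouring).
        if task == "pouring":
            return points[:, 2] < 0.5
        return np.ones(len(points), dtype=bool)

    def part_compatibility(points):
        # Hypothetical stand-in for the part-based planner's shape score.
        return -np.abs(points[:, 0] - 0.5)  # prefer placements near the centre line

    rng = np.random.default_rng(1)
    candidates = rng.uniform(0.0, 1.0, size=(200, 3))  # candidate grasp points on the object

    # 1) Induction: keep only candidates the task allows.
    allowed = candidates[task_constraint_score(candidates, "pouring")]
    # 2) Planning: within the allowed area, pick the most shape-compatible placement.
    best = allowed[np.argmax(part_compatibility(allowed))]
    print("selected grasp point:", best.round(2))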

  • 3.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kjellström, Hedvig
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Sparse Summarization of Robotic Grasping Data, 2013. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), New York: IEEE, 2013, pp. 1082-1087. Conference paper (Refereed)
    Abstract [en]

    We propose a new approach for learning a summarized representation of high dimensional continuous data. Our technique consists of a Bayesian non-parametric model capable of encoding high-dimensional data from complex distributions using a sparse summarization. Specifically, the method marries techniques from probabilistic dimensionality reduction and clustering. We apply the model to learn efficient representations of grasping data for two robotic scenarios.
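
    The flavour of such a model can be approximated with off-the-shelf pieces: probabilistic dimensionality reduction followed by a Dirichlet-process mixture that leaves most of its components empty, yielding a sparse set of summaries. The paper's model is a joint one, so the two-stage sketch below is only an analogy, and the grasp data is synthetic.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(2)
    grasps = rng.normal(size=(300, 50))  # toy high-dimensional grasping data

    latent = PCA(n_components=5).fit_transform(grasps)  # reduce dimensionality first

    # A truncated Dirichlet-process mixture prunes unused components,
    # leaving a sparse summarization of the data.
    dpgmm = BayesianGaussianMixture(
        n_components=20,  # upper bound; most components end up unused
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(latent)

    active = dpgmm.weights_ > 1e-2
    print(f"{active.sum()} of 20 components summarize the data")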

  • 4.
    Hjelm, Martin
    et al.
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Ek, Carl Henrik
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Detry, Renaud
    Kragic, Danica
    KTH, School of Computer Science and Communication (CSC), Computer Vision and Active Perception, CVAP. KTH, School of Computer Science and Communication (CSC), Centres, Centre for Autonomous Systems, CAS.
    Learning Human Priors for Task-Constrained Grasping, 2015. In: Computer Vision Systems (ICVS 2015), Springer Berlin/Heidelberg, 2015, pp. 207-217. Conference paper (Refereed)
    Abstract [en]

    An autonomous agent using man-made objects must understand how the task conditions the grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in features. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the relevant features for the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching them to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.
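
    The filtering-and-ordering step at the end reads as nearest-neighbour matching in the learned feature space. The sketch below illustrates just that step under invented features; the learned representation itself is what the paper contributes and is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    demonstrated = rng.normal(size=(30, 8))   # features of demonstrated grasps for one task
    synthesized = rng.normal(size=(100, 8))   # features of grasps synthesized on a novel object

    # Distance to the closest demonstrated instance stands in for an
    # analytically formulated task metric.
    dists = np.linalg.norm(
        synthesized[:, None, :] - demonstrated[None, :, :], axis=-1
    ).min(axis=1)

    order = np.argsort(dists)                               # best task matches first
    keep = order[dists[order] < np.quantile(dists, 0.25)]   # filter to the closest quartile
    print(f"kept {keep.size} of {synthesized.shape[0]} synthesized grasps")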
