Online Learning of Vision-Based Robot Control during Autonomous Operation
Öfjäll, Kristoffer. Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, The Institute of Technology.
Felsberg, Michael. Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, The Institute of Technology. ORCID iD: 0000-0002-6096-3648
2015 (English) In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal and Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, p. 137-156. Chapter in book, part of anthology (Refereed)
Abstract [en]

Online learning of vision-based robot control requires appropriate activation strategies during operation. In this chapter we present such a learning approach with applications to two areas of vision-based robot control. In the first setting, self-evaluation is possible for the learning system and the system autonomously switches to learning mode for producing the necessary training data by exploration. The other application is in a setting where external information is required for determining the correctness of an action. Therefore, an operator provides training data when required, leading to an automatic mode switch to online learning from demonstration. In experiments for the first setting, the system is able to autonomously learn the inverse kinematics of a robotic arm. We propose improvements producing more informative training data compared to random exploration. This reduces training time and limits learning to regions where the learnt mapping is used. The learnt region is extended autonomously on demand. In experiments for the second setting, we present an autonomous driving system learning a mapping from visual input to control signals, which is trained by manually steering the robot. After the initial training period, the system seamlessly continues autonomously. Manual control can be taken back at any time for providing additional training.
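The mode switching described in the abstract can be made concrete with a short control loop. The sketch below is not the authors' implementation: Robot, OnlineLearner and all method names are hypothetical placeholders. It only illustrates the pattern in which operator commands double as online training examples, and the learnt mapping takes over seamlessly as soon as manual input stops.

    # Minimal sketch of online learning from demonstration with automatic
    # mode switching. All interfaces are hypothetical, not the chapter's system.
    def control_loop(robot, learner):
        while robot.running():
            features = robot.perceive()          # low-level visual features
            if robot.operator_active():
                # Manual mode: execute the operator's command and use the
                # (features, action) pair as an online training example.
                action = robot.operator_command()
                learner.update(features, action)
            else:
                # Autonomous mode: predict the control signal from vision.
                action = learner.predict(features)
            robot.actuate(action)

The same loop covers the self-evaluating setting if operator_active() is replaced by a self-evaluation test that switches the system into explorative collection of training data.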

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015. p. 137-156
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; Vol. 23
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-110891
DOI: 10.1007/978-3-662-43859-6_8
ISBN: 978-3-662-43858-9 (print)
ISBN: 978-3-662-43859-6 (print)
OAI: oai:DiVA.org:liu-110891
DiVA, id: diva2:750041
Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11 Bibliographically approved
Part of thesis
1. Online Learning for Robot Vision
2014 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent as early as 48 years ago, when Seymour Papert gathered a group of summer workers to solve the computer vision problem [35].

Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted, and if the learning system is able to extract the gist of these examples, the gap is bridged. There are, however, some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multimodal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear, and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods.

This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision-based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored: learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state-of-the-art batch learning methods.
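The two ingredients named above, channel representation and associative Hebbian learning, can be sketched in a few lines. This is a toy illustration under my own assumptions: it uses Gaussian channels, an additive outer-product update and centre-of-mass decoding, whereas qHebb itself uses a normalised multiplicative Hebbian update and mode-seeking decoding so that multiple hypotheses survive.

    # Toy sketch of channel coding plus Hebbian association (not qHebb itself).
    import numpy as np

    def encode(x, centers, width=1.0):
        # Channel-encode scalar x as soft, overlapping Gaussian activations.
        a = np.exp(-0.5 * ((x - centers) / width) ** 2)
        return a / a.sum()

    def decode(u, centers):
        # Crude centre-of-mass decoding; proper channel decoding extracts the
        # strongest mode, which is what preserves multiple hypotheses.
        return float(np.dot(u, centers) / u.sum())

    centers = np.linspace(0.0, 10.0, 12)          # channel centres
    C = np.zeros((len(centers), len(centers)))    # linkage matrix (output x input)

    # Hebbian association: strengthen links between co-active channels.
    for x, y in [(1.0, 9.0), (2.0, 8.0), (3.0, 7.0)]:
        C += np.outer(encode(y, centers), encode(x, centers))

    # Query: propagate an input channel vector through the linkage matrix.
    print(decode(C @ encode(2.5, centers), centers))  # between 7 and 8

Because the representation is a fixed-size vector and each update is a rank-one addition, encoding, learning and prediction all take constant time per sample, which is what makes processing at video rate plausible even after many training examples.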

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2014. p. 62
Series
Linköping Studies in Science and Technology. Thesis, ISSN 0280-7971 ; 1678
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-110892 (URN)
10.3384/lic.diva-110892 (DOI)
978-91-7519-228-4 (ISBN)
Presentation
2014-10-24, Visionen, Building B, Campus Valla, Linköping University, Linköping, 13:15 (Swedish)
Opponent
Supervisors
Research funder
EU, FP7, Seventh Framework Programme, 247947; Swedish Research Council
Available from: 2014-09-26 Created: 2014-09-26 Last updated: 2018-01-11 Bibliographically approved

Open Access in DiVA

fulltext (1658 kB), 107 downloads
File name: FULLTEXT01.pdf, file size: 1658 kB, type: fulltext, MIME type: application/pdf
Checksum SHA-512:
7d807ee64a3a7deef65cc432ac091c57ebbd2730965057515cd37827256b2271beb1abb4b2eca0ad4af7ac8d0e4c30f4df0478780afdf0c1afdc03daa49031c8

Supplemental Material (Video) (92859 kB), 70 downloads
File name: MOVIE01.mp4, file size: 92859 kB, type: movie, MIME type: video/mp4
Checksum SHA-512:
604591a73958658b14fe0178fc8bbfaeb59722c7fe4551f7cb519273ec7e38ad6dc792834383de20377fec7382f3c5428daa94d65f993caf604536be4d4b22f2

Other links

Publisher's full text
Find book in another country

