Learning comparison: Reinforcement Learning vs Inverse Reinforcement Learning: How well does inverse reinforcement learning perform in simple markov decision processes in comparison to reinforcement learning?
KTH, School of Electrical Engineering and Computer Science (EECS).
2019 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Alternative title
Jämförelse mellan förstärkningsinlärning och inverterad förstärkningsinlärning (Swedish)
Abstract [en]

This research project presents a qualitative comparison between two learning approaches, Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL), on the Gridworld Markov Decision Process. The focus is on the second paradigm, IRL, as it is relatively new and little work has been done in this field of study. As observed, RL outperforms IRL, obtaining a correct solution in all of the scenarios studied. However, the behaviour of the IRL algorithms can be improved, and this is shown and analyzed as part of the project's scope.

Abstract [sv]

This study is a qualitative comparison between two learning approaches, Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL), using Gridworld, a Markov Decision Process. The focus is on the latter algorithm, IRL, since it is considered relatively new and few studies have so far been made on it. In the study, RL proves more advantageous than IRL, producing a correct solution in all of the scenarios presented. The behaviour of the IRL algorithm can, however, be improved, which is also shown and analyzed in this study.
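The abstracts describe comparing RL against IRL on a Gridworld MDP. As an illustrative sketch only (not the thesis's actual code; the grid size, rewards, and hyperparameters here are assumptions), a tabular Q-learning agent on a hypothetical 4x4 grid with a single goal reward might look like:

```python
import random

# Hypothetical 4x4 Gridworld: start at (0, 0), goal at (3, 3).
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL = (SIZE - 1, SIZE - 1)

def step(state, action):
    """Apply an action; moves off the grid leave the state unchanged."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    reward = 1.0 if nxt == GOAL else 0.0  # sparse reward at the goal only
    return nxt, reward, nxt == GOAL

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning, the classic RL baseline for small MDPs."""
    q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s = (0, 0)
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, rew, done = step(s, ACTIONS[a])
            # standard temporal-difference update
            q[s][a] += alpha * (rew + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

An IRL algorithm would invert this setup: given demonstrated trajectories, it would infer a reward function under which those trajectories are (near-)optimal, rather than learning a policy from a known reward as above.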

Place, publisher, year, edition, pages
2019, p. 33
Series
TRITA-EECS-EX ; 2019:400
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-259371
OAI: oai:DiVA.org:kth-259371
DiVA, id: diva2:1351647
Available from: 2019-10-17. Created: 2019-09-16. Last updated: 2019-10-17. Bibliographically approved.

Open Access in DiVA

File information
File name: FULLTEXT01.pdf
File size: 1299 kB
Checksum (SHA-512): dd3decfc3fe64c4e70d7eb6bb440d853c566dc3516513683be71359a64c28c02430b09f21d8a1fa2853b685f61e71e8ba1cc5646869161fe2a9e4d95cf6956fe
Type: fulltext
Mimetype: application/pdf

