An Imitation-Learning based Agent playing Super Mario
Blekinge Institute of Technology, Faculty of Computing, Department of Creative Technologies.
2014 (English) Student thesis
Abstract [en]

Context. Developing an Artificial Intelligence (AI) agent that can predict and act in every possible situation in the dynamic environments that modern video games often consist of is nearly impossible to do in advance, and creating one by hand would cost considerable money and time. A learning AI agent that studies its environment by itself, with the help of Reinforcement Learning (RL), would simplify this task. Another commonly required feature is an AI agent with natural behavior; one way to approach that problem is to imitate a human by using Imitation Learning (IL).

Objectives. The purpose of this investigation is to study whether it is possible to create a learning AI agent, using the combination of the two learning techniques RL and IL, that is able to play and complete some levels in a platform game.

Methods. To investigate the research question, an implementation is made that combines one RL technique with one IL technique. A set of human players play the game, and their behavior is saved and applied to the agents. RL is then used to train and tune the agents' playing performance. A couple of experiments are executed to evaluate the differences between the trained agents and their respective human teachers.

Results. The experiments showed promising indications that the agents, during different phases of the experiments, behaved similarly to their human trainers. The agents also performed well when compared to other, already existing agents.

Conclusions. In conclusion, there are promising results for creating dynamic agents with natural behavior through the combination of RL and IL, and with additional adjustments such an agent could perform even better as a learning AI with a more natural behavior.
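The thesis does not publish its implementation here, but the IL + RL combination described in the Methods paragraph can be sketched in a minimal, hypothetical form: recorded human state–action pairs seed a Q-table so demonstrated actions start out preferred, and standard one-step Q-learning then refines those values. The toy states, actions, and reward values below are illustrative assumptions, not the thesis's actual representation.

```python
import random
from collections import defaultdict

# Illustrative action set for a platform game; the real agent's
# state and action encoding is not specified in this record.
ACTIONS = ["left", "right", "jump"]

def q_from_demonstrations(demos, bonus=1.0):
    """Imitation step: bias Q-values toward actions a human demonstrated.

    `demos` is a list of (state, action) pairs recorded from play sessions.
    """
    q = defaultdict(float)
    for state, action in demos:
        q[(state, action)] += bonus
    return q

def q_learning_step(q, state, action, reward, next_state,
                    alpha=0.1, gamma=0.9):
    """Reinforcement step: one-step Q-learning update on the seeded table."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

def choose_action(q, state, epsilon=0.1):
    """Epsilon-greedy policy over the (demonstration-biased) Q-values."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])
```

With this seeding, the agent initially mimics its human teacher (e.g. jumping at gaps) and RL then adjusts those preferences from in-game reward, which matches the record's description of training and tuning the agents' playing performance after the imitation phase.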

Place, publisher, year, edition, pages
2014, 50 p.
Keyword [en]
Artificial Intelligence, Reinforcement Learning, Imitation Learning
National Category
Computer Science; Human Computer Interaction; Software Engineering
Identifiers
URN: urn:nbn:se:bth-4529
Local ID: oai:bth.se:arkivex93F504D0E2885A1EC1257D0300512793
OAI: oai:DiVA.org:bth-4529
DiVA: diva2:831873
Educational program
PAACI Master of Science in Game and Software Engineering
Uppsok
Technology
Supervisors
Available from: 2015-04-22 Created: 2014-06-26 Last updated: 2016-02-22 Bibliographically approved

Open Access in DiVA

fulltext (737 kB), 132 downloads
File information
File name: FULLTEXT01.pdf
File size: 737 kB
Checksum (SHA-512): 362e1aa9a0ba62c3cb1de24044bc4221be4005f552a51536fa5c0d58b2f7a5f8682dcb1976f885d5b76d02a8f5c1a4699d35a17179359d59ba2da97dd753bb6d
Type: fulltext
Mimetype: application/pdf
