Digitala Vetenskapliga Arkivet

Tunable Dynamics in Agent-Based Simulation using Multi-Objective Reinforcement Learning
Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems; Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-4144-4893
Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems; Linköping University, Faculty of Science & Engineering.
2019 (English). In: Proceedings of the 2019 Adaptive and Learning Agents Workshop (ALA), 2019, p. 1-7. Conference paper, Published paper (Refereed)
Abstract [en]

Agent-based simulation is a powerful tool for studying complex systems of interacting agents. To achieve good results, the behavior models used for the agents must be of high quality. Traditionally, these models have been handcrafted by domain experts, which is a difficult, expensive and time-consuming process. In contrast, reinforcement learning allows agents to learn how to achieve their goals by interacting with the environment. However, after training, the behavior of such agents is often static, i.e., it can no longer be affected by a human. This makes it difficult to adapt agent behavior to specific user needs, which may vary among different runs of the simulation. In this paper, we address this problem by studying how multi-objective reinforcement learning can be used as a framework for building tunable agents, whose characteristics can be adjusted at runtime to promote adaptiveness and diversity in agent-based simulation. We propose an agent architecture that allows us to adapt popular deep reinforcement learning algorithms to multi-objective environments. We empirically show that our method allows us to train tunable agents that can approximate the policies of multiple species of agents.
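The abstract describes adjusting an agent's characteristics at runtime via preferences over multiple objectives. A common way to realize this idea (a hypothetical sketch, not the architecture proposed in the paper) is linear scalarization: the agent keeps separate value estimates per objective and combines them with a user-chosen weight vector, so changing the weights at runtime changes which actions the agent prefers, without retraining.

```python
# Hypothetical sketch of a tunable multi-objective agent (illustrative
# only, not the paper's method): per-objective action values are combined
# with a runtime weight vector via linear scalarization, so adjusting
# the weights shifts the agent's behavior between "species" of policies.

def select_action(q_per_objective, weights):
    """q_per_objective[a][o]: value estimate of action a for objective o.
    weights[o]: user-chosen preference for objective o, set at runtime."""
    def scalarized(a):
        return sum(q * w for q, w in zip(q_per_objective[a], weights))
    return max(range(len(q_per_objective)), key=scalarized)

# Illustrative values for two actions and two objectives
# (e.g. "speed" vs. "safety"); the numbers are made up.
q = [
    [1.0, 0.2],  # action 0: fast but risky
    [0.4, 0.9],  # action 1: slow but safe
]

print(select_action(q, [0.9, 0.1]))  # speed-focused preferences  -> 0
print(select_action(q, [0.1, 0.9]))  # safety-focused preferences -> 1
```

With weights (0.9, 0.1) the scalarized values are 0.92 and 0.45, so the risky action wins; with (0.1, 0.9) they are 0.28 and 0.85, so the safe action wins. The same trained value estimates thus yield different behaviors depending on the runtime preference vector.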

Place, publisher, year, edition, pages
2019. p. 1-7
Keywords [en]
Modelling for agent based simulation, Reward structures for learning, Learning agent capabilities (agent models, communication, observation)
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-161093
OAI: oai:DiVA.org:liu-161093
DiVA, id: diva2:1362933
Conference
Adaptive and Learning Agents Workshop (ALA-19) at AAMAS, Montreal, Canada, May 13-14, 2019
Funder
Vinnova, 2017-04885
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-10-22. Created: 2019-10-22. Last updated: 2021-04-20. Bibliographically approved.

Open Access in DiVA

Tunable Dynamics in Agent-Based Simulation using Multi-Objective Reinforcement Learning (687 kB), 1339 downloads
File information
File name: FULLTEXT01.pdf
File size: 687 kB
Checksum (SHA-512): c8606dea6d6b77aeff1bf09f567a81744e08f1f2d11e84799be07e20c81df39247c8208a2839b515be8df7f189d4e3ebe9274f78483f3929b4bf918382fdb328
Type: fulltext
Mimetype: application/pdf

Search in DiVA

By author/editor: Källström, Johan; Heintz, Fredrik
By organisation: Artificial Intelligence and Integrated Computer Systems; Faculty of Science & Engineering
By national category: Computer Sciences

Total: 1346 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 1152 hits