Digitala Vetenskapliga Arkivet

Autonomous Systems in Society and War: Philosophical Inquiries
KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.
2013 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The overall aim of this thesis is to examine some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to make sure that advanced robots behave in an ethically adequate way. The third issue, discussed in papers IV and V, has to do with agency and responsibility. A further issue, somewhat apart from the philosophical ones, has to do with coping with future technologies and developing methods for dealing with potentially disruptive technologies. This is discussed in papers VI and VII.

Paper I systematizes some ethical issues surrounding the use of UAVs in war, with the laws of war as a backdrop. It is suggested that the laws of war are too broad and might be interpreted differently depending on which normative moral theory is used.

Paper II is about future, more advanced autonomous robots, and whether the use of such robots can undermine the justification for killing in war. The suggestion is that this justification is substantially undermined if robots are used to replace humans to a high extent. Papers I and II both suggest revisions or additions to the laws of war.

Paper III discusses one normative moral theory – ethics of care – in connection with care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in ethics of care, and second, to discuss whether ethics of care may be a suitable theory to implement in care robots.

Paper IV discusses robots in relation to agency and responsibility, with a focus on consciousness. The paper takes a functionalist approach, and it is suggested that robots should be considered agents if they can behave as if they are, as judged by a moral Turing test.

Paper V is also about robots and agency, but with a focus on free will. The main question is whether robots can have free will in the same sense as we consider humans to have free will when holding them responsible for their actions in a court of law. It is argued that autonomy with respect to norms is crucial for the agency of robots.

Paper VI investigates the assessment of socially disruptive technological change. The co-evolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance, involving interdisciplinary and participatory elements.

Paper VII applies the results from paper VI – and a workshop – to autonomous systems, a potentially disruptive technology. A method for dealing with potentially disruptive technologies is developed in the paper.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2013, p. ix, 57
Series
Theses in philosophy from the Royal Institute of Technology, ISSN 1650-8831
Keywords [en]
UAVs, drones, military robots, laws of war, justification for killing, ethics of care, care robots, functional morality, moral responsibility, Moral Turing Test, robot morality, artificial agent, artificial agency, autonomy, norms, disruptive technology, co-evolution, scenarios, autonomous systems, security, decision guidance, technology assessment
National Category
Philosophy
Identifiers
URN: urn:nbn:se:kth:diva-127813
ISBN: 978-91-7501-820-1 (print)
OAI: oai:DiVA.org:kth-127813
DiVA, id: diva2:646174
Public defence
2013-10-02, Kapellet, Brinellvägen 6-8, KTH, Stockholm, 10:00 (English)
Note

QC 20130911

Available from: 2013-09-11 Created: 2013-09-06 Last updated: 2022-06-23 Bibliographically approved
List of papers
1. Is it morally right to use Unmanned Aerial Vehicles (UAVs) in war?
2011 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 24, no 3, p. 279-291. Article in journal (Refereed) Published
Abstract [en]

Several robotic automation systems, such as UAVs, are being used in combat today. This raises ethical questions. In this paper it is argued that UAVs, more than other weapons, may determine which normative theory the interpretation of the laws of war (LOW) will be based on. UAVs are unique as a weapon in the sense that the advantages they provide in terms of fewer casualties, together with the fact that they make war seem more like a computer game, might lower the threshold for entering war. This indicates the importance of revising the LOW, or adding some rules that focus specifically on UAVs.

Keywords
UAVs, laws of war, robots
National Category
Philosophy Ethics
Identifiers
urn:nbn:se:kth:diva-32432 (URN)
10.1007/s13347-011-0033-8 (DOI)
2-s2.0-80052618840 (Scopus ID)
Note

QC 20110414. Updated from submitted to published, 20120316. Previous title: Is it morally right to use UAVs (unmanned aerial vehicles) in war?

Available from: 2011-04-14 Created: 2011-04-14 Last updated: 2022-06-24 Bibliographically approved
2. Autonomous Robots in War: Undermining the Ethical Justification for Killing
(English) Manuscript (preprint) (Other academic)
Identifiers
urn:nbn:se:kth:diva-128339 (URN)
Note

QS 2013

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2022-06-23 Bibliographically approved
3. Robots and the ethics of care
2013 (English) In: International Journal of Technoethics, ISSN 1947-3451, Vol. 4, no 1, p. 67-82. Article in journal (Refereed) Published
Abstract [en]

In this paper, the moral theory ethics of care – EoC – is investigated and connected to care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in EoC (which is, it is argued, slightly different from the everyday use of the term), indicating that we should distinguish between "natural care" and "ethical care". The second aim is to discuss whether EoC may be a suitable theory to implement in care robots. The conclusion is that EoC may be a theory that is suitable for robots in health care settings.

National Category
Philosophy Ethics
Identifiers
urn:nbn:se:kth:diva-128338 (URN)
10.4018/jte.2013010106 (DOI)
000214295300006 (ISI)
2-s2.0-84882313905 (Scopus ID)
Note

QC 20130911

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2022-06-23 Bibliographically approved
4. The functional morality of robots
2010 (English) In: International Journal of Technoethics, ISSN 1947-3451, Vol. 1, no 4, p. 65-73. Article in journal (Refereed) Published
Abstract [en]

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans regarding the ascription of moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they have understood the situation properly. The author suggests that this should be done for robots as well. In this regard, if a robot passes a moral version of the Turing Test – a Moral Turing Test (MTT) – we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic or only syntactic) understanding of a moral situation, and by two examples: the transfer of a human mind into a computer, and aliens who actually are robots.

Keywords
functional morality, moral responsibility, moral turing test, robot morality, understanding
National Category
Philosophy
Identifiers
urn:nbn:se:kth:diva-32438 (URN)
10.4018/jte.2010100105 (DOI)
000214287700005 (ISI)
2-s2.0-84864120957 (Scopus ID)
Note

QC 20110414

Available from: 2011-04-14 Created: 2011-04-14 Last updated: 2022-06-24 Bibliographically approved
5. The Pragmatic Robotic Agent
2013 (English) In: Techné: Research in Philosophy and Technology, ISSN 1091-8264, E-ISSN 2691-5928, Vol. 17, no 3, p. 295-315. Article in journal (Other academic) Published
Abstract [en]

Can artifacts be agents in the same sense as humans? This paper endorses a pragmatic stance on that issue. The crucial question is whether artifacts can have free will in the same pragmatic sense as we consider humans to have free will when holding them responsible for their actions. The origin of actions is important. Can an action originate inside an artifact, considering that it is, at least today, programmed by a human? In this paper it is argued that autonomy with respect to norms is crucial for artificial agency.

National Category
Philosophy
Identifiers
urn:nbn:se:kth:diva-128341 (URN)
10.5840/techne2014249 (DOI)
Note

QC 20140617

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2024-01-22 Bibliographically approved
6. Assessing socially disruptive technological change
2010 (English) In: Technology in Society, ISSN 0160-791X, E-ISSN 1879-3274, Vol. 32, no 3, p. 209-218. Article in journal (Refereed) Published
Abstract [en]

The co-evolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance. None of the currently available methods satisfies these principles, but some of them contain useful methodological elements that should be integrated in a more satisfactory methodology. The outlines of such a methodology, multiple expertise interaction, are proposed. It combines elements from several previous methodologies, including (1) interdisciplinary groups of experts that assess the potential internal development of a particular technology; (2) external scenarios describing how the surrounding world can develop in ways that are relevant for the technology in question; and (3) a participatory process of convergence seminars, which is tailored to ensure that several alternative future developments are taken seriously into account. In particular, we suggest further development of a bottom-up scenario methodology to capture the co-evolutionary character of socio-technical development paths.

Keywords
Co-evolution, Convergence seminars, Critical functions of society, Decision guidance, Disruptive technologies, Multiple expertise interaction, Scenario planning, Technical artifact, Technology assessment
National Category
Humanities
Identifiers
urn:nbn:se:kth:diva-25198 (URN)
10.1016/j.techsoc.2010.07.002 (DOI)
000215254100005 (ISI)
2-s2.0-77956925681 (Scopus ID)
Note
QC 20101012

Available from: 2010-10-12 Created: 2010-10-12 Last updated: 2024-03-18 Bibliographically approved
7. Co-evolutionary scenarios for creative prototyping of future robot systems for civil protection
2014 (English) In: Technological Forecasting & Social Change, ISSN 0040-1625, E-ISSN 1873-5509, Vol. 84, p. 93-100. Article in journal (Refereed) Published
Abstract [en]

Co-evolutionary scenarios are used for creative prototyping with the purpose of assessing potential implications of future autonomous robot systems on civil protection. The methodology is based on a co-evolutionary scenario approach and the development of different evolutionary paths. Opportunities, threats and ethical aspects in connection with the introduction of robotics in the domestic security and safety sector are identified using an iterative participatory workshop methodology. Three creative prototypes of robotic systems are described: "RoboMall", "RoboButler" and "SnakeSquad". The debate in society that might follow the introduction of these three robot systems and society's response to the experienced ethical problems and opportunities are discussed in the context of two scenarios of different future societies.

Keywords
Co-evolutionary, Scenarios, Autonomous systems, Robots, Security, Safety
National Category
Humanities
Identifiers
urn:nbn:se:kth:diva-128342 (URN)
10.1016/j.techfore.2013.07.016 (DOI)
000336011800010 (ISI)
2-s2.0-84898409064 (Scopus ID)
Note

QC 20130911

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2022-06-23 Bibliographically approved

Open Access in DiVA

Kappa (624 kB), 9884 downloads
File information
File name: FULLTEXT01.pdf
File size: 624 kB
Checksum SHA-512:
9d0dfb5d082ddef299b63d71cb6649fb9e74f2a5ca850e05c4434a0e7f1524b78e8700948bb16c263947b63fc26ac13b5ec9b65ac9ba76d1348d8425a82cea19
Type: fulltext
Mimetype: application/pdf
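
To verify the integrity of a downloaded copy of the full text against the checksum above, a minimal Python sketch follows. The file name FULLTEXT01.pdf and its location in the current directory are assumptions about how the file was saved; the expected digest is the SHA-512 value published in this record.

import hashlib

# Expected SHA-512 digest published in this DiVA record for FULLTEXT01.pdf.
EXPECTED = (
    "9d0dfb5d082ddef299b63d71cb6649fb9e74f2a5ca850e05c4434a0e7f1524b7"
    "8e8700948bb16c263947b63fc26ac13b5ec9b65ac9ba76d1348d8425a82cea19"
)

def sha512_of(path):
    """Compute the SHA-512 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Assumes the full text was saved as FULLTEXT01.pdf in the current directory.
    result = sha512_of("FULLTEXT01.pdf")
    print("Checksum matches" if result == EXPECTED else "Checksum MISMATCH")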


By author/editor
Johansson, Linda
By organisation
Philosophy

Total: 9885 downloads
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are now no longer available.

Total: 4139 hits