Autonomous Systems in Society and War: Philosophical Inquiries
KTH, School of Architecture and the Built Environment (ABE), Philosophy and History of Technology, Philosophy.
2013 (English) Doctoral thesis, comprising papers (Other academic)
Abstract [en]

The overall aim of this thesis is to look at some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to ensure that advanced robots behave in an ethically adequate way. The third issue, discussed in papers IV and V, has to do with agency and responsibility. A further issue, somewhat apart from the philosophical ones, has to do with coping with future technologies and developing methods for dealing with potentially disruptive technologies. This is discussed in papers VI and VII.

Paper I systematizes some ethical issues surrounding the use of UAVs in war, with the laws of war as a backdrop. It is suggested that the laws of war are too wide and might be interpreted differently depending on which normative moral theory is used.

Paper II is about future, more advanced autonomous robots, and whether the use of such robots can undermine the justification for killing in war. The suggestion is that this justification is substantially undermined if robots are used to replace humans to a large extent. Papers I and II both suggest revisions or additions to the laws of war.

Paper III provides a discussion on one normative moral theory – ethics of care – connected to care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in ethics of care, and second, to discuss whether ethics of care may be a suitable theory to implement in care robots.

Paper IV discusses robots in relation to agency and responsibility, with a focus on consciousness. The paper takes a functionalist approach, and it is suggested that robots should be considered agents if they can behave as if they are agents, as assessed in a moral Turing test.

Paper V is also about robots and agency, but with a focus on free will. The main question is whether robots can have free will in the same sense as we consider humans to have free will when holding them responsible for their actions in a court of law. It is argued that autonomy with respect to norms is crucial for the agency of robots.

Paper VI investigates the assessment of socially disruptive technological change. The co-evolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance, involving interdisciplinary and participatory elements.

Paper VII applies the results from paper VI – and a workshop – to autonomous systems, a potentially disruptive technology. A method for dealing with potentially disruptive technologies is developed in the paper.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2013, ix, 57 p.
Series
Theses in philosophy from the Royal Institute of Technology, ISSN 1650-8831
Keywords [en]
UAVs, drones, military robots, laws of war, justification for killing, ethics of care, care robots, functional morality, moral responsibility, Moral Turing Test, robot morality, artificial agent, artificial agency, autonomy, norms, disruptive technology, co-evolution, scenarios, autonomous systems, security, decision guidance, technology assessment
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-127813; ISBN: 978-91-7501-820-1 (printed); OAI: oai:DiVA.org:kth-127813; DiVA: diva2:646174
Public defence
2013-10-02, Kapellet, Brinellvägen 6-8, KTH, Stockholm, 10:00 (English)
Opponent
Supervisors
Note

QC 20130911

Available from: 2013-09-11 Created: 2013-09-06 Last updated: 2014-06-17 Bibliographically approved
List of papers
1. Is it morally right to use Unmanned Aerial Vehicles (UAVs) in war?
2011 (English) In: Philosophy & Technology, ISSN 2210-5433, E-ISSN 2210-5441, Vol. 24, no. 3, pp. 279-291. Article in journal (Refereed) Published
Abstract [en]

Several robotic automation systems, such as UAVs, are being used in combat today. This raises ethical questions. In this paper it is argued that UAVs, more than other weapons, may determine which normative theory the interpretation of the laws of war (LOW) will be based on. UAVs are unique as a weapon in the sense that the advantages they provide in terms of fewer casualties, and the fact that they make war seem more like a computer game, might lower the threshold for entering war. This indicates the importance of revising the LOW, or adding some rules that focus specifically on UAVs.

Keywords
UAVs, laws of war, robots
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-32432; DOI: 10.1007/s13347-011-0033-8; Scopus ID: 2-s2.0-80052618840
Note

QC 20110414. Updated from submitted to published, 20120316. Previous title: Is it morally right to use UAVs (unmanned aerial vehicles) in war?

Available from: 2011-04-14 Created: 2011-04-14 Last updated: 2017-12-11 Bibliographically approved
2. Autonomous Robots in War: Undermining the Ethical Justification for Killing
(English) Manuscript (preprint) (Other academic)
Identifiers
URN: urn:nbn:se:kth:diva-128339
Note

QS 2013

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2013-09-11 Bibliographically approved
3. Robots and the ethics of care
2013 (English) In: International Journal of Technoethics, ISSN 1947-3451, Vol. 4, no. 1, pp. 67-82. Article in journal (Refereed) Published
Abstract [en]

In this paper, the moral theory ethics of care (EoC) is investigated and connected to care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in EoC (which is, it is argued, slightly different from the everyday use of the term), indicating that we should distinguish between "natural care" and "ethical care". The second aim is to discuss whether EoC may be a suitable theory to implement in care robots. The conclusion is that EoC may be a theory that is suitable for robots in health care settings.

HSV category
Identifiers
URN: urn:nbn:se:kth:diva-128338; DOI: 10.4018/jte.2013010106; Scopus ID: 2-s2.0-84882313905
Note

QC 20130911

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2013-09-11 Bibliographically approved
4. The functional morality of robots
2010 (English) In: International Journal of Technoethics, ISSN 1947-3451, Vol. 1, no. 4, pp. 65-73. Article in journal (Refereed) Published
Abstract [en]

It is often argued that a robot cannot be held morally responsible for its actions. The author suggests that one should use the same criteria for robots as for humans regarding the ascription of moral responsibility. When deciding whether humans are moral agents, one should look at their behaviour and listen to the reasons they give for their judgments in order to determine that they understood the situation properly. The author suggests that this should be done for robots as well. In this regard, if a robot passes a moral version of the Turing Test, a Moral Turing Test (MTT), we should hold the robot morally responsible for its actions. This is supported by the impossibility of deciding who actually has (semantic or only syntactic) understanding of a moral situation, and by two examples: the transferring of a human mind into a computer, and aliens who actually are robots.

Keywords
functional morality, moral responsibility, moral turing test, robot morality, understanding
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-32438; Scopus ID: 2-s2.0-84864120957
Note

QC 20110414

Available from: 2011-04-14 Created: 2011-04-14 Last updated: 2013-09-11 Bibliographically approved
5. The Pragmatic Robotic Agent
2013 (English) In: Techné: Research in Philosophy and Technology, ISSN 1091-8264, Vol. 17, no. 3, pp. 295-315. Article in journal (Other academic) Published
Abstract [en]

Can artifacts be agents in the same sense as humans? This paper endorses a pragmatic stance on that issue. The crucial question is whether artifacts can have free will in the same pragmatic sense as we consider humans to have free will when holding them responsible for their actions. The origin of actions is important. Can an action originate inside an artifact, considering that it is, at least today, programmed by a human? In this paper it is argued that autonomy with respect to norms is crucial for artificial agency.

HSV category
Identifiers
URN: urn:nbn:se:kth:diva-128341; DOI: 10.5840/techne2014249
Note

QC 20140617

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2017-12-06 Bibliographically approved
6. Assessing socially disruptive technological change
2010 (English) In: Technology in Society, ISSN 0160-791X, E-ISSN 1879-3274, Vol. 32, no. 3, pp. 209-218. Article in journal (Refereed) Published
Abstract [en]

The co-evolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance. None of the currently available methods satisfies these principles, but some of them contain useful methodological elements that should be integrated in a more satisfactory methodology. The outlines of such a methodology, multiple expertise interaction, are proposed. It combines elements from several previous methodologies, including (1) interdisciplinary groups of experts that assess the potential internal development of a particular technology; (2) external scenarios describing how the surrounding world can develop in ways that are relevant for the technology in question; and (3) a participatory process of convergence seminars, which is tailored to ensure that several alternative future developments are taken seriously into account. In particular, we suggest further development of a bottom-up scenario methodology to capture the co-evolutionary character of socio-technical development paths.

Keywords
Co-evolution, Convergence seminars, Critical functions of society, Decision guidance, Disruptive technologies, Multiple expertise interaction, Scenario planning, Technical artifact, Technology assessment
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-25198; DOI: 10.1016/j.techsoc.2010.07.002; Scopus ID: 2-s2.0-77956925681
Note
QC 20101012. Available from: 2010-10-12 Created: 2010-10-12 Last updated: 2017-12-12 Bibliographically approved
7. Co-evolutionary scenarios for creative prototyping of future robot systems for civil protection
2014 (English) In: Technological Forecasting & Social Change, ISSN 0040-1625, E-ISSN 1873-5509, Vol. 84, pp. 93-100. Article in journal (Refereed) Published
Abstract [en]

Co-evolutionary scenarios are used for creative prototyping with the purpose of assessing potential implications of future autonomous robot systems on civil protection. The methodology is based on a co-evolutionary scenario approach and the development of different evolutionary paths. Opportunities, threats and ethical aspects in connection with the introduction of robotics in the domestic security and safety sector are identified using an iterative participatory workshop methodology. Three creative prototypes of robotic systems are described: "RoboMall", "RoboButler" and "SnakeSquad". The debate in society that might follow the introduction of these three robot systems and society's response to the experienced ethical problems and opportunities are discussed in the context of two scenarios of different future societies.

Keywords
Co-evolutionary, Scenarios, Autonomous systems, Robots, Security, Safety
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-128342; DOI: 10.1016/j.techfore.2013.07.016; ISI: 000336011800010; Scopus ID: 2-s2.0-84898409064
Note

QC 20130911

Available from: 2013-09-11 Created: 2013-09-11 Last updated: 2017-12-06 Bibliographically approved

Open Access in DiVA

Kappa (624 kB), 1166 downloads
File information
File: FULLTEXT01.pdf, file size 624 kB, checksum SHA-512
9d0dfb5d082ddef299b63d71cb6649fb9e74f2a5ca850e05c4434a0e7f1524b78e8700948bb16c263947b63fc26ac13b5ec9b65ac9ba76d1348d8425a82cea19
Type: fulltext, MIME type: application/pdf

Search in DiVA

By author/editor
Johansson, Linda