Mitigating algorithmic bias in Artificial Intelligence systems
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics.
2019 (English). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Artificial Intelligence (AI) systems are increasingly used in society to make decisions that can have direct implications for human lives: credit risk assessments, employment decisions and criminal suspect predictions. As public attention has been drawn to examples of discriminatory and biased AI systems, concerns have been raised about the fairness of these systems. Face recognition systems, in particular, are often trained on non-diverse data sets in which certain groups are underrepresented. The focus of this thesis is to provide insights into the different aspects that are important to consider in order to mitigate algorithmic bias, as well as to investigate the practical implications of bias in AI systems. To fulfil this objective, qualitative interviews with academics and practitioners holding different roles in the field of AI and a quantitative online survey were conducted. A practical scenario covering face recognition and gender bias was also applied in order to understand how people reason about this issue in a practical context. The main conclusion of the study is that, despite high levels of awareness and understanding of the challenges and technical solutions, the academics and practitioners showed little or no awareness of the legal aspects of bias in AI systems. This finding implies that AI can be seen as a disruptive technology, where organisations tend to develop their own mitigation tools and frameworks and rely on their own moral judgement and understanding of the area instead of turning to legal authorities.
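The underrepresentation the abstract describes is typically surfaced by disaggregating a model's accuracy per demographic group. The sketch below is purely illustrative and not taken from the thesis; the function name, labels and data are hypothetical, and the "classifier output" is hard-coded to stand in for a real face recognition model.

```python
# Illustrative group-wise accuracy check for a classifier, a common first
# step in detecting the kind of bias the thesis studies. All data here is
# hypothetical stand-in data, not from the thesis.

def group_accuracy(y_true, y_pred, groups):
    """Return per-group accuracy, e.g. to compare error rates across genders."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (yt == yp), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical ground truth, model predictions, and group membership.
# The minority group ("f") is underrepresented and misclassified more often.
y_true = ["f", "f", "f", "m", "m", "m", "m", "m"]
y_pred = ["f", "m", "m", "m", "m", "m", "m", "m"]
groups = ["f", "f", "f", "m", "m", "m", "m", "m"]

acc = group_accuracy(y_true, y_pred, groups)
# A large accuracy gap between groups signals bias worth mitigating.
disparity = max(acc.values()) - min(acc.values())
```

In this toy example the overrepresented group is classified perfectly while the underrepresented group is not, so the disparity is large; fairness toolkits apply the same disaggregation idea at scale.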

Place, publisher, year, edition, pages
2019, p. 57.
Series
UPTEC STS, ISSN 1650-8319 ; 19033
Keywords [en]
Artificial Intelligence, AI, algorithmic bias, disruptive technology
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:uu:diva-388627
OAI: oai:DiVA.org:uu-388627
DiVA, id: diva2:1334465
External cooperation
IBM Svenska AB
Educational program
Systems in Technology and Society Programme
Available from: 2019-07-03. Created: 2019-07-02. Last updated: 2019-07-03. Bibliographically approved.

Open Access in DiVA

fulltext (2492 kB)
File information
File name: FULLTEXT01.pdf
File size: 2492 kB
Checksum (SHA-512): 00d10225dbc5a4a68b7d9b9953e049942ef38ebd2bc7ac83a64ec9ef5e6f4c67a3461b18fd1c158f638f2e17cca6780c0cd5d35b7fb3c0bac125ee288f0274f4
Type: fulltext
Mimetype: application/pdf
