Diverse Sounds: Enabling Inclusive Sonic Interaction
Frid, Emma (ORCID iD: 0000-0002-4422-5223)
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID (Sound and Music Computing)
2019 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This compilation thesis collects a series of publications on designing sonic interactions for diversity and inclusion. The presented papers focus on case studies in which musical interfaces were either developed or reviewed. While the described studies are substantially different in nature, they all contribute to the thesis by providing reflections on how musical interfaces could be designed to enable inclusion rather than exclusion. Building on this work, I introduce two terms: inclusive sonic interaction design and Accessible Digital Musical Instruments (ADMIs). I also define nine properties to consider in the design and evaluation of ADMIs: expressiveness, playability, longevity, customizability, pleasure, sonic quality, robustness, multimodality and causality. Inspired by the experience of playing an acoustic instrument, I propose to enable musical inclusion for under-represented groups (for example, persons with visual or hearing impairments, as well as elderly people) through the design of Digital Musical Instruments (DMIs) in the form of rich multisensory experiences that allow for multiple modes of interaction. At the same time, it is important to enable customization to fit user needs, both in terms of gestural control and sonic output. I conclude that the computer music community has the potential to actively engage more people in music-making activities. In addition, I stress the importance of identifying the challenges that people face in these contexts, thereby enabling initiatives towards changing practices.

Abstract [sv]

This compilation thesis presents a number of articles focusing on diversity and broadened participation within the field of Sonic Interaction Design. The publications cover the development of musical interfaces as well as a review of such systems. The studies described in this thesis differ substantially from one another, but they all contribute to the thesis by providing the reader with reflections on how musical interfaces can be designed to promote broadened participation in music-making. Based on these studies, I introduce two concepts: inclusive sonic interaction design and Accessible Digital Musical Instruments (ADMIs). I also define nine properties to consider in the design and evaluation of such instruments: expressiveness, playability, longevity, customizability, pleasure/enjoyment, music and sound quality, robustness, multimodality and causality. Inspired by acoustic musical instruments, I propose promoting increased participation of under-represented groups (for example, persons with visual or hearing impairments, as well as elderly people) by designing digital musical instruments in the form of multimodal interfaces. In this way, the instruments can open up for multiple modes of interaction and enable multisensory feedback. It is also important that these instruments can be adapted to the needs of each user, both in terms of sound-producing gestures and sonic material. I conclude that the research field of computer music has the potential to promote broadened participation in music-making. By identifying the challenges that persons in under-represented groups face, we can act to create a more inclusive practice.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2019, p. 81
Series
TRITA-EECS-AVL ; 2020:2
Keywords [en]
Accessible Digital Musical Instruments, Sonic Interaction Design, Sound and Music Computing, New Interfaces for Musical Expression
National Category
Media and Communication Technology; Human Computer Interaction
Research subject
Media Technology; Human-computer Interaction
Identifiers
URN: urn:nbn:se:kth:diva-265159; ISBN: 978-91-7873-378-1 (print); OAI: oai:DiVA.org:kth-265159; DiVA id: diva2:1377483
Public defence
2020-01-10, Kollegiesalen, Brinellvägen 8, Stockholm, 14:00 (English)
Note

QC 20191212

Available from: 2019-12-12. Created: 2019-12-11. Last updated: 2019-12-13. Bibliographically approved.
List of papers
1. Accessible Digital Musical Instruments: A Review of Musical Interfaces in Inclusive Music Practice
2019 (English). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no. 3, article id 57. Article in journal (Refereed). Published.
Abstract [en]

Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. The history of research concerned with facilitating inclusion in music-making is outlined, and current developments and trends in the field are discussed. Although the use of music technology in music therapy contexts has attracted more attention in recent years, the topic has been relatively unexplored in the Computer Music literature. This review investigates a total of 113 publications focusing on ADMIs. Based on the 83 instruments in this dataset, ten control interface types were identified: tangible controllers, touchless controllers, Brain–Computer Music Interfaces (BCMIs), adapted instruments, wearable controllers or prosthetic devices, mouth-operated controllers, audio controllers, gaze controllers, touchscreen controllers and mouse-controlled interfaces. The majority of the ADMIs were tangible or physical controllers. Although the haptic modality could potentially play an important role in musical interaction for many user groups, relatively few of the ADMIs (15.6%) incorporated vibrotactile feedback. Aspects judged to be important for successful ADMI design were instrument adaptability and customization, user participation, iterative prototyping, and interdisciplinary development teams.

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
musical instruments, accessibility, digital musical instruments, accessible digital musical instruments, multimodal feedback, assistive music technology
National Category
Media and Communication Technology; Human Computer Interaction
Research subject
Media Technology; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-260773 (URN); 10.3390/mti3030057 (DOI)
Available from: 2019-09-30. Created: 2019-09-30. Last updated: 2019-12-11. Bibliographically approved.
2. Sound Forest - Evaluation of an Accessible Multisensory Music Installation
2019 (English). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, 2019, p. 1-12, article id 677. Conference paper, published paper (Refereed).
Abstract [en]

Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on an evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible to everyone, regardless of age or abilities, we invited children, teens and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.

Place, publisher, year, edition, pages
ACM, 2019
Series
CHI ’19
Keywords
accessible digital musical instruments, evaluation of music systems, haptic feedback, music installations, music production
National Category
Media and Communication Technology; Interaction Technologies; Media Engineering; Human Computer Interaction; Music
Research subject
Media Technology; Human-computer Interaction; Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-250780 (URN); 10.1145/3290605.3300907 (DOI); 000474467908056
Conference
CHI Conference on Human Factors in Computing Systems
Projects
Ljudskogen
Note

QC 20190625

Available from: 2019-05-06. Created: 2019-05-06. Last updated: 2019-12-11. Bibliographically approved.
3. Sonification of women in sound and music computing - The sound of female authorship in ICMC, SMC and NIME proceedings
2017 (English). In: 2017 ICMC/EMW - 43rd International Computer Music Conference and the 6th International Electronic Music Week, Shanghai Conservatory of Music, 2017, p. 233-238. Conference paper, published paper (Refereed).
Abstract [en]

The primary goal of this study was to approximate the number of female authors in the academic field of Sound and Music Computing. This was done through gender prediction from author names in proceedings from the ICMC, SMC and NIME conferences, and by sonifying the results. Although gender classification by first name can only serve as an estimation of the actual number of female authors in the field, some conclusions could be drawn. The total percentage of author names classified as female was 10.3% for ICMC, 11.9% for SMC and 11.9% for NIME. When merging data from all three conferences for the years 2004-2016, the yearly percentage of names classified as female ranged from 9.5% to 14.3%. Changes in the ratio of female to male authors over time were further illustrated by sonifications, allowing the reader to explore, compare and reflect upon the results by listening to sonic representations of the data. The conclusion that can be drawn from this study is that the field of Sound and Music Computing is still far from gender-balanced.
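The figures above boil down to a simple pipeline: classify each author's first name, compute the share of names classified as female per year, and map that share to sound. The minimal Python sketch below illustrates that pipeline under assumptions of my own (it is not the authors' code): the toy data, the linear pitch mapping and the frequency range are hypothetical, and a real first-name gender classifier is replaced by pre-labelled names.

# Minimal sketch (not from the paper): given per-year author names already
# labelled by a first-name gender classifier, compute the share of names
# classified as female and map it to a pitch sequence -- one simple form of
# parameter-mapping sonification. Data and mapping range are assumptions.

def female_share(labels):
    """Fraction of labels classified as 'F' (ignores unclassified names)."""
    known = [g for g in labels if g in ("F", "M")]
    return sum(g == "F" for g in known) / len(known) if known else 0.0

def share_to_frequency(share, f_min=220.0, f_max=880.0):
    """Linearly map a 0..1 share onto a frequency range (Hz)."""
    return f_min + share * (f_max - f_min)

# Hypothetical toy data: year -> predicted genders of author first names.
per_year = {
    2014: ["M", "M", "F", "M", "M", "M", "F", "M"],
    2015: ["M", "F", "M", "M", "M", "M", "M", "M", "F"],
    2016: ["F", "M", "M", "F", "M", "M", "M"],
}

for year in sorted(per_year):
    share = female_share(per_year[year])
    print(f"{year}: {share:.1%} classified as female -> "
          f"{share_to_frequency(share):.0f} Hz")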

Place, publisher, year, edition, pages
Shanghai Conservatory of Music, 2017
National Category
Gender Studies Musicology
Identifiers
urn:nbn:se:kth:diva-221122 (URN); 2-s2.0-85040054316 (Scopus ID); 9780984527465 (ISBN)
Conference
43rd International Computer Music Conference, ICMC 2017 and the 6th International Electronic Music Week, EMW 2017, Shanghai, China, 15 October 2017 through 20 October 2017
Note

QC 20180115

Available from: 2018-01-15. Created: 2018-01-15. Last updated: 2019-12-11. Bibliographically approved.
4. Interactive sonification of a fluid dance movement: an exploratory study
2019 (English). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 3, p. 181-189. Article in journal (Refereed). Published.
Abstract [en]

In this paper we present three different experiments designed to explore sound properties associated with fluid movement: (1) an experiment in which participants adjusted parameters of a sonification model developed for a fluid dance movement, (2) a vocal sketching experiment in which participants sketched sounds portraying fluid versus nonfluid movements, and (3) a workshop in which participants discussed and selected fluid versus nonfluid sounds. Consistent findings from the three experiments indicated that sounds expressing fluidity generally occupy a lower register and have less high-frequency content, as well as a lower bandwidth, than sounds expressing nonfluidity. The ideal sound to express fluidity is continuous, calm, slow, pitched, and reminiscent of wind, water or an acoustic musical instrument. The ideal sound to express nonfluidity is harsh, non-continuous, abrupt, dissonant, conceptually associated with metal or wood, non-human and robotic. The findings presented in this paper can be used as design guidelines for future applications in which the movement property of fluidity is to be conveyed through sonification.
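As a rough illustration of these guidelines (not code from the study), the Python sketch below renders one "fluid" and one "nonfluid" test sound: a continuous, low-pitched tone with a slow envelope versus abrupt bursts of broadband noise. The sample rate, frequencies, durations and gating interval are assumptions chosen only to make the contrast audible.

# Minimal sketch (my illustration): two test sounds following the reported
# tendencies -- "fluid" = continuous, low-pitched, little high-frequency
# energy; "nonfluid" = abrupt, noisy, brighter. All parameters are assumed.
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def fluid(duration=2.0, f0=110.0):
    """Continuous low-register tone with a slow amplitude swell."""
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    env = np.sin(np.pi * t / duration)  # slow fade in and out
    return env * np.sin(2 * np.pi * f0 * t)

def nonfluid(duration=2.0):
    """Abrupt bursts of broadband noise (harsh, non-continuous)."""
    rng = np.random.default_rng(0)
    sig = rng.uniform(-1, 1, int(SR * duration))
    gate = (np.arange(sig.size) // int(0.05 * SR)) % 2  # 50 ms on/off gating
    return sig * gate

def write_wav(path, signal):
    """Write a mono float signal in [-1, 1] as 16-bit PCM."""
    pcm = (np.clip(signal, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(pcm.tobytes())

write_wav("fluid.wav", fluid())
write_wav("nonfluid.wav", nonfluid())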

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Interactive sonification, Fluid movement, Vocal sketching, Sound and music computing
National Category
Media Engineering; Media and Communication Technology; Human Computer Interaction; Other Natural Sciences; Interaction Technologies
Research subject
Media Technology; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-239168 (URN); 10.1007/s12193-018-0278-y (DOI); 000480549700004; 2-s2.0-85056702595 (Scopus ID)
Projects
DANCE
Funder
EU, Horizon 2020, 645553
Note

QC 20190904

Available from: 2018-11-18. Created: 2018-11-18. Last updated: 2019-12-11. Bibliographically approved.
5. Music Creation by Example
2020 (English). Manuscript (preprint) (Other academic).
Abstract [en]

Short online videos have become the dominant media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and the audio-editing expertise required. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI music engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. The user studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

Keywords
Music Generation, Human-AI Interaction
National Category
Media and Communication Technology; Human Computer Interaction
Research subject
Media Technology; Human-computer Interaction
Identifiers
urn:nbn:se:kth:diva-265519 (URN)
Conference
CHI Conference on Human Factors in Computing Systems
Note

QC 20191212

Available from: 2019-12-12. Created: 2019-12-12. Last updated: 2019-12-12. Bibliographically approved.

Open Access in DiVA

Emma Frid - Diverse Sounds (3044 kB, 104 downloads)
File information
File name: FULLTEXT01.pdf
File size: 3044 kB
Checksum (SHA-512): f1c739dfe26ceeefaa026c3d9a285d07da4f2ce3f143d4620119c9cd72c8b680dc7ba1c162a1bf22668935046e892d0bb663f5e3e4fbd0ce4cff0825d3453f4f
Type: fulltext
Mimetype: application/pdf

