Digitala Vetenskapliga Arkivet

1 - 50 of 57
  • 1.
    Bresin, Roberto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Elblaus, Ludvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Favero, Federico
    KTH, Skolan för arkitektur och samhällsbyggnad (ABE).
    Annersten, Lars
    Musikverket.
    Berner, David
    Musikverket.
    Morreale, Fabio
    Queen Mary University of London.
    Sound Forest/Ljudskogen: A Large-Scale String-Based Interactive Musical Instrument (2016). In: Sound and Music Computing 2016, SMC Sound & Music Computing Network, 2016, pp. 79-84. Conference paper (Refereed)
    Abstract [en]

     In this paper we present a string-based, interactive, largescale installation for a new museum dedicated to performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room that measures 10x5 meters. We aim to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. Finally, an important aspect in the design of the installation is that the DMI should be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.

    Full text (pdf)
  • 2.
    Bresin, Roberto
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. IRCAM STMS Lab.
    Latupeirissa, Adrian Benigno
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Panariello, Claudio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound (2021). Conference paper (Refereed)
    Abstract [en]

    The aim of the SONAO project is to establish new methods based on sonification of expressive movements for achieving a robust interaction between users and humanoid robots. We want to achieve this by combining competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We want to engineer sound models for implementing effective mappings between stylized body movements and sound parameters that will enable an agent to express high-level body motion qualities through sound. These mappings are paramount for supporting feedback to and understanding of robot body motion. The project will result in the development of new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work is part of the growing research field known as data sonification, in which we combine methods and knowledge from the fields of interactive sonification, embodied cognition, multisensory perception, and non-verbal and gestural communication in robots.
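
    The abstract above describes engineering mappings from high-level body-motion qualities to sound parameters. As a purely illustrative sketch of such a parameter mapping (not the SONAO project's actual mapping; the quality name, parameter ranges and scaling below are assumptions), a normalized smoothness value can be mapped to synthesis parameters as follows:

      # Illustrative body-motion-quality -> sound-parameter mapping (Python).
      # The ranges and scaling are assumptions chosen only to show the technique.
      def map_smoothness_to_sound(smoothness):
          """Map a normalized smoothness value in [0, 1] to two synthesis parameters."""
          s = min(max(smoothness, 0.0), 1.0)            # clamp to [0, 1]
          # Smoother motion -> darker timbre: low-pass cutoff interpolated on a log
          # scale from 8 kHz (s = 0) down to 200 Hz (s = 1).
          cutoff_hz = 8000.0 * (200.0 / 8000.0) ** s
          # Smoother motion -> softer onset: attack time from 10 ms up to 500 ms.
          attack_s = 0.01 + 0.49 * s
          return {"lowpass_cutoff_hz": cutoff_hz, "attack_s": attack_s}

      if __name__ == "__main__":
          for s in (0.0, 0.5, 1.0):
              print(s, map_smoothness_to_sound(s))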

  • 3.
    Bresin, Roberto
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Mancini, Maurizio
    University College Cork National University of Ireland: Cork, IE.
    Elblaus, Ludvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Sonification of the self vs. sonification of the other: Differences in the sonification of performed vs. observed simple hand movements (2020). In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 144. Journal article (Refereed)
    Abstract [en]

    Existing works on interactive sonification of movements, i.e., the translation of human movement qualities from the physical to the auditory domain, usually adopt a predetermined approach: the way in which movement features modulate the characteristics of sound is fixed. In our work we want to go one step further and demonstrate that the user role can influence the tuning of the mapping between movement cues and sound parameters. Here, we aim to verify if and how the mapping changes when the user is either the performer or the observer of a series of body movements (tracing a square or an infinite shape with the hand in the air). We asked participants to tune movement sonification while they were directly performing the sonified movement vs. while watching another person performing the movement and listening to its sonification. Results show that the tuning of the sonification chosen by participants is influenced by three variables: role of the user (performer vs observer), movement quality (the amount of Smoothness and Directness in the movement), and physical parameters of the movements (velocity and acceleration). Performers focused more on the quality of their movement, while observers focused more on the sonic rendering, making it more expressive and more connected to low-level physical features.

  • 4.
    Falkenberg, Kjetil
    et al.
    Kungl. Musikhögskolan. KTH.
    Latupeirissa, Adrian Benigno
    KTH.
    Frid, Emma
    KTH.
    Lindetorp, Hans
    Kungl. Musikhögskolan, Institutionen för musik- och medieproduktion. KTH.
    Unproved methods from the frontier in the course curriculum: A bidirectional and mutually beneficial research challenge (2020). In: INTED2020 Proceedings, IATED, 2020, pp. 7033-7038. Conference paper (Refereed)
    Abstract [en]

    In this paper, we report the experiences of students and teachers in a master course in Musical Communication and Music Technology at KTH Royal Institute of Technology. The students were exposed to vocal sketching [1], a novel sound design method, both as their course material and for the examination. The results in terms of learning outcome and course experience were confirmed and more than convincing, while the results in terms of validating the efficacy of the method were meagre. As part of our research, we designed an experiment where the students first interviewed preschool children who were asked to describe a fantasy musical instrument and then built it. The course schedule included lectures on voice sketching, sound synthesis, sound quality, new musical instruments, parameter mapping, and music programming. The project work and idea were presented during the first lecture, eight weeks before meeting the children. The interview took place in a workshop at the Swedish Museum for Performing Arts, which had an exhibition of new musical instruments. Student/child pairs visited the exhibition in order to 1) familiarize themselves, 2) establish communication, and 3) get a common point of reference in terms of the exhibited instruments. After this process, the pairs completed an interview session inspired by [2]. The parents and teacher could join in if desired. The students got two weeks to build the instruments and present these at the museum. The purpose was not to evaluate the instruments, but to explore the vocal sketch method. The design and building phase was a prototyping task which the students were comfortable with. All design decisions needed to be set in relation to the course literature. All the presented projects followed a scenario- and contextual-inspired design approach [3] where a target solution needed to be established quickly, grounded on a basic understanding of the agent (the child), its goals, and its presumed actions [4], and where the child mainly acted as informant [5]. While all the children could voice sketch, few actually did so in the interview. Despite this, the finished instruments matched the expectations of the children, and the course work satisfied the intended learning outcomes. As a research outcome, we suggest that future studies should include training vocal sketch techniques to produce suitable sounds. As for the pedagogical outcome, we are convinced from both the high quality of the works and the unusually positive course evaluations compared to previous years that the unproved research method was appropriate as course material. The bidirectional challenge in the research, where students know that the method is experimental, is hypothesized to further boost student motivation.

  • 5.
    Falkenberg, Kjetil
    et al.
    Kungl. Musikhögskolan. KTH.
    Lindetorp, Hans
    Kungl. Musikhögskolan, Institutionen för musik- och medieproduktion. KTH.
    Frid, Emma
    KTH.
    Creating digital musical instruments with and for children: Including vocal sketching as a method for engaging in codesign (2020). In: Human Technology, E-ISSN 1795-6889. Journal article (Refereed)
    Abstract [en]

    A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. The children described musical instruments to the students using vocal sketching and other modalities (verbal, drawing, gestures). The resulting instruments built by the students were showcased at the Swedish Museum of Performing Arts in Stockholm. Although all the children tried vocal sketching during preparatory tasks, few employed the method during the workshop. However, the instruments seemed to meet the children’s expectations. Consequently, even though the vocal sketching method alone provided few design directives in the given context, we suggest that vocal sketching, under favorable circumstances, can be an engaging component that complements other modalities in codesign involving children.

  • 6.
    Falkenberg, Kjetil
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Lindetorp, Hans
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Latupeirissa, Adrian Benigno
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Creating digital musical instruments with and for children: Including vocal sketching as a method for engaging in codesign (2020). In: Human Technology, E-ISSN 1795-6889, Vol. 16, no. 3, pp. 348-371. Journal article (Refereed)
    Abstract [en]

    A class of master of science students and a group of preschool children codesigned new digital musical instruments based on workshop interviews involving vocal sketching, a method for imitating and portraying sounds. The aim of the study was to explore how the students and children would approach vocal sketching as one of several design methods. The children described musical instruments to the students using vocal sketching and other modalities (verbal, drawing, gestures). The resulting instruments built by the students were showcased at the Swedish Museum of Performing Arts in Stockholm. Although all the children tried vocal sketching during preparatory tasks, few employed the method during the workshop. However, the instruments seemed to meet the children’s expectations. Consequently, even though the vocal sketching method alone provided few design directives in the given context, we suggest that vocal sketching, under favorable circumstances, can be an engaging component that complements other modalities in codesign involving children.

  • 7.
    Falkenberg, Kjetil
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Ljungdahl Eriksson, Martin
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Otterbring, Tobias
    Daunfeldt, Sven-Olov
    Auditory notification of customer actions in a virtual retail environment: Sound design, awareness and attention (2021). In: Proceedings of International Conference on Auditory Displays ICAD 2021, 2021. Conference paper (Refereed)
  • 8.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Accessible Digital Musical Instruments: A Review of Musical Interfaces in Inclusive Music Practice (2019). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 3, no. 3, article id 57. Journal article (Refereed)
    Abstract [en]

    Current advancements in music technology enable the creation of customized Digital Musical Instruments (DMIs). This paper presents a systematic review of Accessible Digital Musical Instruments (ADMIs) in inclusive music practice. The history of research concerned with facilitating inclusion in music-making is outlined, and the current state of developments and trends in the field is discussed. Although the use of music technology in music therapy contexts has attracted more attention in recent years, the topic has been relatively unexplored in Computer Music literature. This review investigates a total of 113 publications focusing on ADMIs. Based on the 83 instruments in this dataset, ten control interface types were identified: tangible controllers, touchless controllers, Brain–Computer Music Interfaces (BCMIs), adapted instruments, wearable controllers or prosthetic devices, mouth-operated controllers, audio controllers, gaze controllers, touchscreen controllers and mouse-controlled interfaces. The majority of the ADMIs were tangible or physical controllers. Although the haptic modality could potentially play an important role in musical interaction for many user groups, relatively few of the ADMIs (15.6%) incorporated vibrotactile feedback. Aspects judged to be important for successful ADMI design were instrument adaptability and customization, user participation, iterative prototyping, and interdisciplinary development teams.

    Full text (csv): Dataset
  • 9.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Accessible Digital Musical Instruments: A Survey of Inclusive Instruments Presented at the NIME, SMC and ICMC Conferences (2018). In: Proceedings of the International Computer Music Conference 2018: Daegu, South Korea / [ed] Tae Hong Park, Doo-Jin Ahn, San Francisco: The International Computer Music Association, 2018, pp. 53-59. Conference paper (Refereed)
    Abstract [en]

    This paper describes a survey of accessible Digital Musical Instruments (ADMIs) presented at the NIME, SMC and ICMC conferences. It outlines the history of research concerned with facilitating inclusion in music making and discusses advances, current state of developments and trends in the field. Based on a systematic analysis of DMIs presented at the three conferences, seven control interface types could be identified: tangible, nontangible, audio, touch-screen, gaze, BCMIs and adapted instruments. Most of the ADMIs were tangible interfaces or physical controllers. Many of the instruments were designed for persons with physical disabilities or children with health conditions or impairments. Little attention was paid to DMIs for blind users. Although the haptic modality could play an important role in musical interaction in this context, relatively few of the ADMIs (26.7%) incorporated vibrotactile feedback. A discussion on future directions for inclusive design of DMIs is presented.

    Full text (csv): dataset
  • 10.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Diverse Sounds: Enabling Inclusive Sonic Interaction (2019). Doctoral thesis, comprising articles (Other academic)
    Abstract [en]

    This compilation thesis collects a series of publications on designing sonic interactions for diversity and inclusion. The presented papers focus on case studies in which musical interfaces were either developed or reviewed. While the described studies are substantially different in their nature, they all contribute to the thesis by providing reflections on how musical interfaces could be designed to enable inclusion rather than exclusion. Building on this work, I introduce two terms: inclusive sonic interaction design and Accessible Digital Musical Instruments (ADMIs). I also define nine properties to consider in the design and evaluation of ADMIs: expressiveness, playability, longevity, customizability, pleasure, sonic quality, robustness, multimodality and causality. Inspired by the experience of playing an acoustic instrument, I propose to enable musical inclusion for under-represented groups (for example persons with visual- and hearing-impairments, as well as elderly people) through the design of Digital Musical Instruments (DMIs) in the form of rich multisensory experiences allowing for multiple modes of interaction. At the same time, it is important to enable customization to fit user needs, both in terms of gestural control and provided sonic output. I conclude that the computer music community has the potential to actively engage more people in music-making activities. In addition, I stress the importance of identifying challenges that people face in these contexts, thereby enabling initiatives towards changing practices.

    Full text (pdf): Emma Frid - Diverse Sounds
  • 11.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Erratum: Accessible digital musical instruments—a review of musical interfaces in inclusive music practice (Multimodal Technologies and Interaction, (2019) 3, 57, 10.3390/mti3030057) (2020). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 4, no. 3, pp. 1-2, article id 34. Journal article (Refereed)
    Abstract [en]

    Unfortunately, some errors and imprecise descriptions were made in the final proofreading phase, and the author, therefore, wishes to make the following corrections to this paper [1]: In the Abstract, it is erroneously stated that the percentage of ADMIs that incorporated vibrotactile feedback was 15.6%. The correct percentage should be 14.5%. The same error is replicated in Section 4.4. Output Modalities, on page 11 (13 ADMIs should be 12 ADMIs), and in Section 6. Conclusions, on page 15. The author would like to apologize for any inconvenience caused by these changes. The correct percentage further supports the claim that relatively few of the ADMIs incorporated vibrotactile feedback. Based on guidelines for writing for accessibility [2], the author would like to refrain from using the term “elderly” and instead use the term “older adults” in Sections 4.5 Target User Group (page 11), 5. Discussion (page 13), and Conclusions (page 15). Minor formatting errors were identified in Figure 4, on page 9, where the terms “touchscreen” and “touchless” were mistakenly spelled “touch-screen” and “touch-less”. In Table 2, “Book Sections” should be “Book Chapters”. There were also two errors in Table 3, where “Eyes-web” should be spelled “EyesWeb” and the word “sensor” was misspelled as “senor”. The figure and table were updated to account for these mistakes.

  • 12.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. STMS Science and Technology of Music and Sound, IRCAM Institute for Research and Coordination in Acoustics/Music, Paris, France.
    Musical Robots: Overview and Methods for Evaluation (2023). In: Sound and Robotics: Speech, Non-Verbal Audio and Robotic Musicianship / [ed] Richard Savery, Boca Raton, FL, USA: Informa UK Limited, 2023, pp. 1-42. Book chapter (Refereed)
    Abstract [en]

    Musical robots are complex systems that require the integration of several different functions to successfully operate. These processes range from sound analysis and music representation to mapping and modeling of musical expression. Recent advancements in Computational Creativity (CC) and Artificial Intelligence (AI) have added yet another level of complexity to these settings, with aspects of Human–AI Interaction (HAI) becoming increasingly important. The rise of intelligent music systems raises questions not only about the evaluation of Human-Robot Interaction (HRI) in robot musicianship but also about the quality of the generated musical output. The topic of evaluation has been extensively discussed and debated in the fields of Human–Computer Interaction (HCI) and New Interfaces for Musical Expression (NIME) throughout the years. However, interactions with robots often have a strong social or emotional component, and the experience of interacting with a robot is therefore somewhat different from that of interacting with other technologies. Since musical robots produce creative output, topics such as creative agency and what is meant by the term "success" when interacting with an intelligent music system should also be considered. The evaluation of musical robots thus expands beyond traditional evaluation concepts such as usability and user experience. To explore which evaluation methodologies might be appropriate for musical robots, this chapter first presents a brief introduction to the field of research dedicated to robotic musicianship, followed by an overview of evaluation methods used in the neighboring research fields of HCI, HRI, HAI, NIME, and CC. The chapter concludes with a review of evaluation methods used in robot musicianship literature and a discussion of prospects for future research.

  • 13.
    Frid, Emma
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Sonification of women in sound and music computing - The sound of female authorship in ICMC, SMC and NIME proceedings (2017). In: 2017 ICMC/EMW - 43rd International Computer Music Conference and the 6th International Electronic Music Week, Shanghai Conservatory of Music, 2017, pp. 233-238. Conference paper (Refereed)
    Abstract [en]

    The primary goal of this study was to approximate the number of female authors in the academic field of Sound and Music Computing. This was done through gender prediction from author names for proceedings from the ICMC, SMC and NIME conferences, and by sonifying these results. Although gender classification by first name can only serve as an estimation of the actual number of female authors in the field, some conclusions could be drawn. The total percentage of author names classified as female was 10.3% for ICMC, 11.9% for SMC and 11.9% for NIME. When merging data from all three conferences for years 2004-2016, it could be concluded that names classified as female ranged from 9.5 to 14.3%. Changes in the ratio of female vs. male authors over time were further illustrated by sonifications, allowing the reader to explore, compare and reflect upon the results by listening to sonic representations of the data. The conclusion that can be drawn from this study is that the field of Sound and Music Computing is still far from being gender-balanced.
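
    The estimate above rests on classifying author first names and computing the share labelled female. A minimal sketch of that kind of tally is shown below; the tiny name-to-gender lookup table is a hypothetical placeholder for whatever name database or classifier the study actually used.

      # Toy estimate of the percentage of author names classified as female (Python).
      # The lookup below is a hypothetical placeholder, not the resource used in the paper.
      NAME_GENDER = {"emma": "f", "anna": "f", "maria": "f",
                     "roberto": "m", "ludvig": "m", "hans": "m"}

      def female_share(first_names):
          """Percentage of names classified as female, skipping names the lookup cannot label."""
          labels = [NAME_GENDER.get(name.lower()) for name in first_names]
          known = [g for g in labels if g is not None]
          return 100.0 * known.count("f") / len(known) if known else 0.0

      if __name__ == "__main__":
          authors = ["Emma", "Roberto", "Ludvig", "Anna", "Unnamed"]
          print(f"{female_share(authors):.1f}% of classifiable names labelled female")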

  • 14.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. RepMus - Représentations Musicales, STMS - Sciences et Technologies de la Musique et du Son, IRCAM - Institut de Recherche et Coordination Acoustique/Musique.
    The Gender Gap and the Computer Music Narrative: On the Under-Representation of Women at Computer Music Conferences (2021). In: Array - Journal of the International Computer Music Association, E-ISSN 2590-0056, Vol. 1, pp. 43-49. Journal article (Refereed)
    Full text (pdf)
  • 15.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Perceptual Evaluation of Blended Sonification of Mechanical Robot Sounds Produced by Emotionally Expressive Gestures: Augmenting Consequential Sounds to Improve Non-verbal Robot Communication (2021). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Journal article (Refereed)
    Abstract [en]

    This paper presents two experiments focusing on perception of mechanical sounds produced by expressive robot movement and blended sonifications thereof. In the first experiment, 31 participants evaluated emotions conveyed by robot sounds through free-form text descriptions. The sounds were inherently produced by the movements of a NAO robot and were not specifically designed for communicative purposes. Results suggested no strong coupling between the emotional expression of gestures and how sounds inherent to these movements were perceived by listeners; joyful gestures did not necessarily result in joyful sounds. A word that reoccurred in text descriptions of all sounds, regardless of the nature of the expressive gesture, was “stress”. In the second experiment, blended sonification was used to enhance and further clarify the emotional expression of the robot sounds evaluated in the first experiment. Analysis of quantitative ratings of 30 participants revealed that the blended sonification successfully contributed to enhancement of the emotional message for sound models designed to convey frustration and joy. Our findings suggest that blended sonification guided by perceptual research on emotion in speech and music can successfully improve communication of emotions through robot sounds in auditory-only conditions.

  • 16.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Alborno, Paolo
    Elblaus, Ludvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Interactive Sonification of Spontaneous Movement of Children: Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound (2016). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 10, article id 521. Journal article (Refereed)
    Abstract [en]

    In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3-4 children were simultaneously tracked and sonified, producing 3-4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children’s spontaneous movement in terms of energy-, smoothness- and directness index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g. expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g. energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.
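
    The analysis above summarizes movement in terms of energy, smoothness and directness indices computed from tracked head positions. The sketch below shows common formulations of two such descriptors from a 2D trajectory (directness as straight-line distance over path length, energy as mean squared speed); the exact definitions used in the paper may differ.

      # Illustrative movement descriptors from a 2D position trajectory (Python).
      # Common textbook formulations, not necessarily the exact indices used in the paper.
      import numpy as np

      def directness_index(xy):
          """Straight-line distance divided by travelled path length (1.0 = perfectly direct)."""
          straight = np.linalg.norm(xy[-1] - xy[0])
          path = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
          return float(straight / path) if path > 0 else 0.0

      def energy_index(xy, fs):
          """Mean squared speed of the trajectory, sampled at fs Hz."""
          v = np.diff(xy, axis=0) * fs              # finite-difference velocity
          return float(np.mean(np.sum(v ** 2, axis=1)))

      if __name__ == "__main__":
          t = np.linspace(0, 2 * np.pi, 200)
          wiggly = np.column_stack([t, 0.3 * np.sin(5 * t)])   # indirect, oscillating path
          print(directness_index(wiggly), energy_index(wiggly, fs=100.0))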

  • 17.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Alexanderson, Simon
    KTH, Skolan för elektroteknik och datavetenskap (EECS).
    Perception of Mechanical Sounds Inherent to Expressive Gestures of a NAO Robot - Implications for Movement Sonification of Humanoids (2018). In: Proceedings of the 15th Sound and Music Computing Conference / [ed] Anastasia Georgaki and Areti Andreopoulou, Limassol, Cyprus, 2018. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a pilot study carried out within the project SONAO. The SONAO project aims to compensate for limitations in robot communicative channels with an increased clarity of Non-Verbal Communication (NVC) through expressive gestures and non-verbal sounds. More specifically, the purpose of the project is to use movement sonification of expressive robot gestures to improve Human-Robot Interaction (HRI). The pilot study described in this paper focuses on mechanical robot sounds, i.e. sounds that have not been specifically designed for HRI but are inherent to robot movement. Results indicated a low correspondence between perceptual ratings of mechanical robot sounds and emotions communicated through gestures. In general, the mechanical sounds themselves appeared not to carry much emotional information compared to video stimuli of expressive gestures. However, some mechanical sounds did communicate certain emotions, e.g. frustration. In general, the sounds appeared to communicate arousal more effectively than valence. We discuss potential issues and possibilities for the sonification of expressive robot gestures and the role of mechanical sounds in such a context. Emphasis is put on the need to mask or alter sounds inherent to robot movement, using for example blended sonification.

  • 18.
    Frid, Emma
    et al.
    Sound and Music Computing, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bresin, Roberto
    Sound and Music Computing, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Moll, Jonas
    Interaction Design, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Sallnäs Pysander, Eva-Lotta
    Interaction Design, CSC, KTH Royal Institute of Technology, Stockholm, Sweden.
    Sonification of haptic interaction in a virtual scene (2014). In: SMC Sweden 2014 Sound and Music Computing: Bridging science, art, and industry / [ed] Roberto Bresin, Stockholm: KTH Royal Institute of Technology, 2014, pp. 14-16. Conference paper (Refereed)
    Abstract [en]

    This paper presents a brief overview of work-in-progress for a study on correlations between visual and haptic spatial attention in a multimodal single-user application comparing different modalities. The aim is to gain insight into how auditory and haptic versus visual representations of temporal events may affect task performance and spatial attention. For this purpose, a 3D application involving one haptic model and two different sound models for interactive sonification is developed.

    Full text (pdf)
  • 19.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Moll, Jonas
    Sallnäs Pysander, Eva-Lotta
    Sonification of haptic interaction in a virtual scene (2014). In: Sound and Music Computing Sweden 2014, Stockholm, December 4-5, 2014 / [ed] Roberto Bresin, 2014, pp. 14-16. Conference paper (Refereed)
    Abstract [en]

    This paper presents a brief overview of work-in-progress for a study on correlations between visual and haptic spatial attention in a multimodal single-user application comparing different modalities. The aim is to gain insight into how auditory and haptic versus visual representations of temporal events may affect task performance and spatial attention. For this purpose, a 3D application involving one haptic model and two different sound models for interactive sonification is developed.

    Full text (pdf)
  • 20.
    Frid, Emma
    et al.
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design.
    Bresin, Roberto
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design.
    Sallnäs Pysander, Eva-Lotta
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design.
    Moll, Jonas
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    An exploratory study on the effect of auditory feedback on gaze behavior in a virtual throwing task with and without haptic feedback (2017). In: Proc. 14th Sound and Music Computing Conference, Finland: Aalto University, 2017, pp. 242-249. Conference paper (Refereed)
  • 21.
    Frid, Emma
    et al.
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Bresin, Roberto
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Sallnäs Pysander, Eva-Lotta
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Moll, Jonas
    Uppsala University, Uppsala, Sweden.
    An exploratory study on the effect of auditory feedback on gaze behavior in a virtual throwing task with and without haptic feedback (2017). In: Proceedings of the 14th Sound and Music Computing Conference 2017 / [ed] Tapio Lokki, Jukka Pätynen, Vesa Välimäki, Aalto University, 2017, pp. 242-249. Conference paper (Refereed)
    Abstract [en]

    This paper presents findings from an exploratory study on the effect of auditory feedback on gaze behavior. A total of 20 participants took part in an experiment where the task was to throw a virtual ball into a goal in different conditions: visual only, audiovisual, visuohaptic and audio-visuohaptic. Two different sound models were compared in the audio conditions. Analysis of eye tracking metrics indicated large inter-subject variability; difference between subjects was greater than difference between feedback conditions. No significant effect of condition could be observed, but clusters of similar behaviors were identified. Some of the participants’ gaze behaviors appeared to have been affected by the presence of auditory feedback, but the effect of sound model was not consistent across subjects. We discuss individual behaviors and illustrate gaze behavior through sonification of gaze trajectories. Findings from this study raise intriguing questions that motivate future large-scale studies on the effect of auditory feedback on gaze behavior.

  • 22.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Moll, Jonas
    Uppsala University.
    An Exploratory Study On The Effect Of Auditory Feedback On Gaze Behavior In a Virtual Throwing Task With and Without Haptic Feedback (2017). In: Proceedings of the 14th Sound and Music Computing Conference / [ed] Tapio Lokki, Jukka Pätynen, and Vesa Välimäki, Espoo, Finland, 2017, pp. 242-249. Conference paper (Refereed)
    Abstract [en]

    This paper presents findings from an exploratory study on the effect of auditory feedback on gaze behavior. A total of 20 participants took part in an experiment where the task was to throw a virtual ball into a goal in different conditions: visual only, audiovisual, visuohaptic and audio-visuohaptic. Two different sound models were compared in the audio conditions. Analysis of eye tracking metrics indicated large inter-subject variability; difference between subjects was greater than difference between feedback conditions. No significant effect of condition could be observed, but clusters of similar behaviors were identified. Some of the participants’ gaze behaviors appeared to have been affected by the presence of auditory feedback, but the effect of sound model was not consistent across subjects. We discuss individual behaviors and illustrate gaze behavior through sonification of gaze trajectories. Findings from this study raise intriguing questions that motivate future large-scale studies on the effect of auditory feedback on gaze behavior.

  • 23.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Elblaus, Ludvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Interactive sonification of a fluid dance movement: an exploratory study (2019). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 3, pp. 181-189. Journal article (Refereed)
    Abstract [en]

    In this paper we present three different experiments designed to explore sound properties associated with fluid movement: (1) an experiment in which participants adjusted parameters of a sonification model developed for a fluid dance movement, (2) a vocal sketching experiment in which participants sketched sounds portraying fluid versus nonfluid movements, and (3) a workshop in which participants discussed and selected fluid versus nonfluid sounds. Consistent findings from the three experiments indicated that sounds expressing fluidity generally occupy a lower register and have less high-frequency content, as well as a lower bandwidth, than sounds expressing nonfluidity. The ideal sound to express fluidity is continuous, calm, slow, pitched, and reminiscent of wind, water or an acoustic musical instrument. The ideal sound to express nonfluidity is harsh, non-continuous, abrupt, dissonant, conceptually associated with metal or wood, unhuman and robotic. Findings presented in this paper can be used as design guidelines for future applications in which the movement property fluidity is to be conveyed through sonification.

  • 24.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Elblaus, Ludvig
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Sonification of fluidity - An exploration of perceptual connotations of a particular movement feature (2016). In: Proceedings of ISon 2016, 5th Interactive Sonification Workshop, Bielefeld, Germany, 2016, pp. 11-17. Conference paper (Refereed)
    Abstract [en]

    In this study we conducted two experiments in order to investigate potential strategies for sonification of the expressive movement quality “fluidity” in dance: one perceptual rating experiment (1) in which five different sound models were evaluated on their ability to express fluidity, and one interactive experiment (2) in which participants adjusted parameters for the most fluid sound model in (1) and performed vocal sketching to two video recordings of contemporary dance. Sounds generated in the fluid condition occupied a low register and had darker, more muffled, timbres compared to the non-fluid condition, in which sounds were characterized by a higher spectral centroid and contained more noise. These results were further supported by qualitative data from interviews. The participants conceptualized fluidity as a property related to water, pitched sounds, wind, and continuous flow; non-fluidity had connotations of friction, struggle and effort. The biggest conceptual distinction between fluidity and non-fluidity was the dichotomy of “nature” and “technology”, “natural” and “unnatural”, or even “human” and “unhuman”. We suggest that these distinct connotations should be taken into account in future research focusing on the fluidity quality and its corresponding sonification.
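
    Both this study and the related journal article above characterize fluid versus non-fluid sounds partly through spectral centroid and high-frequency content. For reference, the sketch below computes the standard magnitude-weighted spectral centroid with NumPy; it is a generic descriptor implementation, not code from the study.

      # Standard spectral centroid (magnitude-weighted mean frequency) of a mono signal (Python).
      # Generic descriptor implementation for illustration only.
      import numpy as np

      def spectral_centroid(x, fs):
          """Return the spectral centroid of signal x (sampled at fs Hz) in Hz."""
          mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          total = np.sum(mag)
          return float(np.sum(freqs * mag) / total) if total > 0 else 0.0

      if __name__ == "__main__":
          fs = 16000
          t = np.arange(fs) / fs
          dark = np.sin(2 * np.pi * 220 * t)                # low sine tone, darker timbre
          bright = np.sign(np.sin(2 * np.pi * 220 * t))     # harmonically rich square wave
          print(spectral_centroid(dark, fs), spectral_centroid(bright, fs))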

  • 25.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Falkenberg, Kjetil
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Designing and reporting research on sound design and music for health: Methods and frameworks for impact (2021). In: Doing Research in Sound Design / [ed] Michael Filimowicz, Focal Press, 2021, pp. 125-150. Book chapter (Refereed)
    Abstract [en]

    This chapter presents key methodological aspects to consider for researchers in the fields of sound design and music computing when evaluating and making strategic choices for conducting research targeting health, accessibility and disability. We present practical suggestions for how to effectively increase the impact in the research community based on existing methods commonly used in evidence-based research. Although many of the described models, frameworks and methods are not novel, they have so far only been extensively applied in music therapy studies and music medicine interventions, but not in sound design research nor music computing. The frameworks presented here are gathered from, primarily, practices concerning systematic reviews. We conclude with a discussion about the current state of the field and provide examples of how proposed frameworks and guidelines can be used when reporting results from quantitative research studies to systematic reviews.

  • 26.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. Inst Rech & Coordinat Acoust Mus IRCAM, Sci & Technol Mus & Son STMS, UMR9912, Paris, France.
    Falkenberg, Kjetil
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Agres, Kat
    Natl Univ Singapore, Ctr Mus & Hlth, Yong Siew Toh Conservatory Mus, Singapore, Singapore.
    Lucas, Alex
    Queens Univ Belfast, Son Arts Res Ctr, Belfast, North Ireland.
    Editorial: New advances and novel applications of music technologies for health, well-being, and inclusion (2024). In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 6, article id 1358454. Journal article (Refereed)
  • 27.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Giordano, Marcello
    McGill University.
    Schumacher, Marlon M.
    McGill University.
    Wanderley, Marcelo M.
    McGill University.
    Perceptual Characterization of a Tactile Display for a Live-Electronics Notification System (2014). In: Proceedings of the ICMC|SMC|2014 Conference, National and Kapodistrian University of Athens, 2014. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a study we conducted to assess physical and perceptual properties of a tactile display for a tactile notification system within the CIRMMT Live Electronics Framework (CLEF), a Max-based modular environment for composition and performance of live electronic music. Our tactile display is composed of two rotating eccentric mass actuators driven by a PWM signal generated from an Arduino microcontroller. We conducted physical measurements using an accelerometer and two user-based studies in order to evaluate: vibrotactile absolute perception threshold, differential threshold and vibration spectral peaks. Results, obtained through the use of a logit regression model, provide us with precise design guidelines. These guidelines will enable us to ensure robust perceptual discrimination between vibrotactile stimuli at different intensities. Along with other characterizations presented in this study, these guidelines will allow us to better design tactile cues for our notification system for live-electronics performance.
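
    The abstract mentions estimating the vibrotactile absolute perception threshold with a logit regression model. A minimal sketch of that kind of analysis is shown below, fitting a logistic curve to binary detection responses over stimulus intensity and reading off the 50% point; the data and intensity scale are invented for illustration, and the study's actual analysis details may differ.

      # Estimating a detection threshold from binary responses with a logistic model (Python).
      # The data are fabricated for illustration only.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Stimulus intensity (e.g. a normalized drive level) and detected (1) / not detected (0).
      intensity = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]).reshape(-1, 1)
      detected = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

      # Weak regularization so the fit approximates a plain logit model.
      model = LogisticRegression(C=1e6, max_iter=1000).fit(intensity, detected)

      # P(detect) = 0.5 where intercept + coef * x = 0  =>  x = -intercept / coef
      threshold = -model.intercept_[0] / model.coef_[0][0]
      print(f"Estimated 50% detection threshold: {threshold:.2f}")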

    Full text (pdf)
  • 28.
    Frid, Emma
    et al.
    Gomes, Celso
    Jin, Zeyu
    Music Creation by Example (2020). Manuscript (preprint) (Other academic)
    Abstract [en]

    Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI music engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

  • 29.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. Sciences et Technologies de la Musique et du Son Laboratoire, STMS, CNRS, Ircam, Sorbonne Université, Ministère de la Culture, Paris, France.
    Ilsar, Alon
    Reimagining (Accessible) Digital Musical Instruments: A Survey on Electronic Music-Making Tools (2021). In: Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) 2021, 2021. Conference paper (Refereed)
    Abstract [en]

    This paper discusses findings from a survey on interfaces for making electronic music. We invited electronic music makers of varying experience to reflect on their practice and setup and to imagine and describe their ideal interface for music-making. We also asked them to reflect on the state of gestural controllers, machine learning, and artificial intelligence in their practice. We had 118 people respond to the survey, with 40.68% professional musicians, and 10.17% identifying as living with a disability or access requirement. Results highlight limitations of music-making setups as perceived by electronic music makers, reflections on how imagined novel interfaces could address such limitations, and positive attitudes towards ML and AI in general.

    Download (csv)
    Data
    Download (pdf)
    Survey
  • 30.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. Adobe Research.
    Jin, Zeyu
    Adobe Research, Seattle, WA, USA.
    Gomes, Celso
    Adobe Research, Seattle, WA, USA.
    Music Creation by Example (2020). In: Proceedings CHI '20: CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery (ACM), 2020, pp. 1-13, article id 387. Conference paper (Refereed)
    Abstract [en]

    Short online videos have become the dominating media on social platforms. However, finding suitable music to accompany videos can be a challenging task for some video creators, due to copyright constraints, limitations in search engines, and required audio-editing expertise. One possible solution to these problems is to use AI music generation. In this paper we present a user interface (UI) paradigm that allows users to input a song to an AI engine and then interactively regenerate and mix AI-generated music. To arrive at this design, we conducted user studies with a total of 104 video creators at several stages of our design and development process. User studies supported the effectiveness of our approach and provided valuable insights about human-AI interaction as well as the design and evaluation of mixed-initiative interfaces in creative practice.

    Full text (zip): supplementary material
  • 31.
    Frid, Emma
    et al.
    KTH.
    Lindetorp, Hans
    Kungl. Musikhögskolan, Institutionen för musik- och medieproduktion. KTH.
    Haptic Music: Exploring Whole-Body Vibrations and Tactile Sound for a Multisensory Music Installation (2020). In: Proceedings of the Sound and Music Computing Conference (SMC) 2020, 2020, pp. 68-75. Conference paper (Refereed)
    Abstract [en]

    This paper presents a study on the composition of haptic music for a multisensory installation and how composers could be aided by a preparatory workshop focusing on the perception of whole-body vibrations prior to such a composition task. Five students from a Master’s program in Music Production were asked to create haptic music for the installation Sound Forest. The students were exposed to a set of different sounds producing whole-body vibrations through a wooden platform and asked to describe perceived sensations for respective sound. Results suggested that the workshop helped the composers successfully complete the composition task and that awareness of haptic possibilities of the multisensory installation could be improved through training. Moreover, the sounds used as stimuli provided a relatively wide range of perceived sensations, ranging from pleasant to unpleasant. Considerable intra-subject differences motivate future large-scale studies on the perception of whole-body vibrations in artistic music practice.

  • 32.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Lindetorp, Hans
    KMH Royal College of Music.
    Haptic Music: Exploring Whole-Body Vibrations and Tactile Sound for a Multisensory Music Installation (2020). In: Proceedings of the Sound and Music Computing Conference (SMC) 2020 / [ed] Simone Spagnol and Andrea Valle, Torino, Italy, 2020, pp. 68-75. Conference paper (Refereed)
    Abstract [en]

    This paper presents a study on the composition of haptic music for a multisensory installation and how composers could be aided by a preparatory workshop focusing on the perception of whole-body vibrations prior to such a composition task. Five students from a Master’s program in Music Production were asked to create haptic music for the installation Sound Forest. The students were exposed to a set of different sounds producing whole-body vibrations through a wooden platform and asked to describe perceived sensations for respective sound. Results suggested that the workshop helped the composers successfully complete the composition task and that awareness of haptic possibilities of the multisensory installation could be improved through training. Moreover, the sounds used as stimuli provided a relatively wide range of perceived sensations, ranging from pleasant to unpleasant. Considerable intra-subject differences motivate future large-scale studies on the perception of whole-body vibrations in artistic music practice.

    Full text (pdf)
  • 33.
    Frid, Emma
    et al.
    KTH.
    Lindetorp, Hans
    Kungl. Musikhögskolan, Institutionen för musik- och medieproduktion. KTH.
    Falkenberg, Kjetil
    Kungl. Musikhögskolan. KTH.
    Elblaus, Ludvig
    KTH.
    Bresin, Roberto
    KTH.
    Sound Forest - Evaluation of an Accessible Multisensory Music Installation (2019). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems / [ed] ACM, ACM: ACM Publications, 2019, Vol. 2019, pp. 1-12, article id 677. Conference paper (Refereed)
    Abstract [en]

    Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible for everyone, regardless of age or abilities, we invited children, teens and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.

  • 34.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Lindetorp, Hans
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. KMH Royal College of Music, Stockholm, Sweden.
    Hansen, Kjetil Falkenberg
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Elblaus, Ludvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Sound Forest - Evaluation of an Accessible Multisensory Music Installation (2019). In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, 2019, pp. 1-12, article id 677. Conference paper (Refereed)
    Abstract [en]

    Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on the evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible for everyone, regardless of age or abilities, we invited children, teens and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.

  • 35.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Ljungdahl Eriksson, Martin
    Otterbring, Tobias
    Falkenberg, Kjetil
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Lidbo, Håkan
    Daunfeldt, Sven-Olov
    On Designing Sounds to Reduce Shoplifting in Retail Environments (2021). Conference paper (Refereed)
  • 36.
    Frid, Emma
    et al.
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Moll, Jonas
    Uppsala University, Uppsala, Sweden.
    Bresin, Roberto
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Sallnäs Pysander, Eva-Lotta
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task (2019). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 4, pp. 279-290. Journal article (Refereed)
    Abstract [en]

    In this paper we present a study on the effects of auditory and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness are affected when interactive sonification and/or haptic feedback is used to provide real-time feedback about a movement performed in a 3D virtual environment. Emphasis was put on task-solving efficiency and subjective accounts of participants’ experiences of the multimodal interaction in different conditions. The experiment used a within-subjects design in which the participants solved the same task in different conditions: visual-only, visuohaptic, audiovisual and audiovisuohaptic. Two different sound models were implemented and compared. Significantly lower error rates were obtained in the audiovisuohaptic condition involving movement sonification based on a physical model of friction, compared to the visual-only condition. Moreover, a significant increase in perceived intuitiveness was observed for most conditions involving haptic and/or auditory feedback, compared to the visual-only condition. The main finding of this study is that multimodal feedback can not only improve the perceived intuitiveness of an interface but that certain combinations of haptic feedback and movement sonification can also contribute performance-enhancing properties. This highlights the importance of carefully designing feedback combinations for interactive applications.

  • 37.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Moll, Jonas
    Uppsala University.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task (2018). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 4, pp. 279-290. Journal article (Refereed)
    Abstract [en]

    In this paper we present a study on the effects of auditory and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness are affected when interactive sonification and/or haptic feedback is used to provide real-time feedback about a movement performed in a 3D virtual environment. Emphasis was put on task-solving efficiency and subjective accounts of participants’ experiences of the multimodal interaction in different conditions. The experiment used a within-subjects design in which the participants solved the same task in different conditions: visual-only, visuohaptic, audiovisual and audiovisuohaptic. Two different sound models were implemented and compared. Significantly lower error rates were obtained in the audiovisuohaptic condition involving movement sonification based on a physical model of friction, compared to the visual-only condition. Moreover, a significant increase in perceived intuitiveness was observed for most conditions involving haptic and/or auditory feedback, compared to the visual-only condition. The main finding of this study is that multimodal feedback can not only improve the perceived intuitiveness of an interface but that certain combinations of haptic feedback and movement sonification can also contribute performance-enhancing properties. This highlights the importance of carefully designing feedback combinations for interactive applications.
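    The two records above describe movement sonification based on a physical friction model, rendered together with haptic feedback. The Python sketch below is only a much-simplified stand-in for that idea, not the authors' model: a velocity trajectory drives the loudness and brightness of band-limited noise, rendered block by block. The function name, block size and filter ranges are invented for the illustration.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import butter, lfilter

        SR = 44100
        BLOCK = 512  # samples rendered per velocity value

        def friction_like(velocity, sr=SR, block=BLOCK):
            """Crude velocity-to-sound mapping: faster movement -> louder, brighter noise."""
            out = []
            for v in velocity:
                v = float(np.clip(v, 0.0, 1.0))
                noise = np.random.uniform(-1.0, 1.0, block)
                cutoff = 500.0 + 4000.0 * v            # low-pass cutoff rises with speed
                b, a = butter(2, cutoff / (sr / 2), btype="low")
                out.append(lfilter(b, a, noise) * v)   # per-block filtering keeps the sketch short
            return np.concatenate(out)

        # Example movement: accelerate, decelerate, stop.
        velocity = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, 0, 50), np.zeros(20)])
        audio = np.clip(friction_like(velocity), -1.0, 1.0)
        wavfile.write("friction_sonification.wav", SR, (audio * 32767).astype(np.int16))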

  • 38.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. Sciences et Technologies de la Musique et du Son Laboratoire, STMS, CNRS, Ircam, Sorbonne Université, Ministère de la Culture, Paris, France.
    Orini, Michele
    Martinelli, Giampaolo
    Chew, Elaine
    Mapping Inter-Cardiovascular Time-Frequency Coherence to Harmonic Tension in Sonification of Ensemble Interaction Between a COVID-19 Patient and the Medical Team (2021). In: Proceedings of the International Conference on Auditory Display (ICAD) 2021, 2021. Conference paper (Refereed)
    Abstract [en]

    This paper presents exploratory work on sonic and visual representations of the heartbeats of a COVID-19 patient and a medical team. The aim of this work is to sonify heart signals to reflect how a medical team comes together during COVID-19 treatment, i.e. to highlight other aspects of the COVID-19 pandemic than those usually portrayed through sonification, which often focuses on the number of cases. The proposed framework highlights synergies between sound and heart signals through a mapping between time-frequency coherence (TFC) of heart signals and harmonic tension and dissonance in music. Results from a listening experiment suggested that the proposed mapping between TFC and harmonic tension was successful in terms of communicating low versus high coherence between heart signals, with an overall accuracy of 69%, which was significantly higher than chance. In the light of the performed work, we discuss how links between heart and sound signals can be further explored through sonification to promote understanding of aspects related to cardiovascular health.

    Full text (pdf)
    Frid et al. ICAD 2021
    Full text (csv)
    csv
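    The paper above maps time-frequency coherence (TFC) between heart signals to harmonic tension. The Python sketch below is a schematic reconstruction under stated assumptions only: two synthetic heart-rate series, magnitude-squared coherence averaged over a low-frequency band, and a simple inversion of that value into a "tension" parameter. The authors' TFC estimation and harmonic mapping are more elaborate than this.

        import numpy as np
        from scipy.signal import coherence

        fs = 4.0                       # assumed sampling rate of the heart-rate series (Hz)
        t = np.arange(0, 300, 1 / fs)  # five minutes of data

        # Two synthetic series sharing a slow oscillation plus independent noise.
        shared = np.sin(2 * np.pi * 0.1 * t)
        hr_patient = 70 + 5 * shared + np.random.randn(t.size)
        hr_clinician = 75 + 4 * shared + np.random.randn(t.size)

        # Magnitude-squared coherence, averaged over a low-frequency band of interest.
        f, cxy = coherence(hr_patient, hr_clinician, fs=fs, nperseg=256)
        band = (f >= 0.04) & (f <= 0.4)
        tfc = float(np.mean(cxy[band]))

        # Illustrative mapping: low coherence -> high harmonic tension (0 = relaxed, 1 = tense).
        tension = 1.0 - tfc
        print(f"mean coherence {tfc:.2f} -> harmonic tension {tension:.2f}")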
  • 39.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. STMS IRCAM.
    Panariello, Claudio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Haptic Music Players for Children with Profound and Multiple Learning Disabilities (PMLD): Exploring Different Modes of Interaction for Felt Sound (2022). In: Proceedings of the 24th International Congress on Acoustics (ICA2022): A10-05 Physiological Acoustics - Multi-modal solutions to enhance hearing / [ed] Jeremy Marozeau, Sebastian Merchel, Gyeongju, South Korea: Acoustic Society of Korea, 2022, article id ABS-0021. Conference paper (Refereed)
    Abstract [en]

    This paper presents a six-month exploratory case study on the evaluation of three Haptic Music Players (HMPs) with four pre-verbal children with Profound and Multiple Learning Disabilities (PMLD). The evaluated HMPs were 1) a commercially available haptic pillow, 2) a haptic device embedded in a modified plush-toy backpack, and 3) a custom-built plush toy with a built-in speaker and tactile shaker. We evaluated the HMPs through qualitative interviews with a teacher who served as a proxy for the pre-verbal children participating in the study; the teacher augmented the students’ communication by reporting observations from each test session. The interviews explored functionality, accessibility, and user experience aspects of the respective HMPs and revealed significant differences between devices. Our findings highlighted the influence of the physical affordances provided by the HMP designs and the importance of a playful design in this context. Results suggested that sufficient time should be allocated to HMP familiarization prior to any evaluation procedure, since experiencing musical haptics through objects is a novel experience that might require some time to get used to. We discuss design considerations for Haptic Music Players and provide suggestions for future developments of multimodal systems dedicated to enhancing music listening in special education settings.

    Full text (pdf)
    fulltext
  • 40.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. IRCAM, STMS Sci & Technol Mus & Son UMR9912, 1 Pl Igor Stravinsky, F-75004 Paris, France.
    Panariello, Claudio
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Núñez-Pacheco, Claudia
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children: A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies (2022). In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 6, no. 7, article id 55. Journal article (Refereed)
    Abstract [en]

    Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., to actively include users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. In this paper, we apply in-depth qualitative and mixed methodologies in a case study with four students with PMLD. Using Participatory Design with Proxies (PDwP), we assess how these students can be involved in the customization and evaluation of the design of a multisensory music experience intended for a large-scale ADMI. Results from an experiment focused on the communication of musical haptics highlighted the diversity of interaction strategies used by the children, accessibility limitations of the current multisensory experience design, and the importance of using a multifaceted variety of qualitative and quantitative methods to arrive at more informed conclusions when applying a design-with-proxies methodology.

  • 41.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Pauletto, Sandra
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Bouvier, Baptiste
    STMS IRCAM CNRS SU, Paris, France.
    Fraticelli, Matthieu
    Département d’études cognitives ENS, Paris, France.
    A Dual-Task Experimental Methodology for Exploration of Saliency of Auditory Notifications in a Retail Soundscape (2023). In: Proceedings of the 28th International Conference on Auditory Display (ICAD2023): Sonification for the Masses, 2023. Conference paper (Refereed)
    Abstract [en]

    This paper presents an experimental design of a dual-task experiment aimed at exploring the salience of auditory notifications. The first task is a Sustained Attention to Response Task (SART) and the second task involves listening to a complex store soundscape that includes ambient sounds, background music and auditory notifications. In this task, subjects are asked to press a button when an auditory notification is detected. The proposed method is based on a triangulation approach in which quantitative variables are combined with perceptual ratings and free-text question replies to obtain a holistic picture of how the sound environment is perceived. Results from this study can be used to inform the design of systems presenting music and peripheral auditory notifications in a retail environment.

    Full text (pdf)
    fulltext
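    The dual-task design above combines a SART with detection of auditory notifications embedded in a store soundscape. As one possible way to summarise the quantitative part of such logs (not the authors' analysis pipeline), the sketch below computes a hit rate and median reaction time from hypothetical onset and button-press timestamps; the 3-second response window and all numbers are assumptions for the example.

        import statistics

        # Hypothetical event log for one participant, in seconds from soundscape onset.
        notification_onsets = [12.0, 47.5, 93.2, 140.8, 201.3]
        button_presses = [12.9, 95.0, 141.3, 250.0]

        WINDOW = 3.0   # a press within 3 s of an onset counts as a detection (assumed)

        hits, reaction_times = 0, []
        for onset in notification_onsets:
            rts = [p - onset for p in button_presses if 0.0 <= p - onset <= WINDOW]
            if rts:
                hits += 1
                reaction_times.append(min(rts))

        print(f"hit rate: {hits / len(notification_onsets):.0%}")
        if reaction_times:
            print(f"median reaction time: {statistics.median(reaction_times):.2f} s")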
  • 42. Giordano, M
    et al.
    Hattwick, I
    Franco, I
    Egloff, D
    Frid, Emma
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Lamontagne, V
    TeZ, C
    Salter, C
    Wanderley, M
    Design and Implementation of a Whole-Body Haptic Suit for “Ilinx”, a Multisensory Art Installation (2015). In: Proc. of the 12th Int. Conference on Sound and Music Computing (SMC-15) / [ed] Joseph Timoney and Thomas Lysaght, Maynooth, Ireland: Maynooth University, 2015, Vol. 1, pp. 169-175. Conference paper (Refereed)
    Abstract [en]

    Ilinx is a multidisciplinary art/science research project focusing on the development of a multisensory art installation involving sound, visuals and haptics. In this paper we describe the design choices and technical challenges behind the development of the haptic technology embedded into six augmented garments. Starting from perceptual experiments conducted to characterize the thirty vibrating actuators used in the garments, we describe the hardware and software design and the development of several haptic effects. The garments were successfully used by over 300 people during the premiere of the installation at the TodaysArt 2014 festival in The Hague.

    Full text (pdf)
    fulltext
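    The Ilinx garments render haptic effects on thirty vibrating actuators. The effect below is not taken from the paper; it is a generic Python sketch of one such effect, a Gaussian pulse of amplitude sweeping across an actuator array at an assumed control rate, producing one frame of per-actuator intensities per update.

        import numpy as np

        N_ACTUATORS = 30     # the garments embed thirty vibrating actuators
        RATE = 50            # control updates per second (assumed)
        DURATION = 2.0       # seconds

        def wave_effect(n=N_ACTUATORS, rate=RATE, duration=DURATION, width=3.0):
            """Amplitude frames (time x actuator) for a pulse sweeping across the body."""
            steps = int(rate * duration)
            idx = np.arange(n)
            frames = []
            for k in range(steps):
                centre = (k / steps) * (n - 1)     # moving centre of the pulse
                frames.append(np.exp(-((idx - centre) ** 2) / (2 * width ** 2)))
            return np.array(frames)

        frames = wave_effect()
        print(frames.shape)   # (100, 30): one amplitude in [0, 1] per actuator per frame
        # Each row would then be scaled to drive levels for the actuator hardware.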
  • 43.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Latupeirissa, Adrian Benigno
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Lindetorp, Hans
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. KMH Royal Acad Mus, Mus & Media Prod, Stockholm, Sweden.
    Unproved methods from the frontier in the course curriculum: A bidirectional and mutually beneficial research challenge (2020). In: INTED2020 Proceedings, IATED, 2020, pp. 7033-7038. Conference paper (Refereed)
  • 44.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Sonic characteristics of robots in films (2019). In: Proceedings of the 16th Sound and Music Computing Conference, Malaga, Spain, 2019, pp. 1-6, article id P2.7. Conference paper (Refereed)
    Abstract [en]

    Robots are increasingly becoming an integral part of our everyday life. Expectations of robots could be influenced by how robots are represented in science fiction films. We hypothesize that sonic interaction design for real-world robots may find inspiration in the sound design of fictional robots. In this paper, we present an exploratory study focusing on the sonic characteristics of robot sounds in films. We believe that findings from the current study could be of relevance for future robotic applications involving the communication of internal states through sounds, as well as for the sonification of expressive robot movements. Excerpts from five films were annotated and analysed using the Long Time Average Spectrum (LTAS). As an overall observation, we found that robot sonic presence is highly related to the physical appearance of robots. Preliminary results show that most of the robots analysed in this study have “metallic” voice qualities, matching the material of their physical form. Characteristics of robot voices show significant differences compared to the voices of human characters; the fundamental frequency of robotic voices is shifted to either higher or lower values, and the voices span a broader frequency band.
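    The analysis above uses the Long Time Average Spectrum (LTAS) of film excerpts. The sketch below shows one common way to approximate an LTAS with Welch's method in Python; the file name and the spectral-centroid summary are illustrative additions, not the authors' exact procedure.

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import welch

        # "robot_voice.wav" is a placeholder file name, not material from the paper.
        sr, audio = wavfile.read("robot_voice.wav")
        if audio.ndim > 1:                          # mix down to mono if needed
            audio = audio.mean(axis=1)
        audio = audio.astype(np.float64)
        peak = np.max(np.abs(audio))
        audio = audio / peak if peak > 0 else audio

        # Welch's method averages spectra over many windows, approximating an LTAS.
        f, psd = welch(audio, fs=sr, nperseg=4096)
        ltas_db = 10 * np.log10(psd + 1e-12)
        print(f"LTAS spans {f[0]:.0f}-{f[-1]:.0f} Hz in {f.size} bins")

        # A crude summary feature one might compare between robot and human voices.
        centroid = np.sum(f * psd) / np.sum(psd)
        print(f"spectral centroid: {centroid:.0f} Hz")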

  • 45.
    Lindetorp, Hans
    et al.
    Kungl. Musikhögskolan, Institutionen för musik- och medieproduktion. Royal College of Music.
    Svahn, Maria
    KTH, Medieteknik och interaktionsdesign, MID.
    Hölling, Josefine
    KTH, Medieteknik och interaktionsdesign, MID.
    Falkenberg, Kjetil
    KTH, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Medieteknik och interaktionsdesign, MID.
    Collaborative music-making: special educational needs school assistants as facilitators in performances with accessible digital musical instruments (2023). In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 5, article id 1165442. Journal article (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing, and there is an increased interest in promoting diversity and inclusion in music-making. We have designed a novel system, built into previously tested ADMIs, that aims at involving assistants, students with Profound and Multiple Learning Disabilities (PMLD), and a professional musician in playing music together. In this study the system is evaluated in a workshop setting using quantitative as well as qualitative methods. One of the main findings was that the sounds from the ADMIs added to the musical context without making errors that impacted the music negatively, even when the assistants mentioned experiencing a split between attending to different tasks and a feeling of insecurity toward their musical contribution. We discuss the results in terms of how we perceive them as drivers of or barriers to reaching our overarching goal of organizing a joint concert that brings together students from the SEN school with students from a music school with a specific focus on traditional orchestral instruments. Our study highlights how a system of networked and synchronized ADMIs could be conceptualized to include assistants more actively in collaborative music-making, as well as design considerations that support them as facilitators.

  • 46.
    Lindetorp, Hans
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. KMH Royal Coll Mus, Dept Mus & Media Prod, Stockholm, Sweden.
    Svahn, Maria
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Hölling, Josefine
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Falkenberg, Kjetil
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. Inst Res & Coordinat Acoust Mus IRCAM, Sci & Technol Mus & Sound STMS, Paris, France.
    Collaborative music-making: special educational needs school assistants as facilitators in performances with accessible digital musical instruments (2023). In: Frontiers in Computer Science, E-ISSN 2624-9898, Vol. 5, article id 1165442. Journal article (Refereed)
    Abstract [en]

    The field of research dedicated to Accessible Digital Musical Instruments (ADMIs) is growing, and there is an increased interest in promoting diversity and inclusion in music-making. We have designed a novel system, built into previously tested ADMIs, that aims at involving assistants, students with Profound and Multiple Learning Disabilities (PMLD), and a professional musician in playing music together. In this study the system is evaluated in a workshop setting using quantitative as well as qualitative methods. One of the main findings was that the sounds from the ADMIs added to the musical context without making errors that impacted the music negatively, even when the assistants mentioned experiencing a split between attending to different tasks and a feeling of insecurity toward their musical contribution. We discuss the results in terms of how we perceive them as drivers of or barriers to reaching our overarching goal of organizing a joint concert that brings together students from the SEN school with students from a music school with a specific focus on traditional orchestral instruments. Our study highlights how a system of networked and synchronized ADMIs could be conceptualized to include assistants more actively in collaborative music-making, as well as design considerations that support them as facilitators.

  • 47. Ljungdahl Eriksson, Martin
    et al.
    Otterbring, Tobias
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Falkenberg, Kjetil
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Sounds and Satisfaction: A Novel Conceptualization of the Soundscape in Sales and Service Settings (2022). In: Proceedings of the Nordic Retail and Wholesale Conference, 2022. Conference paper (Refereed)
  • 48.
    Moll, Jonas
    et al.
    Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Avdelningen för visuell information och interaktion. Uppsala universitet, Teknisk-naturvetenskapliga vetenskapsområdet, Matematisk-datavetenskapliga sektionen, Institutionen för informationsteknologi, Bildanalys och människa-datorinteraktion.
    Frid, Emma
    KTH, School of Computer Science and Communication (CSC), Media Technology and Interaction Design.
    Using eye-tracking to study the effect of haptic feedback on visual focus during collaborative object managing in a multimodal virtual interface (2017). In: Proc. 13th SweCog Conference, Högskolan i Skövde, 2017, pp. 49-51. Conference paper (Refereed)
    Full text (pdf)
    Abstract
  • 49.
    Moll, Jonas
    et al.
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Frid, Emma
    Department of Media Technology and Interaction Design, KTH Royal Institute of Technology, Stockholm, Sweden.
    Using eye-tracking to study the effect of haptic feedback on visual focus during collaborative object managing in a multimodal virtual interface (2017). In: Proceedings of the 13th SweCog conference, Högskolan i Skövde, 2017, pp. 49-51. Conference paper (Refereed)
    Full text (pdf)
    Using Eye-Tracking to Study the Effect of Haptic Feedback on Visual Focus During Collaborative Object Managing in a Multimodal Virtual Interface
  • 50.
    Núñez-Pacheco, Claudia
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID.
    Frid, Emma
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Människocentrerad teknologi, Medieteknik och interaktionsdesign, MID. STMS IRCAM.
    Sharing Earthquake Narratives: Making Space for Others in our Autobiographical Design Process (2023). In: CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems / [ed] Albrecht Schmidt, Kaisa Väänänen, Tesh Goyal, Per Ola Kristensson, Anicia Peters, Stefanie Mueller, Julie R. Williamson, Max L. Wilson, New York, NY, United States, 2023, article id 685. Conference paper (Refereed)
    Abstract [en]

    As interaction designers are venturing to design for others based on autobiographical experiences, it becomes particularly relevant to critically distinguish the designer’s voice from others’ experiences. However, few reports go into detail about how self and others mutually shape the design process and how to incorporate external evaluation into these designs. We describe a one-year process involving the design and evaluation of a prototype combining haptics and storytelling, aiming to materialise and share somatic memories of earthquakes experienced by a designer and her partner. We contribute three strategies for bringing others into our autobiographical processes, avoiding the dilution of first-person voices while critically addressing design flaws that might hinder the representation of our stories.
