1 - 18 of 18
  • 1.
    Al Moubayed, Samer
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Talteknologi, CTT.
    Beskow, Jonas
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Talteknologi, CTT.
    Granström, Björn
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation. KTH, Skolan för datavetenskap och kommunikation (CSC), Centra, Centrum för Talteknologi, CTT.
    Auditory visual prominence: From intelligibility to behavior (2009). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 3, no. 4, pp. 299-309. Journal article (Refereed)
    Abstract [en]

    Auditory prominence is defined as the quality by which an acoustic segment is made salient in its context. Prominence is one of the prosodic functions that has been shown to be strongly correlated with facial movements. In this work, we investigate the effects of facial prominence cues, in terms of gestures, when synthesized on animated talking heads. In the first study, a speech intelligibility experiment is conducted: speech quality is acoustically degraded and the fundamental frequency is removed from the signal, and the speech is then presented to 12 subjects through a lip-synchronized talking head carrying head-nod and eyebrow-raise gestures, which are synchronized with the auditory prominence. The experiment shows that presenting prominence as facial gestures significantly increases speech intelligibility compared to when these gestures are randomly added to speech. We also present a follow-up study examining the perception of the behavior of the talking heads when gestures are added over pitch accents. Using eye-gaze tracking technology and questionnaires with 10 moderately hearing-impaired subjects, the gaze data show that, when gestures are coupled with pitch accents, users look at the face in a fashion similar to how they look at a natural face, as opposed to when the face carries no gestures. The questionnaires also show that these gestures significantly increase the naturalness and the understandability of the talking head.

  • 2.
    Bresin, Roberto
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH.
    Hermann, T.
    Hunt, A.
    Interactive sonification (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no. 3-4, pp. 85-86. Journal article (Refereed)
    Abstract [en]

    In October 2010, Roberto Bresin, Thomas Hermann and Andy Hunt launched a call for papers for a special issue on Interactive Sonification of the Journal on Multimodal User Interfaces (JMUI). The call was published in eight major mailing lists in the field of Sound and Music Computing and on related websites. Twenty manuscripts were submitted for review, and eleven of them have been accepted for publication after further improvements. Three of the papers are further developments of works presented at ISon 2010, the Interactive Sonification workshop. Most of the papers went through a three-stage review process.

    The papers give an interesting overview of the field of Interactive Sonification as it is today. Their topics include the sonification of data exploration and of motion, a new sound synthesis model suitable for interactive sonification applications, a study on perception in the everyday periphery of attention, and the proposal of a conceptual framework for interactive sonification. 

  • 3.
    Burger, Birgitta
    et al.
    Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Communication of Musical Expression by Means of Mobile Robot Gestures (2010). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 3, no. 1, pp. 109-118. Journal article (Refereed)
    Abstract [en]

    We developed a robotic system that can behave in an emotional way. A simple 3-wheeled robot with limited degrees of freedom was designed. Our goal was to make the robot display emotions in music performance by performing expressive movements. These movements have been compiled and programmed based on literature about emotion in music, musicians’ movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness have been implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, even in constrained circumstances.

  • 4. Crook, N.
    et al.
    Field, D.
    Smith, C.
    Harding, S.
    Pulman, S.
    Cavazza, M.
    Charlton, D.
    Moore, R.
    Boye, Johan
    KTH, Skolan för datavetenskap och kommunikation (CSC), Teoretisk datalogi, TCS.
    Generating context-sensitive ECA responses to user barge-in interruptions (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 6, no. 1-2, pp. 13-25. Journal article (Refereed)
    Abstract [en]

    We present an Embodied Conversational Agent (ECA) that incorporates a context-sensitive mechanism for handling user barge-in. The affective ECA engages the user in social conversation, and is fully implemented. We will use actual examples of system behaviour to illustrate. The ECA is designed to recognise and be empathetic to the emotional state of the user. It is able to detect, react quickly to, and then follow up with considered responses to different kinds of user interruptions. The design of the rules which enable the ECA to respond intelligently to different types of interruptions was informed by manually analysed real data from human-human dialogue. The rules represent recoveries from interruptions as two-part structures: an address followed by a resumption. The system is robust enough to manage long, multi-utterance turns by both user and system, which creates good opportunities for the user to interrupt while the ECA is speaking.
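
    The two-part recovery structure described above (an address of the interruption followed by a resumption of the interrupted turn) can be sketched as a small lookup of recovery rules. The interruption categories, wordings and function names below are hypothetical illustrations, not the rule set the authors derived from human-human dialogue data.

    from dataclasses import dataclass

    # Hypothetical interruption types and recovery rules, for illustration only.
    @dataclass
    class Recovery:
        address: str      # short reaction acknowledging the barge-in
        resumption: str   # how the interrupted turn is picked up again

    RULES = {
        "clarification_request": Recovery(
            address="Sure, let me explain that.",
            resumption="repeat_last_clause"),
        "topic_shift": Recovery(
            address="Okay, we can talk about that instead.",
            resumption="drop_remaining_turn"),
        "backchannel": Recovery(
            address="",                      # no explicit address needed
            resumption="continue_from_interruption_point"),
    }

    def respond_to_barge_in(interruption_type: str) -> Recovery:
        """Return the two-part recovery for a classified user interruption."""
        return RULES.get(interruption_type,
                         Recovery("Sorry, go ahead.", "drop_remaining_turn"))

    if __name__ == "__main__":
        r = respond_to_barge_in("clarification_request")
        print(r.address, "->", r.resumption)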

  • 5.
    Dubus, Gaël
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Evaluation of four models for the sonification of elite rowing (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no. 3-4, pp. 143-156. Journal article (Refereed)
    Abstract [en]

    Many aspects of sonification represent potential benefits for the practice of sports. Taking advantage of the characteristics of auditory perception, interactive sonification offers promising opportunities for enhancing the training of athletes. The efficient learning and memorizing abilities pertaining to the sense of hearing, together with the strong coupling between the auditory and sensorimotor systems, make the use of sound a natural field of investigation in the quest for efficiency optimization in individual sports at a high level. This study presents an application of sonification to elite rowing, introducing and evaluating four sonification models. The rapid development of mobile technology capable of efficiently handling numerical information offers new possibilities for interactive auditory display. Thus, these models have been developed under the specific constraints of a mobile platform, from data acquisition to the generation of meaningful sound feedback. In order to evaluate the models, two listening experiments were then carried out with elite rowers. Results show a good ability of the participants to efficiently extract basic characteristics of the sonified data, even in a non-interactive context. Qualitative assessment of the models highlights the need for a balance between function and aesthetics in interactive sonification design. Consequently, particular attention to usability is required for future displays to become widespread.
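
    As a minimal illustration of parameter-mapping sonification under mobile constraints, the sketch below maps stroke rate to pitch. The abstract does not describe the four models, so the choice of stroke rate as the input signal and all numeric ranges are assumptions.

    def stroke_rate_to_pitch(stroke_rate_spm: float,
                             rate_min: float = 18.0, rate_max: float = 40.0,
                             midi_low: int = 48, midi_high: int = 72) -> float:
        """Linearly map stroke rate (strokes per minute) to a MIDI pitch."""
        t = (stroke_rate_spm - rate_min) / (rate_max - rate_min)
        t = min(max(t, 0.0), 1.0)                       # clamp to the assumed range
        return midi_low + t * (midi_high - midi_low)

    def midi_to_hz(midi_note: float) -> float:
        """Convert a MIDI note number to frequency in Hz."""
        return 440.0 * 2.0 ** ((midi_note - 69.0) / 12.0)

    if __name__ == "__main__":
        for spm in (20, 28, 36):
            print(spm, "spm ->", round(midi_to_hz(stroke_rate_to_pitch(spm)), 1), "Hz")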

  • 6.
    Fabiani, Marco
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Dubus, Gaël
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Interactive sonification of expressive hand gestures on a handheld device (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 6, no. 1-2, pp. 49-57. Journal article (Refereed)
    Abstract [en]

    We present here a mobile phone application called MoodifierLive which aims at using expressive music performances for the sonification of expressive gestures through the mapping of the phone’s accelerometer data to the performance parameters (i.e. tempo, sound level, and articulation). The application, and in particular the sonification principle, is described in detail. An experiment was carried out to evaluate the perceived matching between the gesture and the music performance that it produced, using two distinct mappings between gestures and performance. The results show that the application produces consistent performances, and that the mapping based on data collected from real gestures works better than one defined a priori by the authors.
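
    A minimal sketch of the kind of accelerometer-to-performance mapping described above, assuming the overall acceleration magnitude as the control signal. The actual MoodifierLive mappings (including the one derived from real gesture data) are not reproduced here; all ranges are illustrative.

    import math

    def map_gesture_to_performance(ax: float, ay: float, az: float) -> dict:
        """Map one accelerometer sample (in g) to performance parameters.

        The ranges and the use of acceleration magnitude as the control signal
        are illustrative assumptions, not the mappings used in MoodifierLive.
        """
        energy = math.sqrt(ax * ax + ay * ay + az * az)   # overall movement intensity
        energy = min(max(energy, 0.0), 3.0) / 3.0         # normalise to 0..1
        return {
            "tempo_bpm": 60 + energy * 120,        # calmer gestures -> slower tempo
            "sound_level_db": -30 + energy * 24,   # more energy -> louder playback
            "articulation": 1.0 - 0.6 * energy,    # 1.0 = legato, lower = more staccato
        }

    if __name__ == "__main__":
        print(map_gesture_to_performance(0.1, 0.2, 1.0))   # gentle movement
        print(map_gesture_to_performance(1.5, 1.2, 2.0))   # vigorous movement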

  • 7.
    Frid, Emma
    et al.
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Elblaus, Ludvig
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Interactive sonification of a fluid dance movement: an exploratory study (2019). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 3, pp. 181-189. Journal article (Refereed)
    Abstract [en]

    In this paper we present three different experiments designed to explore sound properties associated with fluid movement: (1) an experiment in which participants adjusted parameters of a sonification model developed for a fluid dance movement, (2) a vocal sketching experiment in which participants sketched sounds portraying fluid versus nonfluid movements, and (3) a workshop in which participants discussed and selected fluid versus nonfluid sounds. Consistent findings from the three experiments indicated that sounds expressing fluidity generally occupy a lower register and have less high-frequency content, as well as a lower bandwidth, than sounds expressing nonfluidity. The ideal sound to express fluidity is continuous, calm, slow, pitched, and reminiscent of wind, water or an acoustic musical instrument. The ideal sound to express nonfluidity is harsh, non-continuous, abrupt, dissonant, conceptually associated with metal or wood, non-human and robotic. Findings presented in this paper can be used as design guidelines for future applications in which the movement property fluidity is to be conveyed through sonification.
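
    As a rough illustration of the reported guidelines (fluid: continuous, low register, little high-frequency content; nonfluid: abrupt, noisy), the sketch below generates two contrasting test signals. The specific frequencies, envelopes and burst timings are assumptions, not stimuli from the study.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def fluid_sound(duration: float = 2.0) -> np.ndarray:
        """Continuous, low-register tone with a slow swell and little
        high-frequency content (one reading of the design guidelines)."""
        t = np.linspace(0, duration, int(SR * duration), endpoint=False)
        tone = np.sin(2 * np.pi * 110.0 * t)                  # low register (A2)
        swell = 0.5 * (1 - np.cos(2 * np.pi * t / duration))  # smooth fade in/out
        return 0.5 * tone * swell

    def nonfluid_sound(duration: float = 2.0) -> np.ndarray:
        """Abrupt, noisy bursts with plenty of high-frequency content."""
        n = int(SR * duration)
        out = np.zeros(n)
        rng = np.random.default_rng(0)
        for start in range(0, n, SR // 4):              # a burst roughly every 250 ms
            burst = rng.uniform(-1, 1, SR // 20)        # 50 ms of broadband noise
            end = min(start + burst.size, n)
            out[start:end] += burst[: end - start]
        return 0.5 * out

    if __name__ == "__main__":
        print(fluid_sound().shape, nonfluid_sound().shape)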

  • 8.
    Frid, Emma
    et al.
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Moll, Jonas
    Uppsala University, Uppsala, Sweden.
    Bresin, Roberto
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Sallnäs Pysander, Eva-Lotta
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Correction to: Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task (2018). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738. Journal article (Refereed)
    Abstract [en]

    The original version of this article unfortunately contained mistakes. The presentation order of Fig. 5 and Fig. 6 was incorrect. The plots should have been presented according to the order of the sections in the text; the “Mean Task Duration” plot should have been presented first, followed by the “Perceived Intuitiveness” plot.

  • 9.
    Frid, Emma
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Moll, Jonas
    Uppsala University.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Sallnäs Pysander, Eva-Lotta
    KTH, Skolan för datavetenskap och kommunikation (CSC), Medieteknik och interaktionsdesign, MID.
    Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task (2018). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738. Journal article (Refereed)
    Abstract [en]

    In this paper we present a study on the effects of auditory and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness are affected when interactive sonification and/or haptic feedback is used to provide real-time feedback about a movement performed in a 3D virtual environment. Emphasis was put on task-solving efficiency and subjective accounts of participants’ experiences of the multimodal interaction in different conditions. The experiment used a within-subjects design in which the participants solved the same task in different conditions: visual-only, visuohaptic, audiovisual and audiovisuohaptic. Two different sound models were implemented and compared. Significantly lower error rates were obtained in the audiovisuohaptic condition involving movement sonification based on a physical model of friction, compared to the visual-only condition. Moreover, a significant increase in perceived intuitiveness was observed for most conditions involving haptic and/or auditory feedback, compared to the visual-only condition. The main finding of this study is that multimodal feedback can not only improve the perceived intuitiveness of an interface, but that certain combinations of haptic feedback and movement sonification can also contribute performance-enhancing properties. This highlights the importance of carefully designing feedback combinations for interactive applications.
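
    A minimal sketch of movement sonification driven by end-effector speed, in the spirit of the friction-based sound model mentioned above. The parameter names, ranges and the mapping itself are assumptions and do not reproduce the physical friction model used in the study.

    def friction_sonification_params(speed: float) -> dict:
        """Map instantaneous end-effector speed (m/s) to control parameters for a
        friction-like sound (e.g. filtered noise). All values are illustrative."""
        speed = min(max(speed, 0.0), 2.0)          # clamp to an assumed working range
        return {
            "gain": speed / 2.0,                              # faster movement -> louder rubbing
            "filter_cutoff_hz": 200 + 4000 * (speed / 2.0),   # brighter at higher speed
            "grain_rate_hz": 20 + 180 * (speed / 2.0),        # denser friction grains
        }

    if __name__ == "__main__":
        for v in (0.1, 0.8, 1.8):
            print(v, "m/s ->", friction_sonification_params(v))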

  • 10.
    Frid, Emma
    et al.
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Moll, Jonas
    Uppsala University, Uppsala, Sweden.
    Bresin, Roberto
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Sallnäs Pysander, Eva-Lotta
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task (2018). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738. Journal article (Refereed)
    Abstract [en]

    In this paper we present a study on the effects of auditory and haptic feedback in a virtual throwing task performed with a point-based haptic device. The main research objective was to investigate if and how task performance and perceived intuitiveness are affected when interactive sonification and/or haptic feedback is used to provide real-time feedback about a movement performed in a 3D virtual environment. Emphasis was put on task-solving efficiency and subjective accounts of participants’ experiences of the multimodal interaction in different conditions. The experiment used a within-subjects design in which the participants solved the same task in different conditions: visual-only, visuohaptic, audiovisual and audiovisuohaptic. Two different sound models were implemented and compared. Significantly lower error rates were obtained in the audiovisuohaptic condition involving movement sonification based on a physical model of friction, compared to the visual-only condition. Moreover, a significant increase in perceived intuitiveness was observed for most conditions involving haptic and/or auditory feedback, compared to the visual-only condition. The main finding of this study is that multimodal feedback can not only improve the perceived intuitiveness of an interface, but that certain combinations of haptic feedback and movement sonification can also contribute performance-enhancing properties. This highlights the importance of carefully designing feedback combinations for interactive applications.

  • 11.
    Oertel, Catharine
    et al.
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Cummins, Fred
    Edlund, Jens
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Wagner, Petra
    Campbell, Nick
    D64: A corpus of richly recorded conversational interaction (2013). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 7, no. 1-2, pp. 19-28. Journal article (Refereed)
    Abstract [en]

    In recent years there has been a substantial debate about the need for increasingly spontaneous, conversational corpora of spoken interaction that are not controlled or task-directed. In parallel, the need has arisen for the recording of multimodal corpora which are not restricted to the audio domain alone. With a corpus that fulfils both needs, it would be possible to investigate the natural coupling, not only in turn-taking and voice, but also in the movement of participants. In this paper we describe the design and recording of such a corpus and provide some illustrative examples of how it might be exploited in the study of dynamic interaction. The D64 corpus is a multimodal corpus recorded over two successive days. Each day resulted in approximately 4 h of recordings. In total five participants took part in the recordings, of whom two were female and three were male. Seven video cameras were used, of which at least one was trained on each participant. The Optitrack motion capture kit was used to enrich the recordings with motion information. The D64 corpus comprises annotations on conversational involvement, speech activity and pauses, as well as information on the average degree of change in the movement of participants.

  • 12.
    Peter, Christian
    et al.
    Graz University of Technology, Inffeldgasse 16c, 8010 Graz, Austria.
    Kreiner, Andreas
    Modernfamilies.net GmbH, Linz, Austria.
    Schröter, Martin
    Graz University of Technology, Inffeldgasse 16c, 8010 Graz, Austria.
    Kim, Hyosun
    Graz University of Technology, Inffeldgasse 16c, 8010 Graz, Austria.
    Beiber, Gerald
    Fraunhofer IGD, Rostock, Germany.
    Öhberg, Fredrik
    Umeå universitet, Medicinska fakulteten, Institutionen för strålningsvetenskaper, Radiofysik.
    Hoshi, Kei
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Waterworth, Eva
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Waterworth, John
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Ballesteros, Soledad
    Facultad de Psicologia, UNED, Madrid, Spain.
    AGNES: connecting people in a multimodal way (2013). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 7, no. 3, pp. 229-245. Journal article (Refereed)
    Abstract [en]

    Western societies are confronted with a number of challenges caused by the increasing number of older citizens. One important aspect is the need and wish of older people to live as long as possible in their own home and maintain an independent life. As people grow older, their social networks disperse, with friends and families moving to other parts of town, other cities or even other countries. Additionally, people become less mobile with age, leading to less active participation in societal life. Combined, this normal, age-related development leads to increased loneliness and social isolation of older people, with negative effects on their mental and physical health. In the AGNES project, a home-based system has been developed that connects elderly people with their families, friends and other significant people over the Internet. As most older people have limited experience with computers and often special requirements on technology, one focus of AGNES was to develop, together with the users, novel technological means for interacting with their social network. The resulting system uses ambient displays, tangible interfaces and wearable devices providing ubiquitous options for interaction with the network, and secondary sensors for additionally generating carefully chosen information about the person to be relayed to significant persons. Evaluations show that the chosen modalities for interaction are well adopted by the users. Furthermore, it was found that use of the AGNES system had positive effects on the mental state of the users, compared to the control group without the technology.

  • 13. Reidsma, Dennis
    et al.
    de Kok, Iwan
    Neiberg, Daniel
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Tal-kommunikation.
    Pammi, Sathish Chandra
    van Straalen, Bart
    Truong, Khiet
    van Welbergen, Herwin
    Continuous Interaction with a Virtual Human (2011). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 4, no. 2, pp. 97-118. Journal article (Refereed)
    Abstract [en]

    This paper presents our progress in developing a Virtual Human capable of being an attentive speaker. Such a Virtual Human should be able to attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it observes in the behavior of its partner. We report new developments concerning a number of aspects, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and strategies for generating appropriate reactions to listener responses. On the basis of this progress, a task-based setup for a responsive Virtual Human was implemented to carry out two user studies, the results of which are presented and discussed in this paper.

  • 14.
    Rönnberg, Niklas
    Linköpings universitet, Institutionen för teknik och naturvetenskap, Medie- och Informationsteknik. Linköpings universitet, Tekniska fakulteten.
    Sonification supports perception of brightness contrast (2019). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, pp. 1-9. Journal article (Refereed)
    Abstract [en]

    In complex visual representations, there are several possible challenges for visual perception that might be eased by adding sound as a second modality (i.e. sonification). It was hypothesized that sonification would support visual perception when facing challenges such as simultaneous brightness contrast or the Mach band phenomenon. This hypothesis was investigated with an interactive sonification test, yielding objective measures (accuracy and response time) as well as subjective measures of sonification benefit. In the test, the participant’s task was to mark the vertical pixel line having the highest intensity level. This was done in a condition without sonification and in three conditions where the intensity level was mapped to different musical elements. The results showed that there was a benefit of sonification, with higher accuracy when sonification was used compared to no sonification. This result was also supported by the subjective measures. The results also showed longer response times when sonification was used, which suggests that using and processing the additional information took more time, leading to longer response times but also higher accuracy. There were no differences between the three sonification conditions.
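
    A minimal sketch of mapping pixel intensity to a musical element, here pitch. The three mappings actually compared in the study are not specified in the abstract, so this mapping and its ranges are assumptions.

    def intensity_to_pitch(intensity: int, low_midi: int = 48, high_midi: int = 84) -> float:
        """Map an 8-bit pixel intensity (0-255) to a MIDI pitch.

        Pitch is only one possible musical element; this choice is illustrative."""
        return low_midi + (intensity / 255.0) * (high_midi - low_midi)

    if __name__ == "__main__":
        # Sonify one image row: the brightest column yields the highest pitch.
        row = [12, 80, 200, 255, 190, 60]
        pitches = [round(intensity_to_pitch(v), 1) for v in row]
        print(pitches)  # the maximum at index 3 stands out as the highest note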

  • 15. Székely, Éva
    et al.
    Steiner, Ingmar
    Ahmed, Zeeshan
    Carson-Berndsen, Julie
    Facial expression-based affective speech translation (2014). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 8, no. 1, pp. 87-96. Journal article (Refereed)
    Abstract [en]

    One of the challenges of speech-to-speech translation is to accurately preserve the paralinguistic information in the speaker’s message. Information about the affect and emotional intent of a speaker is often carried in more than one modality. For this reason, the possibility of multimodal interaction with the system and the conversation partner may greatly increase the likelihood of a successful and gratifying communication process. In this work we explore the use of automatic facial expression analysis as an input annotation modality to transfer paralinguistic information at a symbolic level from input to output in speech-to-speech translation. To evaluate the feasibility of this approach, a prototype system, FEAST (facial expression-based affective speech translation), has been developed. FEAST classifies the emotional state of the user and uses it to render the translated output in an appropriate voice style, using expressive speech synthesis.
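
    A minimal sketch of the pipeline described above: a facial-expression label selects an expressive voice style for the translated output. The emotion labels, feature names and mapping are hypothetical and are not the actual FEAST components.

    # Hypothetical mapping from classified emotion to an expressive voice style.
    EMOTION_TO_VOICE_STYLE = {
        "happy": "cheerful",
        "angry": "stern",
        "sad": "subdued",
        "neutral": "neutral",
    }

    def classify_expression(face_features: dict) -> str:
        """Stand-in for a facial-expression classifier (feature names are made up)."""
        if face_features.get("smile", 0.0) > 0.5:
            return "happy"
        if face_features.get("brow_furrow", 0.0) > 0.5:
            return "angry"
        return "neutral"

    def render_with_affect(translated_text: str, face_features: dict) -> dict:
        """Attach an expressive voice style to the already-translated text."""
        emotion = classify_expression(face_features)
        return {
            "text": translated_text,          # translation itself is out of scope here
            "voice_style": EMOTION_TO_VOICE_STYLE[emotion],
        }

    if __name__ == "__main__":
        print(render_with_affect("I am really glad to hear that!", {"smile": 0.8}))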

  • 16. Varni, Giovanna
    et al.
    Dubus, Gaël
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Oksanen, Sami
    Volpe, Gualtiero
    Fabiani, Marco
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Bresin, Roberto
    KTH, Skolan för datavetenskap och kommunikation (CSC), Tal, musik och hörsel, TMH, Musikakustik.
    Kleimola, Jari
    Välimäki, Vesa
    Camurri, Antonio
    Interactive sonification of synchronisation of motoric behaviour in social active listening to music with mobile devices (2012). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 5, no. 3-4, pp. 157-173. Journal article (Refereed)
    Abstract [en]

    This paper evaluates three different interactive sonifications of dyadic coordinated human rhythmic activity. An index of phase synchronisation of gestures was chosen as the coordination metric. The sonifications are implemented as three prototype applications exploiting mobile devices: Sync’n’Moog, Sync’n’Move, and Sync’n’Mood. Sync’n’Moog sonifies the phase synchronisation index by acting directly on the audio signal and applying a nonlinear time-varying filtering technique. Sync’n’Move operates on the multi-track music content by making individual instruments emerge and recede. Sync’n’Mood manipulates the affective features of the music performance. The three sonifications were also tested against a condition without sonification.
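
    The coordination metric mentioned above, an index of phase synchronisation between two gesture signals, can be estimated for example as a phase-locking value. The abstract does not state which estimator the authors used, so the following sketch is an assumption.

    import numpy as np
    from scipy.signal import hilbert

    def phase_synchronisation_index(x: np.ndarray, y: np.ndarray) -> float:
        """Phase-locking value between two gesture signals (1 = perfectly in phase)."""
        phi_x = np.angle(hilbert(x - x.mean()))    # instantaneous phase of signal x
        phi_y = np.angle(hilbert(y - y.mean()))    # instantaneous phase of signal y
        return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

    if __name__ == "__main__":
        t = np.linspace(0, 10, 1000)
        a = np.sin(2 * np.pi * 1.0 * t)                       # 1 Hz "gesture"
        b = np.sin(2 * np.pi * 1.0 * t + 0.4)                 # same tempo, constant lag
        c = np.sin(2 * np.pi * 1.3 * t)                       # different tempo
        print(round(phase_synchronisation_index(a, b), 2))    # close to 1.0
        print(round(phase_synchronisation_index(a, c), 2))    # clearly lower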

  • 17. Walus, Bartlomiej P.
    et al.
    Pauletto, Sandra
    Department of Theatre, Film and Television, University of York, York, United Kingdom.
    Mason-Jones, Amanda
    Sonification and music as support to the communication of alcohol-related health risks to young people. Study design and results (2016). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 10, no. 3, pp. 235-246. Journal article (Refereed)
    Abstract [en]

    Excessive consumption of alcohol has been recognised as a significant risk factor impacting the health of young people. Effective communication of such risk is considered to be one key step towards improving behaviour. We evaluated an innovative multimedia intervention that used audio (sonification, i.e. using sound to display data, and music) and interactivity to support the visual communication of alcohol health risk data. A 3-arm pilot experiment was undertaken. The trial measures included health knowledge, alcohol risk perception and user experience of the intervention. Ninety-six subjects participated in the experiment. At 1-month follow-up, alcohol knowledge and alcohol risk perception had improved significantly in the whole sample. However, there was no difference between the intervention groups that experienced (1) a visual presentation with interactivity (VI-Exp group) and (2) a visual presentation with audio (sonification and music) and interactivity (VAI-Exp group), when compared to the control group, which experienced (3) a visual-only presentation (V-Cont group). Participants reported enjoying the presentations and found them educational. The majority of participants indicated that the audio, music and sonification helped to convey the information well, and, although a larger sample size is needed to fully establish the effectiveness of the different interventions, this study provides a useful model for future similar studies.

  • 18.
    Yang, Jiajun
    et al.
    Bielefeld University, CITEC, Ambient Intelligence Group, Bielefeld, Germany.
    Hermann, Thomas
    Bielefeld University, CITEC, Ambient Intelligence Group, Bielefeld, Germany.
    Bresin, Roberto
    KTH, Skolan för elektroteknik och datavetenskap (EECS), Medieteknik och interaktionsdesign, MID.
    Introduction to the special issue on interactive sonification (2019). In: Journal on Multimodal User Interfaces, ISSN 1783-7677, E-ISSN 1783-8738, Vol. 13, no. 3, pp. 151-153. Journal article (Other academic)