Digitala Vetenskapliga Arkivet

1 - 50 of 77
  • 1. Barahona, Adrián
    et al.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Perceptual evaluation of modal synthesis for impact-based sounds (2019). In: Proceedings of the Sound and Music Computing Conferences, CERN, 2019, p. 34-38. Conference paper (Refereed)
    Abstract [en]

    The use of real-time sound synthesis for sound effects can improve the sound design of interactive experiences such as video games. However, synthesized sound effects can often be perceived as synthetic, which hampers their adoption. This paper aims to determine whether sounds synthesized using filter-based modal synthesis are perceptually comparable to directly recorded sounds. Sounds from 4 different materials that showed clear modes were recorded and synthesized using filter-based modal synthesis. Modes are the individual sinusoidal frequencies at which objects vibrate when excited. A listening test was conducted in which participants were asked to identify, in isolation, whether a sample was recorded or synthesized. Results show that recorded and synthesized samples are indistinguishable from each other. The study outcome demonstrates that, for the analysed materials, filter-based modal synthesis is a suitable technique for synthesizing hit sounds in real time without perceptual compromises.
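    The filter-based approach the abstract describes, exciting a bank of resonant filters tuned to an object's vibrational modes, can be sketched as follows. This is a minimal illustration only; the mode frequencies, decay times, and amplitudes below are invented, not the paper's measured values.

    ```python
    import numpy as np

    def modal_synthesis(modes, sr=44100, dur=1.0):
        """Filter-based modal synthesis sketch: each mode is a resonant
        two-pole filter (frequency in Hz, T60 decay in s, amplitude)
        excited by a unit impulse representing the hit."""
        n = int(sr * dur)
        out = np.zeros(n)
        excitation = np.zeros(n)
        excitation[0] = 1.0  # impulse "hit"
        for freq, t60, amp in modes:
            # Two-pole resonator: pole radius set so the mode decays by 60 dB in t60 s
            r = 10 ** (-3.0 / (t60 * sr))
            a1 = -2 * r * np.cos(2 * np.pi * freq / sr)
            a2 = r * r
            y = np.zeros(n)
            y1 = y2 = 0.0
            for i in range(n):
                y0 = excitation[i] - a1 * y1 - a2 * y2
                y[i] = y0
                y2, y1 = y1, y0
            out += amp * y
        return out / np.max(np.abs(out))

    # Hypothetical modes for a struck metallic object
    sound = modal_synthesis([(440.0, 0.8, 1.0), (1130.0, 0.5, 0.6), (2680.0, 0.3, 0.4)])
    ```

    In practice the modes would be estimated from recordings of the target material; summing a handful of decaying resonators like this is the core of the technique.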

  • 2.
    Barahona-Rios, Adrián
    et al.
    The University of York.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Synthesising Knocking Sound Effects Using Conditional WaveGAN (2020). In: SMC Sound and Music Computing Conference 2020, 2020. Conference paper (Refereed)
    Abstract [en]

    In this paper we explore the synthesis of sound effects using conditional generative adversarial networks (cGANs). We commissioned Foley artist Ulf Olausson to record a dataset of knocking sound effects with different emotions and trained a cGAN on it. We analysed the resulting synthesised sound effects by comparing their temporal acoustic features to the original dataset and by performing an online listening test. Results show that the acoustic features of the synthesised sounds are similar to those of the recorded dataset. Additionally, the listening test results show that the synthesised sounds can be identified by people with experience in sound design, but the model is not far from fooling non-experts. Moreover, on average most emotions can be recognised correctly in both recorded and synthesised sounds. Given that the temporal acoustic features of the two datasets are highly similar, we hypothesise that they strongly contribute to the perception of the intended emotions in the recorded and synthesised knocking sounds.

  • 3. Bowles, Tristan
    et al.
    Pauletto, Sandra
    Emotions in the voice: humanising a robotic voice (2010). In: Proceedings of the 7th Sound and Music Computing Conference, Barcelona, Spain, 2010. Conference paper (Refereed)
  • 4.
    Bresin, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Falkenberg, Kjetil
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Holzapfel, Andre
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    KTH Royal Institute of Technology - Sound and Music Computing (SMC) Group (2021). In: Proceedings of the Sound and Music Computing Conferences 2021, Sound and Music Computing Network, 2021, p. xxv-xxvi. Conference paper (Other academic)
  • 5.
    Bresin, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Laaksolahti, Jarmo
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Gandini, Erik
    SKH Stockholm University of the Arts.
    Looking for the soundscape of the future: preliminary results applying the design fiction method (2020). In: Sound and Music Computing Conference 2020, 2020. Conference paper (Refereed)
    Abstract [en]

    The work presented in this paper is a preliminary study in a larger project that aims to design the sound of the future through our understanding of the soundscapes of the present, and through methods of documentary filmmaking, sound computing and HCI. This work is part of a project that will complement and run parallel to Erik Gandini's research project "The Future through the Present", which explores how a documentary narrative can create a projection into the future, and develop a cinematic documentary aesthetics that releases documentary film from the constraints of dealing with the present or the past. The point of departure is our relationship to labour at a time when Robotics, VR/AR and AI applied to Big Data outweigh and augment our physical and cognitive capabilities, with automation expected to replace humans on a large scale within most professional fields. From an existential perspective this poses the question: what will we do when we don't have to work? It also challenges us to formulate a new idea of work beyond its historical role. If the concept of work ethics changes, how would that redefine soundscapes? Will new sounds develop? Will sounds from the past resurface? In the context of this paper we tackle these questions by first applying the Design Fiction method. In a workshop, twenty-three participants predicted both positive and negative future scenarios, including both lo-fi and hi-fi soundscapes, in which people will be able to control and personalize soundscapes. Results are presented, summarized and discussed.

  • 6. Edmonds, E A
    et al.
    Candy, Linda
    Fell, Mark
    Knott, Roger
    Pauletto, Sandra
    Weakley, Alastair
    Developing interactive art using visual programming (2003). In: Proceedings of Human-Computer Interaction, p. 1183-1187. Article in journal (Refereed)
  • 7. Edmonds, Ernest A
    et al.
    Weakley, Alastair
    Candy, Linda
    Fell, Mark
    Knott, Roger
    Pauletto, Sandra
    The studio as laboratory: combining creative practice and digital technology research (2005). In: International Journal of Human-Computer Studies, Vol. 63, no 4-5, p. 452-481. Article in journal (Refereed)
  • 8. Edmonds, Ernest
    et al.
    Martin, Andrew
    Pauletto, Sandra
    Audio-visual interfaces in digital art (2004). In: Proceedings of the 2004 ACM SIGCHI International Conference on Advances in computer entertainment technology, 2004, p. 331-336. Conference paper (Refereed)
  • 9.
    Falkenberg, Kjetil
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Holzapfel, Andre
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Musikkommunikation och ljudinteraktion [Music communication and sound interaction] (2021). In: Introduktion till medieteknik / [ed] Pernilla Falkenberg Josefsson, Mikael Wiberg, Lund: Studentlitteratur AB, 2021, p. 155-166. Chapter in book (Refereed)
  • 10.
    Falkenberg, Kjetil
    et al.
    Royal College of Music in Stockholm. KTH.
    Bresin, Roberto
    KTH.
    Holzapfel, Andre
    KTH.
    Pauletto, Sandra
    KTH.
    Gulz, Torbjörn
    Royal College of Music in Stockholm, Department of Jazz. KTH.
    Lindetorp, Hans
    Royal College of Music in Stockholm, Department of Music and Media Production. KTH.
    Misgeld, Olof
    Royal College of Music in Stockholm, Department of Folk Music. KTH.
    Sköld, Mattias
    Royal College of Music in Stockholm, Department of Folk Music. Royal College of Music in Stockholm, Department of Composition and Conducting. KTH.
    Student involvement in sound and music computing research: Current practices at KTH and KMH (2019). In: Combined proceedings of the Nordic Sound and Music Computing Conference 2019 and the Interactive Sonification Workshop 2019, 2019, p. 36-42. Conference paper (Refereed)
    Abstract [en]

    Engaging students in and beyond course activities has been a working practice at both the KTH Sound and Music Computing group and KMH Royal College of Music for many years. This paper collects experiences of involving students in research conducted within the two institutions. We describe how students attending our courses are given the possibility to be involved in our research activities, and we argue that their involvement both contributes to developing new research and benefits the students in the short and long term. Among the assignments, activities, and tasks we offer in our education programs are pilot experiments, prototype development, public exhibitions, performing, composing, data collection, analysis challenges, and bachelor and master thesis projects that lead to academic publications.

  • 11.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bouvier, Baptiste
    STMS IRCAM CNRS SU, Paris, France.
    Fraticelli, Matthieu
    Département d’études cognitives ENS, Paris, France.
    A Dual-Task Experimental Methodology for Exploration of Saliency of Auditory Notifications in a Retail Soundscape (2023). In: Proceedings of the 28th International Conference on Auditory Display (ICAD2023): Sonification for the Masses, 2023. Conference paper (Refereed)
    Abstract [en]

    This paper presents an experimental design of a dual-task experiment aimed at exploring the salience of auditory notifications. The first task is a Sustained Attention to Response Task (SART) and the second task involves listening to a complex store soundscape that includes ambient sounds, background music and auditory notifications. In this task, subjects are asked to press a button when an auditory notification is detected. The proposed method is based on a triangulation approach in which quantitative variables are combined with perceptual ratings and free-text question replies to obtain a holistic picture of how the sound environment is perceived. Results from this study can be used to inform the design of systems presenting music and peripheral auditory notifications in a retail environment.
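    The SART primary task mentioned above follows the standard go/no-go digit paradigm: respond to every digit except a designated no-go target. A toy sketch of how such a trial schedule might be generated (digit range and target are the conventional paradigm values, not details taken from this paper):

    ```python
    import random

    def sart_trials(n_trials=9, target=3, seed=0):
        """Generate a toy SART schedule: each trial is a digit plus the
        correct response ('withhold' only for the no-go target digit)."""
        rng = random.Random(seed)
        trials = []
        for _ in range(n_trials):
            digit = rng.randint(1, 9)
            trials.append((digit, "withhold" if digit == target else "respond"))
        return trials

    schedule = sart_trials()
    ```

    In the dual-task design, a schedule like this would run while the participant simultaneously monitors the store soundscape for auditory notifications.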

  • 12.
    Hansen, Kjetil Falkenberg
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Holzapfel, Andre
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Gulz, Torbjörn
    KMH Royal College of Music in Stockholm.
    Lindetorp, Hans
    KMH Royal College of Music in Stockholm.
    Misgeld, Olof
    KMH Royal College of Music in Stockholm.
    Sköld, Mattias
    KMH Royal College of Music in Stockholm.
    Student involvement in sound and music computing research: Current practices at KTH and KMH (2019). In: Combined proceedings of the Nordic Sound and Music Computing Conference 2019 and the Interactive Sonification Workshop 2019, Stockholm, 2019, p. 36-42. Conference paper (Refereed)
    Abstract [en]

    Engaging students in and beyond course activities has been a working practice at both the KTH Sound and Music Computing group and KMH Royal College of Music for many years. This paper collects experiences of involving students in research conducted within the two institutions.

    We describe how students attending our courses are given the possibility to be involved in our research activities, and we argue that their involvement both contributes to developing new research and benefits the students in the short and long term. Among the assignments, activities, and tasks we offer in our education programs are pilot experiments, prototype development, public exhibitions, performing, composing, data collection, analysis challenges, and bachelor and master thesis projects that lead to academic publications.

  • 13. Hillman, Neil
    et al.
    Pauletto, Sandra
    Audio Imagineering: Utilising the Four Sound Areas Framework for Emotive Sound Design within Contemporary Audio Post-production (2016). In: The New Soundtrack, ISSN 2042-8855, Vol. 6, no 1, p. 77-107. Article in journal (Refereed)
  • 14. Hillman, Neil
    et al.
    Pauletto, Sandra
    The Craftsman: The use of sound design to elicit emotions (2014). In: The Soundtrack, ISSN 1751-4193, Vol. 7, no 1, p. 5-23. Article in journal (Refereed)
  • 15.
    Houel, Malcolm
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Arun, Abhilash
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Berg, Alfred
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Iop, Alessandro
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Barahona-Rios, Adrian
    The University of York.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Perception of Emotions in Knocking Sounds: an Evaluation Study (2020). In: Proceedings of the Sound and Music Computing Conferences, 2020. Conference paper (Refereed)
    Abstract [en]

    Knocking sounds are highly meaningful everyday sounds. There exist many ways of knocking, expressing important information about the state of the person knocking and their relationship with the other side of the door. In media production, knocking sounds are important storytelling devices: they allow transitions to new scenes and create expectations in the audience. Despite this important role, knocking sounds have rarely been the focus of research. In this study, we create a data set of knocking actions performed with different emotional intentions. We then verify, through a listening test, whether these emotional intentions are perceived through listening to sound alone. Finally, we perform an acoustic analysis of the experimental data set to identify whether emotion-specific acoustic patterns emerge. The results show that emotional intentions are correctly perceived for some emotions. Additionally, the emerging emotion-specific acoustic patterns confirm, at least in part, findings from previous research in speech and music performance.

  • 16. Hunt, Andy
    et al.
    Hermann, Thomas
    Pauletto, Sandra
    Interacting with sonification systems: closing the loop (2004). In: Proceedings of the Eighth International Conference on Information Visualisation (IV 2004), IEEE, 2004, p. 879-884. Conference paper (Refereed)
  • 17.
    Hölling, Josefine
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Svahn, Maria
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Audio-Visual Interactive Art: Investigating the effect of gaze-controlled audio on visual attention and short-term memory (2021). In: AM '21: Audio Mostly 2021 Proceedings, Association for Computing Machinery (ACM), 2021. Conference paper (Refereed)
    Abstract [en]

    This article presents the development and testing of a system for interactive art. The system utilises eye tracking technology to detect the eye movements of people looking at a painting, and then uses this data to trigger sounds related to the painting. The system was developed in collaboration with a visual artist and a musician with the ultimate aim to integrate it into a new piece of interactive art. Described here in detail is the design and development of a prototype which was tested using a copy of a painting by Hieronymus Bosch: The Garden of Earthly Delights. Through this system we tested whether gaze-controlled audio affects visual attention and short-term memory.
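    The core mapping of such a system, gaze position to triggered sound, can be sketched as follows. The region names, coordinates, and sound files here are invented for illustration; the abstract does not describe the prototype's mapping at this level of detail.

    ```python
    def gaze_to_sound(x, y, regions):
        """Return the sound cue for the painting region containing the
        gaze point (x, y), in normalised screen coordinates, or None if
        the gaze falls outside every region."""
        for name, (x0, y0, x1, y1), sound in regions:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return sound
        return None

    # Hypothetical regions of the painting and their associated sounds
    regions = [
        ("fountain", (0.1, 0.2, 0.4, 0.6), "water_loop.wav"),
        ("birds",    (0.6, 0.1, 0.9, 0.4), "birdsong.wav"),
    ]
    cue = gaze_to_sound(0.25, 0.4, regions)  # gaze resting on the fountain
    ```

    A real implementation would add dwell-time thresholds so brief saccades across a region do not trigger its sound.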

  • 18.
    Idrovo, René
    et al.
    The University of York.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Immersive Point-of-Audition: Alfonso Cuarón’s Three-Dimensional Sound Design Approach (2019). In: Music, Sound, and the Moving Image, ISSN 1753-0768. Article in journal (Refereed)
    Abstract [en]

    Technological advances have always had an impact on the development of new audio-visual aesthetics. Recently, exploiting the spatial capabilities of immersive sound technology in the form of Dolby Atmos, Alfonso Cuarón introduced in Gravity (2013) an innovative sound design approach that enhances the illusion of ‘presence’ in the space of the diegesis by always maintaining a coherent, realistic, and immersive representation of a given point-of-audition. Such sonic strategy – which we have termed immersive point-of-audition – provides a three-dimensional representation of the filmic space, localising sound effects, music, and dialogue in accordance with the position of the sources within the diegesis. In this paper, we introduce the definition and main characteristics of this emergent sound design approach, and using Gravity as an illustrative example, we argue that it has the potential to facilitate the processes of transportation and identification in cinema.

  • 19.
    Iop, Alessandro
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Perception of Emotions in Multimodal Stimuli: the Case of Knocking on a Door (2021). In: Proceedings of the Sound and Music Computing Conferences, Sound and Music Computing Network, 2021, p. 233-237. Conference paper (Refereed)
    Abstract [en]

    Knocking sounds are highly expressive. In our previous research we have shown that from the sound of knocking actions alone a person can differentiate between different basic emotional states. In media productions, such as film and games, knocks can be very important storytelling devices as they allow the story to transition from one part to another. Research has shown that colours can affect our perception of emotions. However the relationship between colours and emotions is complex and dependent on multiple factors. In this study we investigate how the visual characteristics of a door, more specifically its colour, texture and material, presented together with emotionally expressive knocking actions, can affect the perception of the overall emotion evoked in the audience. Results show that the door's visual characteristics have little effect on the overall perception of emotions, which remains dominated by the emotions expressed by the knocking sounds.

  • 20. Keenan, Fiona
    et al.
    Pauletto, Sandra
    A Mechanical Mapping Model for Real-Time Control of a Complex Physical Modelling Synthesis Engine with a Simple Gesture (2017). In: International Conference on Digital Audio Effects (DAFx), 2017. Conference paper (Refereed)
  • 21. Keenan, Fiona
    et al.
    Pauletto, Sandra
    An Acoustic Wind Machine and its Digital Counterpart: Initial Audio Analysis and Comparison (2016). In: Interactive Audio Systems Symposium, 2016, p. 1-5. Conference paper (Refereed)
  • 22. Keenan, Fiona
    et al.
    Pauletto, Sandra
    Design and evaluation of a digital theatre wind machine (2017). In: The International Conference on New Interfaces for Musical Expression (NIME), 2017, p. 431-435. Conference paper (Refereed)
  • 23. Keenan, Fiona
    et al.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Evaluating a continuous sonic interaction: Comparing a performable acoustic and digital everyday sound (2019). In: Proceedings of the Sound and Music Computing Conferences, CERN, 2019, p. 127-134. Conference paper (Refereed)
    Abstract [en]

    This paper reports on the procedure and results of an experiment to evaluate a continuous sonic interaction with an everyday wind-like sound created by both acoustic and digital means. The interaction is facilitated by a mechanical theatre sound effect, an acoustic wind machine, which is performed by participants. This work is part of wider research into the potential of theatre sound effect designs as a means to study multisensory feedback and continuous sonic interactions. An acoustic wind machine is a mechanical device that affords a simple rotational gesture to a performer; turning its crank handle at varying speeds produces a wind-like sound. A prototype digital model of a working acoustic wind machine is programmed, and the acoustic interface drives the digital model in performance, preserving the same tactile and kinaesthetic feedback across the continuous sonic interactions. Participants' performances are elicited with sound stimuli produced from simple gestural performances of the wind-like sounds. The results of this study show that the acoustic wind machine is rated as significantly easier to play than its digital counterpart. Acoustical analysis of the corpus of participants' performances suggests that the mechanism of the wind machine interface may play a role in guiding their rotational gestures.
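    A digital wind-machine voice of the kind described, where crank rotation speed drives a wind-like sound, might be sketched as noise through a speed-controlled low-pass filter. This is an assumed mapping for illustration only, not the authors' actual model.

    ```python
    import numpy as np

    def wind_machine(crank_speed, sr=44100):
        """Toy digital wind machine: white noise through a one-pole
        low-pass filter whose cutoff and gain follow the crank rotation
        speed (rev/s). Faster cranking gives a brighter, louder wind."""
        n = len(crank_speed)
        noise = np.random.default_rng(0).uniform(-1.0, 1.0, n)
        out = np.zeros(n)
        y = 0.0
        for i in range(n):
            speed = crank_speed[i]
            cutoff = 100.0 + 400.0 * speed              # faster crank -> brighter
            alpha = 1.0 - np.exp(-2 * np.pi * cutoff / sr)
            y += alpha * (noise[i] - y)                 # one-pole low-pass
            out[i] = y * min(speed, 1.0)                # faster crank -> louder
        return out

    # One second in which the crank accelerates from rest to 1 rev/s
    speed_env = np.linspace(0.0, 1.0, 44100)
    wind = wind_machine(speed_env)
    ```

    The key property the study evaluates, a continuous gesture-to-sound mapping driven by the same physical crank, is what the speed envelope stands in for here.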

  • 24.
    Keenan, Fiona
    et al.
    The University of York.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Evaluating a Sonic Interaction Design Based on a Historic Theatre Sound Effect (2022). In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 164, article id 102836. Article in journal (Refereed)
    Abstract [en]

    This paper reports on the procedure and results of a preliminary experiment to evaluate participants’ perceptual experiences of a mechanical theatre sound effect and its digital counterpart. The theatre sound effect chosen - an acoustic wind machine - affords a simple rotational gesture; turning its crank handle at varying speeds produces a convincing wind-like sound. A prototype digital model of a working acoustic wind machine was programmed. The mechanical interface of the acoustic wind machine drove both the digital model and its own acoustic sound in performance, therefore preserving the same tactile and kinaesthetic feedback across the two continuous sonic interactions. Participants were presented with two listening tests to examine the perceived similarity of these wind-like sounds and the perceived connection between the speed of the crank handle and the resulting sound. Participants’ performances of both the acoustic and digital systems were then elicited with sound stimuli produced from simple gestural performances of the wind-like sounds. The results of this study show that, while the sound of the prototype digital model requires further calibration to bring the experience of its performance closer to that of its acoustic counterpart, the acoustic wind machine is significantly easier to play, and the mechanism of its interface may play a role in perceptually guiding performance gestures.

  • 25. Keenan, Fiona
    et al.
    Pauletto, Sandra
    Listening Back: Exploring the Sonic Interactions at the Heart of Historical Sound Effects Performance (2017). In: The New Soundtrack, ISSN 2042-8855, Vol. 7, no 1, p. 15-30. Article in journal (Refereed)
  • 26. Lopez, Julieta Mariana
    et al.
    Pauletto, Sandra
    The design of an audio film for the visually impaired (2009). In: Proceedings of the International Conference on Auditory Displays, Copenhagen, 2009. Conference paper (Refereed)
  • 27. Lopez, Mariana J
    et al.
    Pauletto, Sandra
    The sound machine: a study in storytelling through sound design (2010). In: Audio Mostly Conference, 2010. Conference paper (Refereed)
  • 28. Lopez, Mariana Julieta
    et al.
    Pauletto, Sandra
    The Design of an Audio Film: Portraying Story, Action and Interaction through Sound (2009). In: Journal of Music & Meaning, Vol. 8, no 2. Article in journal (Refereed)
    Abstract [en]

    Nowadays, audio description is used to enable visually impaired people to access films. However, it has an important limitation: visually impaired audiences must rely on a describer and cannot access the work directly. The aim of this project was to design a format of sonic art called audio film that eliminates the need for visual elements and for a describer by providing information solely through sound, sound processing and spatialisation, and which might be considered as an alternative to audio description. This project is also of interest for the domains of auditory displays and sonic interaction design, as solutions need to be found for effectively portraying storytelling information and characters' actions through sound (not narration). In order to explore the viability of this format, an example was designed based on Roald Dahl's Lamb to the Slaughter (1954) using a 6.1 surround sound configuration. Through executing the design of this example, we found that this format can successfully convey a story without the need for either visual elements or a narrator.

  • 29. Lopez, Mariana
    et al.
    Pauletto, Sandra
    Acoustic Measurement Methods for Outdoor Sites: a Comparative Study (2012). Conference paper (Refereed)
  • 30. Lopez, Mariana
    et al.
    Pauletto, Sandra
    The York Mystery Plays: Acoustics and Staging in Stonegate (2013). Conference paper (Refereed)
    Abstract [en]

    This paper introduces some of the main questions surrounding the staging and performance of the York Mystery Plays. It proposes that there is a need for interdisciplinary research that includes the consideration of acoustics as an essential part of the analysis of the staging and performance of the plays and introduces a methodology to make such study possible.

  • 31. Lopez, Mariana
    et al.
    Pauletto, Sandra
    Kearney, Gavin
    The application of impulse response measurement techniques to the study of the acoustics of Stonegate, a performance space used in medieval English drama (2013). In: Acta Acustica united with Acustica, ISSN 1610-1928, Vol. 99, no 1, p. 98-109. Article in journal (Refereed)
  • 32.
    Madaghiele, Vincenzo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Demir, Arife Dila
    Estonian Academy of Arts, Tallinn, Estonia.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Heat-sensitive sonic textiles: fostering awareness of the energy we save by wearing warm fabrics (2023). In: SMC 2023: Proceedings of the Sound and Music Computing Conference 2023, Sound and Music Computing Network, 2023, p. 395-402. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe the development of two heat- and movement-sensitive sonic textile prototypes. The prototypes interactively sonify in real time the bodily temperature of the person who wears them, complementing the user's felt experience of warmth. The main aim is to make users aware of the heat exchanges between the body, the fabric, and the surrounding environment through non-intrusive and creative sonic interactions. After describing the design challenges and the technical development of the prototypes, in terms of textile fabrication, electronics and sound components, we discuss the results of two user experiments. In the first experiment, two different sonification approaches were evaluated, allowing us to select the most appropriate for the task. The prototypes' user experience was explored in the second experiment.
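    A minimal parameter-mapping sonification of temperature readings, with pitch rising as the fabric warms, can be sketched as follows. The mapping, frequencies, and sample values are our illustration, not the prototypes' actual sound design.

    ```python
    import numpy as np

    def sonify_temperature(temps_c, sr=8000, seg=0.25):
        """Render each temperature reading as a short sine tone whose
        pitch rises with warmth: 20 degC maps to 220 Hz, plus 20 Hz per
        additional degree (an arbitrary illustrative mapping)."""
        segments = []
        for t in temps_c:
            freq = 220.0 + 20.0 * (t - 20.0)
            n = int(sr * seg)
            time = np.arange(n) / sr
            segments.append(0.5 * np.sin(2 * np.pi * freq * time))
        return np.concatenate(segments)

    # Warming body-fabric contact rendered as three rising tones
    audio = sonify_temperature([24.0, 28.0, 33.0])
    ```

    A continuous, non-intrusive design like the one described would smooth between readings rather than emit discrete tones, but the underlying data-to-sound mapping is of this kind.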

  • 33.
    Madaghiele, Vincenzo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Demir, Dila
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Heat-sensitive sonic textiles: increasing awareness of the energy we save by wearing warm fabrics (2023). Conference paper (Refereed)
    Abstract [en]

    In this paper we describe the development of two heat- and movement-sensitive sonic textile prototypes. The prototypes interactively sonify in real time the bodily temperature of the person who wears them, complementing the user's felt experience of warmth. The main aim is to make users aware of the heat exchanges between the body, the fabric, and the surrounding environment through non-intrusive and creative sonic interactions. After describing the design challenges and the technical development of the prototypes, in terms of textile fabrication, electronics and sound components, we discuss the results of two user experiments. In the first experiment, two different sonification approaches were evaluated, allowing us to select the most appropriate for the task. The prototypes' user experience was explored in the second experiment.

    Download full text (pdf)
    fulltext
  • 34.
    Madaghiele, Vincenzo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Experimenting techniques for sonic implicit interactions: a real time sonification of body-textile heat exchange with sound augmented fabrics2022In: Proceedings of the Conference on Sonification of Health and Environmental Data (SoniHED 2022), ISBN 978-91-8040-358-0 / [ed] Pauletto, S., Delle Monache, S. and Selfridge, R., 2022Conference paper (Refereed)
    Abstract [en]

    In this paper we present our prototype of a sound-augmented blanket. With this artifact we aim to investigate the potential to achieve sonic implicit interactions through the auditory augmentation of fabrics. We describe the development of a blanket that sonifies the approximate temperature exchange between the body and the fabric, using sound as a medium of interaction and a carrier of information. We propose different methods for the auditory augmentation of fabrics through a piezoelectric contact microphone used for movement sensing. After describing the technical development of the prototype, we discuss our early findings from a qualitative standpoint, focusing on the process of sense-making of such an artifact in an evaluation based on free exploration. Our preliminary results suggest that different auditory augmentation models encourage different affordances and are able to provide a simple, creative and aesthetic experience. The ability of the chosen sonic interaction models to effectively communicate information should, however, be further investigated.

    Download full text (pdf)
    fulltext
  • 35.
    Madaghiele, Vincenzo
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    The Sonic Carpet: Real-Time Feedback of Energy Consumption and Emission Data through Sonic Interaction Design2022In: The 27th International Conference on Auditory Display (ICAD 2022), 2022, p. 55-63Conference paper (Refereed)
    Abstract [en]

    As buildings become increasingly automated and energy efficient, the relative impact of occupants on the overall building carbon footprint is expected to increase. Research shows that, by changing occupant behaviour, energy savings of between 5% and 15% could be achieved. A commonly used device for energy-related behaviour change is the smart meter, a visual-based interface which provides users with data about the energy consumption and emissions of their household. This paper approaches the problem from a Sonic Interaction Design point of view, with the aim of developing an alternative, sound-based design to provide feedback about some of the data usually accessed through smart meters. In this work, we experimented with the sonic augmentation of a common household object, a door mat, in order to provide a non-intrusive everyday sonic interaction. The prototype that we built is an energy-aware sonic carpet that provides real-time feedback on home electricity consumption and emissions through sound. An experiment was designed to evaluate the prototype from a user experience perspective, and to assess how users understand the chosen sonifications.

  • 36. Manolas, Christos
    et al.
    Pauletto, Sandra
    Enlarging the Diegetic Space: Uses of the Multi-channel Soundtrack in Cinematic Narrative2009In: The Soundtrack, Vol. 2, no 1, p. 39-55Article in journal (Refereed)
  • 37. Manolas, Christos
    et al.
    Pauletto, Sandra
    Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema2014In: 3D Research, ISSN 2092-6731, Vol. 5, no 3Article in journal (Refereed)
  • 38.
    Manolas, Christos
    et al.
    Ravensbourne University London.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Jang, Jon
    Soundtrack Loudness as a Depth Cue in Stereoscopic 3D Media2020In: Convergence. The International Journal of Research into New Media Technologies, ISSN 1354-8565, E-ISSN 1748-7382Article in journal (Refereed)
    Abstract [en]

    Assisted by the technological advances of the past decades, stereoscopic 3D cues are being increasingly integrated in both interactive and non-interactive media. Arguably, the main focus of this effort is placed on the creation of an increased sense of visual depth. Considering that human perception relies heavily on audiovisual integration rather than on visual information alone, it is rather surprising that relatively little attention has so far been given to the potential effect of the soundtrack on 3D depth perception, in contrast to the evident interest in the study of realistic 3D audio spatialisation techniques and technologies. The multisensory nature of human perception suggests that the potential of sound design as a means to influence depth perception in stereoscopic 3D visual environments may be worthy of further exploration. This study reports on our research into the possibilities of using alterations of the volume levels of the soundtrack to influence the perception of visual depth while viewing stereoscopic 3D animation clips. Based on previous findings indicating that the volume level of the soundtrack may be related to the perception of visual depth, a series of experiments further explored the effectiveness of this auditory cue. Results suggest that, under certain conditions, differences in the volume levels of the soundtrack could influence the judgement of visual depth in a way that frequently contradicts real-life expectations. It is suggested that different, more metaphorical perceptual mechanisms may be at play when viewing stereoscopic 3D presentations than in real life. In this context, we conclude that stereoscopic 3D media can benefit from further exploration of the effectiveness of certain auditory cues as a means to influence the perception of depth within the stereoscopic 3D environment.

  • 39. Misdariis, N.
    et al.
    Özcan, E.
    Grassi, M.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Barrass, S.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Susini, P.
    Sound experts’ perspectives on astronomy sonification projects2022In: Nature Astronomy, E-ISSN 2397-3366, Vol. 6, no 11, p. 1249-1255Article in journal (Refereed)
    Abstract [en]

    The Audible Universe project aims to create dialogue between two scientific domains investigating two distinct research objects: stars and sound. It was instantiated within a collaborative workshop that began to mutually acculturate the two communities by sharing and transmitting their respective knowledge, skills and practices. One main outcome of this exchange was a global view of the astronomical data sonification paradigm, covering the diversity of tools, uses and users (including visually impaired people), but also the current limitations and potential methods of improvement. From this viewpoint, here we present basic elements gathered and contextualized by sound experts in their respective fields (sound perception/cognition, sound design, psychoacoustics, experimental psychology), to anchor sonification for astronomy in a better-informed, methodological and creative process.

  • 40.
    Misdariis, Nicolas
    et al.
    STMS Ircam-CNRS-SU, Paris, France.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bonne, Nicolas
    Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, UK.
    Harrison, Chris
    School of Mathematics, Statistics and Physics, Newcastle University, Newcastle, UK.
    Meredith, Kate
    Geneva Lake Astrophysics and STEAM, Walworth, WI, USA.
    Zanella, Anita
    Istituto Nazionale di Astrofisica, Padua, Italy.
    The Audible Universe Workshop: an Interdisciplinary Approach to the Design and Evaluation of Tools for Astronomical Data Sonification2023Conference paper (Refereed)
    Abstract [en]

    Even if images of astrophysical objects are used by professional astronomers for research and by the public for outreach, we are all basically blind to the Universe. Challenging the idea that we should always use visualisations, there has been a growing interest in converting astronomical phenomena into sound, motivated by: making astronomy more accessible to people who are blind or visually impaired (BVI); creating more engaging educational resources; and enabling a deeper understanding of complex astronomical data. The Audible Universe (AU) workshop focuses on consolidating what has been done in the field so far and identifying the areas where most effort is required to make progress over the coming years. The second edition of the AU workshop (AU2) took place in 2022 and brought together 50 experts, including astronomers interested in sonification, sound designers, experts in sound perception, and educators. This community started a multi-disciplinary discussion about how to properly design and evaluate sonification tools. In this methodological and position paper, we present and discuss the main activities of the AU2 workshop, with a particular focus on activities concerned with the development of collaborative design processes and the implementation of methods for evaluation. While this workshop was dedicated to fostering exchanges between the sonification community and astronomers, the structure and methods used within the workshop are transferable to other application areas and constitute a contribution to the effort to develop interdisciplinary strategies for the development of the field of sonification.

    Download full text (pdf)
    fulltext
  • 41.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    ALIVE Exhibition: Art between Life and Science2016Other (Other (popular science, discussion, etc.))
  • 42.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Beyond sound objects2024In: Traces of Sound / [ed] Frisk, Henrik & Sanne Krogh Groth, Lund: Lund University Open Access, 2024, p. 51-66Chapter in book (Refereed)
    Abstract [en]

    In the mid-20th century, Pierre Schaeffer introduced the term objet sonore in his now famous Traité des objets musicaux (1966) and Solfège de l'objet sonore (1967). Since then, the English term object has been used in relation to sound in many contexts. In this essay I argue that while conceptualizing sound as an object has had, and probably continues to have, many benefits for the development of audio technology and for production methods, it also obscures and undermines some fundamental and unique characteristics of sound. To exemplify how and when conceptualizing sound as an object seems to be unhelpful, I use examples from media production, specifically the creative practice of Foley, and the use of sound in documentaries, with examples from works by documentary filmmaker Erik Gandini and others. Overall, this essay aims to contribute to a better understanding of what sound is by highlighting its unique, often contradictory, characteristics - its ability to help us trace what is relevant and truthful in what is in front of us - rather than what it might have in common with other creative materials.

    Download full text (pdf)
    fulltext
  • 43. Pauletto, Sandra
    Embodied Knowledge in Foley Artistry2017In: The Routledge Companion to Screen Music and Sound, Routledge, 2017, p. 338-Chapter in book (Refereed)
  • 44.
    Pauletto, Sandra
    Department of Theatre, Film and Television, University of York, United Kingdom.
    Film and theatre-based approaches for sonic interaction design2014In: Digital Creativity, ISSN 1462-6268, E-ISSN 1744-3806, Vol. 25, no 1, p. 15-26Article in journal (Refereed)
    Abstract [en]

    Sonic interaction design studies how digital sound can be used in interactive contexts to convey information, meaning, aesthetic and emotional qualities. This area of research is positioned at the intersection of sound and music computing, auditory displays and interaction design. The key issue the designer is asked to tackle is to create meaningful sound for objects and interactions that are often new. To date, there are no set design methodologies, but a variety of approaches available to the designer. Knowledge and understanding of how humans listen and interpret sound is the first step toward being able to create such sounds. This article discusses two original approaches that borrow techniques from film sound and theatre. Cinematic sound highlights how our interpretation of sound depends on listening modes and context, while theatre settings allow us to explore sonic interactions from the different perspectives of the interacting subject, the observer and the designer.

  • 45.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Foley performance and sonic implicit interactions: How Foley artists might hold the secret for the design of sonic implicit interactions2022In: The Body in Sound, Music and Performance: Studies in Audio and Sonic Arts, Informa UK Limited , 2022, p. 265-278Chapter in book (Refereed)
  • 46.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Invisible Seams: the Role of Foley and Voice Postproduction Recordings in the Design of Cinematic Performances2019In: Foundations in Sound Design for Linear Media: A Multidisciplinary Approach / [ed] Michael Filimowicz, Routledge, 2019Chapter in book (Refereed)
  • 47.
    Pauletto, Sandra
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Perspectives on Sound Design2014Collection (editor) (Refereed)
  • 48.
    Pauletto, Sandra
    Department of Theatre, Film and Television, University of York, York, United Kingdom.
    Speech technology and cinema: Can they learn from each other?2013In: Logopedics, Phoniatrics, Vocology, ISSN 1401-5439, E-ISSN 1651-2022, Vol. 38, no 3, p. 143-150Article in journal (Refereed)
    Abstract [en]

    The voice is the most important sound of a film soundtrack. It represents a character and it carries language. There are different types of cinematic voices: dialogue, internal monologues, and voice-overs. Conventionally, two main characteristics differentiate these voices: lip synchronization and the voice's attributes that make it appropriate for the character (for example, a voice that sounds very close to the audience can be appropriate for a narrator, but not for an onscreen character). What happens, then, if a film character can only speak through an asynchronous machine that produces a 'robot-like' voice? This article discusses the sound-related work and experimentation done by the author for the short film Voice by Choice. It also attempts to discover whether speech technology design can learn from its cinematic representation, and if such uncommon film protagonists can contribute creatively to transform the conventions of cinematic voices.

  • 49. Pauletto, Sandra
    The sound design of cinematic voices2012In: The New Soundtrack, ISSN 2042-8855, E-ISSN 2042-8863, Vol. 2, no 2, p. 127-142Article in journal (Refereed)
  • 50.
    Pauletto, Sandra
    et al.
    Department of Theatre, Film and Television, University of York, East Campus, Baird Lane, York YO10 5GB, United Kingdom.
    Balentine, Bruce
    Pidcock, Chris
    Jones, Kevin
    Bottaci, Leonardo
    Aretoulaki, Maria
    Wells, Jez
    Mundy, Darren P
    Balentine, James
    Exploring expressivity and emotion with artificial voice and speech technologies2013In: Logopedics, Phoniatrics, Vocology, ISSN 1401-5439, E-ISSN 1651-2022, Vol. 38, no 3, p. 115-125Article in journal (Refereed)
    Abstract [en]

    Emotion in audio-voice signals, as synthesized by text-to-speech (TTS) technologies, was investigated to formulate a theory of expression for user interface design. Emotional parameters were specified with markup tags, and the resulting audio was further modulated with post-processing techniques. Software was then developed to link a selected TTS synthesizer with an automatic speech recognition (ASR) engine, producing a chatbot that could speak and listen. Using these two artificial voice subsystems, investigators explored both artistic and psychological implications of artificial speech emotion. Goals of the investigation were interdisciplinary, with interest in musical composition, augmentative and alternative communication (AAC), commercial voice announcement applications, human-computer interaction (HCI), and artificial intelligence (AI). The work-in-progress points towards an emerging interdisciplinary ontology for artificial voices. As one study output, HCI tools are proposed for future collaboration.
