Digitala Vetenskapliga Arkivet

1 - 12 of 12
  • 1.
    Bresin, Roberto
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM STMS Lab.
    Latupeirissa, Adrian Benigno
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound. 2021. Conference paper (Refereed)
    Abstract [en]

    The aim of the SONAO project is to establish new methods based on sonification of expressive movements for achieving a robust interaction between users and humanoid robots. We want to achieve this by combining competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We want to engineer sound models for implementing effective mappings between stylized body movements and sound parameters that will enable an agent to express high-level body motion qualities through sound. These mappings are paramount for supporting feedback to and understanding robot body motion. The project will result in the development of new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work is part of the growing research field known as data sonification, in which we combine methods and knowledge from the fields of interactive sonification, embodied cognition, multisensory perception, non-verbal and gestural communication in robots.

  • 2.
    Fernandez-Martín, Claudio
    et al.
    CVBLab, Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022 Valencia, Spain.
    Colomer, Adrian
    CVBLab, Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022 Valencia, Spain; ValgrAI - Valencian Graduate School and Research Network for Artificial Intelligence, Universitat Politècnica de València, 46022 Valencia, Spain.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Naranjo, Valery
    CVBLab, Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022 Valencia, Spain.
    Choosing only the best voice imitators: Top-K many-to-many voice conversion with StarGAN. 2024. In: Speech Communication, ISSN 0167-6393, E-ISSN 1872-7182, Vol. 156, article id 103022. Article in journal (Refereed)
    Abstract [en]

    Voice conversion systems have become increasingly important as the use of voice technology grows. Deep learning techniques, specifically generative adversarial networks (GANs), have enabled significant progress in the creation of synthetic media, including the field of speech synthesis. One of the most recent examples, StarGAN-VC, uses a single pair of generator and discriminator to convert voices between multiple speakers. However, the training stability of GANs can be an issue. The Top-K methodology, which trains the generator using only the best K generated samples that “fool” the discriminator, has been applied to image tasks and simple GAN architectures. In this work, we demonstrate that the Top-K methodology can improve the quality and stability of converted voices in a state-of-the-art voice conversion system like StarGAN-VC. We also explore the optimal time to implement the Top-K methodology and how to reduce the value of K during training. Through both quantitative and qualitative studies, it was found that the Top-K methodology leads to quicker convergence and better conversion quality compared to regular or vanilla training. In addition, human listeners perceived the samples generated using Top-K as more natural and were more likely to believe that they were produced by a human speaker. The results of this study demonstrate that the Top-K methodology can effectively improve the performance of deep learning-based voice conversion systems.
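The Top-K selection step described in the abstract — training the generator only on the K generated samples that best "fool" the discriminator, with K shrinking over training — can be sketched as follows. This is an illustrative NumPy sketch of the general technique, not the paper's implementation; the function names and the linear annealing schedule are assumptions.

```python
import numpy as np

def select_top_k(disc_scores: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k generated samples that the discriminator
    rates as most 'real' (highest scores): the Top-K selection step."""
    return np.argsort(disc_scores)[-k:]

def topk_generator_loss(disc_scores: np.ndarray, k: int) -> float:
    """Non-saturating generator loss computed only on the Top-K samples:
    -mean(log D(G(z))) over the k best 'imitators' in the batch."""
    idx = select_top_k(disc_scores, k)
    return float(-np.mean(np.log(disc_scores[idx] + 1e-12)))

def anneal_k(step: int, batch_size: int, k_min: int, decay_every: int) -> int:
    """One possible reduction schedule: shrink K linearly from the full
    batch size down to k_min as training progresses."""
    return max(k_min, batch_size - step // decay_every)
```

With this sketch, vanilla training corresponds to `k == batch_size`; lowering K concentrates the generator's gradient on its most convincing outputs.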

  • 3.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. STMS IRCAM.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Haptic Music Players for Children with Profound and Multiple Learning Disabilities (PMLD): Exploring Different Modes of Interaction for Felt Sound. 2022. In: Proceedings of the 24th International Congress on Acoustics (ICA2022): A10-05 Physiological Acoustics - Multi-modal solutions to enhance hearing / [ed] Jeremy Marozeau, Sebastian Merchel, Gyeongju, South Korea: Acoustic Society of Korea, 2022, article id ABS-0021. Conference paper (Refereed)
    Abstract [en]

    This paper presents a six-month exploratory case study on the evaluation of three Haptic Music Players (HMPs) with four pre-verbal children with Profound and Multiple Learning Disabilities (PMLD). The evaluated HMPs were 1) a commercially available haptic pillow, 2) a haptic device embedded in a modified plush-toy backpack, and 3) a custom-built plush toy with a built-in speaker and tactile shaker. We evaluated the HMPs through qualitative interviews with a teacher who served as a proxy for the pre-verbal children participating in the study; the teacher augmented the students’ communication by reporting observations from each test session. The interviews explored functionality, accessibility, and user experience aspects of each HMP, and revealed significant differences between devices. Our findings highlighted the influence of physical affordances provided by the HMP designs and the importance of a playful design in this context. Results suggested that sufficient time should be allocated to HMP familiarization prior to any evaluation procedure, since experiencing musical haptics through objects is a novel experience that might require some time to get used to. We discuss design considerations for Haptic Music Players and provide suggestions for future developments of multimodal systems dedicated to enhancing music listening in special education settings.

  • 4.
    Frid, Emma
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. IRCAM, STMS Sci & Technol Mus & Son UMR9912, 1 Pl Igor Stravinsky, F-75004 Paris, France.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Núñez-Pacheco, Claudia
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children: A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies. 2022. In: Multimodal Technologies and Interaction, ISSN 2414-4088, Vol. 6, no 7, article id 55. Article in journal (Refereed)
    Abstract [en]

    Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., to actively include users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. In this paper, we apply in-depth qualitative and mixed methodologies in a case study with four students with PMLD. Using Participatory Design with Proxies (PDwP), we assess how these students can be involved in the customization and evaluation of the design of a multisensory music experience intended for a large-scale ADMI. Results from an experiment focused on communication of musical haptics highlighted the diversity of interaction strategies used by the children, accessibility limitations of the current multisensory experience design, and the importance of using a multifaceted variety of qualitative and quantitative methods to arrive at more informed conclusions when applying a design with proxies methodology.

  • 5.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Exploring emotion perception in sonic HRI. 2020. In: 17th Sound and Music Computing Conference, Torino: Zenodo, 2020, p. 434-441. Conference paper (Refereed)
    Abstract [en]

    Although sounds produced by robots can affect the interaction with humans, sound design is often an overlooked aspect of Human-Robot Interaction (HRI). This paper explores how different sets of sounds designed for expressive robot gestures of a humanoid Pepper robot can influence the perception of emotional intentions. In the pilot study presented in this paper, participants were asked to rate different stimuli in terms of perceived affective states. The stimuli were audio, audio-video, and video only, and contained either Pepper’s original servomotor noises, sawtooth waves, or more complex designed sounds. The preliminary results show a preference for the use of more complex sounds, thus confirming the necessity of further exploration in sonic HRI.

  • 6.
    Latupeirissa, Adrian Benigno
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Probing Aesthetics Strategies for Robot Sound: Complexity and Materiality in Movement Sonification. 2023. In: ACM Transactions on Human-Robot Interaction, E-ISSN 2573-9522. Article in journal (Refereed)
    Abstract [en]

    This paper presents three studies where we probe aesthetics strategies of sound produced by movement sonification of a Pepper robot by mapping its movements to sound models.

    We developed two sets of sound models. The first set was made by two sound models, a sawtooth-based one and another based on feedback chains, for investigating how the perception of synthesized robot sounds would depend on their design complexity. We implemented the second set of sound models for probing the “materiality” of sound made by a robot in motion. This set consisted of a sound synthesis based on an engine highlighting the robot’s internal mechanisms, a metallic sound synthesis highlighting the robot’s typical appearance, and a whoosh sound synthesis highlighting the movement.

    We conducted three studies. The first study explores how the first set of sound models can influence the perception of expressive gestures of a Pepper robot through an online survey. In the second study, we carried out an experiment in a museum installation with a Pepper robot presented in two scenarios: (1) while welcoming patrons into a restaurant and (2) while providing information to visitors in a shopping center. Finally, in the third study, we conducted an online survey with stimuli similar to those used in the second study.

    Our findings suggest that participants preferred more complex sound models for the sonification of robot movements. Concerning the materiality, participants liked better subtle sounds that blend well with the ambient sound (i.e., less distracting) and soundscapes in which sound sources can be identified. Also, sound preferences varied depending on the context in which participants experienced the robot-generated sounds (e.g., as a live museum installation vs. an online display).

  • 7.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Converging Creativity: Intertwining Music and Code. 2023. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This compilation thesis is a collection of case studies that presents examples of creative coding in various contexts, focusing on how such practice led to the creation and exploration of musical expressions, and how I interact with the design of the code itself. My own experience as a music composer influences this thesis work. By saying so, I mean that although the thesis places itself in the Sound and Music Computing academic tradition, it is also profoundly founded upon a personal artistic perspective. This perspective has been the overarching view that has informed the studies included in the thesis, despite all being quite different. The first part of the thesis describes the practice of creative coding, creativity models, and the interaction between code and coder. Then I propose a perspective on creative coding based on the idea of asymptotic convergence of creativity. This is followed by a presentation of five papers and three music works, all inspected through my stance on this creative practice. Finally, I examine and discuss these works in detail, concluding by suggesting that the asymptotic convergence of creativity framework might serve as a useful tool that adds to the literature on creative coding practice, especially for situations in which such work is carried out in an academic research setting.

  • 8.
    Panariello, Claudio
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Study in three phases: An Adaptive Sound Installation. 2020. In: Leonardo Music Journal, ISSN 0961-1215, E-ISSN 1531-4812, Vol. 30, p. 44-49. Article in journal (Refereed)
    Abstract [en]

    Study in three phases is an adaptive site-specific sound installation that includes 22 solenoids placed on metallic arches that surround visitors and react to environmental perturbations, creating a self-regulating soundscape of metallic hits that serves to renew the visitors' acoustic perspective. Adaptivity is a crucial aspect of the work: Similar perturbations will not generally cause similar reactions from the installation based on past interactions, thus allowing evolution over time to play a key role artistically and technically. This article discusses the author's position on adaptivity in music interaction and composition and reports on the technical and artistic aspects of the installation.

  • 9.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sonification of Computer Processes: The Cases of Computer Shutdown and Idle Mode. 2022. In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 16, article id 862663. Article in journal (Refereed)
    Abstract [en]

    Software is intangible, invisible, and at the same time pervasive in everyday devices, activities, and services accompanying our life. Therefore, citizens hardly realize its complexity, power, and impact in many aspects of their daily life. In this study, we report on one experiment that aims at letting citizens make sense of software presence and activity in their everyday lives, through sound: the invisible complexity of the processes involved in the shutdown of a personal computer. We used sonification to map information embedded in software events into the sound domain. The software events involved in a shutdown have names related to the physical world and its actions: write events (information is saved into digital memories), kill events (running processes are terminated), and exit events (running programs are exited). The research study presented in this article has a "double character." It is an artistic realization that develops specific aesthetic choices, and it also has pedagogical purposes, informing the casual listener about the complexity of software behavior. Two different sound design strategies have been applied: one strategy influenced by the sonic characteristics of the Glitch music scene, which makes deliberate use of glitch-based sound materials, distortions, aliasing, quantization noise, and all the "failures" of digital technologies; and a second strategy based on the sound samples of a subcontrabass Paetzold recorder, an unusual acoustic instrument whose unique sound has been investigated in the contemporary art music scene. Analysis of quantitative ratings and qualitative comments of 37 participants revealed that the sound design strategies succeeded in communicating the nature of the computer processes. Participants also showed in general an appreciation of the aesthetics of the peculiar sound models used in this study.
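The parameter-mapping sonification the abstract describes — write, kill, and exit events translated into the sound domain — can be illustrated with a minimal sketch. The event-to-parameter table and all values here are hypothetical, chosen only to show the mapping idea, not the sound design actually used in the study.

```python
# Hypothetical parameter-mapping sonification of shutdown events:
# each event type is assigned illustrative synthesis parameters.
EVENT_SOUND_MAP = {
    "write": {"pitch_hz": 220.0, "dur_s": 0.05, "amp": 0.3},  # short low ticks
    "kill":  {"pitch_hz": 880.0, "dur_s": 0.20, "amp": 0.8},  # sharp accents
    "exit":  {"pitch_hz": 440.0, "dur_s": 0.50, "amp": 0.5},  # longer releases
}

def sonify_log(events):
    """Map a sequence of (timestamp, event_type) pairs to a score:
    a list of (onset, pitch, duration, amplitude) tuples that any
    synthesizer could render."""
    score = []
    for onset, kind in events:
        params = EVENT_SOUND_MAP.get(kind)
        if params is None:  # unknown event types stay silent
            continue
        score.append((onset, params["pitch_hz"], params["dur_s"], params["amp"]))
    return score
```

The two aesthetic strategies in the paper (Glitch-based synthesis vs. Paetzold recorder samples) would correspond to two different renderings of the same score produced by such a mapping.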

  • 10.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    SuperOM: a SuperCollider class to generate music scores in OpenMusic. 2023. In: Proceedings of the 8th International Conference on Technologies for Music Notation and Representation (TENOR) / [ed] Anthony Paul De Ritis, Victor Zappi, Jeremy Van Buskirk and John Mallia, Boston, MA, USA: Northeastern University Library, 2023, p. 68-75. Conference paper (Refereed)
    Abstract [en]

    This paper introduces SuperOM, a class built for the software SuperCollider in order to create a bridge to OpenMusic and thus facilitate the creation of musical scores from SuperCollider patches. SuperOM is primarily intended to be used as a tool for SuperCollider users who make use of assisted composition techniques and want the output of such processes to be captured through automatic notation transcription. This paper first presents an overview of existing transcription tools for SuperCollider, followed by a detailed description of SuperOM and its implementation, as well as examples of how it can be used in practice. Finally, a case study in which the transcription tool was used as an assistive composition tool to generate the score of a sonification – which later was turned into a piano piece – is discussed.

  • 11.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Sköld, Mattias
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music.
    Frid, Emma
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Bresin, Roberto
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    From vocal sketching to sound models by means of a sound-based musical transcription system. 2019. In: Proceedings of the Sound and Music Computing Conferences, CERN, 2019, p. 167-173. Conference paper (Refereed)
    Abstract [en]

    This paper explores how notation developed for the representation of sound-based musical structures could be used for the transcription of vocal sketches representing expressive robot movements. A mime actor initially produced expressive movements which were translated to a humanoid robot. The same actor was then asked to illustrate these movements using vocal sketching. The vocal sketches were transcribed by two composers using sound-based notation. The same composers later synthesized new sonic sketches from the annotated data. Different transcriptions and synthesized versions of these were compared in order to investigate how the audible outcome changes for different transcriptions and synthesis routines. This method provides a palette of sound models suitable for the sonification of expressive body movements.

  • 12.
    Panariello, Claudio
    et al.
    KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID.
    Percivati, Chiara
    University of Antwerp, AP Hogeschool Antwerpen, Antwerp, Belgium.
    “WYPYM”: A Study for Feedback-Augmented Bass Clarinet. 2023. Conference paper (Refereed)