Digitala Vetenskapliga Arkivet

From vocal sketching to sound models by means of a sound-based musical transcription system
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. (Sound and Music Computing) ORCID iD: 0000-0002-1244-881x
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. KMH Royal College of Music. (Sound and Music Computing) ORCID iD: 0000-0003-1239-6746
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. (Sound and Music Computing) ORCID iD: 0000-0002-4422-5223
KTH, School of Electrical Engineering and Computer Science (EECS), Human Centered Technology, Media Technology and Interaction Design, MID. (Sound and Music Computing) ORCID iD: 0000-0002-3086-0322
2019 (English). In: Proceedings of the Sound and Music Computing Conferences, CERN, 2019, pp. 167-173. Conference paper, published paper (peer-reviewed)
Abstract [en]

This paper explores how notation developed for the representation of sound-based musical structures could be used for the transcription of vocal sketches representing expressive robot movements. A mime actor initially produced expressive movements, which were translated to a humanoid robot. The same actor was then asked to illustrate these movements using vocal sketching. The vocal sketches were transcribed by two composers using sound-based notation. The same composers later synthesized new sonic sketches from the annotated data. Different transcriptions, and synthesized versions of these, were compared to investigate how the audible outcome changes across transcriptions and synthesis routines. This method provides a palette of sound models suitable for the sonification of expressive body movements.
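The pipeline the abstract describes (movement → vocal sketch → transcription → synthesized sound model) is not specified in code in this record. As a purely illustrative sketch of the final sonification step, assuming a simple parameter-mapping approach (not the authors' actual synthesis routines), a normalized movement parameter such as velocity could be mapped logarithmically to pitch. The function name and frequency range below are hypothetical:

```python
import math

def sonify_movement(velocities, f_min=220.0, f_max=880.0):
    """Map normalized movement velocities (0..1) to pitches in Hz.

    Uses a logarithmic (exponential) mapping so that equal steps in
    velocity correspond to equal musical intervals -- a common choice
    in parameter-mapping sonification, and purely illustrative here.
    """
    ratio = f_max / f_min
    return [f_min * ratio ** v for v in velocities]

# Midpoint velocity 0.5 lands one octave above f_min:
freqs = sonify_movement([0.0, 0.5, 1.0])  # [220.0, 440.0, 880.0]
```

The logarithmic mapping reflects pitch perception: doubling frequency is heard as one octave, so a linear mapping would compress the perceived range at the low end.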

Place, publisher, year, edition, pages
CERN, 2019. pp. 167-173
Series
Proceedings of the Sound and Music Computing Conferences, ISSN 2518-3672
Keywords [en]
Computer programming, Computer science, Body movements, Humanoid robot, Musical structures, Musical transcription, Robot movements, Sonifications, Sound models, Anthropomorphic robots
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-274803
Scopus ID: 2-s2.0-85084386218
OAI: oai:DiVA.org:kth-274803
DiVA id: diva2:1445922
Conference
16th Sound and Music Computing Conference, SMC 2019, 28-31 May 2019, Malaga, Spain
Projects
SONAO
Note

QC 20210422

Available from: 2020-06-23. Created: 2020-06-23. Last updated: 2023-12-05. Bibliographically approved.

Open Access in DiVA

No full text available in DiVA

Other links

Scopus (published full text)

Search in DiVA

By author/editor
Panariello, Claudio; Sköld, Mattias; Frid, Emma; Bresin, Roberto
