Digitala Vetenskapliga Arkivet

Virtual virtuosity
KTH, Superseded Departments (pre-2005), Speech, Music and Hearing. ORCID iD: 0000-0002-3086-0322
2000 (English). Doctoral thesis, comprehensive summary (Other scientific)
Abstract [en]

This dissertation presents research in the field of automatic music performance with a special focus on piano.

A system is proposed for automatic music performance, based on artificial neural networks (ANNs). A complex, ecological-predictive ANN was designed that listens to the last played note, predicts the performance of the next note, looks three notes ahead in the score, and plays the current tone. This system was able to learn a professional pianist's performance style at the structural micro-level. In a listening test, performances by the ANN were judged clearly better than deadpan performances and slightly better than performances obtained with generative rules.
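
To make the "listen, predict, look ahead, play" loop concrete, the following Python fragment sketches one way such an ecological-predictive network could be wired: the input joins the performance of the last played note with score features for the current note and a three-note lookahead, and the predicted deviations for the current note are fed back at the next step. Feature counts, layer sizes, and weights are illustrative assumptions, not the network described in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    N_NOTE_FEATURES = 4   # hypothetical per-note score features (pitch, duration, ...)
    N_PERF_FEATURES = 2   # hypothetical performance features (timing and loudness deviation)
    LOOKAHEAD = 3         # notes ahead in the score, as in the description above

    n_in = N_PERF_FEATURES + N_NOTE_FEATURES * (1 + LOOKAHEAD)
    n_hidden, n_out = 8, N_PERF_FEATURES

    W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))   # untrained placeholder weights
    W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))

    def perform(score, initial_perf):
        """Walk through the score note by note, feeding back the last prediction."""
        last_perf = np.asarray(initial_perf, dtype=float)
        deviations = []
        for i in range(len(score) - LOOKAHEAD):
            window = np.concatenate([score[i + k] for k in range(LOOKAHEAD + 1)])
            x = np.concatenate([last_perf, window])   # listen + current note + lookahead
            h = np.tanh(W1 @ x)
            last_perf = W2 @ h                        # predicted deviations for this note
            deviations.append(last_perf)
        return np.array(deviations)

    # Toy usage: a random "score" of 12 notes, each with N_NOTE_FEATURES values.
    score = rng.normal(size=(12, N_NOTE_FEATURES))
    print(perform(score, np.zeros(N_PERF_FEATURES)).shape)   # -> (9, 2)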

The behavior of an ANN was compared with that of a symbolic rule system with respect to musical punctuation at the micro-level. The rule system mostly gave better results, but some segmentation principles of an expert musician were generalized only by the ANN.

Measurements of professional pianists' performances revealed interesting properties in the articulation of notes marked staccato and legato in the score. Performances were recorded on a grand piano connected to a computer. Staccato was realized by a micropause of about 60% of the inter-onset interval (IOI), while legato was realized by keeping two keys depressed simultaneously; the relative key overlap time depended on the IOI: the larger the IOI, the shorter the relative overlap. The magnitudes of these effects changed with the pianists' coloring of their performances and with the pitch contour. These regularities were modeled in a set of rules for articulation in automatic piano music performance.
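
The articulation regularities just described lend themselves to a compact rule formulation. The Python sketch below illustrates one possible reading: the ~60% micropause figure for staccato is taken from the text above, while the legato overlap function and its coefficients are placeholders, not the rules derived in the thesis.

    def staccato_key_off(onset_ms, ioi_ms, micropause_ratio=0.60):
        """Key-off time for a staccato note: leave ~60% of the IOI silent,
        the micropause size reported in the measurements above."""
        return onset_ms + (1.0 - micropause_ratio) * ioi_ms

    def legato_key_off(ioi_ms, next_onset_ms):
        """Key-off time for a legato note: overlap the next key, with the relative
        overlap shrinking as the IOI grows (placeholder linear mapping)."""
        relative_overlap = max(0.02, 0.25 - 0.0002 * ioi_ms)   # assumed coefficients
        return next_onset_ms + relative_overlap * ioi_ms

    # Example: two notes 500 ms apart.
    print(staccato_key_off(0.0, 500.0))   # 200.0 -> a 300 ms micropause (60% of IOI)
    print(legato_key_off(500.0, 500.0))   # 575.0 -> a 75 ms key overlap (15% of IOI)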

Emotional coloring of performances was realized by means of macro-rules implemented in the Director Musices performance system. These macro-rules are groups of rules that were combined such that they reflected previous observations on musical expression of specific emotions. Six emotions were simulated. A listening test revealed that listeners were able to recognize the intended emotional colorings.
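
The macro-rule idea can be pictured as named palettes of rule weights and global parameters layered on top of a nominal performance. The Python sketch below is a toy illustration of that mechanism only; the rule names, numerical values, and the two example emotion labels are assumptions, not the Director Musices settings or the six emotions actually simulated.

    NOMINAL_TEMPO_SCALE = 1.0

    MACRO_RULES = {   # placeholder palettes, not the published parameter sets
        "happiness_example": {"tempo_scale": 1.15, "sound_level_db": +3.0,
                              "rule_weights": {"duration_contrast": 1.5, "punctuation": 1.2}},
        "sadness_example":   {"tempo_scale": 0.80, "sound_level_db": -3.0,
                              "rule_weights": {"duration_contrast": 0.5, "final_ritard": 2.0}},
    }

    def apply_macro_rule(notes, emotion):
        """notes: list of dicts with 'ioi_ms' and 'level_db'; returns a colored copy."""
        m = MACRO_RULES[emotion]
        colored = []
        for n in notes:
            colored.append({
                "ioi_ms": n["ioi_ms"] / (NOMINAL_TEMPO_SCALE * m["tempo_scale"]),
                "level_db": n["level_db"] + m["sound_level_db"],
                # Individual rules (duration contrast, punctuation, ...) would be
                # applied here, each scaled by the weights in m["rule_weights"].
            })
        return colored

    print(apply_macro_rule([{"ioi_ms": 500.0, "level_db": 70.0}], "sadness_example"))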

In addition, some possible future applications are discussed in the fields of automatic music performance, music education, automatic music analysis, virtual reality, and sound synthesis.

Place, publisher, year, edition, pages
Stockholm: KTH, 2000, p. ix, 32
Series
Trita-TMH ; 2000:9
Keywords [en]
music, performance, expression, interpretation, piano, automatic, artificial neural networks
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-3049
ISBN: 91-7170-643-7 (print)
OAI: oai:DiVA.org:kth-3049
DiVA, id: diva2:8799
Public defence
2000-12-01, 00:00 (English)
Note
QC 20100518

Available from: 2000-11-29 Created: 2000-11-29 Last updated: 2022-06-23 Bibliographically approved
List of papers
1. Artificial neural networks based models for automatic performance of musical scores
1998 (English). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no 3, p. 239-270. Article in journal (Refereed). Published.
Abstract [en]

This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Department of Speech, Music and Hearing at the Royal Institute of Technology, Stockholm). The focus is on the evolution of the architecture of an artificial neural network (ANN) framework, from the first simple model, able to learn the KTH performance rules, to the final one, which accurately simulates the style of a real pianist, including time and loudness deviations. The task was to analyse and synthesise the performance process of a professional pianist playing on a Disklavier. An automatic analysis extracts all performance parameters of the pianist, starting from the KTH rule system. The system possesses good generalisation properties: with the same ANN, it is possible to perform different scores in the performing style used for training the networks. Brief descriptions of the program Melodia and of the two Java applets Japer and Jalisper are given in the Appendix. In Melodia, developed at the CSC, the user can run either rules or ANNs and study their different effects. Japer and Jalisper, developed at TMH, implement in real time on the web the performance rules developed at TMH, plus new features achieved by using ANNs.

Keywords
Rules, Computer Science, Interdisciplinary Applications; Music
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-12919 (URN)
10.1080/09298219808570748 (DOI)
000077650100004 ()
2-s2.0-77950849714 (Scopus ID)
Note
QC 20100519

Available from: 2010-05-19 Created: 2010-05-19 Last updated: 2022-06-25 Bibliographically approved
2. Musical punctuation on the microlevel: Automatic identification and performance of small melodic units
1998 (English). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 27, no 3, p. 271-292. Article in journal (Refereed). Published.
Abstract [en]

In this investigation we use the term musical punctuation for the marking of melodic structure by commas inserted at the boundaries that separate small structural units. Two models are presented that automatically try to locate the positions of such commas. Both use the score as input and operate within a short context of at most five notes. The first model is based on a set of subrules. One group of subrules marks possible comma positions, each provided with a weight value. Another group alters or removes these weight values according to different conditions. The second model is an artificial neural network using a similar input to that used by the rule system. The commas proposed by either model are realized in terms of micropauses and small lengthenings of inter-onset durations. The models are evaluated on a set of 52 musical excerpts, which were marked with punctuation according to the preferences of an expert performer.
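
As a concrete illustration of this weight-based scheme, the Python sketch below lets one rule group propose comma positions with weights, a second group adjust or cancel them, and the surviving commas be rendered as a small IOI lengthening plus a micropause. The specific conditions, thresholds, and amounts are invented for illustration and are not the subrules or parameter values of the model.

    def mark_candidates(pitches, iois):
        """First rule group (placeholders): propose a comma weight after each note."""
        w = [0.0] * len(pitches)
        for i in range(1, len(pitches) - 1):
            if abs(pitches[i + 1] - pitches[i]) >= 5:   # melodic leap after note i
                w[i] += 2.0
            if iois[i] > iois[i - 1]:                   # local lengthening
                w[i] += 1.0
        return w

    def adjust(weights, min_gap=3):
        """Second rule group (placeholder): keep marks at least min_gap notes apart."""
        out = [0.0] * len(weights)
        last = -min_gap
        for i, w in enumerate(weights):
            if w > 0 and i - last >= min_gap:
                out[i], last = w, i
        return out

    def render(iois, weights, threshold=2.0):
        """Realize surviving commas: lengthen the IOI slightly and shorten the tone
        so that a micropause precedes the next note."""
        out = []
        for ioi, w in zip(iois, weights):
            if w >= threshold:
                out.append({"ioi_ms": ioi * 1.10, "tone_ms": ioi * 0.80})
            else:
                out.append({"ioi_ms": ioi, "tone_ms": ioi})
        return out

    pitches = [60, 62, 64, 71, 69, 67, 60, 62]          # MIDI note numbers (toy melody)
    iois    = [250, 250, 500, 250, 250, 250, 500, 500]  # nominal IOIs in ms
    print(render(iois, adjust(mark_candidates(pitches, iois))))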

Keywords
MODEL
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-12920 (URN)
10.1080/09298219808570749 (DOI)
000077650100005 ()
2-s2.0-85066219105 (Scopus ID)
Note

QC 20100519

Available from: 2010-05-19 Created: 2010-05-19 Last updated: 2022-09-13 Bibliographically approved
3. Articulation strategies in expressive piano performance - Analysis of legato, staccato, and repeated notes in performances of the Andante movement of Mozart's Sonata in G major (K 545)
2000 (English). In: Journal of New Music Research, ISSN 0929-8215, E-ISSN 1744-5027, Vol. 29, no 3, p. 211-224. Article in journal (Refereed). Published.
Abstract [en]

Articulation strategies applied by pianists in expressive performances of the same score are analysed. Measurements of key overlap time and its relation to the inter-onset interval are collected for notes marked legato and staccato in the first sixteen bars of the Andante movement of W.A. Mozart's Piano Sonata in G major, K 545. Five pianists played the piece nine times. First, they played in a way that they considered "optimal". In the remaining eight performances they were asked to represent different expressive characters, as specified in terms of different adjectives. Legato, staccato, and repeated-note articulation applied by the right hand was examined by means of statistical analysis. Although the results varied considerably between pianists, some trends could be observed. The pianists generally used similar strategies in the renderings intended to represent different expressive characters. Legato was played with a key overlap ratio that depended on the inter-onset interval (IOI). Staccato tones had an approximate duration of 40% of the IOI. Repeated notes were played with a duration of about 60% of the IOI. The results seem useful as a basis for articulation rules in grammars for automatic piano performance.
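
The quantities analysed above can be recovered from key-on/key-off data with a few lines of code. The Python sketch below assumes a simple list of (key-on, key-off) times per melodic line; this input format, and the toy data, are assumptions rather than the study's measurement setup.

    def articulation_measures(notes):
        """notes: list of (key_on_ms, key_off_ms) for one melodic line, in order.
        Returns per-note IOI, duration-to-IOI ratio, and key overlap ratio."""
        measures = []
        for (on, off), (next_on, _) in zip(notes, notes[1:]):
            ioi = next_on - on
            measures.append({
                "ioi_ms": ioi,
                "duration_ratio": (off - on) / ioi,         # ~0.4 reported for staccato
                "key_overlap_ratio": (off - next_on) / ioi  # > 0 means legato overlap
            })
        return measures

    # Toy data: a detached (staccato-like) note followed by a legato-like pair.
    notes = [(0, 200), (500, 1060), (1000, 1400), (1500, 1900)]
    for m in articulation_measures(notes):
        print(m)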

Keywords
DIFFERENT SPEEDS, GRAND PIANO, PERCEPTION, ACOUSTICS
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-12921 (URN)
10.1076/jnmr.29.3.211.3092 (DOI)
000168323100004 ()
2-s2.0-0142056378 (Scopus ID)
Note
QC 20100519

Available from: 2010-05-19 Created: 2010-05-19 Last updated: 2023-06-08 Bibliographically approved
4. Production of staccato articulation in Mozart sonatas played on a grand piano: Preliminary results
2000 (English). In: Speech Music and Hearing Quarterly Progress and Status Report, ISSN 1104-5787, Vol. 41, no 4, p. 001-006. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Stockholm: KTH, 2000
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-12922 (URN)
Note
QC 20100519

Available from: 2010-05-19 Created: 2010-05-19 Last updated: 2022-06-25 Bibliographically approved
5. Emotional coloring of computer controlled music performance
2000 (English). In: Computer Music Journal, ISSN 0148-9267, E-ISSN 1531-5169, Vol. 24, no 4, p. 44-63. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Cambridge, MA, USA: MIT Press, 2000
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-12923 (URN)
10.1162/014892600559515 (DOI)
000166921100005 ()
2-s2.0-0002549801 (Scopus ID)
Note

QC 20100519

Available from: 2010-05-19 Created: 2010-05-19 Last updated: 2022-09-02 Bibliographically approved
6. Software tools for musical expression.
2000 (English). In: Proceedings of the International Computer Music Conference 2000 / [ed] Zannos, Ioannis, San Francisco, USA: Computer Music Association, 2000, p. 499-502. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
San Francisco, USA: Computer Music Association, 2000
National Category
Computer and Information Sciences
Research subject
Speech and Music Communication
Identifiers
urn:nbn:se:kth:diva-12924 (URN)
0-9667927-2-6 (ISBN)
Conference
International Computer Music Conference
Note

QC 20100519

Available from: 2010-05-19 Created: 2010-05-19 Last updated: 2022-06-25 Bibliographically approved

Open Access in DiVA

fulltext: FULLTEXT01.pdf (application/pdf, 280 kB)

Search in DiVA

By author/editor
Bresin, Roberto
By organisation
Speech, Music and Hearing
Engineering and Technology
