Music performance style transfer for learning expressive musical performance

Cited by: 0
Authors
Zhe Xiao
Xin Chen
Li Zhou
Affiliations
[1] China University of Geosciences, School of Automation
[2] Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, School of Arts and Communication
[3] China University of Geosciences
Source
Signal, Image and Video Processing, 2024, Vol. 18
Keywords
Expressive musical performance; Music generation; Music style transfer; Music information retrieval; Deep learning; Generative adversarial networks (GANs)
DOI: not available
Abstract
Generating expressive musical performance (EMP) is an active problem in music generation: music played by humans is consistently more expressive than music produced by machines, so understanding the role of human performance in music production is crucial. This paper proposes a performance style transfer model that learns human performance style and implements an EMP system. The model is built on generative adversarial networks (GANs) and takes as input a multi-channel image composed of four elaborated spectrograms, which is used to decompose and reconstruct music audio. To stabilize training, we design a multi-channel consistency loss for the GANs. Furthermore, given the lack of objective evaluation criteria for music generation, we propose a hybrid evaluation method that combines qualitative and quantitative measures of how well human needs are satisfied, with three quantitative criteria defined at the feature level and the audio level. Objective evaluation on a public dataset verifies the effectiveness of our method and shows it is comparable to state-of-the-art algorithms. Subjective evaluations are conducted through visual analyses of both audio content and style. Finally, we conduct a musical Turing test in which subjects score the generated performances. A series of experimental results shows that our method is highly competitive.
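The abstract names a multi-channel consistency loss over the four stacked spectrogram channels but does not spell out its form here. The following is a minimal PyTorch sketch under the assumption that the loss is a per-channel L1 reconstruction penalty on the 4-channel spectrogram image; the function name mc_consistency_loss, the channel count check, and the L1 formulation are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F

def mc_consistency_loss(real: torch.Tensor, reconstructed: torch.Tensor) -> torch.Tensor:
    """Hypothetical multi-channel consistency loss: penalize per-channel
    reconstruction error so no spectrogram view drifts during GAN training."""
    # Inputs are 4-channel spectrogram images of shape (batch, 4, freq, time).
    assert real.shape == reconstructed.shape and real.size(1) == 4
    # Element-wise absolute error, averaged within each channel so that
    # every spectrogram view contributes equally to the final scalar.
    per_channel = F.l1_loss(reconstructed, real, reduction="none").mean(dim=(0, 2, 3))
    return per_channel.mean()

# Usage sketch: a batch of 4-channel spectrogram images against a
# stand-in for the generator's cycle reconstruction.
x = torch.randn(2, 4, 128, 256)
x_hat = x + 0.05 * torch.randn_like(x)
print(mc_consistency_loss(x, x_hat).item())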
Pages: 889–898 (9 pages)
Related papers (50 records in total)
  • [31] Increased body movement equals better performance? Not always! Musical style determines motion degree perceived as optimal in music performance
    Moura, Nadia
    Fonseca, Pedro
    Vilas-Boas, Joao Paulo
    Serra, Sofia
    Psychological Research-Psychologische Forschung, 2024, 88(4): 1314–1330
  • [32] Computational models of expressive music performance: The state of the art
    Widmer, G
    Goebl, W
    Journal of New Music Research, 2004, 33(3): 203–216
  • [33] iFP: A music interface using an expressive performance template
    Katayose, H
    Okudaira, K
    Entertainment Computing - ICEC 2004, 2004, 3166: 529–540
  • [34] Understanding expressive music performance using genetic algorithms
    Ramirez, R
    Hazan, A
    Applications of Evolutionary Computing, Proceedings, 2005, 3449: 508–516
  • [35] A Multilayered Approach to Automatic Music Generation and Expressive Performance
    Carnovalini, Filippo
    Roda, Antonio
    2019 International Workshop on Multilayer Music Representation and Processing (MMRP 2019), 2019: 41–48
  • [36] The effect of various instructional conditions on expressive music performance
    Woody, Robert H.
    Journal of Research in Music Education, 2006, 54(1): 21–36
  • [37] Modeling expressive music performance in bassoon audio recordings
    Ramirez, Rafael
    Gomez, Emilia
    Vicente, Veronica
    Puiggros, Montserrat
    Hazan, Amaury
    Maestre, Esteban
    Intelligent Computing in Signal Processing and Pattern Recognition, 2006, 345: 951–957
  • [38] Music instruction and reading performance: Conceptual transfer in learning and development
    Muthivhi, Azwihangwisi E.
    Kriger, Samantha
    South African Journal of Childhood Education, 2019, 9(1)
  • [39] Using AI and machine learning to study expressive music performance: project survey and first report
    Widmer, G
    AI Communications, 2001, 14(3): 149–162