Transformer-Based Seq2Seq Model for Chord Progression Generation

Cited by: 4
Authors
Li, Shuyu [1 ]
Sung, Yunsick [2 ]
Affiliations
[1] Dongguk Univ Seoul, Grad Sch, Dept Multimedia Engn, Seoul 04620, South Korea
[2] Dongguk Univ Seoul, Dept Multimedia Engn, Seoul 04620, South Korea
Funding
National Research Foundation of Singapore;
Keywords
chord progression generation; transformer; sequence-to-sequence; pre-training;
DOI
10.3390/math11051111
CLC Classification
O1 [Mathematics];
Subject Classification Codes
0701; 070101;
Abstract
Machine learning is widely used in practical applications, and deep learning models have proven advantageous for handling large amounts of data. Treating music as a special kind of language and using deep learning models for melody recognition, music generation, and music analysis has proven feasible. In some music-related deep learning research, transformers have replaced recurrent neural networks and achieved significant results, in part because recurrent approaches limit the length of the input sequence. This paper proposes a method that generates chord progressions for melodies using a transformer-based sequence-to-sequence model composed of a pre-trained encoder and a decoder. The pre-trained encoder extracts contextual information from a melody, and the decoder uses this information to produce chords asynchronously, finally outputting a chord progression. The proposed method addresses the input-length limitation while accounting for the harmony between the chord progression and the melody, so chord progressions can be generated for melodies in practical music composition applications. Evaluation experiments compared the proposed method with three baseline models: bidirectional long short-term memory (BLSTM), bidirectional encoder representations from transformers (BERT), and the generative pre-trained transformer (GPT2). The proposed method outperformed these baselines in Hits@k (k = 1) by 25.89%, 1.54%, and 2.13%, respectively.
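The abstract describes an encoder-decoder split in which the encoder reads the whole melody once and the decoder then emits chord tokens conditioned on it. Below is a minimal sketch of that high-level structure using PyTorch's nn.Transformer; the vocabulary sizes, token ids, and hyperparameters are illustrative assumptions, positional encodings and the paper's encoder pre-training step are omitted, and this is not the authors' implementation.

```python
# Minimal sketch of a transformer-based seq2seq chord generator, assuming
# hypothetical vocabularies and token ids; illustrative only, not the
# authors' implementation (their encoder is additionally pre-trained, and
# positional encodings are omitted here for brevity).
import torch
import torch.nn as nn

MELODY_VOCAB = 128  # assumed: one token per melody event
CHORD_VOCAB = 96    # assumed: one token per chord symbol
BOS, EOS = 0, 1     # assumed special chord tokens

class ChordSeq2Seq(nn.Module):
    def __init__(self, d_model=256, nhead=4, num_layers=3):
        super().__init__()
        self.mel_emb = nn.Embedding(MELODY_VOCAB, d_model)
        self.chd_emb = nn.Embedding(CHORD_VOCAB, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, CHORD_VOCAB)

    def forward(self, melody, chords):
        # Causal mask: each chord position may attend only to earlier chords.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(
            chords.size(1)).to(chords.device)
        h = self.transformer(self.mel_emb(melody), self.chd_emb(chords),
                             tgt_mask=tgt_mask)
        return self.out(h)  # (batch, chord_len, CHORD_VOCAB) logits

@torch.no_grad()
def generate(model, melody, max_len=32):
    # Greedy decoding: the encoder consumes the whole melody, then chords
    # are emitted one at a time until EOS or max_len.
    chords = torch.tensor([[BOS]])
    for _ in range(max_len):
        nxt = model(melody, chords)[:, -1].argmax(-1, keepdim=True)
        chords = torch.cat([chords, nxt], dim=1)
        if nxt.item() == EOS:
            break
    return chords[:, 1:]  # drop BOS
```

For example, `generate(ChordSeq2Seq(), torch.randint(0, MELODY_VOCAB, (1, 64)))` would produce a chord-token sequence for a random 64-event melody; because the encoder attends over the entire melody at once, the input length is bounded only by the attention window, not by a recurrent state.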
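The abstract reports Hits@k as a percentage; under a common definition, assumed here (the paper's exact protocol may differ), a prediction counts as a hit when the ground-truth chord appears among the model's top-k scored chords:

```python
# Hits@k under an assumed (common) definition: the fraction of positions
# where the ground-truth chord id is among the top-k predicted chord ids.
import torch

def hits_at_k(logits: torch.Tensor, targets: torch.Tensor, k: int = 1) -> float:
    # logits: (N, vocab) scores per position; targets: (N,) true chord ids.
    topk = logits.topk(k, dim=-1).indices              # (N, k) best chord ids
    hits = (topk == targets.unsqueeze(-1)).any(dim=-1) # True if target in top-k
    return 100.0 * hits.float().mean().item()          # percentage, as reported
```

With k = 1 this reduces to top-1 accuracy, which matches the k = 1 setting used in the paper's comparison against the baselines.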
Pages: 14
Related Papers (50 total)
  • [1] Keyphrase Generation Based on Deep Seq2seq Model
    Zhang, Yong
    Xiao, Weidong
    IEEE ACCESS, 2018, 6 : 46047 - 46057
  • [2] Neural Question Generation based on Seq2Seq
    Liu, Bingran
    2020 5TH INTERNATIONAL CONFERENCE ON MATHEMATICS AND ARTIFICIAL INTELLIGENCE (ICMAI 2020), 2020, : 119 - 123
  • [3] Online time series monitoring method of transformer based on seq2seq model
    Lu, Fei
    Liu, Fan
    INTERNATIONAL JOURNAL OF LOW-CARBON TECHNOLOGIES, 2024, 19 : 142 - 148
  • [4] A Hierarchical Attention Based Seq2Seq Model for Chinese Lyrics Generation
    Fan, Haoshen
    Wang, Jie
    Zhuang, Bojin
    Wang, Shaojun
    Xiao, Jing
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT III, 2019, 11672 : 279 - 288
  • [5] Automatic Generation of Pseudocode with Attention Seq2seq Model
    Xu, Shaofeng
    Xiong, Yun
    2018 25TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE (APSEC 2018), 2018, : 711 - 712
  • [6] Exaggerated Portrait Caricatures Generation Based On Seq2Seq
    Xu, Kun
    Tang, Chenwei
    Lv, Jiancheng
    He, Zhenan
    2019 9TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND TECHNOLOGY (ICIST2019), 2019, : 36 - 44
  • [7] SGDG: Improving Transformer Seq2Seq Models through Span Generation and Denoise Generation
    Yang, Zhenfei
    Yu, Beiming
    Dou, Chenxiao
    Zhang, Qian
    Chua, Yansong
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 2, 2025, 14851 : 486 - 495
  • [8] A Chinese text corrector based on seq2seq model
    Gu, Sunyan
    Lang, Fei
    2017 INTERNATIONAL CONFERENCE ON CYBER-ENABLED DISTRIBUTED COMPUTING AND KNOWLEDGE DISCOVERY (CYBERC), 2017, : 322 - 325
  • [9] SparQL Query Prediction Based on Seq2Seq Model
    Yang D.-H.
    Zou K.-F.
    Wang H.-Z.
    Wang J.-B.
Ruan Jian Xue Bao/Journal of Software, 2021, 32 (03): 805 - 817
  • [10] Knowledge-based Questions Generation with Seq2Seq Learning
    Tang, Xiangru
    Gao, Hanning
    Gao, Junjie
    PROCEEDINGS OF THE 2018 IEEE INTERNATIONAL CONFERENCE ON PROGRESS IN INFORMATICS AND COMPUTING (PIC), 2018, : 180 - 184