Transformer-Based Seq2Seq Model for Chord Progression Generation

Cited by: 4
Authors
Li, Shuyu [1 ]
Sung, Yunsick [2 ]
Affiliations
[1] Dongguk Univ Seoul, Grad Sch, Dept Multimedia Engn, Seoul 04620, South Korea
[2] Dongguk Univ Seoul, Dept Multimedia Engn, Seoul 04620, South Korea
Funding
National Research Foundation of Singapore
Keywords
chord progression generation; transformer; sequence-to-sequence; pre-training;
DOI
10.3390/math11051111
Chinese Library Classification
O1 [Mathematics]
Subject Classification
0701; 070101
Abstract
Machine learning is widely used in practical applications, and deep learning models show particular advantages in handling large volumes of data. Treating music as a special language and using deep learning models for melody recognition, music generation, and music analysis has proven feasible. In some music-related deep learning research, recurrent neural networks have been replaced with transformers, achieving significant results; in traditional recurrent-neural-network approaches, the length of input sequences is limited. This paper proposes a method for generating chord progressions for melodies using a transformer-based sequence-to-sequence model comprising a pre-trained encoder and a decoder. The pre-trained encoder extracts contextual information from melodies, and the decoder uses this information to produce chords asynchronously, finally outputting chord progressions. The proposed method addresses the length-limitation issue while accounting for the harmony between chord progressions and melodies, and can generate chord progressions for melodies in practical music-composition applications. Evaluation experiments compared the proposed method with three baseline models: bidirectional long short-term memory (BLSTM), bidirectional encoder representations from transformers (BERT), and the generative pre-trained transformer (GPT-2). The proposed method outperformed the baseline models in Hits@k (k = 1) by 25.89%, 1.54%, and 2.13%, respectively.
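To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract outlines: a transformer encoder-decoder that maps a melody token sequence to a chord token sequence, plus a Hits@k computation. All names and numbers here (ChordSeq2Seq, MELODY_VOCAB, CHORD_VOCAB, D_MODEL, etc.) are illustrative assumptions; the paper's actual tokenization, encoder pre-training procedure, and hyperparameters are not specified in this record.

```python
# Minimal sketch of a transformer seq2seq model for chord progression
# generation, assuming generic melody/chord token vocabularies.
import torch
import torch.nn as nn

MELODY_VOCAB, CHORD_VOCAB = 512, 128   # assumed vocabulary sizes
D_MODEL, N_HEAD, N_LAYERS = 256, 4, 4  # assumed model hyperparameters

class ChordSeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(MELODY_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(CHORD_VOCAB, D_MODEL)
        # The paper pre-trains its encoder on melodies; a randomly
        # initialized encoder stands in for it in this sketch.
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=N_HEAD,
            num_encoder_layers=N_LAYERS, num_decoder_layers=N_LAYERS,
            batch_first=True)
        self.out = nn.Linear(D_MODEL, CHORD_VOCAB)

    def forward(self, melody, chords):
        # Causal mask: each chord position attends only to earlier chords.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            chords.size(1))
        h = self.transformer(self.src_emb(melody), self.tgt_emb(chords),
                             tgt_mask=tgt_mask)
        return self.out(h)  # (batch, chord_len, CHORD_VOCAB) logits

def hits_at_k(logits, targets, k=1):
    """Hits@k: fraction of positions whose true chord is in the top-k."""
    topk = logits.topk(k, dim=-1).indices            # (batch, len, k)
    return (topk == targets.unsqueeze(-1)).any(-1).float().mean().item()

# Toy usage with random tokens, just to exercise the shapes.
model = ChordSeq2Seq()
melody = torch.randint(0, MELODY_VOCAB, (2, 32))
chords = torch.randint(0, CHORD_VOCAB, (2, 16))
print(hits_at_k(model(melody, chords), chords, k=1))
```

Such a model would typically be trained with token-level cross-entropy under teacher forcing; at inference, chords would be decoded step by step from a start token, conditioned on the encoded melody.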
Pages: 14
Related Papers
50 items in total
  • [31] Research on Short-Term Load Prediction Based on Seq2seq Model
    Gong, Gangjun
    An, Xiaonan
    Mahato, Nawaraj Kumar
    Sun, Shuyan
    Chen, Si
    Wen, Yafeng
    ENERGIES, 2019, 12 (16)
  • [32] CFCSS: Based on CF Network Convolutional Seq2Seq Model for Abstractive Summarization
    Liang, Qingmin
    Lu, Ling
    Chang, Tianji
    Yang, Wu
    PROCEEDINGS OF THE 15TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2020), 2020, : 1160 - 1164
  • [33] An efficient protein homology detection approach based on seq2seq model and ranking
    Gao, Song
    Yu, Shui
    Yao, Shaowen
    BIOTECHNOLOGY & BIOTECHNOLOGICAL EQUIPMENT, 2021, 35 (01) : 633 - 640
  • [34] Research On Human-computer Dialogue Based On Improved Seq2seq Model
    Shang, Wenqian
    Zhu, Sunyu
    Xiao, Dong
    2021 IEEE/ACIS 21ST INTERNATIONAL FALL CONFERENCE ON COMPUTER AND INFORMATION SCIENCE (ICIS 2021-FALL), 2021, : 204 - 209
  • [35] A Transformer Seq2Seq Model with Fast Fourier Transform Layers for Rephrasing and Simplifying Complex Arabic Text
    Alshanqiti, Abdullah
    Alkhodre, Ahmad
    Namoun, Abdallah
    Albouq, Sami
    Nabil, Emad
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2023, 14 (02) : 888 - 898
  • [36] Context-aware Scene Graph Generation with Seq2Seq Transformers
    Lu, Yichao
    Rai, Himanshu
    Chang, Jason
    Knyazev, Boris
    Yu, Guangwei
    Shekhar, Shashank
    Taylor, Graham W.
    Volkovs, Maksims
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 15911 - 15921
  • [37] A Hierarchical Attention Seq2seq Model with CopyNet for Text Summarization
    Zhang, Yong
    Wang, Yuheng
    Liao, Jinzhi
    Xiao, Weidong
    2018 INTERNATIONAL CONFERENCE ON ROBOTS & INTELLIGENT SYSTEM (ICRIS 2018), 2018, : 316 - 320
  • [38] Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation
    Xia, Heming
    Ge, Tao
    Wang, Peiyi
    Chen, Si-Qing
    Wei, Furu
    Sui, Zhifang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 3909 - 3925
  • [39] Sparsing and Smoothing for the seq2seq Models
    Zhao S.
    Liang Z.
    Wen J.
    Chen J.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2023, 4 (03) : 464 - 472
  • [40] Falls Prediction Based on Body Keypoints and Seq2Seq Architecture
    Hua, Minjie
    Nan, Yibing
    Lian, Shiguo
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 1251 - 1259