A LARGE-SCALE STUDY OF LANGUAGE MODELS FOR CHORD PREDICTION

Cited: 0
Authors
Korzeniowski, Filip [1 ]
Sears, David R. W. [2 ]
Widmer, Gerhard [1 ]
Affiliations
[1] Johannes Kepler Univ Linz, Dept Computat Percept, Linz, Austria
[2] Texas Tech Univ, Coll Visual & Performing Arts, Lubbock, TX 79409 USA
Funding
European Research Council;
Keywords
Language Modelling; Chord Prediction; Recurrent Neural Networks;
DOI
Not available
Chinese Library Classification
O42 [Acoustics];
Discipline Codes
070206; 082403;
Abstract
We conduct a large-scale study of language models for chord prediction. Specifically, we compare N-gram models to various flavours of recurrent neural networks on a comprehensive dataset comprising all publicly available datasets of annotated chords known to us. This large amount of data allows us to systematically explore hyper-parameter settings for the recurrent neural networks, a crucial step in achieving good results with this model class. Our results show not only a quantitative difference between the models, but also a qualitative one: in contrast to static N-gram models, certain RNN configurations adapt to the songs at test time. This finding constitutes a further step towards the development of chord recognition systems that are more aware of local musical context than was previously possible.
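To make the comparison concrete, the sketch below shows the kind of static N-gram baseline the abstract contrasts with adaptive RNNs: a bigram chord language model with additive smoothing, evaluated in average bits per chord. The class name, smoothing scheme, and toy progressions are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a bigram (N=2) chord language model with additive
# smoothing. Illustrative only; the paper's models and data differ.
from collections import defaultdict
import math

class BigramChordModel:
    def __init__(self, alpha=0.1):
        self.alpha = alpha                            # smoothing constant (assumed)
        self.counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def fit(self, songs):
        """songs: iterable of chord-symbol lists, e.g. [['C', 'G', 'Am', 'F'], ...]."""
        for song in songs:
            symbols = ['<s>'] + song                  # start-of-song padding
            self.vocab.update(song)
            for prev, cur in zip(symbols, symbols[1:]):
                self.counts[prev][cur] += 1

    def prob(self, prev, cur):
        """Additively smoothed conditional probability P(cur | prev)."""
        follow = self.counts[prev]
        total = sum(follow.values())
        return (follow[cur] + self.alpha) / (total + self.alpha * len(self.vocab))

    def bits_per_chord(self, song):
        """Average negative log2-probability (cross-entropy) of one song."""
        symbols = ['<s>'] + song
        nll = -sum(math.log2(self.prob(p, c))
                   for p, c in zip(symbols, symbols[1:]))
        return nll / len(song)

# Toy usage: train on two progressions, score a third.
model = BigramChordModel()
model.fit([['C', 'G', 'Am', 'F'], ['C', 'F', 'G', 'C']])
print(model.bits_per_chord(['C', 'G', 'F']))          # bits per chord symbol
```

Unlike the RNN configurations described in the abstract, this model's transition counts are frozen after training, so it cannot adapt to a song's local harmonic patterns at test time.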
Pages: 91-95
Number of pages: 5
Related Papers
50 records in total
  • [1] A Study on Prompt Types for Harmlessness Assessment of Large-Scale Language Models
    Shin, Yejin
    Kim, Song-yi
    Byun, Eun Young
    [J]. HCI INTERNATIONAL 2024 POSTERS, PT VII, HCII 2024, 2024, 2120: 228-233
  • [2] Large-scale replication study reveals a limit on probabilistic prediction in language comprehension
    Nieuwland, Mante S.
    Politzer-Ahles, Stephen
    Heyselaar, Evelien
    Segaert, Katrien
    Darley, Emily
    Kazanina, Nina
    Von Grebmer Zu Wolfsthurn, Sarah
    Bartolozzi, Federica
    Kogan, Vita
    Ito, Aine
    Meziere, Diane
    Barr, Dale J.
    Rousselet, Guillaume A.
    Ferguson, Heather J.
    Busch-Moreno, Simon
    Fu, Xiao
    Tuomainen, Jyrki
    Kulakova, Eugenia
    Husband, E. Matthew
    Donaldson, David I.
    Kohut, Zdenko
    Rueschemeyer, Shirley-Ann
    Huettig, Falk
    [J]. ELIFE, 2018, 7
  • [3] Improving Large-scale Language Models and Resources for Filipino
    Cruz, Jan Christian Blaise
    Cheng, Charibeth
    [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022: 6548-6555
  • [4] Large-scale Point-of-Interest Category Prediction Using Natural Language Processing Models
    Zhang, Daniel
    Wang, Dong
    Zheng, Hao
    Mu, Xin
    Li, Qi
    Zhang, Yang
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2017: 1027-1032
  • [5] Large Language Models as Commonsense Knowledge for Large-Scale Task Planning
    Zhao, Zirui
    Lee, Wee Sun
    Hsu, David
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [6] Large-scale study of chord estimation algorithms based on chroma representation and HMM
    Papadopoulos, Helene
    Peeters, Geoffroy
    [J]. 2007 INTERNATIONAL WORKSHOP ON CONTENT-BASED MULTIMEDIA INDEXING, PROCEEDINGS, 2007: 53+
  • [7] On the Multilingual Capabilities of Very Large-Scale English Language Models
    Armengol-Estape, Jordi
    de Gibert Bonet, Ona
    Melero, Maite
    [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022: 3056-3068
  • [8] Limits of Detecting Text Generated by Large-Scale Language Models
    Varshney, Lav R.
    Keskar, Nitish Shirish
    Socher, Richard
    [J]. 2020 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2020
  • [9] Large-Scale Random Forest Language Models for Speech Recognition
    Su, Yi
    Jelinek, Frederick
    Khudanpur, Sanjeev
    [J]. INTERSPEECH 2007: 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, VOLS 1-4, 2007: 945-948
  • [10] MedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models
    Cai, Yan
    Wang, Linlin
    Wang, Ye
    de Melo, Gerard
    Zhang, Ya
    Wang, Yanfeng
    He, Liang
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38, NO 16, 2024: 17709-17717