Mutual exclusivity as a challenge for deep neural networks

Cited by: 0
Authors
Gandhi, Kanishk [1 ]
Lake, Brenden [2 ]
Affiliations
[1] New York Univ, New York, NY 10012 USA
[2] New York Univ, Facebook AI Res, New York, NY USA
Keywords
WORD; MODEL;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Strong inductive biases allow children to learn quickly and adaptably. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether vanilla neural architectures have an ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing task-general neural networks that learn through mutual exclusivity, which remains an open challenge.
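The ME inference described in the abstract can be made concrete with a small sketch. This is an illustrative toy, not code from the paper: given a novel word and a set of candidate referents, a learner with an ME bias prefers a referent that has no known label, while a learner without the bias has no such preference. All names (`me_infer`, the example objects, the novel word "dax") are hypothetical.

```python
# Toy illustration of the mutual exclusivity (ME) bias in word learning.
# A learner hears a novel word and must pick its referent from a set of
# objects; with an ME bias, the novel word maps to an object that does
# not already carry a known label.

def me_infer(novel_word, objects, known_labels):
    """Return the preferred referent for a novel word under an ME bias.

    known_labels: set of objects the learner already has words for.
    Objects without a known label are preferred as referents.
    """
    unlabeled = [obj for obj in objects if obj not in known_labels]
    # ME bias: prefer an unlabeled object if one exists; otherwise
    # fall back to the first candidate (no preference).
    return unlabeled[0] if unlabeled else objects[0]

# The learner knows words for "ball" and "cup"; "dax" is a novel word,
# so the ME bias maps it to the novel object "gizmo".
referent = me_infer("dax", ["ball", "cup", "gizmo"], {"ball", "cup"})
print(referent)  # -> gizmo
```

The paper's point is that vanilla neural networks trained with standard objectives do not exhibit this preference: they spread probability mass over already-labeled referents rather than favoring the novel one.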
Pages: 11
Related Papers
50 entries in total
  • [1] Context, Mutual Exclusivity, and the Challenge of Multiple Mappings in Word Learning
    Poepsel, Tim
    Gerfen, Chip
    Weiss, Daniel J.
    [J]. PROCEEDINGS OF THE 36TH ANNUAL BOSTON UNIVERSITY CONFERENCE ON LANGUAGE DEVELOPMENT, VOLS 1 AND 2, 2012, : 474+
  • [2] Entropy and mutual information in models of deep neural networks
    Gabrie, Marylou
    Manoel, Andre
    Luneau, Clement
    Barbier, Jean
    Macris, Nicolas
    Krzakala, Florent
    Zdeborova, Lenka
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [3] Entropy and mutual information in models of deep neural networks
    Gabrie, Marylou
    Manoel, Andre
    Luneau, Clement
    Barbier, Jean
    Macris, Nicolas
    Krzakala, Florent
    Zdeborova, Lenka
    [J]. JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, 2019, 2019 (12)
  • [4] MUTUAL EXCLUSIVITY LOSS FOR SEMI-SUPERVISED DEEP LEARNING
    Sajjadi, Mehdi
    Javanmardi, Mehran
    Tasdizen, Tolga
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 1908 - 1912
  • [5] ON NETWORK SCIENCE AND MUTUAL INFORMATION FOR EXPLAINING DEEP NEURAL NETWORKS
    Davis, Brian
    Bhatt, Umang
    Bhardwaj, Kartikeya
    Marculescu, Radu
    Moura, Jose M. P.
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8399 - 8403
  • [6] Detecting Adversarial Examples on Deep Neural Networks With Mutual Information Neural Estimation
    Gao, Song
    Wang, Ruxin
    Wang, Xiaoxuan
    Yu, Shui
    Dong, Yunyun
    Yao, Shaowen
    Zhou, Wei
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06) : 5168 - 5181
  • [7] Testing for Mutual Exclusivity
    Bradley, Jonathan R.
    Farnsworth, David L.
    [J]. JOURNAL OF APPLIED STATISTICS, 2009, 36 (11) : 1307 - 1314
  • [8] Deep Convolutional Neural Networks to Predict Mutual Coupling Effects in Metasurfaces
    An, Sensong
    Zheng, Bowen
    Shalaginov, Mikhail Y.
    Tang, Hong
    Li, Hang
    Zhou, Li
    Dong, Yunxi
    Haerinia, Mohammad
    Agarwal, Anuradha Murthy
    Rivero-Baleine, Clara
    Kang, Myungkoo
    Richardson, Kathleen A.
    Gu, Tian
    Hu, Juejun
    Fowler, Clayton
    Zhang, Hualiang
    [J]. ADVANCED OPTICAL MATERIALS, 2022, 10 (03)
  • [9] INVESTIGATING DEEP NEURAL NETWORKS FOR SPEAKER DIARIZATION IN THE DIHARD CHALLENGE
    Himawan, Ivan
    Rahman, Md Hafizur
    Sridharan, Sridha
    Fookes, Clinton
    Kanagasundaram, Ahilan
    [J]. 2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018), 2018, : 1029 - 1035
  • [10] Adaptive watermarking with self-mutual check parameters in deep neural networks
    Gao, Zhenzhe
    Yin, Zhaoxia
    Zhan, Hongjian
    Yin, Heng
    Lu, Yue
    [J]. PATTERN RECOGNITION LETTERS, 2024, 180 : 9 - 15