Vision Skills Needed to Answer Visual Questions

Cited by: 7
Authors
Zeng X. [1 ]
Wang Y. [2 ]
Chiu T.-Y. [3 ]
Bhattacharya N. [1 ]
Gurari D. [1 ]
Affiliations
[1] University of Texas at Austin, School of Information, Austin, TX 78701
[2] University of Wisconsin-Madison, Department of Computer Science, Madison, WI 53706
[3] University of Texas at Austin, Austin, TX 78701
Funding
U.S. National Science Foundation
Keywords
accessibility; computer vision; visual question answering
DOI
10.1145/3415220
Abstract
The task of answering questions about images has garnered attention as a practical service for assisting populations with visual impairments as well as a visual Turing test for the artificial intelligence community. Our first aim is to identify the common vision skills needed for both scenarios. To do so, we analyze the need for four vision skills (object recognition, text recognition, color recognition, and counting) on over 27,000 visual questions from two datasets representing both scenarios. We next quantify the difficulty of these skills for both humans and computers on both datasets. Finally, we propose a novel task of predicting what vision skills are needed to answer a question about an image. Our results reveal (mis)matches between the aims of real users of such services and the focus of the AI community. We conclude with a discussion about future directions for addressing the visual question answering task. © 2020 ACM.
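
The skill-prediction task proposed in the abstract can be read as multi-label classification: each question (and image) maps to the subset of the four skills needed to answer it. Below is a minimal illustrative sketch of that framing using only the question text; the toy examples, TF-IDF features, and one-vs-rest logistic-regression baseline are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: framing the proposed skill-prediction task as
# multi-label classification over question text. The toy data, TF-IDF
# features, and one-vs-rest logistic regression are assumptions, not the
# authors' method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# The four vision skills analyzed in the paper.
SKILLS = ["object recognition", "text recognition", "color recognition", "counting"]

# Hypothetical training examples: each question is paired with the set of
# skills needed to answer it (a question can require several skills).
questions = [
    "What does this sign say?",
    "What color is the couch?",
    "How many cans are on the shelf?",
    "What is this item?",
]
skill_labels = [
    {"text recognition"},
    {"color recognition", "object recognition"},
    {"counting", "object recognition"},
    {"object recognition"},
]

# Encode the skill sets as a binary indicator matrix (one column per skill).
binarizer = MultiLabelBinarizer(classes=SKILLS)
y = binarizer.fit_transform(skill_labels)

# One binary classifier per skill, sharing TF-IDF features of the question.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(questions, y)

# Predict which skills a new question needs; with such a tiny toy set the
# output only demonstrates the interface, not useful accuracy.
predicted = model.predict(["How many red cups are in the photo?"])
print(binarizer.inverse_transform(predicted))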