Visual Analytics for Fine-grained Text Classification Models and Datasets

Cited by: 0
Authors
Battogtokh, M. [1]
Xing, Y. [1]
Davidescu, C. [2]
Abdul-Rahman, A. [1]
Luck, M. [1]
Borgo, R. [1]
Affiliations
[1] King's College London, London, England
[2] ContactEngine, Southampton, England
Keywords
• Computing methodologies → Natural language processing; • Human-centered computing → Visual analytics
DOI
10.1111/cgf.15098
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject Classification Codes
081202; 0835
Abstract
In natural language processing (NLP), text classification tasks are increasingly fine-grained, as datasets are fragmented into a larger number of classes that are more difficult to differentiate from one another. As a consequence, the semantic structures of datasets have become more complex, and model decisions more difficult to explain. Existing tools, suited for coarse-grained classification, falter under these additional challenges. In response to this gap, we worked closely with NLP domain experts in an iterative design-and-evaluation process to characterize and tackle the growing requirements in their workflow of developing fine-grained text classification models. The result of this collaboration is the development of SemLa, a novel Visual Analytics system tailored for 1) dissecting complex semantic structures in a dataset when it is spatialized in model embedding space, and 2) visualizing fine-grained nuances in the meaning of text samples to faithfully explain model reasoning. This paper details the iterative design study and the resulting innovations featured in SemLa. The final design allows contrastive analysis at different levels by unearthing lexical and conceptual patterns including biases and artifacts in data. Expert feedback on our final design and case studies confirm that SemLa is a useful tool for supporting model validation and debugging as well as data annotation.
Pages: 12
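The abstract describes spatializing a dataset in a model's embedding space so that the semantic structure of fine-grained classes can be inspected visually. The snippet below is a minimal, illustrative sketch of that general idea only, not SemLa's actual pipeline. It assumes the sentence-transformers, scikit-learn, and matplotlib packages; the encoder name and the toy intent-classification samples are placeholders.

```python
# Illustrative sketch only: spatialize labeled text samples in a model
# embedding space and project them to 2D for visual inspection of class
# structure. The encoder name and toy samples below are placeholders.
import matplotlib.pyplot as plt
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

samples = [
    ("how do I reset my password", "account_reset"),
    ("I forgot my login credentials", "account_reset"),
    ("cancel my subscription please", "cancel_service"),
    ("I want to end my contract", "cancel_service"),
    ("when will my order arrive", "delivery_status"),
    ("track my package for me", "delivery_status"),
]
texts, labels = zip(*samples)

# Encode samples into the embedding space of a (placeholder) text encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(list(texts))

# Project to 2D (PCA here for simplicity; UMAP or t-SNE are common
# alternatives in visual analytics tools) so that clusters and overlaps
# between fine-grained classes become visible.
coords = PCA(n_components=2).fit_transform(embeddings)

# Scatter plot colored by class label; overlapping clusters hint at
# classes that are hard to differentiate from one another.
for label in sorted(set(labels)):
    idx = [i for i, lab in enumerate(labels) if lab == label]
    plt.scatter(coords[idx, 0], coords[idx, 1], label=label)
plt.legend()
plt.title("Text samples spatialized in embedding space")
plt.show()
```

In a full visual analytics workflow such a projection would be interactive (e.g., linked to sample text and model predictions); the static plot here only illustrates the spatialization step.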