Reflective Instruction Tuning: Mitigating Hallucinations in Large Vision-Language Models

Citations: 0
|
Authors
Zhang, Jinrui [1 ]
Wang, Teng [1 ,2 ]
Zhang, Haigang [3 ]
Lu, Ping [4 ]
Zheng, Feng [1 ,5 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Univ Hong Kong, Hong Kong, Peoples R China
[3] Shenzhen Polytech Univ, Shenzhen, Peoples R China
[4] ZTE Corp, Cloud Comp & IT Inst, Shenzhen, Peoples R China
[5] Peng Cheng Lab, Res Inst Multiple Agents & Embodied Intelligence, Shenzhen, Peoples R China
Source
Funding
National Natural Science Foundation of China;
Keywords
Large Vision Language Models; Visual Instruction Tuning; Hallucination Mitigation;
DOI
10.1007/978-3-031-73113-6_12
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large vision-language models (LVLMs) have shown promising performance on a variety of vision-language tasks. However, they remain susceptible to hallucinations, generating outputs misaligned with visual content or instructions. While various mitigation strategies have been proposed, they often neglect a key contributor to hallucinations: the lack of fine-grained reasoning supervision during training. Without intermediate reasoning steps, models may establish superficial shortcuts between instructions and responses, failing to internalize the inherent reasoning logic. To address this challenge, we propose reflective instruction tuning, which integrates rationale learning into visual instruction tuning. Unlike previous methods that learn from responses only, our approach has the model predict rationales justifying why responses are correct or incorrect. This fosters deeper engagement with the fine-grained reasoning underlying each response, thus enhancing the model's reasoning proficiency. To facilitate this approach, we propose REVERIE, the first large-scale instruction-tuning dataset with ReflEctiVE RatIonalE annotations. REVERIE comprises 115k machine-generated reasoning instructions, each meticulously annotated with a corresponding pair of correct and confusing responses, alongside comprehensive rationales elucidating the justification behind the correctness or erroneousness of each response. Experimental results on multiple LVLM benchmarks reveal that reflective instruction tuning with the REVERIE dataset yields noticeable performance gains over the baseline model, demonstrating the effectiveness of reflecting on the rationales. Project page: https://zjr2000.github.io/projects/reverie
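The training scheme described above pairs each instruction with a correct and a confusing response, each annotated with a rationale. A minimal sketch of how such a record might be expanded into reflective training targets is shown below; the field names (`instruction`, `pos_response`, `neg_rationale`, etc.) and prompt wording are illustrative assumptions, not the actual REVERIE schema.

```python
# Hypothetical sketch: expanding one rationale-annotated record into
# reflective instruction-tuning samples. All field names are assumptions.

def build_reflective_samples(record):
    """Expand one annotated record into three training targets:
    the response itself, a rationale for the correct response,
    and a rationale for the confusing (incorrect) response."""
    return [
        # Standard visual instruction tuning: predict the response.
        {"prompt": record["instruction"],
         "target": record["pos_response"]},
        # Positive reflection: justify why the correct response is right.
        {"prompt": (f'{record["instruction"]}\n'
                    f'Response: {record["pos_response"]}\n'
                    f'Explain why this response is correct.'),
         "target": record["pos_rationale"]},
        # Negative reflection: explain why the confusing response is wrong.
        {"prompt": (f'{record["instruction"]}\n'
                    f'Response: {record["neg_response"]}\n'
                    f'Explain why this response is incorrect.'),
         "target": record["neg_rationale"]},
    ]
```

Under this sketch, the model receives supervision not only on the answer but also on the reasoning behind both the correct and the confusing response, which is the mechanism the abstract credits for discouraging superficial instruction-response shortcuts.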
Pages: 196 - 213
Page count: 18
Related Papers
50 items total
  • [31] JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models
    Jin, Haibo
    Hu, Leyang
    Li, Xinnuo
    Zhang, Peiyan
    Chen, Chonghan
    Zhuang, Jun
    Wang, Haohan
    arXiv
  • [32] Overconfidence is Key: Verbalized Uncertainty Evaluation in Large Language and Vision-Language Models
    Groot, Tobias
    Valdenegro-Toro, Matias
    arXiv
  • [33] CPT: Colorful Prompt Tuning for pre-trained vision-language models
    Yao, Yuan
    Zhang, Ao
    Zhang, Zhengyan
    Liu, Zhiyuan
    Chua, Tat-Seng
    Sun, Maosong
    AI OPEN, 2024, 5 : 30 - 38
  • [34] Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation
    Guo, Zixian
    Wei, Yuxiang
    Liu, Ming
    Ji, Zhilong
    Bai, Jinfeng
    Guo, Yiwen
    Zuo, Wangmeng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 5356 - 5368
  • [35] CTPT: Continual Test-time Prompt Tuning for vision-language models
    Wang, Fan
    Han, Zhongyi
    Liu, Xingbo
    Yin, Yilong
    Gao, Xin
    PATTERN RECOGNITION, 2025, 161
  • [36] Learning to Prompt for Vision-Language Models
    Zhou, Kaiyang
    Yang, Jingkang
    Loy, Chen Change
    Liu, Ziwei
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 (09) : 2337 - 2348
  • [37] Vision-Language Models for Biomedical Applications
    Thapa, Surendrabikram
    Naseem, Usman
    Zhou, Luping
    Kim, Jinman
    PROCEEDINGS OF THE FIRST INTERNATIONAL WORKSHOP ON VISION-LANGUAGE MODELS FOR BIOMEDICAL APPLICATIONS, VLM4BIO 2024, 2024, : 1 - 2
  • [39] The Neglected Tails in Vision-Language Models
    Parashar, Shubham
    Lin, Zhiqiu
    Liu, Tian
    Dong, Xiangjue
    Li, Yanan
    Ramanan, Deva
    Caverlee, James
    Kong, Shu
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 12988 - 12997
  • [40] VISION-LANGUAGE MODELS AS SUCCESS DETECTORS
    Du, Yuqing
    Konyushkova, Ksenia
    Denil, Misha
    Raju, Akhil
    Landon, Jessica
    Hill, Felix
    de Freitas, Nando
    Cabi, Serkan
    CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 232, 2023, 232 : 120 - 136