VCoder: Versatile Vision Encoders for Multimodal Large Language Models

Cited by: 2
Authors
Jain, Jitesh [1]
Yang, Jianwei [2]
Shi, Humphrey [1,3]
Affiliations
[1] SHI Labs, Georgia Tech, Atlanta, GA 30332, USA
[2] Microsoft Research, Redmond, WA, USA
[3] Picsart AI Research (PAIR), Atlanta, GA, USA
Funding
U.S. National Science Foundation
Keywords
DOI
10.1109/CVPR52733.2024.02644
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Humans possess the remarkable skill of Visual Perception, the ability to see and understand the seen, helping them make sense of the visual world and, in turn, reason. Multimodal Large Language Models (MLLM) have recently achieved impressive performance on vision-language tasks ranging from visual question-answering and image captioning to visual reasoning and image generation. However, when prompted to identify or count (perceive) the entities in a given image, existing MLLM systems fail. Working towards developing an accurate MLLM system for perception and reasoning, we propose using Versatile vision enCoders (VCoder) as perception eyes for Multimodal LLMs. We feed the VCoder with perception modalities such as segmentation or depth maps, improving the MLLM's perception abilities. Secondly, we leverage the images from COCO and outputs from off-the-shelf vision perception models to create our COCO Segmentation Text (COST) dataset for training and evaluating MLLMs on the object perception task. Thirdly, we introduce metrics to assess the object perception abilities in MLLMs on our COST dataset. Lastly, we provide extensive experimental evidence proving the VCoder's improved object-level perception skills over existing Multimodal LLMs, including GPT-4V. We open-source our dataset, code, and models to promote research.
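The object-perception evaluation described in the abstract boils down to checking which objects, and how many of each, the MLLM names against what is actually present in the image (for example, as given by segmentation output). The sketch below illustrates such a comparison; the function name, the scoring formula, and the hallucination measure are illustrative assumptions for this record, not the exact metrics the paper defines on COST.

from collections import Counter

def object_perception_scores(predicted, ground_truth):
    """Compare predicted object counts against ground-truth counts.

    predicted / ground_truth: dicts mapping object name -> count,
    e.g. parsed from an MLLM answer and from a segmentation map.
    Returns an illustrative (count_accuracy, hallucination_rate) pair;
    the actual COST metrics are defined in the VCoder paper.
    """
    pred, gt = Counter(predicted), Counter(ground_truth)

    # Count accuracy: penalize the absolute per-category count error,
    # averaged over the ground-truth categories (illustrative formula;
    # assumes ground-truth counts are positive).
    if gt:
        per_object = [
            max(0.0, 1.0 - abs(pred[name] - n) / n) for name, n in gt.items()
        ]
        count_accuracy = sum(per_object) / len(gt)
    else:
        count_accuracy = 1.0 if not pred else 0.0

    # Hallucination rate: fraction of predicted categories absent from the image.
    hallucination_rate = (
        sum(1 for name in pred if name not in gt) / len(pred) if pred else 0.0
    )
    return count_accuracy, hallucination_rate

# Example: the model miscounts the cars and reports a non-existent dog.
gt = {"person": 3, "car": 2}
pred = {"person": 3, "car": 1, "dog": 1}
print(object_perception_scores(pred, gt))  # (0.75, 0.333...)

On this toy example, the over-reported dog and the miscounted car yield a count accuracy of 0.75 and a hallucination rate of one in three predicted categories.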
Pages: 27992 - 28002
Page count: 11
Related Papers
50 records in total
  • [1] Multimodal Large Language Models in Vision and Ophthalmology
    Lu, Zhiyong
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2024, 65 (07)
  • [2] Modeling Multimodal Uncertainties via Probability Distribution Encoders Included Vision-Language Models
    Wang, Junjie
    Ji, Yatai
    Zhang, Yuxiang
    Zhu, Yanru
    Sakai, Tetsuya
    IEEE ACCESS, 2024, 12 : 420 - 434
  • [3] Lumen: Unleashing Versatile Vision-Centric Capabilities of Large Multimodal Models
    Shanghai Key Lab of Intell. Info. Processing, School of CS, Fudan University, China
    arXiv preprint
  • [4] A survey on multimodal large language models
    Yin, Shukang
    Fu, Chaoyou
    Zhao, Sirui
    Li, Ke
    Sun, Xing
    Xu, Tong
    Chen, Enhong
    NATIONAL SCIENCE REVIEW, 2024, 11 (12) : 277 - 296
  • [5] White-box Multimodal Jailbreaks Against Large Vision-Language Models
    Shanghai Key Lab of Intell. Info. Processing, School of CS, Fudan University, Shanghai, China
    Proceedings of the ACM International Conference on Multimedia (MM), 2024 : 6920 - 6928
  • [6] Text encoders bottleneck compositionality in contrastive vision-language models
    Kamath, Amita
    Hessel, Jack
    Chang, Kai-Wei
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 4933 - 4944
  • [7] From Large Language Models to Large Multimodal Models: A Literature Review
    Huang, Dawei
    Yan, Chuan
    Li, Qing
    Peng, Xiaojiang
    APPLIED SCIENCES-BASEL, 2024, 14 (12):
  • [8] A comprehensive survey of large language models and multimodal large models in medicine
    Xiao, Hanguang
    Zhou, Feizhong
    Liu, Xingyue
    Liu, Tianqi
    Li, Zhipeng
    Liu, Xin
    Huang, Xiaoxuan
    INFORMATION FUSION, 2025, 117
  • [9] The application of multimodal large language models in medicine
    Qiu, Jianing
    Yuan, Wu
    Lam, Kyle
    LANCET REGIONAL HEALTH-WESTERN PACIFIC, 2024, 45