Fabric image retrieval based on multi-modal feature fusion

Cited: 0
Authors
Ning Zhang
Yixin Liu
Zhongjian Li
Jun Xiang
Ruru Pan
Affiliations
[1] Jiangnan University, Key Laboratory of Eco-Textiles, Ministry of Education
[2] Shaoxing University
Source
Keywords
Separable feature extraction; Multi-modal feature fusion; Visual-semantic joint embedding; Fabric retrieval;
DOI
Not available
Chinese Library Classification Number
Subject Classification Code
Abstract
With the growth of multi-source heterogeneous data, flexible retrieval across different modalities has become an urgent demand in industrial applications. To allow users to control the retrieval results, this paper proposes a novel fabric image retrieval method based on multi-modal feature fusion. First, image features are extracted with a modified pre-trained convolutional neural network that separates macroscopic and fine-grained features, which are then selected and aggregated by a multi-layer perceptron. The feature of the modification text is extracted by a long short-term memory network. The two features are then fused in a visual-semantic joint embedding space through gated and residual structures, which control the selective expression of the separable image features. To validate the proposed scheme, a fabric image database for multi-modal retrieval is created as the benchmark. Qualitative and quantitative experiments indicate that the proposed method is practicable and effective, and it can be extended to similar industrial fields such as wood and wallpaper.
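A minimal PyTorch sketch of the pipeline outlined in the abstract is given below for orientation. It is not the authors' implementation: the backbone choice (ResNet-18), the layer split used to separate fine-grained and macroscopic features, the embedding size, and the module names (ImageBranch, TextBranch, GatedResidualFusion) are all illustrative assumptions.

```python
# Hypothetical sketch of the fusion idea described in the abstract (not the authors' code).
# All layer splits, dimensions, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class ImageBranch(nn.Module):
    """Extracts fine-grained and macroscopic features with a CNN backbone,
    then selects/aggregates them with a multi-layer perceptron (MLP)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pre-trained weights would be loaded in practice
        layers = list(backbone.children())
        self.low = nn.Sequential(*layers[:6])     # earlier stages: fine-grained texture cues
        self.high = nn.Sequential(*layers[6:-1])  # later stages: macroscopic appearance
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(128 + 512, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, x):
        f_low = self.low(x)                        # B x 128 x H x W (ResNet-18 layer2 output)
        f_high = self.high(f_low)                  # B x 512 x 1 x 1 after average pooling
        v = torch.cat([self.pool(f_low).flatten(1), f_high.flatten(1)], dim=1)
        return self.mlp(v)                         # separable image feature, aggregated by the MLP


class TextBranch(nn.Module):
    """Encodes the modification text with an LSTM."""
    def __init__(self, vocab_size=10000, embed_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(300, embed_dim, batch_first=True)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))
        return h[-1]                               # last hidden state as the text feature


class GatedResidualFusion(nn.Module):
    """Fuses image and text features in a joint embedding space; the gate controls
    the selective expression of the separable image features."""
    def __init__(self, dim=512):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.residual = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, img_feat, txt_feat):
        z = torch.cat([img_feat, txt_feat], dim=1)
        g = self.gate(z)                           # element-wise gate in [0, 1]
        fused = g * img_feat + self.residual(z)    # gated image feature plus residual term
        return nn.functional.normalize(fused, dim=1)


if __name__ == "__main__":
    imgs = torch.randn(2, 3, 224, 224)
    tokens = torch.randint(0, 10000, (2, 12))
    query = GatedResidualFusion()(ImageBranch()(imgs), TextBranch()(tokens))
    print(query.shape)                             # torch.Size([2, 512]) -> query embedding for retrieval
```

In a retrieval setting, such a fused query embedding would be compared against pre-computed embeddings of the gallery fabric images, for example by cosine similarity, to rank the candidate fabrics.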
Pages: 2207 - 2217
Page count: 10
Related papers
50 items in total
  • [11] Structured Multi-modal Feature Embedding and Alignment for Image-Sentence Retrieval
    Ge, Xuri
    Chen, Fuhai
    Jose, Joemon M.
    Ji, Zhilong
    Wu, Zhongqin
    Liu, Xiao
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 5185 - 5193
  • [12] Infrared thermal image ROI extraction algorithm based on fusion of multi-modal feature maps
    Zhu Li
    Zhang Jing
    Fu Ying-Kai
    Shen Hui
    Zhang Shou-Feng
    Hong Xiang-Gong
    [J]. JOURNAL OF INFRARED AND MILLIMETER WAVES, 2019, 38 (01) : 125 - 132
  • [13] Multi-modal image feature fusion-based PM2.5 concentration estimation
    Wang, Guangcheng
    Shi, Quan
    Wang, Han
    Sun, Kezheng
    Lu, Yuxuan
    Di, Kexin
    [J]. ATMOSPHERIC POLLUTION RESEARCH, 2022, 13 (03)
  • [14] Image retrieval based on multi-feature fusion
    Dong Wenfei
    Yu Shuchun
    Liu Songyu
    Zhang Zhiqiang
    Gu Wenbo
    [J]. 2014 FOURTH INTERNATIONAL CONFERENCE ON INSTRUMENTATION AND MEASUREMENT, COMPUTER, COMMUNICATION AND CONTROL (IMCCC), 2014, : 240 - 243
  • [15] Adherent Peanut Image Segmentation Based on Multi-Modal Fusion
    Wang, Yujing
    Ye, Fang
    Zeng, Jiusun
    Cai, Jinhui
    Huang, Wangsen
    [J]. SENSORS, 2024, 24 (14)
  • [16] Multi-modal Image Fusion Based on ROI and Laplacian Pyramid
    Gao, Xiong
    Zhang, Hong
    Chen, Hao
    Li, Jiafeng
    [J]. SIXTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2014), 2015, 9443
  • [17] Multi-modal Image Retrieval for Search-based Image Annotation with RF
    Budikova, Petra
    Batko, Michal
    Zezula, Pavel
    [J]. 2018 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM 2018), 2018, : 52 - 60
  • [18] Multi-level and Multi-modal Target Detection Based on Feature Fusion
    Cheng, Teng
    Sun, Lei
    Hou, Dengchao
    Shi, Qin
    Zhang, Junning
    Chen, Jiong
    Huang, He
    [J]. Qiche Gongcheng/Automotive Engineering, 2021, 43 (11): 1602 - 1610
  • [19] Based on Multi-Feature Information Attention Fusion for Multi-Modal Remote Sensing Image Semantic Segmentation
    Zhang, Chongyu
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2021), 2021, : 71 - 76
  • [20] Citrus Huanglongbing Detection Based on Multi-Modal Feature Fusion Learning
    Yang, Dongzi
    Wang, Fengcheng
    Hu, Yuqi
    Lan, Yubin
    Deng, Xiaoling
    [J]. FRONTIERS IN PLANT SCIENCE, 2021, 12