Multi-source shared nearest neighbours for multi-modal image clustering

Cited by: 0
Authors
Amel Hamzaoui
Alexis Joly
Nozha Boujemaa
Affiliations
[1] INRIA-Rocquencourt
[2] Team-project: IMEDIA
Source
MULTIMEDIA TOOLS AND APPLICATIONS, 2011, 51 (02): 479 - 503
Keywords
Multi-source; Clustering; Search results; Shared neighbours; Multimodality
DOI
Not available
Abstract
Shared Nearest Neighbours (SNN) techniques are well known to overcome several shortcomings of traditional clustering approaches, notably high dimensionality and metric limitations. However, previous methods were limited to a single information source, whereas such methods appear to be very well suited to heterogeneous data, typically in multi-modal contexts. In this paper, we propose a new technique to accelerate the computation of shared neighbours and introduce a new multi-source shared-neighbours scheme applied to multi-modal image clustering. We first extend existing SNN-based similarity measures to the case of multiple sources, and we introduce an original automatic source-selection step used when building candidate clusters. The key point is that each resulting cluster is built with its own optimal subset of modalities, which improves robustness to noisy or outlier information sources. We evaluate our method on multi-modal search-result clustering, visual search mining and subspace clustering. Experimental results on both synthetic and real data, involving different information sources and several datasets, show the effectiveness of our method.
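The abstract describes computing shared-nearest-neighbour similarities per information source and selecting, for each candidate cluster, the subset of sources to combine. Below is a minimal Python sketch of that general idea, assuming per-modality k-NN neighbour sets, an overlap-based SNN score, and a simple threshold-based source selection; the function names, the averaging rule and the min_score parameter are illustrative assumptions, not the authors' exact formulation, and the acceleration technique mentioned in the abstract is not sketched here.

# Illustrative sketch only, not the authors' exact algorithm.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_sets(X, k):
    # k+1 neighbours are requested because each point is returned as its own
    # nearest neighbour; that self-match is dropped below.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    return [set(row[1:]) for row in idx]


def snn_similarity(neigh_sets, i, j):
    # Single-source SNN similarity: fraction of shared k-nearest neighbours.
    k = len(neigh_sets[i])
    return len(neigh_sets[i] & neigh_sets[j]) / k


def multi_source_snn(sources, i, j, k=10, min_score=0.0):
    # 'sources' is a list of feature matrices (one per modality) describing the
    # same items. Sources whose individual SNN score falls below 'min_score'
    # are discarded, a crude stand-in for the per-cluster source-selection step.
    per_source = [snn_similarity(knn_sets(X, k), i, j) for X in sources]
    selected = [s for s in per_source if s >= min_score] or per_source
    return float(np.mean(selected))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    visual = rng.normal(size=(100, 64))   # stand-in for visual descriptors
    textual = rng.normal(size=(100, 32))  # stand-in for textual descriptors
    print(multi_source_snn([visual, textual], 0, 1, k=10, min_score=0.1))

With min_score set to 0 this reduces to plain averaging over all modalities; raising it drops sources that contribute no neighbour overlap for the pair at hand, loosely mimicking the robustness to noisy or outlier sources claimed in the abstract.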
Pages: 479 - 503
Page count: 24
Related papers
50 items in total
  • [1] Multi-source shared nearest neighbours for multi-modal image clustering
    Hamzaoui, Amel
    Joly, Alexis
    Boujemaa, Nozha
    MULTIMEDIA TOOLS AND APPLICATIONS, 2011, 51 (02) : 479 - 503
  • [2] Multi-source multi-modal domain adaptation
    Zhao, Sicheng
    Jiang, Jing
    Tang, Wenbo
    Zhu, Jiankun
    Chen, Hui
    Xu, Pengfei
    Schuller, Bjorn W.
    Tao, Jianhua
    Yao, Hongxun
    Ding, Guiguang
    INFORMATION FUSION, 2025, 117
  • [3] Multi-Stage Fusion and Multi-Source Attention Network for Multi-Modal Remote Sensing Image Segmentation
    Zhao, Jiaqi
    Zhou, Yong
    Shi, Boyu
    Yang, Jingsong
    Zhang, Di
    Yao, Rui
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2021, 12 (06)
  • [4] Multi-modal Component Representation for Multi-source Domain Adaptation Method
    Zhang, Yuhong
    Lin, Zhihao
    Qian, Lin
    Hui, Xuegang
    PRICAI 2023: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2024, 14325 : 104 - 109
  • [5] Multi-Source Multi-Modal Activity Recognition in Aerial Video Surveillance
    Hammoud, Riad I.
    Sahin, Cem S.
    Blasch, Erik P.
    Rhodes, Bradley J.
    2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2014, : 237 - +
  • [6] Multi-modal, multi-source reading: A multi-representational reader's perspective
    Ainsworth, Shaaron E.
    LEARNING AND INSTRUCTION, 2018, 57 : 71 - 75
  • [7] Introduction to the special issue: Desiderata for a theory of multi-source multi-modal comprehension
    Cromley, Jennifer G.
    LEARNING AND INSTRUCTION, 2018, 57 : 1 - 4
  • [8] Multi-Source Knowledge Reasoning Graph Network for Multi-Modal Commonsense Inference
    Ma, Xuan
    Yang, Xiaoshan
    Xu, Changsheng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (04)
  • [9] Multi-feature, multi-modal, and multi-source social event detection: A comprehensive survey
    Afyouni, Imad
    Al Aghbari, Zaher
    Razack, Reshma Abdul
    INFORMATION FUSION, 2022, 79 : 279 - 308