QuMinS: Fast and scalable querying, mining and summarizing multi-modal databases

Cited by: 0
Authors
Cordeiro, Robson L. F. [1 ]
Guo, Fan [2 ]
Haverkamp, Donna S. [3 ]
Horne, James H. [3 ]
Hughes, Ellen K. [3 ]
Kim, Gunhee [2 ]
Romani, Luciana A. S. [4 ]
Coltri, Priscila P. [5 ]
Souza, Tamires T. [1 ]
Traina, Agma J. M. [1 ]
Traina, Caetano, Jr. [1 ]
Faloutsos, Christos [2 ]
Affiliations
[1] Univ Sao Paulo, BR-13560970 Sao Carlos, SP, Brazil
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Sci Applicat Int Corp, Mclean, VA 22102 USA
[4] Embrapa Agr Informat, BR-13083886 Campinas, SP, Brazil
[5] Univ Estadual Campinas, BR-13083970 Campinas, SP, Brazil
Funding
Sao Paulo Research Foundation (FAPESP); US National Science Foundation (NSF);
Keywords
Low-labor labeling; Summarization; Outlier detection; Query by example; Clustering; Satellite imagery; IMAGE ANNOTATION; RANDOM-WALK; CLASSIFICATION; RECOGNITION; OBJECT; GRAPH;
DOI
10.1016/j.ins.2013.11.013
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline code
0812;
Abstract
Given a large image set in which very few images have labels, how do we guess labels for the remaining majority? How do we spot images that need brand-new labels, different from the predefined ones? How do we summarize these data to route the user's attention to what really matters? Here we answer all of these questions. Specifically, we propose QuMinS, a fast, scalable solution to two problems: (i) Low-labor labeling (LLL) - given an image set in which very few images have labels, find the most appropriate labels for the rest; and (ii) Mining and attention routing - in the same setting, find clusters, the top-N_O outlier images, and the N_R images that best represent the data. Experiments on satellite images spanning up to 2.25 GB show that, in contrast to state-of-the-art labeling techniques, QuMinS scales linearly with the data size and is up to 40 times faster than its top competitors (GCap) while achieving equal or better accuracy; it also spots images that potentially require unpredicted labels, and it works even with tiny initial label sets, i.e., about five examples. We also report a case study of the method's practical usage, showing that QuMinS is a viable tool for automatic coffee crop detection from remote sensing images. (C) 2013 Elsevier Inc. All rights reserved.
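
The abstract does not spell out QuMinS's internals, but the low-labor labeling task it describes (propagating a handful of seed labels to the rest of an image set) is commonly solved with a random walk with restart over an image/label graph, the GCap-style approach the paper benchmarks against. The following is a minimal sketch of that idea, not the authors' code: the feature vectors, the k-nearest-neighbor graph construction, the restart probability, and the function names are all illustrative assumptions.

# Minimal sketch (assumption: not the QuMinS implementation): low-labor labeling
# by random walk with restart (RWR) on a graph whose nodes are images and labels.
import numpy as np

def knn_edges(feats, k=3):
    """Connect each image to its k nearest neighbors (Euclidean distance)."""
    n = len(feats)
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return [(i, int(j)) for i in range(n) for j in np.argsort(d[i])[:k]]

def rwr_scores(adj, seed, restart=0.15, iters=100):
    """Approximate the stationary RWR distribution when restarting at `seed`."""
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    P = adj / col_sums                     # column-stochastic transition matrix
    e = np.zeros(adj.shape[0]); e[seed] = 1.0
    p = e.copy()
    for _ in range(iters):
        p = (1 - restart) * P @ p + restart * e
    return p

def low_labor_labeling(feats, labels, k=3):
    """labels: dict image_index -> label string for the few labeled images."""
    label_names = sorted(set(labels.values()))
    n_img = len(feats)
    N = n_img + len(label_names)           # image nodes first, then label nodes
    adj = np.zeros((N, N))
    for i, j in knn_edges(feats, k):       # image-image similarity edges
        adj[i, j] = adj[j, i] = 1.0
    for i, lab in labels.items():          # image-label edges for the seed images
        j = n_img + label_names.index(lab)
        adj[i, j] = adj[j, i] = 1.0
    guesses = {}
    for i in range(n_img):
        if i in labels:
            continue
        p = rwr_scores(adj, seed=i)
        guesses[i] = label_names[int(np.argmax(p[n_img:]))]   # best-scoring label node
    return guesses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic clusters of 8-D "image features", one seed label per cluster.
    feats = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(5, 1, (5, 8))])
    print(low_labor_labeling(feats, {0: "coffee", 5: "pasture"}))

The sketch propagates the two seed labels to the unlabeled images via graph proximity; the dense distance matrix and power iteration are kept for brevity, whereas the scalability claims in the abstract imply sparse graphs and faster solvers.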
Pages: 211-229
Number of pages: 19
Related papers
50 items in total
• [31] Sentiment analysis method of consumer reviews based on multi-modal feature mining. You, Jing; Zhong, Jiamin; Kong, Jing; Peng, Lihua. International Journal of Cognitive Computing in Engineering, 2025, 6: 143-151.
• [32] Development of a travel recommendation algorithm based on multi-modal and multi-vector data mining. Liu, Ruixiang. PeerJ Computer Science, 2023, 9.
• [33] Deep-Learning-Based Multi-Modal Fusion for Fast MR Reconstruction. Xiang, Lei; Chen, Yong; Chang, Weitang; Zhan, Yiqiang; Lin, Weili; Wang, Qian; Shen, Dinggang. IEEE Transactions on Biomedical Engineering, 2019, 66(7): 2105-2114.
• [34] Fast template matching in multi-modal image under pixel distribution mapping. Mei, Lichun; Wang, Caiyun; Wang, Huaiye; Zhao, Yuanfu; Zhang, Jun; Zhao, Xiaoxia. Infrared Physics & Technology, 2022, 127.
• [35] A multi-modal multi-paradigm agent-based approach to design scalable distributed biometric systems. Gamassi, M.; Piuri, V.; Sana, D.; Scotti, F.; Scotti, O. 2005 IEEE International Conference on Computational Intelligence for Homeland Security and Personal Safety, 2005: 65-70.
• [36] ME2: A Scalable Modular Meta-heuristic for Multi-modal Multi-dimension Optimization. Islam, Mohiul; Kharma, Nawwaf; Sultan, Vaibhav; Yang, Xiaojing; Mohamed, Mohamed; Sultan, Kalpesh. IJCCI: Proceedings of the 11th International Joint Conference on Computational Intelligence, 2019: 196-204.
• [37] A Multi-Objective Multi-Modal Optimization Approach for Mining Stable Spatio-Temporal Patterns. Sebag, Michele; Tarrisson, Nicolas; Teytaud, Olivier; Lefevre, Julien; Baillet, Sylvain. 19th International Joint Conference on Artificial Intelligence (IJCAI-05), 2005: 859-864.
• [38] Multi-modal Video Concept Detection for Scalable Logging of Pre-Production Broadcast Content. Gray, C.; Collomosse, J.; Thorpe, J.; Turner, A. CVMP 2015: Proceedings of the 12th European Conference on Visual Media Production, 2015.
• [39] Scalable Multi-Modal Learning for Cross-Link Channel Prediction in Massive IoT Networks. Cho, Kun Woo; Cominelli, Marco; Gringoli, Francesco; Widmer, Joerg; Jamieson, Kyle. Proceedings of the 2023 International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (MobiHoc 2023), 2023: 221-229.
• [40] Multi-Modal Knowledge Representation Learning via Webly-Supervised Relationships Mining. Nian, Fudong; Bao, Bing-Kun; Li, Teng; Xu, Changsheng. Proceedings of the 2017 ACM Multimedia Conference (MM'17), 2017: 411-419.