QuMinS: Fast and scalable querying, mining and summarizing multi-modal databases

Citations: 0
|
Authors
Cordeiro, Robson L. F. [1 ]
Guo, Fan [2 ]
Haverkamp, Donna S. [3 ]
Horne, James H. [3 ]
Hughes, Ellen K. [3 ]
Kim, Gunhee [2 ]
Romani, Luciana A. S. [4 ]
Coltri, Priscila P. [5 ]
Souza, Tamires T. [1 ]
Traina, Agma J. M. [1 ]
Traina, Caetano, Jr. [1 ]
Faloutsos, Christos [2 ]
Affiliations
[1] Univ Sao Paulo, BR-13560970 Sao Carlos, SP, Brazil
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Sci Applicat Int Corp, Mclean, VA 22102 USA
[4] Embrapa Agr Informat, BR-13083886 Campinas, SP, Brazil
[5] Univ Estadual Campinas, BR-13083970 Campinas, SP, Brazil
Funding
São Paulo Research Foundation (FAPESP); U.S. National Science Foundation (NSF);
Keywords
Low-labor labeling; Summarization; Outlier detection; Query by example; Clustering; Satellite imagery; IMAGE ANNOTATION; RANDOM-WALK; CLASSIFICATION; RECOGNITION; OBJECT; GRAPH;
DOI
10.1016/j.ins.2013.11.013
CLC number
TP [Automation and computer technology];
Discipline classification code
0812;
Abstract
Given a large image set in which very few images have labels, how can we guess labels for the remaining majority? How can we spot images that need brand-new labels, different from the predefined ones? How can we summarize these data to route the user's attention to what really matters? Here we answer all these questions. Specifically, we propose QuMinS, a fast, scalable solution to two problems: (i) Low-labor labeling (LLL) - given an image set in which very few images have labels, find the most appropriate labels for the rest; and (ii) Mining and attention routing - in the same setting, find clusters, the top-N_O outlier images, and the N_R images that best represent the data. Experiments on satellite images spanning up to 2.25 GB show that, in contrast to state-of-the-art labeling techniques, QuMinS scales linearly with the data size, running up to 40 times faster than its top competitor (GCap) while achieving equal or better accuracy; it spots images that potentially require unpredicted labels, and it works even with tiny initial label sets, i.e., around five examples. We also report a case study of our method's practical usage, showing that QuMinS is a viable tool for automatic coffee crop detection from remote sensing images. (C) 2013 Elsevier Inc. All rights reserved.
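The low-labor-labeling idea in the abstract can be pictured with a generic graph-based sketch: link each image to its nearest neighbors in feature space and to its known labels, then score candidate labels for an unlabeled image with a random walk with restart from that image (the GCap-style baseline the abstract compares against). The sketch below is only an illustration under these assumptions, not QuMinS itself; the k-NN construction, the function names rwr_scores and propagate_labels, and all parameter values are hypothetical choices made for brevity.

```python
import numpy as np

def rwr_scores(A, seed, restart=0.15, iters=100):
    """Steady-state visit probabilities of a random walk that jumps
    back to the `seed` node with probability `restart` at every step."""
    col_sums = A.sum(axis=0)
    col_sums = np.where(col_sums == 0, 1.0, col_sums)   # avoid division by zero
    P = A / col_sums                                     # column-stochastic transitions
    e = np.zeros(A.shape[0])
    e[seed] = 1.0
    p = e.copy()
    for _ in range(iters):                               # power iteration
        p = (1.0 - restart) * (P @ p) + restart * e
    return p

def propagate_labels(features, known, labels, k=5):
    """Guess one label per image from a handful of pre-labeled examples.

    features : (n_images, dim) ndarray of image feature vectors
    known    : dict {image_index: label_index} for the few labeled images
    labels   : list of label names
    """
    n, m = len(features), len(labels)
    A = np.zeros((n + m, n + m))
    # Image-image edges: connect each image to its k nearest neighbors.
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    for i in range(n):
        for j in np.argsort(dist[i])[1:k + 1]:
            A[i, j] = A[j, i] = 1.0
    # Image-label edges for the pre-labeled images (label nodes come after image nodes).
    for i, lab in known.items():
        A[i, n + lab] = A[n + lab, i] = 1.0
    guesses = []
    for i in range(n):
        if i in known:
            guesses.append(labels[known[i]])
        else:
            p = rwr_scores(A, i)
            guesses.append(labels[int(np.argmax(p[n:]))])  # best-scoring label node
    return guesses

# Toy usage: six 2-D "images", two of them labeled.
feats = np.array([[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]], dtype=float)
print(propagate_labels(feats, {0: 0, 3: 1}, ["forest", "coffee"], k=2))
```

The dense adjacency matrix keeps the sketch short; a version that scales the way the abstract claims would instead rely on sparse matrices and an approximate nearest-neighbor index.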
Pages: 211 - 229
Number of pages: 19
Related papers
50 records in total
  • [21] Cascades: Scalable, flexible and composable middleware for multi-modal sensor networking applications
    Huang, J
    Feng, WC
    Bulusu, N
    Feng, WC
    MULTIMEDIA COMPUTING AND NETWORKING 2006, 2006, 6071
  • [22] An Embedded, Multi-Modal Sensor System for Scalable Robotic and Prosthetic Hand Fingers
    Weiner, Pascal
    Neef, Caterina
    Shibata, Yoshihisa
    Nakamura, Yoshihiko
    Asfour, Tamim
    SENSORS, 2020, 20 (01)
  • [23] Symbolization and Data Mining of Multi-modal Signals using Bag of Systems
    Sannomiya, Chihiro
    Tanaka, Yusuke
    Kamakura, Hironori
    Kurihara, Keisuke
    Neyama, Ryo
    Nawa, Kazunari
    2016 IEEE INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), 2016, : 233 - 238
  • [24] Popularity Prediction of Social Media based on Multi-Modal Feature Mining
    Hsu, Chih-Chung
    Kang, Li-Wei
    Lee, Chia-Yen
    Lee, Jun-Yi
    Zhang, Zhong-Xuan
    Wu, Shao-Min
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2687 - 2691
  • [25] Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations
    Bridgman, Wyatt
    Jones, Reese E.
    Khalil, Mohammad
    PROBABILISTIC ENGINEERING MECHANICS, 2023, 74
  • [26] Heterogeneous Translated Hashing: A Scalable Solution Towards Multi-Modal Similarity Search
    Wei, Ying
    Song, Yangqiu
    Zhen, Yi
    Liu, Bo
    Yang, Qiang
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2016, 10 (04)
  • [27] SIROM3 - A Scalable Intelligent ROaming Multi-Modal Multi-Sensor Framework
    Zhang, Jiaxing
    Qiu, Hanjiao
    Shamsabadi, Salar Shahini
    Birken, Ralf
    Schirner, Gunar
    2014 IEEE 38TH ANNUAL INTERNATIONAL COMPUTERS, SOFTWARE AND APPLICATIONS CONFERENCE (COMPSAC), 2014, : 446 - 455
  • [28] Shear-resize factorizations for fast multi-modal volume registration
    Chen, Y
    Hao, PW
    Yu, J
    ICIP: 2004 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1- 5, 2004, : 1085 - 1088
  • [29] The multi-modal universe of fast-fashion: the Visuelle 2.0 benchmark
    Skenderi, Geri
    Joppi, Christian
    Denitto, Matteo
    Scarpa, Berniero
    Cristani, Marco
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 2240 - 2245
  • [30] A Multi-modal Approach to Fine-grained Opinion Mining on Video Reviews
    Marrese-Taylor, Edison
    Rodriguez-Opazo, Cristian
    Balazs, Jorge A.
    Gould, Stephen
    Matsuo, Yutaka
    PROCEEDINGS OF THE SECOND GRAND CHALLENGE AND WORKSHOP ON MULTIMODAL LANGUAGE (CHALLENGE-HML), VOL 1, 2020, : 8 - 18