Human-interpretable and deep features for image privacy classification

Cited by: 0
Authors
Baranouskaya, Darya [1 ]
Cavallaro, Andrea [1 ]
Affiliations
[1] Queen Mary Univ London, Ctr Intelligent Sensing, London, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK;
DOI
10.1109/ICIP49359.2023.10222833
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Privacy is a complex, subjective and contextual concept that is difficult to define. Therefore, the annotation of images to train privacy classifiers is a challenging task. In this paper, we analyse privacy classification datasets and the properties of controversial images that are annotated with contrasting privacy labels by different assessors. We discuss suitable features for image privacy classification and propose eight privacy-specific and human-interpretable features. These features increase the performance of deep learning models and, on their own, improve the image representation for privacy classification compared with much higher dimensional deep features.
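The abstract states that the eight privacy-specific, human-interpretable features improve the performance of deep models. A common way to realise this, and a plausible reading of the approach, is to concatenate the low-dimensional interpretable feature vector with a high-dimensional deep feature vector before classification. The sketch below illustrates only this fusion step; the dimensions, the `combine_features` helper, and the random placeholder data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def combine_features(deep_feats: np.ndarray, interp_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-image deep features with low-dimensional
    human-interpretable features along the feature axis."""
    return np.concatenate([deep_feats, interp_feats], axis=1)

# Hypothetical example: 4 images, 512-d deep features plus
# 8 interpretable, privacy-specific features (dimension taken from the abstract).
rng = np.random.default_rng(0)
deep = rng.normal(size=(4, 512))      # e.g. penultimate-layer CNN activations
interp = rng.normal(size=(4, 8))      # e.g. eight privacy-specific scores
combined = combine_features(deep, interp)
print(combined.shape)  # (4, 520)
```

The combined representation would then be fed to any downstream privacy classifier; per the abstract, the eight interpretable features alone can also outperform much higher-dimensional deep features.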
Pages: 3489-3492
Page count: 4
Related papers
50 results
  • [21] Food Image Classification with Deep Features
    Sengur, Abdulkadir
    Akbulut, Yaman
    Budak, Umit
    2019 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND DATA PROCESSING (IDAP 2019), 2019,
  • [22] Interpretability Is in the Mind of the Beholder: A Causal Framework for Human-Interpretable Representation Learning
    Marconato, Emanuele
    Passerini, Andrea
    Teso, Stefano
    ENTROPY, 2023, 25 (12)
  • [23] OPTIMIZING HUMAN-INTERPRETABLE DIALOG MANAGEMENT POLICY USING GENETIC ALGORITHM
    Ren, Hang
    Xu, Weiqun
    Yan, Yonghong
    2015 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING (ASRU), 2015, : 791 - 797
  • [24] Construction grammar and procedural semantics for human-interpretable grounded language processing
    De Vos, Liesbet
    Nevens, Jens
    Van Eecke, Paul
    Beuls, Katrien
    LINGUISTICS VANGUARD, 2024,
  • [25] Toward human-interpretable, automated learning of feedback control for the mixing layer
    Li, Hao
    Maceda, Guy Y. Cornejo
    Li, Yiqing
    Tan, Jianguo
    Noack, Bernd R.
    PHYSICS OF FLUIDS, 2025, 37 (03)
  • [26] Human-interpretable clustering of short text using large language models
    Miller, Justin K.
    Alexander, Tristram J.
    ROYAL SOCIETY OPEN SCIENCE, 2025, 12 (01):
  • [27] Interpretable Deep Image Classification Using Rationally Inattentive Utility Maximization
    Pattanayak, Kunal
    Krishnamurthy, Vikram
    Jain, Adit
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (02) : 168 - 183
  • [28] A self-interpretable module for deep image classification on small data
    Biagio La Rosa
    Roberto Capobianco
    Daniele Nardi
    Applied Intelligence, 2023, 53 : 9115 - 9147
  • [29] A self-interpretable module for deep image classification on small data
    La Rosa, Biagio
    Capobianco, Roberto
    Nardi, Daniele
    APPLIED INTELLIGENCE, 2023, 53 (08) : 9115 - 9147
  • [30] LEARNING DEEP FEATURES FOR IMAGE EMOTION CLASSIFICATION
    Chen, Ming
    Zhang, Lu
    Allebach, Jan P.
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 4491 - 4495