Human-interpretable and deep features for image privacy classification

Cited by: 0
Authors
Baranouskaya, Darya [1]
Cavallaro, Andrea [1]
Affiliations
[1] Queen Mary Univ London, Ctr Intelligent Sensing, London, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
DOI
10.1109/ICIP49359.2023.10222833
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Privacy is a complex, subjective and contextual concept that is difficult to define; annotating images to train privacy classifiers is therefore a challenging task. In this paper, we analyse privacy classification datasets and the properties of controversial images that receive contrasting privacy labels from different assessors. We discuss suitable features for image privacy classification and propose eight privacy-specific, human-interpretable features. These features improve the performance of deep learning models and, on their own, provide a better image representation for privacy classification than much higher-dimensional deep features.
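The record itself contains no code; the sketch below is a minimal, hypothetical illustration of the feature-fusion idea summarised in the abstract, in which a low-dimensional vector of interpretable privacy cues is concatenated with a deep embedding before a linear private/public classifier. The feature names, dimensions and synthetic data are assumptions for illustration only, not the authors' actual eight features, datasets or results.

```python
# Minimal sketch (not the authors' implementation): fuse a small vector of
# human-interpretable privacy cues with a high-dimensional deep embedding
# and train a linear private/public classifier on the fused representation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images = 1000

# Placeholder inputs: a 2048-D deep embedding (e.g. from a pretrained CNN)
# and eight hypothetical interpretable scores in [0, 1] (e.g. presence of
# faces, readable text, an indoor scene, ...). A real pipeline would compute
# both from the images themselves.
deep_feats = rng.normal(size=(n_images, 2048))
interpretable_feats = rng.uniform(size=(n_images, 8))
labels = rng.integers(0, 2, size=n_images)  # 1 = private, 0 = public

# Early fusion: concatenate the interpretable cues with the deep embedding.
fused = np.concatenate([interpretable_feats, deep_feats], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```

Training the same classifier on interpretable_feats alone would mirror the abstract's second claim, namely that a handful of interpretable features can by themselves serve as a compact image representation for privacy classification.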
Pages: 3489-3492
Number of pages: 4