Human-interpretable and deep features for image privacy classification

Cited: 0
Authors
Baranouskaya, Darya [1]
Cavallaro, Andrea [1]
Affiliations
[1] Queen Mary University of London, Centre for Intelligent Sensing, London, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
DOI
10.1109/ICIP49359.2023.10222833
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Privacy is a complex, subjective and contextual concept that is difficult to define. Annotating images to train privacy classifiers is therefore a challenging task. In this paper, we analyse privacy classification datasets and the properties of controversial images that are annotated with contrasting privacy labels by different assessors. We discuss suitable features for image privacy classification and propose eight privacy-specific, human-interpretable features. These features increase the performance of deep learning models and, on their own, improve the image representation for privacy classification compared with much higher-dimensional deep features.
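The abstract outlines fusing a small set of human-interpretable features with deep features for privacy classification. Below is a minimal Python/PyTorch sketch of that general idea, not the authors' implementation: the record does not enumerate the eight privacy-specific features, so the 8-dimensional interp vector is a hypothetical stand-in, and the ResNet-50 backbone with concatenation into a linear head is an assumed fusion design.

import torch
import torch.nn as nn
from torchvision import models

class PrivacyClassifier(nn.Module):
    """Fuses deep image features with a low-dimensional interpretable vector."""
    def __init__(self, interpretable_dim=8, num_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=None)  # assumed deep feature extractor
        self.deep_dim = backbone.fc.in_features   # 2048 for ResNet-50
        backbone.fc = nn.Identity()               # drop the ImageNet classification head
        self.backbone = backbone
        # Classify the concatenation of deep and interpretable features.
        self.head = nn.Linear(self.deep_dim + interpretable_dim, num_classes)

    def forward(self, image, interp):
        deep = self.backbone(image)               # (B, 2048) deep features
        fused = torch.cat([deep, interp], dim=1)  # (B, 2048 + 8) fused representation
        return self.head(fused)                   # private vs. public logits

# Toy forward pass: 4 images plus hypothetical 8-d interpretable vectors.
model = PrivacyClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 2])

Concatenation into a linear head is only one plausible fusion; the abstract's claim that the eight features are competitive on their own would correspond, in this sketch, to training the head on interp alone.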
Pages: 3489 - 3492
Page count: 4
Related papers
50 records in total
  • [1] Iris Recognition Based on Human-Interpretable Features
    Chen, Jianxu
    Shen, Feng
    Chen, Danny Z.
    Flynn, Patrick J.
    2015 IEEE INTERNATIONAL CONFERENCE ON IDENTITY, SECURITY AND BEHAVIOR ANALYSIS (ISBA), 2015
  • [2] Iris Recognition Based on Human-Interpretable Features
    Chen, Jianxu
    Shen, Feng
    Chen, Danny Ziyi
    Flynn, Patrick J.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2016, 11 (07) : 1476 - 1485
  • [3] Unsupervised Deep Features for Privacy Image Classification
    Sitaula, Chiranjibi
    Xiang, Yong
    Aryal, Sunil
    Lu, Xuequan
    IMAGE AND VIDEO TECHNOLOGY (PSIVT 2019), 2019, 11854 : 404 - 415
  • [4] Automatic Identification of Diatoms Using Visual Human-Interpretable Features
    Fischer, Stefan
    Bunke, Horst
    INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, 2002, 2 (01) : 67 - 87
  • [5] B-Cos Aligned Transformers Learn Human-Interpretable Features
    Tran, Manuel
    Lahiani, Amal
    Cid, Yashin Dicente
    Boxberg, Melanie
    Lienemann, Peter
    Matek, Christian
    Wagner, Sophia J.
    Theis, Fabian J.
    Klaiman, Eldad
    Peng, Tingying
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT VIII, 2023, 14227 : 514 - 524
  • [6] HiBug: On Human-Interpretable Model Debug
    Chen, Muxi
    Li, Yu
    Xu, Qiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [7] Editorial: Human-Interpretable Machine Learning
    Tolomei, Gabriele
    Pinelli, Fabio
    Silvestri, Fabrizio
    FRONTIERS IN BIG DATA, 2022, 5
  • [8] Explaining deep convolutional models by measuring the influence of interpretable features in image classification
    Ventura, Francesco
    Greco, Salvatore
    Apiletti, Daniele
    Cerquitelli, Tania
    DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 (05) : 3169 - 3226
  • [9] Human-Interpretable Feature Pattern Classification System Using Learning Classifier Systems
    Ebadi, Toktam
    Kukenys, Ignas
    Browne, Will N.
    Zhang, Mengjie
    EVOLUTIONARY COMPUTATION, 2014, 22 (04) : 629 - 650
  • [10] Human-interpretable image features derived from densely mapped cancer pathology slides predict diverse molecular phenotypes
    Diao, James A.
    Wang, Jason K.
    Chui, Wan Fung
    Mountain, Victoria
    Gullapally, Sai Chowdary
    Srinivasan, Ramprakash
    Mitchell, Richard N.
    Glass, Benjamin
    Hoffman, Sara
    Rao, Sudha K.
    Maheshwari, Chirag
    Lahiri, Abhik
    Prakash, Aaditya
    McLoughlin, Ryan
    Kerner, Jennifer K.
    Resnick, Murray B.
    Montalto, Michael C.
    Khosla, Aditya
    Wapinski, Ilan N.
    Beck, Andrew H.
    Elliott, Hunter L.
    Taylor-Weiner, Amaro
    NATURE COMMUNICATIONS, 2021, 12 (01)