A global and local context integration DCNN for adult image classification

Cited by: 17
Authors
Cheng, Feng [1 ]
Wang, Shi-Lin [1 ]
Wang, Xi-Zi [1 ]
Liew, Alan Wee-Chung [2 ]
Liu, Gong-Shen [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai, Peoples R China
[2] Griffith Univ, Sch Informat & Commun Technol, Gold Coast Campus, Southport, Qld 4222, Australia
Keywords
Adult image recognition; Deep convolutional network; Global context; Local context; Multi-task learning
DOI
10.1016/j.patcog.2019.106983
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the wide availability of the Internet and the proliferation of pornographic images online, adult image detection and filtering have become very important for preventing young people from accessing such harmful content. However, owing to the large diversity of adult images, automatic adult image detection is a difficult task. In this paper, a new deep convolutional neural network (DCNN) based approach is proposed to classify images into three classes, i.e., porn, sexy, and benign. Our approach takes both the entire picture (global context) and the meaningful regions (local context) into consideration. The proposed network is composed of three parts: the image characteristics subnet, which extracts discriminative low-level image features; the sensitive body part detection subnet, which detects regions related to adult images; and the feature extraction and fusion subnet, which generates high-level features for image classification. A multi-task learning scheme is designed to optimize the network with both the global and local information. Experiments were carried out on two datasets with over 160,000 images. The results show that the proposed network achieved high classification accuracy (96.6% on the AIC dataset and 92.7% on the NPDI dataset) and outperformed the other approaches investigated. (C) 2019 Elsevier Ltd. All rights reserved.
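The abstract only outlines the three-part architecture at a high level. As a minimal sketch of how such a global/local multi-task setup could be wired together (assuming PyTorch; the layer sizes, the reduced region-presence head standing in for the detection subnet, the class and region counts, and the 0.5 loss weight are illustrative placeholders, not the authors' actual design):

import torch
import torch.nn as nn

class GlobalLocalAdultNet(nn.Module):
    """Toy three-part network: shared low-level features (global context),
    a sensitive-region head (local context), and a fusion head that
    classifies images as porn / sexy / benign."""
    def __init__(self, num_classes: int = 3, num_region_types: int = 4):
        super().__init__()
        # Image characteristics subnet: shared low-level convolutional features.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Stand-in for the sensitive body-part detection subnet: here reduced
        # to predicting which sensitive region types are present in the image.
        self.region_head = nn.Linear(64 * 8 * 8, num_region_types)
        # Feature extraction and fusion subnet: combines global features with
        # the local-region evidence for the final 3-way classification.
        self.fusion = nn.Sequential(
            nn.Linear(64 * 8 * 8 + num_region_types, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        feats = self.backbone(x).flatten(1)          # global context
        region_logits = self.region_head(feats)      # local context evidence
        fused = torch.cat([feats, torch.sigmoid(region_logits)], dim=1)
        return self.fusion(fused), region_logits

# Multi-task training step: image-level classification (global) plus
# region-presence prediction (local), combined with a placeholder weight.
model = GlobalLocalAdultNet()
images = torch.randn(2, 3, 224, 224)
class_labels = torch.tensor([0, 2])                  # porn / sexy / benign ids
region_labels = torch.randint(0, 2, (2, 4)).float()  # multi-label region tags
class_logits, region_logits = model(images)
loss = (nn.functional.cross_entropy(class_logits, class_labels)
        + 0.5 * nn.functional.binary_cross_entropy_with_logits(region_logits, region_labels))
loss.backward()

The point mirrored here is that a single joint loss optimizes both the global classification branch and the local sensitive-region branch, so the shared low-level features are shaped by both tasks.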
Pages: 12
Related papers (50 records in total)
  • [31] From Local Similarity to Global Coding: An Application to Image Classification
    Shaban, Amirreza
    Rabiee, Hamid R.
    Farajtabar, Mehrdad
    Ghazvininejad, Marjan
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, : 2794 - 2801
  • [32] Fusion of Global and Local Descriptors for Remote Sensing Image Classification
    Risojevic, Vladimir
    Babic, Zdenka
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2013, 10 (04) : 836 - 840
  • [33] Autonomous deep learning: A genetic DCNN designer for image classification
    Ma, Benteng
    Li, Xiang
    Xia, Yong
    Zhang, Yanning
    NEUROCOMPUTING, 2020, 379 : 152 - 161
  • [35] Researching and transforming adult learning and communities - the local/global context
    Krasovec, Sabina Jelenc
    EUROPEAN JOURNAL FOR RESEARCH ON THE EDUCATION AND LEARNING OF ADULTS, 2016, 7 (01): : 135 - 137
  • [36] Aspect-level sentiment classification with fused local and global context
    Feng, Ao
    Cai, Jiazhi
    Gao, Zhengjie
    Li, Xiaojie
    JOURNAL OF BIG DATA, 2023, 10 (01)
  • [37] Learning Local and Global Multi-Context Representations for Document Classification
    Liu, Yi
    Yuan, Hao
    Ji, Shuiwang
    2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019), 2019, : 1234 - 1239
  • [38] LocoMixer: A Local Context MLP-Like Architecture For Image Classification
    Yin, Mingjun
    Chang, Zhiyong
    Wang, Yan
    2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2023, : 762 - 769
  • [39] A new method of image classification based on local appearance and context information
    Fan, Yuhua
    Qin, Shiyin
    NEUROCOMPUTING, 2013, 119 : 33 - 40
  • [40] MIXLIC: Mixing Global and Local Context Model for Learned Image Compression
    Ruan, Haihang
    Wang, Feng
    Xu, Tongda
    Tan, Zhiyong
    Wang, Yan
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 684 - 689