An Evaluation of Traditional and CNN-Based Feature Descriptors for Cartoon Pornography Detection

Cited by: 8
Authors
Aldahoul, Nouar [1 ]
Karim, Hezerul Abdul [1 ]
Abdullah, Mohd Haris Lye [1 ]
Wazir, Abdulaziz Saleh Ba [1 ]
Fauzi, Mohammad Faizal Ahmad [1 ]
Tan, Myles Joshua Toledo [2 ,3 ]
Mansor, Sarina [1 ]
Lyn, Hor Sui [1 ]
Affiliations
[1] Multimedia Univ, Fac Engn, Cyberjaya 63100, Malaysia
[2] Univ St La Salle, Dept Nat Sci, Bacolod 6100, Philippines
[3] Univ St La Salle, Dept Chem Engn, Bacolod 6100, Philippines
Keywords
Feature extraction; Visualization; Image color analysis; Censorship; Transfer learning; Streaming media; Training; Cartoon animation; convolutional neural networks; domain generalization; feature and decision fusion; pornography detection; transfer learning; REPRESENTATION; METHODOLOGY;
DOI
10.1109/ACCESS.2021.3064392
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
Inappropriate visual content has spread across the internet, and children are thus unintentionally exposed to sexually explicit material. Animated cartoon movies sometimes contain sensitive content such as pornography and sex. Video sharing platforms usually address children's e-safety through manual censorship, which is both time-consuming and expensive. Therefore, automated cartoon censorship is highly recommended for integration into media platforms. In this paper, various methods and approaches were explored to detect inappropriate visual content in cartoon animation. First, state-of-the-art conventional feature techniques were utilised and evaluated. In addition, a simple end-to-end convolutional neural network (CNN) was used and found to outperform conventional techniques in terms of accuracy (85.33%) and F1 score (83.46%). Additionally, deeper CNN architectures, ResNet and EfficientNet, were evaluated and compared. The CNN-based extracted features were mapped into two classes: normal and porn. To improve the model's performance, we utilised feature and decision fusion approaches, which were found to outperform state-of-the-art techniques in terms of accuracy (87.87%), F1 score (87.87%), and AUC (94.40%). To validate the domain generalisation performance of the proposed methods, CNNs pre-trained on the cartoon dataset were evaluated on the public NPDI-800 natural videos and found to provide an accuracy of 79.92% and an F1 score of 80.58%. Similarly, CNNs pre-trained on the public NPDI-800 natural videos were evaluated on the cartoon dataset and found to give an accuracy of 82.666% and an F1 score of 81.588%. Finally, a novel cartoon pornography dataset, with various characters, skin colours, positions, viewpoints, and scales, was proposed.
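The abstract credits its best results to feature and decision fusion over the per-class CNN outputs. As a rough illustration only (the paper's exact fusion scheme is not given in this record), a common form of decision fusion is to average the softmax probabilities of two classifiers and take the argmax; the two-class labels (0 = normal, 1 = porn) follow the abstract, while the function name and averaging rule are assumptions:

```python
import numpy as np

def decision_fusion(probs_a, probs_b):
    """Late (decision-level) fusion: average the class-probability
    vectors of two classifiers, then pick the most likely class.
    Class indices assumed here: 0 = normal, 1 = porn."""
    probs_a = np.asarray(probs_a, dtype=float)
    probs_b = np.asarray(probs_b, dtype=float)
    fused = (probs_a + probs_b) / 2.0
    return fused.argmax(axis=-1)

# Example: the two models disagree on one frame; the fused decision
# follows the more confident model (0.7 for "normal" vs 0.6 for "porn").
preds = decision_fusion([[0.4, 0.6]], [[0.7, 0.3]])
```

Feature-level fusion, by contrast, would concatenate the extracted feature vectors before a single classifier; decision-level fusion as sketched above only needs each model's output probabilities.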
Pages: 39910 - 39925
Page count: 16
Related Papers
50 records in total
  • [31] Concurrent bearing faults diagnosis with CNN-based object detection - An evaluation
    Dong, Yue
    Yang, Weilin
    Zhang, Chao
    Zhang, Yongwei
    Xu, Dezhi
    Pan, Tinglong
    TRANSACTIONS OF THE INSTITUTE OF MEASUREMENT AND CONTROL, 2024,
  • [32] Evaluation of cameras and image distance for CNN-based weed detection in wild blueberry
    Hennessy, Patrick J.
    Esau, Travis J.
    Schumann, Arnold W.
    Zaman, Qamar U.
    Corscadden, Kenneth W.
    Farooque, Aitazaz A.
    SMART AGRICULTURAL TECHNOLOGY, 2022, 2
  • [33] Performance evaluation of an improved deep CNN-based concrete crack detection algorithm
    Pennada, Sanjeetha
    Perry, Marcus
    McAlorum, Jack
    Dow, Hamish
    Dobie, Gordon
    SENSORS AND SMART STRUCTURES TECHNOLOGIES FOR CIVIL, MECHANICAL, AND AEROSPACE SYSTEMS 2023, 2023, 12486
  • [34] A depthwise separable CNN-based interpretable feature extraction network for automatic pathological voice detection
    Zhao, Denghuang
    Qiu, Zhixin
    Jiang, Yujie
    Zhu, Xincheng
    Zhang, Xiaojun
    Tao, Zhi
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 88
  • [35] CNN-based Bottleneck Feature for Noise Robust Query-by-Example Spoken Term Detection
    Lim, Hyungjun
    Kim, Younggwan
    Kim, Yoonhoe
    Kim, Hoirin
    2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC 2017), 2017, : 1237 - 1240
  • [36] Experimental Study of the Suitability of CNN-based Holistic Descriptors for Accurate Visual Localization
    Jaenal, Alberto
    Moreno, Francisco-Angel
    Gonzalez-Jimenez, Javier
    PROCEEDINGS OF 2ND INTERNATIONAL CONFERENCE ON APPLICATIONS OF INTELLIGENT SYSTEMS (APPIS 2019), 2019,
  • [37] CNN-based Feature Cross and Classifier for Loan Default Prediction
    Deng, Shizhe
    Li, Rui
    Jin, Yaohui
    He, Hao
    2020 INTERNATIONAL CONFERENCE ON IMAGE, VIDEO PROCESSING AND ARTIFICIAL INTELLIGENCE, 2020, 11584
  • [38] Salient Feature Selection for CNN-Based Visual Place Recognition
    Chen, Yutian
    Gan, Wenyan
    Jiao, Shanshan
    Xu, Youwei
    Feng, Yuntian
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2018, E101D (12) : 3102 - 3107
  • [39] CNN-Based Semantic Change Detection in Satellite Imagery
    Gupta, Ananya
    Welburn, Elisabeth
    Watson, Simon
    Yin, Hujun
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: WORKSHOP AND SPECIAL SESSIONS, 2019, 11731 : 669 - 684
  • [40] CNN-Based Traffic Volume Video Detection Method
    Chen, Tao
    Li, Xuchuan
    Guo, Congshuai
    Fan, Linkun
    CICTP 2020: TRANSPORTATION EVOLUTION IMPACTING FUTURE MOBILITY, 2020, : 2435 - 2445