Automated and Interpretable Fake News Detection With Explainable Artificial Intelligence

Cited: 1
Authors
Giri, Moyank [1 ]
Eswaran, Sivaraman [2 ]
Honnavalli, Prasad [1 ]
Daniel, D. [3 ]
Affiliations
[1] PES Univ, Res Ctr Informat Secur Forens & Cyber Resilience, Bangalore, Karnataka, India
[2] Curtin Univ, Dept Elect & Comp Engn, Miri, Sarawak, Malaysia
[3] CHRIST, Dept Comp Sci & Engn, Bangalore, Karnataka, India
Keywords
Convolutional Neural Network; Decision Tree; explainable AI; ensemble model; error level analysis; Naïve Bayes classifier; Random Forest classifier
DOI
10.1080/19361610.2024.2356431
Chinese Library Classification
DF [Law]; D9 [Law]
Subject Classification Code
0301
Abstract
Fake news is misleading or fabricated information that harms society, business, governments, and other institutions, making its detection an imperative problem. The solution presented here relies purely on rigorous machine learning, implementing a hybrid of simple yet accurate fake-text and fake-image detection models. It considers both the text and the images of a news article, extracted via web scraping. The text segment is analyzed with an ensemble of Naïve Bayes, Random Forest, and Decision Tree classifiers, which showed improved results over the individual models; the image segment is analyzed with a Convolutional Neural Network, which achieved accuracy comparable to the text model. To better train the text models, data preprocessing and aggregation methods were used to combine several fake-real news datasets into an ample corpus. Similarly, the CASIA dataset was used to train the image model, with Error Level Analysis applied to expose forged images. Model results are presented as confusion matrices and evaluated with various performance metrics, and Explainable Artificial Intelligence is used to explain the predictions of the hybrid model.
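The text-side pipeline described in the abstract (an ensemble of Naïve Bayes, Random Forest, and Decision Tree classifiers over preprocessed news text) can be sketched with scikit-learn. This is a minimal illustration under stated assumptions, not the authors' exact implementation: the toy corpus, TF-IDF featurization, soft-voting scheme, and hyperparameters here are placeholders.

```python
# Sketch of a text ensemble like the one described in the abstract:
# TF-IDF features feed a soft-voting ensemble of Naive Bayes,
# Random Forest, and Decision Tree classifiers.
# (Illustrative only: corpus, features, and parameters are placeholders.)
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Placeholder corpus: 1 = real, 0 = fake
texts = [
    "government announces verified budget figures",
    "scientists publish peer reviewed climate study",
    "shocking miracle cure doctors hate revealed",
    "celebrity secretly an alien insiders claim",
]
labels = [1, 1, 0, 0]

ensemble = VotingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities across models
)
model = make_pipeline(TfidfVectorizer(), ensemble)
model.fit(texts, labels)

# Score a held-out headline; 0 = fake under this toy labeling
pred = model.predict(["miracle cure shocking secret revealed"])[0]
```

Soft voting averages the class probabilities of the three base models, which is one common way such an ensemble can outperform any single classifier.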
Pages: 21
Related Papers
50 records
  • [21] Explainable vs. interpretable artificial intelligence frameworks in oncology
    Bertsimas, Dimitris
    Margonis, Georgios Antonios
    TRANSLATIONAL CANCER RESEARCH, 2023, 12 (02) : 217 - 220
  • [22] Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review
    Frasca M.
    La Torre D.
    Pravettoni G.
    Cutica I.
    Discov. Artif. Intell., 2024, 1 (1):
  • [23] Interpretable Machine Learning Models for Malicious Domains Detection Using Explainable Artificial Intelligence (XAI)
    Aslam, Nida
    Khan, Irfan Ullah
    Mirza, Samiha
    AlOwayed, Alanoud
    Anis, Fatima M.
    Aljuaid, Reef M.
    Baageel, Reham
    SUSTAINABILITY, 2022, 14 (12)
  • [24] Artificial Intelligence Blockchain Based Fake News Discrimination
    Kim, Seong-Kyu
    Huh, Jun-Ho
    Kim, Byung-Gyu
    IEEE ACCESS, 2024, 12 : 53838 - 53854
  • [25] Interpretable fake news detection with topic and deep variational models
    Hosseini, Marjan
    Sabet, Alireza Javadian
    He, Suining
    Aguiar, Derek
    ONLINE SOCIAL NETWORKS AND MEDIA, 2023, 36
  • [26] A systematic survey on explainable AI applied to fake news detection
    Athira, A. B.
    Kumar, S. D. Madhu
    Chacko, Anu Mary
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 122
  • [27] XFlag: Explainable Fake News Detection Model on Social Media
    Chien, Shih-Yi
    Yang, Cheng-Jun
    Yu, Fang
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2022, 38 (18-20) : 1808 - 1827
  • [28] Fake News Detection on Social Networks with Artificial Intelligence Tools: Systematic Literature Review
    Goksu, Murat
    Cavus, Nadire
    10TH INTERNATIONAL CONFERENCE ON THEORY AND APPLICATION OF SOFT COMPUTING, COMPUTING WITH WORDS AND PERCEPTIONS - ICSCCW-2019, 2020, 1095 : 47 - 53
  • [29] The relationship of artificial intelligence (AI) with fake news detection (FND): a systematic literature review
    Iqbal, Abid
    Shahzad, Khurram
    Khan, Shakeel Ahmad
    Chaudhry, Muhammad Shahzad
    GLOBAL KNOWLEDGE MEMORY AND COMMUNICATION, 2023,
  • [30] Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models
    Salih, Ahmed
    Galazzo, Ilaria Boscolo
    Gkontra, Polyxeni
    Lee, Aaron Mark
    Lekadir, Karim
    Raisi-Estabragh, Zahra
    Petersen, Steffen E.
    CIRCULATION-CARDIOVASCULAR IMAGING, 2023, 16 (04) : E014519