Adversarial Machine Learning for Protecting Against Online Manipulation

Cited by: 4
Authors
Cresci, Stefano [1 ]
Petrocchi, Marinella [1 ,2 ]
Spognardi, Angelo [3 ]
Tognazzi, Stefano [4 ]
Affiliations
[1] IIT CNR, I-56124 Pisa, Italy
[2] Scuola IMT Alti Studi Lucca, I-55100 Lucca, Italy
[3] Sapienza Univ Roma, Comp Sci Dept, I-00161 Rome, Italy
[4] Konstanz Univ, Ctr Adv Study Collect Behav, D-78464 Constance, Germany
Funding
EU Horizon 2020
Keywords
The year was 1950, and in his paper 'Computing Machinery and Intelligence', Alan Turing asked this question to his audience: can a machine think rationally? A question partly answered by the machine learning (ML) paradigm, whose traditional definition is as follows: 'A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E' [1].
I.2.4 Knowledge representation formalisms and methods; H.2.8.d Data mining; O.8.15 Social science methods or tools
DOI
10.1109/MIC.2021.3130380
Chinese Library Classification (CLC)
TP31 [Computer software];
Subject Classification Codes
081202; 0835;
Abstract
Adversarial examples are inputs to a machine learning system that result in an incorrect output from that system. Attacks launched through this type of input can cause severe consequences: for example, in the field of image recognition, a stop signal can be misclassified as a speed limit indication. However, adversarial examples also represent the fuel for a flurry of research directions in different domains and applications. Here, we give an overview of how they can be profitably exploited as powerful tools to build stronger learning models, capable of better withstanding attacks, for two crucial tasks: fake news and social bot detection.
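To make the notion concrete, below is a minimal sketch of how an adversarial example can be crafted with the fast gradient sign method (FGSM), one standard technique in this area. It assumes a differentiable PyTorch classifier; `model`, `loss_fn`, `x`, `y`, and `epsilon` are illustrative placeholders, not taken from the article.

```python
import torch

def fgsm_example(model, loss_fn, x, y, epsilon=0.03):
    """Return a perturbed copy of input x that increases the model's loss.

    model, loss_fn, x, y, and epsilon are illustrative assumptions,
    not details taken from the article itself.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).detach()
```

In an adversarial training setup, inputs perturbed in this way would be added back to the training data so that the resulting model, e.g., a fake news or social bot detector, better withstands the same kind of manipulation.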
Pages: 47 - 52
Number of pages: 6
Related Papers
50 records
  • [1] Adversarial Machine Learning Against Digital Watermarking
    Quiring, Erwin
    Rieck, Konrad
    [J]. 2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018, : 519 - 523
  • [2] A Moving Target Defense against Adversarial Machine Learning
    Roy, Abhishek
    Chhabra, Anshuman
    Kamhoua, Charles A.
    Mohapatra, Prasant
    [J]. SEC'19: PROCEEDINGS OF THE 4TH ACM/IEEE SYMPOSIUM ON EDGE COMPUTING, 2019, : 383 - 388
  • [3] Securing Pervasive Systems Against Adversarial Machine Learning
    Lagesse, Brent
    Burkard, Cody
    Perez, Julio
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATION WORKSHOPS (PERCOM WORKSHOPS), 2016,
  • [4] Making Machine Learning Robust Against Adversarial Inputs
    Goodfellow, Ian
    McDaniel, Patrick
    Papernot, Nicolas
    [J]. COMMUNICATIONS OF THE ACM, 2018, 61 (07) : 56 - 66
  • [5] Using Negative Detectors for Identifying Adversarial Data Manipulation in Machine Learning
    Gupta, Kishor Datta
    Dasgupta, Dipankar
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [6] Online Learning for Patrolling Robots Against Active Adversarial Attackers
    Rahman, Mahmuda
    Oh, Jae C.
    [J]. RECENT TRENDS AND FUTURE TECHNOLOGY IN APPLIED INTELLIGENCE, IEA/AIE 2018, 2018, 10868 : 477 - 488
  • [7] DeepFense: Online Accelerated Defense Against Adversarial Deep Learning
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javaheripi, Mojan
    Javidi, Tara
    Koushanfar, Farinaz
    [J]. 2018 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD) DIGEST OF TECHNICAL PAPERS, 2018,
  • [8] Online Robust Lagrangian Support Vector Machine against Adversarial Attack
    Ma, Yue
    He, Yiwei
    Tian, Yingjie
    [J]. 6TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND QUANTITATIVE MANAGEMENT, 2018, 139 : 173 - 181
  • [9] Secure machine learning against adversarial samples at test time
    Lin, Jing
    Njilla, Laurent L.
    Xiong, Kaiqi
    [J]. EURASIP Journal on Information Security, 2022
  • [10] Bridging Machine Learning and Cryptography in Defence Against Adversarial Attacks
    Taran, Olga
    Rezaeifar, Shideh
    Voloshynovskiy, Slava
    [J]. COMPUTER VISION - ECCV 2018 WORKSHOPS, PT II, 2019, 11130 : 267 - 279