Adversarial Machine Learning for Protecting Against Online Manipulation

Cited: 4
Authors
Cresci, Stefano [1 ]
Petrocchi, Marinella [1 ,2 ]
Spognardi, Angelo [3 ]
Tognazzi, Stefano [4 ]
Affiliations
[1] IIT CNR, I-56124 Pisa, Italy
[2] Scuola IMT Alti Studi Lucca, I-55100 Lucca, Italy
[3] Sapienza Univ Roma, Comp Sci Dept, I-00161 Rome, Italy
[4] Konstanz Univ, Ctr Adv Study Collect Behav, D-78464 Constance, Germany
Funding
EU Horizon 2020;
Keywords
The year was 1950 when, in his paper 'Computing Machinery and Intelligence', Alan Turing asked this question to his audience: can a machine think rationally? A question partly answered by the machine learning (ML) paradigm, whose traditional definition is as follows: 'A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E' [1]. Index terms: I.2.4 Knowledge representation formalisms and methods; H.2.8.d Data mining; O.8.15 Social science methods or tools.
DOI
10.1109/MIC.2021.3130380
CLC number
TP31 [Computer Software];
Discipline codes
081202 ; 0835 ;
Abstract
Adversarial examples are inputs to a machine learning system that result in an incorrect output from that system. Attacks launched through this type of input can cause severe consequences: for example, in the field of image recognition, a stop signal can be misclassified as a speed limit indication. However, adversarial examples also represent the fuel for a flurry of research directions in different domains and applications. Here, we give an overview of how they can be profitably exploited as powerful tools to build stronger learning models, capable of better withstanding attacks, for two crucial tasks: fake news and social bot detection.
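The abstract's notion of an adversarial example, a small targeted input perturbation that degrades a model's output, can be sketched with the fast gradient sign method (FGSM), one common way such examples are crafted. The toy logistic-regression model, the input values, and the `eps` budget below are illustrative assumptions, not taken from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a linear model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(w, b, x, y, eps):
    """Move each feature of x by eps in the direction that increases the loss.

    For logistic loss, the gradient w.r.t. x is (p - y) * w, so stepping
    along its sign gives the classic FGSM update.
    """
    p = predict(w, b, x)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

# Toy model that confidently labels the clean input as class 1 (hypothetical).
w, b = [2.0, -1.0], 0.0
x, y = [1.5, 0.5], 1.0              # clean input with true label 1

p_clean = predict(w, b, x)          # ~0.92: confident, correct
x_adv = fgsm_perturb(w, b, x, y, eps=1.0)
p_adv = predict(w, b, x_adv)        # ~0.38: now misclassified as class 0
print(p_clean, p_adv)
```

The same gradient-sign idea, applied during training instead of at attack time, is what makes adversarial examples useful for hardening detectors, as the paper discusses for fake news and social bot detection.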
Pages: 47 - 52
Number of pages: 6
Related papers
50 records
  • [31] Server-Based Manipulation Attacks Against Machine Learning Models
    Liao, Cong
    Zhong, Haoti
    Zhu, Sencun
    Squicciarini, Anna
    PROCEEDINGS OF THE EIGHTH ACM CONFERENCE ON DATA AND APPLICATION SECURITY AND PRIVACY (CODASPY'18), 2018, : 24 - 34
  • [32] Adversarial Machine Learning for Text
    Lee, Daniel
    Verma, Rakesh
    PROCEEDINGS OF THE SIXTH INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS (IWSPA'20), 2020, : 33 - 34
  • [33] Quantum adversarial machine learning
    Lu, Sirui
    Duan, Lu-Ming
    Deng, Dong-Ling
    PHYSICAL REVIEW RESEARCH, 2020, 2 (03):
  • [34] Machine Learning in Adversarial Settings
    McDaniel, Patrick
    Papernot, Nicolas
    Celik, Z. Berkay
    IEEE SECURITY & PRIVACY, 2016, 14 (03) : 68 - 72
  • [35] Machine learning in adversarial environments
    Laskov, Pavel
    Lippmann, Richard
    MACHINE LEARNING, 2010, 81 (02) : 115 - 119
  • [37] On the Economics of Adversarial Machine Learning
    Merkle, Florian
    Samsinger, Maximilian
    Schottle, Pascal
    Pevny, Tomas
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4670 - 4685
  • [38] Adversarial machine learning in dermatology
    Gilmore, Stephen
    AUSTRALASIAN JOURNAL OF DERMATOLOGY, 2022, 63 : 118 - 118
  • [39] Protection against Adversarial Attacks on Malware Detectors Using Machine Learning Algorithms
    Marshev, I. I.
    Zhukovskii, E. V.
    Aleksandrova, E. B.
    AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2021, 55 (08) : 1025 - 1028
  • [40] Modeling Attack Resistant PUFs Based on Adversarial Attack Against Machine Learning
    Wang, Sying-Jyan
    Chen, Yu-Sheng
    Li, Katherine Shu-Min
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2021, 11 (02) : 306 - 318