Survey on Trustworthiness Measurement for Artificial Intelligence Systems

Authors
Liu H. [1 ,2 ]
Li K.-X. [1 ,2 ]
Chen Y.-X. [1 ,2 ]
Affiliations
[1] Software Engineering Institute, East China Normal University, Shanghai
[2] Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai
Source
Ruan Jian Xue Bao/Journal of Software | 2023, Vol. 34, No. 8
Keywords
artificial intelligence system; measurement; trustworthiness;
DOI
10.13328/j.cnki.jos.006592
Abstract
In recent years, artificial intelligence (AI) has developed rapidly, and AI systems have penetrated people's lives to become an indispensable part of them. However, these systems require large amounts of data to train their models, and disturbances in that data affect their results. Furthermore, as business scenarios diversify and system scale grows, the trustworthiness of AI systems has attracted wide attention. First, based on the trustworthiness attributes proposed by different organizations and scholars, this study introduces nine trustworthiness attributes of AI systems. Next, the study discusses methods for measuring the data, model, and result trustworthiness of existing AI systems and designs an evidence collection method for AI trustworthiness. It then summarizes the trustworthiness measurement theory and methods for AI systems. In addition, combining attribute-based software trustworthiness measurement methods with blockchain technologies, the study establishes a trustworthiness measurement framework for AI systems, which includes methods for trustworthiness attribute decomposition and evidence acquisition, a federated trustworthiness measurement model, and a blockchain-based trustworthiness measurement structure for AI systems. Finally, it describes the opportunities and challenges of trustworthiness measurement technologies for AI systems. © 2023 Chinese Academy of Sciences. All rights reserved.
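The abstract mentions an attribute-based trustworthiness measurement framework that decomposes trustworthiness into attributes and aggregates per-attribute evidence into an overall score. A minimal sketch of one common aggregation form (a weighted geometric mean over attribute scores) is shown below; the attribute names, weights, and aggregation function are illustrative assumptions, not the specific model defined in the paper.

```python
import math

def trustworthiness(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate per-attribute scores in (0, 1] into one system-level score
    using a weighted geometric mean (weights are normalized to sum to 1).
    This is a hypothetical sketch, not the paper's federated measurement model."""
    total = sum(weights.values())
    return math.prod(s ** (weights[a] / total) for a, s in scores.items())

# Illustrative attribute scores and weights (three of the nine attributes).
scores = {"robustness": 0.9, "fairness": 0.8, "privacy": 0.85}
weights = {"robustness": 2.0, "fairness": 1.0, "privacy": 1.0}
print(round(trustworthiness(scores, weights), 3))  # → 0.861
```

A geometric rather than arithmetic mean is a common choice here because a very low score on any single attribute (e.g., robustness near zero) pulls the overall trustworthiness score down sharply instead of being averaged away.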
Pages: 3774-3792
Page count: 18