In recent years, artificial intelligence technology driven by machine learning algorithms has been widely adopted in fields such as computer vision, natural language processing, and speech recognition, and a variety of machine learning models have greatly facilitated people's lives. The workflow of a machine learning model consists of three stages. First, the model receives raw data, collected or generated by developers, as input and preprocesses it with algorithms such as data augmentation and feature extraction. Second, the model defines the architecture of its neurons or layers and constructs a computational graph from operators (e.g., convolution and pooling). Finally, the model invokes machine learning framework functions to implement these operators and computes the prediction for the input according to the weights of its neurons. In this process, slight fluctuations in the output of individual neurons may lead to an entirely different model output, which poses serious security risks. However, owing to an insufficient understanding of the inherent vulnerabilities of machine learning models and their black-box behavior, it is difficult for researchers to identify or locate these potential security risks in advance, creating hazards for personal property safety and even national security. Studying testing and repairing methods for machine learning model security is therefore of great significance: it helps researchers deeply understand the internal risks and vulnerabilities of models, comprehensively guarantee the security of machine learning systems, and broadly apply artificial intelligence technology. Existing testing research on machine learning model security has mainly focused on properties such as the correctness and robustness of the model, and this research has achieved notable results.
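The three-stage workflow described above can be sketched as follows. This is a minimal NumPy-only toy pipeline, not any model studied in the paper: stage 1 preprocesses the raw data, stage 2 builds a "computational graph" as an ordered list of operators, and stage 3 executes the operators with fixed weights to produce a prediction. All function names and shapes here are illustrative assumptions.

```python
import numpy as np

# Stage 1: preprocessing -- here, simple feature standardization of raw input.
def preprocess(raw):
    raw = np.asarray(raw, dtype=float)
    return (raw - raw.mean()) / (raw.std() + 1e-8)

# Stage 2: define the computational graph as an ordered list of operators.
def relu(x):
    return np.maximum(x, 0.0)

def dense(w, b):
    # A fully connected layer with fixed weights w and bias b.
    return lambda x: x @ w + b

rng = np.random.default_rng(0)  # fixed seed stands in for trained weights
graph = [
    dense(rng.normal(size=(4, 8)), np.zeros(8)),  # layer 1: 4 -> 8
    relu,                                         # activation operator
    dense(rng.normal(size=(8, 3)), np.zeros(3)),  # layer 2: 8 -> 3 classes
]

# Stage 3: execution -- run the operators in order on the preprocessed input.
def predict(raw):
    x = preprocess(raw)
    for op in graph:
        x = op(x)
    return int(np.argmax(x))  # predicted class index

# Small perturbations of the raw input can flip the argmax, which is why
# slight neuron-level fluctuations can change the model's final output.
print(predict([0.2, 1.5, -0.3, 4.0]))
```

In a real framework the operator graph would be built and executed by library code, but the separation into data, graph, and execution is the same.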
Starting from different security testing properties, this paper introduces existing machine learning model security testing and repairing techniques in detail, summarizes and analyzes the deficiencies of existing research, and discusses the technical progress and challenges of machine learning model security testing and repairing, providing guidance and reference for the safe application of models. We first introduce the structural composition of machine learning models and the main testing properties related to their security. We then systematically summarize and analyze existing work along the three components of a machine learning model (data, algorithm, and implementation) and six security-related testing properties (correctness, robustness, fairness, efficiency, interpretability, and privacy), and discuss the effectiveness and limitations of existing testing and repairing methods. Finally, we discuss several technical challenges and potential future directions for testing and repairing methods for machine learning model security. © 2022 Chinese Institute of Electronics. All rights reserved.