Objective Click-through rate (CTR) prediction enables accurate recommendation of digital advertisements by predicting a user's probability of clicking on an advertisement or commodity. However, current CTR prediction models suffer from two key issues: first, the raw embedding vectors are not fully refined; second, the corresponding feature interaction method is too simple. As a result, model performance is heavily restricted. To alleviate these issues, a novel CTR model, the self-attention deep field-embedded factorization machine (Self-AtDFEFM), is proposed.

Methods First, the well-known multi-head self-attention mechanism is employed to capture the implicit information of the raw embedding vectors in different subspaces, and the corresponding weights are calculated to further refine the key low-level features. Second, a novel field-embedded factorization machine (FEFM) is designed to strengthen the interaction intensity between different feature fields through field-pair symmetric matrices. The key low-order feature combinations are fully optimized by the FEFM module for the subsequent high-order feature interaction. Third, a deep neural network (DNN) is built on the low-order feature combinations to perform implicit high-order feature interaction. Finally, the explicit and implicit feature interactions are combined to produce the CTR prediction.

Results and Discussions Extensive experiments were performed on two publicly available datasets, Criteo and Avazu. First, the proposed Self-AtDFEFM was compared with numerous state-of-the-art baselines on the AUC (area under the curve) and LogLoss metrics. Second, all parameters of Self-AtDFEFM were tuned, including the number of explicit high-order feature interaction layers, the number of attention heads, the embedding dimension, and the number of implicit high-order feature interaction layers.
Further, ablation experiments on the model were conducted. The experimental results showed that the Self-AtDFEFM model outperformed mainstream baseline models on the AUC and LogLoss metrics; all parameters of Self-AtDFEFM were adjusted to their optimal values; and the modules acted jointly to improve the final CTR prediction performance. Notably, the explicit high-order feature interaction layer plays the most important role in Self-AtDFEFM.

Conclusions Each module of Self-AtDFEFM is plug-and-play, so the model is easy to build and deploy. Hence, Self-AtDFEFM achieves a good trade-off between prediction performance and model complexity, making it highly practical. © 2024 Sichuan University. All rights reserved.
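The four-step pipeline described in Methods can be sketched as follows. This is an illustrative NumPy sketch only, not the paper's implementation: all shapes, layer sizes, and random weight initializations are assumptions, and the FEFM term is written in its generic form (a symmetric matrix per field pair scoring each embedding pair).

```python
import numpy as np

# Illustrative sketch of the Self-AtDFEFM pipeline; toy sizes and random
# weights are assumptions, not the paper's actual configuration.
rng = np.random.default_rng(0)
n_fields, d, n_heads = 4, 8, 2  # assumed number of fields, embedding dim, heads

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(E, Wq, Wk, Wv, n_heads):
    """Step 1: refine raw field embeddings E (n_fields, d) per attention sub-space."""
    n, dim = E.shape
    dh = dim // n_heads
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))  # attention weights
        heads.append(A @ V[:, s])
    return np.concatenate(heads, axis=1)

def fefm_score(E, W_pair):
    """Step 2: explicit interactions e_i^T M_ij e_j with symmetric field-pair matrices."""
    total = 0.0
    n = E.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            M = (W_pair[i, j] + W_pair[i, j].T) / 2.0  # enforce symmetry
            total += E[i] @ M @ E[j]
    return total

def dnn(x, layers):
    """Step 3: implicit high-order interactions via a small MLP (ReLU hidden layers)."""
    for Wl, bl in layers[:-1]:
        x = np.maximum(x @ Wl + bl, 0.0)
    Wl, bl = layers[-1]
    return x @ Wl + bl

# Toy parameters (assumptions; scaled small to keep the logit moderate)
E = rng.normal(size=(n_fields, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W_pair = rng.normal(size=(n_fields, n_fields, d, d)) * 0.1
layers = [(rng.normal(size=(n_fields * d, 16)) * 0.1, np.zeros(16)),
          (rng.normal(size=(16, 1)) * 0.1, np.zeros(1))]

refined = multi_head_self_attention(E, Wq, Wk, Wv, n_heads)  # step 1
explicit = fefm_score(refined, W_pair)                       # step 2
implicit = dnn(refined.reshape(-1), layers)[0]               # step 3
p_click = 1.0 / (1.0 + np.exp(-(explicit + implicit)))       # step 4: combine
```

Because each step consumes and produces plain embedding tensors, every stage above is independently replaceable, which mirrors the plug-and-play property claimed in the Conclusions.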