Although black-box models such as ensemble learning models often provide better predictive performance than intrinsically interpretable models such as logistic regression, black-box models are still often inapplicable in practice due to their lack of interpretability. Recently, there has been an explosion of work on explainable machine learning techniques, which utilize external algorithms or models to explain the behavior of black-box models. However, such explanations are problematic because they might not reveal the true mechanism or decision process of the black-box model. In this study, instead of using explainable machine learning techniques, an automated feature engineering task was formulated to help logistic regression achieve predictive performance comparable to, or even better than, black-box models while maintaining interpretability. To this end, an INterpretable Automated Feature ENgineering (INAFEN) framework was designed for logistic regression. This framework automatically transforms nonlinear relationships between numerical features and labels into linear relationships, conducts feature crossing through association rule mining, and distills knowledge from black-box models. A case study on gastric cancer survival prediction was performed to demonstrate the rationality of the feature transformations produced by INAFEN, together with benchmark experiments showing its validity. Experimental results on 10 classification tasks demonstrated that INAFEN achieved an average ranking (among 13 models) of 2.60 in area under the ROC curve (AUROC), 3.35 in area under the PR curve (AUPRC), 3.70 in F1 score, and 3.00 in Brier score, outperforming other interpretable baselines and even black-box models. In addition, the measured interpretability of INAFEN is significantly better than that of black-box models.
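As a rough illustration of the first two steps described above, the following scikit-learn sketch is a minimal stand-in, not the authors' implementation: it bins a numerical feature so that its nonlinear effect on the log-odds becomes linear in the one-hot bin indicators, and then forms a simple conjunction-style feature cross. All variable names and the synthetic data are hypothetical.

```python
# A minimal sketch (assuming scikit-learn) of two ideas named in the abstract:
# (1) linearizing a nonlinear numerical feature via supervised binning, and
# (2) a conjunction-style feature cross. Not the INAFEN implementation.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(2000, 1))
# The label depends on x nonlinearly (U-shaped), so a logistic regression
# fit on the raw feature cannot capture the relationship.
p = 1 / (1 + np.exp(-(x.ravel() ** 2 - 2)))
y = rng.binomial(1, p)

# Step 1: bin the feature and one-hot encode the bins. Each bin receives its
# own coefficient, so the U-shaped effect becomes linear in the new features
# while the model remains directly interpretable.
model = make_pipeline(
    KBinsDiscretizer(n_bins=8, encode="onehot-dense", strategy="quantile"),
    LogisticRegression(max_iter=1000),
)
model.fit(x, y)
print("binned logistic regression accuracy:", model.score(x, y))

# Step 2 (sketched): for two binary features a and b, a rule-style feature
# cross is simply their conjunction a AND b, which can be appended as one
# extra interpretable column before refitting the logistic regression.
a = (rng.uniform(size=2000) > 0.5).astype(int)
b = (rng.uniform(size=2000) > 0.5).astype(int)
cross = a * b  # 1 only when both a and b are 1
```

INAFEN additionally guides such transformations with association rule mining and knowledge distilled from black-box models, which this toy example does not attempt to reproduce.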