Cross-domain fault diagnosis methods have been extensively investigated to improve the practical engineering applicability of data-driven models. However, annotated data in practical applications are often insufficient, making it difficult to train models effectively. Moreover, existing methods typically transfer knowledge learned from one device to another, while data collected from different devices exhibit different distributions. To address these issues, a dynamic model-driven, dictionary learning-inspired domain adaptation strategy is proposed. First, a novel dynamic model that quantitatively accounts for the effects of slip and lubrication is established to generate a large volume of labeled data. Second, a novel deep discriminative transfer dictionary neural network (DDTDNN) is developed, comprising a new multi-layer deep dictionary learning (MDDL) module and an adaptive bandwidth maximum mean discrepancy (ABMMD) metric. MDDL leverages iterative soft thresholding and gradient descent to extract domain-invariant representations in the sparse representation space, while ABMMD is incorporated into the loss function and works alongside the classification loss to jointly guide training. This new metric dynamically sets kernel bandwidths via a median heuristic, which helps the model adapt to the scale of the data and align feature distributions more effectively. The effectiveness of DDTDNN is validated on two cross-domain datasets, on which it achieves classification accuracies of 99.1% and 98.5%, respectively, outperforming several state-of-the-art methods.
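The two computational ingredients named above, the soft-thresholding update used in iterative sparse coding and a median-heuristic bandwidth for a Gaussian-kernel MMD, can be illustrated with a minimal sketch. Function names, the single-kernel simplification, and the exact median-of-squared-distances convention are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: the elementwise update at the
    # core of iterative soft-thresholding (ISTA) for sparse coding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def median_heuristic_bandwidth(X, Y):
    # Set the Gaussian kernel bandwidth from the median pairwise
    # distance over the pooled samples, so the kernel scale tracks
    # the scale of the data (one common form of the median heuristic).
    Z = np.vstack([X, Y])
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    med = np.median(d2[np.triu_indices_from(d2, k=1)])
    return np.sqrt(med)

def mmd2(X, Y, sigma=None):
    # Biased squared-MMD estimate between samples X and Y with a
    # Gaussian kernel; sigma defaults to the median heuristic.
    if sigma is None:
        sigma = median_heuristic_bandwidth(X, Y)
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# Hypothetical source/target features: a distribution shift between
# them yields a larger MMD than two draws from the same distribution.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(50, 4))
tgt_same = rng.normal(0.0, 1.0, size=(50, 4))
tgt_shift = rng.normal(3.0, 1.0, size=(50, 4))
```

In a loss of the form classification loss plus a weighted `mmd2(source_features, target_features)` term, minimizing the MMD pushes the two feature distributions toward alignment; recomputing the bandwidth from the current batch is what makes the bandwidth "adaptive".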