Federated Learning (FL) enables multiple distributed clients to coordinate with a central server to train a global model without sharing their private data. However, data owned by different clients, even samples with the same label, may induce conflicting latent feature maps, especially under non-IID FL scenarios. This can severely impair the performance of the global model. To this end, we propose a novel approach, DAFL (Dual Adversarial Federated Learning), to mitigate the divergence of latent feature maps among clients on non-IID data. In particular, a local dual adversarial training scheme is designed to identify the origin of each latent feature map and then transform conflicting feature maps so that the global and local models in each client reach a consensus. In addition, the latent feature maps of the two models are adaptively drawn closer by reducing their Kullback-Leibler divergence. Extensive experiments on benchmark datasets validate the effectiveness of DAFL and demonstrate that it outperforms state-of-the-art approaches in test accuracy under various non-IID settings.
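To make the KL-based feature alignment concrete, the following is a minimal sketch of a client-side update, assuming a PyTorch-style setup in which both the local model and a frozen copy of the global model expose their intermediate feature maps. The function name, the returned `(features, logits)` interface, and the weighting factor `lam` are illustrative assumptions, not DAFL's actual implementation, and the adversarial discriminator described above is omitted for brevity.

```python
# Sketch only: assumes PyTorch and models that return (latent features, logits);
# names and the weight `lam` are hypothetical, not the paper's implementation.
import torch
import torch.nn.functional as F

def local_training_step(local_model, global_model, x, y, optimizer, lam=0.5):
    """One client-side update combining the task loss with a KL term that
    pulls the local latent feature maps toward the global model's."""
    local_feat, logits = local_model(x)        # local model's features and predictions
    with torch.no_grad():
        global_feat, _ = global_model(x)       # global model kept fixed on the client

    task_loss = F.cross_entropy(logits, y)

    # Treat flattened feature maps as distributions and reduce their KL divergence.
    log_p_local = F.log_softmax(local_feat.flatten(1), dim=1)
    p_global = F.softmax(global_feat.flatten(1), dim=1)
    kl_loss = F.kl_div(log_p_local, p_global, reduction="batchmean")

    loss = task_loss + lam * kl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```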