To defend against adversarial attacks, it is essential to develop a defense framework that is both robust and computationally efficient. Adversarial ensemble learning, which combines adversarial training with ensembles of multiple deep neural networks (DNNs), is the most effective technique for defending against adversarial example attacks. However, ensemble models run noticeably slower on existing DNN accelerators than single-model inference, and deploying them on these accelerators leads to critical issues such as underutilization of hardware resources. To tackle these challenges, we propose EnsGuard, a dynamic asymmetric multicore systolic array architecture for adversarial ensemble learning inference that fully exploits both the static and dynamic parallelism of ensemble models. Specifically, at the hardware level, we propose a novel instruction set extension and efficient architectural components that expose scattered idle computing cores as a new hardware abstraction, dynamically aggregating them into on-the-fly neural processing units (fNPUs). Moreover, we propose a computing-power recycling mechanism that runs on-the-fly (small) models on fNPUs by carefully orchestrating the execution order of ensemble members, maximizing the utilization of hardware resources and bandwidth. At the software level, EnsGuard adopts an integrated hardware/randomized-ensemble co-design optimizer that targets both faster inference and higher adversarial robustness. On top of that, we propose a decision-tree-based multi-model mapping method that interleaves the execution of different DNNs both spatially and temporally and mitigates straggler problems. Evaluation on a diverse set of workloads shows significant gains in throughput (4.4x) and energy reduction (3.2x).
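To make the computing-power recycling idea concrete, below is a minimal Python sketch of the scheduling concept: idle cores left over by large ensemble members are gathered into an fNPU that can host a small on-the-fly model. This is a high-level software analogy under our own assumptions, not the paper's hardware implementation; all names (Core, FNPU, RecycleScheduler, min_fnpu_size) are hypothetical.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Core:
    """One processing core in the multicore systolic array (illustrative)."""
    core_id: int
    busy: bool = False


@dataclass
class FNPU:
    """An on-the-fly NPU assembled from scattered idle cores (illustrative)."""
    cores: List[Core]

    def release(self) -> None:
        # Return the cores to the idle pool once the small model finishes.
        for core in self.cores:
            core.busy = False


class RecycleScheduler:
    """Toy scheduler: recycle idle cores into an fNPU for a small model."""

    def __init__(self, num_cores: int, min_fnpu_size: int = 4):
        self.cores = [Core(i) for i in range(num_cores)]
        self.min_fnpu_size = min_fnpu_size

    def idle_cores(self) -> List[Core]:
        return [c for c in self.cores if not c.busy]

    def try_form_fnpu(self) -> Optional[FNPU]:
        # Claim enough idle cores to form an fNPU, or report that the
        # array currently has too little slack to host a small model.
        idle = self.idle_cores()
        if len(idle) < self.min_fnpu_size:
            return None
        claimed = idle[: self.min_fnpu_size]
        for core in claimed:
            core.busy = True
        return FNPU(claimed)


# Example: a 16-core array where 10 cores run large ensemble members;
# the remaining 6 cores are recycled into an fNPU for a small model.
sched = RecycleScheduler(num_cores=16)
for core in sched.cores[:10]:
    core.busy = True
fnpu = sched.try_form_fnpu()
if fnpu is not None:
    print("fNPU formed from cores", [c.core_id for c in fnpu.cores])
    fnpu.release()

In the actual architecture this role is played by the instruction set extension and hardware components described above; the sketch only illustrates the resource-recycling policy those mechanisms enable.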