Federated learning (FL) is vulnerable to poisoning attacks, where Byzantine FL clients send malicious model updates to degrade the accuracy of the global model. Although numerous defenses have been proposed, they can be circumvented by more advanced, stealthy poisoning attacks. In this paper, we propose an Enclave-aided Byzantine-robust Federated Aggregation (EBFA) framework. In particular, at each FL epoch, we first evaluate the layer-wise cosine similarity between the guide model (learned from an extra validation dataset) and the local models, and then apply the boxplot method to construct an outlier region that identifies Byzantine clients. To avoid the interference with robust federated aggregation caused by classical privacy-preserving methods, such as differential privacy and homomorphic encryption, we further design an efficient privacy-preserving scheme for robust aggregation via a Trusted Execution Environment (TEE); to improve efficiency, we deploy only the privacy-sensitive aggregation operations within the resource-limited TEE (or enclave). Finally, we perform extensive experiments on different datasets and demonstrate that our proposed EBFA outperforms state-of-the-art Byzantine-robust schemes (e.g., FLTrust) under non-IID settings. Moreover, our proposed enclave-aided privacy-preserving scheme significantly improves efficiency (by over 40% for AlexNet) in comparison with a TEE-only scheme.
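
For illustration, the following is a minimal sketch of the filtering step the abstract describes, assuming each model is a list of NumPy arrays (one per layer), each client's score is the mean of its layer-wise similarities, and only the lower boxplot fence is used to flag outliers (low similarity to the guide model suggests a Byzantine update). The function names (`layerwise_cosine`, `boxplot_outliers`, `filter_byzantine`) and these aggregation choices are hypothetical, not the paper's exact procedure.

```python
import numpy as np

def layerwise_cosine(guide, local):
    """Cosine similarity between guide and local model, computed per layer."""
    return np.array([
        np.dot(g.ravel(), l.ravel())
        / (np.linalg.norm(g) * np.linalg.norm(l) + 1e-12)
        for g, l in zip(guide, local)
    ])

def boxplot_outliers(scores, k=1.5):
    """Flag scores below the boxplot lower fence Q1 - k * IQR."""
    q1, q3 = np.percentile(scores, [25, 75])
    lower_fence = q1 - k * (q3 - q1)
    return scores < lower_fence

def filter_byzantine(guide_model, local_models):
    """Drop local models whose similarity to the guide model is an outlier."""
    # One score per client: mean over layers (an assumption; the paper
    # may combine the layer-wise similarities differently).
    scores = np.array([
        layerwise_cosine(guide_model, m).mean() for m in local_models
    ])
    flagged = boxplot_outliers(scores)
    return [m for m, bad in zip(local_models, flagged) if not bad]
```

In a full system, the surviving updates would then be averaged inside the enclave, so that individual client updates are never exposed in plaintext outside the TEE.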