Federated learning (FL) is a distributed machine learning paradigm that enables the joint training of a global model by aggregating gradients from participating clients without exchanging raw data. However, a malicious aggregation server may deliberately return fabricated results without performing any aggregation to save computation overhead, or even launch privacy inference attacks using crafted gradients. Only a few existing schemes focus on verifiable FL, and none of them achieves collusion-resistant verification. In this paper, we propose a novel Verifiable, Collusion-resistant, and Dynamic FL (VCD-FL) scheme to tackle this issue. Specifically, we first optimize Lagrange interpolation via gradient grouping and compression to achieve efficient verifiability of FL. To protect clients' data privacy against collusion attacks, we propose a lightweight commitment scheme based on an irreversible gradient transformation. By integrating the efficient verification mechanism with the novel commitment scheme, VCD-FL can detect whether the aggregation server is involved in a collusion attack. Moreover, considering that clients may go offline due to network anomalies or client crashes, we adopt a secret sharing technique to eliminate the effect of federation dynamics on FL. In a nutshell, VCD-FL achieves collusion-resistant verification and collusion attack detection while supporting correctness, privacy, and dynamics. Finally, we theoretically prove the effectiveness of VCD-FL, make comprehensive comparisons with related schemes, and conduct a series of experiments on the MNIST dataset with MLP and CNN models. The theoretical proofs and experimental results demonstrate that VCD-FL is computationally efficient, robust against collusion attacks, and able to support the dynamics of FL.
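The dropout-tolerance step mentioned above relies on secret sharing with Lagrange reconstruction. As a minimal illustrative sketch (not the paper's concrete protocol or parameters), standard Shamir secret sharing over a prime field shows how any threshold number of surviving clients can recover a dropped client's secret, e.g. a masking seed; the field modulus and threshold here are assumptions chosen for demonstration:

```python
import random

PRIME = 2**61 - 1  # Mersenne prime used as the finite-field modulus (illustrative choice)

def make_shares(secret, threshold, n_shares):
    """Split `secret` into n_shares Shamir shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Modular inverse of den via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With a threshold of 3 out of 5 shares, `reconstruct` on any 3 shares returns the original secret, so the federation can proceed even when up to 2 clients go offline mid-round.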