In the real world, most datasets exhibit a long-tailed distribution. Although multi-expert approaches have achieved commendable results, there is still room for improvement. To improve model performance on long-tailed datasets, we propose PEHD (performance-weighted experts with hybrid distillation), a strategy that integrates data augmentation techniques with a multi-expert system and hybrid distillation. We first apply different data augmentation strategies (strong and weak image augmentation) to the original images in order to make fuller use of the limited samples. Second, we introduce a multi-strategy weighted multi-expert scheme that encourages each expert to focus on categories of different frequencies. Furthermore, to reduce the variance of the model and improve its performance, we design a learning framework that combines self-distillation and mutual distillation, transferring knowledge between teacher and student models both within and across categories. Finally, based on the sample sizes of different categories, we apply different threshold-filtering strategies, which further reduces the negative impact of extremely low-quality samples from small classes on model training. Experimental results show that PEHD achieves advanced performance on multiple public datasets; in particular, on CIFAR-100-LT with an imbalance ratio of 100, the classification accuracy reaches 54.11%.
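As a rough illustration of how the hybrid distillation and frequency-dependent threshold filtering described above could fit together, the following PyTorch-style sketch combines a self-distillation term, a mutual-distillation term, and a per-class confidence filter. All function names, threshold values, and the specific weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_distillation_loss(student_logits, teacher_logits, peer_logits,
                             class_counts, labels, temperature=2.0,
                             head_threshold=0.9, tail_threshold=0.6):
    """Sketch of a hybrid (self + mutual) distillation objective with
    frequency-dependent confidence thresholding. Names and thresholds
    are hypothetical, chosen only to illustrate the idea."""
    # Soft targets from the (self-)teacher and from a peer expert.
    t_prob = F.softmax(teacher_logits / temperature, dim=1)
    p_prob = F.softmax(peer_logits / temperature, dim=1)
    s_logp = F.log_softmax(student_logits / temperature, dim=1)

    # Per-sample KL terms: self-distillation and mutual distillation.
    self_kd = F.kl_div(s_logp, t_prob, reduction="none").sum(dim=1)
    mutual_kd = F.kl_div(s_logp, p_prob, reduction="none").sum(dim=1)

    # Frequency-dependent confidence threshold: samples from head classes
    # are filtered more aggressively, while scarce tail samples are kept.
    is_head = class_counts[labels] > class_counts.float().median()
    threshold = torch.where(is_head,
                            torch.full_like(self_kd, head_threshold),
                            torch.full_like(self_kd, tail_threshold))
    confidence = t_prob.max(dim=1).values
    keep = (confidence >= threshold).float()

    # Average only over samples that pass the confidence filter.
    denom = keep.sum().clamp(min=1.0)
    return (temperature ** 2) * ((self_kd + mutual_kd) * keep).sum() / denom
```

In this sketch, the two distillation terms are weighted equally for simplicity; in practice the balance between self- and mutual distillation, and the per-class thresholds, would be tuned per dataset.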