Time-series prediction is a current research hotspot in deep learning. However, due to the complex nature of time-series data, the modeling in this task is often highly nonconvex, which can make the final convergence unstable. To address this challenge, recent works have proposed deep mutual learning frameworks that allow models to learn from both the ground truth and the knowledge of other models in order to locate a better convergence point. However, a key disadvantage of deep mutual learning is that models that converge to poor local optima may still share their knowledge, limiting overall performance. To overcome this limitation, in this article, we propose a new learning framework called mutual adaptation, which selects the prototype model with the least error among all the models in the framework as the common teacher model. In addition, we incorporate a strategy of learning from each individual model's best local optimum in its training history. Our experimental results show that, averaged across multiple datasets, our method improves the performance of both Informer and long short-term memory (LSTM) models compared with deep mutual learning: by 4.73% in mean absolute error (MAE) and 6.99% in mean squared error (MSE) for Informer, and by 11.54% in MAE and 18.15% in MSE for LSTM. We also demonstrate the importance of the memory of individual best local optima and provide a sensitivity analysis and visualizations of the error and the loss-descent process. Our method represents a new state of the art in group learning for time-series prediction.

Impact Statement: Deep mutual learning obtains a more robust and better-performing deep learning model by merging different models for interactive learning: every model learns from both the ground truth and the knowledge of the other models. Its limitation, however, is that models that converge to poor local optima still share their knowledge and may lead the others to worse performance. In this article, we propose a framework named mutual adaptation. It selects the prototype model with the least error with respect to the ground truth as the common teacher of the other models. In addition, we have each model learn from its own best output in its training history. In our experiments, across all datasets, mutual adaptation improves the performance of different models and outperforms deep mutual learning. We believe our method represents an advance in group deep learning.
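To make the described training scheme concrete, below is a minimal PyTorch-style sketch of one possible mutual-adaptation step. The function name `mutual_adaptation_step`, the per-batch selection of the prototype by training error, and the weights `alpha` and `beta` are illustrative assumptions rather than the paper's exact formulation; they only indicate how a common teacher and each model's best historical snapshot could enter the loss.

```python
import torch
import torch.nn.functional as F

def mutual_adaptation_step(models, optimizers, best_snapshots, x, y,
                           alpha=0.5, beta=0.5):
    """One hypothetical mutual-adaptation step (sketch, not the paper's code).

    models:         list of time-series models (e.g., an Informer and an LSTM)
    optimizers:     one optimizer per model
    best_snapshots: frozen copy of each model at its best error so far (or None)
    alpha, beta:    assumed weights for the prototype and best-snapshot terms
    """
    with torch.no_grad():
        preds = [m(x) for m in models]
        errors = [F.mse_loss(p, y).item() for p in preds]
        # Prototype: the model with the least error acts as the common teacher.
        proto_pred = preds[min(range(len(models)), key=lambda i: errors[i])]

    for i, (model, opt) in enumerate(zip(models, optimizers)):
        pred = model(x)
        loss = F.mse_loss(pred, y)                           # ground-truth term
        loss = loss + alpha * F.mse_loss(pred, proto_pred)   # learn from prototype
        if best_snapshots[i] is not None:
            with torch.no_grad():
                best_pred = best_snapshots[i](x)
            loss = loss + beta * F.mse_loss(pred, best_pred)  # learn from own best optimum
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this sketch, `best_snapshots` would be refreshed elsewhere in the training loop whenever a model reaches a new lowest validation error, so that each model keeps distilling from its own best local optimum in its history.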