Most existing many-objective evolutionary algorithms (MaOEAs) handle convergence and diversity interdependently during the evolutionary process. In such a design, degraded performance in one deteriorates the other, and only solutions that achieve both can improve the performance of MaOEAs. Unfortunately, it is not easy to consistently maintain a population of solutions with both good convergence and good diversity. In this paper, an MaOEA based on two independent stages is proposed for effectively solving many-objective optimization problems (MaOPs), in which convergence and diversity are addressed in two independent and sequential stages. To this end, we first propose a nondominated dynamic weight aggregation method based on a genetic algorithm, which is capable of finding Pareto-optimal solutions for MaOPs with concave, convex, linear, and even mixed Pareto-front shapes; these solutions are then employed to learn the Pareto-optimal subspace, which addresses convergence. Afterward, diversity is addressed by solving a set of single-objective optimization problems defined by reference lines within the learned Pareto-optimal subspace. To evaluate the performance of the proposed algorithm, a series of experiments is conducted against six state-of-the-art MaOEAs on benchmark test problems. The results show that the proposed algorithm significantly outperforms its peer competitors. In addition, the proposed algorithm can focus directly on a chosen part of the objective space if the preference area is known beforehand. Furthermore, the proposed algorithm can also be used to effectively find the nadir points.
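
To make the first stage more concrete, the sketch below illustrates the basic idea behind dynamic weight aggregation in its simplest form, not the proposed nondominated dynamic weight aggregation method itself: a real-coded genetic algorithm minimizes a weighted-sum scalarization whose weights change over the generations, and every nondominated solution encountered along the way is collected in an archive. The test problem, population size, weight schedule, and variation operators are illustrative assumptions.

```python
# Minimal sketch of dynamic weight aggregation with a nondominated archive.
# Assumptions (not from the paper): a two-objective test problem, a sinusoidal
# weight schedule, tournament selection, blend crossover, Gaussian mutation.
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Assumed bi-objective test problem: f1 = x^2, f2 = (x - 2)^2."""
    return np.array([x[0] ** 2, (x[0] - 2.0) ** 2])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def update_archive(archive, f):
    """Keep only mutually nondominated objective vectors."""
    if any(dominates(g, f) for g in archive):
        return archive
    return [g for g in archive if not dominates(f, g)] + [f]

def dynamic_weight(gen, period=50):
    """Weight on f1 oscillates in [0, 1] so the scalarization sweeps the front."""
    w1 = 0.5 * (1.0 + np.sin(2.0 * np.pi * gen / period))
    return np.array([w1, 1.0 - w1])

def run_ga(pop_size=40, n_gen=200, bounds=(-5.0, 5.0)):
    pop = rng.uniform(bounds[0], bounds[1], size=(pop_size, 1))
    archive = []
    for gen in range(n_gen):
        w = dynamic_weight(gen)
        fits = np.array([objectives(x) for x in pop])
        for f in fits:
            archive = update_archive(archive, f)
        scalar = fits @ w  # weighted-sum aggregation with the current weights
        # Binary tournament selection on the scalarized fitness.
        parents = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)
            parents.append(pop[i] if scalar[i] < scalar[j] else pop[j])
        parents = np.array(parents)
        # Blend crossover plus Gaussian mutation (illustrative choices).
        children = 0.5 * (parents + parents[::-1]) + rng.normal(0.0, 0.1, parents.shape)
        pop = np.clip(children, bounds[0], bounds[1])
    return archive

if __name__ == "__main__":
    front = np.array(sorted(run_ga(), key=lambda f: f[0]))
    print(f"{len(front)} nondominated points, f1 range "
          f"[{front[0, 0]:.3f}, {front[-1, 0]:.3f}]")
```

The archive is essential in such a scheme because a plain weighted sum can only converge to supported (convex-front) solutions; collecting the nondominated points visited while the weights sweep is what yields a sample of the front from which a Pareto-optimal subspace could then be learned, as the abstract describes for the proposed method.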