Because large-scale multi-objective optimization problems (LSMOPs) involve a huge number of decision variables, traditional evolutionary algorithms suffer from low exploitation efficiency and high exploration costs when solving them. This paper therefore proposes an evolutionary strategy based on two-stage accelerated search optimizers (ATAES). In the first stage, a convergence optimizer is devised: a three-layer lightweight convolutional neural network is built, and the population is divided into two subsets, a diversity subset and a convergence subset, which serve as the input nodes and the expected output nodes of the network, respectively. By repeatedly backpropagating the gradient, the network is trained to map diversity individuals toward convergence individuals and thus produce promising candidate solutions. Once exploitation stagnation is detected in the first stage, the second stage is triggered, in which a diversity optimizer based on a differential evolution algorithm with opposition-based learning enlarges the exploration range of candidate solutions and thereby improves population diversity. Finally, experiments on the LSMOP and DTLZ benchmark suites with 100, 300, 500, and 1000 decision variables show that ATAES outperforms other state-of-the-art multi-objective evolutionary algorithms.
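The abstract does not give implementation details, so the following is only a minimal NumPy sketch of the two-stage idea under stated assumptions: a plain two-layer MLP stands in for the three-layer lightweight CNN, the population split, bounds, and objective evaluation are mocked, and all names and parameter values (X, Y, D, N, H, lr, F) are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, H = 30, 40, 16   # decision variables, subset size, hidden units (toy values)

# Stand-ins for the two population subsets described in the abstract:
# the diversity subset is fed to the network, the convergence subset is its target.
X = rng.uniform(0.0, 1.0, (N, D))                      # diversity subset (inputs)
Y = np.clip(X + rng.normal(0.0, 0.05, (N, D)), 0, 1)   # convergence subset (targets)

# Stage 1 (convergence optimizer, sketched): train a small surrogate network
# with gradient backpropagation to map diversity individuals toward convergence ones.
W1 = rng.normal(0, 0.1, (D, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, D)); b2 = np.zeros(D)
lr = 0.05
for _ in range(500):
    Z = np.tanh(X @ W1 + b1)        # hidden activations
    P = Z @ W2 + b2                 # predicted convergence-like individuals
    G = 2.0 * (P - Y) / N           # gradient of mean-squared error w.r.t. P
    gW2, gb2 = Z.T @ G, G.sum(0)
    GZ = (G @ W2.T) * (1.0 - Z**2)  # backprop through tanh
    gW1, gb1 = X.T @ GZ, GZ.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Offspring from stage 1: push diversity individuals through the trained mapping.
offspring = np.clip(np.tanh(X @ W1 + b1) @ W2 + b2, 0.0, 1.0)

# Stage 2 (diversity optimizer, sketched): DE/rand/1 mutation plus
# opposition-based learning to widen the exploration range.
lower, upper, F = 0.0, 1.0, 0.5
r1, r2, r3 = 1, 2, 3                # fixed for illustration; normally distinct random indices
mutant = np.clip(offspring[r1] + F * (offspring[r2] - offspring[r3]), lower, upper)
opposite = lower + upper - mutant   # opposite point of the mutant
# A real implementation would keep whichever of mutant/opposite is better
# under the problem's objectives (evaluation not shown here).
```

The sketch only mirrors the structure suggested by the abstract; the paper's actual network architecture, stagnation criterion, and selection scheme may differ.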