Achieving safe lane changing is a crucial function of autonomous vehicles, given the complexity and uncertainty of the interactions involved. Learning-based approaches and vehicle collaboration techniques can improve a vehicle's awareness of the dynamic environment, thereby strengthening its interactive capabilities. This paper therefore proposes a Multi-Agent Reinforcement Learning (MARL) approach that coordinates connected vehicles in reaching their respective lane-changing targets. Vehicle states, scene elements, potential risks, and intention information are abstracted into highly expressive vectorized inputs. On this basis, a lightweight parameter-sharing network framework is designed to learn safe and robust cooperative lane-changing policies. To address the challenges posed by multiple objects and multiple targets, a Prioritized Action Extrapolation (PAE) mechanism is employed to train the network. Through priority assignment and action extrapolation, the proposed MARL approach dynamically optimizes the decision sequence and enhances interaction in multi-vehicle scenarios, thereby improving the vehicles' intention-attainment rate. Simulation experiments in two-lane and three-lane scenarios verify the adaptability and performance of the proposed MARL method.