This study introduces a strategy inspired by cooperative behavior in nature to enhance information sharing among autonomous vehicles (AVs), advancing intelligent transportation systems. However, when multiple light detection and ranging (LiDAR)-equipped vehicles cooperate, the volume of generated point cloud data can hinder real-time environment perception. This research assumes real-time, lossless data transmission and accurate, reliable sharing of pose information between cooperating vehicles. Drawing on human-inspired principles and computer imaging techniques, a method is proposed that encodes the fused LiDAR point cloud into dynamic grids whose cell size depends on the inter-vehicle distance. Each grid cell corresponds to an image pixel, with smaller cells for dense point clouds and larger cells for sparse ones, so that each cell contains an approximately equal number of points. Additionally, a ground segmentation approach is developed that uses the density and elevation differences of adjacent grid cells to retain salient obstacle points. A grid-density-based adjacent clustering approach is then proposed, which effectively groups the connected grid cells containing obstacle points. Experiments using the Robot Operating System (ROS) on a standard computer with public data show that the perception processing time for six cooperative vehicles is only 43.217 ms, demonstrating the efficacy of our method in handling large volumes of LiDAR point cloud data. Comparative analysis against three alternative methods confirms the superior accuracy and recall of our clustering approach. These results underscore the robustness of our biologically inspired methodology for the design of cooperative perception, thereby promoting efficient and safe vehicle navigation.
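The abstract does not give implementation details, so the following is only a minimal sketch of the two grid-based ideas it summarizes: rasterizing a fused point cloud onto a grid whose cell size is chosen by the caller (e.g., larger when the cooperating vehicles are far apart and the cloud is sparse), and grouping occupied neighboring cells whose point density exceeds a threshold. The function names (`rasterize`, `cluster_adjacent_cells`), the linear binning rule, and the thresholds are assumptions for illustration, not the authors' actual parameters or code.

```python
"""Illustrative sketch (not the authors' implementation): grid binning of a
fused 2D point cloud projection plus density-based adjacent clustering."""
from collections import defaultdict, deque


def rasterize(points_xy, cell_size):
    """Bin (x, y) points into square cells of the given size.
    cell_size would be selected per fused cloud, e.g. increased when the
    contributing vehicles are far apart (assumed rule, not from the paper)."""
    grid = defaultdict(list)
    for x, y in points_xy:
        grid[(int(x // cell_size), int(y // cell_size))].append((x, y))
    return grid


def cluster_adjacent_cells(grid, min_points=3):
    """Density-based adjacent clustering: flood-fill over 8-connected
    occupied cells that contain at least min_points points."""
    dense = {key for key, pts in grid.items() if len(pts) >= min_points}
    seen, clusters = set(), []
    for start in dense:
        if start in seen:
            continue
        queue, cluster = deque([start]), []
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            cluster.append((cx, cy))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    neighbor = (cx + dx, cy + dy)
                    if neighbor in dense and neighbor not in seen:
                        seen.add(neighbor)
                        queue.append(neighbor)
        clusters.append(cluster)
    return clusters


if __name__ == "__main__":
    # Two small synthetic obstacles; expect two separate cell clusters.
    pts = [(0.1, 0.1), (0.2, 0.3), (0.3, 0.2),
           (5.0, 5.1), (5.1, 5.0), (5.2, 5.2)]
    grid = rasterize(pts, cell_size=0.5)
    print(cluster_adjacent_cells(grid, min_points=3))
```

In this sketch the connected-component pass over occupied cells stands in for the paper's grid-density-based adjacent clustering; the ground segmentation step (comparing density and elevation differences of adjacent cells) is omitted for brevity.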