Recently, pseudo-LiDAR has gained increasing attention, since it offers the possibility of replacing expensive LiDAR with cameras in autonomous driving. However, due to the fixed field of view of the camera, the pseudo-LiDAR point cloud suffers from a limited spatial range. In this paper, we present a novel pseudo-LiDAR point cloud magnification algorithm, aiming to extrapolate beyond the narrow stereo baseline and further bridge the gap between LiDAR and cameras. To achieve this goal, we design a complete pipeline consisting of a hybrid view synthesis module, a stereo depth estimation module, an image-depth associated stitching module, and a magnified pseudo-LiDAR point cloud transformation module. Considering the artifacts present in the generated views, we propose a region-aware restoration approach to obtain more realistic synthesized results. To the best of our knowledge, this is the first work on pseudo-LiDAR point cloud magnification, which enables appealing and meaningful applications in 3D spatial perception systems equipped only with cameras. Experimental results on the KITTI benchmark demonstrate that our algorithm can effectively magnify the pseudo-LiDAR point cloud to a wider field of view, using only the two images captured by a stereo camera pair. (c) 2020 Elsevier B.V. All rights reserved.
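The final step of the pipeline described above, transforming an estimated depth map into a pseudo-LiDAR point cloud, is conventionally done by pinhole back-projection. The following is a minimal sketch of that standard transformation, not the paper's implementation; the function name and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, since the abstract does not specify them.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map of shape (H, W) into an (N, 3)
    point cloud in the camera frame (x right, y down, z forward),
    following the usual pinhole model: x = (u - cx) * z / fx, etc.
    Intrinsics here are placeholders, not values from the paper."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Keep only valid returns, mirroring how real LiDAR has no zero-range hits.
    return points[points[:, 2] > 0]

if __name__ == "__main__":
    # Toy 4x4 depth map with a constant depth of 10 m.
    depth = np.full((4, 4), 10.0)
    pts = depth_to_pseudo_lidar(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
    print(pts.shape)  # (16, 3); the pixel at the principal point maps to (0, 0, 10)
```

In a stereo setup such as the one the paper uses on KITTI, `depth` would come from the stereo depth estimation module (e.g. depth = fx * baseline / disparity), after which the stitched, magnified depth can be lifted to 3D with the same back-projection.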