The ability to perceive and understand 3-D space is crucial for autonomous vehicles to navigate their surroundings effectively and make informed decisions. However, deep learning on point clouds is still in its early stages owing to the unique challenges of processing such data with deep neural networks. One major challenge lies in accurately detecting partially occluded objects under various practical conditions. To address this problem, we propose the 3-D detector for Occluded Objects under Obstructed conditions (3ONet), a two-stage light detection and ranging (LiDAR)-based 3-D object detection framework. Leveraging the advantages of the point-voxel-based method, 3ONet efficiently encodes multiscale features, enabling the generation of high-quality 3-D proposals while preserving detailed object shape information. Specifically, we introduce a point reconstruction network module designed to recover the missing 3-D spatial structure of foreground points. In the first stage, 3ONet identifies regions containing foreground objects using a point segmentation network and combines them with the proposals to reconstruct the occluded object's 3-D point cloud geometry with an encoder-decoder approach. The refinement stage further improves performance by rescoring and adjusting box locations based on the enriched spatial shape information. We evaluate the proposed framework on the KITTI dataset and the Waymo Open Dataset, and the results demonstrate state-of-the-art performance in 3-D object detection.
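To make the encoder-decoder reconstruction idea concrete, the following is a minimal, illustrative NumPy sketch of shape completion: a per-point encoder lifts an occluded foreground point set to latent features, a permutation-invariant pooling produces a global shape code, and a decoder emits a denser completed point set. All layer sizes, names, and the random (untrained) weights are hypothetical; 3ONet's actual module is a learned network and is not specified at this level of detail in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ToyPointReconstructor:
    """Toy encoder-decoder for point-cloud shape completion (illustrative only)."""

    def __init__(self, latent_dim=64, out_points=128):
        self.out_points = out_points
        # Per-point MLP lifting xyz coordinates to latent features
        # (hypothetical size; random, untrained weights).
        self.w_enc = rng.standard_normal((3, latent_dim)) * 0.1
        # Decoder maps the pooled global shape code to out_points xyz values.
        self.w_dec = rng.standard_normal((latent_dim, out_points * 3)) * 0.1

    def __call__(self, partial_points):
        # partial_points: (N, 3) points observed on a partially occluded object.
        feats = relu(partial_points @ self.w_enc)   # (N, latent_dim) per-point features
        global_feat = feats.max(axis=0)             # max-pooling: permutation-invariant
        recon = (global_feat @ self.w_dec).reshape(self.out_points, 3)
        return recon                                # (out_points, 3) completed shape

partial = rng.standard_normal((40, 3))              # sparse points from an occluded object
completed = ToyPointReconstructor()(partial)
print(completed.shape)  # (128, 3)
```

The completed, denser point set is what the refinement stage would consume when rescoring and adjusting box locations.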