LiDAR has become one of the primary sensors for 3D object detection in autonomous driving. However, due to the inherent sparsity of point clouds, objects often exhibit structural incompleteness in occluded and distant regions, which hampers accurate perception in 3D space. To tackle this challenge, we propose the Semantic and Structure Completion Network (S²CNet) for 3D object detection. Concretely, we design the Semantic Completion (SeC) module to generate semantic features in Bird's-Eye-View (BEV) space, utilizing a teacher-student paradigm. Notably, we adopt a coarse-to-fine guidance strategy that encourages the student network to generate semantic features specifically within foreground regions, ensuring that the student focuses on reconstructing foreground object features. In addition, we introduce an attention-based module to adaptively fuse the generated features with the raw features. The SeC module faces a particular limitation when dealing with objects containing only a few points: in such cases, the network is prone to generating low-quality proposals with inaccurate localization. Complementary to the SeC module, we therefore introduce the Structure Completion (StC) module, which obtains a group of structural proposals by traversing most plausible object structures in a structure-guided manner, so that at least one proposal with a structure similar to the ground truth is guaranteed. Extensive experiments on the KITTI and nuScenes benchmarks demonstrate the effectiveness of our method, especially for hard-setting objects with few points.
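
To make the coarse-to-fine guidance concrete, the sketch below shows one plausible way to restrict a teacher-student feature distillation loss to foreground regions of the BEV map. This is a minimal illustration, not the paper's implementation: the function name `foreground_distill_loss`, the tensor shapes, and the use of a soft foreground mask (e.g., a coarse mask from dilated ground-truth boxes, refined into a finer heatmap as training proceeds) are all our own assumptions.

```python
import torch

def foreground_distill_loss(student_feat, teacher_feat, fg_mask, eps=1e-6):
    """Hypothetical foreground-masked distillation loss (not from the paper).

    student_feat, teacher_feat: (B, C, H, W) BEV feature maps.
    fg_mask: (B, 1, H, W) soft foreground weights in [0, 1]; a coarse mask
    can be swapped for a finer one to realize coarse-to-fine guidance.
    """
    # Per-location squared feature error, summed over channels: (B, 1, H, W).
    diff = (student_feat - teacher_feat).pow(2).sum(dim=1, keepdim=True)
    # Average the error only over (soft) foreground locations.
    return (diff * fg_mask).sum() / (fg_mask.sum() + eps)
```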
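
The attention-based fusion of generated and raw BEV features could take a form like the gating module below. The class name `AttentiveFusion` and the choice of a single-channel spatial gate are hypothetical design choices for illustration; the paper's module may weight features differently.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Hypothetical gate that adaptively blends generated and raw BEV features."""

    def __init__(self, channels):
        super().__init__()
        # Predict a per-location blending weight from the concatenated features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, raw_feat, gen_feat):
        # w close to 1 trusts the generated features; close to 0 keeps raw ones.
        w = self.gate(torch.cat([raw_feat, gen_feat], dim=1))  # (B, 1, H, W)
        return w * gen_feat + (1.0 - w) * raw_feat
```

A per-location scalar gate keeps the fusion cheap while still letting the network fall back to raw features wherever the generated ones are unreliable.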
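
For the StC module, the abstract only states that structural proposals are enumerated in a structure-guided manner. A minimal sketch of one such enumeration, assuming (purely as our own reading) that a class-level template box is slid so that the observed point cluster occupies different relative positions inside it:

```python
import numpy as np

def structural_proposals(points, template_lwh, num_offsets=5):
    """Hypothetical structure-guided proposal enumeration (not from the paper).

    points: (N, 3) observed, possibly truncated, object points.
    template_lwh: (3,) class-level mean box size (length, width, height).
    Returns (num_offsets**2, 6) candidate boxes as (cx, cy, cz, l, w, h),
    placed so the visible cluster sits at different positions inside the
    box; for a partially observed object, at least one candidate should
    roughly match the full ground-truth extent.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    l, w, h = template_lwh
    proposals = []
    # Slide the template along x and y: fx (fy) = 0 aligns the visible
    # extent with the box's near edge, 1 with its far edge.
    for fx in np.linspace(0.0, 1.0, num_offsets):
        for fy in np.linspace(0.0, 1.0, num_offsets):
            cx = lo[0] + fx * (hi[0] - lo[0]) + (0.5 - fx) * l
            cy = lo[1] + fy * (hi[1] - lo[1]) + (0.5 - fy) * w
            cz = lo[2] + h / 2.0  # assume the box rests on the lowest point
            proposals.append([cx, cy, cz, l, w, h])
    return np.asarray(proposals)
```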