Digital twins of urban buildings offer numerous benefits. However, a major difficulty in creating them from airborne LiDAR point clouds is accurately reconstructing building geometry in the presence of significant occlusions, point-density variations, and noise. To bridge the noise/sparsity/occlusion gap and generate high-fidelity 3D building models, we propose APC2Mesh, which integrates point completion into a 3D reconstruction pipeline, enabling the learning of dense, geometrically accurate representations of buildings. Specifically, we leverage the complete points generated from occluded ones as input to a linearized skip-attention-based deformation network for 3D mesh reconstruction. In experiments conducted on three different scenes, we demonstrate that: (1) compared to state-of-the-art methods such as NDF, Points2Poly, and Point2Mesh, among others, APC2Mesh ranks second in positional RMSE and first in directional RMSE, with error magnitudes of 0.0134 m and 0.1581, respectively, indicating its efficacy in handling airborne building points of diverse styles and complexities; and (2) combining point completion with typical deep-learning-based 3D point cloud reconstruction methods offers a direct and effective solution for reconstructing significantly occluded airborne building points. As such, this neural integration holds promise for creating digital twins of urban buildings with greater accuracy and fidelity. Our source code is available at https://github.com/geospatial-lab/APC2Mesh.