The computational complexity of graph neural networks (GNNs) is a significant obstacle to their widespread adoption. As the input graph grows, the number of parameters in GNN models grows rapidly, driving up training and inference times. Existing techniques for reducing GNN complexity, such as train-then-prune methods and sparse training, often struggle to balance model accuracy against efficiency. In this paper, we address this challenge by proposing a novel approach that sparsifies GNNs using sparsity regularization and compressive sensing. By mapping GNN model parameters onto a graph and applying sparsity regularization, we drive many parameter values toward zero. Leveraging compressive sensing with Bayesian learning, we then identify the critical parameters to retain, reducing computational cost without sacrificing model accuracy. We evaluate our method on real-world graph datasets and compare it with state-of-the-art techniques. Experimental results show that our approach achieves higher accuracy while substantially increasing training sparsity and reducing computational requirements, thereby mitigating the impact of large graph sizes on training and inference times. This work highlights the potential of compressive sensing for unlocking efficiency in graph-based learning tasks.
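
As a rough illustration of the sparsity-regularization idea summarized above, the sketch below trains a small two-layer GCN in PyTorch with an L1 penalty on its weights and then zeroes all but the largest-magnitude entries. This is a minimal sketch under stated assumptions, not the paper's implementation: the `GCNLayer` class, the `train_sparse_gcn` function, and all hyperparameters are hypothetical, and the Bayesian compressive-sensing selection of critical parameters described in the abstract is replaced here by plain magnitude thresholding.

```python
# Minimal sketch: L1-regularized GCN training followed by magnitude-based sparsification.
# Illustrative only; the abstract's compressive-sensing / Bayesian selection step is
# approximated here by keeping the largest-magnitude weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """Plain dense GCN layer: H' = A_hat @ H @ W, where A_hat is a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, a_hat, h):
        return a_hat @ h @ self.weight

def train_sparse_gcn(a_hat, feats, labels, hidden=16, l1_coef=1e-3, epochs=200, keep_ratio=0.2):
    n_classes = int(labels.max()) + 1
    layer1 = GCNLayer(feats.shape[1], hidden)
    layer2 = GCNLayer(hidden, n_classes)
    params = list(layer1.parameters()) + list(layer2.parameters())
    opt = torch.optim.Adam(params, lr=1e-2)

    for _ in range(epochs):
        opt.zero_grad()
        logits = layer2(a_hat, F.relu(layer1(a_hat, feats)))
        # Cross-entropy loss plus an L1 penalty that pushes many weights toward zero.
        l1 = sum(p.abs().sum() for p in params)
        loss = F.cross_entropy(logits, labels) + l1_coef * l1
        loss.backward()
        opt.step()

    # Keep only the largest-magnitude weights in each parameter tensor (a stand-in
    # for the abstract's Bayesian compressive-sensing identification of critical
    # parameters), zeroing the rest.
    with torch.no_grad():
        for p in params:
            k = max(1, int(keep_ratio * p.numel()))
            thresh = p.abs().flatten().topk(k).values.min()
            p.mul_((p.abs() >= thresh).float())
    return layer1, layer2

# Example usage on a tiny synthetic graph (illustrative only):
# n = 10
# a_hat = torch.eye(n)                      # trivially normalized adjacency
# feats = torch.randn(n, 5)
# labels = torch.randint(0, 3, (n,))
# layer1, layer2 = train_sparse_gcn(a_hat, feats, labels)
```

In this sketch the L1 coefficient and the keep ratio jointly control how aggressively the model is sparsified; the paper's approach instead lets the compressive-sensing step decide which parameters are critical rather than relying on a fixed ratio.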