Before executing many standard graph signal processing (GSP) modules, such as compression and restoration, learning a graph that encodes pairwise (dis)similarities in data is an important precursor. In data-starved scenarios, to reduce parameterization, previous graph learning algorithms make assumptions in the nodal domain on i) graph connectivity (e.g., edge sparsity), and/or ii) edge weights (e.g., positive edges only). In this paper, given an empirical covariance matrix $\bar{C}$ estimated from sparse data, we instead consider a spectral-domain assumption on the graph Laplacian matrix $L$: the first $K$ eigenvectors (called "core" eigenvectors) $\{u_k\}$ of $L$ are pre-selected, e.g., based on domain-specific knowledge, and only the remaining eigenvectors are learned and parameterized. We first prove that, inside a Hilbert space of real symmetric matrices, the subspace $\mathcal{H}_u^+$ of positive semi-definite (PSD) matrices sharing a common set of $K$ core eigenvectors $\{u_k\}$ is a convex cone. Inspired by the Gram-Schmidt procedure, we then construct an efficient operator to project a given positive definite (PD) matrix onto $\mathcal{H}_u^+$. Finally, we design a hybrid graphical lasso/projection algorithm to compute a locally optimal inverse Laplacian $L^{-1} \in \mathcal{H}_u^+$ given $\bar{C}$. We apply our graph learning algorithm in two practical settings: parliamentary voting interpolation and predictive transform coding in image compression. Experiments show that our algorithm outperformed existing graph learning schemes in data-starved scenarios, for both synthetic data and these two settings.
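The central spectral-domain constraint, projecting a candidate matrix onto the cone $\mathcal{H}_u^+$ of PSD matrices that share $K$ pre-selected core eigenvectors $\{u_k\}$, can be illustrated with a minimal NumPy sketch. This is not the paper's Gram-Schmidt-inspired operator; it is a simplified Frobenius-style projection (the function name `project_to_core_eigvecs` and all implementation details are hypothetical) that keeps each core direction's Rayleigh quotient and diagonalizes the remainder inside the orthogonal complement.

```python
import numpy as np

def project_to_core_eigvecs(M, U_core):
    """Illustrative sketch: map a symmetric PD matrix M to a PSD matrix that
    admits the columns of U_core (orthonormal, n x K) as eigenvectors.
    A simplified stand-in, not the paper's efficient projection operator."""
    n, K = U_core.shape
    # Eigenvalues assigned to the pre-selected "core" directions:
    # Rayleigh quotients u_k^T M u_k, clipped so the result stays PSD.
    core_vals = np.clip(np.diag(U_core.T @ M @ U_core), 0.0, None)
    # Orthonormal basis V of the orthogonal complement of span{u_1,...,u_K}:
    # singular vectors of the projector (I - U U^T) with singular value ~ 1.
    S, s, _ = np.linalg.svd(np.eye(n) - U_core @ U_core.T)
    V = S[:, s > 0.5]                          # n x (n - K)
    # Restrict M to the complement, eigendecompose there, clip to PSD.
    w, W = np.linalg.eigh(V.T @ M @ V)
    w = np.clip(w, 0.0, None)
    # Reassemble: the core vectors are exact eigenvectors of the output;
    # the remaining eigenvectors live freely inside the complement subspace.
    return (U_core * core_vals) @ U_core.T + V @ W @ np.diag(w) @ W.T @ V.T

# Toy usage (hypothetical sizes): n = 5 nodes, K = 2 core eigenvectors.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M = A @ A.T + np.eye(5)                                 # a PD matrix
U_core, _ = np.linalg.qr(rng.standard_normal((5, 2)))   # orthonormal core set
H = project_to_core_eigvecs(M, U_core)
# Each core vector is (numerically) an eigenvector of the projected matrix.
print(np.allclose(H @ U_core, U_core @ np.diag(np.diag(U_core.T @ H @ U_core))))
```

In this sketch the cross terms between the core subspace and its complement are dropped, which is what forces each $u_k$ to be an eigenvector of the output; the paper's hybrid graphical lasso/projection algorithm would alternate such a projection step with a graphical-lasso-style fit to $\bar{C}$.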