Gaussian process regression models are an appealing machine learning method, as they learn expressive nonlinear models from exemplar data with minimal parameter tuning and estimate both the mean and covariance of unseen points. However, the cubic growth of computational complexity with the number of samples has been a long-standing challenge. Training requires inverting an $N \times N$ kernel matrix at every iteration, whereas regression requires computing an $m \times N$ kernel matrix, where $N$ and $m$ are the numbers of training and test points, respectively. This work demonstrates how approximating the covariance kernel with its eigenvalues and eigenfunctions leads to an approximate Gaussian process with a significant reduction in training and regression complexity. Training now requires computing only an $N \times n$ eigenfunction matrix and an $n \times n$ inverse, where $n$ is a selected number of eigenvalues. Furthermore, regression now requires only an $m \times n$ matrix. Moreover, in a special case, the hyperparameter optimization is completely independent of the number of training samples. The proposed method can regress over multiple outputs, learn the correlations between them, and estimate their derivatives to any order. The computational complexity reduction, regression capabilities, multioutput correlation learning, and comparison to the state of the art are demonstrated in simulation examples. Finally, we show how the proposed approach can be used to model real human data.
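To make the complexity claim concrete, the following NumPy sketch illustrates one common eigenfunction-based reduced-rank construction: the kernel is approximated by a truncated Mercer-type expansion, so training reduces to an $n \times n$ linear solve and prediction to an $m \times n$ matrix product. The specific basis (Laplacian eigenfunctions on an interval), kernel (squared exponential), and all parameter values here are illustrative assumptions, not necessarily the construction proposed in this work.

```python
import numpy as np

# Illustrative reduced-rank GP regression via an eigenfunction expansion.
# Basis, kernel, and hyperparameters are assumptions chosen for this sketch.

def eigenfunctions(x, n, L):
    """Laplacian eigenfunctions on [-L, L]; returns a (len(x), n) matrix Phi."""
    j = np.arange(1, n + 1)
    return np.sin(np.pi * j * (x[:, None] + L) / (2 * L)) / np.sqrt(L)

def se_spectral_density(omega, sigma_f, ell):
    """Spectral density of the squared-exponential kernel (1D)."""
    return sigma_f**2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * omega)**2)

def fit_predict(x_train, y_train, x_test, n=20, L=5.0,
                sigma_f=1.0, ell=0.5, sigma_n=0.1):
    # Frequencies of the basis and the induced kernel eigenvalues
    sqrt_lam = np.pi * np.arange(1, n + 1) / (2 * L)
    kernel_eigvals = se_spectral_density(sqrt_lam, sigma_f, ell)    # length n

    Phi = eigenfunctions(x_train, n, L)        # N x n eigenfunction matrix
    Phi_star = eigenfunctions(x_test, n, L)    # m x n matrix for regression

    # Training: only an n x n system instead of an N x N kernel inverse
    A = Phi.T @ Phi + sigma_n**2 * np.diag(1.0 / kernel_eigvals)    # n x n
    A_inv = np.linalg.inv(A)
    w_mean = A_inv @ Phi.T @ y_train

    # Regression: approximate posterior mean and covariance at test points
    mean = Phi_star @ w_mean
    cov = sigma_n**2 * Phi_star @ A_inv @ Phi_star.T
    return mean, cov

# Toy usage: N = 500 training points, m = 100 test points, n = 20 eigenvalues
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, 500)
y_train = np.sin(2 * x_train) + 0.1 * rng.standard_normal(500)
x_test = np.linspace(-3, 3, 100)
mean, cov = fit_predict(x_train, y_train, x_test)
```

In this sketch the only operations whose cost grows with $N$ are the $N \times n$ basis evaluation and the $\Phi^\top \Phi$ product, in line with the complexity reduction described above; the cubic cost is confined to the $n \times n$ solve.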