In machine learning, Gaussian Processes (GPs) are widely used modelling tools for Bayesian non-parametric inference. With a suitable choice of kernel, GPs are universal function approximators. A drawback of GPs is that exact inference costs O(n^3) in the number of observations n, because it requires factorising the n x n kernel matrix. For example, training on a dataset of 20,000 observations (not even considered large nowadays) can take hours. This makes exact GPs unsuitable for Big Data. Several general sparse approximation schemes have been proposed for this problem. We have developed different approaches that approximate GPs with linear complexity, O(n).
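To make the cost argument concrete, here is a minimal sketch in NumPy contrasting exact GP regression, whose Cholesky factorisation of the n x n kernel matrix is the O(n^3) bottleneck, with a simple inducing-point approximation (a subset-of-regressors / Nystrom-style scheme) whose cost is O(n m^2), i.e. linear in n for a fixed number m of inducing points. The squared-exponential kernel, the specific approximation, and all parameter choices are illustrative assumptions, not the specific O(n) approaches referred to above.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: one common choice that makes
    # the GP a universal function approximator (assumed here).
    sqdist = (np.sum(X1**2, axis=1)[:, None]
              + np.sum(X2**2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_exact_mean(X, y, X_star, noise=1e-2):
    # Exact GP posterior mean: the Cholesky factorisation of the
    # n x n kernel matrix is the O(n^3) bottleneck.
    K = rbf_kernel(X, X) + noise * np.eye(X.shape[0])
    L = np.linalg.cholesky(K)                           # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(X_star, X) @ alpha

def gp_sparse_mean(X, y, X_star, Z, noise=1e-2, jitter=1e-6):
    # Subset-of-regressors approximation with m inducing inputs Z:
    # only m x m systems are solved, and the dominant cost is the
    # O(n m^2) product Kzx @ Kzx.T -- linear in n for fixed m.
    Kzz = rbf_kernel(Z, Z) + jitter * np.eye(Z.shape[0])
    Kzx = rbf_kernel(Z, X)                              # m x n
    A = noise * Kzz + Kzx @ Kzx.T                       # O(n m^2)
    return rbf_kernel(X_star, Z) @ np.linalg.solve(A, Kzx @ y)

# Toy comparison: n = 2,000 observations, m = 50 inducing inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2_000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2_000)
X_star = np.linspace(-3, 3, 5)[:, None]
Z = np.linspace(-3, 3, 50)[:, None]
print(gp_exact_mean(X, y, X_star))
print(gp_sparse_mean(X, y, X_star, Z))
```

At this toy scale both solves are fast, but the scaling differs: doubling n multiplies the exact solve's cost by roughly eight, while the sparse solve's cost only doubles as long as m stays fixed.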