GSoC_2016_project_large_gps
This project will refactor and extend Shogun's Gaussian Process (GP) implementation, using Google's TensorFlow and its automatic differentiation (autodiff) capabilities.
Mentors:
- Heiko (github: karlnapf, IRC: HeikoS)
- Wu (github: yorkerlin, IRC: yorkerlin)
- Emtiyaz Khan
- Vincent Adam
Difficulty: Medium to Difficult
You need to know:
- Gaussian Process basics (You should understand the GP Notebook)
- Variational approximation basics (You should understand at least the full GP part of the Notebook for Variational Inference)
- Linear Algebra (math & in C++)
- Shogun's parameter framework basics
- Optional: Autodiff basics
- Optional: Tensorflow basics
- Optional: GPFlow basics
Following a previous successful project on variational learning for big data, we attempt to bring Shogun's Gaussian Processes (GPs) to big-data land. From a high-level perspective, the goal is to implement established methodology for scaling GPs up to hundreds of thousands of data points.
- Variational inference for (full) GP using Tensorflow
- Variational inference for sparse GP using Tensorflow
- Stochastic variational inference for sparse GP using Tensorflow (the underlying bound is sketched after this list)
- Applications
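For orientation, the bound behind the stochastic variational milestone is the one from Hensman et al., Gaussian Processes for Big Data (listed under the resources below). A sketch of it, under the usual assumptions: inducing outputs u at M inducing inputs Z carry a free-form Gaussian posterior q(u) = N(m, S), and the ELBO decomposes over data points, so it can be estimated on mini-batches:

```latex
\log p(y) \;\ge\; \mathcal{L}
  = \sum_{i=1}^{n} \mathbb{E}_{q(f_i)}\bigl[\log p(y_i \mid f_i)\bigr]
  - \mathrm{KL}\bigl(q(u) \,\|\, p(u)\bigr),
\qquad
q(f_i) = \int p(f_i \mid u)\, q(u)\, \mathrm{d}u .
```

Both terms are differentiable wrt (m, S, Z) and the hyper-parameters, which is exactly where Tensorflow's autodiff pays off.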
- Exact inference for (full) GP regression using Tensorflow (entrance task; a sketch follows this list) (ExactInferenceMethod)
- Variational inference for (full) GP binary classification using Tensorflow (KLCovarianceInferenceMethod, KLCholeskyInferenceMethod, and KLApproxDiagonalInferenceMethod)
- Variational inference for sparse GP (regression & classification) using Tensorflow (FITCInferenceMethod and SparseVGInferenceMethod)
- Make beautiful demos and benchmarks
- Stochastic variational inference for sparse GP using Tensorflow (optional)
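To give a flavor of the entrance task: below is a minimal sketch (not Shogun code; it assumes TensorFlow's Python API, float64 tensors, and a squared-exponential kernel) of the exact GP regression negative log marginal likelihood, with hyper-parameter gradients coming from autodiff rather than the hand-derived expressions currently in ExactInferenceMethod.

```python
import numpy as np
import tensorflow as tf

def rbf(X1, X2, log_ell, log_sf2):
    # Squared-exponential kernel k(x, x') = sf2 * exp(-||x - x'||^2 / (2 ell^2))
    sq = (tf.reduce_sum(X1**2, 1)[:, None]
          - 2.0 * X1 @ tf.transpose(X2)
          + tf.reduce_sum(X2**2, 1)[None, :])
    return tf.exp(log_sf2) * tf.exp(-0.5 * sq / tf.exp(2.0 * log_ell))

def nlml(X, y, log_ell, log_sf2, log_sn2):
    # Negative log marginal likelihood of exact GP regression via a
    # Cholesky factor of K + sn2 * I (Rasmussen & Williams, ch. 2)
    n = X.shape[0]
    K = rbf(X, X, log_ell, log_sf2) + tf.exp(log_sn2) * tf.eye(n, dtype=tf.float64)
    L = tf.linalg.cholesky(K)
    alpha = tf.linalg.cholesky_solve(L, y)
    return (0.5 * tf.squeeze(tf.transpose(y) @ alpha)
            + tf.reduce_sum(tf.math.log(tf.linalg.diag_part(L)))
            + 0.5 * n * np.log(2.0 * np.pi))

# Toy data; hyper-parameters live on the log scale so they stay positive
X = tf.constant(np.random.randn(100, 1))
y = tf.sin(X) + 0.1 * tf.random.normal([100, 1], dtype=tf.float64)
params = [tf.Variable(0.0, dtype=tf.float64) for _ in range(3)]

with tf.GradientTape() as tape:
    loss = nlml(X, y, *params)
grads = tape.gradient(loss, params)  # autodiff: no hand-derived kernel gradients
```

With gradients coming from autodiff, the same marginal-likelihood code plugs into any of the gradient-based minimizers listed below.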
- base class for computing the gradient of the evidence lower bound (ELBO) wrt variational variables
- base class for computing the gradient of the ELBO wrt hyper-parameters in likelihoods, mean functions, and covariance/kernel functions
- (base) class for using external or built-in minimizers (LBFGSMinimizer and NLOPTMinimizer)
- (for full GP) classes for computing the gradient wrt variational variables using Tensorflow and existing hand-implemented code (see the sketch after this list)
- (for full GP) classes for computing the gradient wrt hyper-parameters using Tensorflow and existing hand-implemented code (tricky)
- Benchmarks and notebooks for demos
- base class for MC samplers
- classes for using existing MC samplers
- (for sparse GP) classes for computing the gradient wrt variational variables using Tensorflow and existing hand-implemented code
- (for sparse GP) classes for computing the gradient wrt hyper-parameters using Tensorflow and existing hand-implemented code (tricky)
- classes for HMC samplers from Stan (optional)
- base class for model selection (eg, Bayesian OPT) (optional)
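As a sketch of how the variational-variable gradients and the MC-sampler classes fit together (again not Shogun's API: it assumes a full-GP Gaussian posterior q(f) = N(m, LL^T), a Bernoulli-logit likelihood with labels in {-1, +1}, and TensorFlow's Python API), the reparameterization trick makes a Monte Carlo estimate of the expected log-likelihood differentiable wrt the variational variables:

```python
import tensorflow as tf

# Shapes: m, y are [n, 1] column vectors; L, K_chol are [n, n] lower-triangular

def gauss_kl(m, L, K_chol):
    # KL( N(m, L L^T) || N(0, K) ), with the prior covariance K = K_chol K_chol^T
    n = tf.cast(tf.shape(m)[0], tf.float64)
    A = tf.linalg.triangular_solve(K_chol, L, lower=True)       # K_chol^{-1} L
    Kinv_m = tf.linalg.cholesky_solve(K_chol, m)
    return 0.5 * (tf.reduce_sum(A**2)                           # tr(K^{-1} L L^T)
                  + tf.squeeze(tf.transpose(m) @ Kinv_m)        # m^T K^{-1} m
                  - n
                  + 2.0 * tf.reduce_sum(tf.math.log(tf.linalg.diag_part(K_chol)))
                  - 2.0 * tf.reduce_sum(tf.math.log(tf.linalg.diag_part(L))))

def elbo(m, L, K_chol, y, num_samples=8):
    # ELBO = E_q[log p(y | f)] - KL(q || p); the expectation is estimated by
    # Monte Carlo with the reparameterization f = m + L eps, eps ~ N(0, I),
    # so gradients wrt (m, L) flow through the samples
    eps = tf.random.normal([num_samples, m.shape[0]], dtype=tf.float64)
    f = tf.transpose(m) + eps @ tf.transpose(L)                 # [samples, n]
    loglik = tf.reduce_mean(tf.reduce_sum(
        tf.math.log_sigmoid(tf.transpose(y) * f), axis=1))      # logit likelihood
    return loglik - gauss_kl(m, L, K_chol)

# Gradients wrt the variational variables, ready for a minimizer:
#   with tf.GradientTape() as tape: loss = -elbo(m, L, K_chol, y)
#   dm, dL = tape.gradient(loss, [m, L])
```

The same pattern carries over to the sparse-GP classes, where the KL term is over the inducing outputs instead of the full latent vector.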
Alternatives for scaling up kernel machines, which are also useful for other Shogun methods:
- Incomplete (banded) Cholesky inference for GP binary classification using Tensorflow
- Random Fourier Features (see the sketch below)
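Random Fourier Features (Rahimi & Recht, 2007) approximate a shift-invariant kernel with an explicit, finite-dimensional feature map, so kernel machines can be trained in the primal. A minimal NumPy sketch for the Gaussian kernel (function and parameter names are illustrative):

```python
import numpy as np

def random_fourier_features(X, num_features, lengthscale, seed=0):
    # For the Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 ell^2)),
    # sample frequencies from its spectral density, W ~ N(0, ell^{-2} I);
    # then z(x) = sqrt(2/D) cos(W x + b) satisfies z(x)^T z(x') ~= k(x, x')
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / lengthscale, size=(num_features, X.shape[1]))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W.T + b)

X = np.random.randn(1000, 3)
Z = random_fourier_features(X, num_features=500, lengthscale=1.0)
K_approx = Z @ Z.T  # approximates the exact kernel matrix; Z needs only O(n D) storage
```

Training on Z then costs O(n D^2) rather than O(n^3), the same trade-off the sparse-GP milestones above exploit.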
Other:
- Deep GP
Our primary goal is to scale up GPs so that they can be applied to big-data problems. GPs are becoming increasingly popular for big data because they not only provide accurate predictions, but also tell us how confident we should be about those predictions (uncertainty quantification) and whether we have selected the right model (model selection). These issues are even more relevant in the era of big data, since the amount of noise grows with the amount of data. Recent work extends the use of GPs beyond regression and classification to a wide range of applications, from numerical optimization to recommender systems and even deep networks, making GPs a popular choice. The main bottleneck in all of these applications is scalability, and we want to provide easy-to-use, scalable code that further promotes the use of GPs in the machine learning community.
Useful resources:
- GPflow using Tensorflow
- Shogun's variational Gaussian inference for full GP
- Shogun's variational Gaussian inference for sparse GP
- Notebook about Variational inference for GP
- Approximations for Binary Gaussian Process Classification
- Concave Gaussian Variational Approximations for Inference in Large-Scale Bayesian Linear Models
- Variational Learning of Inducing Variables in Sparse Gaussian Processes
- Gaussian Processes for Big Data
- Scalable Variational Gaussian Process Classification
- Distributed Gaussian Processes
- Deep Gaussian Processes