
GSoC_2016_project_large_gps


Large-Scale Gaussian Processes, TensorFlow, and autodiff

Note: This project description is likely to be updated soon, in favour of a refactoring of Shogun's GPs using Google TensorFlow and its autodiff capabilities.

Mentors

Difficulty & Requirements

Medium to Difficult. You need to know:

  • Gaussian Process basics
  • Variational approximation basics (You should understand at least the full GP part of this Notebook)
  • Linear Algebra
  • Linear Algebra in C++

Description

Following last year's successful project on variational learning for Big Data, we attempt to bring Shogun's Gaussian Processes (GPs) to Big Data land. From a high-level perspective, the goal is to implement established methodology for scaling GPs up to hundreds of thousands of points. The focus is on regression and classification. We also focus on big-data applications of GPs, in particular recommendation systems and learning the preferences of people in a social network.

Details

  • Variational inference for (full) GP using TensorFlow (a minimal autodiff sketch follows this list)
  • Variational inference for sparse GP using TensorFlow
  • Stochastic variational inference for sparse GP using TensorFlow
  • Applications
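
The TensorFlow items above share one idea: write the GP objective as a computational graph and let autodiff produce the gradients that would otherwise be derived by hand. Below is a minimal, hypothetical sketch of that workflow, not Shogun or project code; it uses the modern TensorFlow eager API for brevity and takes the simplest tractable objective, the exact log marginal likelihood of full-GP regression, as a stand-in. The variational objectives in the list would be differentiated the same way.

```python
import math
import tensorflow as tf

def rbf_kernel(x, log_lengthscale, log_variance):
    # Squared-exponential kernel; hyper-parameters live on the log scale
    # so unconstrained gradient steps keep them positive.
    sq_dist = tf.reduce_sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return tf.exp(log_variance) * tf.exp(-0.5 * sq_dist / tf.exp(2.0 * log_lengthscale))

def neg_log_marginal_likelihood(x, y, log_lengthscale, log_variance, log_noise):
    # -log p(y) for GP regression, y ~ N(0, K + sigma_n^2 I).
    n = tf.shape(x)[0]
    K = rbf_kernel(x, log_lengthscale, log_variance) + tf.exp(2.0 * log_noise) * tf.eye(n)
    L = tf.linalg.cholesky(K)
    alpha = tf.linalg.cholesky_solve(L, y[:, None])               # K^{-1} y
    return (0.5 * tf.reduce_sum(y[:, None] * alpha)               # 0.5 y^T K^{-1} y
            + tf.reduce_sum(tf.math.log(tf.linalg.diag_part(L)))  # 0.5 log|K|
            + 0.5 * tf.cast(n, tf.float32) * math.log(2.0 * math.pi))

# Autodiff replaces hand-derived gradients wrt all hyper-parameters:
x = tf.random.normal([100, 1])
y = tf.sin(3.0 * x[:, 0]) + 0.1 * tf.random.normal([100])
params = [tf.Variable(0.0, name=name)
          for name in ("log_lengthscale", "log_variance", "log_noise")]
with tf.GradientTape() as tape:
    nll = neg_log_marginal_likelihood(x, y, *params)
grads = tape.gradient(nll, params)
```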

Waypoints and initial work

Refactoring existing framework

Variational Gaussian inference (Suggested Roadmap)

  • base class for computing the gradient of the Evidence Lower Bound (ELBO) wrt variational variables (see the sketch after this list)
  • base class for computing the gradient of the ELBO wrt hyper-parameters in likelihoods, mean functions, and covariance/kernel functions
  • (base) class for using external or built-in minimizers (LBFGSMinimizer and NLOPTMinimizer)
  • (for full GP) classes for computing the gradient wrt variational variables using TensorFlow and existing hand-implemented code
  • (for full GP) classes for computing the gradient wrt hyper-parameters using TensorFlow and existing hand-implemented code (tricky)
  • Benchmarks and notebooks for demos
  • base class for MC samplers
  • classes for using existing MC samplers
  • (for sparse GP) classes for computing the gradient wrt variational variables using TensorFlow and existing hand-implemented code
  • (for sparse GP) classes for computing the gradient wrt hyper-parameters using TensorFlow and existing hand-implemented code (tricky)
  • classes for HMC samplers from Stan (optional)
  • base class for model selection (e.g., Bayesian optimization) (optional)
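
As a concrete reference point for the first roadmap items, here is a hedged, self-contained sketch (illustrative names and shapes, not Shogun's API) of a variational-Gaussian ELBO for binary GP classification: q(f) = N(m, diag(v)) against the prior N(0, K), a Bernoulli likelihood handled by reparameterized Monte Carlo samples (cf. the MC-sampler items above), and the KL term computed analytically. Autodiff then gives the gradients wrt the variational variables, and, through K, the hyper-parameters.

```python
import tensorflow as tf

def elbo(y, K, m, log_v, num_samples=8):
    """ELBO = E_q[log p(y|f)] - KL(q(f) || p(f)) for q(f) = N(m, diag(exp(log_v)))."""
    n = tf.shape(K)[0]
    v = tf.exp(log_v)
    L = tf.linalg.cholesky(K)

    # Reparameterization trick: f = m + sqrt(v) * eps keeps the Monte Carlo
    # estimate of the expected log-likelihood differentiable in m and log_v.
    eps = tf.random.normal([num_samples, n])
    f = m[None, :] + tf.sqrt(v)[None, :] * eps
    log_lik = -tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.tile(y[None, :], [num_samples, 1]), logits=f)
    expected_log_lik = tf.reduce_mean(tf.reduce_sum(log_lik, axis=1))

    # Analytic KL(N(m, diag(v)) || N(0, K)) via the Cholesky factor of K.
    Kinv_m = tf.linalg.cholesky_solve(L, m[:, None])
    trace_term = tf.reduce_sum(tf.linalg.diag_part(
        tf.linalg.cholesky_solve(L, tf.linalg.diag(v))))       # tr(K^{-1} S)
    kl = 0.5 * (trace_term
                + tf.reduce_sum(m[:, None] * Kinv_m)           # m^T K^{-1} m
                - tf.cast(n, tf.float32)
                + 2.0 * tf.reduce_sum(tf.math.log(tf.linalg.diag_part(L)))  # log|K|
                - tf.reduce_sum(log_v))                        # -log|S|
    return expected_log_lik - kl

# Gradients wrt the variational variables come from autodiff:
n = 30
K = tf.eye(n) + 0.1 * tf.ones([n, n])                  # stand-in kernel matrix
y = tf.cast(tf.random.uniform([n]) > 0.5, tf.float32)  # binary labels in {0, 1}
m, log_v = tf.Variable(tf.zeros(n)), tf.Variable(tf.zeros(n))
with tf.GradientTape() as tape:
    loss = -elbo(y, K, m, log_v)                       # maximize ELBO
grads = tape.gradient(loss, [m, log_v])
```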

MCMC inference (optional)

Optional

Alternative approaches to scaling up kernel machines, also useful for other Shogun methods:

  • Incomplete (banded) Cholesky inference for GP binary classification using TensorFlow
  • Random Fourier Features (a sketch follows this list)
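
Random Fourier Features (Rahimi & Recht, 2007) approximate a shift-invariant kernel by an explicit finite-dimensional random feature map, so that kernel machines reduce to linear models whose cost scales with the number of features rather than the number of data points. A minimal NumPy sketch for the RBF kernel (illustrative names, not Shogun code):

```python
import numpy as np

def rff_features(X, num_features, lengthscale, seed=0):
    # Sample from the spectral density of the RBF kernel
    # k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)):
    # W ~ N(0, I / lengthscale^2), b ~ Uniform[0, 2*pi].
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    # z(x) = sqrt(2/D) cos(x W + b), so that z(x) . z(x') ~= k(x, x').
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Sanity check: feature inner products approximate the exact kernel matrix.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, num_features=5000, lengthscale=1.0)
approx = Z @ Z.T
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.max(np.abs(approx - exact)))  # small for large num_features
```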

Other:

  • Deep GP

Why this is cool

Our primary goal is to scale up GPs so that they can be applied to such big-data problems. GPs are becoming increasingly popular for big data because they not only provide accurate predictions but also tell us how confident we should be about those predictions (uncertainty quantification) and whether we have selected the right model (model selection). These issues are even more relevant in the era of big data, since the amount of noise grows with the amount of data. Recent work extends the use of GPs beyond regression and classification to a wide range of applications, from numerical optimization to recommendation systems and even deep networks, making GPs a popular choice. The main bottleneck in these applications is scalability, and we want to provide easy-to-use, scalable code that makes GPs even more useful to the machine learning community.

Useful resources
