README_linalg_refactor

Pan Deng / Zora edited this page Aug 10, 2016 · 13 revisions

Motivation

Linear algebra operations form the backbone of most computation components in any Machine Learning library. However, writing all of the required linear algebra operations from scratch is redundant and undesirable, especially when excellent open source alternatives exist. In Shogun, we prefer

  • Eigen3 for its speed and simplicity at the usage level, and
  • ViennaCL version 1.5 for GPU-powered linear algebra operations.

For Shogun maintainers, however, using different external libraries for different operations can be painful.

  • For example, consider part of an algorithm originally written using the Eigen3 API. A Shogun user may wish to use ViennaCL for that algorithm instead, hoping for better performance on a GPU-powered platform. There is no way of doing that without the developers rewriting the algorithm using ViennaCL, which leads to duplication of code and effort.
  • Also, there is no way to do a performance comparison for the developers while using different external linear algebra libraries for the same algorithm in Shogun code.
  • It is also somewhat frustrating for a new developer who has to invest a significant amount of time and effort to learn each of these external APIs just to add a new algorithm to Shogun.

Features of internal linear algebra library

Shogun's internal linear algebra library (referred to as linalg hereafter) is a work-in-progress attempt to overcome these issues. We designed linalg as a modular, header-only internal library in order to

  • provide a uniform API for Shogun developers to choose any supported backend without having to worry about the syntactical differences in the external libraries' operations,
  • set the backend for each operation at compile time (reducing runtime overhead); linalg is therefore intended for internal use by Shogun developers,
  • allow Shogun developers to add new linear algebra backend plug-ins easily.

For Shogun developers

Setting linalg backend

Developers can switch between linalg backends via the global variable sg_linalg.

  • Shogun uses the Eigen3 backend as the default linear algebra backend.
  • Enabling a GPU backend allows data transfer between CPU and GPU, as well as operations on the GPU. The ViennaCL (GPU) backend can be enabled by assigning a new ViennaCL backend instance to sg_linalg, or disabled again by passing nullptr:
   sg_linalg->set_gpu_backend(new LinalgBackendViennaCL());
   sg_linalg->set_gpu_backend(nullptr);
  • Though backends can be extended, only one CPU backend and one GPU backend may be registered at a time.

Using linalg operations

The linalg library works with both SGVector and SGMatrix. Operations are called as:

#include <shogun/mathematics/linalg/LinalgNamespace.h>
shogun::linalg::operation(args);
  • To use linalg operations on GPU data (vectors or matrices), transfer data to and from the GPU with the to_gpu and from_gpu methods. Both return their results as new instances.

    auto result = linalg::to_gpu(arg);
    auto result = linalg::from_gpu(arg_on_gpu);
    
  • The to_gpu method returns the original CPU vector or matrix if no GPU backend is available. The from_gpu method returns the input argument unchanged if it is already on the CPU, and raises an error if the data is on the GPU but no GPU backend is available anymore.

  • The location of the data can be checked with data.on_gpu(): true means the data is on the GPU, false means it is on the CPU.

  • Operations are carried out on the GPU only if the data passed to them is on the GPU and a GPU backend is registered (i.e. sg_linalg->get_gpu_backend() returns a non-null backend). If the data is on the CPU, the operation is carried out on the CPU.

  • linalg reports an error if the data is on the GPU but no GPU backend is available anymore. Errors also occur when an operation requires multiple inputs that are not on the same backend.

  • A warning is generated if an operation is not available on a specific backend.
