Hi. I was working on my project and wanted to use compute_svd_entropy, but it was taking too much time.
I stumbled upon CuPy, a library that is a drop-in replacement for NumPy that runs on CUDA.
I modified compute_svd_entropy locally like this:
def compute_svd_entropy(data, tau=2, emb=10, n_jobs=1):
    ...
    import numpy as np
    if n_jobs == "cuda":
        import cupy as cp
        np = cp  # shadow numpy with cupy so every call below runs on the GPU
        data = cp.asarray(data)  # move the input array to GPU memory
    _, sv, _ = np.linalg.svd(_embed(data, d=emb, tau=tau, n_jobs=n_jobs))
    m = np.sum(sv, axis=-1)
    sv_norm = np.divide(sv, m[:, None])  # normalize singular values to sum to 1
    out = -np.sum(np.multiply(sv_norm, np.log2(sv_norm)), axis=-1)  # Shannon entropy
    if n_jobs == "cuda":
        out = cp.asnumpy(out)  # copy the result back to host memory as a numpy array
    return out
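For reference, a minimal call looks like this (the array shape is just illustrative, not my real data):

import numpy as np

data = np.random.randn(32, 5000)  # hypothetical chunk: 32 channels x 5000 samples
out_cpu = compute_svd_entropy(data)                 # original numpy path
out_gpu = compute_svd_entropy(data, n_jobs="cuda")  # cupy path, still returns a numpy array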
For my case it sped things up from 37s per chunk I was passing to 3-4s per chunk.
I can create a PR doing this for every function where it is applicable; would you be willing to merge something like that? I know that it introduces one more dependency (it can be optional), so I wanted to check first before submitting a PR. Thanks in advance.
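To keep the dependency optional, I imagine a small guarded import, something like this hypothetical helper (the name and error wording are mine, just a sketch):

def _get_array_module(n_jobs):
    # Return cupy when GPU execution is requested, otherwise numpy.
    # Fails with a clear message if cupy is not installed.
    if n_jobs == "cuda":
        try:
            import cupy as cp
        except ImportError:
            raise ImportError("n_jobs='cuda' requires the optional dependency cupy")
        return cp
    import numpy as np
    return np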
PS. n_jobs was motivated by mne, which uses n_jobs in that way.