Hey, I’ve been diving into pruning and sparse inference techniques (like Wanda, SparseGPT, and DS-NOT) and sparsity patterns (N:M, block sparsity), and I noticed Burn doesn’t yet have a dedicated place for these tools.
What do you think about adding a burn-sparse crate to cover things like pruning methods, sparse weight formats (CSR/BSR/N:M), sparse tensors, and potentially sparse kernels down the line?
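To make the N:M / CSR part a bit more concrete, here's a rough standalone sketch in plain Rust of the kind of utilities such a crate could collect: 2:4 magnitude pruning on a flat weight buffer plus a minimal CSR layout. None of this uses existing Burn APIs, and the names (`prune_n_m`, `CsrMatrix`, `to_csr`) are just placeholders for discussion, not a proposed design.

```rust
// Hypothetical sketch only; not tied to any current Burn type or trait.

/// Zero out all but the `n` largest-magnitude values in every group of `m` weights
/// (e.g. n = 2, m = 4 for the 2:4 pattern used by sparse tensor cores).
fn prune_n_m(weights: &mut [f32], n: usize, m: usize) {
    for group in weights.chunks_mut(m) {
        // Indices of the group sorted by descending magnitude.
        let mut order: Vec<usize> = (0..group.len()).collect();
        order.sort_by(|&a, &b| group[b].abs().partial_cmp(&group[a].abs()).unwrap());
        // Keep the top-n entries, zero the rest.
        for &idx in order.iter().skip(n) {
            group[idx] = 0.0;
        }
    }
}

/// Minimal CSR (compressed sparse row) representation of a pruned 2-D weight matrix.
struct CsrMatrix {
    row_ptr: Vec<usize>, // len = rows + 1, offsets into `col_idx` / `values`
    col_idx: Vec<usize>, // column index of each nonzero
    values: Vec<f32>,    // nonzero values, row-major order
}

/// Convert a dense row-major matrix into CSR, dropping explicit zeros.
fn to_csr(dense: &[f32], rows: usize, cols: usize) -> CsrMatrix {
    let mut row_ptr = vec![0];
    let mut col_idx = Vec::new();
    let mut values = Vec::new();
    for r in 0..rows {
        for c in 0..cols {
            let v = dense[r * cols + c];
            if v != 0.0 {
                col_idx.push(c);
                values.push(v);
            }
        }
        row_ptr.push(values.len());
    }
    CsrMatrix { row_ptr, col_idx, values }
}

fn main() {
    // Toy 2x4 weight matrix, flattened row-major.
    let mut w = vec![0.9, -0.1, 0.3, -0.7, 0.2, 0.8, -0.05, 0.4];
    prune_n_m(&mut w, 2, 4); // keep 2 of every 4 weights
    let csr = to_csr(&w, 2, 4);
    println!("nonzeros: {:?}", csr.values); // 4 survivors out of 8
    println!("row_ptr: {:?}, col_idx: {:?}", csr.row_ptr, csr.col_idx);
}
```

The real crate would presumably work on Burn tensors and backends rather than raw slices; this is just to show the scope I have in mind.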
Nothing is fully scoped out yet — just thought I’d throw the idea out there to start the discussion and see if it fits with Burn's direction.
Related issues:
Support Sparse Tensors #846