
2023-12-31 - Longstanding missing features #616

Open
@mratsim

Description

Arraymancer has become a key piece of the Nim ecosystem. Unfortunately I do not have the time to develop it further, for several reasons:

  • family: the birth of a family member and the death of hobby time.
  • a competing hobby: I've been focusing on cryptography for the last couple of years, I feel Nim also has a unique niche there, and I'm even accelerating Rust libraries with a Nim backend.
  • pace of development: the deep learning community was already moving rapidly in 2012–2018; today it moves extremely fast and is hard to compete with. That's not to say it's impossible, but you need better infrastructure to catch up.

Furthermore, Nim v2 has since introduced interesting new features, like built-in memory management that works with multithreading, and views, both of which are quite relevant to Arraymancer.

Let's go over the longstanding missing features that would improve Arraymancer, covering both the tensor library and the neural network library.

Tensor backend (~NumPy, ~SciPy)

  1. Mutable operations on slices
  2. Nested parallelism
  3. Doc generation
  4. Versioning / releases
  5. Slow transcendental functions (exp, log, sin, cos, tan, tanh, ...)
  6. Windows: BLAS and Lapack deployment woes
  7. MacOS: OpenMP woes
  8. MacOS: Tensor cores usage

Also: the need for untyped Tensors.
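To make item 1 concrete, here is the behavior being referred to, shown in NumPy (the reference point the section heading names). This is an illustration of the target semantics, not Arraymancer code: in NumPy a basic slice is a view, so assigning through it mutates the underlying tensor in place.

```python
import numpy as np

# A basic slice is a view into the original buffer,
# so writes through the slice mutate the parent tensor.
t = np.arange(12, dtype=np.float64).reshape(3, 4)

t[1, :] = 0.0      # zero out the second row through a slice
t[:, 0] += 100.0   # in-place update of the first column

# Both mutations are visible in the original tensor:
print(t[1, 0])     # 100.0
```

Supporting the same in-place slice mutations ergonomically (without copies or borrow-checking surprises) is what makes this a longstanding item.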

Neural network backend (~PyTorch)

  1. Nvidia Cuda
  2. Implementation woes: CPU forward, CPU backward, GPU forward, GPU backward, all optimized
  3. Ergonomic serialization and deserialization of models
  4. Slowness of reduction operations like sigmoid or softmax as the number of cores grows
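To illustrate why item 4 is hard to parallelize, here is a numerically stable softmax sketched in NumPy (again an illustration, not Arraymancer code): each row requires two reduction passes (a max, then a sum) interleaved with elementwise work, and on many cores those reductions, not the elementwise math, become the synchronization bottleneck.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax along the last axis."""
    m = x.max(axis=-1, keepdims=True)          # reduction pass 1: row max
    e = np.exp(x - m)                          # elementwise transcendental
    return e / e.sum(axis=-1, keepdims=True)   # reduction pass 2: row sum

probs = softmax(np.array([[1.0, 2.0, 3.0]]))
print(probs)  # each row sums to 1
```

Each reduction forces all threads working on a row to synchronize before the next elementwise stage, which is why naive parallelization scales poorly with core count.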
