Releases: ml-explore/mlx

v0.0.10

18 Jan 20:02
f6e911c

Highlights:

  • Faster matmul: up to 2.5x faster for certain sizes (see benchmarks)
  • Fused matmul + addition (for faster linear layers)

Core

  • Quantization supports sizes other than multiples of 32
  • Faster GEMM (matmul)
  • AddMM primitive (fused addition and matmul; see the sketch after this list)
  • mx.isnan, mx.isinf, mx.isposinf, mx.isneginf
  • mx.tile
  • VJPs for scatter_min and scatter_max
  • Multi-output split primitive
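
A minimal sketch of the fused op and a couple of the new element-wise additions, assuming the Python-level names mx.addmm, mx.isnan, and mx.tile corresponding to the items above:

```python
import mlx.core as mx

x = mx.random.normal((4, 8))
w = mx.random.normal((8, 3))
b = mx.zeros((3,))

# Fused addition + matmul: computes b + x @ w in one step,
# the building block for faster linear layers.
y = mx.addmm(b, x, w)

# New element-wise predicates and tiling.
mx.isnan(mx.array([1.0, float("nan")]))  # array([False, True])
mx.tile(mx.array([1, 2]), 3)             # array([1, 2, 1, 2, 1, 2])
```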

NN

  • Losses: Gaussian negative log-likelihood
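
A short usage sketch, assuming the loss is exposed as nn.losses.gaussian_nll_loss taking predicted means, targets, and predicted variances:

```python
import mlx.core as mx
import mlx.nn as nn

mu = mx.array([0.0, 1.0])       # predicted means
target = mx.array([0.1, 0.9])   # observed values
var = mx.array([1.0, 0.5])      # predicted variances

loss = nn.losses.gaussian_nll_loss(mu, target, var)
```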

Misc

  • Performance enhancements for graph evaluation with many outputs
  • Default PRNG seed is based on the current time instead of a fixed 0 (seeding example below)
  • Primitive VJPs take the primal output as an input, which avoids redundant work without requiring graph simplification
  • Booleans print in Python style (True/False) from the Python API
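
Because the default seed is now time-based, results differ across runs unless you seed explicitly; mx.random.seed restores determinism:

```python
import mlx.core as mx

mx.random.seed(0)  # pin the seed for reproducible runs
a = mx.random.uniform(shape=(2,))

mx.random.seed(0)
b = mx.random.uniform(shape=(2,))
# a and b are identical; without seeding, each run now differs.
```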

Bugfixes

  • Fix scatter with less-than-32-bit precision types and an integer overflow
  • Fix overflow with mx.eye
  • Report Metal out-of-memory issues instead of failing silently
  • Change mx.round to follow NumPy, which rounds half to even (example below)
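
For example, halfway cases now round to the nearest even integer (banker's rounding), matching NumPy:

```python
import mlx.core as mx

mx.round(mx.array([0.5, 1.5, 2.5, 3.5]))
# array([0, 2, 2, 4], dtype=float32) -- ties go to the even neighbor
```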

v0.0.9

11 Jan 22:07
006d01b

Highlights:

  • Initial (and experimental) GGUF support
  • Support for the Python buffer protocol (easy interoperability with NumPy, JAX, TensorFlow, PyTorch, etc.)
  • at[] syntax for scatter-style operations: x.at[idx].add(y), also min, max, prod, etc. (sketch below)
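
A quick sketch of the last two highlights: round-tripping through NumPy via the buffer protocol, and scatter-style updates with at[]:

```python
import numpy as np
import mlx.core as mx

# Buffer protocol: convert to/from NumPy (and anything else that
# speaks the protocol) without manual copies.
x = mx.array(np.arange(4.0))  # [0, 1, 2, 3]
x_np = np.array(x)

# at[] scatter syntax: unlike x[idx] += 1, duplicate indices all apply.
idx = mx.array([0, 0, 3])
y = x.at[idx].add(1)          # array([2, 1, 2, 4])
```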

Core

  • Array creation from other mx.arrays, e.g. mx.array([x, y]) (examples after this list)
  • Complete support for Python buffer protocol
  • mx.inner, mx.outer
  • mx.logical_and, mx.logical_or, and operator overloads
  • Array at syntax for scatter ops
  • Better support for in-place operations (+=, *=, -=, ...)
  • VJP for scatter and scatter add
  • Constants (mx.pi, mx.inf, mx.newaxis, …)
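
A few of the items above in one sketch (names as listed; behavior assumed to follow NumPy):

```python
import mlx.core as mx

a = mx.array([1.0, 2.0])
b = mx.array([3.0, 4.0])

m = mx.array([a, b])   # build a 2x2 array from existing mx.arrays
mx.inner(a, b)         # 11.0
mx.outer(a, b)         # [[3, 4], [6, 8]]

a += b                 # in-place operators now supported
angle = mx.pi / 4      # new constants: mx.pi, mx.inf, mx.newaxis, ...
```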

NN

  • GLU activation
  • cosine_similarity loss
  • Cache for RoPE and ALiBi
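
A usage sketch for the new loss, assuming it is exposed as nn.losses.cosine_similarity_loss with an (x1, x2, axis) signature:

```python
import mlx.core as mx
import mlx.nn as nn

x1 = mx.random.normal((4, 16))
x2 = mx.random.normal((4, 16))

# Per-row cosine similarity between two batches of embeddings.
sim = nn.losses.cosine_similarity_loss(x1, x2, axis=1)
```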

Bugfixes / Misc

  • Fix data type with tri
  • Fix saving non-contiguous arrays
  • Fix graph retention for in-place state, and remove retain_graph
  • Multi-output primitives
  • Better support for loading devices

v0.0.7

03 Jan 23:04
526466d

Core

  • Support for loading and saving Hugging Face's safetensors format
  • Transposed quantization matmul kernels
  • mlx.core.linalg sub-package with mx.linalg.norm (Frobenius, infinity, and p-norms; sketch below)
  • tensordot and repeat
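
A sketch of the safetensors round trip and the new norms, assuming the mx.save_safetensors / mx.load entry points:

```python
import mlx.core as mx

# Save and load in the safetensors format.
w = mx.random.normal((8, 8))
mx.save_safetensors("weights.safetensors", {"w": w})
loaded = mx.load("weights.safetensors")["w"]

# The new linalg sub-package.
mx.linalg.norm(w)                    # Frobenius norm by default
mx.linalg.norm(w, ord=float("inf"))  # infinity norm
```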

NN

  • Layers
    • Bilinear, Identity, InstanceNorm
    • Dropout2D, Dropout3D
    • More customizable Transformer (pre/post norm, dropout)
    • More activations: SoftSign, Softmax, HardSwish, LogSoftmax
    • Configurable scale in RoPE positional encodings
  • Losses: hinge, huber, log_cosh
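
For instance, the Huber loss (quadratic near zero, linear beyond delta); the name nn.losses.huber_loss and its delta keyword are assumptions here:

```python
import mlx.core as mx
import mlx.nn as nn

pred = mx.array([1.0, 2.0, 3.0])
target = mx.array([1.5, 2.0, 0.0])

# Robust to outliers: large residuals are penalized only linearly.
loss = nn.losses.huber_loss(pred, target, delta=1.0)
```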

Misc

  • Faster GPU reductions for certain cases
  • Change to memory allocation to allow swapping

v0.0.6

22 Dec 02:39
8385f93

Core

  • quantize, dequantize, quantized_matmul (round-trip sketch after this list)
  • moveaxis, swapaxes, flatten
  • stack
  • floor, ceil, clip
  • tril, triu, tri
  • linspace
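
A sketch of the quantization round trip; the (group_size, bits) keywords are assumed, and defaults may differ by version:

```python
import mlx.core as mx

w = mx.random.normal((64, 64))

# Quantize to 4 bits in groups of 32 along each row, then reconstruct.
w_q, scales, biases = mx.quantize(w, group_size=32, bits=4)
w_hat = mx.dequantize(w_q, scales, biases, group_size=32, bits=4)

# Matmul directly against the packed quantized weights.
x = mx.random.normal((1, 64))
y = mx.quantized_matmul(x, w_q, scales, biases, group_size=32, bits=4)
```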

Optimizers

  • RMSProp, Adamax, Adadelta, Lion
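
All of these share the same update API; a minimal training-step sketch using Lion (assuming the mlx.optimizers import path):

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

model = nn.Linear(4, 1)

def loss_fn(model, x, y):
    return nn.losses.mse_loss(model(x), y)

x, y = mx.random.normal((8, 4)), mx.random.normal((8, 1))
loss, grads = nn.value_and_grad(model, loss_fn)(model, x, y)

opt = optim.Lion(learning_rate=1e-4)  # same pattern for RMSProp, Adamax, ...
opt.update(model, grads)              # apply gradients in place
mx.eval(model.parameters())           # force evaluation of the update
```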

NN

  • Layers: QuantizedLinear, ALiBi positional encodings
  • Losses: Label smoothing, Smooth L1 loss, Triplet loss
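
QuantizedLinear drops in where a regular Linear would go; the bits keyword is an assumption carried over from the quantization core ops:

```python
import mlx.core as mx
import mlx.nn as nn

# A 4-bit quantized layer used like a regular nn.Linear.
layer = nn.QuantizedLinear(64, 64, bits=4)
y = layer(mx.random.normal((1, 64)))
```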

Misc

  • Bug fixes

v0.0.5

13 Dec 22:33
76e1af0

  • Core ops: remainder, eye, identity
  • Additional functionality in mlx.nn
    • Losses: binary cross entropy, KL divergence, MSE, L1
    • Activations: PReLU, Mish, and several others
  • More optimizers: AdamW, Nesterov momentum, Adagrad
  • Bug fixes
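
A combined sketch of the new losses and optimizers; binary_cross_entropy is assumed to take raw logits, so check the docs for your version:

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

logits = mx.array([0.2, -1.3, 4.0])
targets = mx.array([1.0, 0.0, 1.0])

# Binary cross entropy on raw logits (assumed default behavior).
loss = nn.losses.binary_cross_entropy(logits, targets)

# One of the new optimizers.
opt = optim.AdamW(learning_rate=1e-3)
```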