* docs: Basics run on CPU
* docs: Run Polynomial Fitting using Reactant
* feat: allow users to bump the HLO
* docs: update Optimization tutorial
* docs: use Reactant for CPU in SimpleChains
* docs: update PINN2DPDE
* docs: partially move HyperNet to reactant
* chore: run formatter [skip tests]
* docs: highlight Reactant more prominently
* docs: update SimpleRNN
* fix: incorrect check in Embedding
* fix: bump enzyme in project
* feat: handle weight initializers for reactant RNGs
* fix: workaround for #1186
* fix: simpleRNN works with reactant
* fix: failing tests and use overlay
* revert: Hypernet keep in CUDA for now
Look in the [examples](/examples/) directory for self-contained usage examples. The [documentation](https://lux.csail.mit.edu) has examples sorted into proper categories.
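For orientation, a minimal sketch of the kind of quick-start code those examples contain (assumes `Lux` and `Random` are installed; the model shape here is illustrative, not taken from a specific example):

```julia
using Lux, Random

rng = Random.default_rng()

# A small feedforward network. Lux models are stateless:
# parameters and state live outside the model object.
model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
ps, st = Lux.setup(rng, model)

x = rand(rng, Float32, 2, 4)   # 4 samples, 2 features each
y, st_new = model(x, ps, st)   # forward pass returns output and updated state
```

The same stateless model can then be compiled for CPU execution via Reactant, as the tutorials linked above now demonstrate.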
`docs/src/index.md` (2 additions, 2 deletions):

```diff
@@ -23,8 +23,8 @@ hero:

 features:
   - icon: 🚀
-    title: Fast & Extendible
-    details: Lux.jl is written in Julia itself, making it extremely extendible. CUDA and AMDGPU are supported first-class, with experimental support for Metal and Intel GPUs.
+    title: Fast & Extendable
+    details: Lux.jl is written in Julia itself, making it extremely extendable. CUDA and AMDGPU are supported first-class, with experimental support for Metal and Intel GPUs.
```