
Commit e0db0c1

ci: do full build [skip tests]
1 parent e5f0f25 commit e0db0c1

File tree

8 files changed: +30 additions, -30 deletions

.buildkite/cuda_tutorials.yml

Lines changed: 0 additions & 1 deletion

```diff
@@ -24,6 +24,5 @@ steps:
     timeout_in_minutes: 120

     env:
-      LUX_DOCS_DRAFT_BUILD: true # FIXME: remove before merging
       DATADEPS_ALWAYS_ACCEPT: true
       GKSwstype: "100" # https://discourse.julialang.org/t/generation-of-documentation-fails-qt-qpa-xcb-could-not-connect-to-display/60988
```

.github/workflows/Documentation.yml

Lines changed: 2 additions & 1 deletion

```diff
@@ -96,7 +96,6 @@ jobs:
       - name: Run Tutorials
         run: julia --color=yes --project=docs --threads=auto docs/tutorials.jl
         env:
-          LUX_DOCS_DRAFT_BUILD: true # FIXME: remove before merging
           TUTORIAL_BACKEND_GROUP: "CPU"
           BUILDKITE_PARALLEL_JOB_COUNT: 4
           BUILDKITE_PARALLEL_JOB: ${{ matrix.group }}
@@ -122,6 +121,8 @@ jobs:
     needs: [tutorial-cpu]
     steps:
       - uses: actions/checkout@v6
+        with:
+          fetch-depth: 0
      - name: Collect Workflow Telemetry
        uses: catchpoint/workflow-telemetry-action@v2
        with:
```
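The `fetch-depth: 0` added to `actions/checkout` switches the checkout from the default shallow clone (depth 1) to a full-history clone, which steps that inspect tags or earlier commits (docs deployment tooling, for instance) typically need. A small Python sketch of the difference, using throwaway local repositories (the helper names below are ours, not part of any workflow):

```python
# Sketch: shallow clone (actions/checkout default) vs full clone (fetch-depth: 0).
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command, raising on failure."""
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

def commit_count(repo):
    """Number of commits reachable from HEAD in the given repo."""
    out = subprocess.run(["git", "rev-list", "--count", "HEAD"],
                         cwd=repo, check=True, capture_output=True, text=True)
    return int(out.stdout.strip())

with tempfile.TemporaryDirectory() as tmp:
    # Build a small origin repository with three commits.
    origin = os.path.join(tmp, "origin")
    os.mkdir(origin)
    git("init", "-q", cwd=origin)
    for msg in ("one", "two", "three"):
        git("-c", "user.email=ci@example.com", "-c", "user.name=ci",
            "commit", "-q", "--allow-empty", "-m", msg, cwd=origin)

    # A depth-1 clone sees only the latest commit; a full clone sees all three.
    git("clone", "-q", "--depth", "1", f"file://{origin}", "shallow", cwd=tmp)
    git("clone", "-q", f"file://{origin}", "full", cwd=tmp)
    shallow = commit_count(os.path.join(tmp, "shallow"))
    full = commit_count(os.path.join(tmp, "full"))
    print(shallow, full)  # expected: 1 3
```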

docs/make.jl

Lines changed: 1 addition & 1 deletion

```diff
@@ -31,7 +31,7 @@ makedocs(;
         repo="github.com/LuxDL/Lux.jl",
         devbranch="main",
         devurl="dev",
-        deploy_url="https://lux.csail.mit.edu"
+        deploy_url="https://lux.csail.mit.edu",
     ),
     plugins=[CitationBibliography(joinpath(@__DIR__, "references.bib"))],
     draft=DRAFT_MODE,
```

docs/package.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -5,7 +5,7 @@
     "@types/node": "^22.13.4",
     "markdown-it": "^14.1.0",
     "markdown-it-mathjax3": "^4.3.2",
-    "vitepress": "^1.6.3",
+    "vitepress": "^1.6.4",
     "vitepress-plugin-tabs": "^0.6.0"
   },
   "scripts": {
```

docs/src/.vitepress/config.mts

Lines changed: 2 additions & 0 deletions

```diff
@@ -492,5 +492,7 @@ export default defineConfig({
       timeStyle: "medium",
     },
   },
+  metaChunk: true,
+  mpa: true,
 },
});
```

docs/src/introduction/overview.md

Lines changed: 20 additions & 22 deletions

```diff
@@ -12,61 +12,59 @@ it both compiler and autodiff friendly.

 Lux.jl takes a **Reactant-first approach** to deliver exceptional performance and seamless deployment capabilities:

-* **XLA Compilation** -- Lux models compile to highly optimized XLA code via [Reactant.jl](https://github.com/EnzymeAD/Reactant.jl), delivering significant speedups on CPU, GPU, and TPU.
+- **XLA Compilation** -- Lux models compile to highly optimized XLA code via [Reactant.jl](https://github.com/EnzymeAD/Reactant.jl), delivering significant speedups on CPU, GPU, and TPU.

-* **Cross-Platform Performance** -- Run the same Lux model with optimal performance across different hardware backends (CPU, NVIDIA GPUs, AMD GPUs, TPUs) without code changes, simply by switching the Reactant backend.
+- **Cross-Platform Performance** -- Run the same Lux model with optimal performance across different hardware backends (CPU, NVIDIA GPUs, AMD GPUs, TPUs) without code changes, simply by switching the Reactant backend.

-* **Production Deployment** -- Compiled models can be exported and deployed to production servers and edge devices by leveraging the rich TensorFlow ecosystem, making Lux suitable for real-world applications.
+- **Production Deployment** -- Compiled models can be exported and deployed to production servers and edge devices by leveraging the rich TensorFlow ecosystem, making Lux suitable for real-world applications.

-* **Large Model Support** -- With Reactant compilation, Lux now excels at training very large models that were previously challenging, making it competitive with other frameworks for large-scale deep learning.
+- **Large Model Support** -- With Reactant compilation, Lux now excels at training very large models that were previously challenging, making it competitive with other frameworks for large-scale deep learning.

 ## Design Principles

-* **Layers must be immutable** -- cannot store any parameter/state but rather store the
+- **Layers must be immutable** -- cannot store any parameter/state but rather store the
   information to construct them
-* **Layers are pure functions**
-* **Layers return a Tuple containing the result and the updated state**
-* **Given same inputs the outputs must be same** -- yes this must hold true even for
+- **Layers are pure functions**
+- **Layers return a Tuple containing the result and the updated state**
+- **Given same inputs the outputs must be same** -- yes this must hold true even for
   stochastic functions. Randomness must be controlled using `rng`s passed in the state.
-* **Easily extensible**
-* **Extensive Testing** -- All layers and features are tested across all supported AD
+- **Easily extensible**
+- **Extensive Testing** -- All layers and features are tested across all supported AD
   backends across all supported hardware backends.

 ## Why use Lux over Flux?

-* **High-Performance XLA Compilation** -- Lux's Reactant-first approach enables XLA compilation for dramatic performance improvements across CPU, GPU, and TPU. Models compile to highly optimized code that eliminates Julia overhead and leverages hardware-specific optimizations.
+- **High-Performance XLA Compilation** -- Lux's Reactant-first approach enables XLA compilation for dramatic performance improvements across CPU, GPU, and TPU. Models compile to highly optimized code that eliminates Julia overhead and leverages hardware-specific optimizations.

-* **Production-Ready Deployment** -- Deploy Lux models to production environments using the mature TensorFlow ecosystem. Compiled models can be exported and run on servers, edge devices, and mobile platforms.
+- **Production-Ready Deployment** -- Deploy Lux models to production environments using the mature TensorFlow ecosystem. Compiled models can be exported and run on servers, edge devices, and mobile platforms.

-* **Neural Networks for SciML**: For SciML Applications (Neural ODEs, Deep Equilibrium
+- **Neural Networks for SciML**: For SciML Applications (Neural ODEs, Deep Equilibrium
   Models) solvers typically expect a monolithic parameter vector. Flux enables this via its
   `destructure` mechanism, but `destructure` comes with various
   [edge cases and limitations](https://fluxml.ai/Optimisers.jl/dev/api/#Optimisers.destructure). Lux
   forces users to make an explicit distinction between state variables and parameter
   variables to avoid these issues. Also, it comes battery-included for distributed training.

-* **Sensible display of Custom Layers** -- Ever wanted to see Pytorch like Network printouts
+- **Sensible display of Custom Layers** -- Ever wanted to see Pytorch like Network printouts
   or wondered how to extend the pretty printing of Flux's layers? Lux handles all of that
   by default.

-* **Truly immutable models** - No *unexpected internal mutations* since all layers are
-  implemented as pure functions. All layers are also *deterministic* given the parameters
+- **Truly immutable models** - No _unexpected internal mutations_ since all layers are
+  implemented as pure functions. All layers are also _deterministic_ given the parameters
   and state: if a layer is supposed to be stochastic (say [`Lux.Dropout`](@ref)), the state
   must contain a seed which is then updated after the function call.

-* **Easy Parameter Manipulation** -- By separating parameter data and layer structures,
+- **Easy Parameter Manipulation** -- By separating parameter data and layer structures,
   Lux makes implementing [`WeightNorm`](@ref), `SpectralNorm`, etc. downright trivial.
   Without this separation, it is much harder to pass such parameters around without
   mutations which AD systems don't like.

-* **Wider AD Support** -- Lux has extensive support for most
+- **Wider AD Support** -- Lux has extensive support for most
   [AD systems in julia](@ref autodiff-lux), while Flux is mostly tied to Zygote (with some
   initial support for Enzyme).

-* **Optimized for All Model Sizes** -- Whether you're working with small prototypes or large production models, Lux delivers optimal performance. For the smallest networks where minimal overhead is critical, you can use [`ToSimpleChainsAdaptor`](@ref) to leverage SimpleChains.jl's specialized CPU optimizations.
+- **Optimized for All Model Sizes** -- Whether you're working with small prototypes or large production models, Lux delivers optimal performance. For the smallest networks where minimal overhead is critical, you can use [`ToSimpleChainsAdaptor`](@ref) to leverage SimpleChains.jl's specialized CPU optimizations.

-* **Reliability** -- We have learned from the mistakes of the past with Flux and everything
+- **Reliability** -- We have learned from the mistakes of the past with Flux and everything
   in our core framework is extensively tested, along with downstream CI to ensure that
   everything works as expected.
-
-
```
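The design principles restated in overview.md (immutable layers, pure functions, `(output, state)` return values, randomness controlled through state) are language-agnostic. A minimal Python sketch of the convention, with a hypothetical `dropout_apply` that is not Lux's actual API:

```python
# Sketch of the "pure stateful layer" convention (hypothetical, not Lux.jl code):
# a layer is a pure function returning (output, updated_state), and randomness
# lives in the state, so identical inputs + identical state give identical outputs.
import random

def dropout_apply(x, p, state):
    """Dropout as a pure function: deterministic given (x, p, state)."""
    rng = random.Random(state["seed"])  # seed comes from the state, not a global RNG
    y = [0.0 if rng.random() < p else xi / (1.0 - p) for xi in x]
    new_state = {"seed": state["seed"] + 1}  # return a fresh state, never mutate
    return y, new_state

x = [1.0, 2.0, 3.0, 4.0]
st = {"seed": 42}

y1, st1 = dropout_apply(x, 0.5, st)
y2, _ = dropout_apply(x, 0.5, st)   # same inputs and state: identical output
assert y1 == y2

y3, _ = dropout_apply(x, 0.5, st1)  # advanced state: a new dropout mask is drawn
```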

docs/src/introduction/resources.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -1,10 +1,10 @@
 # Resources to Get Started

-* Go through the [Quickstart Example](@ref Quickstart).
-* Read the introductory tutorials on
+- Go through the [Quickstart Example](@ref Quickstart).
+- Read the introductory tutorials on
   [Julia](https://jump.dev/JuMP.jl/stable/tutorials/getting_started/getting_started_with_julia)
   and Lux.
-* Go through the examples sorted based on their complexity in the documentation.
+- Go through the examples sorted based on their complexity in the documentation.

 !!! tip "Have More Questions?"
```
docs/src/tutorials/index.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -35,7 +35,7 @@ const beginner = [
     },
     {
         href: "https://luxdl.github.io/Boltz.jl/stable/tutorials/1_GettingStarted",
-        src: "https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",
+        src: "https://blog.roboflow.com/content/images/2021/06/image-18.png",
         caption: "Pre-Built Deep Learning Models",
         desc: "Use Boltz.jl to load pre-built DL and SciML models."
     }
```

0 commit comments
