
Commit baeee9c: logos & links
Parent: 784ae8f

3 files changed: +10 -8 lines


docs/src/assets/zygote-crop.png (binary image, 61.8 KB)

docs/src/guide/models/basics.md (9 additions, 7 deletions)
````diff
@@ -188,7 +188,7 @@ For ordinary pure functions like `(x,y) -> (x*y)`, this `∂f(x,y)/∂f` would a
 depends on `θ`.
 
 ```@raw html
-<h3><img src="https://github.com/FluxML/Optimisers.jl/blob/master/docs/src/assets/logo.png?raw=true" width="40px"/><a href="https://github.com/FluxML/Zygote.jl">Zygote.jl</a></h3>
+<h3><img src="../../../assets/zygote-crop.png" width="40px"/>&nbsp;<a href="https://github.com/FluxML/Zygote.jl">Zygote.jl</a></h3>
 ```
 
 Flux's [`gradient`](@ref) function by default calls a companion packages called [Zygote](https://github.com/FluxML/Zygote.jl).
````
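For readers skimming the diff: the `gradient` function this passage documents is used like so. A minimal sketch, with the function `f` and the input values invented for illustration:

```julia
using Flux  # Flux.gradient calls Zygote under the hood by default

# An ordinary pure function of two arguments
f(x, y) = sum((x .- y) .^ 2)

x, y = [1f0, 2f0], [0f0, 1f0]

# One gradient is returned per argument: (∂f/∂x, ∂f/∂y)
grad_x, grad_y = Flux.gradient(f, x, y)
```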
````diff
@@ -327,7 +327,9 @@ grad = Flux.gradient(|>, [1f0], model1)[2]
 This gradient is starting to be a complicated nested structure.
 But it works just like before: `grad.outer.inner.W` corresponds to `model1.outer.inner.W`.
 
-### <img src="https://github.com/FluxML/Optimisers.jl/blob/master/docs/src/assets/logo.png?raw=true" width="40px"/> &nbsp; [Flux's layers](man-layers)
+```@raw html
+<h3><img src="https://github.com/FluxML/Optimisers.jl/blob/master/docs/src/assets/logo.png?raw=true" width="40px"/>&nbsp;<a href="../../../reference/models/layers/">Flux's layers</a></h3>
+```
 
 Rather than define everything from scratch every time, Flux provides a library of
 commonly used layers. The same model could be defined:
````
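The final context line above leads into rebuilding the hand-written model from Flux's built-in layers. A sketch of the usual pattern, assuming a two-layer model; the layer widths here are guesses, not values taken from this diff:

```julia
using Flux

# Chain composes layers in sequence; Dense is the built-in affine layer
model2 = Chain(Dense(1 => 20, tanh), Dense(20 => 1))

model2([0.5f0])  # a model is called like a function, here on a length-1 input
```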
````diff
@@ -359,14 +361,14 @@ How does this `model2` differ from the `model1` we had before?
 Calling [`Flux.@layer Layer`](@ref Flux.@layer) will add this, and some other niceties.
 
 If what you need isn't covered by Flux's built-in layers, it's easy to write your own.
-There are more details [later](man-advanced), but the steps are invariably those shown for `struct Layer` above:
+There are more details [later](@ref man-advanced), but the steps are invariably those shown for `struct Layer` above:
 1. Define a `struct` which will hold the parameters.
 2. Make it callable, to define how it uses them to transform the input `x`
 3. Define a constructor which initialises the parameters (if the default constructor doesn't do what you want).
 4. Annotate with `@layer` to opt-in to pretty printing, and other enhacements.
 
 ```@raw html
-<h3><img src="https://github.com/FluxML/Optimisers.jl/blob/master/docs/src/assets/logo.png?raw=true" width="40px"/><a href="https://github.com/FluxML/Functors.jl">Functors.jl</a></h3>
+<h3><img src="https://github.com/FluxML/Optimisers.jl/blob/master/docs/src/assets/logo.png?raw=true" width="40px"/>&nbsp;<a href="https://github.com/FluxML/Functors.jl">Functors.jl</a></h3>
 ```
 
 To deal with such nested structures, Flux relies heavily on an associated package
````
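The four numbered steps quoted in this hunk map onto code roughly as follows. A minimal sketch, with the field names and initialisation chosen for illustration rather than copied from the docs:

```julia
using Flux

# 1. Define a struct which will hold the parameters
struct Layer
    W::Matrix{Float32}
    b::Vector{Float32}
end

# 2. Make it callable, to define how it transforms the input x
(l::Layer)(x) = tanh.(l.W * x .+ l.b)

# 3. A constructor which initialises the parameters
Layer(in::Int, out::Int) = Layer(randn(Float32, out, in), zeros(Float32, out))

# 4. Annotate with @layer to opt in to pretty printing and other enhancements
Flux.@layer Layer

Layer(2, 3)(rand(Float32, 2))  # quick smoke test: 2 inputs, 3 outputs
```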
````diff
@@ -399,7 +401,7 @@ of the output -- it must be a number, not a vector. Adjusting the parameters
 to make this smaller won't lead us anywhere interesting. Instead, we should minimise
 some *loss function* which compares the actual output to our desired output.
 
-Perhaps the simplest example is curve fitting. The [previous page](man-overview) fitted
+Perhaps the simplest example is curve fitting. The [previous page](@ref man-overview) fitted
 a linear function to data. With out two-layer `model2`, we can fit a nonlinear function.
 For example, let us use `f(x) = 2x - x^3` evaluated at some points `x in -2:0.1:2` as the data,
 and adjust the parameters of `model2` from above so that its output is similar.
````
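Assembled, the curve-fitting exercise this hunk describes looks roughly like the following. A sketch assuming the two-layer `model2` from earlier; the learning rate and epoch count are guesses:

```julia
using Flux

model2 = Chain(Dense(1 => 20, tanh), Dense(20 => 1))

# f(x) = 2x - x^3 sampled on -2:0.1:2, stored as (input, target) pairs
data = [([x], [2x - x^3]) for x in Float32.(-2:0.1:2)]

loss(m, x, y) = Flux.mse(m(x), y)  # compare actual output to desired output

opt_state = Flux.setup(Descent(0.01), model2)
for epoch in 1:1000
    Flux.train!(loss, model2, data, opt_state)
end
```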
````diff
@@ -424,6 +426,6 @@ plot(x -> 2x-x^3, -2, 2, label="truth")
 scatter!(x -> model2([x]), -2:0.1f0:2, label="fitted")
 ```
 
-If this general idea is unfamiliar, you may want the [tutorial on linear regression](man-linear-regression).
+If this general idea is unfamiliar, you may want the [tutorial on linear regression](@ref man-linear-regression).
 
-More detail about what exactly the function `train!` is doing, and how to use rules other than simple [`Descent`](@ref Optimisers.Descent), is what the next page in this guide is about: [training](man-training).
+More detail about what exactly the function `train!` is doing, and how to use rules other than simple [`Descent`](@ref Optimisers.Descent), is what the next page in this guide is about: [training](@ref man-training).
````

docs/src/reference/models/layers.md (1 addition, 1 deletion)
````diff
@@ -1,4 +1,4 @@
-# [Built-in Layer Types]](@id man-layers)
+# [Built-in Layer Types](@id man-layers)
 
 If you started at the beginning of the guide, then you have already met the
 basic [`Dense`](@ref) layer, and seen [`Chain`](@ref) for combining layers.
````
