
Commit 9ab4a26

docs: autodiff docs (#1580)
1 parent 27b4026 commit 9ab4a26

File tree

2 files changed

+188 −0 lines changed

nx/guides/advanced/automatic_differentiation.livemd

@@ -0,0 +1,187 @@
# Automatic Differentiation

```elixir
Mix.install([
  {:nx, "~> 0.7"}
])
```

## What is Function Differentiation?

Nx, through the `Nx.Defn.grad/2` and `Nx.Defn.value_and_grad/3` functions, allows the user to differentiate functions that were defined through `defn`.
This is really important in Machine Learning settings because, in general, training happens through optimization methods that require calculating the gradient of tensor functions.

Before we get too far ahead of ourselves, let's talk about what the derivative or the gradient of a function is.
In simple terms, the derivative tells us how a function changes at a given point, and lets us find things such as where a function has its maximum,
minimum, or turning points (for example, where a parabola has its vertex).

The ability to locate local minima and maxima is what makes derivatives important for optimization problems, because it lets us solve problems that aim
to minimize a given function. For higher-dimensional problems we deal with functions of many variables, and thus we use the gradient, which collects the partial derivative of the function with respect to each variable.
The gradient, then, is a vector that points in the direction in which the function increases the most, which leads to the so-called gradient descent method of optimization.

In the gradient descent method, we take tiny steps in the direction opposite to the gradient in order to find the nearest local minimum (which hopefully is either the global minimum or close enough to it).
This is what makes function differentiation so important for Machine Learning.

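To make this concrete, here is a minimal gradient descent sketch. It is not part of the original guide, and the quadratic function, the `0.1` step size, and the `50` iterations are arbitrary illustrative choices:

```elixir
# Minimize g(x) = sum(x²) by repeatedly stepping against its gradient.
# Each iteration moves x a little in the direction opposite to ∇g(x),
# so x converges towards the minimum at 0.
g = fn x -> x |> Nx.pow(2) |> Nx.sum() end

Enum.reduce(1..50, Nx.tensor([5.0, -3.0]), fn _step, x ->
  grad = Nx.Defn.grad(x, g)
  Nx.subtract(x, Nx.multiply(0.1, grad))
end)
```
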
Let's take, for example, the following scalar function $f(x)$ and its derivative $f'(x)$:

$$
f(x) = x^3 + x \\
f'(x) = 3x^2 + 1
$$

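For instance, at $x = 2$ we get $f(2) = 2^3 + 2 = 10$ and $f'(2) = 3 \cdot 2^2 + 1 = 13$, values that will show up again in the tensor example below.
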
We can define a similar function-derivative pair for tensor functions:

$$
f(\bold{x}) = \bold{x}^3 + \bold{x} \\
\nabla f(\bold{x}) = 3 \bold{x}^2 + 1
$$

These may look similar, but the difference is that $f(\bold{x})$ takes in a tensor argument $\bold{x}$. This means that we can have the following argument and results for the function and its gradient:

$$
\bold{x} =
\begin{bmatrix}
1 & 1 \\
2 & 3 \\
5 & 8 \\
\end{bmatrix}
$$

$$
f(\bold{x}) = \bold{x}^3 + \bold{x} =
\begin{bmatrix}
2 & 2 \\
10 & 30 \\
130 & 520
\end{bmatrix}
$$

$$
\nabla f(\bold{x}) = 3 \bold{x}^2 + 1 =
\begin{bmatrix}
4 & 4 \\
13 & 28 \\
76 & 193
\end{bmatrix}
$$

## Automatic Differentiation

Now that we have a general feeling for what a function and its gradient are, we can talk about how Nx can use `defn` to calculate gradients for us.

In the following code blocks we're going to define the same tensor function as above, and then we'll differentiate it using only Nx, without having to write the explicit derivative at all.

```elixir
defmodule Math do
  import Nx.Defn

  # f(x) = x ** 3 + x, applied element-wise to the tensor
  defn f(x) do
    x ** 3 + x
  end

  # The gradient of f/1, derived automatically from its definition
  defn grad_f(x) do
    Nx.Defn.grad(x, &f/1)
  end
end
```

```elixir
x =
  Nx.tensor([
    [1, 1],
    [2, 3],
    [5, 8]
  ])

{
  Math.f(x),
  Math.grad_f(x)
}
```

As we can see, we get the results we expected, aside from the type of the gradient, which will always be a floating-point tensor, even if you pass an integer tensor as input.

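We can see this type promotion in isolation with a small check. This snippet is not from the original guide; the function and input value are arbitrary:

```elixir
# Integer input, floating-point gradient: d/dt (t ** 2) at t = 3 is 6.0
Nx.Defn.grad(Nx.tensor(3), fn t -> Nx.pow(t, 2) end)
```
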
Next, we'll use `Nx.Defn.debug_expr` to see what's happening under the hood.

```elixir
Nx.Defn.debug_expr(&Math.f/1).(x)
```

```elixir
Nx.Defn.debug_expr(&Math.grad_f/1).(x)
```

If we look closely at the returned `Nx.Defn.Expr` representations for `f` and `grad_f`, we can see that they translate pretty much directly to the mathematical definitions we had originally.

This is possible because Nx holds onto the symbolic representation of a `defn` function while inside `defn`-land, and thus `Nx.Defn.grad` (and similar functions) can operate on that symbolic representation to return a new symbolic representation (as seen in the second block).

<!-- livebook:{"break_markdown":true} -->

`Nx.Defn.value_and_grad` can be used to calculate both the function value and its gradient at once:

```elixir
Nx.Defn.value_and_grad(x, &Math.f/1)
```

And if we use `debug_expr` again, we can see that the symbolic representation is actually both the function and the gradient, returned in a tuple:

```elixir
Nx.Defn.debug_expr(Nx.Defn.value_and_grad(&Math.f/1)).(x)
```

Finally, we can talk about functions that receive multiple arguments, such as the following `add_multiply` function:

```elixir
add_multiply = fn x, y, z ->
  addition = Nx.add(x, y)
  Nx.multiply(z, addition)
end
```

At first you may think that to differentiate it we need to wrap it in a single-argument function, so that we can differentiate with respect to one specific argument while treating the others as constants, as we can see below:

```elixir
x = Nx.tensor([1, 2])
y = Nx.tensor([3, 4])
z = Nx.tensor([5, 6])

{
  Nx.Defn.grad(x, fn t -> add_multiply.(t, y, z) end),
  Nx.Defn.grad(y, fn t -> add_multiply.(x, t, z) end),
  Nx.Defn.grad(z, fn t -> add_multiply.(x, y, t) end)
}
```

However, Nx is smart enough to deal with functions of multiple arguments through `Nx.Container` representations such as a tuple or a map:

```elixir
Nx.Defn.grad({x, y, z}, fn {x, y, z} -> add_multiply.(x, y, z) end)
```

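The same works with a map, since maps of tensors also implement `Nx.Container`. Here is a quick sketch; the key names are arbitrary:

```elixir
# Same gradients as above, but with the inputs in a map container;
# the result mirrors the container, with one gradient per key.
Nx.Defn.grad(%{x: x, y: y, z: z}, fn %{x: x, y: y, z: z} ->
  add_multiply.(x, y, z)
end)
```
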
Likewise, we can also deal with functions that return multiple values.

`Nx.Defn.grad` requires us to return a scalar from the function (that is, a tensor of shape `{}`).
However, there are instances where we might want to use `value_and_grad` to get a tuple out of our function, while still calculating its gradient.

For this, we have the `value_and_grad/3` arity, which accepts a transformation argument.

```elixir
x =
  Nx.tensor([
    [1, 1],
    [2, 3],
    [5, 8]
  ])

# Notice that the returned values are the two addition terms from `Math.f/1`
multi_valued_return_fn =
  fn x ->
    {Nx.pow(x, 3), x}
  end

# The transformation recombines the two terms into `x ** 3 + x`,
# the value that is actually differentiated
transform_fn = fn {x_cubed, x} -> Nx.add(x_cubed, x) end

{{x_cubed, x}, grad} = Nx.Defn.value_and_grad(x, multi_valued_return_fn, transform_fn)
```

If we go back to the start of this livebook, we can see that `grad` holds exactly the result of `Math.grad_f`, but now we also have access to `x ** 3`, which wasn't accessible before, as originally we could only obtain `x ** 3 + x`.

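As a quick sanity check (a sketch reusing the module defined at the top of this livebook), we can confirm that the two gradients match:

```elixir
# Compares the gradient from value_and_grad/3 against Math.grad_f/1
Nx.all_close(grad, Math.grad_f(x))
```
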

nx/mix.exs

+1
```diff
@@ -60,6 +60,7 @@ defmodule Nx.MixProject do
       "guides/intro-to-nx.livemd",
       "guides/advanced/vectorization.livemd",
       "guides/advanced/aggregation.livemd",
+      "guides/advanced/automatic_differentiation.livemd",
       "guides/exercises/exercises-1-20.livemd"
     ],
     skip_undefined_reference_warnings_on: ["CHANGELOG.md"],
```
