
Commit 15f8c3f

tweak docs
1 parent 5259d22 commit 15f8c3f

File tree

2 files changed: +7 −3 lines changed


README.md (4 additions, 2 deletions)

````diff
@@ -27,7 +27,9 @@ Extrapolate `f(x)` to `f₀ ≈ f(x0)`, evaluating `f` only at `x > x0` points
 `x=x₀+h`. It returns a tuple `(f₀, err)` of the estimated `f(x0)`
 and an error estimate.
 
-More generally, `h` and `x0` can be in an arbitrary vector space,
+The return value of `f` can be any type supporting `±` and `norm`
+operations (i.e. a normed vector space).
+More generally, `h` and `x0` can be in any normed vector space,
 in which case `extrapolate` performs Richardson extrapolation
 of `f(x0+s*h)` to `s=0⁺` (i.e. it takes the limit as `x` goes
 to `x0` along the `h` direction).
@@ -134,7 +136,7 @@ which is the correct result (`1.0`) to machine precision.
 
 ### Numerical derivatives
 
-A classic use of Richardson extrapolation is accurately evaluating derivatives via [finite-difference approximations](https://en.wikipedia.org/wiki/Finite_difference) (although analytical derivatives, e.g. by automatic differentiation, are of course vastly more efficient when they are available). In this example, we use Richardson extrapolation on the forward-difference approximation `f'(x) ≈ (f(x+h)-f(x))/h`, for which the error decreases as `O(h)` but a naive application to a very small `h` will yield a huge [cancellation error](https://en.wikipedia.org/wiki/Loss_of_significance) from floating-point roundoff effects. We differentiate `f(x)=sin(x)` at `x=1`, for which the correct answer is `cos(1) ≈ 0.5403023058681397174009366...`, starting with `h=0.1`
+A classic application of Richardson extrapolation is the accurate evaluation of derivatives via [finite-difference approximations](https://en.wikipedia.org/wiki/Finite_difference) (although analytical derivatives, e.g. by automatic differentiation, are of course vastly more efficient when they are available). In this example, we use Richardson extrapolation on the forward-difference approximation `f'(x) ≈ (f(x+h)-f(x))/h`, for which the error decreases as `O(h)` but a naive application to a very small `h` will yield a huge [cancellation error](https://en.wikipedia.org/wiki/Loss_of_significance) from floating-point roundoff effects. We differentiate `f(x)=sin(x)` at `x=1`, for which the correct answer is `cos(1) ≈ 0.5403023058681397174009366...`, starting with `h=0.1`
 ```jl
 extrapolate(0.1, rtol=0) do h
 @show h
````
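The `jl` block in the hunk above is cut off by the diff context. A self-contained sketch of the forward-difference example that the README paragraph describes (assuming the `extrapolate` API documented in the docstring, with the loop-progress `@show` omitted) might look like:

```jl
using Richardson

# Forward-difference approximation to f'(1) for f(x) = sin(x).
# The error is O(h), but taking h tiny directly incurs catastrophic
# cancellation; extrapolate instead Richardson-extrapolates the
# h → 0⁺ limit, starting from h = 0.1.
val, err = extrapolate(0.1, rtol=0) do h
    (sin(1 + h) - sin(1)) / h
end
# val should match cos(1) ≈ 0.5403023058681397 to near machine
# precision, with err the corresponding error estimate.
```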

src/Richardson.jl (3 additions, 1 deletion)

````diff
@@ -27,7 +27,9 @@ Extrapolate `f(x)` to `f₀ ≈ f(x0)`, evaluating `f` only at `x > x0` points
 `x=x₀+h`. It returns a tuple `(f₀, err)` of the estimated `f(x0)`
 and an error estimate.
 
-More generally, `h` and `x0` can be in an arbitrary vector space,
+The return value of `f` can be any type supporting `±` and `norm`
+operations (i.e. a normed vector space).
+More generally, `h` and `x0` can be in any normed vector space,
 in which case `extrapolate` performs Richardson extrapolation
 of `f(x0+s*h)` to `s=0⁺` (i.e. it takes the limit as `x` goes
 to `x0` along the `h` direction).
````
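The docstring lines added in this commit state that the return value of `f` need only support `±` and `norm`. A minimal hypothetical sketch of what that permits (illustrative usage, not part of the commit) is a vector-valued `f`:

```jl
using Richardson

# f returns a Vector{Float64}; extrapolate only requires that the
# return type support ± and norm (i.e. live in a normed vector
# space), so it can extrapolate a vector-valued limit as h → 0⁺.
val, err = extrapolate(0.1) do h
    [sin(1 + h), cos(1 + h)]
end
# val estimates [sin(1), cos(1)]; err is a norm-based error estimate.
```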
