Muller algorithm benchmark #582

Open
@fgittins

Description

Motivated by #501, I was curious to see how the newly added Muller algorithm (#568) fares in the SciML benchmarks, particularly against the implementation in Roots.jl.

Below are performance results from my local machine (MacBook Pro M3, 16 GB RAM), measured with BenchmarkTools. Each test solves the same parametrised root-finding problem N = 100_000 times, once for each randomly sampled parameter value (see the script below):

Method              Time         Allocations
Roots.jl default    155.340 ms   100,000
ITP                  22.850 ms   0
Roots.muller         26.713 ms   0
Muller               11.839 ms   0

Highlights:

  • I was able to reproduce the expected speed-up of ITP over the default solver in Roots.jl, consistent with the SciML benchmarks.
  • I found that Roots.muller is significantly faster than the default method (likely because it is non-allocating), but both NonlinearSolve.jl algorithms outperform it.
  • Finally, it was interesting that Muller is almost twice as fast as ITP for this function. Having run the benchmark several times on my machine, this factor-of-two difference appears consistent. However, it may only reflect how well suited Muller is to this particular problem rather than its general performance; a single-solve sanity check is sketched below.
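
For a quick sanity check independent of the timings, here is a minimal single-solve sketch (the level value 0.75 and the variable names are arbitrary choices for illustration): both NonlinearSolve.jl methods are set up on the same IntervalNonlinearProblem and should agree on the root.

using NonlinearSolve

level = 0.75  # arbitrary example parameter
uspan = (0.0, 2.0)

f = IntervalNonlinearFunction{false}((u, p) -> u * sin(u) - p)
prob = IntervalNonlinearProblem{false}(f, uspan, level)

sol_itp = solve(prob, ITP())
sol_muller = solve(prob, Muller())

# both methods should return (approximately) the same root of u * sin(u) = level
println((sol_itp.u, sol_muller.u))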

Benchmark script

This is the script that I ran to generate the benchmarks:

using BenchmarkTools
using NonlinearSolve
using Roots

const N = 100_000

# test function: find u such that u * sin(u) = level
myfun(u, level) = u * sin(u) - level

# Roots.jl: default find_zero solver on the bracket uspan
function froots!(out, levels, uspan)
    for i in 1:N
        out[i] = find_zero(x->myfun(x, levels[i]), uspan)
    end
    out
end

# NonlinearSolve.jl: ITP() on the interval problem
function f!(out, levels, uspan)
    for i in 1:N
        prob = IntervalNonlinearProblem{false}(IntervalNonlinearFunction{false}(myfun), uspan, levels[i])
        sol = solve(prob, ITP())
        out[i] = sol.u
    end
    out
end

# Roots.jl: Roots.muller started from the bracket endpoints and their midpoint
function groots!(out, levels, uspan)
    xᵢ₋₂, xᵢ = uspan
    xᵢ₋₁ = (xᵢ₋₂ + xᵢ) / 2  # choose midpoint for consistency with Muller()
    for i in 1:N
        out[i] = Roots.muller(x->myfun(x, levels[i]), xᵢ₋₂, xᵢ₋₁, xᵢ)
    end
    out
end

# NonlinearSolve.jl: Muller() on the interval problem
function g!(out, levels, uspan)
    for i in 1:N
        prob = IntervalNonlinearProblem{false}(IntervalNonlinearFunction{false}(myfun), uspan, levels[i])
        sol = solve(prob, Muller())
        out[i] = sol.u
    end
    out
end

function main()
    out = zeros(N)
    levels = 1.5 .* rand(N)
    uspan = (0.0, 2.0)

    @btime froots!($out, $levels, $uspan)
    @btime f!($out, $levels, $uspan)
    @btime groots!($out, $levels, $uspan)
    @btime g!($out, $levels, $uspan)

    nothing
end

main()

If this speed-up is robust, it may be worth adding Muller to the SciML benchmarks.
