Conversation

@Outfluencer (Collaborator)

No description provided.

@Outfluencer (Collaborator, Author)

Java 21 results

Benchmark                       Mode  Cnt        Score        Error   Units
JMHBenchmark.testDirectCall    thrpt    5  2039208,213 ± 145939,144  ops/ms
JMHBenchmark.testMethodHandle  thrpt    5   265367,053 ±  18371,948  ops/ms
JMHBenchmark.testReflection    thrpt    5   115452,475 ±  15589,299  ops/ms

<version>1.0</version>
</signature>
<ignores>
<!-- Allow using MethodHandle APIs even if the signature implies they don't exist -->
Member

What versions do they exist in?

@Janmm14 (Contributor) Dec 2, 2025

The signatures are polymorphic (especially the invoke methods; they "appear" as varargs, but they are special) and IIRC the tool can't handle that.
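For reference, a minimal sketch (not from this PR) of what makes these methods special: MethodHandle.invoke and invokeExact are signature-polymorphic, so the compiler derives the call-site descriptor from the static argument and return types at each call site rather than from the declared (Object...) varargs signature, which is all a bytecode signature checker sees.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class PolymorphicSignatureDemo
{
    public static void main(String[] args) throws Throwable
    {
        MethodHandle concat = MethodHandles.lookup().findVirtual( String.class, "concat",
                MethodType.methodType( String.class, String.class ) );

        // invokeExact: the compiler emits a call site with descriptor (String, String)String,
        // taken from these static types, not from the declared (Object...)Object signature.
        String exact = (String) concat.invokeExact( "foo", "bar" );

        // invoke: compiled the same way, but adapts argument/return types at runtime via asType.
        Object loose = concat.invoke( (Object) "foo", (Object) "bar" );

        System.out.println( exact + " / " + loose );
    }
}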

@md-5 (Member) commented Dec 2, 2025

Can you include the benchmark?

@Outfluencer (Collaborator, Author)

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread) // Variables are shared per thread
@BenchmarkMode(Mode.Throughput) // Measure: "Operations per second"
@OutputTimeUnit(TimeUnit.MILLISECONDS) // Output: "Ops / ms"
@Fork(value = 1, warmups = 1) // Fork JVM once, 1 warmup fork (prevents interference)
@Warmup(iterations = 5, time = 1) // 5 warmup iterations, 1 second each
@Measurement(iterations = 5, time = 1) // 5 real measurement iterations, 1 second each
public class JMHBenchmark
{

    // Define our Event and Listener
    public static class TestEvent {}
    public static class TestListener {
        public void onEvent(TestEvent e) {}
    }

    // Variables we need
    private TestEvent event;
    private TestListener listener;
    private Method reflectionMethod;
    private MethodHandle methodHandle;

    @Setup
    public void setup() throws Throwable
    {
        event = new TestEvent();
        listener = new TestListener();
        
        // Setup Reflection
        reflectionMethod = TestListener.class.getMethod("onEvent", TestEvent.class);

        // Setup MethodHandle
        methodHandle = MethodHandles.lookup()
            .unreflect(reflectionMethod)
            .bindTo(listener)
            .asType(MethodType.methodType(void.class, Object.class));
    }

    // --- BENCHMARKS ---

    @Benchmark
    public void testDirectCall(Blackhole bh)
    {
        // Baseline: How fast is a normal Java call?
        listener.onEvent(event);
    }

    @Benchmark
    public void testReflection(Blackhole bh) throws Exception
    {
        // Old Way
        reflectionMethod.invoke(listener, event);
    }

    @Benchmark
    public void testMethodHandle(Blackhole bh) throws Throwable
    {
        // New Way
        methodHandle.invokeExact((Object) event);
    }
}

This is the code I used to compare direct calls vs. reflection vs. MethodHandle.
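For completeness, a minimal sketch of how such a class is typically launched with the standard JMH runner API; the main class name here is just an assumption, not part of the original benchmark:

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class JMHBenchmarkRunner
{
    public static void main(String[] args) throws Exception
    {
        // Run only the benchmarks declared in JMHBenchmark; fork, warmup and
        // measurement settings are taken from its annotations.
        Options options = new OptionsBuilder()
                .include( JMHBenchmark.class.getSimpleName() )
                .build();
        new Runner( options ).run();
    }
}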

@md-5 (Member) commented Dec 3, 2025

Is there a way to include it in the build, like JUnit?

@Outfluencer (Collaborator, Author)

I'm not sure the results will still be reliable if it's integrated into the build process.

But I can add it and have a look.

@Outfluencer (Collaborator, Author)

@md-5 done

@Janmm14 (Contributor) commented Dec 3, 2025

The benchmark should include usage of Blackhole in TestListener's onEvent.

@Outfluencer (Collaborator, Author)

like this?

@Data
public static class TestEvent
{
    private final Blackhole blackhole;
}

public static class TestListener
{
    public void onEvent(TestEvent e)
    {
        e.blackhole.consume(1);
    }
}

@Janmm14 (Contributor) commented Dec 3, 2025

like this?

@Data
public static class TestEvent
{
    private final Blackhole blackhole;
}

public static class TestListener
{
    public void onEvent(TestEvent e)
    {
        e.blackhole.consume(1);
    }
}

Does this close the gap between direct invocation and reflection/MethodHandle in the benchmark?

@Outfluencer (Collaborator, Author) commented Dec 3, 2025

Benchmark                                Mode  Cnt        Score   Error   Units
MethodHandleBenchmark.testDirectCall    thrpt       1598662.672          ops/ms
MethodHandleBenchmark.testMethodHandle  thrpt        188918.407          ops/ms
MethodHandleBenchmark.testReflection    thrpt        111542.477          ops/ms

These are the Java 25 results now.

@Janmm14 (Contributor) commented Dec 3, 2025

So apparently no relative change. Either way, it should be correct to have it in there.

@Outfluencer (Collaborator, Author) commented Dec 12, 2025

I am thinking about just creating a Consumer with a LambdaMetafactory at this point. As I remember, @Janmm14, you already did that once, but with more changes; were those other changes necessary?

These would be the results for consumers created via lambda, btw:

Benchmark                       Mode  Cnt        Score        Error   Units
JMHBenchmark.testConsumer      thrpt    5  1318978,797 ± 110813,010  ops/ms
JMHBenchmark.testDirectCall    thrpt    5  2069743,278 ± 104329,486  ops/ms
JMHBenchmark.testMethodHandle  thrpt    5   202854,376 ± 132370,068  ops/ms
JMHBenchmark.testReflection    thrpt    5   102053,756 ±  19528,771  ops/ms

Edit: I have taken a look; the extra changes were for having the correct class loaders, otherwise it won't work. So using consumers is not that easy, and I'd stick to MethodHandles.
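For reference, a minimal sketch of the LambdaMetafactory approach being discussed, reusing the TestListener/TestEvent shape from the benchmark above; this is an illustration only, and it deliberately ignores the class loader / Lookup visibility problem mentioned in the edit, which is exactly what makes the real event-bus case harder.

import java.lang.invoke.CallSite;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.util.function.BiConsumer;

public class LambdaMetafactoryDemo
{
    public static class TestEvent {}

    public static class TestListener
    {
        public void onEvent(TestEvent e) {}
    }

    @SuppressWarnings("unchecked")
    public static BiConsumer<TestListener, TestEvent> createCaller() throws Throwable
    {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle target = lookup.findVirtual( TestListener.class, "onEvent",
                MethodType.methodType( void.class, TestEvent.class ) );

        // Spin a BiConsumer whose accept(Object, Object) forwards to TestListener.onEvent(TestEvent),
        // the same mechanism javac uses for method references.
        CallSite site = LambdaMetafactory.metafactory(
                lookup,
                "accept", // functional interface method
                MethodType.methodType( BiConsumer.class ), // factory type: no captured args, returns BiConsumer
                MethodType.methodType( void.class, Object.class, Object.class ), // erased SAM signature
                target, // the listener method to call
                MethodType.methodType( void.class, TestListener.class, TestEvent.class ) ); // specialized signature

        return (BiConsumer<TestListener, TestEvent>) site.getTarget().invoke();
    }

    public static void main(String[] args) throws Throwable
    {
        createCaller().accept( new TestListener(), new TestEvent() );
    }
}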

@caoli5288 (Contributor) commented Dec 13, 2025

I am thinking about just creating a Consumer with a LambdaMetafactory at this point. As I remember, @Janmm14, you already did that once, but with more changes; were those other changes necessary?

These would be the results for consumers created via lambda, btw:

Benchmark                       Mode  Cnt        Score        Error   Units
JMHBenchmark.testConsumer      thrpt    5  1318978,797 ± 110813,010  ops/ms
JMHBenchmark.testDirectCall    thrpt    5  2069743,278 ± 104329,486  ops/ms
JMHBenchmark.testMethodHandle  thrpt    5   202854,376 ± 132370,068  ops/ms
JMHBenchmark.testReflection    thrpt    5   102053,756 ±  19528,771  ops/ms

Edit: I have taken a look; the extra changes were for having the correct class loaders, otherwise it won't work. So using consumers is not that easy, and I'd stick to MethodHandles.

It's wild; I assumed MethodHandles would be optimized to use invokedynamic, achieving performance similar to lambdas, but it seems that's not the case.

Have you tested directly looking up method handles (instead of using unreflect) or privileged-level lookup?

@Outfluencer (Collaborator, Author)


It's wild; I assumed MethodHandles would be optimized to use invokedynamic, achieving performance similar to lambdas, but it seems that's not the case.

Have you tested directly looking up method handles (instead of using unreflect) or privileged-level lookup?

I have tested it; it looks like it's exactly the same.
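For reference, a minimal sketch of the two lookup variants being compared here (unreflect vs. findVirtual), reusing the listener shape from the benchmark above; both produce a handle of type (TestListener, TestEvent)void, which is consistent with identical invocation performance:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class LookupComparison
{
    public static class TestEvent {}

    public static class TestListener
    {
        public void onEvent(TestEvent e) {}
    }

    public static void main(String[] args) throws Throwable
    {
        MethodHandles.Lookup lookup = MethodHandles.lookup();

        // Variant 1: go through java.lang.reflect.Method and unreflect it.
        Method reflected = TestListener.class.getMethod( "onEvent", TestEvent.class );
        MethodHandle viaUnreflect = lookup.unreflect( reflected );

        // Variant 2: look the handle up directly.
        MethodHandle viaFindVirtual = lookup.findVirtual( TestListener.class, "onEvent",
                MethodType.methodType( void.class, TestEvent.class ) );

        // Both report the same type, so after bindTo/asType the invocation path is identical.
        System.out.println( viaUnreflect.type() );
        System.out.println( viaFindVirtual.type() );

        TestListener listener = new TestListener();
        viaUnreflect.invokeExact( listener, new TestEvent() );
        viaFindVirtual.invokeExact( listener, new TestEvent() );
    }
}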

@Janmm14 (Contributor) commented Dec 13, 2025

It's wild; I assumed MethodHandles would be optimized to use invokedynamic, achieving performance similar to lambdas, but it seems that's not the case.

Have you tested directly looking up method handles (instead of using unreflect) or privileged-level lookup?

How a method handle is retrieved shouldn't change its invoke performance anyway.

@Janmm14 (Contributor) commented Dec 13, 2025

I am thinking about just creating a Consumer with a LambdaMetafactory at this point. As I remember, @Janmm14, you already did that once, but with more changes; were those other changes necessary?

These would be the results for consumers created via lambda, btw:

Benchmark                       Mode  Cnt        Score        Error   Units
JMHBenchmark.testConsumer      thrpt    5  1318978,797 ± 110813,010  ops/ms
JMHBenchmark.testDirectCall    thrpt    5  2069743,278 ± 104329,486  ops/ms
JMHBenchmark.testMethodHandle  thrpt    5   202854,376 ± 132370,068  ops/ms
JMHBenchmark.testReflection    thrpt    5   102053,756 ±  19528,771  ops/ms

Edit: I have taken a look; the extra changes were for having the correct class loaders, otherwise it won't work. So using consumers is not that easy, and I'd stick to MethodHandles.

I really think my latest attempt in #3114 already looks like quite a good implementation of that; yes, it is a somewhat bigger change, but still very contained.

Edit: Maybe someone can think of an even easier way of collecting the class loaders of event listeners?

Edit 2: Now that I look at that code, I think there's no need to collect the plugin class loaders; just collect the class loaders of all event listeners and it should work.
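A minimal sketch of what collecting class loaders from the listeners themselves could look like; the helper name and the idea of feeding in the registered listener instances are assumptions for illustration, not the actual code from #3114:

import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Set;

public class ListenerClassLoaders
{
    // Hypothetical helper: derive the distinct class loaders from the registered listener
    // instances, instead of tracking plugin class loaders separately.
    public static Set<ClassLoader> collect(Collection<?> listeners)
    {
        Set<ClassLoader> loaders = new LinkedHashSet<>();
        for ( Object listener : listeners )
        {
            ClassLoader loader = listener.getClass().getClassLoader();
            if ( loader != null ) // classes loaded by the bootstrap loader report null
            {
                loaders.add( loader );
            }
        }
        return loaders;
    }
}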

@caoli5288 (Contributor) commented Dec 14, 2025

I did another benchmark, calling setName(String) to simply set a property instead of an empty method, and obtained significantly different results.

Platform: EPYC 9374F + Debian 12 + JDK 25

Benchmark                                           Mode  Cnt         Score         Error  Units
MethodCallBenchmark.cachedMethodHandleCall         thrpt    5  44146986.660 ±  576589.217  ops/s
MethodCallBenchmark.cachedMethodHandleExactCall    thrpt    5  44797972.219 ± 1409320.471  ops/s
MethodCallBenchmark.cachedReflectionCall           thrpt    5  41152508.714 ± 1468606.610  ops/s
MethodCallBenchmark.cachedSpreadMethodHandleExact  thrpt    5  43465262.730 ± 1378856.429  ops/s
MethodCallBenchmark.directCall                     thrpt    5  47991941.426 ±  669672.997  ops/s

Test code:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@Fork(1)
@Warmup(iterations = 3, time = 2)
@Measurement(iterations = 5, time = 2)
public class MethodCallBenchmark {

    // Person was not shown in the original snippet; assumed to be a simple bean like this.
    public static class Person {
        private String name;

        public void setName(String name) {
            this.name = name;
        }
    }

    @Setup(Level.Invocation)
    public void setUp() {
        person = new Person();
    }

    private Person person;

    @Benchmark
    public Object directCall() {
        person.setName("Peter");
        return person;
    }

    private static Method cachedSetMethod;

    static {
        try {
            cachedSetMethod = Person.class.getMethod("setName", String.class);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    @Benchmark
    public Object cachedReflectionCall() throws Exception {
        cachedSetMethod.invoke(person, "Peter");
        return person;
    }

    private static MethodHandle cachedMethodHandle;

    static {
        try {
            MethodHandles.Lookup lookup = MethodHandles.lookup();
            cachedMethodHandle = lookup.findVirtual(Person.class, "setName",
                    MethodType.methodType(void.class, String.class));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Benchmark
    public Object cachedMethodHandleCall() throws Throwable {
        cachedMethodHandle.invoke(person, "Peter");
        return person;
    }

    private static MethodHandle cachedMethodHandleExact;

    static {
        try {
            MethodHandles.Lookup lookup = MethodHandles.lookup();
            cachedMethodHandleExact = lookup.findVirtual(Person.class, "setName",
                    MethodType.methodType(void.class, String.class));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Benchmark
    public Object cachedMethodHandleExactCall() throws Throwable {
        cachedMethodHandleExact.invokeExact(person, "Peter");
        return person;
    }

    private static MethodHandle cachedSpreadMethodHandleExact;

    static {
        try {
            MethodHandles.Lookup lookup = MethodHandles.lookup();
            cachedSpreadMethodHandleExact = lookup.findVirtual(Person.class, "setName",
                    MethodType.methodType(void.class, String.class))
                    .asSpreader(Object[].class, 1);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static final Object[] ARGS = new Object[]{"Peter"};

    @Benchmark
    public Object cachedSpreadMethodHandleExact() throws Throwable {
        cachedSpreadMethodHandleExact.invokeExact(person, ARGS);
        return person;
    }
}

@Outfluencer (Collaborator, Author)

Your results pretty much all look like the same speed, maybe because you don't use Blackholes? I guess the results cannot be correct.

@Janmm14 (Contributor) commented Dec 14, 2025

Yes, I think the JVM can optimize this much more easily, because of the constant argument and no indirection to the object itself.

A Blackhole is used on the resulting object (by returning it), but you want to measure the actual method invocation process; that's why the Blackhole should be used inside the invoked method. Maybe you could also use the @CompilerControl annotation instead.

Additionally, a setup method per invocation can be very inaccurate for such a tiny microbenchmark, which can be seen in your results by those huge ± errors.

Edit: There's also no need to use a setup method at all here.
Also, JMH results should only be compared between similar benchmarks, not against different benchmarks.
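A minimal sketch of the two mitigations mentioned here, applied to the earlier TestListener example: consuming into a Blackhole inside the invoked method, or alternatively keeping the body empty but forbidding inlining with JMH's @CompilerControl; the plain constructor replaces the Lombok @Data from the earlier snippet purely to keep this self-contained.

import org.openjdk.jmh.annotations.CompilerControl;
import org.openjdk.jmh.infra.Blackhole;

public class ListenerVariants
{
    public static class TestEvent
    {
        final Blackhole blackhole;

        public TestEvent(Blackhole blackhole)
        {
            this.blackhole = blackhole;
        }
    }

    public static class TestListener
    {
        // Variant 1: make the listener body observable, so the JIT cannot fold
        // the whole call away once it sees an effectively empty method.
        public void onEvent(TestEvent e)
        {
            e.blackhole.consume( 1 );
        }

        // Variant 2: keep the body trivial but stop the JIT from inlining it,
        // so the invocation itself is what gets measured.
        @CompilerControl(CompilerControl.Mode.DONT_INLINE)
        public void onEventNoInline(TestEvent e)
        {
        }
    }
}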

@caoli5288 (Contributor)

Yes, I think the JVM can optimize this much more easily, because of the constant argument and no indirection to the object itself.

A Blackhole is used on the resulting object (by returning it), but you want to measure the actual method invocation process; that's why the Blackhole should be used inside the invoked method. Maybe you could also use the @CompilerControl annotation instead.

Additionally, a setup method per invocation can be very inaccurate for such a tiny microbenchmark, which can be seen in your results by those huge ± errors.

Edit: There's also no need to use a setup method at all here. Also, JMH results should only be compared between similar benchmarks, not against different benchmarks.

You're right: if I remove setup(), all the methods become slower and the gap between them becomes larger. MethodHandle is about 2x as fast as reflection, and a direct call is about 5x as fast as a MethodHandle.
