Skip to content

feat(native): Experimental GraalVM Native Image Support 🚀#2382

Open
TartanLeGrand wants to merge 3 commits intojitsi:masterfrom
TartanLeGrand:master
Open

feat(native): Experimental GraalVM Native Image Support 🚀#2382
TartanLeGrand wants to merge 3 commits intojitsi:masterfrom
TartanLeGrand:master

Conversation

@TartanLeGrand
Copy link

This PR introduces experimental support for building Jitsi Videobridge (JVB) as a GraalVM Native Image. The goal is to explore significant performance optimizations in terms of startup time, memory footprint, and CPU efficiency, making JVB more suitable for auto-scaling and serverless environments.

⚠️ Disclaimer

This is a Proof of Concept (PoC).
While the results are promising, I have not tested all endpoints and features.

  • Validated: /about/health, /about/version, and /colibri/v2/conferences (conference creation).
  • Not Validated: WebSockets, complex media routing scenarios, SCTP, etc.

This contribution is intended as a starting point for the community to test and improve upon.

📊 Benchmark Results (Comparison)

We performed a stress test using k6 (50 concurrent users/sec creating conferences).

Metric JVM (Standard) Native Image (GraalVM) Improvement
Startup Time ~1000 ms ~60 ms ~17x Faster 🚀
Memory (Load) ~280 MB ~80 MB ~3.5x Reduced 💾
CPU (Load) ~48% ~2% ~23x Reduced
Latency 1.81 ms 1.10 ms 40% Faster 🏎️

🛠️ Changes

  • Added Dockerfile.native for multi-stage GraalVM build.
  • Added Dockerfile.agent to easily capture GraalVM reflection configuration.
  • Added config-full/ containing the generated reflection metadata.
  • Updated jvb/pom.xml to fix shading for the native build.
  • Updated Application.java to explicitly register JacksonFeature for JSON support in native mode.

🏃 How to Test

  1. Build the native image:
docker build -f Dockerfile.native -t jvb-native .
  1. Run the container (enabling REST API):
docker run --rm -p 8080:8080 -p 9600:9600/udp \
 -Dvideobridge.http-servers.private.host=0.0.0.0 \
 -Dvideobridge.apis.rest.enabled=true \
 jvb-native

🔮 Future Vision & Applicability

This PR serves as a proposal to modernize the Jitsi Java stack. While this implementation focuses on JVB, the same GraalVM Native Image approach is applicable to other Jitsi components (like Jicofo).

Adopting this across the ecosystem could lead to:

  • Massive cost reductions for infrastructure (lower RAM/CPU).
  • Faster auto-scaling for Jitsi Meet clusters.
  • Simplified deployment (single binary vs JVM tuning).

This work is open for discussion and refinement! 🤝

This commit introduces support for building Jitsi Videobridge as a GraalVM Native Image.

Changes:
- Added Dockerfile.native for multi-stage native build.
- Added Dockerfile.agent for easy configuration capture.
- Added GraalVM configuration in config-full/ (including reflection for Jersey, Jackson, and Colibri v2).
- Updated pom.xml to fix shading for native builds.
- Updated Application.java to register JacksonFeature.
- Added load-test.js and load-test-conferences.js for benchmarking.
- Added BENCHMARK.md detailing performance improvements (17x startup, 23x CPU reduction).

This is a Proof of Concept (PoC) validation.
@jitsi-jenkins
Copy link

Hi, thanks for your contribution!
If you haven't already done so, could you please make sure you sign our CLA (https://jitsi.org/icla for individuals and https://jitsi.org/ccla for corporations)? We would unfortunately be unable to merge your patch unless we have that piece :(.

@TartanLeGrand TartanLeGrand marked this pull request as draft January 16, 2026 18:39
@TartanLeGrand TartanLeGrand marked this pull request as ready for review January 16, 2026 18:39
@bgrozev
Copy link
Member

bgrozev commented Jan 16, 2026

This is interesting, I would definitely like to understand it better. The workload for the test is not very representative. In real world conditions most of the CPU time and memory is used for routing media, and I suspect there's much less opportunity for optimization there. Would you be able to test performance with actual media routing?

@TartanLeGrand
Copy link
Author

This is interesting, I would definitely like to understand it better. The workload for the test is not very representative. In real world conditions most of the CPU time and memory is used for routing media, and I suspect there's much less opportunity for optimization there. Would you be able to test performance with actual media routing?

I assume yes, although I’m still fairly new to Jitsi.

There is jitsi-hammer but is outdated:
https://github.com/jitsi/jitsi-hammer

Otherwise, from what I understand, it’s also possible to use jitsi-meet-torture with a Selenium Grid, combined with the loadtest client, to generate more realistic media-routing workloads.

- Add proxy-config.json for HK2 dynamic proxy registration
- Add reflect-config.json for Jersey internal class reflection
- Add ServicesResourceTransformer to maven-shade-plugin for META-INF/services merging

This fixes IllegalStateException errors when running JVB as a native image
with Jersey REST endpoints.
@TartanLeGrand
Copy link
Author

@bgrozev following your feedback, I've been able to run a more comprehensive test. Here's a summary of the work done:

🔧 Additional Fixes Applied

The initial PoC had Jersey/HK2 dependency injection issues when running under GraalVM. I've added the following fixes:

1. GraalVM Configuration Files:

  • config-full/proxy-config.json - Registers HK2 dynamic proxies (UriInfo, ResourceInfo, ProxyCtl, etc.)
  • config-full/reflect-config.json - Registers Jersey internal classes for reflection (SupplierFactoryBridge, etc.)

2. Maven Shade Plugin Fix:

  • Added ServicesResourceTransformer to properly merge META-INF/services files from Jersey/HK2 jars

✅ Media Routing Validation

I successfully ran jitsi-meet-torture tests against the native JVB image using a full Jitsi Meet stack (prosody, jicofo, web, jvb-native).

Test Environment:

  • Selenium Grid with Chrome nodes in headless mode
  • Full Jitsi Meet stack with native JVB
  • WebRTC media routing enabled

Results during 50 concurrent browser sessions:

Metric Idle Under Load
RAM 24 MB 25-29 MB
CPU 0.11% 0.11%

The native JVB handled real browser-based WebRTC connections with minimal resource increase! 🚀

⚠️ Recommendations for Future Work

  1. Reflection Metadata Management: Each Java library using reflection needs explicit GraalVM configuration. I recommend:

    • Using the tracing agent during development: java -agentlib:native-image-agent=config-output-dir=config-full
    • Documenting which libraries require what configuration
  2. Automated Load Testing: Implement CI/CD load tests using:

    • jitsi-meet-torture with Selenium Grid
    • Metrics collection via /colibri/stats endpoint
    • Performance regression detection
  3. Expand Native Builds: This approach should be applied to:

    • Jicofo - Focus component (same Jersey/HK2 stack)
    • Jigasi - Gateway component
    • Other Jitsi Java services

🔮 Vision

With proper reflection handling and automated testing, the entire Jitsi Java stack could benefit from:

  • ~90% RAM reduction (per component)
  • Sub-100ms startup times
  • Simplified Kubernetes deployments (no JVM tuning)

Happy to discuss and refine this further! 🤝

@bgrozev
Copy link
Member

bgrozev commented Jan 26, 2026

Thank you for doing additional tests!

Frankly, the results don't seem realistic. Can you monitor metrics (:8080/metrics) while the test is running to confirm that media is flowing?

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants