
Reimplement REST interface using HTTP/2, HTTP3 #371


Merged
merged 6 commits into main from frank/http2 on Jun 24, 2025
Conversation

@frankosterfeld (Contributor) commented Apr 24, 2025

Implements a new REST interface using HTTP/2, for lower latency and multiplexing of multiple requests on a single connection.
Instead of multithreading with blocking I/O, where each (long-poll) client occupies a thread, it uses event-driven non-blocking I/O and poll()s the HTTP-related sockets (server socket and client connections) together with the ZMQ sockets. For that, the server is integrated more closely with the Broker.

The HTTP/3 implementation has been tested manually using Chrome and curl (see README) but is not yet covered by unit tests, as a native HTTP/3 client implementation would require considerable effort (and probably several more kLOC of code...). As agreed upon with @RalphSteinhagen, the native client might be added later, possibly using libraries like libcurl/CPR that require less code than ngtcp2, given that the client side is not the performance-critical part.
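For illustration, here is a minimal sketch of the poll-based integration described above, assuming Linux sockets and plain libzmq; the socket setup, the nghttp2 session handling, and all names here are illustrative, not the actual OpenCMW API:

```cpp
// Minimal sketch: poll a native HTTP listening socket alongside a ZMQ
// socket in one zmq_poll() loop (names and setup are illustrative).
#include <zmq.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    void *ctx       = zmq_ctx_new();
    void *zmqRouter = zmq_socket(ctx, ZMQ_ROUTER); // Broker-side ZMQ socket
    zmq_bind(zmqRouter, "tcp://127.0.0.1:5555");

    // Plain non-blocking TCP listening socket for the HTTP/2 server
    int httpFd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(8080);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(httpFd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    listen(httpFd, SOMAXCONN);

    // zmq_poll() accepts a raw file descriptor when .socket is nullptr,
    // so ZMQ traffic and HTTP connections share a single event loop.
    zmq_pollitem_t items[] = {
        { zmqRouter, 0,      ZMQ_POLLIN, 0 },
        { nullptr,   httpFd, ZMQ_POLLIN, 0 },
    };

    while (true) {
        zmq_poll(items, 2, -1); // block until either source is readable
        if (items[0].revents & ZMQ_POLLIN) {
            // handle majordomo message on the ZMQ socket ...
        }
        if (items[1].revents & ZMQ_POLLIN) {
            int clientFd = accept(httpFd, nullptr, nullptr);
            // hand clientFd over to the HTTP/2 session handling ...
            close(clientFd);
        }
    }
}
```

The key point is that zmq_pollitem_t takes a raw file descriptor when its socket member is null, so HTTP connections and ZMQ traffic can share one single-threaded event loop.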

@frankosterfeld changed the title Frank/http2 → Reimplement REST interface using HTTP/2 Apr 24, 2025
@wirew0rm linked an issue Apr 24, 2025 that may be closed by this pull request
Instead of the httplib-based HTTP server using blocking I/O in a
thread per client, implement an nghttp2-based HTTP/2 server using
single-threaded non-blocking I/O, directly integrated into the Broker's
zmq_poll() loop.

Signed-off-by: Frank Osterfeld <[email protected]>
Implements HTTP/3 support in the server, following the implementation
in ngtcp2's server.cc example. Much of the function logic and structure
from there has been preserved for easier debugging and comparison with
the example if needed.

Signed-off-by: Frank Osterfeld <[email protected]>
Use CRTP to prepare for an Http3ClientSession (not implemented yet).

Signed-off-by: Frank Osterfeld <[email protected]>
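As a rough illustration of the CRTP pattern used here (class and member names are hypothetical, not the actual code):

```cpp
// Illustrative CRTP sketch: shared session logic lives in the base
// template, while the derived class supplies the protocol-specific send.
// An Http3ClientSession could later slot in the same way.
#include <string_view>
#include <cstdio>

template <typename Derived>
struct ClientSessionBase {
    void submitRequest(std::string_view path) {
        // ... shared bookkeeping (stream IDs, headers, retries) ...
        static_cast<Derived *>(this)->sendFrames(path); // no virtual dispatch
    }
};

struct Http2ClientSession : ClientSessionBase<Http2ClientSession> {
    void sendFrames(std::string_view path) {
        std::printf("HTTP/2 request for %.*s\n", int(path.size()), path.data());
    }
};

// struct Http3ClientSession : ClientSessionBase<Http3ClientSession> { ... };

int main() {
    Http2ClientSession session;
    session.submitRequest("/dashboard");
}
```

CRTP gives the shared base compile-time dispatch into the concrete session type, avoiding virtual calls in per-frame code.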
HTTP GET maps to majordomo Get, HTTP PUT/POST maps to majordomo Set,
and GET with a "LongPollingIdx" query parameter retrieves cached NOTIFY
messages; we don't need more at the moment.

Signed-off-by: Frank Osterfeld <[email protected]>
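A hedged sketch of that mapping (the enum, request type, and helper below are illustrative, not the OpenCMW API):

```cpp
// Illustrative HTTP-to-majordomo command mapping, per the commit message.
#include <optional>
#include <string>

enum class MdpCommand { Get, Set, LongPoll };

struct HttpRequest {
    std::string method;                        // "GET", "PUT", "POST", ...
    std::optional<std::string> longPollingIdx; // "LongPollingIdx" query param
};

std::optional<MdpCommand> toMajordomo(const HttpRequest &req) {
    if (req.method == "GET") {
        // GET with a LongPollingIdx parameter fetches cached NOTIFY messages
        return req.longPollingIdx ? MdpCommand::LongPoll : MdpCommand::Get;
    }
    if (req.method == "PUT" || req.method == "POST") {
        return MdpCommand::Set;
    }
    return std::nullopt; // nothing else is needed at the moment
}
```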
Replace hardcoded paths with CMake variables. Since these paths are
also used in the upstream project, compilation would otherwise fail if
they have a different value than what was hardcoded (lib vs lib64).

Signed-off-by: Alexander Krimm <[email protected]>

Quality Gate failed

Failed conditions
36 Security Hotspots
32.4% Coverage on New Code (required ≥ 80%)
E Reliability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud

@wirew0rm (Member) left a comment

Thanks for this big improvement. We did some real-world testing and it resolved a lot of long-standing issues, thanks a lot 👍
There are still some small issues, and we have to rethink how we handle dependencies, but this can be tackled in individual small PRs.

@wirew0rm merged commit 0f7903e into main Jun 24, 2025
8 of 10 checks passed
@wirew0rm deleted the frank/http2 branch June 24, 2025 16:35
@wirew0rm restored the frank/http2 branch June 24, 2025 16:35
wirew0rm added a commit that referenced this pull request Jul 1, 2025
PR #371 introduced that long-polling subscriptions always fetch the
next n (default: 3) long-polling indices, to reduce the latency from a
round trip to a single trip.
For the happy path this works well, but in case of errors the logic has
to be improved to correctly handle responses arriving out of order.
With the current logic, out-of-order responses re-fetch already
received updates and lead to avalanches of requests, as the re-fetched
responses will in turn also arrive out of order and re-fetch old data.
Before increasing this value again, it should be ensured that responses
are either re-sorted on arrival or that older updates are correctly
dropped without scheduling new requests.

This change restores the behaviour from before those changes, but
keeps the logic in place so it can be re-enabled once the error
handling is fixed.

Signed-off-by: Alexander Krimm <[email protected]>
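For illustration, one way the "drop older updates" option mentioned above could look (a minimal sketch; names and structure are hypothetical):

```cpp
// Illustrative handling of out-of-order long-poll responses: discard
// anything older than the next expected index instead of re-fetching it,
// so stale responses cannot trigger an avalanche of new requests.
#include <cstdint>
#include <cstdio>

struct LongPollClient {
    std::uint64_t nextExpectedIdx = 0;

    void onResponse(std::uint64_t idx, const char *payload) {
        if (idx < nextExpectedIdx) {
            // stale or out-of-order update: drop it without scheduling a
            // new request
            return;
        }
        nextExpectedIdx = idx + 1;
        std::printf("update %llu: %s\n",
                    static_cast<unsigned long long>(idx), payload);
    }
};
```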
Development

Successfully merging this pull request may close these issues.

[10SP,14SP] OpenCMW: Rework REST/ZeroMQ interface
4 participants