Support json serialization into std::ostream and Asio stream #10414
base: master
Conversation
Force-pushed from e10ebdd to edc4112.
Initially, I was thinking of adding another overload that allows writing the JSON directly into an Asio stream, but since the json library we use doesn't provide such an implementation, we'd have to implement quite a lot of unusual things.
Actually, you'd just need a yc-aware streambuf (for ostream) which writes to a stream of your choice.
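For illustration, here is a minimal sketch of such a yield_context-aware ("yc-aware") streambuf, assuming plain Boost.Asio; the class name `YcStreamBuf` and its wiring are made up for this example and are not part of the PR:

```cpp
// Hypothetical sketch: a std::streambuf that forwards everything it receives to a
// Boost.Asio write stream using the current coroutine's yield_context.
#include <boost/asio/buffer.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/write.hpp>
#include <streambuf>

template<class AsyncWriteStream>
class YcStreamBuf : public std::streambuf
{
public:
	YcStreamBuf(AsyncWriteStream& stream, boost::asio::yield_context yc)
		: m_Stream(stream), m_Yc(yc) { }

protected:
	// Called for single characters; a real implementation would buffer instead
	// of issuing one network write per character.
	int_type overflow(int_type ch) override
	{
		if (ch != traits_type::eof()) {
			char c = traits_type::to_char_type(ch);
			boost::asio::async_write(m_Stream, boost::asio::buffer(&c, 1), m_Yc);
		}
		return ch;
	}

	// Called for longer runs of characters, e.g. via std::ostream::write().
	std::streamsize xsputn(const char* s, std::streamsize n) override
	{
		boost::asio::async_write(m_Stream, boost::asio::buffer(s, static_cast<std::size_t>(n)), m_Yc);
		return n;
	}

private:
	AsyncWriteStream& m_Stream;
	boost::asio::yield_context m_Yc;
};
```

An `std::ostream` constructed over such a streambuf could then be handed to the library's existing ostream support, so only the buffering layer, not the serializer, would need to know about coroutines.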
So for my understanding: the referenced code is icinga2/third-party/nlohmann_json/json.hpp, lines 23446 to 23447 and lines 17421 to 17467 (at 520aed6). Now what does the [...] In other words: doesn't this just add a new copy of the JSON to potentially save a copy in another place? I don't really see how [...]
> We don't want to send them to the network individually.

But that sounds exactly like the situation buffered streams are made for.
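A rough sketch of that idea, assuming plain Boost.Asio (the function name and payload are illustrative only): collect the serialized output in a `boost::asio::streambuf` and flush it to the socket with a single `async_write` instead of many small writes.

```cpp
#include <boost/asio/spawn.hpp>
#include <boost/asio/streambuf.hpp>
#include <boost/asio/write.hpp>
#include <ostream>

template<class AsyncWriteStream>
void WriteBuffered(AsyncWriteStream& stream, boost::asio::yield_context yc)
{
	boost::asio::streambuf buf;
	std::ostream os(&buf);

	// Whatever the JSON serializer produces would go here via the ostream.
	os << "{\"event\":\"example\"}\n";

	// One network write for the whole buffered payload.
	boost::asio::async_write(stream, buf, yc);
}
```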
Well, I guess you already answered your first question. Yes, it would hold a copy of the given [...]

Hm, then I will add that overload once the other questions are answered regarding how the [...]
Not entirely. In the current implementation, everything ends up in m_Result, which is written by just three methods (lines 458 to 476 in 520aed6).
(AppendChar could even call AppendChars, making them only two.) Simple enough to overload, I guess.
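A hypothetical sketch of that "simple enough to overload" idea, with illustrative names (`JsonSink`, `StringSink`) that are not taken from the actual code: route all output through `AppendChars()`, let `AppendChar()` delegate to it, and swap in different sinks for a `String`, an `std::ostream` or an Asio stream.

```cpp
#include <cstddef>
#include <string>

class JsonSink
{
public:
	virtual ~JsonSink() = default;

	// The single method every concrete sink has to implement.
	virtual void AppendChars(const char* chars, std::size_t count) = 0;

	// AppendChar can be expressed via AppendChars, leaving only one override.
	void AppendChar(char c) { AppendChars(&c, 1); }
};

// In-memory sink, corresponding to today's m_Result behaviour.
class StringSink final : public JsonSink
{
public:
	void AppendChars(const char* chars, std::size_t count) override
	{
		m_Result.append(chars, count);
	}

	const std::string& GetResult() const { return m_Result; }

private:
	std::string m_Result;
};
```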
Alternatively, since we're shipping the json library in this repo, we could just directly patch it to support our [...]

Edit: Sorry, they don't use a patched version of the library but a newer version than the one we currently use.
Well, isn't that just the "or [...] like how it's done previously" part?
I'm not sure what to look for in the code search you've linked. Forking the library (that's basically what you're suggesting) is possible, but sounds like something I'd rather try to avoid.
I've already updated my previous comment, and actually they don't use a patched version of the library, but they convert their custom type to the [...]
No, it's an implementation detail (to visualize the low effort).
Force-pushed from 5354178 to d088a16.
I think I've now implemented all the requested points from #10408 and also updated the PR description, including listing two remaining open questions. So please have a look again!
auto end(arr->End());
// Release the object lock if we're writing to an Asio stream, i.e. every write operation
// is asynchronous which may cause the coroutine to yield and later resume on another thread.
RELEASE_OLOCK_IF_ASYNC_WRITER(m_Writer, olock);
You can't just release the lock...
for (auto it(begin); it != end; ++it) {
	if (it != begin) {
		m_Writer->write_characters(",\n", 2);
	}
	m_Writer->write_characters(m_IndentStr.c_str(), newIndent);
	Encode(*it, newIndent);
... and do stuff with the iterator then!
Consider using GetLength() and Get() instead of the iterator or pinning the coroutine to the thread or something (idk).
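As a rough sketch of the `GetLength()`/`Get()` variant, assuming Icinga 2's `Array` API and reusing the `m_Writer`/`m_IndentStr`/`Encode()` names from the hunk above purely for illustration: no iterator or object lock has to stay valid across an asynchronous write that may suspend the coroutine.

```cpp
// Sketch only -- not the PR's implementation. Get() returns the element by value,
// which is exactly the copy criticized further down in this thread.
void EncodeArrayByIndex(const Array::Ptr& arr, unsigned int newIndent)
{
	auto length = arr->GetLength(); // takes the object lock only for this call

	for (decltype(length) i = 0; i < length; ++i) {
		if (i > 0) {
			m_Writer->write_characters(",\n", 2);
		}

		m_Writer->write_characters(m_IndentStr.c_str(), newIndent);

		// No lock and no iterator are held here, so yielding inside the
		// asynchronous write cannot invalidate anything.
		Encode(arr->Get(i), newIndent);
	}
}
```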
> ... and do stuff with the iterator then!
Why not? Also, I've already listed this as a TBD question in the PR description:
> There are some open questions left regarding this though:
> - Since the `AsioStreamAdapter<>` class asynchronously writes each character to the stream, potentially causing the coroutine to yield at any time and resume later on a different thread, how do we handle the object lock needed for container serialization like `Dictionary` or `Array`? I mean, I don't think that the objects passed to the encoder will ever be accessed/modified by another thread, since we always use only temporary objects, but still, we need to make sure that we don't end up with some undefined behaviour (yes, it may not necessarily be a deadlock). The current implementation caches the container begin and end iterators and releases the lock explicitly before writing the JSON tokens to the stream, i.e. the lock is enforced when one wants to access the iterators, so the lock is released before even starting to traverse the container.
> Consider using GetLength() and Get() instead of the iterator

How does that make any difference? In fact, using `GetLength()` and `Get()` would actually worsen the performance, since every `Array::Get()` call copies the value, which I want to avoid in any case.
> pinning the coroutine to the thread

And how should I do that?
> I've already listed this as a TBD question in the PR description:
>
> There are some open questions left regarding this though:
> - Since the `AsioStreamAdapter<>` class asynchronously writes each character to the stream, potentially causing the coroutine to yield at any time and resume later on a different thread, how do we handle the object lock needed for container serialization like `Dictionary` or `Array`?

> we need to make sure that we don't end up with some undefined behaviours (yes, it may not necessarily be a deadlock).

Oh, I've overlooked this 🐘.
> > pinning the coroutine to the thread
>
> And how should I do that?

This was just an idea.
It's just a wrapper around the `JsonEncoder` class to simplify its usage.
Instead of serializing the events into a JSON `String` and transforming it into an Asio buffer, we can directly write the JSON into an Asio stream using the new `JsonEncoder`.
Force-pushed from d088a16 to 6afd670.
Also, this whole adventure is only suitable for /v1/events, unless the other API endpoints can live without a pre-known content length. But I don't know whether Boost Beast supports chunked encoding or we have to live with connections closed after each request.
Chunked encoding or anything else is out of this PR's scope! So, if you want to help, please focus on answering the remaining open questions from the PR description instead of bringing up somewhat unrelated topics.
This PR kinda rewrites the whole JSON serialization process, now allowing the JSON tokens to be written directly to either `std::ostream` or an Asio stream. For the former case, the nlohmann/json library already provides full support, while the latter is new and fully implemented on our own. It satisfies the `nlohmann::detail::output_adapter_protocol<>` interface and can be used like any other output adapter for the nlohmann/json library. Please refer to the inline documentation of the individual classes for more details. You can also find an example of how to use the new encoder with an Asio stream in 5354178, and that is basically how we're going to use it for the `/v1/objects` endpoint. I know that @julianbrost didn't want us to perform all the JSON serialization ourselves, but the cost of not doing so is just even higher, since we would have to transform our Value types into `nlohmann::json` objects to be able to use the existing JSON serialization of the library. I was just too naive to think that my previously proposed solution would actually bring any improvement, but in fact it was even worse.

There are some open questions left regarding this though:

- Since the `AsioStreamAdapter<>` class asynchronously writes each character to the stream, potentially causing the coroutine to yield at any time and resume later on a different thread, how do we handle the object lock needed for container serialization like `Dictionary` or `Array`? I mean, I don't think that the objects passed to the encoder will ever be accessed/modified by another thread, since we always use only temporary objects, but still, we need to make sure that we don't end up with some undefined behaviour (yes, it may not necessarily be a deadlock). The current implementation caches the container begin and end iterators and releases the lock explicitly before writing the JSON tokens to the stream, i.e. the lock is enforced when one wants to access the iterators, so the lock is released before even starting to traverse the container.
- Since `JsonEncoder` only supports writing to a stream, there's still the previous helper function `JsonEncode()`, which wraps the encoder and provides a simple interface if required.

Test cases for 5354178:
fixes #10408
closes #10400
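For readers unfamiliar with the library internals, this is roughly what "satisfies the `nlohmann::detail::output_adapter_protocol<>` interface" means in practice. The adapter below is a minimal illustrative sketch (the class name, constructor and include path are not the PR's actual code; the repo ships the header as third-party/nlohmann_json/json.hpp), forwarding everything the serializer emits to an Asio write stream via the coroutine's `yield_context`:

```cpp
#include <boost/asio/buffer.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/write.hpp>
#include <nlohmann/json.hpp>
#include <cstddef>

template<class AsyncWriteStream>
class AsioWriteStreamAdapter : public nlohmann::detail::output_adapter_protocol<char>
{
public:
	AsioWriteStreamAdapter(AsyncWriteStream& stream, boost::asio::yield_context yc)
		: m_Stream(stream), m_Yc(yc) { }

	void write_character(char c) override
	{
		write_characters(&c, 1);
	}

	void write_characters(const char* s, std::size_t length) override
	{
		// May suspend the coroutine and resume it on another thread, which is
		// exactly why the object-lock question above matters.
		boost::asio::async_write(m_Stream, boost::asio::buffer(s, length), m_Yc);
	}

private:
	AsyncWriteStream& m_Stream;
	boost::asio::yield_context m_Yc;
};
```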