Description
`AbstractJackson2Decoder.decodeToMono` currently uses `DataBufferUtils.join` to join all `DataBuffer`s, converts the result to an input stream, and then uses Jackson's blocking parser. This approach holds every `DataBuffer` of the payload in memory at the same time, which is not memory efficient for large payloads.
By contrast, `AbstractJackson2Decoder.decode` uses Jackson's non-blocking parser, which does not require all buffers to be in memory at the same time: each buffer can be processed individually and released as it is parsed.
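For context, the buffer-at-a-time processing described above can be sketched with Jackson's non-blocking API directly, outside of Spring. This is a minimal illustration, not the decoder's actual code; the class name, chunk boundaries, and sample JSON are made up:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.core.async.ByteArrayFeeder;
import java.nio.charset.StandardCharsets;

public class NonBlockingParseSketch {

    // Parse a payload that arrives as separate byte chunks,
    // draining whatever tokens are available after each chunk.
    static int countTokens(byte[][] chunks) throws Exception {
        JsonParser parser = new JsonFactory().createNonBlockingByteArrayParser();
        ByteArrayFeeder feeder = (ByteArrayFeeder) parser.getNonBlockingInputFeeder();

        int tokens = 0;
        for (byte[] chunk : chunks) {
            // Feed one buffer; after this call the chunk's bytes have been
            // consumed, so the buffer could be released immediately.
            feeder.feedInput(chunk, 0, chunk.length);
            JsonToken token;
            // NOT_AVAILABLE means the parser needs more input to continue.
            while ((token = parser.nextToken()) != JsonToken.NOT_AVAILABLE
                    && token != null) {
                tokens++;
            }
        }
        feeder.endOfInput();
        parser.close();
        return tokens;
    }

    public static void main(String[] args) throws Exception {
        // Simulate one JSON object split across two buffers,
        // with the split landing mid-string.
        byte[][] chunks = {
            "{\"name\":\"al".getBytes(StandardCharsets.UTF_8),
            "ice\",\"age\":30}".getBytes(StandardCharsets.UTF_8)
        };
        System.out.println("tokens parsed: " + countTokens(chunks));
    }
}
```

The point of the sketch is that parsing makes progress per chunk, so no single structure ever has to hold the whole payload.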
For larger payloads, I think it would be better if `decodeToMono` also used Jackson's non-blocking parser. This would allow each data buffer to be processed and released individually, rather than the entire payload having to be read into memory.
Using the non-blocking parser would allow `maxInMemorySize` to be increased to accommodate occasional large payloads, and would let the server read them more memory-efficiently.
It would also bring the behavior of JSON parsing in WebFlux more in line with WebMVC, whose `AbstractJackson2HttpMessageConverter` does not read the entire payload into memory and does not enforce a `maxInMemorySize`.
FWIW, I started looking into this after hitting a `DataBufferLimitException` for some occasional large payloads. I'd like to increase `maxInMemorySize`, but then I noticed the consequence of loading everything into memory at the same time, which was a bit surprising since WebMVC has no such limit.
What are your thoughts?