
Conversation

@sbordet
Contributor

@sbordet sbordet commented Nov 15, 2024

* Fixed `ProxyWriter.chunks` locking.
* Made `writeProxyResponseContent()` protected so it can be overridden to flush the response content.
* Added test case.

Signed-off-by: Simone Bordet <[email protected]>
@sbordet sbordet requested a review from lorban November 15, 2024 17:43
@sbordet sbordet linked an issue Nov 15, 2024 that may be closed by this pull request
Signed-off-by: Simone Bordet <[email protected]>

@iiliev2 iiliev2 left a comment


What about the fields

        private Chunk chunk;
        private boolean writePending;

Shouldn't those be volatile?

        if (_log.isDebugEnabled())
            _log.debug("{} proxying content to downstream: {} bytes {}", getRequestId(clientRequest), content.remaining(), callback);
        return chunks.offer(new Chunk(content, callback));
        try (AutoLock ignored = lock.lock())

What is the benefit of this solution, rather than just using something like `ConcurrentLinkedDeque` for the chunks?

        try (AutoLock ignored = lock.lock())
        {
            this.chunk = chunk = chunks.poll();
        }
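For illustration, the value of the lock here can be sketched as follows (hypothetical code, using `ReentrantLock` in place of Jetty's `AutoLock` and `String` in place of the real `Chunk`): a concurrent collection would make each individual call thread-safe, but not the compound poll-and-publish step.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;

// A sketch (not the PR's code). The point: a concurrent queue would make
// each call thread-safe on its own, but the compound step "poll the queue
// AND publish the result into a field" still needs the lock to be atomic.
class ChunkWriter
{
    private final ReentrantLock lock = new ReentrantLock();
    private final Queue<String> chunks = new ArrayDeque<>();
    private String chunk; // Guarded by lock.

    void offer(String c)
    {
        lock.lock();
        try
        {
            chunks.offer(c);
        }
        finally
        {
            lock.unlock();
        }
    }

    String pollNext()
    {
        lock.lock();
        try
        {
            // Dequeue and publish atomically; the lock also guarantees
            // that 'chunk' is visible to other threads taking the lock.
            return this.chunk = chunks.poll();
        }
        finally
        {
            lock.unlock();
        }
    }
}
```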

Shouldn't the lock be released after `writeProxyResponseContent()`? I am not sure, but it seems that two threads may each poll a chunk (i.e. there were two chunks in the queue) and then race and write the bytes out of order.
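For context, the usual pattern that prevents two threads from writing out of order (a hypothetical sketch, not this PR's actual code) is a write-pending flag managed under the lock, so only one thread at a time drains the queue:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the usual fix for this kind of race: only one
// thread at a time is allowed to drain the queue, tracked by a
// writePending-style flag that is read and written under the lock, so
// chunks are always written in the order they were queued.
class OrderedWriter
{
    private final ReentrantLock lock = new ReentrantLock();
    private final Queue<String> chunks = new ArrayDeque<>();
    private boolean writePending; // Guarded by lock.
    private final StringBuilder out = new StringBuilder(); // Only touched by the draining thread.

    void write(String chunk)
    {
        lock.lock();
        try
        {
            chunks.offer(chunk);
            if (writePending)
                return; // Another thread is already draining; it will pick this chunk up.
            writePending = true;
        }
        finally
        {
            lock.unlock();
        }
        drain();
    }

    private void drain()
    {
        while (true)
        {
            String chunk;
            lock.lock();
            try
            {
                chunk = chunks.poll();
                if (chunk == null)
                {
                    writePending = false; // Done; the next write() will drain again.
                    return;
                }
            }
            finally
            {
                lock.unlock();
            }
            out.append(chunk); // The actual write happens outside the lock.
        }
    }

    String output()
    {
        return out.toString();
    }
}
```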

        protected class ProxyWriter implements WriteListener
        {
            private final AutoLock lock = new AutoLock();
            private final Queue<Chunk> chunks = new ArrayDeque<>();

`ProxyResponseListener.buffers` also looks like a collection that needs to be thread-safe.

@sbordet
Contributor Author

sbordet commented Nov 18, 2024

@iiliev2 we have reworked ProxyWriter to fix concurrency issues.

`ProxyResponseListener.buffers` does not need locking because it is guaranteed that `Response.Listener` callbacks are invoked serially.

@iiliev2

iiliev2 commented Nov 18, 2024

There is nothing in `Response` or any of its response listeners that mentions this. What guarantees that they are invoked serially, and what does that actually mean? Even if there is no thread concurrency, the collection still needs to be thread-safe if it is modified and read by different threads. If there is something outside of these interfaces that synchronizes the threads, it must be specified in the javadoc.
Is this why the following fields don't need to be volatile either?

        private boolean hasContent;
        private long contentLength;
        private long length;
        private Response response;

@sbordet
Contributor Author

sbordet commented Nov 18, 2024

Even if there is no thread concurrency, the collection still needs to be threadsafe if modified/read by different threads.

No, it does not. The serialization mechanism provides enough ordering to guarantee visibility.

Regarding the fact that listeners are invoked serially, yes we can improve the documentation, but it is quite natural that events flow sequentially, as they are read from the network.
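The serial-invocation argument can be sketched as follows (illustrative only; `SerializedListener` is a hypothetical name and Jetty's real serialization mechanism is internal): when every event handler runs through the same serializing executor, the executor's task handoff supplies the happens-before edges that make plain, non-volatile fields safe.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: when every event handler runs through the same
// single-threaded executor, the executor's task handoff establishes
// happens-before edges, so plain (non-volatile) fields written by one
// callback are visible to the next.
class SerializedListener
{
    // Plain fields: safe because they are only accessed from tasks
    // submitted to the same serializing executor.
    private long contentLength;
    private boolean hasContent;

    void onHeaders(long length)
    {
        contentLength = length;
    }

    void onContent()
    {
        hasContent = true;
    }

    long contentLength()
    {
        return contentLength;
    }

    boolean hasContent()
    {
        return hasContent;
    }

    // Runs two "events" serially and returns the listener for inspection.
    static SerializedListener runScenario() throws Exception
    {
        ExecutorService serializer = Executors.newSingleThreadExecutor();
        SerializedListener listener = new SerializedListener();
        serializer.submit(() -> listener.onHeaders(42)).get();
        serializer.submit(() -> listener.onContent()).get();
        serializer.shutdown();
        return listener;
    }
}
```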

Contributor

@lorban lorban left a comment


There are a few minor improvements that could help readability/maintainability; otherwise LGTM.

}

-    private static class Chunk
+    private record Chunk(ByteBuffer buffer, Callback callback)

`Chunk` is quite a loaded term now; maybe this record should be renamed to something like `BufferWithCallback` to avoid confusion with `Content.Chunk`?
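The suggested rename might look like this (a hypothetical sketch; the `Callback` interface below is a local stand-in for Jetty's `org.eclipse.jetty.util.Callback` so the example compiles on its own):

```java
import java.nio.ByteBuffer;

// Stand-in for Jetty's org.eclipse.jetty.util.Callback, just so this
// example is self-contained.
interface Callback
{
    void succeeded();

    void failed(Throwable failure);
}

// Same shape as the PR's Chunk record, under the reviewer's suggested
// name to avoid confusion with Jetty's Content.Chunk.
record BufferWithCallback(ByteBuffer buffer, Callback callback)
{
    // Small factory used only for demonstration.
    static BufferWithCallback example()
    {
        return new BufferWithCallback(ByteBuffer.wrap(new byte[]{1, 2, 3}), new Callback()
        {
            @Override
            public void succeeded()
            {
            }

            @Override
            public void failed(Throwable failure)
            {
            }
        });
    }
}
```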

Signed-off-by: Simone Bordet <[email protected]>
@sbordet sbordet requested a review from lorban November 25, 2024 09:40
Contributor

@lorban lorban left a comment


I'd appreciate the extra nullings and assertions, but this LGTM the way it is.

@sbordet sbordet merged commit f52854b into jetty-12.0.x Nov 25, 2024
7 of 9 checks passed
@sbordet sbordet deleted the fix/jetty-12.0.x/12323/amms-flush branch November 25, 2024 14:54


Development

Successfully merging this pull request may close these issues.

AsyncMiddleManServlet response flushing

4 participants