
Add BasicPublishAsync overloads that accept IMemoryOwner<byte> as the body #1913

Open
PauloHMattos wants to merge 21 commits into rabbitmq:main from PauloHMattos:publish-imemoryowner

Conversation

@PauloHMattos

Proposed Changes

Alternative to #1912. If the maintainers think this is the best approach, I will improve this PR body.

Types of Changes

What types of changes does your code introduce to this project?
Put an x in the boxes that apply

  • Bug fix (non-breaking change which fixes issue #NNNN)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause an observable behavior change in existing systems)
  • Documentation improvements (corrections, new content, etc)
  • Cosmetic change (whitespace, formatting, etc)

Checklist

Put an x in the boxes that apply. You can also fill these out after creating
the PR. If you're unsure about any of them, don't hesitate to ask on the
mailing list. We're here to help! This is simply a reminder of what we are
going to look for before merging your code.

  • I have read the CONTRIBUTING.md document
  • I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
  • All tests pass locally with my changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have added necessary documentation (if appropriate)
  • Any dependent changes have been merged and published in related repositories

Further Comments

Benchmark

Official version (Body size: 1 MB, Iterations: 500, Tasks: 16, Non-copying: False, Startup memory: 16 MB)

--- Start ---
Memory usage: 17 MB
Memory usage: 19 MB
Memory usage: 19 MB
Memory usage: 21 MB
Memory usage: 21 MB
Memory usage: 2230 MB
Memory usage: 2236 MB
Memory usage: 2248 MB
Memory usage: 2249 MB
Memory usage: 2249 MB
Memory usage: 2249 MB
Memory usage: 2250 MB
Memory usage: 2249 MB
Memory usage: 2249 MB

--- Results ---
Avg time : 10948 ms
Min time : 8910 ms
Max time : 11633 ms
Memory : 2252 MB
Queue length : 8000 / 8000
Valid messages : 100 / 100 (first 100 of 8000)

PR + ReadOnlyMemory (Body size: 1 MB, Iterations: 500, Tasks: 16, Non-copying: False, Startup memory: 16 MB)

--- Start ---
Memory usage: 17 MB
Memory usage: 19 MB
Memory usage: 19 MB
Memory usage: 22 MB
Memory usage: 22 MB
Memory usage: 2119 MB
Memory usage: 2129 MB
Memory usage: 2133 MB
Memory usage: 2134 MB
Memory usage: 2135 MB
Memory usage: 2135 MB
Memory usage: 1782 MB

--- Results ---
Avg time : 8835 ms
Min time : 8615 ms
Max time : 9216 ms
Memory : 1782 MB
Queue length : 8000 / 8000
Valid messages : 100 / 100 (first 100 of 8000)

PR + IMemoryOwner, forked version (Body size: 1 MB, Iterations: 500, Tasks: 16, Non-copying: True, Startup memory: 16 MB)

--- Start ---
Memory usage: 18 MB
Memory usage: 19 MB
Memory usage: 20 MB
Memory usage: 22 MB
Memory usage: 23 MB
Memory usage: 35 MB
Memory usage: 37 MB
Memory usage: 37 MB
Memory usage: 37 MB
Memory usage: 38 MB
Memory usage: 40 MB
Memory usage: 41 MB

--- Results ---
Avg time : 8940 ms
Min time : 8563 ms
Max time : 9373 ms
Memory : 41 MB
Queue length : 8000 / 8000
Valid messages : 100 / 100 (first 100 of 8000)

@danielmarbach
Collaborator

I think this is very clean and superior to the other PR. These are some of my thoughts / opinions (I'm aware some of them are highly subjective ;) )

  1. No configuration flag. UseBackgroundFrameWriter is a global behavioral switch that creates two divergent code paths to maintain and test forever. Users have to know it exists, and for mixed workloads things get confusing and complicated.
  2. Explicit ownership semantics. The IMemoryOwner overloads make the contract obvious in the type system; the caller transfers ownership and gets zero-copy in return. No implicit mode switch, no surprise behavior.
  3. No class hierarchy churn. Introducing an abstract base class + two subclasses for SocketFrameHandler is a significant structural change for what is essentially an optimization concern, with long-lasting implications for the maintenance of this client.
  4. The ROM copy remains as-is. PR #1912 ("Support for zero-copying and alloc free publish") uses an inline path that achieves zero-copy for ReadOnlyMemory<byte> via synchronous writes serialized through a SemaphoreSlim, which trades the copy cost for contention. High-throughput concurrent publishers might see semaphore pressure. The IMemoryOwner<byte> overloads don't have this problem, which seems a reasonable tradeoff while keeping the current ordering behavior.
  5. The Channel<OutgoingFrame> is a total-order point. Every frame type (content frames, heartbeats, channel.close, connection.close) flows through the same FIFO queue. The write order is precisely the enqueue order, which is exactly the order in which callers finished their pre-write work: rate limiting, flow control, publisher confirmation sequence number assignment. The inline semaphore might break that in non-obvious ways: a publisher can pass the flow control gate, get suspended waiting on the semaphore, and a later publisher slips through first. Publisher confirmations are particularly sensitive to this: publisher A gets seq=1, publisher B gets seq=2, but B wins the semaphore and hits the wire first, so the broker acks in a different order than the tracking data expects. With the background channel, none of this can happen. All the middleware is naturally composable because there is one ordered queue and everything goes through it, which is the behavior the client exhibits today.
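The total-order property in point 5 can be sketched with System.Threading.Channels. This is a minimal illustration, not the client's actual types: `OutgoingItem` and `BackgroundWriter` are hypothetical names standing in for `OutgoingFrame` and the write loop.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

record OutgoingItem(string Kind, int Sequence);

class BackgroundWriter
{
    // One unbounded FIFO queue with a single reader: the wire order
    // is exactly the enqueue order, for every frame kind.
    private readonly Channel<OutgoingItem> _queue =
        Channel.CreateUnbounded<OutgoingItem>(
            new UnboundedChannelOptions { SingleReader = true });

    // Callers finish their pre-write work (flow control, confirmation
    // sequence number assignment) before enqueueing, so the queue
    // order reflects that order.
    public ValueTask EnqueueAsync(OutgoingItem item) =>
        _queue.Writer.WriteAsync(item);

    public async Task WriteLoopAsync()
    {
        await foreach (OutgoingItem item in _queue.Reader.ReadAllAsync())
        {
            // Single writer drains in FIFO order; no semaphore race
            // can reorder seq=1 behind seq=2 here.
            Console.WriteLine($"wire: {item.Kind} seq={item.Sequence}");
        }
    }

    public void Complete() => _queue.Writer.Complete();
}
```

With the inline-semaphore design, by contrast, two publishers that already hold sequence numbers can acquire the semaphore in either order.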

The current branch gives callers who want zero-copy a clean opt-in path with explicit ownership semantics, without touching the default behavior or adding runtime configuration.

I have pushed a few small tweaks here main...danielmarbach:rabbitmq-dotnet-client:publish-imemoryowner

@danielmarbach
Collaborator

I have also quickly done a PoC showing ROS support danielmarbach@c5c9690. The branch is on top of the adjustment branch and is here main...danielmarbach:rabbitmq-dotnet-client:publish-sequence. I think we could do this as a follow up

@paulomorgado
Contributor

I have also quickly done a PoC showing ROS support danielmarbach@c5c9690. The branch is on top of the adjustment branch and is here main...danielmarbach:rabbitmq-dotnet-client:publish-sequence. I think we could do this as a follow up

I was thinking about that, also for subscription. There's no need to materialize everything into sequential memory when it still has to be parsed and might not even be needed.

A ReadOnlySequence<T> might still be holding pooled memory. It would be nice to have a way to be notified when it's been consumed.

@danielmarbach
Collaborator

A ReadOnlySequence<T> might still be holding pooled memory. It would be nice to have a way to be notified when it's been consumed.

The IMemoryOwner overload gives you that. You can have a wrapper that is aware of the memory you pooled, and then you get notified.
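A minimal sketch of that wrapper idea: an `IMemoryOwner<byte>` whose `Dispose` runs a callback, so the producer learns when the client has finished with the body. The type and callback names are illustrative, not part of the client API.

```csharp
using System;
using System.Buffers;

// Wraps a pooled buffer so that disposal (the client's "I'm done with
// this body" signal) also notifies the producer.
sealed class NotifyingMemoryOwner : IMemoryOwner<byte>
{
    private readonly IMemoryOwner<byte> _inner;
    private readonly Action _onConsumed;
    private bool _disposed;

    public NotifyingMemoryOwner(IMemoryOwner<byte> inner, Action onConsumed)
    {
        _inner = inner;
        _onConsumed = onConsumed;
    }

    public Memory<byte> Memory => _inner.Memory;

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        _inner.Dispose();   // return the buffer to the pool
        _onConsumed();      // "consumed" notification fires here
    }
}

// Hypothetical usage: rent, wrap, hand ownership to a publish overload
// that takes IMemoryOwner<byte>.
// var owner = new NotifyingMemoryOwner(
//     MemoryPool<byte>.Shared.Rent(1024),
//     onConsumed: () => Console.WriteLine("body consumed"));
```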

@PauloHMattos
Author

I will wait for @lukebakken to also weigh in, but it seems that IMemoryOwner is the preferred approach. I will close the other PR when we lock in on this approach.

I have pushed a few small tweaks here main...danielmarbach:rabbitmq-dotnet-client:publish-imemoryowner

Thanks. Do you want to close this and open a PR with your tweaks, or should I pull them into my branch?

I have also quickly done a PoC showing ROS support danielmarbach@c5c9690. The branch is on top of the adjustment branch and is here main...danielmarbach:rabbitmq-dotnet-client:publish-sequence. I think we could do this as a follow up

What I don't like about this implementation is that if the ROS is an actual sequence, we make a copy for the full size of the ROS. That seems counterproductive because the developer using the library did the work of assembling a sequence to avoid large buffers and LOH allocations, only to have the client allocate a large contiguous buffer anyway. I think if we go down this route, we should at least also use a sequence of IMemoryOwner buffers to avoid this large contiguous buffer.

@danielmarbach
Collaborator

Thanks. Do you want to close this and open a PR with your tweaks, or should I pull them into my branch?

Feel free to cherry pick then we can keep this PR alive and the discussion associated with it

@danielmarbach
Collaborator

What I don't like about this implementation is that if the ROS is an actual sequence, we make a copy for the full size of the ROS.

Yeah, I did not properly finish this. We would have to write the individual segments internally and not allocate a large buffer. I just wanted to explore what the API shape looks like, not spend more time on it yet.

@paulomorgado
Contributor

A ReadOnlySequence<T> might still be holding pooled memory. It would be nice to have a way to be notified when it's been consumed.

The IMemoryOwner overload gives you that. You can have a wrapper that is aware of the memory you pooled, and then you get notified.

IMemoryOwner<T> holds a single Memory<T>.
ReadOnlySequence<T> holds many ReadOnlyMemory<T>, but all of those may come from an IMemoryOwner<T>.

Because there's no out-of-the-box slicing of IMemoryOwner<T> and no IReadOnlyMemoryOwner<T>, when I want APIs with strict read-only semantics I use a ReadOnlyMemory<T> plus an additional IDisposable? memory owner for disposal.

Since there's no out-of-the-box implementation of ReadOnlySequenceSegment<T>, maybe the consuming code could check whether each segment implements IDisposable and dispose of it if it does.
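The segment idea above could look roughly like this: a `ReadOnlySequenceSegment<byte>` backed by a pooled `IMemoryOwner<byte>` that also implements `IDisposable`, so a consumer walking the sequence can dispose each segment once consumed. `OwnedSegment` is a hypothetical name; this is a sketch, not a proposed client type.

```csharp
using System;
using System.Buffers;

// A sequence segment that owns its backing pooled buffer and can be
// disposed by the consumer after the bytes have been read.
sealed class OwnedSegment : ReadOnlySequenceSegment<byte>, IDisposable
{
    private readonly IMemoryOwner<byte> _owner;

    public OwnedSegment(IMemoryOwner<byte> owner, int length, long runningIndex)
    {
        _owner = owner;
        // Pools may hand back a larger buffer than requested, so slice
        // to the logical length.
        Memory = owner.Memory.Slice(0, length);
        RunningIndex = runningIndex;
    }

    // Links a new segment after this one and returns it, so callers
    // can chain: first.Append(...).Append(...).
    public OwnedSegment Append(IMemoryOwner<byte> owner, int length)
    {
        var next = new OwnedSegment(owner, length, RunningIndex + Memory.Length);
        Next = next;
        return next;
    }

    public void Dispose() => _owner.Dispose();
}
```

A consumer would then build `new ReadOnlySequence<byte>(first, 0, last, last.Memory.Length)` and, after parsing, walk the segments and dispose each one; the single-segment case paulomorgado raises below still needs its own disposal path.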

@danielmarbach
Collaborator

@PauloHMattos Something like this: 863508c (probably needs some more polishing and thinking). I'm quite time-constrained, but it shows your current OutgoingFrame design would be extendable.

@danielmarbach
Collaborator

@paulomorgado You meant something like this, right?

@paulomorgado
Contributor

@danielmarbach, something like that.

But I think one case is being overlooked here: if it's a single segment, it won't be disposed.

Having to pass an IMemoryOwner<byte> and an int for the length is something I don't like about using IMemoryOwner<byte>. In most APIs I build, I opted for a ReadOnlyMemory<byte> and an IDisposable?. It's more versatile.

By the way, do you know any library with a general-purpose implementation of ReadOnlySequenceSegment<T>?

@PauloHMattos
Author

PauloHMattos commented Mar 5, 2026

@PauloHMattos Something like this 863508c ( probably needs some more polishing and thinking). I'm quite time-constrained, but it shows your current OutgoingFrame design would be extendable.

@danielmarbach

Yes, that's what I had in mind with the OutgoingFrame. During the next weekend I think I will have a lot of time to work on this, but I think we should leave the ROS for a follow-up PR, so I will focus on getting this PR ready for review.

Once it's merged, I can finish what you started on the ROS implementation.

@PauloHMattos
Author

@paulomorgado

Having to pass an IMemoryOwner and an int for the length is something I don't like about using IMemoryOwner. In most APIs I build, I opted for a ReadOnlyMemory and an IDisposable?. It's more versatile.

I considered 3 ways when working on this:

  • IMemoryOwner + Length
  • ReadOnlyMemory + IDisposable
  • Custom wrapper type

None felt particularly elegant, so I went with the first one for the simple reason that it was what I implemented first 😄.
I don't know if there is a strictly correct way to handle this, as I have never encountered any of these patterns in other libraries before.

Either way, I don't have a strong opinion about it and I'm happy with whichever option we choose.
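The three options under discussion can be written out as hypothetical overload signatures; none of these is the actual client API, and `PublishAsync`/`PooledBody` are illustrative names only.

```csharp
using System;
using System.Buffers;
using System.Threading.Tasks;

// Option 3's wrapper: bundles the read-only slice and its optional
// owner into one argument.
readonly struct PooledBody : IDisposable
{
    private readonly IDisposable? _owner;
    public ReadOnlyMemory<byte> Memory { get; }

    public PooledBody(ReadOnlyMemory<byte> memory, IDisposable? owner)
    {
        Memory = memory;
        _owner = owner;
    }

    public void Dispose() => _owner?.Dispose();
}

interface IPublishShapes
{
    // 1. IMemoryOwner + Length: ownership transfers to the client,
    //    which disposes the owner after the bytes hit the wire. The
    //    int is needed because a pool may rent a larger buffer.
    ValueTask PublishAsync(string exchange, string routingKey,
        IMemoryOwner<byte> body, int length);

    // 2. ReadOnlyMemory + IDisposable?: strict read-only semantics;
    //    callers without pooled memory simply pass null.
    ValueTask PublishAsync(string exchange, string routingKey,
        ReadOnlyMemory<byte> body, IDisposable? bodyOwner);

    // 3. Custom wrapper type: one argument, intent in the type name.
    ValueTask PublishAsync(string exchange, string routingKey,
        PooledBody body);
}
```

Shape 1 is the most explicit about transfer; shape 2 avoids the length parameter because the slice already carries it; shape 3 trades an extra type for a tidier overload set.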

@danielmarbach
Collaborator

In most APIs I build, I opted for a ReadOnlyMemory<byte> and an IDisposable?. It's more versatile.

My understanding is that a memory owner is the canonical way of transferring ownership, see rules 7 and 8 in https://github.com/dotnet/docs/blob/main/docs/standard/memory-and-spans/memory-t-usage-guidelines.md#rule-7-if-you-have-an-imemoryownert-reference-you-must-at-some-point-dispose-of-it-or-transfer-its-ownership-but-not-both, but it could also be that I'm misreading the guidance.

@danielmarbach
Collaborator

But I think one case is being overlooked here: if it's a single segment, it won't be disposed.

Yes, it is not bulletproof yet. My idea was only to explore the approach a bit, not to make it done-done.

@paulomorgado
Contributor

But I think one case is being overlooked here: if it's a single segment, it won't be disposed.

Yes, it is not bulletproof yet. My idea was only to explore the approach a bit, not to make it done-done.

Not a criticism. Please, carry on!

@paulomorgado
Contributor

@paulomorgado

Having to pass an IMemoryOwner and an int for the length is something I don't like about using IMemoryOwner. In most APIs I build, I opted for a ReadOnlyMemory and an IDisposable?. It's more versatile.

I considered 3 ways when working on this:

  • IMemoryOwner + Length
  • ReadOnlyMemory + IDisposable
  • Custom wrapper type

None felt particularly elegant, so I went with the first one for the simple reason that it was what I implemented first 😄. I don't know if there is a strictly correct way to handle this, as I have never encountered any of these patterns in other libraries before.

Either way, I don't have a strong opinion about it and I'm happy with whichever option we choose.

My personal preference, from experience, is:

  • ReadOnlyMemory + IDisposable

@danielmarbach
Collaborator

I think the custom wrapper type expresses the intent more clearly. Going to do a quick spike, then we can have a look at it.

@danielmarbach
Collaborator

Here we go danielmarbach@a73b7e6

@lukebakken lukebakken self-assigned this Mar 6, 2026
@lukebakken lukebakken self-requested a review March 6, 2026 18:02
@lukebakken lukebakken added this to the 7.3.0 milestone Mar 6, 2026
@lukebakken
Collaborator

I will wait for @lukebakken to also weigh in, but it seems that IMemoryOwner is the preferred approach

@PauloHMattos to be honest for low-level .NET stuff I defer to @danielmarbach and @paulomorgado, among other regular contributors. I'll, of course, review this PR, but if those two like this approach better that's a big 👍 for me as well 😸

@lukebakken lukebakken requested a review from danielmarbach March 6, 2026 18:05
@PauloHMattos
Author

I've cherry-picked all of @danielmarbach's changes except for danielmarbach@a73b7e6.

@paulomorgado, do you have any feedback/opinion regarding the IReadOnlyMemoryOwner approach?

I personally don't like that the user would have to allocate a new object just to slice the memory.
I will try to make the wrapper a struct tomorrow, but right now I think I'm liking approach 1 (IMemoryOwner + Length) more because it is the simplest and most explicit.

@danielmarbach
Collaborator

I will try to make the wrapper a struct tomorrow

I went down this path mentally and concluded it would require carrying it through into the underlying transport. I think it is quite involved at first sight, but I did not spend a lot of time thinking this through. Curious to see your assessment!

@danielmarbach
Collaborator

Don't get me wrong, I do understand the overhead of a single class allocation. But I wonder whether, in this specific case, it is really such a big deal: at the end of the day you need some sort of scoping mechanism anyway to pool things and then release them again. So the question is whether an abstraction like the read-only memory owner gives a more concise overload structure on the methods, one that would also be a good example to follow when we introduce read-only sequence support.

Comment thread projects/RabbitMQ.Client/IChannel.cs Outdated
Comment thread projects/RabbitMQ.Client/Impl/SessionBase.cs Outdated
Comment thread projects/RabbitMQ.Client/IChannelExtensions.cs Outdated
Comment thread projects/RabbitMQ.Client/OutgoingFrame.cs Outdated
Comment thread projects/RabbitMQ.Client/OutgoingFrame.cs
Comment thread projects/RabbitMQ.Client/OutgoingFrame.cs Outdated
Comment thread projects/RabbitMQ.Client/Impl/Channel.BasicPublish.cs
paulomorgado

This comment was marked as resolved.

@PauloHMattos
Author

I will look at @paulomorgado's feedback and make the fixes next weekend. Thanks!

@danielmarbach
Collaborator

@PauloHMattos I tried to address those. We probably also need to resolve the conflicts on the API changes. I might be able to do this later

PauloHMattos and others added 21 commits March 31, 2026 18:02
This commit eliminates large contiguous buffer allocations and redundant
payload copying during AMQP frame serialization by fully leveraging
System.IO.Pipelines.
Co-authored-by: Paulo Morgado <470455+paulomorgado@users.noreply.github.com>
Co-authored-by: Paulo Morgado <470455+paulomorgado@users.noreply.github.com>
When `TransmitAsync` is called with an `IMemoryOwner<byte>` body on a
closed channel, ownership has already been transferred to the callee.
Throwing `AlreadyClosedException` without first disposing `body` leaks
the memory owner.
…Extensions`

In `OutgoingFrame.Dispose`, remove redundant `_methodAndHeader = default`
assignment, use `is not null` instead of `!= null`, and use `default`
instead of `null` for `_body` assignment.

In `IChannelExtensions`, fix remarks ordering on the `PublicationAddress`
overload to be consistent with all other overloads, and add missing
periods after "BasicProperties" in eight remarks blocks.
…zeToFrames`

The PR regressed the `ReadOnlyMemory<byte>` overload of `SerializeToFrames`
from one allocation to two by copying the body into a `MemoryPool` buffer
and delegating to the `IMemoryOwner<byte>` overload.

Restore the original single-allocation approach: pack method, header, and
body frames into one buffer. The `IMemoryOwner<byte>` overload retains its
split method/header + body approach for zero-copy publishing.

Also improve variable naming in the `ReadOnlyMemory<byte>` overload:
split `remainingBodyBytes` into `bodyLength` (body size) and
`remainingBodyBytes` (loop counter), and rename `frameSize` to
`payloadSize` to match the naming used in `OutgoingFrame.WriteTo`.

Add explanatory comments to both overloads to make the copy-vs-zero-copy
distinction explicit.
The `ReadOnlyMemory<byte> + IDisposable?` shape was chosen specifically
because `IDisposable?` is nullable, allowing callers without pooled
memory to pass `null`. The refactor accidentally used non-nullable
`IDisposable`, contradicting the intent of the API shape decision and
the examples used to justify it.
`_bodyOwner` was only disposed inside the `_methodAndHeader is not null`
block, making its disposal contingent on the method/header buffer state.
These are logically independent resources and should be disposed
separately.
- Use `default` instead of `null` for `_body` in `OutgoingFrame`
  no-body constructor (`ReadOnlyMemory<byte>` is a struct)
- Remove extraneous blank line in `SessionBase.TransmitAsync`
- Clarify `bodyOwner` XML docs to mention the parameter is optional
  and that `null` should be passed when no disposal is needed
@danielmarbach
Collaborator

@PauloHMattos I force pushed after a rebase

Comment on lines +137 to 150
```diff
 public ValueTask TransmitAsync<TMethod, THeader>(in TMethod cmd, in THeader header, ReadOnlyMemory<byte> body, IDisposable? bodyOwner, CancellationToken cancellationToken = default)
     where TMethod : struct, IOutgoingAmqpMethod
     where THeader : IAmqpHeader
 {
     if (!IsOpen && cmd.ProtocolCommandId != ProtocolCommandId.ChannelCloseOk)
     {
         bodyOwner?.Dispose();
         ThrowAlreadyClosedException();
     }

-    RentedMemory bytes = Framing.SerializeToFrames(ref Unsafe.AsRef(in cmd), ref Unsafe.AsRef(in header), body, ChannelNumber, Connection.MaxPayloadSize);
+    OutgoingFrame bytes = Framing.SerializeToFrames(ref Unsafe.AsRef(in cmd), ref Unsafe.AsRef(in header), body, bodyOwner, ChannelNumber, Connection.MaxPayloadSize);
     RabbitMQActivitySource.PopulateMessageEnvelopeSize(Activity.Current, bytes.Size);
     return Connection.WriteAsync(bytes, cancellationToken);
 }
```
Contributor

Isn't there a chance of leaking the bodyOwner on error?

```csharp
public ValueTask TransmitAsync<TMethod, THeader>(..., IDisposable? bodyOwner, ...)
{
    if (!IsOpen && cmd.ProtocolCommandId != ProtocolCommandId.ChannelCloseOk)
    {
        bodyOwner?.Dispose();
        ThrowAlreadyClosedException();
    }

    OutgoingFrame bytes = default;
    try
    {
        bytes = Framing.SerializeToFrames(..., bodyOwner, ...);
        RabbitMQActivitySource.PopulateMessageEnvelopeSize(Activity.Current, bytes.Size);
        return Connection.WriteAsync(bytes, cancellationToken);
    }
    catch
    {
        // If SerializeToFrames failed: bytes is default → Dispose() is no-op,
        // but bodyOwner was never captured → dispose it directly.
        // If SerializeToFrames succeeded: bytes holds bodyOwner → Dispose() covers both.
        bytes.Dispose();
        if (bytes.Size == 0)
            bodyOwner?.Dispose();
        throw;
    }
}
```

Collaborator

@paulomorgado That would only capture synchronous exceptions. Returning a task or value task and expecting a try / catch to fire is a recipe for disaster. Let me have a closer look

Collaborator

I don't think we have a problem here. The real guard against bodyOwner leaking from the BasicPublishCoreAsync to TransmitAsync path is already the bodyOwnerTransferred fix we put in BasicPublishCoreAsync.

For TransmitAsync itself: once SerializeToFrames succeeds, OutgoingFrame owns bodyOwner, and SocketFrameHandler is responsible for disposing of it in all paths (already fixed). The only true gap was PopulateMessageEnvelopeSize throwing after SerializeToFrames, but that's an Activity tagging call with no realistic throw surface, and patching it with an incorrect try/catch or unnecessarily introducing awaits here seems worse than leaving it as-is.

Contributor

I'm starting to think I analyzed this wrong.

Maybe it should be as simple as:

```csharp
public ValueTask TransmitAsync<TMethod, THeader>(..., IDisposable? bodyOwner, ...)
{
    try
    {
        // ...
    }
    finally
    {
        bodyOwner?.Dispose();
    }
}
```

But that would require the method to be async.

Or maybe not, with some extra work: create an async local method when bodyOwner is not null and a sync one when it is.

Contributor

I asked Copilot, and this seems to be the recommendation:

Problem

TransmitAsync has no protection around the synchronous code between the !IsOpen guard and Connection.WriteAsync. If SerializeToFrames or PopulateMessageEnvelopeSize throws, bodyOwner (and possibly the rented method+header memory) leaks:

```csharp
// Current code — no try/catch around these calls
OutgoingFrame bytes = Framing.SerializeToFrames(..., body, bodyOwner, ...);  // can throw (OOM)
RabbitMQActivitySource.PopulateMessageEnvelopeSize(Activity.Current, bytes.Size);  // can throw
return Connection.WriteAsync(bytes, cancellationToken);  // async faults handled by write pipeline ✅
```

Meanwhile, BasicPublishCoreAsync has already set bodyOwnerTransferred = true before calling ModelSendAsync, so its outer finally won't clean up either.

Failure analysis

| Failure point | Who disposes? |
| --- | --- |
| `!IsOpen` early exit | `bodyOwner?.Dispose()` before throw ✅ |
| `SerializeToFrames` throws (e.g., OOM) | Nobody ❌ (frame never created, `bodyOwner` not captured) |
| `PopulateMessageEnvelopeSize` throws | Nobody ❌ (entire `OutgoingFrame`, including `bodyOwner`, leaks) |
| `Connection.WriteAsync` throws synchronously | Nobody ❌ (same as above) |
| `Connection.WriteAsync` returns faulted `ValueTask` | `WriteAsyncCore` catch ✅ |
| `WriteLoopAsync` crashes | `WriteLoopAsync` finally drain ✅ |

Recommended fix

```csharp
public ValueTask TransmitAsync<TMethod, THeader>(
    in TMethod cmd, in THeader header,
    ReadOnlyMemory<byte> body, IDisposable? bodyOwner,
    CancellationToken cancellationToken = default)
    where TMethod : struct, IOutgoingAmqpMethod
    where THeader : IAmqpHeader
{
    if (!IsOpen && cmd.ProtocolCommandId != ProtocolCommandId.ChannelCloseOk)
    {
        bodyOwner?.Dispose();
        ThrowAlreadyClosedException();
    }

    OutgoingFrame bytes = default;
    try
    {
        bytes = Framing.SerializeToFrames(
            ref Unsafe.AsRef(in cmd), ref Unsafe.AsRef(in header),
            body, bodyOwner, ChannelNumber, Connection.MaxPayloadSize);
        RabbitMQActivitySource.PopulateMessageEnvelopeSize(Activity.Current, bytes.Size);
        return Connection.WriteAsync(bytes, cancellationToken);
    }
    catch
    {
        // If SerializeToFrames succeeded (Size > 0), bytes owns both
        // the rented method+header memory and bodyOwner — Dispose covers both.
        // If SerializeToFrames failed, bytes is default — Dispose is a no-op,
        // so we must dispose bodyOwner directly.
        bytes.Dispose();
        if (bytes.Size == 0)
        {
            bodyOwner?.Dispose();
        }

        throw;
    }
}
```

Why this works

| Failure | What happens |
| --- | --- |
| `!IsOpen` | `bodyOwner?.Dispose()` before throw ✅ |
| `SerializeToFrames` throws | `bytes` is default → `Dispose()` is a no-op → `bodyOwner?.Dispose()` ✅ |
| `PopulateMessageEnvelopeSize` throws | `bytes.Dispose()` frees frame + `bodyOwner` ✅ |
| `Connection.WriteAsync` throws synchronously | Same as above ✅ |
| `Connection.WriteAsync` returns faulted `ValueTask` | `WriteAsyncCore` catch disposes frame ✅ (try/catch not involved) |
| Happy path | Frame flows through write pipeline → `WriteLoopAsync` disposes ✅ |

Practical severity

Low-to-medium. SerializeToFrames is arithmetic + MemoryPool.Rent + buffer writes — only OOM or catastrophic failure triggers this. PopulateMessageEnvelopeSize is Activity?.SetTag — essentially cannot throw. But this is the last gap in an otherwise complete ownership chain.

Collaborator

Like I mentioned above, my assessment was that it is not worth it: when those edge cases happen you are in catastrophic territory, and returning buffers is the least of your worries since you need to restart anyway.

That being said I might be missing something and I'm happy to be convinced otherwise

