
Faster Inbound Pipeline #80656


Conversation

@original-brownbear (Member) commented Nov 11, 2021

Optimize away a number of refcounting operations in the inbound pipeline and remove the indirection of collecting the pieces of a message into an ArrayList before passing them to the aggregator, which does its own collecting of these pieces anyway.
The production impact of this change is expected to be relatively minor, though I think it does save a little indirection and might allow for further optimisations down the line. Also, it makes the code more obviously correct by only incrementing the reference count of buffers that are queued for later use.
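The retention discipline argued for above can be sketched with a toy ref-counted buffer (hypothetical `SimpleBuf`/`handle` names; the real code uses `ReleasableBytesReference` inside `InboundPipeline`): retain only when a buffer must outlive the current call because it is queued for later, and let the caller's single release cover the fully-consumed case.

```java
import java.util.ArrayDeque;

public class RetainWhenQueued {
    /** Minimal ref-counted buffer; stands in for ReleasableBytesReference. */
    static final class SimpleBuf {
        int refCount = 1;
        void retain() {
            if (refCount <= 0) throw new IllegalStateException("already released");
            refCount++;
        }
        void release() {
            if (--refCount < 0) throw new IllegalStateException("double release");
        }
        boolean released() { return refCount == 0; }
    }

    static final ArrayDeque<SimpleBuf> pending = new ArrayDeque<>();

    /**
     * Handle a buffer: if it is fully consumed, no extra retain/release pair
     * is needed — the single release in the finally block balances the
     * caller's acquire. Only when bytes are left over do we retain, because
     * the pending queue now holds a reference beyond this call.
     */
    static void handle(SimpleBuf buf, boolean fullyConsumed) {
        try {
            // ... decode and dispatch message fragments here ...
            if (fullyConsumed == false) {
                buf.retain();     // the queue now owns one reference
                pending.add(buf);
            }
        } finally {
            buf.release();        // balances the caller's acquire
        }
    }

    public static void main(String[] args) {
        SimpleBuf consumed = new SimpleBuf();
        handle(consumed, true);
        System.out.println("consumed released: " + consumed.released());   // true

        SimpleBuf partial = new SimpleBuf();
        handle(partial, false);
        System.out.println("partial still live: " + !partial.released()); // true
        pending.poll().release(); // later: aggregator finishes with it
        System.out.println("partial released: " + partial.released());    // true
    }
}
```

The point of the sketch is that the fully-consumed path touches the ref count exactly once, whereas the old pipeline would retain and release every fragment regardless.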

I found this while looking into why #79718 breaks on OSX CI while logging an endless stream of slow-transport warnings. These are in large part caused by instantiating lots and lots of leak-tracking exceptions with stack traces in tests. This change reduces the number of increments and decrements massively, thus speeding up tests (on all platforms, not just OSX, for what it's worth).

Rough illustration (no real benchmark, just some profiling from tests, but you can see the right-hand side of the profile going down even for an expensive transport action like get-snapshots with many pending snapshots). I don't have the charts for other tests with me right now, but the effect is obviously much bigger when dealing with smaller transport messages in tests:

before:

[profiling screenshot]

after:

[profiling screenshot]

closes #79718 (I assume this will be enough to stabilize the test now, combined with the other fix already applied on the issue)

@elasticmachine elasticmachine added the Team:Distributed (Obsolete) Meta label for distributed team (obsolete). Replaced by Distributed Indexing/Coordination. label Nov 11, 2021
@elasticmachine (Collaborator)

Pinging @elastic/es-distributed (Team:Distributed)

@original-brownbear (Member, Author)

@elasticmachine update branch

@DaveCTurner (Contributor) left a comment

I'm not a big fan of this change, I'd prefer to keep the ref counting more obviously correct and deal with the leak-detector test slowness directly: perhaps stop collecting stack traces on the platforms on which it's too slow to cope while we're not chasing any particular leak there, or else just extend timeouts as needed. I left a few inline comments too.

}
}
// if handling the messages didn't cause the channel to get closed and we did not fully consume the buffer retain it
DaveCTurner (Contributor):

Could we have a test in InboundPipelineTests showing that we handle the isClosed case correctly here? And elsewhere I guess, but I checked that we never actually exercise the isClosed == true branch here.

original-brownbear (Member, Author):

Let me try :) Should be doable, these paths are exercised by some internal cluster tests so there should be an obvious way to do it. On it

original-brownbear (Member, Author):

Test added, should be good for another round now :)

assert aggregator.isAggregating();
assert fragment instanceof ReleasableBytesReference;
aggregator.aggregate((ReleasableBytesReference) fragment);
return;
DaveCTurner (Contributor):

If isClosed can we assert that pending is now empty?

original-brownbear (Member, Author):

Yea that should be fine, will add

do {
final int bytesDecoded;
if (pending.size() == 1) {
bytesDecoded = decode(channel, pending.peekFirst());
DaveCTurner (Contributor):

If the channel is closed then the bytes get released mid-decode. Is that a problem? (possibly not, but I'd like to hear the reasoning)

original-brownbear (Member, Author):

The channel is closed on the same thread that this runs on, we don't release mid-decode. We release after deserializing and handling what we deserialized in all cases.

DaveCTurner (Contributor):

Yeah I really just meant within the decode() method.

original-brownbear (Member, Author):

Ah I see. I think there we're good because we don't have any incremental decode on a message. We decode a full message and pass it along to message handling, then return from decode. So closing the channel will always be the last step in decode as far as I can see.

}
}
} while (isClosed == false && pending.isEmpty() == false);
DaveCTurner (Contributor):

If isClosed didn't we already return?

original-brownbear (Member, Author):

++ right

public int decode(
TcpChannel channel,
ReleasableBytesReference reference,
CheckedBiConsumer<TcpChannel, Object, IOException> fragmentConsumer
DaveCTurner (Contributor):

Could we not pass the TcpChannel around everywhere like this? It's always the same channel isn't it?

This comment was marked as outdated.

original-brownbear (Member, Author):

Hiding my above comment, I misread the question initially. I added passing around the channel here to avoid instantiating a capturing lambda here where previously we had a non-capturing one.
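The capturing-vs-non-capturing distinction mentioned above can be sketched as follows (hypothetical `Channel` type, not the actual TcpChannel): passing the channel as an explicit argument lets the fragment handler be one stateless constant, while binding the channel in a closure forces a capturing lambda, which the JVM typically allocates anew each time the lambda expression is evaluated.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class LambdaCapture {
    record Channel(String name) {}

    // Non-capturing: references no enclosing state, so a single shared
    // instance can serve every invocation; the channel arrives as an argument.
    static final BiFunction<Channel, Object, String> FRAGMENT_HANDLER =
        (channel, fragment) -> channel.name() + " <- " + fragment;

    // Capturing alternative: closes over `channel`, producing a fresh
    // lambda object per call to this factory.
    static Function<Object, String> boundHandler(Channel channel) {
        return fragment -> channel.name() + " <- " + fragment;
    }

    public static void main(String[] args) {
        Channel ch = new Channel("ch-1");
        System.out.println(FRAGMENT_HANDLER.apply(ch, "frag-a")); // ch-1 <- frag-a
        System.out.println(boundHandler(ch).apply("frag-b"));     // ch-1 <- frag-b
    }
}
```

Both forms produce the same result; the difference is purely in per-call allocation, which is what threading the channel through the decode path avoids.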

import java.util.function.BiConsumer;
import java.util.function.Function;
import java.util.function.LongSupplier;
import java.util.function.Supplier;

public class InboundPipeline implements Releasable {

private static final ThreadLocal<ArrayList<Object>> fragmentList = ThreadLocal.withInitial(ArrayList::new);
DaveCTurner (Contributor):

👍

@original-brownbear (Member, Author)

I'd prefer to keep the ref counting more obviously correct

IMO this is exactly what this PR does. We shouldn't be ref-count incrementing and decrementing left and right when we don't need to. We should increment when we need to hold on to something outside of the current execution stack, and only then; anything else just makes the code needlessly hard to follow IMO. To me, incrementing a reference always means we fork off or need something later, and that wasn't the case here at all.

@DaveCTurner (Contributor)

I sort of see what you mean but also this change means that refcounting is different depending on whether there are multiple fragments to combine or not. I don't see a way to avoid the extra refcounting in the multi-fragment case, but it isn't as simple as it was and it took some thought.

@Tim-Brooks (Contributor)

I looked at this briefly yesterday. I was also opposed to the change. I don’t think it fixes the underlying issue. If normal ref count incs and decs kill the test framework, I think that is an issue. And removing incs/decs used in production does not seem like a good solution.

Additionally, this code was structured similarly to Netty specifically to separate decoding from aggregation from handling. I didn't want the decoder wrapping further pipeline steps; it always attempted to step back up to the top (Pipeline) before moving to the next step. I did not want the decoder state to be concerned with what happened further down the pipeline.

The Consumer interface was just for testing. If I did it again I would probably pass the list in, similar to Netty.

I also don't agree on the correctness point: try-with-resources for each scope, plus passing a new retain into any other scope, guarantees that an uncaught exception does not lead to a leak.

To be clear, I'm not rejecting the PR; you all can pick whatever approach you want. Those were just my thoughts from looking at it.
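The try-with-resources discipline described in the comment above can be sketched with a toy ref-counted buffer (hypothetical `Buf`/`outer`/`inner` names, not the actual transport classes): each scope holds exactly one reference in a try-with-resources block and hands a fresh `retain()` to the next scope, so an uncaught exception anywhere still releases every reference the failing scope owns.

```java
public class ScopedRetain {
    /** Toy ref-counted buffer; close() releases this scope's reference. */
    static final class Buf implements AutoCloseable {
        int refCount = 1;
        Buf retain() { refCount++; return this; }
        @Override public void close() { refCount--; }
        boolean leaked() { return refCount != 0; }
    }

    // The outer scope owns one reference; the inner scope gets its own
    // fresh retain. If inner throws, its try-with-resources releases the
    // inner reference, then outer's releases the outer one — no leak.
    static void outer(Buf buf, boolean explode) {
        try (buf) {
            inner(buf.retain(), explode); // hand a new reference downstream
        }
    }

    static void inner(Buf ref, boolean explode) {
        try (ref) {
            if (explode) throw new RuntimeException("boom");
            // ... process the fragment ...
        }
    }

    public static void main(String[] args) {
        Buf ok = new Buf();
        outer(ok, false);
        System.out.println("leaked: " + ok.leaked()); // leaked: false

        Buf bad = new Buf();
        try {
            outer(bad, true);
        } catch (RuntimeException e) {
            // expected
        }
        System.out.println("leaked: " + bad.leaked()); // leaked: false
    }
}
```

The trade-off the thread is debating is visible here: the pattern is exception-safe by construction, but every handoff costs one retain/release pair even when the downstream work completes on the same stack.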

DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this pull request Nov 25, 2021
Today we use a leaky `NON_RECYCLING_INSTANCE` in
`InboundPipelineTests#testPipelineHandling`. It's actually fine, we
don't use it for anything important, but it looks suspicious and it
means that a little bit of harmless-looking reordering would seriously
affect the coverage of these tests. This commit gets rid of it to ensure
that we're always watching for leaks.

Noticed when reviewing elastic#80656.
@original-brownbear (Member, Author)

perhaps stop collecting stack traces on the platforms on which it's too slow to cope while we're not chasing any particular leak there

One important point I'd make here is that the problem is not necessarily just that we collect stack traces for every message; it's simply how many we have to collect given the redundant ref counting we do. These are the traces we get for a tiny message (so no aggregation or anything in here) once it hits InboundHandler, so before we even get around to deserializing, in master.


Recent access records: 
#1:
	org.elasticsearch.core.AbstractRefCounted.incRef(AbstractRefCounted.java:24)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retain(ReleasableBytesReference.java:81)
	org.elasticsearch.transport.InboundAggregator.aggregate(InboundAggregator.java:80)
	org.elasticsearch.transport.InboundPipeline.forwardFragments(InboundPipeline.java:153)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:120)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
#2:
	org.elasticsearch.common.bytes.ReleasableBytesReference.close(ReleasableBytesReference.java:96)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:113)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
#3:
	org.elasticsearch.common.bytes.ReleasableBytesReference.close(ReleasableBytesReference.java:96)
	org.elasticsearch.transport.InboundPipeline.releasePendingBytes(InboundPipeline.java:188)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:106)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
#4:
	org.elasticsearch.core.AbstractRefCounted.incRef(AbstractRefCounted.java:24)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retain(ReleasableBytesReference.java:81)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retainedSlice(ReleasableBytesReference.java:87)
	org.elasticsearch.transport.InboundDecoder.internalDecode(InboundDecoder.java:94)
	org.elasticsearch.transport.InboundDecoder.decode(InboundDecoder.java:44)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:104)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
#5:
	org.elasticsearch.core.AbstractRefCounted.incRef(AbstractRefCounted.java:24)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retain(ReleasableBytesReference.java:81)
	org.elasticsearch.transport.InboundPipeline.getPendingBytes(InboundPipeline.java:164)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:103)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
#6:
	org.elasticsearch.core.AbstractRefCounted.incRef(AbstractRefCounted.java:24)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retainedSlice(ReleasableBytesReference.java:90)
	org.elasticsearch.transport.InboundPipeline.releasePendingBytes(InboundPipeline.java:183)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:106)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
#7:
	org.elasticsearch.core.AbstractRefCounted.incRef(AbstractRefCounted.java:24)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retain(ReleasableBytesReference.java:81)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:95)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:85)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
Created at:
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:347)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
: 3 leak records were discarded because they were duplicates

After my change it's this:


Recent access records: 
#1:
	org.elasticsearch.core.AbstractRefCounted.incRef(AbstractRefCounted.java:24)
	org.elasticsearch.common.bytes.ReleasableBytesReference.retain(ReleasableBytesReference.java:81)
	org.elasticsearch.transport.InboundAggregator.aggregate(InboundAggregator.java:80)
	org.elasticsearch.transport.InboundPipeline.forwardFragment(InboundPipeline.java:157)
	org.elasticsearch.transport.InboundDecoder.internalDecode(InboundDecoder.java:124)
	org.elasticsearch.transport.InboundDecoder.decode(InboundDecoder.java:48)
	org.elasticsearch.transport.InboundPipeline.decode(InboundPipeline.java:116)
	org.elasticsearch.transport.InboundPipeline.doHandleBytes(InboundPipeline.java:102)
	org.elasticsearch.transport.InboundPipeline.handleBytes(InboundPipeline.java:83)
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:351)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)
Created at:
	org.elasticsearch.transport.nio.MockNioTransport$MockTcpReadWriteHandler.consumeReads(MockNioTransport.java:347)
	org.elasticsearch.nio.SocketChannelContext.handleReadBytes(SocketChannelContext.java:222)
	org.elasticsearch.nio.BytesChannelContext.read(BytesChannelContext.java:35)
	org.elasticsearch.nio.EventHandler.handleRead(EventHandler.java:128)
	org.elasticsearch.transport.nio.TestEventHandler.handleRead(TestEventHandler.java:143)
	org.elasticsearch.nio.NioSelector.handleRead(NioSelector.java:415)
	org.elasticsearch.nio.NioSelector.processKey(NioSelector.java:241)
	org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:168)
	org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:125)
	java.base/java.lang.Thread.run(Thread.java:833)

... IMO this is logically much closer to the reality of what gets retained where, and it makes it far easier to understand where a leak may be coming from when debugging.

I agree that many of the redundant ref-count changes we have right now are trivial to reason about because they're all try-with-resources, but they also don't add anything, precisely because of their simplicity. They burn cycles now (though not that many in production) and make debugging a leak needlessly hard, because you have to go through all of the recorded touch points and verify that each of the 7 (instead of the 2 that are actually needed) is correct.

Also, without this change we are not actually operating the way Netty operates either, I'd say: we just deserialize messages one by one anyway and pass them along the pipeline as soon as we have one full message, whereas Netty at least optionally batches things during decoding here and there via the io.netty.handler.codec.ByteToMessageDecoder#singleDecode flag. Since we don't have that kind of batching, I don't really see the point in pushing stuff up the stack again like that; it just adds complexity for working with a list that is always <= 3 elements long anyway.

@original-brownbear (Member, Author)

I don't see a way to avoid the extra refcounting in the multi-fragment case, but it isn't as simple as it was and it took some thought.

True, but that simplicity also meant a lot of needless cycles went into wrapping trivial messages multiple times. In a world where we cache 3-element lists in a thread-local because we think this code is hot enough to justify it, I find this an acceptable bit of complexity to add.

@DaveCTurner (Contributor)

Yeah it's neater for sure, but that's still fundamentally a bug in the test-only leak tracker for which we're proposing a production code change. If we properly paired every acquire with its corresponding release then we could clean up the stack traces for the acquires that weren't released.

Have we considered not separating the parts of the InboundPipeline so much? Acknowledging that Tim says we're following Netty's patterns here, but Netty is super-flexible so it needs these things to be separate whereas we have much simpler needs.

@original-brownbear (Member, Author)

but Netty is super-flexible so it needs these things to be separate whereas we have much simpler needs.

I'd be all for it. Technically, there is no reason for the code to be spread out like this in our case IMO. Netty allows for things like multiple layers of decoding and aggregating messages where tracking a list makes sense to neatly do stuff like read a chunked HTTP message step by step and whatnot. We don't do anything like that and IMO could just make the code as flat as the ref counting is now, rather than have redundant complicated ref-counting to wrap around all the various steps in the current code.

but that's still fundamentally a bug in the test-only leak tracker for which we're proposing a production code change.

Not sure I fully agree with this. Yes, I wouldn't have proposed this without the leak tracker causing us trouble, but I think it's worthwhile in isolation not to have needless wrapping of buffers and ref counting in code this hot. Also, I still don't see how redundantly incrementing and releasing ref counts without forking adds anything but confusion to this code (the case where we handle both aggregated and single-buffer messages is the only exception where it maybe does).

@original-brownbear (Member, Author)

Build failure is just a Jenkins issue; the build (part-2) went through fine.

@elasticsearchmachine (Collaborator)

Pinging @elastic/es-distributed-obsolete (Team:Distributed (Obsolete))

@elasticsearchmachine (Collaborator)

Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)

@original-brownbear (Member, Author)

closing in favor of #123390

Labels: :Distributed Coordination/Network (Http and internode communication implementations), >refactoring, Team:Distributed Coordination (Meta label for Distributed Coordination team), Team:Distributed (Obsolete) (Meta label for distributed team (obsolete); replaced by Distributed Indexing/Coordination), v9.1.0
Development

Successfully merging this pull request may close these issues.

[CI] SnapshotStressTestsIT testRandomActivities failing