Log stack traces on data nodes before they are cleared for transport #125732
base: main
Conversation
Pinging @elastic/es-search-foundations (Team:Search Foundations)
Hi @benchaplin, I've created a changelog YAML for you.
) {
    boolean header = true;
-   if (version.onOrAfter(ERROR_TRACE_IN_TRANSPORT_HEADER) && threadPool.getThreadContext() != null) {
+   if (request.getChannelVersion().onOrAfter(ERROR_TRACE_IN_TRANSPORT_HEADER) && threadPool.getThreadContext() != null) {
        header = Boolean.parseBoolean(threadPool.getThreadContext().getHeaderOrDefault("error_trace", "false"));
    }
    if (header == false) {
I do not see where we are logging the trace? It seems we are only logging that it's actually being removed.
I am thinking we should log the exception as a WARN if:
- header == false (meaning the trace wouldn't be provided user-side)
- the exception isn't a user exception (i.e. it should be considered a 5xx)

However, I defer to @javanna here. He has much better context around what we are trying to achieve.
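For illustration, a minimal sketch of that proposed gating (variable names like nodeId and shardId are assumed from the surrounding method; the exact status check is up for discussion):

    // Sketch only: escalate to WARN when the stack trace is about to be
    // stripped (header == false) and the failure is server-side (5xx);
    // client (4xx) errors stay at DEBUG.
    if (header == false && ExceptionsHelper.status(e).getStatus() >= 500) {
        logger.warn(() -> format("[%s]%s: failed to execute search request", nodeId, shardId), e);
    } else {
        logger.debug(() -> format("[%s]%s: failed to execute search request", nodeId, shardId), e);
    }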
Passing e to the logger here outputs the stack trace. Maybe my log message isn't clear... would something like "search failed with exception:" be better?
Interesting point on WARN - do you think a WARN log per shard for the same underlying error is acceptable here? Luca and I decided it would be difficult to dedupe these at the moment. It might become easier with batched query execution.
Ah, I see.
I think that this shouldn't be debug. Maybe debug for all exceptions, but we should log WARN for things that are 5xx, indicating the failure.
I think adding a "Clearing stack trace before..." is unnecessary. Logging isn't just for our debugging, but also for users. I am not sure indicating that the trace is being removed is overly useful here.
I've updated the log message to be more clear for users, and raised the level to WARN for 5xx. Thanks for the idea there - I think it makes sense for this new message to log at the same level as rest suppressed.
I left some comments, thanks @benchaplin!
        header = Boolean.parseBoolean(threadPool.getThreadContext().getHeaderOrDefault("error_trace", "false"));
    }
    if (header == false) {
        return listener.delegateResponse((l, e) -> {
            logger.debug(
                () -> format("[%s]%s Clearing stack trace before transport:", clusterService.localNode().getId(), request.shardId()),
This looks like the best place to add the logging indeed, because we ensure that we do the additional logging exclusively in the cases where we suppress the stack trace, before doing so.
If we log this at debug, we are not going to see it with the default log level, are we? I think we should use warn instead, at least?
The error message also looks a little misleading; all we are interested in is the error itself, so I would log the same thing that we'd get on the coord node, but this time we'd get the stack trace.
There are a couple more aspects that deserve attention, I think:
- if we keep on logging on the coord node, we should probably only log on the data nodes when the error trace is not requested, otherwise we just add redundant logging?
- if we keep on logging on the coord node, it may happen that the node acting as coord node also acts as a data node as part of serving a search request. That would lead to duplicated logging on that node, which may be ok but is not ideal.
I've updated the log message to be more clear for users and raised the level to WARN on the same condition that the rest suppressed logger logs at WARN.
- Agreed, and that is the current behavior, as this log is only wrapped in if (header == false) {.
- That is true. I think the shard failure logs on the coord node (see my example below) are important, but an argument could be made to remove the rest suppressed log if error_trace=false. Then again, rest.suppressed is only one log line. But I imagine removing any of these logs would count as a breaking change (?), as alerts out there (like our own) might rely on them.
    private BooleanSupplier transportMessageHasStackTrace;

    @BeforeClass
    public static void setDebugLogLevel() {
        Configurator.setLevel("org.elasticsearch.search.SearchService", Level.DEBUG);
Hopefully this is not required if we log at WARN?
For ITs, I found it easiest to trigger 4xx failures, which I think should still log at the DEBUG level.
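As an aside, one hypothetical way to trigger such a 4xx in an IT (request body illustrative, not taken from this PR): sorting on an unmapped field fails with "No mapping found for [...] in order to sort on" and a 400 status, so the new log should stay at DEBUG.

    // Sorting on a field that exists in no index yields a 400-level
    // search_phase_execution_exception rather than a server-side failure.
    Request searchRequest = new Request("POST", "/test*/_search");
    searchRequest.setJsonEntity("{ \"sort\": [ { \"no_such_field\": \"asc\" } ] }");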
@@ -136,6 +150,106 @@ public void testAsyncSearchFailingQueryErrorTraceFalse() throws IOException, Int
        assertFalse(transportMessageHasStackTrace.getAsBoolean());
    }

public void testLoggingInAsyncSearchFailingQueryErrorTraceDefault() throws IOException, InterruptedException { |
You could probably fold this test into the error_trace=true test? You could randomly set the flag to true, otherwise not; the result should be the same?
In my view, the three potential values of error_trace (true, false, and empty) all deserve their own test so that any changes in behavior immediately lead to a test failure.
Yes, I can see that, yet knowing that empty and false lead to the same result, we could perhaps save some code and test runtime too.
Heard, saving code and test runtime sounds like a good tradeoff.
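A rough sketch of what that folding could look like (request setup hypothetical; randomBoolean() comes from ESTestCase):

    // Since an absent error_trace and error_trace=false behave the same,
    // one test can randomly cover both cases on each run.
    Request searchRequest = new Request("POST", "/test*/_search");
    if (randomBoolean()) {
        searchRequest.addParameter("error_trace", "false"); // explicit false
    } // otherwise omit the parameter entirely - the default is also false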
@@ -37,19 +44,27 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {
        return CollectionUtils.appendToCopyNoNullElements(super.nodePlugins(), MockTransportService.TestPlugin.class);
    }

    @BeforeClass
    public static void setDebugLogLevel() {
        Configurator.setLevel("org.elasticsearch.search.SearchService", Level.DEBUG);
Hopefully we adjust the log level and this is no longer needed.
@@ -108,6 +123,80 @@ public void testSearchFailingQueryErrorTraceFalse() throws IOException {
        assertFalse(hasStackTrace.getAsBoolean());
    }

public void testLoggingInSearchFailingQueryErrorTraceDefault() throws IOException { |
You could probably fold this test into the error_trace=true test? You could randomly set the flag to true, otherwise not; the result should be the same?
(done)
-   private void setupIndexWithDocs() {
-       createIndex("test1", "test2");
+   private int setupIndexWithDocs() {
+       int numShards = between(DEFAULT_MIN_NUM_SHARDS, DEFAULT_MAX_NUM_SHARDS);
The number of shards is already randomized if you call createIndex. See ESIntegTestCase#numberOfShards.
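A small sketch of leaning on that built-in randomization instead (helper names are from ESIntegTestCase):

    // createIndex(...) already picks a random shard count via
    // ESIntegTestCase#numberOfShards; read it back rather than rolling your own.
    createIndex("test1", "test2");
    int numShards = getNumShards("test1").numPrimaries;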
@@ -32,24 +41,41 @@
public class SearchErrorTraceIT extends HttpSmokeTestCase {
    private BooleanSupplier hasStackTrace;

    private static final String loggerName = "org.elasticsearch.search.SearchService";
I'd probably use SearchService.class here.
Good call, done.
    Map<String, Object> responseEntity = performRequestAndGetResponseEntityAfterDelay(searchRequest, TimeValue.ZERO);
    String asyncExecutionId = (String) responseEntity.get("id");
    Request request = new Request("GET", "/_async_search/" + asyncExecutionId);
    while (responseEntity.get("is_running") instanceof Boolean isRunning && isRunning) {
I think you can use assertBusy here?
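For example, a hedged sketch of that suggestion, reusing the test's own performRequestAndGetResponseEntityAfterDelay helper:

    // assertBusy retries the assertion with backoff until it passes or times
    // out, replacing the hand-rolled polling loop.
    assertBusy(() -> {
        Map<String, Object> current = performRequestAndGetResponseEntityAfterDelay(request, TimeValue.ZERO);
        assertFalse((Boolean) current.get("is_running"));
    });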
I think our main problem with this is that users are giving us (or we are giving ourselves) logs with the useful part of the backtrace removed. So I wonder if this patch really fixes that? Would users see the missing part in the data node logs? Would they know how to get it and how to give it to us?
@@ -552,6 +557,7 @@ static <T> ActionListener<T> maybeWrapListenerForStackTrace(
    }
    if (header == false) {
        return listener.delegateResponse((l, e) -> {
+           logger.debug(() -> format("[%s]%s: failed to execute search request", nodeId, shardId), e);
Do we need to correlate the request log on the coordinating node with the error reported on the data node? If so, is the logging information here enough?
Good question, I modeled the nodeId/shardId prefix to lead us right back to the coord node's logs. If you take a look at the example below: [CrhugeEAQNGHtZ14Y6Apjg][test][2] could be taken from the r.suppressed log and grepped across data nodes to find the new log.
I understand the nodeId/shardId combination, which associates a coord node with a data node. What if there are multiple failing requests on the coord node? How can we identify the error of a single request? Is it a use case we must consider?
For reference, here's a brief example of what logs we have today plus what I'm adding. I've thrown an NPE in SearchService to trigger failures. Setup: 3 nodes, 3 primary shards, 3 replicas.
(Coord node: we get 6 of these, one per shard)
(Coord node: after the 6 above failures)
(Coord node:
(Data node: this PR's new log - we get 6 of these spread across the nodes, one per shard)
Edit - after b34afc1, the new log will match the level of the r.suppressed log, so it would be WARN in this example.
            org.apache.logging.log4j.util.Supplier<String> messageSupplier = () -> format(
                "[%s]%s: failed to execute search request",
                nodeId,
                shardId
            );
            // 4xx (client) errors and unavailable-node/shard errors stay at
            // DEBUG; genuine server-side (5xx) failures are logged at WARN.
            if (ExceptionsHelper.status(e).getStatus() < 500 || ExceptionsHelper.isNodeOrShardUnavailableTypeException(e)) {
                logger.debug(messageSupplier, e);
            } else {
                logger.warn(messageSupplier, e);
            }
😍
#118266 cleared stack traces on data nodes before transport back to the coordinating node when error_trace=false. However, all logging of exceptions happens on the coordinating node. This change made it impossible to debug errors via stack trace when error_trace=false.

Here, I've logged the exception on the data node right before the stack trace is cleared. It's prefixed with [nodeId][indexName][shard] to match the rest.suppressed shard failures log on the coordinating node, allowing for easy error tracing from the coordinating node to the responsible data node.

Might this flood the (debug level) logs? This change has the potential to log [# of shards] times for each index in a search. However, this log:

elasticsearch/server/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java
Line 405 in 937bcd9