
Track errors through the inference return path#3776

Open
tdene wants to merge 6 commits into NVIDIA:main from tdene:tde/track_errors

Conversation

@tdene
Contributor

@tdene tdene commented Mar 10, 2026

What does this PR do ?

⚠️ For major changes (either in lines of code or in impact), please make sure to share a design doc with the team first. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for `dev` is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot bot commented Mar 10, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@tdene tdene marked this pull request as ready for review March 10, 2026 16:33
@tdene tdene requested review from a team as code owners March 10, 2026 16:33
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 10, 2026 16:33
@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 10, 2026
@tdene tdene added the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 10, 2026
@Phlip79 Phlip79 removed the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 10, 2026
@tdene tdene requested a review from a team as a code owner March 10, 2026 19:43
@tdene tdene force-pushed the tde/track_errors branch from dfb2674 to 3b0e2ac Compare March 10, 2026 20:38
@tdene tdene added the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 10, 2026
@tdene tdene removed the request for review from a team March 10, 2026 20:38
@tdene tdene added Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. and removed Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. labels Mar 11, 2026
Comment on lines +786 to +787
entry = self.requests[request_id]
request = entry.record[-1]
Contributor

Should this be

request = self.requests[request_id]
entry = request.record[-1]

Contributor Author

I understand what you mean.

But unfortunately, self.requests is a misnomer. self.requests is a Dict[int, RequestEntry], where each RequestEntry contains a DynamicInferenceRequestRecord, and each DynamicInferenceRequestRecord contains a list[DynamicInferenceRequest].

So if anything, we should be changing the name of self.requests to self.request_entries.
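The nesting described above can be sketched as follows. This is a hedged, minimal stand-in (the class bodies and field names are assumptions, not the actual Megatron-LM definitions) that only illustrates why `self.requests[request_id]` yields an entry, not a request:

```python
from dataclasses import dataclass, field


@dataclass
class DynamicInferenceRequest:
    # Stand-in for the real request class; only the id matters here.
    request_id: int


@dataclass
class DynamicInferenceRequestRecord:
    # Holds the list of per-attempt requests; [-1] is the latest one.
    requests: list = field(default_factory=list)

    def __getitem__(self, idx):
        return self.requests[idx]


@dataclass
class RequestEntry:
    record: DynamicInferenceRequestRecord


# self.requests in the PR is a Dict[int, RequestEntry]:
entries = {
    7: RequestEntry(
        record=DynamicInferenceRequestRecord(
            requests=[DynamicInferenceRequest(request_id=7)]
        )
    )
}

request_entry = entries[7]           # a RequestEntry, not a request
request = request_entry.record[-1]   # the latest DynamicInferenceRequest
print(type(request).__name__)
```

Under this shape, the reviewer's suggested naming (`request_entry` for the dict lookup, `request` for `record[-1]`) matches what each variable actually holds.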

Contributor

I see, can you make it this then?

request_entry = self.requests[request_id]
request = request_entry.record[-1]

Contributor Author

Addressed!

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Mar 11, 2026
@tdene tdene removed the Expert Review [deprecated] Apply this label to indicate that your PR is ready for expert review. label Mar 11, 2026
# Send the reply immediately, because it may never get a chance to be sent again.
if self.use_coordinator and self.is_mp_coordinator:
payload = msgpack.packb(
[Headers.ENGINE_REPLY.value, [entry.record.serialize()]], use_bin_type=True
Contributor

should entry here and down below be request_entry?

Contributor Author

Ach, my bad. Resolved!

request.prompt_tokens.tolist()
)
request.generated_text = self.controller.tokenizer.detokenize(request.generated_tokens)
entry.future.set_result(entry.record)
Contributor

entry -> request_entry 2x

Contributor Author

Resolved

return self.requests[request_id].record[-1]

def _handle_failed_request(self, request_id: int):
"""Handle a failed request by sending the reply immediately.
Contributor

what's the reason exactly for needing to return failed requests immediately? you mentioned offline that a failure can prevent the next step from ever running. can you give an example of this?

Contributor Author

If the first set of requests in the engines fails (for example, because they're all too long), the coordinator never enters the forward step (because there are no active requests), and the whole system hangs.
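The hang mode described here can be illustrated with a toy sketch. All names below (`coordinator_step`, `handle_failed_request`, the reply strings) are hypothetical stand-ins for the coordinator loop, not the PR's actual code:

```python
def coordinator_step(active_requests, replies):
    """One iteration of a toy coordinator loop."""
    if not active_requests:
        # No admitted requests -> no forward step. Without a prior
        # failure reply, clients would wait on this step forever.
        return False
    replies.extend(f"output:{r}" for r in active_requests)
    return True


def handle_failed_request(request_id, replies):
    # Send the reply immediately, because it may never get a
    # chance to be sent again (the loop above may never step).
    replies.append(f"failed:{request_id}")


replies = []
# All submitted requests are rejected at admission, so none become active.
for rid in (1, 2):
    handle_failed_request(rid, replies)

stepped = coordinator_step([], replies)
print(stepped, replies)
```

With the immediate failure reply, clients still receive a terminal answer even though the forward step never runs.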

finished_request_records.append(failed_entry.record)
failed_entry.future.set_result(failed_entry.record)
assert (
failed_entry.future.done()
Contributor

isn't there a race condition between: 1) resolving the future in _handle_failed_request(), and 2) this assert failed_entry.future.done()? does anything prevent this from running before the future is resolved?

Contributor Author

There is not, because there are no async yield points in _handle_failed_request or even _add_request. All of _add_request can be considered to run atomically, so the future is created and resolved before async_bookkeep gets a chance to run.
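This atomicity argument rests on a general asyncio property: code between two await points runs without interleaving from other coroutines. A minimal sketch (the coroutine names mirror the discussion but are illustrative, not the PR's code):

```python
import asyncio


async def add_request(loop, futures):
    # Create and resolve the future with no await in between:
    # no other task can be scheduled inside this stretch.
    fut = loop.create_future()
    futures.append(fut)
    fut.set_result("failed-request-record")


async def bookkeep(futures):
    # By the time this coroutine runs, the future is already done,
    # so an assert on future.done() here cannot race.
    assert futures[0].done()
    return futures[0].result()


async def main():
    loop = asyncio.get_running_loop()
    futures = []
    await add_request(loop, futures)
    return await bookkeep(futures)


print(asyncio.run(main()))
```

If add_request ever gained an await between creating and resolving the future, this guarantee would no longer hold and the assert could fire.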

if self.rank == 0:
warnings.warn(f"Request {request_id} failed to be added to the engine due to errors.")

request.add_event_fail()
Contributor

in async_bookkeep, we used to also set request.status = Status.FAILED. We should update the status.

Contributor Author

Resolved

request.prompt = self.controller.tokenizer.detokenize(
request.prompt_tokens.tolist()
)
request.generated_text = self.controller.tokenizer.detokenize(request.generated_tokens)
Contributor

does detokenize() work fine even if generated_tokens is empty?

Contributor Author

It was working in my tests (before the request_entry change :) ), but you're right, there's no guarantee it'll always work for all tokenizers. And there's also no point in doing this detokenization if there are no generated tokens.

Addressed!
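The guard discussed here can be sketched as follows. The tokenizer and the helper are stand-ins (not the PR's actual code), with a tokenizer that deliberately rejects empty input to model the worst case:

```python
class StrictTokenizer:
    """Toy tokenizer that, like some real ones might, rejects empty input."""

    def detokenize(self, tokens):
        if not tokens:
            raise ValueError("empty token list")
        return " ".join(f"tok{t}" for t in tokens)


def set_generated_text(request, tokenizer):
    # Only detokenize when the request actually produced tokens;
    # otherwise fall back to an empty string and skip the call.
    if request.get("generated_tokens"):
        request["generated_text"] = tokenizer.detokenize(
            request["generated_tokens"]
        )
    else:
        request["generated_text"] = ""
    return request


tok = StrictTokenizer()
ok = set_generated_text({"generated_tokens": [1, 2]}, tok)
failed = set_generated_text({"generated_tokens": []}, tok)
print(ok["generated_text"], repr(failed["generated_text"]))
```

Besides avoiding a possible tokenizer error, the guard also skips a pointless detokenize call for failed requests that generated nothing.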

@tdene
Contributor Author

tdene commented Mar 11, 2026

/claude test


Labels

Final Review PR is in the "final review" stage


6 participants