Add drop_example flag to the RunInference and Model Handler #23266
Conversation
Run Python 3.9 PostCommit
Force-pushed from 6978a63 to 60435d8.
Codecov Report
@@            Coverage Diff             @@
##           master   #23266      +/-   ##
==========================================
- Coverage   73.59%   73.46%   -0.14%
==========================================
  Files         716      718       +2
  Lines       95338    95528     +190
==========================================
+ Hits        70162    70177      +15
- Misses      23880    24055     +175
  Partials     1296     1296
Assigning reviewers. If you would like to opt out of this review, comment `stop reviewer notifications`. R: @yeandy for label python. The PR bot will only process comments in the main thread (not review comments).
One more way to do this is to pass the flag through the …
I agree
If this gets merged before TensorRT, we'll need to remember to rebase that.
"""Runs inferences on a batch of examples. | ||
|
||
Args: | ||
batch: A sequence of examples or features. | ||
model: The model used to make inferences. | ||
inference_args: Extra arguments for models whose inference call requires | ||
extra parameters. | ||
drop_example: Enable this to drop the example from PredictionResult |
Suggested change:
- drop_example: Enable this to drop the example from PredictionResult
+ drop_example: Boolean flag indicating whether or not to drop the example from PredictionResult
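For illustration, here is a minimal sketch (not the actual Beam test code) of a custom `ModelHandler` honoring the parameter documented above. `FakeModel` and `FakeModelHandler` are hypothetical stand-ins, and the `run_inference` signature follows the one proposed in this PR:

```python
from typing import Any, Dict, Iterable, Optional, Sequence

from apache_beam.ml.inference.base import ModelHandler, PredictionResult


class FakeModel:
  """Hypothetical stand-in model: 'inference' is just example + 1."""
  def predict(self, example: int) -> int:
    return example + 1


class FakeModelHandler(ModelHandler[int, PredictionResult, FakeModel]):
  def load_model(self) -> FakeModel:
    return FakeModel()

  def run_inference(
      self,
      batch: Sequence[int],
      model: FakeModel,
      inference_args: Optional[Dict[str, Any]] = None,
      drop_example: Optional[bool] = False) -> Iterable[PredictionResult]:
    for example in batch:
      # With drop_example=True, the input is replaced by None in the result.
      yield PredictionResult(
          None if drop_example else example, model.predict(example))
```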
    pipeline = TestPipeline()
    examples = [1, 3, 5]
    model_handler = FakeModelHandlerWithPredictionResult()
Shouldn't this have `drop_example=True`?
We are passing `drop_example` via `base.RunInference`, which passes it to the `ModelHandler`.
More explanation: #23266 (comment)
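For context, a sketch of the usage pattern being discussed. `FakeModelHandler` is the illustrative handler sketched earlier in this thread (not the PR's actual test helper), and the `drop_example` keyword on `RunInference` is the plumbing this PR proposes:

```python
import apache_beam as beam
from apache_beam.ml.inference import base
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

with TestPipeline() as pipeline:
  examples = [1, 3, 5]
  predictions = (
      pipeline
      | beam.Create(examples)
      # drop_example is forwarded by RunInference to the ModelHandler.
      | base.RunInference(FakeModelHandler(), drop_example=True))
  # Each emitted PredictionResult should carry example=None.
  assert_that(
      predictions,
      equal_to([PredictionResult(None, e + 1) for e in examples]))
```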
Thanks! I missed seeing the `drop=True` in the test example 😄
LGTM
R: @pabloem for final approval
I'd vote we let that one merge first since it is the higher complexity change (and less easy for us to control). It's currently just blocked on Python PreCommit flakes, I think.
I know it came in after this PR, but could you please extend these changes to tensorrt_inference.py as well?
Run Python 3.8 PostCommit
stop reviewer notifications
@@ -272,7 +273,8 @@ def run_inference(
     return [
         PredictionResult(
-            x, [prediction[idx] for prediction in cpu_allocations]) for idx,
+            x if not drop_example else None,
+            [prediction[idx] for prediction in cpu_allocations]) for idx,
Can we use `_convert_to_result` here too? I don't think there's a reason these predictions can't be dictionaries, so this will handle that case automatically.
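For reference, a simplified sketch of what a `_convert_to_result`-style helper does. The dict-unpacking logic is an assumption based on the reviewer's description, not the exact Beam source:

```python
from apache_beam.ml.inference.base import PredictionResult


def convert_to_result(batch, predictions):
  # If the model returned a dict of per-key batches, re-slice it into one
  # {key: value} dict per example before pairing with the inputs.
  if isinstance(predictions, dict):
    predictions = [
        dict(zip(predictions.keys(), values))
        for values in zip(*predictions.values())
    ]
  return [PredictionResult(x, y) for x, y in zip(batch, predictions)]


# Dict-shaped predictions: two outputs for a two-example batch.
results = convert_to_result(
    ['a', 'b'], {'logits': [[0.1], [0.9]], 'label': [0, 1]})
assert results[1].inference == {'logits': [0.9], 'label': 1}
```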
Stopping reviewer notifications for this pull request: requested by reviewer
-        x, [prediction[idx] for prediction in cpu_allocations]) for idx,
-        x in enumerate(batch)
-    ]
+    predictions = []
PTAL @damccorm, I think this is how we use `_convert_to_result`.
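A self-contained sketch of that pattern, with `cpu_allocations` simulated as plain lists (in the real handler they are TensorRT output buffers) and a local stand-in for Beam's helper:

```python
from apache_beam.ml.inference.base import PredictionResult


def convert_to_result(batch, predictions):
  # Stand-in for Beam's helper (see the fuller sketch above).
  return [PredictionResult(x, y) for x, y in zip(batch, predictions)]


batch = ['ex0', 'ex1']
# Simulated TensorRT outputs: one list per output tensor, one value per example.
cpu_allocations = [[0.1, 0.2], [0.9, 0.8]]

predictions = []
for idx in range(len(batch)):
  # Gather the idx-th example's value from every output tensor.
  predictions.append([prediction[idx] for prediction in cpu_allocations])

results = convert_to_result(batch, predictions)
assert results[0] == PredictionResult('ex0', [0.1, 0.9])
```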
cc: @yeandy
That looks right to me, let's run the PostCommit as well though.
Run Python 3.8 PostCommit
@AnandInguva looks like there's a linting violation. If that is fixed and we have a successful postcommit run, I think this should be good to merge.
Run Python PreCommit
LGTM - will merge once we get past this flaky precommit
Revert "Add drop_example flag to the RunInference and Model Handler (apache#23266)". This reverts commit f477b85.
Fixes: #21444
Added a flag `drop_example`. This can be passed to the RunInference API to drop the example from the PredictionResult.

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Choose reviewer(s) and mention them in a comment (`R: @username`).
- Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.