
Add drop_example flag to the RunInference and Model Handler #23266

Merged: 12 commits merged into apache:master on Sep 18, 2022

Conversation

@AnandInguva (Contributor) commented Sep 15, 2022

Fixes: #21444

Added a drop_example flag that can be passed to the RunInference API to drop the example from the PredictionResult.

If drop_example is True, prediction_result.example will be None.
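
For context, a minimal usage sketch (the handler, model path, and input data below are illustrative assumptions, not taken from this PR; only the drop_example argument is what this change adds):

# Illustrative only: assumes a scikit-learn model saved at `model_path` and
# `examples` (numpy arrays) defined elsewhere; any supported ModelHandler
# would be used the same way.
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create(examples)
      | RunInference(
          model_handler=SklearnModelHandlerNumpy(model_uri=model_path),
          drop_example=True)  # each PredictionResult.example will be None
      | beam.Map(print))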

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Choose reviewer(s) and mention them in a comment (R: @username).
  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI.

@AnandInguva (Contributor, Author) commented:

Run Python 3.9 PostCommit

@codecov (bot) commented Sep 16, 2022

Codecov Report

Merging #23266 (8ccf608) into master (a8ca305) will decrease coverage by 0.13%.
The diff coverage is 55.00%.

@@            Coverage Diff             @@
##           master   #23266      +/-   ##
==========================================
- Coverage   73.59%   73.46%   -0.14%     
==========================================
  Files         716      718       +2     
  Lines       95338    95528     +190     
==========================================
+ Hits        70162    70177      +15     
- Misses      23880    24055     +175     
  Partials     1296     1296              
Flag Coverage Δ
python 83.19% <55.00%> (-0.22%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...thon/apache_beam/ml/inference/pytorch_inference.py 0.00% <0.00%> (ø)
...hon/apache_beam/ml/inference/tensorrt_inference.py 0.00% <0.00%> (ø)
sdks/python/apache_beam/ml/inference/base.py 95.83% <100.00%> (+0.18%) ⬆️
...thon/apache_beam/ml/inference/sklearn_inference.py 95.45% <100.00%> (-0.33%) ⬇️
...am/examples/inference/tensorrt_object_detection.py 0.00% <0.00%> (ø)
...ks/python/apache_beam/runners/worker/sdk_worker.py 88.94% <0.00%> (+0.31%) ⬆️
sdks/python/apache_beam/io/localfilesystem.py 91.72% <0.00%> (+0.75%) ⬆️
...ks/python/apache_beam/runners/worker/data_plane.py 89.26% <0.00%> (+1.69%) ⬆️
...python/apache_beam/runners/worker/worker_status.py 76.66% <0.00%> (+2.00%) ⬆️


@github-actions (Contributor) commented:

Assigning reviewers. If you would like to opt out of this review, comment assign to next reviewer:

R: @yeandy for label python.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@AnandInguva (Contributor, Author) commented:

One more way to do this would be to pass the flag through the ModelHandler. The reason I added it to RunInference is that RunInference emits the PredictionResult, so I thought it would be the appropriate place to control how the output (the prediction result) should look.

@damccorm (Contributor) commented:

> One more way to do this would be to pass the flag through the ModelHandler. The reason I added it to RunInference is that RunInference emits the PredictionResult, so I thought it would be the appropriate place to control how the output (the prediction result) should look.

I agree

@yeandy (Contributor) left a comment:

If this gets merged before TensorRT, we'll need to remember to rebase that.

"""Runs inferences on a batch of examples.

Args:
batch: A sequence of examples or features.
model: The model used to make inferences.
inference_args: Extra arguments for models whose inference call requires
extra parameters.
drop_example: Enable this to drop the example from PredictionResult
Review comment (Contributor):

Suggested change:
-  drop_example: Enable this to drop the example from PredictionResult
+  drop_example: Boolean flag indicating whether or not to drop the example from PredictionResult


pipeline = TestPipeline()
examples = [1, 3, 5]
model_handler = FakeModelHandlerWithPredictionResult()
Review comment (Contributor):
Shouldn't this have drop_example=True?

@AnandInguva (Contributor, Author) replied:
We are passing drop_example via base.RunInference, which passes it to the ModelHandler.

More explanation: #23266 (comment)
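
To illustrate the flow being described, here is a toy sketch (the handler name and the trivial "model" are invented for illustration and are not the PR's FakeModelHandlerWithPredictionResult): the flag given to RunInference is forwarded to the handler's run_inference, which decides whether each PredictionResult keeps its example.

# Hypothetical toy handler, showing where drop_example arrives.
from typing import Any, Iterable, Optional, Sequence

from apache_beam.ml.inference.base import ModelHandler, PredictionResult

class ToyModelHandler(ModelHandler[int, PredictionResult, Any]):
  def load_model(self) -> Any:
    return None  # no real model needed for this sketch

  def run_inference(
      self,
      batch: Sequence[int],
      model: Any,
      inference_args: Optional[dict] = None,
      drop_example: bool = False) -> Iterable[PredictionResult]:
    # The "prediction" is just example + 1; the example is kept or dropped
    # based on the flag, mirroring the change in this PR.
    return [
        PredictionResult(x if not drop_example else None, x + 1) for x in batch
    ]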

Reply (Contributor):
Thanks! I missed seeing the drop=True in the test example 😄

@AnandInguva (Contributor, Author) commented:

PTAL @yeandy @damccorm

@yeandy (Contributor) left a comment:

LGTM

@github-actions (Contributor) commented:

R: @pabloem for final approval

@damccorm (Contributor) commented:

> If this gets merged before TensorRT, we'll need to remember to rebase that.

I'd vote we let that one merge first since it is the higher-complexity change (and less easy for us to control). It's currently just blocked on Python PreCommit flakes, I think.

@damccorm (Contributor) left a comment:

I know it came in after this PR, but could you please extend these changes to tensorrt_inference.py as well?

@AnandInguva (Contributor, Author) commented:

Run Python 3.8 PostCommit

@AnandInguva (Contributor, Author) commented:

stop reviewer notifications

@@ -272,7 +273,8 @@ def run_inference(

     return [
         PredictionResult(
-            x, [prediction[idx] for prediction in cpu_allocations]) for idx,
+            x if not drop_example else None,
+            [prediction[idx] for prediction in cpu_allocations]) for idx,
Review comment (Contributor):
Can we use _convert_to_result here too? I don't think there's a reason these predictions can't be dictionaries, so this will handle that case automatically
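
For reference, a rough sketch of what a _convert_to_result-style helper does (the signature and body here are approximations for illustration, not copied from Beam): zip the batch with the predictions, handling both plain iterables and dictionaries of per-output lists, and build one PredictionResult per example.

# Approximation only; illustrates the dict-vs-iterable handling mentioned above.
from typing import Any, Dict, Iterable, List, Union

from apache_beam.ml.inference.base import PredictionResult

def convert_to_result_sketch(
    batch: Iterable,
    predictions: Union[Iterable, Dict[Any, Iterable]]) -> List[PredictionResult]:
  if isinstance(predictions, dict):
    # Turn {key: [v1, v2, ...]} (one list per output, each of batch length)
    # into one dict per example: [{key: v1, ...}, {key: v2, ...}, ...].
    predictions = [
        dict(zip(predictions.keys(), values))
        for values in zip(*predictions.values())
    ]
  return [PredictionResult(x, y) for x, y in zip(batch, predictions)]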

@github-actions (Contributor) commented:

Stopping reviewer notifications for this pull request: requested by reviewer

-            x, [prediction[idx] for prediction in cpu_allocations]) for idx,
-            x in enumerate(batch)
-        ]
+    predictions = []
@AnandInguva (Contributor, Author) replied:
PTAL @damccorm I think this is how we use _convert_to_result.

@AnandInguva (Contributor, Author) replied:
cc: @yeandy

@damccorm (Contributor) commented:

That looks right to me; let's run the PostCommit as well, though.

@damccorm (Contributor) commented:

Run Python 3.8 PostCommit

@damccorm (Contributor) commented:

@AnandInguva, looks like there's a linting violation. If that is fixed and we have a successful PostCommit run, I think this should be good to merge.


@damccorm (Contributor) commented:

Run Python PreCommit

@damccorm (Contributor) left a comment:

LGTM - will merge once we get past this flaky precommit

@damccorm merged commit f477b85 into apache:master on Sep 18, 2022
AnandInguva added a commit to AnandInguva/beam that referenced this pull request Sep 28, 2022
damccorm pushed a commit that referenced this pull request Oct 4, 2022
Development

Successfully merging this pull request may close these issues.

Add flag to drop example from PredictionResult (#21444)
3 participants