Add Triton Inference Server Support #34252
base: master
Conversation
Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`.

R: @jrmccluskey for label python.

The PR bot will only process comments in the main thread (not review comments).
Taking code from the JSON handler is fine, but not actually updating it to match the Triton inference use case isn't going to work. Please write some unit tests and an integration test (for the latter I can help you get resources stood up in apache-beam-testing to run against.)
```python
def _retrieve_endpoint(
    self, endpoint_id: str, location: str,
    is_private: bool) -> aiplatform.Endpoint:
  """Retrieves an AI Platform endpoint and queries it for liveness/deployed
  models.

  Args:
    endpoint_id: the numerical ID of the Vertex AI endpoint to retrieve.
    location: the GCP location of the Vertex AI endpoint.
    is_private: a boolean indicating if the Vertex AI endpoint is a private
      endpoint.
  Returns:
    An aiplatform.Endpoint object.
  Raises:
    ValueError: if endpoint is inactive or has no models deployed to it.
  """
  if is_private:
    endpoint: aiplatform.Endpoint = aiplatform.PrivateEndpoint(
        endpoint_name=endpoint_id, location=location)
    LOGGER.debug("Treating endpoint %s as private", endpoint_id)
  else:
    endpoint = aiplatform.Endpoint(
        endpoint_name=endpoint_id, location=location)
    LOGGER.debug("Treating endpoint %s as public", endpoint_id)

  try:
    mod_list = endpoint.list_models()
  except Exception as e:
    raise ValueError(
        "Failed to contact endpoint %s, got exception: %s" % (endpoint_id, e))

  if len(mod_list) == 0:
    raise ValueError("Endpoint %s has no models deployed to it." % endpoint_id)

  return endpoint
```
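Since the review asks for unit tests, here is a minimal sketch of how the no-models validation path above could be tested offline. The `check_deployed_models` helper is an illustrative stand-in that reproduces the validation logic, so the test runs without `google.cloud.aiplatform` installed; in the actual PR the test would target `_retrieve_endpoint` with the SDK mocked out.

```python
import unittest
from unittest import mock


def check_deployed_models(endpoint, endpoint_id):
    # Illustrative stand-in for the validation inside _retrieve_endpoint:
    # raises ValueError when the endpoint is unreachable or empty.
    try:
        mod_list = endpoint.list_models()
    except Exception as e:
        raise ValueError(f"Failed to contact endpoint {endpoint_id}: {e}")
    if len(mod_list) == 0:
        raise ValueError(f"Endpoint {endpoint_id} has no models deployed to it.")
    return endpoint


class RetrieveEndpointTest(unittest.TestCase):
    def test_empty_endpoint_raises(self):
        endpoint = mock.Mock()
        endpoint.list_models.return_value = []
        with self.assertRaises(ValueError):
            check_deployed_models(endpoint, "123")

    def test_populated_endpoint_returned(self):
        endpoint = mock.Mock()
        endpoint.list_models.return_value = [mock.Mock()]
        self.assertIs(check_deployed_models(endpoint, "123"), endpoint)
```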
Do Triton endpoints function correctly in this way?
```python
self.region = region
self.endpoint_name = endpoint_name
self.endpoint_url = (
    f"https://{region}-aiplatform.googleapis.com/v1/projects/"
    f"{project_id}/locations/{region}/endpoints/{endpoint_name}:predict")
self.is_private = private
```
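One point worth checking here: a Triton serving container behind a Vertex AI endpoint is typically invoked through the `:rawPredict` route (or `Endpoint.raw_predict` in the SDK) with a KServe v2 inference request, whereas the `:predict` route built above expects Vertex's `{"instances": ...}` JSON schema. A sketch of building such a request body, assuming a 2-D batch and an input name taken from the model's `config.pbtxt` (the name used here is illustrative):

```python
import json


def build_triton_body(input_name, batch, datatype="FP32"):
    # Sketch of a KServe v2 inference request for a Triton server.
    # `input_name` must match the model's config.pbtxt input name;
    # `batch` is a 2-D nested list, whose shape is inferred naively.
    return json.dumps({
        "inputs": [{
            "name": input_name,
            "shape": [len(batch), len(batch[0])],
            "datatype": datatype,
            "data": batch,
        }]
    }).encode("utf-8")
```

This body would then be posted to the `:rawPredict` URL (or passed as `body=` to `raw_predict`), with a `Content-Type: application/json` header.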
Are there distinctions between public and private Triton endpoints?
```python
def run_inference(
    self,
    batch: Sequence[Any],
    model: aiplatform.Endpoint,
```
This does not align with usage; an endpoint object is not the model name.
@jrmccluskey Can you explain why the model parameter should not be an `aiplatform.Endpoint`? Since load_model returns an Endpoint object, it seems logical to use it for Vertex AI's raw_predict method (e.g., with Triton).
@jrmccluskey Thank you for your response. I have written some unit tests for the Triton inference server, and have made changes to the inference code based on nvidia-triton-custom-container-prediction.ipynb.
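For reference, a duck-typed sketch of how `run_inference` could drive a Triton-backed endpoint through `raw_predict` (`aiplatform.Endpoint.raw_predict` takes `body` and `headers` and returns a `requests.Response`). All other names here are illustrative, and the input name must match the Triton model's `config.pbtxt`:

```python
import json
from typing import Any, Sequence


def run_triton_inference(
    batch: Sequence[Sequence[float]],
    endpoint: Any,
    input_name: str = "input__0"):
  # `endpoint` is anything exposing raw_predict(body=..., headers=...),
  # mirroring aiplatform.Endpoint.raw_predict; a fake works for tests.
  body = json.dumps({
      "inputs": [{
          "name": input_name,
          "shape": [len(batch), len(batch[0])],
          "datatype": "FP32",
          "data": [list(row) for row in batch],
      }]
  }).encode("utf-8")
  response = endpoint.raw_predict(
      body=body, headers={"Content-Type": "application/json"})
  # KServe v2 responses carry results under "outputs".
  return json.loads(response.content)["outputs"]
```

Keeping the signature duck-typed like this also makes the unit tests straightforward, since a fake endpoint object can assert on the request body it receives.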
If I ask questions or point out issues, resolving them without a comment explaining the code is not good practice.
@SaumilPatel03 any updates?
@chamikaramj I am a bit preoccupied right now, but I'll go ahead and convert this PR to draft in the meantime.
fixes: #31173
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.