
Add Triton Inference Server Support #34252

Draft · wants to merge 7 commits into master

Conversation

SaumilPatel03 (Contributor):

fixes: #31173


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

Contributor:

Assigning reviewers. If you would like to opt out of this review, comment "assign to next reviewer":

R: @jrmccluskey for label python.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@jrmccluskey (Contributor) left a comment:


Taking code from the JSON handler is fine, but not actually updating it to match the Triton inference use case isn't going to work. Please write some unit tests and an integration test (for the latter, I can help you get resources stood up in apache-beam-testing to run against).
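For illustration, a minimal, self-contained sketch of the kind of unit test that could cover the request-construction logic. build_kserve_v2_request is a hypothetical helper, not code from this PR; it builds a KServe v2 inference body of the sort a Triton server expects.

import json
import unittest


def build_kserve_v2_request(model_input, input_name="input_0"):
  # Hypothetical helper: builds a KServe v2 inference request body of the
  # kind a Triton server accepts. Tensor name and datatype are placeholders.
  return json.dumps({
      "inputs": [{
          "name": input_name,
          "shape": [len(model_input)],
          "datatype": "FP32",
          "data": model_input,
      }]
  }).encode("utf-8")


class BuildKserveV2RequestTest(unittest.TestCase):
  def test_builds_v2_body(self):
    body = json.loads(build_kserve_v2_request([1.0, 2.0]))
    self.assertEqual(body["inputs"][0]["shape"], [2])
    self.assertEqual(body["inputs"][0]["data"], [1.0, 2.0])


if __name__ == "__main__":
  unittest.main()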

Comment on lines +305 to +339
def _retrieve_endpoint(
    self, endpoint_id: str, location: str,
    is_private: bool) -> aiplatform.Endpoint:
  """Retrieves an AI Platform endpoint and queries it for liveness/deployed
  models.

  Args:
    endpoint_id: the numerical ID of the Vertex AI endpoint to retrieve.
    location: the GCP region the endpoint is deployed in.
    is_private: a boolean indicating if the Vertex AI endpoint is a private
      endpoint.
  Returns:
    An aiplatform.Endpoint object.
  Raises:
    ValueError: if endpoint is inactive or has no models deployed to it.
  """
  if is_private:
    endpoint: aiplatform.Endpoint = aiplatform.PrivateEndpoint(
        endpoint_name=endpoint_id, location=location)
    LOGGER.debug("Treating endpoint %s as private", endpoint_id)
  else:
    endpoint = aiplatform.Endpoint(
        endpoint_name=endpoint_id, location=location)
    LOGGER.debug("Treating endpoint %s as public", endpoint_id)

  try:
    mod_list = endpoint.list_models()
  except Exception as e:
    # ValueError does not %-format its arguments; interpolate explicitly.
    raise ValueError(
        "Failed to contact endpoint %s, got exception: %s" % (endpoint_id, e))

  if len(mod_list) == 0:
    raise ValueError("Endpoint %s has no models deployed to it." % endpoint_id)

  return endpoint
Contributor:

Do Triton endpoints function correctly in this way?
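For context, a hedged sketch (not this PR's code) of how a Triton-backed Vertex AI endpoint is typically invoked: Triton speaks the KServe v2 protocol, so requests go through raw_predict rather than predict. All identifiers are placeholders.

import json

from google.cloud import aiplatform

# Placeholders: substitute a real project, region, and numeric endpoint ID.
endpoint = aiplatform.Endpoint(
    endpoint_name="1234567890",
    project="my-project",
    location="us-central1")

# KServe v2 inference body; tensor name/shape/datatype depend on the model.
request_body = json.dumps({
    "inputs": [{
        "name": "input_0",
        "shape": [1, 3],
        "datatype": "FP32",
        "data": [[1.0, 2.0, 3.0]],
    }]
}).encode("utf-8")

response = endpoint.raw_predict(
    body=request_body, headers={"Content-Type": "application/json"})
print(response.json())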

self.region = region
self.endpoint_name = endpoint_name
self.endpoint_url = (
    f"https://{region}-aiplatform.googleapis.com/v1/projects/{project_id}"
    f"/locations/{region}/endpoints/{endpoint_name}:predict")
self.is_private = private
Contributor:

Are there distinctions between public and private Triton endpoints?
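One concrete difference worth noting (hedged, based on Vertex AI's documented handling of custom containers): a Triton container serves an arbitrary protocol, so the REST route is :rawPredict rather than the :predict URL built above, and a private endpoint is reached over a VPC-internal address rather than the public regional domain. A sketch of the public-endpoint URL, with placeholder values:

# All values are placeholders for illustration.
project_id = "my-project"
region = "us-central1"
endpoint_name = "1234567890"

# rawPredict, not predict: Vertex AI routes arbitrary-protocol (e.g. KServe
# v2 / Triton) requests through the rawPredict method.
raw_predict_url = (
    f"https://{region}-aiplatform.googleapis.com/v1/projects/{project_id}"
    f"/locations/{region}/endpoints/{endpoint_name}:rawPredict")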

def run_inference(
    self,
    batch: Sequence[Any],
    model: aiplatform.Endpoint,
Contributor:

This does not align with usage; an endpoint object is not the model name.

SaumilPatel03 (Contributor, Author):

@jrmccluskey Can you explain why the model parameter should not be an aiplatform.Endpoint? Since load_model returns an Endpoint object, it seems logical to use it for Vertex AI's raw_predict method (e.g., with Triton).
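For what it's worth, a sketch of how the Endpoint returned by load_model could feed run_inference via raw_predict, loosely following nvidia-triton-custom-container-prediction.ipynb. PredictionResult is Beam's RunInference result type; the tensor name, shape handling, and output unpacking are illustrative assumptions, not this PR's code.

import json
from typing import Any, Iterable, Optional, Sequence

from apache_beam.ml.inference.base import PredictionResult
from google.cloud import aiplatform


def run_inference_sketch(
    batch: Sequence[Any],
    model: aiplatform.Endpoint,  # the Endpoint handle from load_model
    inference_args: Optional[dict] = None,
) -> Iterable[PredictionResult]:
  # Assumes each batch element is a fixed-length numeric sequence.
  body = json.dumps({
      "inputs": [{
          "name": "input_0",  # placeholder tensor name
          "shape": [len(batch), len(batch[0])],
          "datatype": "FP32",
          "data": [list(x) for x in batch],
      }]
  }).encode("utf-8")
  response = model.raw_predict(
      body=body, headers={"Content-Type": "application/json"})
  # Assumes one output row per input element in the first output tensor.
  outputs = response.json()["outputs"]
  for example, prediction in zip(batch, outputs[0]["data"]):
    yield PredictionResult(example, prediction)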

SaumilPatel03 (Contributor, Author):

@jrmccluskey Thank you for your response. I have written some unit tests for the Triton inference server and have made changes to the inference code based on nvidia-triton-custom-container-prediction.ipynb.
Can you give me some resources for writing an integration test?

@jrmccluskey (Contributor) left a comment:

If I ask questions or point out issues, resolving them without a comment explaining the code is not good practice.

chamikaramj (Contributor):

@SaumilPatel03 Any updates?

SaumilPatel03 (Contributor, Author):

@chamikaramj I am a bit preoccupied right now, but I'll go ahead and convert this PR to a draft in the meantime.

SaumilPatel03 marked this pull request as draft on March 29, 2025 at 05:24.
Development

Successfully merging this pull request may close these issues.

[Feature Request]: Vertex AI Triton Inference Server Support