Add Triton Inference Server Support #34252


Closed · SaumilPatel03 wants to merge 10 commits

Conversation

SaumilPatel03 (Contributor):

fixes: #31173


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch):

  • Build python source distribution and wheels
  • Python tests
  • Java tests
  • Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

github-actions bot (Contributor):

Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`:

R: @jrmccluskey for label python.

Available commands:

  • `stop reviewer notifications` - opt out of the automated review tooling
  • `remind me after tests pass` - tag the comment author after tests pass
  • `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@jrmccluskey (Contributor) left a comment:

Taking code from the JSON handler is fine, but not actually updating it to match the Triton inference use case isn't going to work. Please write some unit tests and an integration test (for the latter, I can help you get resources stood up in apache-beam-testing to run against).
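
For illustration, a unit test for a remote handler like this typically mocks the prediction client instead of calling the service. Below is a minimal sketch, assuming the handler is named VertexAITritonModelHandler (as in the test file quoted later in this review) and that run_inference takes the client as its model argument; the constructor arguments, payload, and response shape are placeholders rather than the PR's actual API:

```python
import json
import unittest
from unittest.mock import MagicMock

from apache_beam.ml.inference.vertex_ai_inference import VertexAITritonModelHandler


class VertexAITritonModelHandlerTest(unittest.TestCase):
  def test_run_inference_parses_kserve_v2_response(self):
    # Fake raw_predict reply in KServe v2 JSON form, as Triton would return it.
    fake_response = MagicMock()
    fake_response.data = json.dumps({
        "outputs": [{
            "name": "output_0", "shape": [1], "datatype": "FP32", "data": [0.9]
        }]
    }).encode("utf-8")
    fake_client = MagicMock()
    fake_client.raw_predict.return_value = fake_response

    # Placeholder constructor arguments; the real signature may differ.
    handler = VertexAITritonModelHandler(
        endpoint_id="123", project="my-project", location="us-central1")
    results = list(handler.run_inference([{"data": [1.0, 2.0]}], fake_client))

    fake_client.raw_predict.assert_called_once()
    self.assertEqual(len(results), 1)


if __name__ == '__main__':
  unittest.main()
```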

Comment on lines 305 to 339
```python
def _retrieve_endpoint(
    self, endpoint_id: str,
    location: str,
    is_private: bool) -> aiplatform.Endpoint:
  """Retrieves an AI Platform endpoint and queries it for liveness/deployed
  models.

  Args:
    endpoint_id: the numerical ID of the Vertex AI endpoint to retrieve.
    is_private: a boolean indicating if the Vertex AI endpoint is a private
      endpoint
  Returns:
    An aiplatform.Endpoint object
  Raises:
    ValueError: if endpoint is inactive or has no models deployed to it.
  """
  if is_private:
    endpoint: aiplatform.Endpoint = aiplatform.PrivateEndpoint(
        endpoint_name=endpoint_id, location=location)
    LOGGER.debug("Treating endpoint %s as private", endpoint_id)
  else:
    endpoint = aiplatform.Endpoint(
        endpoint_name=endpoint_id, location=location)
    LOGGER.debug("Treating endpoint %s as public", endpoint_id)

  try:
    mod_list = endpoint.list_models()
  except Exception as e:
    raise ValueError(
        "Failed to contact endpoint %s, got exception: %s" % (endpoint_id, e))

  if len(mod_list) == 0:
    raise ValueError("Endpoint %s has no models deployed to it." % endpoint_id)

  return endpoint
```
Contributor:

Do Triton endpoints function correctly in this way?

```python
self.region = region
self.endpoint_name = endpoint_name
self.endpoint_url = f"https://{region}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{region}/endpoints/{endpoint_name}:predict"
self.is_private = private
```
Contributor:

Are there distinctions between public and private Triton endpoints?

```python
def run_inference(
    self,
    batch: Sequence[Any],
    model: aiplatform.Endpoint,
```
Contributor:

This does not align with usage; an endpoint object is not the model name.

Contributor Author:

@jrmccluskey Can you explain why the model parameter should not be an aiplatform.Endpoint? Since load_model returns an Endpoint object, it seems logical to use it for Vertex AI's raw_predict method (e.g., with Triton).

Contributor:

raw_predict isn't using an endpoint object; it uses a PredictionServiceClient (https://cloud.google.com/vertex-ai/docs/predictions/get-online-predictions#raw-predict-request), because you are forced to use the raw_predict API (https://cloud.google.com/vertex-ai/docs/predictions/using-nvidia-triton#deploy_the_model_to_endpoint).

Contributor:

You're still deploying the model to a Vertex endpoint, but that object's abstraction in the SDK is not useful here.
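
For illustration, the flow the linked docs describe looks roughly like the sketch below — a minimal example assuming a Triton model is already deployed to a Vertex AI endpoint and accepts a KServe v2 JSON payload; the project, region, endpoint ID, and tensor names are placeholders:

```python
import json

from google.api import httpbody_pb2
from google.cloud import aiplatform_v1

# The client must target the regional API endpoint.
client = aiplatform_v1.PredictionServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"})

# Placeholder resource name: projects/{project}/locations/{location}/endpoints/{id}
endpoint = client.endpoint_path(
    project="my-project", location="us-central1", endpoint="1234567890")

# Triton on Vertex AI speaks the KServe v2 inference protocol over raw_predict.
payload = {
    "inputs": [{
        "name": "input_0",
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [0.1, 0.2, 0.3, 0.4],
    }]
}
response = client.raw_predict(
    request=aiplatform_v1.RawPredictRequest(
        endpoint=endpoint,
        http_body=httpbody_pb2.HttpBody(
            data=json.dumps(payload).encode("utf-8"),
            content_type="application/json")))
print(json.loads(response.data))
```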

@SaumilPatel03 (Contributor Author):

@jrmccluskey Thank you for your response. I have written some unit tests for the Triton inference server and have made changes to the inference code based on nvidia-triton-custom-container-prediction.ipynb.
Can you give me some resources for writing an integration test?

@jrmccluskey (Contributor) left a comment:

If I ask questions or point out issues, resolving them without a comment explaining the code is not good practice.

@chamikaramj (Contributor):

@SaumilPatel03 any updates?

@SaumilPatel03 (Contributor Author):

@chamikaramj I am a bit preoccupied right now, but I'll go ahead and convert this PR to draft in the meantime.

SaumilPatel03 marked this pull request as draft on March 29, 2025 05:24.
SaumilPatel03 marked this pull request as ready for review on April 9, 2025 19:28.
github-actions bot (Contributor):

Reminder, please take a look at this PR: @jrmccluskey

github-actions bot (Contributor):

Assigning a new set of reviewers because this PR has gone too long without review. If you would like to opt out of this review, comment `assign to next reviewer`:

R: @damccorm for label python.

Available commands:

  • `stop reviewer notifications` - opt out of the automated review tooling
  • `remind me after tests pass` - tag the comment author after tests pass
  • `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)


github-actions bot commented May 7, 2025

Reminder, please take a look at this pr: @damccorm


damccorm commented May 8, 2025

R: @jrmccluskey

Assigning to Jack since he started to take a look. With that said, it looks like there are many failing precommits; @SaumilPatel03, please take a look at those.


github-actions bot commented May 8, 2025

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment `assign set of reviewers`.

@jrmccluskey (Contributor) left a comment:

Please address comments and fix precommit errors. I'm also not particularly confident that the code as written works with the actual service.

Comment on lines +1 to +9
```python
import unittest
from unittest.mock import patch, MagicMock, ANY, call
import json
from google.cloud import aiplatform
from apache_beam.ml.inference.vertex_ai_inference import VertexAITritonModelHandler
from apache_beam.ml.inference import utils
from apache_beam.ml.inference.base import PredictionResult
import numpy as np
import base64
```
Contributor:

Import order is wrong; the linting/formatting checks should have the correct order listed, but for reference you should be importing in at least two distinct blocks: native Python imports first, then third-party imports. These should be in alphabetical order within each block as well.
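
For reference, a sketch of that two-block ordering applied to the imports quoted above (the exact grouping rules are whatever Beam's linters enforce):

```python
# Block 1: Python standard library, alphabetized.
import base64
import json
import unittest
from unittest.mock import ANY, MagicMock, call, patch

# Block 2: third-party packages, alphabetized.
import numpy as np
from apache_beam.ml.inference import utils
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.ml.inference.vertex_ai_inference import VertexAITritonModelHandler
from google.cloud import aiplatform
```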

Comment on lines +33 to 35
```python
import numpy as np
MSEC_TO_SEC = 1000
from apache_beam.ml.inference.base import RemoteModelHandler
```
Contributor:

MSEC_TO_SEC should not be defined in the import block
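
For illustration, the fix is simply moving the constant below the imports (a sketch of the relevant lines):

```python
import numpy as np

from apache_beam.ml.inference.base import RemoteModelHandler

# Module-level constants go after the import block.
MSEC_TO_SEC = 1000
```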

```python
import logging
from collections.abc import Iterable
from collections.abc import Mapping
from collections.abc import Sequence
from typing import Any
from typing import Optional
from typing import Dict
```
Contributor:

Use the built-in dict type for hints instead of typing.Dict.
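
For illustration, with a hypothetical helper (the actual annotated functions in the PR may differ):

```python
from typing import Any, Optional


# Hypothetical function, for illustration only: on Python 3.9+ the built-in
# generics (dict, list, tuple) replace typing.Dict, typing.List, typing.Tuple.
def parse_response(raw: dict[str, Any]) -> dict[str, Optional[float]]:
  return {k: float(v) if v is not None else None for k, v in raw.items()}
```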

```python
def run_inference(
    self,
    batch: Sequence[Any],
    model: aiplatform.Endpoint,
```
Contributor:

raw_predict isn't using an endpoint object; it uses a PredictionServiceClient (https://cloud.google.com/vertex-ai/docs/predictions/get-online-predictions#raw-predict-request), because you are forced to use the raw_predict API (https://cloud.google.com/vertex-ai/docs/predictions/using-nvidia-triton#deploy_the_model_to_endpoint).

```python
def run_inference(
    self,
    batch: Sequence[Any],
    model: aiplatform.Endpoint,
```
Contributor:

You're still deploying the model to a Vertex endpoint, but that object's abstraction in the SDK is not useful here.

```python
    aiplatform.Endpoint object.
    """
    return self.endpoint

  def _retrieve_endpoint(
```
Contributor:

I cannot find any sort of discussion around public versus private Triton endpoints, but as I've said before, the aiplatform.Endpoint classes aren't what you should be using anyway.

Successfully merging this pull request may close: [Feature Request]: Vertex AI Triton Inference Server Support (#31173)