feat: Add prediction dedicated endpoint colab sample #3942
Conversation
Hello @TJ-Liu, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request introduces a new Colab notebook that guides users on how to get started with online prediction using dedicated endpoints in Vertex AI. The notebook covers creating a dedicated endpoint, deploying a TensorFlow model to it, making predictions using both the Python SDK and direct HTTP/gRPC requests, and exploring features like traffic splitting, custom timeouts, and request/response logging. The notebook also includes cleanup instructions to remove created resources.
Highlights
- Dedicated Endpoints: The notebook demonstrates how to create and use dedicated endpoints for online prediction, highlighting their benefits such as dedicated networking, optimized latency, larger payload support, longer timeouts, and Generative AI readiness.
- Prediction Methods: The notebook provides examples of making predictions using the Vertex AI SDK for Python, as well as direct HTTP/gRPC requests to the dedicated endpoint.
- Feature Exploration: The notebook showcases advanced features like traffic splitting between different model versions, customizing inference timeouts, and enabling request/response logging for monitoring and debugging.
- Chat Completion: The notebook provides instructions on how to use OpenAI client library to do chat completion.
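As context for the last highlight, here is a minimal sketch of how a chat-completion call against a dedicated endpoint might be wired up with the OpenAI client library. The endpoint ID, project number, region, and path suffix below are placeholder assumptions, not values taken from this PR:

```python
ENDPOINT_ID = "1234567890"    # hypothetical endpoint resource ID
PROJECT_NUMBER = "987654321"  # hypothetical project number
LOCATION = "us-central1"      # hypothetical region

def dedicated_endpoint_host(endpoint_id: str, project_number: str, location: str) -> str:
    """Builds the dedicated endpoint DNS name: {id}-{number}.{region}-aiplatform.googleapis.com."""
    return f"{endpoint_id}-{project_number}.{location}-aiplatform.googleapis.com"

host = dedicated_endpoint_host(ENDPOINT_ID, PROJECT_NUMBER, LOCATION)
print(host)

# Hedged sketch of the OpenAI-client call (requires the `openai` package and a
# valid access token; the base-URL suffix is left elided on purpose):
# from openai import OpenAI
# client = OpenAI(base_url=f"https://{host}/...", api_key=access_token)
# resp = client.chat.completions.create(model="...", messages=[{"role": "user", "content": "Hi"}])
```

Only the DNS-assembly helper runs standalone; the commented portion depends on a deployed model and credentials.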
Changelog
- notebooks/official/CODEOWNERS
- Added an entry to the CODEOWNERS file, assigning ownership of the new notebook to @tianjiaoliu.
- notebooks/official/prediction/get_started_with_dedicated_endpoint.ipynb
- Created a new Colab notebook that guides users on how to get started with online prediction using dedicated endpoints in Vertex AI.
- The notebook covers the following topics:
  - Installing the Vertex AI SDK and other required packages
  - Authenticating your notebook environment (Colab only)
  - Setting Google Cloud project information and initializing the Vertex AI SDK
  - Creating a dedicated endpoint
  - Deploying a TensorFlow model to the endpoint
  - Making predictions using the Python SDK and direct HTTP/gRPC requests
  - Exploring features like traffic splitting, custom timeouts, and request/response logging
  - Cleaning up created resources
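The prediction steps above all revolve around the standard Vertex AI online-prediction request body for TensorFlow models. As a small illustration (the helper name is my own, not from the notebook):

```python
import json

def build_predict_request(instances):
    """Wraps feature rows in the {"instances": [...]} body that Vertex AI
    online prediction expects for TensorFlow models."""
    if not isinstance(instances, list):
        raise TypeError("instances must be a list of feature rows")
    return json.dumps({"instances": instances})

body = build_predict_request([[1.0, 2.0, 3.0]])
print(body)  # {"instances": [[1.0, 2.0, 3.0]]}
```

The same body is used whether the request goes through the SDK's `predict` call or a raw HTTP POST to the endpoint.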
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
A dedicated endpoint's grace,
For predictions, time, and space.
No noisy neighbors near,
Just focused service here,
In Vertex AI's embrace.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces a new Colab notebook demonstrating how to use dedicated endpoints for online prediction with Vertex AI. The notebook covers creating endpoints, deploying models, making predictions using both the Python SDK and direct HTTP/gRPC requests, and managing traffic splits, custom timeouts, and request/response logging. Overall, the notebook provides a comprehensive guide to using dedicated endpoints. However, there are a few areas that could be improved for clarity and completeness.
Summary of Findings
- Missing Project Number: The notebook uses `PROJECT_ID` but not `PROJECT_NUMBER`, which is needed for constructing the dedicated endpoint DNS. This could lead to confusion for users who are not familiar with the difference between the two.
- Incomplete Chat Completion Example: The Chat Completion example is incomplete, with `...` indicating missing code. A more complete example would be beneficial.
- Stream Raw Predict Issues: The `stream_raw_predict` method is not actually a method of the endpoint object. It should be `endpoint.raw_predict` with streaming enabled. Also, the code does not correctly iterate through the stream responses.
- Inconsistent HTTP Request Examples: The HTTP request examples use `${DEDICATED_ENDPOINT}`, which is not defined in the notebook. It should be replaced with the actual dedicated endpoint DNS.
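To make the last finding concrete, here is a sketch of defining the DNS variable explicitly before issuing an HTTP predict request. All values are hypothetical placeholders; in the notebook they would come from the deployed endpoint:

```python
import json

# Hypothetical placeholder values, not values from the notebook.
PROJECT_ID = "my-project"
PROJECT_NUMBER = "123456789"
LOCATION = "us-central1"
ENDPOINT_ID = "987654321"

# Define the variable the examples reference, instead of the undefined ${DEDICATED_ENDPOINT}.
DEDICATED_ENDPOINT = f"{ENDPOINT_ID}-{PROJECT_NUMBER}.{LOCATION}-aiplatform.googleapis.com"

# Standard Vertex AI REST predict path, served from the dedicated DNS.
url = (
    f"https://{DEDICATED_ENDPOINT}/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/endpoints/{ENDPOINT_ID}:predict"
)
body = json.dumps({"instances": [[1.0, 2.0]]})
print(url)
# An actual request would POST `body` to `url` with an
# "Authorization: Bearer <access token>" header, e.g. via urllib.request.
```

This only assembles the request; sending it requires a deployed model and credentials.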
Merge Readiness
The pull request introduces a valuable new notebook. However, the issues identified above should be addressed before merging to ensure the notebook is accurate, complete, and easy to use. I am unable to directly approve this pull request, and recommend that others review and approve this code before merging. In particular, the issues related to the chat completion example and stream raw predict should be addressed before merging.
It's important to note that the dedicated DNS requires the project number, not the project ID. Consider adding a step to retrieve the project number and use that in the DNS string. Otherwise, users may get confused when the notebook doesn't work for them.
```python
# Get project number (the ! shell magic works in Colab/Jupyter)
PROJECT_NUMBER = !gcloud projects describe $PROJECT_ID --format='value(projectNumber)'
PROJECT_NUMBER = PROJECT_NUMBER[0]
# ...
# Dedicated endpoint DNS (endpoint.name holds the numeric endpoint ID)
dedicated_endpoint_dns = f"https://{endpoint.name}-{PROJECT_NUMBER}.{LOCATION}-aiplatform.googleapis.com"
```
Consider adding a check to see if the bucket already exists, and if so, skip the creation step. This will make the notebook more robust.
```python
import subprocess

try:
    subprocess.check_call(["gsutil", "mb", "-l", LOCATION, "-p", PROJECT_ID, BUCKET_URI])
except subprocess.CalledProcessError as e:
    print(f"Bucket {BUCKET_URI} already exists or another error occurred: {e}")
```
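A variant of the same idea using the google-cloud-storage client instead of shelling out to gsutil — a sketch assuming the library and credentials are available; only the URI-parsing helper runs standalone:

```python
def bucket_name_from_uri(bucket_uri: str) -> str:
    """Extracts the bare bucket name from a gs:// URI."""
    if not bucket_uri.startswith("gs://"):
        raise ValueError(f"not a GCS URI: {bucket_uri}")
    return bucket_uri[len("gs://"):].split("/", 1)[0]

print(bucket_name_from_uri("gs://my-bucket/models"))  # my-bucket

# Hedged sketch of the existence check with the storage client
# (requires google-cloud-storage and ambient credentials):
# from google.cloud import storage
# client = storage.Client(project=PROJECT_ID)
# name = bucket_name_from_uri(BUCKET_URI)
# if client.lookup_bucket(name) is None:
#     client.create_bucket(name, location=LOCATION)
```

Checking with `lookup_bucket` first distinguishes "already exists" from genuine errors, which the try/except around `gsutil mb` cannot.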
@TJ-Liu: It looks like your notebook requires Python 3.10. Please add the following in the introduction section. Thanks!
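One possible form for that introduction note, sketched as a runtime guard (the helper name and message are my own, not from the PR):

```python
import sys

def require_python(minimum=(3, 10)):
    """Raises RuntimeError if the running interpreter is older than `minimum`."""
    if sys.version_info[:2] < minimum:
        raise RuntimeError(
            f"This notebook requires Python {minimum[0]}.{minimum[1]}+; "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    return True

# The notebook would call require_python((3, 10)); demonstrated here against a
# floor any interpreter satisfies so the snippet runs everywhere.
print(require_python((3, 0)))  # True
```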