
@omrishiv omrishiv commented Oct 8, 2025

What does this PR do?


This PR:

  • enables an S3 Gateway endpoint in the VPC
  • creates an inference-chart template for copying models from Hugging Face to S3
  • bumps the vLLM templates to 0.10.2
  • enables RunAI model streaming for the GPU vLLM templates
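As a rough illustration of the last two items, a streaming-enabled vLLM template's values might look like the sketch below. This is only an approximation: the field names and bucket layout are hypothetical, not this blueprint's actual schema, though `--load-format runai_streamer` and `--model-loader-extra-config` are real vLLM options.

```yaml
# Hypothetical values excerpt -- illustrative field names, not the chart's real schema.
vllm:
  image:
    tag: v0.10.2                      # bumped vLLM version from this PR
  model: s3://my-models-bucket/meta-llama/Llama-3.1-8B-Instruct   # placeholder bucket/model
  extraArgs:
    - --load-format=runai_streamer    # stream weights concurrently instead of sequential file reads
    - --model-loader-extra-config={"concurrency": 16}   # number of parallel read streams (tunable)
```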

Motivation

The RunAI Model Streamer vastly improves model load times for vLLM. It can stream weights either directly from Hugging Face or from S3. Loading from S3 through a VPC Gateway endpoint is the fastest option, but even streaming directly from Hugging Face is a clear improvement over the default loader.
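The S3 path described above can be sketched in two steps: copy the model from Hugging Face into S3 once, then serve it with the streamer. The bucket name and model are placeholders; `huggingface-cli download`, `aws s3 sync`, and `vllm serve --load-format runai_streamer` are real commands, but exact flags for your environment may differ.

```shell
# One-time copy from Hugging Face to S3 (the VPC Gateway endpoint keeps
# the subsequent S3 reads fast and in-network). Placeholders throughout.
huggingface-cli download meta-llama/Llama-3.1-8B-Instruct --local-dir ./model
aws s3 sync ./model s3://my-models-bucket/meta-llama/Llama-3.1-8B-Instruct

# Serve directly from S3, streaming weights with the RunAI streamer.
vllm serve s3://my-models-bucket/meta-llama/Llama-3.1-8B-Instruct \
  --load-format runai_streamer
```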

More

  • Yes, I have tested the PR using my local account setup (Provide any test evidence report under Additional Notes)
  • Mandatory for new blueprints. Yes, I have added an example to support my blueprint PR
  • Mandatory for new blueprints. Yes, I have updated the website/docs or website/blog section for this feature
  • Yes, I ran pre-commit run -a with this PR. Link for installing pre-commit locally

For Moderators

  • E2E Test successfully complete before merge?

Additional Notes

@omrishiv force-pushed the enable-runai-model-streamer-for-vllm branch from d9a0878 to e1194e9 on October 8, 2025 at 00:51
@omrishiv force-pushed the enable-runai-model-streamer-for-vllm branch from e1194e9 to 924549c on October 8, 2025 at 00:52
