Enable reverse proxy in server-requesting pod #423
delavet wants to merge 1 commit into llm-d-incubation:main from
Conversation
1b84ee7 to 605acef
```go
// holds that starting position of a log chunk.
const LogStartPosParam = "startPos"

// InitProxy is the path for initializing the HTTP reverse proxy。
```
There's a strange period at the end here...
Yes. Please change it to a regular period (character code 0x2E).
```go
var ready atomic.Bool

var wg sync.WaitGroup
wg.Add(2)
```
Should this be changed to wg.Add(3)?
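For illustration, a minimal sketch of why the count matters (the `runWorkers` helper is hypothetical, not the PR's code): the argument to `wg.Add` must equal the number of `wg.Done` calls, otherwise `Wait` either hangs (count too high) or a goroutine panics with a negative counter (count too low).

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers launches n goroutines and waits for all of them to finish.
// wg.Add(n) must match the number of deferred wg.Done calls below.
func runWorkers(n int) int {
	var wg sync.WaitGroup
	wg.Add(n) // must equal the number of goroutines launched
	done := make(chan struct{}, n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			done <- struct{}{}
		}()
	}
	wg.Wait() // returns only after every goroutine has called Done
	close(done)
	count := 0
	for range done {
		count++
	}
	return count
}

func main() {
	fmt.Println(runWorkers(3))
}
```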
rubambiza left a comment
A few suggested changes for debugging and code readability. Not major blockers.
```go
url := fmt.Sprintf("http://%s:%s%s", requestingPod.Status.PodIP, adminPort, stubapi.InitProxy)
if err := doPostWithData(url, bytes.NewReader([]byte(fmt.Sprintf("{\"address\":\"%s\",\"port\":%d}",
	launcherIP, desiredPort)))); err != nil {
	logger.Error(err, "Failed to initialize requester proxy")
```
Perhaps we should include the requesting pod ip:port pair in the error for troubleshooting an environment with multiple requesters?
The error message should include both host:port pairs.
```go
// proxy is a lazy HTTP reverse proxy that only starts after receiving
// the first configuration request
type proxy struct {
```
The type being named proxy while also having a field named proxy is a bit confusing. I'd like to suggest making the type more specific to what it does in FMA, though I don't have an appropriate name off the top of my head.
I agree. But I think that this is moot because it should be a TCP proxy rather than an HTTP proxy.
```go
// The request body should contain a JSON object with "address"
// and "port" fields. After successful initialization,
// the proxy will forward requests to the configured target server.
const InitProxy = "/v1/proxy/init"
```
Dumb question, but is it possible for this single proxy server to route requests to the launcher mgmt API (/v2/vllm/instances....*) running on port 8001 (to get status of the vllm instances, fetch logs, etc.) in addition to the actual vllm inference endpoints?
The requester should not expose information about more than itself and its corresponding vllm instance.
```go
defer wg.Done()
err := proxy.Run(ctx, proxyPort)
if err != nil {
	logger.Error(err, "failed to start requester proxy server")
```
```
@@ -0,0 +1,187 @@
/*
Copyright 2025 The llm-d Authors.
```
New files should be created with the current year here.
```go
type ConfigRequest struct {
	Address string `json:"address"`
	Port    int    `json:"port"`
}
```
This should be declared in a common place where the dual-pods controller can use it too.
```go
WriteTimeout: 5 * time.Minute, // Long timeout for inference requests
IdleTimeout:  120 * time.Second,
```
Maybe these two timeouts should be configurable?
```go
}

// singleton instance initialized once at startup
var instance = &proxy{}
```
The level of pointer indirection here is just unhelpful complexity. It could be just

```go
var instance proxy
```

```go
// Try initialize server
if instance.initialized.Load() {
	http.Error(w, "proxy already initialized", http.StatusConflict)
	return
}
```
This code is unnecessary complexity. Just delete it.
```json
{"address": "10.244.1.5", "port": 8005}
```
The proxy also implements GET /v1/proxy/init, and that should be documented too.
```md
The reverse proxy operates as follows:

1. **Initialization**: When the dual-pods controller binds a
```
Since there is also a GET on the same path, the design should be modified to conform to REST. Define a schema for the resource at /v1/proxy/init (or maybe just /v1/proxy or /v1/proxy/config?), and define PUT and GET to write and read this resource. GET when uninitialized returns HTTP status 404.
```md
include those details.

#### Requester Reverse Proxy
```
HTTP is too specific. It forecloses things like HTTPS and HTTP 2 or 3. The proxy should be simply a TCP proxy.
MikeSpreitzer left a comment
I have some comments on the study in https://docs.google.com/document/d/1qX4KtcTJEfdatgVmtsveOMfjm883MG-HW-e7ddmlqkA
"20 req/sec" is cited as a concurrency level, but that is actually a rate. Please be accurate and give a short, sharp statement of the behavior of the benchmarking client(s).
Does every request go on a new TCP connection? If not then there is an additional thing to measure, the added latency in TCP connection setup. The measurement would be something like time from (a) client sending request to open connection to (b) client receiving first token from first request.
When doing performance studies we usually pay attention to the distribution of the result. Stuff like average, median, and high percentiles (90, 95, 99, 99.9). Of course, for high percentiles you need enough cycles to get statistically significant results.
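For illustration, a nearest-rank percentile over the latency samples can be computed with a few lines; the `percentile` helper is hypothetical, and as noted above, high percentiles like p99.9 need enough samples to be statistically meaningful.

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile (0..100) of samples using
// the nearest-rank method on a sorted copy of the input.
func percentile(samples []float64, p float64) float64 {
	s := append([]float64(nil), samples...)
	sort.Float64s(s)
	if len(s) == 0 {
		return 0
	}
	rank := int(p/100*float64(len(s))+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(s) {
		rank = len(s) - 1
	}
	return s[rank]
}

func main() {
	lat := []float64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10} // latency samples (ms)
	fmt.Println(percentile(lat, 50), percentile(lat, 90), percentile(lat, 99))
}
```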
Based on our discussion in #363, I have split PR #363 and am resubmitting this PR to introduce the first part of the changes: adding a reverse proxy for server-requesting pods.
I have re-conducted the reverse proxy overhead experiments, primarily updating the experimental parameter configurations and adding monitoring and analysis of pod resource usage. For details, please refer to the documentation below.
https://docs.google.com/document/d/1qX4KtcTJEfdatgVmtsveOMfjm883MG-HW-e7ddmlqkA/edit?usp=sharing
The part on dynamic port allocation in PR #363 can be considered for addition after Milestone 3.