Add support for publish port ranges #801
Conversation
@caztanj these are the cases. Can you think of how we can cover all the invalid test cases?
Valid cases
Invalid cases
@caztanj We hadn't done ranges as each port forward runs a dedicated Swift proxy task. Let's discuss the implications of that.
In Docker I think it's less resource intensive since it's just programming iptables. In theory on macOS it's possible to use BSD packet filter rules to do this but that requires admin privilege and we're not going to do anything that requires a privileged helper right now.
Excerpt from the change set under review:
```swift
var publishPorts = [PublishPort]()
for i in 0..<hostPortRangeEnd - hostPortRangeStart + 1 {
```
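For orientation, here is a hedged sketch of how that loop might expand the two ranges. `PublishPort`'s real initializer and the surrounding option parsing aren't visible in this excerpt, so the field names and the `ValidationError` type below are assumptions, not the PR's actual code:

```swift
// Sketch only: PublishPort's real fields aren't shown in the excerpt above.
struct PublishPort {
    var hostAddress: String
    var hostPort: Int
    var containerPort: Int
}

struct ValidationError: Error { let message: String }

func expandRanges(hostAddress: String,
                  hostPortRangeStart: Int, hostPortRangeEnd: Int,
                  containerPortRangeStart: Int, containerPortRangeEnd: Int) throws -> [PublishPort] {
    // Both ranges must describe the same number of ports, or the
    // element-wise pairing below is meaningless.
    guard hostPortRangeEnd - hostPortRangeStart == containerPortRangeEnd - containerPortRangeStart else {
        throw ValidationError(message: "host and container port ranges differ in length")
    }
    var publishPorts = [PublishPort]()
    for i in 0..<hostPortRangeEnd - hostPortRangeStart + 1 {
        publishPorts.append(PublishPort(hostAddress: hostAddress,
                                        hostPort: hostPortRangeStart + i,
                                        containerPort: containerPortRangeStart + i))
    }
    return publishPorts
}
```

The guard is the interesting part: without it, mismatched range lengths would silently shift the container-side mapping past the end of the requested range.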
What happens if I specify `-p 127.0.0.1:1024-65535:1024-65535`?
Thank you for your response!
I have a bunch of other things running on my machine, but if I specify `-p 7001-8020:7001-8020 -p 8022-49325:8022-49325 -p 58000-63000:58000-63000 -p 63765-65535:63765-65535`, for a total of 49092 ports, it works just fine and I cannot see any degraded performance.
Cool, thanks for doing the experiment.
Could you try the exact same command with Activity Monitor open and see what you see for memory utilization before and after?
Since everything's getting multiplexed down onto NIO and an event loop group, we might be able to do this. I'll do a little asking around and see if folks more expert than me in this area can see any gotchas.
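To make the multiplexing point concrete, here is a minimal SwiftNIO sketch, not the project's actual proxy code: any number of listeners can share one `MultiThreadedEventLoopGroup`, so each additional published port costs a channel plus bookkeeping rather than an OS thread.

```swift
import NIOCore
import NIOPosix

// Trivial per-connection handler standing in for the real proxy logic.
final class EchoHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        context.writeAndFlush(data, promise: nil) // echo bytes straight back
    }
}

let group = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)
let bootstrap = ServerBootstrap(group: group)
    .childChannelInitializer { channel in
        channel.pipeline.addHandler(EchoHandler())
    }

// Binding 1000 ports creates 1000 server channels, all multiplexed
// onto the same small set of event loop threads.
let channels = try (9000..<10000).map { port in
    try bootstrap.bind(host: "127.0.0.1", port: port).wait()
}
print("listening on \(channels.count) ports")
```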
Memory usage when not publishing any ports:
Virtual Machine Service for container-runtime-linux: 179.4MB
container-runtime-linux: 20.5MB
Memory usage when publishing the same ports as in my answer above:
Virtual Machine Service for container-runtime-linux: 179.6MB
container-runtime-linux: 170.4MB
So the memory usage does increase significantly, but I think 170.4MB is still acceptable, especially considering that most people won't publish that many ports.
Yeah, I'm not surprised by that. It works out to a little under 3.5 KB of memory per port forward (170.4 MB / 49092 forwards ≈ 3.47 KB).
The way NIO works I don't think you'd see much performance degradation other than what might arise from cache misses if you're sending data concurrently through a lot of different ports at once.
Appreciate you following up with these tests. One more to try: could you run the same memory experiment for the UDP case?
UDP is a bit different, as we need to carry a bit of "connection state" in an LRU cache. I don't think it's a dealbreaker, but it'd be good to characterize what goes on there.
I'm also working with some experts to review not your PR but our NIO port proxy implementation, to make sure this won't break under load and to see if we can reduce memory utilization.
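For readers unfamiliar with the UDP side: UDP has no connections, so a proxy has to remember which client each datagram flow belongs to, typically keyed by the client's source address, and evict idle flows. A minimal LRU sketch under that assumption (not the project's implementation; eviction here is O(n) for brevity, where a production cache would pair the dictionary with a linked list for O(1) eviction):

```swift
// Minimal LRU cache sketch for per-flow UDP "connection state".
struct LRUCache<Key: Hashable, Value> {
    private let capacity: Int
    private var store: [Key: (value: Value, tick: UInt64)] = [:]
    private var clock: UInt64 = 0

    init(capacity: Int) { self.capacity = capacity }

    mutating func get(_ key: Key) -> Value? {
        guard let entry = store[key] else { return nil }
        clock += 1
        store[key] = (entry.value, clock) // refresh recency on access
        return entry.value
    }

    mutating func set(_ key: Key, _ value: Value) {
        clock += 1
        store[key] = (value, clock)
        // Over capacity: drop the entry with the oldest access tick.
        if store.count > capacity,
           let oldest = store.min(by: { $0.value.tick < $1.value.tick })?.key {
            store.removeValue(forKey: oldest)
        }
    }
}
```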
The UDP case uses more memory:
Virtual Machine Service for container-runtime-linux: 179.5MB
container-runtime-linux: 222.4MB
Thanks, so roughly another 1 KB per proxy for the LRU cache entries and UDP proxy context ((222.4 - 170.4) MB / 49092 ≈ 1.1 KB).
I hope to know enough by tomorrow to say whether there's work we need to do on the proxy implementation to make sure we can scale reliably.
@jglogan I have some questions here.
@Ronitsabhaya75 Feel free to review the PR and ask those questions of @caztanj using comments on the change set! Those are all good questions. The ones I will speak to are the ones that relate to the underlying proxy implementation and not the PR.
If you try to publish a port that already has a listener, you'd get an error message like: `Error: internalError: "failed to bootstrap container" (cause: "internalError: "failed to bootstrap container server2 (cause: "unknown: "bind(descriptor:ptr:bytes:): Address already in use (errno: 48)"")"")`
With a solid proxy implementation it should perform about as well as any reasonably well-written SwiftNIO server; it's not an OS thread per proxy. @caztanj has provided some information on memory impact. What isn't known there is how much memory might get used for buffering if there's a lot of data in flight.
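On the open question about buffering: SwiftNIO exposes write-buffer watermarks and writability signals that a proxy can use to bound in-flight data. A hedged sketch follows; the watermark values are illustrative, not what this project actually configures.

```swift
import NIOCore
import NIOPosix

// Stop reading from one side while the write buffer sits above the high
// watermark, so in-flight data stays bounded under load.
final class BackpressureHandler: ChannelDuplexHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundIn = ByteBuffer

    func channelWritabilityChanged(context: ChannelHandlerContext) {
        if context.channel.isWritable {
            context.read() // resume reads once the buffer drains
        }
        context.fireChannelWritabilityChanged()
    }

    func read(context: ChannelHandlerContext) {
        if context.channel.isWritable {
            context.read() // only pull more data when we can flush it
        }
    }
}

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let bootstrap = ServerBootstrap(group: group)
    .childChannelOption(ChannelOptions.writeBufferWaterMark,
                        value: ChannelOptions.Types.WriteBufferWaterMark(low: 16 * 1024,
                                                                         high: 64 * 1024))
    .childChannelInitializer { channel in
        channel.pipeline.addHandler(BackpressureHandler())
    }
```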
@caztanj Had a little more time to think about this and do some consultation. I have one remaining concern which is that this can create a very large number of PublishPort objects, resulting in huge container configuration files and ls/inspect payloads. Would you be willing to create a follow-up issue and PR to address that concern? What I had in mind was this - what do you think?
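The proposal linked from "this" isn't captured here, so the following is purely illustrative of one way to address the concern: persist the range itself in the container configuration and expand it only when the proxies start, so a 49092-port publish stays one entry in `ls`/`inspect` payloads instead of tens of thousands of `PublishPort` objects.

```swift
// Hypothetical shape only; not the proposal referenced above.
struct PublishPortRange: Codable {
    var hostAddress: String
    var hostPortStart: UInt16
    var hostPortEnd: UInt16
    var containerPortStart: UInt16 // container-side end is implied by the range length
    var proto: String              // "tcp" or "udp"

    /// Expand lazily at proxy-start time instead of persisting every port.
    var hostPorts: ClosedRange<UInt16> { hostPortStart...hostPortEnd }
}
```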
That sounds great, but I am a bit busy in the coming months, so I don't think I will have time to work on the follow-up issue.
Type of Change
Motivation and Context
Adding many `-p` arguments, one for each port, gets a bit annoying. Adding an entire range of ports with one `-p` argument is much nicer.
Testing