Conversation

@caztanj commented Oct 23, 2025

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update

Motivation and Context

Adding many -p arguments, one for each port, gets a bit annoying. Adding an entire range of ports with one -p argument is much nicer.
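
For example (illustrative invocation; the image name is a placeholder), instead of repeating the flag for every port:

container run -p 8080:80 -p 8081:81 -p 8082:82 myimage

the same mapping can be written as one range:

container run -p 8080-8082:80-82 myimage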

Testing

  • Tested locally
  • Added/updated tests
  • Added/updated docs

@Ronitsabhaya75 (Contributor) commented Oct 23, 2025

@caztanj these are the cases I can think of. Can you look at how we can cover all the invalid test cases?

Valid cases
✓ Single port: "8080:80"
✓ Small range: "8080-8082:80-82"
✓ Large range: "8000-8100:9000-9100"
✓ With protocol: "8080-8082:80-82/udp"
✓ With host IP: "127.0.0.1:8080-8082:80-82"

Invalid cases
✗ Reversed range: "8082-8080:80"
✗ Mismatched sizes: "8080-8090:80-82"
✗ Invalid port numbers: "0-10:80-90"
✗ Out of bounds: "65530-65540:80-90"
✗ Malformed: "8080-:80-82"
✗ Non-numeric: "abc-def:80-82"
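
A rough sketch of validation logic covering the invalid shapes above (standalone Swift; parseRange, validate, and PortRangeError are illustrative names, not the PR's actual code; the optional host-IP prefix and /udp suffix are left out for brevity):

enum PortRangeError: Error {
    case malformed, nonNumeric, outOfBounds, reversedRange, mismatchedSizes
}

// Parses one side of the mapping, e.g. "8080" or "8080-8082"; "8080-" and "-80" are malformed.
func parseRange(_ text: Substring) throws -> ClosedRange<Int> {
    let parts = text.split(separator: "-", omittingEmptySubsequences: false)
    guard (1...2).contains(parts.count), parts.allSatisfy({ !$0.isEmpty }) else {
        throw PortRangeError.malformed
    }
    let ports = try parts.map { part -> Int in
        guard let value = Int(part) else { throw PortRangeError.nonNumeric }        // "abc"
        guard (1...65535).contains(value) else { throw PortRangeError.outOfBounds } // "0", "65540"
        return value
    }
    guard ports.first! <= ports.last! else { throw PortRangeError.reversedRange }   // "8082-8080"
    return ports.first!...ports.last!
}

// Checks that both sides are well formed and that the ranges are the same size.
func validate(_ spec: String) throws {
    let sides = spec.split(separator: ":", omittingEmptySubsequences: false)
    guard sides.count == 2 else { throw PortRangeError.malformed }
    let host = try parseRange(sides[0])
    let container = try parseRange(sides[1])
    guard host.count == container.count else { throw PortRangeError.mismatchedSizes } // "8080-8090:80-82"
}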

@jglogan (Contributor) left a comment


@caztanj We hadn't done ranges as each port forward runs a dedicated Swift proxy task. Let's discuss the implications of that.

In Docker I think it's less resource-intensive, since it's just programming iptables. In theory, on macOS it's possible to use BSD packet filter rules to do this, but that requires admin privilege, and we're not going to do anything that requires a privileged helper right now.

}

var publishPorts = [PublishPort]()
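// One PublishPort per port in the host range; `..<` binds looser than `-`/`+`, so the upper bound is (end - start + 1).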
for i in 0..<hostPortRangeEnd - hostPortRangeStart + 1 {
Contributor

What happens if I specify -p 127.0.0.1:1024-65535:1024-65535?

Author

Thank you for your response!

I have a bunch of other things running on my machine, but if I specify -p 7001-8020:7001-8020 -p 8022-49325:8022-49325 -p 58000-63000:58000-63000 -p 63765-65535:63765-65535, for a total of 49092 ports, it works just fine and I cannot see any degraded performance.

Contributor

Cool, thanks for doing the experiment.

Could you try the exact same command with Activity Monitor open and see what you see for memory utilization before and after?

Contributor

Since everything's getting multiplexed down onto NIO and an event loop group, we might be able to do this. I'll do a little asking around and see if folks more expert than me in this area can see any gotchas.
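
As a rough illustration of the multiplexing point (a standalone SwiftNIO sketch, not this repo's proxy code): many listeners can share one small event loop group, so each additional published port costs channel bookkeeping rather than a thread.

import NIOCore
import NIOPosix

// Echo handler stands in for the real forwarding handlers.
final class EchoHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        context.write(data, promise: nil)
    }
    func channelReadComplete(context: ChannelHandlerContext) {
        context.flush()
    }
}

func bindMany() throws {
    // One small group services every listener and every accepted connection.
    let group = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)
    defer { try? group.syncShutdownGracefully() }

    let bootstrap = ServerBootstrap(group: group)
        .childChannelInitializer { channel in
            channel.pipeline.addHandler(EchoHandler())
        }

    // 64 listeners, all multiplexed onto the same few threads.
    let channels = try (9000...9063).map { port in
        try bootstrap.bind(host: "127.0.0.1", port: port).wait()
    }
    try channels.forEach { try $0.closeFuture.wait() }
}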

Author

Memory usage when not publishing any ports:
Virtual Machine Service for container-runtime-linux: 179.4MB
container-runtime-linux: 20.5MB

Memory usage when publishing the same ports as in my answer above:
Virtual Machine Service for container-runtime-linux: 179.6MB
container-runtime-linux: 170.4MB

So the memory usage does increase significantly, but I think 170.4MB is still acceptable, especially considering that most people won't publish that many ports.

Contributor

Yeah, I'm not surprised by that. It works out to a little under 3.5K worth of memory per port forward.
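
For reference, that estimate follows from the measurements above: 170.4 MB across the ~49,000 forwards works out to roughly 3.5 KB each (a bit less if you subtract the 20.5 MB baseline).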

The way NIO works, I don't think you'd see much performance degradation other than what might arise from cache misses if you're sending data concurrently through a lot of different ports at once.

Contributor

Appreciate you following up with these tests. One other one to try: could you do the same memory experiment for the UDP case?

UDP is a bit different as we need to carry a bit of "connection state" in an LRU cache. I don't think it's a dealbreaker but it'd be good to characterize what goes on there.

I'm also working with some experts to review not your PR, but our NIO port proxy implementation, to make sure this won't break under load and see if we can reduce memory utilization.
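
For readers unfamiliar with why UDP needs this, here is a minimal sketch of the kind of per-flow state such a cache would hold (illustrative only; these are not the actual proxy types):

import NIOCore

// UDP has no connections, so the proxy must remember which published port and guest-side
// channel each client address belongs to, and evict entries that go idle.
struct UDPFlowKey: Hashable {
    let clientAddress: SocketAddress   // source of the datagram
    let hostPort: Int                  // published port it arrived on
}

struct UDPFlowState {
    var guestChannel: any Channel      // channel used to relay toward the container
    var lastSeen: NIODeadline          // for idle expiry
}

// A tiny capacity-bounded LRU; the least recently used flow is dropped when full.
struct LRUCache<Key: Hashable, Value> {
    private let capacity: Int
    private var storage: [Key: Value] = [:]
    private var order: [Key] = []      // front = least recently used

    init(capacity: Int) { self.capacity = capacity }

    mutating func value(forKey key: Key) -> Value? {
        guard let value = storage[key] else { return nil }
        touch(key)
        return value
    }

    mutating func set(_ value: Value, forKey key: Key) {
        if storage[key] == nil, storage.count >= capacity, let oldest = order.first {
            storage.removeValue(forKey: oldest)
            order.removeFirst()
        }
        storage[key] = value
        touch(key)
    }

    private mutating func touch(_ key: Key) {
        if let index = order.firstIndex(of: key) { order.remove(at: index) }
        order.append(key)
    }
}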

Author

The UDP case uses more memory:
Virtual Machine Service for container-runtime-linux: 179.5MB
container-runtime-linux: 222.4MB

@jglogan (Contributor) commented Oct 27, 2025

Thanks, so another 1KB per proxy for the LRU cache entries and UDP proxy context.
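
From the numbers above, assuming the UDP run published the same ~49,000 ports as the TCP run: (222.4 - 170.4) MB across them is roughly 1 KB per forward.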

I hope to know enough by tomorrow to say whether there's work we need to do on the proxy implementation to make sure we can scale reliably.

@Ronitsabhaya75 (Contributor) commented

@jglogan I have some questions here:

  1. Is there a maximum range limit?
  2. What happens if ranges are mismatched?
  3. How are duplicate ports handled?
  4. What's the performance impact?
  5. Are there integration tests?
  6. What happens on cleanup?

@jglogan (Contributor) commented Oct 27, 2025

I have some questions here

@Ronitsabhaya75 Feel free to review the PR and ask those questions of @caztanj using comments on the change set!

Those are all good questions. The ones I will speak to are the ones that relate to the underlying proxy implementation and not the PR.

How are duplicate ports handled?

If you try to publish a port that already has a listener, you'd get an error message like:

Error: internalError: "failed to bootstrap container" (cause: "internalError: "failed to bootstrap container server2 (cause: "unknown: "bind(descriptor:ptr:bytes:): Address already in use (errno: 48)"")"")

What's the performance impact?

With a solid proxy implementation it should perform about as well as any reasonably well-written SwiftNIO server. It's not an OS thread per proxy. @caztanj has provided some information on memory impact; what isn't known there is how much memory might get used for buffering if there's a lot of data in flight.

@jglogan (Contributor) commented Oct 29, 2025

@caztanj I had a little more time to think about this and do some consultation.

I have one remaining concern, which is that this can create a very large number of PublishPort objects, resulting in huge container configuration files and ls/inspect payloads. Would you be willing to create a follow-up issue and PR to address that concern? What I had in mind is below; what do you think?

  • In ContainerClient, enforce a limit on the number of PublishPort descriptors allowed in a bundle. I think a limit of 64 is a reasonable start, and we can adjust it later if there are cases that require more.
  • Accumulate host port numbers into a Set in ContainerClient when accumulating PublishPort descriptors, and signal an error if duplicate ports or overlapping ranges are found.
  • Make a backward-compatible extension to PublishPort with a new count field that indicates the range size, along with a decoder that defaults count to 1 for legacy bundles (sketched below).
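
A rough sketch of what the second and third bullets could look like (field names and the error type here are guesses, not the actual ContainerClient types):

// Hypothetical shape; the real PublishPort carries more fields.
struct PublishPort: Codable {
    var hostAddress: String
    var hostPort: Int
    var containerPort: Int
    var proto: String
    var count: Int   // new: number of consecutive ports, starting at hostPort/containerPort

    enum CodingKeys: String, CodingKey {
        case hostAddress, hostPort, containerPort, proto, count
    }

    init(from decoder: Decoder) throws {
        let values = try decoder.container(keyedBy: CodingKeys.self)
        hostAddress = try values.decode(String.self, forKey: .hostAddress)
        hostPort = try values.decode(Int.self, forKey: .hostPort)
        containerPort = try values.decode(Int.self, forKey: .containerPort)
        proto = try values.decode(String.self, forKey: .proto)
        // Legacy bundles have no `count` key; treat them as publishing a single port.
        count = try values.decodeIfPresent(Int.self, forKey: .count) ?? 1
    }
}

// Duplicate / overlap detection while accumulating descriptors.
struct DuplicatePortError: Error { let port: Int }

func checkForOverlaps(_ publishPorts: [PublishPort]) throws {
    var seen = Set<Int>()
    for publish in publishPorts {
        for port in publish.hostPort..<(publish.hostPort + publish.count) {
            guard seen.insert(port).inserted else { throw DuplicatePortError(port: port) }
        }
    }
}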

@caztanj (Author) commented Nov 1, 2025

That sounds great, but I am a bit busy in the coming months, so I don't think I will have time to work on the follow-up issue.
