Conversation

@maggie44 (Contributor) commented Dec 11, 2025

Refactors the CA pool handling to reduce memory usage when loading large config files on low-memory devices. The benchmarks below compare the old full-slice parsing (before) with the new streaming parser (after):

| CAs | Time/op (before → after) | Mem/op (before → after) | Allocs/op (before → after) |
| --- | --- | --- | --- |
| 100 | 4.13 ms → 4.72 ms | 1.46 MB → 166 KB | 1,596 → 1,493 |
| 250 | 9.90 ms → 9.61 ms | 8.75 MB → 330 KB | 3,576 → 3,300 |
| 500 | 20.8 ms → 18.96 ms | 33.4 MB → 667 KB | 6,840 → 6,314 |
| 1000 | 45.5 ms → 37.0 ms | 129 MB → 1.36 MB | 13,347 → 12,343 |
| 5000 | 337 ms → 187.37 ms | 3.14 GB → 6.78 MB | 65,415 → 60,495 |
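
To make the approach concrete, here is a minimal sketch of streaming PEM decoding in Go, assuming the CA bundle is a concatenated PEM file. This is not the actual PR code; `loadCAsStreaming` and `AddCAFunc` are illustrative names. The key point is that `encoding/pem.Decode` can walk the bundle one block at a time, so the parser never has to hold a slice of every decoded block:

```go
package capool

import (
	"encoding/pem"
	"fmt"
)

// AddCAFunc consumes one decoded CA block; in Nebula this would be the
// step that parses and registers a single CA certificate. The name is
// illustrative, not the project's actual API.
type AddCAFunc func(block *pem.Block) error

// loadCAsStreaming decodes one PEM block at a time from the bundle and
// hands it to addCA, so only the current block (plus the shrinking input
// slice) is live at any point, rather than a slice of every block.
func loadCAsStreaming(pemBytes []byte, addCA AddCAFunc) error {
	for {
		block, rest := pem.Decode(pemBytes)
		if block == nil {
			// No further PEM blocks; any leftover bytes are trailing
			// whitespace or non-PEM data.
			break
		}
		if err := addCA(block); err != nil {
			return fmt.Errorf("adding CA: %w", err)
		}
		pemBytes = rest
	}
	return nil
}
```

Decoding into a full slice up front keeps every block reachable until parsing finishes, which lines up with the much higher peak allocations in the before numbers.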

@JackDoan (Collaborator) commented:

Hi @maggie44!

I'm still reading through this, but I'm really curious about your use case. How many CAs do you typically need to load (and why so many)?

One of the sort-of-unwritten assumptions throughout Nebula is that while there should always be more than one CA, there are probably far fewer than 100. If you do have that many, it'd be good for us to know so we can consider it moving forward!

On the other hand, if you just cranked N up to show the difference in the benchmark, that would make sense too.

@maggie44 (Contributor, PR author) commented:

Hi @JackDoan,

For my use case, cranking N up to levels like 5000 is mostly to show the difference. But there is value in having that many CAs: it allows one Lighthouse to serve many different isolated Nebula networks. Hosts accept traffic from the Lighthouse (CA: LH1), and the Lighthouse accepts traffic from all hosts (CA: H1, CA: H2, CA: H3), but hosts don't accept traffic from each other, which gives you many small isolated networks behind one Lighthouse.
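
As a rough illustration of that trust layout in Nebula's YAML config: the paths and CA names below are made up, and this is a sketch of the trust relationships only, not a working deployment.

```yaml
# Lighthouse config: trusts every per-group host CA, so it accepts
# handshakes from hosts in any group. Paths and names are illustrative.
pki:
  ca: /etc/nebula/host-cas.pem      # concatenated PEM bundle: CA H1, CA H2, CA H3, ...
  cert: /etc/nebula/lighthouse.crt  # signed by CA LH1
  key: /etc/nebula/lighthouse.key
```

```yaml
# Host config (group 1): trusts only the Lighthouse's CA, so it accepts
# the Lighthouse but not hosts signed by other groups' CAs.
pki:
  ca: /etc/nebula/lh1-ca.pem        # CA LH1 only
  cert: /etc/nebula/host1.crt       # signed by CA H1
  key: /etc/nebula/host1.key
```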

Managing the CAs for traffic handling is very efficient and looks like it could comfortably scale to that many CAs. Reading the CAs in, though, is memory intensive, hence this change.
