
Support custom fullsweep_after #144

Open
rodrigues wants to merge 3 commits into pepsico-ecommerce:master from rodrigues:fullsweep_after

Conversation

@rodrigues (Contributor) commented Feb 25, 2026

Hello!

With this PR, init/1 optionally calls :erlang.process_flag(:fullsweep_after, n) when the option is present in opts.

This lets consumers force aggressive GC on long-lived pool workers that accumulate memory.

https://www.erlang.org/doc/apps/erts/erlang.html#spawn_opt/4
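For illustration, the option handling could look like the sketch below. This is a minimal, hypothetical version assuming opts is a keyword list; the PoolWorker module name and state shape are placeholders, not the library's actual code (which lives in this PR's diff).

```elixir
defmodule PoolWorker do
  # Hypothetical sketch: apply :fullsweep_after only when the caller
  # passed it. A value of 0 is the most aggressive setting, forcing a
  # full-sweep (rather than generational) collection on every GC.
  def init(opts) do
    case Keyword.fetch(opts, :fullsweep_after) do
      {:ok, n} when is_integer(n) and n >= 0 ->
        :erlang.process_flag(:fullsweep_after, n)

      :error ->
        :ok
    end

    {:ok, %{opts: opts}}
  end
end
```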

[Screenshot attached: 2026-02-25 at 22:42:02]

I checked locally that these processes have {:fullsweep_after, 0} when I pass that option in the snowflake opts.

What do you think?

Cheers,

Victor

@windsurf-bot (bot) left a comment

Looks good to me 🤙

💡 To request another review, post a new comment with "/windsurf-review".

@rodrigues (Contributor, Author)

/windsurf-review

@windsurf-bot (bot) left a comment

Looks good to me 🤙


@mphfish (Contributor) commented Feb 26, 2026

Thanks @rodrigues for your contribution! I'm curious about your use case. Forcing more aggressive GC feels like it might be band-aiding over a more serious memory leak, though.

Would it be possible for you to give me a little insight into your query patterns? Looking at our internal memory usage, I don't see anything close to those numbers, so I'd love to try to reproduce this and see if there is something more fundamental going on.

@mphfish (Contributor) commented Feb 27, 2026

Hey @rodrigues, your PR prompted me to take a deeper look at some of the memory usage, so thanks!

We have version 1.2.1 out that might solve the problem for you. I'm not fully opposed to supporting additional configuration via fullsweep_after, but want to make sure we are solving the right problem.

Can you give 1.2.1 a shot and let me know how it's working for you?

@rodrigues (Contributor, Author)

Thank you for looking into this, @mphfish!

I was trying to get more information on the binary leaks to share with you, but didn't have the time.

I think hibernate will resolve this issue; I'll give it a shot and let you know. Thanks!
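(For context, hibernation can be opted into as sketched below. This is a minimal, hypothetical example assuming a plain GenServer worker; whether 1.2.1 uses the hibernate_after start option or returns :hibernate from callbacks is not shown in this thread, and the Worker name and timeout are placeholders. Hibernation runs a full garbage collection and shrinks the heap, which reclaims the kind of accumulated binary memory discussed above.)

```elixir
defmodule Worker do
  use GenServer

  def start_link(opts) do
    # hibernate_after: hibernate this process after 15s of inactivity,
    # triggering a full GC and compacting the heap to a minimal size.
    GenServer.start_link(__MODULE__, opts, hibernate_after: 15_000)
  end

  @impl true
  def init(opts), do: {:ok, opts}
end
```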

@rodrigues (Contributor, Author)

Working great @mphfish, problem solved 👌

Thank you!
