wip: Raw structure of improvements to gun that run in production#690

Open
speeddragon wants to merge 158 commits into neo/edge from
impr/gun
Conversation

@speeddragon speeddragon commented Feb 26, 2026

In this PR, there are a few improvements to how gun handles connections.

  • Connections are fetched using ETS instead of going through the gen_server.
  • Support for multiple connections to the same peer.
  • Separate connection pools for read (GET/HEAD) and write (POST/PUT) requests.
    • This was most useful when handling S3 reads/writes; it is less clear how useful it is for this case.
    • The pool sizes can be configured in the config file by setting `conn_pool_read_size` and `conn_pool_write_size`.
  • Improved and fixed metrics for both the gun and httpc clients.
    • Added a category to separate duration metrics by endpoint (`/tx`, `/chunk`, etc.).
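For reference, a minimal sketch of how the two pool-size options might appear in a node's config map. Only the `conn_pool_read_size` and `conn_pool_write_size` keys come from this PR; the values and surrounding shape are illustrative:

```erlang
%% Illustrative config fragment; values are examples only.
#{
    conn_pool_read_size => 4,   %% per-peer connections for GET/HEAD
    conn_pool_write_size => 2   %% per-peer connections for POST/PUT
}
```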

TODO

  • Improve code structure.
  • Review remaining `TODO:` markers.

@speeddragon speeddragon force-pushed the impr/gun branch 2 times, most recently from acd6a2f to bed3351 Compare February 27, 2026 00:14
@speeddragon speeddragon changed the base branch from edge to neo/edge February 27, 2026 00:14
Comment on lines +22 to +25
setup_conn(Opts) ->
    ConnPoolReadSize = hb_maps:get(conn_pool_read_size, Opts, ?DEFAULT_CONN_POOL_READ_SIZE),
    ConnPoolWriteSize = hb_maps:get(conn_pool_write_size, Opts, ?DEFAULT_CONN_POOL_WRITE_SIZE),
    persistent_term:put(?CONN_TERM, {ConnPoolReadSize, ConnPoolWriteSize}).
Collaborator Author


I was trying to find a better way to define this, but I couldn't.
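For context, a minimal sketch of the read side of this pattern, assuming the same `?CONN_TERM` macro as in the snippet above (the function name here is hypothetical):

```erlang
%% Hypothetical counterpart to setup_conn/1: look up the pool sizes on
%% the hot path. persistent_term:get/1 is a very cheap read, which is
%% the usual motivation for storing rarely-changing values this way.
get_conn_pool_sizes() ->
    %% Returns {ReadSize, WriteSize} as stored by setup_conn/1.
    persistent_term:get(?CONN_TERM).
```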

@samcamwilliams samcamwilliams force-pushed the impr/gun branch 6 times, most recently from c7d1484 to 418d802 Compare March 3, 2026 21:59
shyba and others added 30 commits March 16, 2026 20:10
Use hb_message:convert rather than dev_codec_ans104:to to correctly
handle converting to/from TABM. The previous implementation could
allow structured messages to reach the codec that were not
handled correctly.
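As a sketch of the intended call shape (the target codec name and options here are illustrative; check the exact arguments against the hb_message module):

```erlang
%% Illustrative: convert through hb_message so structured messages are
%% normalised to/from TABM before the codec sees them, rather than
%% calling dev_codec_ans104:to directly.
Converted = hb_message:convert(Msg, <<"ans104@1.0">>, Opts).
```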
fix: resolve hyperbuddy paths from server root
feat: Add support for serving custom local files as part of the `hyperbuddy@1.0` device
The current logic to convert a list of items into a TX bundle is not completely
HyperBEAM compliant. To avoid confusion we've moved the logic out of
the core dev_codec_tx module.
fix: correctly handle inputs to dev_codec_tx when building a bundle
The sampler runs periodically, gathers message queue length, reductions, and
memory use per process, and adds them to Prometheus. Arweave uses a similar
system; it has been incredibly useful for debugging, and we haven't
noticed any negative performance impact.

Controlled via the following opts:
process_sampler: true|false
process_sampler_interval: how often to sample, default 15000ms
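In config form, the two opts above might look like this (a sketch; the interval value is the stated default):

```erlang
%% Illustrative opts fragment enabling the process sampler.
#{
    process_sampler => true,
    process_sampler_interval => 15000   %% ms between samples (default)
}
```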
impr: Blacklist handling of long blacklist file
impr: add hb_process_sampler to track attributes of all running processes