
Conversation

@jayanth-kumar-morem

Description

1/ Added a Cached helper that stores raw JSON so serialized fragments can be cloned and reused without re-encoding.
2/ Reworked the solver auction DTO to build cached tokens, orders, and liquidity entries and return a lightweight serializable request wrapper.
3/ Added an executable example that benchmarks uncached versus cached solver auction serialization, ensures both payloads remain JSON-identical, and reports the observed speedup when run.
4/ Generated deterministic synthetic tokens, orders, liquidity pools, and owner lists so both serializers operate on the same reproducible workload in the benchmark.

Changes

How to test

1/ cargo check -p driver
2/ cargo run -p driver --example auction_serialization --release

(base) jayanthkumar@Jayanths-MacBook-Air cow-services % cargo check -p driver
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.88s
warning: the following packages contain code that will be rejected by a future version of Rust: sqlx-postgres v0.7.4
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
(base) jayanthkumar@Jayanths-MacBook-Air cow-services % cargo run -p driver --example auction_serialization --release
Compiling driver v0.1.0 (/workspace/cow-services/crates/driver)
Finished `release` profile [optimized] target(s) in 5.75s
warning: the following packages contain code that will be rejected by a future version of Rust: sqlx-postgres v0.7.4
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
     Running `target/release/examples/auction_serialization`
Uncached serialization: 923.528 µs/iter (4.617639109s total)
Cached serialization:   18.945 µs/iter (94.724644ms total)
Speedup: 48.75x faster

Related Issues

Fixes #3617

@jayanth-kumar-morem jayanth-kumar-morem requested a review from a team as a code owner October 9, 2025 07:27
@jayanth-kumar-morem
Author

I have read the CLA Document and I hereby sign the CLA

Contributor

@MartinquaXD left a comment


AFAICS this will currently not speed up the process in the real world case.
The driver's setup looks like this:

  • there is 1 domain object of the auction that gets passed to all the connected solvers
  • every solver does some extra logic and ultimately converts their (theoretically) unique auction representation into a DTO that then gets serialized

AFAIU your optimization makes it faster to serialize the exact same DTO multiple times, but since every solver currently generates its own unique DTOs and serializes them only once, this will not help.

One problem is that the order DTO contains how much of an order should be fillable. Theoretically this value can depend on how each individual solver prioritizes the orders and allocates available balances to them. This makes the optimization very complicated IMO.
If we ignore this issue and assume all order structs are actually identical across all solvers, we could use your wrapper approach, with a slight modification.
Instead of introducing this wrapper when each solver turns the order domain objects into DTOs, we'd have to have this serialization wrapper already in the domain object (e.g. Arc<SerializationWrapper<Order>>). That way there is exactly 1 instance of each order, the serialization cache is tied to that 1 struct, and all solvers could then have access to the same cache.
The first solver would serialize the DTO and write it into the cache. The next solver could then just wait for the first solver to finish the serialization and copy the cached bytes.

However, if we consider that orders can theoretically be distinct (due to the different allocated balances), we would probably have to have a serialization cache per auction. To avoid incorrect reuse of serialized data, each order would have to have a key that includes all the data that can possibly differ. In our case the key would probably just be orderUid_allocatedBalance. In practice I expect all solvers to allocate the same balance for each order, which effectively means we'll have 1 serialized version per order. However, if there are solvers which allocate available balances differently, every order would have 1 serialized representation for each balance allocation.

Please feel free to ask follow up questions because as you can see implementing this optimization has surprisingly many nuances. 😬

@github-actions

This pull request has been marked as stale because it has been inactive a while. Please update this pull request or it will be automatically closed.

@github-actions github-actions bot added the stale label Oct 17, 2025
@MartinquaXD
Contributor

Closed due to inactivity

@github-actions github-actions bot locked and limited conversation to collaborators Oct 24, 2025


Development

Successfully merging this pull request may close these issues.

cache serialized versions of immutable data
