rpc: eth_feeHistory optimisation#19526

Merged
lupin012 merged 5 commits into main from eth_feeHistoryOptim
Mar 1, 2026
Conversation


@lupin012 lupin012 commented Feb 27, 2026

This PR introduces two major optimizations, inspired by the Geth/Nethermind client architectures, to decrease latency:

  • Parallel Block Processing: Switched from sequential to concurrent block fetching using 4 workers, matching Geth's maxBlockFetchers architecture.

  • FeeHistory Cache: Added a 2048-entry LRU cache for eth_feeHistory results. This avoids repetitive fee/percentile computations by caching the final processedFees struct instead of raw block data.
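
The worker-pool fetching described above can be sketched as follows. This is a minimal illustration of the 4-worker pattern, not the PR's actual code: `fetchBlock`, `fetchRange`, and the job-channel layout are hypothetical stand-ins for the real block lookup.

```go
package main

import (
	"fmt"
	"sync"
)

// maxBlockFetchers mirrors the worker count named in the PR description.
const maxBlockFetchers = 4

// fetchBlock stands in for the real (expensive) block lookup.
func fetchBlock(num uint64) string {
	return fmt.Sprintf("block-%d", num)
}

// fetchRange fans block numbers out to a fixed pool of workers over a
// channel and collects results by index, so output order is preserved
// even though fetches complete concurrently.
func fetchRange(first, count uint64) []string {
	results := make([]string, count)
	jobs := make(chan uint64, count)
	var wg sync.WaitGroup

	for w := 0; w < maxBlockFetchers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for num := range jobs {
				// Each index is written by exactly one worker: no data race.
				results[num-first] = fetchBlock(num)
			}
		}()
	}
	for num := first; num < first+count; num++ {
		jobs <- num
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	fmt.Println(fetchRange(100, 3))
}
```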

<details>
<summary><b>📊 Click to view detailed Benchmark results</b></summary>

| Scenario | Before | After | Speedup |
| :--- | :--- | :--- | :--- |
| **full/200** (sequential cold) | 451 ms | 93 ms | **4.9×** |
| **full/1024** (sequential cold) | 2,230 ms | 118 ms | **18.9×** |
| **full/1024** (sequential warm) | 2,346 ms | 95 ms | **24.7×** |
| **full/1024** (concurrent warm) | 1 req/s | 40 req/s | **40.0×** |

</details>
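
The caching idea behind the warm-path numbers can be sketched as a small fixed-capacity LRU keyed by the request shape. This is an illustrative sketch only: the PR's actual cache, key format, and `processedFees` fields are not shown here, so all names below are assumptions.

```go
package main

import (
	"container/list"
	"fmt"
)

// processedFees stands in for the fully computed result cached by the PR
// (reward percentiles, base fees, etc.); fields are illustrative.
type processedFees struct {
	baseFee uint64
	reward  []uint64
}

type entry struct {
	key string
	val *processedFees
}

// lruCache is a minimal fixed-capacity LRU (the PR uses 2048 entries):
// front of the list = most recently used, back = eviction candidate.
type lruCache struct {
	cap   int
	order *list.List
	items map[string]*list.Element
}

func newLRU(capacity int) *lruCache {
	return &lruCache{cap: capacity, order: list.New(), items: map[string]*list.Element{}}
}

func (c *lruCache) Get(key string) (*processedFees, bool) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).val, true
	}
	return nil, false
}

func (c *lruCache) Add(key string, val *processedFees) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	if c.order.Len() >= c.cap {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key: key, val: val})
}

func main() {
	cache := newLRU(2048)
	// Hypothetical key: blockCount, last block number, percentiles.
	key := fmt.Sprintf("%d-%d-%v", 1024, 19000000, []float64{25, 75})
	if _, hit := cache.Get(key); !hit {
		cache.Add(key, &processedFees{baseFee: 7, reward: []uint64{1, 2}})
	}
	fees, hit := cache.Get(key)
	fmt.Println(hit, fees.baseFee)
}
```

Caching the final `processedFees` rather than raw blocks means a warm hit skips both block fetching and percentile sorting, which is what turns the 2,346 ms warm sequential case into ~95 ms.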

@lupin012 lupin012 marked this pull request as ready for review February 28, 2026 08:55
@lupin012 lupin012 requested a review from canepat as a code owner February 28, 2026 08:55
@lupin012 lupin012 marked this pull request as draft February 28, 2026 08:56
@lupin012 lupin012 marked this pull request as ready for review February 28, 2026 08:56
lupin012 and others added 3 commits March 1, 2026 12:19
BeginTemporalRo in Fork intentionally transfers tx ownership to the
caller via the returned cleanup func; defer tx.Rollback() would
invalidate the tx immediately on return.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@lupin012 lupin012 merged commit 84fe7e5 into main Mar 1, 2026
25 checks passed
@lupin012 lupin012 deleted the eth_feeHistoryOptim branch March 1, 2026 21:45
sudeepdino008 pushed a commit that referenced this pull request Mar 4, 2026
sudeepdino008 pushed a commit that referenced this pull request Mar 4, 2026