Description
I am using ethereum-etl to parse all transfers of a specific token between a start and an end block. As a node, I'm running a local snap-sync node with receipts retained for all blocks (--txlookuplimit 0).
To test that this tool works, I decided to parse USDT token (0xdAC17F958D2ee523a2206206994597C13D831ec7) transfers from block 20_500_000 to block 20_505_000.
The issue I'm having is that it runs really slowly: it takes about 15±0.5 seconds to parse all the transfers in that range.
The official docs state that:

> You can tune --batch-size, --max-workers for performance.
But tuning the --max-workers parameter actually gives me a slight decrease in speed; setting --batch-size to 4096 improves the time to 12.4±0.3 seconds, but tweaking it further worsens the result.
For my original task I would need to parse millions of blocks, so the current speed is unacceptable.
The command I'm running:

```
ethereumetl export_token_transfers \
  --start-block 20500000 --end-block 20505000 \
  --provider-uri http://127.0.0.1:8545 \
  --output token_transfers.csv \
  --tokens 0xdAC17F958D2ee523a2206206994597C13D831ec7
```
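For reference, the tuned variant that got me to 12.4±0.3 seconds (only --batch-size changed; I left --max-workers alone since raising it made things slower):

```
ethereumetl export_token_transfers \
  --start-block 20500000 --end-block 20505000 \
  --provider-uri http://127.0.0.1:8545 \
  --batch-size 4096 \
  --output token_transfers.csv \
  --tokens 0xdAC17F958D2ee523a2206206994597C13D831ec7
```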
I've read this Medium article: How to Export the Entire Ethereum Blockchain to CSV in 2 hours for $10, but I can't replicate it, since I don't have access to AWS services.
Am I facing a hardware bottleneck, or is there something I can do to make it faster? The only workaround I've sketched so far is sharding the block range across parallel processes (see the sketch below), but I'm not sure it's the right approach.
Would much appreciate any suggestions!
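A rough sketch of that sharding idea, in case it clarifies what I mean (the chunk size and degree of parallelism are arbitrary, and I'm assuming the node tolerates several concurrent clients):

```bash
#!/usr/bin/env bash
# Rough sketch: shard the block range into fixed-size chunks and export
# each chunk as a separate background process; the per-chunk CSVs can be
# concatenated afterwards. STEP=1000 is an arbitrary chunk size, not a
# tuned value.
START=20500000
END=20505000
STEP=1000

for ((s=START; s<=END; s+=STEP)); do
  e=$((s + STEP - 1))
  ((e > END)) && e=$END
  ethereumetl export_token_transfers \
    --start-block "$s" --end-block "$e" \
    --provider-uri http://127.0.0.1:8545 \
    --tokens 0xdAC17F958D2ee523a2206206994597C13D831ec7 \
    --output "token_transfers_${s}_${e}.csv" &
done
wait  # block until every chunk has finished
```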