A high-performance, production-ready blockchain indexing SDK written in Go for building scalable EVM-compatible blockchain indexers. Features automatic reorganization handling, intelligent multi-contract event routing, concurrent multi-chain processing, and structured event persistence.
- Concurrent Multi-Chain Indexing: Process events across multiple EVM-compatible chains simultaneously
- Automatic Reorganization Handling: Built-in detection and rollback for blockchain reorganizations
- Intelligent Event Routing: DecoderRouter enables complex multi-contract scenarios
- High-Performance Processing: Concurrent fetching with configurable worker pools and batch RPC requests
- Production-Ready Storage: Transactional event persistence with atomic rollback support
- Comprehensive Observability: Structured logging, metrics collection, and health monitoring
```bash
go get github.com/ryuux05/godex
```

```go
package main

import (
	"context"
	"log/slog"
	"os/signal"
	"syscall"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/ryuux05/godex/adapters/sink/postgres"
	"github.com/ryuux05/godex/pkg/core"
	"github.com/ryuux05/godex/pkg/core/decoder"
)

func main() {
	// Initialize RPC client
	rpc := core.NewHTTPRPC("https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY", 20, 5)

	// Initialize PostgreSQL sink
	pool, _ := pgxpool.New(context.Background(), "postgres://user:pass@localhost:5432/godex")
	handler := &MyEventHandler{}
	sink, _ := postgres.NewSink(postgres.SinkConfig{
		Pool:    pool,
		Handler: handler,
	})

	// Configure indexing options
	opts := &core.Options{
		RangeSize:          1000,
		FetcherConcurrency: 4,
		StartBlock:         18000000,
		ConfirmationDepth:  12,
		Topics: [][]string{{
			"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
		}},
	}

	// Setup decoder
	dec := decoder.NewStandardDecoder()
	dec.RegisterABI("ERC20", erc20ABI) // Load ABI from file or embed

	// Create and run processor
	processor := core.NewProcessor(nil, sink)
	processor.SetLogger(slog.Default())
	processor.AddChain(core.ChainInfo{
		ChainId: "1",
		Name:    "Ethereum",
		RPC:     rpc,
	}, opts, dec)

	ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer cancel()
	processor.Run(ctx)
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `RangeSize` | `int` | Required | Blocks per batch (see tuning guide below) |
| `FetcherConcurrency` | `int` | Required | Concurrent RPC workers (see tuning guide below) |
| `StartBlock` | `uint64` | `0` | Starting block (0 = resume from cursor) |
| `ConfirmationDepth` | `uint64` | Required | Blocks to wait before processing |
| `EnableTimestamps` | `bool` | `false` | Include block timestamps (increases RPC calls) |
| `Topics` | `[][]string` | Required | Event signature hashes to filter |
| `Addresses` | `[]string` | Optional | Contract addresses to monitor |
| `FetchMode` | `FetchMode` | `FetchModeLogs` | `FetchModeLogs` or `FetchModeReceipts` |
| `ReorgLookbackBlocks` | `uint64` | `64` | Max blocks for reorg ancestor search |
| `RetryConfig` | `*RetryConfig` | Default | Retry configuration |
```go
rpc := core.NewHTTPRPC(
	"https://your-rpc-endpoint.com",
	20, // Requests per second (match provider limits)
	5,  // Burst capacity
)
```

```go
retryConfig := &core.RetryConfig{
	MaxAttempts:       3,
	InitialBackoff:    1 * time.Second,
	MaxBackoff:        30 * time.Second,
	Multiplier:        2.0,
	EnableJitter:      true,
	PerRequestTimeout: 10 * time.Second,
}
```

Too Low:
- Underutilized RPC provider capacity
- Slower indexing speed
- Symptoms: Low blocks/second, RPC calls not hitting rate limits
Too High:
- Rate limit errors (429)
- Provider throttling
- Symptoms: Frequent retries, "rate limit exceeded" errors
Recommended:
- Start with provider's documented QPS limit
- Monitor rate limit errors and adjust down if needed
- Example: Alchemy (20-50), Infura (10-20), Public RPC (5-10)
Too Small (< 100):
- High RPC overhead
- Many small transactions
- Symptoms: High RPC call count, slow progress
Too Large (> 2000):
- May exceed provider block range limits
- May exceed RPC response size limits (5-10MB typical)
- Higher memory usage
- Larger rollback scope on reorgs
- Symptoms: RPC errors ("response too large"), memory spikes
Recommended:
- Historical sync: 500-2000 blocks (faster catch-up, watch for response size limits)
- Live sync: 100-500 blocks (lower latency)
- High event density: 100-500 blocks (manage memory and response size)
- Low event density: 500-1000 blocks (efficiency)
- If hitting response size limits: Reduce to 50-200 blocks
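To see why `RangeSize` trades RPC call count against per-response size, here is a sketch (illustrative, not godex internals) of how a block span divides into fetch batches:

```go
package main

import "fmt"

// splitRange divides the inclusive block span [start, end] into
// batches of at most rangeSize blocks, the way an indexer would
// issue one eth_getLogs call per batch. Fewer, larger batches mean
// fewer RPC calls but bigger responses.
func splitRange(start, end, rangeSize uint64) [][2]uint64 {
	var batches [][2]uint64
	for from := start; from <= end; from += rangeSize {
		to := from + rangeSize - 1
		if to > end {
			to = end
		}
		batches = append(batches, [2]uint64{from, to})
	}
	return batches
}

func main() {
	// 10,000 blocks at RangeSize 1000 → 10 RPC calls.
	fmt.Println(len(splitRange(18000000, 18009999, 1000))) // 10
	// The same span at RangeSize 100 → 100 smaller calls.
	fmt.Println(len(splitRange(18000000, 18009999, 100))) // 100
}
```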
Too Low:
- Frequent reorgs detected
- More rollback operations
- Symptoms: High reorg count, frequent cursor updates
Too High:
- Delayed event availability
- Slower indexing progress
- Symptoms: Events appear late, slow block advancement
Recommended:
- Ethereum: 12 blocks (PoS finality)
- Polygon/Arbitrum: 100+ blocks (faster finality)
- BSC: 15 blocks
- Optimism: 12 blocks
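The effect of `ConfirmationDepth` is simply to cap processing at `head - depth`; a minimal sketch of that arithmetic (illustrative, not godex internals):

```go
package main

import "fmt"

// safeHead returns the highest block the indexer will process given
// the current chain head and a confirmation depth. Blocks above it
// are considered unconfirmed and left for a later pass.
func safeHead(head, confirmationDepth uint64) uint64 {
	if head < confirmationDepth {
		return 0 // chain younger than the depth; nothing is confirmed yet
	}
	return head - confirmationDepth
}

func main() {
	// With Ethereum's suggested depth of 12, a head of 18,000,100
	// means blocks up to 18,000,088 are considered safe.
	fmt.Println(safeHead(18000100, 12)) // 18000088
}
```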
FetchModeLogs:
- Most efficient for indexed events
- Lower RPC cost
- May miss uncle blocks
- Use for: Most scenarios, historical sync
FetchModeReceipts:
- Comprehensive (includes all transactions)
- Higher RPC overhead
- More reliable for contract-specific indexing
- Use for: Targeted contract monitoring, when completeness is critical
- Monitor RPC rate limits: Adjust `FetcherConcurrency` if hitting limits
- Check block processing speed: Increase `RangeSize` if too slow
- Monitor reorg frequency: Increase `ConfirmationDepth` if too many reorgs
- Watch memory usage: Decrease `RangeSize` if memory spikes
- Review retry frequency: Adjust `RetryConfig` if too many retries
Symptoms:
- Frequent "rate limit exceeded" errors
- High retry count in logs
- Slow indexing progress
Solutions:
- Reduce FetcherConcurrency: Lower to 50-75% of provider limit
- Increase burst capacity: Set burst to 20-30% of rate limit
- Check provider limits: Verify you're not exceeding plan limits
- Use multiple RPC endpoints: Distribute load across providers
```go
// Example: Reduce concurrency
opts.FetcherConcurrency = 10 // Down from 20
```

Symptoms:
- RPC errors about response size limits
- "response too large" or "result exceeds limit" errors
- Fetches failing for large block ranges
- Errors when processing blocks with many events
Automatic Handling:
The SDK automatically handles "response too big" errors (typically error code -32008) by recursively splitting the block range into smaller chunks until the response size is acceptable. This happens transparently during normal operation - you'll see log messages like "too big response occur, split request" when this occurs.
Important: If a single block returns a "response too big" error, this indicates a fundamental problem with the RPC endpoint itself, not the request. In such cases:
- Switch to a different RPC provider with higher response size limits
- Reduce the number of events being indexed (narrow `Topics` or `Addresses` filters)
- Use a dedicated RPC node with custom limits
Manual Solutions:
- Reduce RangeSize: Smaller block ranges produce smaller responses (prevents splitting overhead)
- Use RPC provider with larger limits: Some providers support bigger responses
- Filter more aggressively: Use address filters to reduce event count
- Switch to FetchModeReceipts: May have different size limits (though less efficient)
```go
// Example: Reduce range size to avoid automatic splitting
opts.RangeSize = 100 // Down from 1000 to avoid large responses

// Or use a provider with larger limits
rpc := core.NewHTTPRPC("https://provider-with-larger-limits.com", 20, 5)
```

Provider Response Size Limits:
- Alchemy: ~10MB response limit
- Infura: ~5MB response limit
- Public RPC: Varies, often lower
- Self-hosted: Configurable (check node settings)
When to Reduce RangeSize:
- High event density blocks (many events per block)
- Large event data (complex events with large data fields)
- Multiple contracts emitting events in same range
Symptoms:
- Low blocks/second rate
- Indexer falling behind chain head
- High block lag
Solutions:
- Increase FetcherConcurrency: Up to provider's rate limit
- Increase RangeSize: Larger batches reduce overhead
- Use FetchModeLogs: More efficient than receipts
- Enable UseLogsForHistoricalSync: Faster historical catch-up
- Check RPC latency: Switch to faster/closer RPC endpoint
```go
// Example: Optimize for speed
opts.FetcherConcurrency = 20 // Increase workers
opts.RangeSize = 2000        // Larger batches
opts.FetchMode = core.FetchModeLogs
opts.UseLogsForHistoricalSync = true
```

Symptoms:
- Memory growing over time
- OOM errors
- System slowdown
Solutions:
- Reduce RangeSize: Smaller batches use less memory
- Reduce FetcherConcurrency: Fewer concurrent operations
- Check ReorgLookbackBlocks: Lower if too high (default 64 is usually fine)
- Monitor channel buffering: Ensure backpressure is working
```go
// Example: Reduce memory usage
opts.RangeSize = 500          // Smaller batches
opts.FetcherConcurrency = 4   // Fewer workers
opts.ReorgLookbackBlocks = 64 // Keep default
```

Symptoms:
- High reorg count in metrics
- Frequent rollback operations
- Cursor frequently updated backward
Solutions:
- Increase ConfirmationDepth: Wait more blocks before processing
- Monitor chain stability: Some chains have more reorgs
- Check ReorgLookbackBlocks: Ensure sufficient lookback range
```go
// Example: Reduce reorgs
opts.ConfirmationDepth = 20 // Up from 12 for Ethereum

// For faster chains:
opts.ConfirmationDepth = 200 // Polygon/Arbitrum
```

Symptoms:
- "context canceled" errors in logs
- Indexer stops unexpectedly
- Premature shutdown
Solutions:
- Check timeout settings: Ensure sufficient `PerRequestTimeout`
- Verify context propagation: Don't cancel parent context prematurely
- Check graceful shutdown: Use signal-based cancellation properly
```go
// Example: Increase timeouts
retryConfig.PerRequestTimeout = 30 * time.Second // Up from 10s
```

Symptoms:
- Database connection errors
- Transaction failures
- Events not persisting
Solutions:
- Check database connection: Verify connection string and pool size
- Monitor connection pool: Ensure sufficient connections
- Check transaction size: Reduce batch size if transactions too large
- Verify schema: Ensure tables exist and migrations applied
```go
// Example: Optimize sink
sink, _ := postgres.NewSink(postgres.SinkConfig{
	Pool:          pool,
	Handler:       handler,
	CopyThreshold: 32, // Use COPY for large batches
})
```

Symptoms:
- Events not decoded
- Warnings about failed decoding
- Zero events stored
Solutions:
- Verify ABI registration: Ensure ABI includes all event definitions
- Check matcher logic: Verify router matchers match your logs
- Verify topic filters: Ensure Topics configuration matches events
- Check address filters: Verify Addresses includes target contracts
```go
// Example: Debug decoder
router := decoder.NewDecoderRouter()
router.Register(
	decoder.ByAddress("0xYourContract"), // Verify address
	"YourABI",
	decoder,
)
```

Symptoms:
- Starts from StartBlock instead of cursor
- Duplicate events
- Lost progress
Solutions:
- Verify cursor exists: Check the `chronicle_cursors` table
- Check LoadCursor implementation: Ensure sink loads cursor correctly
- Set StartBlock to 0: Let processor use cursor when available
```go
// Example: Proper cursor usage
opts.StartBlock = 0 // Use cursor if available
```

```go
// Get chain status
status, err := processor.Status("1")
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Block: %d/%d (%.1f%%) - %.0f blk/s\n",
	status.CurrentBlock, status.HeadBlock,
	status.ProgressPct, status.BlocksPerSec)

// Health check
health, err := processor.Health(ctx)
if err != nil {
	log.Fatal(err)
}
if !health.Healthy {
	log.Printf("Unhealthy: %v", health.Errors)
}
```

Enable Prometheus metrics for monitoring:
```go
import "github.com/ryuux05/godex/adapters/metrics"

metrics := metrics.NewPrometheusMetrics()
processor := core.NewProcessor(metrics, sink)

// Expose metrics endpoint
http.Handle("/metrics", promhttp.Handler())
```

Key Metrics to Monitor:
- `godex_blocks_processed_total` - Indexing progress
- `godex_block_lag` - How far behind chain head
- `godex_block_fetched_duration_seconds` - RPC performance
- `godex_sink_events_writes_total` - Storage throughput
- `godex_sink_events_errors_total` - Storage failures
- `godex_reorgs_total` - Reorg frequency
- ERC20 Indexer - Complete example with PostgreSQL storage
- See examples/ directory for more examples
- Architecture Overview - System architecture and design principles
- Processor Guide - Processor configuration and behavior
- Decoder Guide - Event decoding and routing
- RPC Guide - RPC client configuration and optimization
- Sink Guide - Storage backends and persistence
See LICENSE file.