⚠️ WARNING: Dingo is under heavy active development and is not yet ready for production use. It should only be used on testnets (preview, preprod) and devnets. Do not use Dingo on mainnet with real funds.
A high-performance Cardano blockchain node implementation in Go by Blink Labs. Dingo provides:
- Full chain synchronization and validation via Ouroboros consensus protocol
- UTxO tracking with 41 UTXO validation rules and Plutus V1/V2/V3 smart contract execution
- Block production with VRF leader election and stake snapshots
- Multi-peer chain selection with density comparison and VRF tie-breaking
- Client connectivity for wallets and applications
- Pluggable storage backends (Badger, SQLite, PostgreSQL, MySQL, GCS, S3)
- Tiered storage modes ("core" for consensus, "api" for full indexing)
- Peer governance with dynamic peer selection, ledger peers, and topology support
- Chain rollback support for handling forks with automatic state restoration
- Fast bootstrapping via built-in Mithril client
- Multiple API servers: UTxO RPC, WIP Blockfrost-compatible REST, Mesh (Coinbase Rosetta)
Note: On Windows systems, named pipes are used instead of Unix sockets for node-to-client communication.
Dingo supports configuration via a YAML config file (dingo.yaml), environment variables, and command-line flags. Priority: CLI flags > environment variables > YAML config > defaults.
A sample configuration file is provided at dingo.yaml.example. You can copy and edit this file to configure Dingo for your local or production environment.
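As a sketch, a minimal dingo.yaml might look like the following. The key names are taken from examples elsewhere in this README; treat dingo.yaml.example as the authoritative list of keys and options.

```yaml
# Minimal dingo.yaml sketch -- key names taken from examples elsewhere
# in this README; consult dingo.yaml.example for the full set.
storageMode: "core"      # "core" for consensus-only, "api" for full indexing
database:
  blob:
    plugin: "badger"     # local BadgerDB blob store (default)
  metadata:
    plugin: "sqlite"     # SQLite metadata store (default)
```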
The following environment variables modify Dingo's behavior:
- `CARDANO_BIND_ADDR` - IP address to bind for listening (default: `0.0.0.0`)
- `CARDANO_CONFIG` - Full path to the Cardano node configuration (default: `./config/cardano/preview/config.json`)
  - Use your own configuration files for different networks
  - Genesis configuration files are read from the same directory by default
- `CARDANO_DATABASE_PATH` - A directory which contains the ledger database files (default: `.dingo`)
  - This is the location for persistent data storage for the ledger
- `CARDANO_INTERSECT_TIP` - Ignore prior chain history and start from the current position (default: `false`)
  - This is experimental and will likely break... use with caution
- `CARDANO_METRICS_PORT` - TCP port to bind for listening for Prometheus metrics (default: `12798`)
- `CARDANO_NETWORK` - Named Cardano network (default: `preview`)
- `CARDANO_PRIVATE_BIND_ADDR` - IP address to bind for listening for Ouroboros NtC (default: `127.0.0.1`)
- `CARDANO_PRIVATE_PORT` - TCP port to bind for listening for Ouroboros NtC (default: `3002`)
- `CARDANO_RELAY_PORT` - TCP port to bind for listening for Ouroboros NtN (default: `3001`)
- `CARDANO_SOCKET_PATH` - UNIX socket path for listening (default: `dingo.socket`)
  - This socket speaks Ouroboros NtC and is used by client software
- `CARDANO_TOPOLOGY` - Full path to the Cardano node topology (default: "")
- `DINGO_UTXORPC_PORT` - TCP port to bind for listening for UTxO RPC (default: `0`, disabled)
- `DINGO_BLOCKFROST_PORT` - TCP port for the Blockfrost-compatible REST API (default: `0`, disabled)
- `DINGO_MESH_PORT` - TCP port for the Mesh (Coinbase Rosetta) API (default: `0`, disabled)
- `DINGO_BARK_PORT` - TCP port for the Bark block archive API (default: `0`, disabled)
- `DINGO_STORAGE_MODE` - Storage mode: `core` (default) or `api`
  - `core` stores only consensus data (UTxOs, certs, pools, protocol params)
  - `api` additionally stores witnesses, scripts, datums, redeemers, and tx metadata
  - API servers (Blockfrost, UTxO RPC, Mesh) require `api` mode
- `DINGO_RUN_MODE` - Run mode: `serve` (full node, default), `load` (batch import), `dev` (development mode), or `leios` (experimental Leios protocol support)
- `TLS_CERT_FILE_PATH` - SSL certificate to use, requires `TLS_KEY_FILE_PATH` (default: empty)
- `TLS_KEY_FILE_PATH` - SSL certificate key to use (default: empty)
To run Dingo as a stake pool operator producing blocks:
- `CARDANO_BLOCK_PRODUCER` - Enable block production (default: `false`)
- `CARDANO_SHELLEY_VRF_KEY` - Path to the VRF signing key file
- `CARDANO_SHELLEY_KES_KEY` - Path to the KES signing key file
- `CARDANO_SHELLEY_OPERATIONAL_CERTIFICATE` - Path to the operational certificate file
```shell
# Preview network (default)
./dingo

# Mainnet
CARDANO_NETWORK=mainnet ./dingo

# Or with explicit config path
CARDANO_NETWORK=mainnet CARDANO_CONFIG=path/to/mainnet/config.json ./dingo
```

Dingo creates a dingo.socket file that speaks Ouroboros node-to-client and is compatible with cardano-cli, adder, kupo, and other Cardano client tools.
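For example, client tools that honor the standard CARDANO_NODE_SOCKET_PATH environment variable can be pointed at Dingo's socket. The sketch below assumes Dingo is running with its default socket path on preview (network magic 2) and that cardano-cli is installed:

```shell
# Point clients at Dingo's NtC socket via the standard cardano-cli
# environment variable (path assumes Dingo's default socket location).
export CARDANO_NODE_SOCKET_PATH="$PWD/dingo.socket"

# With cardano-cli installed and Dingo synced, query the chain tip:
# cardano-cli query tip --testnet-magic 2   # 2 is preview's network magic
```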
Cardano configuration files are bundled in the Docker image. For local builds, you can find them at docker-cardano-configs.
```shell
# Run on preview (default)
docker run -p 3001:3001 ghcr.io/blinklabs-io/dingo

# Run on mainnet with persistent storage
docker run -p 3001:3001 \
  -e CARDANO_NETWORK=mainnet \
  -v dingo-data:/data/db \
  -v dingo-ipc:/ipc \
  ghcr.io/blinklabs-io/dingo
```

The image is based on Debian bookworm-slim and includes cardano-cli, nview, and txtop. Mithril snapshot support is built into dingo natively (dingo mithril sync). The Dockerfile sets CARDANO_DATABASE_PATH=/data/db and CARDANO_SOCKET_PATH=/ipc/dingo.socket, overriding the local defaults of .dingo and dingo.socket — the volume mounts above map to these container paths.
| Port | Service |
|---|---|
| 3001 | Ouroboros NtN (node-to-node) |
| 3002 | Ouroboros NtC over TCP |
| 12798 | Prometheus metrics |
Dingo has two storage modes that control how much data is persisted:
| Mode | What's Stored | Use Case |
|---|---|---|
| `core` (default) | UTxOs, certificates, pools, protocol parameters | Relays, block producers |
| `api` | Core data + witnesses, scripts, datums, redeemers, tx metadata | Nodes serving API queries |
```shell
# Relay or block producer (default)
./dingo

# API node
DINGO_STORAGE_MODE=api ./dingo
```

Or in dingo.yaml:

```yaml
storageMode: "api"
```

Dingo includes three API servers. All APIs require storageMode: "api" and start automatically on their default ports in that mode. Set an individual port to 0 to disable a specific API. The Blockfrost server provides a WIP Blockfrost-compatible REST API with a growing subset of the compatibility surface.
| API | Port Env Var | Default | Protocol |
|---|---|---|---|
| UTxO RPC | `DINGO_UTXORPC_PORT` | disabled | gRPC |
| Blockfrost | `DINGO_BLOCKFROST_PORT` | disabled | REST |
| Mesh (Rosetta) | `DINGO_MESH_PORT` | disabled | REST |
```shell
# Enable Blockfrost API on port 3100 and UTxO RPC on port 9090
DINGO_STORAGE_MODE=api \
DINGO_BLOCKFROST_PORT=3100 \
DINGO_UTXORPC_PORT=9090 \
./dingo
```

Or in dingo.yaml:

```yaml
storageMode: "api"
blockfrostPort: 3100
utxorpcPort: 9090
```

Relay node (consensus only, no APIs):

```shell
./dingo
```

API / data node (full indexing, one or more APIs):

```shell
DINGO_STORAGE_MODE=api DINGO_BLOCKFROST_PORT=3100 ./dingo
```

Block producer (consensus only, with SPO keys):

```shell
CARDANO_BLOCK_PRODUCER=true \
CARDANO_SHELLEY_VRF_KEY=/keys/vrf.skey \
CARDANO_SHELLEY_KES_KEY=/keys/kes.skey \
CARDANO_SHELLEY_OPERATIONAL_CERTIFICATE=/keys/opcert.cert \
./dingo
```

When storageMode=core, the Badger blob store defaults to mmap-only settings: block-cache-size=0, index-cache-size=0, and compression=false. When storageMode=api, the default Badger profile is block-cache-size=268435456, index-cache-size=0, and compression=true. YAML, environment variable, and CLI Badger options override those defaults only when explicitly set.
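If you want Badger tuned independently of the storage mode, set the options explicitly. A sketch, using the BadgerDB option names listed under the storage plugins section — the exact nesting of the options under the blob plugin is an assumption here; verify against dingo.yaml.example:

```yaml
# Pin Badger cache/compression explicitly so they no longer track the
# storage-mode defaults (option nesting assumed; see dingo.yaml.example).
database:
  blob:
    plugin: "badger"
    block-cache-size: 268435456   # 256 MiB block cache
    index-cache-size: 0
    compression: true             # Snappy compression
```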
See dingo.yaml.example for the full set of configuration options.
Instead of syncing from genesis (which can take days on mainnet), you can bootstrap Dingo using a Mithril snapshot. Dingo has a built-in Mithril client that handles download, extraction, and import automatically. This is the fastest way to get a node running.
```shell
# Bootstrap from Mithril and start syncing
./dingo -n preview sync --mithril

# Then start the node
./dingo -n preview serve
```

Or use the subcommand form for more control:
```shell
# List available snapshots
./dingo -n preview mithril list

# Show snapshot details
./dingo -n preview mithril show <digest>

# Download and import
./dingo -n preview mithril sync
```

This imports:
- All blocks from genesis (stored in blob store for serving peers)
- Current UTxO set, stake accounts, pool registrations, DRep registrations
- Stake snapshots (mark/set/go) for leader election
- Protocol parameters, governance state, treasury/reserves
- Complete epoch history for slot-to-time calculations
What is NOT included: Individual transaction records, certificate history, witness/script/datum storage, and governance vote records for blocks before the snapshot. These are not needed for consensus, block production, or serving blocks to peers. New blocks processed after bootstrap will have full metadata.
Performance (preview network, ~4M blocks):
| Phase | Time |
|---|---|
| Download snapshot (~2.6 GB) | ~1-2 min |
| Extract + download ancillary | ~1 min |
| Import ledger state (UTxOs, accounts, pools, DReps, epochs) | ~12 min |
| Load blocks into blob store | ~36 min |
| Total | ~50 min |
For indexers and API nodes that need full historical data (transaction lookups, certificate queries, datum/script resolution), configure the storage mode to api and dingo mithril sync will automatically backfill historical metadata after loading the snapshot.
Bootstrapping requires temporary disk space for both the downloaded snapshot and the Dingo database:
| Network | Snapshot Size | Dingo DB | Total Needed |
|---|---|---|---|
| mainnet | ~180 GB | ~200+ GB | ~400 GB |
| preprod | ~60 GB | ~80 GB | ~150 GB |
| preview | ~15 GB | ~25 GB | ~50 GB |
These are approximate values that grow over time. The snapshot can be deleted after import, but you need sufficient space for both during the load process.
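As a back-of-envelope check, the "Total Needed" column is roughly the sum of the two figures plus headroom. The values below are the approximate mainnet numbers from the table:

```shell
# Approximate mainnet figures from the table above, in GB.
SNAPSHOT_GB=180   # downloaded Mithril snapshot
DB_GB=200         # resulting Dingo database
TOTAL_GB=$((SNAPSHOT_GB + DB_GB))
echo "peak usage during import: ~${TOTAL_GB} GB (plan for ~400 GB with headroom)"
```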
Dingo supports pluggable storage backends for both blob storage (blocks, transactions) and metadata storage. This allows you to choose the best storage solution for your use case.
Blob Storage Plugins:
- `badger` - BadgerDB local key-value store (default)
- `gcs` - Google Cloud Storage blob store
- `s3` - AWS S3 blob store
Metadata Storage Plugins:
- `sqlite` - SQLite relational database (default)
- `postgres` - PostgreSQL relational database
- `mysql` - MySQL relational database
Plugins can be selected via command-line flags, environment variables, or configuration file:
```shell
# Command line
./dingo --blob gcs --metadata sqlite

# Environment variables
DINGO_DATABASE_BLOB_PLUGIN=gcs
DINGO_DATABASE_METADATA_PLUGIN=sqlite
```

Configuration file (dingo.yaml):

```yaml
database:
  blob:
    plugin: "gcs"
  metadata:
    plugin: "sqlite"
```

Each plugin supports specific configuration options. See dingo.yaml.example for detailed configuration examples.
BadgerDB Options:
- `data-dir` - Directory for database files
- `block-cache-size` - Block cache size in bytes
- `index-cache-size` - Index cache size in bytes
- `compression` - Enable Snappy compression
- `gc` - Enable garbage collection
Leave the mode-sensitive Badger settings unset if you want Dingo's storage-mode defaults. storageMode=core uses block-cache-size=0, index-cache-size=0, and compression=false; storageMode=api uses block-cache-size=268435456, index-cache-size=0, and compression=true.
Google Cloud Storage Options:
- `bucket` - GCS bucket name
- `project-id` - Google Cloud project ID
- `prefix` - Path prefix within bucket
AWS S3 Options:
- `bucket` - S3 bucket name
- `region` - AWS region
- `prefix` - Path prefix within bucket
- `access-key-id` - AWS access key ID (optional; uses the default credential chain if not provided)
- `secret-access-key` - AWS secret access key (optional; uses the default credential chain if not provided)
SQLite Options:
- `data-dir` - Path to SQLite database file
PostgreSQL Options:
- `host` - PostgreSQL server hostname
- `port` - PostgreSQL server port
- `username` - Database user
- `password` - Database password
- `database` - Database name
MySQL Options:
- `host` - MySQL server hostname
- `port` - MySQL server port
- `user` - Database user
- `password` - Database password
- `database` - Database name
- `ssl-mode` - MySQL TLS mode (mapped to `tls=` in the DSN)
- `timezone` - MySQL time zone location (default: UTC)
- `dsn` - Full MySQL DSN (overrides other options when set)
- `storage-mode` - Storage tier: `core` or `api` (default: `core`)
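For example, pointing the metadata store at PostgreSQL might look like the sketch below. The option names come from the PostgreSQL list above; the hostname and credentials are placeholders, and the nesting is assumed from the plugin configuration example — verify against dingo.yaml.example:

```yaml
# Sketch: PostgreSQL metadata store (hostname/credentials are examples).
database:
  metadata:
    plugin: "postgres"
    host: "db.example.com"
    port: 5432
    username: "dingo"
    password: "change-me"
    database: "dingo"
```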
You can see all available plugins and their descriptions:
```shell
./dingo list
```

For information on developing custom storage plugins, see PLUGIN_DEVELOPMENT.md.
- Network
- UTxO RPC
- Ouroboros
- Node-to-node
- ChainSync
- BlockFetch
- TxSubmission2
- Node-to-client
- ChainSync
- LocalTxMonitor
- LocalTxSubmission
- LocalStateQuery
- Peer governor
- Topology config
- Peer churn (full PeerChurnEvent with gossip/public root churn, bootstrap events)
- Ledger peers
- Peer sharing
- Denied peers tracking
- Connection manager
- Inbound connections
- Node-to-client over TCP
- Node-to-client over UNIX socket
- Node-to-node over TCP
- Outbound connections
- Node-to-node over TCP
- Inbound connections
- Node-to-node
- Ledger
- Blocks
- Block storage
- Chain selection (density comparison, VRF tie-breaker, ChainForkEvent)
- UTxO tracking
- Protocol parameters
- Genesis validation
- Block header validation (VRF/KES/OpCert cryptographic verification)
- Certificates
- Pool registration
- Stake registration/delegation
- Account registration checks
- DRep registration
- Governance
- Transaction validation
- Phase 1 validation
- UTxO rules
- Fee validation (full fee calculation with script costs)
- Transaction size and ExUnit budget validation
- Witnesses
- Block body
- Certificates
- Delegation/pools
- Governance
- Phase 2 validation
- Plutus V1 smart contract execution
- Plutus V2 smart contract execution
- Plutus V3 smart contract execution
- Phase 1 validation
- Blocks
- Block production
- VRF leader election with stake snapshots
- Block forging with KES/OpCert signing
- Slot battle detection
- Mempool
- Accept transactions from local clients
- Distribute transactions to other nodes
- Validation of transactions on add
- Consumer tracking
- Transaction purging on chain update
- Watermark-based eviction and rejection
- Database Recovery
- Chain rollback support (SQLite, PostgreSQL, and MySQL plugins)
- State restoration on rollback
- WAL mode for crash recovery
- Automatic rollback on transaction error
- Stake Snapshots
- Mark/Set/Go rotation at epoch boundaries
- Genesis snapshot capture
- API Servers
- UTxO RPC (gRPC)
- WIP Blockfrost-compatible REST API
- Mesh (Coinbase Rosetta) API
- Mithril Bootstrap
- Built-in Mithril client
- Ledger state import (UTxOs, accounts, pools, DReps, epochs)
- Block loading from ImmutableDB
Additional planned features can be found in our issue tracker and project boards.
Catalyst Fund 12 - Go Node (Dingo)
Catalyst Fund 13 - Archive Node
Check the issue tracker for known issues. Development moves quickly, so expect bugs, especially in functionality that is not yet complete.
This requires Go 1.25 or later. You also need make.
```shell
# Format, test, and build (default target)
make

# Build only
make build

# Run
./dingo

# Run without building a binary
go run ./cmd/dingo/
```

```shell
make test                                  # All tests with race detection
go test -v -race -run TestName ./package/  # Single test
make bench                                 # Benchmarks
```

```shell
# Load testdata with CPU and memory profiling
make test-load-profile

# Analyze
go tool pprof cpu.prof
go tool pprof mem.prof
```

The DevNet runs a private Cardano network with Dingo and cardano-node producing blocks side by side. It validates that Dingo forges blocks, maintains consensus, and interoperates with the reference node.
The DevNet uses Docker Compose to run 3 containers on a bridge network:
| Container | Role | Host Port |
|---|---|---|
| `dingo-producer` | Dingo block producer (pool 1) | 3010 |
| `cardano-producer` | cardano-node block producer (pool 2) | 3011 |
| `cardano-relay` | Relay node (no block production) | 3012 |
A configurator init container generates fresh pool keys and genesis files before nodes start.
- Docker with the Compose plugin (`docker compose`)
- Go 1.24+
The test suite builds the Dingo Docker image, starts all containers, waits for health checks, and runs Go integration tests tagged with //go:build devnet:
```shell
cd internal/test/devnet/

# Run all devnet tests
./run-tests.sh

# Run a specific test
./run-tests.sh -run TestBasicBlockForging

# Keep containers running after tests pass (for inspection)
./run-tests.sh --keep-up
```

Override host ports if needed:

```shell
DEVNET_DINGO_PORT=4010 DEVNET_CARDANO_PORT=4011 DEVNET_RELAY_PORT=4012 ./run-tests.sh
```

For longer-running manual tests (soak testing, observing behavior over multiple epochs, debugging):
```shell
cd internal/test/devnet/

# Start all containers
./start.sh

# Watch logs
docker compose -f docker-compose.yml logs -f

# Watch a specific node
docker compose -f docker-compose.yml logs -f dingo-producer

# Stop and clean up
./stop.sh
```

Containers remain running until you stop them. The DevNet parameters (in testnet.yaml) use 1-second slots and 1500-slot epochs (~25 minutes per epoch), so you can observe epoch transitions, leader election, and stake snapshot rotation relatively quickly.
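The ~25-minute figure follows directly from the devnet parameters (1-second slots, 1500 slots per epoch):

```shell
# Epoch wall-clock length from the DevNet parameters in testnet.yaml.
SLOT_SECONDS=1
SLOTS_PER_EPOCH=1500
EPOCH_MINUTES=$((SLOT_SECONDS * SLOTS_PER_EPOCH / 60))
echo "one devnet epoch lasts ${EPOCH_MINUTES} minutes"
```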
For quick iteration without Docker, devmode.sh runs Dingo directly against a local devnet genesis. It resets state and updates genesis timestamps on each run:
```shell
# Run in devnet mode
./devmode.sh

# With debug logging
DEBUG=true ./devmode.sh
```

This stores state in .devnet/ and uses genesis configs from config/cardano/devnet/. It runs a single Dingo node (no cardano-node counterpart), which is useful for testing startup, epoch transitions, and block production in isolation.

