ChainGraph provides performant GraphQL APIs for blockchain applications, featuring state-of-the-art subscriptions, advanced filtering, sorting, pagination, and aggregations across multiple blockchains.
🚧 Note: This project is in active development - feel free to explore and contribute! 🏗️
- Real-Time GraphQL Subscriptions – Subscribe to blockchain state and transactions/instructions/actions
- Advanced Data Operations – Powerful search, filtering, sorting and aggregation capabilities
- Blockchain RPC Facade – Push-through guarantees for reliable data access
- Multi-Blockchain Support – Read data from multiple contracts, tables and blockchains on a single request
- Microfork Handling – Clients subscribe to state, not to deltas
- Developer Tools – CLI with high quality application starters to speed up go-to-market
Hasura is a high-performance GraphQL engine that exposes the GraphQL schema and optimizes subscriptions. Alongside it, ChainGraph runs an API authentication service and real-time data indexing services, currently written in NodeJS.
For more information on scaling, read this blog post: Scaling to 1 Million Active GraphQL Subscriptions
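For example, a client might subscribe to recent actions with a GraphQL subscription like the following (a sketch only — the table and field names here are illustrative and depend on your whitelisted mappings and the generated Hasura schema):

```graphql
# Hypothetical subscription: stream the latest token transfer actions.
# Adjust table/field names to match your indexed schema.
subscription NewTransfers {
  actions(
    where: { contract: { _eq: "eosio.token" }, action: { _eq: "transfer" } }
    order_by: { global_sequence: desc }
    limit: 10
  ) {
    chain
    transaction_id
    contract
    action
    data
  }
}
```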
- apps/chaingraph.io - Main website
- apps/supabase - Supabase support (experimental)
- apps/hasura - GraphQL engine and database migrations using Hasura
- apps/indexer - Multi-threaded NodeJS service for real-time data deserialization and indexing
- packages/genql - GenQL client for type-safe GraphQL queries
- packages/mappings - Data mappings for indexing (temporary)
- packages/tsconfig - TypeScript configuration
ChainGraph API nodes are light and index whitelisted data tables and actions. The project is split into separate micro-services to make it easier to scale:
- chaingraph-graphql: GraphQL engine and database migrations using Hasura
- chaingraph-indexer: Multi-threaded NodeJS service for real-time data deserialization and indexing
ChainGraph currently uses a contract mapping protocol that lets developers define how data is indexed. Through these mappings, ChainGraph can index data in a way that supports introspection of blockchain heuristics. We will iterate on the mapping protocol to achieve fully typed schemas in the future.
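As a rough sketch, a contract mapping might look like the following (the shape shown here is hypothetical and only illustrates the idea — see `packages/mappings` for the actual format):

```json
{
  "contract": "eosio.token",
  "tables": [
    { "table": "accounts", "table_type": "balances", "indexes": ["scope"] }
  ],
  "actions": ["transfer", "issue"]
}
```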
- Bun 1.0+
git clone https://github.com/chaingraph/chaingraph.git
cd chaingraph
bun install
# Hasura Setup and Management
bun run hasura:start # Start Hasura services (GraphQL Engine, Postgres, Data Connector)
bun run hasura:stop # Stop Hasura services
bun run hasura:reset # Reset Hasura environment (removes volumes and restarts)
bun run hasura:logs # View Hasura logs in real-time
bun run psql # Connect to Postgres database directly
ChainGraph runs with the following default configuration:
- GraphQL API: http://localhost:3333
- Hasura Console: http://localhost:3333/console
- Postgres Database: localhost:5432
Key environment variables:
- `HASURA_GRAPHQL_ADMIN_SECRET`: Required for console access and admin operations
- `HASURA_GRAPHQL_METADATA_DATABASE_URL`: Postgres connection for Hasura metadata
- `PG_DATABASE_URL`: Main database connection string
Note: In production, make sure to change the admin secret and secure your environment variables.
See CONTRIBUTING.md for development guidelines.
MIT License
Full Stack (Docker/Elestio)
- Env file: place your environment in the repo root `./.env` (Elestio CI/CD injects env here). The compose file reads from it for both variable substitution and runtime envs.
- Start stack: from the repo root run `pnpm run full:start` (or `docker compose --env-file .env -f docker/full-elestio.yml -p chaingraph up --build -d`).
- Logs: `pnpm run full:logs` or `docker compose --env-file .env -f docker/full-elestio.yml -p chaingraph logs -f`.
- Stop/clean: `pnpm run full:stop` / `pnpm run full:down`.
- Hasura Console: `http://<host>:3333/console` (header `x-hasura-admin-secret: <HASURA_GRAPHQL_ADMIN_SECRET>`). GraphQL API at `http://<host>:3333/v1/graphql`.
- Required envs (examples in `docker/.env_elestio_example`):
  - `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`
  - `PG_DATABASE_URL` (e.g., `postgres://user:pass@db:5432/dbname`)
  - `HASURA_GRAPHQL_METADATA_DATABASE_URL` (can reuse `PG_DATABASE_URL`)
  - `HASURA_GRAPHQL_ADMIN_SECRET`
  - `SHIP_WS_URL` (SHiP websocket, e.g., `wss://...`)
  - `SHIP_WS_URL_BACKUP` (optional; backup SHiP websocket used for failover)
  - `RPC_URL` (HTTP RPC endpoint for `/v1/chain/get_info`)
  - `RPC_URL_BACKUP` (optional; backup HTTP RPC endpoint used for failover)
  - `CHAIN_ID` (network chain id)
  - `CHAIN_NAME` (optional; defaults to `l1`)
  - `INDEX_FROM_BLOCK` (optional): if the DB is empty, indexing starts exactly here (`0` allowed for genesis). If the DB has data, it serves as a lower bound for backfills (see behavior below).
  - `REPROCESS_FROM_ENV` (optional, default `false`): when `true` and `INDEX_FROM_BLOCK` ≤ DB tip, also reprocesses the entire `[INDEX_FROM_BLOCK..tip]` range. When `false`, only internal gaps and any slice earlier than the earliest indexed block are backfilled.
  - `TABLE_ROWS_PAGE_LIMIT` (optional, default `100000`): page size used when loading current table state via `get_table_rows`.
  - `NODE_OPTIONS` (optional): override the Node heap (compose defaults to `--max-old-space-size=3584`).
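Putting the required variables together, a minimal `.env` might look like the following (all values are placeholders; use `docker/.env_elestio_example` as the authoritative reference):

```env
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=chaingraph
PG_DATABASE_URL=postgres://postgres:changeme@db:5432/chaingraph
HASURA_GRAPHQL_METADATA_DATABASE_URL=postgres://postgres:changeme@db:5432/chaingraph
HASURA_GRAPHQL_ADMIN_SECRET=changeme
SHIP_WS_URL=wss://ship.example.com
RPC_URL=https://rpc.example.com
CHAIN_ID=<64-char chain id>
CHAIN_NAME=l1
```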
- Indexer behavior: on start it upserts the `chains` row (using `CHAIN_NAME`, `CHAIN_ID`, `RPC_URL`). Then:
  - Internal gaps: automatically backfills any internal missing block ranges detected in the DB for your `CHAIN_NAME`.
  - Env-driven backfill: if `INDEX_FROM_BLOCK` is set and ≤ DB tip, the indexer always includes any slice earlier than the earliest indexed block. To force reprocessing of the full `[INDEX_FROM_BLOCK..tip]` range, set `REPROCESS_FROM_ENV=true`.
  - Skip ahead: if `INDEX_FROM_BLOCK` is set and > DB tip, the indexer skips backfill and starts realtime at `INDEX_FROM_BLOCK`.
  - Head clamp: if the requested start/end exceeds the chain head height, the indexer logs a warning and clamps to the current head (SHiP cannot stream future blocks).
  - Real-time: after backfill (when applicable), the indexer starts real-time from `DB tip + 1`.
  - Empty DB: if the DB is empty and `INDEX_FROM_BLOCK` is not set, it starts from the node head. It writes into `blocks`, `transactions`, `actions`, `table_rows`.
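The start/backfill decision above can be sketched as a small function. This is a hypothetical helper, not the actual indexer code — names like `planStart` and the `StartPlan` shape are assumptions for illustration:

```typescript
// Sketch of the indexer's start/backfill decision, per the rules above.
interface StartPlan {
  backfillFromEnv: boolean // reprocess the [INDEX_FROM_BLOCK..tip] range
  realtimeFrom: number     // first block streamed in real time
}

function planStart(
  dbTip: number | null,     // MAX(block_num) in DB, or null if the DB is empty
  headBlock: number,        // current chain head (from get_info)
  indexFromBlock?: number,  // INDEX_FROM_BLOCK (optional)
  reprocessFromEnv = false, // REPROCESS_FROM_ENV
): StartPlan {
  // Empty DB: start exactly at INDEX_FROM_BLOCK (0 allowed), else at head.
  if (dbTip === null) {
    const start = indexFromBlock ?? headBlock
    return { backfillFromEnv: false, realtimeFrom: Math.min(start, headBlock) }
  }
  // Skip ahead: an env start beyond the DB tip jumps straight to realtime,
  // clamped to the head (SHiP cannot stream future blocks).
  if (indexFromBlock !== undefined && indexFromBlock > dbTip) {
    return {
      backfillFromEnv: false,
      realtimeFrom: Math.min(indexFromBlock, headBlock),
    }
  }
  // Env start at or below the tip: full reprocessing only when
  // REPROCESS_FROM_ENV is true; realtime always resumes at tip + 1.
  return {
    backfillFromEnv: indexFromBlock !== undefined && reprocessFromEnv,
    realtimeFrom: dbTip + 1,
  }
}
```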
Failover behavior
- SHiP (state history): set `SHIP_WS_URL` and an optional `SHIP_WS_URL_BACKUP`.
  - Realtime failover: if the primary errors or closes, the indexer auto-reconnects to the backup. On the next reconnect event, it prefers the primary again.
  - Backfill failover: missing-range backfills also use SHiP failover and resume the range from the last processed block after reconnecting.
- RPC (HTTP): set `RPC_URL` and an optional `RPC_URL_BACKUP`.
  - All RPC calls (e.g., `get_info`, `get_abi`, `get_table_by_scope`, `get_table_rows`) try the active endpoint; on failure they automatically switch to the alternate and succeed if it is available.
  - When running on the backup, the next call will attempt the primary first to fail back automatically.
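The RPC failover pattern above can be sketched as a small wrapper. This is a hypothetical helper for illustration (the indexer's real implementation may differ): each call tries the primary first (which gives automatic fail-back) and falls through to the backup on error.

```typescript
// Sketch of primary/backup RPC failover with automatic fail-back.
type Endpoint = string

class FailoverRpc {
  private usingBackup = false

  constructor(
    private primary: Endpoint,
    private backup?: Endpoint,
  ) {}

  // Run an RPC call against the primary; on failure, retry on the backup.
  // Trying the primary first on every call gives automatic fail-back.
  async call<T>(fn: (endpoint: Endpoint) => Promise<T>): Promise<T> {
    const order = this.backup ? [this.primary, this.backup] : [this.primary]
    let lastError: unknown
    for (const endpoint of order) {
      try {
        const result = await fn(endpoint)
        this.usingBackup = endpoint !== this.primary
        return result
      } catch (err) {
        lastError = err // fall through to the alternate endpoint
      }
    }
    throw lastError
  }

  get onBackup(): boolean {
    return this.usingBackup
  }
}
```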
- Database checks:
  - Shell: `docker compose -f docker/full-elestio.yml exec -it db psql -U $POSTGRES_USER -d $POSTGRES_DB`
  - Quick queries:
    - Chains: `SELECT chain_name, chain_id FROM chains;`
    - DB tip: `SELECT MAX(block_num) FROM blocks WHERE chain = '$CHAIN_NAME';`
    - Earliest: `SELECT MIN(block_num) FROM blocks WHERE chain = '$CHAIN_NAME';`
    - Missing ranges: `SELECT block_num+1 AS missing_start, next-1 AS missing_end FROM (SELECT block_num, LEAD(block_num) OVER (ORDER BY block_num) AS next FROM blocks WHERE chain='$CHAIN_NAME') s WHERE next > block_num + 1;`
- Linux host RPC: if your RPC runs on the host, set `RPC_URL=http://host.docker.internal:8888` and add `extra_hosts: ["host.docker.internal:host-gateway"]` under `indexer` in `docker/full-elestio.yml`.
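The "missing ranges" query above can also be mirrored in code: scan the sorted block numbers and report each gap as an inclusive `[start, end]` range. This is a hypothetical helper shown only to illustrate what the SQL computes:

```typescript
// Find inclusive [start, end] gaps in a set of indexed block numbers,
// equivalent to the LEAD(block_num) window query above.
function findMissingRanges(blockNums: number[]): Array<[number, number]> {
  const sorted = [...blockNums].sort((a, b) => a - b)
  const gaps: Array<[number, number]> = []
  for (let i = 0; i < sorted.length - 1; i++) {
    const current = sorted[i]
    const next = sorted[i + 1]
    // A gap exists when the next indexed block is not current + 1.
    if (next > current + 1) gaps.push([current + 1, next - 1])
  }
  return gaps
}
```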
Elestio Deploy
- Configure envs in Elestio CI/CD (Project Settings → Environment). Use `docker/.env_elestio_example` as a reference. At minimum set `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DB`, `PG_DATABASE_URL`, `HASURA_GRAPHQL_ADMIN_SECRET`, `HASURA_GRAPHQL_METADATA_DATABASE_URL`, `SHIP_WS_URL`, `RPC_URL`, `CHAIN_ID`, and optionally `CHAIN_NAME`, `INDEX_FROM_BLOCK`.
- Build/Run: have Elestio execute from the repo root either `docker compose -f docker/full-elestio.yml -p chaingraph up -d` (envs come from the CI/CD environment), or `docker compose --env-file .env -f docker/full-elestio.yml -p chaingraph up -d` if you also commit a `.env` in the repo root.
- Ports: expose TCP 3333 publicly (maps to Hasura 8080). Optionally front it with your domain/proxy.
- Verify after deploy:
  - Health: `curl http://<host>:3333/healthz`
  - Console: `http://<host>:3333/console` with the header `x-hasura-admin-secret: ...`
  - Logs (via SSH): `docker compose -f docker/full-elestio.yml logs -f indexer hasura db`
- Persistence: the named volume `pg_data` holds Postgres data across deploys. Remove it with caution if you need a clean reset (`docker volume rm <project>_pg_data`).

Notes
- Avoid quoting URLs in env files (use `RPC_URL=https://...`, not `RPC_URL="https://..."`). Some runners pass quotes through, producing invalid URLs.