IMPORTANT - If you run a migration, the GraphQL schema and Hasura permissions are not automatically updated. You must manually update the Hasura schema and permissions via the web UI. Once you do this you MUST update the hasura package by syncing from the live client to this repo. If you don't do this we have no guarantees that the CI/CD will function properly. Instructions to do this are below.
Generating a migration is the recommended first step.
cd packages/database
bun run migrate:create
This will create the migration files, which can be put into a PR for review (this helps make sure we've got the data model correct before we actually run the migration).
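For illustration only, assuming the schema in packages/database/lib/schema.ts is defined with Drizzle ORM (the table and column names below are hypothetical, not the real schema), a change that would drive a generated migration could look like:

```ts
// packages/database/lib/schema.ts (sketch only; hypothetical candles table)
import { pgTable, varchar, bigint, numeric, timestamp } from "drizzle-orm/pg-core";

// A candlestick table keyed by market account and candle start time (illustrative).
export const candles = pgTable("candles", {
  marketAcct: varchar("market_acct", { length: 44 }).notNull(),
  candleDuration: bigint("candle_duration", { mode: "number" }).notNull(),
  timestamp: timestamp("timestamp").notNull(),
  open: numeric("open").notNull(),
  high: numeric("high").notNull(),
  low: numeric("low").notNull(),
  close: numeric("close").notNull(),
  volume: numeric("volume").notNull(),
});
```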
Once the migration is approved, you can run it with:
cd packages/database
FUTARCHY_PG_URL=... bun migrate
Assuming you need permissions for the GraphQL client, you'll want to log in and track everything in Hasura.
- This begins with tracking the tables (if you created any).
- Then you'll want to track the relationships between tables.
- Then you'll want to update the permissions on the columns and tables via the UI.
Once this is done you can export the metadata to bring into your PR with the following:
cd packages/hasura
pnpm hasura metadata export --admin-secret "" --endpoint ""
Once the above is completed, the PR is good to merge.
Assuming everything is synced up, you can update all the GraphQL clients with the genql command. Importantly, that command will overwrite the index.ts file, leaving only the URL used to call the genql command. It is recommended that you copy the ENV lines from the existing index.ts before you run the genql command.
AN EXAMPLE, BE SURE TO MATCH YOUR OUTPUT PATH AND ENV VARS
genql --endpoint <> --output ./__generated__ -H "X-Hasura-Admin-Secret: <>"
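As a rough sketch of what the hand-maintained index.ts typically looks like after regenerating (the env var names here are assumptions, so copy the real ones from the existing file as noted above):

```ts
// index.ts next to __generated__ (sketch; env var names are illustrative)
import { createClient } from "./__generated__";

const url = process.env.FUTARCHY_HASURA_URL ?? "http://localhost:8080/v1/graphql";
const adminSecret = process.env.FUTARCHY_HASURA_ADMIN_SECRET;

export const client = createClient({
  url,
  headers: adminSecret ? { "X-Hasura-Admin-Secret": adminSecret } : {},
});
```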
We do our best to match staging with production, however there may be some differences before we go live with a new feature or implement changes.
You will use the same migration command above with the URL pointing to the production database. Then export the Hasura metadata from staging as a .json file and import it into production.
This ensures all the systems are in sync.
For the current indexer, see futarchy-indexer-v2.
This project aims to index order data from each of The Meta DAO's proposals into candlestick data. This way we can show charts in the UI of how the proposals perform over time.
The indexer is made of 3 components:
- the indexer service, which periodically contacts an RPC to poll for any orders on not-yet-concluded proposals; the indexer consolidates the order data into candles, then stores these in a database
- a postgres database
- a hasura instance which exposes a real-time GraphQL read-only API over the postgres data
Since this is just a generic means to cache on-chain data into Postgres then expose a real-time GraphQL API over this data, it could be used for more than just candlestick indexing, but we'll begin with that use-case.
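For example, assuming a tracked candles table with select permissions (the table and column names are placeholders), a UI could subscribe to updates over Hasura's websocket endpoint with graphql-ws:

```ts
// Sketch: live-subscribe to the most recent candle via Hasura's GraphQL websocket.
import { createClient } from "graphql-ws";

const client = createClient({
  url: "wss://<your-hasura-host>/v1/graphql", // placeholder endpoint
});

client.subscribe(
  {
    query: `subscription {
      candles(order_by: { timestamp: desc }, limit: 1) {
        open high low close volume timestamp
      }
    }`,
  },
  {
    next: (msg) => console.log("latest candle", msg.data),
    error: (err) => console.error("subscription error", err),
    complete: () => console.log("subscription closed"),
  }
);
```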
Historical data on Solana is not available from your standard Solana RPC. Geyser was created in order to achieve lossless real-time data streaming into a db, however it can only give you the current account states an RPC has stored plus any future state, allowing you to construct an accurate account history from the moment you enable the Geyser plugin, but not an account history from prior transactions. Not to mention this costs $2k per month from Triton and $1.1k per month from Helius. There are likely better ways to spend The Meta DAO's treasury, plus one of the selling points of an indexer in the first place is its potential to save on rather than balloon RPC costs. There is an ongoing collab between Triton, Firedancer, and Protocol Labs devs to store all historical data on IPFS (project Old Faithful), but this is still a work in progress, and just storing the index used to look up accounts on IPFS is already 50 terabytes!
Is there a simpler, cost effective way to get historical data?
The approach futarchy-indexer takes is to replay the transaction history to recreate each historical account state. Historical transactions are not pruned as aggressively as account state (where only the latest account state is kept), so this works with standard Solana RPCs without needing to upgrade to more expensive infra tiers. The downside of this approach is a lot more complexity, since we have to actually parse each historical transaction and know how the Solana program executing that transaction would have translated it into a mutation on any account states. If that translation is very complex, this approach can be difficult to maintain. Thankfully, the states and transactions we're concerned about here (token balances, twap markets, proposal metadata, orderbooks and swaps) aren't too complex.
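A minimal sketch of the replay idea with @solana/web3.js (the parsing step is a placeholder; the real work is the program-specific instruction decoding):

```ts
import { Connection, PublicKey, ConfirmedSignatureInfo } from "@solana/web3.js";

// Sketch: page through an account's full signature history, then replay the
// transactions oldest-first, folding each one into a reconstructed state.
async function replayHistory(connection: Connection, account: PublicKey) {
  const signatures: ConfirmedSignatureInfo[] = [];
  let before: string | undefined;

  // getSignaturesForAddress returns newest-first, up to 1000 signatures per page.
  while (true) {
    const page = await connection.getSignaturesForAddress(account, { before, limit: 1000 });
    if (page.length === 0) break;
    signatures.push(...page);
    before = page[page.length - 1].signature;
  }

  let state = {}; // whatever account state we're reconstructing (illustrative)
  for (const sig of signatures.reverse()) {
    const tx = await connection.getTransaction(sig.signature, {
      maxSupportedTransactionVersion: 0,
    });
    if (!tx || tx.meta?.err) continue;
    // state = applyInstructions(state, tx); // hypothetical program-specific parser
  }
  return state;
}
```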
Futarchy Indexer operates on 2 core entities:
- transaction watchers
- indexers
A transaction watcher takes an account, then subscribes in real time to all signatures for that account. Its job is to ensure that it both
- stores real-time transactions for an account using RPC webhook APIs
- has not skipped storing any transaction metadata, utilizing the getSignaturesForAddress API.
An indexer depends on one or more transaction watchers. Once it sees all its dependencies have backed up transactions to a certain slot, it can process all these transactions up to that slot, parsing instruction data and updating corresponding tables representing proposals, twaps, order books and trading history.
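A hedged sketch of that coordination (the types and function names here are illustrative, not the actual implementation):

```ts
// Sketch: only index up to the lowest slot that every watcher dependency has
// fully backfilled, so no transactions can be missed.
interface TransactionWatcher {
  account: string;
  latestBackfilledSlot: number; // no gaps at or below this slot
}

interface StoredTransaction {
  slot: number;
  signature: string;
}

async function runIndexerPass(
  watchers: TransactionWatcher[],
  lastIndexedSlot: number,
  loadTxs: (fromSlot: number, toSlot: number) => Promise<StoredTransaction[]>,
  applyTx: (tx: StoredTransaction) => Promise<void>,
): Promise<number> {
  // Safe horizon: the minimum backfilled slot across all dependencies.
  const safeSlot = Math.min(...watchers.map((w) => w.latestBackfilledSlot));
  if (safeSlot <= lastIndexedSlot) return lastIndexedSlot;

  const txs = await loadTxs(lastIndexedSlot + 1, safeSlot);
  txs.sort((a, b) => a.slot - b.slot);
  for (const tx of txs) {
    await applyTx(tx); // parse instructions, update proposal/twap/order book tables
  }
  return safeSlot;
}
```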
Why do we want multiple indexers?
- This allows no-downtime upgrades to the indexing and transaction caching logic.
- If a bug is identified in prior indexer logic, we simply create a new indexer starting at slot 0 which will overwrite existing data until it catches up with the existing indexer, at which point we can remove the duplicate indexer.
- If a bug is identified in the transaction caching logic, we update the logic, set the transaction watcher's slot back to 0, and start a new indexer at 0 which will overwrite existing data using the corrected transactions.
- As we upgrade the Meta DAO we'll need to watch different sets of accounts. For example, autocrat V0 and V0.1 have different programs and DAO accounts and should be represented by different watchers. Once we switch from OpenBook to an in-house AMM, we'll also need a new watcher. Multiple watchers / indexers in parallel means we can index data for proposals based on old and new accounts simultaneously, and not lose the ability to index historical proposal data even as the DAO is upgraded.
- FUTARCHY_HELIUS_API_KEY - used by indexer
- FUTARCHY_PG_URL - used by indexer
After cloning, run pnpm install in the project directory.
Docs on each top-level script are below.
Migrate db to match the definition in packages/database/lib/schema.ts. Assumes you have set the FUTARCHY_PG_URL env var. Also regenerates the graphql client (TODO).
Run raw sql against the database. Assumes you have set the FUTARCHY_PG_URL env var.
You can add to the COMMON_STATEMENTS const in packages/database/src/run-sql.ts if you have a long sql query you want to save for later reuse.
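For example (a sketch only; the actual shape of COMMON_STATEMENTS may differ, and the key and query below are hypothetical):

```ts
// packages/database/src/run-sql.ts (sketch)
const COMMON_STATEMENTS = {
  txCountPerWatcher: `
    SELECT watcher_acct, COUNT(*) AS tx_count
    FROM transactions
    GROUP BY watcher_acct
    ORDER BY tx_count DESC;
  `,
};
```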
TODO
Starts the service
Creates a new transaction watcher on a particular account
Resets an existing transaction watcher to a particular transaction/slot (or resets it back to 0)
Validates whether the cached txs for an account match the signature history returned by getSignaturesForAddress
TODO
Syncs the current Hasura GraphQL schema types to the client in futarchy-sdk using genql