- Overview
- System Requirements
- Configuration
- Installation & Execution
- Configurable Application Parameters
- Optional Export of OfferAccepted Events
- Database Structure
- Design Considerations
- Subgraph Requirements
This project is a standalone Python module dedicated to indexing all YAM v1 on-chain events on the Gnosis blockchain, storing them locally in a SQLite database for fast and reliable access.
It performs:
- Continuous live indexing using multiple RPC endpoints.
- Automatic recovery and RPC rotation on failure.
- Full historical backfill during initialization using The Graph.
- Periodic integrity checks through short backfills using The Graph.
- Optional export of OfferAccepted events to JSON files, to enable integration with external applications.
The local SQLite database and the JSON exports are used by other projects, such as the yam-transactions-report-generator and the yam-sale-notify-bot.
- Python 3.11+
- Docker & Docker Compose (optional but recommended)
An example configuration file is provided: .env.example.
Copy it to .env and update the values with your own secrets.
YAM_INDEXING_W3_URLS=https://lb.nodies.app/v1/...,https://gnosis-mainnet.core.chainstack.com/
YAM_INDEXING_DB_PATH=yam_indexing_db/yam_events.db
YAM_INDEXING_THE_GRAPH_API_KEY=
YAM_INDEXING_SUBGRAPH_URL=https://gateway.thegraph.com/api/subgraphs/id/7xsjkvdDtLJuVkwCigMaBqGqunBvhYjUSPFhpnGL1rvu
YAM_INDEXING_EXPORT_EVENT_PATH_HOST=/path/to/transactions_queue
# Telegram alerts [optional]
TELEGRAM_ALERT_BOT_TOKEN=
TELEGRAM_ALERT_GROUP_ID=

Note:
RPC URLs must be provided as a comma-separated string on a single line, without spaces.
For alerts, you can configure a Telegram bot and a Telegram group: the bot (using TELEGRAM_ALERT_BOT_TOKEN) must be added to the Telegram group (TELEGRAM_ALERT_GROUP_ID) to receive automatic notifications about critical events such as failures or application stops.
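Since the RPC URLs must arrive as a single comma-separated string, a small parsing helper is handy. The sketch below is illustrative (the helper name is hypothetical, and the fallback URLs are placeholders, not real endpoints):

```python
import os

# Hypothetical helper: split the comma-separated RPC list from the
# environment into a clean list of URLs, tolerating stray whitespace.
def parse_rpc_urls(raw: str) -> list[str]:
    urls = [u.strip() for u in raw.split(",") if u.strip()]
    if not urls:
        raise ValueError("YAM_INDEXING_W3_URLS must contain at least one RPC URL")
    return urls

raw = os.getenv(
    "YAM_INDEXING_W3_URLS",
    "https://rpc1.example.org,https://rpc2.example.org",  # placeholder URLs
)
print(parse_rpc_urls(raw))
```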
The project includes a ready-to-use Docker integration.
From the project root directory (where docker-compose.yml is located), build (or rebuild) and start the service with:

docker compose up --build -d

This single command:
- Rebuilds the image if the source code changed
- Recreates the existing container without duplication
- Starts the service from a clean state
To stop the service:
docker compose stop

Note on database persistence:
The SQLite database is stored in a Docker volume and is therefore persistent across container restarts, rebuilds, and upgrades.
On first startup, if no database is found, the container automatically runs the initialization process, creates the database, and backfills the full on-chain history (this may take some time). Once completed, the service seamlessly switches to the live indexing loop.
On subsequent starts, if the database already exists, the initialization step is skipped and the live indexing service starts immediately. For detailed information about initialization and runtime behavior, see the sections below.
# Optional but recommended: create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

pip install -r requirements.txt

Run the initialization script to:
- Create the SQLite database
- Create all necessary tables
- Backfill historical YAM events using The Graph
python3 -m initialize_indexing_module

This step may take up to 30 minutes depending on the number of blockchain transactions that need to be fetched and stored in the database.
Start the continuous indexing loop:
python3 -m main

The service performs:
- Checks for missing blocks since the last shutdown and fills any gaps using The Graph.
- Pulls raw logs directly from RPCs
- Decodes events
- Stores them in SQLite
- Rotates RPCs automatically on repeated failures
- Ensures data consistency using The Graph.
With Docker, all of the above commands are handled automatically inside the container.
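The rotation-on-failure behavior can be sketched in isolation. This is a hedged illustration, not the module's actual implementation: the fetch function, retry limit, and error type are stand-ins:

```python
from itertools import cycle

# Hypothetical sketch of RPC failover: try one provider up to
# max_retries times, then rotate to the next one in the list.
def fetch_with_failover(rpc_urls, fetch_logs, block_range, max_retries=3):
    providers = cycle(rpc_urls)
    for _ in range(len(rpc_urls)):
        url = next(providers)
        for _attempt in range(max_retries):
            try:
                return fetch_logs(url, block_range)
            except ConnectionError:
                continue  # retry the same RPC
        # max_retries exhausted for this RPC: rotate to the next provider
    raise RuntimeError(f"all RPC providers failed for range {block_range}")
```

In the real module, fetch_logs would pull and decode raw logs before they are stored in SQLite; here it is just a callable so the failover logic stands alone.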
The module exposes several application parameters that can be tuned inside config.py:
| Parameter | Description |
|---|---|
| BLOCK_TO_RETRIEVE | Number of blocks retrieved per RPC HTTP request. |
| COUNT_BEFORE_RESYNC | Number of iterations before resynchronizing to the latest block. |
| BLOCK_BUFFER | Safety gap between the latest known block and the one actually requested. |
| TIME_TO_WAIT_BEFORE_RETRY | Seconds to wait before retrying an unavailable RPC. |
| MAX_RETRIES_PER_BLOCK_RANGE | Maximum retries before switching to another RPC provider. |
| COUNT_PERIODIC_BACKFILL_THEGRAPH | Number of iterations before triggering the periodic The Graph backfill. |
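For illustration, a config.py carrying these parameters might look like the following. The values shown are placeholders, not the project's actual defaults:

```python
# config.py (illustrative values only, not the project's actual defaults)
BLOCK_TO_RETRIEVE = 1000                 # blocks per RPC HTTP request
COUNT_BEFORE_RESYNC = 10                 # iterations before resyncing to the latest block
BLOCK_BUFFER = 5                         # safety gap behind the chain head
TIME_TO_WAIT_BEFORE_RETRY = 30           # seconds before retrying an unavailable RPC
MAX_RETRIES_PER_BLOCK_RANGE = 3          # retries before switching RPC provider
COUNT_PERIODIC_BACKFILL_THEGRAPH = 100   # iterations between The Graph backfills
```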
The module can optionally export OfferAccepted events to JSON files, allowing them to be consumed by external applications such as the YAM Sale Notify Bot.
This feature is disabled by default; enabling it requires no configuration beyond setting the following environment variable in the .env file:
YAM_INDEXING_EXPORT_EVENT_PATH_HOST=/path/to/transactions_queue

When this variable is set, exported JSON files are written to the specified directory.
If the variable is not defined, the export mechanism is automatically disabled and no files are generated.
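The enable/disable behavior can be sketched as follows. This is a minimal illustration: the function name, file naming scheme, and event fields are assumptions, not the module's actual schema:

```python
import json
import os
from pathlib import Path

# Hedged sketch: write an OfferAccepted event to a JSON file only when
# the export directory variable is set; otherwise do nothing.
def export_offer_accepted(event: dict, env=os.environ):
    export_dir = env.get("YAM_INDEXING_EXPORT_EVENT_PATH_HOST")
    if not export_dir:
        return None  # export mechanism disabled: no files are generated
    path = Path(export_dir)
    path.mkdir(parents=True, exist_ok=True)
    out = path / f"offer_accepted_{event['tx_hash']}.json"  # hypothetical naming
    out.write_text(json.dumps(event))
    return out
```

An external consumer (such as the YAM Sale Notify Bot) would then watch the directory for new files.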
Stores all offers ever created on the YAM contract:
- Offer ID
- Seller
- Token details
- Status (in progress, sold out, deleted)
All events related to an offer:
- Creations
- Updates
- Purchases
- Deletions
Tracks:
- Last indexed block number
Ensures indexing resumes from the correct block after a restart.
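To make the structure above concrete, here is an illustrative SQLite schema. The table and column names are assumptions for the sketch, not the module's actual schema:

```python
import sqlite3

# Illustrative schema: offers, their events, and the last-indexed-block
# bookkeeping that lets indexing resume after a restart.
SCHEMA = """
CREATE TABLE IF NOT EXISTS offers (
    offer_id INTEGER PRIMARY KEY,
    seller   TEXT NOT NULL,
    token    TEXT NOT NULL,
    status   TEXT CHECK (status IN ('in progress', 'sold out', 'deleted'))
);
CREATE TABLE IF NOT EXISTS offer_events (
    id       INTEGER PRIMARY KEY AUTOINCREMENT,
    offer_id INTEGER NOT NULL REFERENCES offers(offer_id),
    kind     TEXT CHECK (kind IN ('created', 'updated', 'accepted', 'deleted')),
    block    INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS indexing_state (
    id                 INTEGER PRIMARY KEY CHECK (id = 1),
    last_indexed_block INTEGER NOT NULL
);
"""

conn = sqlite3.connect(":memory:")  # in-memory DB for the sketch
conn.executescript(SCHEMA)
```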
- Uses multiple RPCs with automatic failover
- Subgraph-based backfilling avoids missing historical events
- Local SQLite DB ensures zero dependency on external services at runtime
The indexer performs a fixed number of queries regardless of the number of users.
All applications query the local DB.
Even if The Graph becomes temporarily unavailable, the module continues indexing live events from RPCs.
You need access to a subgraph that exposes YAM offer-related events as entities:
- OfferCreated
- OfferUpdated
- OfferAccepted
- OfferDeleted
These entities must exist in the subgraph schema and be queryable.
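A query against such a subgraph might be built like this. The sketch assumes the usual auto-generated plural collection name (offerAccepteds) and the field names id and blockNumber, which are guesses about the schema rather than confirmed details:

```python
import json

# Hedged sketch: build the GraphQL payload for paging OfferAccepted
# entities from a given block onward. Field names are assumptions.
def build_offer_accepteds_query(from_block: int, first: int = 100) -> dict:
    query = """
    query($fromBlock: BigInt!, $first: Int!) {
      offerAccepteds(first: $first, where: { blockNumber_gte: $fromBlock }) {
        id
        blockNumber
      }
    }
    """
    return {"query": query, "variables": {"fromBlock": str(from_block), "first": first}}

payload = build_offer_accepteds_query(30_000_000)
# This payload would be POSTed as JSON to YAM_INDEXING_SUBGRAPH_URL,
# e.g. requests.post(url, json=payload, timeout=10).
print(json.dumps(payload)[:60])
```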
The complete subgraph.yaml file is provided in the project, but below is an example of the relevant parts illustrating the minimum expected configuration:
entities:
  - OfferAccepted
  - OfferCreated
  - OfferDeleted
  - OfferUpdated
eventHandlers:
  - event: OfferAccepted(...)
    handler: handleOfferAccepted
  - event: OfferCreated(...)
    handler: handleOfferCreated
  - event: OfferDeleted(...)
    handler: handleOfferDeleted
  - event: OfferUpdated(...)
    handler: handleOfferUpdated