10 changes: 5 additions & 5 deletions CLAUDE.md
@@ -91,18 +91,18 @@ Is leader = leaderValue < threshold
```

### Nonce Evolution
Per block: `nonceValue = BLAKE2b-256(0x4E || vrfOutput)`, then `eta_v = BLAKE2b-256(eta_v || nonceValue)`. Rolling eta_v accumulates across epoch boundaries (no reset). Candidate nonce freezes at 70% epoch progress (stability window). Koios used as fallback when local nonce unavailable.
Per block: `nonceValue = BLAKE2b-256(0x4E || vrfOutput)`, then `eta_v = eta_v XOR nonceValue` (Cardano Nonce semigroup: `Nonce a <> Nonce b = Nonce (xor a b)`). Rolling eta_v accumulates across epoch boundaries (no reset). Candidate nonce (`η_c`) freezes at 60% epoch progress (stability window = 259,200 slots on mainnet). At each epoch boundary, the TICKN rule computes: `epochNonce = η_c XOR η_ph` where `η_ph` is the block hash from the last block of the prior epoch. Koios used as fallback in lite mode when local nonce unavailable.

**Batch processing:** `ProcessBatch()` method in `nonce.go` performs in-memory nonce evolution for batches of blocks (used during historical sync), then persists the final nonce state in a single DB transaction. This dramatically improves sync performance vs per-block DB writes.

### Trigger Flow
1. Every block: extract VRF output from header, update evolving nonce
2. At 70% epoch progress: freeze candidate nonce
1. Every block: extract VRF output from header, update evolving nonce via XOR
2. At 60% epoch progress (stability window): freeze candidate nonce
3. After freeze: calculate next epoch schedule (mutex-guarded, one goroutine per epoch)
4. Post schedule to Telegram, store in database

### Race Condition Prevention
`checkLeaderlogTrigger` fires on every block after 70% — uses `leaderlogMu` mutex + `leaderlogCalcing` map to ensure only one goroutine runs per epoch. Map entry is cleaned up after goroutine completes.
`checkLeaderlogTrigger` fires on every block after 60% — uses `leaderlogMu` mutex + `leaderlogCalcing` map to ensure only one goroutine runs per epoch. Map entry is cleaned up after goroutine completes.
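A minimal sketch of that guard pattern (the `guard`, `tryStart`, and `done` names are illustrative, not the actual identifiers):

```go
package main

import (
	"fmt"
	"sync"
)

// guard ensures at most one calculation goroutine runs per epoch,
// mirroring the leaderlogMu + leaderlogCalcing pattern described above.
type guard struct {
	mu      sync.Mutex
	calcing map[int]bool
}

// tryStart reports whether the caller won the right to calculate for epoch.
func (g *guard) tryStart(epoch int) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.calcing[epoch] {
		return false // another goroutine is already running for this epoch
	}
	g.calcing[epoch] = true
	return true
}

// done cleans up the map entry after the calculation goroutine completes.
func (g *guard) done(epoch int) {
	g.mu.Lock()
	defer g.mu.Unlock()
	delete(g.calcing, epoch)
}

func main() {
	g := &guard{calcing: map[int]bool{}}
	fmt.Println(g.tryStart(612)) // true: first trigger wins
	fmt.Println(g.tryStart(612)) // false: duplicate per-block trigger ignored
	g.done(612)
	fmt.Println(g.tryStart(612)) // true: entry cleaned up, epoch can rerun
}
```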

### VRF Extraction by Era

@@ -246,7 +246,7 @@ Test files:
- `leaderlog_test.go` — SlotToEpoch (all networks), round-trip, formatNumber

## Key Dependencies
- `blinklabs-io/adder` v0.37.0 — live chain tail (must match gouroboros version)
- `blinklabs-io/adder` v0.37.1-pre (commit 460d03e, fixes auto-reconnect channel orphaning) — live chain tail
- `blinklabs-io/gouroboros` v0.153.1 — VRF (ECVRF-ED25519-SHA512-Elligator2), NtN ChainSync, NtC LocalStateQuery, ledger types
- `modernc.org/sqlite` — pure Go SQLite (no CGO required)
- `jackc/pgx/v5` — PostgreSQL driver with COPY protocol support for bulk inserts
10 changes: 5 additions & 5 deletions README.md
@@ -10,7 +10,7 @@ A Cardano stake pool companion. Block notifications, leader schedule, epoch nonc

**Leader Schedule** — Pure Go CPRAOS implementation checking every slot per epoch against your VRF key. Calculates next epoch schedule automatically at the stability window (60% into epoch). On-demand via `/leaderlog`.

**Epoch Nonces** — In full mode, streams every block from Shelley genesis extracting VRF outputs per era, evolving the nonce via BLAKE2b-256, and freezing at the stability window. Backfills ~400 epochs in under 2 minutes.
**Epoch Nonces** — In full mode, streams every block from Shelley genesis extracting VRF outputs per era, evolving the nonce via XOR (Cardano Nonce semigroup), and freezing at the stability window. Backfills ~400 epochs in under 2 minutes.
⚠️ Potential issue | 🟡 Minor

Fix the backfill timing claim to match actual sync performance.

Line 13 says “~400 epochs in under 2 minutes,” but later in this README the full Shelley-to-tip sync is ~43 minutes. Please align these statements to avoid misleading expectations.

✏️ Suggested edit
-**Epoch Nonces** — In full mode, streams every block from Shelley genesis extracting VRF outputs per era, evolving the nonce via XOR (Cardano Nonce semigroup), and freezing at the stability window. Backfills ~400 epochs in under 2 minutes.
+**Epoch Nonces** — In full mode, streams every block from Shelley genesis extracting VRF outputs per era, evolving the nonce via XOR (Cardano Nonce semigroup), and freezing at the stability window. Full Shelley‑to‑tip backfill completes in ~43 minutes.
**Stake Queries** — Direct NtC local state query to your cardano-node for mark/set/go stake snapshots. Falls back to Koios if NtC is unavailable.

@@ -234,9 +234,9 @@ The epoch nonce is a 32-byte hash serving as randomness for VRF leader election.
**In full mode**, duckBot self-computes nonces by streaming every block from Shelley genesis:

1. Per block: extract VRF output from header, compute `nonceValue = BLAKE2b-256("N" || vrfOutput)`
2. Evolve: `eta_v = BLAKE2b-256(eta_v || nonceValue)` — rolling accumulation across the epoch
3. At stability window (60% epoch progress): freeze candidate nonce
4. Final nonce: `BLAKE2b-256(candidateNonce || previousEpochNonce)`
2. Evolve: `eta_v = eta_v XOR nonceValue` — Cardano Nonce semigroup (`Nonce a <> Nonce b = Nonce (xor a b)`)
3. At stability window (60% epoch progress): freeze candidate nonce (`η_c`)
4. Epoch transition (TICKN rule): `epochNonce = η_c XOR η_ph` where `η_ph` is the last block hash from the prior epoch boundary

**In lite mode**, nonces are fetched from the Koios API.

@@ -367,7 +367,7 @@ The adder pipeline handles reconnection at the outer level — duckBot wraps it
| Library | Version | Purpose |
| ------- | ------- | ------- |
| `blinklabs-io/gouroboros` | v0.153.1 | VRF, NtN ChainSync, NtC LocalStateQuery, ledger types |
| `blinklabs-io/adder` | v0.37.0 | Live chain tail pipeline (full block data incl. tx count) |
| `blinklabs-io/adder` | v0.37.1-pre | Live chain tail pipeline (full block data incl. tx count) |
| `cardano-community/koios-go-client/v3` | | Koios API for stake data and nonce fallback |
| `gopkg.in/telebot.v4` | | Telegram bot framework |
| `michimani/gotwi` | | Twitter/X API v2 client |
21 changes: 11 additions & 10 deletions db.go
@@ -272,7 +272,7 @@ func (s *PgStore) InsertBlockBatch(ctx context.Context, blocks []BlockData) erro

func (s *PgStore) StreamBlockNonces(ctx context.Context) (BlockNonceRows, error) {
rows, err := s.pool.Query(ctx,
`SELECT epoch, slot, nonce_value FROM blocks ORDER BY slot`,
`SELECT epoch, slot, nonce_value, block_hash FROM blocks ORDER BY slot`,
)
if err != nil {
return nil, err
@@ -343,12 +343,13 @@ func (s *PgStore) GetLeaderSchedule(ctx context.Context, epoch int) (*LeaderSche

// pgBlockNonceRows wraps pgx.Rows to implement BlockNonceRows.
type pgBlockNonceRows struct {
rows pgx.Rows
epoch int
slot uint64
nonce []byte
err error
closed bool
rows pgx.Rows
epoch int
slot uint64
nonce []byte
blockHash string
err error
closed bool
}

func (r *pgBlockNonceRows) Next() bool {
@@ -361,13 +362,13 @@ func (r *pgBlockNonceRows) Next() bool {
return false
}
var slotInt64 int64
r.err = r.rows.Scan(&r.epoch, &slotInt64, &r.nonce)
r.err = r.rows.Scan(&r.epoch, &slotInt64, &r.nonce, &r.blockHash)
r.slot = uint64(slotInt64)
return r.err == nil
}

func (r *pgBlockNonceRows) Scan() (epoch int, slot uint64, nonceValue []byte, err error) {
return r.epoch, r.slot, r.nonce, r.err
func (r *pgBlockNonceRows) Scan() (epoch int, slot uint64, nonceValue []byte, blockHash string, err error) {
return r.epoch, r.slot, r.nonce, r.blockHash, r.err
}

func (r *pgBlockNonceRows) Close() {
38 changes: 19 additions & 19 deletions epoch612_integration_test.go
@@ -8,8 +8,6 @@ import (
"os"
"testing"
"time"

"golang.org/x/crypto/blake2b"
)

func TestEpoch612LeaderSchedule(t *testing.T) {
@@ -36,9 +34,9 @@ func TestEpoch612LeaderSchedule(t *testing.T) {
ctx := context.Background()

// === Step 1: Compute epoch 612 nonce from chain data ===
// Stream ALL blocks from Shelley genesis, evolving nonce exactly like the bot does.
// Stream ALL blocks from Shelley genesis, evolving nonce via XOR (Nonce semigroup).
// At each epoch's 60% stability window: freeze candidate nonce.
// At each epoch boundary: compute epoch_nonce(e+1) = hash(eta_c(e) || eta_0(e)).
// At each epoch boundary: TICKN rule — η(new) = η_c XOR η_ph

overallStart := time.Now()
nonceStart := time.Now()
@@ -48,7 +46,9 @@ func TestEpoch612LeaderSchedule(t *testing.T) {
copy(etaV, genesisHash)
eta0 := make([]byte, 32) // epoch nonce — eta_0(208) = shelley genesis hash
copy(eta0, genesisHash)
etaC := make([]byte, 32) // candidate nonce (frozen at stability window)
etaC := make([]byte, 32) // candidate nonce (frozen at stability window)
prevHashNonce := make([]byte, 32) // η_ph — NeutralNonce at Shelley start
var lastBlockHash string

currentEpoch := ShelleyStartEpoch
candidateFrozen := false
@@ -57,7 +57,7 @@ func TestEpoch612LeaderSchedule(t *testing.T) {
log.Printf("Streaming blocks from Shelley genesis to compute epoch nonces...")

rows, err := store.pool.Query(ctx,
"SELECT epoch, slot, nonce_value FROM blocks WHERE epoch <= 611 ORDER BY slot ASC")
"SELECT epoch, slot, nonce_value, block_hash FROM blocks WHERE epoch <= 611 ORDER BY slot ASC")
if err != nil {
t.Fatalf("Failed to query blocks: %v", err)
}
@@ -67,29 +67,29 @@ func TestEpoch612LeaderSchedule(t *testing.T) {
var epoch int
var slot uint64
var nonceValue []byte
if err := rows.Scan(&epoch, &slot, &nonceValue); err != nil {
var blockHash string
if err := rows.Scan(&epoch, &slot, &nonceValue, &blockHash); err != nil {
t.Fatalf("Scan failed: %v", err)
}

// Epoch transition
// Epoch transition — TICKN rule: η(new) = η_c ⊕ η_ph
if epoch != currentEpoch {
// If we didn't freeze candidate yet (epoch had < 60% blocks), freeze now
if !candidateFrozen {
etaC = make([]byte, 32)
copy(etaC, etaV)
}
// Compute epoch nonce for new epoch: eta_0(e+1) = hash(eta_c(e) || eta_0(e))
h, _ := blake2b.New256(nil)
h.Write(etaC)
h.Write(eta0)
eta0 = h.Sum(nil)
eta0 = xorBytes(etaC, prevHashNonce)
if lastBlockHash != "" {
prevHashNonce, _ = hex.DecodeString(lastBlockHash)
}

currentEpoch = epoch
candidateFrozen = false
}

// Evolve eta_v
// Evolve eta_v via XOR
etaV = evolveNonce(etaV, nonceValue)
lastBlockHash = blockHash
blockCount++

// Freeze candidate at 60% stability window
@@ -112,10 +112,10 @@ func TestEpoch612LeaderSchedule(t *testing.T) {
etaC = make([]byte, 32)
copy(etaC, etaV)
}
h, _ := blake2b.New256(nil)
h.Write(etaC)
h.Write(eta0)
epoch612Nonce := h.Sum(nil)
if lastBlockHash != "" {
prevHashNonce, _ = hex.DecodeString(lastBlockHash)
}
epoch612Nonce := xorBytes(etaC, prevHashNonce)

nonceElapsed := time.Since(nonceStart)
log.Printf("Nonce computation: %d blocks processed in %v", blockCount, nonceElapsed)
2 changes: 1 addition & 1 deletion go.mod
@@ -5,7 +5,7 @@ go 1.24.0
toolchain go1.24.11

require (
github.com/blinklabs-io/adder v0.37.0
github.com/blinklabs-io/adder v0.37.1-0.20260209154719-460d03ed24c1
github.com/blinklabs-io/gouroboros v0.153.1
github.com/cardano-community/koios-go-client/v3 v3.1.3
github.com/cenkalti/backoff/v4 v4.3.0
4 changes: 2 additions & 2 deletions go.sum
@@ -11,8 +11,8 @@ github.com/aws/aws-sdk-go v1.55.6 h1:cSg4pvZ3m8dgYcgqB97MrcdjUmZ1BeMYKUxMMB89IPk
github.com/aws/aws-sdk-go v1.55.6/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
github.com/bits-and-blooms/bitset v1.24.4 h1:95H15Og1clikBrKr/DuzMXkQzECs1M6hhoGXLwLQOZE=
github.com/bits-and-blooms/bitset v1.24.4/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6a/7QIWpPxHddWR8=
github.com/blinklabs-io/adder v0.37.0 h1:yfBA+4S34LLgqOcOS5H5XIOvNEhmdypEV7xw+ES/nQ4=
github.com/blinklabs-io/adder v0.37.0/go.mod h1:CJsbQyGJKGgDksRXgHxda5Uco11HSXhYmLqjcxrGFH8=
github.com/blinklabs-io/adder v0.37.1-0.20260209154719-460d03ed24c1 h1:C6Y03ERl3OFKknBKSpTnbckbJ1Zr0hJm5sbgfvGu9aU=
github.com/blinklabs-io/adder v0.37.1-0.20260209154719-460d03ed24c1/go.mod h1:a8OjDZFulnrpWAzPZR/htfHzc2gPRL+Lm975fK6Hm4Q=
github.com/blinklabs-io/gouroboros v0.153.1 h1:9Jj4hHFrVmmUbFAg4Jg8p8R07Cimb7rXJL5xrimGOi8=
github.com/blinklabs-io/gouroboros v0.153.1/go.mod h1:MTwq+I/IMtzzWGN2Jd87uuryMfOD+3mQGYNFJn1PSFY=
github.com/blinklabs-io/ouroboros-mock v0.9.0 h1:O4FhgxKt43RcZGcxRQAOV9GMF6F06qtpU76eFeBKWeQ=
3 changes: 3 additions & 0 deletions helm-chart/templates/configmap.yaml
@@ -55,6 +55,9 @@ data:
{{- end }}
timezone: {{ .Values.config.leaderlog.timezone | quote }}
timeFormat: {{ .Values.config.leaderlog.timeFormat | default "12h" | quote }}
{{- if .Values.config.leaderlog.ntcQueryTimeout }}
ntcQueryTimeout: {{ .Values.config.leaderlog.ntcQueryTimeout | quote }}
{{- end }}
database:
driver: {{ .Values.config.database.driver | quote }}
{{- if eq .Values.config.database.driver "sqlite" }}
2 changes: 2 additions & 0 deletions helm-chart/values.yaml
@@ -51,6 +51,8 @@ config:
timezone: ""
# Time format: "12h" or "24h"
timeFormat: "12h"
# NtC query timeout for stake snapshot queries (Go duration, e.g. "10m", "30m")
ntcQueryTimeout: "10m"

database:
driver: "sqlite"
9 changes: 7 additions & 2 deletions localquery.go
@@ -18,15 +18,20 @@ import (
type NodeQueryClient struct {
nodeAddress string
networkMagic uint32
queryTimeout time.Duration
}

// NewNodeQueryClient creates a new node query client.
// nodeAddress can be a TCP address ("host:port"), a UNIX socket path ("/ipc/node.socket"),
// or explicitly prefixed ("unix:///ipc/node.socket", "tcp://host:port").
func NewNodeQueryClient(nodeAddress string, networkMagic int) *NodeQueryClient {
func NewNodeQueryClient(nodeAddress string, networkMagic int, queryTimeout time.Duration) *NodeQueryClient {
if queryTimeout == 0 {
queryTimeout = 10 * time.Minute
}
return &NodeQueryClient{
nodeAddress: nodeAddress,
networkMagic: uint32(networkMagic),
queryTimeout: queryTimeout,
}
}

@@ -68,7 +73,7 @@ func (c *NodeQueryClient) withQuery(ctx context.Context, fn func(*localstatequer
ouroboros.WithKeepAlive(false),
ouroboros.WithLocalStateQueryConfig(
localstatequery.NewConfig(
localstatequery.WithQueryTimeout(10*time.Minute),
localstatequery.WithQueryTimeout(c.queryTimeout),
),
),
)
5 changes: 3 additions & 2 deletions main.go
@@ -287,8 +287,9 @@ func (i *Indexer) Start() error {
}
if ntcHost != "" {
network, address := parseNodeAddress(ntcHost)
i.nodeQuery = NewNodeQueryClient(ntcHost, i.networkMagic)
log.Printf("Node query client initialized (NtC): %s://%s", network, address)
ntcQueryTimeout := viper.GetDuration("leaderlog.ntcQueryTimeout")
i.nodeQuery = NewNodeQueryClient(ntcHost, i.networkMagic, ntcQueryTimeout)
log.Printf("Node query client initialized (NtC): %s://%s (timeout: %v)", network, address, i.nodeQuery.queryTimeout)
}

// Twitter toggle