
Perform integrity check on custody columns against database on startup #6974

Open

Description

@jimmygchen

Under PeerDAS, each full node only stores a subset of all blob data (its "custody data columns"), and these are computed based on:

  1. the node's NodeId (generated on first startup)
  2. the node's custody group count (this is 4 by default for a full node)

e.g. for a given node ID, a full node could be the custodian of data columns 1, 3, 5 and 7 (see the simplified sketch below).
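
As a rough illustration, the custody column set can be thought of as a pure function of those two inputs. The sketch below is simplified and is not the actual consensus-spec derivation (which hashes the node ID with an incrementing counter to select custody groups); the constant and the mixing step are illustrative only.

```rust
// Simplified sketch of custody column derivation. NOT the spec algorithm:
// the real function hashes `node_id || counter` and maps groups to columns.
// It only illustrates that the output is fully determined by the node ID
// and the custody group count, so changing either changes the column set.

const NUMBER_OF_CUSTODY_GROUPS: u64 = 128; // illustrative constant

fn compute_custody_columns(node_id: u64, custody_group_count: u64) -> Vec<u64> {
    let mut groups: Vec<u64> = Vec::new();
    let mut counter = 0u64;
    while (groups.len() as u64) < custody_group_count.min(NUMBER_OF_CUSTODY_GROUPS) {
        // Stand-in for `hash(node_id || counter) % NUMBER_OF_CUSTODY_GROUPS`.
        let group = node_id.wrapping_mul(31).wrapping_add(counter) % NUMBER_OF_CUSTODY_GROUPS;
        if !groups.contains(&group) {
            groups.push(group);
        }
        counter += 1;
    }
    // Assuming one column per custody group (as with the current parameters),
    // custody groups map directly to custody columns here.
    groups.sort_unstable();
    groups
}

fn main() {
    // The same inputs always produce the same columns; a different node ID or
    // custody group count produces a different (incompatible) set.
    println!("{:?}", compute_custody_columns(0xdead_beef, 4));
}
```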

On a restart, if either the node ID or custody group count changes (e.g. switching between a supernode and a full node via the --subscribe-all-data-column-subnets flag), the node's set of custody columns also changes, resulting in an inconsistent data column DB - similar to switching between an archive node and a non-archive node.

This means the node would not be able to serve the data columns it's expected to store, and may get downscored by all of its peers.

Proposed Solution

Handling custody column changes is likely quite complex; it might be easier and quicker to resync from scratch.

We could persist the custody info in the DB and perform an integrity check on startup to see whether the newly computed custody columns match what's in the database.

If they don't match, exit the process and inform the user to re-sync instead (a rough sketch of this check is included below).
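
A rough sketch of what the startup check could look like, assuming a hypothetical `PersistedCustodyInfo` record stored in the DB; none of the type or function names below are existing Lighthouse APIs, and loading/saving the record is elided.

```rust
use std::collections::HashSet;

/// Hypothetical custody parameters persisted to the DB on first startup.
#[derive(Debug)]
struct PersistedCustodyInfo {
    node_id: [u8; 32],
    custody_group_count: u64,
    custody_columns: HashSet<u64>,
}

#[derive(Debug)]
enum CustodyCheckError {
    /// The freshly computed custody columns no longer match what the DB holds.
    Mismatch {
        persisted: HashSet<u64>,
        computed: HashSet<u64>,
    },
}

/// Compare the custody columns computed on this startup against the persisted
/// record. On first startup (nothing persisted) the caller would write the
/// computed record to the DB; on a mismatch we fail hard so the operator knows
/// the data column DB is inconsistent and a re-sync is required.
fn check_custody_integrity(
    persisted: Option<PersistedCustodyInfo>,
    computed: PersistedCustodyInfo,
) -> Result<PersistedCustodyInfo, CustodyCheckError> {
    match persisted {
        None => Ok(computed),
        Some(prev) if prev.custody_columns == computed.custody_columns => Ok(computed),
        Some(prev) => Err(CustodyCheckError::Mismatch {
            persisted: prev.custody_columns,
            computed: computed.custody_columns,
        }),
    }
}

fn main() {
    let computed = PersistedCustodyInfo {
        node_id: [0u8; 32],
        custody_group_count: 4,
        custody_columns: [1, 3, 5, 7].into_iter().collect(),
    };
    match check_custody_integrity(None, computed) {
        Ok(info) => println!("custody columns OK: {:?}", info.custody_columns),
        Err(e) => eprintln!("custody mismatch, please re-sync: {:?}", e),
    }
}
```

This keeps the policy simple: the node never attempts to migrate custody data, it either starts with a consistent DB or refuses to start.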

Note: we may end up storing this info as part of validator custody (#6767), and we should be able to use the same info from the DB for both features.

Metadata

Assignees

No one assigned

Labels

blocked · das (Data Availability Sampling) · database · fulu (Required for the upcoming Fulu hard fork)

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
