The Bulletin Chain is a parachain providing distributed data storage and retrieval infrastructure for the Polkadot ecosystem. It stores arbitrary data with proof-of-storage guarantees and makes it accessible via IPFS, with data retention managed over a configurable period (default ~14 days). It runs on the Polkadot SDK's `polkadot-omni-node`.
The main purpose of the Bulletin Chain is to provide storage for the People Chain (Proof-of-Personhood). Data is added via authorized extrinsics, indexed with Blake2b-256 hashes, and retrievable from IPFS or directly from the node.
- **Authorization** - Storage access is controlled via root-origin calls. Authorization is granted either for a specific account (`authorize_account`) or for data with a specific content hash (`authorize_preimage`).
- **Storage** - Once authorized, data is submitted via `transactionStorage.store`. Large files are automatically chunked with DAG-PB manifests for IPFS compatibility.
- **Retrieval** - Stored data can be retrieved from IPFS via Bitswap, or directly from the node via the transaction index or content hash.
- **Retention & Renewal** - Data is retained for a configurable period. It can be renewed before expiry to extend retention, with support for automatic renewal.
The People Chain root calls `transactionStorage.authorize_preimage` (over XCM) to prime Bulletin to expect data with a given hash. A user account then submits the data via `transactionStorage.store`.
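Stored data is indexed by its Blake2b-256 hash, so the content hash passed to `authorize_preimage` can be computed off-chain before submission. A minimal sketch in Python, assuming an unkeyed Blake2b with a 32-byte digest (verify this matches the runtime's hashing convention before relying on it):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Blake2b-256 digest of a payload, as used to index stored data.

    Assumption: unkeyed Blake2b, 32-byte digest; check against the
    chain's actual hashing before use.
    """
    return hashlib.blake2b(data, digest_size=32).hexdigest()

payload = b"hello bulletin"
digest = content_hash(payload)
print(digest)  # 64 hex characters = 32 bytes
```

The same digest would then be supplied to `authorize_preimage`, and the chain matches the later `store` call against it.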
```
polkadot-bulletin-chain/
├── runtimes/
│   ├── bulletin-westend/        # Parachain runtime (Westend testnet)
│   │   └── integration-tests/   # XCM emulator integration tests
│   └── bulletin-paseo/          # Parachain runtime (Paseo testnet)
├── pallets/
│   ├── transaction-storage/     # Core storage pallet
│   │   └── primitives/          # Shared types (ContentHash, CID utilities)
│   ├── hop-promotion/           # HOP pool data promotion to chain storage
│   └── common/                  # Shared pallet utilities (NoCurrency, call inspection)
├── sdk/
│   ├── rust/                    # Rust SDK (no_std compatible)
│   └── typescript/              # TypeScript SDK (@parity/bulletin-sdk)
├── console-ui/                  # React web interface
├── examples/                    # JavaScript/TypeScript/Rust integration examples
├── stress-test/                 # Write throughput & Bitswap read benchmarks
├── docs/                        # SDK book, authorization docs, operational playbook
├── scripts/                     # Build, benchmarking, and deployment scripts
└── zombienet/                   # Local parachain network configurations
```
All data on the Bulletin Chain has the same retention period (~14 days). Two operations interact with this storage, differing in how they consume allowances:
- `store` - writes new data and starts a fresh retention countdown.
- `renew` - re-indexes data that is about to expire, resetting its retention countdown. The chain tracks total renewed bytes in a global `PermanentStorageUsed` counter for capacity planning.
When data reaches the end of its retention period without being renewed, it is automatically cleaned up.
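The store/renew lifecycle above can be sketched as a small state model. This is illustrative only: the pallet tracks expiry in blocks, not timestamps, and its storage layout differs.

```python
RETENTION = 14 * 24 * 3600  # ~14 days, expressed in seconds for illustration

class RetentionTracker:
    """Toy model: store starts a countdown, renew resets it,
    and items past expiry are cleaned up."""

    def __init__(self):
        self.expiry = {}         # content hash -> expiry time
        self.permanent_used = 0  # mirrors the PermanentStorageUsed counter

    def store(self, h: str, now: int):
        self.expiry[h] = now + RETENTION

    def renew(self, h: str, size: int, now: int):
        self.expiry[h] = now + RETENTION   # countdown restarts
        self.permanent_used += size        # renewed bytes tracked globally

    def cleanup(self, now: int):
        self.expiry = {k: t for k, t in self.expiry.items() if t > now}

t = RetentionTracker()
t.store("0xabc", now=0)
t.renew("0xabc", size=1024, now=RETENTION - 10)  # renew just before expiry
t.cleanup(now=RETENTION + 1)
print("0xabc" in t.expiry, t.permanent_used)     # True 1024
```

Without the `renew` call, the cleanup pass would have dropped the entry once `now` passed the original expiry.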
All storage operations require prior authorization, granted via root-origin calls. Each authorization carries an `AuthorizationExtent` - a set of counters that share a single `bytes_allowance` cap but enforce it differently depending on the operation:
| Counter | Enforcement | Behavior |
|---|---|---|
| `bytes` / `transactions` | Soft (`store`) | Saturate upward on every `store`. Never reject - exceeding the allowance just reduces the transaction's priority boost (via `AllowanceBasedPriority`), letting under-budget accounts land first. |
| `bytes_permanent` | Hard (`renew`) | Increments on every `renew`. Rejects with `PermanentAllowanceExceeded` when `bytes_permanent + size > bytes_allowance`. |
| `bytes_allowance` / `transactions_allowance` | Caps | Set at grant time. `bytes_allowance` is shared between `store` (soft) and `renew` (hard). |
This design means store is always accepted (authorization just needs to exist and not be expired), but accounts that have exceeded their budget are naturally deprioritized in favor of those still within budget. Renewals, which commit to retaining data longer, are strictly capped.
All counters reset to zero when an expired authorization is re-granted, starting a fresh window.
A global MaxPermanentStorageSize limits total renewed bytes across all authorizations. A renew is rejected when PermanentStorageUsed + size > MaxPermanentStorageSize. When usage crosses 80% of the cap, a PermanentStorageNearCap event is emitted as a signal for off-chain governance to raise the cap or coordinate another bulletin chain.
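The soft/hard split and the global cap can be illustrated with a small accounting model. This is a sketch under stated assumptions: the counter names mirror the pallet, but the logic is simplified, and the `GlobalCapExceeded` label is a placeholder (the source does not name the chain's actual error for the global cap).

```python
MAX_PERMANENT = 10_000   # stand-in for MaxPermanentStorageSize
NEAR_CAP_RATIO = 0.8     # 80% threshold for the near-cap signal

class Allowances:
    """Toy model: store saturates soft counters and only affects priority;
    renew is hard-capped per authorization and globally."""

    def __init__(self, bytes_allowance: int):
        self.bytes_allowance = bytes_allowance
        self.bytes = 0              # soft counter (store)
        self.bytes_permanent = 0    # hard counter (renew)
        self.global_permanent = 0   # stand-in for PermanentStorageUsed

    def store(self, size: int) -> bool:
        # Never rejected: being over budget just loses the priority boost.
        self.bytes += size
        return self.bytes <= self.bytes_allowance  # True => priority boost

    def renew(self, size: int) -> str:
        if self.bytes_permanent + size > self.bytes_allowance:
            return "PermanentAllowanceExceeded"
        if self.global_permanent + size > MAX_PERMANENT:
            return "GlobalCapExceeded"  # placeholder error name
        self.bytes_permanent += size
        self.global_permanent += size
        if self.global_permanent >= NEAR_CAP_RATIO * MAX_PERMANENT:
            print("PermanentStorageNearCap")  # governance signal
        return "ok"

a = Allowances(bytes_allowance=5_000)
print(a.store(4_000))  # True: within budget, keeps priority boost
print(a.store(4_000))  # False: over budget, deprioritized but still accepted
print(a.renew(4_000))  # ok
print(a.renew(4_000))  # PermanentAllowanceExceeded (4000 + 4000 > 5000)
```

Note how the two `store` calls both succeed while only the first earns the priority boost, whereas the second `renew` is rejected outright.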
Core storage pallet providing distributed data storage and retrieval with authorization-based access control.
Extrinsics:
- `store` / `store_with_cid_config` - Store data (with optional CID codec/hash configuration)
- `renew` / `renew_content_hash` - Extend retention of stored data
- `authorize_account` - Grant an account permission to store (with transaction/byte limits)
- `authorize_preimage` - Authorize storage of data with a specific content hash
- `refresh_account_authorization` / `refresh_preimage_authorization` - Extend authorization expiration
Key features:
- Authorization-based access control (account-scoped or content-addressed)
- Configurable retention period with automatic cleanup
- Auto-renewal tracking for important data
- Merkle-based storage proofs with chunk validation
- Soft-cap (priority signal) and hard-cap (per-window renewal quota) for storage capacity
- Feeless transaction support via `pallet-skip-feeless-payment`
Promotes near-expiry HOP (Human-Operated Peer) pool data to permanent chain storage. Uses general (unsigned authorized) transactions to fill unused blockspace without charging users. Validates sr25519 signatures and checks that the promoting account has an active Bulletin authorization.
Shared utilities including `NoCurrency` (a no-op fungible currency for pallets that require one) and call inspection helpers for unwrapping utility/sudo/proxy wrappers during authorization tracking.
Two parachain runtimes (bulletin-westend, bulletin-paseo) share the same pallet composition with network-specific constants. Both use 24-second slots (4 relay chain slots), 10 MiB max block length, and a ~14 day retention period.
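Under these constants, the ~14 day retention window corresponds to a fixed block count. A quick sanity check, assuming one block per 24-second slot with no missed slots (the runtime's actual constant may be defined differently):

```python
SLOT_SECS = 24       # 24-second slots = 4 relay chain slots of 6 s each
RETENTION_DAYS = 14  # ~14 day retention period

blocks_per_day = 24 * 3600 // SLOT_SECS
retention_blocks = RETENTION_DAYS * blocks_per_day
print(blocks_per_day, retention_blocks)  # 3600 blocks/day, 50400 blocks
```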
Multi-language client SDKs for submitting data, managing authorizations, and generating IPFS-compatible DAG-PB manifests.
`no_std`-compatible core with optional `std` features for direct transaction submission via `subxt`.
- Automatic chunking with configurable chunk size (default 1 MiB)
- DAG-PB manifest generation for chunked data
- `BulletinClient` for offline prepare operations
- Progress tracking via callbacks
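The automatic chunking described above can be sketched as a simple fixed-size splitter. This is illustrative only; the SDK's actual chunker also builds the DAG-PB manifest linking the chunks.

```python
CHUNK_SIZE = 1 << 20  # default chunk size: 1 MiB

def chunk(data: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a payload into fixed-size chunks; the last chunk may be short."""
    return [data[i:i + size] for i in range(0, len(data), size)]

payload = bytes(2 * CHUNK_SIZE + 100)  # 2 MiB + 100 B of zero bytes
parts = chunk(payload)
print(len(parts), len(parts[-1]))      # 3 chunks; the last is 100 bytes
```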
Published as @parity/bulletin-sdk on npm. Browser and Node.js compatible (requires Node >= 22).
- `AsyncBulletinClient` for end-to-end storage workflows
- `FixedSizeChunker` and `UnixFsDagBuilder` for large file handling
- Built on `polkadot-api` (PAPI)
Quick start: See sdk/README.md
Full documentation: See docs/book/ (viewable locally with `mdbook serve --open`)
A React 19 + Vite web application for interacting with the Bulletin Chain in the browser. Built with Polkadot API, Smoldot light client, Helia (IPFS), and Tailwind CSS. Includes Playwright E2E tests.
```bash
# Build production runtime
cargo build --profile production -p bulletin-westend-runtime --features on-chain-release-build

# Build with runtime benchmarks enabled
cargo build --release --features runtime-benchmarks
```

```bash
# Run all tests
cargo test

# Run pallet tests
cargo test -p pallet-bulletin-transaction-storage

# Run runtime tests
cargo test -p bulletin-westend-runtime
```

```bash
# Run benchmarks for a specific runtime
python3 scripts/cmd/cmd.py bench --runtime bulletin-westend

# Run all benchmarks
python3 scripts/cmd/cmd.py bench
```

The stress-test/ directory contains a benchmarking tool for measuring write throughput and Bitswap read performance:
```bash
# Throughput benchmark across payload sizes (1KB - 2MB)
bulletin-stress-test throughput

# Bitswap read benchmark across concurrency levels (1-64 clients)
bulletin-stress-test bitswap
```

Local parachain networks can be spun up using the configurations in zombienet/:
- `bulletin-westend-local.toml` - Local Westend relay + Bulletin parachain
- `bulletin-paseo-local.toml` - Local Paseo relay + Bulletin parachain
The examples/ directory contains JavaScript, TypeScript, and Rust scripts demonstrating chain interaction:
- Authorization and storage workflows (WebSocket RPC and Smoldot light client)
- Content-addressed (preimage) authorization
- Chunked data storage with DAG-PB manifests
- Large file handling with parallel uploads
- Auto-renewal monitoring
- Runtime upgrades
See examples/README.md for setup and usage.
GitHub Actions workflows in .github/workflows/ cover checks (Rust, SDK, console UI), integration and stress tests, runtime migration testing, crate publishing, releases, and UI deployment.
This means C++ standard library headers can't be found. Fix:

```bash
xcode-select --install
```

If already installed, reinstall:

```bash
sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install
```

Verify the active developer path with `xcode-select -p` (it should be `/Applications/Xcode.app/Contents/Developer` or `/Library/Developer/CommandLineTools`). If incorrect, set it manually:

```bash
sudo xcode-select --switch /Library/Developer/CommandLineTools
```
See the official Polkadot SDK macOS guide for more.
```bash
brew install llvm
export LIBCLANG_PATH="$(brew --prefix llvm)/lib"
export LD_LIBRARY_PATH="$LIBCLANG_PATH:$LD_LIBRARY_PATH"
export DYLD_LIBRARY_PATH="$LIBCLANG_PATH:$DYLD_LIBRARY_PATH"
export PATH="$(brew --prefix llvm)/bin:$PATH"
```

Verify `libclang.dylib` exists with `ls "$(brew --prefix llvm)/lib/libclang.dylib"`, then rebuild:

```bash
cargo clean
cargo build --release
```