A Rust-based issuance bot that acts as the Issuer in Alpaca's Instant
Tokenization Network (ITN). The bot implements the Issuer-side endpoints that
Alpaca calls during mint/redeem operations, and coordinates with the Rain
OffchainAssetReceiptVault contracts to execute the actual on-chain minting and
burning of tokenized shares.
The issuance bot serves as the bridge between traditional equity holdings (at Alpaca) and on-chain semi-fungible tokenized representations (Rain SFT contracts). This is general infrastructure - any Authorized Participant (AP) can use it to mint and redeem tokenized equities.
- Account Linking: Connect AP accounts to the system
- Asset Management: Configure which tokenized assets are supported
- Minting: Convert traditional equity holdings to on-chain tokens
- Redemption: Burn on-chain tokens and return underlying equity
- Event Sourcing: Complete audit trail with time-travel debugging capabilities
- CQRS Architecture: Separation of command and query responsibilities for scalability
The system uses Event Sourcing (ES) and Command Query Responsibility Segregation (CQRS) patterns:
- Commands: Requests to perform actions (e.g., `InitiateMint`, `ConfirmJournal`)
- Events: Immutable facts about what happened (e.g., `MintInitiated`, `TokensMinted`)
- Aggregates: Business entities that process commands and produce events (`Mint`, `Redemption`, `Account`, `TokenizedAsset`)
- Views: Read-optimized projections built from events for efficient querying
- Event Store: Single source of truth - append-only log of all domain events in SQLite
- HTTP Server: Rocket.rs-based server implementing Alpaca ITN Issuer endpoints
- Blockchain Client: Alloy-based client for interacting with Rain vault contracts
- Alpaca Integration: Client for Alpaca's API endpoints
- Monitor Service: Watches redemption wallet for incoming token transfers
- SQLite Database: Event store and view repositories
- Nix with flakes enabled
1. Clone the repository:

   ```sh
   git clone https://github.com/ST0x-Technology/st0x.issuance.git
   cd st0x.issuance
   ```

2. Enter the development environment:

   ```sh
   nix develop
   ```

3. Set up environment variables:

   ```sh
   cp .env.example .env
   # Edit .env with your configuration
   ```

4. Create and migrate the database:

   ```sh
   sqlx db create
   sqlx migrate run
   ```

5. Run the tests:

   ```sh
   cargo test -q
   ```

6. Start the server:

   ```sh
   cargo run
   ```
Endpoints require API key authentication with IP whitelisting and rate limiting.
Configuration:

```sh
# Generate an API key (min 32 chars)
ISSUER_API_KEY=$(openssl rand -hex 32)

# Configure the IP whitelist (CIDR notation)
ALPACA_IP_RANGES="1.2.3.0/24,5.6.7.8/32"
```

Request format:

```sh
curl -X POST https://issuer.example.com/inkind/issuance \
  -H "X-API-KEY: <api-key>" \
  -H "Content-Type: application/json"
```

Security: API keys are compared in constant time, and failed authentication attempts are rate-limited to 10 per IP per minute.
```sh
cargo build                  # Build the project
cargo run                    # Run the HTTP server

cargo test --workspace       # Run all tests (including crates/)
cargo test -q                # Run all tests quietly
cargo test -q --lib         # Run library tests only
cargo test -q <name>         # Run a specific test

sqlx db create               # Create the database
sqlx migrate run             # Apply migrations
sqlx migrate revert          # Revert last migration
sqlx migrate reset -y        # Drop DB and re-run all migrations

cargo fmt                    # Format code
cargo fmt --all -- --check   # Check formatting
cargo clippy --workspace --all-targets --all-features -- -D clippy::all -D warnings # Run linting
```

```
st0x.issuance/
├── src/
│   ├── lib.rs                 # Library entry point with rocket setup
│   ├── main.rs                # Binary entry point (minimal)
│   ├── test_utils.rs          # Shared test utilities
│   ├── account/               # Account aggregate and endpoints
│   │   ├── mod.rs             # Aggregate, commands, events
│   │   ├── api.rs             # HTTP endpoints
│   │   └── view.rs            # Read model projections
│   ├── mint/                  # Mint aggregate and endpoints
│   │   ├── mod.rs             # Aggregate, commands, events
│   │   ├── cmd.rs             # Command definitions
│   │   ├── event.rs           # Event definitions
│   │   ├── api/               # HTTP endpoints
│   │   └── view.rs            # Read model projections
│   ├── tokenized_asset/       # TokenizedAsset aggregate and endpoints
│   │   └── ...                # Similar structure to above
│   ├── alpaca/                # Alpaca API service
│   │   ├── mod.rs             # Service trait and types
│   │   ├── service.rs         # Real HTTP implementation
│   │   └── mock.rs            # Mock implementation for testing
│   └── blockchain/            # Blockchain service and types
│       ├── mod.rs             # Service trait and types
│       ├── service.rs         # Real Alloy implementation
│       └── mock.rs            # Mock implementation for testing
├── tests/                     # End-to-end integration tests
│   ├── e2e_mint_flow.rs       # Complete mint flow with Anvil
│   └── e2e_redemption_flow.rs # Redemption flow with Anvil
├── crates/
│   └── sqlite-es/             # SQLite event store implementation
├── migrations/                # Database migrations
├── AGENTS.md                  # AI agent development guidelines
├── CLAUDE.md                  # Claude Code instructions
├── SPEC.md                    # Detailed specification
├── ROADMAP.md                 # Development roadmap
└── README.md                  # This file
```
Note: This project uses package by feature organization, not package by
layer. Each feature module (account/, mint/, tokenized_asset/) contains
all related code: types, errors, commands, events, aggregates, views, and
endpoints.
Endpoints we expose to Alpaca:

- `POST /accounts/connect` - Link AP account to our system
- `POST /accounts/{client_id}/wallets` - Whitelist a wallet for an AP account
- `GET /tokenized-assets` - List supported tokenized assets
- `POST /inkind/issuance` - Receive mint request from Alpaca
- `POST /inkind/issuance/confirm` - Receive journal confirmation from Alpaca
Alpaca endpoints we call:

- `POST /v1/accounts/{account_id}/tokenization/callback/mint` - Confirm mint completed
- `POST /v1/accounts/{account_id}/tokenization/redeem` - Initiate redemption
- `GET /v1/accounts/{account_id}/tokenization/requests` - Poll request status
Mint flow:

- AP requests mint → Alpaca calls our `/inkind/issuance` endpoint
- We validate and respond with `issuer_request_id`
- Alpaca journals shares from AP to our custodian account
- Alpaca confirms journal → we receive `/inkind/issuance/confirm`
- We mint tokens on-chain via `vault.deposit()`
- We call Alpaca's callback endpoint
Redemption flow:

- AP sends tokens to our redemption wallet → we detect the transfer
- We call Alpaca's redeem endpoint
- We poll for journal completion
- We burn tokens on-chain via `vault.withdraw()`
Configuration is managed through environment variables. See .env.example for
all available options.
Key configuration areas:
- HTTP server settings
- Alpaca API credentials
- Blockchain RPC endpoints
- Database connection
- Operational parameters (gas limits, poll intervals, etc.)
The project uses Given-When-Then testing for aggregate logic:
```rust
MintTestFramework::with(mock_services)
    .given(vec![MintInitiated { /* ... */ }])
    .when(ConfirmJournal { issuer_request_id: "123" })
    .then_expect_events(vec![
        JournalConfirmed { /* ... */ },
        MintingStarted { /* ... */ },
    ]);
```

This approach enables:
- Testing business logic in isolation
- Clear test intent and readability
- Complete coverage of state transitions
- Easy mocking of external services
E2E tests in tests/ use Anvil (local Ethereum blockchain) for realistic
on-chain testing:
- LocalEvm: Test infrastructure that deploys vault contracts to Anvil
- Real blockchain interactions: Tests execute actual on-chain deposits and transfers
- WebSocket monitoring: Tests verify event subscriptions and real-time detection
- In-memory database: Tests use SQLite in-memory for fast, isolated execution
- Mock external APIs: Alpaca API calls use httpmock for deterministic testing
E2E tests validate complete flows from HTTP request through CQRS to on-chain execution.
- SPEC.md - Detailed specification of the system
- ROADMAP.md - Development roadmap and milestones
- AGENTS.md - Development guidelines for AI agents
- CLAUDE.md - Instructions for Claude Code
This project follows strict development practices focused on code quality and maintainability:
- Event Sourcing & CQRS: All state changes captured as immutable events
- Type-Driven Design: Use algebraic data types to make invalid states unrepresentable
- Functional Patterns: Prefer functional programming patterns and iterators over imperative loops
- Feature Development: Implement complete vertical slices (HTTP → commands → events → views)
- No Lint Suppression: Never use `#[allow(clippy::*)]` without explicit permission - fix the underlying code instead
- Financial Data Integrity: All numeric conversions and financial operations must use explicit error handling - never silently cap, truncate, or provide default values
- Error Handling: Avoid `unwrap()` even after validation - use proper error propagation
- Visibility Levels: Keep visibility as restrictive as possible (`pub(crate)` over `pub`, private over `pub(crate)`)
- Comments: Only comment when adding context that cannot be expressed through code structure - avoid redundant comments
Before submitting changes, always run in order:
1. `cargo test -q` - Run all tests first
2. `cargo clippy --all-targets --all-features -- -D clippy::all` - Fix all linting issues
3. `cargo fmt` - Format code last
For detailed architectural patterns and design decisions, see SPEC.md.