Full-stack prototype — pnpm workspaces monorepo deploying to AWS via SST v3.
```
hyperspace/
├── sst.config.ts                  # SST v3 infrastructure (app stack — API, website, queues, etc.)
├── infra/                         # SST v3 infrastructure (base infra — OIDC provider, IAM roles)
├── contracts/                     # Foundry smart contracts
├── packages/
│   ├── shared/                    # TypeScript interfaces shared between website and backend
│   ├── aurora-backoffice-client/  # Generated TS client for Aurora Back Office API
│   ├── aurora-portal-client/      # Generated TS client for Aurora Portal API
│   ├── backend/                   # Lambda handlers (upload → DynamoDB)
│   ├── ui/                        # UI component library (git submodule → joemocode-business/ui-hyperspace)
│   └── website/                   # Vite + React 19 + TanStack Router SPA + Tailwind v4
```
`packages/ui` is a git submodule — a standalone fork of `@filecoin-foundation/ui-filecoin` adapted for React/Vite. The upstream-tracking fork lives at joemocode-business/filecoin-foundation. This package does not build on its own: the UI components we use are imported and built through the website package. TODO: move this somewhere more official than a personal GitHub account.
| Setting | Value |
| --- | --- |
| Staging/dev account | 654654381893 |
| Region | us-east-2 |
| SSO portal | https://d-9067ff87d6.awsapps.com/start |
- Node.js >= 24
- AWS CLI — required for S3 sync and CloudFront invalidation during deploy. See the AWS CLI install guide.
1. Configure the AWS profile (one-time)

```bash
aws configure sso --profile filone
```

When prompted:

- SSO session name: `filone-sandbox`
- SSO start URL: `https://d-9067ff87d6.awsapps.com/start`
- SSO region: `us-east-1`
- SSO registration scopes: `sso:account:access`
- Account ID: `654654381893`
- Role: `AdministratorAccess`
- Default region: `us-east-2` (or whatever region you want)
- Output format: `json`
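For reference, answering the prompts above produces a profile block in `~/.aws/config` roughly like the following (a sketch — exact section layout can vary with AWS CLI version):

```
[profile filone]
sso_session = filone-sandbox
sso_account_id = 654654381893
sso_role_name = AdministratorAccess
region = us-east-2
output = json

[sso-session filone-sandbox]
sso_start_url = https://d-9067ff87d6.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access
```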
2. Log in and activate the profile

You MUST do this before you can deploy.

```bash
aws sso login --profile filone
```

Then set the profile for your shell session so SST picks it up:

```bash
export AWS_PROFILE=filone
```

To make this permanent, add it to your shell config:

```bash
# Add to ~/.zshrc (or ~/.bashrc)
echo 'export AWS_PROFILE=filone' >> ~/.zshrc
source ~/.zshrc
```

You can verify it's working with:

```bash
aws sts get-caller-identity
```

3. Initialize submodules

```bash
git submodule update --init --recursive
```

4. Install dependencies

```bash
pnpm install
```

5. Set SST secrets (one-time per stage)
```bash
pnpx sst secret set Auth0ClientId <value> [--stage <stage>]
pnpx sst secret set Auth0ClientSecret <value> [--stage <stage>]
pnpx sst secret set Auth0MgmtClientId <value> [--stage <stage>]
pnpx sst secret set Auth0MgmtClientSecret <value> [--stage <stage>]
pnpx sst secret set Auth0MgmtRuntimeClientId <value> [--stage <stage>]
pnpx sst secret set Auth0MgmtRuntimeClientSecret <value> [--stage <stage>]
pnpx sst secret set StripeSecretKey <value> [--stage <stage>]
pnpx sst secret set StripePriceId <value> [--stage <stage>]
pnpx sst secret set AuroraBackofficeToken <value> [--stage <stage>]
pnpx sst secret set SendGridApiKey <value> [--stage <stage>]
pnpx sst secret set GrafanaLokiAuth '<instanceId>:<apiKey>' [--stage <stage>]
```

Omit `--stage` to set the secret for your personal dev stage (defaults to your OS username).

There are two Auth0 M2M credentials with different scopes — see the Auth0 M2M setup section below. `AuroraBackofficeToken` comes from the Aurora Back Office dashboard — see the API token section below. `GrafanaLokiAuth` comes from Grafana Cloud — see the Observability section below.
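Setting a dozen secrets by hand is tedious. Here is a small hypothetical helper (not part of the repo) that prints the `sst secret set` commands for every `KEY=value` pair in a local, git-ignored env file, so you can review them before running:

```shell
# Hypothetical helper — assumes a git-ignored file with KEY=value lines
# (e.g. Auth0ClientId=abc123). It only PRINTS the commands; review the
# output before executing it. Values with spaces would need extra quoting.
gen_secret_cmds() {
  file=$1
  stage=${2:-}
  while IFS='=' read -r key value; do
    [ -z "$key" ] && continue             # skip blank lines
    case "$key" in \#*) continue ;; esac  # skip comments
    if [ -n "$stage" ]; then
      printf 'pnpx sst secret set %s %s --stage %s\n' "$key" "$value" "$stage"
    else
      printf 'pnpx sst secret set %s %s\n' "$key" "$value"
    fi
  done < "$file"
}
```

Usage: `gen_secret_cmds .secrets.env staging`, then paste (or pipe to `sh`) once the output looks right.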
```bash
pnpm run dev             # SST live dev mode (live Lambda debugging)
pnpm run build           # Build all packages
pnpm run deploy:dev      # Build and deploy personal dev stack (uses OS username as stage)
pnpm run remove          # Remove your personal dev stack
pnpm run storybook       # Start Storybook dev server on port 6006
pnpm run test:storybook  # Run Storybook tests (browser-based, requires Playwright)
pnpm run lint            # Lint and typecheck TypeScript code (via oxlint)
pnpm run lint:fix        # Lint and auto-fix where possible
```

Do not run `deploy:staging` or `deploy:production` manually. Staging and production deployments should go through CI/CD.
```bash
# Local website dev server (for frontend-only changes)
cd packages/website && pnpm run dev
```

The repo includes a Playwright end-to-end test suite under `tests/e2e/`. The `@playwright/test` package is already a devDependency, so `pnpm install` covers it.
Install browser binaries (one-time):

```bash
pnpm exec playwright install --with-deps
```

Run tests against a deployed stage:

```bash
BASE_URL=<your-cloudfront-url> pnpm test:e2e
```

`BASE_URL` is required and should point to a deployed SST stage (personal dev stack, staging, etc.).

After a run, an HTML report is generated at `playwright-report/`. To view it:

```bash
pnpm exec playwright show-report
```

CI runs these tests automatically against preview deployments on PRs.
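Forgetting `BASE_URL` is an easy mistake, so a small wrapper (hypothetical, not in the repo) can fail fast with a clear message before handing off to Playwright:

```shell
# Hypothetical guard: ensure BASE_URL is set and is an absolute URL
# before running the e2e suite.
check_base_url() {
  case "${1:-}" in
    http://*|https://*) return 0 ;;
    "") echo "BASE_URL is required (point it at a deployed stage)" >&2; return 1 ;;
    *)  echo "BASE_URL must start with http(s)://, got: $1" >&2; return 1 ;;
  esac
}

# Typical use: check_base_url "$BASE_URL" && pnpm test:e2e
check_base_url "https://d1234abcd.cloudfront.net" && echo "BASE_URL looks valid"
```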
Integration tests live in `tests/integration/`. They run against real AWS and Stripe resources and confirm that individual modules or services interact correctly — for instance, that Stripe webhook handlers produce the expected state transitions in DynamoDB.
While E2E tests (Playwright) cover full, business-critical user journeys spanning the entire system, integration tests focus more narrowly on backend logic at specific integration points.
Run tests (requires a deployed SST stage):

```bash
pnpm test:integration
```

Tests run inside `sst shell` so that SST resource bindings (table names, Stripe keys, etc.) are available as environment variables.
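If a test is accidentally run outside `sst shell`, the missing bindings can look like config bugs. A tiny guard (hypothetical; it assumes the `SST_RESOURCE_App` environment variable that SST v3's shell injects — adjust the name if your SST version differs) makes the error obvious:

```shell
# Hypothetical guard — SST_RESOURCE_App is assumed to be the marker variable
# set by `sst shell`; if it is absent, print how to run the suite correctly.
check_sst_shell() {
  if [ -z "${SST_RESOURCE_App:-}" ]; then
    echo "SST bindings missing — run via: pnpx sst shell -- pnpm test:integration" >&2
    return 1
  fi
}

# Usage: check_sst_shell && <run your test command>
```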
```bash
pnpm deploy:dev
```

Uses your OS username as the stage name. No custom domain — outputs a CloudFront URL.

If you have trouble deploying after SST changes (e.g., an SST version bump, or drift caused by manual changes to components), refresh the stack first:

```bash
pnpm run refresh
pnpm deploy:dev
```

Do not deploy to staging or production manually unless there is a very good reason. Use CI/CD. For reference, the CI/CD pipeline runs:

```bash
pnpm run deploy:staging
pnpm run deploy:production
```

Custom domains require a pre-provisioned ACM certificate in us-east-1 and a DNS CNAME pointing to the CloudFront distribution (managed by a separate pipeline).
Infrastructure-only deploys are available for cases where only the base infra (OIDC, IAM roles) needs updating:

```bash
pnpm run deploy:infra:staging
pnpm run deploy:infra:production
```

```bash
pnpx sst dev
```

Runs Lambda functions locally with live reload. Changes to handler code take effect immediately without redeploying.
Custom domains require an ACM certificate in us-east-1 (CloudFront requirement):
We manage this in another repo: https://github.com/FilecoinFoundationWeb/FilHyperspace-Infrastructure
Example PR to add staging.fil.one: https://github.com/FilecoinFoundationWeb/FilHyperspace-Infrastructure/pull/3
| Setting | Value |
| --- | --- |
| Dev environment | FilOneDev |
| Tenant domain | `dev-oar2nhqh58xf5pwf.us.auth0.com` |
| Dashboard | https://manage.auth0.com/dashboard/us/dev-oar2nhqh58xf5pwf/applications/hAHMVzFTsFMrtxHDfzOvQCLHgaAf3bPQ/settings |
Auth0 credentials are managed as SST secrets (`Auth0ClientId`, `Auth0ClientSecret`). See the "Set SST secrets" step above.
Callback and logout URLs are configured automatically during deploy — no manual Dashboard edits needed. The deploy-time setup Lambda adds the correct URLs for the deployed domain (custom domain or CloudFront).
Application settings (Applications > your app > Settings):
- Under Advanced Settings > Grant Types, ensure Authorization Code and Refresh Token are enabled.
API setup (APIs > Create API):
- Identifier (audience): `app.fil.one` (prod) — this must match `AUTH0_AUDIENCE` in `sst.config.ts`. It's what makes Auth0 issue a JWT access token (instead of an opaque one) and is the `aud` claim the middleware validates.
- Under the API's Machine to Machine Applications tab, authorize your application so it can exchange tokens.
Two separate M2M applications are used to limit the scope of credentials exposed to Lambda functions.
Used only by the deploy-time setup Lambda to configure Auth0 on each deploy. Not available to runtime Lambda functions.
One-time setup in the Auth0 Dashboard:

- Go to Applications > Create Application
- Choose Machine to Machine Applications
- Name it something like `Fil.one Deploy Automation`
- Authorize it for the Auth0 Management API (`https://<tenant>.us.auth0.com/api/v2/`)
- Grant these scopes: `read:clients`, `update:clients`, `read:email_provider`, `create:email_provider`, `update:email_provider`
- Copy the Client ID and Client Secret

```bash
pnpx sst secret set Auth0MgmtClientId <M2M-client-id> [--stage <stage>]
pnpx sst secret set Auth0MgmtClientSecret <M2M-client-secret> [--stage <stage>]
```

Used by request-time Lambda handlers (`update-profile`, `resend-verification`) to manage user records and trigger verification emails.
One-time setup in the Auth0 Dashboard:

- Go to Applications > Create Application
- Choose Machine to Machine Applications
- Name it something like `Fil.one Runtime`
- Authorize it for the Auth0 Management API (`https://<tenant>.us.auth0.com/api/v2/`)
- Grant these scopes: `update:users`, `create:user_tickets`
- Copy the Client ID and Client Secret

```bash
pnpx sst secret set Auth0MgmtRuntimeClientId <M2M-client-id> [--stage <stage>]
pnpx sst secret set Auth0MgmtRuntimeClientSecret <M2M-client-secret> [--stage <stage>]
```

Use test mode first. Switch to live mode for production.
- Products > Add product
  - Name: `Fil.one Storage`
  - Description: `Decentralized cloud storage — $4.99/TiB/month`
- Add a price on that product:
  - Pricing model: Standard
  - Recurring: Monthly
  - Usage type: Metered (sum of usage values during period)
  - Price: `$4.99` per unit, unit label: `TiB`
- Note the Price ID (`price_xxxxx`)
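To make the "sum of usage values" aggregation concrete, here is a sketch of how a period's charge works out (illustrative numbers only; Stripe performs the real aggregation server-side):

```shell
# Three usage reports during one billing period: 0.5, 1.25 and 0.25 TiB.
# With "sum of usage values", Stripe sums the reports, then multiplies the
# total by the unit price ($4.99/TiB).
usage_sum=$(printf '0.5\n1.25\n0.25\n' | awk '{s += $1} END {printf "%.2f", s}')
charge=$(awk -v u="$usage_sum" 'BEGIN {printf "%.2f", u * 4.99}')
echo "usage: ${usage_sum} TiB -> charge: \$${charge}"
```

Here the period total is 2.00 TiB, so the invoice line comes to $9.98.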
Settings > Billing > Customer portal — enable:
- Update payment method
- View billing history / invoices
- Cancel subscription
Webhook endpoints are created and managed automatically during deploy. The deploy-time setup Lambda creates the Stripe webhook endpoint with the correct URL for the deployed domain and stores the signing secret in AWS SSM Parameter Store. No manual configuration needed.
Events registered: `customer.subscription.created`, `customer.subscription.updated`, `customer.subscription.deleted`, `customer.subscription.trial_will_end`, `invoice.payment_succeeded`, `invoice.payment_failed`
Run this command to delete all webhooks created by PR preview deployments (note: this includes hooks from still-active pull requests):

```bash
stripe webhook_endpoints list --limit 100 | \
  jq -r '.data[] | select(.metadata.stage // "" | startswith("pr-")) | .id' | \
  xargs -I{} stripe webhook_endpoints delete {} --confirm
```

Stripe credentials are managed as SST secrets (`StripeSecretKey`, `StripePriceId`, `StripePublishableKey`). See the "Set SST secrets" step above.
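Before running the destructive command above, you can dry-run the same jq filter against sample data to confirm it only matches `pr-` stages (the endpoint IDs below are fabricated):

```shell
# Dry run of the stage filter on fabricated sample data — only endpoints whose
# metadata.stage starts with "pr-" should be selected for deletion.
sample='{"data":[
  {"id":"we_pr_preview","metadata":{"stage":"pr-42"}},
  {"id":"we_staging","metadata":{"stage":"staging"}},
  {"id":"we_no_stage","metadata":{}}
]}'
matched=$(echo "$sample" | jq -r '.data[] | select(.metadata.stage // "" | startswith("pr-")) | .id')
echo "$matched"
```

This prints only `we_pr_preview`; the staging hook and the hook without stage metadata are left alone.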
Auth0 transactional emails (verification, password reset, etc.) are sent via SendGrid. The deploy-time setup Lambda configures Auth0 to use SendGrid automatically.
API keys are managed at: https://app.sendgrid.com/settings/api_keys
The SendGridApiKey SST secret should be a key with Mail Send permission only. See the "Set SST secrets" step above.
The project includes generated TypeScript clients for the Aurora APIs, built with Hey API:
- Back Office API — `packages/aurora-backoffice-client/` (tenant management, admin operations)
- Portal API — `packages/aurora-portal-client/` (access keys, buckets, tenant-facing operations)
The backend uses an API token to authenticate with the Aurora Back Office API (e.g., to create tenants on user registration).
Generating the token:
- Log in to the Aurora Back Office dashboard at https://backoffice.dev.aur.lu/ff/docs/backoffice-api
- Navigate to the API token management section
- Generate a new token with the required permissions
Setting the SST secret:

```bash
pnpx sst secret set AuroraBackofficeToken <token> [--stage <stage>]
```

After API changes, update the relevant Swagger spec and regenerate:
Back Office client:

- Download the updated Swagger spec from https://backoffice.dev.aur.lu/ff/docs/backoffice-api (open the page, then save the JSON it loads)
- Replace `packages/aurora-backoffice-client/aurora-backoffice.swagger.json` with the downloaded file

Portal client:

Download the updated Swagger spec from the Aurora Portal API docs and save it to `packages/aurora-portal-client/aurora-portal.swagger.json`:

```bash
curl https://portal-ff.dev.aur.lu/api/v1/docs/swagger.json -o packages/aurora-portal-client/aurora-portal.swagger.json && oxfmt
```

Reformat the file:

```bash
pnpm lint:fix
```

Regenerate both clients:

```bash
pnpm generate:api-clients
```

`packages/ui` is a git submodule pointing to joemocode-business/ui-hyperspace — a fork of `@filecoin-foundation/ui-filecoin` adapted for Vite/React. It is consumed from source by the website (no separate build step in dev).
Importing components in the website:

```ts
import { Button } from '@hyperspace/ui/Button';
import { Section } from '@hyperspace/ui/Section/Section';
import { Heading } from '@hyperspace/ui/Heading';
```

Styles are loaded globally via `packages/website/src/styles.css`, which imports `@hyperspace/ui/styles` (Tailwind v4 theme + component CSS).
Updating the submodule to a new commit:

```bash
cd packages/ui
git pull origin main
cd ../..
git add packages/ui
git commit -m "chore: bump ui submodule"
```

Pulling upstream changes from the original library:

The full fork at joemocode-business/filecoin-foundation tracks the upstream FilecoinFoundationWeb/filecoin-foundation. To bring in upstream changes:

```bash
# In the filecoin-foundation fork, sync upstream, then cherry-pick or copy
# changed files from packages/ui-filecoin/ into the ui-hyperspace repo manually.
```

Note: several components in `packages/ui` use Next.js-specific APIs (`next/navigation`, `next/image`) or `nuqs` and are not usable as-is in this Vite app. These include `Navigation/*`, `Network/*`, and `Search/Search`. They will be adapted for TanStack Router as needed.
Telemetry is sent to Grafana Cloud. See docs/architectural-decisions/2026-03-observability-architecture.md for details.
Logs: CloudWatch Logs → Kinesis Firehose → Grafana Cloud Loki (per-stage, managed by the main stack).
Metrics: CloudWatch Metrics → Metric Stream → Kinesis Firehose → Grafana Cloud Prometheus (per-account, managed by the infra/ stack — one stream captures all Lambda metrics in the account regardless of stage). Developer stacks do not stream metrics to Grafana; use the CloudWatch console instead.
Generate API keys in Grafana Cloud (grafana.com → your stack → Connections → API keys):

- `GrafanaLokiAuth` (main stack): plain `<instanceId>:<apiKey>`, where `instanceId` is your Loki instance ID (sent as-is in the Firehose `X-Amz-Firehose-Access-Key` header)
- `GrafanaPrometheusAuth` (infra stack): plain `<instanceId>:<apiKey>`, where `instanceId` is your Prometheus instance ID
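A quick format check before setting either secret can catch a pasted API key that is missing the instance-ID prefix. This hypothetical helper assumes Grafana Cloud instance IDs are numeric (which they typically are):

```shell
# Hypothetical sanity check for '<instanceId>:<apiKey>' values.
valid_grafana_auth() {
  case "$1" in
    *:*) ;;                    # must contain a colon separator
    *) return 1 ;;
  esac
  id=${1%%:*}                  # portion before the first colon
  case "$id" in
    ""|*[!0-9]*) return 1 ;;   # instance ID assumed numeric and non-empty
  esac
  return 0
}

valid_grafana_auth "123456:glc_exampleKey" && echo "format OK"
```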
```bash
# Main stack secrets
pnpx sst secret set GrafanaLokiAuth '<instanceId>:<apiKey>' [--stage <stage>]

# Infra stack secrets (run from the infra/ directory)
cd infra && pnpx sst secret set GrafanaPrometheusAuth '<instanceId>:<apiKey>' --stage <stage>
```

Foundry project for EVM smart contracts.
Prerequisites — install Foundry if you haven't already:

```bash
curl -L https://foundry.paradigm.xyz | bash && foundryup
```

Dependency management:

```bash
forge install                         # install from foundry.lock
forge update                          # update all dependencies
forge update contracts/lib/forge-std  # update a specific dependency
```

Common commands:

```bash
forge build     # compile contracts
forge test      # run the test suite
forge fmt       # format Solidity sources
forge snapshot  # record gas snapshots
anvil           # run a local dev node
forge script contracts/script/Counter.s.sol:CounterScript --rpc-url <your_rpc_url> --private-key <your_private_key>
```