This repo provides a CI/CD pipeline that automates deployment of TLSNotary backend and frontend services to Azure SGX-based virtual machines.
Supporting infrastructure and necessary IaC components are defined in:
https://github.com/privacy-scaling-explorations/devops/tree/azure/tlsnotary/terraform/azure
This document focuses only on the permission model and deployment structure. Azure resource setup (e.g., networking or virtual machines) is out of scope.
The GitHub Actions workflow for deployment is currently manually triggered and assumes all required container images are pre-built.
The workflow accepts the following variables:

- Resource group name
- Environment type (`prod` or `test`)
- Source branch
- Deployment target (`frontend`, `backend`, or `both`)

Note: Deployments to production are only allowed from the `main` branch.
The runner uses Microsoft Entra federated identity to authenticate and receives short-lived credentials associated with a Service Principal that has:
- Reader role on the target resource group (to query resources and locate the appropriate VMs using tags)
- Azure Run Command permission to execute deployment logic remotely on the VMs
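As a dry-run sketch, the Reader role is what enables a tag-based VM lookup like the following (the resource group name and tag values are illustrative, not taken from this repo):

```shell
# Build (but do not execute) the VM lookup the runner could perform with its
# Reader role. Tag keys `role` and `env` are the ones described above; the
# resource group name is a placeholder.
RG="tlsn-test-rg"
QUERY="[?tags.role=='backend' && tags.env=='test'].name"
cmd=(az vm list --resource-group "$RG" --query "$QUERY" --output tsv)
echo "${cmd[@]}"
```

Echoing the assembled command instead of executing it keeps the sketch runnable without Azure credentials.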
Backend VMs must have a system-assigned managed identity with read access to the Azure Key Vault. Access is granted using RBAC-based roles (e.g., Key Vault Reader).
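For illustration, under Azure RBAC such a grant could look like the following dry-run sketch. Note that reading secret *values* requires a role such as `Key Vault Secrets User` (`Key Vault Reader` only covers metadata); the object ID and scope below are placeholders:

```shell
# Dry-run sketch: grant the VM's system-assigned identity read access to
# secret values in the vault. Object ID and scope are placeholders.
PRINCIPAL_ID="00000000-0000-0000-0000-000000000000"
KV_SCOPE="/subscriptions/<sub>/resourceGroups/tlsn-test-rg/providers/Microsoft.KeyVault/vaults/tlsn-kv"
cmd=(az role assignment create
  --assignee-object-id "$PRINCIPAL_ID"
  --assignee-principal-type ServicePrincipal
  --role "Key Vault Secrets User"
  --scope "$KV_SCOPE")
echo "${cmd[@]}"
```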
Add new notary services using the following pattern:
```yaml
alpha9-sgx:
  container_name: alpha9-sgx
  labels:
    - "traefik.enable=true"
    - "traefik.http.middlewares.stripalpha9-sgx.stripprefix.prefixes=/alpha9-sgx"
    - "traefik.http.routers.alpha9-sgx.middlewares=stripalpha9-sgx"
  image: ghcr.io/tlsnotary/tlsn/notary-server-sgx:v0.1.0-alpha.9
  restart: unless-stopped
  devices:
    - /dev/sgx_enclave
    - /dev/sgx_provision
  volumes:
    - /var/run/aesmd/aesm.socket:/var/run/aesmd/aesm.socket
  ports:
    - "4109:7047"
  entrypoint: [ "gramine-sgx", "notary-server" ]
```
- Each new service must define a `traefik` HTTP router and a strip-prefix middleware to route requests correctly.
- Also update the landing page in `proxy/index.html` to reflect the new service route.
Use the `upload-secrets.sh` script located in this repo.

- Tools: `yq`, `az-cli`
- The user or automation agent must have permission to write to the Azure Key Vault.
The script expects the following structure relative to your current working directory:
```
.
├── alpha10
│   └── fixture
│       ├── auth
│       │   └── whitelist.csv
│       ├── notary
│       │   ├── notary.key
│       │   └── notary.pub
│       └── tls
│           ├── notary.crt
│           ├── notary.csr
│           ├── notary.key
│           ├── openssl.cnf
│           ├── README.md
│           ├── rootCA.crt
│           ├── rootCA.key
│           └── rootCA.srl
├── alpha6
│   └── fixture
│       └── ...
├── alpha7
│   └── fixture
│       └── ...
├── alpha9
│   └── fixture
│       └── ...
├── docker-compose.yml
├── upload-secrets.sh
```
Secrets are uploaded using the pattern:

```
<service-name>--<base64_encoded_relative_path>
```

Encoding behavior:

- Relative file paths (e.g., `fixture/tls/rootCA.key`) are encoded using `base64 --wrap=0`
- Characters are made URL-safe by:
  - Replacing `/` with `_`
  - Replacing `+` with `-`
  - Stripping any `=` padding

This ensures secret names are compliant with Azure Key Vault constraints and file-system safe.
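The encoding steps above can be sketched as a small shell helper (a minimal sketch assuming GNU coreutils `base64`; the function name is illustrative, not the actual `upload-secrets.sh` implementation):

```shell
# Sketch of the naming scheme: base64url-encode the relative path and prefix
# it with the service name. Assumes GNU `base64` (for --wrap=0).
encode_secret_name() {
  local service="$1" relpath="$2"
  local encoded
  encoded=$(printf '%s' "$relpath" | base64 --wrap=0 | tr '/+' '_-' | tr -d '=')
  printf '%s--%s\n' "$service" "$encoded"
}

encode_secret_name alpha9 fixture/tls/rootCA.key
# → alpha9--Zml4dHVyZS90bHMvcm9vdENBLmtleQ
```

Because the encoding is reversible (re-add padding, undo the character swaps, decode), a companion script can reconstruct the original file paths from the secret names alone.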
If your services require additional configs, include them in the repo under the `docker/` folder:

```
docker/
├── alpha10
│   └── config
│       └── config.yaml
├── alpha10-sgx
│   └── config.yaml
...
```
Trigger the GitHub Actions workflow manually and supply:
- The target resource group
- The environment type (prod/test)
- The source branch
- The deployment role (backend/frontend/both)
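The same trigger can be scripted with the GitHub CLI; a hypothetical dry-run sketch (the workflow file name `deploy.yml` and the input names are assumptions, not confirmed by this repo):

```shell
# Dry-run sketch: print the `gh` invocation instead of executing it.
cmd=(gh workflow run deploy.yml
  --ref main
  -f resource_group=tlsn-prod-rg
  -f environment=prod
  -f target=backend)
echo "${cmd[@]}"
```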
Note: Federated identity tokens require exact subject matches. For example, the subject `repo:privacy-scaling-explorations/tlsn-infra:ref:refs/heads/main` will only match the `main` branch. No wildcards are supported yet.
Once triggered:

- The runner performs `az login`
- Validates that the branch and environment match
- Identifies VMs in the resource group with matching tags (`role`, `env`)
- Executes `az vm run-command create` to:
  - For backend:
    - `docker compose down`
    - Clear existing directories
    - Check out the correct branch
    - Use `fetch-fixtures.sh` to reconstruct the fixture directory from Key Vault
    - `docker compose up`
  - For frontend:
    - Copy the updated `index.html` to the web frontend VM
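The backend steps above could be issued roughly as follows; a dry-run sketch in which the resource group, VM name, repo path, and remote script body are placeholders (the workflow's actual script is not shown in this document):

```shell
# Dry-run sketch: assemble the Run Command invocation the workflow might use
# for a backend VM. All names and the remote script body are illustrative.
RG="tlsn-test-rg"
VM="tlsn-backend-vm"
REMOTE_SCRIPT='cd /opt/tlsn-infra/docker && docker compose down \
  && git fetch origin && git checkout main \
  && ./fetch-fixtures.sh && docker compose up -d'
cmd=(az vm run-command create
  --resource-group "$RG"
  --vm-name "$VM"
  --name deploy-backend
  --script "$REMOTE_SCRIPT")
echo "${cmd[@]}"
```

Run Command executes the script with root privileges inside the VM over the Azure control plane, so no SSH access or open inbound ports are needed on the SGX hosts.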
- Secrets are encrypted at rest in Azure Key Vault.
- Secrets are never stored as plaintext, but are returned as plaintext upon retrieval.
- Access is controlled via RBAC and scoped managed identities.