1. Added a definition of a Pouta VM with a kind cluster to host a dev/test environment
2. Converted the k8s YAMLs into a Helm chart
3. Adjusted the Ansible playbook to work with the dev/test environment
Lint jobs in the GitHub Actions workflow:

```yaml
        run: if (for file in $(find . -type f -not -path './.git/*' -not -path './.git-crypt/*' -not -path './terraform/secrets/*') ; do [ "$(tail -c 1 < "${file}")" == "" ] || echo "${file} has no newline at the end..." ; done) | grep . ; then exit 1 ; fi
      - name: Checking for trailing whitespaces
        run: if find . -type f -not -path './.git/*' -exec egrep -l " +$" {} \; | grep . ; then exit 1 ; fi

      - name: Running shellcheck on *.sh files
        run: |
          find . -name .git -type d -prune -o -type f -name \*.sh -print0 |
            xargs -0 -r -n1 shellcheck

  helm_lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: run helm lint on hpcs-stack
        run: docker run --rm -v $(pwd)/k8s:/apps alpine/helm:latest lint hpcs-stack

  terraform_lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: run terraform fmt
        run: docker run --rm -v $(pwd):/data docker.io/hashicorp/terraform fmt -check /data/terraform
```
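The same checks can be reproduced locally before pushing; a small sketch reusing the commands and images from the jobs above (assuming Docker and shellcheck are installed on the workstation):

```bash
#!/usr/bin/env bash
# Re-run the repository lint checks locally, mirroring the CI jobs above.
set -euo pipefail

# shellcheck over all *.sh files, skipping the .git directory
find . -name .git -type d -prune -o -type f -name '*.sh' -print0 |
  xargs -0 -r -n1 shellcheck

# Helm chart lint, same image and mount as the helm_lint job
docker run --rm -v "$(pwd)/k8s:/apps" alpine/helm:latest lint hpcs-stack

# Terraform formatting check, same image as the terraform_lint job
docker run --rm -v "$(pwd):/data" docker.io/hashicorp/terraform fmt -check /data/terraform
```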
README.md (+27 −105):
````diff
@@ -191,7 +191,7 @@ To run one of the containers :
 docker compose run --rm [data/container/job]-prep
 ```

-If you want to run the whole process by yourself :
+If you want to run the whole process by yourself :

 ```bash
 docker compose run --rm data-prep
````
````diff
@@ -203,9 +203,10 @@ An example demonstration is available [here](https://asciinema.org/a/PWDzxlaVQmf

 ### Server

-HPCS Server is an API, interfacing HPCS client with Vault and Spire. This section needs basic knowledge of [SPIFFE/SPIRE](https://spiffe.io/) and [HashiCorp Vault](https://www.vaultproject.io/).
+HPCS Server is an API, interfacing HPCS client with Vault and Spire. This section needs basic knowledge of [SPIFFE/SPIRE](https://spiffe.io/) and [HashiCorp Vault](https://www.vaultproject.io/).

 For k8s, we only consider `kubectl` and `ansible` as available tools and that `kubectl` can create pods. Vault roles, spire identities are created automatically.
+For development and demonstrative purposes we provide a `terraform` definition of a VM with an operational kubernetes cluster, for documentation and deployment instructions go [there](terraform).

 For docker-compose, we consider the Vault and the Spire Server as setup and the Spire-OIDC provider implemented to allow login to the vault using SVID identity. We also consider that proper roles are created in Vault to authorize HPCS Server to write roles and policies to the Vault, using a server SPIFFEID.

````
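The `terraform` directory linked in the added line carries its own documentation; the flow it implies is roughly the standard Terraform one (the directory name comes from the link above, everything else is a generic sketch rather than repository-specific instructions):

```bash
# Generic Terraform flow for the dev/test VM described above; consult the
# terraform/ directory in the repository for the authoritative steps and variables.
cd terraform
terraform init    # download providers and modules
terraform plan    # review the Pouta VM / kind-cluster resources to be created
terraform apply   # create them
```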
````diff
@@ -257,77 +258,29 @@ Before proceeding to HPCS' deployment, an original setup is required including :
 - A ready-to-run k8s cluster
 - `kubectl` and `helm` available and able to run kubernetes configurations (`.yaml`)
 - `rbac`, `storage` and `dns` and `helm` kubernetes capabilities, f.e : `microk8s enable rbac storage dns helm` with microk8s.
-
+
 Please note down the name of your k8s cluster in order to run later deployments.

 ##### Configuration

-Several configurations are to be reviewed before proceeding.
-- Nginx SSL Certificate path : Please review in `/k8s/spire-server-nginx-configmap.yaml` (section `ssl_certificate`) and `/k8s/spire-server-statefulset.yaml` (section `volumeMounts` of container `hpcs-nginx` and section `volumes` of the pod configuration). If you plan to run the deployment using ansible, please review `/k8s/deploy-all.yaml`, section `Copy oidc cert to vault's pod` and `Create spire-oidc {key, csr, cert}` for the host path to the certificate. Create the directory configured before running deployment.
-
-- Cluster name : Please review in `/k8s/hpcs-server-configmap.yaml`, section "`agent.conf`", then "`k8s_psat`" and `/k8s/spire-server-configmap.yaml`, section "`server.conf`", then "`k8s_psat`", replace "`docker-desktop`" with your k8s cluster name.
-
-- For further information about spire agent/ server configurations under `/k8s/hpcs-server-configmap.yaml` and `/k8s/spire-server-configmap.yaml`, please refer to spire-server [configuration reference](https://spiffe.io/docs/latest/deploying/spire_server) and spire-agent [configuration reference](https://spiffe.io/docs/latest/deploying/spire_agent/).
-
-
 ##### Bash

 This part of the documentation walks you through the different steps necessary in order to run a manual deployment of HPCS' serverside (including Vault, Spire-Server and HPCS Server).
````
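A quick way to verify the prerequisites listed in this hunk and to record the cluster name it asks for (the `microk8s enable` line is from the README itself; the other commands are generic `kubectl`/`helm` checks, not repository-specific):

```bash
# Check that the k8s tooling is present
kubectl version --client
helm version

# On microk8s, enable the capabilities the README lists
microk8s enable rbac storage dns helm

# Note down the cluster/context name for the later deployment steps
kubectl config current-context
```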
````diff
-> kubectl get --namespace hpcs pods/hpcs-server-0
+NAME            READY   STATUS    RESTARTS      AGE
+hpcs-server-0   1/1     Running   3 (75m ago)   75m
 ```

 That's it, you can now use HPCS server as you please.

 ##### Ansible

-:warning: This method is currently still under development. You could run into non-documented issues.
-
-The previously explained steps can be automatically run using an ansible playbook available under `/k8s/deploy-all.yaml`
+The previously explained steps can be automatically run using an ansible [playbook](k8s/deploy-all.yaml).

 All the pre-requisites listed before are necessary to run this playbook. If you are running kubernetes using `microk8s`, you will need to create aliases or fake commands for `helm`, for example using a script :

 ```bash
````
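The wrapper script the README alludes to is not shown in this hunk; a hypothetical sketch of such a "fake" `helm` command for microk8s users (the wrapper body and the `microk8s helm` target are assumptions, not taken from the repository):

```bash
#!/usr/bin/env bash
# Hypothetical "fake helm" wrapper for microk8s setups, as suggested by the
# README text above; the script actually shipped with the repository may differ.
# Install somewhere on PATH as "helm" so the playbook's helm calls resolve.
exec microk8s helm "$@"   # or "microk8s helm3", depending on the enabled addon
```

With a wrapper like this in place, the playbook referenced above can be run with `ansible-playbook k8s/deploy-all.yaml`, plus whatever inventory or variables the environment requires.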
````diff
@@ -585,7 +507,7 @@ Using TPM, for example, it is very easy to run automatic node attestation, based

 ### Encrypted container

-The goal of this project was to leverage Singularity/Apptainer's [encrypted containers](https://docs.sylabs.io/guides/3.4/user-guide/encryption.html). This feature enables the end user to protect the runtime of the container, allowing it to confine unencrypted data within the encrypted container, adding an extra layer of security.
+The goal of this project was to leverage Singularity/Apptainer's [encrypted containers](https://docs.sylabs.io/guides/3.4/user-guide/encryption.html). This feature enables the end user to protect the runtime of the container, allowing it to confine unencrypted data within the encrypted container, adding an extra layer of security.

 Unfortunately for LUMI, this feature relies on different technologies, depending the permission level at which the container is encrypted, this behaviour is documented in the following table for usage on LUMI :

````
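For context, the encrypted-container workflow referenced in this hunk looks roughly like this (a sketch following the linked Singularity/Apptainer documentation, not commands taken from this repository):

```bash
# Build an encrypted SIF image; building encrypted images normally requires
# root (or user namespaces), which is exactly what is unavailable on LUMI.
sudo singularity build --passphrase encrypted.sif container.def

# Run it; the same passphrase is needed to decrypt the image at runtime.
singularity run --passphrase encrypted.sif
```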
````diff
@@ -599,7 +521,7 @@ Unfortunately for LUMI, this feature relies on different technologies, depending
 Two main reasons for the issues with the encrypted containers :
 - Cannot run as root on a node (no workaround, as this is a feature of HPC environments).
 - User namespaces are disabled on LUMI (for secure reason, [this stackexchange](https://security.stackexchange.com/questions/267628/user-namespaces-do-they-increase-security-or-introduce-new-attack-surface) has some explanations).
-
+
 To run encrypted containers as described above, we would need to enable user namespaces on the platform. This would require a thorough risk/benefit assessment, since it introduces new attack surfaces and therefore will not be introduced lightly, at least not on on LUMI in the near future.

 We mitigate the unavailability of encrypted containers in two steps :
````
````diff
@@ -616,7 +538,7 @@ When a client wants to encrypt its data or container and to give access to it to
 - Client runs containers using cgroupsv2
 - Client runs on Linux
 - `spire-agent api fetch` can be attested using spire-agent binary's `sha256sum`
-- `python3 ./utils/spawn_agent.py` can't be attested since the `sha256sum` recognised by the workload API is `python3`'s. A mitigation to that would be to compile the code, if possible. This would potentially provide a unique binary that would then be able to be attested using `sha256sum`
+- `python3 ./utils/spawn_agent.py` can't be attested since the `sha256sum` recognised by the workload API is `python3`'s. A mitigation to that would be to compile the code, if possible. This would potentially provide a unique binary that would then be able to be attested using `sha256sum`
 - Client runs on MacOS
 - No attestation is doable at the moment since MacOS doesn't support docker and runs container inside of a Linux VM
````
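To make the `sha256sum` point in this hunk concrete: a binary-hash based workload attestor sees the hash of the executable that calls the Workload API, which differs between the two launch styles (purely illustrative; the repository's actual SPIRE registration entries are not part of this diff):

```bash
# Launching spire-agent directly: the attested hash is the agent binary's own.
sha256sum "$(command -v spire-agent)"

# Launching the helper via "python3 ./utils/spawn_agent.py": the attested hash
# is the interpreter's, shared by every Python process on the host.
sha256sum "$(command -v python3)"
```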