Information about the Spire-Server, the HPCS-Server, and the Vault depends on the respective server installation and setup choices. For more information on the configuration, please contact your HPCS-Server service provider.
The client configuration is made of four main sections, in INI format. An in-depth description of the configuration files is available [here](https://github.com/CSCfi/HPCS/docs/configuration).
Please replace the `spire-server` section configuration with the settings for your Spire Server.
You will also need to replace `hpcs-server` with the address of your HPCS server and, if needed, the `port` with the port on which the HPCS server is exposed.
The `vault` section follows the same format as the `hpcs-server` section; please complete it with your Vault settings.
Finally, configure the supercomputer to use in the `supercomputer` section, specifying its address under `address` and your `username` on the system. Your SSH key needs to be set up.
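
As an illustration, here is a minimal sketch of what such a configuration could look like. The four section names come from the description above, but the key names and all values are placeholders, not the actual HPCS schema:

```ini
; Hypothetical sketch: key names and values are placeholders.
[spire-server]
address = spire.example.org
port = 8081
trust-domain = example.org

[hpcs-server]
address = hpcs.example.org
port = 10080

[vault]
address = vault.example.org
port = 8200

[supercomputer]
address = supercomputer.example.org
username = myuser
```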
#### Prepare the runtime
WIP
#### Docker-compose
:warning: This method is not the officially supported method for the HPCS Server and is merely intended for testing purposes.
Pull the server's image using `docker pull`:
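
For example (the image reference below is an assumption for illustration; use the image name provided by your HPCS-Server service provider):

```bash
# Hypothetical image reference; replace with the actual HPCS server image.
docker pull ghcr.io/cscfi/hpcs/server:latest
```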
The server configuration is made of two main sections, in INI format. An in-depth description of the configuration files is available [here](https://github.com/CSCfi/HPCS/docs/configuration).
You'll be able to configure the Spire Server interfacing by specifying:
- Address and port of the spire-server API.
- Spire trust domain.
- `pre-command` and `spire-server-bin`: e.g. pre-command = "`kubectl exec -n spire spire-server-0 -- `" and spire-server-bin = "`spire-server`" will then be used to build CLI interactions with the Spire Server socket (e.g. `kubectl exec -n spire spire-server-0 -- spire-server entry show`). Please keep this part as-is when running Docker standalone, and mount the spire-server directory at its default path (`/tmp/spire-server`).
The Vault configuration works the same as for the client (using a base `url` config). The main difference is that you need to specify the name of the spire-server role in the Vault. This role needs to be created manually and must be bound to a policy allowing it to create policies and roles for clients (data/container preparation) and workloads (accessing data/container).
```ini
[spire-server]
```
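
The role and policy described above live on the Vault side. Below is a hedged sketch of how they could be created, assuming the Vault uses the JWT auth method for SPIRE SVIDs; the policy paths, names, auth mount, and bound identity are assumptions, not HPCS's actual values:

```bash
# Illustrative only: names, mounts, and bound claims are assumptions.
# Policy allowing the spire-server role to manage client/workload
# policies and roles:
cat > hpcs-server-policy.hcl <<'EOF'
path "sys/policies/acl/*" {
  capabilities = ["create", "read", "update", "delete"]
}
path "auth/jwt/role/*" {
  capabilities = ["create", "read", "update", "delete"]
}
EOF
vault policy write hpcs-server hpcs-server-policy.hcl

# Role for the spire-server, bound to its SPIFFE identity:
vault write auth/jwt/role/hpcs-server \
    role_type=jwt \
    user_claim=sub \
    bound_audiences=vault \
    bound_subject=spiffe://example.org/spire-server \
    token_policies=hpcs-server
```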
## Limitations
This project has been developed to work on LUMI and is currently (03/2024) still under development. The goal was to use LUMI as-is, without the need for changes by administrators. Even though this makes it easier to adapt to other environments, it also means introducing limitations that can prevent HPCS from achieving its full potential. These limitations are discussed below.
### Node attestation
This project enables users to choose who can read their data or containers based on UNIX identities on the supercomputer platform. Another important feature is the possibility for them to limit this access to a specific set of nodes on the supercomputer site. However, this feature requires the attestation of the nodes.
[Several methods of node attestation](https://github.com/spiffe/spire/tree/main/doc) exist in Spire. The following are most relevant for HPCS:
- Token-based attestation (the user provides a pre-registered token to attest the node; see the sketch after this list).
- Slurm-based attestation (not in use at the moment; it first needs to be established that Slurm is a trustworthy source of information for attesting the node).
- TPM-based attestation ([with DevID](https://github.com/spiffe/spire/blob/main/doc/plugin_agent_nodeattestor_tpm_devid.md) or [without](https://github.com/boxboat/spire-tpm-plugin)).
- Other attestation methods based on hardware-managed keys (e.g. [SEV-SNP](https://github.com/ufcg-lsd/spire-amd-sev-snp-node-attestor), in the future).
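
As a sketch of the token-based flow, using standard SPIRE CLI commands (the trust domain, SPIFFE ID, and paths are placeholders):

```bash
# On the SPIRE server: generate a one-time join token for a node.
spire-server token generate -spiffeID spiffe://example.org/node

# On the node: start the agent with that token to attest it.
spire-agent run -config conf/agent/agent.conf \
    -joinToken <token-from-previous-step>
```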
Using a TPM, for example, it is very easy to run automatic node attestation based on hardware-managed keys that can't easily be spoofed. Unfortunately, LUMI does not provide TPMs at the moment, and for this reason node attestation is currently performed using a dummy endpoint providing join tokens to anyone. However, this behaviour could easily be modified to strengthen the node attestation with very little code modification for other supercomputers. For example, the node attestation could be performed by admins instead of the present user-initiated attestation.
### Encrypted container
The goal of this project was to leverage Singularity/Apptainer's [encrypted containers](https://docs.sylabs.io/guides/3.4/user-guide/encryption.html). This feature enables the end user to protect the container's runtime, confining unencrypted data within the encrypted container and adding an extra layer of security.
Unfortunately for LUMI, this feature relies on different technologies depending on the permission level at which the container is encrypted and run. This behaviour is documented in the following table for usage on LUMI:
| Build \ Run | root? | singularity-ce version 3.11.4-1 (LUMI) | apptainer version 1.2.5 (binary that can be shipped to LUMI) |
| --- | --- | --- | --- |
| apptainer version 1.2.5 | yes | Unable to decrypt filesystem (no dm_crypt) | Failure (says user namespaces are needed) |
| apptainer version 1.2.5 | no | Filesystem not recognized | Failure (says user namespaces are needed) |
There are two main reasons for the issues with encrypted containers:
- Cannot run as root on a node (no workaround, as this is a feature of HPC environments).
- User namespaces are disabled on LUMI (for security reasons; [this Stack Exchange question](https://security.stackexchange.com/questions/267628/user-namespaces-do-they-increase-security-or-introduce-new-attack-surface) has some explanations).
To run encrypted containers as described above, we would need to enable user namespaces on the platform. This would require a thorough risk/benefit assessment, since it introduces new attack surfaces, and therefore it will not be introduced lightly, at least not on LUMI in the near future.
We mitigate the unavailability of encrypted containers in two steps:
- Encryption of the container at rest (encryption of the image file while stored on the supercomputer, decryption right before runtime)
- Usage of encrypted FUSE filesystems in the container. This is achieved using `gocryptfs` (the same way Singularity does it for encrypted containers), but only for some mountpoints. This, for example, allows us to guarantee that neither the input dataset nor the output data will ever be written as plaintext on the node (see the sketch after this list).
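
For illustration, a minimal sketch of the underlying `gocryptfs` mechanism (paths are placeholders; HPCS drives this internally for the selected mountpoints):

```bash
# Initialize an encrypted directory (one-time; sets the password).
mkdir cipher plain
gocryptfs -init cipher

# Mount it: files written under plain/ are stored encrypted in cipher/.
gocryptfs cipher plain

# ... the workload reads and writes its data through plain/ ...

# Unmount; only ciphertext remains on disk.
fusermount -u plain
```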
However, this limitation has known solutions (cf. user namespaces) that may or may not be leveraged on a given platform. The code was originally written to work with encrypted containers; that code is currently commented out but still available for use on platforms supporting user namespaces. Another lead that hasn't been explored as of today is [the newest version of Apptainer](https://github.com/apptainer/apptainer/releases/tag/v1.3.0), which introduces new behaviour based on setuid.
### Client attestation
When a client wants to encrypt its data or container and give someone access to it, it is automatically attested based on its public IP. A workload identity is then automatically created, based on the `sha256sum` of the binary calling the workload API or the image_id of the container where the workload is running (see #5). This behaviour represents a problem because this attestation method isn't applicable to every client (see the sketch after this list):
- Client runs containers using cgroupsv1
  - Fine, the Docker image_id can be used. However, this image_id can be spoofed.
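
For reference, a sketch of what these two attestation paths could look like as SPIRE registration entries, using standard SPIRE selectors (all IDs and hashes are placeholders):

```bash
# Workload attested by the sha256 of the binary calling the workload API:
spire-server entry create \
    -parentID spiffe://example.org/client-agent \
    -spiffeID spiffe://example.org/client \
    -selector unix:sha256:<sha256sum-of-binary>

# Workload attested by the image_id of the container it runs in:
spire-server entry create \
    -parentID spiffe://example.org/client-agent \
    -spiffeID spiffe://example.org/client \
    -selector docker:image_id:<image-id>
```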
From `docs/architecture.md`:
```mermaid
HPCSCJPB --"SSH (As user - SBATCH file & CLI Call to SBATCH)"--> LN
LN --"SSH (As user - Info files)"--> HPCSCJPB
```
This diagram doesn't show the HTTPS requests from the client/compute node to the HPCS Server that are used to register the agents, since this behaviour is a practical workaround. See the section "Limitations" in [HPCS/README.md](https://github.com/CSCfi/HPCS/blob/main/README.md#limitations) for more information.