If you have on-premise hardware, these scripts may be modified to configure and run the demo locally. Modifications are required for this to work on-premise.
> [!WARNING]
> We have not validated these steps. We have run the demo on local hardware in the past, but we leave fully automating this for future work. The steps below will point you in the right direction, but they are by no means correct or complete.
- Render: amd64 machine with an NVIDIA GPU running Ubuntu 22.04 or 24.04 with a connected display or remote desktop enabled
- On-Vehicle: amd64 or arm64 machine (arm64 is required for EWAOL)
- Xronos Dashboard: amd64 or arm64 machine
Operating system support: Ubuntu 22.04, Ubuntu 24.04, or EWAOL (on-vehicle instance only).
Recommended hardware configuration:
- 24-core amd64 instance with NVIDIA GPU and 32 GB RAM running Ubuntu 24.04 for render and dashboard roles
- NVIDIA Jetson or Orin board running EWAOL for compute role
Alternate hardware configurations:
- one amd64 instance running render, compute and dashboard
- one amd64 instance running render and dashboard, one arm64 instance running compute
- one amd64 instance running render, one amd64 instance running dashboard, and one arm64 instance running compute
EWAOL is not strictly required to run this demo since the distributed application runs in containers via k3s. You may provide your own EWAOL instance if you wish to explore alternative orchestration tools such as BlueChi.
It may be possible to use the EWAOL image from this blueprint, but this has not been tested. See SOAFEE Blueprint EWAOL AMI.
The modifications needed to adapt this repository for on-premise deployment include (but are not limited to) the following:
- Use a deployment name such as `onsite` that reflects the name of your site or deployment. To do this, run `set-deployment` from the `blueprint` scripts. This differentiates your onsite deployment from the default `soafee` cloud deployment. Use this deployment name for all `blueprint` commands.
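As a sketch, assuming the `blueprint` helper is invoked from the repository root and `set-deployment` is its subcommand (the exact invocation may differ in your checkout):

```shell
# "onsite" is an example deployment name; choose one for your site.
DEPLOYMENT=onsite
# Hypothetical invocation of the set-deployment helper named above;
# guarded so the snippet is a no-op outside the repository.
if [ -x ./blueprint ]; then
  ./blueprint set-deployment "$DEPLOYMENT"
fi
```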
- Create instance information for your onsite deployment. Template instance information is located in `onsite-instance-template/` and may be copied into your local `instances` folder. The instance information includes an Ansible inventory, POSIX hosts file, and OpenSSH config file for your deployment. If you plan to use a single hardware instance for more than one role, such as running the Xronos Dashboard and the AVP render on the same host, define the host only once in the inventory and then include it as a child of any additional roles. The `inventory_hostname` of each hardware instance (e.g. `amd64-machine` and `arm64-machine`) does not matter as long as it is correctly assigned to each group under `children`. Starting with an Ubuntu 24.04 server installation (not a desktop installation) for the render instance will ensure the installation of an NVIDIA driver that is compatible with the LGSVL Simulator.
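For example, a host serving both the render and dashboard roles could be defined once and then listed as a child of the second group. The group and host names below are illustrative; match them to the template in `onsite-instance-template/`:

```ini
# Illustrative Ansible INI inventory -- hostnames and addresses are assumptions.
[render]
amd64-machine ansible_host=192.168.1.10 ansible_user=ubuntu

[compute]
arm64-machine ansible_host=192.168.1.20 ansible_user=ubuntu

# The dashboard runs on the same host as render: include the render
# group as a child rather than redefining the host.
[dashboard:children]
render
```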
- Configure your container registry for federate images.
  - If using ECR:
    - Ensure AWS access keys are configured in the environment variables `AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, and `AWS_REGION`.
    - Set the ECR address in the Ansible inventory.
  - If using Docker Hub:
    - Comment out the Ansible steps that create ECR repositories for each federate.
    - Manually create Docker Hub repositories with names matching those that would have been created for ECR.
    - Remove references to the ECR registry in the Ansible steps, which will then default to Docker Hub.
  - If using a local container registry:
    - Add steps to configure your container registry locally.
    - Replace references to the ECR registry in the Ansible steps with the URI for your local registry.
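If using ECR, the expected environment might be set up as follows. The values are placeholders; the login shown in the comment is one common way to authenticate manually, not necessarily what the Ansible steps do:

```shell
# Placeholder credentials -- substitute your own IAM access key pair.
export AWS_ACCESS_KEY="AKIAEXAMPLEKEY"
export AWS_SECRET_KEY="example-secret"
export AWS_REGION="us-east-1"
# A manual ECR login (if ever needed) typically looks like:
#   aws ecr get-login-password --region "$AWS_REGION" |
#     docker login --username AWS --password-stdin <account>.dkr.ecr."$AWS_REGION".amazonaws.com
```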
- Revise the Ansible scripts to use your preferred method of authenticating with your container registry. Note that both the docker and k3s steps will need to be modified.
  - You may update the Ansible playbook invocation of the role `xronos_docker_ansible` and set the `docker_auth` variable to your Docker auth key.
- Update the Xauthority location and `DISPLAY` index for your render instance. The RViz and LGSVL steps reference these environment variables to display to the correct X Windows session and display.
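A typical single-seat configuration looks like the following. These values are common defaults, not guaranteed; verify them with `echo $DISPLAY` in a terminal on the render host:

```shell
# X11 session environment for the render instance (typical defaults).
export DISPLAY=:0
export XAUTHORITY="$HOME/.Xauthority"
```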
- If using Ubuntu 24.04 instead of EWAOL for the compute node, revise the Ansible playbook to include the compute instance in the Ubuntu configure steps, and comment out the EWAOL configure role.
Then run the blueprint scripts, omitting the first provision step.
> [!NOTE]
> These scripts assume the user `ubuntu` exists and that it has sudo access (`avp-ewaol` expects user `user`). If a sudo password is required, append `--ask-become` to commands that invoke Ansible.
An SSH key must be generated and added to each host's `authorized_keys` file for its respective user. The private and public keys must then be added to the `instances` directory and named `<deployment>-default.pem` and `<deployment>-default.pub`, respectively.
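A sketch of generating such a key pair for a deployment named `onsite` (the name is an example; `ssh-keygen` writes the public half with a `.pem.pub` suffix, so it is renamed to match the expected convention):

```shell
# Create the instances directory if it does not exist yet.
mkdir -p instances
# Generate an unencrypted ed25519 key pair for the "onsite" deployment.
ssh-keygen -t ed25519 -N "" -f instances/onsite-default.pem
# ssh-keygen names the public key "onsite-default.pem.pub"; rename it
# to the <deployment>-default.pub convention expected by the scripts.
mv instances/onsite-default.pem.pub instances/onsite-default.pub
```

The public key must still be appended to `~/.ssh/authorized_keys` for the relevant user on each host.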
Some hardware setups may encounter K3s connection issues between hosts. A workaround that often resolves this is to restart the k3s server running on the render host after starting the demo, once the AVP containers have been deployed:
```shell
blueprint start
blueprint shell soafee-avp-render 'sudo systemctl restart k3s'
```