This section provides instructions for building the containers for NCX Infra Controller (NICo).
Before you begin, ensure you have the following prerequisites:
- An Ubuntu 24.04 host or VM with 150 GB+ of free disk space (macOS is not supported)
Use the following steps to install the prerequisite software on the Ubuntu Host or VM. These instructions
assume an apt-based distribution such as Ubuntu 24.04.
```
sudo apt-get install build-essential direnv mkosi uidmap curl fakeroot git docker.io docker-buildx sccache protobuf-compiler libopenipmi-dev libudev-dev libboost-dev libgrpc-dev libprotobuf-dev libssl-dev libtss2-dev kea-dev systemd-boot systemd-ukify
```

- Add the correct direnv hook for your shell.
- Install rustup (select Option 1 when prompted):

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

- Start a new shell to pick up the changes made by direnv and rustup.
- Clone NICo and complete the initial setup:

```
git clone git@github.com:NVIDIA/ncx-infra-controller-core.git ncx-infra-controller
cd ncx-infra-controller
direnv allow
cd $REPO_ROOT/pxe
git clone https://github.com/systemd/mkosi.git
cd mkosi && git checkout 26673f6
cd $REPO_ROOT/pxe/ipxe
git clone https://github.com/ipxe/ipxe.git upstream
cd upstream && git checkout d7e58c5
sudo systemctl enable docker.socket
cd $REPO_ROOT
cargo install cargo-make cargo-cache
echo "kernel.apparmor_restrict_unprivileged_userns=0" | sudo tee /etc/sysctl.d/99-userns.conf
sudo usermod -aG docker <username>
sudo reboot
```
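The "add the correct direnv hook for your shell" step above needs a hook line in your shell's startup file. A minimal sketch assuming bash (zsh, fish, and other shells use a different hook name; see the direnv documentation):

```shell
# Add this line to ~/.bashrc so direnv can load .envrc files automatically.
# For zsh, use `direnv hook zsh` in ~/.zshrc instead.
eval "$(direnv hook bash)"
```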
NOTE: Execute these tasks in order. All commands are run from the top of the ncx-infra-controller directory.
```
docker build --file dev/docker/Dockerfile.build-container-x86_64 -t nico-buildcontainer-x86_64 .
docker build --file dev/docker/Dockerfile.runtime-container-x86_64 -t nico-runtime-container-x86_64 .
cargo make --cwd pxe --env SA_ENABLEMENT=1 build-boot-artifacts-x86-host-sa
```
```
docker build --build-arg "CONTAINER_RUNTIME_X86_64=alpine:latest" -t boot-artifacts-x86_64 -f dev/docker/Dockerfile.release-artifacts-x86_64 .
docker build --build-arg CONTAINER_RUNTIME_X86_64=nico-runtime-container-x86_64 -t machine-validation-runner -f dev/docker/Dockerfile.machine-validation-runner .
```
```
docker save --output crates/machine-validation/images/machine-validation-runner.tar machine-validation-runner:latest
```

The next build copies `machine-validation-runner.tar` into the `/images` directory of the `machine-validation-config` container. When using a Kubernetes deployment model, this is the only `machine-validation` container you need to configure on the `carbide-pxe` pod.

```
docker build --build-arg CONTAINER_RUNTIME_X86_64=nico-runtime-container-x86_64 -t machine-validation-config -f dev/docker/Dockerfile.machine-validation-config .
```
```
docker build --build-arg "CONTAINER_RUNTIME_X86_64=nico-runtime-container-x86_64" --build-arg "CONTAINER_BUILD_X86_64=nico-buildcontainer-x86_64" -f dev/docker/Dockerfile.release-container-sa-x86_64 -t nico .
docker build --file dev/docker/Dockerfile.build-artifacts-container-cross-aarch64 -t build-artifacts-container-cross-aarch64 .
```

The admin-cli build does not produce a container. It produces a binary at `$REPO_ROOT/target/release/carbide-admin-cli`:

```
BUILD_CONTAINER_X86_URL="nico-buildcontainer-x86_64" cargo make build-cli
```
```
docker pull --platform=linux/arm64 nvcr.io/nvidia/doca/doca_hbn:3.2.0-doca3.2.0
docker save --output=/tmp/doca_hbn.tar nvcr.io/nvidia/doca/doca_hbn:3.2.0-doca3.2.0
```
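The download script below verifies each fetched file against the `sha256_base64` digests returned by the NGC files API. Those digests are base64-encoded raw bytes, while `sha256sum -c` expects lowercase hex, which is why the script runs them through `base64 -d | od -An -vtx1 | tr -d ' \n'`. A standalone sketch of that same conversion, using a digest computed locally so it can be checked end to end:

```shell
# Compute a reference SHA-256 hex digest of some sample bytes.
hex=$(printf 'hello' | sha256sum | awk '{print $1}')
# Simulate the API's representation: raw digest bytes, base64-encoded.
b64=$(printf '%s' "$hex" | xxd -r -p | base64 -w0)
# Convert back to hex exactly as the download script does.
decoded=$(printf '%s' "$b64" | base64 -d | od -An -vtx1 | tr -d ' \n')
[ "$decoded" = "$hex" ] && echo "conversion OK"
```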
```
#!/usr/bin/env bash
set -e
HBN_VERSION="3.2.0"
mkdir -p temp
cd temp || exit 1
files=$(curl -s "https://api.ngc.nvidia.com/v2/resources/org/nvidia/team/doca/doca_hbn/${HBN_VERSION}/files")
printf '%s\n' "$files" |
jq -c '
  .urls as $u
  | .filepath as $p
  | .sha256_base64 as $s
  | range(0; $u | length) as $i
  | {url: $u[$i], filepath: $p[$i], sha256_base64: $s[$i]}
' |
while IFS= read -r obj; do
  url=$(printf '%s\n' "$obj" | jq -r '.url')
  path=$(printf '%s\n' "$obj" | jq -r '.filepath')
  sha=$(printf '%s\n' "$obj" | jq -r '.sha256_base64' | base64 -d | od -An -vtx1 | tr -d ' \n')
  mkdir -p "$(dirname "$path")"
  curl -sSL "$url" -o "$path"
  # sha256sum -c requires two spaces between the hex digest and the filename.
  printf '%s  %s\n' "$sha" "$path" | sha256sum -c --status || exit 1
done
cd ..
mkdir -p doca_container_configs
mv temp/scripts/${HBN_VERSION}/ doca_container_configs/scripts
mv temp/configs/${HBN_VERSION}/ doca_container_configs/configs
cd doca_container_configs
zip -r ../doca_container_configs.zip .
```

After running the script above:
```
cp doca_container_configs.zip /tmp
cargo make --cwd pxe --env SA_ENABLEMENT=1 build-boot-artifacts-bfb-sa
```
```
docker build --build-arg "CONTAINER_RUNTIME_AARCH64=alpine:latest" -t boot-artifacts-aarch64 -f dev/docker/Dockerfile.release-artifacts-aarch64 .
```

NOTE: The `CONTAINER_RUNTIME_AARCH64=alpine:latest` build argument must be included. The aarch64 binaries are bundled into an x86 container.