# Architecture and implementation
- Run entirely locally: allows usage for development of SCF, and iterating on catapult itself.
- Included linting, unit tests and integration tests of Catapult: allows iterating on catapult without breaking the pipelines.
- Non-interactive interface that can be used as a basis for automation (e.g., a pipeline).
- Defined persistent states: the state of the system is represented by the files inside the `build<CLUSTER_NAME>` folder, and is not ephemeral inside a process. This makes inspecting deployments in pipelines easier and the abstractions simpler, and allows loosely coupled states (getting a k8s cluster, deploying scf, stratos, tests, utilities) to live under a simple framework.
- Idempotent state transitions: executing the same state transition nets you an equivalent state, and doesn't break the cluster. This allows easy reuse of states, as one can always be sure of the current state: e.g., calling `make scf-clean` several times to get a clean slate without breaking the cluster, or running `make scf-gen-config` as much as needed, always getting the same `scf-config-values.yaml`.
- Conscious separation of public and private API: the public API is not bound to change and is useful for automation such as pipelines, while the private API allows expanding state functionality seamlessly in the background.
- Defer decisions until the last possible moment, to reduce complexity by not depending on pre-existing conditions: e.g., deploying a k8s backend doesn't need information about scf or stratos, and should work separately.
- Leverage existing solutions to deploy k8s and scf, such as cap-terraform and ekcp.
- Write it in a common denominator for all engineers: bash shell scripts and simple utilities.
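As a sketch of the idempotency principle above (not Catapult's actual code; the file name follows the listings later in this document), a transition that regenerates its output from its inputs can be re-run any number of times:

```shell
#!/usr/bin/env bash
# Sketch of an idempotent state transition: the state lives on disk,
# so re-running the transition converges to the same state.
set -euo pipefail

BUILD_DIR="${BUILD_DIR:-$(mktemp -d)}"   # stand-in for build<CLUSTER_NAME>

scf_gen_config() {
  # Same inputs always yield the same file, so calling this repeatedly is safe.
  printf 'system_domain: %s\n' "${DOMAIN:-example.local}" \
    > "$BUILD_DIR/scf-config-values.yaml"
}

scf_gen_config
scf_gen_config   # second run: equivalent state, nothing breaks
cat "$BUILD_DIR/scf-config-values.yaml"
```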
Catapult is a finite state machine, implemented with bash scripts and Makefiles.
A simplified vision is:

- States: the state of the system is saved to file and defined by the contents of the `build<CLUSTER_NAME>` folder. The contents represent the state of the k8s cluster and scf. Each state exposes a public API in the form of make targets, and has an internal state machine in the form of private make targets.
- Transitions: implemented as `make` targets, they remove or add content to the folder. Each make target accepts specific env vars.
```
$> make k8s
(…)
$> tree -L 1 buildkind/
buildkind/
├── bin
├── .envrc
├── .helm
├── kind-config.yaml
├── kubeconfig
└── storageclass.yaml
```
Achieving the first state gives us a build dir, a kubeconfig of a cluster prepared for CAP (storage classes, RBAC, etc.), sets up helm and the `.helm` home, and gives us artifacts specific to the backend.
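The `.envrc` is what scopes every tool to this build directory. Its exact contents are backend-specific; a hypothetical sketch:

```shell
# Hypothetical .envrc contents; the real file is generated per backend.
export KUBECONFIG="$PWD/kubeconfig"   # kubectl and helm act only on this cluster
export HELM_HOME="$PWD/.helm"         # helm state stays inside the build dir
export PATH="$PWD/bin:$PATH"          # prefer binaries fetched for this build
```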
```
$> make scf scf-login
(…)
$> tree -L 1 buildkind/
buildkind/
├── bin
├── .cf (new)
├── chart (new)
├── .envrc
├── .helm
├── helm (new)
├── kind-config.yaml
├── kube (new)
├── kubeconfig
├── kube-ready-state-check.sh (new)
├── scf-config-values.yaml (new)
└── storageclass.yaml
```
We have now deployed scf and logged into it, which gives us the extracted scf chart at `helm/` and `kube/`, the `scf-config-values.yaml`, and the cf CLI and its `.cf` home.
```
$> make scf-clean
(…)
$> tree -L 1 buildkind/
buildkind/
├── bin
├── .envrc
├── .helm
├── kind-config.yaml
├── kubeconfig
└── storageclass.yaml
```
Deleting the scf deployment returns us to the state of having a k8s backend, allowing us to reuse it.
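A sketch of that clean transition, using the file names from the tree listings above (the simulated build dir is a stand-in, not Catapult's real code): only the scf artifacts are deleted, so the k8s-backend state survives.

```shell
#!/usr/bin/env bash
# scf-clean sketch: remove only what "make scf" added, keep the backend state.
set -euo pipefail
BUILD_DIR="$(mktemp -d)"

# Simulate the post-"make scf" state.
mkdir -p "$BUILD_DIR"/{bin,.cf,chart,.helm,helm,kube}
touch "$BUILD_DIR"/{.envrc,kind-config.yaml,kubeconfig,storageclass.yaml}
touch "$BUILD_DIR"/{kube-ready-state-check.sh,scf-config-values.yaml}

# The clean transition: delete only the scf artifacts.
rm -rf "$BUILD_DIR"/{.cf,chart,helm,kube,kube-ready-state-check.sh,scf-config-values.yaml}

ls -A "$BUILD_DIR"   # back to the k8s-backend state
```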
Some states are complex enough that it is useful to cut them into chunks, following the same implementation as the overall project: a Makefile with targets as state transitions. For example, the scf state:
Here we do:

- `SCF_CHART=<url_or_file> make scf-chart` procures a chart and extracts it
- `DOCKER_*=<docker_registry_creds> ENABLE_EIRINI=<true/false> DEFAULT_STACK=<sle15,etc> SCF_OPERATOR=<true/false> make scf-gen-config` generates `scf-config-values.yaml` with the needed options
- `EMBEDDED_UAA=<true/false> SCF_OPERATOR=<true/false> OPERATOR_CHART_URL=<url> make scf-install` calls `helm install` and installs scf
- `make scf-login` logs into scf with the cf CLI
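To make the env-var-driven step concrete, here is a hedged sketch of how a gen-config-style script might map those variables onto the generated file. The YAML keys and the output path are illustrative, not Catapult's real output:

```shell
#!/usr/bin/env bash
# Sketch: translate scf-gen-config-style env vars into a values file.
set -euo pipefail

ENABLE_EIRINI="${ENABLE_EIRINI:-false}"
DEFAULT_STACK="${DEFAULT_STACK:-sle15}"
out_dir="$(mktemp -d)"

# Illustrative keys only; the real scf-config-values.yaml differs.
cat > "$out_dir/scf-config-values.yaml" <<EOF
env:
  DEFAULT_STACK: ${DEFAULT_STACK}
features:
  eirini: ${ENABLE_EIRINI}
EOF
cat "$out_dir/scf-config-values.yaml"
```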
States are implemented by folders.
Calling `BACKEND=kind make k8s`, as the Makefile code shows, calls a make target that does, among other things, `make -C backend/kind all`. The same goes for `BACKEND=gke make k8s`: it will execute whatever is in `backend/gke/Makefile`, which runs the scripts contained in `backend/gke/*`.
The same is true for other states that are not backends. For example, calling `make scf` will, as the Makefile code shows, call `make -C modules/scf`, which executes the `all` target of `modules/scf/Makefile`. That target in turn does `make clean chart gen-config install`, executing, in order, the following scripts:
```
modules/scf/clean.sh
modules/scf/chart.sh
modules/scf/gen-config.sh
modules/scf/install.sh
```
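A minimal sketch of that dispatch (the script names are the ones listed above; the bodies here are stand-ins, not Catapult's real Makefile):

```shell
#!/usr/bin/env bash
# Sketch of what the "all" target amounts to: running each state
# transition script of the module, in order.
set -euo pipefail

module_dir="$(mktemp -d)"
for step in clean chart gen-config install; do
  # Create stand-in scripts; in Catapult each performs a real transition.
  printf '#!/bin/sh\necho "ran %s"\n' "$step" > "$module_dir/$step.sh"
  chmod +x "$module_dir/$step.sh"
done

# "all" simply chains the transitions, in order.
for step in clean chart gen-config install; do
  "$module_dir/$step.sh"
done > "$module_dir/run.log"
cat "$module_dir/run.log"
```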
This is the implementation of the finite state machine we talked about before. It allows seamlessly adding intermediate substeps without breaking the public API/UI that users and pipelines use.
The public API consists of all the make targets exposed by `catapult/Makefile`. Deprecated targets are marked as such when run.
It is safe to assume this API will not change.
Each folder (e.g. `backends`, `modules/{scf,stratos,tests,extra,experiments}`) has its own Makefile, where additional targets are added. They can be run with `make private <module_path> <private target>`.
Options are passed as env vars. These and their default values are sourced from, in descending order of priority:

- `backend/foo/defaults.sh` (if doing `make k8s` or `make kubeconfig`)
- `modules/foo/defaults.sh`
- `include/defaults_global{,_private}.sh`
- `modules/common/defaults.sh`
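This layering works naturally with shell parameter defaults: `: "${VAR:=value}"` assigns only when the variable is still unset, so higher-priority sources (including the user's environment) win. A sketch with hypothetical variable names:

```shell
#!/usr/bin/env bash
# Sketch of layered defaults: ":=" assigns only when a variable is unset,
# so earlier (higher-priority) sources win over later ones.
set -euo pipefail

# Imagine this line lives in a module's defaults.sh ...
: "${CLUSTER_NAME:=kind}"
# ... and these in a lower-priority common defaults.sh:
: "${CLUSTER_NAME:=default-cluster}"   # ignored: already set above
: "${HELM_VERSION:=3.12.0}"            # takes effect: nothing set it earlier

echo "CLUSTER_NAME=$CLUSTER_NAME HELM_VERSION=$HELM_VERSION"
```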
Look at any script run by a substate; e.g., `make scf-chart` executes `modules/scf/scf-chart`. At the beginning there are 3 lines:

```shell
. ./defaults.sh           # loads the module options
. ../../include/common.sh # loads general options, sets bash options, etc., and at the end pushd's into the build dir
. .envrc                  # we are now in the build dir; load env vars to operate *only* against it (KUBECONFIG, etc.)
```

Now the script can continue doing operations inside the build folder; in the case of `make scf-chart`, downloading and untarring the chart.
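An illustrative body for that download-and-untar step, using a locally created tarball in place of a real chart download (paths and the chart layout are placeholders, not Catapult's actual code):

```shell
#!/usr/bin/env bash
# Sketch of the "download and untar the chart" step behind make scf-chart.
set -euo pipefail

build_dir="$(mktemp -d)"
cd "$build_dir"   # in Catapult, common.sh leaves the script inside the build folder

# Stand-in for fetching $SCF_CHART: create a tiny chart tarball locally.
mkdir -p src/scf && echo 'name: scf' > src/scf/Chart.yaml
tar czf chart.tgz -C src scf

# Extract into chart/, as the real step would.
mkdir -p chart && tar xzf chart.tgz -C chart
ls chart/scf
```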
Catapult has its own unit and integration tests, run by `make catapult-test`. They use shunit2. Running `make catapult-test` also runs `make catapult-lint`, which lints for shell and yaml errors.