The `make release` target can be used to create a release. The environment variable `RELEASE_VERSION` (default value `latest`) defines the release version. The release target will:

- Update all tags of Docker images to `RELEASE_VERSION`
- Update the documentation version to `RELEASE_VERSION`
- Set the version of the main Maven projects (`topic-operator` and `cluster-operator`) to `RELEASE_VERSION`
- Create TAR.GZ and ZIP archives with the Kubernetes and OpenShift YAML files which can be used for deployment, and with the documentation in HTML format

The release target will not build the Docker images - they should be built and pushed automatically by GitHub Actions when the release is tagged in the GitHub repository. It also does not deploy the Java artifacts anywhere; they are only used to create the Docker images.
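The defaulting behavior can be sketched as follows; the shell fallback below mirrors what the Makefile presumably does internally (the exact Makefile mechanism is an assumption):

```shell
# Sketch of the RELEASE_VERSION default: when the variable is unset, the
# release is versioned as "latest" (the Makefile likely uses something
# like `RELEASE_VERSION ?= latest`).
RELEASE_VERSION="${RELEASE_VERSION:-latest}"
echo "Releasing version: $RELEASE_VERSION"

# An actual release run would look like:
# RELEASE_VERSION=0.45.0 make release
```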
The release process should normally look like this:
1. Create a release branch starting from the `main` one. The new release branch has to be named like `release-<Major>.<minor>.x`, for example `release-0.45.x` to be used for all patch releases of 0.45.
2. On the `main` git branch of the repository:
    - Update the versions to the next SNAPSHOT version using the `next_version` make target. For example, to update the next version to `0.46.0-SNAPSHOT`, run: `NEXT_VERSION=0.46.0-SNAPSHOT make next_version`.
    - Update the product version in the `documentation/shared/attributes.adoc` file to the next version by setting the `ProductVersion` variable, and the previous version to the one you are releasing by setting the `ProductVersionPrevious` variable. Also check that the `BridgeVersion` and `OAuthVersion` variables have the correct values for the HTTP Bridge and OAuth library currently in use.
    - Add a header for the next release to the `CHANGELOG.md` file.
    - Update the project roadmap (add the next version to the planned releases, make sure the current release is up to date, etc.).
3. Move to the release branch and run `make clean`.
4. Run `RELEASE_VERSION=<desired version> make release`, for example `RELEASE_VERSION=0.45.0 make release`.
    - For `RELEASE_VERSION`, always use the GA version here (e.g. `0.45.0`) and not the RC version (e.g. `0.45.0-rc1`).
    - This will automatically update several `pom.xml` files and all files in the `packaging/`, `install/`, `example/` and `helm-charts/` folders.
5. Update the checksums for released files in `.checksums` in the release branch:
    - Use the make targets `make checksum_helm`, `make checksum_install`, and `make checksum_examples` to generate the new checksums.
    - Update the checksums in the `.checksums` file in the root directory of the GitHub repository.
6. Commit the changes to the existing files (do not add the newly created top-level TAR.GZ and ZIP archives or the `.yaml` files named like `strimzi-*` into Git).
7. Push the changes to the release branch on GitHub.
8. Wait for the CI to complete the build.
    - Copy the build ID (from the URL in GitHub Actions).
9. Run the `release` workflow manually in the GitHub Actions UI:
    - Select the release branch from the list.
    - Set the desired release version (e.g. `0.45.0-rc1` for RCs or `0.45.0` for GA releases).
    - Set the release suffix to `0` (but check the "Build suffixed images" checkbox only for GA releases).
    - Set the build ID to the build ID from the previous step (for GA, this should be the build ID used for the last RC, since there should be no changes).
10. Once the release build is complete:
    - Download the release artifacts (binaries, documentation and SBOMs).
    - Check the images pushed to Quay.io.
    - Go to Maven Central > Publish and publish the release artifacts.
    - Because GitHub Actions doesn't support retaining builds forever, the workflow pushes an OCI image with the Java binaries to GHCR to keep them available forever.
11. Create a GitHub tag and release based on the release branch. Attach the release artifacts and docs as downloaded from GitHub Actions.
    - For GAs, prepare steps 13 and 14 before creating the release in GitHub.
    - For RCs, the tag should be named with the RC suffix, e.g. `0.45.0-rc1`.
12. (only for RCs, not for GAs) Run the system tests workflow. For more details, see the Running System Tests workflow section.
13. (only for GA, not for RCs) Update the website:
    - Update the `_redirects` file to make sure the `/install/latest` redirect points to the new release.
    - Update the `_data/releases.yaml` file to add the new release.
    - Download the release artifacts from the CI workflow and unpack them.
    - Update the documentation:
        - Create new directories `docs/operators/<new-version>` and `docs/operators/<new-version>/full` in the website repository.
        - Delete the old HTML files and images from `docs/operators/latest` and `docs/operators/latest/full` (keep the `*.md` files).
        - Copy files from the release artifacts under `documentation/htmlnoheader` to `docs/operators/<new-version>` and `docs/operators/latest` in the website repository.
        - Copy files from the release artifacts under `documentation/html` to `docs/operators/<new-version>/full` and `docs/operators/latest/full` in the website repository.
        - Create new files `configuring.md`, `deploying.md` and `overview.md` in `docs/operators/<new-version>` - the content of these files should be the same as for older versions, so you can copy them and update the version number.
    - Add the new release to the Helm Chart repository `index.yaml` on our website:
        - Download the release artifacts from the CI workflow and unpack them.
        - Use the `helm` command to add the new version to the `index.yaml` file:

          ```
          helm repo index <PATH_TO_THE_DIRECTORY_WITH_THE_ARTIFACTS> --merge <PATH_TO_THE_INDEX_YAML> --url <URL_OF_THE_GITHUB_RELEASE_PAGE>
          ```

          For example, for the Strimzi 0.45.0 release, if you unpacked the release artifacts to `./strimzi-0.45.0/` and have the Strimzi website checkout in `strimzi.github.io/`, you would run:

          ```
          helm repo index ./strimzi-0.45.0/ --merge ./strimzi.github.io/charts/index.yaml --url https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.45.0/
          ```

        - The updated `index.yaml` will be generated in the directory with the artifacts. Verify the added data and the digest, and if they are correct, copy it to `charts/index.yaml` on the website.
14. (only for GA, not for RCs) On the `main` git branch of the repository:
    - Check that the `ProductVersion` variable is correct in `documentation/shared/attributes.adoc`.
    - Update the `install`, `examples` and `helm-chart` directories in the `main` branch with the newly released files.
    - Update the checksums for released files in `.checksums`.
15. (only for GA, not for RCs) Publicise the release on the `strimzi-users` mailing list, the `strimzi` Slack channel, and social accounts.
16. (only for GA, not for RCs) Update the Strimzi manifest files in the Operator Hub community operators repository and submit a pull request upstream. You can find more details in the Operators Catalog section at the bottom of this documentation.
17. (only for GA, not for RCs) Add the new version to the `systemtest/src/test/resources/upgrade/BundleUpgrade.yaml` file for the upgrade tests.
18. (only for GA, not for RCs) Add the new version to the `systemtest/src/test/resources/upgrade/BundleDowngrade.yaml` file and remove the old one for the downgrade tests.
19. (only for GA, not for RCs) Add the new version to the `systemtest/src/test/resources/upgrade/OlmUpgrade.yaml` file for the OLM upgrade tests.
The version of the Strimzi Kafka Bridge is defined in the file `./bridge.version`.
Even the `main` branch uses this fixed version and not a version built from the `main` branch of the Kafka Bridge.
If you need to update the Kafka Bridge to a newer version, you should do it with the following steps:

- Edit the `bridge.version` file and update it to contain the new Bridge version
- Run `make bridge_version` to update the related files to the new version
- Commit all modified files to Git and open a PR
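The whole bump can be sketched as follows, run from the repository root; `0.31.1` is a made-up example version, and the `make` and `git` steps are left as comments:

```shell
# Sketch of a Kafka Bridge version bump. 0.31.1 is a placeholder for the
# real new Bridge version.
NEW_BRIDGE_VERSION=0.31.1
echo "$NEW_BRIDGE_VERSION" > bridge.version
cat bridge.version

# make bridge_version                                           # regenerate dependent files
# git commit -am "Update Kafka Bridge to $NEW_BRIDGE_VERSION"   # then open a PR
```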
After releasing an RC, we need to run the system tests:
- helm-acceptance (only one time by setting the latest supported Kafka version)
- upgrade (only one time by setting the latest supported Kafka version)
- regression (multiple times, one for each supported Kafka version)
- regression-fg (multiple times, one for each supported Kafka version)
Run them manually in the GitHub Actions UI:

- Select the release branch from the list
- Set the "Release Version" (i.e. `0.45.0-rc1`)
- Set the "Kafka Version" (i.e. `3.9.0`)
- Set the "Pipeline list" (i.e. `regression,upgrade,helm-acceptance,gh-regression`, based on the pipelines defined in `pipelines.yaml`)

The workflow will generate jobs based on the passed pipeline list and predefined values from the config file. The workflow has to be triggered multiple times in case you want to run tests for multiple Kafka versions.
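Since the workflow runs once per Kafka version, you effectively iterate over the supported versions. A sketch of the parameter sets you would enter in the UI (the versions and pipeline list below are illustrative):

```shell
# Illustrative only: one workflow run per supported Kafka version.
RELEASE_VERSION=0.45.0-rc1
PIPELINES="regression,upgrade,helm-acceptance"
for KAFKA_VERSION in 3.8.1 3.9.0; do
    echo "release=$RELEASE_VERSION kafka=$KAFKA_VERSION pipelines=$PIPELINES"
done
```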
Over time, the base container image could be affected by CVEs related to the installed JVM, operating system libraries, and so on.
Security issues are usually reported by security scanner tools used by the community users as well as project contributors.
The Quay.io registry also runs such scans periodically and reports discovered issues on its website.
Checking the Quay.io website is therefore a way to get the status of security vulnerabilities affecting the operator container image.
In this case, we might need to rebuild the operator container image.
This can be done by using the `cve-rebuild` workflow.
This workflow will take the previously built binaries from GHCR, based on the passed parameters, and use them to build a new container image, which is then pushed to the container registry with a suffixed tag (e.g. `0.45.0-2`).
The suffix can be specified when starting the re-build workflow.
You should always check what the previous suffix was and increment it.
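Finding the next suffix can be sketched like this; the tag list stands in for a registry query (e.g. listing the operator tags on Quay.io):

```shell
# Stand-in for the list of tags already published for the release.
EXISTING_TAGS="0.45.0
0.45.0-1
0.45.0-2"

# Highest suffix used so far (0 if the release was never rebuilt).
LAST_SUFFIX=$(echo "$EXISTING_TAGS" | sed -n 's/^0\.45\.0-//p' | sort -n | tail -1)
NEXT_SUFFIX=$(( ${LAST_SUFFIX:-0} + 1 ))
echo "Next release suffix: $NEXT_SUFFIX"   # prints 3 for the list above
```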
When starting the workflow, it will ask for several parameters which you need to fill in:

- Release version (for example `0.45.0`)
- Release suffix (for example `2` - it is used to create the suffixed images such as `strimzi/operator:0.45.0-2` to identify different builds done for different CVEs)
- Source build ID (the ID of the build from which the artifacts should be used - use the long build ID from the URL and not the shorter build number). You can also get the build ID by referring to the latest run of the corresponding release workflow.
After the suffixed image is pushed, the older images will still be available in the container registry under their own suffixes.
Only the latest rebuild will be made available under the un-suffixed tag (at this point, the `0.45.0` tagged image is still the previous one and not yet up to date with the CVE respin).
Afterwards, the workflow will wait for a manual approval from the maintainers (configured in the GitHub `cve-validation` environment).
This gives additional time to manually test the new container image.
After the manual approval, the image will also be pushed under the tag without the suffix (e.g. `0.45.0`).
This process should be used only for CVEs in the base images. Any CVEs in our code or in the Java dependencies require a new patch (or minor) release.
In order to make the Strimzi operator available in the OperatorHub.io catalog, you need to build a bundle containing the Strimzi operator metadata together with its Custom Resource Definitions.
The metadata are described through a ClusterServiceVersion (CSV) resource declared by using a corresponding YAML file.
The bundle for the OperatorHub.io is available in the Community-Operators GitHub repo.
In order to provide the bundle for a new release, you can use the previous one as a base.
Create a folder for the new release by copying the previous one and make the following changes:
- If releasing a new minor or major version (rather than a fix), change the `metadata/annotations.yaml` to update the second channel listed next to `operators.operatorframework.io.bundle.channels.v1` to the new release version range (e.g. `strimzi-0.45.x`).
- Copy the CRDs, the Cluster Roles YAML (excluding the `strimzi-cluster-operator` roles) and the operator `ConfigMap` to the `manifests` folder by taking them from the `install/cluster-operator` folder (within the Strimzi repo).
- Take the `strimzi-cluster-operator.v<VERSION>.clusterserviceversion.yaml` CSV file (by using the new release as `<VERSION>`) in order to update the following:
    - `alm-examples-metadata` and `alm-examples` sections, by using the examples from the `examples` folder (within the Strimzi repo).
    - `containerImage` field, with the new operator image (using the SHA).
    - `createdAt` field, with the date/time of creation of the current CSV file.
    - `name` field, by setting the new version in the operator name.
    - `customresourcedefinitions.owned` section, with the CRDs descriptions, from the `install/cluster-operator` folder (within the Strimzi repo).
    - `description` section, with all the Strimzi operator information already used for the release on GitHub.
    - `install.spec.permissions` section, by using the Cluster Role files from the `install/cluster-operator` folder (within the Strimzi repo).
    - `deployments` section, by using the Strimzi Cluster Operator Deployment YAML from the `install/cluster-operator` folder (within the Strimzi repo) but using the SHAs for the images.
    - `relatedImages` section, with the same images as the step before.
    - `replaces` field, by setting the old version that this new one is going to replace.
    - `version` field, with the new release.
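Some of these edits are mechanical; for example, refreshing the `createdAt` field with the current UTC timestamp can be sketched as below. The sketch operates on a stand-in file; in practice you would run the `sed` command against the real CSV file, and the date format shown is an assumption (check the existing CSV for the exact format):

```shell
# Stand-in for strimzi-cluster-operator.v<VERSION>.clusterserviceversion.yaml.
CSV=$(mktemp)
printf '  createdAt: "2024-01-01T00:00:00Z"\n' > "$CSV"

# Replace the createdAt value with the current UTC timestamp.
NOW=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
sed -i "s/^\([[:space:]]*createdAt:\).*/\1 \"$NOW\"/" "$CSV"

cat "$CSV"
rm "$CSV"
```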
After making all these changes, you can double-check the validity of the CSV by copy/pasting its content into the OperatorHub.io preview tool.
When the operator manifests and metadata files are ready, you should test the operator bundle on an actual Kubernetes and OpenShift cluster. If you are using Kubernetes, then you first need to install the Operator Lifecycle Manager (OLM) on it:
```
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.31.0/install.sh | bash -s v0.31.0
```

The above step is not necessary if you are using OpenShift, because it has the OLM out of the box.
In this section, the following steps show how to:
- create the operator bundle image and put it into a catalog image.
- publish the catalog on the cluster.
- install the operator from the catalog itself.
Prerequisites:
- `opm` tool
- `docker`, `podman` or `buildah` (the following steps allow you to configure the one to use via the `DOCKER_CMD` env var)
Note: For further details about the following steps, you can also refer to the official Operator Lifecycle Manager documentation.
In this step, you are going to create a container image containing the bundle with the operator manifests and metadata.
Inside the bundle directory (i.e. `operators/strimzi-kafka-operator/0.45.0`), export the following environment variables to specify the operator version and the container registry and user used to push the bundle and catalog images:
```
export OPERATOR_VERSION=0.45.0
export DOCKER_REGISTRY=quay.io
export DOCKER_ORG=ppatierno
export DOCKER_CMD=docker
```

Run the following command in order to generate a `bundle.Dockerfile` representing the operator bundle image:
```
opm alpha bundle generate --directory ./manifests/ --package strimzi-kafka-operator --channels stable,strimzi-0.45.x --default stable
```

Note: The channels are the same as the ones specified in the `metadata/annotations.yaml` file.
The generated Dockerfile will look like the following:
```
FROM scratch

LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
LABEL operators.operatorframework.io.bundle.package.v1=strimzi-kafka-operator
LABEL operators.operatorframework.io.bundle.channels.v1=stable,strimzi-0.45.x
LABEL operators.operatorframework.io.bundle.channel.default.v1=stable

COPY manifests /manifests/
COPY metadata /metadata/
```

Run the following command to build the operator bundle image and push it to a repository:
```
$DOCKER_CMD build -t $DOCKER_REGISTRY/$DOCKER_ORG/strimzi-kafka-operator-bundle:$OPERATOR_VERSION -f bundle.Dockerfile .
$DOCKER_CMD push $DOCKER_REGISTRY/$DOCKER_ORG/strimzi-kafka-operator-bundle:$OPERATOR_VERSION
```

You can also run some validations to ensure that your bundle is valid and in the correct format:

```
opm alpha bundle validate --tag $DOCKER_REGISTRY/$DOCKER_ORG/strimzi-kafka-operator-bundle:$OPERATOR_VERSION --image-builder $DOCKER_CMD
```

In this step, you are going to create a catalog and put the operator bundle into it.
Inside the root operator directory (i.e. `operators/strimzi-kafka-operator`), create a folder for the catalog:

```
mkdir -p strimzi-catalog
```

Run the following command in order to generate a `strimzi-catalog.Dockerfile` representing the catalog image:

```
opm generate dockerfile strimzi-catalog
```

Initialize the catalog:

```
opm init strimzi-kafka-operator --default-channel=stable --output yaml > strimzi-catalog/index.yaml
```

Add the bundle to the catalog (repeat for each operator version/bundle you want to add in order to be able to test operator upgrades):

```
opm render $DOCKER_REGISTRY/$DOCKER_ORG/strimzi-kafka-operator-bundle:$OPERATOR_VERSION --output=yaml >> strimzi-catalog/index.yaml
```

Edit the `index.yaml` to add the bundle to a channel (i.e. using the `stable` one):
```
cat << EOF >> strimzi-catalog/index.yaml
---
schema: olm.channel
package: strimzi-kafka-operator
name: stable
entries:
  - name: strimzi-cluster-operator.v0.45.0
EOF
```

When adding more than one operator version/bundle, each operator entry has to have the `replaces` field to specify which release it's going to replace:
```
schema: olm.channel
package: strimzi-kafka-operator
name: stable
entries:
  - name: strimzi-cluster-operator.v0.45.0
    replaces: strimzi-cluster-operator.v0.44.0
  - name: strimzi-cluster-operator.v0.44.0
    replaces: strimzi-cluster-operator.v0.43.0
```

Build and push the catalog image:

```
$DOCKER_CMD build -f strimzi-catalog.Dockerfile -t $DOCKER_REGISTRY/$DOCKER_ORG/olm-catalog:latest .
$DOCKER_CMD push $DOCKER_REGISTRY/$DOCKER_ORG/olm-catalog:latest
```

Create a CatalogSource resource to make the catalog available on the cluster:
```
kubectl apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: strimzi-catalog
  namespace: olm # use openshift-marketplace if running on OpenShift
spec:
  sourceType: grpc
  image: quay.io/ppatierno/olm-catalog:latest # use the correct image
  displayName: Strimzi Catalog
EOF
```

You can now list the operator as part of the "Strimzi Catalog" catalog and not the "Community Operators" catalog (coming from OLM):

```
kubectl get packagemanifest -n olm | grep strimzi-kafka-operator
```

You can test operator upgrades by starting from an existing catalog and then building a new catalog with a new operator version/bundle.
In this case, the new catalog image is pulled by Kubernetes/OpenShift.
It happens automatically if you are using a specific tag for the catalog image, for example going from $DOCKER_REGISTRY/$DOCKER_ORG/olm-catalog:1.0 to $DOCKER_REGISTRY/$DOCKER_ORG/olm-catalog:1.1.
If you are using the `latest` tag instead, you have to force pulling the new catalog image; one way is to kill the pod running the catalog:

```
kubectl get pods -n olm
kubectl delete pod strimzi-catalog-<id> -n olm
```

Remember to use the `openshift-marketplace` namespace instead if you are using OpenShift.
Create a `kafka` namespace where the operator will be installed.
On Kubernetes, you can install the operator by creating an OperatorGroup and a Subscription:
```
kubectl apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: strimzi-group
  namespace: kafka
EOF
```

```
kubectl apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-subscription
  namespace: kafka
spec:
  channel: stable
  name: strimzi-kafka-operator
  source: strimzi-catalog
  sourceNamespace: olm
  installPlanApproval: Automatic
EOF
```

When the Subscription is created, the OLM will use the "Strimzi Catalog" to install the operator from there.
On OpenShift, after the CatalogSource creation, you can use the web interface and install the operator from the Operator Catalog page.
There is no need to create an OperatorGroup and Subscription manually.
You can now deploy a Kafka cluster by using the installed operator and running some smoke tests.
Finally, uninstall the operator by deleting the Subscription and the corresponding ClusterServiceVersion:
```
CSV=$(kubectl get subscription strimzi-subscription -n kafka -o json | jq -r '.status.installedCSV')
kubectl delete subscription strimzi-subscription -n kafka
kubectl delete csv $CSV -n kafka
```

When the bundle has been successfully tested, you can finally open a PR against the k8s-operatorhub/community-operators repository.
The PR will run some sanity checks, which could require some fixes in case of errors.
After being reviewed by maintainers and merged, the Strimzi operator will be available in the OperatorHub.io website.
The operator should be made available in the OpenShift Operator Catalog as well. The bundle for the OpenShift Operator Catalog is available in the https://github.com/redhat-openshift-ecosystem/community-operators-prod/tree/main/operators/strimzi-kafka-operator GitHub repo. Just follow the same steps as for OperatorHub.io for building the bundle.