This guide walks you through reverting from versioned workers to unversioned workers. It assumes you have enabled worker versioning and currently have (or previously had) versioned workers polling the Temporal Server.
This guide uses terminology defined in the Concepts document. Please review that document first to understand key terms such as Worker Deployment, WorkerDeployment CRD, and Kubernetes Deployment, as well as the relationships between them.
Before starting the migration, ensure you have:
- ✅ Temporal CLI version >= 1.5.0
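You can confirm the installed CLI version from your shell (the exact output format may vary between releases):

```shell
# Print the installed Temporal CLI version; confirm it is 1.5.0 or later.
temporal --version
```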
Update your worker initialization code to remove versioning configuration.
Before (Versioned):

```go
// Worker must use the build ID and deployment name from environment variables.
// These are set on the deployment by the controller.
buildID := os.Getenv("TEMPORAL_WORKER_BUILD_ID")
deploymentName := os.Getenv("TEMPORAL_DEPLOYMENT_NAME")
if buildID == "" || deploymentName == "" {
	log.Fatal("TEMPORAL_WORKER_BUILD_ID and TEMPORAL_DEPLOYMENT_NAME must be set")
}
workerOptions := worker.Options{
	DeploymentOptions: worker.DeploymentOptions{
		UseVersioning: true,
		Version: worker.WorkerDeploymentVersion{
			DeploymentName: deploymentName,
			BuildID:        buildID,
		},
	},
}
w := worker.New(client, "my-task-queue", workerOptions)
```

After (Unversioned):
```go
// Worker connects without versioning.
w := worker.New(client, "my-task-queue", worker.Options{})
```

Deploy your unversioned workers as you would without the Worker Controller (i.e. as their own Deployments, not connected to a WorkerDeployment resource) and ensure they are polling all of the Task Queues in your Worker Deployment. You can verify their presence on the Task Queues page (https://cloud.temporal.io/namespaces//task-queues/).
Run the following Temporal CLI command to set the current version of the Worker Deployment to unversioned:
```shell
temporal worker deployment set-current-version \
  --deployment-name <your-deployment-name> \
  --build-id ""
```

After completing the migration steps:
- Verify in the Temporal UI that traffic is shifting from versioned workers to unversioned workers.
- AutoUpgrade workflows will eventually move onto the unversioned worker(s).
- Pinned workflows that were started on versioned workers will continue and complete execution on the workers of the version they are pinned to.
- New Workflow Executions on Task Queues in your now-unversioned Worker Deployment will start on the unversioned workers, regardless of whether they are Pinned or AutoUpgrade.
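To confirm the switch, you can also inspect the Worker Deployment from the CLI. This assumes the `describe` subcommand reports the deployment's current version (flag names may differ between CLI releases):

```shell
# Show the Worker Deployment, including its current version,
# which should now be unversioned.
temporal worker deployment describe --name <your-deployment-name>
```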
NOTE: The cleanup steps below are optional; you do not need to delete the Worker Versions or Worker Deployment resources from the Temporal Server in order to migrate to unversioned workers.
The Worker Controller will delete Kubernetes Deployments, which represent versioned workers, based on your configured Sunset Strategy. Specifically, the Controller deletes the Kubernetes Deployment for a version once the time since that version became drained exceeds the combined Scaledown and Delete delay. Deleting these Deployments stops the workers from polling the Temporal Server, which is a prerequisite for deleting a Worker Version.
A Worker Version can be deleted once it has been drained and has no active pollers. When both conditions are met, delete it using:
```shell
temporal worker deployment delete-version \
  --deployment-name <your-deployment-name> \
  --build-id <your-build-id>
```

To delete a Worker Deployment:
- First, ensure all its Worker Versions have been deleted.
- Then run:
```shell
temporal worker deployment delete --name <your-deployment-name>
```
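If deletion fails because Worker Versions still remain, you can enumerate your Worker Deployments and drill into each one to find leftovers; this assumes the `list` subcommand available in recent CLI releases:

```shell
# List all Worker Deployments in the namespace; inspect any that remain
# before retrying the delete.
temporal worker deployment list
```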