Summary

`terraform apply` fails even though the base environment is successfully created in Databricks, because the environment ID is not being saved into the Terraform state file after creation.
Configuration
```hcl
terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "1.112.0"
    }
  }
}

provider "databricks" {
  host  = "https://<workspace>.azuredatabricks.net"
  token = var.databricks_token
}

resource "databricks_workspace_file" "base_env_yaml" {
  source = "./my-base-env.yaml"
  path   = "/Workspace/base-env/my-base-env.yaml"
}

resource "databricks_environments_workspace_base_environment" "env" {
  display_name          = "my-base-env"
  filepath              = databricks_workspace_file.base_env_yaml.path
  base_environment_type = "CPU"
}
```
Expected Behavior
`terraform apply` completes successfully. The resource is created and all attributes, including `workspace_base_environment_id`, are saved in state.
Actual Behavior
The resource is created successfully in the Databricks workspace (visible in Settings → Compute → Base environments, status = Ready for use, packages installed). But Terraform exits with code 1:

```
Error: Provider returned invalid result object after apply

After the apply operation, the provider still indicated an unknown value for
databricks_environments_workspace_base_environment.env["my-base-env"].workspace_base_environment_id.
All values must be known after apply, so this is always a bug in the provider
and should be reported in the provider's own repository. Terraform will still
save the other known object values in the state.
```
The resource is then marked as tainted in state, so the next `terraform apply` will try to destroy and recreate it even though it is fully functional.
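A temporary workaround (not a fix) is to clear the taint so the next plan does not destroy and recreate a working resource; this assumes the resource address from the configuration above:

```sh
# Remove the taint recorded for the (fully functional) resource.
terraform untaint databricks_environments_workspace_base_environment.env

# Verify: the next plan should no longer propose destroy/recreate.
terraform plan
```

Note that the ID is still missing from state afterwards, so untainting only prevents the immediate destroy/recreate cycle.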
Steps to Reproduce
- Use the configuration above
- Run `terraform init` and `terraform apply`
- The resource is created in the Databricks UI — but Terraform exits with code 1
- Inspect `terraform.tfstate` — `workspace_base_environment_id` is `null`, instance `status` is `tainted`
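The state inspection in the last step can be scripted; a minimal sketch, assuming the default local `terraform.tfstate` file and the resource type above:

```python
import json

def find_tainted_null_ids(state_path="terraform.tfstate"):
    """Return (resource name, instance status) pairs for base-environment
    resources whose workspace_base_environment_id is null in state."""
    with open(state_path) as f:
        state = json.load(f)
    hits = []
    for res in state.get("resources", []):
        if res.get("type") != "databricks_environments_workspace_base_environment":
            continue
        for inst in res.get("instances", []):
            attrs = inst.get("attributes", {})
            if attrs.get("workspace_base_environment_id") is None:
                hits.append((res.get("name"), inst.get("status")))
    return hits
```

On a state file exhibiting this bug, the function returns the affected instance with `status` set to `"tainted"`.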
Terraform and Provider Versions
```
Terraform v1.5.7
+ provider registry.terraform.io/databricks/databricks v1.112.0
```
Is it a regression?
No — this resource was introduced in v1.112.0 (Public Beta) and the bug has been present since it shipped.
Debug Output
```
╷
│ Error: Provider returned invalid result object after apply
│
│ After the apply operation, the provider still indicated an unknown value for
│ databricks_environments_workspace_base_environment.env["my_base_env"].workspace_base_environment_id. All values must be known
│ after apply, so this is always a bug in the provider and should be reported in the provider's own repository. Terraform will still save
│ the other known object values in the state.
╵
```
Important Factoids
- Tested on Azure Databricks, Terraform v1.5.7, provider v1.112.0
- The resource is fully functional in Databricks despite the Terraform error — the bug is purely in state handling.