### Description
One-liner:

- I want the `kubernetes` Terraform provider to work with Entra-enabled AKS without managing any secrets (just OIDC federation).
Scenario:

- Have an AKS cluster, pre-created in a separate Terraform codebase/run, with managed Entra ID integration.
- Creating a Terraform module that uses both the `azurerm` and `kubernetes` providers for onboarding new apps/APIs into the AKS cluster (`azurerm_user_assigned_identity`, `kubernetes_namespace_v1`, `kubernetes_service_account_v1`, etc.).
- Using Terraform Cloud with a workspace that is configured with Dynamic Credentials for Azure, which authenticates the `azurerm` provider perfectly.
- The Azure identity being targeted for dynamic credentials holds (a hypothetical CLI sketch of these assignments follows this list):
  - the `Owner` role on the resource group where the `azurerm` resources go
  - the `Azure Kubernetes Service RBAC Cluster Admin` role, sufficient to make any changes through the Kubernetes API of the AKS cluster
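For concreteness, a minimal sketch of how those two role assignments might be granted with the Azure CLI; all assignee and scope values below are placeholders, not from the original setup:

```shell
# Hypothetical role assignments for the pipeline identity; the client id,
# subscription id, resource group, and cluster resource id are placeholders.
az role assignment create \
  --assignee "<pipeline-client-id>" \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<app-resource-group>"

az role assignment create \
  --assignee "<pipeline-client-id>" \
  --role "Azure Kubernetes Service RBAC Cluster Admin" \
  --scope "<aks-cluster-resource-id>"
```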
Manual version illustrating a similar idea:

```shell
# Log in to Azure
az login # use whatever details/parameters fit your environment

# Convert kubeconfig to inherit the Azure CLI credential you've already established.
# This switches the kubeconfig to use an `exec` of `kubelogin`.
kubelogin convert-kubeconfig -l azurecli

# Now, do stuff with kubectl
kubectl get nodes -o wide

# Each call of kubectl runs `kubelogin get-token` to get a short-lived credential,
# inheriting the identity already established for Azure.
```
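After the conversion, each `kubectl` invocation effectively runs something like the following; the `--server-id` is the fixed AKS Microsoft Entra server application ID referenced later in this issue, and the exact argument list in the generated kubeconfig may vary by `kubelogin` version:

```shell
# Roughly what the generated kubeconfig `exec` stanza invokes on each kubectl call
# (an approximation, not copied from a real converted kubeconfig):
kubelogin get-token \
  --login azurecli \
  --server-id 6dae42f8-4368-4678-94ff-3960e28e3630
```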
Goal:

- The `kubernetes` Terraform provider is able to take on the same identity being pulled in by the `azurerm` provider, using that identity to call the AKS cluster's Kubernetes API when provisioning `kubernetes_*` resources.
- Have zero secrets to store/rotate/protect (as the `azurerm` provider already accomplishes by federating via OIDC).
### Potential Terraform Configuration
I can imagine two ways to do this.

**Option 1:** the `kubernetes` provider can be told to use the same Azure Dynamic Credentials as the `azurerm` provider:
```hcl
terraform {
  cloud {
    organization = "my-org"
    workspaces {
      name = "this-workspace" # this workspace is set up for dynamic Azure credentials
    }
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.113.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.31.0"
    }
  }
}

provider "azurerm" {
  features {
    # Empty; we don't need anything special, but this block has to be here.
  }

  # The main provider configuration comes from the following environment variables being set in the TFC workspace, per
  # https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/azure-configuration#configure-the-azurerm-or-microsoft-entra-id-provider
  #
  # ARM_TENANT_ID           = <our tenant id>
  # ARM_SUBSCRIPTION_ID     = <our subscription id>
  # TFC_AZURE_PROVIDER_AUTH = true
  # TFC_AZURE_RUN_CLIENT_ID = <the client id of our pipeline credential that is configured to accept OIDC>
}

data "azurerm_kubernetes_cluster" "aks" {
  resource_group_name = local.cluster_resource_group_name
  name                = local.cluster_name
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

  use_tfc_azure_dynamic_credentials = true # <== this is the thing that would have to be invented, maybe borrowing code from the `azurerm` provider
}

# Off in a module somewhere:
# This resource is provisioned by the `kubernetes` provider, but using the Azure dynamic credential.
resource "kubernetes_namespace_v1" "ns" {
  metadata {
    name = local.kubernetes_namespace_name
    labels = {
      # ...
    }
  }
}
```
**Option 2:** the `kubernetes` provider exchanges the TFC-provided OIDC token on its own:
```hcl
terraform {
  cloud {
    organization = "my-org"
    workspaces {
      name = "this-workspace" # this workspace is set up for dynamic Azure credentials
    }
  }

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.113.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.31.0"
    }
  }
}

provider "azurerm" {
  features {
    # Empty; we don't need anything special, but this block has to be here.
  }

  # The main provider configuration comes from the following environment variables being set in the TFC workspace, per
  # https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/azure-configuration#configure-the-azurerm-or-microsoft-entra-id-provider
  #
  # ARM_TENANT_ID           = <our tenant id>
  # ARM_SUBSCRIPTION_ID     = <our subscription id>
  # TFC_AZURE_PROVIDER_AUTH = true
  # TFC_AZURE_RUN_CLIENT_ID = <the client id of our pipeline credential that is configured to accept OIDC>
}

data "azurerm_kubernetes_cluster" "aks" {
  resource_group_name = local.cluster_resource_group_name
  name                = local.cluster_name
}

# https://developer.hashicorp.com/terraform/cloud-docs/workspaces/dynamic-provider-credentials/azure-configuration#required-terraform-variable
# This "magic" variable is populated by the TFC workspace at runtime,
# and is especially required if you have multiple instances of the `azurerm` provider with aliases.
variable "tfc_azure_dynamic_credentials" {
  description = "Object containing Azure dynamic credentials configuration"
  type = object({
    default = object({
      client_id_file_path  = string
      oidc_token_file_path = string
    })
    aliases = map(object({
      client_id_file_path  = string
      oidc_token_file_path = string
    }))
  })
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--environment", "AzurePublicCloud",
      "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630", # Always the same, https://azure.github.io/kubelogin/concepts/aks.html
      "--client-id", "80faf920-1908-4b52-b5ef-a8e7bedfc67a", # Always the same, https://azure.github.io/kubelogin/concepts/aks.html
      "--tenant-id", data.azurerm_kubernetes_cluster.aks.azure_active_directory_role_based_access_control.0.tenant_id,
      "--authority-host", "https://login.microsoftonline.com/${data.azurerm_kubernetes_cluster.aks.azure_active_directory_role_based_access_control.0.tenant_id}", # or something similar, if it would work
      "--login", "workloadidentity",
      "--federated-token-file", var.tfc_azure_dynamic_credentials.default.oidc_token_file_path,
    ]
  }
}

# Off in a module somewhere:
# This resource is provisioned by the `kubernetes` provider, but using the Azure dynamic credential.
resource "kubernetes_namespace_v1" "ns" {
  metadata {
    name = local.kubernetes_namespace_name
    labels = {
      # ...
    }
  }
}
```
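For completeness, a minimal sketch of how the `aliases` entries of that magic variable would be consumed if the workspace configured dynamic credentials on an aliased `azurerm` provider; the alias name `"aks"` is an assumption, not part of the original configuration:

```hcl
# Hypothetical: look up the token file for an aliased provider configuration
# (the alias key "aks" is a placeholder) instead of the `default` entry.
locals {
  aks_oidc_token_file = var.tfc_azure_dynamic_credentials.aliases["aks"].oidc_token_file_path
}
```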
Notes:

- 📝 This option requires `kubelogin` to be available within the context of the Terraform run. We need a self-hosted TFC agent anyway (the cluster is private, so the TFC-provided agents wouldn't have line-of-sight to the Kubernetes API), and we have installed `kubelogin` on it ourselves.
- When the Azure Dynamic Credentials are set up, TFC places a valid JWT at the path `/home/tfc-agent/.tfc-agent/component/terraform/runs/{run-id-here}/tfc-azure-token`, with an issuer of `https://app.terraform.io` and an audience of `api://AzureADTokenExchange`, but using that JWT with `kubelogin` isn't working. (See the sketch after this list for one way to inspect those claims.)
- If I manually run the `kubelogin get-token` command as specified in my kubeconfig after `kubelogin convert-kubeconfig -l azurecli`, I get a JWT with an issuer of `https://sts.windows.net/{my-tenant-id-here}/` and an audience of `6dae42f8-4368-4678-94ff-3960e28e3630`, which is the static Entra ID for the AKS OIDC application that is the same for every customer. I believe this JWT is what is being submitted with calls to the Kubernetes API.
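A quick way to eyeball the `iss`/`aud` claims of the token file mentioned above; this assumes `jq` is available on the agent, and the `{run-id-here}` path segment is a placeholder exactly as in the note:

```shell
# Decode the payload segment of the JWT and print its issuer and audience.
TOKEN_FILE="/home/tfc-agent/.tfc-agent/component/terraform/runs/{run-id-here}/tfc-azure-token"

cut -d '.' -f2 "$TOKEN_FILE" \
  | tr '_-' '/+' \
  | awk '{ while (length($0) % 4) $0 = $0 "="; print }' \
  | base64 -d \
  | jq '{iss, aud}'
```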
### References

- Relates to #2072, "Unable to use kubernetes provider with fixed limited permissions" (see https://github.com/hashicorp/terraform-provider-azurerm/pull/21229)
- Relates to terraform-provider-helm#1114, which links the k8s provider issue with AKS limited permissions; most likely the `helm` provider is affected as well
### Community Note

- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.