diff --git a/docs/content/concepts/apis/admission-webhooks.md b/docs/content/concepts/apis/admission-webhooks.md
index 5948aa10945..46b2a2c4ab9 100644
--- a/docs/content/concepts/apis/admission-webhooks.md
+++ b/docs/content/concepts/apis/admission-webhooks.md
@@ -1,13 +1,18 @@
+---
+description: >
+  How admission webhooks and validating admission policies work across workspaces in kcp.
+---
+
 # Admission Webhooks
 
-kcp extends the vanilla [admission plugins](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) for webhooks, and makes them cluster-aware.
+kcp extends the vanilla [admission plugins](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) for webhooks and validating admission policies, and makes them cluster-aware.
 
 ```mermaid
 flowchart TD
     subgraph ws1["API Provider Workspace ws1"]
         export["Widgets APIExport"]
         schema["Widgets APIResourceSchema<br/>(widgets.v1.example.org)"]
-        webhook["Mutating/ValidatingWebhookConfiguration<br/>for widgets.v1.example.org<br/>&nbsp;<br/>Handle a from ws2 (APIResourceSchema)<br/>Handle b from ws3 (APIResourceSchema)<br/>Handle a from ws1 (CRD)"]
+        webhook["Mutating/ValidatingWebhookConfiguration<br/>ValidatingAdmissionPolicy<br/>for widgets.v1.example.org<br/>&nbsp;<br/>Handle a from ws2 (APIResourceSchema)<br/>Handle b from ws3 (APIResourceSchema)<br/>Handle a from ws1 (CRD)"]
        crd["Widgets CustomResourceDefinition<br/>(widgets.v1.example.org)"]
(widgets.v1.example.org)"] export --> schema @@ -37,9 +42,39 @@ flowchart TD class export,schema,webhook,crd,binding1,binding2 resource; ``` -When an object is to be mutated or validated, the webhook admission plugin ([`apis.kcp.io/MutatingWebhook`](https://github.com/kcp-dev/kcp/tree/main/pkg/admission/mutatingwebhook) and [`apis.kcp.io/ValidatingWebhook`](https://github.com/kcp-dev/kcp/tree/main/pkg/admission/validatingwebhook) respectively) looks for the owner of the resource schema. Once found, it then dispatches the handling for that object in the owner's workspace. There are two such cases in the diagram above: +When an object is to be mutated or validated, the admission plugins ([`apis.kcp.io/MutatingWebhook`](https://github.com/kcp-dev/kcp/tree/main/pkg/admission/mutatingwebhook), [`apis.kcp.io/ValidatingWebhook`](https://github.com/kcp-dev/kcp/tree/main/pkg/admission/validatingwebhook), and [`ValidatingAdmissionPolicy`](https://github.com/kcp-dev/kcp/tree/main/pkg/admission/validatingadmissionpolicy) respectively) look for the owner of the resource schema. Once found, they then dispatch the handling for that object in the owner's workspace. There are two such cases in the diagram above: + +* **Admitting bound resources.** During the request handling, Widget objects inside the consumer workspaces `ws2` and `ws3` are picked up by the respective admission plugin. The plugin sees the resource's schema comes from an APIBinding, and so it sets up an instance of the admission plugin to be working with its APIExport's workspace, in `ws1`. Afterwards, normal admission flow continues: the request is dispatched to all eligible webhook configurations or validating admission policies inside `ws1` and the object in request is mutated or validated. +* **Admitting local resources.** The second case is when the webhook configuration or validating admission policy exists in the same workspace as the object it's handling. The admission plugin sees the resource is not sourced via an `APIBinding`, and so it looks for eligible webhook configurations or policies locally, and dispatches the request accordingly. The same would of course be true if `APIExport` and its `APIBinding` lived in the same workspace: the `APIExport` would resolve to the same cluster. + +## ValidatingAdmissionPolicy Support + +kcp supports cross-workspace `ValidatingAdmissionPolicy` and `ValidatingAdmissionPolicyBinding` resources, similar to how it supports cross-workspace webhooks. When a resource is created in a consumer workspace that is bound via an `APIBinding`, the `ValidatingAdmissionPolicy` plugin will: + +1. Check the `APIBinding` to find the source workspace (`APIExport` workspace) +2. Look for `ValidatingAdmissionPolicy` and `ValidatingAdmissionPolicyBinding` resources in the source workspace +3. Apply those policies to validate the resource in the consumer workspace + +This means that policies defined in an `APIExport` workspace will automatically apply to all resources created in consuming workspaces, providing a consistent validation experience across all consumers of an API. 
+
+### Example
+
+Consider a scenario where:
+* **Provider workspace** (`root:provider`) has:
+    * An `APIExport` for `cowboys.wildwest.dev`
+    * A `ValidatingAdmissionPolicy` that rejects cowboys with `intent: "bad"`
+    * A `ValidatingAdmissionPolicyBinding` that binds the policy
+
+* **Consumer workspace** (`root:consumer`) has:
+    * An `APIBinding` that binds to the provider's `APIExport`
+    * A user trying to create a cowboy with `intent: "bad"`
+
+When the user creates the cowboy in the consumer workspace, the `ValidatingAdmissionPolicy` plugin will:
+1. Detect that the cowboy resource comes from an `APIBinding`
+2. Look up the source workspace (the provider workspace)
+3. Find and apply the policy from the provider workspace
+4. Reject the cowboy creation because it violates the policy
 
-* **Admitting bound resources.** During the request handling, Widget objects inside the consumer workspaces `ws2` and `ws3` are picked up by the respective webhook admission plugin. The plugin sees the resource's schema comes from an APIBinding, and so it sets up an instance of `{Mutating,Validating}Webhook` to be working with its APIExport's workspace, in `ws1`. Afterwards, normal webhook admission flow continues: the request is dispatched to all eligible webhook configurations inside `ws1` and the object in request is mutated or validated.
-* **Admitting local resources.** The second case is when the webhook configuration exists in the same workspace as the object it's handling. The admission plugin sees the resource is not sourced via an APIBinding, and so it looks for eligible webhook configurations locally, and dispatches the request to the webhooks there. The same would of course be true if APIExport and its APIBinding lived in the same workspace: the APIExport would resolve to the same cluster.
+This ensures that API providers can enforce consistent validation rules across all consumers of their APIs.
 
-Lastly, objects in admission review are annotated with the name of the workspace that owns that object. For example, when Widget `b` from `ws3` is being validated, its caught by `ValidatingWebhookConfiguration` in `ws1`, but the webhook will see `kcp.io/cluster: ws3` annotation on the reviewed object.
+Lastly, objects in admission review are annotated with the name of the workspace that owns that object. For example, when Widget `b` from `ws3` is being validated, it's caught by a `ValidatingWebhookConfiguration` or `ValidatingAdmissionPolicy` in `ws1`, but the webhook or policy evaluator will see the `kcp.io/cluster: ws3` annotation on the reviewed object.
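To make the provider-side setup from the example concrete, here is a minimal Go sketch of creating the policy and binding in the `APIExport` workspace through kcp's cluster-aware client. The `installCowboyPolicy` helper and the `root:provider` path are illustrative, not part of this change; the e2e test below does the equivalent against workspace fixtures.

```go
package provider

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"

	kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
	"github.com/kcp-dev/logicalcluster/v3"
)

// installCowboyPolicy creates a ValidatingAdmissionPolicy and its binding in
// the provider (APIExport) workspace. Because the cowboys API is exported from
// there, the policy is evaluated for every consumer workspace that binds it.
func installCowboyPolicy(ctx context.Context, client kcpkubernetesclientset.ClusterInterface) error {
	providerPath := logicalcluster.NewPath("root:provider") // illustrative path

	policy := &admissionregistrationv1.ValidatingAdmissionPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-bad-intent"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
			FailurePolicy: ptr.To(admissionregistrationv1.Fail),
			MatchConstraints: &admissionregistrationv1.MatchResources{
				ResourceRules: []admissionregistrationv1.NamedRuleWithOperations{{
					RuleWithOperations: admissionregistrationv1.RuleWithOperations{
						Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create, admissionregistrationv1.Update},
						Rule: admissionregistrationv1.Rule{
							APIGroups:   []string{"wildwest.dev"},
							APIVersions: []string{"v1alpha1"},
							Resources:   []string{"cowboys"},
						},
					},
				}},
			},
			// CEL expression: requests where spec.intent == "bad" are rejected.
			Validations: []admissionregistrationv1.Validation{{
				Expression: "object.spec.intent != 'bad'",
			}},
		},
	}
	if _, err := client.Cluster(providerPath).AdmissionregistrationV1().ValidatingAdmissionPolicies().Create(ctx, policy, metav1.CreateOptions{}); err != nil {
		return err
	}

	// A policy is inert until a binding selects it.
	binding := &admissionregistrationv1.ValidatingAdmissionPolicyBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-bad-intent-binding"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicyBindingSpec{
			PolicyName:        "deny-bad-intent",
			ValidationActions: []admissionregistrationv1.ValidationAction{admissionregistrationv1.Deny},
		},
	}
	_, err := client.Cluster(providerPath).AdmissionregistrationV1().ValidatingAdmissionPolicyBindings().Create(ctx, binding, metav1.CreateOptions{})
	return err
}
```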
diff --git a/pkg/admission/validatingadmissionpolicy/validating_admission_policy.go b/pkg/admission/validatingadmissionpolicy/validating_admission_policy.go
index 3029a76e0cb..479f16bdbe5 100644
--- a/pkg/admission/validatingadmissionpolicy/validating_admission_policy.go
+++ b/pkg/admission/validatingadmissionpolicy/validating_admission_policy.go
@@ -22,6 +22,8 @@ import (
 	"sync"
 
 	"k8s.io/apimachinery/pkg/api/meta"
+	"k8s.io/apimachinery/pkg/labels"
+	"k8s.io/apimachinery/pkg/runtime/schema"
 	"k8s.io/apiserver/pkg/admission"
 	"k8s.io/apiserver/pkg/admission/initializer"
 	"k8s.io/apiserver/pkg/admission/plugin/policy/generic"
@@ -33,12 +35,15 @@ import (
 	"k8s.io/client-go/informers"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/restmapper"
+	"k8s.io/client-go/tools/cache"
 	"k8s.io/klog/v2"
 
 	kcpdynamic "github.com/kcp-dev/client-go/dynamic"
 	kcpkubernetesinformers "github.com/kcp-dev/client-go/informers"
 	kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
 	"github.com/kcp-dev/logicalcluster/v3"
+	apisv1alpha2 "github.com/kcp-dev/sdk/apis/apis/v1alpha2"
+	corev1alpha1 "github.com/kcp-dev/sdk/apis/core/v1alpha1"
 	kcpinformers "github.com/kcp-dev/sdk/client/informers/externalversions"
 	corev1alpha1informers "github.com/kcp-dev/sdk/client/informers/externalversions/core/v1alpha1"
 
@@ -75,8 +80,10 @@ type KubeValidatingAdmissionPolicy struct {
 	serverDone <-chan struct{}
 	authorizer authorizer.Authorizer
 
-	lock      sync.RWMutex
-	delegates map[logicalcluster.Name]*stoppableValidatingAdmissionPolicy
+	getAPIBindings func(clusterName logicalcluster.Name) ([]*apisv1alpha2.APIBinding, error)
+
+	delegatesLock sync.RWMutex
+	delegates     map[logicalcluster.Name]*stoppableValidatingAdmissionPolicy
 
 	logicalClusterDeletionMonitorStarter sync.Once
 }
@@ -84,6 +91,7 @@
 var _ admission.ValidationInterface = &KubeValidatingAdmissionPolicy{}
 var _ = initializers.WantsKubeClusterClient(&KubeValidatingAdmissionPolicy{})
 var _ = initializers.WantsKubeInformers(&KubeValidatingAdmissionPolicy{})
+var _ = initializers.WantsKcpInformers(&KubeValidatingAdmissionPolicy{})
 var _ = initializers.WantsServerShutdownChannel(&KubeValidatingAdmissionPolicy{})
 var _ = initializers.WantsDynamicClusterClient(&KubeValidatingAdmissionPolicy{})
 var _ = initializer.WantsAuthorizer(&KubeValidatingAdmissionPolicy{})
@@ -95,6 +103,32 @@ func (k *KubeValidatingAdmissionPolicy) SetKubeClusterClient(kubeClusterClient k
 
 func (k *KubeValidatingAdmissionPolicy) SetKcpInformers(local, global kcpinformers.SharedInformerFactory) {
 	k.logicalClusterInformer = local.Core().V1alpha1().LogicalClusters()
+	k.getAPIBindings = func(clusterName logicalcluster.Name) ([]*apisv1alpha2.APIBinding, error) {
+		return local.Apis().V1alpha2().APIBindings().Lister().Cluster(clusterName).List(labels.Everything())
+	}
+
+	_, _ = local.Core().V1alpha1().LogicalClusters().Informer().AddEventHandler(
+		cache.ResourceEventHandlerFuncs{
+			DeleteFunc: func(obj interface{}) {
+				cl, ok := obj.(*corev1alpha1.LogicalCluster)
+				if !ok {
+					return
+				}
+
+				clName := logicalcluster.Name(cl.Annotations[logicalcluster.AnnotationKey])
+
+				k.delegatesLock.Lock()
+				defer k.delegatesLock.Unlock()
+
+				for key, delegate := range k.delegates {
+					if key == clName {
+						delete(k.delegates, key)
+						delegate.stop()
+					}
+				}
+			},
+		},
+	)
 }
 
 func (k *KubeValidatingAdmissionPolicy) SetKubeInformers(local, global kcpkubernetesinformers.SharedInformerFactory) {
@@ -129,7 +163,12 @@ func (k *KubeValidatingAdmissionPolicy) Validate(ctx context.Context, a admissio
 		return err
 	}
 
-	delegate, err := k.getOrCreateDelegate(cluster.Name)
+	sourceCluster, err := k.getSourceClusterForGroupResource(cluster.Name, a.GetResource().GroupResource())
+	if err != nil {
+		return err
+	}
+
+	delegate, err := k.getOrCreateDelegate(sourceCluster)
 	if err != nil {
 		return err
 	}
@@ -137,18 +176,35 @@ func (k *KubeValidatingAdmissionPolicy) Validate(ctx context.Context, a admissio
 	return delegate.Validate(ctx, a, o)
 }
 
+func (k *KubeValidatingAdmissionPolicy) getSourceClusterForGroupResource(clusterName logicalcluster.Name, groupResource schema.GroupResource) (logicalcluster.Name, error) {
+	objs, err := k.getAPIBindings(clusterName)
+	if err != nil {
+		return "", err
+	}
+
+	for _, apiBinding := range objs {
+		for _, br := range apiBinding.Status.BoundResources {
+			if br.Group == groupResource.Group && br.Resource == groupResource.Resource {
+				return logicalcluster.Name(apiBinding.Status.APIExportClusterName), nil
+			}
+		}
+	}
+
+	return clusterName, nil
+}
+
 // getOrCreateDelegate creates an actual plugin for clusterName.
 func (k *KubeValidatingAdmissionPolicy) getOrCreateDelegate(clusterName logicalcluster.Name) (*stoppableValidatingAdmissionPolicy, error) {
-	k.lock.RLock()
+	k.delegatesLock.RLock()
 	delegate := k.delegates[clusterName]
-	k.lock.RUnlock()
+	k.delegatesLock.RUnlock()
 
 	if delegate != nil {
 		return delegate, nil
 	}
 
-	k.lock.Lock()
-	defer k.lock.Unlock()
+	k.delegatesLock.Lock()
+	defer k.delegatesLock.Unlock()
 
 	delegate = k.delegates[clusterName]
 	if delegate != nil {
@@ -210,8 +266,8 @@ func (k *KubeValidatingAdmissionPolicy) getOrCreateDelegate(clusterName logicalc
 }
 
 func (k *KubeValidatingAdmissionPolicy) logicalClusterDeleted(clusterName logicalcluster.Name) {
-	k.lock.Lock()
-	defer k.lock.Unlock()
+	k.delegatesLock.Lock()
+	defer k.delegatesLock.Unlock()
 
 	delegate := k.delegates[clusterName]
diff --git a/test/e2e/conformance/validatingadmissionpolicy_test.go b/test/e2e/conformance/validatingadmissionpolicy_test.go
index 1cdc66a4bc3..646843b18e9 100644
--- a/test/e2e/conformance/validatingadmissionpolicy_test.go
+++ b/test/e2e/conformance/validatingadmissionpolicy_test.go
@@ -18,6 +18,7 @@ package conformance
 
 import (
 	"context"
+	"fmt"
 	"strings"
 	"testing"
 	"time"
@@ -26,6 +27,7 @@ import (
 
 	v1 "k8s.io/api/admission/v1"
 	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
+	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
 	"k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/runtime"
@@ -35,8 +37,12 @@ import (
 	kcpapiextensionsclientset "github.com/kcp-dev/client-go/apiextensions/client"
 	kcpkubernetesclientset "github.com/kcp-dev/client-go/kubernetes"
 	"github.com/kcp-dev/logicalcluster/v3"
+	apisv1alpha1 "github.com/kcp-dev/sdk/apis/apis/v1alpha1"
+	apisv1alpha2 "github.com/kcp-dev/sdk/apis/apis/v1alpha2"
 	"github.com/kcp-dev/sdk/apis/core"
+	kcpclientset "github.com/kcp-dev/sdk/client/clientset/versioned/cluster"
 	kcptesting "github.com/kcp-dev/sdk/testing"
+	kcptestinghelpers "github.com/kcp-dev/sdk/testing/helpers"
 
 	"github.com/kcp-dev/kcp/test/e2e/fixtures/wildwest"
 	wildwestv1alpha1 "github.com/kcp-dev/kcp/test/e2e/fixtures/wildwest/apis/wildwest/v1alpha1"
@@ -178,3 +184,224 @@ func TestValidatingAdmissionPolicyInWorkspace(t *testing.T) {
 	_, err = cowbyClusterClient.Cluster(ws2Path).WildwestV1alpha1().Cowboys("default").Create(ctx, &badCowboy, metav1.CreateOptions{})
 	require.NoError(t, err)
 }
+
+func TestValidatingAdmissionPolicyCrossWorkspaceAPIBinding(t *testing.T) {
+	t.Parallel()
+	framework.Suite(t, "control-plane")
+
+	server := kcptesting.SharedKcpServer(t)
+
+	ctx, cancelFunc := context.WithCancel(context.Background())
+	t.Cleanup(cancelFunc)
+
+	cfg := server.BaseConfig(t)
+
+	scheme := runtime.NewScheme()
+	err := admissionregistrationv1.AddToScheme(scheme)
+	require.NoError(t, err, "failed to add admission registration v1 scheme")
+	err = v1.AddToScheme(scheme)
+	require.NoError(t, err, "failed to add admission v1 scheme")
+	err = wildwestv1alpha1.AddToScheme(scheme)
+	require.NoError(t, err, "failed to add cowboy v1alpha1 to scheme")
+
+	orgPath, _ := kcptesting.NewWorkspaceFixture(t, server, core.RootCluster.Path(), kcptesting.WithType(core.RootCluster.Path(), "organization"))
+	sourcePath, _ := kcptesting.NewWorkspaceFixture(t, server, orgPath)
+	targetPath, _ := kcptesting.NewWorkspaceFixture(t, server, orgPath)
+
+	kcpClusterClient, err := kcpclientset.NewForConfig(cfg)
+	require.NoError(t, err, "failed to construct kcp cluster client for server")
+
+	kubeClusterClient, err := kcpkubernetesclientset.NewForConfig(cfg)
+	require.NoError(t, err, "failed to construct client for server")
+
+	cowbyClusterClient, err := wildwestclientset.NewForConfig(cfg)
+	require.NoError(t, err, "failed to construct cowboy client for server")
+
+	t.Logf("Install a cowboys APIResourceSchema into workspace %q", sourcePath)
+
+	apiResourceSchema := &apisv1alpha1.APIResourceSchema{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "today.cowboys.wildwest.dev",
+		},
+		Spec: apisv1alpha1.APIResourceSchemaSpec{
+			Group: "wildwest.dev",
+			Names: apiextensionsv1.CustomResourceDefinitionNames{
+				Kind:     "Cowboy",
+				ListKind: "CowboyList",
+				Plural:   "cowboys",
+				Singular: "cowboy",
+			},
+			Scope: "Namespaced",
+			Versions: []apisv1alpha1.APIResourceVersion{
+				{
+					Name:    "v1alpha1",
+					Served:  true,
+					Storage: true,
+					Schema: runtime.RawExtension{
+						Raw: []byte(`{
+							"description": "Cowboy is part of the wild west",
+							"properties": {
+								"apiVersion": {"type": "string"},
+								"kind": {"type": "string"},
+								"metadata": {"type": "object"},
+								"spec": {
+									"type": "object",
+									"properties": {
+										"intent": {"type": "string"}
+									}
+								},
+								"status": {
+									"type": "object",
+									"properties": {
+										"result": {"type": "string"}
+									}
+								}
+							},
+							"type": "object"
+						}`),
+					},
+					Subresources: apiextensionsv1.CustomResourceSubresources{
+						Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
+					},
+				},
+			},
+		},
+	}
+	_, err = kcpClusterClient.Cluster(sourcePath).ApisV1alpha1().APIResourceSchemas().Create(ctx, apiResourceSchema, metav1.CreateOptions{})
+	require.NoError(t, err)
+
+	t.Logf("Create an APIExport for it")
+	cowboysAPIExport := &apisv1alpha2.APIExport{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "cowboybebop",
+		},
+		Spec: apisv1alpha2.APIExportSpec{
+			Resources: []apisv1alpha2.ResourceSchema{
+				{
+					Name:   "cowboys",
+					Group:  "wildwest.dev",
+					Schema: "today.cowboys.wildwest.dev",
+					Storage: apisv1alpha2.ResourceSchemaStorage{
+						CRD: &apisv1alpha2.ResourceSchemaStorageCRD{},
+					},
+				},
+			},
+		},
+	}
+	_, err = kcpClusterClient.Cluster(sourcePath).ApisV1alpha2().APIExports().Create(ctx, cowboysAPIExport, metav1.CreateOptions{})
+	require.NoError(t, err)
+
+	t.Logf("Create an APIBinding in workspace %q that points to the cowboybebop export", targetPath)
+	apiBinding := &apisv1alpha2.APIBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			Name: "cowboys",
+		},
+		Spec: apisv1alpha2.APIBindingSpec{
+			Reference: apisv1alpha2.BindingReference{
+				Export: &apisv1alpha2.ExportBindingReference{
+					Path: sourcePath.String(),
+					Name: cowboysAPIExport.Name,
+				},
+			},
+		},
+	}
+
+	kcptestinghelpers.Eventually(t, func() (bool, string) {
+		_, err := kcpClusterClient.Cluster(targetPath).ApisV1alpha2().APIBindings().Create(ctx, apiBinding, metav1.CreateOptions{})
+		return err == nil, fmt.Sprintf("Error creating APIBinding: %v", err)
+	}, wait.ForeverTestTimeout, time.Millisecond*100)
+
+	t.Logf("Ensure cowboys are served in target workspace")
+	require.Eventually(t, func() bool {
+		_, err := cowbyClusterClient.Cluster(targetPath).WildwestV1alpha1().Cowboys("default").List(ctx, metav1.ListOptions{})
+		return err == nil
+	}, wait.ForeverTestTimeout, 100*time.Millisecond)
+
+	t.Logf("Installing validating admission policy into the source workspace")
+	policy := &admissionregistrationv1.ValidatingAdmissionPolicy{
+		ObjectMeta: metav1.ObjectMeta{
+			GenerateName: "policy-",
+		},
+		Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
+			FailurePolicy: ptr.To(admissionregistrationv1.Fail),
+			MatchConstraints: &admissionregistrationv1.MatchResources{
+				ResourceRules: []admissionregistrationv1.NamedRuleWithOperations{
+					{
+						RuleWithOperations: admissionregistrationv1.RuleWithOperations{
+							Operations: []admissionregistrationv1.OperationType{
+								admissionregistrationv1.Create,
+								admissionregistrationv1.Update,
+							},
+							Rule: admissionregistrationv1.Rule{
+								APIGroups:   []string{wildwestv1alpha1.SchemeGroupVersion.Group},
+								APIVersions: []string{wildwestv1alpha1.SchemeGroupVersion.Version},
+								Resources:   []string{"cowboys"},
+							},
+						},
+					},
+				},
+			},
+			Validations: []admissionregistrationv1.Validation{{
+				Expression: "object.spec.intent != 'bad'",
+			}},
+		},
+	}
+	policy, err = kubeClusterClient.Cluster(sourcePath).AdmissionregistrationV1().ValidatingAdmissionPolicies().Create(ctx, policy, metav1.CreateOptions{})
+	require.NoError(t, err, "failed to create ValidatingAdmissionPolicy")
+	require.Eventually(t, func() bool {
+		p, err := kubeClusterClient.Cluster(sourcePath).AdmissionregistrationV1().ValidatingAdmissionPolicies().Get(ctx, policy.Name, metav1.GetOptions{})
+		if err != nil {
+			return false
+		}
+
+		return p.Generation == p.Status.ObservedGeneration && p.Status.TypeChecking != nil && len(p.Status.TypeChecking.ExpressionWarnings) == 0
+	}, wait.ForeverTestTimeout, 100*time.Millisecond)
+
+	newCowboy := func(intent string) *wildwestv1alpha1.Cowboy {
+		return &wildwestv1alpha1.Cowboy{
+			ObjectMeta: metav1.ObjectMeta{
+				GenerateName: "cowboy-",
+			},
+			Spec: wildwestv1alpha1.CowboySpec{
+				Intent: intent,
+			},
+		}
+	}
+
+	t.Logf("Verifying that creating bad cowboy resource in target workspace succeeds before binding; the policy is inactive without a binding")
+	_, err = cowbyClusterClient.Cluster(targetPath).WildwestV1alpha1().Cowboys("default").Create(ctx, newCowboy("bad"), metav1.CreateOptions{})
+	require.NoError(t, err)
+
+	t.Logf("Installing validating admission policy binding into the source workspace")
+	binding := &admissionregistrationv1.ValidatingAdmissionPolicyBinding{
+		ObjectMeta: metav1.ObjectMeta{
+			GenerateName: "binding-",
+		},
+		Spec: admissionregistrationv1.ValidatingAdmissionPolicyBindingSpec{
+			PolicyName:        policy.Name,
+			ValidationActions: []admissionregistrationv1.ValidationAction{admissionregistrationv1.Deny},
+		},
+	}
+
+	_, err = kubeClusterClient.Cluster(sourcePath).AdmissionregistrationV1().ValidatingAdmissionPolicyBindings().Create(ctx, binding, metav1.CreateOptions{})
+	require.NoError(t, err, "failed to create ValidatingAdmissionPolicyBinding")
+
+	t.Logf("Verifying that creating bad cowboy resource in target workspace is rejected by policy in source workspace")
+	require.Eventually(t, func() bool {
+		_, err := cowbyClusterClient.Cluster(targetPath).WildwestV1alpha1().Cowboys("default").Create(ctx, newCowboy("bad"), metav1.CreateOptions{})
+		if err != nil {
+			if errors.IsInvalid(err) {
+				t.Logf("Error: %v", err)
+				if strings.Contains(err.Error(), "failed expression: object.spec.intent != 'bad'") {
+					return true
+				}
+			}
+			t.Logf("Unexpected error when trying to create bad cowboy: %s", err)
+		}
+		return false
+	}, wait.ForeverTestTimeout, 1*time.Second)
+
+	t.Logf("Verifying that creating good cowboy resource in target workspace succeeds")
+	_, err = cowbyClusterClient.Cluster(targetPath).WildwestV1alpha1().Cowboys("default").Create(ctx, newCowboy("good"), metav1.CreateOptions{})
+	require.NoError(t, err)
+}
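For completeness, the consumer-side behavior the test asserts can be sketched as follows. The `expectPolicyDenial` helper and the `root:consumer` path are hypothetical, and the clientset is assumed to be the wildwest test fixture's cluster-aware client from the test above; the request is made in the consumer workspace, while the plugin resolves the APIBinding's source workspace and evaluates the policies found there.

```go
package consumer

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/kcp-dev/logicalcluster/v3"

	wildwestv1alpha1 "github.com/kcp-dev/kcp/test/e2e/fixtures/wildwest/apis/wildwest/v1alpha1"
	wildwestclientset "github.com/kcp-dev/kcp/test/e2e/fixtures/wildwest/client/clientset/versioned/cluster"
)

// expectPolicyDenial creates a "bad" cowboy in the consumer workspace and
// expects the provider workspace's ValidatingAdmissionPolicy to reject it.
func expectPolicyDenial(ctx context.Context, client wildwestclientset.ClusterInterface) error {
	consumerPath := logicalcluster.NewPath("root:consumer") // hypothetical path

	cowboy := &wildwestv1alpha1.Cowboy{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "cowboy-"},
		Spec:       wildwestv1alpha1.CowboySpec{Intent: "bad"},
	}

	// The policy denial surfaces as an Invalid error on the create call.
	_, err := client.Cluster(consumerPath).WildwestV1alpha1().Cowboys("default").Create(ctx, cowboy, metav1.CreateOptions{})
	if !errors.IsInvalid(err) {
		return fmt.Errorf("expected the policy to deny the request, got: %v", err)
	}
	return nil
}
```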