```
 ██████╗██╗  ██╗ █████╗ ██████╗
██╔════╝██║ ██╔╝██╔══██╗██╔══██╗
██║     █████╔╝ ███████║██║  ██║
██║     ██╔═██╗ ██╔══██║██║  ██║
╚██████╗██║  ██╗██║  ██║██████╔╝
 ╚═════╝╚═╝  ╚═╝╚═╝  ╚═╝╚═════╝
```

Disclaimer: These practice questions are designed as a brush-up during your preparation! The goal is to ensure you don't encounter similar tasks for the first time during your actual exam.
I am excited to see these questions being used to help the community! These practice scenarios have been integrated into the following interactive platforms for a more realistic exam experience:
| Integration | Source Repository | Author |
|---|---|---|
| 🐸 ckad-dojo (Simulation 4) | TiPunchLabs/ckad-dojo | @xgueret |
Note: The `ckad-dojo` integration (Dojo Kappa) provides a live terminal, a 120-minute timer, and automated scoring to simulate the actual exam environment.
- Question 1 – Create Secret from Hardcoded Variables
- Question 2 – Create CronJob with Schedule and History Limits
- Question 3 – Create ServiceAccount, Role, and RoleBinding from Logs Error
- Question 4 – Fix Broken Pod with Correct ServiceAccount
- Question 5 – Build Container Image with Podman and Save as Tarball
- Question 6 – Create Canary Deployment with Manual Traffic Split
- Question 7 – Fix NetworkPolicy by Updating Pod Labels
- Question 8 – Fix Broken Deployment YAML
- Question 9 – Perform Rolling Update and Rollback
- Question 10 – Add Readiness Probe to Deployment
- Question 11 – Configure Pod and Container Security Context
- Question 12 – Fix Service Selector
- Question 13 – Create NodePort Service
- Question 14 – Create Ingress Resource
- Question 15 – Fix Ingress PathType
- Question 16 – Add Resource Requests and Limits to Pod
## Question 1 – Create Secret from Hardcoded Variables

In namespace `default`, Deployment `api-server` exists with hard-coded environment variables:
```
DB_USER=admin
DB_PASS=Secret123!
```
Your task:
- Create a Secret named `db-credentials` in namespace `default` containing these credentials
- Update Deployment `api-server` to use the Secret via `valueFrom.secretKeyRef`
- Do not change the Deployment name or namespace
Step 1 – Create the Secret
```bash
kubectl create secret generic db-credentials \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASS='Secret123!' \
  -n default
```

Step 2 – Update Deployment to use Secret

```bash
kubectl edit deploy api-server -n default
```

Replace the hardcoded environment variables:
```yaml
env:
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: DB_USER
- name: DB_PASS
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: DB_PASS
```

Save and exit. Verify the rollout:

```bash
kubectl rollout status deploy api-server -n default
```

Docs
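As an optional shortcut, `kubectl set env` can do the same swap without opening the editor; a sketch, assuming the Secret keys match the existing variable names:

```bash
# Remove the hard-coded variables (a trailing "-" unsets a variable)
kubectl set env deploy/api-server DB_USER- DB_PASS- -n default

# Re-add them from the Secret; each key becomes an env var backed by secretKeyRef
kubectl set env deploy/api-server --from=secret/db-credentials -n default
```

Double-check the resulting env section with `kubectl get deploy api-server -o yaml` before moving on.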
## Question 2 – Create CronJob with Schedule and History Limits

Create a CronJob named `backup-job` in namespace `default` with the following specifications:
- Schedule: Run every 30 minutes (`*/30 * * * *`)
- Image: `busybox:latest`
- Container command: `echo "Backup completed"`
- Set `successfulJobsHistoryLimit: 3`
- Set `failedJobsHistoryLimit: 2`
- Set `activeDeadlineSeconds: 300`
- Use `restartPolicy: Never`
Tip: Use kubectl explain cronjob.spec to find the correct field names.
```bash
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
  namespace: default
spec:
  schedule: "*/30 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      activeDeadlineSeconds: 300
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: backup
            image: busybox:latest
            command: ["/bin/sh", "-c"]
            args: ["echo Backup completed"]
EOF
```

Verify the CronJob:

```bash
kubectl get cronjob backup-job
kubectl describe cronjob backup-job
```

To test immediately, create a Job from the CronJob:

```bash
kubectl create job backup-job-test --from=cronjob/backup-job
kubectl logs job/backup-job-test
```

Docs
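If you prefer to start imperatively, a sketch: generate the CronJob skeleton with `kubectl create cronjob`, then add the fields the command has no flags for:

```bash
# Generate the CronJob with schedule, image, and command
kubectl create cronjob backup-job \
  --image=busybox:latest \
  --schedule="*/30 * * * *" \
  -- /bin/sh -c 'echo "Backup completed"'

# Add successfulJobsHistoryLimit, failedJobsHistoryLimit, activeDeadlineSeconds,
# and set restartPolicy: Never in the editor
kubectl edit cronjob backup-job
```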
## Question 3 – Create ServiceAccount, Role, and RoleBinding from Logs Error

In namespace `audit`, Pod `log-collector` exists but is failing with authorization errors.
Check the Pod logs to identify what permissions are needed:
```bash
kubectl logs -n audit log-collector
```

The logs show: `User "system:serviceaccount:audit:default" cannot list pods in the namespace "audit"`
Your task:
- Create a ServiceAccount named `log-sa` in namespace `audit`
- Create a Role `log-role` that grants `get`, `list`, and `watch` on resource `pods`
- Create a RoleBinding `log-rb` binding `log-role` to `log-sa`
- Update Pod `log-collector` to use ServiceAccount `log-sa`
Step 1 – Create ServiceAccount
```bash
kubectl create sa log-sa -n audit
```

Step 2 – Create Role
```bash
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-role
  namespace: audit
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
```

Step 3 – Create RoleBinding

```bash
kubectl create rolebinding log-rb \
  --role=log-role \
  --serviceaccount=audit:log-sa \
  -n audit
```

Step 4 – Update Pod to use ServiceAccount
A Pod's `serviceAccountName` is immutable, so delete and recreate the Pod:
```bash
kubectl get pod log-collector -n audit -o yaml > /tmp/log-collector.yaml
```

Edit the file to change:
- `spec.serviceAccountName: log-sa`
- Remove `spec.serviceAccount` if present
Then:
```bash
kubectl delete pod log-collector -n audit
kubectl apply -f /tmp/log-collector.yaml
```

Or use patch if the Pod allows it (may fail due to immutability):

```bash
kubectl patch pod log-collector -n audit \
  -p '{"spec":{"serviceAccountName":"log-sa"}}'
```

If patch fails, delete and recreate.
Docs
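An optional sanity check before recreating the Pod is to impersonate the new ServiceAccount and ask the API server directly:

```bash
# Should print "yes" once the Role and RoleBinding are in place
kubectl auth can-i list pods -n audit \
  --as=system:serviceaccount:audit:log-sa
```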
## Question 4 – Fix Broken Pod with Correct ServiceAccount

In namespace `monitoring`, Pod `metrics-pod` is using ServiceAccount `wrong-sa` and receiving authorization errors.
Multiple ServiceAccounts, Roles, and RoleBindings already exist in the namespace:
- ServiceAccounts: `monitor-sa`, `wrong-sa`, `admin-sa`
- Roles: `metrics-reader`, `full-access`, `view-only`
- RoleBindings: `monitor-binding`, `admin-binding`
Your task:
- Identify which ServiceAccount/Role/RoleBinding combination has the correct permissions
- Update Pod `metrics-pod` to use the correct ServiceAccount
- Verify the Pod stops showing authorization errors
Hint: Check existing RoleBindings to see which ServiceAccount is bound to which Role.
Step 1 – Investigate existing RBAC resources
```bash
kubectl get rolebindings -n monitoring -o yaml
kubectl get roles -n monitoring -o yaml
```

Look for a RoleBinding that binds a ServiceAccount to a Role with appropriate permissions. For example:

```bash
kubectl describe rolebinding monitor-binding -n monitoring
kubectl describe role metrics-reader -n monitoring
```

If `monitor-binding` binds `monitor-sa` to `metrics-reader`, and `metrics-reader` has the needed permissions, use `monitor-sa`.
Step 2 – Update Pod
Delete and recreate with correct ServiceAccount:
```bash
kubectl get pod metrics-pod -n monitoring -o yaml > /tmp/metrics-pod.yaml
# Edit to change serviceAccountName to monitor-sa
kubectl delete pod metrics-pod -n monitoring
kubectl apply -f /tmp/metrics-pod.yaml
```

Step 3 – Verify

```bash
kubectl logs metrics-pod -n monitoring
# Should no longer show authorization errors
```

Docs
- ServiceAccounts: https://kubernetes.io/docs/concepts/security/service-accounts/
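Impersonation can also shortcut the investigation here; a sketch, assuming `monitor-sa` is the candidate you identified:

```bash
# List everything the ServiceAccount is allowed to do in the namespace
kubectl auth can-i --list -n monitoring \
  --as=system:serviceaccount:monitoring:monitor-sa
```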
## Question 5 – Build Container Image with Podman and Save as Tarball

On the node, directory `/root/app-source` contains a valid Dockerfile.
Your task:
- Build a container image using Podman with name `my-app:1.0`, using `/root/app-source` as the build context
- Save the image as a tarball to `/root/my-app.tar`
Note: The exam environment typically uses Podman, but Docker commands are nearly identical.
Step 1 – Build the image

```bash
cd /root/app-source
podman build -t my-app:1.0 .
```

Verify the image was created:

```bash
podman images | grep my-app
```

Step 2 – Save image as tarball

```bash
podman save -o /root/my-app.tar my-app:1.0
```

Verify the file was created:

```bash
ls -lh /root/my-app.tar
```

Alternative – using Docker

Step 1 – Build the image
```bash
cd /root/app-source
docker build -t my-app:1.0 .
```

Verify the image was created:

```bash
docker images | grep my-app
```

Step 2 – Save image as tarball

```bash
docker save -o /root/my-app.tar my-app:1.0
```

Verify the file was created:

```bash
ls -lh /root/my-app.tar
```

Docs
- Podman: https://docs.podman.io/
- Docker: https://docs.docker.com/
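An optional round-trip check if you want to be confident the tarball is usable (note that it removes the local image before re-loading it):

```bash
# Peek inside the archive without loading it
tar -tf /root/my-app.tar | head

# Remove the local copy, then restore it from the tarball
podman rmi my-app:1.0
podman load -i /root/my-app.tar
podman images | grep my-app
```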
## Question 6 – Create Canary Deployment with Manual Traffic Split

In namespace `default`, the following resources exist:
- Deployment `web-app` with 5 replicas, labels `app=webapp, version=v1`
- Service `web-service` with selector `app=webapp`
Your task:
- Scale Deployment `web-app` to 8 replicas (80% of 10 total)
- Create a new Deployment `web-app-canary` with 2 replicas, labels `app=webapp, version=v2`
- Both Deployments should be selected by `web-service`
- Verify the traffic split using the provided test command (if available)
Note: This is a manual canary pattern where traffic is split based on replica counts.
Step 1 – Scale existing Deployment
```bash
kubectl scale deploy web-app --replicas=8 -n default
```

Step 2 – Export and create canary Deployment
Export the existing Deployment:
```bash
kubectl get deploy web-app -n default -o yaml > /tmp/web-app-canary.yaml
```

Edit the file to change:
- `metadata.name: web-app-canary`
- `spec.replicas: 2`
- `spec.template.metadata.labels.version: v2`
- `spec.selector.matchLabels.version: v2`
- Keep the `app=webapp` label on both selector and template
Apply:
```bash
kubectl apply -f /tmp/web-app-canary.yaml
```

Or create directly:
```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-canary
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      version: v2
  template:
    metadata:
      labels:
        app: webapp
        version: v2
    spec:
      containers:
      - name: web
        image: nginx:latest
EOF
```

Step 3 – Verify Service selects both

```bash
kubectl get endpoints web-service -n default
kubectl get pods -n default -l app=webapp --show-labels
```

Both `version=v1` and `version=v2` pods should appear in endpoints.
Step 4 – Test traffic split (if curl available)
```bash
# Run multiple requests to see distribution
for i in {1..10}; do
  kubectl exec -it <pod-name> -n default -- curl http://web-service
done
```

Docs
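A quick way to sanity-check the intended 80/20 split is to count the Pods behind each version label:

```bash
# 8 stable replicas vs 2 canary replicas behind the same Service
kubectl get pods -n default -l app=webapp,version=v1 --no-headers | wc -l   # expect 8
kubectl get pods -n default -l app=webapp,version=v2 --no-headers | wc -l   # expect 2
```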
## Question 7 – Fix NetworkPolicy by Updating Pod Labels

In namespace `network-demo`, three Pods exist:
- `frontend` with label `role=wrong-frontend`
- `backend` with label `role=wrong-backend`
- `database` with label `role=wrong-db`
Three NetworkPolicies exist:
- `deny-all` (default deny)
- `allow-frontend-to-backend` (allows traffic from `role=frontend` to `role=backend`)
- `allow-backend-to-db` (allows traffic from `role=backend` to `role=db`)
Your task:
Update the Pod labels (do NOT modify NetworkPolicies) to enable the communication chain:
frontend → backend → database
Time Saver Tip: Use kubectl label instead of editing YAML and recreating Pods.
Step 1 – View existing NetworkPolicies
```bash
kubectl get networkpolicies -n network-demo -o yaml
```

Identify the label selectors used in the NetworkPolicies (likely `role=frontend`, `role=backend`, `role=db`).
Step 2 – Update Pod labels
```bash
kubectl label pod frontend -n network-demo role=frontend --overwrite
kubectl label pod backend -n network-demo role=backend --overwrite
kubectl label pod database -n network-demo role=db --overwrite
```

Verify:

```bash
kubectl get pods -n network-demo --show-labels
```

Step 3 – Verify NetworkPolicy rules

```bash
kubectl describe networkpolicy allow-frontend-to-backend -n network-demo
kubectl describe networkpolicy allow-backend-to-db -n network-demo
```

Docs
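If you want to smoke-test the chain, here is a hypothetical check that assumes the backend Pod serves HTTP on port 80 and the frontend image ships `wget`:

```bash
# Resolve the backend Pod IP, then call it from the frontend Pod
BACKEND_IP=$(kubectl get pod backend -n network-demo -o jsonpath='{.status.podIP}')
kubectl exec -n network-demo frontend -- wget -qO- --timeout=2 "http://$BACKEND_IP"
```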
## Question 8 – Fix Broken Deployment YAML

File `/root/broken-deploy.yaml` contains a Deployment manifest that fails to apply.
The file has the following issues:
- Uses deprecated API version
- Missing required `selector` field
- Selector doesn't match template labels
Your task:
- Fix the YAML file to use `apiVersion: apps/v1`
- Add a proper `selector` field that matches the template labels
- Apply the fixed manifest and ensure the Deployment is running
Step 1 – View the broken file
```bash
cat /root/broken-deploy.yaml
```

You'll likely see something like:
```yaml
apiVersion: extensions/v1beta1   # Deprecated
kind: Deployment
metadata:
  name: broken-app
spec:
  replicas: 2
  template:                      # Missing selector
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: web
        image: nginx
```

Step 2 – Fix the file
```bash
vi /root/broken-deploy.yaml
```

Update to:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broken-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: web
        image: nginx
```

Step 3 – Apply and verify

```bash
kubectl apply -f /root/broken-deploy.yaml
kubectl get deploy broken-app
kubectl rollout status deploy broken-app
kubectl get pods -l app=myapp
```

Docs
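Optionally, a server-side dry run lets the API server validate the fixed manifest before you actually create anything:

```bash
# Fails with the same errors a real apply would, but creates nothing
kubectl apply -f /root/broken-deploy.yaml --dry-run=server
```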
## Question 9 – Perform Rolling Update and Rollback

In namespace `default`, Deployment `app-v1` exists with image `nginx:1.20`.
Your task:
- Update the Deployment to use image `nginx:1.25`
- Verify the rolling update completes successfully
- Roll back to the previous revision
- Verify the rollback completed
Step 1 – Update the image
```bash
kubectl set image deploy/app-v1 web=nginx:1.25 -n default
```

Or use edit:

```bash
kubectl edit deploy app-v1 -n default
# Change image to nginx:1.25
```

Step 2 – Monitor the rollout

```bash
kubectl rollout status deploy app-v1 -n default
kubectl get pods -n default -l app=app-v1 -w
```

Step 3 – View rollout history

```bash
kubectl rollout history deploy app-v1 -n default
```

Step 4 – Rollback to previous revision

```bash
kubectl rollout undo deploy app-v1 -n default
```

Or roll back to a specific revision:

```bash
kubectl rollout undo deploy app-v1 --to-revision=1 -n default
```

Step 5 – Verify rollback

```bash
kubectl rollout status deploy app-v1 -n default
kubectl get deploy app-v1 -o jsonpath='{.spec.template.spec.containers[0].image}'
# Should show nginx:1.20
```

Docs
- Rolling Updates: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
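If you are unsure which revision to roll back to, inspecting a single revision shows the Pod template (including the image) it recorded; an optional check:

```bash
# Replace 1 with the revision number you care about
kubectl rollout history deploy app-v1 -n default --revision=1
```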
## Question 10 – Add Readiness Probe to Deployment

In namespace `default`, Deployment `api-deploy` exists with a container listening on port 8080.
Your task: Add a readiness probe to the Deployment with:
- HTTP GET on path `/ready`
- Port `8080`
- `initialDelaySeconds: 5`
- `periodSeconds: 10`
Ensure the Deployment rolls out successfully.
Step 1 – Edit the Deployment
```bash
kubectl edit deploy api-deploy -n default
```

Add under the container spec:
```yaml
spec:
  template:
    spec:
      containers:
      - name: api
        image: nginx
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Save and exit.
Step 2 – Verify rollout
```bash
kubectl rollout status deploy api-deploy -n default
kubectl describe deploy api-deploy -n default
```

Step 3 – Check probe status

```bash
kubectl get pods -n default -l app=api-deploy
kubectl describe pod <pod-name> -n default
# Look for Readiness in the Conditions section
```

Docs
- Readiness Probes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
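An optional one-liner to confirm the probe actually landed in the Deployment spec:

```bash
# Prints the readinessProbe block of the first container (empty if missing)
kubectl get deploy api-deploy -n default \
  -o jsonpath='{.spec.template.spec.containers[0].readinessProbe}'
```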
## Question 11 – Configure Pod and Container Security Context

In namespace `default`, Deployment `secure-app` exists without any security context.
Your task:
- Set Pod-level `runAsUser: 1000`
- Add container-level capability `NET_ADMIN` to the container named `app`
Note: Capabilities are set at the container level, not the Pod level.
Step 1 – Edit the Deployment
```bash
kubectl edit deploy secure-app -n default
```

Add security context at Pod level and container level:
```yaml
spec:
  template:
    spec:
      securityContext:        # Pod-level
        runAsUser: 1000
      containers:
      - name: app
        image: nginx
        securityContext:      # Container-level
          capabilities:
            add:
            - NET_ADMIN
```

Save and exit.
Step 2 – Verify rollout
```bash
kubectl rollout status deploy secure-app -n default
```

Step 3 – Verify security context

```bash
kubectl get pod -n default -l app=secure-app -o yaml | grep -A 10 securityContext
```

Or describe a pod:

```bash
kubectl describe pod <pod-name> -n default
```

Docs
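If the Pod starts successfully, one more optional check is the effective UID inside the container (this assumes the image provides the `id` binary):

```bash
# Should report uid=1000
kubectl exec -n default deploy/secure-app -- id
```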
## Question 12 – Fix Service Selector

In namespace `default`, Deployment `web-app` exists with Pods labeled `app=webapp, tier=frontend`.
Service web-svc exists but has incorrect selector app=wrongapp.
Your task:
Update Service web-svc to correctly select Pods from Deployment web-app.
Step 1 – Check current state
```bash
kubectl get pods -n default --show-labels
kubectl get svc web-svc -n default -o yaml
kubectl get endpoints web-svc -n default   # Should be empty or wrong
```

Step 2 – Update Service selector
```bash
kubectl edit svc web-svc -n default
```

Change:

```yaml
spec:
  selector:
    app: wrongapp
```

To:

```yaml
spec:
  selector:
    app: webapp
```

Save and exit.

Step 3 – Verify endpoints

```bash
kubectl get endpoints web-svc -n default
# Should now show IPs of web-app pods
kubectl describe svc web-svc -n default
```

Docs
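As an alternative to `kubectl edit`, a one-line patch works too; a sketch, with the same endpoint check afterwards:

```bash
# Strategic-merge patch: replaces the value of the "app" selector key
kubectl patch svc web-svc -n default \
  -p '{"spec":{"selector":{"app":"webapp"}}}'
```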
## Question 13 – Create NodePort Service

In namespace `default`, Deployment `api-server` exists with Pods labeled `app=api` and container port `9090`.
Your task:
Create a Service named api-nodeport that:
- Type: `NodePort`
- Selects Pods with label `app=api`
- Exposes Service port `80` mapping to target port `9090`
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: api-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 9090
    protocol: TCP
EOF
```

Verify:

```bash
kubectl get svc api-nodeport -n default
kubectl describe svc api-nodeport -n default
# Note the assigned NodePort (e.g., 30080)
```

Docs
- NodePort Services: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
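To exercise the Service through a node, this sketch assumes `curl` is available and the node's InternalIP is reachable from where you run it:

```bash
# Look up the assigned nodePort and a node address, then hit the Service
NODE_PORT=$(kubectl get svc api-nodeport -n default -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl "http://$NODE_IP:$NODE_PORT"
```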
## Question 14 – Create Ingress Resource

In namespace `default`, the following resources exist:
- Deployment `web-deploy` with Pods labeled `app=web`
- Service `web-svc` with selector `app=web` on port `8080`
Your task:
Create an Ingress named web-ingress that:
- Routes host `web.example.com`
- Path `/` with `pathType: Prefix`
- Backend Service `web-svc` on port `8080`
- Uses API version `networking.k8s.io/v1`
```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 8080
EOF
```

Verify:

```bash
kubectl get ingress web-ingress -n default
kubectl describe ingress web-ingress -n default
```

Docs
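End-to-end testing only works if an ingress controller is installed and has published an address; a hypothetical check under that assumption:

```bash
# Send a request with the expected Host header to the Ingress address
INGRESS_IP=$(kubectl get ingress web-ingress -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -H "Host: web.example.com" "http://$INGRESS_IP/"
```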
## Question 15 – Fix Ingress PathType

File `/root/fix-ingress.yaml` contains an Ingress manifest that fails to apply due to an invalid `pathType` value.
Your task:
- Apply the file and note the error
- Fix the `pathType` to a valid value (`Prefix`, `Exact`, or `ImplementationSpecific`)
- Ensure the Ingress routes path `/api` to Service `api-svc` on port `8080`
- Apply the fixed manifest successfully
Step 1 – Try to apply (will fail)
```bash
kubectl apply -f /root/fix-ingress.yaml
# Error: pathType: Unsupported value: "InvalidType"
```

Step 2 – View and fix the file

```bash
cat /root/fix-ingress.yaml
vi /root/fix-ingress.yaml
```

Change the invalid `pathType` (e.g., `InvalidType`) to a valid value:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix   # Changed from InvalidType
        backend:
          service:
            name: api-svc
            port:
              number: 8080
```

Step 3 – Apply the fixed manifest

```bash
kubectl apply -f /root/fix-ingress.yaml
kubectl get ingress api-ingress -n default
```

Docs
- Ingress Path Types: https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
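If you forget the valid values during the exam, the field documentation lists them; an optional lookup:

```bash
# Shows Prefix, Exact, and ImplementationSpecific in the field description
kubectl explain ingress.spec.rules.http.paths.pathType
```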
## Question 16 – Add Resource Requests and Limits to Pod

In namespace `prod`, a ResourceQuota exists that sets resource limits for the namespace.
Your task:
- Check the ResourceQuota for namespace `prod` to see the limits set
- Create a Pod named `resource-pod` with:
  - Image: `nginx:latest`
  - CPU and memory limits set to half of the limits in the ResourceQuota
  - Appropriate requests (at least `100m` CPU and `128Mi` memory)
Step 1 – Check the ResourceQuota
```bash
kubectl get quota -n prod
kubectl describe quota <quota-name> -n prod
```

For example, if the quota shows:

- `limits.cpu: "2"`
- `limits.memory: "4Gi"`

Then half would be:

- CPU limit: `1` (or `1000m`)
- Memory limit: `2Gi`
Step 2 – Create the Pod with half the quota limits
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-pod
  namespace: prod
spec:
  containers:
  - name: web
    image: nginx:latest
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "1"
        memory: "2Gi"
EOF
```

Note: Adjust the limit values (`cpu: "1"`, `memory: "2Gi"`) based on what you found in the ResourceQuota. If the quota shows `limits.cpu: "4"`, use `cpu: "2"`. If the quota shows `limits.memory: "8Gi"`, use `memory: "4Gi"`.
Docs
- ResourceQuota: https://kubernetes.io/docs/concepts/policy/resource-quotas/
- Resource Management: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
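An optional jsonpath one-liner to pull just the hard limits out of every quota in the namespace (the output is a raw map, but it is quick to read):

```bash
kubectl get quota -n prod \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.hard}{"\n"}{end}'
```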
Key Tips for the Exam:
- Use `kubectl explain <resource>.<field>` extensively
- Check Pod logs for error hints
- Use `kubectl label` for quick label fixes
- Export YAML, edit, and reapply for complex changes
- Verify changes with `kubectl get`, `kubectl describe`, and `kubectl logs`
- Practice time management: flag difficult questions and move on
Good luck with your CKAD exam!
Link to my Medium post: CKAD 2026 — What to Expect & How I Passed
If you found this helpful for your exam, star the repo! ⭐