This Certified Kubernetes Administrator (CKA) Certification Guide provides a comprehensive, hands-on approach to mastering Kubernetes, with practical labs and real-world examples. Tailored specifically for the CKA exam, it covers key topics such as cluster setup, networking, and security, ensuring you're fully prepared. Stay current with the latest Kubernetes features and updates, and gain the skills needed to pass the CKA certification with confidence.
This guide is part of our blog post Pass the CKA Certification Exam: The Complete Study Guide.
If you are using this repo for guidance, please give it a star. Thanks a lot!
| CKA Exam Details | Information |
|---|---|
| Exam Type | Performance-based (not multiple choice) |
| Exam Duration | 2 hours |
| Pass Percentage | 66% (one free retake included) |
| CKA Exam Kubernetes Version | Kubernetes v1.33 |
| CKA Validity | 2 Years |
| Exam Cost | $445 USD (look for coupons online; discounts of up to 30% are common, and up to 50% during Cyber Monday sales) |
At the time of writing, the latest Kubernetes version is 1.34.1 (released 2025-09-09): check the releases page.
The current CKA exam is based on Kubernetes v1.33 (codenamed "Octarine"). Kubernetes v1.33.5 (released 2025-09-09) brings a mix of stability, new features, and important deprecations that are directly relevant to the CKA exam. See the Official Changes in Kubernetes 1.33.
As I've been preparing for the CKA certification, I've found a few of these features particularly useful, and I'm excited to share them with you.
| Feature | Description |
|---|---|
| In-Place Pod Vertical Scaling | Allows you to adjust CPU and memory of a running pod without needing to restart it. Great for stateful applications and long-running workloads to reduce downtime. |
| Sidecar Containers | Sidecar containers are now stable, ensuring they start before your main application container and run throughout the pod's lifecycle. Ideal for logging, monitoring, and proxying tasks. |
| OCI Artifact and Image Volumes | Allows you to mount OCI artifacts and container images as read-only volumes inside a pod. Useful for workflows like sidecar injection and CLI tooling. |
| User Namespaces for Pods | Provides security isolation by keeping container root users isolated from the host system, making Kubernetes more secure, especially in multi-tenant or production environments. |
| Job Success Policies | Adds flexibility to batch workloads by allowing you to define custom conditions for when a job is considered successful, improving workflow control. |
| Ordered Namespace Deletion | Ensures a systematic cleanup process when deleting namespaces, making it more predictable and reducing the risk of errors in large environments. |
| Service Account Token Configuration Enhancements | Enhances flexibility for multi-tenant environments, allowing you to configure and manage service account tokens more dynamically. |
| Extended Loopback Client Certificate Validity | Extends the validity of loopback client certificates to 14 months, reducing the administrative burden of frequent certificate rotations and helping maintain stability in control plane operations. |
Personally, I'm particularly excited about the in-place scaling and user namespaces features. These improvements make Kubernetes administration not only more efficient but also more secure. Whether you're dealing with production environments or scaling applications, these updates are a huge step forward. As Kubernetes evolves, it continues to be a powerful tool for handling complex, real-world workloads with ease and security.
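Since sidecar containers are now stable, it helps to see what the pattern looks like in a manifest. Below is a minimal sketch (names and images are illustrative): a native sidecar is declared as an init container with restartPolicy: Always, so it starts before the app container and keeps running for the Pod's lifetime.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar            # hypothetical example Pod
spec:
  initContainers:
  - name: log-shipper
    image: busybox:1.36
    restartPolicy: Always           # marks this init container as a native sidecar
    command: ["sh", "-c", "tail -F /var/log/app/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  containers:
  - name: app
    image: nginx:1.27
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
```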
- CKA Exam Syllabus (Kubernetes 1.33 - Oct 2025)
- CKA Exam Questions And Answers
- Additional Resources
- Practice
| Topic | Concepts | Weightage |
|---|---|---|
| 1. Cluster Architecture, Installation & Configuration | 1. Manage role-based access control (RBAC) 2. Prepare underlying infrastructure for installing a Kubernetes cluster 3. Create and manage Kubernetes clusters using kubeadm 4. Manage the lifecycle of Kubernetes clusters 5. Implement and configure a highly-available control plane 6. Use Helm and Kustomize to install cluster components 7. Understand extension interfaces (CNI, CSI, CRI, etc.) 8. Understand CRDs, install and configure operators | 25% |
| 2. Workloads & Scheduling | 1. Understand application deployments and how to perform rolling update and rollbacks 2. Use ConfigMaps and Secrets to configure applications 3. Configure workload autoscaling 4. Understand the primitives used to create robust, self-healing, application deployments 5. Configure Pod admission and scheduling (limits, node affinity, etc.) | 15% |
| 3. Services & Networking | 1. Understand connectivity between Pods 2. Define and enforce Network Policies 3. Use ClusterIP, NodePort, LoadBalancer service types and endpoints 4. Use the Gateway API to manage Ingress traffic 5. Know how to use Ingress controllers and Ingress resources 6. Understand and use CoreDNS | 20% |
| 4. Storage | 1. Implement storage classes and dynamic volume provisioning 2. Configure volume types, access modes and reclaim policies 3. Manage persistent volumes and persistent volume claims | 10% |
| 5. Troubleshooting | 1. Troubleshoot clusters and nodes 2. Troubleshoot cluster components 3. Monitor cluster and application resource usage 4. Manage and evaluate container output streams 5. Troubleshoot services and networking | 30% |
This section tests your ability to build, configure, and manage Kubernetes clusters using kubeadm and core Kubernetes components.
RBAC is essential for managing access to cluster resources. Here's how to create a role and bind it to a service account:
kubectl create role cm-writer --verb=create --resource=configmaps -n dev
kubectl create rolebinding writer-bind --role=cm-writer --serviceaccount=dev:app-sa -n dev
kubectl auth can-i create configmap --as system:serviceaccount:dev:app-sa -n dev
Before setting up Kubernetes, you must prepare your underlying infrastructure. Here's how to configure the node and disable swap:
hostnamectl set-hostname control-plane-node
echo "10.0.0.2 worker-node" >> /etc/hosts
swapoff -a
This prepares the system to host Kubernetes components.
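Depending on the distribution, a few more preparation steps are commonly needed before kubeadm will run cleanly. A sketch for a typical Linux node (module and sysctl names follow the upstream kubeadm docs; adapt to your environment):

```bash
# Make the swap change permanent (swapoff -a only lasts until reboot)
sed -i '/ swap / s/^/#/' /etc/fstab

# Load kernel modules required by the container runtime and Pod networking
modprobe overlay
modprobe br_netfilter

# Enable IP forwarding and let iptables see bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```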
To initialize the Kubernetes control plane and deploy a network plugin, use the following commands:
kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml
To join worker nodes:
kubeadm join <CONTROL_PLANE_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>
You'll need to manage your cluster lifecycle, including upgrades. Here's how to upgrade the cluster to v1.33:
kubeadm upgrade plan
kubeadm upgrade apply v1.33.1
kubelet --version
These commands plan the upgrade, apply the target version to the control plane, and confirm the installed kubelet version.
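On the exam you usually also upgrade the packages node by node. A rough sequence, assuming Debian/Ubuntu packages and the same version-pinning style used later in this guide (exact package versions depend on your configured repository):

```bash
# Control-plane node: upgrade kubeadm first, then apply the upgrade
sudo apt-get update
sudo apt-get install -y kubeadm=1.33.1-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.33.1

# Then drain the node, upgrade kubelet/kubectl, and restart the kubelet
kubectl drain <node-name> --ignore-daemonsets
sudo apt-get install -y kubelet=1.33.1-00 kubectl=1.33.1-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon <node-name>
```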
High availability (HA) for the control plane is essential for production environments. Here's how to set up a highly available control plane:
kubeadm init --control-plane-endpoint "LOADBALANCER_DNS:6443" --upload-certs
To add an additional control-plane node:
kubeadm join LOADBALANCER_DNS:6443 --control-plane --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH> --certificate-key <CERT_KEY>
Helm is a Kubernetes package manager that simplifies application deployment. Kustomize helps you manage configurations. Here's how you can deploy nginx using Helm and apply custom configurations using Kustomize:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install nginx bitnami/nginx
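The ./my-kustomization/ directory used in the next command is assumed to contain a kustomization.yaml; a minimal sketch (the file names and namespace are illustrative):

```yaml
# ./my-kustomization/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: web                # apply everything into this namespace
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app: nginx                  # added to every generated resource
```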
kubectl apply -k ./my-kustomization/
The Container Network Interface (CNI), Container Storage Interface (CSI), and Container Runtime Interface (CRI) are the extension interfaces for networking, storage, and container runtimes. Here's how to inspect the container runtime, verify the CNI configuration, and list available CSI drivers:
crictl info
ls /etc/cni/net.d/
kubectl get csidrivers
These commands help you manage and troubleshoot networking and storage drivers in Kubernetes.
Custom Resource Definitions (CRDs) and Operators help extend Kubernetes capabilities. To interact with CRDs and manage an operator like Prometheus, use the following commands:
kubectl get crds
kubectl describe crd <customresource>
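For reference, a minimal CRD manifest looks roughly like this (the group, kind, and schema below are illustrative; operators such as Prometheus ship their own CRDs):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```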
kubectl apply -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.70.0/bundle.yaml
Kubernetes v1.33 introduces in-place pod resizing, allowing you to increase resources without restarting the pod. Additionally, debugging has improved with tools like kubectl debug:
kubectl patch pod mypod --subresource=resize --type=merge -p '{"spec":{"containers":[{"name":"app","resources":{"limits":{"cpu":"1","memory":"512Mi"}}}]}}'
kubectl debug pod/mypod -it --image=busybox --target=app -- sh
These features make it easier to manage resources dynamically and debug issues on the fly without downtime.
This domain evaluates your proficiency in defining and managing application workloads, along with controlling their deployment and scheduling within a Kubernetes cluster. Key areas include rolling updates, autoscaling, resource limits, health probes, and scheduling constraints.
Deployment: A controller that ensures the desired number of Pods are running. It supports rolling updates (gradual replacement of Pods) and rollbacks (return to the last known good state).
You're expected to:
- Deploy applications using kubectl or YAML
- Upgrade images with zero downtime
- Roll back failed updates
Example:
kubectl create deployment nginx --image=nginx:1.27 --replicas=2
kubectl set image deployment nginx nginx=nginx:1.28
kubectl rollout undo deployment nginx
- ConfigMap: Stores non-sensitive configuration (e.g., app modes, URLs).
- Secret: Stores sensitive values (e.g., passwords, tokens). Data is base64-encoded but not encrypted by default.
Example:
kubectl create configmap app-config --from-literal=APP_MODE=prod
kubectl create secret generic app-secret --from-literal=PASSWORD=1234
Injected into a Pod:
env:
- name: APP_MODE
valueFrom:
configMapKeyRef:
name: app-config
key: APP_MODE
- name: PASSWORD
valueFrom:
secretKeyRef:
name: app-secret
key: PASSWORD
Horizontal Pod Autoscaler (HPA): Automatically increases or decreases the number of Pods based on CPU, memory, or custom metrics. Requires metrics-server or an external metrics adapter.
Example:
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=5
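The same autoscaler can also be written declaratively. A sketch of the roughly equivalent autoscaling/v2 manifest for the nginx Deployment above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50    # scale out when average CPU exceeds 50% of requests
```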
kubectl get hpa
Kubernetes uses several probes and controllers to make applications resilient:
- Liveness Probe: restarts containers if they hang or crash.
- Readiness Probe: ensures traffic only reaches Pods ready to serve.
- Startup Probe: for slow apps, prevents false liveness failures.
- ReplicaSet: guarantees a certain number of Pods run at all times.
Example:
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
Check in Deployment:
kubectl get deploy nginx -o yaml | grep probe -A 5
Kubernetes schedules Pods onto nodes based on resources and constraints:
- Requests/Limits: Reserve minimum CPU/memory (requests) and enforce maximum usage (limits).
- NodeSelector: Place Pods only on nodes with specific labels.
- Affinity/Anti-Affinity: Fine-grained rules to group or separate Pods.
- Taints/Tolerations: Prevent Pods from scheduling on nodes unless explicitly tolerated.
Example:
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "500m"
memory: "512Mi"
nodeSelector:
disktype: ssd
tolerations:
- key: "node-role.kubernetes.io/control-plane"
operator: "Exists"
effect: "NoSchedule"This domain focuses on the networking mechanisms in Kubernetes. You need to understand **Pod-to-Pod communication, Services, DNS, ingress traffic, the new Gateway API, and network security (NetworkPolicies).
Pod networking: Each Pod in Kubernetes gets its own unique IP address. By default, Pods within a cluster can communicate with each other without the need for Network Address Translation (NAT).
You should be able to:
- Troubleshoot unreachable Pods.
- Test connectivity using tools like ping, curl, and nslookup.
Example:
kubectl exec -it pod-a -- ping pod-b
kubectl exec -it pod-a -- curl http://<pod-ip>:<port>
NetworkPolicy: Controls traffic between Pods. By default, all traffic is allowed; once a NetworkPolicy selects a Pod, traffic to or from it is blocked unless explicitly allowed.
Network policies rely on labels (for Pods and Namespaces) and are enforced only when the CNI plugin supports them (such as Calico or Cilium).
Example:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-frontend
namespace: default
spec:
podSelector:
matchLabels:
role: backend
ingress:
- from:
- podSelector:
matchLabels:
role: frontend
This policy allows traffic from frontend Pods to backend Pods, blocking all other incoming traffic.
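A common companion pattern worth memorizing is a default-deny policy; a sketch that blocks all ingress to every Pod in the namespace until other policies open specific paths:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # selects every Pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so all inbound traffic is denied
```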
Service: Services in Kubernetes provide stable networking to Pods since Pod IPs are ephemeral.
Service types:
- ClusterIP (default): Internal-only access within the cluster.
- NodePort: Exposes the service on each node's IP at a static port (port range 30000-32767).
- LoadBalancer: Integrates with cloud providers to provision an external Load Balancer.
Example:
kubectl expose pod nginx --port=80 --type=NodePort
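The kubectl expose command above generates a Service object for you. Declaratively, a roughly equivalent NodePort Service might look like this (the run: nginx selector assumes the Pod was created with kubectl run nginx):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx          # must match the Pod's labels
  ports:
  - port: 80            # Service port inside the cluster
    targetPort: 80      # container port
    # nodePort is optional; if omitted, Kubernetes picks one from 30000-32767
```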
kubectl get svc
The Gateway API is the modern replacement for Ingress, providing more flexible routing, standardization, and portability across different implementations.
Key resources in Gateway API:
- GatewayClass: Defines the type of gateway (similar to IngressClass).
- Gateway: Represents a load balancer or data plane instance.
- HTTPRoute: Defines routing rules for HTTP traffic.
Example:
kubectl get gatewayclasses
kubectl describe gateway <name>
Expect Gateway API tasks on the exam. Ingress is still relevant, but the Gateway API is the future.
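Since HTTPRoute questions come up, here is a minimal Gateway plus HTTPRoute sketch (the gatewayClassName, hostname, and Service name are illustrative and depend on the implementation installed in the cluster):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: nginx        # must match an installed GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
  - name: web-gateway            # attach the route to the Gateway above
  hostnames:
  - "example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-service           # route matching traffic to this Service
      port: 80
```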
Ingress: Routes external HTTP(S) traffic into the Kubernetes cluster based on hostnames and paths. To make use of Ingress, you must have an Ingress Controller deployed (such as nginx-ingress, HAProxy, or Traefik).
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
Verify if the Ingress Controller is deployed:
kubectl get pods -n ingress-nginx
CoreDNS: CoreDNS is the DNS server inside Kubernetes that handles service discovery by resolving service names like service.namespace.svc.cluster.local.
CoreDNS is configurable via the Corefile in the coredns ConfigMap.
Example:
kubectl exec -it <pod> -- nslookup kubernetes.default
kubectl -n kube-system get configmap coredns -o yaml
This section evaluates your knowledge of persistent storage in Kubernetes. Unlike temporary storage (like emptyDir, which disappears when the Pod dies), persistent storage ensures that data survives Pod restarts or rescheduling.
In Kubernetes, storage is managed through Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), typically provisioned dynamically via StorageClasses.
- StorageClass: This is like a template for Kubernetes. It defines how volumes should be created, specifying things like volume type, speed, and reclaim policies.
- Dynamic Provisioning: Instead of manually creating volumes, Kubernetes can automatically provision them when a PVC requests storage.
Example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Apply and check:
kubectl apply -f storageclass.yaml
kubectl get sc
In real-world clusters, cloud providers (AWS, GCP, Azure) or CSI drivers (e.g., rook-ceph, longhorn) are often used as the provisioner.
Volume Types define where the data is stored:
- hostPath: A path on the node's filesystem (typically used for single-node testing).
- emptyDir: Temporary space that disappears when the Pod is deleted.
- nfs, cephfs, glusterfs, CSI drivers: External shared storage backends.
Access Modes specify who can read or write the volume:
- ReadWriteOnce (RWO): One node can read/write the volume.
- ReadOnlyMany (ROX): Many nodes can read the volume.
- ReadWriteMany (RWX): Many nodes can read and write the volume simultaneously.
Reclaim Policies determine what happens when the PVC is deleted:
- Retain: Keeps the data for manual cleanup.
- Delete: Deletes the volume automatically when the PVC is deleted.
- Recycle: Deprecated in v1.33 and no longer used.
Example PV:
apiVersion: v1
kind: PersistentVolume
metadata:
name: demo-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: /mnt/data
Apply and verify:
kubectl apply -f pv.yaml
kubectl get pv
- PersistentVolume (PV): The actual piece of storage (think of it like a disk).
- PersistentVolumeClaim (PVC): A request for storage by a Pod. Kubernetes automatically binds a PVC to a matching PV.
PVC Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: demo-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
Pod using the PVC:
apiVersion: v1
kind: Pod
metadata:
name: pvc-pod
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- mountPath: "/data"
name: storage
volumes:
- name: storage
persistentVolumeClaim:
claimName: demo-pvc
Check:
kubectl get pvc
kubectl describe pod pvc-pod
This is the most important section of the CKA exam. It tests your ability to debug and recover a Kubernetes cluster when things go wrong. You'll need to quickly identify issues with nodes, Pods, control plane components, networking, and logs, all while managing time pressure.
Nodes are the physical or virtual machines that run your Pods. When a node is unhealthy, workloads can fail or become unresponsive.
- Check cluster health:
kubectl get nodes
kubectl describe node <node-name>
- Put a node into maintenance mode (drain):
kubectl drain <node-name> --ignore-daemonsets
- Bring it back after maintenance:
kubectl uncordon <node-name>
This is how Kubernetes lets you manage node availability. When you drain a node, it is cordoned (no new Pods are scheduled) and its existing Pods are evicted; once maintenance is done, you uncordon it to allow scheduling again.
The control plane is composed of several important components:
- etcd: The cluster's database
- kube-apiserver: The API server that handles requests from kubectl and other clients
- kube-scheduler: Decides where Pods should run within the cluster
- kube-controller-manager: Ensures the cluster is in its desired state
These are often run as static Pods and can be found in /etc/kubernetes/manifests.
- Check control plane Pods:
ls /etc/kubernetes/manifests
- Check kubelet logs (runs on every node):
journalctl -u kubelet
If the API server is down, your kubectl commands won't work, and you'll need to check logs directly on the node to diagnose the issue.
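When the API server itself is down, kubectl is useless, but the control-plane containers can still be inspected through the container runtime. A sketch using crictl (the component name is an example):

```bash
# List all containers, including exited ones, and find the failing component
crictl ps -a | grep kube-apiserver

# Read that container's logs to see why it is crashing
crictl logs <container-id>
```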
Resource bottlenecks (such as CPU or memory pressure) often cause issues in Kubernetes clusters.
- Use metrics-server (make sure it's installed):
kubectl top nodes
kubectl top pods --all-namespaces
- Sort by resource usage:
kubectl top pods --sort-by=cpu
This will help you identify if Pods are consuming too many resources, or if nodes are overloaded.
Logs are your best tool for troubleshooting Kubernetes issues.
- Check Pod logs:
kubectl logs <pod-name>
kubectl logs <pod-name> -c <container-name> # for multi-container Pods
kubectl logs -f <pod-name> # stream logs in real time
- Check node process logs (e.g., kubelet logs):
journalctl -u kubelet -f
Use the -f flag to follow logs in real time as you reproduce issues, which can be incredibly helpful in identifying what went wrong.
Networking is one of the most common failure points in Kubernetes.
- Check Services: Verify that your services are correctly configured:
kubectl get svc
kubectl describe svc <service-name>
- Test DNS resolution inside a Pod: Make sure your Pods can resolve DNS names:
kubectl exec -it <pod-name> -- nslookup <service-name>
- Test connectivity between Pods: If Pods can't communicate, check connectivity using curl or ping:
kubectl exec -it <source-pod> -- curl <target-pod-ip>:<port>
This will help confirm whether the issue is related to DNS resolution, Pod IPs, or Service routing.
This section contains hands-on practice scenarios that mirror the exam.
For each, we include: Task → Reasoning → Solution (YAML/commands) → Exam tips.
Task
Create a Pod pod1 in default using image httpd:2.4.41-alpine.
- Container name: pod1-container
- Pod must only run on a master node (do not add new labels).
- Write to /opt/course/2/master_schedule_reason why Pods are not scheduled on masters by default.
Reasoning
Masters are typically tainted with NoSchedule to protect control-plane resources. To place a Pod there:
1) a toleration to ignore the taint, and 2) a nodeSelector to target the master.
Solution
Inspect taints/labels:
kubectl get nodes
kubectl describe node cluster1-master1 | grep -i taint
kubectl get node cluster1-master1 --show-labels
Create YAML:
kubectl run pod1 --image=httpd:2.4.41-alpine -o yaml --dry-run=client > pod1.yaml
Edit and apply:
apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: pod1-container
image: httpd:2.4.41-alpine
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
nodeSelector:
node-role.kubernetes.io/master: ""
kubectl apply -f pod1.yaml
kubectl get pod pod1 -o wide
echo "Master nodes are tainted to avoid scheduling regular workloads" > /opt/course/2/master_schedule_reasonExam tips
- You need both toleration and nodeSelector.
- Don't relabel nodes; the task forbids "adding new labels".
Task
- PV safari-pv: capacity 2Gi, ReadWriteOnce, hostPath: /Volumes/Data, no storageClass.
- PVC safari-pvc in project-tiger: request 2Gi, ReadWriteOnce, no storageClass.
- Deployment safari in project-tiger running httpd:2.4.41-alpine, mounting at /tmp/safari-data.
Reasoning
PV is the "disk", PVC is the "request". If specs match, binding occurs. Deployment mounts the claim.
Solution
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
name: safari-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/Volumes/Data"PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: safari-pvc
namespace: project-tiger
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: safari
namespace: project-tiger
spec:
replicas: 1
selector:
matchLabels:
app: safari
template:
metadata:
labels:
app: safari
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: safari-pvc
containers:
- name: container
image: httpd:2.4.41-alpine
volumeMounts:
- name: data
mountPath: /tmp/safari-data
Apply & verify:
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f deploy.yaml
kubectl -n project-tiger get pv,pvc,deploy,pods -o wide
Exam tips
- If the PVC stays Pending, check size/access modes and that storageClassName is unset on both PV & PVC.
- hostPath is node-local; for exams it's fine.
Task
Create ServiceAccount processor in project-hamster. Create Role/RoleBinding (both named processor) to allow only create on Secrets and ConfigMaps in that namespace.
Reasoning
Role defines allowed verbs/resources in namespace; RoleBinding attaches it to the ServiceAccount.
Solution
kubectl -n project-hamster create sa processor
kubectl -n project-hamster create role processor --verb=create --resource=secrets --resource=configmaps
kubectl -n project-hamster create rolebinding processor --role=processor --serviceaccount=project-hamster:processor
Verify:
kubectl -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
kubectl -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
kubectl -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor
Exam tips
- Mind pluralization: secrets, configmaps.
- Use kubectl -n <ns> auth can-i ... to prove it.
Task
Namespace: project-tiger. Create DaemonSet ds-important (image httpd:2.4-alpine), labels id=ds-important, uuid=18426a0b-5f59-4e10-923f-c0e078e82462. Requests: cpu:10m, memory:10Mi. Must run on all nodes including masters.
Reasoning
DaemonSet = one Pod per node. Need toleration for master taint.
Solution
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ds-important
namespace: project-tiger
labels:
id: ds-important
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
selector:
matchLabels:
id: ds-important
template:
metadata:
labels:
id: ds-important
spec:
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: ds-important
image: httpd:2.4-alpine
resources:
requests:
cpu: 10m
memory: 10Mi
Verify:
kubectl -n project-tiger get ds
kubectl -n project-tiger get pods -o wide -l id=ds-important
Exam tips
- DaemonSets don't have replicas.
- Add a toleration for scheduling on masters.
Task
Namespace project-tiger. Create Deployment deploy-important with label id=very-important, replicas=3.
Two containers per Pod:
- container1: nginx:1.17.6-alpine
- container2: kubernetes/pause
Ensure max one Pod per worker node; with 2 workers, only 2 Pods run, 1 Pending.
Reasoning
Use podAntiAffinity or topologySpreadConstraints to distribute Pods across nodes and avoid co-location.
Solution A: podAntiAffinity
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-important
namespace: project-tiger
labels:
id: very-important
spec:
replicas: 3
selector:
matchLabels:
id: very-important
template:
metadata:
labels:
id: very-important
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: id
operator: In
values: ["very-important"]
topologyKey: kubernetes.io/hostname
containers:
- name: container1
image: nginx:1.17.6-alpine
- name: container2
image: kubernetes/pause
Solution B: topologySpreadConstraints
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-important
namespace: project-tiger
labels:
id: very-important
spec:
replicas: 3
selector:
matchLabels:
id: very-important
template:
metadata:
labels:
id: very-important
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
id: very-important
containers:
- name: container1
image: nginx:1.17.6-alpine
- name: container2
image: kubernetes/pause
Verify behavior:
kubectl -n project-tiger get deploy,pods -o wide -l id=very-important
Expect 2/3 Pods running (one per worker), 1 Pending.
Exam tips
- topologyKey is usually kubernetes.io/hostname.
- If a Pod stays Pending, read the Events in kubectl describe pod.
Task
Create Pod multi-container-playground in default with three containers: c1, c2, c3 and a shared, non-persistent volume.
- c1: nginx:1.17.6-alpine, env MY_NODE_NAME set to the node name.
- c2: busybox:1.31.1, writes date every second to /vol/date.log.
- c3: busybox:1.31.1, runs tail -f /vol/date.log.
Confirm the logs in c3.
Reasoning
Use an emptyDir volume as shared ephemeral storage between containers in the same Pod.
Solution
apiVersion: v1
kind: Pod
metadata:
name: multi-container-playground
spec:
volumes:
- name: vol
emptyDir: {}
containers:
- name: c1
image: nginx:1.17.6-alpine
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: vol
mountPath: /vol
- name: c2
image: busybox:1.31.1
command: ["sh","-c","while true; do date >> /vol/date.log; sleep 1; done"]
volumeMounts:
- name: vol
mountPath: /vol
- name: c3
image: busybox:1.31.1
command: ["sh","-c","tail -f /vol/date.log"]
volumeMounts:
- name: vol
mountPath: /vol
Apply & verify:
kubectl apply -f pod.yaml
kubectl get pod multi-container-playground
kubectl exec multi-container-playground -c c1 -- env | grep MY_NODE_NAME
kubectl logs multi-container-playground -c c3 | head
Exam tips
- emptyDir is perfect for sharing within a Pod, not across Pods.
- fieldRef: spec.nodeName exposes the node name.
Task
- Create namespace secret.
- There is an existing Secret manifest at /opt/course/19/secret1.yaml; create it in secret and mount it read-only at /tmp/secret1.
- Create Secret secret2 in secret with user=user1, pass=1234. Inject it as env APP_USER and APP_PASS in a long-running busybox:1.31.1 Pod named secret-pod that can run on masters (add a toleration). Verify.
Reasoning
Secrets can be mounted as files or exposed as env vars. Toleration allows scheduling on tainted masters.
Solution
Create namespace & Secret #1:
kubectl create ns secret
# edit the namespace inside the provided file if needed
sed -e 's/namespace: .*/namespace: secret/' /opt/course/19/secret1.yaml > /tmp/secret1.yaml
kubectl apply -f /tmp/secret1.yaml
Create Secret #2:
kubectl -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
Pod:
apiVersion: v1
kind: Pod
metadata:
name: secret-pod
namespace: secret
spec:
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: secret-pod
image: busybox:1.31.1
command: ["sh","-c","sleep 1d"]
env:
- name: APP_USER
valueFrom:
secretKeyRef:
name: secret2
key: user
- name: APP_PASS
valueFrom:
secretKeyRef:
name: secret2
key: pass
volumeMounts:
- name: secret1
mountPath: /tmp/secret1
readOnly: true
volumes:
- name: secret1
secret:
secretName: secret1
Verify:
kubectl -n secret apply -f pod.yaml
kubectl -n secret exec secret-pod -- env | grep APP_
kubectl -n secret exec secret-pod -- ls -l /tmp/secret1
Exam tips
- Mounting a Secret as a volume is read-only by default.
- Ensure namespace matches in Secret manifests.
Task
A node (cluster3-worker2) is not part of the cluster and runs an older Kubernetes. Upgrade it to the exact version running on cluster3-master1 and join it to the cluster using kubeadm.
Reasoning
Workers must match (or be compatible with) control-plane minor version. Use kubeadm to join.
Solution
On control plane, find version & get join command:
kubectl get nodes -o wide
sudo kubeadm token create --print-join-command
# Example output:
# kubeadm join <CP_IP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>
On the target worker (cluster3-worker2), install matching versions (example for Debian/Ubuntu):
# Make sure repos are configured per Kubernetes docs, then:
sudo apt-get update
# Pin to the version you saw on the control plane (example v1.33.0-00):
sudo apt-get install -y kubelet=1.33.0-00 kubeadm=1.33.0-00 kubectl=1.33.0-00
sudo systemctl enable --now kubelet
If the node was previously initialized, reset first:
sudo kubeadm reset -f
sudo rm -rf ~/.kube
Join:
sudo <PASTE_THE_JOIN_COMMAND_FROM_MASTER>
# e.g. sudo kubeadm join <CP_IP>:6443 --token ... --discovery-token-ca-cert-hash sha256:...
Back on the control plane:
kubectl get nodes
Exam tips
- Use the exact version numbers from the control plane.
- If the join fails, check journalctl -u kubelet -f and consider kubeadm reset -f, then re-join.
Task
Namespace project-snake. Create NetworkPolicy np-backend to allow only:
- backend-* Pods → db1-* Pods on port 1111
- backend-* Pods → db2-* Pods on port 2222
Afterwards, backend → vault on port 3333 must be blocked. The selector uses the label app.
Reasoning
Use egress rules with label selectors and ports. When a NetworkPolicy selects Pods for egress, all other egress from those Pods is denied unless explicitly allowed.
Solution
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: np-backend
namespace: project-snake
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: db1
ports:
- protocol: TCP
port: 1111
- to:
- podSelector:
matchLabels:
app: db2
ports:
- protocol: TCP
port: 2222
Test:
kubectl -n project-snake exec backend-0 -- curl -s <db1-ip>:1111
kubectl -n project-snake exec backend-0 -- curl -s <db2-ip>:2222
kubectl -n project-snake exec backend-0 -- curl -s <vault-ip>:3333
Exam tips
- Multiple egress items are ORed; within each item, to + ports are ANDed.
- Your CNI must support NetworkPolicy (Calico, Cilium, etc.).
Task
- Take an etcd snapshot on cluster3-master1 to /tmp/etcd-backup.db.
- Create any test Pod.
- Restore the snapshot and verify the test Pod is gone (state rolled back).
Reasoning
etcdctl snapshot save needs certs to auth to etcd. After restore to a new data dir, point the etcd static Pod to that dir and restart control plane.
Solution
On control plane, save snapshot (paths may vary with your install):
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /tmp/etcd-backup.db
Create a test Pod:
kubectl run test --image=nginx
kubectl get pods -l run=test -w
Stop the control plane by moving the static Pod manifests out temporarily:
sudo mkdir -p /root/manifests-bak
sudo mv /etc/kubernetes/manifests/* /root/manifests-bak/
# wait for components to stop
Restore the snapshot to a fresh data dir:
etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup
Point the etcd manifest to the new data dir (/etc/kubernetes/manifests/etcd.yaml):
# find the hostPath volume named etcd-data and change its path:
hostPath:
path: /var/lib/etcd-backup
type: DirectoryOrCreate
Start the control plane again:
sudo mv /root/manifests-bak/* /etc/kubernetes/manifests/
# wait for components to come back
kubectl get pods -A
kubectl get pods -l run=test
# expected: Not found (rolled back state)
Exam tips
- Use the server cert/key or apiserver-etcd-client pair depending on your setup.
- Be careful editing files under /etc/kubernetes/manifests; the kubelet auto-restarts components.
"The exam has shifted. It's less about copying manifests and more about troubleshooting, Helm, CRDs, and Gateway APIs."
"I left 3 questions blank: Helm packaging, Gateway API, and cluster setup using kubeadm. It was tougher than expected."
"Gateway API and HTTPRoute threw me off; the documentation wasn't clear. Helm was tricky too, but overall manageable."
"Topics I saw: HPA, Ingress, Helm, ArgoCD, cert-manager CRDs, container runtimes, and setting up NetworkPolicies with least privilege."
"The new format made me think more deeply. I had to actually solve problems, not just recognize YAML."
"Low-resolution VM made it hard to read docs. And some questions were significantly more complex than the mock exams."
"Expect lag, tab loading delays, and broken copy-paste. Know your kubectl, vim, and command-line tools well."
"Mouse movement was sluggish; keyboard navigation with :vsplit and kubectl explain helped me save time."
| Topic Area | Common Feedback Summary |
|---|---|
| Helm & Packaging | Required for install and templating questions; Helm + dpkg appeared |
| Gateway API / HTTPRoute | Often noted as tricky; vague docs made questions harder |
| CRI / containerd | Manual installs and configuration came up in several questions |
| CRDs & Admission Webhooks | Edge-case topics that still showed up; be familiar with them |
| HPA, ArgoCD, NetPol | Popular topics; applied in realistic scenarios like scaling and access control |
| PVC, Volumes, Ingress | Included in questions requiring debug/fix/validate workflows |
| Cluster Setup (kubeadm) | Appeared unexpectedly; time-consuming if unprepared |
| PSI Exam UI | Reported lag, poor copy/paste support, and low-res experience |
| Keyboard Proficiency | Critical due to interface slowness; vim, tabs, and shortcuts helped significantly |
- Kubernetes Slack channel #certifications (Slack)
- Guide to Certified Kubernetes Administrator (CKA) (Blog)
- Udemy: CKA Certified Kubernetes Administrator Crash Course (Video Course)
- Certified Kubernetes Administrator (CKA) - A Cloud Guru (formerly Linux Academy) (Video Course)
- Kubernetes Fundamentals (LFS258) - Linux Foundation (Official Course)
- Kubernetes Deep Dive - A Cloud Guru (Video Course)
Practice a lot with Kubernetes:
- CKA Simulator - killer.sh
- Kubernetes the Hard Way by Kelsey Hightower
- CKA Scenarios - killercoda.com
- Learning Playground - by Docker
If this repo has helped you in any way, feel free to share it!
