Which GitOps engine can be used to orchestrate parallel jobs on Kubernetes?
Jenkins X
Flagger
Flux
Argo Workflows
Argo Workflows (D) is the correct answer because it is a Kubernetes-native workflow engine designed to define and run multi-step workflows—often with parallelization—directly on Kubernetes. Argo Workflows models workflows as DAGs (directed acyclic graphs) or step-based sequences, where each step is typically a Pod. Because each step is expressed as Kubernetes resources (custom resources), Argo can schedule many tasks concurrently, control fan-out/fan-in patterns, and manage dependencies between steps (e.g., “run these 10 jobs in parallel, then aggregate results”).
The question calls it a “GitOps engine,” but the capability being tested is “orchestrate parallel jobs.” Argo Workflows fits because it is purpose-built for running complex job orchestration, including parallel tasks, retries, timeouts, artifacts passing, and conditional execution. In practice, many teams store workflow manifests in Git and apply GitOps practices around them, but the distinguishing feature here is the workflow orchestration engine itself.
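As a rough illustration, the sketch below defines an Argo Workflows DAG in which two tasks run in parallel and a third fans in after both complete; the task names, template, and image are hypothetical.

```yaml
# Illustrative Argo Workflow (names and image are placeholders): job-a and job-b
# run in parallel, and aggregate starts only after both have finished (fan-in).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallel-jobs-
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: job-a
        template: worker
      - name: job-b
        template: worker
      - name: aggregate
        dependencies: [job-a, job-b]   # waits for both parallel tasks
        template: worker
  - name: worker
    container:
      image: alpine:3.19
      command: [sh, -c, "echo doing work"]
```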
Why the other options are not best:
Flux (C) is a GitOps controller that reconciles cluster state from Git; it doesn’t orchestrate parallel job graphs as its core function.
Flagger (B) is a progressive delivery operator (canary/blue-green) often paired with GitOps and service meshes/Ingress; it’s not a general workflow orchestrator for parallel batch jobs.
Jenkins X (A) is CI/CD-focused (pipelines), not primarily a Kubernetes-native workflow engine for parallel job DAGs in the way Argo Workflows is.
So, the Kubernetes-native tool specifically used to orchestrate parallel jobs and workflows is Argo Workflows (D).
=========
In the DevOps framework and culture, who builds, automates, and offers continuous delivery tools for developer teams?
Application Users
Application Developers
Platform Engineers
Cluster Operators
The correct answer is C (Platform Engineers). In modern DevOps and platform operating models, platform engineering teams build and maintain the shared delivery capabilities that product/application teams use to ship software safely and quickly. This includes CI/CD pipeline templates, standardized build and test automation, artifact management (registries), deployment tooling (Helm/Kustomize/GitOps), secrets management patterns, policy guardrails, and paved-road workflows that reduce cognitive load for developers.
While application developers (B) write the application code and often contribute pipeline steps for their service, the “build, automate, and offer tooling for developer teams” responsibility maps directly to platform engineering: they provide the internal platform that turns Kubernetes and cloud services into a consumable product. This is especially common in Kubernetes-based organizations where you want consistent deployment standards, repeatable security checks, and uniform observability.
Cluster operators (D) typically focus on the health and lifecycle of the Kubernetes clusters themselves: upgrades, node pools, networking, storage, cluster security posture, and control plane reliability. They may work closely with platform engineers, but “continuous delivery tools for developer teams” is broader than cluster operations. Application users (A) are consumers of the software, not builders of delivery tooling.
In cloud-native application delivery, this division of labor is important: platform engineers enable higher velocity with safety by automating the software supply chain—builds, tests, scans, deploys, progressive delivery, and rollback. Kubernetes provides the runtime substrate, but the platform team makes it easy and safe for developers to use it repeatedly and consistently across many services.
Therefore, Platform Engineers (C) is the verified correct choice.
=========
Which group of container runtimes provides additional sandboxed isolation and elevated security?
rune, cgroups
docker, containerd
runsc, kata
crun, cri-o
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
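As a sketch of how this surfaces in Kubernetes, a RuntimeClass can point Pods at a sandboxed runtime; the handler name below assumes the node's container runtime has been configured with a matching gVisor handler, which varies by installation.

```yaml
# Illustrative RuntimeClass for gVisor (runsc); the handler name must match the
# runtime handler configured in containerd/CRI-O on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor
---
# A Pod opts into the sandboxed runtime via runtimeClassName.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx:1.27
```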
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
=========
What is the difference between a Deployment and a ReplicaSet?
With a Deployment, you can’t control the number of pod replicas.
A ReplicaSet does not guarantee a stable set of replica pods running.
A Deployment is basically the same as a ReplicaSet with annotations.
A Deployment is a higher-level concept that manages ReplicaSets.
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
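A minimal Deployment sketch (names, image, and strategy values are illustrative); changing anything in spec.template, such as the image tag, triggers a new ReplicaSet and a rolling update.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # one extra Pod may be created during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27   # bump this tag to trigger a new ReplicaSet
```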
Why the other options are incorrect:
A is false: Deployments absolutely control the number of replicas via spec.replicas and can also be controlled by HPA.
B is false: ReplicaSets do guarantee that a stable number of replicas is running (that is their core purpose).
C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
=========
What is a Pod?
A networked application within Kubernetes.
A storage volume within Kubernetes.
A single container within Kubernetes.
A group of one or more containers within Kubernetes.
A Pod is the smallest deployable/schedulable unit in Kubernetes and consists of a group of one or more containers that are deployed together on the same node—so D is correct. The key idea is that Kubernetes schedules Pods, not individual containers. Containers in the same Pod share important runtime context: they share the same network namespace (one Pod IP and port space) and can share storage volumes defined at the Pod level. This is why a Pod is often described as a “logical host” for its containers.
Most Pods run a single container, but multi-container Pods are common for sidecar patterns. For example, an application container might run alongside a service mesh proxy sidecar, a log shipper, or a config reloader. Because these containers share localhost networking, they can communicate efficiently without exposing extra network endpoints. Because they can share volumes, one container can produce files that another consumes (for example, writing logs to a shared volume).
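As a sketch (image and paths are illustrative), a two-container Pod can share a volume while both containers share the Pod's network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch space for both containers
  containers:
  - name: app
    image: busybox:1.36
    command: [sh, -c, "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-tailer           # sidecar reads what the app writes
    image: busybox:1.36
    command: [sh, -c, "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```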
Options A and B are incorrect because a Pod is not “an application” abstraction nor is it a storage volume. Pods can host applications, but they are the execution unit for containers rather than the application concept itself. Option C is incorrect because a Pod is not limited to a single container; “one or more containers” is fundamental to the Pod definition.
Operationally, understanding Pods is essential because many Kubernetes behaviors key off Pods: Services select Pods (typically by labels), autoscalers scale Pods (replica counts), probes determine Pod readiness/liveness, and scheduling constraints place Pods on nodes. When a Pod is replaced (for example during a Deployment rollout), a new Pod is created with a new UID and potentially a new IP—reinforcing why Services exist to provide stable access.
Therefore, the verified correct answer is D: a Pod is a group of one or more containers within Kubernetes.
=========
Which of the following are tasks performed by a container orchestration tool?
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
=========
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called:
Namespaces
Containers
Hypervisors
cgroups
Kubernetes provides “virtual clusters” within a single physical cluster primarily through Namespaces, so A is correct. Namespaces are a logical partitioning mechanism that scopes many Kubernetes resources (Pods, Services, Deployments, ConfigMaps, Secrets, etc.) into separate environments. This enables multiple teams, applications, or environments (dev/test/prod) to share a cluster while keeping their resource names and access controls separated.
Namespaces are often described as “soft multi-tenancy.” They don’t provide full isolation like separate clusters, but they do allow administrators to apply controls per namespace:
RBAC rules can grant different permissions per namespace (who can read Secrets, who can deploy workloads, etc.).
ResourceQuotas and LimitRanges can enforce fair usage and prevent one namespace from consuming all cluster resources.
NetworkPolicies can isolate traffic between namespaces (depending on the CNI).
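For example, a ResourceQuota scoped to one namespace might look like the sketch below (namespace name and limits are arbitrary examples):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a        # quota applies only inside this namespace
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```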
Containers are runtime units inside Pods and are not “virtual clusters.” Hypervisors are virtualization components for VMs, not Kubernetes partitioning constructs. cgroups are Linux kernel primitives for resource control, not Kubernetes virtual cluster constructs.
While there are other “virtual cluster” approaches (like vcluster projects) that create stronger virtualized control planes, the built-in Kubernetes mechanism referenced by this question is namespaces. Therefore, the correct answer is A: Namespaces.
=========
What is CRD?
Custom Resource Definition
Custom Restricted Definition
Customized RUST Definition
Custom RUST Definition
A CRD is a CustomResourceDefinition, making A correct. Kubernetes is built around an API-driven model: resources like Pods, Services, and Deployments are all objects served by the Kubernetes API. CRDs allow you to extend the Kubernetes API by defining your own resource types. Once a CRD is installed, the API server can store and serve custom objects (Custom Resources) of that new type, and Kubernetes tooling (kubectl, RBAC, admission, watch mechanisms) can interact with them just like built-in resources.
CRDs are a core building block of the Kubernetes ecosystem because they enable operators and platform extensions. A typical pattern is: define a CRD that represents the desired state of some higher-level concept (for example, a database cluster, a certificate request, an application release), and then run a controller (often called an “operator”) that watches those custom resources and reconciles the cluster to match. That controller may create Deployments, StatefulSets, Services, Secrets, or cloud resources to implement the desired state encoded in the custom resource.
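A minimal CRD sketch (the group, kind, and schema fields below are hypothetical) shows the pieces involved: once applied, the API server serves Database objects that an operator could reconcile.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com      # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer      # example field a controller would act on
```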
The incorrect answers are made-up expansions. CRDs are not related to Rust in Kubernetes terminology, and “custom restricted definition” is not the standard meaning.
So the verified meaning is: CRD = CustomResourceDefinition, used to extend Kubernetes APIs and enable Kubernetes-native automation via controllers/operators.
=========
When a Kubernetes Secret is created, how is the data stored by default in etcd?
As Base64-encoded strings that provide simple encoding but no actual encryption.
As plain text values that are directly stored without any obfuscation or additional encoding.
As compressed binary objects that are optimized for space but not secured against access.
As encrypted records automatically protected using the Kubernetes control plane master key.
By default, Kubernetes Secrets are stored in etcd as Base64-encoded values, which makes option A the correct answer. This is a common point of confusion because Base64 encoding is often mistaken for encryption, but in reality, it provides no security—only a reversible text encoding.
When a Secret is defined in a Kubernetes manifest or created via kubectl, its data fields are Base64-encoded before being persisted in etcd. This encoding ensures that binary data (such as certificates or keys) can be safely represented in JSON and YAML formats, which require text-based values. However, anyone with access to etcd or the Secret object via the Kubernetes API can easily decode these values.
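A sketch with dummy credentials makes the point: the stored values are trivially reversible.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=       # base64 of "admin"
  password: cGFzc3dvcmQ=   # base64 of "password"
# Anyone who can read the object can decode it, for example:
#   kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d
```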
Option B is incorrect because Secrets are not stored as raw plaintext; they are encoded using Base64 before storage. Option C is incorrect because Kubernetes does not compress Secret data by default. Option D is incorrect because Secrets are not encrypted at rest by default. Encryption at rest must be explicitly configured using an encryption provider configuration in the Kubernetes API server.
Because of this default behavior, Kubernetes strongly recommends additional security measures when handling Secrets. These include enabling encryption at rest for etcd, restricting access to Secrets using RBAC, using short-lived ServiceAccount tokens, and integrating with external secret management systems such as HashiCorp Vault or cloud provider key management services.
Understanding how Secrets are stored is critical for designing secure Kubernetes clusters. While Secrets provide a convenient abstraction for handling sensitive data, they rely on cluster-level security controls to ensure confidentiality. Without encryption at rest and proper access restrictions, Secret data remains vulnerable to unauthorized access.
Therefore, the correct and verified answer is Option A: Kubernetes stores Secrets as Base64-encoded strings in etcd by default, which offers encoding but not encryption.
=========
Which of these commands is used to retrieve the documentation and field definitions for a Kubernetes resource?
kubectl explain
kubectl api-resources
kubectl get --help
kubectl show
kubectl explain is the command that shows documentation and field definitions for Kubernetes resource schemas, so A is correct. Kubernetes resources have a structured schema: top-level fields like apiVersion, kind, and metadata, and resource-specific structures like spec and status. kubectl explain lets you explore these structures directly from your cluster’s API discovery information, including field types, descriptions, and nested fields.
For example, kubectl explain deployment describes the Deployment resource, and kubectl explain deployment.spec dives into the spec structure. You can continue deeper, such as kubectl explain deployment.spec.template.spec.containers to discover container fields. This is especially useful when writing or troubleshooting manifests, because it reduces guesswork and prevents invalid YAML fields that would be rejected by the API server. It also helps when APIs evolve: you can confirm which fields exist in your cluster’s current version and what they mean.
The other commands do different things. kubectl api-resources lists resource types and their shortnames, whether they are namespaced, and supported verbs—useful discovery, but not detailed field definitions. kubectl get --help shows CLI usage help for kubectl get, not the Kubernetes object schema. kubectl show is not a standard kubectl subcommand.
From a Kubernetes “declarative configuration” perspective, correct manifests are critical: controllers reconcile desired state from spec, and subtle field mistakes can change runtime behavior. kubectl explain is a built-in way to learn the schema and write manifests that align with the Kubernetes API’s expectations. That’s why it’s commonly recommended in Kubernetes documentation and troubleshooting workflows.
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is a critical Kubernetes component that runs on every worker node and acts as the primary execution agent for Pods. Its core responsibility is to ensure that the containers defined in Pod specifications are running and remain healthy on the node, making option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a specific node, the kubelet on that node becomes responsible for carrying out the desired state described in the Pod specification. It continuously watches the API server for Pods assigned to its node and communicates with the container runtime (such as containerd or CRI-O) to start, stop, and restart containers as needed. The kubelet does not make scheduling decisions; it simply executes them.
Health management is another key responsibility of the kubelet. It runs liveness, readiness, and startup probes as defined in the Pod specification. If a container fails a liveness probe, the kubelet restarts it. If a readiness probe fails, the kubelet marks the Pod as not ready, preventing traffic from being routed to it. The kubelet also reports detailed Pod and node status information back to the API server, enabling controllers to take corrective actions when necessary.
Option A is incorrect because persistent volume provisioning and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet. Option B is incorrect because cluster state management and scheduling are responsibilities of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet serves as the node-level guardian of Kubernetes workloads. By ensuring containers are running exactly as specified and continuously reporting their health and status, the kubelet forms the essential bridge between Kubernetes’ declarative control plane and the actual execution of applications on worker nodes.
=========
Which of the following resources helps in managing a stateless application workload on a Kubernetes cluster?
DaemonSet
StatefulSet
kubectl
Deployment
The correct answer is D: Deployment. A Deployment is the standard Kubernetes controller for managing stateless applications. It provides declarative updates, replica management, and rollout/rollback functionality. You define the desired state (container image, environment variables, ports, replica count) in the Deployment spec, and Kubernetes ensures the specified number of Pods are running and updated according to strategy (RollingUpdate by default).
Stateless workloads are ideal for Deployments because each replica is interchangeable. If a Pod dies, a new one can be created anywhere; if traffic increases, replicas can be increased; if you need to update the app, a new ReplicaSet is created and traffic shifts gradually to new Pods. Deployments integrate naturally with Services for stable networking and load balancing.
Why the other options are incorrect:
A DaemonSet ensures one Pod per node (or selected nodes). It’s for node-level agents, not generic stateless service replicas.
A StatefulSet is for workloads needing stable identity, ordered rollout, and persistent storage per replica (databases, quorum systems). That’s not the typical stateless app case.
kubectl is a CLI tool; it doesn’t “manage” workloads as a controller resource.
In real cluster operations, almost every stateless microservice is represented as a Deployment plus a Service (and often an Ingress/Gateway for edge routing). Deployments also support advanced delivery patterns (maxSurge/maxUnavailable tuning) and easy integration with HPA for horizontal scaling. Because the question is specifically “managing a stateless application workload,” the Kubernetes resource designed for that is clearly the Deployment.
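As an illustration of that pairing, a Service selecting the Deployment's Pods might look like this (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # must match the labels on the Deployment's Pod template
  ports:
  - port: 80          # stable port clients connect to
    targetPort: 8080  # container port on the Pods
```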
=========
Which component of the node is responsible to run workloads?
The kubelet.
The kube-proxy.
The kube-apiserver.
The container runtime.
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. Kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to “which node component is responsible to run workloads” is the container runtime, option D.
=========
Which of the following would fall under the responsibilities of an SRE?
Developing a new application feature.
Creating a monitoring baseline for an application.
Submitting a budget for running an application in a cloud.
Writing policy on how to submit a code change.
Site Reliability Engineering (SRE) focuses on reliability, availability, performance, and operational excellence using engineering approaches. Among the options, creating a monitoring baseline for an application is a classic SRE responsibility, so B is correct. A monitoring baseline typically includes defining key service-level signals (latency, traffic, errors, saturation), establishing dashboards, setting sensible alert thresholds, and ensuring telemetry is complete enough to support incident response and capacity planning.
In Kubernetes environments, SRE work often involves ensuring that workloads expose health endpoints for probes, that resource requests/limits are set to allow stable scheduling and autoscaling, and that observability pipelines (metrics, logs, traces) are consistent. Building a monitoring baseline also ties into SLO/SLI practices: SREs define what “good” looks like, measure it continuously, and create alerts that notify teams when the system deviates from those expectations.
Option A is primarily an application developer task—SREs may contribute to reliability features, but core product feature development is usually owned by engineering teams. Option C is more aligned with finance, FinOps, or management responsibilities, though SRE data can inform costs. Option D is closer to governance, platform policy, or developer experience/process ownership; SREs might influence processes, but “policy on how to submit code change” is not the defining SRE duty compared to monitoring and reliability engineering.
Therefore, the best verified choice is B, because establishing monitoring baselines is central to operating reliable services on Kubernetes.
=========
What is the telemetry component that represents a series of related distributed events that encode the end-to-end request flow through a distributed system?
Metrics
Logs
Spans
Traces
In observability, traces represent an end-to-end view of a request as it flows through multiple services, so D is correct. Tracing is particularly important in cloud-native microservices architectures because a single user action (like “checkout” or “search”) may traverse many services via HTTP/gRPC calls, message queues, and databases. Traces link those related events together so you can see where time is spent, where errors occur, and how dependencies behave.
A trace is typically composed of multiple spans (option C). A span is a single timed operation (e.g., “HTTP GET /orders”, “DB query”, “call payment service”). Spans include timing, attributes (tags), status/error information, and parent/child relationships. While spans are essential building blocks, the “series of related distributed events encoding end-to-end request flow” is the trace as a whole, not an individual span.
Metrics (option A) are numeric time series used for aggregation and alerting (rates, latency percentiles when derived, resource usage). Logs (option B) are discrete event records (text or structured) useful for forensic detail and debugging. Both are valuable, but neither inherently provides a stitched, causal, end-to-end request path across services. Traces do exactly that by propagating trace context (trace IDs/span IDs) across service boundaries (often via headers).
In Kubernetes environments, traces are commonly exported via OpenTelemetry instrumentation/collectors and visualized in tracing backends. Tracing enables faster incident resolution by pinpointing the slow hop, the failing downstream dependency, or unexpected fan-out. Therefore, the correct telemetry component for end-to-end distributed request flow is Traces (D).
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
CoreDNS
CNI
gRPC
Envoy
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run creates a Pod in modern kubectl; older versions could generate other resources through generators, but it is not the supported way to create a Deployment today. Option C is invalid syntax: the resource type must come directly after create, so kubectl create nginx deployment is not a valid command. Option D uses a non-existent --count flag; the flag for setting the replica count is --replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
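For reference, a declarative manifest roughly equivalent to the imperative command might look like the sketch below (the generated labels can differ by kubectl version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```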
Therefore, B is the verified correct command.
=========
Kubernetes ___ protect you against voluntary interruptions (such as deleting Pods, draining nodes) to run applications in a highly available manner.
Pod Topology Spread Constraints
Pod Disruption Budgets
Taints and Tolerations
Resource Limits and Requests
The correct answer is B: Pod Disruption Budgets (PDBs). A PDB is a policy object that limits how many Pods of an application can be voluntarily disrupted at the same time. “Voluntary disruptions” include actions such as draining a node for maintenance (kubectl drain), cluster upgrades, or an administrator deleting Pods. The core purpose is to preserve availability by ensuring that a minimum number (or percentage) of replicas remain running and ready while those planned disruptions occur.
A PDB is typically defined with either minAvailable (e.g., “at least 3 Pods must remain available”) or maxUnavailable (e.g., “no more than 1 Pod can be unavailable”). Kubernetes uses this budget when performing eviction operations. If evicting a Pod would violate the PDB, the eviction is blocked (or delayed), which forces maintenance workflows to proceed more safely—either by draining more slowly, scaling up first, or scheduling maintenance in stages.
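A minimal PDB sketch (the app label and threshold are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2          # evictions are refused if fewer than 2 Pods would remain
  selector:
    matchLabels:
      app: web             # must match the Pods of the protected workload
```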
Why the other options are not correct: topology spread constraints (A) influence scheduling distribution across failure domains but don’t directly protect against voluntary disruptions. Taints and tolerations (C) control where Pods can schedule, not how many can be disrupted. Resource requests/limits (D) control CPU/memory allocation and do not guard availability during drains or deletions.
PDBs also work best when paired with Deployments/StatefulSets that maintain replicas and with readiness probes that accurately represent whether a Pod can serve traffic. PDBs do not prevent involuntary disruptions (node crashes), but they materially reduce risk during planned operations—exactly what the question is targeting.
=========
Which of the following options includes valid API versions?
alpha1v1, beta3v3, v2
alpha1, beta3, v2
v1alpha1, v2beta3, v2
v1alpha1, v2beta3, 2.0
Kubernetes API versions follow a consistent naming pattern that indicates both the version number and the stability level. Valid forms include stable versions like v1 and pre-release versions such as v1alpha1 or v1beta1. Option C lists only strings that follow this pattern (v1alpha1, v2beta3, v2), so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
Therefore, among the choices, C is the only option comprised of valid Kubernetes-style API version strings.
=========
What is the correct hierarchy of Kubernetes components?
Containers → Pods → Cluster → Nodes
Nodes → Cluster → Containers → Pods
Cluster → Nodes → Pods → Containers
Pods → Cluster → Containers → Nodes
The correct answer is C: Cluster → Nodes → Pods → Containers. This expresses the fundamental structural relationship in Kubernetes. A cluster is the overall system (control plane + nodes) that runs your workloads. Inside the cluster, you have nodes (worker machines—VMs or bare metal) that provide CPU, memory, storage, and networking. The scheduler assigns workloads to nodes.
Workloads are executed as Pods, which are the smallest deployable units Kubernetes schedules. Pods represent one or more containers that share networking (one Pod IP and port space) and can share storage volumes. Within each Pod are containers, which are the actual application processes packaged with their filesystem and runtime dependencies.
The other options are incorrect because they break these containment relationships. Containers do not contain Pods; Pods contain containers. Nodes do not exist “inside” Pods; Pods run on nodes. And the cluster is the top-level boundary that contains nodes and orchestrates Pods.
This hierarchy matters for troubleshooting and design. If you’re thinking about capacity, you reason at the node and cluster level (node pools, autoscaling, quotas). If you’re thinking about application scaling, you reason at the Pod level (replicas, HPA, readiness probes). If you’re thinking about process-level concerns, you reason at the container level (images, security context, runtime user, resources). Kubernetes intentionally uses this layered model so that scheduling and orchestration operate on Pods, while the container runtime handles container execution details.
So the accurate hierarchy from largest to smallest unit is: Cluster → Nodes → Pods → Containers, which corresponds to C.
=========
What is the common standard for Service Meshes?
Service Mesh Specification (SMS)
Service Mesh Technology (SMT)
Service Mesh Interface (SMI)
Service Mesh Function (SMF)
A widely referenced interoperability standard in the service mesh ecosystem is the Service Mesh Interface (SMI), so C is correct. SMI was created to provide a common set of APIs for basic service mesh capabilities—helping users avoid being locked into a single mesh implementation for core features. While service meshes differ in architecture and implementation (e.g., Istio, Linkerd, Consul), SMI aims to standardize how common behaviors are expressed.
In cloud native architecture, service meshes address cross-cutting concerns for service-to-service communication: traffic policies, observability, and security (mTLS, identity). Rather than baking these concerns into every application, a mesh typically introduces data-plane proxies and a control plane to manage policy and configuration. SMI sits above those implementations as a common API model.
The other options are not commonly used industry standards. You may see other efforts and emerging APIs, but among the listed choices, SMI is the recognized standard name that appears in cloud native discussions and tooling integrations.
Also note a practical nuance: even with SMI, not every mesh implements every SMI spec fully, and many users still adopt mesh-specific CRDs and APIs for advanced features. But for this question’s framing—“common standard”—Service Mesh Interface is the correct answer.
=========
What is the main purpose of a DaemonSet?
A DaemonSet ensures that all (or certain) nodes run a copy of a Pod.
A DaemonSet ensures that the kubelet is constantly up and running.
A DaemonSet ensures that there are as many pods running as specified in the replicas field.
A DaemonSet ensures that a process (agent) runs on every node.
The correct answer is A. A DaemonSet is a workload controller whose job is to ensure that a specific Pod runs on all nodes (or on a selected subset of nodes) in the cluster. This is fundamentally different from Deployments/ReplicaSets, which aim to maintain a certain replica count regardless of node count. With a DaemonSet, the number of Pods is implicitly tied to the number of eligible nodes: add a node, and the DaemonSet automatically schedules a Pod there; remove a node, and its Pod goes away.
DaemonSets are commonly used for node-level services and background agents: log collectors, node monitoring agents, storage daemons, CNI components, or security agents—anything where you want a presence on each node to interact with node resources. This aligns with option D’s phrasing (“agent on every node”), but option A is the canonical definition and is slightly broader because it covers “all or certain nodes” (via node selectors/affinity/taints-tolerations) and the fact that the unit is a Pod.
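A sketch of a log-collector DaemonSet (the image, labels, and node selector are examples):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      nodeSelector:
        kubernetes.io/os: linux        # restrict to Linux nodes
      containers:
      - name: collector
        image: fluent/fluent-bit:3.0   # example agent image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log               # read node-level logs
```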
Why the other options are wrong: DaemonSets do not “keep kubelet running” (B); kubelet is a node service managed by the OS. DaemonSets do not use a replicas field to maintain a specific count (C); that’s Deployment/ReplicaSet behavior.
Operationally, DaemonSets matter for cluster operations because they provide consistent node coverage and automatically react to node pool scaling. They also require careful scheduling constraints so they land only where intended (e.g., only Linux nodes, only GPU nodes). But the main purpose remains: ensure a copy of a Pod runs on each relevant node—option A.
=========
A Pod has been created, but when checked with kubectl get pods, the READY column shows 0/1. What Kubernetes feature causes this behavior?
Node Selector
Readiness Probes
DNS Policy
Security Contexts
The READY column in the output of kubectl get pods indicates how many containers in a Pod are currently considered ready to serve traffic, compared to the total number of containers defined in that Pod. A value of 0/1 means that the Pod has one container, but that container is not yet marked as ready. The Kubernetes feature responsible for determining this readiness state is the readiness probe.
Readiness probes are used by Kubernetes to decide when a container is ready to accept traffic. These probes can be configured to perform HTTP requests, execute commands, or check TCP sockets inside the container. If a readiness probe is defined and it fails, Kubernetes marks the container as not ready, even if the container is running successfully. As a result, the READY column will show 0/1, and the Pod will be excluded from Service load balancing until the probe succeeds.
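A sketch of an HTTP readiness probe (image, path, and port are placeholders): until /healthz answers successfully, the container is reported as not ready and the Pod shows 0/1.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example.com/web:1.0   # hypothetical image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz           # endpoint the kubelet polls
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```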
Option A (Node Selector) is incorrect because node selectors influence where a Pod is scheduled, not whether its containers are considered ready after startup. Option C (DNS Policy) is incorrect because DNS policies affect how name resolution works inside a Pod and have no direct impact on readiness reporting. Option D (Security Contexts) is incorrect because security contexts define security-related settings such as user IDs, capabilities, and privilege levels, but they do not control the READY status shown by kubectl.
Readiness probes are particularly important for applications that take time to initialize, load configuration, or warm up caches. By using readiness probes, Kubernetes ensures that traffic is only sent to containers that are fully prepared to handle requests. This improves reliability and prevents failed or premature connections.
According to Kubernetes documentation, a container without a readiness probe is considered ready by default once it is running. However, when a readiness probe is defined, its result directly controls the READY state. Therefore, the presence and behavior of readiness probes is the verified and correct reason why a Pod may show 0/1 in the READY column, making option B the correct answer.
=========
Which persona is normally responsible for defining, testing, and running an incident management process?
Site Reliability Engineers
Project Managers
Application Developers
Quality Engineers
The role most commonly responsible for defining, testing, and running an incident management process is Site Reliability Engineers (SREs), so A is correct. SRE is an operational engineering discipline focused on ensuring reliability, availability, and performance of services in production. Incident management is a core part of that mission: when outages or severe degradations occur, someone must coordinate response, restore service quickly, and then drive follow-up improvements to prevent recurrence.
In cloud native environments (including Kubernetes), incident response involves both technical and process elements. On the technical side, SREs ensure observability is in place—metrics, logs, traces, dashboards, and actionable alerts—so incidents can be detected and diagnosed quickly. They also validate operational readiness: runbooks, escalation paths, on-call rotations, and post-incident review practices. On the process side, SREs often establish severity classifications, response roles (incident commander, communications lead, subject matter experts), and “game day” exercises or simulated incidents to test preparedness.
Project managers may help coordinate schedules and communication for projects, but they are not typically the owners of operational incident response mechanics. Application developers are crucial participants during incidents, especially for debugging application-level failures, but they are not usually the primary maintainers of the incident management framework. Quality engineers focus on testing and quality assurance, and while they contribute to preventing defects, they are not usually the owners of real-time incident operations.
In Kubernetes specifically, incidents often span multiple layers: workload behavior, cluster resources, networking, storage, and platform dependencies. SREs are positioned to manage the cross-cutting operational view and to continuously improve reliability through error budgets, SLOs/SLIs, and iterative hardening. That’s why the correct persona is Site Reliability Engineers.
=========
Which option best represents the Pod Security Standards ordered from most permissive to most restrictive?
Privileged, Baseline, Restricted
Baseline, Privileged, Restricted
Baseline, Restricted, Privileged
Privileged, Restricted, Baseline
Pod Security Standards define a set of security profiles for Pods in Kubernetes, establishing clear expectations for how securely workloads should be configured. These standards were introduced to replace the deprecated PodSecurityPolicies (PSP) and are enforced through the Pod Security Admission controller. The standards are intentionally ordered from least restrictive to most restrictive to allow clusters to adopt security controls progressively.
The correct order from most permissive to most restrictive is: Privileged → Baseline → Restricted, which makes option A the correct answer.
The Privileged profile is the least restrictive. It allows Pods to run with elevated permissions, including privileged containers, host networking, host PID/IPC namespaces, and unrestricted access to host resources. This level is intended for trusted system components, infrastructure workloads, or cases where full access to the host is required. It offers maximum flexibility but minimal security enforcement.
The Baseline profile introduces a moderate level of security. It prevents common privilege escalation vectors, such as running privileged containers or using host namespaces, while still allowing typical application workloads to function without significant modification. Baseline is designed to be broadly compatible with most applications and serves as a reasonable default security posture for many clusters.
The Restricted profile is the most secure and restrictive. It enforces strong security best practices, such as requiring containers to run as non-root users, dropping unnecessary Linux capabilities, enforcing read-only root filesystems where possible, and preventing privilege escalation. Restricted is ideal for highly sensitive workloads or environments with strict security requirements, though it may require application changes to comply.
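With Pod Security Admission, the profile is selected per namespace through labels; a sketch (the namespace name is arbitrary):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    pod-security.kubernetes.io/enforce: baseline   # block Pods that violate baseline
    pod-security.kubernetes.io/warn: restricted    # warn about anything not restricted-compliant
```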
Options B, C, and D are incorrect because they misrepresent the intended progression of security strictness defined in Kubernetes documentation.
According to Kubernetes documentation, the Pod Security Standards are explicitly ordered to support gradual adoption: start permissive where necessary and move toward stronger security over time. Therefore, Privileged, Baseline, Restricted is the accurate and fully verified ordering, making option A the correct answer.
=========
What happens with a regular Pod running in Kubernetes when a node fails?
A new Pod with the same UID is scheduled to another node after a while.
A new, near-identical Pod but with different UID is scheduled to another node.
By default, a Pod can only be scheduled to the same node when the node fails.
A new Pod is scheduled on a different node only if it is configured explicitly.
B is correct: when a node fails, Kubernetes does not “move” the same Pod instance; instead, a new Pod object (new UID) is created to replace it—assuming the Pod is managed by a controller (Deployment/ReplicaSet, StatefulSet, etc.). A Pod is an API object with a unique identifier (UID) and is tightly associated with the node it’s scheduled to via spec.nodeName. If the node becomes unreachable, that original Pod cannot be restarted elsewhere because it was bound to that node.
Kubernetes’ high availability comes from controllers maintaining desired state. For example, a Deployment desires N replicas. If a node fails and the replicas on that node are lost, the controller will create replacement Pods, and the scheduler will place them onto healthy nodes. These replacement Pods will be “near-identical” in spec (same template), but they are still new instances with new UIDs and typically new IPs.
Why the other options are wrong:
A is incorrect because the UID does not remain the same—Kubernetes creates a new Pod object rather than reusing the old identity.
C is incorrect; pods are not restricted to the same node after failure. The whole point of orchestration is to reschedule elsewhere.
D is incorrect; rescheduling does not require special explicit configuration for typical controller-managed workloads. The controller behavior is standard. (If it’s a bare Pod without a controller, it will not be recreated automatically.)
This also ties to the difference between “regular Pod” vs controller-managed workloads: a standalone Pod is not self-healing by itself, while a Deployment/ReplicaSet provides that resilience. In typical production design, you run workloads under controllers specifically so node failure triggers replacement and restores replica count.
Therefore, the correct outcome is B.
=========
In Kubernetes, what is the primary purpose of using annotations?
To control the access permissions for users and service accounts.
To provide a way to attach metadata to objects.
To specify the deployment strategy for applications.
To define the specifications for resource limits and requests.
Annotations in Kubernetes are a flexible mechanism for attaching non-identifying metadata to Kubernetes objects. Their primary purpose is to store additional information that is not used for object selection or grouping, which makes Option B the correct answer.
Unlike labels, which are designed to be used for selection, filtering, and grouping of resources (for example, by Services or Deployments), annotations are intended purely for informational or auxiliary purposes. They allow users, tools, and controllers to store arbitrary key–value data on objects without affecting Kubernetes’ core behavior. This makes annotations ideal for storing data such as build information, deployment timestamps, commit hashes, configuration hints, or ownership details.
Annotations are commonly consumed by external tools and controllers rather than by the Kubernetes scheduler or control plane for decision-making. For example, ingress controllers, service meshes, monitoring agents, and CI/CD systems often read annotations to enable or customize specific behaviors. Because annotations are not used for querying or selection, Kubernetes places no strict size or structure requirements on their values beyond general object size limits.
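A sketch contrasting labels and annotations on a Pod (the annotation keys and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web                                        # labels: used for selection
  annotations:
    example.com/git-commit: "9f2c1ab"               # annotations: free-form metadata
    example.com/build-timestamp: "2024-05-01T12:00:00Z"
spec:
  containers:
  - name: web
    image: nginx:1.27
```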
Option A is incorrect because access permissions are managed using Role-Based Access Control (RBAC), which relies on roles, role bindings, and service accounts—not annotations. Option C is incorrect because deployment strategies (such as RollingUpdate or Recreate) are defined in the specification of workload resources like Deployments, not through annotations. Option D is also incorrect because resource limits and requests are specified explicitly in the Pod or container spec under the resources field.
In summary, annotations provide a powerful and extensible way to associate metadata with Kubernetes objects without influencing scheduling, selection, or identity. They support integration, observability, and operational tooling while keeping core Kubernetes behavior predictable and stable. This design intent is clearly documented in Kubernetes metadata concepts, making Option B the correct and verified answer.
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
OpenID Connect
OpenID Container
OpenID Cluster
OpenID CNCF
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
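For example (a sketch; the group name comes from the identity provider and may carry a configured prefix), an RBAC binding can grant a whole OIDC group read-only access:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-view
subjects:
- kind: Group
  name: platform-team            # value taken from the token's groups claim
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                     # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```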
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
=========
What is the practice of bringing financial accountability to the variable spend model of cloud resources?
FaaS
DevOps
CloudCost
FinOps
The practice of bringing financial accountability to cloud spending—where costs are variable and usage-based—is called FinOps, so D is correct. FinOps (Financial Operations) is an operating model and culture that helps organizations manage cloud costs by connecting engineering, finance, and business teams. Because cloud resources can be provisioned quickly and billed dynamically, traditional budgeting approaches often fail to keep pace. FinOps addresses this by introducing shared visibility, governance, and optimization processes that enable teams to make cost-aware decisions while still moving fast.
In Kubernetes and cloud-native architectures, variable spend shows up in many ways: autoscaling node pools, over-provisioned resource requests, idle clusters, persistent volumes, load balancers, egress traffic, managed services, and observability tooling. FinOps practices encourage tagging/labeling for cost attribution, defining cost KPIs, enforcing budget guardrails, and continuously optimizing usage (right-sizing resources, scaling policies, turning off unused environments, and selecting cost-effective architectures).
Why the other options are incorrect: FaaS (Function as a Service) is a compute model (serverless), not a financial accountability practice. DevOps is a cultural and technical practice focused on collaboration and delivery speed, not specifically cloud cost accountability (though it can complement FinOps). CloudCost is not a widely recognized standard term in the way FinOps is.
In practice, FinOps for Kubernetes often involves improving resource efficiency: aligning requests/limits with real usage, using HPA/VPA appropriately, selecting instance types that match workload profiles, managing cluster autoscaler settings, and allocating shared platform costs to teams via labels/namespaces. It also includes forecasting and anomaly detection, because cloud-native spend can spike quickly due to misconfigurations (e.g., runaway autoscaling or excessive log ingestion).
So, the correct term for financial accountability in cloud variable spend is FinOps (D).
=========
A platform engineer is tasked with ensuring that an application can securely access the Kubernetes API without using a developer’s personal credentials. What is the correct way to configure this?
Create a ServiceAccount and bind it to the Pod for API access.
Generate a certificate for the application to access the API.
Use a developer’s kubeconfig file with restricted permissions.
Set the application to use the default ServiceAccount in the namespace.
In Kubernetes, applications that need to interact with the Kubernetes API should never use a developer’s personal credentials. Instead, Kubernetes provides a built-in, secure, and auditable mechanism for workload authentication and authorization using ServiceAccounts. Creating a dedicated ServiceAccount and binding it to the Pod is the correct and recommended approach, making option A the correct answer.
A ServiceAccount represents an identity for processes running inside Pods. When a Pod is configured to use a specific ServiceAccount, Kubernetes automatically injects a short-lived authentication token into the Pod. This token is securely mounted and can be used by the application to authenticate to the Kubernetes API server. Access to API resources is then controlled using RBAC (Role-Based Access Control) by binding roles or cluster roles to the ServiceAccount, ensuring the application has only the permissions it needs—following the principle of least privilege.
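A minimal sketch of this pattern (the names app-reader, demo, and the image are hypothetical) creates the ServiceAccount, grants it narrowly scoped read access with a Role and RoleBinding, and references it from the Pod spec:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: app-reader
      namespace: demo
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: read-pods
      namespace: demo
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: app-reader-read-pods
      namespace: demo
    subjects:
    - kind: ServiceAccount
      name: app-reader
      namespace: demo
    roleRef:
      kind: Role
      name: read-pods
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
      namespace: demo
    spec:
      serviceAccountName: app-reader        # a token for this identity is mounted into the Pod
      containers:
      - name: app
        image: registry.example.com/my-app:1.0   # hypothetical image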
Option B is incorrect because manually generating certificates for application access is not the standard or recommended method for in-cluster authentication. Kubernetes manages ServiceAccount tokens automatically and rotates them as needed, providing a simpler and more secure solution. Option C is incorrect because using a developer’s kubeconfig file inside an application introduces serious security risks and violates best practices by coupling workloads to personal credentials. Option D is also incorrect because relying on the default ServiceAccount is discouraged; it often has no permissions or, in some cases, broader permissions than intended. Creating a dedicated ServiceAccount provides clearer security boundaries and auditability.
Using ServiceAccounts integrates cleanly with Kubernetes’ authentication and authorization model and is explicitly designed for applications and controllers running inside the cluster. This approach ensures secure API access, centralized permission management, and operational consistency across environments.
Therefore, the correct and verified answer is Option A: Create a ServiceAccount and bind it to the Pod for API access.
What does CNCF stand for?
Cloud Native Community Foundation
Cloud Native Computing Foundation
Cloud Neutral Computing Foundation
Cloud Neutral Community Foundation
CNCF stands for the Cloud Native Computing Foundation, making B correct. CNCF is the foundation that hosts and sustains many cloud-native open source projects, including Kubernetes, and provides governance, neutral stewardship, and community infrastructure to help projects grow and remain vendor-neutral.
CNCF’s scope includes not only Kubernetes but also a broad ecosystem of projects across observability, networking, service meshes, runtime security, CI/CD, and application delivery. The foundation defines processes for project incubation and graduation, promotes best practices, organizes community events, and supports interoperability and adoption through reference architectures and education.
In the Kubernetes context, CNCF’s role matters because Kubernetes is a massive multi-vendor project. Neutral governance reduces the risk that any single company can unilaterally control direction. This fosters broad contribution and adoption across cloud providers and enterprises. CNCF also supports the broader “cloud native” definition, often associated with containerization, microservices, declarative APIs, automation, and resilience principles.
The incorrect options are close-sounding but not accurate expansions. “Cloud Native Community Foundation” and the “Cloud Neutral …” variants are not the recognized meaning. The correct official name is Cloud Native Computing Foundation.
So, the verified answer is B, and understanding CNCF helps connect Kubernetes to its broader ecosystem of standardized, interoperable cloud-native tooling.
=========
Which of the following options is true about considerations for large Kubernetes clusters?
Kubernetes supports up to 1000 nodes and recommends no more than 1000 containers per node.
Kubernetes supports up to 5000 nodes and recommends no more than 500 Pods per node.
Kubernetes supports up to 5000 nodes and recommends no more than 110 Pods per node.
Kubernetes supports up to 50 nodes and recommends no more than 1000 containers per node.
The correct answer is C: Kubernetes scalability guidance commonly cites support for up to 5000 nodes and recommends no more than 110 Pods per node (the same guidance also caps clusters at roughly 150,000 total Pods and 300,000 total containers). The "110 Pods per node" recommendation corresponds to the kubelet's default maxPods limit and is a practical ceiling based on kubelet performance, networking and service routing behavior, and node-level resource management. It also interacts with IP planning: each node's Pod CIDR must provide comfortably more addresses than the per-node Pod maximum so that IPs are not reused immediately as Pods churn.
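For reference, the per-node Pod ceiling is a kubelet setting. A minimal KubeletConfiguration fragment showing the default value might look like the sketch below (how the file is supplied to the kubelet depends on your installer or managed service):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Upper bound on the number of Pods the kubelet will run on this node (110 is the default).
    maxPods: 110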
Why the other options are incorrect: A and D reference “containers per node,” which is not the standard sizing guidance (Kubernetes typically discusses Pods per node). B’s “500 Pods per node” is far above typical recommended limits for many environments and would stress IPAM, kubelet, and node resources significantly.
In large clusters, several considerations matter beyond the headline limits: API server and etcd performance, watch/list traffic, controller reconciliation load, CoreDNS scaling, and metrics/observability overhead. You must also plan for IP addressing (cluster CIDR sizing), node sizes (CPU/memory), and autoscaling behavior. On each node, kubelet and the container runtime must handle churn (starts/stops), logging, and volume operations. Networking implementations (kube-proxy, eBPF dataplanes) also have scaling characteristics.
Kubernetes provides patterns to keep systems stable at scale: request/limit discipline, Pod disruption budgets, topology spread constraints, namespaces and quotas, and careful observability sampling. But the exam-style fact this question targets is the published scalability figure and per-node Pod recommendation.
Therefore, the verified true statement among the options is C.
=========
In the Kubernetes platform, which component is responsible for running containers?
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It’s important to be precise: the component that “runs containers” is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore. It never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a “pod sandbox” and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
So while “the container runtime” is the most general answer, the question’s option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
=========
The Kubernetes project's work is carried out primarily by SIGs. What does SIG stand for?
Special Interest Group
Software Installation Guide
Support and Information Group
Strategy Implementation Group
In Kubernetes governance and project structure, SIG stands for Special Interest Group, so A is correct. Kubernetes is a large open source project under the Cloud Native Computing Foundation (CNCF), and its work is organized into groups that focus on specific domains—such as networking, storage, node, scheduling, security, docs, testing, and many more. SIGs provide a scalable way to coordinate contributors, prioritize work, review design proposals (KEPs), triage issues, and manage releases in their area.
Each SIG typically has regular meetings, mailing lists, chat channels, and maintainers who guide the direction of that part of the project. For example, SIG Network focuses on Kubernetes networking architecture and components, SIG Storage on storage APIs and CSI integration, and SIG Scheduling on scheduler behavior and extensibility. This structure helps Kubernetes evolve while maintaining quality, review rigor, and community-driven decision making.
The other options are not part of Kubernetes project terminology. “Software Installation Guide” and the others might sound plausible, but they are not how Kubernetes defines SIGs.
Understanding SIGs matters operationally because many Kubernetes features and design changes originate from SIGs. When you read Kubernetes enhancement proposals, release notes, or documentation, you’ll often see SIG ownership and references. In short, SIGs are the primary organizational units for Kubernetes engineering and stewardship, and SIG = Special Interest Group.
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
9 months
24 months
12 months
6 months
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, Kubernetes commits to supporting that API for a minimum period after it is deprecated. The correct minimum in this question is 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases” as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
The 6- and 9-month options would be too short for many enterprises to coordinate changes across multiple teams and environments. 24 months may be true for some ecosystems, but the minimum Kubernetes states for GA APIs in this exam-style framing is 12 months (formally, 12 months or 3 releases, whichever is longer). The key operational takeaway is: don't ignore deprecation notices—they're your clock for migration planning. Treat API version upgrades as part of routine cluster lifecycle hygiene to avoid being blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
=========
Which of the following options include only mandatory fields to create a Kubernetes object using a YAML file?
apiVersion, template, kind, status
apiVersion, metadata, status, spec
apiVersion, template, kind, spec
apiVersion, metadata, kind, spec
D is correct: the mandatory top-level fields for creating a Kubernetes object manifest are apiVersion, kind, metadata, and (for most objects you create) spec. These fields establish what the object is and what you want Kubernetes to do with it.
apiVersion tells Kubernetes which API group/version schema to use (e.g., apps/v1, v1). This determines valid fields and behavior.
kind identifies the resource type (e.g., Pod, Deployment, Service).
metadata contains identifying information like name, namespace, and labels/annotations used for organization, selection, and automation.
spec describes the desired state. Controllers and the kubelet reconcile actual state to match spec.
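A minimal sketch of a manifest containing exactly these four top-level fields (the name and image tag are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello              # identifying information (name, optionally namespace/labels)
    spec:
      containers:              # desired state: one container running this image
      - name: hello
        image: nginx:1.25      # hypothetical image tag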
Why other choices are wrong:
status is not a mandatory input field. It’s generally written by Kubernetes controllers and reflects observed state (conditions, readiness, assigned node, etc.). Users typically do not set status when creating objects.
template is not a universal top-level field. It exists inside some resources (notably Deployment.spec.template), but it’s not a required top-level field across Kubernetes objects.
It’s true that some resources can be created without a spec (or with minimal fields), but in the exam-style framing—“mandatory fields… using a YAML file”—the canonical expected set is exactly the four in D. This aligns with how Kubernetes documentation and examples present manifests: identify the API schema and kind, give object metadata, and declare desired state.
Therefore, apiVersion + metadata + kind + spec is the only option that includes only the mandatory fields, making D the verified correct answer.
=========
Which command lists the running containers in the current Kubernetes namespace?
kubectl get pods
kubectl ls
kubectl ps
kubectl show pods
The correct answer is A: kubectl get pods. Kubernetes does not manage “containers” as standalone top-level objects; the primary schedulable unit is the Pod, and containers run inside Pods. Therefore, the practical way to list what’s running in a namespace is to list the Pods in that namespace. kubectl get pods shows Pods and their readiness, status, restarts, and age—giving you the canonical view of running workloads.
If you need the container-level details (images, container names), you typically use additional commands and output formatting:
kubectl describe pod <pod-name>
kubectl get pods -o jsonpath=... or -o wide to surface more fields
kubectl get pods -o=json to inspect .spec.containers and .status.containerStatuses
But among the provided options, kubectl get pods is the only real kubectl command that lists the running workload objects in the current namespace.
The other options are not valid kubectl subcommands: kubectl ls, kubectl ps, and kubectl show pods are not standard Kubernetes CLI operations. Kubernetes intentionally centers on the API resource model, so listing resources uses kubectl get <resource-type>.
So while the question says “running containers,” the Kubernetes-correct interpretation is “containers in running Pods,” and the appropriate listing command in the namespace is kubectl get pods, option A.
=========
In Kubernetes, which abstraction defines a logical set of Pods and a policy by which to access them?
Service Account
NetworkPolicy
Service
Custom Resource Definition
The correct answer is C: Service. A Kubernetes Service is an abstraction that provides stable access to a logical set of Pods. Pods are ephemeral: they can be rescheduled, recreated, and scaled, which changes their IP addresses over time. A Service solves this by providing a stable identity—typically a virtual IP (ClusterIP) and a DNS name—and a traffic-routing policy that directs requests to the current set of backend Pods.
Services commonly select Pods using labels via a selector (e.g., app=web). Kubernetes then maintains the backend endpoint list (Endpoints/EndpointSlices). The cluster networking layer routes traffic sent to the Service IP/port to one of the Pod endpoints, enabling load distribution across replicas. This is fundamental to microservices architectures: clients call the Service name, not individual Pods.
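A minimal sketch of a Service selecting Pods labeled app=web (the name and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web           # logical set of Pods: all Pods carrying this label
      ports:
      - port: 80           # port clients use when calling the Service
        targetPort: 8080   # port the backend Pods actually listen on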
Why the other options are incorrect:
A ServiceAccount is an identity for Pods to authenticate to the Kubernetes API; it doesn’t define a set of Pods nor traffic access policy.
A NetworkPolicy defines allowed network flows (who can talk to whom) but does not provide stable addressing or load-balanced access to Pods. It is a security policy, not an exposure abstraction.
A CustomResourceDefinition extends the Kubernetes API with new resource types; it’s unrelated to service discovery and traffic routing for a set of Pods.
Understanding Services is core Kubernetes fundamentals: they decouple backend Pod churn from client connectivity. Services also integrate with different exposure patterns via type (ClusterIP, NodePort, LoadBalancer, ExternalName) and can be paired with Ingress/Gateway for HTTP routing. But the essential definition in the question—“logical set of Pods and a policy to access them”—is exactly the textbook description of a Service.
Therefore, the verified correct answer is C.
=========
Which tool is used to streamline installing and managing Kubernetes applications?
apt
helm
service
brew
Helm is the Kubernetes package manager used to streamline installing and managing applications, so B is correct. Helm packages Kubernetes resources into charts, which contain templates, default values, and metadata. When you install a chart, Helm renders templates into concrete manifests and applies them to the cluster. Helm also tracks a “release,” enabling upgrades, rollbacks, and consistent lifecycle operations across environments.
This is why Helm is widely used for complex applications that require multiple Kubernetes objects (Deployments/StatefulSets, Services, Ingresses, ConfigMaps, RBAC, CRDs). Rather than manually maintaining many YAML files per environment, teams can parameterize configuration with values and reuse the same chart across dev/stage/prod with different overrides.
Option A (apt) and option D (brew) are OS package managers (Debian/Ubuntu and macOS/Linuxbrew respectively), not Kubernetes application managers. Option C (service) is a Linux service manager command pattern and not relevant here.
In cloud-native delivery pipelines, Helm often integrates with GitOps and CI/CD: the pipeline builds an image, updates chart values (image tag/digest), and deploys via Helm or via GitOps controllers that render/apply Helm charts. Helm also supports chart repositories and versioning, making it easier to standardize deployments and manage dependencies.
So, the verified tool for streamlined Kubernetes app install/management is Helm (B).
=========
When modifying an existing Helm release to apply new configuration values, which approach is the best practice?
Use helm upgrade with the --set flag to apply new values while preserving the release history.
Use kubectl edit to modify the live release configuration and apply the updated resource values.
Delete the release and reinstall it with the desired configuration to force an updated deployment.
Edit the Helm chart source files directly and reapply them to push the updated configuration values.
Helm is a package manager for Kubernetes that provides a declarative and versioned approach to application deployment and lifecycle management. When updating configuration values for an existing Helm release, the recommended and best-practice approach is to use helm upgrade, optionally with the --set flag or a values file, to apply the new configuration while preserving the release’s history.
Option A is correct because helm upgrade updates an existing release in a controlled and auditable manner. Helm stores each revision of a release, allowing teams to inspect past configurations and roll back to a previous known-good state if needed. Using --set enables quick overrides of individual values, while using -f values.yaml supports more complex or repeatable configurations. This approach aligns with GitOps and infrastructure-as-code principles, ensuring consistency and traceability.
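As an illustration of the values-file approach, a small override file could be passed to helm upgrade with -f. The keys below (replicaCount, image.repository, image.tag) are conventional but entirely chart-specific, so treat them as hypothetical placeholders for whatever your chart actually exposes:

    # values-prod.yaml -- hypothetical overrides for a chart that exposes these keys
    replicaCount: 3
    image:
      repository: registry.example.com/my-app
      tag: "1.4.2"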
Option B is incorrect because modifying Helm-managed resources directly with kubectl edit breaks Helm’s state tracking. Helm maintains a record of the desired state for each release, and manual edits can cause configuration drift, making future upgrades unpredictable or unsafe. Kubernetes documentation and Helm guidance strongly discourage modifying Helm-managed resources outside of Helm itself.
Option C is incorrect because deleting and reinstalling a release discards the release history and may cause unnecessary downtime or data loss, especially for stateful applications. Helm’s upgrade mechanism is specifically designed to avoid this disruption while still applying configuration changes safely.
Option D is also incorrect because editing chart source files directly and reapplying them bypasses Helm’s release management model. While chart changes are appropriate during development, applying them directly to a running release without helm upgrade undermines versioning, rollback, and repeatability.
According to Helm documentation, helm upgrade is the standard and supported method for modifying deployed applications. It ensures controlled updates, preserves operational history, and enables safe rollbacks, making option A the correct and fully verified best practice.
Which type of Service requires manual creation of Endpoints?
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
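A minimal sketch of this pattern, pointing an in-cluster Service at a hypothetical external database address:

    apiVersion: v1
    kind: Service
    metadata:
      name: external-db
    spec:
      ports:               # note: no selector, so Kubernetes will not populate endpoints
      - port: 5432
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-db    # must match the Service name to be associated with it
    subsets:
    - addresses:
      - ip: 203.0.113.10   # hypothetical external backend IP
      ports:
      - port: 5432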
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
=========
What does the livenessProbe in Kubernetes help detect?
When a container is ready to serve traffic.
When a container has started successfully.
When a container exceeds resource limits.
When a container is unresponsive.
The liveness probe in Kubernetes is designed to detect whether a container is still running correctly or has entered a failed or unresponsive state. Its primary purpose is to determine whether a container should be restarted. When a liveness probe fails repeatedly, Kubernetes assumes the container is unhealthy and automatically restarts it to restore normal operation.
Option D correctly describes this behavior. Liveness probes are used to identify situations where an application is running but no longer functioning as expected—for example, a deadlock, infinite loop, or hung process that cannot recover on its own. In such cases, restarting the container is often the most effective remediation, and Kubernetes handles this automatically through the liveness probe mechanism.
Option A is incorrect because readiness probes—not liveness probes—determine whether a container is ready to receive traffic. A container can be alive but not ready, such as during startup or temporary maintenance. Option B is incorrect because startup success is handled by startup probes, which are specifically designed to manage slow-starting applications and delay liveness and readiness checks until initialization is complete. Option C is incorrect because exceeding resource limits is managed by the container runtime and kubelet (for example, OOMKills), not by probes.
Liveness probes can be implemented using HTTP requests, TCP socket checks, or command execution inside the container. If the probe fails beyond a configured threshold, Kubernetes restarts the container according to the Pod’s restart policy. This self-healing behavior is a core feature of Kubernetes and contributes significantly to application reliability.
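A minimal sketch of an HTTP liveness probe on a container (the path, port, timings, and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # hypothetical image
        livenessProbe:
          httpGet:
            path: /healthz        # health endpoint the application exposes
            port: 8080
          initialDelaySeconds: 10 # give the app time to start before probing
          periodSeconds: 10
          failureThreshold: 3     # restart the container after 3 consecutive failures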
Kubernetes documentation emphasizes using liveness probes carefully, as misconfiguration can cause unnecessary restarts. However, when used correctly, they provide a powerful way to automatically recover from application-level failures that Kubernetes cannot otherwise detect.
In summary, the liveness probe’s role is to detect when a container is unresponsive and needs to be restarted, making option D the correct and fully verified answer.
How can you extend the Kubernetes API?
Adding a CustomResourceDefinition or implementing an aggregation layer.
Adding a new version of a resource, for instance v4beta3.
With the command kubectl extend api, logged in as an administrator.
Adding the desired API object as a kubelet parameter.
A is correct: Kubernetes’ API can be extended by adding CustomResourceDefinitions (CRDs) and/or by implementing the API Aggregation Layer. These are the two canonical extension mechanisms.
CRDs let you define new resource types (new kinds) that the Kubernetes API server stores in etcd and serves like native objects. You typically pair a CRD with a controller/operator that watches those custom objects and reconciles real resources accordingly. This pattern is foundational to the Kubernetes ecosystem (many popular add-ons install CRDs).
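A minimal sketch of a CRD registering a hypothetical widgets.example.com resource type:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com     # must be <plural>.<group>
    spec:
      group: example.com
      names:
        kind: Widget
        plural: widgets
        singular: widget
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:          # structural schema validating custom objects
            type: object
            properties:
              spec:
                type: object
                properties:
                  size:
                    type: integer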
The aggregation layer allows you to add entire API services (aggregated API servers) that serve additional endpoints under the Kubernetes API. This is used when you want custom API behavior, custom storage, or specialized semantics beyond what CRDs provide (or when implementing APIs like metrics APIs historically).
Why the other answers are wrong:
B is not how API extension works. You don’t “extend the API” by inventing new versions like v4beta3; versions are defined and implemented by API servers/controllers, not by users arbitrarily.
C is fictional; there is no standard kubectl extend api command.
D is also incorrect; kubelet parameters configure node agent behavior, not API server types and discovery.
So, the verified ways to extend Kubernetes’ API surface are CRDs and API aggregation, which is option A.
=========
What is the primary mechanism to identify grouped objects in a Kubernetes cluster?
Custom Resources
Labels
Label Selector
Pod
Kubernetes groups and organizes objects primarily using labels, so B is correct. Labels are key-value pairs attached to objects (Pods, Deployments, Services, Nodes, etc.) and are intended to be used for identifying, selecting, and grouping resources in a flexible, user-defined way.
Labels enable many core Kubernetes behaviors. For example, a Service selects the Pods that should receive traffic by matching a label selector against Pod labels. A Deployment’s ReplicaSet similarly uses label selectors to determine which Pods belong to the replica set. Operators and platform tooling also rely on labels to group resources by application, environment, team, or cost center. This is why labeling is considered foundational Kubernetes hygiene: consistent labels make automation, troubleshooting, and governance easier.
A “label selector” (option C) is how you query/group objects based on labels, but the underlying primary mechanism is still the labels themselves. Without labels applied to objects, selectors have nothing to match. Custom Resources (option A) extend the API with new kinds, but they are not the primary grouping mechanism across the cluster. “Pod” (option D) is a workload unit, not a grouping mechanism.
Practically, Kubernetes recommends common label keys like app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/part-of to standardize grouping. Those conventions improve interoperability with dashboards, GitOps tooling, and policy engines.
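For example, a metadata block using the recommended label keys (the values are hypothetical):

    metadata:
      labels:
        app.kubernetes.io/name: checkout
        app.kubernetes.io/instance: checkout-prod
        app.kubernetes.io/part-of: web-shop
        app.kubernetes.io/managed-by: helm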
So, when the question asks for the primary mechanism used to identify grouped objects in Kubernetes, the most accurate answer is Labels (B)—they are the universal metadata primitive used to group and select resources.
=========
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
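A minimal sketch of an ApplicationSet using the cluster generator (the repository URL, path, and names are hypothetical, and the default {{...}} parameter syntax is assumed):

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: my-microservice
      namespace: argocd
    spec:
      generators:
      - clusters: {}                        # one Application per cluster registered in Argo CD
      template:
        metadata:
          name: 'my-microservice-{{name}}'  # cluster name injected by the generator
        spec:
          project: default
          source:
            repoURL: https://github.com/example/my-microservice-manifests   # hypothetical repo
            targetRevision: main
            path: deploy
          destination:
            server: '{{server}}'            # API server URL of each registered cluster
            namespace: my-microservice
          syncPolicy:
            automated: {}                   # keep each cluster reconciled automatically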
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
Manual reclamation policy of a PV resource is known as:
claimRef
Delete
Retain
Recycle
The correct answer is C: Retain. In Kubernetes persistent storage, a PersistentVolume (PV) has a persistentVolumeReclaimPolicy that determines what happens to the underlying storage asset after its PersistentVolumeClaim (PVC) is deleted. The reclaim policy options historically include Delete and Retain (and Recycle, which is deprecated/removed in many modern contexts). “Manual reclamation” refers to the administrator having to manually clean up and/or rebind the storage after the claim is released—this behavior corresponds to Retain.
With Retain, when the PVC is deleted, the PV moves to a “Released” state, but the actual storage resource (cloud disk, NFS path, etc.) is not deleted automatically. Kubernetes will not automatically make that PV available for a new claim until an administrator takes action—typically cleaning the data, removing the old claim reference, and/or creating a new PV/PVC binding flow. This is important for data safety: you don’t want to automatically delete sensitive or valuable data just because a claim was removed.
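A minimal sketch of a PV declaring the Retain policy (the NFS server, path, and capacity are hypothetical):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain   # underlying storage survives PVC deletion
      nfs:
        server: nfs.example.com               # hypothetical NFS backend
        path: /exports/data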
By contrast, Delete means Kubernetes (via the storage provisioner/CSI driver) will delete the underlying storage asset when the claim is deleted—useful for dynamic provisioning and disposable environments. Recycle used to scrub the volume contents and make it available again, but it’s not the recommended modern approach and has been phased out in favor of dynamic provisioning and explicit workflows.
So, the policy that implies manual intervention and manual cleanup/reuse is Retain, which is option C.
=========
Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. Which open-source cloud native storage orchestrator automates deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services?
CubeFS
OpenEBS
Rook
MinIO
Rook is the open-source, cloud-native storage orchestrator specifically designed to automate the deployment, configuration, and lifecycle management of Ceph within Kubernetes environments. Its primary goal is to transform complex, traditionally manual storage systems like Ceph into Kubernetes-native services that are easy to operate and highly resilient.
Ceph itself is a mature and powerful distributed storage platform that supports block storage (RBD), object storage (RGW), and shared filesystems (CephFS). However, operating Ceph directly requires deep expertise, careful configuration, and continuous operational management. Rook addresses this challenge by running Ceph as a set of Kubernetes-managed components and exposing storage capabilities through Kubernetes Custom Resource Definitions (CRDs). This allows administrators to declaratively define storage clusters, pools, filesystems, and object stores using familiar Kubernetes patterns.
Rook continuously monitors the health of the Ceph cluster and takes automated actions to maintain the desired state. If a Ceph daemon fails or a node becomes unavailable, Rook works with Kubernetes scheduling and Ceph’s internal replication mechanisms to ensure data durability and service continuity. This enables self-healing behavior. Scaling storage capacity is also simplified—adding nodes or disks allows Rook and Ceph to automatically rebalance data, providing self-scaling capabilities without manual intervention.
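As a rough sketch of this declarative pattern, a CephCluster custom resource might look like the following; the field values are illustrative and the exact spec depends on the Rook version and the examples you follow:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: quay.io/ceph/ceph:v18     # hypothetical Ceph image tag
      dataDirHostPath: /var/lib/rook     # where Ceph daemons keep local state on each node
      mon:
        count: 3                         # three monitors for quorum
      storage:
        useAllNodes: true
        useAllDevices: true              # let Rook consume available raw devices for OSDs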
The other options are incorrect for this use case. CubeFS is a distributed filesystem but is not a Ceph orchestrator. OpenEBS focuses on container-attached storage and local or replicated volumes rather than managing Ceph itself. MinIO is an object storage server compatible with S3 APIs, but it does not orchestrate Ceph or provide block and filesystem services.
Therefore, the correct and verified answer is Option C: Rook, which is the officially recognized Kubernetes-native orchestrator for Ceph, delivering automated, resilient, and scalable storage management aligned with cloud-native principles.
The cloud native architecture centered around microservices provides a strong system that ensures ______________.
fallback
resiliency
failover
high reachability
The best answer is B (resiliency). A microservices-centered cloud-native architecture is designed to build systems that continue to operate effectively under change and failure. “Resiliency” is the umbrella concept: the ability to tolerate faults, recover from disruptions, and maintain acceptable service levels through redundancy, isolation, and automated recovery.
Microservices help resiliency by reducing blast radius. Instead of one monolith where a single defect can take down the entire application, microservices separate concerns into independently deployable components. Combined with Kubernetes, you get resiliency mechanisms such as replication (multiple Pod replicas), self-healing (restart and reschedule on failure), rolling updates, health probes, and service discovery/load balancing. These enable the platform to detect and replace failing instances automatically, and to keep traffic flowing to healthy backends.
Options C (failover) and A (fallback) are resiliency techniques but are narrower terms. Failover usually refers to switching to a standby component when a primary fails; fallback often refers to degraded behavior (cached responses, reduced features). Both can exist in microservice systems, but the broader architectural guarantee microservices aim to support is resiliency overall. Option D (“high reachability”) is not the standard term used in cloud-native design and doesn’t capture the intent as precisely as resiliency.
In practice, achieving resiliency also requires good observability and disciplined delivery: monitoring/alerts, tracing across service boundaries, circuit breakers/timeouts/retries, and progressive delivery patterns. Kubernetes provides platform primitives, but resilient microservices also need careful API design and failure-mode thinking.
So the intended and verified completion is resiliency, option B.
=========
What’s the difference between a security profile and a security context?
Security Contexts configure Clusters and Namespaces at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Contexts configure Pods and Containers at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Profiles configure Pods and Containers at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
Security Profiles configure Clusters and Namespaces at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
The correct answer is B. In Kubernetes, a securityContext is part of the Pod and container specification that configures runtime security settings for that workload—things like runAsUser, runAsNonRoot, Linux capabilities, readOnlyRootFilesystem, allowPrivilegeEscalation, SELinux options, seccomp profile selection, and filesystem group (fsGroup). These settings directly affect how the Pod’s containers run on the node.
A security profile, in contrast, is a higher-level policy/standard enforced by the cluster control plane (typically via admission control) to ensure workloads meet required security constraints. In modern Kubernetes, this concept aligns with mechanisms like Pod Security Standards (Privileged, Baseline, Restricted) enforced through Pod Security Admission. The “profile” defines what is allowed or forbidden (for example, disallow privileged containers, disallow hostPath mounts, require non-root, restrict capabilities). The control plane enforces these constraints by validating or rejecting Pod specs that do not comply—ensuring consistent security posture across namespaces and teams.
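To make the distinction concrete, the first snippet below is a workload-level securityContext declared in a Pod spec, and the second is a namespace label that opts the namespace into the Restricted Pod Security Standard via Pod Security Admission (names and the image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app
    spec:
      securityContext:                  # Pod-level runtime settings
        runAsNonRoot: true
        fsGroup: 2000
      containers:
      - name: app
        image: registry.example.com/app:1.0   # hypothetical image
        securityContext:                # container-level runtime settings
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a
      labels:
        pod-security.kubernetes.io/enforce: restricted   # control-plane guardrail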
Option A and D are incorrect because security contexts do not “configure clusters and namespaces at runtime”; security contexts apply to Pods/containers. Option C reverses the relationship: security profiles don’t configure Pods at runtime; they constrain what security context settings (and other fields) are acceptable.
Practically, you can think of it as:
SecurityContext = workload-level configuration knobs (declared in manifests, applied at runtime).
SecurityProfile/Standards = cluster-level guardrails that determine which knobs/settings are permitted.
This separation supports least privilege: developers declare needed runtime settings, and cluster governance ensures those settings stay within approved boundaries. Therefore, B is the verified answer.
=========
How do you deploy a workload to Kubernetes without additional tools?
Create a Bash script and run it on a worker node.
Create a Helm Chart and install it with helm.
Create a manifest and apply it with kubectl.
Create a Python script and run it with kubectl.
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <filename> submits that desired state to the API server, after which controllers create and continuously reconcile the corresponding resources.
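A minimal sketch of such a Deployment manifest (the name, label, image, and port are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                # desired number of Pod replicas
      selector:
        matchLabels:
          app: web
      template:                  # Pod template the ReplicaSet stamps out
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.0   # hypothetical image
            ports:
            - containerPort: 8080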
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition. Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety. Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
What element allows Kubernetes to run Pods across the fleet of nodes?
The node server.
The etcd static pods.
The API server.
The kubelet.
The correct answer is D (the kubelet) because the kubelet is the node agent responsible for actually running Pods on each node. Kubernetes can orchestrate workloads across many nodes because every worker node (and control-plane node that runs workloads) runs a kubelet that continuously watches the API server for PodSpecs assigned to that node and then ensures the containers described by those PodSpecs are started and kept running. In other words, the kube-scheduler decides where a Pod should run (sets spec.nodeName), but the kubelet is what makes the Pod run on that chosen node.
The kubelet integrates with the container runtime (via CRI) to pull images, create sandboxes, start containers, and manage their lifecycle. It also reports node and Pod status back to the control plane, executes liveness/readiness/startup probes, mounts volumes, and performs local housekeeping that keeps the node aligned with the declared desired state. This node-level reconciliation loop is a key Kubernetes pattern: the control plane declares intent, and the kubelet enforces it on the node.
Option C (API server) is critical but does not run Pods; it is the control plane’s front door for storing and serving cluster state. Option A (“node server”) is not a Kubernetes component. Option B (etcd static pods) is a misunderstanding: etcd is the datastore for Kubernetes state and may run as static Pods in some installations, but it is not the mechanism that runs user workloads across nodes.
So, Kubernetes runs Pods “across the fleet” because each node has a kubelet that can realize scheduled PodSpecs locally and keep them healthy over time.
=========
What methods can you use to scale a Deployment?
With kubectl edit deployment exclusively.
With kubectl scale-up deployment exclusively.
With kubectl scale deployment and kubectl edit deployment.
With kubectl scale deployment exclusively.
A Deployment’s replica count is controlled by spec.replicas. You can scale a Deployment by changing that field—either directly editing the object or using kubectl’s scaling helper. Therefore C is correct: you can scale using kubectl scale and also via kubectl edit.
kubectl scale deployment <name> --replicas=<count> sets the replica count directly in a single command.
kubectl edit deployment <name> opens the live object in an editor so you can modify spec.replicas (and other fields) by hand.
Option B is invalid because kubectl scale-up deployment is not a standard kubectl command. Option A is incorrect because kubectl edit is not the only method; scaling is commonly done with kubectl scale. Option D is also incorrect because while kubectl scale is a primary method, kubectl edit is also a valid method to change replicas.
In production, you often scale with autoscalers (HPA/VPA), but the question is asking about kubectl methods. The key Kubernetes concept is that scaling is achieved by updating desired state (spec.replicas), and controllers reconcile Pods to match.
=========
What is Flux constructed with?
GitLab Environment Toolkit
GitOps Toolkit
Helm Toolkit
GitHub Actions Toolkit
The correct answer is B: GitOps Toolkit. Flux is a GitOps solution for Kubernetes, and in Flux v2 the project is built as a set of Kubernetes controllers and supporting components collectively referred to as the GitOps Toolkit. This toolkit provides the building blocks for implementing GitOps reconciliation: sourcing artifacts (Git repositories, Helm repositories, OCI artifacts), applying manifests (Kustomize/Helm), and continuously reconciling cluster state to match the desired state declared in Git.
This construction matters because it reflects Flux’s modular architecture. Instead of being a single monolithic daemon, Flux is composed of controllers that each handle a part of the GitOps workflow: fetching sources, rendering configuration, and applying changes. This makes it more Kubernetes-native: everything is declarative, runs in the cluster, and can be managed like other workloads (RBAC, namespaces, upgrades, observability).
Why the other options are wrong:
“GitLab Environment Toolkit” and “GitHub Actions Toolkit” are not what Flux is built from. Flux can integrate with many SCM providers and CI systems, but it is not “constructed with” those.
“Helm Toolkit” is not the named foundational set Flux is built upon. Flux can deploy Helm charts, but that’s a capability, not its underlying construction.
In cloud-native delivery, Flux implements the key GitOps control loop: detect changes in Git (or other declared sources), compute desired Kubernetes state, and apply it while continuously checking for drift. The GitOps Toolkit is the set of controllers enabling that loop.
Therefore, the verified correct answer is B.
=========
What is a Service?
A static network mapping from a Pod to a port.
A way to expose an application running on a set of Pods.
The network configuration for a group of Pods.
An NGINX load balancer that gets deployed for an application.
The correct answer is B: a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods. This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
Scenario: You have a Kubernetes cluster hosted in a public cloud provider. When trying to create a Service of type LoadBalancer, the external-ip is stuck in the "Pending" state. Which Kubernetes component is failing in this scenario?
Cloud Controller Manager
Load Balancer Manager
Cloud Architecture Manager
Cloud Load Balancer Manager
When you create a Service of type LoadBalancer in a cloud environment, Kubernetes relies on cloud-provider integration to provision an external load balancer and allocate a public IP (or equivalent). The control plane component responsible for this integration is the cloud-controller-manager, so A is correct.
In Kubernetes, a LoadBalancer Service triggers a controller loop that calls the cloud provider APIs to create/update a load balancer that forwards traffic to the cluster (often via NodePorts on worker nodes, or via provider-specific mechanisms). The Service's EXTERNAL-IP column shows <pending> until the cloud provider resource is successfully created and the controller updates the Service status with the assigned external address. If that status never updates, it usually indicates the cloud integration path is broken—commonly due to missing cloud provider configuration, broken credentials/IAM permissions, the cloud-controller-manager not running/healthy, or a misconfigured cloud provider implementation.
The other options are not real Kubernetes components. Kubernetes does not include a “Load Balancer Manager” or “Cloud Architecture Manager” component name in its standard architecture. In many managed Kubernetes offerings, the cloud-controller-manager (or its equivalent) is provided/managed by the provider, but the responsibility remains the same: reconcile Kubernetes Service resources into cloud load balancer resources.
Therefore, in this scenario, the failing component is the Cloud Controller Manager, which is the Kubernetes control plane component that interfaces with the cloud provider to provision external load balancers and update the Service status.
=========
What is a cloud native application?
It is a monolithic application that has been containerized and is running now on the cloud.
It is an application designed to be scalable and take advantage of services running on the cloud.
It is an application designed to run all its functions in separate containers.
It is any application that runs in a cloud provider and uses its services.
B is correct. A cloud native application is designed to be scalable, resilient, and adaptable, and to leverage cloud/platform capabilities rather than merely being “hosted” on a cloud VM. Cloud-native design emphasizes principles like elasticity (scale up/down), automation, fault tolerance, and rapid, reliable delivery. While containers and Kubernetes are common enablers, the key is the architectural intent: build applications that embrace distributed systems patterns and cloud-managed primitives.
Option A is not enough. Simply containerizing a monolith and running it in the cloud does not automatically make it cloud native; that may be “lift-and-shift” packaging. The application might still be tightly coupled, hard to scale, and operationally fragile. Option C is too narrow and prescriptive; cloud native does not require “all functions in separate containers” (microservices are common but not mandatory). Many cloud-native apps use a mix of services, and even monoliths can be made more cloud native by adopting statelessness, externalized state, and automated delivery. Option D is too broad; “any app running in a cloud provider” includes legacy apps that don’t benefit from elasticity or cloud-native operational models.
Cloud-native applications typically align with patterns: stateless service tiers, declarative configuration, health endpoints, horizontal scaling, graceful shutdown, and reliance on managed backing services (databases, queues, identity, observability). They are built to run reliably in dynamic environments where instances are replaced routinely—an assumption that matches Kubernetes’ reconciliation and self-healing model.
So, the best verified definition among these options is B.
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
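As an illustrative sketch only (GitHub Actions syntax is assumed here; the registry, image name, test script, manifest path, and credential handling are all hypothetical and omitted or simplified), a minimal pipeline that builds, tests, publishes, and deploys could look roughly like this:

name: build-and-deploy
on: [push]
jobs:
  delivery:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                                             # fetch the source
      - run: docker build -t registry.example.com/app:${{ github.sha }} .     # build the container image
      - run: docker run --rm registry.example.com/app:${{ github.sha }} ./run-tests.sh   # hypothetical test step
      - run: docker push registry.example.com/app:${{ github.sha }}           # publish the artifact (registry login omitted)
      - run: kubectl apply -f k8s/                                            # deploy manifests (assumes kubeconfig is already configured)

In practice, image scanning, signing, and policy checks would be inserted as additional steps so every change passes the same gates.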
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
How are ReplicaSets and Deployments related?
Deployments manage ReplicaSets and provide declarative updates to Pods.
ReplicaSets manage stateful applications, Deployments manage stateless applications.
Deployments are runtime instances of ReplicaSets.
ReplicaSets are subsets of Jobs and CronJobs which use imperative Deployments.
In Kubernetes, a Deployment is a higher-level controller that manages ReplicaSets, and ReplicaSets in turn manage Pods. That is exactly what option A states, making it the correct answer.
A ReplicaSet’s job is straightforward: ensure that a specified number of Pod replicas matching a selector are running. It continuously reconciles actual state to desired state by creating new Pods when replicas are missing or removing Pods when there are too many. However, ReplicaSets alone do not provide the richer application rollout lifecycle features most teams need.
A Deployment adds those features by managing ReplicaSets across versions of your Pod template. When you update a Deployment (for example, change the container image tag), Kubernetes creates a new ReplicaSet with the new Pod template and then gradually scales the new ReplicaSet up and the old one down according to the Deployment strategy (RollingUpdate by default). Deployments also maintain rollout history, support rollback (kubectl rollout undo), and allow pause/resume of rollouts. This is why the common guidance is: you almost always create Deployments rather than ReplicaSets directly for stateless apps.
Option B is incorrect because stateful workloads are typically handled by StatefulSets, not ReplicaSets. Deployments can run stateless apps, but ReplicaSets are also used under Deployments and are not “for stateful only.” Option C is reversed: ReplicaSets are not “instances” of Deployments; Deployments create/manage ReplicaSets. Option D is incorrect because Jobs/CronJobs are separate controllers for run-to-completion workloads and do not define ReplicaSets as subsets.
So the accurate relationship is: Deployment → manages ReplicaSets → which manage Pods, enabling declarative updates and controlled rollouts.
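A minimal Deployment sketch (image and labels are placeholders) makes the relationship concrete: applying it creates a ReplicaSet named after the Deployment plus a pod-template hash, and that ReplicaSet creates the Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                     # desired Pod count, enforced by the generated ReplicaSet
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27       # changing this tag creates a new ReplicaSet and triggers a rolling update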
=========
How many different Kubernetes service types can you define?
2
3
4
5
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port (by default in the 30000-32767 range) on each node. Traffic sent to any node's IP on that node port is forwarded to the Service and then to a backend Pod. A ClusterIP is still allocated, so the Service remains reachable inside the cluster as well.
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
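For comparison, a headless Service sketch (names and port are placeholders) shows that it is still an ordinary Service object, just with clusterIP set to None:

apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None               # headless: DNS returns the Pod IPs directly instead of a virtual IP
  selector:
    app: db
  ports:
    - port: 5432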
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
Therefore, the verified answer is C (4).
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
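For reference, on kubeadm-based clusters the secure setting is typically passed explicitly to the API server in its static Pod manifest. The excerpt below is a hedged sketch (file path assumes the kubeadm layout, and surrounding fields are omitted):

# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout assumed)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --authorization-mode=Node,RBAC   # explicit secure setting, overriding the historical AlwaysAllow default
        # ...other flags omitted...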
=========
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
kubeadm
kubelet
kube-apiserver
kubectl
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state—pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or CRI endpoints are unreachable, kubelet will report errors and Pods can get stuck in ContainerCreating, image pull failures, or runtime errors. Debugging often involves checking kubelet logs and runtime service health, because kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
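On recent Kubernetes releases, the CRI endpoint the kubelet connects to is part of its configuration (it was previously the --container-runtime-endpoint kubelet flag). The sketch below assumes containerd; the socket path is an example, not universal:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock   # CRI gRPC socket the kubelet talks to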
So, the node-level component responsible for CRI integration is the kubelet—option B.
=========
Can a Kubernetes Service expose multiple ports?
No, you can only expose one port per each Service.
Yes, but you must specify an unambiguous name for each port.
Yes, the only requirement is to use different port numbers.
No, because the only port you can expose is port number 443.
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port must have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
The naming requirement exists because Kubernetes needs to disambiguate ports, especially when other resources refer to them. For example, an Ingress backend or some proxies/controllers can reference Service ports by name, and a name helps humans and automation reliably select the correct port. The Kubernetes documentation is explicit: for a multi-port Service, you must give all of your ports names so that they are unambiguous (a single-port Service may omit the name).
Option A is incorrect because multi-port Services are common and fully supported. Option C is insufficient: while different port numbers are necessary, naming is the correct distinguishing rule emphasized by Kubernetes patterns and required by some integrations. Option D is incorrect and nonsensical—Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is: multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
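A multi-port Service sketch with named ports (names and numbers are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - name: http                # names are mandatory when more than one port is defined
      port: 80
      targetPort: 8080
    - name: metrics
      port: 9090
      targetPort: 9090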
=========
A Kubernetes Pod is returning a CrashLoopBackOff status. What is the most likely reason for this behavior?
There are insufficient resources allocated for the Pod.
The application inside the container crashed after starting.
The container’s image is missing or cannot be pulled.
The Pod is unable to communicate with the Kubernetes API server.
A CrashLoopBackOff status in Kubernetes indicates that a container within a Pod is repeatedly starting, crashing, and being restarted by Kubernetes. This behavior occurs when the container process exits shortly after starting and Kubernetes applies an increasing back-off delay between restart attempts to prevent excessive restarts.
Option B is the correct answer because CrashLoopBackOff most commonly occurs when the application inside the container crashes after it has started. Typical causes include application runtime errors, misconfigured environment variables, missing configuration files, invalid command or entrypoint definitions, failed dependencies, or unhandled exceptions during application startup. Kubernetes itself is functioning as expected by restarting the container according to the Pod’s restart policy.
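A contrived Pod that reproduces the symptom (image and command are illustrative only): the container process exits immediately after starting, so the kubelet restarts it with an increasing back-off and the Pod reports CrashLoopBackOff.

apiVersion: v1
kind: Pod
metadata:
  name: crashloop-demo
spec:
  restartPolicy: Always          # the default; the kubelet keeps restarting the failed container
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo simulated startup failure; exit 1"]   # exits right after starting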
Option A is incorrect because insufficient resources usually lead to different symptoms. For example, if a container exceeds its memory limit, it may be terminated with an OOMKilled status rather than repeatedly crashing immediately. While resource constraints can indirectly cause crashes, they are not the defining reason for a CrashLoopBackOff state.
Option C is incorrect because an image that cannot be pulled results in statuses such as ImagePullBackOff or ErrImagePull, not CrashLoopBackOff. In those cases, the container never successfully starts.
Option D is incorrect because Pods do not need to communicate directly with the Kubernetes API server for normal application execution. Issues with API server communication affect control plane components or scheduling, not container restart behavior.
From a troubleshooting perspective, Kubernetes documentation recommends inspecting container logs using kubectl logs and reviewing Pod events with kubectl describe pod to identify the root cause of the crash. Fixing the underlying application error typically resolves the CrashLoopBackOff condition.
In summary, CrashLoopBackOff is a protective mechanism that signals a repeatedly failing container process. The most likely and verified cause is that the application inside the container is crashing after startup, making option B the correct answer.
=========
Which are the two primary modes for Service discovery within a Kubernetes cluster?
Environment variables and DNS
API calls and LDAP
Labels and RADIUS
Selectors and DHCP
Kubernetes supports two primary built-in modes of Service discovery for workloads: environment variables and DNS, making A correct.
Environment variables: When a Pod is created, the kubelet injects environment variables for Services that exist in the same namespace at the time the Pod starts. These variables include the Service host and port (for example, for a Service named my-service: MY_SERVICE_SERVICE_HOST and MY_SERVICE_SERVICE_PORT). This approach is simple but has limitations: values are captured at Pod creation time and don’t automatically update if Services change, the Service must exist before the Pod does, and it can become cluttered in namespaces with many Services.
DNS-based discovery: This is the most common and flexible method. Kubernetes cluster DNS (usually CoreDNS) provides names like service-name.namespace.svc.cluster.local. Clients resolve the name and connect to the Service, which then routes to backend Pods. DNS scales better, is dynamic with endpoint updates, supports headless Services for per-Pod discovery, and is the default pattern for microservice communication.
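A small sketch of DNS-based discovery in practice (the Service, image, and variable names are hypothetical): the consuming Pod simply carries the Service DNS name in its own configuration and lets cluster DNS resolve it.

apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.0          # placeholder image
      env:
        - name: DATABASE_HOST
          value: my-db.default.svc.cluster.local   # resolved by cluster DNS (CoreDNS) to the Service
        - name: DATABASE_PORT
          value: "5432"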
The other options are not Kubernetes service discovery modes. Labels and selectors are used internally to relate Services to Pods, but they are not what application code uses for discovery (apps typically don’t query selectors; they call DNS names). LDAP and RADIUS are identity/authentication protocols, not service discovery. DHCP is for IP assignment on networks, not for Kubernetes Service discovery.
Operationally, DNS is central: many applications assume name-based connectivity. If CoreDNS is misconfigured or overloaded, service-to-service calls may fail even if Pods and Services are otherwise healthy. Environment-variable discovery can still work for some legacy apps, but modern cloud-native practice strongly prefers DNS (and sometimes service meshes on top of it). The key exam concept is: Kubernetes provides service discovery via env vars and DNS.
=========
In Kubernetes, what is the primary function of a RoleBinding?
To provide a user or group with permissions across all resources at the cluster level.
To assign the permissions of a Role to a user, group, or service account within a namespace.
To enforce namespace network rules by binding policies to Pods running in the namespace.
To create and define a new Role object that contains a specific set of permissions.
In Kubernetes, authorization is managed using Role-Based Access Control (RBAC), which defines what actions identities can perform on which resources. Within this model, a RoleBinding plays a crucial role by connecting permissions to identities, making option B the correct answer.
A Role defines a set of permissions—such as the ability to get, list, create, or delete specific resources—but by itself, a Role does not grant those permissions to anyone. A RoleBinding is required to bind that Role to a specific subject, such as a user, group, or service account. This binding is namespace-scoped, meaning it applies only within the namespace where the RoleBinding is created. As a result, RoleBindings enable fine-grained access control within individual namespaces, which is essential for multi-tenant and least-privilege environments.
When a RoleBinding is created, it references a Role (or a ClusterRole) and assigns its permissions to one or more subjects within that namespace. This allows administrators to reuse existing roles while precisely controlling who can perform certain actions and where. For example, a RoleBinding can grant a service account read-only access to ConfigMaps in a single namespace without affecting access elsewhere in the cluster.
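A hedged sketch of that last example (names and namespace are placeholders): a Role granting read-only access to ConfigMaps, and a RoleBinding assigning it to a service account in one namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: team-a
rules:
  - apiGroups: [""]              # "" means the core API group
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-configmaps
  namespace: team-a              # the binding only applies inside this namespace
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: team-a
roleRef:
  kind: Role
  name: configmap-reader
  apiGroup: rbac.authorization.k8s.io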
Option A is incorrect because cluster-wide permissions are granted using a ClusterRoleBinding, not a RoleBinding. Option C is incorrect because network rules are enforced using NetworkPolicies, not RBAC objects. Option D is incorrect because Roles are defined independently and only describe permissions; they do not assign them to identities.
In summary, a RoleBinding’s primary purpose is to assign the permissions defined in a Role to users, groups, or service accounts within a specific namespace. This separation of permission definition (Role) and permission assignment (RoleBinding) is a fundamental principle of Kubernetes RBAC and is clearly documented in Kubernetes authorization architecture.
=========
A Kubernetes _____ is an abstraction that defines a logical set of Pods and a policy by which to access them.
Selector
Controller
Service
Job
A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice) for those Pods and uses the cluster dataplane (kube-proxy or eBPF-based implementations) to forward traffic from the Service IP/port to one of the backend Pod IPs. This is what the question means by “logical set of Pods” and “policy by which to access them” (for example, round-robin-like distribution depending on dataplane, session affinity options, and how ports map via targetPort).
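A minimal sketch of that relationship (labels and ports are placeholders): the Service selects Pods by label, and traffic sent to the Service port is forwarded to the selected Pods' targetPort.

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend                 # any ready Pod carrying this label becomes an endpoint
  ports:
    - port: 80                   # stable Service port clients connect to
      targetPort: 8080           # port the selected Pods actually listen on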
Option A (Selector) is only the query mechanism used by Services and controllers; it is not itself the access abstraction. Option B (Controller) is too generic; controllers reconcile desired state but do not provide stable network access policies. Option D (Job) manages run-to-completion tasks and is unrelated to network access abstraction.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
Thus, the correct answer is Service (C).
=========
What Kubernetes control plane component exposes the programmatic interface used to create, manage and interact with the Kubernetes objects?
kube-controller-manager
kube-proxy
kube-apiserver
etcd
The kube-apiserver is the front door of the Kubernetes control plane and exposes the programmatic interface used to create, read, update, delete, and watch Kubernetes objects—so C is correct. Every interaction with cluster state ultimately goes through the Kubernetes API. Tools like kubectl, client libraries, GitOps controllers, operators, and core control plane components (scheduler and controllers) all communicate with the API server to submit desired state and to observe current state.
The API server is responsible for handling authentication (who are you?), authorization (what are you allowed to do?), and admission control (should this request be allowed and possibly mutated/validated?). After a request passes these gates, the API server persists the object’s desired state to etcd (the backing datastore) and returns a response. The API server also provides a watch mechanism so controllers can react to changes efficiently, enabling Kubernetes’ reconciliation model.
It’s important to distinguish this from the other options. etcd stores cluster data but does not expose the cluster’s primary user-facing API; it’s an internal datastore. kube-controller-manager runs control loops (controllers) that continuously reconcile resources (like Deployments, Nodes, Jobs) but it consumes the API rather than exposing it. kube-proxy is a node-level component implementing Service networking rules and is unrelated to the control-plane API endpoint.
Because Kubernetes is “API-driven,” the kube-apiserver is central: if it is unavailable, you cannot create workloads, update configurations, or even reliably observe cluster state. This is why high availability architectures prioritize multiple API server instances behind a load balancer, and why securing the API server (RBAC, TLS, audit) is a primary operational concern.
=========
Which of the following capabilities are you allowed to add to a container using the Restricted policy?
CHOWN
SYS_CHROOT
SETUID
NET_BIND_SERVICE
Under the Kubernetes Pod Security Standards (PSS), the Restricted profile is the most locked-down level, intended to minimize container privilege and host attack surface. In that profile, containers must drop ALL Linux capabilities, and the only capability they are permitted to add back is NET_BIND_SERVICE, so D is correct.
NET_BIND_SERVICE allows a process to bind to “privileged” ports below 1024 (like 80/443) without running as root. This aligns with restricted security guidance: applications should run as non-root, but still sometimes need to listen on standard ports. Allowing NET_BIND_SERVICE enables that pattern without granting broad privileges.
The other capabilities listed are more sensitive and typically not allowed in a restricted profile: CHOWN can be used to change file ownership, SETUID relates to privilege changes and can be abused, and SYS_CHROOT is a broader system-level capability associated with filesystem root changes. In hardened Kubernetes environments, these are normally disallowed because they increase the risk of privilege escalation or container breakout paths, especially if combined with other misconfigurations.
A practical note: exact enforcement depends on the cluster’s admission configuration (e.g., the built-in Pod Security Admission controller) and any additional policy engines (OPA/Gatekeeper). But the security intent of “Restricted” is consistent: run as non-root, disallow privilege escalation, restrict capabilities, and lock down host access. NET_BIND_SERVICE is a well-known exception used to support common application networking needs while staying non-root.
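A container securityContext that satisfies the Restricted profile while adding back only NET_BIND_SERVICE might look like the sketch below (the image is a placeholder; the other fields reflect the published Restricted requirements):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-web
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: web
      image: nginx:1.27               # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]               # Restricted requires dropping all capabilities
          add: ["NET_BIND_SERVICE"]   # the only add-back the profile allows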
So, the verified correct choice for an allowed capability in Restricted among these options is D: NET_BIND_SERVICE.
=========
Which of the following is the name of a container orchestration software?
OpenStack
Docker
Apache Mesos
CRI-O
C (Apache Mesos) is correct because Mesos is a cluster manager/orchestrator that can schedule and manage workloads (including containerized workloads) across a pool of machines. Historically, Mesos (often paired with frameworks like Marathon) was used to orchestrate services and batch jobs at scale, similar in spirit to Kubernetes’ scheduling and cluster management role.
Why the other answers are not correct as “container orchestration software” in this context:
OpenStack (A) is primarily an IaaS cloud platform for provisioning compute, networking, and storage (VM-focused). It’s not a container orchestrator, though it can host Kubernetes or containers.
Docker (B) is a container platform/tooling ecosystem (image build, runtime, local orchestration via Docker Compose/Swarm historically), but “Docker” itself is not the best match for “container orchestration software” in the multi-node cluster orchestration sense that the question implies.
CRI-O (D) is a container runtime implementing Kubernetes’ CRI; it runs containers on a node but does not orchestrate placement, scaling, or service lifecycle across a cluster.
Container orchestration typically means capabilities like scheduling, scaling, service discovery integration, health management, and rolling updates across multiple hosts. Mesos fits that definition: it provides resource management and scheduling over a cluster and can run container workloads via supported containerizers. Kubernetes ultimately became the dominant orchestrator for many use cases, but Mesos is clearly recognized as orchestration software in this category.
So, among these choices, the verified orchestration platform is Apache Mesos (C).
=========
What is a probe within Kubernetes?
A monitoring mechanism of the Kubernetes API.
A pre-operational scope issued by the kubectl agent.
A diagnostic performed periodically by the kubelet on a container.
A logging mechanism of the Kubernetes API.
In Kubernetes, a probe is a health check mechanism that the kubelet executes against containers, so C is correct. Probes are part of how Kubernetes implements self-healing and safe traffic management. The kubelet runs probes periodically according to the configuration in the Pod spec and uses the results to decide whether a container is healthy, ready to receive traffic, or still starting up.
Kubernetes supports three primary probe types:
Liveness probe: determines whether the container should be restarted. If liveness fails repeatedly, kubelet restarts the container (subject to restartPolicy).
Readiness probe: determines whether the Pod should receive traffic via Services. If readiness fails, the Pod is removed from Service endpoints, preventing traffic from being routed to it until it becomes ready again.
Startup probe: used for slow-starting containers. It disables liveness/readiness failures until startup succeeds, preventing premature restarts during initialization.
Probe mechanisms can be HTTP GET, TCP socket checks, or exec commands run inside the container. These checks are performed by kubelet on the node where the Pod is running, not by the API server.
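A container spec combining the three probe types (image, paths, ports, and timings are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      ports:
        - containerPort: 8080
      startupProbe:                  # gives a slow-starting app time before other probes apply
        httpGet: { path: /healthz, port: 8080 }
        failureThreshold: 30
        periodSeconds: 5
      livenessProbe:                 # restart the container if this keeps failing
        httpGet: { path: /healthz, port: 8080 }
        periodSeconds: 10
      readinessProbe:                # remove the Pod from Service endpoints while failing
        httpGet: { path: /ready, port: 8080 }
        periodSeconds: 5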
Options A and D incorrectly attribute probes to the Kubernetes API. While probe configuration is stored in the API as part of Pod specs, execution is node-local. Option B is not a Kubernetes concept.
So the correct definition is: a probe is a periodic diagnostic run by kubelet to assess container health/readiness, enabling reliable rollouts, traffic gating, and automatic recovery.
=========