
Context
AppArmor is enabled on the cluster's worker node. An AppArmor profile is prepared, but not enforced yet.

Task
On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor.
Edit the prepared manifest file located at /home/candidate/KSSH00401/nginx-pod.yaml to apply the AppArmor profile.
Finally, apply the manifest file and create the Pod specified in it.
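A minimal sketch of the workflow, assuming the profile defined inside /etc/apparmor.d/nginx_apparmor is named nginx-profile and the Pod's container is named nginx (both are assumptions; check the file and the prepared manifest):

# On the worker node, load the profile in enforce mode and confirm it:
sudo apparmor_parser -q /etc/apparmor.d/nginx_apparmor
sudo aa-status | grep nginx-profile

# In /home/candidate/KSSH00401/nginx-pod.yaml, reference the profile via the AppArmor
# annotation (the key ends with the container's name), then apply the manifest:
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/nginx-profile

kubectl apply -f /home/candidate/KSSH00401/nginx-pod.yaml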
Documentation: Deployments, Pods, bom Command Help
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh cks000035
Task
The alpine Deployment in the alpine namespace has three containers that run different versions of the alpine image.
First, find out which version of the alpine image contains the libcrypto3 package at version 3.1.4-r5.
Next, use the pre-installed bom tool to create an SPDX document for the identified image version at /home/candidate/alpine.spdx.
You can find the bom tool documentation at bom.
Finally, update the alpine Deployment and remove the container that uses the identified image version.
The Deployment's manifest file can be found at /home/candidate/alpine-deployment.yaml.
Do not modify any other containers of the Deployment.
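One possible approach, sketched under the assumption that the container names can be read from the Deployment and that the image tag below is only a placeholder:

# Check which container's alpine image ships libcrypto3 3.1.4-r5 (repeat per container):
kubectl -n alpine get deployment alpine -o jsonpath='{.spec.template.spec.containers[*].image}'
kubectl -n alpine exec deploy/alpine -c <container-name> -- apk list --installed | grep libcrypto3

# Generate the SPDX document for the identified image with the pre-installed bom tool:
bom generate --image alpine:<identified-tag> --output /home/candidate/alpine.spdx

# Remove only the matching container from the manifest, then re-apply it:
vim /home/candidate/alpine-deployment.yaml
kubectl apply -f /home/candidate/alpine-deployment.yaml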

Context
The kubeadm-created cluster's Kubernetes API server was, for testing purposes, temporarily configured to allow unauthenticated and unauthorized access, granting the anonymous user cluster-admin access.
Task
Reconfigure the cluster's Kubernetes API server to ensure that only authenticated and authorized REST requests are allowed.
Use authorization mode Node,RBAC and admission controller NodeRestriction.
To clean up, remove the ClusterRoleBinding for the user system:anonymous.
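A sketch of the usual kubeadm fix; the manifest path is the kubeadm default and the binding name is a placeholder to look up:

# On the control-plane node, edit /etc/kubernetes/manifests/kube-apiserver.yaml and set:
#   --authorization-mode=Node,RBAC
#   --enable-admission-plugins=NodeRestriction
#   --anonymous-auth=false          # if anonymous access was explicitly enabled for the test
# The kubelet recreates the API server Pod automatically after the file is saved.

# Then remove the test binding granting cluster-admin to system:anonymous:
kubectl get clusterrolebindings -o wide | grep system:anonymous
kubectl delete clusterrolebinding <binding-name>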



Context
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task
Create a new PodSecurityPolicy named prevent-psp-policy, which prevents the creation of privileged Pods.
Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy prevent-psp-policy.
Create a new ServiceAccount named psp-restrict-sa in the existing namespace staging.
Finally, create a new ClusterRoleBinding named restrict-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created ServiceAccount psp-restrict-sa.
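A minimal sketch of the four objects, assuming a cluster version that still serves the policy/v1beta1 API (PodSecurityPolicy was removed in Kubernetes 1.25):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-psp-policy
spec:
  privileged: false              # this is what blocks privileged Pods
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: restrict-access-role
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["prevent-psp-policy"]
  verbs: ["use"]

kubectl -n staging create serviceaccount psp-restrict-sa
kubectl create clusterrolebinding restrict-access-bind --clusterrole=restrict-access-role --serviceaccount=staging:psp-restrict-sa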


Context
A default-deny NetworkPolicy prevents accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task
Create a new default-deny NetworkPolicy named defaultdeny in the namespace testing for all traffic of type Egress.
The new NetworkPolicy must deny all Egress traffic in the namespace testing.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace testing.
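A sketch of such a policy; an empty podSelector selects every Pod in the namespace, and listing Egress with no egress rules denies all outgoing traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: defaultdeny
  namespace: testing
spec:
  podSelector: {}
  policyTypes:
  - Egress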


Context
A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task
Given an existing Pod named web-pod running in the namespace security.
Edit the existing Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing watch operations, only on resources of type services.
Create a new Role named role-2 in the namespace security, which only allows performing update operations, only on resources of type namespaces.
Create a new RoleBinding named role-2-binding binding the newly created Role to the Pod's ServiceAccount.
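A sketch using imperative kubectl commands; the name of the existing Role is not given here, so it is looked up via the RoleBinding that references sa-dev-1:

kubectl -n security get rolebindings -o wide | grep sa-dev-1
kubectl -n security edit role <existing-role>        # keep a single rule: verbs ["watch"], resources ["services"]

kubectl -n security create role role-2 --verb=update --resource=namespaces
kubectl -n security create rolebinding role-2-binding --role=role-2 --serviceaccount=security:sa-dev-1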

Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Create a Pod with image alpine:3.13.2 in the namespace default that runs on the gVisor runtime class.
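A sketch of the two objects; the Pod name and its command are assumptions, only the image, namespace, and runtime class are given by the task:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-pod           # assumed name
  namespace: default
spec:
  runtimeClassName: untrusted
  containers:
  - name: alpine
    image: alpine:3.13.2
    command: ["sleep", "3600"]  # keeps the container running; not required by the task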

Task
Create a NetworkPolicy named pod-access to restrict access to Pod users-service running in namespace dev-team.
Only allow the following Pods to connect to Pod users-service:
Pods in the namespace qa
Pods with label environment: testing, in any namespace
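A sketch of the policy; the label on the users-service Pod (app: users-service) and the qa namespace label (kubernetes.io/metadata.name: qa) are assumptions to verify with kubectl get ... --show-labels:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-access
  namespace: dev-team
spec:
  podSelector:
    matchLabels:
      app: users-service                     # assumed label of the users-service Pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: qa    # all Pods in the qa namespace
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          environment: testing               # matching Pods in any namespace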


Analyze and edit the given Dockerfile
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install nginx -y
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
USER ROOT
Fix the two instructions in the file that are prominent security best-practice issues.
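A sketch of the corrected file; the two issues addressed are the unpinned base image and running as root. The pinned tag is only an example, and USER test-user assumes that user exists in the image (see the note on test-user below):

FROM ubuntu:22.04                 # pin a specific tag instead of latest (example tag)
RUN apt-get update -y
RUN apt-get install nginx -y
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
USER test-user                    # run as an unprivileged user instead of ROOT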
Analyze and edit the deployment manifest file
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      runAsUser: 0
      privileged: True
      allowPrivilegeEscalation: false
Fix the two fields in the file that are prominent security best-practice issues.
Don't add or remove configuration settings; only modify the existing configuration settings
Whenever you need an unprivileged user for any of the tasks, use user test-user with the user id 5487
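For the manifest above, the two fields typically flagged are the container running as root and the privileged flag; a sketch of the corrected container-level securityContext, using the test-user id from the note above:

    securityContext:
      runAsUser: 5487                     # was 0; run as the unprivileged test-user
      privileged: false                   # was True
      allowPrivilegeEscalation: false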
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod-account
Context:
A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task:
Given an existing Pod named web-pod running in the namespace database.
1. Edit the existing Role bound to the Pod's ServiceAccount test-sa to only allow performing get operations, only on resources of type Pods.
2. Create a new Role named test-role-2 in the namespace database, which only allows performing update operations, only on resources of type statefulsets.
3. Create a new RoleBinding named test-role-2-bind binding the newly created Role to the Pod's ServiceAccount.
Note: Don't delete the existing RoleBinding.
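Once the existing Role is trimmed and test-role-2 / test-role-2-bind are created (same kubectl create role / create rolebinding pattern as in the earlier task), the reduced permissions can be checked by impersonating the ServiceAccount; the expected answers are shown as comments:

kubectl -n database auth can-i get pods --as=system:serviceaccount:database:test-sa              # yes
kubectl -n database auth can-i update statefulsets --as=system:serviceaccount:database:test-sa   # yes
kubectl -n database auth can-i delete pods --as=system:serviceaccount:database:test-sa           # no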
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
a. Ensure that the RotateKubeletServerCertificate argument is set to true.
b. Ensure that the admission control plugin PodSecurityPolicy is set.
c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.
Fix all of the following violations that were found against the Kubelet:
a. Ensure the --anonymous-auth argument is set to false.
b. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true
b. Ensure that the --peer-auto-tls argument is not set to true
Hint: Make use of the kube-bench tool.
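A sketch of where each fix usually lives on a kubeadm cluster; file paths are the kubeadm defaults and should be confirmed against the kube-bench remediation output:

kube-bench run --targets master
kube-bench run --targets node

# /etc/kubernetes/manifests/kube-apiserver.yaml (static Pod; the kubelet restarts it on save):
#   --enable-admission-plugins=...,PodSecurityPolicy       # append to the existing list
#   --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt
#   --feature-gates=RotateKubeletServerCertificate=true

# /var/lib/kubelet/config.yaml on the node, followed by: systemctl restart kubelet
#   authentication.anonymous.enabled: false
#   authorization.mode: Webhook

# /etc/kubernetes/manifests/etcd.yaml:
#   remove --auto-tls=true and --peer-auto-tls=true (or set both to false)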
Create a NetworkPolicy named allow-np that allows Pods in the namespace staging to connect to port 80 of other Pods in the same namespace.
Ensure that the NetworkPolicy:
1. Does not allow access to Pods not listening on port 80.
2. Does not allow access from Pods not in the namespace staging.
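A sketch of the policy; the empty inner podSelector (with no namespaceSelector) restricts sources to Pods in the same namespace, and the ports list restricts the destination port:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-np
  namespace: staging
spec:
  podSelector: {}            # applies to all Pods in staging
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}        # only Pods from the staging namespace
    ports:
    - protocol: TCP
      port: 80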
Cluster: dev
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context dev
Task:
Retrieve the content of the existing secret named adam in the safe namespace.
Store the username field in a file named /home/cert-masters/username.txt, and the password field in a file named /home/cert-masters/password.txt.
1. You must create both files; they don't exist yet.
2. Do not use/modify the created files in the following steps, create new temporary files if needed.
Create a new secret named newsecret in the safe namespace, with the following content:
Username: dbadmin
Password: moresecurepas
Finally, create a new Pod that has access to the secret newsecret via a volume:
Namespace: safe
Pod name: mysecret-pod
Container name: db-container
Image: redis
Volume name: secret-vol
Mount path: /etc/mysecret
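A sketch of the steps; the data keys of the adam secret are assumed to be username and password, and the key names of the new secret follow the task text:

kubectl -n safe get secret adam -o jsonpath='{.data.username}' | base64 -d > /home/cert-masters/username.txt
kubectl -n safe get secret adam -o jsonpath='{.data.password}' | base64 -d > /home/cert-masters/password.txt

kubectl -n safe create secret generic newsecret --from-literal=Username=dbadmin --from-literal=Password=moresecurepas

apiVersion: v1
kind: Pod
metadata:
  name: mysecret-pod
  namespace: safe
spec:
  containers:
  - name: db-container
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/mysecret
  volumes:
  - name: secret-vol
    secret:
      secretName: newsecret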
Documentation: ServiceAccount, Deployment, Projected Volumes
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh cks000033
Context
A security audit has identified a Deployment improperly handling service account tokens, which could lead to security vulnerabilities.
Task
First, modify the existing ServiceAccount stats-monitor-sa in the namespace monitoring to turn off automounting of API credentials.
Next, modify the existing Deployment stats-monitor in the namespace monitoring to inject a ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token.
Use a Projected Volume named token to inject the ServiceAccount token and ensure that it is mounted read-only.
The Deployment's manifest file can be found at /home/candidate/stats-monitor/deployment.yaml.
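A sketch of the two changes; the container name inside the Deployment is not given here, so only the fields to add are shown:

# ServiceAccount: turn off automounting of API credentials
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stats-monitor-sa
  namespace: monitoring
automountServiceAccountToken: false

# Deployment (in /home/candidate/stats-monitor/deployment.yaml), under spec.template.spec:
      containers:
      - name: <existing-container>           # keep the existing container definition
        volumeMounts:
        - name: token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
      volumes:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: token                    # token file ends up at .../serviceaccount/token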
A service is running on port 389 inside the system. Find the process ID of that process, store the names of all of its open files in /candidate/KH77539/files.txt, and delete the binary.
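A sketch, assuming ss and lsof are available on the host; the PID and binary path placeholders are filled in from the first two commands:

ss -tlnp | grep ':389'                        # shows the PID of the listener (netstat -tlnp also works)
lsof -p <pid> > /candidate/KH77539/files.txt  # names of all files the process has open
ls -l /proc/<pid>/exe                         # the symlink reveals the path of the binary
rm -f <path-to-binary>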

Context
This cluster uses containerd as CRI runtime.
Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
Task
Create a RuntimeClass named sandboxed using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on gVisor.
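The RuntimeClass itself has the same shape as in the earlier gVisor task, with name sandboxed and handler runsc. Existing Pods cannot be changed in place, so the sketch below updates their owning workloads (assumed to be Deployments) and then verifies the sandbox:

kubectl -n server get pods -o wide                            # identify the owning workloads
kubectl -n server edit deployment <name>                      # add runtimeClassName: sandboxed under spec.template.spec
kubectl -n server exec <new-pod> -- dmesg | grep -i gvisor    # a gVisor kernel banner confirms runsc (if the image ships dmesg)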

You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context test-account
Task: Enable audit logs in the cluster.
To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/Kubernetes/logs.txt
2. log files are retained for 5 days
3. at most 10 old audit log files are retained
A basic policy is provided at /etc/Kubernetes/logpolicy/audit-policy.yaml. It only specifies what not to log.
Note: The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
1. Node changes at the RequestResponse level
2. The request body of persistentvolumes changes in the namespace frontend
3. ConfigMap and Secret changes in all namespaces at the Metadata level
Also, add a catch-all rule to log all other requests at the Metadata level
Note: Don't forget to apply the modified policy.
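A sketch of the API server flags and the policy rules to append; paths are taken from the task text, and hostPath mounts for the policy file and log directory may also need to be added to the static Pod manifest:

# /etc/kubernetes/manifests/kube-apiserver.yaml:
#   --audit-policy-file=/etc/Kubernetes/logpolicy/audit-policy.yaml
#   --audit-log-path=/var/log/Kubernetes/logs.txt
#   --audit-log-maxage=5
#   --audit-log-maxbackup=10

# Rules appended to the provided policy, after its existing "do not log" rules:
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["nodes"]
  - level: Request
    resources:
    - group: ""
      resources: ["persistentvolumes"]
    namespaces: ["frontend"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["configmaps", "secrets"]
  - level: Metadata                          # catch-all for everything else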
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given:
You may use Trivy's documentation.
Task:
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato.
Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node; use that node to run Trivy.
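A sketch of the workflow on the master node; the pod and image names are placeholders filled in from the first command's output:

kubectl -n nato get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'

trivy image --severity HIGH,CRITICAL <image>     # repeat per image; any findings mean the image is affected

kubectl -n nato delete pod <pod-using-a-vulnerable-image>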