Solutions for lab - Mock Exam 2:
For questions where you need to modify the API server, you can use this resource to diagnose a failure of the API server to restart.
-
A pod called redis-backend has been created in the prod-x12cs namespace. It has been exposed as a service of type ClusterIP. Using a network policy called allow-redis-access, lock down access to this pod only to the following:
- Any pod in the same namespace with the label `backend=prod-x12cs`.
- All pods in the prod-yx13cs namespace.
All other incoming connections should be blocked.
Use the existing labels when creating the network policy.
This is not dissimilar to Q8 in Mock Exam 1. We are going to need an initial pod selector to say which pods the rules will apply to, and two rules - remember that each rule begins with `-` - one for each of the conditions above. We are also told to use existing labels, and that includes labels on namespaces, so run a command to list the namespaces with their labels to see what these labels are.
We are also told about a ClusterIP service. View this service to find any port restriction we should also apply, as shown below.
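For example (the ClusterIP service is not named in the question, so the placeholder below is an assumption):
kubectl get ns --show-labels                         # find the labels on the prod-yx13cs namespace
kubectl -n prod-x12cs get svc                        # locate the ClusterIP service exposing redis-backend
kubectl -n prod-x12cs describe svc <service-name>    # note the target port to restrict in the policy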
Reveal policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-redis-access
  namespace: prod-x12cs
spec:
  podSelector:
    matchLabels:
      run: redis-backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:          # Rule 1, for the first condition
        matchLabels:
          backend: prod-x12cs
    - namespaceSelector:    # Rule 2, for the second condition
        matchLabels:
          access: redis
    ports:
    - protocol: TCP
      port: 6379
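Once applied, a quick sanity check is to describe the policy (the file name below is an assumption; use whatever you saved the manifest as):
kubectl -n prod-x12cs apply -f allow-redis-access.yaml
kubectl -n prod-x12cs describe netpol allow-redis-access    # confirm the pod selector, both 'from' rules and the port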
-
A few pods have been deployed in the apps-xyz namespace. There is a pod called redis-backend which serves as the backend for the apps app1 and app2. The pod called app3, on the other hand, does not need access to this redis-backend pod. Create a network policy called allow-app1-app2 that will only allow incoming traffic from app1 and app2 to the redis-backend pod. Make sure that all the available labels are used correctly to target the correct pods. Do not make any other changes to these objects.
-
View the labels on all the pods
kubectl get pods -n apps-xyz --show-labels
-
Note the labels on the redis-backend pod. This will give us the initial pod selector to select the pod to apply the policy to. -
Note that all three app pods carry the label tier=frontend, therefore we are going to require all the labels for app1 and app2 in order to exclude app3, which means we will need one podSelector rule for each of the two pods we want to include. Remember that each rule begins with `-`
Reveal policy
Final policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-app1-app2
  namespace: apps-xyz
spec:
  podSelector:
    matchLabels:
      tier: backend
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: app1
          tier: frontend
    - podSelector:
        matchLabels:
          name: app2
          tier: frontend
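To convince yourself the policy behaves as required, you could test connectivity from the app pods. This is only a sketch: it assumes the app images contain nc, and that redis-backend listens on port 6379, neither of which is stated in the question.
REDIS_IP=$(kubectl -n apps-xyz get pod redis-backend -o jsonpath='{.status.podIP}')
kubectl -n apps-xyz exec app1 -- nc -zv -w 2 "$REDIS_IP" 6379    # should connect
kubectl -n apps-xyz exec app3 -- nc -zv -w 2 "$REDIS_IP" 6379    # should time out, since app3 is not allowed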
-
-
A pod has been created in the gamma namespace using a service account called cluster-view. This service account has been granted additional permissions as compared to the default service account and can view resources cluster-wide on this Kubernetes cluster. While these permissions are important for the application in this pod to work, the secret token is still mounted on this pod. Secure the pod in such a way that the secret token is no longer mounted on this pod. You may delete and recreate the pod.
This is quite easy. There are two changes we need to make to the pod manifest. Since very little can be changed on a running pod, it will indeed have to be deleted and recreated.
Reveal
-
Update the Pod to use the field automountServiceAccountToken: false
Using this option makes sure that the service account token secret is not mounted in the pod at the location /var/run/secrets/kubernetes.io/serviceaccount, provided you have removed any explicit volumes and volumeMounts, which will be present if you extracted the manifest from the running pod with -o yaml.
Note that this option merely tells the controller not to add a volume and mount if not already present. It does not remove any existing mount for the secret, therefore...
-
If you did retrieve the pod manifest with -o yaml, then delete any volume and mount information referring to /var/run/secrets/kubernetes.io/serviceaccount before recreating the pod!
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: apps-cluster-dash
  name: apps-cluster-dash
  namespace: gamma
spec:
  containers:
  - image: nginx
    name: apps-cluster-dash
  serviceAccountName: cluster-view
  automountServiceAccountToken: false
  # Note that we have manually deleted the volume/mount that previously existed for the secret.
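To confirm the change after recreating the pod, check that the token directory is gone; a command along these lines should fail with "No such file or directory":
kubectl -n gamma exec apps-cluster-dash -- ls /var/run/secrets/kubernetes.io/serviceaccount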
-
-
A pod in the sahara namespace has generated alerts that a shell was opened inside the container. To recognize such alerts, set the priority to ALERT and change the format of the output so that it looks like the below:
ALERT timestamp of the event without nanoseconds,User ID,the container id,the container image repository
Make sure to update the rule in such a way that the changes will persist across Falco updates.
You can refer to the Falco documentation here.
Pretty much all Falco questions will involve modifying an existing rule to change its output in line with what the question asks. You should not be asked to create a new rule from scratch.
Modifying an existing rule means finding it in /etc/falco/falco_rules.yaml, copying it to /etc/falco/falco_rules.local.yaml and then making the changes there. The question suggests that Falco is already running and should be logging the rule we need to change, so here's how to solve it:
-
Get the falco log to see the existing rule
journalctl -fu falco | grep shell
We see something like this
Aug 06 15:18:31 controlplane falco[7283]: 15:18:31.802309590: Notice A shell was spawned in a container with an attached terminal (user=<NA> user_loginuid=-1 apps-240616 (id=3a66590fa9e3) shell=bash parent=runc cmdline=bash -c date
-
Now that we know what the current message looks like, we can look for it in the existing rules. Tune the -A and -B arguments of grep until you can see the entire rule.
grep -A 10 -B 10 'A shell was spawned in a container' /etc/falco/falco_rules.yaml
Select and copy the rule (use the mouse and right-click, copy)
-
Paste the rule into the local rules file
vi /etc/falco/falco_rules.local.yaml
Enter INSERT mode and paste with the mouse
-
Make the required edits to this rule, which in this case means the output section, as we are asked to change what is logged, not the conditions for the event.
Reveal
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
  condition: >
    spawned_process and container
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: >
    %evt.time.s,%user.uid,%container.id,%container.image.repository
  priority: ALERT
  tags: [container, shell, mitre_execution]
-
Restart falco using
systemctl restart falco
to override the current rule
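To see the new format in action, you can trigger the rule again by opening a shell in the pod (the pod name in sahara is a placeholder here) and watch the Falco log:
kubectl -n sahara exec -it <pod-name> -- bash    # trigger the rule
journalctl -fu falco | grep ALERT                # the event should now appear with priority ALERT and the new output format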
-
-
martin is a developer who needs access to work on the dev-a, dev-b and dev-z namespaces. He should have the ability to carry out any operation on any pod in the dev-a and dev-b namespaces. However, on the dev-z namespace, he should only have the permission to get and list the pods. The current set-up is too permissive and violates the above condition. Use the above requirement and secure martin's access in the cluster. You may re-create objects, however, make sure to use the same name as the ones in effect currently.
So this one is about roles, and the fact that the permissions are wrong. -
Quickly find the roles by getting all roles in the cluster and grepping for anything that matches dev. This will match all the above mentioned namespaces, whose names will be present in the output.
kubectl get role -A | grep dev
Note that there is a role dev-user-access in each of the three namespaces indicated. From how the question is worded and the fact that there is only this role in each of the three namespaces, we can deduce that it is this role that we must examine. -
Examine the role permissions
kubectl describe role -n dev-a dev-user-access
kubectl describe role -n dev-b dev-user-access
kubectl describe role -n dev-z dev-user-access
Note that each of these roles has the same access - all access for pods. We are told that this is correct for dev-a and dev-b, but not for dev-z, so it is that role we need to change. -
Fix the role
kubectl edit role -n dev-z dev-user-access
Change it to allow only get and list
Reveal
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-user-access
  namespace: dev-z
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
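Assuming the existing RoleBindings are left in place and reference the user martin (which the question implies), you can verify the outcome with kubectl auth can-i:
kubectl auth can-i delete pods -n dev-a --as martin    # yes - full access is still allowed here
kubectl auth can-i list pods -n dev-z --as martin      # yes
kubectl auth can-i delete pods -n dev-z --as martin    # no - only get and list remain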
-
-
On the controlplane node, an unknown process is bound to the port 8088. Identify the process and prevent it from running again by stopping and disabling any associated services. Finally, remove the package that was responsible for starting this process. -
Check the process which is bound to port 8088 on this node using netstat
netstat -ptln | grep 8088
Netstat arguments:
p - Show the PID and name of the program to which each socket belongs.
t - Show TCP sockets.
l - Show only listening sockets.
n - Show numerical addresses instead of trying to determine symbolic host, port or user names.
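If netstat is not installed on the node, ss accepts the same flags and gives equivalent output (offered here as an alternative, not a step required by the question):
ss -ptln | grep 8088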
This shows that the process openlitespeed is the one which is using this port. -
Check if any service is running with the same name
systemctl list-units -t service --state active | grep -i openlitespeed
lshttpd.service loaded active running OpenLiteSpeed HTTP Server
This shows that the openlitespeed server is managed by the lshttpd.service unit, which is currently active. -
Stop the service and disable it
systemctl stop lshttpd
systemctl disable lshttpd
-
Check for the package by the same name
apt list --installed | grep openlitespeed
-
Uninstall the package
apt remove openlitespeed -y
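A quick way to confirm the clean-up, with the expected results noted as comments (these checks are a suggestion, not part of the question):
netstat -ptln | grep 8088                     # should return nothing
systemctl is-enabled lshttpd                  # should report disabled, or that the unit no longer exists
apt list --installed | grep openlitespeed     # should return nothing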
-
-
A pod has been created in the omega namespace using the pod definition file located at /root/CKS/omega-app.yaml. However, there is something wrong with it and the pod is not in a running state.
We have used a custom seccomp profile located at /var/lib/kubelet/seccomp/custom-profile.json to ensure that this pod can only make use of limited syscalls to the Linux kernel of the host operating system. However, it appears the profile does not allow the read and write syscalls. Fix this by adding them to the profile and use it to start the pod.
-
Find out why the pod isn't starting
kubectl -n omega describe pod omega-app
Check the Events section:
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  23m                 default-scheduler  Successfully assigned omega/omega-app to controlplane
  Normal   Pulling    23m                 kubelet            Pulling image "hashicorp/http-echo:0.2.3"
  Normal   Pulled     23m                 kubelet            Successfully pulled image "hashicorp/http-echo:0.2.3" in 2.094171079s (9.750752135s including waiting)
  Warning  Failed     21m (x12 over 23m)  kubelet            Error: failed to create containerd container: cannot load seccomp profile "/var/lib/kubelet/seccomp/profiles/custom-profile.json": open /var/lib/kubelet/seccomp/profiles/custom-profile.json: no such file or directory
The path to the seccomp profile is incorrectly specified for the omega-app pod. As per the question, the profile is created at /var/lib/kubelet/seccomp/custom-profile.json. -
Fix the seccomp profile path in the pod definition file /root/CKS/omega-app.yaml
vi /root/CKS/omega-app.yaml
Update the security context as follows
securityContext:
  seccompProfile:
    localhostProfile: custom-profile.json
    type: Localhost
-
Next, update custom-profile.json to allow the read and write syscalls.
vi /var/lib/kubelet/seccomp/custom-profile.json
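The exact contents of the profile will differ in the lab, but assuming it is a whitelist-style profile (a default error action plus an allow-list of syscall names), the edit amounts to appending read and write to that list, roughly like this:
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": [
        "...existing allowed syscalls, kept as they are...",
        "read",
        "write"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}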
-
Finally, re-create the pod
kubectl replace -f /root/CKS/omega-app.yaml --force
pod "omega-app" deleted
pod/omega-app replaced
The pod should now run successfully.
NOTE:
It may still run even if the above two syscalls are not added. However, adding the syscalls is required to successfully complete this question.
-
-
A pod definition file has been created at /root/CKS/simple-pod.yaml. Using the kubesec tool, generate a report for this pod definition file and fix the major issues so that the subsequent scan report no longer fails.
Once done, generate the report again and save it to the file /root/CKS/kubesec-report.txt
-
Find the issues.
We can just run the scan fully and look for issues
kubesec scan /root/CKS/simple-pod.yaml
Or, use jq to give us the critical issues directly:
kubesec scan /root/CKS/simple-pod.yaml | jq '.[] | .scoring.critical'
It tells us the following is an issue
containers[] .securityContext .capabilities .add == SYS_ADMIN
-
Remove the SYS_ADMIN capability from the container for the simple-webapp-1 pod in the pod definition file and re-run the scan.
kubesec scan /root/CKS/simple-pod.yaml
The fixed report should PASS with a message like this:
[ { "object": "Pod/simple-webapp-1.default", "valid": true, "fileName": "/root/CKS/simple-pod.yaml", "message": "Passed with a score of 0 points", "score": 0,
-
Save the passing report as directed
kubesec scan /root/CKS/simple-pod.yaml > /root/CKS/kubesec-report.txt
-
-
Create a new pod called secure-nginx-pod in the seth namespace. Use one of the images from the below which has the least number of CRITICAL vulnerabilities.
- nginx
- nginx:1.19
- nginx:1.17
- nginx:1.20
- gcr.io/google-containers/nginx
- bitnami/jenkins:latest
-
Run a trivy image scan on each of the images and check which one has the least HIGH or CRITICAL vulnerabilities. Use grep to filter out most of the output. The Total line is what we really care about.
trivy i nginx | grep Total
Repeat the above for the remaining images, for example with a loop like the one below.
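A small shell loop saves some typing here; it simply runs the same scan-and-grep for every image in the list:
for img in nginx nginx:1.19 nginx:1.17 nginx:1.20 gcr.io/google-containers/nginx bitnami/jenkins:latest; do
  echo "== $img =="
  trivy image "$img" | grep Total
done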
-
Create the pod (use the correct image found above)
kubectl -n seth run secure-nginx-pod --image nginx:alpine
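Verify the pod reaches the Running state:
kubectl -n seth get pod secure-nginx-pod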