CKA Exam Questions (1)

Pre-exam Setup

alias k=kubectl                             # already pre-configured in the exam environment
export do="--dry-run=client -o yaml"        # k create deploy nginx --image=nginx $do
export now="--force --grace-period 0"       # k delete pod x $now

Question 1

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

Solution

Analysis: use kubectl config commands to inspect the cluster context information.

  1. Write all context names into /opt/course/1/contexts:
# -o name lists only the context names
kubectl config get-contexts -o name > /opt/course/1/contexts
  2. Command that shows the current context using kubectl (for /opt/course/1/context_default_kubectl.sh):
kubectl config current-context
  3. The same without using kubectl (for /opt/course/1/context_default_no_kubectl.sh):
cat ~/.kube/config | grep current | awk '{print $2}'
or
cat ~/.kube/config | grep current | sed -e "s/current-context: //"
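For example, the two current-context commands can be written into the requested script files like this (a quoting sketch):

echo 'kubectl config current-context' > /opt/course/1/context_default_kubectl.sh
echo "cat ~/.kube/config | grep current | sed -e 's/current-context: //'" > /opt/course/1/context_default_no_kubectl.sh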

Question 2

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a controlplane node; do not add new labels to any nodes.

Solution

Analysis: this tests Pod scheduling, taint tolerations, and nodeSelector placement.

k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml

Check and note the controlplane node's labels, then edit 2.yaml: change the container name and add a toleration and a nodeSelector.
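For example, the node labels can be inspected like this (assuming the controlplane node is named cluster1-controlplane1, as in the other tasks of this cluster):

k get node cluster1-controlplane1 --show-labels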

The modified YAML:

# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                       # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                                 # add
  - effect: NoSchedule                         # add
    key: node-role.kubernetes.io/control-plane # add
  nodeSelector:                                # add
    node-role.kubernetes.io/control-plane: ""  # add
status: {}
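Create the Pod and verify it was scheduled onto the controlplane node:

k create -f 2.yaml
k get pod pod1 -o wide      # the NODE column should show the controlplane node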

Question 3

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.

Solution

Analysis: this tests scaling replicas up and down. The o3db-* Pods are managed by a StatefulSet, so scale that StatefulSet down to one replica.
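To confirm which controller owns those Pods before scaling, something like:

k -n project-c13 get all | grep o3db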

kubectl -n project-c13 scale sts o3db --replicas 1

Question 4

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.

Solution

Analysis: this tests configuring a livenessProbe and a readinessProbe and how they probe; the Pod must not become Ready until the Service is reachable.

k run ready-if-service-ready --image=nginx:1.16.1-alpine $do > 4_pod1.yaml

Edit the generated YAML:

#vim 4_pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                                      # add livenessProbe
      exec:
        command:
        - 'true'
    readinessProbe:                                     # add readinessProbe
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'   # the readiness check command
  dnsPolicy: ClusterFirst
  restartPolicy: Always
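Create the Pod and confirm it is running but not Ready, because the readiness probe still fails:

k create -f 4_pod1.yaml
k get pod ready-if-service-ready    # READY shows 0/1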

Now create the second Pod; once it is running and registered as an endpoint of service-am-i-ready, the readiness probe of the first Pod succeeds and it becomes Ready.

k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
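Verify that the Service picked up the new Pod and that the first Pod is now Ready, for example:

k get ep service-am-i-ready          # the am-i-ready Pod IP should be listed as endpoint
k get pod ready-if-service-ready     # READY now shows 1/1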

Question 5

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

Solution

Analysis: this tests kubectl's --sort-by option. Write the following commands into the corresponding .sh files.

kubectl get pod -A --sort-by=.metadata.creationTimestamp

kubectl get pod -A --sort-by=.metadata.uid
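For example, written into the requested files:

echo 'kubectl get pod -A --sort-by=.metadata.creationTimestamp' > /opt/course/5/find_pods.sh
echo 'kubectl get pod -A --sort-by=.metadata.uid' > /opt/course/5/find_pods_uid.sh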

Question 6

Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

Solution

Analysis: this tests creating a PV and a PVC and mounting the PVC into the Pods of a Deployment.

# 6_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
# 6_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      volumes:                                      # add
      - name: data                                  # add
        persistentVolumeClaim:                      # add
          claimName: safari-pvc                     # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                               # add
        - name: data                                # add
          mountPath: /tmp/safari-data               # add
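Apply both manifests and check that the PVC binds and the Deployment's Pod comes up:

k create -f 6_pv.yaml
k create -f 6_dep.yaml
k -n project-tiger get pv,pvc              # the PVC STATUS should be Bound
k -n project-tiger get pod -l app=safari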

Question 7

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node.

Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:

# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod

Solution

Analysis: check how each component is deployed and how it is running (a command sketch and a sample result follow this list):

  1. Use ps aux to check whether a component runs as a regular process
  2. Use find on /etc/systemd/system to see which components are managed by systemd
  3. The cluster is typically set up with kubeadm, so check the static Pod manifests in /etc/kubernetes/manifests/
  4. The kubelet can also point to a different manifest directory via the --pod-manifest-path flag in its systemd startup configuration
  5. In a kubeadm cluster the static-Pod components are started and stopped through their manifests; moving a YAML out of the manifest directory shuts the component down
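A sketch of the investigation and, assuming a standard kubeadm setup, the resulting findings file:

ssh cluster1-controlplane1
ps aux | grep kubelet                                    # kubelet runs as a process (systemd service)
find /etc/systemd/system/ | grep kube                    # shows the kubelet systemd unit
find /etc/kubernetes/manifests/                          # static Pod manifests: apiserver, scheduler, controller-manager, etcd
kubectl -n kube-system get pod -o wide | grep coredns    # DNS runs as regular coredns Pods

# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns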

Question 8

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

Solution

Analysis: this tests defining RBAC policies. Keep the valid combinations in mind:

  1. Role + RoleBinding (available in a single Namespace, applied in a single Namespace)
  2. ClusterRole + ClusterRoleBinding (available cluster-wide, applied cluster-wide)
  3. ClusterRole + RoleBinding (available cluster-wide, applied in a single Namespace)
  4. Role + ClusterRoleBinding (not possible: available in a single Namespace, applied cluster-wide)
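First create the ServiceAccount, then the Role and RoleBinding shown below:

k -n project-hamster create serviceaccount processor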
# kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: processor
  namespace: project-hamster
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - create
# kubectl -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: processor
  namespace: project-hamster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: processor
subjects:
- kind: ServiceAccount
  name: processor
  namespace: project-hamster
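The result can be verified with kubectl auth can-i, for example:

k auth can-i create secret -n project-hamster --as system:serviceaccount:project-hamster:processor      # yes
k auth can-i create configmap -n project-hamster --as system:serviceaccount:project-hamster:processor   # yes
k auth can-i delete secret -n project-hamster --as system:serviceaccount:project-hamster:processor      # no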

Question 9

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.

Solution

Analysis: the DaemonSet's Pods must run on every node, including controlplanes, so they need a toleration for the controlplane taint. Generate a Deployment template with kubectl create deployment and convert it into a DaemonSet:

k -n project-tiger create deployment --image=httpd:2.4-alpine ds-important $do > 11.yaml
# 11.yaml
apiVersion: apps/v1
kind: DaemonSet                                     # change from Deployment to Daemonset
metadata:
  creationTimestamp: null
  labels:                                           # add
    id: ds-important                                # add
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462      # add
  name: ds-important
  namespace: project-tiger                          # important
spec:
  #replicas: 1                                      # remove
  selector:
    matchLabels:
      id: ds-important                              # add
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462    # add
  #strategy: {}                                     # remove
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: ds-important                            # add
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462  # add
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:                                 # add
            cpu: 10m                                # add
            memory: 10Mi                            # add
      tolerations:                                  # add
      - effect: NoSchedule                          # add
        key: node-role.kubernetes.io/control-plane  # add
#status: {}                                         # remove
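Create the DaemonSet and confirm a Pod is running on every node, controlplane included:

k create -f 11.yaml
k -n project-tiger get ds ds-important
k -n project-tiger get pod -l id=ds-important -o wide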

Question 10

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

Solution

Analysis: this tests podAntiAffinity. Create a Deployment with three replicas and add a podAntiAffinity rule keyed on the id=very-important label with topologyKey kubernetes.io/hostname, so that no two Pods of the Deployment can be scheduled onto the same node.
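A minimal sketch of such a Deployment (the file name 10_deploy.yaml is assumed; the affinity block is what keeps two of these Pods off the same node):

# 10_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    id: very-important
  name: deploy-important
  namespace: project-tiger
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1
      - image: kubernetes/pause
        name: container2
      affinity:                                            # add
        podAntiAffinity:                                   # add
          requiredDuringSchedulingIgnoredDuringExecution:  # add
          - labelSelector:                                 # add
              matchExpressions:                            # add
              - key: id                                    # add
                operator: In                               # add
                values:                                    # add
                - very-important                           # add
            topologyKey: kubernetes.io/hostname            # add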
