Downward API

In the previous sections we covered Pods from their internals through their lifecycle. As the most central object in Kubernetes and its basic scheduling unit, a Pod has a great many attributes. We have already used the volumes attribute to declare a data volume; running kubectl explain pod.spec.volumes shows that this object has a large number of fields, of which we have so far only used the hostPath and emptyDir modes. There is another mode called downwardAPI. It differs from the others in that it is not meant to store container data, nor to exchange data between the container and the host, but to let the containers in a Pod obtain information about the Pod object itself.

The downward API provides two ways to inject Pod information into containers:

Environment variables: for individual values; Pod and container information can be injected directly into the container's environment.

Volume mount: Pod information is rendered into files that are mounted into the container.

Environment variables

Let's use the downward API to inject the Pod's IP, name, and namespace into the container's environment variables, then print all environment variables inside the container to verify:

```yaml
[root@master1 ~]# cat env-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-pod
  namespace: kube-system
spec:
  containers:
  - name: env-pod
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

Notice that we use a new way to set the env values: valueFrom. Since the Pod's name and namespace are metadata, fixed before the Pod is created, we can obtain them through metadata. The Pod's IP is different: it is not fixed and changes whenever the Pod is recreated, so it is status data and we fetch it through status. Besides using fieldRef to obtain basic Pod information, we can also use resourceFieldRef to obtain the container's resource requests and resource limits.
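As a sketch of resourceFieldRef (the resources block and the variable names here are illustrative additions, not part of the example above):

```yaml
# Hypothetical sketch: expose the container's own CPU and memory limits
# as environment variables via resourceFieldRef. The limits values and
# env var names are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: resource-env-pod
spec:
  containers:
  - name: resource-env-pod
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    resources:
      limits:
        cpu: "500m"
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: resource-env-pod
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: resource-env-pod
          resource: limits.memory
```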

```shell
kubectl create -f env-pod.yaml
kubectl logs env-pod -n kube-system | grep POD
kubectl logs -f env-pod -n kube-system
```

```shell
[root@master1 ~]# kubectl logs env-pod -n kube-system
POD_IP=10.244.2.38
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
KUBE_DNS_SERVICE_PORT_DNS_TCP=53
HOSTNAME=env-pod
SHLVL=1
HOME=/root
KUBE_DNS_SERVICE_HOST=10.96.0.10
KUBE_DNS_PORT_9153_TCP_ADDR=10.96.0.10
KUBE_DNS_PORT_9153_TCP_PORT=9153
KUBE_DNS_PORT_9153_TCP_PROTO=tcp
KUBE_DNS_SERVICE_PORT=53
KUBE_DNS_PORT=udp://10.96.0.10:53
POD_NAME=env-pod
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBE_DNS_PORT_53_TCP_ADDR=10.96.0.10
KUBERNETES_PORT_443_TCP_PORT=443
KUBE_DNS_SERVICE_PORT_METRICS=9153
KUBE_DNS_PORT_9153_TCP=tcp://10.96.0.10:9153
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_ADDR=10.96.0.10
KUBE_DNS_PORT_53_TCP_PORT=53
KUBE_DNS_PORT_53_TCP_PROTO=tcp
KUBE_DNS_PORT_53_UDP_PORT=53
KUBE_DNS_SERVICE_PORT_DNS=53
KUBE_DNS_PORT_53_UDP_PROTO=udp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
POD_NAMESPACE=kube-system
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
KUBE_DNS_PORT_53_TCP=tcp://10.96.0.10:53
KUBE_DNS_PORT_53_UDP=udp://10.96.0.10:53
```

We can see that the Pod's IP, name, and namespace have all been printed via environment variables. The KUBERNETES_* and KUBE_DNS_* variables come from the Services in the cluster, which you can list with:

```shell
kubectl get svc -n kube-system
```

Volume mount

Besides environment variables, the downward API also supports exposing Pod information through a volume mount. Next we use the downward API to mount the Pod's labels and annotations into a file inside the container, then print the file's contents in the container to verify. The corresponding manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
  namespace: kube-system
  labels:
    k8s-app: test-volume
    node-env: test
  annotations:
    own: wangmuniangniang
    bulid: test
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
  containers:
  - name: volume-pod
    image: busybox
    args:
    - sleep
    - "3600"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
```

We mount the metadata labels and annotations as files under the /etc/podinfo directory. Create the Pod above.

Once it is created successfully, we can enter the container and check whether the metadata has indeed been written into the files:

```shell
[root@master1 ~]# kubectl exec -it volume-pod /bin/sh -n kube-system
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ls /etc/podinfo/
annotations  labels
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # cat /etc/podinfo/annotations
bulid="test"
kubernetes.io/config.seen="2022-04-27T03:23:02.840574876-04:00"
kubernetes.io/config.source="api"
own="wangmuniangniang"
/ #
```

We can see that the Pod's labels and annotations have been mounted under /etc/podinfo as the labels and annotations files (note the kubernetes.io/config.* annotations added automatically by the system). The set of fields supported by the downward API is already quite rich:

fieldRef can reference:

spec.nodeName: the name of the node
status.hostIP: the node's IP
metadata.name: the Pod's name
metadata.namespace: the Pod's namespace
status.podIP: the Pod's IP
spec.serviceAccountName: the name of the Pod's ServiceAccount
metadata.uid: the Pod's UID
metadata.labels['<KEY>']: the value of the label with the given key
metadata.annotations['<KEY>']: the value of the annotation with the given key
metadata.labels: all of the Pod's labels
metadata.annotations: all of the Pod's annotations

resourceFieldRef can reference:

a container's CPU limit, CPU request, memory limit, and memory request
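resourceFieldRef can also be used inside a downwardAPI volume item. As a sketch (the container name, path, and divisor are illustrative assumptions):

```yaml
# Hypothetical fragment: expose a container's memory limit as a file,
# scaled to mebibytes via the optional divisor field.
volumes:
- name: resourceinfo
  downwardAPI:
    items:
    - path: mem_limit
      resourceFieldRef:
        containerName: volume-pod
        resource: limits.memory
        divisor: 1Mi
```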

Note that any information the downward API exposes must be determinable before the container processes in the Pod start. If you need information that only exists after the containers are running, such as a container process's PID, the downward API cannot help; instead, consider defining a sidecar container in the Pod to obtain it.

In practice, if your application needs basic information about its Pod, you can usually obtain it through the downward API, then use a startup script or an initContainer to inject the Pod information into your containers, after which your own application can handle the related logic normally.
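One possible shape for this pattern, sketched under assumptions (all names, images, and paths below are illustrative, not from the original text): an initContainer copies the downward API files onto a shared emptyDir that the application container then reads.

```yaml
# Hypothetical sketch: render downward API info into a shared emptyDir
# before the application container starts.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
  labels:
    app: init-demo
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
  - name: appconfig
    emptyDir: {}
  initContainers:
  - name: render-config
    image: busybox
    # Copy the downward API file into the volume the app will read.
    command: ["/bin/sh", "-c", "cp /etc/podinfo/labels /config/pod-labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
    - name: appconfig
      mountPath: /config
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: appconfig
      mountPath: /config
```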

Besides obtaining information about the Pod itself through the downward API, we can also map other resource objects, such as Secret and ConfigMap, to obtain their information. These too can be consumed via environment variables or via volume mounts. However, information obtained through environment variables is not updated automatically, so in general it is recommended to consume them as volume-mounted files: files mounted into the Pod via a volume are hot-updated.
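As a sketch of the recommended volume approach (the ConfigMap name, key, and mount path here are illustrative assumptions):

```yaml
# Hypothetical sketch: mount a ConfigMap as files; later updates to the
# ConfigMap are eventually synced into the mounted files, unlike env vars.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    log_level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  volumes:
  - name: config
    configMap:
      name: app-config
  containers:
  - name: app
    image: busybox
    args: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/appconfig
```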

PodPreset

We have now learned a great deal about Pods, but some readers may still feel that a Pod has too many fields. Could Kubernetes automatically fill in some fields for a Pod? This is a very real need, and Kubernetes provides a feature called PodPreset to address it.

Kubernetes provides a PodPreset admission controller. When it is enabled and a Pod creation request comes in, the system performs the following operations:

Retrieve all available PodPresets.

Check whether the label selector of any PodPreset matches the labels of the Pod being created.

Attempt to merge the resources defined by the matching PodPresets into the Pod being created.

If an error occurs, raise an event on the Pod recording the merge error, and create the Pod without injecting any PodPreset resources.

Annotate the resulting modified Pod spec to indicate that it has been modified by a PodPreset.

Each Pod can match zero or more PodPresets, and each PodPreset can be applied to zero or more Pods. When a PodPreset is applied, Kubernetes modifies the Pod spec: for changes to env, envFrom, and volumeMounts, Kubernetes modifies the spec of every container in the Pod; for changes to volumes, Kubernetes modifies the Pod spec itself.

Enabling PodPreset

To enable the PodPreset feature, make sure you are running Kubernetes 1.8 or later, and add PodPreset to the admission controllers. (Note that PodPreset never graduated beyond alpha and was removed in Kubernetes v1.20, so the steps below only work on clusters up to v1.19.)

A common requirement is to keep a Pod's time in sync with its host. Normally we accomplish this by mounting the host's localtime. First, a Pod without the mount:

```yaml
[root@master1 ~]# cat time-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: time-demo
  labels:
    app: time
spec:
  containers:
  - name: time-demo
    image: nginx
    ports:
    - containerPort: 80
```

We can see that the time in this Pod differs from the node's. In that case we can mount the host's localtime file into the Pod:

```yaml
[root@master1 ~]# cat time-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: time-demo
  labels:
    app: time
spec:
  volumes:
  - name: host-time
    hostPath:
      path: /etc/localtime
  containers:
  - name: time-demo
    image: nginx
    volumeMounts:
    - name: host-time
      mountPath: /etc/localtime
    ports:
    - containerPort: 80
```

Now the Pod's time matches the node's. But usually all of our Pods need time synchronization, and adding the mount to every Pod is tedious. This is where PodPreset can serve as a preset template:

```yaml
[root@master1 ~]# cat time-preset.yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: time-preset
  namespace: default
spec:
  selector:
    matchLabels:
      app: time
  volumeMounts:
  - name: localtime
    mountPath: /etc/localtime
  volumes:
  - name: localtime
    hostPath:
      path: /etc/localtime
```

Before this, we need to modify the apiserver's parameters in /etc/kubernetes/manifests/kube-apiserver.yaml. After the change, move the kube-apiserver.yaml file out of the manifests directory and back again, which effectively forces the static Pod to restart:

```yaml
[root@master1 ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.1.126:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.126
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --enable-admission-plugins=NodeRestriction,PodPreset   # newly added line
    - --runtime-config=settings.k8s.io/v1alpha1=true         # newly added line
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.1.126
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.1.126
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.1.126
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
```

With the two flags above in place, the feature is enabled.

Verify the Pod's state. Filter by the Pod's label:

```shell
kubectl get pods -l app=time
```

We can see that the Pod's time is consistent with the node's.

```yaml
[root@master1 ~]# kubectl get pod time-demo -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"time"},"name":"time-demo","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"time-demo","ports":[{"containerPort":80}],"volumeMounts":[{"mountPath":"/etc/localtime","name":"host-time"}]}],"volumes":[{"hostPath":{"path":"/etc/localtime"},"name":"host-time"}]}}
  creationTimestamp: "2022-04-28T09:19:55Z"
  labels:
    app: time
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:containers:
          k:{"name":"time-demo"}:
            .: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":80,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/etc/localtime"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"host-time"}:
            .: {}
            f:hostPath:
              .: {}
              f:path: {}
              f:type: {}
            f:name: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2022-04-28T09:19:55Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.244.1.43"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2022-04-28T09:20:14Z"
  name: time-demo
  namespace: default
  resourceVersion: "239546"
  selfLink: /api/v1/namespaces/default/pods/time-demo
  uid: c0f154d6-e20f-48b9-a787-bdaadcec6249
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: time-demo
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/localtime
      name: host-time
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-sbhsk
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /etc/localtime
      type: ""
    name: host-time
  - name: default-token-sbhsk
    secret:
      defaultMode: 420
      secretName: default-token-sbhsk
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-04-28T09:19:55Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-04-28T09:20:14Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-04-28T09:20:14Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-04-28T09:19:55Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://c92926918312220c9dc2360244d8b72a1e3452c54f36179b775c8b46f6dab73f
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    lastState: {}
    name: time-demo
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-04-28T09:20:14Z"
  hostIP: 192.168.1.127
  phase: Running
  podIP: 10.244.1.43
  podIPs:
  - ip: 10.244.1.43
  qosClass: BestEffort
  startTime: "2022-04-28T09:19:55Z"
```