Problem
Infrastructure services such as log collectors and monitoring agents need exactly one Pod instance running on every node.
Solution
Use a DaemonSet.
For example, to run one Fluentd container on every node,
write a fluentd-daemonset.yaml file like the following:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: fluentd
spec:
  template:
    metadata:
      name: fluentd
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: gcr.io/google_containers/fluentd-elasticsearch:1.3
        env:
        - name: FLUENTD_ARGS
          value: -qq
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
Deploy the resource file:
kubectl create -f fluentd-daemonset.yaml
kubectl get ds
kubectl describe ds/fluentd
Data Storage (PVC)
StorageClass & PVC for DaemonSet
In a DaemonSet, all Pods share the same PVC.
If each Pod needs its own independent storage:
1) control it in the application, e.g. restrict which directory the current Pod may write to (directory locks, etc.);
2) write data onto the node itself via a hostPath volume;
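A declarative variant of the options above: keep a single shared volume but give every DaemonSet Pod its own per-node subdirectory, by injecting the node name through the Downward API and using subPathExpr (available since Kubernetes 1.15). This is a minimal sketch; the container image and the claim name shared-logs are illustrative:

```yaml
# Sketch: each DaemonSet Pod writes to its own per-node
# subdirectory of one shared PVC ("shared-logs" is hypothetical).
spec:
  template:
    spec:
      containers:
      - name: app
        image: busybox:1.36
        env:
        - name: NODE_NAME            # injected via the Downward API
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: data
          mountPath: /data
          subPathExpr: $(NODE_NAME)  # one subdirectory per node
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: shared-logs
```

With this layout, Pods never contend for the same files even though the PVC itself is shared.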
Internals and Scheduling
Q: Why doesn't PodTolerationRestriction work for DaemonSet Pods?
A:
1) The DaemonSet controller creates a Pod for each eligible node and sets the Pod's spec.affinity.nodeAffinity field to match the target host.
2) After the Pod is created, the default scheduler takes over and binds the Pod to the target host by setting its .spec.nodeName field. If the new Pod cannot fit on the node, the default scheduler may preempt (evict) some existing Pods, based on the priority of the new Pod.
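The node affinity injected in step 1 pins the Pod to a single node by name. A sketch of that documented field, with an illustrative node name:

```yaml
# Node affinity added by the DaemonSet controller
# ("node-1" stands in for the actual target node).
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchFields:
        - key: metadata.name
          operator: In
          values:
          - node-1
```

Because the match is on the node's metadata.name field rather than a label, the Pod can only ever land on that one node.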
Normally, the node that a Pod runs on is selected by the Kubernetes scheduler. Before 1.12, however, DaemonSet Pods were created and scheduled by the DaemonSet controller itself, which made their behavior inconsistent with Pods handled by the default scheduler.
Since 1.12, DaemonSet Pods are scheduled by the default scheduler, which relies on Pod priority to determine the order in which Pods are scheduled.
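Because scheduling (and preemption) order now follows Pod priority, node-level infrastructure DaemonSets are commonly given a high-priority PriorityClass. A minimal sketch, where the class name and value are illustrative:

```yaml
# Hypothetical PriorityClass for infrastructure DaemonSets.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-critical     # illustrative name
value: 1000000                 # higher value = scheduled/preempts first
globalDefault: false
description: High priority for node-level infrastructure DaemonSets.
```

The DaemonSet then references it via spec.template.spec.priorityClassName.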