Problem Description
This note records how to set up a Kubernetes cluster (for testing) on UOS Server 20 Military, and how to handle related problems.
Solution
Caveats
The cluster deployment method recorded in this note is for experiments only; it must not be used in production.
# 02/16/2022 This note originally documented how to set up a Kubernetes 1.22 cluster.
Environment Overview
Before you begin
1) A compatible Linux host. OS: UOS Server 20 Military
2) 2 GB or more of RAM per machine (any less will leave little room for your apps), and 2 CPUs or more.
3) Full network connectivity between all machines in the cluster (public or private network is fine).
Network information: k8s-master: 172.16.0.125; k8s-worker-01: 172.16.0.126; k8s-worker-02: 172.16.0.128
4)Unique hostname, MAC address, and product_uuid for every node. See here for more details.
5)Certain ports are open on your machines. See here for more details.
6)Swap disabled. You MUST disable swap in order for the kubelet to work properly.
Software version: Kubernetes v1.22
Starting from the master node, we manage the cluster and its nodes with the kubeadm and kubectl commands.
Step 1: Run on all nodes
Environment initialization
# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# Disable SELinux
# setenforce 0
yes | cp /etc/selinux/config /etc/selinux/config.backup
sed -i 's%SELINUX=enforcing%SELINUX=disabled%g' /etc/selinux/config

# Disable the swap partition
# swapoff -a && sysctl -w vm.swappiness=0
yes | cp /etc/fstab /etc/fstab.backup
sed -i -E 's/(.+\s+swap\s+.+)/# \1/g' /etc/fstab
swapoff -a

# Load kernel modules (load br_netfilter now, or the bridge sysctls below will fail)
cat > /etc/modules-load.d/kubernetes.conf <<EOF
br_netfilter
EOF
modprobe br_netfilter
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
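The sed expression above that comments out the swap entry in /etc/fstab can be dry-run against a copy first; a minimal sketch using a sample fstab (the UUIDs are made up for illustration):

```shell
# Sample fstab content (illustrative entries only)
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=abcd-1234 /         ext4 defaults 0 1
UUID=efgh-5678 swap      swap defaults 0 0
EOF

# Same expression as above: comment out any line mounting a swap area
sed -i -E 's/(.+\s+swap\s+.+)/# \1/g' "$tmp"
cat "$tmp"
```

Once the output looks right (the swap line is commented, the root mount is untouched), the same expression is safe to run on the real /etc/fstab.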
Install the Docker service
Deployment is otherwise unremarkable, but Docker's cgroup driver deserves attention; to keep things simple, we use the systemd driver directly.
1) Install the service: Docker/Installing
2) Configure the service:
mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
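Before restarting Docker it is worth checking that daemon.json is valid JSON, since a syntax error there prevents the daemon from starting. A small sketch, validated here against a temporary copy (on a real node, point the check at /etc/docker/daemon.json instead):

```shell
# Write the same configuration to a temporary file for illustration
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON
if python3 -m json.tool "$cfg" > /dev/null; then
  echo "daemon.json: OK"
else
  echo "daemon.json: INVALID"
fi
```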
Install the kubeadm command
Configure the package repository so the required packages can be installed:
# The official repo is usually unreachable from inside China (unless you use a proxy; YUM supports one)
# https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
# Use the Aliyun mirror instead
cat > /etc/yum.repos.d/kubernetes-ali.repo <<EOF
[kubernetes-ali]
name=Kubernetes ALi
# for x86_64
# baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# for aarch64
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
enabled=1
gpgcheck=0
EOF

# Finally, refresh the YUM cache
yum makecache
Install the kubeadm tooling (but do not start the service):
# ---------------------------------------------------------
# Install the socat / conntrack-tools dependencies
# Q: On uosEuler release 20, the repositories do not contain the socat and conntrack-tools packages?
# A: We downloaded the uos-server-20-1060e-arm64 image and extracted the two packages from it.
yum install -y socat conntrack-tools

# ---------------------------------------------------------
# Install the Kubernetes components
yum install -y kubeadm-1.22.3 kubelet-1.22.3-0 kubectl-1.22.3
yum install kubernetes-cni.aarch64  # for aarch64
yum install kubernetes-cni.x86_64   # for x86_64

# If the service is not enabled, kubeadm init prints a warning;
# but do NOT start it yet: initialization has not run, so configuration files the
# service needs (e.g. /var/lib/kubelet/config.yaml) do not exist yet.
# Thanks to the group members for reporting this :-)
systemctl enable kubelet
Step 2: Run on the Master
Node initialization
Run the following commands to initialize the node:
# ---------------------------------------------------------
# Before initialization
# If needed, run kubeadm config images list and mirror the listed images to a private registry;
kubeadm config images list
# ---------------------------------------------------------
# Start the initialization
# With the official images the initialization may fail: kubeadm pulls from k8s.gcr.io, which is unreachable from inside China;
# so we use the kubeadm init --image-repository option to point at the Aliyun mirror; we did not use a private registry.
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers
# ---------------------------------------------------------
# Wait for the initialization to finish
# After kubeadm init completes, pay attention to the following output:
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.19.100.12:6443 --token gfryox.ytmse7e3r0scurcy \
--discovery-token-ca-cert-hash sha256:d2fcd2bc71c258f5af9cc444094c4add43705e52adad17c78b6f35bf54689ea3
# The output above:
# 1. confirms that initialization succeeded;
# 2. lists the three commands to run next;
# 3. reminds you to deploy a Pod network to the cluster (pick one from the official add-on list);
# 4. gives the command that worker nodes run to join the cluster.
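The "mirror the images to a private registry" step mentioned before initialization can be scripted. A sketch, where registry.example.com is a hypothetical private registry and the image list is abbreviated (on a real node you would feed it from `kubeadm config images list`):

```shell
# Hypothetical private registry (assumption -- replace with your own)
registry="registry.example.com"

# In practice: images=$(kubeadm config images list); abbreviated here for illustration
images="k8s.gcr.io/kube-apiserver:v1.22.3 k8s.gcr.io/coredns/coredns:v1.8.4"

for img in $images; do
  # Keep the path after the registry host, prefix it with the private registry
  target="$registry/${img#k8s.gcr.io/}"
  # Print the commands instead of running them (this sketch assumes no Docker daemon)
  echo "docker pull $img"
  echo "docker tag  $img $target"
  echo "docker push $target"
done
```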
Notes:
1) Older versions of kubeadm init lack the --image-repository option, but you can pass --config with a kubeadm-config.yaml file that specifies the image repository. (To keep this note from growing even longer, the details are omitted; see Container Image: Replicate Images for handling images that cannot be pulled.)
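For reference, the --config approach can be sketched as a minimal kubeadm-config.yaml; the repository address is the Aliyun mirror used above, the version and Pod CIDR match this note, and the rest are illustrative defaults:

```yaml
# kubeadm-config.yaml -- pass it with: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
```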
Deploy the network plugin
As the kubeadm init output suggests, we need to deploy a Pod network to the cluster; once it is in place, Pods on different hosts can reach each other. The Pod network is an overlay network spanning the worker nodes. See the Networking and Network Policy documentation for more on network policy.
To get started quickly, we deploy the Flannel network plugin:
# Create the network
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Then check the status
kubectl get nodes
kubectl get pods --all-namespaces

# Nodes should be Ready and Pods should be Running;
# ContainerCreating means the Pod is still being created -- just wait.
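The "wait until Ready" step can be scripted as a retry loop. A sketch with a generic helper, demonstrated with a stand-in check; in the real cluster you would replace the stand-in with something like `kubectl get nodes | grep -q ' Ready '`:

```shell
# Retry a command until it succeeds or the attempts are used up
wait_for() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Stand-in check that succeeds on its 3rd call, simulating a node
# that becomes Ready after a while (replace with a kubectl check)
state=$(mktemp)
echo 0 > "$state"
becomes_ready() {
  local n=$(( $(cat "$state") + 1 ))
  echo "$n" > "$state"
  [ "$n" -ge 3 ]
}

wait_for 5 becomes_ready && echo "cluster ready"
```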
(Optional) Allow scheduling Pods on the Master node
coreos – Allow scheduling of pods on Kubernetes master? – Stack Overflow
When resources are tight and we cannot provide multiple hosts for the cluster, remove the Master's taint so that Pods can be scheduled on the Master as well:
kubectl taint nodes --all node-role.kubernetes.io/master-
Step 3: Run on the Workers
Add a node
kubeadm join 172.19.100.12:6443 --token gfryox.ytmse7e3r0scurcy \
--discovery-token-ca-cert-hash sha256:d2fcd2bc71c258f5af9cc444094c4add43705e52adad17c78b6f35bf54689ea3
# !!! error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
# !!! This error occurs because the --token value has expired (I deployed the Master and the Nodes a long time apart); tokens have a time to live (TTL);
# !!! run kubeadm token create on the Master to create a new token;
# !!! https://github.com/kubernetes/kubeadm/issues/1310
# !!! or run kubeadm token create --print-join-command to regenerate the whole join command
# Regenerate the token
kubeadm token create
# If tokens keep expiring, create one that never expires (not recommended)
kubeadm token create --ttl 0
Remove a node
kubectl get nodes
kubectl drain "<node-name>" --ignore-daemonsets --delete-local-data
kubectl delete node "<node-name>"
Step 4: Verify the cluster status
On the Master, check whether the Node joined successfully (the sample output below was captured on an earlier cluster):
# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   25h     v1.14.0
k8s-node01   Ready    <none>   7m19s   v1.14.0
Deployment troubleshooting
[Sol.] … failed to find plugin "portmap" in path [/opt/cni/bin] …
Reference: deploying flannel on k8s reports failed to find plugin "portmap" in path [/opt/cni/bin]
Problem
# journalctl -f -u kubelet.service
...
May 07 21:10:34 foo-k8s-x86-cp120 kubelet[7393]: I0507 21:10:34.453087 7393 cni.go:204] "Error validating CNI config list" configList="{\n \"name\": \"cbr0\",\n \"cniVersion\": \"0.3.1\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n" err="[failed to find plugin \"portmap\" in path [/opt/cni/bin]]"
May 07 21:10:34 foo-k8s-x86-cp120 kubelet[7393]: I0507 21:10:34.453654 7393 cni.go:239] "Unable to update cni config" err="no valid networks found in /etc/cni/net.d"
May 07 21:10:34 foo-k8s-x86-cp120 kubelet[7393]: E0507 21:10:34.551170 7393 kubelet.go:2337] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
...
Root cause
1) Environment: Kylin-Server-V10_U1-Release-Build02-20210824-GFB-x86_64
Solution
yum install -y kubernetes-cni.x86_64
[Sol.] … applying cgroup configuration for process caused "No such device or address" …
Problem
kubectl describe pod -n kube-system coredns-f9fd979d6-bgggv
...
OCI message: "process_linux.go:264: applying cgroup configuration for process caused \"No such device or address\""
...
Root cause
1) Environment: Kylin-Server-V10_U1-Release-Build02-20210824-GFB-x86_64
Solution
1) Either switch Docker to the cgroupfs driver, i.e. remove the "exec-opts": ["native.cgroupdriver=systemd"] entry from daemon.json;
2) or upgrade Docker to bypass the problem;
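With that entry removed, the daemon.json from the installation step would look roughly like the fragment below (Docker then falls back to the cgroupfs driver; note that the kubelet's cgroup driver must be configured to match):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
```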
[WIP.] … open /run/flannel/subnet.env: no such file or directory …
Warning FailedCreatePodSandBox 13m (x53 over 39m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ae9e11c17f11c7a938c5e9d61bc1871149eb1dbc05052229cc15e43ed56ad01d" network for pod "xinghuo-connector-service-6d9fb5b667-8j7vk": networkPlugin cni failed to set up pod "xinghuo-connector-service-6d9fb5b667-8j7vk_rivspace" network: loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
# Re-apply the flannel manifest; flannel writes /run/flannel/subnet.env when its Pod starts
kubectl apply -f kube-flannel.yml