[Container Application Series] Deploying a Kubernetes v1.25.11 Cluster with Docker as the Container Engine (CentOS 7)

As is well known, Kubernetes v1.24 removed the dockershim code, which means that from v1.24 onward Kubernetes no longer supports Docker natively. Since some users may still need Docker as the cluster's default container engine, this article was written.

This article uses the cri-dockerd v0.3.4 plugin together with kubeadm to deploy Kubernetes v1.25.11 (the latest release as of this writing).

I. Preparation

1. Host planning

Hostname               IP Address       Software Versions                                      Role          Minimum Spec
k8s-master.linux.com   192.168.140.10   Kubernetes 1.25.11, Docker 24.0.4, cri-dockerd 0.3.4   Master node   2C/2G
k8s-node01.linux.com   192.168.140.11   Kubernetes 1.25.11, Docker 24.0.4, cri-dockerd 0.3.4   Worker node   2C/2G
k8s-node02.linux.com   192.168.140.12   Kubernetes 1.25.11, Docker 24.0.4, cri-dockerd 0.3.4   Worker node   2C/2G

2. Network planning

  • Pod CIDR: 192.168.0.0/16
  • Service CIDR: 172.16.0.0/16

3. Configure domestic (Aliyun) Yum and EPEL mirrors (all three hosts)

If wget is not installed, install it first: yum install -y wget

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache fast

4. Disable the firewall and SELinux, and configure time synchronization (all three hosts)

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config && setenforce 0

Configure time synchronization:

yum install -y ntpdate
crontab -e

*/30 * * * *   /usr/sbin/ntpdate  120.25.115.20 &> /dev/null
init 6

5. Disable DNS lookups for SSH connections on all hosts (speeds up SSH logins)

sed -ri 's|#UseDNS yes|UseDNS no|' /etc/ssh/sshd_config

systemctl restart sshd

6. Configure passwordless SSH

[root@k8s-master ~]# ssh-keygen -t rsa
[root@k8s-master ~]# mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.140.11:/root/
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.140.12:/root/

7. Test passwordless SSH and check time synchronization

[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i 'hostname; date'
> done

k8s-master.linux.com
Sat Jul 15 14:10:42 CST 2023
k8s-node01.linux.com
Sat Jul 15 14:10:43 CST 2023
k8s-node02.linux.com
Sat Jul 15 14:10:43 CST 2023

8. Configure hostname resolution on all hosts

[root@k8s-master ~]# vim /etc/hosts
# only the added entries are shown
192.168.140.10 k8s-master.linux.com k8s-master
192.168.140.11 k8s-node01.linux.com k8s-node01
192.168.140.12 k8s-node02.linux.com k8s-node02

[root@k8s-master ~]# scp -r /etc/hosts root@192.168.140.11:/etc/
hosts                                             100%  299    41.8KB/s   00:00    
[root@k8s-master ~]# scp -r /etc/hosts root@192.168.140.12:/etc/
hosts                                             100%  299    55.4KB/s   00:00    

9. Disable swap on all hosts

[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i swapoff -a
> ssh root@192.168.140.$i free -m
> ssh root@192.168.140.$i sysctl -w vm.swappiness=0
> done

10. Comment out the swap entry in /etc/fstab on all hosts

[root@k8s-master ~]# sed -ri 's|\/dev\/mapper\/centos-swap|# \/dev\/mapper\/centos-swap|g' /etc/fstab
[root@k8s-master ~]# mount -a

[root@k8s-node01 ~]# sed -ri 's|\/dev\/mapper\/centos-swap|# \/dev\/mapper\/centos-swap|g' /etc/fstab
[root@k8s-node01 ~]# mount -a

[root@k8s-node02 ~]# sed -ri 's|\/dev\/mapper\/centos-swap|# \/dev\/mapper\/centos-swap|g' /etc/fstab
[root@k8s-node02 ~]# mount -a
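The sed above only prefixes the swap entry with `#`. Before editing the real /etc/fstab, the expression can be dry-run against a throwaway copy (the sample content and temp path below are illustrative):

```shell
# Create a throwaway copy of an fstab with a swap entry (sample content)
cat > /tmp/fstab.test <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Same substitution as above: prefix the swap entry with '#'
sed -ri 's|/dev/mapper/centos-swap|# /dev/mapper/centos-swap|g' /tmp/fstab.test

# The swap line should now start with '#'; the root entry is untouched
grep swap /tmp/fstab.test
```

Using `|` as the sed delimiter avoids having to escape every `/` in the device path.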

11. Adjust resource limits on all hosts

[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i ulimit -SHn 65535
> done
  • nofile: maximum number of open files
  • nproc: maximum number of processes
  • memlock: maximum locked-in-memory address space

Note that ulimit -SHn above only affects the current session; the limits.conf entries below make the limits persistent.
[root@k8s-master ~]# vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

[root@k8s-node01 ~]# vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

[root@k8s-node02 ~]# vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

12. Configure package repositories

Docker CE repository

[root@k8s-master ~]# vim /etc/yum.repos.d/docker.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

Configure the kubeadm / kubelet repository

[root@k8s-master ~]# vim /etc/yum.repos.d/k8s.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

Copy the repo files to the other two hosts

[root@k8s-master ~]# for i in 11 12
> do
> rsync -av /etc/yum.repos.d/*.repo root@192.168.140.$i:/etc/yum.repos.d/
> done

Rebuild the Yum cache (all hosts)

[root@k8s-master ~]# yum clean all && yum makecache fast
[root@k8s-node01 ~]# yum clean all && yum makecache fast
[root@k8s-node02 ~]# yum clean all && yum makecache fast

13. Update the system to the latest packages

[root@k8s-master ~]# yum update -y
[root@k8s-node01 ~]# yum update -y
[root@k8s-node02 ~]# yum update -y
[root@k8s-master ~]# init 6
[root@k8s-node01 ~]# init 6
[root@k8s-node02 ~]# init 6

14. Upgrade the kernel (optional)

For a CentOS 7 kernel-upgrade guide, see: https://www.wsjj.top/archives/kernel

II. Adjusting System Parameters

1. Install IPVS tooling on all hosts

[root@k8s-master ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-node01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-node02 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y

2. Load the IPVS modules on all hosts

[root@k8s-master ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
# On 5.x kernels, replace nf_conntrack_ipv4 with nf_conntrack
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master ~]# scp -r /etc/modules-load.d/ipvs.conf root@192.168.140.11:/etc/modules-load.d/
ipvs.conf                                    100%  211   401.5KB/s   00:00    
[root@k8s-master ~]# scp -r /etc/modules-load.d/ipvs.conf root@192.168.140.12:/etc/modules-load.d/
ipvs.conf                                    100%  211   309.1KB/s   00:00
[root@k8s-master ~]# systemctl enable --now systemd-modules-load
[root@k8s-node01 ~]# systemctl enable --now systemd-modules-load
[root@k8s-node02 ~]# systemctl enable --now systemd-modules-load
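One pitfall with modules-load.d files: systemd treats every line that does not *start* with `#` as a literal module name, so a trailing inline comment after a module name makes that load fail. A quick lint against a sample file (temp path is arbitrary):

```shell
# Sample modules-load.d file (abridged); comments must sit on their own lines
cat > /tmp/ipvs.conf.test <<'EOF'
# On 5.x kernels use nf_conntrack instead of nf_conntrack_ipv4
ip_vs
ip_vs_rr
nf_conntrack_ipv4
EOF

# Flag any module line that carries a trailing comment (none expected here)
if grep -E '^[^#].*#' /tmp/ipvs.conf.test; then
    echo "inline comments found -- systemd-modules-load will reject these lines"
else
    echo "ipvs.conf is clean"
fi
```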

3. Tune kernel parameters on all hosts

[root@k8s-master ~]# vim /etc/sysctl.d/k8s.conf

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
[root@k8s-master ~]# scp -r /etc/sysctl.d/k8s.conf root@192.168.140.11:/etc/sysctl.d/
k8s.conf                                     100%  704     1.4MB/s   00:00    
[root@k8s-master ~]# scp -r /etc/sysctl.d/k8s.conf root@192.168.140.12:/etc/sysctl.d/
k8s.conf                                     100%  704     1.0MB/s   00:00
[root@k8s-master ~]# sysctl --system
[root@k8s-node01 ~]# sysctl --system
[root@k8s-node02 ~]# sysctl --system
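Long sysctl fragments like this one accumulate duplicate keys easily, and within a file the last assignment silently wins. A quick duplicate-key check, demonstrated on a small sample (temp path is arbitrary):

```shell
# Throwaway sysctl fragment (sample content)
cat > /tmp/k8s-sysctl.test <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.core.somaxconn = 16384
EOF

# Print any key that appears more than once -- empty output means no duplicates
awk -F'=' '/^[^#]/ {gsub(/ /, "", $1); print $1}' /tmp/k8s-sysctl.test | sort | uniq -d
```

Running the same one-liner against the real /etc/sysctl.d/k8s.conf before `sysctl --system` catches copy-paste mistakes early.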

III. Installing Required Software on All Hosts

1. Install Docker

[root@k8s-master ~]# yum install -y docker-ce
[root@k8s-node01 ~]# yum install -y docker-ce
[root@k8s-node02 ~]# yum install -y docker-ce
[root@k8s-master ~]# systemctl enable --now docker
[root@k8s-node01 ~]# systemctl enable --now docker
[root@k8s-node02 ~]# systemctl enable --now docker

Configure a registry mirror and switch Docker's cgroup driver to systemd

Docker defaults to the cgroupfs cgroup driver; to keep it consistent with the kubelet and avoid trouble later, switch it to systemd.

[root@k8s-master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@k8s-master ~]# scp -r /etc/docker/daemon.json root@192.168.140.11:/etc/docker/
daemon.json                                  100%   94   140.1KB/s   00:00    
[root@k8s-master ~]# scp -r /etc/docker/daemon.json root@192.168.140.12:/etc/docker/
daemon.json                                  100%   94    43.8KB/s   00:00
[root@k8s-master ~]# systemctl restart docker
[root@k8s-node01 ~]# systemctl restart docker
[root@k8s-node02 ~]# systemctl restart docker
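daemon.json must be strict JSON (no comments, no trailing commas), and a malformed file prevents the Docker daemon from starting at all. It is worth validating before the restart; a minimal check, reproduced here on a temp copy (on CentOS 7 you may need `python` instead of `python3`):

```shell
# Reproduce the daemon.json above in a temp location and validate it as strict JSON
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on malformed JSON
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```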

Verify the driver on all three hosts:

[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i docker info | grep "Cgroup Driver"
> done
 Cgroup Driver: systemd
 Cgroup Driver: systemd
 Cgroup Driver: systemd

2. Download and configure cri-dockerd on all three hosts

Install cri-dockerd

As of this writing, the latest release is 0.3.4; download it from: https://github.com/Mirantis/cri-dockerd/releases

Because Kubernetes v1.24 removed dockershim, this step is only needed for Kubernetes v1.24 and later.

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el7.x86_64.rpm

Given mainland China's network restrictions, a pre-downloaded copy is available here: https://pan.baidu.com/s/1h1jRd4nVdOTeQMsmQbSd2w?pwd=f3nt

rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm

Edit the cri-docker.service unit file

[root@k8s-master ~]# vim /usr/lib/systemd/system/cri-docker.service
# only the modified section is shown
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8
# append these flags to ExecStart to select the CNI network plugin and a domestic pause image
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@k8s-master ~]# scp -r /usr/lib/systemd/system/cri-docker.service root@192.168.140.11:/usr/lib/systemd/system/
cri-docker.service                                100% 1383     1.3MB/s   00:00    
[root@k8s-master ~]# scp -r /usr/lib/systemd/system/cri-docker.service root@192.168.140.12:/usr/lib/systemd/system/
cri-docker.service                                100% 1383     1.5MB/s   00:00
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node02 ~]# systemctl daemon-reload

Start cri-docker

[root@k8s-master ~]# systemctl enable --now cri-docker
[root@k8s-node01 ~]# systemctl enable --now cri-docker
[root@k8s-node02 ~]# systemctl enable --now cri-docker

3. Install kubeadm, kubelet, and kubectl on all hosts

[root@k8s-master ~]# yum install -y kubeadm-1.25.11 kubelet-1.25.11 kubectl-1.25.11
[root@k8s-node01 ~]# yum install -y kubeadm-1.25.11 kubelet-1.25.11 kubectl-1.25.11
[root@k8s-node02 ~]# yum install -y kubeadm-1.25.11 kubelet-1.25.11 kubectl-1.25.11

Point the kubelet pause image at a domestic mirror

[root@k8s-master ~]# vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8"
[root@k8s-master ~]# scp -r /etc/sysconfig/kubelet root@192.168.140.11:/etc/sysconfig/
kubelet                                      100%  117   106.7KB/s   00:00    
[root@k8s-master ~]# scp -r /etc/sysconfig/kubelet root@192.168.140.12:/etc/sysconfig/
kubelet                                      100%  117   139.2KB/s   00:00
[root@k8s-master ~]# systemctl enable --now kubelet
[root@k8s-node01 ~]# systemctl enable --now kubelet
[root@k8s-node02 ~]# systemctl enable --now kubelet

IV. Initializing the Kubernetes Cluster

1. Prepare the initialization file on the master node

[root@k8s-master ~]# kubeadm config print init-defaults > new.yaml
[root@k8s-master ~]# vim new.yaml
# only the modified sections are shown

localAPIEndpoint:
  advertiseAddress: 192.168.140.10	# address the API server listens on
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock	# point kubeadm at the cri-dockerd socket
  imagePullPolicy: IfNotPresent
  name: k8s-master.linux.com	# set to this host's name
  taints: null

apiServer:
  certSANs:	# add this block manually
  - 192.168.140.10	# address to include in the API server certificate
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.140.10:6443	# control-plane endpoint; add this line manually
controllerManager: {}

imageRepository: registry.aliyuncs.com/google_containers	# switch the image registry to a domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.25.11	# pin the Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16	# pod CIDR from the network plan; add this line manually
  serviceSubnet: 172.16.0.0/16	# service CIDR from the network plan
scheduler: {}

2. Pull the initialization images on the master node

[root@k8s-master ~]# kubeadm config images pull --config /root/new.yaml

3. Initialize the cluster on the master node

Note that the tokens and certificate hashes in the generated output are unique to every deployment.

[root@k8s-master ~]# kubeadm init --config /root/new.yaml --upload-certs

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf	# set the environment variable

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
	# use this command when adding additional control-plane (master) nodes
  kubeadm join 192.168.140.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9f7a17bdb794de39d777c0a0301f5979ce1faeee6df833223d455c8c45728ec0 \
    --control-plane --certificate-key 4d69a847667698d461dbe235ed30b8fbd973d430f2ec8ee89f2b064a4183c714

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
	# command for joining worker nodes to the cluster
kubeadm join 192.168.140.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9f7a17bdb794de39d777c0a0301f5979ce1faeee6df833223d455c8c45728ec0

4. Configure the environment variable

[root@k8s-master ~]# vim /etc/profile
# append at the end of the file
export KUBECONFIG=/etc/kubernetes/admin.conf

[root@k8s-master ~]# source /etc/profile

5. Join the worker nodes to the cluster

[root@k8s-node01 ~]# kubeadm join 192.168.140.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9f7a17bdb794de39d777c0a0301f5979ce1faeee6df833223d455c8c45728ec0 \
    --cri-socket unix:///var/run/cri-dockerd.sock

# The --cri-socket flag must be given to select the cri-dockerd socket; without it, kubeadm defaults to containerd.

[root@k8s-node02 ~]# kubeadm join 192.168.140.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9f7a17bdb794de39d777c0a0301f5979ce1faeee6df833223d455c8c45728ec0 \
    --cri-socket unix:///var/run/cri-dockerd.sock
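The `--discovery-token-ca-cert-hash` in the join command is just a SHA-256 digest of the cluster CA's public key, so if the original kubeadm output is lost it can be recomputed with the standard openssl pipeline. Sketched below against a throwaway self-signed certificate; on a real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA certificate to demonstrate the hash pipeline
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
        -out /tmp/ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest
HASH=$(openssl x509 -pubkey -in /tmp/ca.crt \
       | openssl rsa -pubin -outform der 2>/dev/null \
       | openssl dgst -sha256 -hex | awk '{print $NF}')

echo "sha256:${HASH}"   # same format kubeadm join expects
```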

6. Back on the master node, check the cluster state

Pods must communicate across physical hosts, and no cross-host network exists yet, so the nodes report NotReady until a pod network add-on is deployed.
Either flannel or calico can provide cross-host networking; this tutorial uses calico.
For a flannel deployment guide, see: https://www.wsjj.top/archives/133

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS     ROLES           AGE     VERSION
k8s-master.linux.com   NotReady   control-plane   2m11s   v1.25.11
k8s-node01.linux.com   NotReady   <none>          105s    v1.25.11
k8s-node02.linux.com   NotReady   <none>          90s     v1.25.11

[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-c676cc86f-cxglq                        0/1     Pending   0          2m1s
kube-system   coredns-c676cc86f-gg6mv                        0/1     Pending   0          2m1s
kube-system   etcd-k8s-master.linux.com                      1/1     Running   0          2m14s
kube-system   kube-apiserver-k8s-master.linux.com            1/1     Running   0          2m16s
kube-system   kube-controller-manager-k8s-master.linux.com   1/1     Running   0          2m14s
kube-system   kube-proxy-lp9k8                               1/1     Running   0          2m1s
kube-system   kube-proxy-n6sxk                               1/1     Running   0          111s
kube-system   kube-proxy-r2cm2                               1/1     Running   0          96s
kube-system   kube-scheduler-k8s-master.linux.com            1/1     Running   0          2m14s

V. Deploying the Calico Network

1. Obtain the calico manifest

Download link: https://pan.baidu.com/s/1h1jRd4nVdOTeQMsmQbSd2w?pwd=f3nt

2. Set the etcd endpoint

Use either of the two methods below.

[root@k8s-master ~]# vim calico-etcd.yaml
# the manifest is abridged; only the modified section is shown
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://192.168.140.10:2379"	# etcd endpoint

Or run this command instead:

[root@k8s-master ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.140.10:2379"#g' calico-etcd.yaml

3. Provide the key and certificate contents used to connect to etcd

[root@k8s-master ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
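The `| tr -d '\n'` in the commands above is not cosmetic: `base64` wraps its output every 76 columns by default, while each certificate field written into the calico Secret must be a single unbroken line. A small illustration with dummy data (temp paths are arbitrary):

```shell
# base64 wraps long output across multiple lines by default
head -c 100 /dev/zero | base64 > /tmp/wrapped.txt
wc -l < /tmp/wrapped.txt           # more than one line

# tr -d '\n' joins it into the single-line value a Secret field expects
SAMPLE=$(head -c 100 /dev/zero | base64 | tr -d '\n')
echo "${#SAMPLE}"                  # length of one unbroken string
```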

Point calico at the in-container locations of the certificate and key files

[root@k8s-master ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

4. Set the pod CIDR for calico

Use either of the two methods below.

[root@k8s-master ~]# vim calico-etcd.yaml
# the manifest is abridged; only the modified section is shown
# no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR	# uncomment this line
              value: "192.168.0.0/16"	# uncomment; keep it aligned with the entries below
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"

Or run this command instead:

[root@k8s-master ~]# sed -ri -e 's|            # - name: CALICO_IPV4POOL_CIDR|            - name: CALICO_IPV4POOL_CIDR|' -e 's|            #   value: "192.168.0.0/16"|              value: "192.168.0.0/16"|' calico-etcd.yaml
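Because YAML is indentation-sensitive, the sed above matches and rewrites exact leading whitespace, which makes it fragile. It can be dry-run against just the two target lines before touching the real manifest (the sample below mirrors the commented lines in the stock calico-etcd.yaml):

```shell
# The two commented manifest lines the sed is meant to uncomment
cat > /tmp/cidr.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Same substitution as above: strip the comment markers, preserving alignment
sed -ri -e 's|            # - name: CALICO_IPV4POOL_CIDR|            - name: CALICO_IPV4POOL_CIDR|' \
        -e 's|            #   value: "192.168.0.0/16"|              value: "192.168.0.0/16"|' /tmp/cidr.yaml

cat /tmp/cidr.yaml   # both lines should now be uncommented and properly indented
```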

5. Deploy the calico network

[root@k8s-master ~]# kubectl apply -f calico-etcd.yaml

6. Check node status

All nodes are now in the Ready state:

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
k8s-master.linux.com   Ready    control-plane   21m   v1.25.11
k8s-node01.linux.com   Ready    <none>          20m   v1.25.11
k8s-node02.linux.com   Ready    <none>          20m   v1.25.11

7. Check pod status

All of the calico pods are now up and running.

If any pod fails to reach the Running state, delete it with kubectl delete pod <name> -n kube-system and let the controller recreate it.

[root@k8s-master ~]# kubectl get pods -A

NAMESPACE     NAME                                           READY   STATUS    RESTARTS        AGE
kube-system   calico-kube-controllers-59d8695f7f-zv5m9       1/1     Running   6 (4m57s ago)   20m
kube-system   calico-node-cxgm9                              1/1     Running   5 (5m1s ago)    20m
kube-system   calico-node-jxjfc                              1/1     Running   5 (4m57s ago)   20m
kube-system   calico-node-ksx4x                              1/1     Running   5 (4m54s ago)   20m
kube-system   coredns-c676cc86f-26hd9                        1/1     Running   0               6m6s
kube-system   coredns-c676cc86f-lfzgk                        1/1     Running   0               6m17s
kube-system   etcd-k8s-master.linux.com                      1/1     Running   1 (5m1s ago)    138m
kube-system   kube-apiserver-k8s-master.linux.com            1/1     Running   1 (4m54s ago)   138m
kube-system   kube-controller-manager-k8s-master.linux.com   1/1     Running   1 (5m1s ago)    138m
kube-system   kube-proxy-lp9k8                               1/1     Running   1 (5m1s ago)    137m
kube-system   kube-proxy-n6sxk                               1/1     Running   1 (4m58s ago)   137m
kube-system   kube-proxy-r2cm2                               1/1     Running   1 (4m54s ago)   137m
kube-system   kube-scheduler-k8s-master.linux.com            1/1     Running   1 (5m1s ago)    138m

Congratulations! If you have followed along to this point, a basic Kubernetes cluster is now up and running.

To install the dashboard UI add-on, see this tutorial: https://www.wsjj.top/archives/138