Deploying a k8s node (v1.13.1)
Environment:
node OS: CentOS-7-x86_64-DVD-1908.iso
node IP address: 192.168.1.204
node hostname (make sure the node's hostname differs from the master's): k8s.node03
Goal: install the k8s node components on this machine and join it to an existing cluster.
Steps:
1. Install basic tools
yum install -y vim lrzsz docker
systemctl start docker
systemctl enable docker
2. Check the node's system time. If it differs from the master's, change the node's time to match the master (mine is essentially the same as the master's, so I skip this step). For how to change the time, see here.
[root@k8s ~]# date
Sun Oct 20 03:08:01 EDT 2019
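If the clocks do differ, one quick way to sync is against an NTP server. This is only a sketch, assuming the node has outbound internet access; pool.ntp.org is an example server, substitute one reachable from your network.

```shell
# One-shot clock sync against a public NTP pool (example server).
# Installing ntpdate is only needed once.
yum install -y ntpdate
ntpdate pool.ntp.org

# Alternatively, set the time manually to match the master's "date" output:
# date -s "2019-10-20 03:08:01"
```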
3. Turn off the firewall. On a public-facing host, configure the network security group instead and open only the required ports.
systemctl stop firewalld
systemctl disable firewalld
4. Disable SELinux
setenforce 0
Then edit the file (vim /etc/selinux/config) so that it reads:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
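The same edit can be made non-interactively with sed instead of vim. A minimal sketch; the file path is a parameter so the function can be tried on a copy first:

```shell
# Rewrite the SELINUX= line in a given config file to "disabled".
# The path is a parameter; on the node it would be /etc/selinux/config.
disable_selinux() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# On the node (takes full effect after a reboot; setenforce 0 above
# covers the current session):
#   disable_selinux /etc/selinux/config
```

The `^SELINUX=` anchor deliberately does not match the `SELINUXTYPE=` line, which is left untouched.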
5. Create the k8s sysctl configuration file /etc/sysctl.d/k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
6. Run the following commands to apply the changes.
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
7. Add the k8s yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[k8s]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
8. Install kubelet, kubectl, and kubeadm (when pinning the version to 1.13.1, install the three packages in this order)
yum install -y kubelet-1.13.1
yum install -y kubectl-1.13.1
yum install -y kubeadm-1.13.1
9. Verify the installed kubelet, kubectl, and kubeadm versions
[root@k8s ~]# yum list installed | grep kube
kubeadm.x86_64        1.13.1-0   @k8s
kubectl.x86_64        1.13.1-0   @k8s
kubelet.x86_64        1.13.1-0   @k8s
kubernetes-cni.x86_64 0.6.0-0    @k8s
10. Pull the required docker images. This step is needed because k8s.gcr.io is not reachable from some networks without a proxy; if the node's network can reach k8s.gcr.io directly, skip this step.
10.1 On my master node, these base images were obtained with the commands below. The same commands could simply be re-run on the node, but to save download time I package the images on the master and copy them over instead.
docker pull docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker pull docker.io/mirrorgooglecontainers/pause-amd64:3.1
docker tag docker.io/mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker pull docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker pull docker.io/coredns/coredns:1.2.6
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
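The pull/tag pairs above all follow one pattern, so they can be generated with a short loop. The sketch below just prints the same commands (pipe its output to sh to actually run them); it assumes the v1.13.1 image list used in this guide:

```shell
#!/bin/sh
# Print the docker pull/tag pairs for the v1.13.1 images used above.
# mirrorgooglecontainers publishes the *-amd64 images; we retag them
# under k8s.gcr.io, which is the name kubeadm expects.
MIRROR=docker.io/mirrorgooglecontainers
for img in kube-apiserver-amd64:v1.13.1 kube-controller-manager-amd64:v1.13.1 \
           kube-scheduler-amd64:v1.13.1 kube-proxy-amd64:v1.13.1 \
           pause-amd64:3.1 etcd-amd64:3.2.24; do
  # strip the -amd64 suffix from the image name and keep its tag
  target="k8s.gcr.io/${img%-amd64:*}:${img#*:}"
  echo "docker pull $MIRROR/$img"
  echo "docker tag $MIRROR/$img $target"
done
# coredns is published under its own repository, without the -amd64 suffix
echo "docker pull docker.io/coredns/coredns:1.2.6"
echo "docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
```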
10.2 [master node] Save all the required docker images into a tar archive (if you installed a version other than 1.13.1, check each tag against the output of docker images before saving; do not get a tag wrong), then copy the archive to the node:
docker save k8s.gcr.io/kube-proxy:v1.13.1 k8s.gcr.io/coredns:1.2.6 k8s.gcr.io/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1 k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/pause:3.1 -o k8s.1.13.1.tar
scp k8s.1.13.1.tar 192.168.1.204:~/
10.3 [node] Load the docker images from the archive
cd ~
docker load -i k8s.1.13.1.tar
11. Disable swap
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
echo 'Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"' >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
12. Join the cluster
kubeadm join 192.168.1.201:6443 --token 6xnc86.n3ftiy9cu9wuyl5a --discovery-token-ca-cert-hash sha256:6dbcec4d2e20e8936e8d74714a194fa838cc6544f98bde41cd766b69b7a4fc12
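Note that the bootstrap token embedded in the join command expires (by default after 24 hours). If the join fails with an authentication error, a fresh token and join command can be generated on the master; a sketch of the relevant commands:

```shell
# Run on the MASTER node: list the current bootstrap tokens, then create
# a new one and print the complete "kubeadm join ..." command to run on
# the new node.
kubeadm token list
kubeadm token create --print-join-command
```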
13. [master node] Verify that the node was deployed successfully; k8s.node03 now appears in the list:
[root@localhost ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE     VERSION
k8s.node01              Ready    <none>   6h27m   v1.13.1
k8s.node02              Ready    <none>   6h38m   v1.13.1
k8s.node03              Ready    <none>   61s     v1.13.1
localhost.localdomain   Ready    master   7h32m   v1.13.1