
Kubernetes 1.16.9 kubeadm Cluster Installation


Virtual Machine Environment

IP            Version            Role
10.211.55.41  CentOS 7.8.2003    k8s-master-1
10.211.55.42  CentOS 7.8.2003    k8s-node-1
10.211.55.43  CentOS 7.8.2003    k8s-node-2

Avoiding Unnecessary Trouble

  • Proxy

  The host machine has a proxy tool running, and the VMs' network traffic goes through the host's proxy; the k8s installation will not succeed without it. Offline installation packages are provided below as an alternative. Enable the proxy as follows:

# System-wide proxy
$ cat >> /etc/profile << EOF

export http_proxy=http://192.168.1.188:1087
export https_proxy=http://192.168.1.188:1087
EOF

$ source /etc/profile
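
  One caveat with a global proxy: requests to local and cluster addresses may also be routed through it and break node-to-node traffic. A minimal sketch of excluding those ranges (addresses taken from this setup; whether CIDR notation is honored in no_proxy varies by tool):

$ cat >> /etc/profile << EOF
export no_proxy=localhost,127.0.0.1,10.211.55.0/24,10.96.0.0/12,10.244.0.0/16
EOF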

# Proxy configuration for Docker image pulls
$ mkdir -p /lib/systemd/system/docker.service.d
$ cat >> /lib/systemd/system/docker.service.d/socks5-proxy.conf << EOF
[Service]
Environment="ALL_PROXY=socks5://192.168.1.188:1086"
EOF

# Reload systemd so the drop-in takes effect
$ systemctl daemon-reload && systemctl restart docker
  • Character set

  See this blog post.

  • Upgrade the kernel

  See this blog post.

  • Install common tools
$ yum -y install wget vim
  • All dependencies used in this post:

  Installation packages

  • Other operations
### Perform the same operations on all three machines
# Each machine's product UUID must differ; verify with:
$ cat /sys/class/dmi/id/product_uuid

# Update /etc/hosts
$ cat >> /etc/hosts << EOF

10.211.55.41 k8s-master-1
10.211.55.42 k8s-node-1
10.211.55.43 k8s-node-2
EOF

# Disable the firewall and SELinux
$ systemctl disable firewalld && systemctl stop firewalld && setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap: since 1.8, k8s requires system swap to be off, otherwise kubelet will not start.
# swappiness governs how the swap partition is used: swappiness = 0 means use physical memory to the fullest before touching swap; swappiness = 100 means swap aggressively, promptly moving data from memory into swap. The Linux default is 60.
$ swapoff -a
$ sed -i.bak '/swap/s/^/#/' /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

# Load the configured kernel modules at boot
# (quote EOF so $file is written literally rather than expanded now)
$ cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do [ -x $file ] && $file; done
EOF

# The flannel network requires the br_netfilter module
$ cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF

$ chmod 755 /etc/sysconfig/modules/br_netfilter.modules \
  && bash /etc/sysconfig/modules/br_netfilter.modules

# Use physical memory to the fullest; enable bridged traffic filtering and IP forwarding
$ cat <<EOF >  /etc/sysctl.d/k8s.conf
vm.swappiness                       = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF

$ sysctl --system

If you prefer not to disable swap, change the kubelet startup flag --fail-swap-on=false instead. Config file: /etc/sysconfig/kubelet, setting KUBELET_EXTRA_ARGS=--fail-swap-on=false; a sketch follows.
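  A minimal sketch of that configuration, using the file and variable named above:

$ cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF
# kubeadm init must then also be run with --ignore-preflight-errors=Swap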

  • Prerequisites for enabling ipvs in kube-proxy

  ipvs is now part of the mainline kernel, so the precondition for enabling ipvs in kube-proxy is loading the following kernel modules:

Module              Notes
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4   # renamed to nf_conntrack as of kernel 4.19.1
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

  Also make sure every node has the ipset package installed (yum -y install ipset). To view ipvs proxy rules conveniently, install the management tool ipvsadm as well (yum -y install ipvsadm). If these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration enables ipvs mode; a sketch of actually switching the mode follows.
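  This post leaves kube-proxy in its default mode; the following is only a sketch, assuming a standard kubeadm deployment, of how ipvs mode is typically switched on once the cluster is running:

$ kubectl edit configmap kube-proxy -n kube-system   # set: mode: "ipvs"
$ kubectl delete pod -n kube-system -l k8s-app=kube-proxy   # recreate the kube-proxy Pods
$ ipvsadm -Ln   # ipvs rules should now be listed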

Install Docker

  • Remove old versions
$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
  • Add the stable yum repository
$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
  • Install
# List the available versions
$ yum list docker-ce --showduplicates | sort -r
$ yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io
  • Enable and start
$ systemctl enable docker && systemctl start docker

Change the Docker cgroup driver to systemd

  The CRI installation guide points out that for Linux distributions using systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so we change Docker's cgroup driver to systemd on every node.

  • Configure

$ mkdir -p /etc/docker
# Optional: add "registry-mirrors": ["https://tpzm7vxj.mirror.aliyuncs.com"] to speed up image pulls in China.
# Note: daemon.json is strict JSON and must not contain comments.
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

$ systemctl daemon-reload && systemctl restart docker
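
  After the restart, you can confirm the new driver took effect:

$ docker info | grep -i cgroup
Cgroup Driver: systemd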

Deploy Kubernetes with kubeadm

Install kubeadm and kubelet

  • Add the official yum repository:
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


$ yum install -y kubelet-1.16.9 kubeadm-1.16.9 kubectl-1.16.9

  When the installation finishes, enable and start kubelet:

$ systemctl enable kubelet.service && systemctl start kubelet

kubelet will now restart every few seconds, crash-looping as it waits for instructions from kubeadm.

  Running kubelet --help shows that many of its flags are already DEPRECATED; upstream recommends starting kubelet with --config pointing to a configuration file, and putting what those flags used to configure into that file (see the reference).
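  A quick way to see the scale of this for yourself (a rough check; output format varies by version):

$ kubelet --help 2>&1 | grep -c DEPRECATED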

Initialize the Cluster with kubeadm

$ kubeadm init --kubernetes-version=v1.16.9 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.211.55.41
  1. --kubernetes-version: the k8s version to install
  2. --pod-network-cidr: the Pod network CIDR; flannel is used as the Pod network plugin, so this range must match flannel's configuration
  3. --apiserver-advertise-address: the address of the host the API server runs on

The init step pulls Docker images from Google's registry, so without a proxy it will fail. You can also load the images offline with docker load -i <package>; the download link was provided at the top of this post.
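  If connectivity allows, the required images can also be listed and pre-pulled before init; kubeadm itself suggests this in the preflight output below:

$ kubeadm config images list --kubernetes-version v1.16.9
$ kubeadm config images pull --kubernetes-version v1.16.9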

  Re-run the initialization:

Remember to disable the proxy before running the installation.

$ kubeadm init --kubernetes-version=v1.16.9 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.211.55.41
[init] Using Kubernetes version: v1.16.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.41]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.211.55.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.211.55.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.502757 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: avmkea.3i15b2xvcdnrrwvj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.41:6443 --token avmkea.3i15b2xvcdnrrwvj \
    --discovery-token-ca-cert-hash sha256:c7811dd5d821821d0fbdb90943a80a7176b8844ca6c24774833b369c258f8ee2

  Looking at the log above, you can see operations like these:

  • [kubelet-start] writes the kubelet configuration file /var/lib/kubelet/config.yaml
  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [bootstrap-token] generates the bootstrap token; record it, as it is used later when adding nodes with kubeadm join
  • Configure user access to the cluster via kubectl
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Join new nodes (other worker nodes join with this command; see the token note after it)
$ kubeadm join 10.211.55.41:6443 --token avmkea.3i15b2xvcdnrrwvj \
    --discovery-token-ca-cert-hash sha256:c7811dd5d821821d0fbdb90943a80a7176b8844ca6c24774833b369c258f8ee2
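
  The bootstrap token expires after 24 hours by default; if it has lapsed by the time a node joins, generate a fresh join command on the master:

$ kubeadm token create --print-join-command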

  Check the cluster status and confirm the components are all healthy:

$ kubectl get cs
NAME                 AGE
scheduler            <unknown>
controller-manager   <unknown>
etcd-0               <unknown>
$ kubectl get cs -o=go-template='{{printf "|NAME|STATUS|MESSAGE|\n"}}{{range .items}}{{$name := .metadata.name}}{{range .conditions}}{{printf "|%s|%s|%s|\n" $name .status .message}}{{end}}{{end}}'
|NAME|STATUS|MESSAGE|
|scheduler|True|ok|
|controller-manager|True|ok|
|etcd-0|True|{"health":"true"}|

  For why the output shows <unknown> here, see: https://segmentfault.com/a/1190000020912684.

If cluster initialization runs into problems, you can clean up with kubeadm reset; note the caveat below.
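  kubeadm reset does not clean up iptables or IPVS rules (its own output says so); flush them manually before re-initializing, roughly:

$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ ipvsadm -C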

Install the Pod Network

  Check the node status:

$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master-1   NotReady   master   3m50s   v1.16.9

  STATUS is NotReady because no network plugin has been installed yet. There are many network plugins to choose from for k8s; here we use flannel:

$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

✨ If a node has multiple network interfaces, refer to the issue: for now you need the --iface flag in kube-flannel.yml to specify the name of the cluster's internal NIC, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

......
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
......
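
  To find the interface name to pass to --iface, list the node's interfaces and pick the one carrying the internal address:

$ ip -o -4 addr show | grep 10.211.55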

  Check the node and Pod status again and make sure everything is Ready/Running:

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master-1   Ready    master   5m17s   v1.16.9   10.211.55.41   <none>        CentOS Linux 7 (Core)   4.4.236-1.el7.elrepo.x86_64   docker://18.9.9

$ kubectl get pods -o wide -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
coredns-5644d7b6d9-m2tn4               1/1     Running   0          5m10s   10.244.0.2     k8s-master-1   <none>           <none>
coredns-5644d7b6d9-n4kls               1/1     Running   0          5m10s   10.244.0.3     k8s-master-1   <none>           <none>
etcd-k8s-master-1                      1/1     Running   0          4m25s   10.211.55.41   k8s-master-1   <none>           <none>
kube-apiserver-k8s-master-1            1/1     Running   0          4m6s    10.211.55.41   k8s-master-1   <none>           <none>
kube-controller-manager-k8s-master-1   1/1     Running   0          4m31s   10.211.55.41   k8s-master-1   <none>           <none>
kube-flannel-ds-amd64-cl2gm            1/1     Running   0          53s     10.211.55.41   k8s-master-1   <none>           <none>
kube-proxy-6k77w                       1/1     Running   0          5m9s    10.211.55.41   k8s-master-1   <none>           <none>
kube-scheduler-k8s-master-1            1/1     Running   0          4m7s    10.211.55.41   k8s-master-1   <none>           <none>

Let the Master Node Take Workloads

  In a cluster initialized with kubeadm, Pods are not scheduled onto the master node for safety reasons; in other words, the master takes no workloads. That is because the master currently carries the node-role.kubernetes.io/master:NoSchedule taint:

$ kubectl describe node k8s-master-1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

  If you want the master to take workloads, simply remove that taint:

$ kubectl taint nodes k8s-master-1 node-role.kubernetes.io/master-
node/k8s-master-1 untainted
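
  To restore the default behavior later, re-apply the taint (standard kubectl taint syntax, with an empty value):

$ kubectl taint nodes k8s-master-1 node-role.kubernetes.io/master=:NoSchedule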

Test DNS

$ kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.

$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
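
  From the same pod, a further check worth trying (not part of the original run): hit the API server through its service name; getting a JSON response back proves DNS plus service routing work end to end:

$ curl -k -s https://kubernetes.default.svc.cluster.local/version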

Remove a Node from the Cluster

  Run on the master:

$ kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets
$ kubectl delete node k8s-node-1

  Run on k8s-node-1:

$ kubeadm reset
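
  Back on the master, confirm the node is gone:

$ kubectl get nodes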

Done! And once the installation is complete, remember to remove the proxy settings.