
Installing and Deploying k8s on CentOS

程序员文章站 2024-03-11 10:10:13

1. Official documentation:

  1. Kubernetes: Installing kubeadm

  2. Creating a cluster with kubeadm

2. Preparation

This is for learning and practice, so everything is kept simple!

  1. Work directly as root

  2. Disable the firewall
    #systemctl stop firewalld
    #systemctl disable firewalld

  3. Disable swap
    #swapoff -a
    #vim /etc/fstab

     #
     # /etc/fstab
     # Created by anaconda on Mon Jun 28 23:11:04 2021
     #
     # Accessible filesystems, by reference, are maintained under '/dev/disk'
     # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
     #
     /dev/mapper/cl-root     /                       xfs     defaults        0 0
     UUID=0b4346b6-cee1-4abb-932e-0c1cb4cda404 /boot                   xfs     defaults        0 0
     /dev/mapper/cl-home     /home                   xfs     defaults        0 0
     # wzh 20211026 for k8s
     # /dev/mapper/cl-swap     swap                    swap    defaults        0 0
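Commenting out the swap entry can also be scripted instead of edited in vim. A sketch, demonstrated on a temporary copy since editing /etc/fstab needs root (the demo file contents and the sed pattern are assumptions mirroring the fstab above):

```shell
# Demo stand-in mimicking the swap entry from the fstab above
demo=$(mktemp)
printf '/dev/mapper/cl-root / xfs defaults 0 0\n/dev/mapper/cl-swap swap swap defaults 0 0\n' > "$demo"

# Comment out any uncommented line that mounts swap (run against /etc/fstab for real)
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)$|# \1|' "$demo"
cat "$demo"

# After swapoff -a, verify swap is really off:
# swapon --show   # should print nothing
```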
    

3. Install Docker

Official documentation:
Install Docker Engine on CentOS

A brief summary of the steps:

  1. #yum install -y yum-utils

  2. #yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

  3. #yum install docker-ce docker-ce-cli containerd.io

  4. Start the service and enable it at boot
    systemctl enable docker && systemctl start docker

  5. Verify Docker
    docker run hello-world
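One extra step worth doing here: the kubeadm documentation recommends that Docker use the systemd cgroup driver so it matches the kubelet. A sketch of the usual /etc/docker/daemon.json, written to a temp path here so it can be tried without root (the log-driver/log-opts values are common defaults, not from this article):

```shell
# Real target: /etc/docker/daemon.json, followed by: systemctl restart docker
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
cat "$conf"
# docker info | grep -i cgroup   # should report: Cgroup Driver: systemd
```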

4. Install kubectl, kubelet, and kubeadm

Configure the yum repository

	cat <<EOF > /etc/yum.repos.d/kubernetes.repo
	[kubernetes]
	name=Kubernetes
	baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
	enabled=1
	gpgcheck=1
	repo_gpgcheck=1
	gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
	        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
	EOF

setenforce 0

yum install -y kubelet kubeadm kubectl

systemctl enable kubelet && systemctl start kubelet
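The official kubeadm install guide also has you make bridged traffic visible to iptables before initializing the cluster. A sketch, written to a temp file here since the real target (/etc/sysctl.d/k8s.conf) needs root:

```shell
# Real target: /etc/sysctl.d/k8s.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
cat "$conf"
# modprobe br_netfilter && sysctl --system   # apply as root
```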

5. Initialize the master node

  1. Prepare the init configuration file
    #mkdir working && cd working

    #kubeadm config print init-defaults > kubeadm-config.yaml

    #vim kubeadm-config.yaml
    Change advertiseAddress to 192.168.0.141

     apiVersion: kubeadm.k8s.io/v1beta3
     bootstrapTokens:
     - groups:
       - system:bootstrappers:kubeadm:default-node-token
       token: abcdef.0123456789abcdef
       ttl: 24h0m0s
       usages:
       - signing
       - authentication
     kind: InitConfiguration
     localAPIEndpoint:
       advertiseAddress: 192.168.0.141
       bindPort: 6443
     nodeRegistration:
       criSocket: /var/run/dockershim.sock
       imagePullPolicy: IfNotPresent
       name: centos7-141
       taints: null
     ---
     apiServer:
       timeoutForControlPlane: 4m0s
     apiVersion: kubeadm.k8s.io/v1beta3
     certificatesDir: /etc/kubernetes/pki
     clusterName: kubernetes
     controllerManager: {}
     dns: {}
     etcd:
       local:
         dataDir: /var/lib/etcd
     imageRepository: registry.aliyuncs.com/google_containers
     kind: ClusterConfiguration
     kubernetesVersion: 1.22.0
     networking:
       podSubnet: 10.244.0.0/16
       dnsDomain: cluster.local
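The vim edit above can also be done with sed when scripting the setup. A sketch, shown on a one-line demo file since kubeadm may not be installed where you try this (the demo address 1.2.3.4 is a placeholder):

```shell
# Demo stand-in for the generated kubeadm-config.yaml
cfg=$(mktemp)
printf '    advertiseAddress: 1.2.3.4\n' > "$cfg"

# Point the API server advertise address at this master (real file: kubeadm-config.yaml)
sed -i 's/advertiseAddress: .*/advertiseAddress: 192.168.0.141/' "$cfg"
cat "$cfg"
```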
    
  2. Pre-pull the required images
    #kubeadm config images pull --config=kubeadm-config.yaml

    This step is optional.
    Pre-pulling lets you catch failing images early and switch them to a mirror source in advance.

  3. Run the initialization
    Pipe through tee kubeadm-init.log so the token and init output can be reviewed later

    #kubeadm init --config=kubeadm-config.yaml | tee kubeadm-init.log

     [init] Using Kubernetes version: v1.22.0
     [preflight] Running pre-flight checks
     [preflight] Pulling images required for setting up a Kubernetes cluster
     [preflight] This might take a minute or two, depending on the speed of your internet connection
     [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
     ...
     Your Kubernetes control-plane has initialized successfully!
     
     To start using your cluster, you need to run the following as a regular user:
     
       mkdir -p $HOME/.kube
       sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
       sudo chown $(id -u):$(id -g) $HOME/.kube/config
     
     Alternatively, if you are the root user, you can run:
     
       export KUBECONFIG=/etc/kubernetes/admin.conf
     
     You should now deploy a pod network to the cluster.
     Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
       https://kubernetes.io/docs/concepts/cluster-administration/addons/
     
     Then you can join any number of worker nodes by running the following on each as root:
     
     kubeadm join 192.168.0.141:6443 --token abcdef.0123456789abcdef \
     	--discovery-token-ca-cert-hash sha256:57df376d612009f381bd3f3835464578666536080c6f779cffcf8bc90af10930 
    

    Following the prompt, a simple setup as root:

     # echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
    

    Apply it:
    #source /etc/profile

  4. Confirm that all components report Healthy
    #kubectl get cs

     Warning: v1 ComponentStatus is deprecated in v1.19+
     NAME                 STATUS      MESSAGE                                                                                       ERROR
     scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
     controller-manager   Healthy     ok                                                                                            
     etcd-0               Healthy     {"health":"true","reason":""}     
    

    In my case the scheduler was always Unhealthy, so it had to be dealt with manually:

    #vim /etc/kubernetes/manifests/kube-scheduler.yaml
    #vim /etc/kubernetes/manifests/kube-controller-manager.yaml

    Delete or comment out:

     #- --port=0
    

    Restart kubelet for the change to take effect
    #systemctl restart kubelet

    After a short wait, check again:
    #kubectl get cs

     Warning: v1 ComponentStatus is deprecated in v1.19+
     NAME                 STATUS    MESSAGE                         ERROR
     scheduler            Healthy   ok                              
     etcd-0               Healthy   {"health":"true","reason":""}   
     controller-manager   Healthy   ok 
    

    If anything goes wrong, you can run # kubeadm reset at any time and start over
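Commenting out --port=0 can likewise be scripted. A sketch, demonstrated on a copy since the real files live under /etc/kubernetes/manifests/ (kubelet picks up the manifest change after the restart):

```shell
# Demo stand-in for two lines of kube-scheduler.yaml
m=$(mktemp)
printf '    - --port=0\n    - --leader-elect=true\n' > "$m"

# Comment out the deprecated insecure-port flag, preserving indentation
sed -i 's|^\(\s*\)- --port=0|\1# - --port=0|' "$m"
cat "$m"
```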

  5. Confirm the configmap status
    #kubectl get -n kube-system configmap

     NAME                                 DATA   AGE
     coredns                              1      9m54s
     extension-apiserver-authentication   6      10m
     kube-flannel-cfg                     2      43s
     kube-proxy                           2      9m54s
     kube-root-ca.crt                     1      9m43s
     kubeadm-config                       1      9m56s
     kubelet-config-1.22                  1      9m56s
    

6. Install the pod network on the master node

  1. Fetch kube-flannel.yml
    #curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    Change every quay.io in the yml file to quay.mirrors.ustc.edu.cn:

    sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' kube-flannel.yml

    or

    sed -i 's/quay.io/quay-mirror.qiniu.com/g' kube-flannel.yml

  2. Deploy the flannel plugin pods
    #kubectl apply -f kube-flannel.yml

    Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created

    1. Confirm the configuration is correct
      #kubectl get -n kube-system configmap

      NAME                                 DATA   AGE
      coredns                              1      9m54s
      extension-apiserver-authentication   6      10m
      kube-flannel-cfg                     2      43s
      kube-proxy                           2      9m54s
      kube-root-ca.crt                     1      9m43s
      kubeadm-config                       1      9m56s
      kubelet-config-1.22                  1      9m56s

    2. Confirm that all Pods are in the Running state
      #kubectl get pod -n kube-system

       NAME                                  READY   STATUS    RESTARTS      AGE
       coredns-7f6cbbb7b8-wb7xf              1/1     Running   0             12m
       coredns-7f6cbbb7b8-ww5z4              1/1     Running   0             12m
       etcd-centos7-141                      1/1     Running   7             12m
       kube-apiserver-centos7-141            1/1     Running   1             12m
       kube-controller-manager-centos7-141   1/1     Running   1 (12m ago)   12m
       kube-flannel-ds-bvvq6                 1/1     Running   0             3m31s
       kube-proxy-8f8bq                      1/1     Running   0             12m
       kube-scheduler-centos7-141            1/1     Running   3 (12m ago)   12m
      

7. Join the worker nodes

  1. Install Docker, kubectl, kubelet, and kubeadm on every worker node, just as on the master

    If the master is re-initialized, run kubeadm reset on the worker before joining again

  2. Join the cluster using the command from the master's initialization output

kubeadm join 192.168.0.141:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:57df376d612009f381bd3f3835464578666536080c6f779cffcf8bc90af10930

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If you did not record the token, on the master # cat kubeadm-init.log will show it,
or:
#kubeadm token list

TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
abcdef.0123456789abcdef   23h         2021-11-10T08:01:53Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

If more than 24 hours pass before joining, the token expires and a new one must be created on the master:

#kubeadm token create

8mfiss.yvbnl8m319ysiflh
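If the ca-cert hash from the init log is also lost, it can be recomputed from the cluster CA; this pipeline comes from the kubeadm docs. On the master the input is /etc/kubernetes/pki/ca.crt; a throwaway certificate is generated below so the sketch runs anywhere:

```shell
# Generate a throwaway CA cert just to demonstrate the pipeline
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -days 1 -subj '/CN=demo-ca' 2>/dev/null

# On a real master, substitute /etc/kubernetes/pki/ca.crt for the demo cert
openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
```

Alternatively, kubeadm token create --print-join-command prints a complete, ready-to-use join command in one step.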
  3. Verify node and Pod status: nodes Ready, all Pods Running
    #kubectl get nodes

     NAME          STATUS   ROLES                  AGE     VERSION
     centos7-141   Ready    control-plane,master   30m     v1.22.2
     centos7-143   Ready    <none>                 7m48s   v1.22.2
     centos7-144   Ready    <none>                 2m22s   v1.22.2
    

    #kubectl get pods --all-namespaces

     NAMESPACE     NAME                                  READY   STATUS    RESTARTS      AGE
     kube-system   coredns-7f6cbbb7b8-wb7xf              1/1     Running   0             28m
     kube-system   coredns-7f6cbbb7b8-ww5z4              1/1     Running   0             28m
     kube-system   etcd-centos7-141                      1/1     Running   7             29m
     kube-system   kube-apiserver-centos7-141            1/1     Running   1             29m
     kube-system   kube-controller-manager-centos7-141   1/1     Running   1 (28m ago)   28m
     kube-system   kube-flannel-ds-b5sg8                 1/1     Running   0             47s
     kube-system   kube-flannel-ds-bl9vr                 1/1     Running   0             6m13s
     kube-system   kube-flannel-ds-bvvq6                 1/1     Running   0             19m
     kube-system   kube-proxy-8f8bq                      1/1     Running   0             28m
     kube-system   kube-proxy-j679n                      1/1     Running   0             47s
     kube-system   kube-proxy-qczzf                      1/1     Running   0             6m13s
     kube-system   kube-scheduler-centos7-141            1/1     Running   3 (28m ago)   28m
    

8. Deploy the dashboard

Dashboard official repository

Covered in a separate post:
Configuring the k8s dashboard

If the kubernetes-dashboard pod is deleted and recreated, the user and role must be recreated as well.