
Setting up a k8s cluster with kubeadm


Docker and k8s are everywhere right now, so naturally we want to try them out, and that means building a k8s cluster to get a real feel for what k8s can do. Personally I'd suggest starting with a single-node minikube first, and moving on to a multi-server k8s cluster once you are comfortable. Below is a record of how I set up a multi-server k8s cluster, for your reference.

1. Environment

      3 VMs, for example 192.168.2.100 (master), 192.168.2.101 (node1), 192.168.2.102 (node2), all running CentOS 7

[root@centos7-01 ~]# uname -a
Linux centos7-01 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

2. Preparation

 Docker and the k8s.gcr.io images

2.1 Install Docker

yum update
yum install -y docker

After the installation completes:

systemctl enable docker && systemctl start docker

Check the Docker version and whether it started successfully:

docker version
systemctl status docker

With Docker in place, the next thing to solve is the image problem.

2.2 One of the difficulties in building a k8s cluster is that the k8s.gcr.io images are blocked; here is the set of images I downloaded:

链接:https://pan.baidu.com/s/1-03BTghvOKWNGzrwN2A-JA 
提取码:lecc

After downloading, upload the archive to the server and unpack it.

Load the images with Docker:

docker load -i k8s.images.tar

Tag the images with the correct names:

chmod 777 docker-load.sh
bash docker-load.sh
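docker-load.sh comes from the downloaded archive and is not reproduced here. As a rough sketch of what such a retagging script usually does (the mirror name and image list below are assumptions, not the actual script): it pulls or loads the images under a reachable repository name and retags them to the k8s.gcr.io names that kubeadm expects.

#!/usr/bin/env bash
# Hypothetical sketch of a retagging script like docker-load.sh.
# MIRROR and the image list are assumptions; adjust to whatever the archive actually contains.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
IMAGES="kube-apiserver:v1.14.1 kube-controller-manager:v1.14.1 kube-scheduler:v1.14.1 \
        kube-proxy:v1.14.1 pause:3.1 etcd:3.3.10 coredns:1.3.1"

for img in $IMAGES; do
    docker pull "$MIRROR/$img"            # skip this line if the image was already loaded from the tar
    docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
    docker rmi  "$MIRROR/$img"            # drop the mirror tag, keeping only the k8s.gcr.io name
done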

Check that the images are now present:

[root@centos7-01 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.1             20a2d7035165        3 days ago          82.1MB
k8s.gcr.io/kube-apiserver            v1.14.1             cfaa4ad74c37        3 days ago          210MB
k8s.gcr.io/kube-controller-manager   v1.14.1             efb3887b411d        3 days ago          158MB
k8s.gcr.io/kube-scheduler            v1.14.1             8931473d5bdb        3 days ago          81.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        2 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        4 months ago        258MB
quay.io/coreos/flannel               v0.10.0-amd64       0abc30f2e842        6 months ago        44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        15 months ago       742kB
[root@centos7-01 ~]# 

2.3 With those two steps done, configure the relevant system environment

2.3.1 Disable the firewall

systemctl disable firewalld && systemctl stop firewalld

2.3.2 Disable SELinux

vim /etc/selinux/config

Change it to:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

and also run setenforce 0 (this switches SELinux to permissive for the current boot; the disabled setting takes full effect after a reboot).
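If you prefer not to open an editor, a one-line equivalent (a sketch, assuming the file still has the default SELINUX=enforcing line):

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
setenforce 0   # permissive for the current boot; disabled takes full effect after a reboot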

2.3.3 Disable swap (comment out the swap line in /etc/fstab)

swapoff -a


vim /etc/fstab
# /etc/fstab
# Created by anaconda on Fri Mar  8 11:26:03 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=b7107ef6-ed73-492a-b9a7-8bdb0d6201a4 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
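The same edit can be scripted; a hedged one-liner that comments out every fstab line mentioning swap:

sed -ri 's/.*swap.*/#&/' /etc/fstab   # prefixes the matched line with # (& is the whole match)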

3. With the environment ready, install kubectl, kubeadm and kubelet

The official repository is unreachable, so use the Aliyun mirror instead. Run the following command to add the kubernetes.repo repository:

 
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes 
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 

Then install with yum:

yum install -y kubelet kubeadm kubectl
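The original doesn't show it, but the standard kubeadm setup also enables kubelet under systemd (it will keep restarting until kubeadm init or kubeadm join gives it a configuration, which is expected):

systemctl enable kubelet && systemctl start kubelet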

After installation, run the following (this fixes the "The connection to the server localhost:8080 was refused - did you specify the right host or port?" error you may hit later, especially when running kubectl commands on a node; note that /etc/kubernetes/admin.conf is only generated on the master by kubeadm init):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the flannel network add-on. The images loaded in the previous step already include flannel, so there is nothing extra to download.

The steps are as follows:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables


mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
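The two echo commands above only change the running kernel; to keep the bridge settings across reboots, a sketch using sysctl.d (the file name k8s.conf is arbitrary):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload all sysctl configuration files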

Then initialize the control plane on the master server:

kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.14.1
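A successful init prints a kubeadm join ... command for the worker nodes; it also creates /etc/kubernetes/admin.conf, so this is the moment to run the kubectl config copy shown in section 3 on the master. A quick sanity check (a sketch):

kubectl cluster-info               # should print the API server at https://192.168.2.100:6443
kubectl get pods -n kube-system    # control-plane pods; coredns stays Pending until flannel is applied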

 

(Joining the node servers to the k8s cluster:

On the master server, run kubeadm token create --print-join-command

Copy the output and run it on each node, for example:

kubeadm join 192.168.2.100:6443 --token 2oa1ge.3zz9m5kvs02n0goy     --discovery-token-ca-cert-hash sha256:8a63f7379979198f93a9aa554b68d6d2dc0147a95196e010c1a1ce1e6ba10641

This step is performed on the nodes; make sure the master node is up and running first.)

 

On the master node, start flannel:

kubectl create -f kube-flannel.yml
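kube-flannel.yml is assumed to come from the same downloaded archive as the images. To confirm that the flannel DaemonSet pods come up on every node (a sketch, assuming the app=flannel label used by the upstream manifest):

kubectl get pods -n kube-system -l app=flannel -o wide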

Once the master is up, repeat the same steps on each node.

Check the nodes:

kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
centos7-01   Ready    master   25h     v1.14.1
centos7-02   Ready    <none>   3h25m   v1.14.1
centos7-03   Ready    <none>   81m     v1.14.1
centos7-04   Ready    <none>   8h      v1.14.1

If the output looks like the above, the nodes are all up.
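The nginx-deploy pods in the listing below come from a test Deployment that isn't shown in this article; a minimal sketch of how one could be created (the name nginx-deploy matches the output, the nginx image and replica count are assumptions):

kubectl create deployment nginx-deploy --image=nginx
kubectl scale deployment nginx-deploy --replicas=3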

Look at the pod info:

[root@centos7-01 bridge]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
default       nginx-deploy-5f5d9f64f6-9gl7w        1/1     Running   0          6h24m   10.244.2.2      centos7-04   <none>           <none>
default       nginx-deploy-5f5d9f64f6-hl56z        1/1     Running   0          61m     10.244.4.5      centos7-02   <none>           <none>
default       nginx-deploy-5f5d9f64f6-ppm7l        1/1     Running   0          61m     10.244.5.2      centos7-03   <none>           <none>
kube-system   coredns-fb8b8dccf-dg9lv              1/1     Running   0          25h     172.100.1.4     centos7-01   <none>           <none>
kube-system   coredns-fb8b8dccf-rlqsw              1/1     Running   0          25h     172.100.1.5     centos7-01   <none>           <none>
kube-system   etcd-centos7-01                      1/1     Running   0          25h     192.168.2.121   centos7-01   <none>           <none>
kube-system   kube-apiserver-centos7-01            1/1     Running   0          25h     192.168.2.121   centos7-01   <none>           <none>
kube-system   kube-controller-manager-centos7-01   1/1     Running   0          25h     192.168.2.121   centos7-01   <none>           <none>
kube-system   kube-flannel-ds-5ldmv                1/1     Running   0          65m     192.168.2.111   centos7-03   <none>           <none>
kube-system   kube-flannel-ds-cnpwj                1/1     Running   0          3h19m   192.168.2.106   centos7-02   <none>           <none>
kube-system   kube-flannel-ds-tbbzg                1/1     Running   2          7h59m   192.168.2.113   centos7-04   <none>           <none>
kube-system   kube-flannel-ds-vqmbk                1/1     Running   0          25h     192.168.2.121   centos7-01   <none>           <none>
kube-system   kube-proxy-885qw                     1/1     Running   0          25h     192.168.2.121   centos7-01   <none>           <none>
kube-system   kube-proxy-jvbkv                     1/1     Running   3          8h      192.168.2.113   centos7-04   <none>           <none>
kube-system   kube-proxy-mkppj                     1/1     Running   0          3h26m   192.168.2.106   centos7-02   <none>           <none>
kube-system   kube-proxy-x8bqp                     1/1     Running   0          82m     192.168.2.111   centos7-03   <none>           <none>
kube-system   kube-scheduler-centos7-01            1/1     Running   0          25h     192.168.2.121   centos7-01   <none>           <none>

If the output looks like this, everything should be fine.