
Offline deployment of a Kubernetes v1.13.1 cluster on CentOS 7.5


1. Offline deployment of a Kubernetes cluster on CentOS 7.5

I have recently been working on Docker cluster management, and for ease of Kubernetes operations I started with an offline deployment of Kubernetes. In practice there was still one step that required Internet access, but it is only one step and does not affect the overall process. I am writing this down for future reference; comments and corrections are welcome.
This article references a GitHub project: kubeadm-install-offline

  • a. Prepare two machines with the following basic information:
IP              HOST      ROLE                      OS          MEMORY
192.168.1.250   k8master  Kubernetes master node    CentOS 7.5  3 GB
192.168.1.12    node2     Kubernetes worker node    CentOS 7.6  2 GB

This article assumes that Docker CE is already installed. The Docker CE version used:

[root@k8master ~]# docker -v
Docker version 18.09.0, build 4d60db4

On both the master node and the worker node:

Pull the images (on a host that has Internet access):

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1
docker pull weaveworks/weave-npc:1.8.2
docker pull weaveworks/weave-kube:1.8.2
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.5.0
docker pull mirrorgooglecontainers/kube-addon-manager:v6.1
docker pull mirrorgooglecontainers/etcd-amd64:3.0.14-kubeadm
docker pull mirrorgooglecontainers/kubedns-amd64:1.9
docker pull mirrorgooglecontainers/dnsmasq-metrics-amd64:1.0
docker pull mirrorgooglecontainers/kubedns-amd64:1.8
docker pull mirrorgooglecontainers/kube-dnsmasq-amd64:1.4
docker pull mirrorgooglecontainers/kube-discovery-amd64:1.0
docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker pull mirrorgooglecontainers/exechealthz-amd64:1.2
docker pull mirrorgooglecontainers/pause-amd64:3.0   
docker pull coredns/coredns:1.2.6

Retag the coredns image:

docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6

Export the images to tar archives:

docker save mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1 > kube-apiserver-amd64_v1.13.1.tar
docker save mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1 > kube-controller-manager-amd64_v1.13.1.tar
docker save mirrorgooglecontainers/kube-proxy-amd64:v1.13.1 > kube-proxy-amd64_v1.13.1.tar
docker save mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1 > kube-scheduler-amd64_v1.13.1.tar
docker save weaveworks/weave-npc:1.8.2 > weave-npc_1.8.2.tar

docker save weaveworks/weave-kube:1.8.2 > weave-kube_1.8.2.tar
docker save mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.5.0 > kubernetes-dashboard-amd64_v1.5.0.tar
docker save mirrorgooglecontainers/kube-addon-manager:v6.1 > kube-addon-manager_v6.1.tar
docker save mirrorgooglecontainers/etcd-amd64:3.0.14-kubeadm > etcd-amd64_3.0.14-kubeadm.tar

docker save mirrorgooglecontainers/kubedns-amd64:1.9 > kubedns-amd64_1.9.tar
docker save mirrorgooglecontainers/dnsmasq-metrics-amd64:1.0 > dnsmasq-metrics-amd64_1.0.tar
docker save mirrorgooglecontainers/kubedns-amd64:1.8 > kubedns-amd64_1.8.tar
docker save mirrorgooglecontainers/kube-dnsmasq-amd64:1.4 > kube-dnsmasq-amd64_1.4.tar

docker save mirrorgooglecontainers/kube-discovery-amd64:1.0 > kube-discovery-amd64_1.0.tar
docker save quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 > flannel-git_v0.6.1-28-g5dde68d-amd64.tar
docker save mirrorgooglecontainers/exechealthz-amd64:1.2 > exechealthz-amd64_1.2.tar
docker save mirrorgooglecontainers/pause-amd64:3.0 > pause-amd64_3.0.tar
docker save mirrorgooglecontainers/coredns:1.2.6 > coredns_1.2.6.tar
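Pulling and saving each image by hand is tedious; a minimal bash sketch that loops over the same image list (abridged here, extend it with the remaining images above; the retagged coredns image can be handled separately as shown earlier, and the helper file naming is an assumption of this sketch):

#!/bin/bash
# Hypothetical helper: pull every image in the list and save it to a tar archive.
set -e

IMAGES="
mirrorgooglecontainers/kube-apiserver-amd64:v1.13.1
mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.1
mirrorgooglecontainers/kube-proxy-amd64:v1.13.1
mirrorgooglecontainers/kube-scheduler-amd64:v1.13.1
weaveworks/weave-npc:1.8.2
weaveworks/weave-kube:1.8.2
mirrorgooglecontainers/etcd-amd64:3.0.14-kubeadm
mirrorgooglecontainers/pause-amd64:3.0
"

for img in $IMAGES; do
    docker pull "$img"
    # Derive a file name such as kube-apiserver-amd64_v1.13.1.tar from the image name.
    tarfile="$(basename "${img%%:*}")_${img##*:}.tar"
    docker save "$img" > "$tarfile"
done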

Copy the Docker image tar archives to the remote (offline) servers:

scp <folder_with_images>/*.tar <user>@<server>:<path>/<to>/<remote>/<folder>
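For example, using the hosts from the table above (the destination directory /root/k8s-images and use of the root account are assumptions):

# Create the target directory on each node, then copy all image archives.
ssh root@192.168.1.250 "mkdir -p /root/k8s-images"
scp ./*.tar root@192.168.1.250:/root/k8s-images/
ssh root@192.168.1.12 "mkdir -p /root/k8s-images"
scp ./*.tar root@192.168.1.12:/root/k8s-images/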

Make sure Docker is running:

systemctl status docker
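If Docker is not running yet, it can be enabled and started with the usual systemd commands:

# Enable Docker at boot and start it immediately.
systemctl enable docker
systemctl start docker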

Load the Docker images on the remote servers:

docker load < kube-apiserver-amd64_v1.13.1.tar
docker load < kube-controller-manager-amd64_v1.13.1.tar
docker load < kube-proxy-amd64_v1.13.1.tar
docker load < kube-scheduler-amd64_v1.13.1.tar
docker load < weave-npc_1.8.2.tar
docker load < weave-kube_1.8.2.tar
docker load < kubernetes-dashboard-amd64_v1.5.0.tar
docker load < kube-addon-manager_v6.1.tar
docker load < etcd-amd64_3.0.14-kubeadm.tar
docker load < kubedns-amd64_1.9.tar
docker load < dnsmasq-metrics-amd64_1.0.tar
docker load < kubedns-amd64_1.8.tar
docker load < kube-dnsmasq-amd64_1.4.tar
docker load < kube-discovery-amd64_1.0.tar
docker load < flannel-git_v0.6.1-28-g5dde68d-amd64.tar
docker load < exechealthz-amd64_1.2.tar
docker load < pause-amd64_3.0.tar
docker load < coredns_1.2.6.tar
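Equivalently, a short loop loads every archive in the current directory (assuming all tar files were copied there):

# Load every saved image archive in the current directory.
for f in *.tar; do
    docker load < "$f"
done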

On CentOS 7, enable the required sysctl settings.

Edit /etc/sysctl.conf:

vi /etc/sysctl.conf

Change

net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1

and add the following lines:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

Reload the sysctl configuration:

sysctl -p
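An alternative is to keep the Kubernetes-related settings in a dedicated drop-in file; a minimal sketch (the file name /etc/sysctl.d/k8s.conf is an assumption, and the bridge settings require the br_netfilter kernel module):

# Load the bridge netfilter module and persist the settings in a drop-in file.
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system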

Configure the Aliyun Kubernetes mirror repository (on the Internet-connected host):

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Download the required RPM packages:

yum -y install --downloadonly --downloaddir=./ kubelet kubeadm kubectl kubernetes-cni
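To make sure the downloaded RPMs match the target cluster version, the package versions can be pinned; a hedged variant (the exact version strings available in the mirror may differ):

# Download a specific kubelet/kubeadm/kubectl version plus dependencies, without installing.
yum -y install --downloadonly --downloaddir=./ \
    kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 kubernetes-cni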

Copy the Kubernetes RPMs to the servers:

scp <folder_with_rpms>/*.rpm <user>@<server>:<path>/<to>/<remote>/<folder>

Install the Kubernetes packages:

yum install -y *.rpm
systemctl enable kubelet && systemctl start kubelet

Disable swap:

swapoff -a

At the same time, edit /etc/fstab:

vi /etc/fstab

Comment out the line that mounts swap:

#UUID=7dac6afd-57ad-432c-8736-5a3ba67340ad swap swap defaults 0 0
Use free -m to check swap usage.
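The /etc/fstab edit can also be done non-interactively; a minimal sketch (the sed expression simply comments out any uncommented line whose mount type is swap):

# Turn swap off now and comment out swap entries so it stays off after a reboot.
swapoff -a
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab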

On the master node

Kubeadm installation

Follow the steps at https://kubernetes.io/docs/getting-started-guides/kubeadm/ (starting from "(2/4) Initializing your master").

Initialize the cluster. Since the default Google registry is not reachable from here, mirrorgooglecontainers is used as the image repository:

kubeadm init --image-repository mirrorgooglecontainers

...
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "064158.548b9ddb1d3fad3e"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 61.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 6.020980 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=<token> <master-ip>
# Copy the line above and save it for later use
...
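(The sample output above is taken from the referenced kubeadm-install-offline project and was produced by a much older kubeadm; the v1.13.1 output looks different, but it also ends with a kubeadm join command that should be saved.) In an offline environment kubeadm init may additionally try to look up the latest stable release online, so pinning the version is useful; a hedged variant of the init command:

# Pin the Kubernetes version so kubeadm does not try to resolve "stable" over the network.
kubeadm init --image-repository mirrorgooglecontainers --kubernetes-version v1.13.1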

Network configuration

Download the weave addon manifest (Internet access required):

wget https://git.io/weave-kube
mv weave-kube weave-kube.yml

Copy the weave manifest to the server:

scp <folder_with_weave_yml>/weave-kube.yml <user>@<server>:<path>/<to>/<remote>/<folder>

Apply the addon:

kubectl apply -f <folder_with_weave_yml>/weave-kube.yml

If you get the error: The connection to the server localhost:8080 was refused - did you specify the right host or port?

configure kubectl as follows (these commands were run as root here); after that the error no longer appears:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
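When kubectl is used only as root, pointing KUBECONFIG at the admin config achieves the same thing (this is the alternative kubeadm itself suggests for root):

# Alternative for the root user: use the admin kubeconfig directly.
export KUBECONFIG=/etc/kubernetes/admin.conf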
Check the installation:

If you see output like the following, the installation succeeded:
[root@k8master plugins-master]# kubectl get pods --namespace=kube-system
NAME                                              READY   STATUS    RESTARTS   AGE
coredns-7ff764b6d6-6gx6j                          0/1     Running   0          4d
coredns-7ff764b6d6-t5ql9                          0/1     Running   0          4d
k8master                                          1/1     Running   0          4d
kube-apiserver-ecs-e450-0011.novalocal            1/1     Running   0          4d
kube-controller-manager-ecs-e450-0011.novalocal   1/1     Running   0          4d
kube-proxy-x9grw                                  1/1     Running   0          4d
kube-scheduler-ecs-e450-0011.novalocal            1/1     Running   0          4d

Run the command:
systemctl status kubelet -l
If the output contains messages like the following:

Jan 09 10:04:51 ecs-e450-0011.novalocal kubelet[20587]: W0109 10:04:51.347324   20587 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 09 10:04:51 ecs-e450-0011.novalocal kubelet[20587]: E0109 10:04:51.347443   20587 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

then run the following commands to fix it (this step requires Internet access):

# re-deploy the weave network
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
# then restart docker and kubelet
systemctl restart docker && systemctl restart kubelet
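To keep even this step offline, the manifest can be fetched on the Internet-connected host and copied over, just like the images and RPMs; a sketch (the file name weave-net.yml is an assumption, and it presumes the connected host has a kubectl of the same v1.13.1 version, since the weave endpoint picks the manifest from the base64-encoded kubectl version output):

# On the Internet-connected host: download the weave manifest for this Kubernetes version.
kubever=$(kubectl version --client | base64 | tr -d '\n')
curl -L "https://cloud.weave.works/k8s/net?k8s-version=$kubever" -o weave-net.yml

# Copy weave-net.yml to the master, then apply it there and restart the services.
kubectl apply -f weave-net.yml
systemctl restart docker && systemctl restart kubelet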

If you see the following, the installation succeeded:

Checking the status confirms that everything is working:

[root@k8master yum.repos.d]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
k8master   Ready    master   25h   v1.13.1

[root@k8master yum.repos.d]# kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-dljx2                1/1       Running   0          54m
coredns-78fcdf6894-f97qd                1/1       Running   0          54m
etcd-docker-master                      1/1       Running   0          6m
kube-apiserver-docker-master            1/1       Running   0          6m
kube-controller-manager-docker-master   1/1       Running   0          6m
kube-flannel-ds-amd64-6r56g             1/1       Running   0          6m   # running normally
kube-proxy-ht2dq                        1/1       Running   0          54m
kube-scheduler-docker-master            1/1       Running   0          6m

kubectl get ns   # list namespaces
[root@k8master ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    56m
kube-public   Active    56m
kube-system   Active    56m   # system-level pods live here


On the worker node, run the following command:

kubeadm join --token=<token> <master-ip>   # the join command saved from the kubeadm init output
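In kubeadm v1.13 the join command printed by kubeadm init normally also includes the API server endpoint (port 6443 by default) and a CA certificate hash; a hedged example with placeholders (only the master IP comes from the table above, the token and hash must be copied from your own init output):

# <token> and <hash> come from the kubeadm init output on the master.
kubeadm join 192.168.1.250:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Back on the master, kubectl get nodes should now also list node2.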