
Kubernetes 1.17 Cluster Deployment Summary

程序员文章站 2022-05-07 18:56:09

The cluster (one master, multiple worker nodes) was built with the Ansible scripts provided under Easypack. This release hit no notable problems during deployment, and it also resolved the issue from 1.16 where kubectl get cs displayed an unknown status.

Deployment Method

For detailed step-by-step instructions, see: https://blog.csdn.net/liumiaocn/article/details/103725251

Cluster Deployment

Cluster Overview

Host     IP               OS          Master  kube-apiserver  kube-scheduler  kube-controller-manager  ETCD       Node  Flannel    Docker     kubelet    kube-proxy
host131  192.168.163.131  CentOS 7.6  Yes     Installed       Installed       Installed                Installed  Yes   Installed  Installed  Installed  Installed
host132  192.168.163.132  CentOS 7.6  -       -               -               -                        -          Yes   Installed  Installed  Installed  Installed
host133  192.168.163.133  CentOS 7.6  -       -               -               -                        -          Yes   Installed  Installed  Installed  Installed
host134  192.168.163.134  CentOS 7.6  -       -               -               -                        -          Yes   Installed  Installed  Installed  Installed

Preparing the hosts file

[aaa@qq.com ansible]# cat hosts.multi-nodes 
# kubernetes : master
[master-nodes]
host131 var_master_host=192.168.163.131 var_master_node_flag=True

# kubernetes : node
[agent-nodes]
host131 var_node_host=192.168.163.131 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=True
host132 var_node_host=192.168.163.132 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=False
host133 var_node_host=192.168.163.133 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=False
host134 var_node_host=192.168.163.134 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=False

# kubernetes : etcd
[etcd]
host131 var_etcd_host=192.168.163.131
[aaa@qq.com ansible]#
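Before running the playbook, the inventory can be sanity-checked locally. As an illustrative helper (not part of Easypack), a short awk script can pull the var_node_host IPs out of the [agent-nodes] section; the inventory content is inlined in a heredoc here so the snippet is self-contained, but in practice you would point awk at hosts.multi-nodes directly:

```shell
# Write a copy of the inventory (abridged) so the snippet is self-contained.
cat > /tmp/hosts.multi-nodes <<'EOF'
[master-nodes]
host131 var_master_host=192.168.163.131 var_master_node_flag=True

[agent-nodes]
host131 var_node_host=192.168.163.131 var_etcd_host=192.168.163.131
host132 var_node_host=192.168.163.132 var_etcd_host=192.168.163.131
host133 var_node_host=192.168.163.133 var_etcd_host=192.168.163.131
host134 var_node_host=192.168.163.134 var_etcd_host=192.168.163.131
EOF

# Extract var_node_host values from the [agent-nodes] section only:
# flag f turns on at the section header and off at the next section.
node_ips=$(awk '/^\[agent-nodes\]/ {f=1; next} /^\[/ {f=0}
                f { for (i=1; i<=NF; i++)
                      if ($i ~ /^var_node_host=/) {
                        sub("var_node_host=", "", $i); print $i } }' \
           /tmp/hosts.multi-nodes)
echo "$node_ips"
```

A quick glance at the printed list confirms all four node IPs are registered before any playbook run touches the machines.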

Running the Deployment

[aaa@qq.com ansible]# ansible-playbook 20.multi-nodes.yml 

PLAY [agent-nodes] *********************************************************************************************************************

TASK [clean : stop services] ***********************************************************************************************************
changed: [host134]
changed: [host132]
changed: [host133]
changed: [host131]
... (output omitted)
PLAY RECAP *****************************************************************************************************************************
host131                    : ok=94   changed=81   unreachable=0    failed=0   
host132                    : ok=56   changed=46   unreachable=0    failed=0   
host133                    : ok=56   changed=46   unreachable=0    failed=0   
host134                    : ok=56   changed=46   unreachable=0    failed=0   

[aaa@qq.com ansible]# 
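The PLAY RECAP can also be checked mechanically rather than by eye. A minimal sketch, with the recap lines from this run inlined as a string (on a real run you would capture the playbook output instead):

```shell
# Recap lines from the run above, inlined for illustration.
recap='host131                    : ok=94   changed=81   unreachable=0    failed=0
host132                    : ok=56   changed=46   unreachable=0    failed=0
host133                    : ok=56   changed=46   unreachable=0    failed=0
host134                    : ok=56   changed=46   unreachable=0    failed=0'

# Any nonzero failed= or unreachable= count means the run needs attention.
if echo "$recap" | grep -Eq 'failed=[1-9]|unreachable=[1-9]'; then
  echo "deployment had failures"
else
  echo "all hosts ok"
fi
```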

Verifying the Results

  • Version check
[aaa@qq.com ansible]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
[aaa@qq.com ansible]# 
  • Node check
[aaa@qq.com ansible]# kubectl get nodes -o wide
NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
192.168.163.131   Ready    <none>   41s   v1.17.0   192.168.163.131   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.163.132   Ready    <none>   45s   v1.17.0   192.168.163.132   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.163.133   Ready    <none>   45s   v1.17.0   192.168.163.133   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.163.134   Ready    <none>   45s   v1.17.0   192.168.163.134   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
[aaa@qq.com ansible]# 
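With only four nodes the STATUS column is easy to scan by eye, but a scripted check scales better. A minimal sketch, with the node lines from the output above inlined (on a live cluster, substitute the output of kubectl get nodes --no-headers):

```shell
# Node lines from the run above, inlined for illustration.
nodes='192.168.163.131   Ready    <none>   41s   v1.17.0
192.168.163.132   Ready    <none>   45s   v1.17.0
192.168.163.133   Ready    <none>   45s   v1.17.0
192.168.163.134   Ready    <none>   45s   v1.17.0'

# Print the name of any node whose STATUS column is not "Ready".
not_ready=$(echo "$nodes" | awk '$2 != "Ready" {print $1}')
if [ -z "$not_ready" ]; then
  echo "all nodes Ready"
else
  echo "not ready: $not_ready"
fi
```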
  • kubectl get cs
[aaa@qq.com ansible]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[aaa@qq.com ansible]# 

The unknown-status issue seen in 1.16 no longer occurs.

Troubleshooting

The deployment occasionally fails with the error below. The root cause has not been pinned down, but rerunning the playbook makes it go away.

error: the server doesn't have a resource type "clusterrolebinding"
error: no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
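The second message suggests the kube-apiserver had not finished serving the rbac.authorization.k8s.io API group when kubectl ran, which would explain why a rerun succeeds. Independent of that timing race, RBAC manifests can target the stable v1 API instead of v1beta1 (v1 has been available since Kubernetes 1.8, while v1beta1 was eventually removed in 1.22). A hypothetical manifest along those lines — the binding name and subject are illustrative, not necessarily what Easypack's scripts actually apply:

```yaml
# Hypothetical ClusterRoleBinding using the stable v1 RBAC API
# instead of the deprecated rbac.authorization.k8s.io/v1beta1.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap            # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper     # built-in bootstrapping role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap            # illustrative subject
```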