How to deploy a Kubernetes master and nodes
Cluster setup
Environment
Hostname | IP | Role |
---|---|---|
ns-yun-020065.vclound.com | 10.189.20.xx | master |
ns-yun-020066.vclound.com | 10.189.20.xx | compute node |
ns-yun-020067.vclound.com | 10.189.20.xx | compute node |
ns-storage-020100.vclound.com | 10.189.20.xxx | rook node |
ns-storage-020101.vclound.com | 10.189.20.xxx | rook node |
ns-storage-020102.vclound.com | 10.189.20.xxx | rook node |
ns-storage-020104.vclound.com | 10.189.20.xxx | rook node |
Initializing the master
Complete every step of the pre-installation preparation first.
Parameter descriptions
--image-repository: pull images from a custom registry instead of the default
--kubernetes-version: pin the Kubernetes version so the matching images can be fetched from the registry
--pod-network-cidr: the private IP range to assign to the pod network
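Before initializing, you can preview exactly which images this version will need; kubeadm config images list is a standard subcommand in kubeadm 1.13:
# Preview the control-plane images kubeadm will pull for this release
kubeadm config images list --kubernetes-version v1.13.3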
Initialization
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.3 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ns-yun-020065.vclound.com localhost] and IPs [10.189.20.65 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ns-yun-020065.vclound.com localhost] and IPs [10.189.20.65 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ns-yun-020065.vclound.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.189.20.65]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.548043 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ns-yun-020065.vclound.com" as an annotation
[mark-control-plane] Marking the node ns-yun-020065.vclound.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ns-yun-020065.vclound.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: tsxv62.ow32ru16sw4h4tmh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.189.20.xx:6443 --token tsxv62.ow32ru16sw4h4tmh --discovery-token-ca-cert-hash sha256:3b985da22317aa3bc2fbb4b5e64762877d19010504f01cecdd53cfada4a8b0d1
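If you administer the cluster as root, a common alternative to copying admin.conf into $HOME (standard kubectl behavior, also noted in the kubeadm docs) is to point kubectl at it directly:
# As root, use the admin kubeconfig without copying it
export KUBECONFIG=/etc/kubernetes/admin.conf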
Error messages
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
If init stalls at the lines above, the host cannot reach the internet or download the images; fetch the images in advance as described in the image section earlier.
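As the preflight hint itself suggests, the images can be pre-pulled from the mirror registry so the next kubeadm init does not stall; the flags mirror the init command above:
# Pre-pull all control-plane images from the mirror registry
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.3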
Master status check
[root@ns-yun-020065 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ns-yun-020065.vclound.com NotReady master 76m v1.13.3
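The master reports NotReady at this point because no pod network has been deployed yet; to confirm, inspect the node's conditions (standard kubectl usage):
# The Ready condition will cite the missing network plugin
kubectl describe node ns-yun-020065.vclound.com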
Checking pods in all namespaces
[root@ns-yun-020065 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78d4cf999f-7dfvf 0/1 Pending 0 75m <- Pending because no pod network is installed yet
kube-system coredns-78d4cf999f-pg2tc 0/1 Pending 0 75m
kube-system etcd-ns-yun-020065.vclound.com 1/1 Running 0 74m
kube-system kube-apiserver-ns-yun-020065.vclound.com 1/1 Running 0 75m
kube-system kube-controller-manager-ns-yun-020065.vclound.com 1/1 Running 0 74m
kube-system kube-proxy-27cqv 1/1 Running 0 75m
kube-system kube-scheduler-ns-yun-020065.vclound.com 1/1 Running 0 75m
Adding the network add-on
There are many network add-ons to choose from.
flannel is used here purely for convenience; it was not chosen for best performance.
If you passed a different network to --pod-network-cidr= during init, edit the "Network": "10.244.0.0/16" entry in kube-flannel.yml to match, as shown in the excerpt below.
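For reference, the CIDR lives in the net-conf.json key of the manifest's ConfigMap; this excerpt reflects the upstream kube-flannel.yml at the time of writing, so verify it against the copy you download:
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }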
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@ns-yun-020065 rpms]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Checking the pods again
[root@ns-yun-020065 rpms]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78d4cf999f-7dfvf 1/1 Running 0 91m <- status is now Running
kube-system coredns-78d4cf999f-pg2tc 1/1 Running 0 91m
kube-system etcd-ns-yun-020065.vclound.com 1/1 Running 0 90m
kube-system kube-apiserver-ns-yun-020065.vclound.com 1/1 Running 0 90m
kube-system kube-controller-manager-ns-yun-020065.vclound.com 1/1 Running 0 90m
kube-system kube-flannel-ds-amd64-gvtld 1/1 Running 0 70s
kube-system kube-proxy-27cqv 1/1 Running 0 91m
kube-system kube-scheduler-ns-yun-020065.vclound.com 1/1 Running 0 90m
Joining Kubernetes nodes to the cluster
Complete every step of the pre-installation preparation first.
Run the command below on each node to join it to the cluster.
You need the bootstrap token and the CA cert sha256 hash:
either copy them from the output printed when the master was initialized,
or regenerate them with the methods described in the token sections below.
kubeadm join 10.189.20.xx:6443 --token tsxv62.ow32ru16sw4h4tmh --discovery-token-ca-cert-hash sha256:3b985da22317aa3bc2fbb4b5e64762877d19010504f01cecdd53cfada4a8b0d1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
[discovery] Trying to connect to API Server "10.189.20.65:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.189.20.65:6443"
[discovery] Requesting info from "https://10.189.20.65:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.189.20.65:6443"
[discovery] Successfully established connection with API Server "10.189.20.65:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ns-yun-020067.vclound.com" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
token
A bootstrap token is valid for 24 hours by default.
List the tokens:
[root@ns-yun-020065 rpms]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
tsxv62.ow32ru16sw4h4tmh 18h 2019-02-13T10:32:00+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
The output below shows an expired token:
[root@ns-yun-020065 ceph]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
tsxv62.ow32ru16sw4h4tmh <invalid> 2019-02-13T10:32:00+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
If the token has expired, create a new one:
kubeadm token create
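kubeadm can also print a complete, ready-to-run join command together with a fresh token (the --print-join-command flag is available in kubeadm 1.13):
# Print a kubeadm join command with a newly created token
kubeadm token create --print-join-command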
discovery-token-ca-cert-hash
The sha256 hash can be recomputed from the CA certificate as follows:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
3b985da22317aa3bc2fbb4b5e64762877d19010504f01cecdd53cfada4a8b0d1
Verifying the cluster nodes
[root@ns-yun-020065 ceph]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ns-storage-020100.vclound.com Ready <none> 11d v1.13.3
ns-storage-020101.vclound.com Ready <none> 11d v1.13.3
ns-storage-020102.vclound.com Ready <none> 11d v1.13.3
ns-storage-020104.vclound.com Ready <none> 3d22h v1.13.3
ns-yun-020065.vclound.com Ready master 14d v1.13.3
ns-yun-020066.vclound.com Ready <none> 14d v1.13.3
ns-yun-020067.vclound.com Ready <none> 14d v1.13.3
Once the cluster is fully assembled:
The master node runs the following containers:
coredns: the cluster-internal DNS service
etcd: the key-value store that holds cluster state and configuration
apiserver: as the name suggests, serves the cluster management API
controller-manager: runs the controllers that reconcile cluster state
In addition, every node automatically starts the following containers:
flannel: the network add-on
kube-proxy: network proxy and load balancer
[root@ns-yun-020065 ceph]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-78d4cf999f-7dfvf 1/1 Running 7 14d 10.244.0.18 ns-yun-020065.vclound.com <none> <none>
kube-system coredns-78d4cf999f-pg2tc 1/1 Running 6 14d 10.244.0.17 ns-yun-020065.vclound.com <none> <none>
kube-system etcd-ns-yun-020065.vclound.com 1/1 Running 8 14d 10.189.20.65 ns-yun-020065.vclound.com <none> <none>
kube-system kube-apiserver-ns-yun-020065.vclound.com 1/1 Running 6 14d 10.189.20.65 ns-yun-020065.vclound.com <none> <none>
kube-system kube-controller-manager-ns-yun-020065.vclound.com 1/1 Running 6 14d 10.189.20.65 ns-yun-020065.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-82sws 1/1 Running 21 11d 10.189.20.101 ns-storage-020101.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-8qdd4 1/1 Running 18 11d 10.189.20.102 ns-storage-020102.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-ccb47 1/1 Running 9 3d22h 10.189.20.104 ns-storage-020104.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-gvtld 1/1 Running 8 14d 10.189.20.65 ns-yun-020065.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-jlkhv 1/1 Running 9 14d 10.189.20.66 ns-yun-020066.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-ld8h8 1/1 Running 17 11d 10.189.20.100 ns-storage-020100.vclound.com <none> <none>
kube-system kube-flannel-ds-amd64-phzxd 1/1 Running 7 14d 10.189.20.67 ns-yun-020067.vclound.com <none> <none>
kube-system kube-proxy-27cqv 1/1 Running 6 14d 10.189.20.65 ns-yun-020065.vclound.com <none> <none>
kube-system kube-proxy-c8z9s 1/1 Running 14 11d 10.189.20.100 ns-storage-020100.vclound.com <none> <none>
kube-system kube-proxy-c9j67 1/1 Running 16 11d 10.189.20.102 ns-storage-020102.vclound.com <none> <none>
kube-system kube-proxy-gjk7q 1/1 Running 9 3d22h 10.189.20.104 ns-storage-020104.vclound.com <none> <none>
kube-system kube-proxy-hvvww 1/1 Running 21 11d 10.189.20.101 ns-storage-020101.vclound.com <none> <none>
kube-system kube-proxy-hxqnf 1/1 Running 8 14d 10.189.20.66 ns-yun-020066.vclound.com <none> <none>
kube-system kube-proxy-phw8q 1/1 Running 6 14d 10.189.20.67 ns-yun-020067.vclound.com <none> <none>
kube-system kube-scheduler-ns-yun-020065.vclound.com 1/1 Running 7 14d 10.189.20.65 ns-yun-020065.vclound.com <none> <none>
Cluster health check
[root@ns-yun-020065 rpms]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
[root@ns-yun-020065 rpms]# kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h14m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 4h14m
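As a final sanity check, kubectl can print the control-plane and DNS endpoints:
# Show the API server and CoreDNS addresses
kubectl cluster-info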