
kubeadm k8s v1.13.1 HA Installation Tutorial, Part 3: Installing the Masters


This article was first published on my personal blog: https://blog.smile13.com/articles/2019/01/14/1547441934762.html

1. Install the master (the first master)

1.1 Edit the kubeadm configuration file

[root@k8s01 ~]# cat ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.1
controlPlaneEndpoint: k8s-cluster.smile13.com:6443
apiServer:
  certSANs:
    - k8s-cluster.smile13.com
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
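
Note: controlPlaneEndpoint must be resolvable from every node, and in a real HA setup it should point at the VIP / load balancer in front of the apiservers (set up in the earlier parts of this series). Purely as an illustrative fallback for a lab, and assuming the name is not already served by DNS, you could map it to the first master's address (192.168.158.131 in the transcripts below) in /etc/hosts on every node:

cat >> /etc/hosts <<EOF
192.168.158.131 k8s-cluster.smile13.com
EOF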

1.2 Pull the images in advance

[root@k8s01 ~]# kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
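
To confirm the images are now local, you can list them (a quick sketch, assuming Docker is the container runtime):

docker images | grep registry.cn-hangzhou.aliyuncs.com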

1.3 Initialize

[root@k8s01 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-cluster.smile13.com k8s-cluster.smile13.com] and IPs [10.96.0.1 192.168.158.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.158.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s01 localhost] and IPs [192.168.158.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.003045 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s01" as an annotation
[mark-control-plane] Marking the node k8s01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: t1yovr.ag1xbdhfgo36z8f7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join k8s-cluster.smile13.com:6443 --token t1yovr.ag1xbdhfgo36z8f7 --discovery-token-ca-cert-hash sha256:ceaf1b9a9ef558ff8706331cb88e81c28d48528972cee2b92a8416364768e45d

[root@k8s01 ~]# mkdir -p $HOME/.kube
[root@k8s01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
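
Note: the kubeadm join command printed above embeds a bootstrap token, which expires after 24 hours by default. If you need to join machines later, a fresh join command can be printed on an existing master:

kubeadm token create --print-join-command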

1.4 Check the cluster status

[root@k8s01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE     VERSION
k8s01   NotReady   master   4m45s   v1.13.1

Note: the master shows NotReady because no network plugin has been installed yet.

1.5 Install the Calico network plugin

[root@k8s01 ~]# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
[root@k8s01 ~]# kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

Note: if your pod CIDR is not `192.168.0.0/16`, download the manifest first and change the value of CALICO_IPV4POOL_CIDR in it to your pod CIDR, as sketched below.
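
Here the podSubnet in kubeadm-config.yaml is 10.244.0.0/16, so the substitution applies. A minimal sketch, assuming the v3.3 manifest carries the default CALICO_IPV4POOL_CIDR value of 192.168.0.0/16:

curl -fsSL -o calico.yaml https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# replace the default pool with the podSubnet from kubeadm-config.yaml
sed -i 's#192.168.0.0/16#10.244.0.0/16#g' calico.yaml
kubectl apply -f calico.yaml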
Check the cluster status again:
[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   42m   v1.13.1
[root@k8s01 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-node-vf2j5               2/2     Running   0          16m   192.168.158.131   k8s01   <none>           <none>
kube-system   coredns-89cc84847-msbbj         1/1     Running   0          19m   10.244.0.3        k8s01   <none>           <none>
kube-system   coredns-89cc84847-pg8l2         1/1     Running   0          19m   10.244.0.2        k8s01   <none>           <none>
kube-system   etcd-k8s01                      1/1     Running   0          18m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-apiserver-k8s01            1/1     Running   0          18m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-controller-manager-k8s01   1/1     Running   0          19m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-proxy-x6v57                1/1     Running   0          19m   192.168.158.131   k8s01   <none>           <none>
kube-system   kube-scheduler-k8s01            1/1     Running   0          18m   192.168.158.131   k8s01   <none>           <none>


1.6 Copy the relevant files to the other masters

[root@k8s01 k8s-install]# cd /etc/kubernetes && tar cvzf k8s-key.tgz pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.* admin.conf
pki/ca.crt
pki/ca.key
pki/sa.key
pki/sa.pub
pki/front-proxy-ca.crt
pki/front-proxy-ca.key
pki/etcd/ca.crt
pki/etcd/ca.key
admin.conf
[root@k8s01 kubernetes]# scp /etc/kubernetes/k8s-key.tgz k8s02:/etc/kubernetes/
k8s-key.tgz                                                                                                                                                100%   11KB   3.9MB/s   00:00    
[root@k8s01 kubernetes]# scp /etc/kubernetes/k8s-key.tgz k8s03:/etc/kubernetes/
k8s-key.tgz                                                                                                                                                100%   11KB   3.6MB/s   00:00

Extract the k8s-key.tgz archive on each of the other masters, as sketched below.
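
A minimal sketch of that step, run as root on k8s02 and k8s03 (assuming the archive was copied to /etc/kubernetes as above):

cd /etc/kubernetes && tar xvzf k8s-key.tgz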

Copy kubeadm-config.yaml to the other masters (it is used to pull the images from Alibaba Cloud; you could also skip this and pull the required images directly, but for convenience I copy the config file and pull with it):
[root@k8s01 ~]# scp k8s-install/kubeadm-config.yaml k8s02:~
kubeadm-config.yaml                                                                                                                                        100%  302   415.7KB/s   00:00    
[root@k8s01 ~]# scp k8s-install/kubeadm-config.yaml k8s03:~
kubeadm-config.yaml                                                                                                                                        100%  302   222.8KB/s   00:00    

1.7 Install the other masters (using k8s02 as the example; the steps for k8s03 are identical)

1.7.1 Pull the images (the official k8s registry is unreachable from here, so pull from Alibaba Cloud)

[root@k8s02 ~]# kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

1.7.2 Initialize (join as a control-plane node)

[root@k8s02 ~]# kubeadm join k8s-cluster.smile13.com:6443 --token t1yovr.ag1xbdhfgo36z8f7 --discovery-token-ca-cert-hash sha256:ceaf1b9a9ef558ff8706331cb88e81c28d48528972cee2b92a8416364768e45d --experimental-control-plane
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "k8s-cluster.smile13.com:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://k8s-cluster.smile13.com:6443"
[discovery] Requesting info from "https://k8s-cluster.smile13.com:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-cluster.smile13.com:6443"
[discovery] Successfully established connection with API Server "k8s-cluster.smile13.com:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[join] Running pre-flight checks before initializing the new control plane instance
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s02 localhost] and IPs [192.168.158.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s02 localhost] and IPs [192.168.158.132 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-cluster.smile13.com k8s-cluster.smile13.com] and IPs [10.96.0.1 192.168.158.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s02" as an annotation
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s02 ~]# mkdir -p $HOME/.kube
[root@k8s02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.7.3 All masters installed; check the cluster status

[root@k8s02 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   31m     v1.13.1
k8s02   Ready    master   4m51s   v1.13.1
k8s03   Ready    master   3m36s   v1.13.1

[root@k8s02 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-node-62ntc               2/2     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   calico-node-lms7b               2/2     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   calico-node-vf2j5               2/2     Running   0          25m     192.168.158.131   k8s01   <none>           <none>
kube-system   coredns-89cc84847-msbbj         1/1     Running   0          28m     10.244.0.3        k8s01   <none>           <none>
kube-system   coredns-89cc84847-pg8l2         1/1     Running   0          28m     10.244.0.2        k8s01   <none>           <none>
kube-system   etcd-k8s01                      1/1     Running   0          27m     192.168.158.131   k8s01   <none>           <none>
kube-system   etcd-k8s02                      1/1     Running   0          2m53s   192.168.158.132   k8s02   <none>           <none>
kube-system   etcd-k8s03                      1/1     Running   0          97s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-apiserver-k8s01            1/1     Running   0          27m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-apiserver-k8s02            1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-apiserver-k8s03            1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-controller-manager-k8s01   1/1     Running   1          28m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-controller-manager-k8s02   1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-controller-manager-k8s03   1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-proxy-8r9bq                1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-proxy-bv2bf                1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
kube-system   kube-proxy-x6v57                1/1     Running   0          28m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-scheduler-k8s01            1/1     Running   1          27m     192.168.158.131   k8s01   <none>           <none>
kube-system   kube-scheduler-k8s02            1/1     Running   0          2m54s   192.168.158.132   k8s02   <none>           <none>
kube-system   kube-scheduler-k8s03            1/1     Running   0          98s     192.168.158.133   k8s03   <none>           <none>
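
With three stacked etcd members running, you can optionally check etcd's own view of the cluster. A sketch, assuming the default kubeadm certificate paths and that the etcd image ships a shell and etcdctl:

kubectl -n kube-system exec etcd-k8s01 -- sh -c \
  'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key member list'

A healthy cluster lists three members, one per master.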

 

Copyright notice: this is an original post by the author; please credit the source when republishing!
