Setting up a multi-node Kubernetes cluster with kubeadm on CentOS 7
1. kubeadm, kubelet and kubectl must be installed on every node, and Docker must already be running.
2. I used Vagrant and brought up three machines (a short usage sketch follows the box list below):
boxes = [
{
:name => "k8s-master",
:eth1 => "192.168.205.120",
:mem => "2048",
:cpu => "2"
},
{
:name => "k8s-node1",
:eth1 => "192.168.205.121",
:mem => "2048",
:cpu => "1"
},
{
:name => "k8s-node2",
:eth1 => "192.168.205.122",
:mem => "2048",
:cpu => "1"
}
]
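As a quick usage sketch (assuming the box list above sits inside the usual Vagrantfile loop that defines one VM per entry), the machines can be brought up and entered like this; these are standard Vagrant commands, not part of the original setup:
# bring up all three VMs defined in the Vagrantfile
vagrant up
# check their state, then log into the master
vagrant status
vagrant ssh k8s-master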
3. Install the Docker environment:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker && sudo systemctl start docker
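Before moving on, it is worth confirming that the Docker daemon is actually up; a minimal check (output omitted here) could be:
# confirm docker is running and see which cgroup driver kubelet will have to match
systemctl status docker --no-pager
docker info | grep -i 'cgroup driver'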
4. Add the Aliyun Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
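As a sanity check that the new repository resolves, the packages can be listed before installing anything (purely optional):
# verify the kubernetes repo is enabled and the packages are visible
yum repolist enabled | grep -i kubernetes
yum list kubelet kubeadm kubectl --showduplicates | tail -n 5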
5. Install the Kubernetes packages and prepare the system
setenforce 0
yum install -y kubelet kubeadm kubectl
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
EOF'
sudo sysctl --system
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo swapoff -a
systemctl enable docker.service
systemctl enable kubelet.service
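Note that setenforce 0 and swapoff -a only last until the next reboot. A hedged sketch of making both changes persistent, assuming the stock /etc/selinux/config and /etc/fstab layouts:
# keep SELinux permissive across reboots
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# comment out the swap entry so swap stays off after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab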
6. Check that the master and worker nodes have the programs above installed
[root@k8s-master ~]# which kubeadm
/usr/bin/kubeadm
[root@k8s-master ~]# which kubelet
/usr/bin/kubelet
[root@k8s-master ~]# which kubectl
/usr/bin/kubectl
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.7
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        7141c199a2
 Built:             Wed Mar 4 01:24:10 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.7
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       7141c199a2
  Built:            Wed Mar 4 01:22:45 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
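The Kubernetes binaries can report their versions too, which is a quick way to confirm that kubeadm, kubelet and kubectl were all installed from the same repo:
kubeadm version
kubelet --version
kubectl version --client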
7. Pull the images from the Aliyun mirror
cat ./pull.sh
#!/bin/bash
# For every image kubeadm needs, pull it from the Aliyun mirror,
# re-tag it as k8s.gcr.io/<name>, then remove the mirror tag.
for i in `kubeadm config images list`; do
  imageName=${i#k8s.gcr.io/}
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.aliyuncs.com/google_containers/$imageName
done
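How the script is invoked is up to you; one plausible workflow (the grep at the end is just a convenience) is:
# preview the images kubeadm wants, pull them via the Aliyun mirror, then verify the tags
kubeadm config images list
bash ./pull.sh
docker images | grep k8s.gcr.io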
8. Initialize the Kubernetes master node
kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.168.205.120
Here, --pod-network-cidr lets you choose your own pod network CIDR, and --apiserver-advertise-address must point to the master node's IP.
When the command above finishes, it prints a join command like:
kubeadm join 192.168.205.120:6443 --token snipoh.vxfykjsi7e7rbtna \
--discovery-token-ca-cert-hash sha256:e202fbfa3eed1e1d6c646dd568285947d67e99b51e824c99aeb6f45080d284c1
which means the master node was set up successfully.
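The token in that join command expires after 24 hours by default; if it is lost or expired, a fresh join command can be printed on the master at any time:
# regenerate the full join command, including a new token and the CA cert hash
kubeadm token create --print-join-command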
9. Configure kubectl on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
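With the kubeconfig copied into place, kubectl should now be able to talk to the API server:
# basic connectivity checks against the new control plane
kubectl cluster-info
kubectl get nodes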
10. On the other nodes, run the join command printed by kubeadm init
kubeadm join 192.168.205.120:6443 --token 8b372w.suq116thqsby42a4 \
--discovery-token-ca-cert-hash sha256:3b0f82aadfdb4a3929dbd838153e38f9054f3084721a34953ec9e4069e045016
Back on the master, check the pods:
kubectl get pod --all-namespaces
Output like the following means the installation basically succeeded (the coredns pods stay Pending until a network plugin is installed):
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5c98db65d4-f4kjf 0/1 Pending 0 58m
kube-system coredns-5c98db65d4-xqpwd 0/1 Pending 0 58m
kube-system etcd-k8s-master 1/1 Running 0 57m
kube-system kube-apiserver-k8s-master 1/1 Running 0 57m
kube-system kube-controller-manager-k8s-master 1/1 Running 0 57m
kube-system kube-proxy-9l9vr 1/1 Running 0 58m
kube-system kube-scheduler-k8s-master 1/1 Running 0 57m
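At this point the nodes themselves would still show up as NotReady, because no pod network has been deployed yet; the next step fixes that. This can be confirmed with:
# nodes stay NotReady until a CNI plugin (Weave Net below) is installed
kubectl get nodes -o wide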
11. Install the network plugin (Weave Net)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Once you see the following, the installation is complete:
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-5jclw 1/1 Running 0 9h
kube-system coredns-6955765f44-wlr88 1/1 Running 0 9h
kube-system etcd-k8s-master 1/1 Running 0 9h
kube-system kube-apiserver-k8s-master 1/1 Running 0 9h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 9h
kube-system kube-proxy-vdxpv 1/1 Running 0 9h
kube-system kube-scheduler-k8s-master 1/1 Running 0 9h
kube-system weave-net-n2wkx 2/2 Running 0 8h
weave weave-scope-agent-4l6dl 1/1 Running 0 9h
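As a final smoke test (the deployment name below is just an example, not part of the original article), a throwaway workload confirms that scheduling and pod networking work end to end:
# schedule a test pod, check that it reaches Running on a worker, then clean up
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx-test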