Deploying a single-master Kubernetes (k8s) cluster
Configuring the k8s cluster and docker communication
Making the certificates
Operations on the master
[aaa@qq.com ~]# mkdir k8s
[aaa@qq.com ~]# cd k8s/
Download the certificate tools
[aaa@qq.com k8s]# mkdir etcd-cert
[aaa@qq.com k8s]# cd etcd-cert/
[aaa@qq.com etcd-cert]# ls    //dragged in from the host machine
cfssl cfssl-certinfo cfssljson
Make the tools executable so the system recognizes them
[aaa@qq.com etcd-cert]# chmod +x cfssl*
[aaa@qq.com etcd-cert]# mv cfssl* /usr/local/bin/
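cfssl issues the certificates, cfssljson converts cfssl's JSON output into .pem files, and cfssl-certinfo inspects existing certificates. A quick check that the tools are now on the PATH:
cfssl version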
//Define the CA configuration
[aaa@qq.com k8s]# cd etcd-cert/
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
//Define the CA certificate signing request (CSR)
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
Generate the CA certificate; this produces ca-key.pem and ca.pem
[aaa@qq.com etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Check that they were generated
[aaa@qq.com etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
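Optionally inspect the new CA (assumes openssl is installed):
openssl x509 -in ca.pem -noout -subject -dates    # subject should show CN=etcd CA with a 10-year validity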
Specify the three etcd node IPs so the nodes can verify each other's communication
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.136.88",
    "192.168.136.40",
    "192.168.136.30"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Generate the ETCD server certificate; this produces server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
All four certificates must be present:
[aaa@qq.com etcd-cert]# ls *.pem
ca-key.pem ca.pem server-key.pem server.pem
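Optionally confirm the three node IPs made it into the server certificate's SAN list (assumes openssl is installed):
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'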
ETCD binary packages
Set up on the master node
[aaa@qq.com k8s]# cd /root/k8s/
Drag in the files from the host:
etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
Unpack etcd
[aaa@qq.com k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
Create the config, command, and certificate directories
[aaa@qq.com k8s]# cd etcd-v3.3.10-linux-amd64/
[aaa@qq.com etcd-v3.3.10-linux-amd64]# mkdir -p /opt/etcd/{cfg,bin,ssl}
Move the command files into place
[aaa@qq.com etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/
Copy the certificates
[aaa@qq.com ~]# cd /root/k8s/etcd-cert/
[aaa@qq.com etcd-cert]# cp *.pem /opt/etcd/ssl/
Create the ETCD deployment script
[aaa@qq.com k8s]# vim etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd and start etcd; the start blocks until the other cluster members join
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
Run the script; it blocks, waiting for the other nodes to join
[aaa@qq.com k8s]# bash etcd.sh etcd01 192.168.136.88 etcd02=https://192.168.136.40:2380,etcd03=https://192.168.136.30:2380
Open another session and you will see the etcd process is already running
[aaa@qq.com ~]# ps -ef | grep etcd
or
[aaa@qq.com ~]# systemctl status etcd.service
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: activating (start) since Mon 2020-09-28 23:07:39 CST; 6s ago
Copy the certificates (the whole /opt/etcd tree) to the other nodes
[aaa@qq.com k8s]# scp -r /opt/etcd/ aaa@qq.com:/opt/
[aaa@qq.com k8s]# scp -r /opt/etcd/ aaa@qq.com:/opt
Copy the systemd unit file to the other nodes
[aaa@qq.com k8s]# scp /usr/lib/systemd/system/etcd.service aaa@qq.com:/usr/lib/systemd/system/
[aaa@qq.com k8s]# scp /usr/lib/systemd/system/etcd.service aaa@qq.com:/usr/lib/systemd/system/
Configure ETCD on node01 and node02
The following must be configured once on each of the two nodes, changed to the local addresses. The example below is node01's (etcd02); on node02 use ETCD_NAME="etcd03" and 192.168.136.30.
[aaa@qq.com ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.136.40:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.136.40:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.136.40:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.136.40:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.136.88:2380,etcd02=https://192.168.136.40:2380,etcd03=https://192.168.136.30:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service
[aaa@qq.com ssl]# systemctl start etcd
[aaa@qq.com ssl]# systemctl enable etcd
Check the status
[aaa@qq.com ssl]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-09-28 23:17:22 CST; 10s ago
Check the cluster status from the master node (each member should report healthy)
[aaa@qq.com k8s]# cd /root/k8s/etcd-cert/
[aaa@qq.com etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379" cluster-health
Docker engine deployment
Deploy the docker engine on all node machines (see a docker installation script for details); install docker on both nodes.
flannel网络配置
- Flannel, developed by CoreOS, is a tool built specifically for multi-host docker networking: containers created on any node in the cluster get virtual IP addresses that are unique across the whole cluster.
- Flannel assigns each host a subnet from which its containers draw their IPs; these IPs are routable between hosts, so containers can communicate across hosts without NAT or port mapping.
On the master
Write the allocated network range into ETCD
[aaa@qq.com etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.136.88:2379,https://192.168.136.30:2379,https://192.168.136.40:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
Result: { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
vxlan: the overlay backend type; VXLAN tunnels carry container traffic between the nodes
Verify that the entry was written
[aaa@qq.com etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379" get /coreos.com/network/config
Result: { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
Configure flannel on the nodes
The following must be configured once on each of the two nodes.
Unpack (on both nodes)
[aaa@qq.com ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
Create the k8s working directory (on both nodes)
[aaa@qq.com ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[aaa@qq.com ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
Note: cfg = config files; bin = command files; ssl = certificates
Create the flanneld script (on both nodes)
[aaa@qq.com ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
Explanation: ExecStartPost runs mk-docker-opts.sh, which converts the subnet flanneld leased into docker daemon options and writes them to /run/flannel/subnet.env as $DOCKER_NETWORK_OPTIONS; docker.service loads that file in the next step.
Enable the flannel network (on both nodes)
[aaa@qq.com ~]# bash flannel.sh https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379
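Once flanneld is running on both nodes, each node leases a /24 out of 172.17.0.0/16 and records it in etcd. An optional check from the master (an illustrative sketch; the paths follow the etcd layout used above):
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.136.88:2379" ls /coreos.com/network/subnets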
Connect docker to flannel (on both nodes)
[aaa@qq.com ~]# vim /usr/lib/systemd/system/docker.service
Line 14: add EnvironmentFile=/run/flannel/subnet.env (declare the environment file)
Line 15: append $DOCKER_NETWORK_OPTIONS to the ExecStart line (use the environment variable)
Restart the docker service (on both nodes)
[aaa@qq.com ~]# systemctl daemon-reload
[aaa@qq.com ~]# systemctl restart docker
Check whether docker0 attached to flannel
Note: containers on different nodes talk to each other using addresses from each node's leased subnet (for example 172.17.80.0/24 here).
Ping the other node's docker0 address to prove flannel is doing the routing; a quick check is sketched below.
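A minimal sketch of that check on each node (interface names assume the vxlan backend; the values differ per node):
cat /run/flannel/subnet.env      # FLANNEL_SUBNET is this node's leased /24
ip addr show flannel.1           # the VXLAN interface created by flanneld
ip addr show docker0             # docker0 should now sit inside FLANNEL_SUBNET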
Create a container and test
[aaa@qq.com ~]# docker run -it centos:7 /bin/bash
Inside the container, install net-tools for ifconfig, then check the address:
yum install net-tools -y
ifconfig    // the container shows an IP from this node's flannel subnet
Repeat on the other node and ping across: containers on different nodes can reach each other.
Configure the three master control-plane components
Three components must be brought up on the master: first the apiserver, second the scheduler, third the controller manager.
Master configuration
Making the certificates
On the master, unpack the component scripts from master.zip
[aaa@qq.com k8s]# mkdir master
[aaa@qq.com k8s]# cd master/
[aaa@qq.com master]# unzip master.zip
[aaa@qq.com master]# ls
apiserver.sh controller-manager.sh scheduler.sh
[aaa@qq.com master]# chmod +x *.sh    //scheduler.sh and controller-manager.sh are executed directly below
Create the working directory (cfg = config files; bin = command files; ssl = certificates)
[aaa@qq.com master]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
Create the certificate directory
[aaa@qq.com k8s]# cd /root/k8s/
[aaa@qq.com k8s]# mkdir k8s-cert
[aaa@qq.com k8s]# cd k8s-cert/
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
------CA certificate signing request------------------------
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate the admin certificate (the O field system:masters maps to the built-in cluster-admin role binding)
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
The kube-proxy (client) certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
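Note: the listing below also contains server.pem and server-key.pem, the apiserver's server certificate, but the step that generates it did not survive in this transcript. A conventional sketch follows, run in the same /root/k8s/k8s-cert/ directory; the service cluster IP 10.0.0.1 and the host list are assumptions and must match the addresses apiserver.sh uses:
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.136.88",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server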
Check that no certificate is missing (8 .pem files in all)
[aaa@qq.com k8s-cert]# ls *.pem
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
Copy the CA and apiserver server certificates into the ssl directory
cp ca*pem server*pem /opt/kubernetes/ssl/
Configure kubernetes
Unpack the kubernetes tarball
[aaa@qq.com k8s]# cd /root/k8s/
[aaa@qq.com k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[aaa@qq.com k8s]# cd /root/k8s/kubernetes/server/bin/
//Copy the key command files into place
[aaa@qq.com bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
Token authentication
Generate a random serial number
[aaa@qq.com bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
41b1afc1eff1d13042da195f37460bf5    // a randomly generated value
Write the token file (format: token,user name,UID,group)
[aaa@qq.com bin]# vim /opt/kubernetes/cfg/token.csv
41b1afc1eff1d13042da195f37460bf5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Start the apiserver service
[aaa@qq.com bin]# cd /root/k8s/master/
[aaa@qq.com master]# bash apiserver.sh 192.168.136.88 https://192.168.136.88:2379,https://192.168.136.30:2379,https://192.168.136.40:2379
Check that the ports are listening (both HTTPS 6443 and HTTP 8080 should appear)
[aaa@qq.com cfg]# netstat -ntap | grep 6443
tcp 0 0 192.168.136.88:6443 0.0.0.0:* LISTEN 18333/kube-apiserve
[aaa@qq.com cfg]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 18333/kube-apiserve
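Optionally confirm the API answers on the insecure local port (8080 serves unauthenticated requests and, as the netstat output shows, is bound to 127.0.0.1):
curl http://127.0.0.1:8080/version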
Start the scheduler service
[aaa@qq.com master]# ./scheduler.sh 127.0.0.1
Start the controller-manager
[aaa@qq.com master]# ./controller-manager.sh 127.0.0.1
Check the master component status
[aaa@qq.com master]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
Node deployment
Copy kubelet and kube-proxy to the node machines
[aaa@qq.com bin]# cd /root/k8s/kubernetes/server/bin/
[aaa@qq.com bin]# scp kubelet kube-proxy aaa@qq.com:/opt/kubernetes/bin/
[aaa@qq.com bin]# scp kubelet kube-proxy aaa@qq.com:/opt/kubernetes/bin/
On node01 (copy node.zip into /root, then unzip; it contains kubelet.sh and proxy.sh)
[aaa@qq.com ~]# unzip node.zip
On the master
Drag in the kubeconfig script and set it up
[aaa@qq.com k8s]# mkdir kubeconfig
[aaa@qq.com k8s]# cd kubeconfig/
[aaa@qq.com kubeconfig]# mv kubeconfig.sh kubeconfig
Configure kubeconfig
Look up the token the apiserver was configured with:
[aaa@qq.com kubeconfig]# cat /opt/kubernetes/cfg/token.csv
41b1afc1eff1d13042da195f37460bf5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Edit the kubeconfig script:
[aaa@qq.com kubeconfig]# vim kubeconfig
----------------delete the following section----------------
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
-----------------------------------------------
Then, in the script's "set client authentication parameters" section, replace the token with the one from /opt/kubernetes/cfg/token.csv above:
--token=41b1afc1eff1d13042da195f37460bf5 \
Set the environment variable
[aaa@qq.com kubeconfig]# vim /etc/profile
Append at the end: export PATH=$PATH:/opt/kubernetes/bin/
[aaa@qq.com kubeconfig]# source /etc/profile
Generate the configuration files
bash kubeconfig 192.168.136.88 /root/k8s/k8s-cert/
Check that the files were generated
[aaa@qq.com kubeconfig]# ls
bootstrap.kubeconfig kube-proxy.kubeconfig
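A quick sanity check that the apiserver address landed in the generated file (optional):
kubectl config view --kubeconfig=bootstrap.kubeconfig    # the server field should read https://192.168.136.88:6443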
Copy the configuration files to the node machines
scp bootstrap.kubeconfig kube-proxy.kubeconfig aaa@qq.com:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig aaa@qq.com:/opt/kubernetes/cfg/
Create the bootstrap role binding, granting the kubelet-bootstrap user permission to connect to the apiserver and request certificate signing so nodes can join the cluster (critical)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
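Optionally confirm the binding exists:
kubectl get clusterrolebinding kubelet-bootstrap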
On node01
Start the kubelet service
[aaa@qq.com ~]# bash kubelet.sh 192.168.136.40
Check that the kubelet service started
[aaa@qq.com ~]# ps aux | grep kube
root 82438 0.0 0.8 300552 16352 ? Ssl 14:18 0:10 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.136.88:2379,https://192.168.136.40:2379,https://192.168.136.30:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 109093 10.7 2.3 371788 44076 ? Ssl 19:38 0:01 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.136.40 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 109121 0.0 0.0 112724 988 pts/1 R+ 19:38 0:00 grep --color=auto kube
On the master
Check the request from node01 (we can see it is pending approval)
[aaa@qq.com kubeconfig]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ 93s kubelet-bootstrap Pending (waiting for the cluster to issue this node a certificate)
Issue the certificate to the node
[aaa@qq.com ~]# kubectl certificate approve node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ
Check the certificate status again
[aaa@qq.com ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ 4m34s kubelet-bootstrap Approved,Issued (the node has been allowed into the cluster)
List the cluster nodes; node01 has joined successfully
[aaa@qq.com ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.136.40 Ready <none> 3m14s v1.12.3
On node01, start the proxy service
[aaa@qq.com ~]# bash proxy.sh 192.168.136.40
Check that the service is running
systemctl status kube-proxy.service
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2020-10-04 19:55:28 CST; 17s ago
Main PID: 112611 (kube-proxy)
Tasks: 0
Memory: 7.5M
CGroup: /system.slice/kube-proxy.service
‣ 112611 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.136.40 --...
Deploy node02
On node01
//Copy the existing /opt/kubernetes directory to the other node, then modify it there
[aaa@qq.com ~]# scp -r /opt/kubernetes/ aaa@qq.com:/opt/
Let's see what it contains
[aaa@qq.com ~]# tree /opt/kubernetes/
/opt/kubernetes/
├── bin
│ ├── flanneld
│ ├── kubelet
│ ├── kube-proxy
│ └── mk-docker-opts.sh
├── cfg
│ ├── bootstrap.kubeconfig
│ ├── flanneld
│ ├── kubelet
│ ├── kubelet.config
│ ├── kubelet.kubeconfig
│ ├── kube-proxy
│ └── kube-proxy.kubeconfig
└── ssl
├── kubelet-client-2020-10-04-19-43-17.pem
├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2020-10-04-19-43-17.pem
├── kubelet.crt
└── kubelet.key
Copy the kubelet and kube-proxy service unit files to node02
scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service aaa@qq.com:/usr/lib/systemd/system/
First delete the copied certificates; node02 will request its own shortly
[aaa@qq.com ~]# cd /opt/kubernetes/ssl/
[aaa@qq.com ssl]# rm -rf *
Modify the three config files (kubelet, kubelet.config, kube-proxy): in each, change the node address from 192.168.136.40 to node02's 192.168.136.30; a scripted equivalent follows the vim commands below.
[aaa@qq.com cfg]# cd /opt/kubernetes/cfg/
[aaa@qq.com cfg]# vim kubelet
[aaa@qq.com cfg]# vim kubelet.config
[aaa@qq.com cfg]# vim kube-proxy
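An equivalent one-liner for the address change (a sketch assuming the only required edit is swapping the node IP):
cd /opt/kubernetes/cfg/
sed -i 's/192.168.136.40/192.168.136.30/g' kubelet kubelet.config kube-proxy
grep 192.168.136.30 kubelet kubelet.config kube-proxy    # verify the substitution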
Start the services
[aaa@qq.com cfg]# systemctl start kubelet.service
[aaa@qq.com cfg]# systemctl enable kubelet.service
[aaa@qq.com cfg]# systemctl start kube-proxy.service
[aaa@qq.com cfg]# systemctl enable kube-proxy.service
On the master, check the new request
[aaa@qq.com ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-W9TegXU5ABC4drbxBI-rCT5mstCoQhydMi3_3ZiNALQ 37m kubelet-bootstrap Approved,Issued
node-csr-l0pxa_bwNlGKIv1LM3zaeZr62kSXTYpnloFgJ9kEHqk 87s kubelet-bootstrap Pending
Authorize it to join the cluster
[aaa@qq.com ~]# kubectl certificate approve node-csr-l0pxa_bwNlGKIv1LM3zaeZr62kSXTYpnloFgJ9kEHqk
//Check that both nodes have now joined the k8s cluster
[aaa@qq.com k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.136.30 Ready <none> 57s v1.12.3
192.168.136.40 Ready <none> 34m v1.12.3
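As an optional smoke test (assumes the nodes can pull nginx from Docker Hub; in v1.12 kubectl run creates a Deployment):
kubectl run nginx --image=nginx
kubectl get pods -o wide    # the pod should land on one of the two nodes with a 172.17.x.x address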