Kubernetes ----- Single-Master Deployment (Complete Walkthrough)
Lab environment
Hostname | OS | IP address | Required software
---|---|---|---
master01 | CentOS 7 | 20.0.0.11 | kube-apiserver kube-controller-manager kube-scheduler etcd
node01 | CentOS 7 | 20.0.0.12 | kubelet kube-proxy docker flannel etcd
node02 | CentOS 7 | 20.0.0.13 | kubelet kube-proxy docker flannel etcd
I. Generate certificates on master01
- Performed on the master01 node
1. Create the k8s directory and a directory for the certificates
[root@master01 ~]# mkdir k8s
[root@master01 ~]# cd k8s/
[root@master01 k8s]# mkdir etcd-cert # holds the certificates
2. Place the downloaded certificate tools in /usr/local/bin/ (a download sketch for the tools follows the listing)
[root@master01 ~]# cd /usr/local/bin/
[root@master01 bin]# ll
total 18808
-rw-r--r--. 1 root root 10376657 Jan 16 2020 cfssl
-rw-r--r--. 1 root root 6595195 Jan 16 2020 cfssl-certinfo
-rw-r--r--. 1 root root 2277873 Jan 16 2020 cfssljson
[root@master01 bin]# chmod +x * # make them executable
[root@master01 bin]# ll
total 18808
-rwxr-xr-x. 1 root root 10376657 Jan 16 2020 cfssl
-rwxr-xr-x. 1 root root 6595195 Jan 16 2020 cfssl-certinfo
-rwxr-xr-x. 1 root root 2277873 Jan 16 2020 cfssljson
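If the tools still have to be fetched, the cfssl binaries were historically published at pkg.cfssl.org; a sketch under that assumption (the article itself assumes the files are already in place, so verify the URLs before relying on them):
# hypothetical download step -- URLs are an assumption, not part of the original article
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo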
3. Generate the certificates
- ① Define the CA configuration
[root@master01 ~]# cd k8s/etcd-cert/
[root@master01 etcd-cert]# cat > ca-config.json <<EOF
> {
> "signing": {
> "default": {
> "expiry": "87600h"
> },
> "profiles": {
> "www": {
> "expiry": "87600h",
> "usages": [
> "signing",
> "key encipherment",
> "server auth",
> "client auth"
> ]
> }
> }
> }
> }
> EOF
[root@master01 etcd-cert]# ls
ca-config.json
- ② Define the CA certificate signing request
[root@master01 etcd-cert]# cat > ca-csr.json <<EOF
> {
> "CN": "etcd CA",
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "Beijing",
> "ST": "Beijing"
> }
> ]
> }
> EOF
[root@master01 etcd-cert]# ls
ca-config.json ca-csr.json
- ③ Generate the CA certificate, producing ca-key.pem and ca.pem
[root@master01 etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/01/09 16:41:55 [INFO] generating a new CA key and certificate from CSR
2021/01/09 16:41:55 [INFO] generate received request
2021/01/09 16:41:55 [INFO] received CSR
2021/01/09 16:41:55 [INFO] generating key: rsa-2048
2021/01/09 16:41:55 [INFO] encoded CSR
2021/01/09 16:41:55 [INFO] signed certificate with serial number 225250661609181904395466387385115793089487501444
[root@master01 etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
- ④ List the three etcd node addresses so communication between them can be verified
[root@master01 etcd-cert]# cat > server-csr.json <<EOF
> {
> "CN": "etcd",
> "hosts": [
> "20.0.0.11",
> "20.0.0.12",
> "20.0.0.13"
> ],
> "key": {
> "algo": "rsa",
> "size": 2048
> },
> "names": [
> {
> "C": "CN",
> "L": "BeiJing",
> "ST": "BeiJing"
> }
> ]
> }
> EOF
[root@master01 etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server-csr.json
- ⑤ Generate the etcd server certificate, producing server-key.pem and server.pem (a quick SAN check follows the output)
[root@master01 etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2021/01/09 16:47:43 [INFO] generate received request
2021/01/09 16:47:43 [INFO] received CSR
2021/01/09 16:47:43 [INFO] generating key: rsa-2048
2021/01/09 16:47:44 [INFO] encoded CSR
2021/01/09 16:47:44 [INFO] signed certificate with serial number 104113713071224963091549012626273576083300599861
2021/01/09 16:47:44 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master01 etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
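To confirm the three node IPs actually landed in the server certificate, the SANs can be inspected with openssl (an optional sanity check; openssl ships with CentOS 7 by default):
[root@master01 etcd-cert]# openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
# the output should list IP Address:20.0.0.11, IP Address:20.0.0.12, IP Address:20.0.0.13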
4. Extract the etcd binary package (already downloaded here)
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ls
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz
[root@master01 k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
5. Create the etcd working directories and move the etcd binaries into them
[root@master01 k8s]# cd etcd-v3.3.10-linux-amd64
[root@master01 etcd-v3.3.10-linux-amd64]# ls
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@master01 ~]# mkdir -p /opt/etcd/{cfg,bin,ssl}
[root@master01 ~]# cd /opt/etcd/
[root@master01 etcd]# ls
bin cfg ssl
[root@master01 ~]# cd k8s/etcd-v3.3.10-linux-amd64/
[root@master01 etcd-v3.3.10-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/
[root@master01 ~]# cd /opt/etcd/bin/
[root@master01 bin]# ls
etcd etcdctl
6. Copy the certificates
[root@master01 ~]# cd k8s/etcd-cert/
[root@master01 etcd-cert]# cp *.pem /opt/etcd/ssl/
[root@master01 ~]# cd /opt/etcd/ssl/
[root@master01 ssl]# ls
ca-key.pem ca.pem server-key.pem server.pem
7. Bring in the startup script
- ① Pull the pre-written etcd.sh script into the k8s directory (a sketch of what such a script typically contains follows below)
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ls
etcd-cert etcd.sh etcd-v3.3.10-linux-amd64 etcd-v3.3.10-linux-amd64.tar.gz
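The article never shows etcd.sh itself. Judging from the configuration file edited on the nodes in part II and the etcd flags visible in the ps output under ③, a typical version of the script looks roughly like the sketch below; treat it as a reconstruction under those assumptions, not the author's exact script:
#!/bin/bash
# Reconstruction sketch of etcd.sh -- not the author's original.
# Usage: ./etcd.sh etcd01 20.0.0.11 etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

# Environment file -- the same /opt/etcd/cfg/etcd that gets edited on node01/node02 later
cat <<EOF >${WORK_DIR}/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# systemd unit -- maps the environment file onto the flags seen in the ps output below
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=${WORK_DIR}/ssl/server.pem \\
--key-file=${WORK_DIR}/ssl/server-key.pem \\
--peer-cert-file=${WORK_DIR}/ssl/server.pem \\
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \\
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \\
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd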
- ② Run the script
[root@master01 ~]# bash /root/k8s/etcd.sh etcd01 20.0.0.11 etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service. # the command now blocks, waiting for the other nodes to join
- ③ In a second session, confirm the etcd process is running
[root@master01 ~]# ps -ef | grep etcd
root 54443 15178 0 17:10 pts/1 00:00:00 bash /root/k8s/etcd.sh etcd01 20.0.0.11 etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380
root 54488 54443 0 17:10 pts/1 00:00:00 systemctl restart etcd
root 54494 1 2 17:10 ? 00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://20.0.0.11:2380 --listen-client-urls=https://20.0.0.11:2379,http://127.0.0.1:2379 --advertise-client-urls=https://20.0.0.11:2379 --initial-advertise-peer-urls=https://20.0.0.11:2380 --initial-cluster=etcd01=https://20.0.0.11:2380,etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 54560 54509 0 17:10 pts/2 00:00:00 grep --color=auto etcd
8. Copy the certificates to the other nodes
[root@master01 ~]# scp -r /opt/etcd/ root@20.0.0.12:/opt/
[root@master01 ~]# scp -r /opt/etcd/ root@20.0.0.13:/opt/
9. Copy the startup unit to the other nodes
[root@master01 ~]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.12:/usr/lib/systemd/system/
[root@master01 ~]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.13:/usr/lib/systemd/system/
II. Operations on node01/node02
1. Verify that the required files were copied over
- ① node01
[root@node01 ~]# yum -y install tree
[root@node01 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│ ├── etcd
│ └── etcdctl
├── cfg
│ └── etcd
└── ssl
├── ca-key.pem
├── ca.pem
├── server-key.pem
└── server.pem
- ② node02
[root@node02 ~]# yum -y install tree
[root@node02 ~]# tree /opt/etcd/
/opt/etcd/
├── bin
│ ├── etcd
│ └── etcdctl
├── cfg
│ └── etcd
└── ssl
├── ca-key.pem
├── ca.pem
├── server-key.pem
└── server.pem
2. Edit the configuration file
- ① node01
[root@node01 ~]# cd /opt/etcd/cfg/
[root@node01 cfg]# ls
etcd
[root@node01 cfg]# vi etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.12:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.11:2380,etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
- ② node02
[root@node02 ~]# cd /opt/etcd/cfg/
[root@node02 cfg]# ls
etcd
[root@node02 cfg]# vi etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.11:2380,etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
3. Join the cluster
- ① Run the startup script on master01
[root@master01 ~]# bash /root/k8s/etcd.sh etcd01 20.0.0.11 etcd02=https://20.0.0.12:2380,etcd03=https://20.0.0.13:2380
- ② Start etcd on node01
[root@node01 ~]# systemctl start etcd
- ③ Start etcd on node02
[root@node02 ~]# systemctl start etcd
- ④ Check the cluster health from master01 (an optional member-list check follows the output)
[root@master01 ~]# cd /opt/etcd/ssl/
[root@master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379" cluster-health
member c1b99d661e10b568 is healthy: got healthy result from https://20.0.0.11:2379
member f33c8e897853b7c4 is healthy: got healthy result from https://20.0.0.12:2379
member f895995aa044cd94 is healthy: got healthy result from https://20.0.0.13:2379
cluster is healthy
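Beyond cluster-health, the v2-style etcdctl bundled with etcd 3.3 can also list the members and show which one is the leader; an optional check with the same TLS flags:
[root@master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379" member list
# expected: three members, exactly one flagged isLeader=true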
III. Install Docker
1. On node01 (a registry-mirror check follows the transcript)
[root@node01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node01 ~]# yum install -y docker-ce
[root@node01 ~]# systemctl enable docker
[root@node01 ~]# systemctl start docker
[root@node01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
[root@node01 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@node01 ~]# tee /etc/docker/daemon.json <<-'EOF' # configure a registry mirror to speed up image pulls
> {
> "registry-mirrors": ["https://b3vpj4z0.mirror.aliyuncs.com"]
> }
> EOF
{
"registry-mirrors": ["https://b3vpj4z0.mirror.aliyuncs.com"]
}
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# vi /etc/sysctl.conf # enable IP forwarding
net.ipv4.ip_forward=1
[root@node01 ~]# sysctl -p
net.ipv4.ip_forward = 1
[root@node01 ~]# systemctl restart network
[root@node01 ~]# systemctl restart docker
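To confirm the mirror was actually picked up by the daemon, docker info can be inspected (optional sanity check):
[root@node01 ~]# docker info | grep -A1 "Registry Mirrors"
# should list https://b3vpj4z0.mirror.aliyuncs.com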
2. node02 follows exactly the same steps as node01
IV. Configure the flannel network
1. On master01
# Write the allocated subnet range into etcd for flannel to consume
[root@master01 ~]# cd /opt/etcd/ssl/
[root@master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
# Read back what was written
[root@master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
2. Copy the flannel package to all nodes (flannel only needs to be deployed on the nodes)
- ① node01
[root@node01 ~]# ls
anaconda-ks.cfg initial-setup-ks.cfg 模板 图片 下载 桌面
flannel-v0.10.0-linux-amd64.tar.gz 公共 视频 文档 音乐
- ② node02
[root@node02 ~]# ls
anaconda-ks.cfg initial-setup-ks.cfg 模板 图片 下载 桌面
flannel-v0.10.0-linux-amd64.tar.gz 公共 视频 文档 音乐
3. Extract on every node
- ① node01
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
- ② node02
[root@node02 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
4. Create the k8s working directory
- ① node01
[root@node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node01 ~]# ls /opt/kubernetes/
bin cfg ssl
- ② node02
[root@node02 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node02 ~]# ls /opt/kubernetes/
bin cfg ssl
5. Move the flannel binaries and helper script into the bin directory
- ① node01
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node01 ~]# ls /opt/kubernetes/bin/
flanneld mk-docker-opts.sh
- ② node02
[root@node02 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node02 ~]# ls /opt/kubernetes/bin/
flanneld mk-docker-opts.sh
6. Pull the pre-written flannel.sh script into the home directory (a sketch of a typical flannel.sh follows this step)
- ① node01
[root@node01 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz README.md 模板 图片 下载 桌面
flannel.sh initial-setup-ks.cfg 公共 视频 文档 音乐
- ② node02
[root@node02 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz README.md 模板 图片 下载 桌面
flannel.sh initial-setup-ks.cfg 公共 视频 文档 音乐
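flannel.sh is not reproduced in the article either. Based on the files it leaves behind (/opt/kubernetes/cfg/flanneld and the flanneld.service unit, both visible in the next step), a typical version looks roughly like this reconstruction sketch:
#!/bin/bash
# Reconstruction sketch of flannel.sh -- not the author's original.
# Usage: ./flannel.sh https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \\
-etcd-cafile=/opt/etcd/ssl/ca.pem \\
-etcd-certfile=/opt/etcd/ssl/server.pem \\
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
# mk-docker-opts.sh converts the subnet flannel was allocated into the
# DOCKER_NETWORK_OPTIONS variables written to /run/flannel/subnet.env,
# which docker.service is wired to consume in step 8
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld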
7. Start the flannel network
- ① node01
[root@node01 ~]# bash flannel.sh https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node01 ~]# ls /opt/kubernetes/cfg/
flanneld
[root@node01 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.30.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::2c66:bff:fe60:49e9  prefixlen 64  scopeid 0x20<link>
        ether 2e:66:0b:60:49:e9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26  overruns 0  carrier 0  collisions 0
[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.30.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.30.1/24 --ip-masq=false --mtu=1450"
- ② node02
[root@node02 ~]# bash flannel.sh https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@node02 ~]# ls /opt/kubernetes/cfg/
flanneld
[root@node02 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.66.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::40b6:41ff:fecf:cba  prefixlen 64  scopeid 0x20<link>
        ether 42:b6:41:cf:0c:ba  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27  overruns 0  carrier 0  collisions 0
[root@node02 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.66.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.66.1/24 --ip-masq=false --mtu=1450"
8. Connect Docker to flannel
- ① node01
[root@node01 ~]# vi /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env # added: load the variables flannel generated
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock # modified: insert $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
- ② node02
[root@node02 ~]# vi /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env # added: load the variables flannel generated
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock # modified: insert $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@node02 ~]# systemctl daemon-reload
[root@node02 ~]# systemctl restart docker
9. Inspect the flannel network
- ① node01
[root@node01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.30.1  netmask 255.255.255.0  broadcast 172.17.30.255
        ether 02:42:ab:d1:91:05  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.30.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::2c66:bff:fe60:49e9  prefixlen 64  scopeid 0x20<link>
        ether 2e:66:0b:60:49:e9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26  overruns 0  carrier 0  collisions 0
- ② node02
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.66.1  netmask 255.255.255.0  broadcast 172.17.66.255
        ether 02:42:07:66:33:3e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.66.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::40b6:41ff:fecf:cba  prefixlen 64  scopeid 0x20<link>
        ether 42:b6:41:cf:0c:ba  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27  overruns 0  carrier 0  collisions 0
10. Ping the other node's docker0 subnet to prove flannel provides cross-host routing
- ① node01
[root@node01 ~]# docker run -it centos:7 /bin/bash
[root@7487a904517f /]# yum -y install net-tools
[root@7487a904517f /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.30.2  netmask 255.255.255.0  broadcast 172.17.30.255
        ether 02:42:ac:11:1e:02  txqueuelen 0  (Ethernet)
        RX packets 16715  bytes 14617448 (13.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7591  bytes 413808 (404.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
- ② node02
[root@node02 ~]# docker run -it centos:7 /bin/bash
[root@954ddcb1301b /]# yum -y install net-tools
[root@954ddcb1301b /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.66.2  netmask 255.255.255.0  broadcast 172.17.66.255
        ether 02:42:ac:11:42:02  txqueuelen 0  (Ethernet)
        RX packets 15866  bytes 14588250 (13.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8189  bytes 446254 (435.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
- ③ Ping in both directions (a host-side route check follows)
[root@7487a904517f /]# ping 172.17.66.2 # ping the container on node02
PING 172.17.66.2 (172.17.66.2) 56(84) bytes of data.
64 bytes from 172.17.66.2: icmp_seq=1 ttl=62 time=0.345 ms
64 bytes from 172.17.66.2: icmp_seq=2 ttl=62 time=0.339 ms
64 bytes from 172.17.66.2: icmp_seq=3 ttl=62 time=0.456 ms
[root@954ddcb1301b /]# ping 172.17.30.2 # ping the container on node01
PING 172.17.30.2 (172.17.30.2) 56(84) bytes of data.
64 bytes from 172.17.30.2: icmp_seq=1 ttl=62 time=0.304 ms
64 bytes from 172.17.30.2: icmp_seq=2 ttl=62 time=0.964 ms
64 bytes from 172.17.30.2: icmp_seq=3 ttl=62 time=0.476 ms
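On the hosts themselves, the vxlan route that makes this work can be seen in the routing table; an optional check (exact output shape depends on the flannel version):
[root@node01 ~]# ip route | grep flannel.1
# expected: a route for node02's subnet via the flannel.1 device, e.g.
# 172.17.66.0/24 via 172.17.66.0 dev flannel.1 onlink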
V. Deploy the master components
Performed on master01
1. Pull the downloaded master package into the k8s directory
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ls
etcd-cert etcd.sh etcd-v3.3.10-linux-amd64 etcd-v3.3.10-linux-amd64.tar.gz master.zip
[root@master01 k8s]# unzip master.zip # extract
Archive: master.zip
inflating: apiserver.sh
inflating: controller-manager.sh
inflating: scheduler.sh
[root@master01 k8s]# ls
apiserver.sh etcd-cert etcd-v3.3.10-linux-amd64 master.zip
controller-manager.sh etcd.sh etcd-v3.3.10-linux-amd64.tar.gz scheduler.sh
[root@master01 k8s]# chmod +x controller-manager.sh # make the script executable
2. Generate the apiserver certificates
- ① Create a directory for the certificates
[root@master01 ~]# cd k8s/
[root@master01 k8s]# mkdir k8s-cert # holds the k8s certificates
[root@master01 k8s]# cd k8s-cert/
- ② Place the certificate-generation script in the directory
[root@master01 k8s-cert]# ls
k8s-cert.sh
- ③ Edit the script
[root@master01 k8s-cert]# vi k8s-cert.sh
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"20.0.0.11", # master01
"20.0.0.14", # master02
"20.0.0.100", # VIP,唯一公共访问入口
"20.0.0.15", # 负载均衡器01
"20.0.0.16", # 负载均衡器02
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
- ④ Generate the k8s certificates
[root@master01 k8s-cert]# bash k8s-cert.sh # generate the certificates
[root@master01 k8s-cert]# ls
admin.csr admin.pem ca-csr.json k8s-cert.sh kube-proxy-key.pem server-csr.json
admin-csr.json ca-config.json ca-key.pem kube-proxy.csr kube-proxy.pem server-key.pem
admin-key.pem ca.csr ca.pem kube-proxy-csr.json server.csr server.pem
[root@master01 k8s-cert]# ls *pem
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem # the 8 certificates needed
3. Copy the pem certificates into the kubernetes/ssl directory
- ① Create the kubernetes working directory
[root@master01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master01 ~]# cd /opt/kubernetes/
[root@master01 kubernetes]# ls
bin cfg ssl
- ② Copy the pem certificates
[root@master01 ~]# cd k8s/k8s-cert/
[root@master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master01 k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
4. Extract the kubernetes server package
- ① Extract
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ls
apiserver.sh etcd.sh k8s-cert scheduler.sh
controller-manager.sh etcd-v3.3.10-linux-amd64 kubernetes-server-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz master.zip
[root@master01 k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
- ② Copy the key binaries into /opt/kubernetes/bin
[root@master01 ~]# cd k8s/kubernetes/server/bin/
[root@master01 bin]# ll
total 1821612
-rwxr-xr-x. 1 root root 60859975 Nov 26 2018 apiextensions-apiserver
-rwxr-xr-x. 1 root root 142931406 Nov 26 2018 cloud-controller-manager
-rw-r--r--. 1 root root 8 Nov 26 2018 cloud-controller-manager.docker_tag
-rw-r--r--. 1 root root 144317440 Nov 26 2018 cloud-controller-manager.tar
-rwxr-xr-x. 1 root root 248033928 Nov 26 2018 hyperkube
-rwxr-xr-x. 1 root root 54038482 Nov 26 2018 kubeadm
-rwxr-xr-x. 1 root root 192793815 Nov 26 2018 kube-apiserver
-rw-r--r--. 1 root root 8 Nov 26 2018 kube-apiserver.docker_tag
-rw-r--r--. 1 root root 194180096 Nov 26 2018 kube-apiserver.tar
-rwxr-xr-x. 1 root root 162973612 Nov 26 2018 kube-controller-manager
-rw-r--r--. 1 root root 8 Nov 26 2018 kube-controller-manager.docker_tag
-rw-r--r--. 1 root root 164359680 Nov 26 2018 kube-controller-manager.tar
-rwxr-xr-x. 1 root root 57356334 Nov 26 2018 kubectl
-rwxr-xr-x. 1 root root 176661512 Nov 26 2018 kubelet
-rwxr-xr-x. 1 root root 50330867 Nov 26 2018 kube-proxy
-rw-r--r--. 1 root root 8 Nov 26 2018 kube-proxy.docker_tag
-rw-r--r--. 1 root root 98355200 Nov 26 2018 kube-proxy.tar
-rwxr-xr-x. 1 root root 57184656 Nov 26 2018 kube-scheduler
-rw-r--r--. 1 root root 8 Nov 26 2018 kube-scheduler.docker_tag
-rw-r--r--. 1 root root 58570752 Nov 26 2018 kube-scheduler.tar
-rwxr-xr-x. 1 root root 2330265 Nov 26 2018 mounter
[root@master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master01 bin]# ls /opt/kubernetes/bin/
kube-apiserver kube-controller-manager kubectl kube-scheduler
- ③ Generate a random bootstrap token
[root@master01 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
72b3e1ef2457c3d31cf65b7327be5828
- ④ Create token.csv
[root@master01 ~]# vi /opt/kubernetes/cfg/token.csv
72b3e1ef2457c3d31cf65b7327be5828,kubelet-bootstrap,10001,"system:kubelet-bootstrap" # token, user name, uid, group
- ⑤ Start the apiserver
[root@master01 ~]# cd k8s/
[root@master01 k8s]# bash apiserver.sh 20.0.0.11 https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379
[root@master01 k8s]# ps aux | grep kube
root 62331 27.4 7.9 392812 306556 ? Ssl 04:09 0:09 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379 --bind-address=20.0.0.11 --secure-port=6443 --advertise-address=20.0.0.11 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 62354 0.0 0.0 112676 984 pts/2 S+ 04:09 0:00 grep --color=auto kube
- ⑥ Check the listening HTTPS port (a quick /version probe follows)
[root@master01 k8s]# netstat -ntap | grep 6443
tcp 0 0 20.0.0.11:6443 0.0.0.0:* LISTEN 62331/kube-apiserve
tcp 0 0 20.0.0.11:60932 20.0.0.11:6443 ESTABLISHED 62331/kube-apiserve
tcp 0 0 20.0.0.11:6443 20.0.0.11:60932 ESTABLISHED 62331/kube-apiserve
[root@master01 k8s]# netstat -ntap | grep 8080
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 62331/kube-apiserve
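Port 8080 is the legacy insecure port, bound to localhost only; the other control-plane components on this host talk to it. A quick unauthenticated probe, which only works from the master itself:
[root@master01 k8s]# curl -s http://127.0.0.1:8080/version
# should return a JSON version blob (v1.12.3 in this deployment)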
- ⑦ Start the scheduler service
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master01 k8s]# ps aux | grep kube
root 62331 4.2 7.9 393068 307268 ? Ssl 04:09 0:14 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://20.0.0.11:2379,https://20.0.0.12:2379,https://20.0.0.13:2379 --bind-address=20.0.0.11 --secure-port=6443 --advertise-address=20.0.0.11 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 62458 1.3 0.4 46128 19292 ? Ssl 04:14 0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root 62474 0.0 0.0 112676 984 pts/2 S+ 04:15 0:00 grep --color=auto kube
- ⑧ Start the controller-manager
[root@master01 ~]# cd k8s/
[root@master01 k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
- ⑨ Check the master component status
[root@master01 k8s]# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
VI. Deploy the node components
1. Copy kubelet and kube-proxy to the nodes
- ① master01
[root@master01 ~]# cd k8s/kubernetes/server/bin/
[root@master01 bin]# scp kubelet kube-proxy root@20.0.0.12:/opt/kubernetes/bin/
root@20.0.0.12's password:
kubelet 100% 168MB 83.8MB/s 00:02
kube-proxy 100% 48MB 70.7MB/s 00:00
[root@master01 bin]# scp kubelet kube-proxy root@20.0.0.13:/opt/kubernetes/bin/
root@20.0.0.13's password:
kubelet 100% 168MB 116.6MB/s 00:01
kube-proxy 100% 48MB 97.0MB/s 00:00
- ② Verify on node01
[root@node01 ~]# cd /opt/kubernetes/bin/
[root@node01 bin]# ls
flanneld kubelet kube-proxy mk-docker-opts.sh
- ③ Verify on node02
[root@node02 ~]# cd /opt/kubernetes/bin/
[root@node02 bin]# ls
flanneld kubelet kube-proxy mk-docker-opts.sh
2. Copy node.zip to /root and extract it
Performed on node01
- ① Copy the archive
[root@node01 ~]# ls
anaconda-ks.cfg flannel-v0.10.0-linux-amd64.tar.gz node.zip 公共 视频 文档 音乐
flannel.sh initial-setup-ks.cfg README.md 模板 图片 下载 桌面
- ② Extract
[root@node01 ~]# unzip node.zip
Archive: node.zip
inflating: proxy.sh
inflating: kubelet.sh
3. Create the kubeconfig files on the master
- ① Create the directory
[root@master01 ~]# cd k8s/
[root@master01 k8s]# mkdir kubeconfig
- ② Copy the script into the kubeconfig directory
[root@master01 ~]# cd k8s/kubeconfig/
[root@master01 kubeconfig]# ls
kubeconfig.sh
- ③ Edit the script (the token generated earlier is needed here; a sketch of the rest of the script follows below)
[root@master01 ~]# cat /opt/kubernetes/cfg/token.csv
72b3e1ef2457c3d31cf65b7327be5828,kubelet-bootstrap,10001,"system:kubelet-bootstrap" # this token is needed below
[root@master01 ~]# cd k8s/kubeconfig/
[root@master01 kubeconfig]# vi kubeconfig.sh
##### delete the following ####
# Create the TLS Bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#----------------------
#### modify as follows ####
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=72b3e1ef2457c3d31cf65b7327be5828 \ # replace the old variable with the actual token
--kubeconfig=bootstrap.kubeconfig
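For context, the remainder of kubeconfig.sh builds the two kubeconfig files with standard kubectl config subcommands. A condensed sketch, assuming the script takes the apiserver IP and the certificate directory as its two arguments (matching how it is invoked in step 5 below):
APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://${APISERVER}:6443"

# bootstrap.kubeconfig -- used by kubelet for TLS bootstrapping (token auth)
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=72b3e1ef2457c3d31cf65b7327be5828 \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig -- authenticates with the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig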
4. Set the PATH environment variable (persisted in /etc/profile)
[root@master01 ~]# vi /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/ # appended at the end of the file
[root@master01 ~]# source /etc/profile # reload so the change takes effect
[root@master01 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
5. Generate the kubeconfig files
- ① Rename the script
[root@master01 ~]# cd k8s/kubeconfig/
[root@master01 kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master01 kubeconfig]# ls
kubeconfig
- ② Generate the files
[root@master01 kubeconfig]# bash kubeconfig 20.0.0.11 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master01 kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
6. Copy the kubeconfig files to the nodes
- Performed on master01
[root@master01 ~]# cd k8s/kubeconfig/
[root@master01 kubeconfig]# ls
bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.12:/opt/kubernetes/cfg/
root@20.0.0.12's password:
bootstrap.kubeconfig 100% 2163 4.1MB/s 00:00
kube-proxy.kubeconfig 100% 6265 7.9MB/s 00:00
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.13:/opt/kubernetes/cfg/
root@20.0.0.13's password:
bootstrap.kubeconfig 100% 2163 2.8MB/s 00:00
kube-proxy.kubeconfig 100% 6265 8.4MB/s 00:00
- Verify on node01/node02
[root@node01 ~]# ls /opt/kubernetes/cfg/
bootstrap.kubeconfig flanneld kube-proxy.kubeconfig
[root@node02 ~]# ls /opt/kubernetes/cfg/
bootstrap.kubeconfig flanneld kube-proxy.kubeconfig
7. Create the bootstrap role binding so kubelets are authorized to request certificate signing from the apiserver (critical)
Performed on master01
[root@master01 ~]# cd k8s/kubeconfig/
[root@master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
8. Run the kubelet.sh script
Performed on node01
- ① Run the script
[root@node01 ~]# bash kubelet.sh 20.0.0.12 # this node's IP address
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
- ② Confirm the kubelet service started
[root@node01 ~]# ps aux | grep kube
root 36810 0.0 0.0 112676 980 pts/1 S+ 22:10 0:00 grep --color=auto kube
root 71969 3.7 1.9 970288 75604 ? Ssl Mar18 55:58 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=20.0.0.12 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 73727 0.4 0.6 45460 24188 ? Ssl Mar18 7:13 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=20.0.0.12 --cluster-cidr=10.0.0.0/24 --proxy-mode=ipvs --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
[root@node01 ~]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-03-18 21:27:31 CST; 34s ago
Main PID: 71969 (kubelet)
Memory: 18.1M
CGroup: /system.slice/kubelet.service
└─71969 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=20.0.0.12 --...
9. Check node01's join request on the master
Performed on master01
- ① Wait for the node's certificate request to appear
[root@master01 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-cT_5pR6PfBoBvr9fBgWDCtiSYlu_tv434z_hlPXdrDQ 7m27s kubelet-bootstrap Pending # waiting for the cluster to issue this node's certificate
- ② Approve the node's request to join the cluster
[root@master01 ~]# kubectl certificate approve node-csr-cT_5pR6PfBoBvr9fBgWDCtiSYlu_tv434z_hlPXdrDQ
certificatesigningrequest.certificates.k8s.io/node-csr-cT_5pR6PfBoBvr9fBgWDCtiSYlu_tv434z_hlPXdrDQ approved
- ③ Check the certificate status again
[root@master01 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-cT_5pR6PfBoBvr9fBgWDCtiSYlu_tv434z_hlPXdrDQ 13m kubelet-bootstrap Approved,Issued # the node has been allowed to join
- ④ List the cluster nodes; node01 has joined successfully
[root@master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
20.0.0.12 Ready <none> 3m5s v1.12.3
10. Start the kube-proxy service
On node01
- ① Run the script
[root@node01 ~]# bash proxy.sh 20.0.0.12 # this node's IP
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
- ② Check the service status
[root@node01 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-03-18 21:42:48 CST; 13s ago
Main PID: 73727 (kube-proxy)
Memory: 8.5M
CGroup: /system.slice/kube-proxy.service
‣ 73727 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=20.0.0.12...
VII. Deploy node02
1. Copy node01's ready-made /opt/kubernetes directory to the other node, then adjust it
On node01
[root@node01 ~]# scp -r /opt/kubernetes/ root@20.0.0.13:/opt/
flanneld 100% 223 646.6KB/s 00:00
bootstrap.kubeconfig 100% 2163 3.3MB/s 00:00
kube-proxy.kubeconfig 100% 6265 10.1MB/s 00:00
kubelet 100% 373 488.9KB/s 00:00
kubelet.config 100% 263 16.0KB/s 00:00
kubelet.kubeconfig 100% 2292 3.8MB/s 00:00
kube-proxy 100% 185 428.2KB/s 00:00
mk-docker-opts.sh 100% 2139 3.3MB/s 00:00
flanneld 100% 35MB 103.2MB/s 00:00
kubelet 100% 168MB 134.0MB/s 00:01
kube-proxy 100% 48MB 89.4MB/s 00:00
kubelet.crt 100% 2165 1.1MB/s 00:00
kubelet.key 100% 1675 965.4KB/s 00:00
kubelet-client-2021-03-18-21-36-58.pem 100% 1269 421.2KB/s 00:00
kubelet-client-current.pem 100% 1269 413.9KB/s 00:00
2. Copy the kubelet and kube-proxy service files to node02
On node01
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@20.0.0.13:/usr/lib/systemd/system
root@20.0.0.13's password:
kubelet.service 100% 264 494.6KB/s 00:00
kube-proxy.service 100% 231 315.0KB/s 00:00
3. Delete the copied certificates; node02 will request its own shortly
On node02
[root@node02 ~]# cd /opt/kubernetes/ssl/
[root@node02 ssl]# ls
kubelet-client-2021-03-18-21-36-58.pem kubelet-client-current.pem kubelet.crt kubelet.key
[root@node02 ssl]# rm -rf *
[root@node02 ssl]# ls
4. Edit the configuration files
- ① kubelet
[root@node02 ~]# cd /opt/kubernetes/cfg/
[root@node02 cfg]# vi kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.13 \ # change to node02's IP address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
- ② kubelet.config
[root@node02 cfg]# vi kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 20.0.0.13 # change to node02's IP address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
- ③ kube-proxy (started in ipvs mode; see the IPVS note after this block)
[root@node02 cfg]# vi kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.13 \ # change to node02's IP address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
- ④ Start the services
[root@node02 cfg]# systemctl start kubelet.service # once started, the node requests to join the cluster
[root@node02 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 ~]# ls /opt/kubernetes/ssl/ # check the newly generated certificates
kubelet-client-2021-03-19-21-45-43.pem kubelet-client-current.pem kubelet.crt kubelet.key
[root@node02 cfg]# systemctl start kube-proxy.service
[root@node02 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
5. Review the join request on the master
Performed on master01
- ① View the pending request
[root@master01 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-Al3nupheJJGAQie31Wa34TT9MAAdJa7HELPSryHavL4 67s kubelet-bootstrap Pending
node-csr-cT_5pR6PfBoBvr9fBgWDCtiSYlu_tv434z_hlPXdrDQ 42m kubelet-bootstrap Approved,Issued
- ② Approve the join request
[root@master01 ~]# kubectl certificate approve node-csr-Al3nupheJJGAQie31Wa34TT9MAAdJa7HELPSryHavL4
certificatesigningrequest.certificates.k8s.io/node-csr-Al3nupheJJGAQie31Wa34TT9MAAdJa7HELPSryHavL4 approved
- ③ Check the certificate status (a batch-approval one-liner follows below)
[root@master01 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-Al3nupheJJGAQie31Wa34TT9MAAdJa7HELPSryHavL4 7m6s kubelet-bootstrap Approved,Issued
node-csr-cT_5pR6PfBoBvr9fBgWDCtiSYlu_tv434z_hlPXdrDQ 48m kubelet-bootstrap Approved,Issued
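When several nodes bootstrap at once, approving each CSR by name gets tedious. Pending requests can be approved in one pass, assuming every pending request is trusted (a convenience sketch, not from the original article):
[root@master01 ~]# kubectl get csr | awk '/Pending/{print $1}' | xargs -r kubectl certificate approve
# awk filters the Pending rows; xargs approves each matching CSR by name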
- ④ List the nodes in the cluster
[root@master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
20.0.0.12 Ready <none> 33m v1.12.3
20.0.0.13 Ready <none> 30s v1.12.3
The single-master deployment is complete.