
Kubernetes binary installation


1. Prerequisites

  • One or more machines running CentOS 7.x x86_64
  • Hardware: at least 2 GB RAM, 2 CPUs, and 30 GB of disk
  • Full network connectivity between all machines in the cluster
  • Internet access, needed to pull images
  • Swap disabled
  • Kubernetes >= 1.9.0 requires a kernel newer than 4.4 (a quick verification sketch follows this list)
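
A quick way to check a machine against these requirements (plain coreutils/util-linux commands, nothing Kubernetes-specific):

nproc           # CPU count, expect >= 2
free -h         # memory, expect >= 2G
df -h /         # root disk, expect >= 30G
uname -r        # kernel version, expect > 4.4 after the upgrade below
swapon --show   # prints nothing once swap is off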

2. Upgrade the kernel

# Import the GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
# Install the latest kernel-ml
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64
# List the kernels available on the system
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# Set the default boot entry
grub2-mkconfig -o /boot/grub2/grub.cfg && grub2-set-default  0

Note: this test environment uses the ml (mainline) kernel; for production choose lt, the long-term support kernel.
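
The new kernel only takes effect after a reboot; verify the running version afterwards:

reboot
# after the machine comes back up:
uname -r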

3. Environment preparation

Role        IP
k8s-master  192.168.1.66
k8s-node1   192.168.1.67
k8s-node2   192.168.1.68
# Disable the firewall
systemctl status firewalld
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# Disable swap
swapoff -a  # temporary
vim /etc/fstab  # permanent: comment out the swap line
# Configure hosts
vim /etc/hosts
192.168.1.66 k8s-master
192.168.1.67 k8s-node1
192.168.1.68 k8s-node2
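
The shell prompts later in this article assume each machine's hostname matches its /etc/hosts entry; if it does not, set it (master shown here; run the matching command on each node):

hostnamectl set-hostname k8s-master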
Time synchronization
# Use the chronyd service; ntpd is not recommended
# master:
vim /etc/chrony.conf
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst

# node:
server 192.168.1.66 iburst
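
Restart chronyd on every machine so the new sources take effect, then verify with chronyc sources:

systemctl enable chronyd && systemctl restart chronyd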


[root@k8s-master opt]# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 120.25.115.20                 2   6    17    26  -1620us[-1017us] +/-   16ms
^- 203.107.6.88                  2   6    17    26   +540us[ +540us] +/-   24ms


[root@k8s-node1 opt]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 192.168.1.66                  0   6     0     -     +0ns[   +0ns] +/-    0ns


[root@k8s-node2 opt]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 192.168.1.66                  0   6     0     -     +0ns[   +0ns] +/-    0ns


Tune file descriptor limits
vim /etc/security/limits.conf

*   hard nofile  65536
*   soft nofile  65536
*   hard nproc   65536
*   soft nproc   65536
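
These limits apply only to new login sessions; log in again and verify:

ulimit -n   # open files, expect 65536
ulimit -u   # max user processes, expect 65536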
Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# Maximum number of tracked connections, i.e. the conntrack entries
# netfilter can hold in kernel memory at one time
net.netfilter.nf_conntrack_max=10485760
net.netfilter.nf_conntrack_tcp_timeout_established=300
# Conntrack hash table size (read-only at runtime); default 65536 on a
# 64-bit system with 8G RAM, doubled at 16G, and so on
net.netfilter.nf_conntrack_buckets=655360

# Maximum number of packets queued when an interface receives packets
# faster than the kernel can process them
net.core.netdev_max_backlog=10000

# Upper bound on the socket listen backlog (the accept queue). The default
# of 128 is too small under high concurrency; raise it to 32768.
# See https://imroc.io/posts/kubernetes-overflow-and-drop/
net.core.somaxconn=32768

# Without syncookies, the SYN queue (half-open connections) is capped by
# both somaxconn and this value; default 1024, raised to 8096 to avoid
# drops under high concurrency
net.ipv4.tcp_max_syn_backlog=8096

# Maximum number of inotify instances a single user may create at once
# (each instance can hold many watches)
fs.inotify.max_user_instances=8192

# file-max is the system-wide limit on open file handles; hitting it
# produces "Too many open files" or "Socket/File: Can't open so many files"
fs.file-max=2097152

# Maximum number of inotify watches per user (watches usually target
# directories, so this bounds how many directories one user can monitor).
# The default of 8192 is too small for containers and exhausting it can
# prevent Pods from being created or kubelet from starting; raise to 524288
fs.inotify.max_user_watches=524288

net.core.bpf_jit_enable=1
net.core.bpf_jit_harden=1
net.core.bpf_jit_kallsyms=1
net.core.dev_weight_tx_bias=1

net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 12582912 16777216
net.ipv4.tcp_wmem=4096 12582912 16777216

net.core.rps_sock_flow_entries=8192

# The next three are ARP cache GC thresholds, raised above their defaults.
# Tune them when the kernel's ARP table grows large, to avoid cache
# overflow causing network timeouts; see
# https://k8s.imroc.io/avoid/cases/arp-cache-overflow-causes-healthcheck-failed

# Minimum number of entries kept in the ARP cache; the garbage collector
# will not run below this. Default 128
net.ipv4.neigh.default.gc_thresh1=2048
# Soft maximum of ARP cache entries; the collector allows the count to
# exceed this for 5 seconds before collecting. Default 512
net.ipv4.neigh.default.gc_thresh2=4096
# Hard maximum of ARP cache entries; once exceeded the collector runs
# immediately. Default 1024
net.ipv4.neigh.default.gc_thresh3=8192

net.ipv4.tcp_max_orphans=32768
net.ipv4.tcp_max_tw_buckets=32768

vm.max_map_count=262144

kernel.threads-max=30058

net.ipv4.ip_forward=1

# Make sure a coredump is produced on failure
kernel.core_pattern=core

EOF

sysctl --system  # apply the settings
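
Note: the net.bridge.bridge-nf-call-* keys exist only while the br_netfilter module is loaded; load it, make it persistent across reboots, and re-apply:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system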
Install build dependencies
# dependencies needed to build iptables below
yum -y install gcc make libnftnl-devel libmnl-devel autoconf automake libtool bison flex libnetfilter_conntrack-devel libnetfilter_queue-devel libpcap-devel
Upgrade iptables
# The iptables shipped with CentOS 7 is too old; upgrading is required
wget https://www.netfilter.org/projects/iptables/files/iptables-1.6.2.tar.bz2

tar -jxvf iptables-1.6.2.tar.bz2
cd iptables-1.6.2
./autogen.sh
./configure
make -j4
make install
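
Verify that the freshly built iptables is the one found first on PATH:

iptables --version   # expect iptables v1.6.2; if an older version shows, check that /usr/local/sbin precedes /sbin in PATH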

4. Download the cfssl binaries

Download the cfssl binaries, used to sign certificates, from the official site https://pkg.cfssl.org/. The files needed are cfssl, cfssljson and cfssl-certinfo (visible in the tree below).

Add the cfssl binaries to a directory included in PATH:
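
A minimal download sketch, assuming the R1.2 release file names on pkg.cfssl.org:

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo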

cfssl directory layout

TLS/etcd/server-csr.json  # change to your own etcd machine IPs

TLS/k8s/server-csr.json  # change to your own k8s cluster addresses, plus the LB addresses

[root@k8s-master xuyao]# tree TLS
TLS
├── cfssl
├── cfssl-certinfo
├── cfssljson
├── cfssl.sh
├── etcd
│   ├── ca-config.json
│   ├── ca.csr
│   ├── ca-csr.json
│   ├── ca-key.pem
│   ├── ca.pem
│   ├── generate_etcd_cert.sh
│   ├── server.csr
│   ├── server-csr.json
│   ├── server-key.pem
│   └── server.pem
└── k8s
    ├── ca-config.json
    ├── ca.csr
    ├── ca-csr.json
    ├── ca-key.pem
    ├── ca.pem
    ├── generate_k8s_cert.sh
    ├── kube-proxy.csr
    ├── kube-proxy-csr.json
    ├── kube-proxy-key.pem
    ├── kube-proxy.pem
    ├── server.csr
    ├── server-csr.json
    ├── server-key.pem
    └── server.pem

2 directories, 28 files


5. etcd cluster

# Generate the self-signed etcd certificates
# Edit the IPs in the etcd cluster config file
[root@k8s-master etcd]# cat server-csr.json
{
    "CN": "etcd",
    "hosts": [
        "192.168.1.66",
        "192.168.1.67",
        "192.168.1.68"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

# Run the script that generates the self-signed certs
[root@k8s-master etcd]# ./generate_etcd_cert.sh
+ cfssl gencert -initca ca-csr.json
+ cfssljson -bare ca -
2020/12/31 15:48:45 [INFO] generating a new CA key and certificate from CSR
2020/12/31 15:48:45 [INFO] generate received request
2020/12/31 15:48:45 [INFO] received CSR
2020/12/31 15:48:45 [INFO] generating key: rsa-2048
2020/12/31 15:48:45 [INFO] encoded CSR
2020/12/31 15:48:45 [INFO] signed certificate with serial number 272817761786451500858086696566593768361585469736
+ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json
+ cfssljson -bare server
2020/12/31 15:48:45 [INFO] generate received request
2020/12/31 15:48:45 [INFO] received CSR
2020/12/31 15:48:45 [INFO] generating key: rsa-2048
2020/12/31 15:48:46 [INFO] encoded CSR
2020/12/31 15:48:46 [INFO] signed certificate with serial number 101341184092860876573507489937573051863285603316
2020/12/31 15:48:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master etcd]# ll -rht
total 40K
-rw-r--r--. 1 root root  209 Oct  3  2019 ca-csr.json
-rw-r--r--. 1 root root  287 Oct  3  2019 ca-config.json
-rwxr-xr-x. 1 root root  178 Oct  3  2019 generate_etcd_cert.sh
-rw-r--r--. 1 root root  303 Mar 13 17:56 server-csr.json
-rw-r--r--. 1 root root 1.3K Mar 13 18:13 ca.pem
-rw-------. 1 root root 1.7K Mar 13 18:13 ca-key.pem
-rw-r--r--. 1 root root  956 Mar 13 18:13 ca.csr
-rw-r--r--. 1 root root 1.4K Mar 13 18:13 server.pem
-rw-------. 1 root root 1.7K Mar 13 18:13 server-key.pem
-rw-r--r--. 1 root root 1013 Mar 13 18:13 server.csr

# Distribute the etcd certificates
# to the master & every node
scp server.pem server-key.pem ca.pem /opt/etcd/ssl/

# etcd configuration
[root@k8s-master cfg]# cat etcd.conf

#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.66:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.66:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.66:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.66:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.66:2380,etcd-2=https://192.168.1.67:2380,etcd-3=https://192.168.1.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Directory layout
[root@k8s-master opt]# tree etcd/
etcd/
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd.conf
└── ssl
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

3 directories, 6 files
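
On etcd-2 and etcd-3, change ETCD_NAME and the listen/advertise URLs in etcd.conf accordingly. The systemd unit is not shown in the tree above; a minimal sketch of /usr/lib/systemd/system/etcd.service, assuming etcd picks up the ETCD_* variables via EnvironmentFile and the TLS flags point at the certs distributed earlier:

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target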


# Start etcd on every node
systemctl enable etcd
systemctl start etcd
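
Once all members are started, a quick health check sketch using the etcdctl v3 API (cert paths and endpoints as configured above):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.1.66:2379,https://192.168.1.67:2379,https://192.168.1.68:2379" \
  endpoint health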

6. Install Docker

Docker does not need to be installed on the master.

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce

systemctl enable docker && systemctl start docker

Configure a registry mirror

# Aliyun registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://ihwppc8g.mirror.aliyuncs.com"],
  "log-driver":"json-file",
  "log-opts": {"max-size":"500m", "max-file":"3"}
}
EOF
# Restart docker
systemctl restart docker
# Check docker info
docker info
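
Worth checking here: the kubelet-config.yml later in this article sets cgroupDriver: cgroupfs, and docker's cgroup driver must match it:

docker info | grep -i "cgroup driver"   # expect: cgroupfs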

7. Deploy the master

[root@k8s-master k8s]# ./generate_k8s_cert.sh
[root@k8s-master k8s]# ll -rht
total 56K
-rwxr-xr-x 1 root root  321 Oct  3  2019 generate_k8s_cert.sh
-rw-r--r-- 1 root root  230 Oct  3  2019 kube-proxy-csr.json
-rw-r--r-- 1 root root  263 Oct  3  2019 ca-csr.json
-rw-r--r-- 1 root root  294 Oct  3  2019 ca-config.json
-rw-r--r-- 1 root root  554 Mar 16 14:52 server-csr.json
-rw-r--r-- 1 root root 1.4K Mar 16 14:54 ca.pem
-rw------- 1 root root 1.7K Mar 16 14:54 ca-key.pem
-rw-r--r-- 1 root root 1001 Mar 16 14:54 ca.csr
-rw-r--r-- 1 root root 1.6K Mar 16 14:54 server.pem
-rw------- 1 root root 1.7K Mar 16 14:54 server-key.pem
-rw-r--r-- 1 root root 1.3K Mar 16 14:54 server.csr
-rw-r--r-- 1 root root 1.4K Mar 16 14:54 kube-proxy.pem
-rw------- 1 root root 1.7K Mar 16 14:54 kube-proxy-key.pem
-rw-r--r-- 1 root root 1009 Mar 16 14:54 kube-proxy.csr

# Copy to the kubernetes/ssl directory; the kube-proxy certs can be skipped here
scp ca.pem ca-key.pem server.pem server-key.pem kube-proxy.pem kube-proxy-key.pem /opt/kubernetes/ssl/

# Directory layout
kubernetes
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
│   ├── kube-apiserver.ERROR -> kube-apiserver.k8s-master.root.log.ERROR.20201231-185050.13683
│   ├── kube-apiserver.INFO -> kube-apiserver.k8s-master.root.log.INFO.20201231-185043.13683
│   ├── kube-apiserver.k8s-master.root.log.ERROR.20201231-185050.13683
│   ├── kube-apiserver.k8s-master.root.log.INFO.20201231-185043.13683
│   ├── kube-apiserver.k8s-master.root.log.WARNING.20201231-185045.13683
│   ├── kube-apiserver.WARNING -> kube-apiserver.k8s-master.root.log.WARNING.20201231-185045.13683
│   ├── kube-controller-manager.ERROR -> kube-controller-manager.k8s-master.root.log.ERROR.20201231-185128.13793
│   ├── kube-controller-manager.INFO -> kube-controller-manager.k8s-master.root.log.INFO.20201231-185113.13793
│   ├── kube-controller-manager.k8s-master.root.log.ERROR.20201231-185128.13793
│   ├── kube-controller-manager.k8s-master.root.log.INFO.20201231-185113.13793
│   ├── kube-controller-manager.k8s-master.root.log.WARNING.20201231-185117.13793
│   ├── kube-controller-manager.WARNING -> kube-controller-manager.k8s-master.root.log.WARNING.20201231-185117.13793
│   ├── kube-scheduler.INFO -> kube-scheduler.k8s-master.root.log.INFO.20201231-185137.13874
│   ├── kube-scheduler.k8s-master.root.log.INFO.20201231-185137.13874
│   ├── kube-scheduler.k8s-master.root.log.WARNING.20201231-185140.13874
│   └── kube-scheduler.WARNING -> kube-scheduler.k8s-master.root.log.WARNING.20201231-185140.13874
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── kube-proxy-key.pem
    ├── kube-proxy.pem
    ├── server-key.pem
    └── server.pem

# Configuration files
[root@k8s-master cfg]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.1.66:2379,https://192.168.1.67:2379,https://192.168.1.68:2379 \
--bind-address=192.168.1.66 \
--secure-port=6443 \
--advertise-address=192.168.1.66 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

[root@k8s-master cfg]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"


[root@k8s-master cfg]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"
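
The three components run as systemd services (the status output below references /usr/lib/systemd/system/kube-apiserver.service). A minimal unit sketch, assuming each unit expands the matching *_OPTS variable from its config file; kube-controller-manager.service and kube-scheduler.service follow the same pattern:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable and start all three:

systemctl enable kube-apiserver kube-controller-manager kube-scheduler
systemctl start kube-apiserver kube-controller-manager kube-scheduler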


[root@k8s-master opt]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-31 18:50:43 CST; 3 days ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 13683 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─13683 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168....

Dec 31 18:50:43 k8s-master systemd[1]: Started Kubernetes API Server.
Dec 31 18:50:50 k8s-master kube-apiserver[13683]: E1231 18:50:50.037606   13683 controller.go:152] Unable to remove old endpoints from ...rorMsg:
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master opt]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-31 18:51:13 CST; 3 days ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 13793 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─13793 /opt/kubernetes/bin/kube-controller-manager --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect=true --...

Dec 31 18:51:13 k8s-master systemd[1]: Started Kubernetes Controller Manager.
Dec 31 18:51:13 k8s-master kube-controller-manager[13793]: Flag --address has been deprecated, see --bind-address instead.
Dec 31 18:51:13 k8s-master kube-controller-manager[13793]: Flag --experimental-cluster-signing-duration has been deprecated, use --clust...ration
Dec 31 18:51:28 k8s-master kube-controller-manager[13793]: E1231 18:51:28.719414   13793 core.go:230] failed to start cloud node lifecyc...ovided
Dec 31 18:51:28 k8s-master kube-controller-manager[13793]: E1231 18:51:28.723549   13793 core.go:90] Failed to start service controller:...l fail
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master opt]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-12-31 18:51:37 CST; 3 days ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 13874 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─13874 /opt/kubernetes/bin/kube-scheduler --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --leader-elect --master=127.0.0...

Dec 31 18:51:37 k8s-master systemd[1]: Started Kubernetes Scheduler.
Dec 31 18:51:37 k8s-master kube-scheduler[13874]: I1231 18:51:37.793391   13874 registry.go:173] Registering SelectorSpread plugin
Dec 31 18:51:37 k8s-master kube-scheduler[13874]: I1231 18:51:37.793563   13874 registry.go:173] Registering SelectorSpread plugin
[root@k8s-master opt]#


# Copy kubectl into a directory on PATH

scp /opt/kubernetes/bin/kubectl /usr/bin

# Check component status (cs)

[root@k8s-master opt]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 



Enable TLS Bootstrapping

TLS Bootstrapping lets the kubelet on each Node automatically request a certificate from kube-apiserver when it starts. It is enabled by starting kube-apiserver with the --enable-bootstrap-token-auth=true option, which was already done above. The --token-auth-file option then points at an authentication file, the /opt/kubernetes/cfg/token.csv added earlier, whose content is:

[root@k8s-master cfg]# cat /opt/kubernetes/cfg/token.csv
d8449d7d601eae4227620a298c693308,kubelet-bootstrap,10001,"system:node-bootstrapper"

The columns mean the following:

  • d8449d7d601eae4227620a298c693308: the authentication token. It can be generated with head -c 16 /dev/urandom | od -An -t x | tr -d ' ', but the token configured in the API Server must match the one in /opt/kubernetes/cfg/bootstrap.kubeconfig on the Node side;

  • kubelet-bootstrap: the user;

  • 10001: the UID;

  • system:node-bootstrapper: the group.

Although TLS Bootstrapping is now enabled, the kubelet-bootstrap user has no permissions yet. Grant them manually by binding the user to the built-in cluster role system:node-bootstrapper:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

8. Deploy the nodes

[root@k8s-node1 opt]# tree kubernetes/
kubernetes/
├── bin
│   ├── kubectl
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kubelet.kubeconfig
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
└── ssl

4 directories, 10 files

Besides Docker, each Kubernetes Node also needs the kubelet and kube-proxy components. Both binaries already exist in the kubernetes-server-linux-amd64.tar.gz unpacked while deploying the Kubernetes Master, under kubernetes/server/bin/; copy them into the Kubernetes install directory on both Node machines:

[root@k8s-node1 bin]# scp kubectl kubelet kube-proxy /opt/kubernetes/bin/
[root@k8s-node1 bin]#
[root@k8s-node1 bin]# ./kubelet --version
Kubernetes v1.19.6

Since kubelet and kube-proxy on the Kubernetes Node communicate with the API Server, the previously generated ca.pem, kube-proxy.pem and kube-proxy-key.pem must be copied to each Kubernetes Node:

[root@k8s-master ssl]# scp ca.pem kube-proxy*.pem root@192.168.1.67:/opt/kubernetes/ssl/
root@192.168.1.67's password: 
ca.pem                                                                                                                                                     100% 1359    81.6KB/s   00:00    
kube-proxy-key.pem                                                                                                                                         100% 1679     1.4MB/s   00:00    
kube-proxy.pem                                                                                                                                             100% 1403     1.2MB/s   00:00    

Create the authentication kubeconfig kubelet uses for TLS Bootstrapping:

[root@k8s-node1 cfg]# cat bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.1.66:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: d8449d7d601eae4227620a298c693308

Create the kubelet configuration resource YAML file:

[root@k8s-node1 cfg]# cat kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110

Create the kubelet startup configuration file:

Note that --hostname-override differs on each Kubernetes Node.

[root@k8s-node1 cfg]# cat kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=node1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-shenzhen.aliyuncs.com/zze/pause:3.2"

Create the authentication kubeconfig kube-proxy uses to talk to kube-apiserver:

[root@k8s-node1 cfg]# cat kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.1.66:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem

Create the kube-proxy configuration resource YAML file:

Note that hostnameOverride differs on each Kubernetes Node.

[root@k8s-node1 cfg]# cat kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: node1
clusterCIDR: 10.0.0.0/24
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true

Create the kube-proxy startup configuration file:

[root@k8s-node1 cfg]# cat kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"

Create the kubelet systemd service file:

$ cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Create the kube-proxy systemd service file:

cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Start kubelet and enable it at boot:

$ systemctl start kubelet.service
$ systemctl enable kubelet.service
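
Start kube-proxy the same way. Back on the master, the bootstrapping kubelet shows up as a pending CSR that must be approved before the node registers; the CSR name below is a placeholder, use the one kubectl prints:

$ systemctl start kube-proxy.service
$ systemctl enable kube-proxy.service

# on the master:
kubectl get csr
kubectl certificate approve <csr-name>
kubectl get nodes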

To be continued in future updates.

Tags: linux devops