Kubernetes Cluster Deployment: Deploying the Master
Master Components
Master components can run on any node in the cluster. For simplicity, however, all Master components are usually started on a single VM/machine, and no user containers are run on that VM/machine.
- kube-apiserver
kube-apiserver exposes the Kubernetes API. Every resource request and operation goes through the interfaces provided by kube-apiserver.
- kube-controller-manager
kube-controller-manager runs the controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
These controllers include:
- Node controller.
- Replication controller: maintains the correct number of Pods for every replication controller object in the system.
- Endpoints controller: populates Endpoints objects (i.e. joins Services & Pods).
- Service Account & Token controllers: create default accounts and API access tokens for new Namespaces.
- kube-scheduler
kube-scheduler watches newly created Pods that have not yet been assigned a Node and selects a Node for each of them.
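Once all three components are deployed (the rest of this article), each of them listens on its own port on the Master. The check below is only a sketch for use after the deployment is finished; the ports are the v1.18 defaults used later in this article (6443 for kube-apiserver, 10252 for kube-controller-manager, 10251 for kube-scheduler).
# List the control-plane processes and the ports they listen on (run on the Master after deployment).
ss -tlnp | grep -E 'kube-apiserver|kube-controller|kube-scheduler'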
1. Prepare the basic environment
https://blog.csdn.net/abel_dwh/article/details/116596454
2. Deploy the etcd cluster
https://blog.csdn.net/abel_dwh/article/details/116597343
3. Deploy the Master
- Create a directory for the certificates
[root@master ssl]# mkdir /etc/k8s/ssl -p
- Generate the kube-apiserver certificates
[root@master ssl]# cat ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
[root@master ssl]# cat k8s-ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "NM",
"ST": "HuShi",
"L": "HuShi",
"O": "k8s",
"OU": "K8s Security"
}
]
}
- Generate the CA certificate
[root@master ssl]# cfssl gencert -initca k8s-ca-csr.json | cfssljson -bare ca -
2021/05/10 18:13:10 [INFO] generating a new CA key and certificate from CSR
2021/05/10 18:13:10 [INFO] generate received request
2021/05/10 18:13:10 [INFO] received CSR
2021/05/10 18:13:10 [INFO] generating key: rsa-2048
2021/05/10 18:13:10 [INFO] encoded CSR
2021/05/10 18:13:10 [INFO] signed certificate with serial number 27631474529929329151854966158487610252489583230
[root@master ssl]# ls *.pem
ca-key.pem ca.pem
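Optionally, the new CA can be inspected to confirm its subject and the 10-year validity requested in the CSR. This is just a sanity check, not part of the original steps.
# Show the CA subject and validity period (ca.pem was generated above).
openssl x509 -in ca.pem -noout -subject -dates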
- Issue the kube-apiserver certificate with the self-signed CA
[root@master ssl]# cat k8s-csr.json
{
"CN": "kubernetes",
"hosts": [
"192.168.44.128",
"192.168.44.130",
"192.168.44.129",
"master",
"node1",
"node2",
"127.0.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "NM",
"ST": "HuShi",
"L": "HuShi",
"O": "k8s",
"OU": "K8s Security"
}
]
}
- Generate the certificate
[root@master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes k8s-csr.json | cfssljson -bare server
2021/05/10 18:19:22 [INFO] generate received request
2021/05/10 18:19:22 [INFO] received CSR
2021/05/10 18:19:22 [INFO] generating key: rsa-2048
2021/05/10 18:19:23 [INFO] encoded CSR
2021/05/10 18:19:23 [INFO] signed certificate with serial number 607707363525984056077203895987947070954749588362
2021/05/10 18:19:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master ssl]# ls server*.pem
server-key.pem server.pem
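Before moving on, it can be worth confirming that every IP and DNS name from the "hosts" list above ended up in the certificate's Subject Alternative Name field. A sketch of that optional check:
# Print the SANs of the freshly issued apiserver certificate.
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"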
4. Prepare the packages
- Download the Kubernetes server package
wget https://dl.k8s.io/v1.18.18/kubernetes-server-linux-amd64.tar.gz
- Create the installation directories
[root@master ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
- Extract the package
[root@master ~]# tar -xvf kubernetes-server-linux-amd64.tar.gz
kubernetes/
kubernetes/addons/
kubernetes/LICENSES
kubernetes/kubernetes-src.tar.gz
kubernetes/server/
kubernetes/server/bin/
kubernetes/server/bin/kube-scheduler.tar
kubernetes/server/bin/kube-controller-manager.docker_tag
kubernetes/server/bin/kube-proxy.docker_tag
kubernetes/server/bin/kube-proxy.tar
kubernetes/server/bin/kube-proxy
kubernetes/server/bin/kube-scheduler
kubernetes/server/bin/kube-apiserver
kubernetes/server/bin/kubeadm
kubernetes/server/bin/mounter
kubernetes/server/bin/kube-apiserver.tar
kubernetes/server/bin/apiextensions-apiserver
kubernetes/server/bin/kube-apiserver.docker_tag
kubernetes/server/bin/kube-controller-manager
kubernetes/server/bin/kubelet
kubernetes/server/bin/kubectl
kubernetes/server/bin/kube-controller-manager.tar
kubernetes/server/bin/kube-scheduler.docker_tag
- Copy the binaries to the target directories
[root@master ~]# cd kubernetes/server/bin
[root@master bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
[root@master bin]# cp kubectl /usr/bin/
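A quick check that the copied binaries are executable and are the expected v1.18.18 release (a sketch, not part of the original steps):
# Verify the versions of the installed binaries.
/opt/kubernetes/bin/kube-apiserver --version
kubectl version --client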
5. Deploy kube-apiserver
- Create the configuration file
[root@master bin]# cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.44.128:2379,https://192.168.44.129:2379,https://192.168.44.130:2379 \
--bind-address=192.168.44.128 \
--secure-port=6443 \
--advertise-address=192.168.44.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/etc/k8s/ssl/server.pem \
--kubelet-client-key=/etc/k8s/ssl/server-key.pem \
--tls-cert-file=/etc/k8s/ssl/server.pem \
--tls-private-key-file=/etc/k8s/ssl/server-key.pem \
--client-ca-file=/etc/k8s/ssl/ca.pem \
--service-account-key-file=/etc/k8s/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
- Flag descriptions
--logtostderr: log to standard error; set to false here so logs go to --log-dir
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate and key the apiserver uses to access kubelets
--tls-xxx-file: certificate and key for the apiserver HTTPS endpoint
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
- Create the token file
Note: the format is token,username,UID,user group
[root@master bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
267137427e1cb7519d63974aa7598091
[root@master bin]# vim /opt/kubernetes/cfg/token.csv
[root@master bin]# cat /opt/kubernetes/cfg/token.csv
267137427e1cb7519d63974aa7598091,kubelet-bootstrap,10001,"system:node-bootstrapper"
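The same file can also be written in one step. This is only a sketch; regenerating produces a different token than the one shown above.
# Generate a random bootstrap token and write token.csv in the format token,user,uid,"group".
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv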
- Create a systemd unit to manage the apiserver
[root@master bin]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
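Before starting the service, a quick sanity check that every file referenced in kube-apiserver.conf actually exists can save a failed start. This is only a sketch using the paths configured above.
# Report any certificate or token file from kube-apiserver.conf that is missing.
for f in /etc/k8s/ssl/{ca.pem,ca-key.pem,server.pem,server-key.pem} \
         /etc/etcd/ssl/{etcd-ca.pem,etcd.pem,etcd-key.pem} \
         /opt/kubernetes/cfg/token.csv; do
  [ -f "$f" ] || echo "missing: $f"
done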
- Start the service and enable it at boot
[root@master kubernetes]# systemctl daemon-reload
[root@master kubernetes]# systemctl start kube-apiserver
[root@master kubernetes]# systemctl enable kube-apiserver
[root@master kubernetes]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
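controller-manager and scheduler still report Unhealthy here simply because they have not been deployed yet; only etcd needs to be Healthy at this stage. If kube-apiserver itself fails to come up, the following checks are a reasonable starting point (a sketch):
# Confirm the service is running, look at recent logs, and check the secure port.
systemctl status kube-apiserver --no-pager
journalctl -u kube-apiserver --no-pager | tail -n 20
ss -tlnp | grep 6443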
- Authorize the kubelet-bootstrap user to request certificates
[root@master kubernetes]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
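The binding can be verified afterwards; this is only a sketch of an optional check.
# Show the role and subjects of the binding that was just created.
kubectl describe clusterrolebinding kubelet-bootstrap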
6. Deploy kube-controller-manager
- Create the configuration file
[root@master kubernetes]# cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/etc/k8s/ssl/ca.pem \\
--cluster-signing-key-file=/etc/k8s/ssl/ca-key.pem \\
--root-ca-file=/etc/k8s/ssl/ca.pem \\
--service-account-private-key-file=/etc/k8s/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
- Flag descriptions
--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: enable automatic leader election when multiple instances of this component run (HA).
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically issue certificates for kubelets; must be the same CA the apiserver uses.
- Create a systemd unit to manage controller-manager
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the service and enable it at boot
[root@master kubernetes]# systemctl daemon-reload
[root@master kubernetes]# systemctl start kube-controller-manager
[root@master kubernetes]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master kubernetes]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
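controller-manager now reports Healthy. The same check that kubectl get cs performs can also be run directly against the insecure health endpoint on port 10252 (a sketch):
# Should print "ok" when kube-controller-manager is healthy.
curl -s http://127.0.0.1:10252/healthz; echo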
7. Deploy kube-scheduler
- Create the configuration file
[root@master kubernetes]# cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
- Flag descriptions
--master: connect to the apiserver through the local insecure port 8080.
--leader-elect: enable automatic leader election when multiple instances of this component run (HA).
- Create a systemd unit to manage the scheduler
[root@master kubernetes]# cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the service and enable it at boot
[root@master kubernetes]# systemctl daemon-reload
[root@master kubernetes]# systemctl start kube-scheduler
[root@master kubernetes]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master kubernetes]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
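As a final check, the health endpoints of controller-manager and scheduler can be queried directly and the apiserver address confirmed with kubectl. This is only a sketch of optional verification.
# Both endpoints should return "ok", and cluster-info should show the apiserver address.
curl -s http://127.0.0.1:10251/healthz; echo
curl -s http://127.0.0.1:10252/healthz; echo
kubectl cluster-info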