k8s Binary Installation


kube-proxy: service load balancing on each node

Kubernetes is a distributed cluster management system. On every node a worker runs to manage the lifecycle of containers; that worker program is the kubelet.

controller-manager: the Controller Manager is the management and control center inside the cluster

scheduler: the scheduler, which assigns pods to nodes

Responsibilities of the Replication Controller

Ensure the cluster runs exactly N pod instances, where N is the replica count defined in the RC.
Scale the system out or in by adjusting the RC's spec.replicas value.
Perform rolling upgrades by changing the pod template in the RC.
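To make this concrete, a minimal ReplicationController manifest is sketched below (the nginx-rc name and nginx:1.13 image are illustrative, not part of the cluster built later):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80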

2.2. Replication Controller use cases

Use case         Description                                                                                                      Command
Rescheduling     When a node fails or a pod is terminated unexpectedly, pods are rescheduled so the cluster keeps running the specified number of replicas.
Elastic scaling  Adjusting the RC's spec.replicas value, manually or via an autoscaler, scales the system out or in.              kubectl scale
Rolling update   Create a new RC file and apply it via kubectl or the API; new replicas are added while old ones are deleted, and the old RC is removed once its replica count reaches 0.   kubectl rolling-update
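As a rough illustration of the two commands named in the table (assuming the nginx-rc controller sketched above):

# scale out to 5 replicas
kubectl scale rc nginx-rc --replicas=5
# rolling update driven by a new RC definition (a v1.15-era command, removed in later releases)
kubectl rolling-update nginx-rc -f nginx-rc-v2.yaml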

2. Installing k8s

Environment preparation (set IP addresses, hostnames, and hosts-file resolution)

Host        IP         Memory  Software
k8s-master  10.0.0.11  1G      etcd, api-server, controller-manager, scheduler
k8s-node1   10.0.0.12  2G      etcd, kubelet, kube-proxy, docker, flannel
k8s-node2   10.0.0.13  2G      etcd, kubelet, kube-proxy, docker, flannel
k8s-node3   10.0.0.14  1G      kubelet, kube-proxy, docker, flannel

hosts-file entries

10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3

Passwordless SSH login

[root@k8s-master ~]# ssh-keygen -t rsa

[root@k8s-master ~]# ls .ssh/
id_rsa id_rsa.pub

[root@k8s-master ~]# ssh-copy-id root@10.0.0.11

[root@k8s-master ~]# scp -rp .ssh root@10.0.0.11:/root

[root@k8s-master ~]# scp -rp .ssh root@10.0.0.12:/root

[root@k8s-master ~]# scp -rp .ssh root@10.0.0.13:/root

[root@k8s-master ~]# scp -rp .ssh root@10.0.0.14:/root

[root@k8s-master ~]# ssh root@10.0.0.12

[root@k8s-master ~]# ssh root@10.0.0.13

[root@k8s-master ~]# ssh root@10.0.0.14

[root@k8s-master ~]# scp /etc/hosts root@10.0.0.12:/etc/hosts
hosts 100% 240 4.6KB/s 00:00
[root@k8s-master ~]# scp /etc/hosts root@10.0.0.13:/etc/hosts
hosts 100% 240 51.4KB/s 00:00
[root@k8s-master ~]# scp /etc/hosts root@10.0.0.14:/etc/hosts
hosts 100% 240 49.2KB/s 00:00

2.1 Issuing certificates

Prepare the certificate-issuing tools (cfssl)

On the node3 node

[root@k8s-node3 ~]# mkdir /opt/softs
[root@k8s-node3 ~]# cd /opt/softs
[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json
[root@k8s-node3 softs]# chmod +x /opt/softs/*
[root@k8s-node3 softs]# ln -s /opt/softs/* /usr/bin/

[root@k8s-node3 softs]# mkdir /opt/certs
[root@k8s-node3 softs]# cd /opt/certs

Edit the CA signing config file

vi /opt/certs/ca-config.json
{
 "signing": {
     "default": {
         "expiry": "175200h"
     },
     "profiles": {
         "server": {
             "expiry": "175200h",
             "usages": [
                 "signing",
                 "key encipherment",
                 "server auth"
             ]
         },
         "client": {
             "expiry": "175200h",
             "usages": [
                 "signing",
                 "key encipherment",
                 "client auth"
             ]
         },
         "peer": {
             "expiry": "175200h",
             "usages": [
                 "signing",
                 "key encipherment",
                 "server auth",
                 "client auth"
             ]
         }
     }
 }
}

Edit the CA certificate signing request (CSR) file

vi /opt/certs/ca-csr.json
{
 "CN": "kubernetes-ca",
 "hosts": [
 ],
 "key": {
     "algo": "rsa",
     "size": 2048
 },
 "names": [
     {
         "C": "CN",
         "ST": "beijing",
         "L": "beijing",
         "O": "od",
         "OU": "ops"
     }
 ],
 "ca": {
     "expiry": "175200h"
 }
}

Generate the CA certificate and private key

[root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca -
2020/09/27 17:20:56 [INFO] generating a new CA key and certificate from CSR
2020/09/27 17:20:56 [INFO] generate received request
2020/09/27 17:20:56 [INFO] received CSR
2020/09/27 17:20:56 [INFO] generating key: rsa-2048
2020/09/27 17:20:56 [INFO] encoded CSR
2020/09/27 17:20:56 [INFO] signed certificate with serial number 409112456326145160001566370622647869686523100724
[root@k8s-node3 certs]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

ca.csr: the certificate signing request file

2.2 Deploying the etcd cluster

Hostname    IP         Role
k8s-master  10.0.0.11  etcd leader
k8s-node1   10.0.0.12  etcd follower
k8s-node2   10.0.0.13  etcd follower

Issue the certificate for etcd peer-to-peer communication

[root@k8s-node3 certs]# vi /opt/certs/etcd-peer-csr.json
{
 "CN": "etcd-peer",
 "hosts": [
     "10.0.0.11",
     "10.0.0.12",
     "10.0.0.13"
 ],
 "key": {
     "algo": "rsa",
     "size": 2048
 },
 "names": [
     {
         "C": "CN",
         "ST": "beijing",
         "L": "beijing",
         "O": "od",
         "OU": "ops"
     }
 ]
}
​
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/09/27 17:29:49 [INFO] generate received request
2020/09/27 17:29:49 [INFO] received CSR
2020/09/27 17:29:49 [INFO] generating key: rsa-2048
2020/09/27 17:29:49 [INFO] encoded CSR
2020/09/27 17:29:49 [INFO] signed certificate with serial number 15140302313813859454537131325115129339480067698
2020/09/27 17:29:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls etcd-peer*
etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem

# Install the etcd service

On k8s-master, k8s-node1, and k8s-node2

[root@k8s-master ~]# yum install etcd -y

[root@k8s-node1 ~]# yum install etcd -y

[root@k8s-node2 ~]# yum install etcd -y

# Distribute the certificates

[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.11:/etc/etcd/
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.12:/etc/etcd/
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.13:/etc/etcd/

# On the master node

[root@k8s-master ~]# chown -R etcd:etcd /etc/etcd/*.pem

[root@k8s-master etc]# vim /etc/etcd/etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"

[root@k8s-master etc]# scp -rp /etc/etcd/etcd.conf root@10.0.0.12:/etc/etcd/etcd.conf

[root@k8s-master etc]# scp -rp /etc/etcd/etcd.conf root@10.0.0.13:/etc/etcd/etcd.conf

# node1 and node2 must change the following lines

On k8s-node1 (etcd member name node2)

ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_NAME="node2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"

On k8s-node2 (etcd member name node3)

ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_NAME="node3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"

# Start all 3 etcd nodes at the same time

systemctl start etcd

systemctl enable etcd

# Verify

[root@k8s-master ~]# etcdctl member list
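If the peer TLS setup is correct, all three members should be listed as started. As a further sketch of a health check with the same etcd v2 tooling installed from the yum repo:

[root@k8s-master ~]# etcdctl cluster-health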

2.3 Installing the master node

Install the api-server service

Upload kubernetes-server-linux-amd64-v1.15.4.tar.gz to node3, then unpack it

[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# tar xf kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json kubernetes kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# cd /opt/softs/kubernetes/server/bin/

[root@k8s-node3 bin]# scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl root@10.0.0.11:/usr/sbin/

Issue the client certificate
[root@k8s-node3 bin]# cd /opt/certs/
[root@k8s-node3 certs]# vi /opt/certs/client-csr.json
{
 "CN": "k8s-node",
 "hosts": [
 ],
 "key": {
     "algo": "rsa",
     "size": 2048
 },
 "names": [
     {
         "C": "CN",
         "ST": "beijing",
         "L": "beijing",
         "O": "od",
         "OU": "ops"
     }
 ]
}

Generate the certificate

[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client

[root@k8s-node3 certs]# ls client*
client.csr  client-csr.json  client-key.pem  client.pem

Issue the kube-apiserver certificate
[root@k8s-node3 certs]# vi /opt/certs/apiserver-csr.json
{
 "CN": "apiserver",
 "hosts": [
     "127.0.0.1",
     "10.254.0.1",
     "kubernetes.default",
     "kubernetes.default.svc",
     "kubernetes.default.svc.cluster",
     "kubernetes.default.svc.cluster.local",
     "10.0.0.11",
     "10.0.0.12",
     "10.0.0.13"
 ],
 "key": {
     "algo": "rsa",
     "size": 2048
 },
 "names": [
     {
         "C": "CN",
         "ST": "beijing",
         "L": "beijing",
         "O": "od",
         "OU": "ops"
     }
 ]
}
​
# Note: 10.254.0.1 is the first IP of the clusterIP range and serves as the in-cluster address pods use to reach the api-server; oldqiang was bitten by this for a long time
​
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver
[root@k8s-node3 certs]# ls apiserver*
apiserver.csr       apiserver-key.pem
apiserver-csr.json  apiserver.pem

Configure the api-server service

On the master node

[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/ca*pem .

[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/apiserver*pem .

[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/client*pem .

[root@k8s-master kubernetes]# ls

apiserver-key.pem apiserver.pem ca-key.pem ca.pem client-key.pem client.pem

RBAC: role-based access control

# api-server audit log policy

[root@k8s-master kubernetes]# vi audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
​
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
​
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
​
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
​
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
​
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
​
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
​
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
​
vi  /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \
  --audit-log-path /var/log/kubernetes/audit-log \
  --audit-policy-file /etc/kubernetes/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file /etc/kubernetes/ca.pem \
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile /etc/kubernetes/ca.pem \
  --etcd-certfile /etc/kubernetes/client.pem \
  --etcd-keyfile /etc/kubernetes/client-key.pem \
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
  --service-account-key-file /etc/kubernetes/ca-key.pem \
  --service-cluster-ip-range 10.254.0.0/16 \
  --service-node-port-range 30000-59999 \
  --kubelet-client-certificate /etc/kubernetes/client.pem \
  --kubelet-client-key /etc/kubernetes/client-key.pem \
  --log-dir  /var/log/kubernetes/ \
  --logtostderr=false \
  --tls-cert-file /etc/kubernetes/apiserver.pem \
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@k8s-master kubernetes]# mkdir /var/log/kubernetes
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service
[root@k8s-master kubernetes]# systemctl enable kube-apiserver.service

[root@k8s-master kubernetes]# kubectl get cs    # check component status


Install the controller-manager service
[root@k8s-master kubernetes]# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-controller-manager \
--cluster-cidr 172.18.0.0/16 \
--log-dir /var/log/kubernetes/ \
--master http://127.0.0.1:8080 \
--service-account-private-key-file /etc/kubernetes/ca-key.pem \
--service-cluster-ip-range 10.254.0.0/16 \
--root-ca-file /etc/kubernetes/ca.pem \
--logtostderr=false \
--v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
​
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-controller-manager.service
[root@k8s-master kubernetes]# systemctl enable kube-controller-manager.service

Install the scheduler service
[root@k8s-master kubernetes]# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-scheduler \
--log-dir /var/log/kubernetes/ \
--master http://127.0.0.1:8080 \
--logtostderr=false \
--v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
​
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-scheduler.service
[root@k8s-master kubernetes]# systemctl enable kube-scheduler.service

Verify the master node

[root@k8s-master kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

2.4 Installing the node components

Install the kubelet service

Sign the certificate on the node3 node

[root@k8s-node3 bin]# cd /opt/certs/
[root@k8s-node3 certs]# vi kubelet-csr.json
{
 "CN": "kubelet-node",
 "hosts": [
 "127.0.0.1",
 "10.0.0.11",
 "10.0.0.12",
 "10.0.0.13",
 "10.0.0.14",
 "10.0.0.15"
 ],
 "key": {
     "algo": "rsa",
     "size": 2048
 },
 "names": [
     {
         "C": "CN",
         "ST": "beijing",
         "L": "beijing",
         "O": "od",
         "OU": "ops"
     }
 ]
}
​
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
[root@k8s-node3 certs]# ls kubelet*
kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem

# Generate the kubeconfig file that kubelet needs to start

[root@k8s-node3 certs]# ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/
# Set the cluster parameters
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/certs/ca.pem \
--embed-certs=true \
--server=https://10.0.0.11:6443 \
--kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
# Set the client credentials
[root@k8s-node3 certs]# kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig
User "k8s-node" set.
# Create the context
[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=k8s-node \
--kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.
# Switch to the new context as the default
[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
# Check the generated kubeconfig file
[root@k8s-node3 certs]# ls kubelet.kubeconfig
kubelet.kubeconfig
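A quick way to sanity-check the result (the embedded certificate data is hidden in the output):

[root@k8s-node3 certs]# kubectl config view --kubeconfig=kubelet.kubeconfig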

On the master node

[root@k8s-master ~]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
​
[root@k8s-master ~]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
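To confirm the binding took effect, one can run (a sketch, not in the original notes):

[root@k8s-master ~]# kubectl describe clusterrolebinding k8s-node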

On the node1 node

# Install docker-ce (steps omitted)
vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service
systemctl enable docker.service
​
[root@k8s-node1 ~]# mkdir /etc/kubernetes
[root@k8s-node1 ~]# cd /etc/kubernetes
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/kubelet.kubeconfig .
root@10.0.0.14's password: 
kubelet.kubeconfig                                                                            100% 6219     3.8MB/s   00:00    
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/ca*pem .
root@10.0.0.14's password: 
ca-key.pem                                                                                    100% 1675     1.2MB/s   00:00    
ca.pem                                                                                        100% 1354   946.9KB/s   00:00    
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/kubelet*pem .
root@10.0.0.14's password: 
kubelet-key.pem                                                                               100% 1679     1.2MB/s   00:00    
kubelet.pem                                                                                   100% 1464     1.1MB/s   00:00    
[root@k8s-node1 kubernetes]# 
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/softs/kubernetes/server/bin/kubelet /usr/bin/
root@10.0.0.14's password: 
kubelet                                                                                       100%  114MB  29.6MB/s   00:03 
​
[root@k8s-node1 kubernetes]# mkdir /var/log/kubernetes
[root@k8s-node1 kubernetes]# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
--anonymous-auth=false \
--cgroup-driver systemd \
--cluster-dns 10.254.230.254 \
--cluster-domain cluster.local \
--runtime-cgroups=/systemd/system.slice \
--kubelet-cgroups=/systemd/system.slice \
--fail-swap-on=false \
--client-ca-file /etc/kubernetes/ca.pem \
--tls-cert-file /etc/kubernetes/kubelet.pem \
--tls-private-key-file /etc/kubernetes/kubelet-key.pem \
--hostname-override 10.0.0.12 \
--image-gc-high-threshold 20 \
--image-gc-low-threshold 10 \
--kubeconfig /etc/kubernetes/kubelet.kubeconfig \
--log-dir /var/log/kubernetes/ \
--pod-infra-container-image t29617342/pause-amd64:3.0 \
--logtostderr=false \
--v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
​
[root@k8s-node1 kubernetes]# systemctl daemon-reload
[root@k8s-node1 kubernetes]# systemctl start kubelet.service
[root@k8s-node1 kubernetes]# systemctl enable kubelet.service

Run the same commands on node2, changing the IP address:

--hostname-override 10.0.0.13 \

Verify on the master node

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.0.0.12   Ready    <none>   15m   v1.15.4
10.0.0.13   Ready    <none>   16s   v1.15.4
Install the kube-proxy service

Sign the certificate on the node3 node

[root@k8s-node3 ~]# cd /opt/certs/
[root@k8s-node3 certs]# vi /opt/certs/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
​
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
[root@k8s-node3 certs]# ls kube-proxy-c*

# Generate the kubeconfig that kube-proxy needs to start

[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
--certificate-authority=/opt/certs/ca.pem \
--embed-certs=true \
--server=https://10.0.0.11:6443 \
--kubeconfig=kube-proxy.kubeconfig
Cluster "myk8s" set.
[root@k8s-node3 certs]# kubectl config set-credentials kube-proxy \
--client-certificate=/opt/certs/kube-proxy-client.pem \
--client-key=/opt/certs/kube-proxy-client-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
--cluster=myk8s \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
Context "myk8s-context" created.
[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".
[root@k8s-node3 certs]# ls kube-proxy.kubeconfig 
kube-proxy.kubeconfig
[root@k8s-node3 certs]# scp -rp kube-proxy.kubeconfig root@10.0.0.12:/etc/kubernetes/
[root@k8s-node3 certs]# scp -rp kube-proxy.kubeconfig root@10.0.0.13:/etc/kubernetes/
[root@k8s-node3 bin]# scp -rp kube-proxy root@10.0.0.12:/usr/bin/
[root@k8s-node3 bin]# scp -rp kube-proxy root@10.0.0.13:/usr/bin/

Configure kube-proxy on the node1 node

[root@k8s-node1 ~]# vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
--kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
--cluster-cidr 172.18.0.0/16 \
--hostname-override 10.0.0.12 \
--logtostderr=false \
--v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
​
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start kube-proxy.service
[root@k8s-node1 ~]# systemctl enable kube-proxy.service

Run the same commands on node2, with:

--hostname-override 10.0.0.13 \

2.5 Configuring the flannel network

Install flannel on all nodes

yum install flannel  -y
mkdir  /opt/certs/

Distribute certificates from node3

[root@k8s-node3 ~]# cd /opt/certs/
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.11:/opt/certs/ 
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.12:/opt/certs/
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.13:/opt/certs/

On the master node

Create flannel's key in etcd

# This key defines the pod IP address range
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'
# Note: this may fail with
Error:  x509: certificate signed by unknown authority
# Retrying a few times resolves it
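To confirm the key was written (the master also listens on plain http://127.0.0.1:2379, so no TLS flags are needed locally):

[root@k8s-master ~]# etcdctl get /atomic.io/network/config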

Configure and start flannel

vi /etc/sysconfig/flanneld
Line 4: FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"
Line 8 (unchanged): FLANNEL_ETCD_PREFIX="/atomic.io/network"
Line 11: FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"
​
systemctl start flanneld.service 
systemctl enable flanneld.service
​
# Verify
[root@k8s-node1 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
     inet 172.18.43.0  netmask 255.255.255.255  broadcast 0.0.0.0
     inet6 fe80::30d9:50ff:fe47:599e  prefixlen 64  scopeid 0x20<link>
     ether 32:d9:50:47:59:9e  txqueuelen 0  (Ethernet)
     RX packets 0  bytes 0 (0.0 B)
     RX errors 0  dropped 0  overruns 0  frame 0
     TX packets 0  bytes 0 (0.0 B)
     TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

On node1 and node2

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
Change ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
and add the line ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker
​
# Verify: the docker0 network should now be on the 172.18 segment
[root@k8s-node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
     inet 172.18.41.1  netmask 255.255.255.0  broadcast 172.18.41.255
     ether 02:42:07:3e:8a:09  txqueuelen 0  (Ethernet)
     RX packets 0  bytes 0 (0.0 B)
     RX errors 0  dropped 0  overruns 0  frame 0
     TX packets 0  bytes 0 (0.0 B)
     TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Verify the k8s cluster installation

[root@k8s-master ~]# kubectl run nginx  --image=nginx:1.13 --replicas=2
# Wait a while, then check the pod status
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
nginx-6459cd46fd-8lln4   1/1     Running   0          3m27s   172.18.41.2   10.0.0.12   <none>           <none>
nginx-6459cd46fd-xxt24   1/1     Running   0          3m27s   172.18.96.2   10.0.0.13   <none>           <none>
​
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort 
service/nginx exposed
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP        6h46m
nginx        NodePort    10.254.160.83   <none>        80:41760/TCP   3s
​
# Open http://10.0.0.12:41760 in a browser; if the page loads, everything is OK


Verification

[root@k8s-node1 kubernetes]# docker load -i docker_alpine3.9.tar.gz
[root@k8s-node1 kubernetes]# docker run -it alpine:3.9
/ # ip add
[root@k8s-node1 kubernetes]# docker load -i docker_nginx1.13.tar.gz
[root@k8s-master ~]# curl -I 10.0.0.12:44473
[root@k8s-master ~]# curl -I 10.0.0.13:44473    # the NodePort differs per run; use the port shown by kubectl get svc

3: Common k8s resources

3.1 The pod resource

A pod consists of at least two containers: the infrastructure (pause) container plus the business container(s).
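On a node this pairing is visible directly in docker; each business container sits next to a pause container started from the --pod-infra-container-image configured earlier, e.g.:

[root@k8s-node1 ~]# docker ps | grep pause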

Dynamic pod: the pod's yaml is fetched from etcd (via the api-server).

Static pod: kubelet reads the yaml file from a local directory and starts the pod itself.

Run on node1:
mkdir /etc/kubernetes/manifest
​
vim /usr/lib/systemd/system/kubelet.service
# Add one line to the startup parameters
--pod-manifest-path /etc/kubernetes/manifest \
​
systemctl daemon-reload 
systemctl restart kubelet.service
​
cd /etc/kubernetes/manifest/
​
vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  containers:
    - name: nginx
      image: nginx:1.13
      ports:
        - containerPort: 80
        
​
# Verify
[root@k8s-master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
nginx-6459cd46fd-hg2kq    1/1     Running   1          2d16h
nginx-6459cd46fd-ng9v6    1/1     Running   1          2d16h
oldboy-5478b985bc-6f8gz   1/1     Running   1          2d16h
static-pod-10.0.0.12      1/1     Running   0          21s
​

3.2 The secrets resource

Method 1:

kubectl create secret docker-registry harbor-secret --namespace=default  --docker-username=admin  --docker-password=a123456 --docker-server=blog.oldqiang.com
​
vi k8s_sa_harbor.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret
​
vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

Method 2:

kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=<email>
​
# Verify
[root@k8s-master ~]# kubectl get secrets 
NAME                       TYPE                                  DATA   AGE
default-token-vgc4l        kubernetes.io/service-account-token   3      2d19h
regcred                    kubernetes.io/dockerconfigjson        1      114s
​
[root@k8s-master ~]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  nodeName: 10.0.0.12
  imagePullSecrets:
    - name: regcred
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80

3.3 The configmap resource

vi /opt/81.conf
 server {
     listen       81;
     server_name  localhost;
     root         /html;
     index      index.html index.htm;
     location / {
     }
 }
​
kubectl create configmap 81.conf --from-file=/opt/81.conf
# Verify
kubectl get cm
​
vi k8s_deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
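Once the deployment is running, one way to confirm the configmap was mounted is to read the rendered file inside a pod (the pod name is illustrative; take it from kubectl get pod):

kubectl exec <nginx-pod-name> -- cat /etc/nginx/conf.d/81.conf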

4: Common k8s services

4.1 Deploying the DNS service

vi coredns.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
​
# Test
yum install bind-utils.x86_64 -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short
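An in-cluster test is also possible, assuming a busybox image can be pulled on the nodes:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default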

4.2 Deploying the dashboard service

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
​
vi kubernetes-dashboard.yaml
# Change the image address
image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# Change the service type to NodePort
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443
​
​
kubectl create -f kubernetes-dashboard.yaml
# Visit https://10.0.0.12:30001 with Firefox (it lets you accept the self-signed certificate)
​
vim dashboard_rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
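The dashboard login token for this admin ServiceAccount can then be read from its auto-generated secret (the admin-token- name prefix is assumed):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^admin-token/{print $1}')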

5: Network access in k8s

5.1 Mapping an external service into k8s

# Prepare the database
yum install mariadb-server -y
systemctl start mariadb
mysql_secure_installation
mysql> grant all on *.* to <user>@'%' identified by '123456';

# Delete mysql's rc and svc
kubectl delete rc mysql
kubectl delete svc mysql
​
# Create the endpoint and svc
[root@k8s-master yingshe]# cat mysql_endpoint.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
subsets:
- addresses:
  - ip: 10.0.0.13
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
​
[root@k8s-master yingshe]# cat mysql_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306  
  type: ClusterIP
​
# Re-visit tomcat/demo in the browser
# Verify
[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+

5.2 kube-proxy in ipvs mode

yum install conntrack-tools -y
yum install ipvsadm.x86_64 -y
​
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
--kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
--cluster-cidr 172.18.0.0/16 \
--hostname-override 10.0.0.12 \
--proxy-mode ipvs \
--logtostderr=false \
--v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
​
systemctl daemon-reload 
systemctl restart kube-proxy.service 
ipvsadm -L -n
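A quick check that the kernel side is in place; the ip_vs modules should be loaded once kube-proxy starts in ipvs mode:

lsmod | grep ip_vs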

5.3 ingress

6: Autoscaling in k8s

Horizontal pod autoscaling

Modify kube-controller-manager, adding:

--horizontal-pod-autoscaler-use-rest-clients=false
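With that flag set (and a metrics source such as heapster available, which this legacy flag implies), a CPU-based autoscaler can then be created against the earlier nginx deployment, e.g.:

kubectl autoscale deployment nginx --min=1 --max=4 --cpu-percent=80
kubectl get hpa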

7: Dynamic storage

cat nfs-client.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.13
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.13
            path: /data
​
​
vi nfs-client-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
​
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
​
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
​
vi nfs-client-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
​
Modify the PVC's configuration file, adding the storage-class annotation (a complete sketch follows this snippet):
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
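Put together, a complete claim using that annotation might look like this sketch (namespace, name, and size are illustrative):

vi test_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi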

8: Adding compute nodes

Services on a compute node: docker, kubelet, kube-proxy, flannel

9: Taints and tolerations

Taints are applied to nodes.

Taint effects:
NoSchedule
PreferNoSchedule
NoExecute

# Example: adding a taint
kubectl taint node 10.0.0.14  node-role.kubernetes.io=master:NoExecute
# Check
[root@k8s-master ~]# kubectl describe nodes 10.0.0.14|grep -i taint
Taints:             node-role.kubernetes.io=master:NoExecute
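The taint can be removed again by appending a minus to the effect:

kubectl taint node 10.0.0.14 node-role.kubernetes.io:NoExecute-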

Tolerations

# Added under the pod's spec
tolerations:
- key: "node-role.kubernetes.io"
  operator: "Exists"
  value: "master"
  effect: "NoExecute"