A Complete Record of Installing Kubernetes on Ubuntu
Installing Kubernetes
This is mainly a record of installing Kubernetes on two Ubuntu machines.
I started looking into k8s today and decided to set up a Kubernetes cluster across two machines.
Docker version: Docker version 17.03.2-ce, build f5ec1e2
Kubernetes version: Kubernetes 1.9
etcd version: 3.3.0
flannel version: v0.10.0
Of the two machines:
10.20.100.236 acts as both the Kubernetes master node and a worker node;
192.168.174.128 acts only as a worker node (a local VM on my Windows machine).
The master node needs to run: kube-apiserver, kube-controller-manager, kube-scheduler and etcd.
The worker nodes run: kubelet, kube-proxy, docker and flannel.
Below is the version compatibility between Kubernetes and Docker:
Kubernetes 1.9 <--Docker 1.11.2 to 1.13.1 and 17.03.x
Kubernetes 1.8 <--Docker 1.11.2 to 1.13.1 and 17.03.x
Kubernetes 1.7 <--Docker 1.10.3, 1.11.2, 1.12.6
Kubernetes 1.6 <--Docker 1.10.3, 1.11.2, 1.12.6
Kubernetes 1.5 <--Docker 1.10.3, 1.11.2, 1.12.3
Next, download the corresponding etcd, flannel and Kubernetes files.
wget https://dl.k8s.io/v1.9.11/kubernetes-server-linux-amd64.tar.gz #the Kubernetes server package
wget https://dl.k8s.io/v1.9.11/kubernetes-client-linux-amd64.tar.gz #the client package
wget https://dl.k8s.io/v1.9.11/kubernetes-node-linux-amd64.tar.gz #the node package
If you cannot find the right version, look under the releases at https://github.com/kubernetes/kubernetes.
After extracting everything it turns out that the server package alone already contains everything you need.
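For reference, the official server tarball unpacks into a kubernetes/ directory, and the relevant binaries all sit under server/bin, which is a quick way to confirm this:
tar -xzvf kubernetes-server-linux-amd64.tar.gz
# kube-apiserver, kube-controller-manager, kube-scheduler, kubelet,
# kube-proxy and kubectl are all bundled here
ls kubernetes/server/bin/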
wget https://github.com/coreos/etcd/releases/download/v3.3.0/etcd-v3.3.0-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
Next, generate the corresponding TLS certificates and keys.
1. Install CFSSL
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
2. Create the CA configuration file
mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
Based on the format of config.json, create the ca-config.json file shown below.
The expiry is set to 87600h (10 years).
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
3. Create the CA certificate signing request:
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
4. Generate the CA certificate and private key:
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
5. Create the kubernetes certificate (I still have some doubts here; to be looked into later.)
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.20.100.236",
    "192.168.174.128",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
6. Generate the kubernetes certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
ls kubernetes*
kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
7. Create the admin certificate
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
8. Generate the admin certificate and private key:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
9. Create the kube-proxy certificate:
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
10. Generate the kube-proxy client certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem
11. Verify the certificates
openssl x509 -noout -text -in kubernetes.pem
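Besides openssl, the cfssl-certinfo tool installed earlier can also dump a certificate, for example to confirm the hosts (SANs) from kubernetes-csr.json made it into the certificate:
# Inspect the certificate contents with cfssl-certinfo
cfssl-certinfo -cert kubernetes.pem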
12. Distribute the certificates:
Copy the generated certificates and key files (the .pem files) to /etc/kubernetes/ssl on every machine for later use:
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
scp *.pem [email protected]:/etc/kubernetes/ssl
scp *.pem [email protected]:/etc/kubernetes/ssl
Then I realized that configuring all of this was too slow. After some investigation I decided to skip these authentication files for now.
So none of the above was actually applied; the real work starts below.
Installing all the components
First prepare a working directory; all the downloads and operations below are performed in this directory.
cd /mnt/
mkdir k8s
cd k8s
sudo swapoff -a
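Note that swapoff -a only disables swap until the next reboot; to keep it off permanently you can also comment out the swap entry in /etc/fstab (a sketch, back the file up first):
# Comment out any swap entry in /etc/fstab so swap stays off after reboot
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab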
Installing etcd
tar zxvf etcd-v3.3.0-linux-amd64.tar.gz
sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /etc/etcd/
sudo vim /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.20.100.236:2379"
Create the systemd unit file
sudo vim /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target
[Service]
User=sunht
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/mnt/k8s/etcd-v3.3.0-linux-amd64/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target
Start the service
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
Check the service and its port
sudo systemctl status etcd
netstat -apn | grep 2379
Create the etcd network entry
etcdctl set /coreos.com/network/config '{ "Network": "172.17.0.0/16" }'
This etcd network is what flannel uses to allocate subnets to docker. Docker's current bridge sits on the 172.17.0.1 gateway, so I simply reused that range here.
If you deploy an etcd cluster, the steps above have to be repeated on every etcd server. I am only running a standalone instance, so my etcd service is done.
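To confirm etcd is healthy and the network key was written, it can be queried back (etcdctl in etcd 3.3 talks to the v2 API by default, matching the set command above):
# Check etcd health and read back the flannel network config
etcdctl cluster-health
etcdctl get /coreos.com/network/config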
Kubernetes common configuration
Create the Kubernetes configuration directory
sudo mkdir /etc/kubernetes
sudo vim /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.20.100.236:6060"
Port 8080 was already taken, so I went with 6060 instead; we will see whether anything else needs adjusting later because of it.
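If you want to find out what is actually sitting on 8080 before picking another port, the same netstat used above works:
# See which process is listening on 8080
sudo netstat -tlnp | grep ':8080'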
Likewise, configure the kube-apiserver service on the master host.
tar -xzvf kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-client-linux-amd64.tar.gz
tar -xzvf kubernetes-node-linux-amd64.tar.gz
sudo vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=6060"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.20.100.236:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""
Create the systemd unit file
sudo vim /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
Wants=etcd.service
[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-apiserver \ ## note: this must be kube-apiserver; copy-paste rather than typing it by hand, a typo here took me ages to find
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-controller-manager service
sudo vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
Create the systemd unit file
sudo vim /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
After=kube-apiserver.service
Requires=etcd.service
Requires=kube-apiserver.service
[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Configure the kube-scheduler service
Create the kube-scheduler configuration file
sudo vim /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""
Create the systemd unit file
sudo vim /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
User=root
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_MASTER
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start the services on the Kubernetes master node
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
After they start successfully:
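A quick sanity check that the control plane is up (the /healthz path below is served on the apiserver's insecure port, 6060 in this setup):
# Confirm the three services are running and the API answers
sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler
curl http://127.0.0.1:6060/healthz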
Configuring Kubernetes on the node
/etc/kubernetes/config is the same as on the master.
Flannel configuration
Create the configuration directory and file
sudo vim /etc/default/flanneld.conf
FLANNEL_ETCD_ENDPOINTS="http://10.20.100.236:2379"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
Here FLANNEL_ETCD_PREFIX is exactly the etcd network key configured earlier.
Create the systemd unit file
sudo vim /lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
After=etcd.service
Before=docker.service
[Service]
User=root
EnvironmentFile=/etc/default/flanneld.conf
ExecStart=/mnt/k8s/flanneld \
-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
-etcd-prefix=${FLANNEL_ETCD_PREFIX} \
$FLANNEL_OPTIONS
ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Then start the flanneld service
sudo systemctl daemon-reload
sudo systemctl enable flanneld
sudo systemctl start flanneld
Check whether the service started
sudo systemctl status flanneld
● flanneld.service - Flanneld
Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-27 14:54:17 HKT; 16s ago
Docs: https://github.com/coreos/flannel
Process: 6840 ExecStartPost=/usr/bin/flannel/mk-docker-opts.sh -k DOCKER_OPTS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
Main PID: 6814 (flanneld)
Tasks: 23
Memory: 7.4M
CPU: 113ms
CGroup: /system.slice/flanneld.service
└─6814 /mnt/k8s/flanneld -etcd-endpoints=http://10.20.100.236:2379 -etcd-prefix=/coreos.com/network
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.328085 6814 main.go:505] Defaulting external address to interface address (10.20.100.236)
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.328186 6814 main.go:235] Created subnet manager: Etcd Local Manager with Previous Subnet: 172.17.40.0/24
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.328194 6814 main.go:238] Installing signal handlers
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.329064 6814 main.go:353] Found network config - Backend type: udp
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.343361 6814 local_manager.go:147] Found lease (172.17.40.0/24) for current IP (10.20.100.236), reusing
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.352201 6814 main.go:300] Wrote subnet file to /run/flannel/subnet.env
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.352214 6814 main.go:304] Running backend.
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.352311 6814 udp_network_amd64.go:100] Watching for new subnet leases
Mar 27 14:54:17 ubuntu2 flanneld[6814]: I0327 14:54:17.360031 6814 main.go:396] Waiting for 22h59m59.983567988s to renew lease
Mar 27 14:54:17 ubuntu2 systemd[1]: Started Flanneld.
## Installing and configuring Docker
sudo apt-get install docker.io
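The docker.io package from the Ubuntu archive is not necessarily 17.03, so it is worth checking the installed version against the compatibility table above:
# Verify the installed Docker version is one supported by Kubernetes 1.9
docker --version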
Make flannel drive the docker network
Modify docker's systemd configuration with a drop-in file.
sudo mkdir /lib/systemd/system/docker.service.d
sudo vim /lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
Restart the docker service.
sudo systemctl daemon-reload
sudo systemctl restart docker
Check whether docker has picked up the flannel network.
sudo ps -ef | grep docker
root 7039 1 0 14:58 ? 00:00:00 /usr/bin/dockerd -H fd:// --bip=172.17.40.1/24 --ip-masq=true --mtu=1472
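You can also look at the subnet flannel handed out and confirm that docker0 has moved onto it (the subnet file path matches the mk-docker-opts.sh arguments above):
# Flannel writes the allocated subnet here; docker0 should now be inside it
cat /run/flannel/subnet.env
ip addr show docker0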
Configure the kubelet service
Create the kubelet data directory
sudo mkdir /var/lib/kubelet
Create the kubelet configuration file
The kubelet's dedicated configuration file is /etc/kubernetes/kubelet
sudo vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=10.20.100.236"
KUBELET_PORT="--kubelet-port=10250"
#KUBELET_API_SERVER="--api-servers=http://10.20.100.236:6060"
KUBELET_API_SERVER="--kubeconfig=/var/lib/kubelet/kubeconfig"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.RedHat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"
Create the systemd unit file
sudo vim /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Start the kubelet service
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
journalctl -xe
journalctl -xefu kubelet ## I got stuck at this step, with the error exitCode 2 invalidArgument repeating over and over
Since v1.8 the kubelet no longer supports the --api-servers flag, so how does a newer kubelet talk to the api-server? Through the --kubeconfig flag, which points at a configuration file. (This is a big pitfall: if you stick with the old style of configuration, the master will never see this node.)
The /etc/kubernetes/kubelet configuration file gets an entry like this:
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
### Edit the configuration file /var/lib/kubelet/kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: http://10.20.100.236:6060
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []
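Instead of writing this file by hand, it can also be generated with kubectl config; a sketch, where the cluster and context names simply mirror the file above:
# Generate /var/lib/kubelet/kubeconfig with kubectl instead of editing it manually
kubectl config set-cluster myk8s --server=http://10.20.100.236:6060 --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-context myk8s-context --cluster=myk8s --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config use-context myk8s-context --kubeconfig=/var/lib/kubelet/kubeconfig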
Configuration on the local VM:
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.174.128"
KUBELET_PORT="--kubelet-port=10250"
#KUBELET_API_SERVER="--api-servers=http://10.20.100.236:6060"
KUBELET_API_SERVER="--kubeconfig=/var/lib/kubelet/kubeconfig"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.RedHat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true"
The systemd unit on the local VM:
vi /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/mnt/k8s/kubernetes/server/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Configure the kube-proxy service
Create the kube-proxy configuration file
sudo vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS=""
Create the systemd unit file
sudo vim /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/mnt/k8s/kubernetes/server/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-proxy
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
Check the node status
Run kubectl get node to check the node status. When every node shows Ready, the nodes have successfully connected to the master; otherwise go to that node and find out why, for example by reading the kubelet logs with journalctl -u kubelet.service.
$ kubectl get node
NAME STATUS AGE
192.168.56.160 Ready d
192.168.56.161 Ready d ## this sample output is copied from someone else's guide
Because the apiserver here is configured on port 6060, one extra step is needed: point kubectl at port 6060 (see the kubectl config sketch after the output below for making this permanent).
kubectl --server=http://10.20.100.236:6060 get nodes
NAME STATUS ROLES AGE VERSION
10.20.100.236 Ready <none> 23m v1.9.11
192.168.174.128 Ready <none> 25m v1.9.11
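To avoid passing --server on every command, the default kubeconfig (~/.kube/config) can be set up the same way, for example:
# Point the default kubectl context at the apiserver on port 6060
kubectl config set-cluster myk8s --server=http://10.20.100.236:6060
kubectl config set-context myk8s-context --cluster=myk8s
kubectl config use-context myk8s-context
kubectl get nodes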
## Testing Kubernetes
Test whether Kubernetes was installed successfully.
Write a YAML file
On the Kubernetes master, create rc_nginx.yaml, which defines an nginx ReplicationController.
vim rc_nginx.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
## Creating the pods
Run kubectl create to create the ReplicationController. It is configured with two replicas, and our environment has two Kubernetes nodes, so it should end up running one pod on each node.
Note: this step can take a long time, because it pulls the nginx image from the network, as well as the essential pod-infrastructure image.
kubectl --server=http://10.20.100.236:6060 create -f ./rc_nginx.yaml
Check the status
Run kubectl get pod and kubectl get rc to check the pod and RC status. At first the pods may sit in ContainerCreating; once the required images are downloaded the containers are created and the pods should show Running.
kubectl --server=http://10.20.100.236:6060 get rc
NAME DESIRED CURRENT READY AGE
nginx 2 2 0 2m
The corresponding pod status:
kubectl --server=http://10.20.100.236:6060 get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-mk58l 0/1 ContainerCreating 0 5m <none> 10.20.100.236
nginx-xbx2p 0/1 ContainerCreating 0 5m <none> 192.168.174.128
Later it changed to:
NAME READY STATUS RESTARTS AGE IP NODE
nginx-mk58l 1/1 Running 1 42m 172.17.40.5 10.20.100.236
nginx-xbx2p 1/1 Running 0 42m 172.17.35.2 192.168.174.128
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
96ed1c12b646 nginx@sha256:c8a861b8a1eeef6d48955a6c6d5dff8e2580f13ff4d0f549e082e7c82a8617a2 "nginx -g 'daemon ..." 9 minutes ago Up 9 minutes k8s_nginx_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_1
00f082a41e57 registry.access.RedHat.com/rhel7/pod-infrastructure:latest "/usr/bin/pod" 9 minutes ago Up 9 minutes k8s_POD_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_1
8d3d4d2c692b nginx@sha256:c8a861b8a1eeef6d48955a6c6d5dff8e2580f13ff4d0f549e082e7c82a8617a2 "nginx -g 'daemon ..." 9 minutes ago Exited (0) 9 minutes ago k8s_nginx_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_0
54dd5cafa1cc registry.access.RedHat.com/rhel7/pod-infrastructure:latest "/usr/bin/pod" 10 minutes ago Exited (0) 9 minutes ago k8s_POD_nginx-mk58l_default_13d7d944-512c-11e9-b685-da6056ccded9_0_20ebccfa
Now we wait; no idea how long the deployment will take.
Well, it finally all worked!
Then, after thinking it over, I shut down the local VM and checked again a while later:
kubectl --server=http://10.20.100.236:6060 get rc
NAME DESIRED CURRENT READY AGE
nginx 2 2 2 1h
kubectl --server=http://10.20.100.236:6060 get pods
NAME READY STATUS RESTARTS AGE
nginx-dbfr4 1/1 Running 0 17m
nginx-mk58l 1/1 Running 1 1h
nginx-xbx2p 1/1 Unknown 0 1h
Impressively, it automatically rescheduled so that both replicas run on the only remaining machine.
Next, try deleting the powered-off local VM from the cluster:
kubectl --server=http://10.20.100.236:6060 get nodes
NAME STATUS ROLES AGE VERSION
10.20.100.236 Ready <none> 1h v1.9.11
192.168.174.128 NotReady <none> 1h v1.9.11
kubectl --server=http://10.20.100.236:6060 delete node 192.168.174.128
node "192.168.174.128" deleted
kubectl --server=http://10.20.100.236:6060 get nodes
NAME STATUS ROLES AGE VERSION
10.20.100.236 Ready <none> 1h v1.9.11
Good, a node has been deleted successfully.
References
1. A detailed guide to manually installing and deploying Kubernetes on Ubuntu (a very good guide)
2. Manual installation of Kubernetes
A few other reference sites also helped resolve problems that came up during configuration.