The Complete Workflow from Docker Development to K8s Deployment
The complete workflow from development to deployment
Assuming we do not use an image registry, we build the image on machine A and deploy it on machine B:
- Build the image
- Export the image
- Deploy to K8s
- One-command deployment with Helm
Build the Docker image on machine A
- Build the image
- Deploy the image
After finishing development locally, we package the application into a Docker image, and then deploy it to the k8s cluster step by step.
Write the Dockerfile
vi Dockerfile
FROM busybox:latest
LABEL maintainer="wangligang <[email protected]>"
RUN echo 'hello docker'
Build the image
docker build -t image_name -f Dockerfile .
-t specifies the image name
-f specifies the path to the Dockerfile
List the image
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
image_name latest 1548739e1734 5s ago 20KB
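Before exporting, a quick smoke test confirms that the image actually runs; the echo argument below is only an illustration (the RUN echo in the Dockerfile happens at build time, not at run time):
docker run --rm image_name echo 'image runs fine'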
Export the image
docker save image_name > image_name.tar
# copy the image to machine B
scp image_name.tar [email protected]:/share/
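For a large tarball it is worth comparing a checksum on both machines to make sure the copy was not truncated (shasum ships with macOS; sha256sum is the Linux equivalent):
shasum -a 256 image_name.tar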
Deploy the image into minikube on machine B
Install and start minikube
brew install minikube
minikube start
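Before going further, confirm that the cluster is up and that kubectl can talk to it:
minikube status
kubectl get nodes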
Use NFS as persistent storage for minikube
In a test environment the contents of minikube are lost on every restart, so we use NFS
On macOS
vi /etc/exports
write:
/share/dir -maproot=root -network 192.168.0.0 -mask 255.255.0.0
Enable nfs
sudo nfsd enable
sudo nfsd start
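Two quick checks confirm that the export is syntactically valid and actually published (both commands ship with macOS):
sudo nfsd checkexports
showmount -e localhost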
Use nfs inside minikube
minikube ssh
su -
mkdir -p /export/servers
mount -t nfs hostname:/share/dir /export/servers
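A simple end-to-end check is to look at the mount and write a file through it, then verify it appears under /share/dir on the Mac (the file name is arbitrary):
df -h /export/servers
touch /export/servers/nfs_test_file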
Write deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - image: image_name
        imagePullPolicy: "Never"
        name: demo
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
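The manifest can be validated before touching the cluster; a client-side dry run catches indentation and schema mistakes early:
kubectl apply --dry-run=client -f deployment.yaml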
Write service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo-service
  name: demo-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: demo
status:
  loadBalancer: {}
Write ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: test.demo.com
    http:
      paths:
      - backend:
          serviceName: demo-service
          servicePort: 80
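Note that extensions/v1beta1 was removed in Kubernetes 1.22, so on newer clusters the same Ingress must be written against networking.k8s.io/v1. Once the manifests are applied and the minikube ingress addon is enabled (see the issues section at the end), a quick test that avoids editing /etc/hosts is to send the Host header by hand, using the host from the rule above:
curl -H "Host: test.demo.com" http://$(minikube ip)/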
Write pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "demo-data-pv"
  labels:
    name: demo-data-pv
    release: stable
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/export/servers"
Write pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  namespace: default
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  selector:
    matchLabels:
      name: demo-data-pv
      release: stable
status: {}
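After the next section applies these two manifests, the claim should end up Bound to the volume; if it stays Pending, the storageClassName or the selector labels usually do not match:
kubectl get pv,pvc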
Load the Docker image
docker load --input /share/image_name.tar
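Note that this loads the image into machine B's Docker daemon, which is not necessarily the daemon minikube uses. If the Pod later fails with ErrImageNeverPull, point the docker CLI at minikube's daemon first and load again (the same trick listed in the issues section below):
eval $(minikube docker-env)
docker load --input /share/image_name.tar
docker images | grep image_name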
Deploy with kubectl
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
Check the deployment status in k8s
kubectl get pods
kubectl get deployments
kubectl get services
# view logs
kubectl logs -f --tail=10 pod_namexxxxxx
# exec into the pod (busybox has no bash, so use sh)
kubectl exec -it pod_namexxxxxx -- /bin/sh
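If the Pod is Running but nothing answers on the Service, check whether the Service actually selected the Pod and whether the Ingress picked up a backend (the names are the ones defined in the manifests above):
kubectl get endpoints demo-service
kubectl describe ingress demo-ingress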
One-command deployment with Helm
helm create demo-helm
cd demo-helm/templates
rm -rf *
cp /share/yamls/deployment.yaml .
cp /share/yamls/service.yaml .
cp /share/yamls/pv.yaml .
cp /share/yamls/pvc.yaml .
cp /share/yamls/ingress.yaml .
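Before installing, the chart can be rendered locally to make sure the copied manifests (plus the ConfigMap template from the tip below) still produce valid YAML; run this from the directory that contains demo-helm:
cd ../..
helm lint demo-helm
helm template demo-helm demo-helm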
Clean up the resources deployed with kubectl in the previous step
kubectl delete deployment --all
kubectl delete service --all
kubectl delete ingress --all
kubectl delete pvc --all
kubectl delete pv --all
One-command release
helm install demo-helm demo-helm
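To confirm the release went through and the same resources came back up:
helm list
helm status demo-helm
kubectl get pods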
A small Helm tip
If some project configuration needs to end up in a ConfigMap, you can put a configuration file such as conf.properties in the demo-helm directory and write a configmap.yaml under templates/ as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
{{- range .Files.Lines "conf.properties" }}
  {{ . }}
{{- end }}
The application can then use the variables from the ConfigMap.
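For the values to actually reach the application, the container still has to reference the ConfigMap. Assuming conf.properties holds key: value lines as rendered above, one common way is envFrom in the Deployment's container spec; this is only a sketch, not part of the original chart:
containers:
- image: image_name
  name: demo
  envFrom:
  - configMapRef:
      name: demo-config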
Some issues
- The minikube environment cannot see docker images built on the host
eval $(minikube docker-env)
- The Deployment cannot read variable values from a ConfigMap in the same namespace
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "string", expected "map";
Use kubectl create configmap coding-config --from-env-file=coding-config.properties
or write it as below, noting that numbers must be quoted
apiVersion: v1
kind: ConfigMap
metadata:
  name: coding-config
data:
  db_host: 192.168.0.45
  db_port: "3306"
  db_name: test
- If the Deployment specifies command, it overrides the ENTRYPOINT in the Dockerfile
- Redis cannot write: MISCONF Redis is configured to save RDB snapshots
Fix it with sysctl vm.overcommit_memory=1
- minikube does not support ingress when it runs on the docker driver; start it with a VM driver instead:
minikube start --vm=true --driver=hyperkit --memory=4g
- Ingress cannot be used because it is disabled by default in minikube; enable it with
minikube addons enable ingress
- When enabling remote access to mysql, don't forget to specify the password
grant all on *.* to 'root'@'%' identified by '123456';
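Note that GRANT ... IDENTIFIED BY only works up to MySQL 5.7; on MySQL 8.0 the account has to be created first and then granted (same user and password as above, shown only for completeness):
CREATE USER 'root'@'%' IDENTIFIED BY '123456';
GRANT ALL ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;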