k8s service


With Kubernetes, you do not need to modify your application to use an unfamiliar service-discovery mechanism. Kubernetes gives Pods their own IP addresses, provides a single DNS name for a set of Pods, and can load-balance across them.

1. ClusterIP
A virtual IP is assigned automatically; the Service is only reachable from inside the cluster.

Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: nginx
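
Apply the manifest before querying it (the file name my-clusterip.yaml below is just an example):

$ kubectl apply -f my-clusterip.yaml
service/my-clusterip created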


$ kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP   6d
my-clusterip   ClusterIP   10.105.139.248   <none>        80/TCP    6m30s

From inside the cluster, the Service can be reached by IP or by name, and both are load-balanced (a name-based check is shown after the IP example below).

$ kubectl run test  --image=radial/busyboxplus  -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.

/ # 
/ # ls
bin      dev      etc      home     lib      lib64    linuxrc  media    mnt      opt      proc     root     run      sbin     sys      tmp      usr      var
/ # curl 10.105.139.248
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-c4rpg
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-x6xfg
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-c4rpg
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-c4rpg
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-tmk8r
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-c4rpg

DNS functionality:

$ kubectl get deployments.apps -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           6d1h
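
coredns serves the cluster.local zone, so the Service name can be resolved from inside any pod; the address returned should match the ClusterIP shown earlier (output format depends on the nslookup implementation in the image):

/ # nslookup my-clusterip.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      my-clusterip.default.svc.cluster.local
Address 1: 10.105.139.248 my-clusterip.default.svc.cluster.local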

How it works:
iptables proxy mode: in this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoints objects. For each Service it installs iptables rules that capture traffic to the Service's clusterIP and port and redirect it to one of the Service's backends; for each Endpoints object it installs iptables rules that select a backend. With many Services this consumes a lot of resources.
IPVS proxy mode: in ipvs mode, kube-proxy watches Kubernetes Services and Endpoints, calls the netlink interface to create IPVS rules accordingly, and periodically synchronizes the IPVS rules with the Kubernetes Services and Endpoints. This control loop ensures that the IPVS state matches the desired state. When a Service is accessed, IPVS directs traffic to one of the backend Pods.
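Note that ipvs mode only works if the IPVS kernel modules and the ipvsadm tool are present on every node; otherwise kube-proxy falls back to iptables mode. A minimal check on a RHEL/CentOS node (exact module names vary slightly with kernel version):

# yum install -y ipvsadm
# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
# lsmod | grep ip_vs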

Switching the proxy mode to ipvs:
kubectl edit cm kube-proxy -n kube-system   # edit the ConfigMap
Change: mode: "ipvs"
The edit does not take effect immediately; the kube-proxy pods have to be recreated.
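
The relevant fragment (inside the config.conf key of the ConfigMap) looks roughly like this after the edit; only the mode field changes, everything else stays as generated by kubeadm:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# ... other fields unchanged ...
mode: "ipvs"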

Recreate the pods:
# kubectl get pod -n kube-system -o wide | grep kube-proxy
kube-proxy-2dfrj                  1/1     Running   4          6d2h    192.168.213.30   server3   <none>           <none>
kube-proxy-fqtw5                  1/1     Running   9          6d2h    192.168.213.20   server2   <none>           <none>
kube-proxy-wk8n7                  1/1     Running   11         6d2h    192.168.213.10   server1   <none>           <none>
# kubectl get pod -n kube-system -o wide | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-2dfrj" deleted
pod "kube-proxy-fqtw5" deleted
pod "kube-proxy-wk8n7" deleted
# kubectl get pod -n kube-system -o wide | grep kube-proxy
kube-proxy-4c4rk                  1/1     Running   0          4s      192.168.213.10   server1   <none>           <none>
kube-proxy-k8tl9                  1/1     Running   0          10s     192.168.213.30   server3   <none>           <none>
kube-proxy-v4snh                  1/1     Running   0          6s      192.168.213.20   server2   <none>           <none>
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.213.10:6443          Masq    1      0          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.35:53               Masq    1      0          0         
  -> 10.244.0.36:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.35:9153             Masq    1      0          0         
  -> 10.244.0.36:9153             Masq    1      0          0         
TCP  10.105.139.248:80 rr
  -> 10.244.0.50:80               Masq    1      0          0         
  -> 10.244.1.46:80               Masq    1      0          0         
  -> 10.244.2.49:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.35:53               Masq    1      0          0         
  -> 10.244.0.36:53               Masq    1      0          0

Check the round-robin scheduling:


/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-7w5ck
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-sggdw
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-l9mgg
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-7w5ck
/ # curl 10.105.139.248/hostname.html
nginx-deployment-74f9595fbb-sggdw

2. NodePort
This type exposes the Service on a port of every node, so it can be reached from outside the cluster with load balancing:
Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: nodeport
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: nginx
  type: NodePort


$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        6d23h
nodeport     NodePort    10.108.66.203   <none>        80:30118/TCP   17
# curl 192.168.213.10:30118/hostname.html
nginx-deployment-74f9595fbb-7w5ck
# curl 192.168.213.10:30118/hostname.html
nginx-deployment-74f9595fbb-l9mgg
# curl 192.168.213.10:30118/hostname.html
nginx-deployment-74f9595fbb-f2k8h
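
Here Kubernetes picked 30118 automatically from the default NodePort range (30000-32767). If a fixed node port is preferred, it can be pinned in the manifest, for example:

  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30118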


3. ExternalName
This type maps the Service to an external domain name; the cluster DNS resolves the Service name to that domain as a CNAME.
Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: ex-svc
spec:
  type: ExternalName
  externalName: www.westos.org


$ dig -t A ex-svc.default.svc.cluster.local @10.96.0.1

; <<>> DiG 9.9.4-RedHat-9.9.4-37.el7 <<>> -t A ex-svc.default.svc.cluster.local @10.96.0.1
;; global options: +cmd
;; connection timed out; no servers could be reached
$ dig -t A ex-svc.default.svc.cluster.local@10.96.0.1

; <<>> DiG 9.9.4-RedHat-9.9.4-37.el7 <<>> -t A ex-svc.default.svc.cluster.local@10.96.0.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 20556
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;ex-svc.default.svc.cluster.local.\@10.96.0.1. IN A

;; AUTHORITY SECTION:
.			30	IN	SOA	a.root-servers.net. nstld.verisign-grs.com. 2020022300 1800 900 604800 86400

;; Query time: 42 msec
;; SERVER: 114.114.114.114#53(114.114.114.114)
;; WHEN: Sat Feb 22 22:39:40 PST 2020
;; MSG SIZE  rcvd: 147
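
Both queries above fail for understandable reasons: 10.96.0.1 is the kubernetes API Service, not the cluster DNS, and in the second command the missing space before @ makes dig treat the whole string as a hostname, so it falls back to the node's default resolver (114.114.114.114) and gets NXDOMAIN. Querying the cluster DNS Service instead (10.96.0.10, visible in the ipvsadm output earlier), from a node or from a pod, should return a CNAME record pointing at www.westos.org:

$ dig -t A ex-svc.default.svc.cluster.local @10.96.0.10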

A Service can also be given a public IP that is reachable from outside the cluster by specifying externalIPs:

apiVersion: v1
kind: Service
metadata:
  name: ex-2
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: nginx
  externalIPs:
  - 192.168.213.100
Access test:
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
ex-2         ClusterIP      10.108.15.9     192.168.213.100   80/TCP         12s
ex-svc       ExternalName   <none>          www.westos.org    <none>         18m
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        7d
nodeport     NodePort       10.108.66.203   <none>            80:30118/TCP   18h
$ ping 192.168.213.100
PING 192.168.213.100 (192.168.213.100) 56(84) bytes of data.
64 bytes from 192.168.213.100: icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from 192.168.213.100: icmp_seq=2 ttl=64 time=0.100 ms
^C
--- 192.168.213.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.072/0.086/0.100/0.014 ms
$ curl  192.168.213.100
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
$ curl  192.168.213.100
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
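
The external IP answers on the node itself because, with kube-proxy in ipvs mode, every Service IP (including externalIPs) is bound to the kube-ipvs0 dummy interface and gets a matching IPVS virtual server; this can be checked on a node, for example:

# ip addr show kube-ipvs0 | grep 192.168.213.100
# ipvsadm -ln | grep -A 3 192.168.213.100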