
LVS cluster architecture: how the NAT model works.


How LVS implements load balancing.
1: Why load balancing?
In a real production environment, once a single server can no longer handle the day-to-day traffic, load balancing is needed to spread user requests as evenly as possible across the back-end cluster nodes that serve them. It also removes the single point of failure of one server and so improves the user experience.

2: How load balancing is implemented:

  Hardware: F5 BIG-IP, among others
  Software: LVS, HAProxy, Nginx, among others
  This article focuses on LVS.

3: A brief introduction to LVS:

The LVS load balancer forwards each request to one host in the back-end cluster, chosen by a scheduling algorithm, based on the packet's destination address and port.
The two LVS working modes discussed here are the NAT model and the DR model.
LVS scheduling algorithms are either static or dynamic:
Static algorithms:
 RR: round robin
 WRR: weighted round robin
 SH: source hash; a session-persistence mechanism that always schedules requests from the same client IP to the same RS
 DH: destination hash; requests from any host for the same destination are always sent to the same RS

Dynamic algorithms:
 each real server is assigned an overhead (load) value, and the server with the smallest overhead is selected:
        LC: least connection
            overhead = active*256 + inactive
        WLC: weighted LC
            overhead = (active*256 + inactive)/weight
        SED: shortest expected delay; favors the real servers with the largest weights
            overhead = (active+1)*256/weight
        NQ: never queue; a refinement of SED: each server is first handed one request in turn, after which scheduling follows the SED calculation
        LBLC: locality-based LC; essentially a dynamic DH algorithm, used to schedule cache servers in forward-proxy scenarios
        LBLCR: locality-based least-connection with replication; LBLC plus replication between cache servers
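
Any of these scheduler names can be passed to ipvsadm with -s when the virtual service is defined. A minimal sketch (the 1.1.1.1 address is only a placeholder for illustration; the real lab addresses are configured later in this article):

# define a virtual service using weighted round robin; other valid names
# include rr, sh, dh, lc, wlc, sed, nq, lblc and lblcr
ipvsadm -A -t 1.1.1.1:80 -s wrr
# add two real servers in NAT mode (-m) with different weights (-w)
ipvsadm -a -t 1.1.1.1:80 -r 192.168.20.7:80 -m -w 1
ipvsadm -a -t 1.1.1.1:80 -r 192.168.20.8:80 -m -w 3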

4: How LVS works:

LVS load balancing is implemented inside the Linux kernel (IPVS). When configuring LVS we do not program the in-kernel IPVS directly; instead we use its userspace management tool, ipvsadm.
The LVS load balancer accepts every inbound client request and, according to the scheduling algorithm, decides which cluster node will handle it.
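
A quick way to confirm that the kernel side is in place and to inspect the current rules (assuming ipvsadm is already installed, as shown later in this article):

# the ip_vs kernel module is normally loaded automatically the first time ipvsadm adds a rule
lsmod | grep ip_vs
# list the configured virtual services and real servers numerically
ipvsadm -L -n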

5: The addresses involved in LVS:

Virtual IP address (VIP):   the IP address that provides the service to clients (configured on the load balancer)
Real IP address (RIP):      the IP address of a cluster node (real server)
Director IP address (DIP):  the load balancer's own IP on the physical NIC facing the real servers, used to communicate with the back-end network
Client IP address (CIP):    the IP address of the requesting client host

6: The LVS NAT working mode:
Using network address translation, the director (LB) rewrites the destination address of the request packet and, according to the scheduling algorithm, forwards it to one of the back-end real servers. The real server processes the request and sends its response back through the director, which rewrites the source address before the packet is returned to the client.
(Figure: LVS NAT model packet flow.)
Step-by-step:

1: The client (CIP 10.10.0.1) sends a request whose destination address is the VIP (1.1.1.1:80).
2: The packet reaches the LB, which rewrites the destination address to one of the back-end real servers (e.g. RIP 192.168.1.1:80).
3: The real server receives the request and sends its reply (source address 192.168.1.1:80, destination the CIP; its default gateway must therefore point at the LB) back through the LB.
4: The LB rewrites the source address of the reply to the VIP (1.1.1.1:80), so the client never sees the RIP.
5: The LB returns the packet to the requesting client, completing the exchange.
Because every packet in both directions passes through the LB, it comes under heavy load when web traffic is high; a single director in NAT mode typically supports roughly 10-20 real servers. The advantage is that both the IP and the port can be translated, e.g. 10.10.0.1 -> 1.1.1.1:80 -> 192.168.1.1:8080.
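
The port mapping mentioned above only requires giving the real server a different port when it is added. A minimal sketch, reusing the placeholder addresses from the flow above:

# the virtual service accepts connections on port 80 ...
ipvsadm -A -t 1.1.1.1:80 -s rr
# ... while the real server actually listens on 8080 (NAT mode, -m)
ipvsadm -a -t 1.1.1.1:80 -r 192.168.1.1:8080 -m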

LVS NAT model configuration:

LB / director (CentOS 6)           back-end real servers (CentOS 7)
node3 (10.5.100.94)                node1 (10.5.100.207), node2 (10.5.100.208)
node3 second NIC (192.168.20.1)    node1 second NIC (192.168.20.8), node2 second NIC (192.168.20.7)

1: Taking a web service as the example, configure the web service on the two back-end real servers.
Node1 (real server): add a second NIC attached to a custom (host-only) network.

[aaa@qq.com ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno33554984
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=eno33554984
DEVICE=eno33554984
ONBOOT=yes
IPADDR=192.168.20.8
PREFIX=24
GATEWAY=192.168.20.1    # the gateway must point at the director's NIC on this network (the DIP)

[aaa@qq.com ~]# systemctl  restart  network
[aaa@qq.com ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:d0:83:ae brd ff:ff:ff:ff:ff:ff
    inet 10.5.100.207/24 brd 10.5.100.255 scope global dynamic eno16777736
       valid_lft 587568sec preferred_lft 587568sec
    inet6 fe80::20c:29ff:fed0:83ae/64 scope link 
       valid_lft forever preferred_lft forever
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:d0:83:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.8/24 brd 192.168.20.255 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fed0:83b8/64 scope link 
       valid_lft forever preferred_lft forever
[aaa@qq.com ~]# 
[aaa@qq.com ~]# yum install httpd -y
[aaa@qq.com ~]# echo "<h1>this is node1</h1>" > /var/www/html/index.html
[aaa@qq.com ~]# systemctl restart httpd
[aaa@qq.com ~]# ss -tnl
State       Recv-Q Send-Q                                        Local Address:Port                                          Peer Address:Port 
LISTEN      0      128                                                       *:22                                                       *:*     
LISTEN      0      100                                               127.0.0.1:25                                                       *:*     
LISTEN      0      128                                                      :::80                                                      :::*     
LISTEN      0      128                                                      :::22                                                      :::*     
LISTEN      0      100                                                     ::1:25                                                      :::*     
LISTEN      0      128                                                      :::443                                                     :::*     
[aaa@qq.com ~]# 
[aaa@qq.com ~]# curl http://10.5.100.207    # test the web service on both NICs
<h1>this is node1</h1>
[aaa@qq.com ~]# curl http://192.168.20.8
<h1>this is node1</h1>
[aaa@qq.com ~]# 
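
Since the real server's replies must travel back through the director, it is worth confirming that node1's default route really points at 192.168.20.1. A minimal check (if the first, DHCP-configured NIC installed its own default route, that route would have to be removed or overridden):

# the default route should go via the director's DIP
ip route show default
# expected: default via 192.168.20.1 dev eno33554984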

Node2 (real server): add a second NIC attached to the same custom network.

[aaa@qq.com ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens36
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
NAME=ens36
DEVICE=ens36
ONBOOT=yes
IPADDR=192.168.20.7
PREFIX=24
GATEWAY=192.168.20.1

[aaa@qq.com ~]# systemctl restart network
[aaa@qq.com ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ad:af:e0 brd ff:ff:ff:ff:ff:ff
    inet 10.5.100.208/24 brd 10.5.100.255 scope global noprefixroute dynamic ens33
       valid_lft 691178sec preferred_lft 691178sec
    inet6 fe80::a425:d3b4:3c87:c428/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ad:af:ea brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.7/24 brd 192.168.20.255 scope global noprefixroute ens36
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fead:afea/64 scope link 
       valid_lft forever preferred_lft forever
[aaa@qq.com ~]# yum install httpd -y
[aaa@qq.com ~]# echo "<h1>this is node2</h1>" > /var/www/html/index.html
[aaa@qq.com ~]# systemctl restart httpd
[aaa@qq.com ~]# ss -tnl
State       Recv-Q Send-Q                                        Local Address:Port                                          Peer Address:Port 
LISTEN      0      128                                                       *:22                                                       *:*     
LISTEN      0      100                                               127.0.0.1:25                                                       *:*     
LISTEN      0      128                                                      :::80                                                      :::*     
LISTEN      0      128                                                      :::22                                                      :::*     
LISTEN      0      100                                                     ::1:25                                                      :::*     
LISTEN      0      128                                                      :::443                                                     :::*     
[aaa@qq.com ~]# 
[aaa@qq.com ~]# curl http://10.5.100.208    # test the web service on both NICs
<h1>this is node2</h1>
[aaa@qq.com ~]# curl http://192.168.20.7
<h1>this is node2</h1>
[aaa@qq.com ~]# 

The two real servers are now configured; next, configure the load balancer.
Node3: add a second NIC for the connection to the real servers, and install ipvsadm, the userspace tool for managing the in-kernel IPVS.

[aaa@qq.com ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
HWADDR="00:0c:29:c3:9e:71"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.20.1"
NETMASK="255.255.255.0"
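
After editing the file, restart the network service so eth1 comes up with the new address (standard CentOS 6 init scripts assumed):

# re-read the ifcfg files and bring eth1 up
service network restart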
[aaa@qq.com ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c3:9e:67 brd ff:ff:ff:ff:ff:ff
    inet 10.5.100.94/24 brd 10.5.100.255 scope global eth0
    inet6 fe80::20c:29ff:fec3:9e67/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c3:9e:71 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.1/24 brd 192.168.20.255 scope global eth1
    inet6 fe80::20c:29ff:fec3:9e71/64 scope link 
       valid_lft forever preferred_lft forever
[aaa@qq.com ~]# 
[aaa@qq.com ~]# ping 192.168.20.7
PING 192.168.20.7 (192.168.20.7) 56(84) bytes of data.
64 bytes from 192.168.20.7: icmp_seq=1 ttl=64 time=1.12 ms
64 bytes from 192.168.20.7: icmp_seq=2 ttl=64 time=0.710 ms
^C
--- 192.168.20.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1679ms
rtt min/avg/max/mdev = 0.710/0.916/1.123/0.208 ms
[aaa@qq.com ~]# ping 192.168.20.7
PING 192.168.20.7 (192.168.20.7) 56(84) bytes of data.
64 bytes from 192.168.20.7: icmp_seq=1 ttl=64 time=0.801 ms
64 bytes from 192.168.20.7: icmp_seq=2 ttl=64 time=1.01 ms
64 bytes from 192.168.20.7: icmp_seq=3 ttl=64 time=0.280 ms
64 bytes from 192.168.20.7: icmp_seq=4 ttl=64 time=1.34 ms
^C
--- 192.168.20.7 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3150ms
rtt min/avg/max/mdev = 0.280/0.859/1.346/0.387 ms
[aaa@qq.com ~]# ping 192.168.20.8
PING 192.168.20.8 (192.168.20.8) 56(84) bytes of data.
64 bytes from 192.168.20.8: icmp_seq=1 ttl=64 time=3.09 ms
64 bytes from 192.168.20.8: icmp_seq=2 ttl=64 time=0.273 ms
64 bytes from 192.168.20.8: icmp_seq=3 ttl=64 time=0.367 ms
64 bytes from 192.168.20.8: icmp_seq=4 ttl=64 time=0.272 ms
^C
--- 192.168.20.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3278ms
rtt min/avg/max/mdev = 0.272/1.002/3.096/1.209 ms
[aaa@qq.com ~]# 
[aaa@qq.com ~]# cat /proc/sys/net/ipv4/ip_forward   # IP forwarding must be enabled (value 1) so the LB can forward between the VIP and RIP networks
1
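
If the value were 0, forwarding could be enabled on the fly and made persistent across reboots; a minimal sketch for CentOS 6:

# enable immediately for the running kernel
sysctl -w net.ipv4.ip_forward=1
# make it persistent: set net.ipv4.ip_forward = 1 in /etc/sysctl.conf, then re-apply
sysctl -p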

Define the web cluster service on the director.
Node3:

[aaa@qq.com ~]# yum install ipvsadm -y
[aaa@qq.com ~]# ipvsadm -A -t 10.5.100.94:80 -s rr
[aaa@qq.com ~]# ipvsadm -a -t 10.5.100.94:80 -r 192.168.20.7 -m 
[aaa@qq.com ~]# ipvsadm -a -t 10.5.100.94:80 -r 192.168.20.8 -m 
[aaa@qq.com ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.5.100.94:http rr
  -> 192.168.20.7:http            Masq    1      0          0         
  -> 192.168.20.8:http            Masq    1      0          0     
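
The listing above resolves port numbers to service names; for quick checks or scripting it is usually clearer to print the table numerically:

# -n shows addresses and ports as numbers instead of resolving names
ipvsadm -L -n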
[aaa@qq.com ~]# service ipvsadm restart
ipvsadm: Clearing the current IPVS table:                  [  OK  ]
ipvsadm: Unloading modules:                                [  OK  ]
ipvsadm: Clearing the current IPVS table:                  [  OK  ]
ipvsadm: Applying IPVS configuration:                      [  OK  ]
[aaa@qq.com ~]# 
Test by requesting the load balancer's address:
[aaa@qq.com ~]# curl http://10.5.100.94
<h1>this is node2</h1>
[aaa@qq.com ~]# curl http://10.5.100.94
<h1>this is node1</h1>
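
The rr scheduler is clearly alternating between node1 and node2. Two optional follow-ups, assuming the stock CentOS 6 ipvsadm init script: watch the counters to confirm traffic is really being distributed, and save the rules so that a later service restart (which first clears the table, as seen above) re-applies them from /etc/sysconfig/ipvsadm.

# per-real-server packet and byte counters
ipvsadm -L -n --stats
# connection entries currently tracked by IPVS
ipvsadm -L -n -c
# persist the in-kernel rules to /etc/sysconfig/ipvsadm
service ipvsadm save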

Summary: the LVS NAT model is based on DNAT:

  Multi-target DNAT (like the iptables DNAT target): it rewrites the destination address of the request packet
  (and possibly the destination port) to the RIP of the selected RS and forwards the packet.
(1) The RSes should use private addresses, and each RS's gateway must point at the DIP;
(2) both request and response packets pass through the director, so under very high load the director can become the system bottleneck;
(3) port mapping is supported;
(4) the RSes can run any OS;
(5) the RIPs of the RSes and the director's DIP must be on the same IP network.