
LVS + Keepalived: Building a Highly Available Load Balancer (Testing)

程序员文章站 2022-05-27 13:34:12
1. Starting the LVS High-Availability Cluster Services

First, start the service on each real server node:
[root@localhost ~]# /etc/init.d/lvsrs start
start lvs of realserver
Then start the keepalived service on the master and backup director servers, and check the IPVS table:
[root@dr1 ~]#/etc/init.d/keepalived start
[root@dr1 ~]# ipvsadm -l
ip virtual server version 1.2.1 (size=4096)
prot localaddress:port scheduler flags
-> remoteaddress:port forward weight activeconn inactconn
tcp bogon:http rr
-> real-server1:http route 1 1 0
-> real-server2:http route 1 1 0
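The `lvsrs` script itself is not shown in this installment. Because the ipvsadm output above reports "route" (direct routing, LVS/DR mode), each real server must accept packets addressed to the VIP without answering ARP for it. A minimal sketch of such a script, assuming the article's VIP 192.168.12.135 (everything else here is an assumption, not the author's actual script):

```shell
#!/bin/sh
# /etc/init.d/lvsrs -- hypothetical real-server script for LVS/DR mode
VIP=192.168.12.135
case "$1" in
start)
    echo "start lvs of realserver"
    # Bind the VIP to lo:0 so this host accepts packets for it
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    # Suppress ARP replies/announcements for the VIP so only the
    # director answers ARP requests for 192.168.12.135
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
esac
```

Without the arp_ignore/arp_announce settings, a real server could win the ARP race for the VIP and bypass the director entirely.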
At this point the keepalived entries in the system log look like this:
[root@localhost ~]# tail -f /var/log/messages
feb 28 10:01:56 localhost keepalived: starting keepalived v1.1.19 (02/27,2011)
feb 28 10:01:56 localhost keepalived_healthcheckers: netlink reflector reports ip 192.168.12.25 added
feb 28 10:01:56 localhost keepalived_healthcheckers: opening file '/etc/keepalived/keepalived.conf'.
feb 28 10:01:56 localhost keepalived_healthcheckers: configuration is using : 12063 bytes
feb 28 10:01:56 localhost keepalived: starting healthcheck child process, pid=4623
feb 28 10:01:56 localhost keepalived_vrrp: netlink reflector reports ip 192.168.12.25 added
feb 28 10:01:56 localhost keepalived: starting vrrp child process, pid=4624
feb 28 10:01:56 localhost keepalived_healthcheckers: activating healtchecker for service [192.168.12.246:80]
feb 28 10:01:56 localhost keepalived_vrrp: opening file '/etc/keepalived/keepalived.conf'.
feb 28 10:01:56 localhost keepalived_healthcheckers: activating healtchecker for service [192.168.12.237:80]
feb 28 10:01:57 localhost keepalived_vrrp: vrrp_instance(vi_1) transition to master state
feb 28 10:01:58 localhost keepalived_vrrp: vrrp_instance(vi_1) entering master state
feb 28 10:01:58 localhost keepalived_vrrp: vrrp_instance(vi_1) setting protocol vips.
feb 28 10:01:58 localhost keepalived_healthcheckers: netlink reflector reports ip 192.168.12.135 added
feb 28 10:01:58 localhost avahi-daemon[2778]: registering new address record for 192.168.12.135 on eth0.

2. Testing High Availability

High availability is provided by the two LVS director servers. To simulate a failure, first stop the keepalived service on the master director server, then watch the keepalived log on the backup director server:
feb 28 10:08:52 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) transition to master state
feb 28 10:08:54 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) entering master state
feb 28 10:08:54 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) setting protocol vips.
feb 28 10:08:54 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) sending gratuitous arps on eth0 for 192.168.12.135
feb 28 10:08:54 lvs-backup keepalived_vrrp: netlink reflector reports ip 192.168.12.135 added
feb 28 10:08:54 lvs-backup keepalived_healthcheckers: netlink reflector reports ip 192.168.12.135 added
feb 28 10:08:54 lvs-backup avahi-daemon[3349]: registering new address record for 192.168.12.135 on eth0.
feb 28 10:08:59 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) sending gratuitous arps on eth0 for 192.168.12.135
The log shows that the backup detects the master's failure immediately: it transitions to the MASTER role, takes over the virtual IP, and binds it to the eth0 device.
Next, restart the keepalived service on the master director server and keep watching the log on the backup director server:
feb 28 10:12:11 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) received higher prio advert
feb 28 10:12:11 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) entering backup state
feb 28 10:12:11 lvs-backup keepalived_vrrp: vrrp_instance(vi_1) removing protocol vips.
feb 28 10:12:11 lvs-backup keepalived_vrrp: netlink reflector reports ip 192.168.12.135 removed
feb 28 10:12:11 lvs-backup keepalived_healthcheckers: netlink reflector reports ip 192.168.12.135 removed
feb 28 10:12:11 lvs-backup avahi-daemon[3349]: withdrawing address record for 192.168.12.135 on eth0.
The log shows that once the backup detects the master is healthy again, it returns to the BACKUP role and releases the virtual IP.
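The preemption seen above ("received higher prio advert") is governed by the priority value in each director's vrrp_instance block in /etc/keepalived/keepalived.conf. A sketch of the relevant fragment; the VIP is the article's, while the interface, router ID, priorities, and password are assumed values:

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby director
    interface eth0
    virtual_router_id 51    # must match on both directors (assumed value)
    priority 100            # e.g. 80 on the backup; the higher value wins
    advert_int 1            # VRRP advertisement interval, seconds
    authentication {
        auth_type PASS
        auth_pass 1111      # assumed shared secret
    }
    virtual_ipaddress {
        192.168.12.135
    }
}
```

When the master comes back, its higher-priority advertisements force the backup to step down, which is exactly the "entering backup state" transition in the log.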

3. Testing Load Balancing

Assume the web document root on both real server nodes is /webdata/www, and run the following:
On real server1:
echo "this is real server1" > /webdata/www/index.html
On real server2:
echo "this is real server2" > /webdata/www/index.html
Then open a browser at http://192.168.12.135 and refresh the page repeatedly. If the responses alternate between "this is real server1" and "this is real server2", LVS is balancing the load.
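The rr (round-robin) policy shown in the ipvsadm output is simple: with equal weights, connection i goes to real server i mod N. A tiny shell illustration of that rotation (the server names are the article's; the `pick` function is purely illustrative, not part of LVS):

```shell
#!/bin/bash
# Model of the rr scheduler: successive connections are handed to the
# real servers in strict rotation.
servers=("real-server1" "real-server2")

pick() {   # pick <connection-number> -> server name for that connection
    echo "${servers[$(( $1 % ${#servers[@]} ))]}"
}

for i in 0 1 2 3; do
    echo "connection $i -> $(pick "$i")"
done
```

In practice the same check can be scripted with repeated `curl -s http://192.168.12.135/` calls, which should print the two index pages alternately.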

4. Testing Failover

Failover testing verifies that when a node fails, the keepalived health-check module detects it promptly, removes the faulty node from service, and shifts its traffic to the healthy node.
Here we stop the service on the real server 1 node to simulate a failure, then inspect the logs on the master and backup directors:
feb 28 10:14:12 localhost keepalived_healthcheckers: tcp connection to [192.168.12.246:80] failed !!!
feb 28 10:14:12 localhost keepalived_healthcheckers: removing service [192.168.12.246:80] from vs [192.168.12.135:80]
feb 28 10:14:12 localhost keepalived_healthcheckers: remote smtp server [192.168.12.1:25] connected.
feb 28 10:14:12 localhost keepalived_healthcheckers: smtp alert successfully sent.
The log shows that after the keepalived health checker detected the failure of 192.168.12.246, it removed that node from the cluster.
Visiting http://192.168.12.135 now should only ever return "this is real server2", because node 1 has failed and keepalived has taken it out of rotation.
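The removal and SMTP alert above come from the health-check side of keepalived, configured per real server in keepalived.conf. A sketch of the corresponding virtual_server block; the addresses match the article's setup, while the timing values are assumptions:

```
virtual_server 192.168.12.135 80 {
    delay_loop 6              # health-check interval in seconds (assumed)
    lb_algo rr                # round-robin, matching the ipvsadm output
    lb_kind DR                # direct routing ("route" in ipvsadm -l)
    protocol TCP

    real_server 192.168.12.246 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3     # assumed
            nb_get_retry 3        # assumed
            delay_before_retry 3  # assumed
            connect_port 80
        }
    }
    # real_server 192.168.12.237 80 { ... }  # same shape for node 2
}
```

The TCP_CHECK here matches the "tcp connection to [192.168.12.246:80] failed" log line: when the probe fails, the real server is removed from the virtual server; when it succeeds again, the server is added back.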
Now restart the service on the real server 1 node; the keepalived log shows:
feb 28 10:15:48 localhost keepalived_healthcheckers: tcp connection to [192.168.12.246:80] success.
feb 28 10:15:48 localhost keepalived_healthcheckers: adding service [192.168.12.246:80] to vs [192.168.12.135:80]
feb 28 10:15:48 localhost keepalived_healthcheckers: remote smtp server [192.168.12.1:25] connected.
feb 28 10:15:48 localhost keepalived_healthcheckers: smtp alert successfully sent.
The log shows that once the health checker detected that 192.168.12.246 had recovered, it added the node back into the cluster.
Visiting http://192.168.12.135 and refreshing should once again alternate between "this is real server1" and "this is real server2", confirming that keepalived re-admitted real server 1 after its recovery.

This article originally appeared on the "技术成就梦想" blog.