
Building master/backup and master/master architectures with keepalived: a worked example


Example topology:

[Figure: topology — dr1/dr2 run keepalived + LVS (DR mode); rs1/rs2 run nginx]

dr1 and dr2 run keepalived and LVS as a master/backup (or master/master) pair; rs1 and rs2 run nginx to serve the web content.

Note: the clocks on all nodes must be kept in sync (ntpdate ntp1.aliyun.com); firewalld must be stopped (systemctl stop firewalld.service, systemctl disable firewalld.service) and SELinux set to permissive (setenforce 0); also make sure every NIC supports multicast, since VRRP advertisements travel over a multicast group.
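For convenience, here is a minimal prep sketch collecting those commands, to be run on every node (assuming a CentOS 7 environment where ntpdate is available):

    ntpdate ntp1.aliyun.com                  # sync the clock against the NTP server
    systemctl stop firewalld.service         # stop the firewall now
    systemctl disable firewalld.service      # keep it disabled across reboots
    setenforce 0                             # switch SELinux to permissive for this boot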

You can check whether multicast is enabled on an interface with ifconfig:

[Screenshot: ifconfig output — the interface flags line includes MULTICAST]
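For a scripted check instead of eyeballing the output (a sketch; eno16777736 is the NIC name used on the directors below):

    ifconfig eno16777736 | grep -o MULTICAST   # prints MULTICAST when the flag is set
    ip link show dev eno16777736               # the <...> flags field should also list MULTICAST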

The keepalived master/backup architecture

Setting up rs1:

[root@rs1 ~]# yum -y install nginx   # install nginx
[root@rs1 ~]# vim /usr/share/nginx/html/index.html   # edit the index page so the responding RS is identifiable
    <h1> 192.168.4.118 rs1 server </h1>
[root@rs1 ~]# systemctl start nginx.service   # start the nginx service
[root@rs1 ~]# vim rs.sh   # script that binds the VIP to lo and sets the ARP kernel parameters required by LVS-DR
    #!/bin/bash
    #
    vip=192.168.4.120
    mask=255.255.255.255
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:0 $vip netmask $mask broadcast $vip up
        route add -host $vip dev lo:0
        ;;
    stop)
        ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
    *) 
        echo "usage $(basename $0) start|stop"
        exit 1
        ;;
    esac
[root@rs1 ~]# bash rs.sh start
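To confirm the VIP is now bound to the loopback alias (expected output, abridged; compare the lo:0/lo:1 listing in the master/master section further down):

[root@rs1 ~]# ifconfig lo:0
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255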

Set up rs2 by following the rs1 configuration; only the index page content differs, as sketched below.
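A sketch of the rs2 side, assuming the same layout as rs1 (the page body matches what the client tests below return):

[root@rs2 ~]# yum -y install nginx
[root@rs2 ~]# vim /usr/share/nginx/html/index.html   # identify rs2 in the page body
    <h1> 192.168.4.119 rs2 server</h1>
[root@rs2 ~]# systemctl start nginx.service
[root@rs2 ~]# bash rs.sh start   # same rs.sh script as on rs1 (copy it over with scp first)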

Setting up dr1:

[root@dr1 ~]# yum -y install ipvsadm keepalived   # install ipvsadm and keepalived
[root@dr1 ~]# vim /etc/keepalived/keepalived.conf   # edit the keepalived.conf configuration file
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from keepalived@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id 192.168.4.116
       vrrp_skip_check_adv_addr
       vrrp_mcast_group4 224.0.0.10
    }
    
    vrrp_instance vip_1 {
        state MASTER
        interface eno16777736
        virtual_router_id 1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass %&hhjj99
        }
        virtual_ipaddress {
          192.168.4.120/24 dev eno16777736 label eno16777736:0
        }
    }
    
    virtual_server 192.168.4.120 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
    
        real_server 192.168.4.118 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        real_server 192.168.4.119 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
         }
    }
[root@dr1 ~]# systemctl start keepalived
[root@dr1 ~]# ifconfig
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
            RX packets 14604  bytes 1376647 (1.3 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 6722  bytes 653961 (638.6 KiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@dr1 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0
      -> 192.168.4.119:80             Route   1      0          0
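Before testing failover, you can watch the VRRP advertisements that dr1 multicasts to the group configured above (vrrp_mcast_group4 224.0.0.10); a quick capture sketch:

[root@dr1 ~]# tcpdump -nn -i eno16777736 host 224.0.0.10   # expect one advert per second (advert_int 1)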

Setting up dr2 is essentially the same as dr1; the main changes in /etc/keepalived/keepalived.conf are the state and priority directives: state BACKUP and priority 90. We can also see that dr2, while acting as the backup, has not brought up the eno16777736:0 label:

[Screenshot: ifconfig on dr2 — eno16777736 is up, but there is no eno16777736:0 label]
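For reference, dr2's vip_1 instance would look like this (a sketch; everything else is assumed identical to dr1's file):

    vrrp_instance vip_1 {
        state BACKUP              # dr2 starts as the backup for this VIP
        interface eno16777736
        virtual_router_id 1
        priority 90               # lower than dr1's priority of 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass %&hhjj99    # must match dr1
        }
        virtual_ipaddress {
            192.168.4.120/24 dev eno16777736 label eno16777736:0
        }
    }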

Testing from a client:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client is served normally; rr alternates between rs2 and rs1
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
<h1> 192.168.4.119 rs2 server</h1>
<h1> 192.168.4.118 rs1 server </h1>
[root@dr1 ~]# systemctl stop keepalived.service   # stop keepalived on dr1 to simulate a failure
[root@dr2 ~]# systemctl status keepalived.service   # on dr2: the instance has transitioned to MASTER
    ● keepalived.service - LVS and VRRP High Availability Monitor
       Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
       Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
      Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
     Main PID: 12985 (keepalived)
       CGroup: /system.slice/keepalived.service
               ├─12985 /usr/sbin/keepalived -D
               ├─12988 /usr/sbin/keepalived -D
               └─12989 /usr/sbin/keepalived -D

    Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
    Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(vip_1) Transition to MASTER STATE
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(vip_1) Entering MASTER STATE
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(vip_1) setting protocol VIPs.
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(vip_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client is still served normally
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
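You can also confirm that the VIP itself migrated (a quick check on dr2; expected output abridged):

[root@dr2 ~]# ifconfig eno16777736:0   # the label now exists on dr2 and carries the VIP
    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0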

The keepalived master/master architecture

Modify rs1 and rs2 to add the new VIP:

[root@rs1 ~]# cp rs.sh rs_bak.sh
[root@rs1 ~]# vim rs_bak.sh   # add the new VIP (192.168.4.121) on lo:1
    #!/bin/bash
    #
    vip=192.168.4.121
    mask=255.255.255.255
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:1 $vip netmask $mask broadcast $vip up
        route add -host $vip dev lo:1
        ;;
    stop)
        ifconfig lo:1 down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
    *)
        echo "usage $(basename $0) start|stop"
        exit 1
        ;;
    esac
[root@rs1 ~]# bash rs_bak.sh start
[root@rs1 ~]# ifconfig
    ...
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

    lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.121  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)
[root@rs1 ~]# scp rs_bak.sh root@192.168.4.119:~
root@192.168.4.119's password: 
rs_bak.sh                100%  693     0.7KB/s   00:00

[root@rs2 ~]# bash rs_bak.sh start   # run the script to add the new VIP (note: the start argument is required)
[root@rs2 ~]# ifconfig
    ...
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

    lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.121  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

Modify dr1 and dr2:

[root@dr1 ~]# vim /etc/keepalived/keepalived.conf   # edit dr1's config: add a new VRRP instance and define a virtual server group
    ...
    vrrp_instance vip_2 {
        state BACKUP
        interface eno16777736
        virtual_router_id 2
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass uu**99^^
        }
        virtual_ipaddress {
            192.168.4.121/24 dev eno16777736 label eno16777736:1
        }
    }
    
    virtual_server_group ngxsrvs {
        192.168.4.120 80
        192.168.4.121 80
    }
    virtual_server group ngxsrvs {
        ...
    }
[root@dr1 ~]# systemctl restart keepalived.service   # restart the service
[root@dr1 ~]# ifconfig   # eno16777736:1 is visible here too, because dr2 is not configured yet and dr1 temporarily holds both VIPs
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
            RX packets 54318  bytes 5480463 (5.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 38301  bytes 3274990 (3.1 MiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)

    eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@dr1 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0
      -> 192.168.4.119:80             Route   1      0          0
    TCP  192.168.4.121:80 rr
      -> 192.168.4.118:80             Route   1      0          0
      -> 192.168.4.119:80             Route   1      0          0
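In the config above, the earlier per-VIP virtual_server blocks are replaced by a single virtual_server group statement referencing ngxsrvs; the elided body holds the same directives and real_server definitions as before. A sketch of what the full block plausibly looks like, assuming the health checks stay unchanged:

    virtual_server group ngxsrvs {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP

        real_server 192.168.4.118 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        real_server 192.168.4.119 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }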

[root@dr2 ~]# vim /etc/keepalived/keepalived.conf   # edit dr2's config: add the new instance and the same server group
    ...
    vrrp_instance vip_2 {
        state MASTER
        interface eno16777736
        virtual_router_id 2
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass uu**99^^
        }
        virtual_ipaddress {
            192.168.4.121/24 dev eno16777736 label eno16777736:1
        }
    }
    
    virtual_server_group ngxsrvs {
        192.168.4.120 80
        192.168.4.121 80
    }
    virtual_server group ngxsrvs {
        ...
    }
[root@dr2 ~]# systemctl restart keepalived.service   # restart the service
[root@dr2 ~]# ifconfig
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.117  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe3d:a31b  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
            RX packets 67943  bytes 6314537 (6.0 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 23250  bytes 2153847 (2.0 MiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
[root@dr2 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0
      -> 192.168.4.119:80             Route   1      0          0
    TCP  192.168.4.121:80 rr
      -> 192.168.4.118:80             Route   1      0          0
      -> 192.168.4.119:80             Route   1      0          0

Testing from the client:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
[root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
    <h1> 192.168.4.119 rs2 server</h1>
    <h1> 192.168.4.118 rs1 server </h1>
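As in the master/backup case, mutual failover can be verified: stop keepalived on either director, and both VIPs should converge on the surviving one (a sketch):

[root@dr2 ~]# systemctl stop keepalived.service   # take dr2 down
[root@dr1 ~]# ifconfig | grep 'eno16777736:'   # dr1 should now show both the :0 and :1 labels
[root@client ~]# curl http://192.168.4.120 ; curl http://192.168.4.121   # both VIPs still answer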