Setting up a high-availability cluster with Keepalived and Nginx
Keepalived high-availability cluster
Architecture:
node1: 192.168.205.10 (keepalived master load balancer)
node2: 192.168.205.20 (keepalived backup load balancer)
node3: 192.168.205.30 (web01 server)
node4: 192.168.205.40 (web02 server)
Install Keepalived
yum install keepalived -y
rpm -qa keepalived
The package is installed:
keepalived-1.3.5-6.el7.x86_64
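Optionally, you can also enable keepalived to start at boot so the VIP comes back automatically after a reboot (a small addition, not part of the original steps):
systemctl enable keepalived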
Start keepalived on node1.
If it fails to start:
Check the configuration and confirm the network interface name.
vim /etc/keepalived/keepalived.conf
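If it still will not start, the system log and the actual NIC name usually point to the problem; a quick check (assuming systemd, as on this CentOS 7 setup):
journalctl -u keepalived -n 50 --no-pager    # recent keepalived log messages
ip link show                                 # confirm the interface name used in the config (here enp0s8)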
Below is what a successful start looks like; keepalived runs 3 processes by default:
[root@node1 vagrant]# ps -ef |grep keep|grep -v grep
root 5268 1 0 10:14 ? 00:00:00 /usr/sbin/keepalived -D
root 5269 5268 0 10:14 ? 00:00:00 /usr/sbin/keepalived -D
root 5270 5268 0 10:14 ? 00:00:00 /usr/sbin/keepalived -D
The default configuration also brings up three virtual IPs (the 192.168.200.x addresses below):
[root@node1 vagrant]# ip add |grep 192.168
inet 192.168.205.10/24 brd 192.168.205.255 scope global enp0s8
inet 192.168.200.16/32 scope global enp0s8
inet 192.168.200.17/32 scope global enp0s8
inet 192.168.200.18/32 scope global enp0s8
Install keepalived on node2 in the same way.
Stop keepalived on both node1 and node2:
systemctl stop keepalived.service
Edit on node1:
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node1                  # node name
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   #vrrp_garp_interval 0
   #vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8                 # interface name
    virtual_router_id 55             # must be identical on master and backup
    priority 150                     # highest priority (master)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.205.15/24 dev enp0s8 label enp0s8:1    # virtual IP, identical on master and backup
    }
}
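To confirm that node1 is actually sending VRRP advertisements for virtual_router_id 55, you can watch the traffic on the interface (a quick check, assuming tcpdump is installed):
tcpdump -i enp0s8 -nn vrrp    # should show advertisements with vrid 55, prio 150, roughly once per second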
Start keepalived:
systemctl start keepalived
[root@node1 vagrant]# ip a
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:06:eb:fa brd ff:ff:ff:ff:ff:ff
inet 192.168.205.10/24 brd 192.168.205.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet 192.168.205.15/24 scope global secondary enp0s8 # the VIP is now active
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe06:ebfa/64 scope link
valid_lft forever preferred_lft forever
Edit on node2:
Stop keepalived:
systemctl stop keepalived.service
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node2
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   #vrrp_garp_interval 0
   #vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s8
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.205.15/24 dev enp0s8 label enp0s8:1
    }
}
Start keepalived:
systemctl start keepalived.service
Check whether it took effect with ip a:
enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:62:54:9c brd ff:ff:ff:ff:ff:ff
inet 192.168.205.20/24 brd 192.168.205.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe62:549c/64 scope link
valid_lft forever preferred_lft forever
Note: under normal conditions node2 does not hold the virtual IP.
It only appears on node2 after node1 goes down.
Once node1 recovers, the virtual IP disappears from node2 again.
If node1 is running normally but node2 still holds the virtual IP, check the following:
1. Whether the firewall is stopped, or at least allows VRRP (see the firewalld example below).
2. The virtual_ipaddress line (192.168.205.15/24 dev enp0s8 label enp0s8:1) is identical on master and backup.
3. virtual_router_id 55 is identical on master and backup.
4. priority: the MASTER has the highest value (150 here); the BACKUP priority must be lower than the MASTER's.
Normally only three settings differ between the two nodes:
router_id
state
priority
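For the firewall item in the checklist above: VRRP (IP protocol 112) must be allowed between node1 and node2, otherwise the backup never hears the master and both nodes claim the VIP. Instead of disabling firewalld completely, you can allow just VRRP (a sketch for firewalld on CentOS 7):
firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --reload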
========================================================
Now test the full environment.
Architecture:
node1: 192.168.205.10 (keepalived master load balancer)
node2: 192.168.205.20 (keepalived backup load balancer)
node3: 192.168.205.30 (web01 server)
node4: 192.168.205.40 (web02 server)
The virtual IP shared by node1 and node2 is 192.168.205.15.
On the local machine and on both master and backup, map didi.com to 192.168.205.15 in the hosts file.
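A minimal way to add that mapping (assuming you are not using DNS for didi.com):
echo "192.168.205.15 didi.com www.didi.com" >> /etc/hosts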
Install Nginx and keepalived on both node1 and node2.
Once the four configuration files below are in place, start keepalived and nginx on both nodes:
systemctl start keepalived.service
systemctl start nginx.service
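To tell during the tests which backend answered, it can help to give node3 and node4 distinguishable index pages (a sketch; the path assumes the default nginx document root on the web servers):
echo "web01 192.168.205.30" > /usr/share/nginx/html/index.html    # on node3
echo "web02 192.168.205.40" > /usr/share/nginx/html/index.html    # on node4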
Testing:
Test keepalived failover.
Run ip a on node1: it should show 192.168.205.15.
Run ip a on node2: it should not show 192.168.205.15.
After node1 crashes or is shut down, the virtual IP 192.168.205.15 moves to node2,
and the website can still be opened.
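A simple way to watch the failover from a client (assuming the hosts entry above is in place): keep polling the site while stopping keepalived on node1.
while true; do curl -s http://didi.com/; sleep 1; done    # on the client
systemctl stop keepalived.service                         # on node1; responses should keep coming via node2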
Note
Problem: while the virtual IP 192.168.205.15 is on node1, nginx on node2 cannot start by default, because it is configured to listen on 192.168.205.15, an address node2 does not yet hold. Workaround: let node2 acquire the VIP first (for example by stopping keepalived on node1), then start nginx on node2.
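An alternative to that workaround, if you want nginx on node2 to be able to start while the VIP is still on node1, is to let the kernel bind to non-local addresses (a sketch, not something the original setup uses):
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p    # nginx can now bind 192.168.205.15:80 even before the VIP arrives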
Node1 keepalived configuration:
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node1
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   #vrrp_garp_interval 0
   #vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    virtual_router_id 55
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.205.15/24 dev enp0s8 label enp0s8:1
    }
}
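Note that this configuration only fails over when node1 itself goes down; keepalived does not notice if only nginx dies on node1. If you want that as well, keepalived supports a vrrp_script health check. The following is only a sketch of the idea, not part of the original setup (check_nginx is a name chosen here; the weight must pull the master's priority below the backup's 100):
vrrp_script check_nginx {
    script "pidof nginx"     # non-zero exit code marks the check as failed
    interval 2
    weight -60               # 150 - 60 = 90 < 100, so the backup takes over
}
and, inside vrrp_instance VI_1:
track_script {
    check_nginx
}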
Node1 Nginx configuration:
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    upstream www_didi_pools {
        server 192.168.205.30:80 weight=1;
        server 192.168.205.40:80 weight=1;
    }

    server {
        listen       192.168.205.15:80;
        server_name  didi.com www.didi.com;
        location / {
            proxy_pass http://www_didi_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
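Before relying on this, it is worth validating the configuration and checking that a request through the VIP reaches the upstream (nginx -t only checks syntax):
nginx -t
curl -H "Host: didi.com" http://192.168.205.15/    # run while the VIP is on this node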
Node2 keepalived configuration:
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node2
   #vrrp_skip_check_adv_addr
   #vrrp_strict
   #vrrp_garp_interval 0
   #vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s8
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.205.15/24 dev enp0s8 label enp0s8:1
    }
}
Node2 Nginx configuration:
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    upstream www_didi_pools {
        server 192.168.205.30:80 weight=1;
        server 192.168.205.40:80 weight=1;
    }

    server {
        listen       192.168.205.15:80;
        server_name  didi.com www.didi.com;
        location / {
            proxy_pass http://www_didi_pools;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}