Linux Cluster Architecture
| Name | Version |
| --- | --- |
| CentOS 7 | 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Nginx | nginx-1.12.1 |
| keepalived | keepalived.x86_64 0:1.3.5-6.el7 |
Introduction to Clusters
- By function, clusters fall into two broad categories: high availability and load balancing.
- A high-availability cluster is usually two servers: one provides the service and the other stands by as a redundant spare. When the serving machine goes down, the spare takes over and keeps the service running.
- Open-source software for high availability includes heartbeat and keepalived.
- A load-balancing cluster needs one server to act as the distributor (director), which hands user requests to the back-end servers for processing. Besides the director, the cluster consists of the servers that actually serve users, and there must be at least two of them.
- Open-source load balancers include LVS, keepalived, haproxy, and nginx; commercial products include F5 and NetScaler.
Introduction to keepalived
- Here we use keepalived to build the high-availability cluster, because heartbeat has some problems on CentOS 6 that would interfere with the experiment.
- keepalived achieves high availability through VRRP (Virtual Router Redundancy Protocol).
- In this protocol, several routers with the same function are grouped together; the group has one master role and N (N >= 1) backup roles.
- The master sends VRRP packets to the backups via multicast. When the backups stop receiving VRRP packets from the master, they assume the master is down, and a new master is then chosen according to the backups' priorities (see the tcpdump sketch after this list).
- keepalived consists of three modules: core, check, and vrrp. The core module is the heart of keepalived, responsible for starting and maintaining the main process and for loading and parsing the global configuration file; the check module performs health checks; and the vrrp module implements the VRRP protocol.
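To make the advertisement traffic described above concrete, you can sniff it on the master or the backup. This is a minimal sketch, assuming the interface is ens33 as in the configuration used later in this article:
# VRRP advertisements are IP protocol 112; tcpdump accepts "vrrp" as a filter keyword
tcpdump -nn -i ens33 vrrp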
Configuring a high-availability cluster with keepalived
- Prepare two machines, 128 and 130; 128 will be the master and 130 the backup.
- Run yum install -y keepalived on both machines.
- Install nginx on both machines: nginx has already been compiled from source on 128, while on 130 it needs to be installed with yum: yum install -y nginx
- Set the VIP to .100. Edit the keepalived configuration file on 128; the content can be taken from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/master_keepalived.conf
[aaa@qq.com ~]# vim /etc/keepalived/keepalived.conf
## a quick way to empty a file ##
[aaa@qq.com ~]# > !$
> /etc/keepalived/keepalived.conf
[aaa@qq.com ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        aaa@qq.com
    }
    notification_email_from aaa@qq.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/usr/local/sbin/check_ng.sh"
    interval 3
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass zyshanlinux>com
    }
    virtual_ipaddress {
        192.168.106.100
    }
    track_script {
        chk_nginx
    }
}
(Figure: annotated explanation of the configuration file)
- On 128, edit the monitoring script; the content can be taken from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/master_check_ng.sh, then give the script 755 permissions.
## this check script is the one defined in /etc/keepalived/keepalived.conf ##
[aaa@qq.com ~]# vim /usr/local/sbin/check_ng.sh
#!/bin/bash
# timestamp variable, used when writing the log
d=`date --date today +%Y%m%d_%H:%M:%S`
# count the number of nginx processes
n=`ps -C nginx --no-heading|wc -l`
# if the count is 0, start nginx and count again;
# if it is still 0, nginx cannot start, so keepalived should be stopped
if [ $n -eq "0" ]; then
        /etc/init.d/nginx start
        n2=`ps -C nginx --no-heading|wc -l`
        if [ $n2 -eq "0" ]; then
                echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
                systemctl stop keepalived
        fi
fi
[aaa@qq.com ~]# chmod 755 /usr/local/sbin/check_ng.sh
[aaa@qq.com ~]# ls -l !$
ls -l /usr/local/sbin/check_ng.sh
-rwxr-xr-x 1 root root 567 Jul 22 21:52 /usr/local/sbin/check_ng.sh
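Before letting keepalived call it, it can be worth running the script once by hand to confirm it behaves as expected; a small sketch (the log path comes from the script above):
sh -x /usr/local/sbin/check_ng.sh    # -x traces each command the script executes
tail /var/log/check_ng.log           # only contains entries if nginx could not be started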
- Start the keepalived service on 128.
[aaa@qq.com ~]# ps aux |grep keep
root 1766 0.0 0.0 112720 972 pts/0 S+ 21:57 0:00 grep --color=auto keep
[aaa@qq.com ~]# systemctl start keepalived
[aaa@qq.com ~]# ps aux |grep keep
root 1774 0.0 0.0 118652 1396 ? Ss 21:57 0:00 /usr/sbin/keepalived -D
root 1775 0.0 0.1 129580 3304 ? S 21:57 0:00 /usr/sbin/keepalived -D
root 1776 0.3 0.1 129520 2840 ? R 21:57 0:00 /usr/sbin/keepalived -D
root 1789 0.0 0.0 112720 972 pts/0 R+ 21:57 0:00 grep --color=auto keep
[aaa@qq.com ~]# ps aux |grep nginx
root 1173 0.0 0.0 46032 1280 ? Ss 21:10 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 1182 0.0 0.2 48520 4176 ? S 21:10 0:00 nginx: worker process
nobody 1183 0.0 0.2 48520 3920 ? S 21:10 0:00 nginx: worker process
root 1960 0.0 0.0 112720 968 pts/0 R+ 21:58 0:00 grep --color=auto nginx
Even if the nginx service is stopped, it starts again automatically (the check script restarts it).
[aaa@qq.com ~]# /etc/init.d/nginx stop
Stopping nginx (via systemctl): [ OK ]
[aaa@qq.com ~]# !ps
ps aux |grep nginx
root 2150 0.0 0.0 46032 1280 ? Ss 22:00 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 2154 0.0 0.2 48520 3920 ? S 22:00 0:00 nginx: worker process
nobody 2155 0.0 0.2 48520 3920 ? S 22:00 0:00 nginx: worker process
root 2169 0.0 0.0 112720 972 pts/0 R+ 22:00 0:00 grep --color=auto nginx
The .100 VIP can only be seen with the ip add command; ifconfig does not show it.
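A quick way to check whether this node currently holds the VIP, assuming the interface and VIP from the configuration above:
ip addr show ens33 | grep 192.168.106.100   # prints a line only while this machine owns the VIP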
Check that the firewall is off.
[aaa@qq.com ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
[aaa@qq.com ~]# getenforce
Disabled
The firewall also needs to be turned off on the backup.
[aaa@qq.com ~]# systemctl stop firewalld
[aaa@qq.com ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
[aaa@qq.com ~]# getenforce
Enforcing
[aaa@qq.com ~]# setenforce 0
[aaa@qq.com ~]# getenforce
Permissive
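Note that setenforce 0 and systemctl stop firewalld only last until the next reboot; to keep both settings across reboots, a sketch of the usual extra steps:
systemctl disable firewalld                                             # stop firewalld from starting at boot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # SELinux stays permissive after reboot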
- On 130, edit the configuration file; the content can be taken from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/backup_keepalived.conf
[aaa@qq.com ~]# vim /etc/keepalived/keepalived.conf
[aaa@qq.com ~]# > !$
> /etc/keepalived/keepalived.conf
[aaa@qq.com ~]# vim /etc/keepalived/keepalived.conf
- On 130, edit the monitoring script; the content can be taken from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/backup_check_ng.sh. If your nginx was compiled from source the script has to use /etc/init.d/nginx start, and if it was installed with yum it has to use systemctl start nginx; adjust it to your setup.
- Give the script 755 permissions, then start the service on 130 as well: systemctl start keepalived
[aaa@qq.com ~]# chmod 755 !$
chmod 755 /usr/local/sbin/check_ng.sh
[aaa@qq.com ~]# ls -l !$
ls -l /usr/local/sbin/check_ng.sh
-rwxr-xr-x. 1 root root 555 Jul 22 22:30 /usr/local/sbin/check_ng.sh
[aaa@qq.com ~]# systemctl start keepalived
[aaa@qq.com ~]# ps aux |grep kee
root 1590 0.0 0.0 118652 1400 ? Ss 22:32 0:00 /usr/sbin/keepalived -D
root 1591 0.0 0.1 127516 3304 ? S 22:32 0:00 /usr/sbin/keepalived -D
root 1592 0.1 0.1 127456 2844 ? S 22:32 0:00 /usr/sbin/keepalived -D
root 1631 0.0 0.0 112720 968 pts/0 R+ 22:32 0:00 grep --color=auto kee
[aaa@qq.com ~]# systemctl restart keepalived
[aaa@qq.com ~]# ps aux |grep kee
root 1711 0.0 0.0 118652 1392 ? Ss 22:32 0:00 /usr/sbin/keepalived -D
root 1712 0.0 0.1 127516 3292 ? S 22:32 0:00 /usr/sbin/keepalived -D
root 1713 0.3 0.1 127456 2836 ? S 22:32 0:00 /usr/sbin/keepalived -D
root 1723 0.0 0.0 112724 972 pts/0 S+ 22:32 0:00 grep --color=auto kee
Telling the master's nginx apart from the backup's
Master: path used by the source-compiled nginx
[aaa@qq.com ~]# cat /data/wwwroot/default/index.html
master master.This is a default site.
Backup: path used by the yum-installed nginx
[aaa@qq.com src]# cat /usr/share/nginx/html/index.html
backup backup.
virtual_ipaddress: requests to the VIP reach the master, because the VIP sits on the master.
Testing high availability
- First confirm the difference between nginx on the two machines, for example by using curl -I to check the nginx version.
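For example, from a third machine you can request the VIP and compare either the Server header or the page body, since the two index.html files shown earlier differ (VIP assumed to be 192.168.106.100):
curl -I 192.168.106.100 | grep -i '^Server'   # the nginx version string differs between master and backup
curl 192.168.106.100                          # the body shows "master ..." or "backup ..."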
- Test 1: stop the nginx service on the master.
It cannot stay stopped; keepalived starts nginx again.
- Test 2: add an iptables rule on the master: iptables -I OUTPUT -p vrrp -j DROP
This does not produce the desired effect either; the master keeps serving and never behaves as if it were down.
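If you do add this rule, remove it again before moving on to the next test, otherwise it keeps blocking the master's VRRP traffic; this is simply the inverse of the command above:
iptables -D OUTPUT -p vrrp -j DROP   # delete the rule that was inserted for test 2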
- Test 3: stop the keepalived service on the master.
The master releases the VIP, and the log shows keepalived shutting down.
[aaa@qq.com ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a1:d4:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.106.128/24 brd 192.168.106.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.106.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.106.150/24 brd 192.168.106.255 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::8fc3:bbdf:ba89:22a7/64 scope link
valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a1:d4:f5 brd ff:ff:ff:ff:ff:ff
[aaa@qq.com ~]# systemctl stop keepalived
[aaa@qq.com ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a1:d4:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.106.128/24 brd 192.168.106.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.106.150/24 brd 192.168.106.255 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::8fc3:bbdf:ba89:22a7/64 scope link
valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a1:d4:f5 brd ff:ff:ff:ff:ff:ff
[aaa@qq.com ~]# less /var/log/messages
Jul 22 23:54:26 zyshanlinux-001 Keepalived[1774]: Stopping
Jul 22 23:54:26 zyshanlinux-001 systemd: Stopping LVS and VRRP High Availability Monitor...
Jul 22 23:54:26 zyshanlinux-001 Keepalived_vrrp[1776]: VRRP_Instance(VI_1) sent 0 priority
Jul 22 23:54:26 zyshanlinux-001 Keepalived_vrrp[1776]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 22 23:54:26 zyshanlinux-001 Keepalived_healthcheckers[1775]: Stopped
Jul 22 23:54:27 zyshanlinux-001 Keepalived_vrrp[1776]: Stopped
Jul 22 23:54:27 zyshanlinux-001 systemd: Stopped LVS and VRRP High Availability Monitor.
Jul 22 23:54:27 zyshanlinux-001 Keepalived[1774]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
The backup takes over the VIP; its log shows the takeover step by step.
[aaa@qq.com src]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:ed:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.106.130/24 brd 192.168.106.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.106.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::3c7e:461a:5056:da7d/64 scope link
valid_lft forever preferred_lft forever
3: ens37: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:0c:29:29:ed:18 brd ff:ff:ff:ff:ff:ff
[aaa@qq.com src]# tail /var/log/messages
Jul 22 23:54:31 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:31 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:31 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:31 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:36 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:36 zyshanlinux-02 Keepalived_vrrp[1733]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.106.100
Jul 22 23:54:36 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:36 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:36 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
Jul 22 23:54:36 zyshanlinux-02 Keepalived_vrrp[1733]: Sending gratuitous ARP on ens33 for 192.168.106.100
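Those gratuitous ARP messages are what tell the other hosts on the LAN that the VIP has moved to the backup's MAC address. On another machine in the same network you can watch the mapping change (a sketch; the VIP is 192.168.106.100):
ip neigh show 192.168.106.100   # the MAC shown switches from the master's NIC to the backup's after failover
                                # (ping the VIP first if no entry is listed yet)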
- Test 4: start the keepalived service on the master again.
Master: it takes the VIP back.
[aaa@qq.com ~]# systemctl start keepalived
[aaa@qq.com ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a1:d4:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.106.128/24 brd 192.168.106.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.106.100/32 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.106.150/24 brd 192.168.106.255 scope global secondary ens33:0
valid_lft forever preferred_lft forever
inet6 fe80::8fc3:bbdf:ba89:22a7/64 scope link
valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a1:d4:f5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::7285:a690:d34:bb0c/64 scope link
valid_lft forever preferred_lft forever
Backup: it loses the VIP.
[aaa@qq.com src]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:ed:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.106.130/24 brd 192.168.106.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::3c7e:461a:5056:da7d/64 scope link
valid_lft forever preferred_lft forever
3: ens37: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:0c:29:29:ed:18 brd ff:ff:ff:ff:ff:ff
Introduction to load-balancing clusters
- The mainstream open-source options are LVS, keepalived, haproxy, nginx, and so on.
- LVS works at layer 4 (of the OSI 7-layer model), nginx works at layer 7, and haproxy can be regarded as either layer 4 or layer 7.
- keepalived's load-balancing functionality is in fact LVS.
- Because LVS balances at layer 4, it can distribute traffic on ports other than 80, MySQL for example; nginx only supports http, https and mail, while haproxy can also handle services like MySQL.
- By comparison, a layer-4 balancer such as LVS is more stable and can sustain more requests, while a layer-7 one such as nginx is more flexible and can meet more customized requirements.
Introduction to LVS
- LVS was developed by Zhang Wensong (章文嵩).
- It is no less popular than Apache httpd; it routes and forwards at the TCP/IP level, with high stability and efficiency.
- The latest LVS version is based on Linux kernel 2.6 and has not been updated for many years.
- LVS has three common modes: NAT, DR, and IP Tunnel.
- The core role in an LVS architecture is the director (load balancer), which distributes user requests; the remaining machines are the servers that actually process the requests (Real Servers, rs for short).
LVS scheduling algorithms
- Round-robin (Round-Robin, rr)
- Weighted round-robin (Weight Round-Robin, wrr)
- Least connections (Least-Connection, lc)
- Weighted least connections (Weight Least-Connection, wlc)
- Locality-based least connections (Locality-Based Least Connections, lblc)
- Locality-based least connections with replication (Locality-Based Least Connections with Replication, lblcr)
- Destination hashing (Destination Hashing, dh)
- Source hashing (Source Hashing, sh); a short ipvsadm sketch follows this list
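As a quick illustration, the algorithm is just the -s argument when a virtual service is created or edited with ipvsadm; a sketch, assuming the virtual service defined by the NAT-mode script below:
ipvsadm -E -t 192.168.142.147:80 -s rr   # -E edits an existing virtual service, switching it from wlc to plain round-robin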
Setting up LVS NAT mode
Three machines:
- the distributor, also called the director (dir for short): internal IP 133.130, external IP 142.147 (VMware host-only network)
- rs1: internal IP 133.132, with its gateway set to 133.130
- rs2: internal IP 133.133, with its gateway set to 133.130
On all three machines run:
systemctl stop firewalld; systemctl disable firewalld
systemctl start iptables; iptables -F; service iptables save   # the iptables service comes from the iptables-services package
Install ipvsadm on the dir: yum install -y ipvsadm
On the dir, write a script, vim /usr/local/sbin/lvs_nat.sh, with the following content:
#!/bin/bash
# enable packet forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# disable ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# mind the NIC names: the two NICs here are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# set up the NAT rules on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.133.0/24 -j MASQUERADE
# configure ipvsadm on the director
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s wlc -p 3
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.133.132:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.133.133:80 -m -w 1
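After saving the script, run it on the dir and confirm that the virtual service and both real servers are registered; ipvsadm -ln prints the current LVS table numerically:
sh /usr/local/sbin/lvs_nat.sh
ipvsadm -ln   # should list 192.168.142.147:80 with real servers 192.168.133.132 and 192.168.133.133 in Masq mode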
Testing the NAT mode setup
Install nginx on both rs machines. Give each rs a different default page so that curl-ing the two rs IPs directly returns distinguishable results. Then visit 192.168.142.147 in a browser several times and observe how the results differ.
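You can also test from the command line instead of a browser: request the VIP repeatedly from a host that can reach it and inspect the director's connection table (a sketch; note that -p 3 in the script gives each client a short persistence window, so consecutive requests may hit the same rs for a few seconds):
curl 192.168.142.147   # run this several times from a client on the external network
ipvsadm -lnc           # on the dir: -c lists connections, showing which real server handled each client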
Further reading:
keepalived default alert settings with an external mailbox: http://blog.51cto.com/6764097/1954158
HAProxy + Keepalived: configuring email alerts, session persistence, and TCP port ranges (part 3):
https://blog.csdn.net/HzSunshine/article/details/62052398
LVS introduction and illustrated working principles: http://blog.51cto.com/jiekeyang/1839583