KeepAlived + Redis + HAProxy: Master/Backup Hot Standby and Load Balancing
程序员文章站
2024-03-21 09:16:47
KeepAlived + Redis + HAProxy in Practice: Master/Backup Hot Standby, Load Balancing, and Failover in Seconds
- A Redis + Keepalived + HAProxy cluster architecture running six Redis instances on six ports, delivering master/backup hot standby, load balancing, and failover within seconds.
I. Deploy the Redis Cluster
1. Environment
- Three VMs host six nodes (two instances per machine), forming a 3-master / 3-slave cluster
- redis1: 192.168.222.120
- redis2: 192.168.222.110
- redis3: 192.168.222.100
2. Install the Redis instances (6 nodes)
1. Set each node's hostname
# hostnamectl --static set-hostname redis1   # on 192.168.222.120
# hostnamectl --static set-hostname redis2   # on 192.168.222.110
# hostnamectl --static set-hostname redis3   # on 192.168.222.100
3. Configure /etc/hosts (all nodes)
# cat >> /etc/hosts << EOF
192.168.222.120 redis1
192.168.222.110 redis2
192.168.222.100 redis3
EOF
4. Tune system parameters (all nodes)
1. Raise the open-file limit (note the required domain field `*`)
# cat >> /etc/security/limits.conf << EOF
* soft nofile 102400
* hard nofile 102400
EOF
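The `cat >>` steps above append unconditionally, so re-running the walkthrough leaves duplicate entries. A small guard makes the step idempotent; the `append_once` helper below is a hypothetical addition, not part of the original procedure, and the sketch works on a scratch file rather than the real /etc/security/limits.conf:

```shell
#!/usr/bin/env bash
# append_once FILE LINE: append LINE to FILE only if that exact line is absent.
# Hypothetical helper for idempotent config appends.
append_once() {
    local file="$1" line="$2"
    grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

conf=$(mktemp)                                 # stand-in for /etc/security/limits.conf
append_once "$conf" "* soft nofile 102400"
append_once "$conf" "* hard nofile 102400"
append_once "$conf" "* soft nofile 102400"     # re-run: no duplicate is added
count=$(wc -l < "$conf")
echo "lines after three calls: $count"
```

The same guard works for the sysctl.conf appends in the next two steps.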
2. Increase the TCP listen backlog
# echo "net.core.somaxconn = 32767" >> /etc/sysctl.conf
3. Memory overcommit: vm.overcommit_memory
# echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
# sysctl -p
- /proc/sys/vm/overcommit_memory defaults to 0, a heuristic mode in which the kernel refuses only allocations it judges clearly excessive. Setting it to 1 makes the kernel always accept allocation requests; Redis recommends this so that background saves, which fork() the server, do not fail when free memory looks low. Apply it the same way as net.core.somaxconn above.
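For reference, the three vm.overcommit_memory modes can be summarized in a small lookup (a sketch; the one-line descriptions paraphrase the kernel's overcommit-accounting documentation):

```shell
#!/usr/bin/env bash
# Describe each vm.overcommit_memory mode.
overcommit_mode() {
    case "$1" in
        0) echo "heuristic overcommit: clearly excessive allocations are refused (default)" ;;
        1) echo "always overcommit: allocations never fail up front (what Redis wants)" ;;
        2) echo "never overcommit: total address space capped by CommitLimit" ;;
        *) echo "unknown mode" ;;
    esac
}

# Report the current setting (falls back to 0 if /proc is unavailable).
overcommit_mode "$(cat /proc/sys/vm/overcommit_memory 2>/dev/null || echo 0)"
```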
4. Disable the kernel's Transparent Huge Pages (THP) feature
Redis warns against THP because it can cause latency spikes during fork-based persistence. To make the setting survive reboots, write it into /etc/rc.local:
# echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
# chmod +x /etc/rc.local
5. Install Redis and set up redis-cluster
1. Install the gcc toolchain
# yum -y install gcc glibc glibc-kernheaders glibc-common glibc-devel make
2. Upgrade gcc (Redis 6 needs C11 support that the stock compiler lacks)
# yum -y install centos-release-scl
# yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
# scl enable devtoolset-9 bash
3. Make the newer gcc the default for future shells
# echo "source /opt/rh/devtoolset-9/enable" >> /etc/profile
4. Build and install on redis1
# cd /usr/local/src
# wget http://download.redis.io/releases/redis-6.0.5.tar.gz
# tar -zxvf redis-6.0.5.tar.gz
# cd redis-6.0.5/
# make
# make install PREFIX=/usr/local/redis-cluster
1. Create per-instance directories
# mkdir -p /redis/{6001,6002}/{conf,data,log}
2. Configuration
- As a reference, view the stock config file with comment and blank lines stripped (the source was unpacked under /usr/local/src):
# grep -Ev "^$|#" /usr/local/src/redis-6.0.5/redis.conf
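To see what the filter keeps, here it is applied to a short illustrative sample (not the real redis.conf). Note that the pattern `^$|#` drops any line containing a `#`, not just lines that start with one, which is close enough for the stock file:

```shell
#!/usr/bin/env bash
# Demonstrate grep -Ev "^$|#": remove blank lines and lines containing '#'.
sample='# This is a comment

port 6379
daemonize no
# another comment'
printf '%s\n' "$sample" | grep -Ev "^$|#"
# → port 6379
# → daemonize no
```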
1. redis1 config file for port 6001
Enable daemonize (run in the background) and cluster-enabled — cluster mode only works with the latter turned on.
# cd /redis/6001/conf
# cat >> redis.conf << EOF
bind 0.0.0.0
protected-mode no
port 6001
dir /redis/6001/data
cluster-enabled yes
cluster-config-file /redis/6001/conf/nodes.conf
cluster-node-timeout 5000
appendonly yes
daemonize yes
pidfile /redis/6001/redis.pid
logfile /redis/6001/log/redis.log
EOF
2. redis1 config file for port 6002 (derived from the 6001 file)
# sed 's/6001/6002/g' redis.conf > /redis/6002/conf/redis.conf
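The same sed substitution scales to any number of instances per host. A sketch that generates extra per-port configs under a scratch directory (the template below is a trimmed stand-in for the 6001 redis.conf, and port 6003 is hypothetical):

```shell
#!/usr/bin/env bash
set -e
base=$(mktemp -d)
mkdir -p "$base"/{6001,6002,6003}/conf

# Trimmed stand-in for the real 6001 config from the walkthrough.
cat > "$base/6001/conf/redis.conf" <<'EOF'
port 6001
dir /redis/6001/data
cluster-config-file /redis/6001/conf/nodes.conf
logfile /redis/6001/log/redis.log
EOF

# Derive each additional per-port config from the 6001 template.
for port in 6002 6003; do
    sed "s/6001/$port/g" "$base/6001/conf/redis.conf" > "$base/$port/conf/redis.conf"
done

grep '^port' "$base/6003/conf/redis.conf"
# → port 6003
```

Because every 6001 occurrence is substituted, the dir, cluster-config-file, and logfile paths all track the port automatically.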
3. Startup script start-redis-cluster.sh
Note the quoted delimiter 'EOF': with an unquoted delimiter, the shell would expand $REDIS_HOME and $REDIS_CONF to empty strings while writing the file.
# cat > /usr/local/redis-cluster/start-redis-cluster.sh <<'EOF'
#!/bin/bash
REDIS_HOME=/usr/local/redis-cluster
REDIS_CONF=/redis
$REDIS_HOME/bin/redis-server $REDIS_CONF/6001/conf/redis.conf
$REDIS_HOME/bin/redis-server $REDIS_CONF/6002/conf/redis.conf
EOF
4. Make the script executable
# chmod +x /usr/local/redis-cluster/start-redis-cluster.sh
5. Start the Redis instances
# bash /usr/local/redis-cluster/start-redis-cluster.sh
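A matching stop script is handy for maintenance. The sketch below (the file name and the use of /tmp are illustrative; on a real node you would write it next to the start script) shuts both instances down via redis-cli. As with the start script, quoting the heredoc delimiter keeps the outer shell from expanding $REDIS_HOME while the file is written:

```shell
#!/usr/bin/env bash
# Write a companion stop script. The quoted 'EOF' delimiter prevents the
# outer shell from expanding $REDIS_HOME / $port at file-creation time.
cat > /tmp/stop-redis-cluster.sh <<'EOF'
#!/bin/bash
REDIS_HOME=/usr/local/redis-cluster
for port in 6001 6002; do
    "$REDIS_HOME/bin/redis-cli" -p "$port" shutdown
done
EOF
chmod +x /tmp/stop-redis-cluster.sh
grep -c 'REDIS_HOME' /tmp/stop-redis-cluster.sh
# → 2
```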
3. Check that Redis started
# netstat -tnlp | grep redis
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:16001 0.0.0.0:* LISTEN 5667/redis-server 0
tcp 0 0 0.0.0.0:16002 0.0.0.0:* LISTEN 5673/redis-server 0
tcp 0 0 0.0.0.0:6001 0.0.0.0:* LISTEN 5667/redis-server 0
tcp 0 0 0.0.0.0:6002 0.0.0.0:* LISTEN 5673/redis-server 0
Check the processes:
# ps -ef | grep redis
root 5730 1 0 14:02 ? 00:00:00 /usr/local/redis-cluster/bin/redis-server 0.0.0.0:6001 [cluster]
root 5732 1 0 14:02 ? 00:00:00 /usr/local/redis-cluster/bin/redis-server 0.0.0.0:6002 [cluster]
root 5744 1383 0 14:06 pts/0 00:00:00 grep --color=auto redis
4. Create the cluster
Run redis-cli from /usr/local/redis-cluster/bin; when prompted, type yes:
# ./redis-cli --cluster create 192.168.222.120:6001 192.168.222.120:6002 192.168.222.110:6001 192.168.222.110:6002 192.168.222.100:6001 192.168.222.100:6002 --cluster-replicas 1
Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.222.110:6002 to 192.168.222.120:6001
Adding replica 192.168.222.100:6002 to 192.168.222.110:6001
Adding replica 192.168.222.120:6002 to 192.168.222.100:6001
M: 4e77fd97e6d288a8bf2893791ff1cfa34b036bc8 192.168.222.120:6001
slots:[0-5460] (5461 slots) master
S: bf1ac05a6a62d08cca9104afac6cc1633335ff39 192.168.222.120:6002
replicates 56f4068d0f77eca95bc243b25884933877bef162
M: 82f5debb0cb2c7ad0bef3ada51b7abfea7aa8e05 192.168.222.110:6001
slots:[5461-10922] (5462 slots) master
S: bbf49544225d661345b1971915165fc401b5ff6c 192.168.222.110:6002
replicates 4e77fd97e6d288a8bf2893791ff1cfa34b036bc8
M: 56f4068d0f77eca95bc243b25884933877bef162 192.168.222.100:6001
slots:[10923-16383] (5461 slots) master
S: 480ad0d2eebaf2188aafca9e1f3566fe51412db4 192.168.222.100:6002
replicates 82f5debb0cb2c7ad0bef3ada51b7abfea7aa8e05
Can I set the above configuration? (type 'yes' to accept): yes
Nodes configuration updated
Assign a different config epoch to each node
Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
Performing Cluster Check (using node 192.168.222.120:6001)
M: 4e77fd97e6d288a8bf2893791ff1cfa34b036bc8 192.168.222.120:6001
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 480ad0d2eebaf2188aafca9e1f3566fe51412db4 192.168.222.100:6002
slots: (0 slots) slave
replicates 82f5debb0cb2c7ad0bef3ada51b7abfea7aa8e05
S: bbf49544225d661345b1971915165fc401b5ff6c 192.168.222.110:6002
slots: (0 slots) slave
replicates 4e77fd97e6d288a8bf2893791ff1cfa34b036bc8
S: bf1ac05a6a62d08cca9104afac6cc1633335ff39 192.168.222.120:6002
slots: (0 slots) slave
replicates 56f4068d0f77eca95bc243b25884933877bef162
M: 56f4068d0f77eca95bc243b25884933877bef162 192.168.222.100:6001
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 82f5debb0cb2c7ad0bef3ada51b7abfea7aa8e05 192.168.222.110:6001
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
Check for open slots...
Check slots coverage...
[OK] All 16384 slots covered.
5. Verify the cluster
# ./redis-cli -p 6001
127.0.0.1:6001> CLUSTER NODES
480ad0d2eebaf2188aafca9e1f3566fe51412db4 192.168.222.100:6002@16002 slave 82f5debb0cb2c7ad0bef3ada51b7abfea7aa8e05 0 1593954565000 6 connected
bbf49544225d661345b1971915165fc401b5ff6c 192.168.222.110:6002@16002 master - 0 1593954566916 7 connected 0-5460
bf1ac05a6a62d08cca9104afac6cc1633335ff39 192.168.222.120:6002@16002 slave 56f4068d0f77eca95bc243b25884933877bef162 0 1593954566512 5 connected
56f4068d0f77eca95bc243b25884933877bef162 192.168.222.100:6001@16001 master - 0 1593954566000 5 connected 10923-16383
82f5debb0cb2c7ad0bef3ada51b7abfea7aa8e05 192.168.222.110:6001@16001 master - 0 1593954566000 3 connected 5461-10922
4e77fd97e6d288a8bf2893791ff1cfa34b036bc8 192.168.222.120:6001@16001 myself,slave bbf49544225d661345b1971915165fc401b5ff6c 0 1593954565000 1 connected
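A quick sanity check on the topology is to count roles in the CLUSTER NODES output, where field 3 carries the flags. The sketch runs the count over a canned two-line sample with shortened node IDs; in practice you would pipe `./redis-cli -p 6001 cluster nodes` into the same awk:

```shell
#!/usr/bin/env bash
# Count masters and slaves in CLUSTER NODES output; the flags are field 3.
count_roles() {
    awk '$3 ~ /master/ {m++} $3 ~ /slave/ {s++} END {printf "masters=%d slaves=%d\n", m, s}'
}

count_roles <<'EOF'
aaa1 192.168.222.120:6001@16001 master - 0 0 1 connected 0-5460
bbb2 192.168.222.110:6002@16002 myself,slave aaa1 0 0 1 connected
EOF
# → masters=1 slaves=1
```

On the six-node cluster above, a healthy run should report masters=3 slaves=3.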
II. Deploy Keepalived for Master/Backup Hot Standby and Failover in Seconds
1. Environment
- Two VMs, or any two nodes of the existing cluster
- keepalived1: 192.168.222.120
- keepalived2: 192.168.222.110
- VIP: 192.168.222.150
2. Install keepalived
yum -y install keepalived
3. Edit the configuration (/etc/keepalived/keepalived.conf)
1. keepalived1 configuration
! Configuration File for keepalived
global_defs {
    router_id haproxy_01
}
vrrp_script haproxy_chk.sh {
    script "/etc/keepalived/haproxy_chk.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        haproxy_chk.sh
    }
    virtual_ipaddress {
        192.168.222.150/24
    }
}
2. keepalived2 configuration
! Configuration File for keepalived
global_defs {
    router_id haproxy_02
}
vrrp_script haproxy_chk.sh {
    script "/etc/keepalived/haproxy_chk.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 50
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        haproxy_chk.sh
    }
    virtual_ipaddress {
        192.168.222.150/24
    }
}
3. Health-check script /etc/keepalived/haproxy_chk.sh
If the local haproxy stops answering on its stats port (8888, bound in the haproxy config below), keepalived is stopped so the VIP fails over to the backup node:
#!/usr/bin/env bash
curl -I http://localhost:8888 &> /dev/null
if [ $? -ne 0 ]; then
    systemctl stop keepalived
fi
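The script's branch logic can be exercised without touching a live keepalived by stubbing curl and the stop action with shell functions (both stubs are test doubles, not real commands):

```shell
#!/usr/bin/env bash
# Dry-run the health-check logic: a failing curl stub simulates a dead haproxy,
# and a function stands in for 'systemctl stop keepalived'.
stopped=0
curl() { return 1; }                 # stub: pretend haproxy is not answering
stop_keepalived() { stopped=1; }     # stub: record that we would stop keepalived

curl -I http://localhost:8888 &> /dev/null
if [ $? -ne 0 ]; then
    stop_keepalived
fi
echo "stopped=$stopped"
# → stopped=1
```

Swapping the curl stub for `curl() { return 0; }` exercises the healthy path, where keepalived keeps running and holds the VIP.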
4. Start keepalived on both nodes (systemctl start keepalived) and verify that Redis answers through the VIP:
# ./redis-cli -h 192.168.222.150 -p 6001
192.168.222.150:6001>
III. Deploy HAProxy: Round-Robin Port 6379 Across the Six Nodes
1. Install haproxy
Install haproxy on the keepalived nodes (the health-check script above watches the local haproxy):
yum -y install haproxy
2. Create /etc/haproxy/haproxy.cfg
The file defines the stats page port, URI, and login credentials, plus a TCP listener on 6379 that round-robins across the six Redis nodes.
Note that the Redis listener needs mode tcp (the defaults section sets mode http, which would mangle the Redis protocol) and uniquely named server lines; validate the file with haproxy -c -f /etc/haproxy/haproxy.cfg.
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
defaults
    mode http
    log global
    option dontlognull
    retries 3
    maxconn 3000
    timeout connect 50000ms
    timeout client 50000ms
    timeout server 50000ms
listen stats
    bind *:8888
    stats enable
    stats hide-version
    stats refresh 30s
    stats uri /haproxystats
    stats realm Haproxy\ stats
    stats auth admin:admin
    stats admin if TRUE
listen redis
    bind *:6379
    mode tcp
    balance roundrobin
    server redis1-6001 192.168.222.120:6001 check
    server redis1-6002 192.168.222.120:6002 check
    server redis2-6001 192.168.222.110:6001 check
    server redis2-6002 192.168.222.110:6002 check
    server redis3-6001 192.168.222.100:6001 check
    server redis3-6002 192.168.222.100:6002 check
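Because `defaults` sets `mode http`, forgetting the `mode tcp` override on the Redis listener is an easy mistake that silently breaks Redis traffic. A grep/awk check over the config catches the omission (a sketch run against a scratch copy; point it at /etc/haproxy/haproxy.cfg in practice, and use `haproxy -c -f` for full validation):

```shell
#!/usr/bin/env bash
# Check that the 'listen redis' section declares 'mode tcp'.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
defaults
    mode http
listen redis
    bind *:6379
    mode tcp
    balance roundrobin
EOF

# Print only the indented lines belonging to 'listen redis', then count 'mode tcp'.
ok=$(awk '/^listen redis/{in_sec=1; next} /^[^ \t]/{in_sec=0} in_sec' "$cfg" | grep -c 'mode tcp')
echo "mode tcp lines inside listen redis: $ok"
# → mode tcp lines inside listen redis: 1
```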
3. HAProxy logging via rsyslog
1. Create the log directory
# mkdir /var/log/haproxy
# chmod a+w /var/log/haproxy
2. Enable syslog reception and the haproxy log file in /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp          # uncomment
$UDPServerRun 514       # uncomment
# Provides TCP syslog reception
$ModLoad imtcp          # uncomment
$InputTCPServerRun 514  # uncomment
# haproxy log
local2.* /var/log/haproxy/haproxy.log  # add; local2 matches 'log 127.0.0.1 local2' in haproxy.cfg
3. Restart rsyslog and confirm the haproxy logging directive
# systemctl restart rsyslog
# vim /etc/haproxy/haproxy.cfg
The log 127.0.0.1 local2 line in the global section (already present in the config above) is what ships haproxy's logs to rsyslog:
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
# systemctl restart haproxy
4. Confirm rsyslog is listening on port 514
# netstat -tunlp | grep 514
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 7089/rsyslogd
tcp6 0 0 :::514 :::* LISTEN 7089/rsyslogd
udp 0 0 0.0.0.0:514 0.0.0.0:* 7089/rsyslogd
udp6 0 0 :::514 :::* 7089/rsyslogd
4. Verify the HAProxy stats page
Browse to http://192.168.222.150:8888/haproxystats and log in with admin / admin (as set by stats auth). If the page lists the six Redis backends with their health, haproxy is monitoring the cluster successfully.
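Besides the web page, HAProxy serves the same data as CSV by appending `;csv` to the stats URI (e.g. `curl -u admin:admin 'http://192.168.222.150:8888/haproxystats;csv'`). The `status` column holds each server's health; assuming the standard CSV layout where it is field 18, this sketch extracts it from a canned sample line standing in for real output:

```shell
#!/usr/bin/env bash
# Pull proxy name, server name and health status (field 18, 'status')
# out of HAProxy stats CSV, skipping the '# pxname,...' header line.
health() {
    awk -F, '!/^#/ {print $1, $2, $18}'
}

# Canned sample standing in for curl output against the stats endpoint.
health <<'EOF'
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight
redis,redis1-6001,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1
EOF
# → redis redis1-6001 UP
```

A server showing DOWN here is exactly what the `check` keyword on each server line detects.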