Building a highly available nginx cluster with SaltStack
The SaltStack-based nginx deployment this article builds on is covered in my earlier post saltstack 自动化部署 nginx（源码编译）.
Experiment topology and files involved: server1 (the salt master) and server4 run haproxy + keepalived as a MASTER/BACKUP pair sharing the VIP 172.25.21.100, while server2 (apache) and server3 (nginx) are the backend web servers; the state and pillar files live under /srv/salt and /srv/pillar on the master.
Writing the keepalived install state
[root@server1 ~]# vim /srv/salt/keepalived/install.sls
include:
  - pkgs.make                                   # pulls in the common build dependencies

kp.install:                                     # compile keepalived from source
  file.managed:                                 # file management: push the source tarball to the minion
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:                                      # unpack, configure and compile
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 && ./configure --prefix=/usr/local/keepalived --with-init=SYSV &>/dev/null && make &>/dev/null && make install &>/dev/null
    - creates: /usr/local/keepalived            # once this path exists on the minion, the compile steps are not run again

/etc/keepalived:                                # create the keepalived configuration directory
  file.directory:
    - mode: 755

/etc/sysconfig/keepalived:                      # symlink the sysconfig file
  file.symlink:
    - target: /usr/local/keepalived/etc/sysconfig/keepalived

/sbin/keepalived:                               # symlink the keepalived binary
  file.symlink:
    - target: /usr/local/keepalived/sbin/keepalived
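The included pkgs.make state is not shown in the original post; a minimal sketch of what such a dependency state could look like, assuming a typical RHEL/CentOS source-build toolchain (the exact package list is an assumption):
# /srv/salt/pkgs/make.sls (the path pkgs.make resolves to)
make-pkgs:
  pkg.installed:
    - pkgs:
      - gcc              # compiler needed by ./configure && make
      - make
      - openssl-devel    # keepalived links against OpenSSL
      - libnl-devel      # netlink development headers (assumption)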
Writing the keepalived service state
[root@server1 ~]# vim /srv/salt/keepalived/service.sls
include:
  - keepalived.install                          # pull in the install state above

/etc/keepalived/keepalived.conf:                # file to be synced to the minion
  file.managed:                                 # file management module
    - source: salt://keepalived/files/keepalived.conf   # source file on the master
    - template: jinja                           # render with the jinja template engine
    - context:
        STATE: {{ pillar['state'] }}
        VRID: {{ pillar['vrid'] }}
        PRIORITY: {{ pillar['priority'] }}

kp-service:
  file.managed:                                 # manage the keepalived init script
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - mode: 755
  service.running:
    - name: keepalived
    - reload: True
    - watch:                                    # watch the keepalived configuration file
      - file: /etc/keepalived/keepalived.conf
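Once the pillar values referenced in the context block are defined (see the pillar section below), the rendered state can be previewed from the master before applying it, for example:
[root@server1 ~]# salt 'server1' state.show_sls keepalived.service
# the output should show the keepalived.conf file.managed entry with
# STATE, VRID and PRIORITY already filled in from pillar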
Writing the keepalived configuration file template
[root@server1 ~]# vim /srv/salt/keepalived/files/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   # vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state {{ STATE }}                   # variable passed in from the keepalived.service state
    interface eth0
    virtual_router_id {{ VRID }}        # variable passed in from the keepalived.service state
    priority {{ PRIORITY }}             # variable passed in from the keepalived.service state
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.21.100                   # the VIP
    }
}
Writing the pillar data used by the service state
[root@server1 ~]# vim /srv/pillar/keepalived/install.sls
{% if grains['fqdn'] == 'server1' %}
webserver: keepalived
state: MASTER          # server1's VRRP state
vrid: 21               # virtual router id
priority: 100          # priority
{% elif grains['fqdn'] == 'server4' %}
webserver: keepalived
state: BACKUP
vrid: 21
priority: 50
{% endif %}
Writing the pillar top file
[root@server1 ~]# vim /srv/pillar/top.sls
base:
  '*':
    - web.install
    - keepalived.install          # declared for all minions
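After changing pillar data, the minions' pillar can be refreshed and the values checked per host, for example:
[root@server1 ~]# salt '*' saltutil.refresh_pillar
[root@server1 ~]# salt 'server*' pillar.item state vrid priority
# server1 should report MASTER / 21 / 100 and server4 BACKUP / 21 / 50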
Writing the salt top file
[root@server1 ~]# vim /srv/salt/top.sls
base:
  'server1':                      # declared in the top file
    - haproxy.install
    - keepalived.service
  'server4':
    - haproxy.install
    - keepalived.service
  'server2':
    - apache.service
  'server3':
    - nginx.service
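Before the real run, a dry run shows what would change on each host without applying anything:
[root@server1 ~]# salt '*' state.highstate test=True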
Once editing is done, push the states to every host:
[root@server1 ~]# salt '*' state.highstate
With that, a simple highly available, load-balanced cluster of http servers based on keepalived + haproxy has been built automatically. To scale the service out, just add minion hosts and adjust the corresponding state files.
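Whether the VIP actually landed on the MASTER node can be checked with ordinary commands, for example (172.25.21.100 is the VIP defined above):
[root@server1 ~]# ip addr show eth0 | grep 172.25.21.100    # the VIP should be on server1 (MASTER)
[root@server1 ~]# curl 172.25.21.100                        # requests go through haproxy to server2 / server3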
Making haproxy itself highly available
The cluster built above has a flaw: haproxy itself is never health-checked. If haproxy goes down while keepalived keeps running normally, the VIP stays on the failed node and requests can no longer be load-balanced to the backend web servers.
To fix this, we can add a haproxy monitoring script to keepalived.
The script is as follows (a rather crude one):
[root@server1 files]# vim check_haproxy.sh
#!/bin/bash
/etc/init.d/haproxy status &> /dev/null || /etc/init.d/haproxy restart &> /dev/null
# check haproxy's status; if it is down, try to restart it
if [ $? -ne 0 ]; then
    /etc/init.d/keepalived stop &> /dev/null
fi
If the return value is still non-zero after the restart, the script stops the keepalived service so the VIP fails over to the backup node.
Edit the keepalived template file and push the new content to every keepalived host.
Add the monitoring script definition at the top of keepalived.conf:
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"   # absolute path of the check script
    interval 2                                  # how often the script is run, in seconds
    weight 2
}
...... omitted ......
    virtual_ipaddress {
        172.25.21.100
    }
    track_script {
        check_haproxy                           # reference the script here so keepalived invokes it
    }
Push this script to each keepalived host.
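Note that the service state shown earlier only manages keepalived.conf and the init script, so check_haproxy.sh also has to be distributed to the minions. A minimal sketch of the extra entry that could be appended to /srv/salt/keepalived/service.sls (the source path mirrors the files directory used above):
/etc/keepalived/check_haproxy.sh:
  file.managed:
    - source: salt://keepalived/files/check_haproxy.sh
    - mode: 755                  # keepalived must be able to execute the script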
[root@server1 keepalived]# salt '*' state.highstate
server2:
----------
...... omitted ......
Summary for server2
------------
Succeeded: 2   # push to server2 succeeded
Failed: 0
------------
Total states run: 2
Total run time: 495.607 ms
server3:
----------
...... omitted ......
Summary for server3
------------
Succeeded: 9   # push to server3 succeeded
Failed: 0
------------
Total states run: 9
Total run time: 1.373 s
server4:
----------
...... omitted ......
Summary for server4   # push to server4 succeeded
-------------
Succeeded: 13 (changed=4)
Failed: 0
-------------
Total states run: 13
Total run time: 10.100 s
server1:
----------
...... omitted ......
Summary for server1
-------------
Succeeded: 13 (changed=3)   # push to server1 succeeded
Failed: 0
-------------
Total states run: 13
Total run time: 10.529 s
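As a final check, the failover can be exercised by hand; stopping keepalived on the MASTER should move the VIP to the BACKUP within a few advertisement intervals (commands are illustrative):
[root@server1 ~]# /etc/init.d/keepalived stop
[root@server1 ~]# salt 'server4' cmd.run 'ip addr show eth0'   # the VIP 172.25.21.100 should now be on server4
[root@server1 ~]# curl 172.25.21.100                           # requests are still served through the backup haproxy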