keepalived + haproxy + mycat + mysql high-availability setup and configuration
MySQL dual-master replication configuration:

my.cnf configuration on the mysql1 server:
server-id=1
log_bin=bin_log
binlog-do-db=ry-vue
slave_parallel_type='logical_clock'
slave_parallel_workers=4
relay-log=relay-bin
relay-log-index=relay-bin.index
replicate-do-db=ry-vue
auto_increment_offset = 1
auto_increment_increment = 2
my.ini configuration on the mysql2 server:
server-id=2
log_bin=bin_log
binlog-do-db=ry-vue
slave_parallel_type='logical_clock'
slave_parallel_workers=4
relay-log=relay-bin
relay-log-index=relay-bin.index
replicate-do-db=ry-vue
auto_increment_offset = 2
auto_increment_increment = 2
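After editing the configuration on both servers, restart MySQL so the new replication settings take effect. A minimal sketch, assuming mysql1 is a Linux host with a systemd service named mysqld and mysql2 (the my.ini suggests Windows) runs a service named MySQL; adjust the service names to your installation:

# mysql1 (Linux)
systemctl restart mysqld

# mysql2 (Windows)
net stop MySQL
net start MySQL

# verify on each server
mysql> show variables like 'server_id';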
Log in to mysql1 and create the replication account (replace the placeholders with your own values):

CREATE USER 'username'@'mysql2_server_ip' IDENTIFIED BY 'password';

GRANT REPLICATION SLAVE ON *.* TO 'username'@'mysql2_server_ip';

flush privileges;

Then run:

mysql> show master status;
+----------------+----------+--------------+------------------+-------------------+
| File           | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+----------------+----------+--------------+------------------+-------------------+
| bin_log.000001 |      790 | ry-vue       |                  |                   |
+----------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)
Make a note of the File and Position values shown above; they are needed when configuring mysql2.
Log in to mysql2 and point it at mysql1:

CHANGE MASTER TO
    master_host = 'mysql1_server_ip',
    master_user = 'username',
    master_password = 'password',
    master_log_file = 'bin_log.000001',
    master_log_pos = 790;

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.233.4
Master_User: chenshuang
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: bin_log.000001
Read_Master_Log_Pos: 790
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 318
Relay_Master_Log_File: bin_log.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: ry-vue
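The steps so far only replicate from mysql1 to mysql2. For the dual-master setup implied by the auto_increment_offset/auto_increment_increment settings, the same steps are repeated in the opposite direction. A minimal sketch (the account name, password and binlog file/position are placeholders; take the real File and Position from show master status on mysql2):

On mysql2:

CREATE USER 'username'@'mysql1_server_ip' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'username'@'mysql1_server_ip';
flush privileges;
show master status;

On mysql1:

CHANGE MASTER TO
    master_host = 'mysql2_server_ip',
    master_user = 'username',
    master_password = 'password',
    master_log_file = 'bin_log.000001',  -- File value reported by mysql2
    master_log_pos = 154;                -- Position value reported by mysql2
start slave;
show slave status\G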
mycat configuration:

schema.xml configuration:
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://org.opencloudb/">
<schema name="ry-vue" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn">
<!-- auto sharding by id (long) -->
<!--<table name="travelrecord" dataNode="dn1,dn2,dn3" rule="auto-sharding-long" />--><!-- global table is auto cloned to all defined data nodes ,so can join
with any table whose sharding node is in the same data node -->
<!--<table name="company" primaryKey="ID" type="global" dataNode="dn1,dn2,dn3" />-->
<!--<table name="goods" primaryKey="ID" type="global" dataNode="dn1,dn2" />-->
<!-- random sharding using mod sharind rule -->
<!--<table name="hotnews" primaryKey="ID" dataNode="dn1,dn2,dn3"
rule="mod-long" />-->
<!-- <table name="dual" primaryKey="ID" dataNode="dnx,dnoracle2" type="global"
needAddLimit="false"/> <table name="worker" primaryKey="ID" dataNode="jdbc_dn1,jdbc_dn2,jdbc_dn3"
rule="mod-long" /> -->
<!--<table name="employee" primaryKey="ID" dataNode="dn1,dn2"
rule="sharding-by-intfile" />-->
<!--<table name="customer" primaryKey="ID" dataNode="dn1,dn2"
rule="sharding-by-intfile">
<childTable name="orders" primaryKey="ID" joinKey="customer_id"
parentKey="id">
<childTable name="order_items" joinKey="order_id"
parentKey="id" />
</childTable>
<childTable name="customer_addr" primaryKey="ID" joinKey="customer_id"
parentKey="id" />
</table>-->
<!-- <table name="oc_call" primaryKey="ID" dataNode="dn1$0-743" rule="latest-month-calldate"
/> -->
</schema>
<!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743"
/> -->
<dataNode name="dn" dataHost="localhost" database="ry-vue" />
<!--<dataNode name="dn1" dataHost="localhost1" database="db1" />
<dataNode name="dn2" dataHost="localhost1" database="db2" />
<dataNode name="dn3" dataHost="localhost1" database="db3" />-->
<!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
<dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
<dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
<dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
<dataHost name="localhost" maxCon="1000" minCon="10" balance="1"
writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<!-- can have multi write hosts -->
<writeHost host="hostM1" url="192.168.233.4:3306" user="root"
password="ROOT@localhost123456">
<!-- can have multi read hosts -->
<readHost host="hostS1" url="192.168.233.5:3306" user="root"
password="ROOT@localhost123456" />
</writeHost>
<writeHost host="hostS1" url="192.168.233.5:3306" user="root"
password="ROOT@localhost123456" />
<!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> -->
</dataHost>
<!--
<dataHost name="sequoiadb1" maxCon="1000" minCon="1" balance="0" dbType="sequoiadb" dbDriver="jdbc">
<heartbeat> </heartbeat>
<writeHost host="hostM1" url="sequoiadb://1426587161.dbaas.sequoialab.net:11920/SAMPLE" user="jifeng" password="jifeng"></writeHost>
</dataHost>
<dataHost name="oracle1" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="oracle" dbDriver="jdbc"> <heartbeat>select 1 from dual</heartbeat>
<connectionInitSql>alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'</connectionInitSql>
<writeHost host="hostM1" url="jdbc:oracle:thin:@127.0.0.1:1521:nange" user="base" password="123456" > </writeHost> </dataHost>
<dataHost name="jdbchost" maxCon="1000" minCon="1" balance="0" writeType="0" dbType="mongodb" dbDriver="jdbc">
<heartbeat>select user()</heartbeat>
<writeHost host="hostM" url="mongodb://192.168.0.99/test" user="admin" password="123456" ></writeHost> </dataHost>
<dataHost name="sparksql" maxCon="1000" minCon="1" balance="0" dbType="spark" dbDriver="jdbc">
<heartbeat> </heartbeat>
<writeHost host="hostM1" url="jdbc:hive2://feng01:10000" user="jifeng" password="jifeng"></writeHost> </dataHost> --><!-- <dataHost name="jdbchost" maxCon="1000" minCon="10" balance="0" dbType="mysql"
dbDriver="jdbc"> <heartbeat>select user()</heartbeat> <writeHost host="hostM1"
url="jdbc:mysql://localhost:3306" user="root" password="123456"> </writeHost>
</dataHost> -->
</mycat:schema>

server.xml configuration:
<?xml version="1.0" encoding="UTF-8"?>
<!-- - - Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License. - You
may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0
- - Unless required by applicable law or agreed to in writing, software -
distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the
License for the specific language governing permissions and - limitations
under the License. -->
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://org.opencloudb/">
<system>
<property name="defaultSqlParser">druidparser</property>
<!-- <property name="useCompression">1</property>--> <!--1为开启mysql压缩协议-->
<!-- <property name="processorBufferChunk">40960</property> -->
<!--
<property name="processors">1</property>
<property name="processorExecutor">32</property>
-->
<!-- default 65535 (64K): maximum text length for SQL parsing -->
<!--<property name="maxStringLiteralLength">65535</property>-->
<!--<property name="sequnceHandlerType">0</property>-->
<!--<property name="backSocketNoDelay">1</property>-->
<!--<property name="frontSocketNoDelay">1</property>-->
<!--<property name="processorExecutor">16</property>-->
<!--
<property name="mutiNodeLimitType">1</property> 0:开启小数量级(默认) ;1:开启亿级数据排序
<property name="mutiNodePatchSize">100</property> 亿级数量排序批量
<property name="processors">32</property> <property name="processorExecutor">32</property>
<property name="serverPort">8066</property> <property name="managerPort">9066</property>
<property name="idleTimeout">300000</property> <property name="bindIp">0.0.0.0</property>
<property name="frontWriteQueueSize">4096</property> <property name="processors">32</property> -->
</system>
<user name="root">
<property name="password">123456</property>
<property name="schemas">ry-vue</property>
</user>
<!--<user name="user">
<property name="password">user</property>
<property name="schemas">TESTDB</property>
<property name="readOnly">true</property>
</user>-->
<!-- <cluster> <node name="cobar1"> <property name="host">127.0.0.1</property>
<property name="weight">1</property> </node> </cluster> -->
<!-- <quarantine> <host name="1.2.3.4"> <property name="user">test</property>
</host> </quarantine> -->
</mycat:server>
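With mycat started on both nodes, the account and logical schema defined above can be checked from any MySQL client. A quick sketch, assuming mycat listens on its default service port 8066 (the same port the haproxy backends below point at):

mysql -uroot -p123456 -h192.168.233.4 -P8066
mysql> show databases;   -- should list the logical schema ry-vue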
haproxy configuration:

haproxy.cfg configuration:
global
    log 127.0.0.1 local1
    maxconn 4096
    chroot /usr/local/haproxy
    user root
    group root
    daemon

defaults
    log global
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 60000

listen admin_stats
    bind 192.168.233.4:48800
    mode http
    option httplog
    stats refresh 30s
    stats uri /admin-status
    stats realm Haproxy\ Manager
    stats auth admin:admin
    stats hide-version

listen mycat_service
    bind 192.168.233.4:8067
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_4 192.168.233.4:8066 check port 48700 inter 5s rise 2 fall 3
    server mycat_5 192.168.233.5:8066 check port 48700 inter 5s rise 2 fall 3
    timeout server 60000

listen mycat_admin
    bind 192.168.233.4:9067
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_4 192.168.233.4:9066 check port 48700 inter 5s rise 2 fall 3
    server mycat_5 192.168.233.5:9066 check port 48700 inter 5s rise 2 fall 3
    timeout server 60000
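The check port 48700 lines above assume a small status-check service running on every mycat host; it is not provided by haproxy or mycat themselves. A minimal sketch using xinetd, where the service name, the script path /usr/local/bin/mycat_status and the mycat install directory /usr/local/mycat are all assumptions to adapt to your environment (remember to chmod +x the script, add "mycat_status 48700/tcp" to /etc/services and restart xinetd):

/etc/xinetd.d/mycat_status:

service mycat_status
{
    flags           = REUSE
    socket_type     = stream
    port            = 48700
    wait            = no
    user            = root
    server          = /usr/local/bin/mycat_status
    log_on_failure  += USERID
    disable         = no
}

/usr/local/bin/mycat_status:

#!/bin/bash
# Reply with HTTP 200 when mycat reports it is running, 503 otherwise,
# so that haproxy's httpchk on port 48700 can tell the two states apart.
stopped=$(/usr/local/mycat/bin/mycat status | grep -c 'not running')
if [ "$stopped" -eq 0 ]; then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
fi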
keepalived configuration:

keepalived.conf configuration:
! Configuration File for keepalived
global_defs {
    router_id haproxy_01
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 3
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.233.6
    }
}
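This file is for the MASTER node (192.168.233.4, where haproxy binds above). The second haproxy node needs a matching keepalived.conf with state BACKUP and a lower priority, so it only takes over the virtual IP 192.168.233.6 when the master's haproxy dies. A minimal sketch, assuming the backup node is 192.168.233.5 and uses the same interface name:

! Configuration File for keepalived (backup node)
global_defs {
    router_id haproxy_02
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51      # must match the master
    priority 90               # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.233.6
    }
}

Applications then connect through the VIP, e.g. 192.168.233.6:8067, and haproxy forwards the connection to whichever mycat instance is healthy.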
Original article: https://blog.csdn.net/chenshuang_com/article/details/107388793