MySQL GTID + MHA Configuration Guide
GTID + MHA + binlog server setup:
1: Test environment
OS: CentOS 6.5
MySQL: 5.6.28
MHA: 0.56
192.168.1.21 mysql1 m1
192.168.1.22 mysql2 s1
192.168.1.23 mysql3 s2, MHA manager, binlog server
2: Configure the relevant parameters in /etc/my.cnf on each of the 3 nodes
binlog-format=row
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=table
relay-log-info-repository=table
sync-master-info=1
slave-parallel-workers=2
binlog-checksum=crc32
master-verify-checksum=1
slave-sql-verify-checksum=1
binlog-rows-query-log-events=1
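Before restarting mysqld it can be worth checking that all of these options actually made it into the file. The helper below is a sketch of my own (`check_gtid_cnf` and the sample fragment are not part of MySQL or MHA); on a real node point it at /etc/my.cnf:

```shell
#!/bin/sh
check_gtid_cnf() {
    # $1: path to a my.cnf; reports the first missing GTID-related option
    for opt in binlog-format log-slave-updates gtid-mode enforce-gtid-consistency; do
        grep -q "^$opt=" "$1" || { echo "missing: $opt"; return 1; }
    done
    echo "all GTID options present"
}

# Demo against a sample fragment; on a real node run: check_gtid_cnf /etc/my.cnf
cnf=$(mktemp)
printf '%s\n' 'binlog-format=row' 'log-slave-updates=true' \
    'gtid-mode=on' 'enforce-gtid-consistency=true' > "$cnf"
check_gtid_cnf "$cnf"   # prints: all GTID options present
rm -f "$cnf"
```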
Set the root password and create the replication user:
mysql> use mysql;
mysql> grant all privileges on *.* to root@"%" identified by "oracle123";
mysql> update user set password = password('oracle123') where user='root';
mysql> flush privileges;
mysql> grant replication slave on *.* to 'repl'@'%' identified by 'oracle';
mysql> flush privileges;
3: Configure GTID replication on mysql2 and mysql3
mysql> change master to
    master_host = '192.168.1.21',
    master_port = 3306,
    master_user = 'repl',
    master_password = 'oracle',
    master_auto_position = 1;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.21
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 524
               Relay_Log_File: mysql-relay-bin.000002
                Relay_Log_Pos: 734
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
......
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 9ee7c7af-cbf3-11e5-bf75-000c2923e459:1-2
            Executed_Gtid_Set: 9ee7c7af-cbf3-11e5-bf75-000c2923e459:1-2
                Auto_Position: 1
1 row in set (0.00 sec)
4: Install MHA
rpm -Uvh epel-release-6-8.noarch.rpm
Set up SSH equivalence:
Run on every node:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@mysql1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@mysql2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@mysql3
Test SSH login from each of the 3 nodes:
ssh mysql1
ssh mysql2
ssh mysql3
Binlog server setup (on mysql3):
mkdir -p /mysql/backup/binlog
cd /mysql/backup/binlog
/usr/local/mysql/bin/mysqlbinlog --read-from-remote-server --raw --host=192.168.1.20 --user='root' --password='oracle123' --stop-never mysql-bin.000003 &
The final argument gives the binlog file to start pulling from. Note also that when the mysqld process on mysql1 exits, the binlog server process exits as well.
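Because the binlog server dies with the master's mysqld, it can be convenient to wrap mysqlbinlog in a bounded retry loop. The sketch below is my own (`pull_binlogs` and the `RETRY_DELAY` knob are made up for illustration); the host, user, password, and paths match the ones used above:

```shell
#!/bin/sh
pull_binlogs() {
    # $1: path to mysqlbinlog, $2: destination dir, $3: first binlog file.
    # Retries a few times because mysqlbinlog exits whenever the master's
    # mysqld goes away; RETRY_DELAY (seconds) is a made-up knob.
    bin=$1; dir=$2; first=$3; tries=0
    while [ "$tries" -lt 3 ]; do
        ( cd "$dir" && "$bin" --read-from-remote-server --raw \
              --host=192.168.1.20 --user=root --password='oracle123' \
              --stop-never "$first" )
        echo "mysqlbinlog exited; retrying in ${RETRY_DELAY:-10}s" >&2
        sleep "${RETRY_DELAY:-10}"
        tries=$((tries + 1))
    done
}

# On mysql3 (commented out so the sketch is safe to paste):
# mkdir -p /mysql/backup/binlog
# pull_binlogs /usr/local/mysql/bin/mysqlbinlog /mysql/backup/binlog mysql-bin.000003 &
```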
A few supporting packages must be installed, using the yum network repositories; if installation fails, try yum update to refresh the repositories or yum clean all to clear the cache.
Install mha4mysql-node on every node:
yum -y install perl-DBD-MySQL ncftp
rpm -Uvh mha4mysql-node-0.56-0.el6.noarch.rpm
Install mha4mysql-manager on mysql3:
yum install perl
yum install cpan
yum install perl-Config-Tiny
yum install perl-Time-HiRes
yum install perl-Log-Dispatch
yum install perl-Parallel-ForkManager
If installing perl-Log-Dispatch or perl-Parallel-ForkManager fails:
install EPEL first (see https://fedoraproject.org/wiki/epel)
rpm -Uvh mha4mysql-manager-0.56-0.el6.noarch.rpm
5: Configure MHA (on mysql3)
mkdir -p /etc/masterha/app1
vi /etc/masterha/app1.cnf

[server default]
user=root
password=oracle123
manager_workdir=/etc/masterha/app1
manager_log=/etc/masterha/app1/manager.log
remote_workdir=/etc/masterha/app1
ssh_user=root
repl_user=repl
repl_password=oracle
ping_interval=3
master_ip_failover_script=/etc/masterha/app1/master_ip_failover

[server1]
hostname=192.168.1.21
#ssh_port=9999
master_binlog_dir=/mysql/logs
# ignore replication delay when choosing the new master, so a lagging slave cannot block failover
check_repl_delay=0
candidate_master=1

[server2]
hostname=192.168.1.22
#ssh_port=9999
master_binlog_dir=/mysql/logs
candidate_master=1

[server3]
hostname=192.168.1.23
#ssh_port=9999
master_binlog_dir=/mysql/logs
no_master=1
# without ignore_fail, MHA refuses to fail over while this slave is down
ignore_fail=1

# the binlog server entry only needs the mysqlbinlog command
[binlog1]
hostname=192.168.1.23
# where the pulled binlogs are stored
master_binlog_dir=/mysql/backup/binlog
ignore_fail=1
no_master=1

vi /etc/masterha/app1/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my ( $command, $ssh_user, $orig_master_host, $orig_master_ip,
     $orig_master_port, $new_master_host, $new_master_ip, $new_master_port );
my $vip       = '192.168.1.20';  # virtual IP
my $gateway   = '192.168.1.1';   # gateway IP
my $interface = 'eth0';
my $key       = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip  = "/sbin/ifconfig $interface:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master IP address at a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # All arguments are passed.
        # If you manage the master IP address at a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc.) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

chmod +x /etc/masterha/app1/master_ip_failover
Test the configuration:
# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Thu May 26 23:25:35 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu May 26 23:25:35 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Thu May 26 23:25:35 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Thu May 26 23:25:35 2016 - [info] Starting SSH connection tests..
Thu May 26 23:25:35 2016 - [debug]
Thu May 26 23:25:35 2016 - [debug] Connecting via SSH from root@192.168.1.21(192.168.1.21:22) to root@192.168.1.22(192.168.1.22:22)..
Thu May 26 23:25:35 2016 - [debug] ok.
Thu May 26 23:25:35 2016 - [debug] Connecting via SSH from root@192.168.1.21(192.168.1.21:22) to root@192.168.1.23(192.168.1.23:22)..
Thu May 26 23:25:35 2016 - [debug] ok.
Thu May 26 23:25:36 2016 - [debug]
Thu May 26 23:25:35 2016 - [debug] Connecting via SSH from root@192.168.1.22(192.168.1.22:22) to root@192.168.1.21(192.168.1.21:22)..
Thu May 26 23:25:35 2016 - [debug] ok.
Thu May 26 23:25:35 2016 - [debug] Connecting via SSH from root@192.168.1.22(192.168.1.22:22) to root@192.168.1.23(192.168.1.23:22)..
Thu May 26 23:25:36 2016 - [debug] ok.
Thu May 26 23:25:36 2016 - [debug]
Thu May 26 23:25:36 2016 - [debug] Connecting via SSH from root@192.168.1.23(192.168.1.23:22) to root@192.168.1.21(192.168.1.21:22)..
Thu May 26 23:25:36 2016 - [debug] ok.
Thu May 26 23:25:36 2016 - [debug] Connecting via SSH from root@192.168.1.23(192.168.1.23:22) to root@192.168.1.22(192.168.1.22:22)..
Thu May 26 23:25:36 2016 - [debug] ok.
Thu May 26 23:25:36 2016 - [info] All SSH connection tests passed successfully.

# masterha_check_repl --conf=/etc/masterha/app1.cnf
Thu May 26 22:52:30 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu May 26 22:52:30 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Thu May 26 22:52:30 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Thu May 26 22:52:30 2016 - [info] MHA::MasterMonitor version 0.56.
Thu May 26 22:52:31 2016 - [info] GTID failover mode = 1
Thu May 26 22:52:31 2016 - [info] Dead Servers:
Thu May 26 22:52:31 2016 - [info] Alive Servers:
Thu May 26 22:52:31 2016 - [info] 192.168.1.21(192.168.1.21:3306)
Thu May 26 22:52:31 2016 - [info] 192.168.1.22(192.168.1.22:3306)
Thu May 26 22:52:31 2016 - [info] 192.168.1.23(192.168.1.23:3306)
Thu May 26 22:52:31 2016 - [info] Alive Slaves:
Thu May 26 22:52:31 2016 - [info] 192.168.1.22(192.168.1.22:3306) Version=5.6.28-log (oldest major version between slaves) log-bin:enabled
Thu May 26 22:52:31 2016 - [info] GTID ON
Thu May 26 22:52:31 2016 - [info] Replicating from 192.168.1.21(192.168.1.21:3306)
Thu May 26 22:52:31 2016 - [info] Primary candidate for the new Master (candidate_master is set)
Thu May 26 22:52:31 2016 - [info] 192.168.1.23(192.168.1.23:3306) Version=5.6.28-log (oldest major version between slaves) log-bin:enabled
Thu May 26 22:52:31 2016 - [info] GTID ON
Thu May 26 22:52:31 2016 - [info] Replicating from 192.168.1.21(192.168.1.21:3306)
Thu May 26 22:52:31 2016 - [info] Not candidate for the new Master (no_master is set)
Thu May 26 22:52:31 2016 - [info] Current Alive Master: 192.168.1.21(192.168.1.21:3306)
Thu May 26 22:52:31 2016 - [info] Checking slave configurations..
Thu May 26 22:52:31 2016 - [info] read_only=1 is not set on slave 192.168.1.22(192.168.1.22:3306).
Thu May 26 22:52:31 2016 - [info] read_only=1 is not set on slave 192.168.1.23(192.168.1.23:3306).
Thu May 26 22:52:31 2016 - [info] Checking replication filtering settings..
Thu May 26 22:52:31 2016 - [info] binlog_do_db= , binlog_ignore_db=
Thu May 26 22:52:31 2016 - [info] Replication filtering check ok.
Thu May 26 22:52:31 2016 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu May 26 22:52:31 2016 - [info] HealthCheck: SSH to 192.168.1.23 is reachable.
Thu May 26 22:52:31 2016 - [info] Binlog server 192.168.1.23 is reachable.
Thu May 26 22:52:31 2016 - [info] Checking recovery script configurations on 192.168.1.23(192.168.1.23:3306)..
Thu May 26 22:52:31 2016 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/mysql/backup/binlog --output_file=/etc/masterha/app1/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000004
Thu May 26 22:52:31 2016 - [info] Connecting to root@192.168.1.23(192.168.1.23:22)..
Creating /etc/masterha/app1 if not exists.. ok.
Checking output directory is accessible or not.. ok.
Binlog found at /mysql/backup/binlog, up to mysql-bin.000004
Thu May 26 22:52:31 2016 - [info] Binlog setting check done.
Thu May 26 22:52:31 2016 - [info] Checking SSH publickey authentication settings on the current master..
Thu May 26 22:52:31 2016 - [info] HealthCheck: SSH to 192.168.1.21 is reachable.
Thu May 26 22:52:31 2016 - [info] 192.168.1.21(192.168.1.21:3306) (current master)
+--192.168.1.22(192.168.1.22:3306)
+--192.168.1.23(192.168.1.23:3306)
Thu May 26 22:52:31 2016 - [info] Checking replication health on 192.168.1.22..
Thu May 26 22:52:31 2016 - [info] ok.
Thu May 26 22:52:31 2016 - [info] Checking replication health on 192.168.1.23..
Thu May 26 22:52:31 2016 - [info] ok.
Thu May 26 22:52:31 2016 - [info] Checking master_ip_failover_script status:
Thu May 26 22:52:31 2016 - [info] /etc/masterha/app1/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.1.21 --orig_master_ip=192.168.1.21 --orig_master_port=3306
IN SCRIPT TEST====/sbin/ifconfig eth1:1 down==/sbin/ifconfig eth1:1 192.168.1.20;/sbin/arping -I eth1 -c 3 -s 192.168.1.20 192.168.1.1 >/dev/null 2>&1===
Checking the Status of the script.. OK
Thu May 26 22:52:34 2016 - [info] OK.
Thu May 26 22:52:34 2016 - [warning] shutdown_script is not defined.
Thu May 26 22:52:34 2016 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Starting and stopping MHA
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /etc/masterha/app1/manager.log < /dev/null 2>&1 &
Check that it is running:
masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:11447) is running(0:PING_OK), master:192.168.1.21
Stop MHA:
masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[3]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/app1.cnf > /etc/masterha/app1/manager.log < /dev/null 2>&1
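The start, status-check, and stop commands above can be bundled into one small helper. This is only a convenience sketch of mine (`mha_ctl` is a made-up name, not a tool shipped with MHA); the paths match the app1 configuration used in this article:

```shell
#!/bin/sh
mha_ctl() {
    # start/status/stop the app1 manager; paths match this article's setup
    CONF=/etc/masterha/app1.cnf
    LOG=/etc/masterha/app1/manager.log
    case "$1" in
        start)  nohup masterha_manager --conf="$CONF" > "$LOG" < /dev/null 2>&1 & ;;
        status) masterha_check_status --conf="$CONF" ;;
        stop)   masterha_stop --conf="$CONF" ;;
        *)      echo "usage: mha_ctl start|status|stop"; return 1 ;;
    esac
}

mha_ctl || true   # prints the usage line when no action is given
```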
Testing:
Note: after each test, clean out the logs under /etc/masterha/app1 before starting the MHA manager again.
1: Shut down mysqld on mysql1, then check which host the slaves replicate from and watch the MHA log output.
2: Restore mysql1 as a slave of mysql2; the required change master statement can be found in /etc/masterha/app1/manager.log.
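As a sketch of step 2 (assuming, as in this topology, that mysql2 was promoted after the failover), these are the statements that rejoin mysql1 using GTID auto-positioning; the authoritative statement is always the one MHA writes to manager.log:

```shell
#!/bin/sh
# Statements to rejoin the old master mysql1 as a slave of 192.168.1.22;
# review against manager.log before running.
rejoin_sql=$(cat <<'SQL'
change master to
  master_host = '192.168.1.22',
  master_port = 3306,
  master_user = 'repl',
  master_password = 'oracle',
  master_auto_position = 1;
start slave;
SQL
)
printf '%s\n' "$rejoin_sql"
# On mysql1, once reviewed:
# printf '%s\n' "$rejoin_sql" | mysql -uroot -p'oracle123'
```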
A 1032 error hit while setting up GTID replication was resolved with the following (injecting an empty transaction for the offending GTID):
mysql> show global variables like '%gtid%';
+---------------------------------+------------------------------------------------------------------------------------+
| Variable_name                   | Value                                                                              |
+---------------------------------+------------------------------------------------------------------------------------+
| binlog_gtid_simple_recovery     | OFF                                                                                |
| enforce_gtid_consistency        | ON                                                                                 |
| gtid_executed                   | 88b05570-2599-11e6-880a-000c29c18cf5:1-3, 9ee7c7af-cbf3-11e5-bf75-000c2923e459:1-4 |
| gtid_mode                       | ON                                                                                 |
| gtid_owned                      |                                                                                    |
| gtid_purged                     |                                                                                    |
| simplified_binlog_gtid_recovery | OFF                                                                                |
+---------------------------------+------------------------------------------------------------------------------------+

stop slave;
set gtid_next='9ee7c7af-cbf3-11e5-bf75-000c2923e459:4';
begin; commit;
set gtid_next='automatic';
start slave;
show slave status\G
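The skip-one-GTID recipe above can be generated for any offending transaction. The helper below is hypothetical (`skip_gtid_sql` is my own name, not a MySQL feature); pass it the uuid:sequence reported by the replication error:

```shell
#!/bin/sh
skip_gtid_sql() {
    # $1: uuid:sequence of the GTID to skip; prints the empty-transaction
    # statements to feed into the mysql client on the failing slave
    cat <<SQL
stop slave;
set gtid_next='$1';
begin; commit;
set gtid_next='automatic';
start slave;
SQL
}

skip_gtid_sql '9ee7c7af-cbf3-11e5-bf75-000c2923e459:4'
```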
That concludes this MySQL GTID + MHA configuration guide; I hope it serves as a useful reference.