PostgreSQL Logical Replication: Installing and Using Slony
Slony is one of the most widely used replication solutions in the PostgreSQL world. It is not only one of the oldest replication implementations, it also has some of the broadest external tool support, for example pgAdmin3. For many years Slony was the only viable way to replicate data between PostgreSQL servers. Slony performs logical replication; Slony-I generally requires that replicated tables have a primary key or a unique key. Slony does not read the PostgreSQL transaction log; it is trigger-based, and it builds high availability on top of this logical replication. Besides Slony, PostgreSQL has other logical replication tools such as Londiste and BDR, which will be covered in later articles.
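To make the trigger-based idea concrete, here is a minimal, hypothetical sketch in plain SQL of how row changes can be captured into a change-log table for later replay on a subscriber. None of the object names below belong to Slony; its real implementation uses per-cluster sl_log_* tables and a C-language log trigger, as you will see later in this article.

-- hypothetical change-log table (NOT part of Slony)
create table change_log (
    id        bigserial primary key,
    tab_name  text,
    operation text,        -- 'INSERT' / 'UPDATE' / 'DELETE'
    row_data  text,
    logged_at timestamptz default now()
);

-- hypothetical trigger function: record every row change
create or replace function log_change() returns trigger as $$
begin
    insert into change_log (tab_name, operation, row_data)
    values (tg_table_name, tg_op,
            case when tg_op = 'DELETE' then old::text else new::text end);
    return null;  -- AFTER trigger, return value is ignored
end;
$$ language plpgsql;

-- attach it to a table you want to "replicate" (some_table is a placeholder)
create trigger demo_logtrigger
    after insert or update or delete on some_table
    for each row execute procedure log_change();

A daemon can then read change_log in commit order and apply the changes on another server, which is essentially what the slon daemon does with Slony's own log tables.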
1. Installing Slony
Installation steps:
# tar -jxvf slony1-2.2.5.tar.bz2
# cd slony1-2.2.5
# ./configure --with-pgconfigdir=/opt/pgsql96/bin
# make
# make install
Installation is complete.
When ./configure runs it needs to be able to find the pg_config command; --with-pgconfigdir tells it where to look. In this example pg_config lives in /opt/pgsql96/bin.
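If you are unsure which pg_config will be picked up, you can check it explicitly before configuring (the path below is the one used in this example environment):

/opt/pgsql96/bin/pg_config --version      # should report the server version, here 9.6.x
/opt/pgsql96/bin/pg_config --includedir   # header files Slony will be built against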
2. Slony architecture diagram
3. Replicating a table
Current test environment:
Hostname      | IP            | Role
postgresql201 | 192.168.1.201 | master
postgresql202 | 192.168.1.202 | slave
3.1 On both hosts, create a superuser named slony, dedicated to the Slony service:
create user slony superuser password 'li0924';
3.2 Both hosts have a database named lottu; a table in the lottu database is used as the test object. Create the table synctab in the same way in both databases, because table structure is not replicated automatically:
create table synctab(id int primary key,name text);
3.3 On every node, allow the slony user to connect remotely by adding the following line to pg_hba.conf:
host all slony 192.168.1.0/24 trust
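For the pg_hba.conf change to take effect, reload the configuration on each node. Either of the following works (assuming $PGDATA points at the data directory of this installation):

# reload without restarting the server
/opt/pgsql96/bin/pg_ctl reload -D $PGDATA

Or, from a superuser psql session:

select pg_reload_conf();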
3.4 Set up Slony (run on the master host)
Write a slonik script that registers the nodes, as shown below:
[postgres@postgres201 ~]$ cat slony_setup.sh
#!/bin/sh
masterdb=lottu
slavedb=lottu
host1=192.168.1.201
host2=192.168.1.202
dbuser=slony

slonik<<_eof_
cluster name = first_cluster;

# define nodes (this is needed by pretty much
# all slonik scripts)
node 1 admin conninfo = 'dbname=$masterdb host=$host1 user=$dbuser';
node 2 admin conninfo = 'dbname=$slavedb host=$host2 user=$dbuser';

# init cluster
init cluster ( id=1, comment = 'master node');

# group tables into sets
create set (id=1, origin=1, comment='our tables');
set add table (set id=1, origin=1, id=1, fully qualified name = 'lottu.synctab', comment='sample table');

store node (id=2, comment = 'slave node', event node=1);
store path (server = 1, client = 2, conninfo='dbname=$masterdb host=$host1 user=$dbuser');
store path (server = 2, client = 1, conninfo='dbname=$slavedb host=$host2 user=$dbuser');
_eof_
Now that the table is under Slony's control, we can subscribe the slave to the set. forward = no means node 2 will not keep log data for forwarding to further subscribers (no cascading). The script looks like this:
[postgres@postgres201 ~]$ cat slony_subscribe.sh
#!/bin/sh
masterdb=lottu
slavedb=lottu
host1=192.168.1.201
host2=192.168.1.202
dbuser=slony

slonik<<_eof_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$masterdb host=$host1 user=$dbuser';
node 2 admin conninfo = 'dbname=$slavedb host=$host2 user=$dbuser';
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
_eof_
Run the scripts on the master host:
[postgres@postgres201 ~]$ ./slony_setup.sh
[postgres@postgres201 ~]$ ./slony_subscribe.sh &
[1] 1225
After defining what we want to replicate, we can start the slon daemon on each host:
slon first_cluster 'host=192.168.1.201 dbname=lottu user=slony' &
slon first_cluster 'host=192.168.1.202 dbname=lottu user=slony' &
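A quick sanity check that both daemons are alive (plain process list, nothing Slony-specific):

ps -ef | grep slon | grep -v grep
# one slon process per node should appear, each started with the first_cluster conninfo shown above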
3.5 Verify that Slony-I is configured correctly
Run a DML statement on the master host:
[postgres@postgres201 ~]$ psql lottu lottu
psql (9.6.0)
Type "help" for help.

lottu=# \d synctab
      Table "lottu.synctab"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 name   | text    |
Indexes:
    "synctab_pkey" PRIMARY KEY, btree (id)
Triggers:
    _first_cluster_logtrigger AFTER INSERT OR DELETE OR UPDATE ON synctab FOR EACH ROW EXECUTE PROCEDURE _first_cluster.logtrigger('_first_cluster', '1', 'k')
    _first_cluster_truncatetrigger BEFORE TRUNCATE ON synctab FOR EACH STATEMENT EXECUTE PROCEDURE _first_cluster.log_truncate('1')
Disabled user triggers:
    _first_cluster_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON synctab FOR EACH ROW EXECUTE PROCEDURE _first_cluster.denyaccess('_first_cluster')
    _first_cluster_truncatedeny BEFORE TRUNCATE ON synctab FOR EACH STATEMENT EXECUTE PROCEDURE _first_cluster.deny_truncate()

lottu=# insert into synctab values (1001,'lottu');
INSERT 0 1
On the slave host, check that the change has arrived:
[postgres@postgres202 ~]$ psql
psql (9.6.0)
Type "help" for help.

postgres=# \c lottu lottu
You are now connected to database "lottu" as user "lottu".
lottu=> select * from synctab ;
  id  | name
------+-------
 1001 | lottu
(1 row)
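As an additional check, direct writes on the slave should now be rejected by the _first_cluster_denyaccess trigger that appeared in \d synctab above. The exact message depends on the Slony version, but it will look roughly like this:

lottu=> insert into synctab values (1002,'should_fail');
ERROR:  Slony-I: Table synctab is replicated and cannot be modified on a subscriber node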
4. Viewing the Slony-I catalog tables and views
4.1 Once configuration succeeds, a dedicated schema is created in the replicated database:
[postgres@postgres201 ~]$ psql lottu lottu
psql (9.6.0)
Type "help" for help.

lottu=# \dn
      List of schemas
      Name      |  Owner
----------------+----------
 _first_cluster | slony
 lottu          | lottu
 public         | postgres
(3 rows)
4.2 View the nodes in the cluster:
lottu=# select * from _first_cluster.sl_node;
 no_id | no_active | no_comment  | no_failed
-------+-----------+-------------+-----------
     1 | t         | master node | f
     2 | t         | slave node  | f
(2 rows)
4.3 View the replication sets in the cluster:
lottu=# select * from _first_cluster.sl_set;
 set_id | set_origin | set_locked | set_comment
--------+------------+------------+-------------
      1 |          1 |            | our tables
(1 row)
4.4 View the replicated tables in the cluster:
lottu=# select * from _first_cluster.sl_table;
-[ RECORD 1 ]-------------
tab_id      | 1
tab_reloid  | 57420
tab_relname | synctab
tab_nspname | lottu
tab_set     | 1
tab_idxname | synctab_pkey
tab_altered | f
tab_comment | sample table
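Besides these catalog tables, the _first_cluster schema also contains monitoring views. sl_status is the usual place to check replication lag; the column names below are from Slony-I 2.x and may differ slightly between versions:

lottu=# select st_origin, st_received, st_lag_num_events, st_lag_time
lottu-#   from _first_cluster.sl_status;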
5. Routine maintenance
5.1 Adding a replicated table to an existing cluster
Take the table synctab2 as an example; as before, create it on both nodes, since table structure is not replicated:
create table synctab2(id int primary key,name text,reg_time timestamp);
Because set 1 already has a subscriber, the new table is placed in a new replication set, which is subscribed and then merged into set 1. The script looks like this:
[postgres@postgres201 ~]$ cat slony_add_table_set.sh
#!/bin/sh
masterdb=lottu
slavedb=lottu
host1=192.168.1.201
host2=192.168.1.202
dbuser=slony

slonik<<_eof_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$masterdb host=$host1 user=$dbuser';
node 2 admin conninfo = 'dbname=$slavedb host=$host2 user=$dbuser';

create set (id=2, origin=1, comment='a second replication set');
set add table (set id=2, origin=1, id=2, fully qualified name ='lottu.synctab2', comment='second table');

subscribe set(id=1, provider=1, receiver=2);
subscribe set(id=2, provider=1, receiver=2);
merge set(id=1, add id=2, origin=1);
_eof_
Run the slony_add_table_set.sh script; the merge waits until the subscription of the new set has completed:
[postgres@postgres201 ~]$ ./slony_add_table_set.sh
:8 subscription in progress before mergeset. waiting
:8 subscription in progress before mergeset. waiting
Check that the table was added:
lottu=# select * from _first_cluster.sl_table;
-[ RECORD 1 ]--------------
tab_id      | 1
tab_reloid  | 57420
tab_relname | synctab
tab_nspname | lottu
tab_set     | 1
tab_idxname | synctab_pkey
tab_altered | f
tab_comment | sample table
-[ RECORD 2 ]--------------
tab_id      | 2
tab_reloid  | 57840
tab_relname | synctab2
tab_nspname | lottu
tab_set     | 1
tab_idxname | synctab2_pkey
tab_altered | f
tab_comment | second table
5.2 Removing a replicated table from an existing cluster
[postgres@postgres201 ~]$ cat slony_drop_table.sh
#!/bin/sh
masterdb=lottu
slavedb=lottu
host1=192.168.1.201
host2=192.168.1.202
dbuser=slony

slonik<<_eof_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$masterdb host=$host1 user=$dbuser';
node 2 admin conninfo = 'dbname=$slavedb host=$host2 user=$dbuser';
set drop table (id=2, origin=1);
_eof_
Run the slony_drop_table.sh script:
[postgres@postgres201 ~]$ ./slony_drop_table.sh
Check that the table was removed:
lottu=# select * from _first_cluster.sl_table;
 tab_id | tab_reloid | tab_relname | tab_nspname | tab_set | tab_idxname  | tab_altered | tab_comment
--------+------------+-------------+-------------+---------+--------------+-------------+--------------
      1 |      57420 | synctab     | lottu       |       1 | synctab_pkey | f           | sample table
(1 row)
5.3 Removing Slony
[postgres@postgres201 ~]$ cat slony_drop_node.sh
#!/bin/sh
masterdb=lottu
slavedb=lottu
host1=192.168.1.201
host2=192.168.1.202
dbuser=slony

slonik<<_eof_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$masterdb host=$host1 user=$dbuser';
node 2 admin conninfo = 'dbname=$slavedb host=$host2 user=$dbuser';
uninstall node (id = 1);
uninstall node (id = 2);
_eof_
Run the script:
[postgres@postgres201 ~]$ ./slony_drop_node.sh
:4: notice: slony-i: please drop schema "_first_cluster"
:4: notice: drop cascades to 175 other objects
detail: drop cascades to table _first_cluster.sl_node
drop cascades to table _first_cluster.sl_nodelock
drop cascades to table _first_cluster.sl_set
drop cascades to table _first_cluster.sl_setsync
drop cascades to table _first_cluster.sl_table
drop cascades to table _first_cluster.sl_sequence
drop cascades to table _first_cluster.sl_path
drop cascades to table _first_cluster.sl_listen
drop cascades to table _first_cluster.sl_subscribe
drop cascades to table _first_cluster.sl_event
drop cascades to table _first_cluster.sl_confirm
drop cascades to table _first_cluster.sl_seqlog
drop cascades to table _first_cluster.sl_log_1
drop cascades to table _first_cluster.sl_log_2
...
and 75 other objects (see server log for list)
:5: notice: slony-i: please drop schema "_first_cluster"
:5: notice: drop cascades to 175 other objects
...

The uninstall notices simply enumerate every table, view, sequence, type, trigger, and function in the _first_cluster schema that is dropped along with it (175 objects per node), including the log and deny-access triggers that Slony had attached to lottu.synctab; one identical notice is printed for each of the two nodes.
Perfect: everything is back to a clean slate.
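As a possible final cleanup step, assuming the slon daemons have been stopped and the _first_cluster schema is gone, the dedicated replication role created in 3.1 can also be dropped on both nodes:

drop user slony;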