Redis Source Code Analysis: Cluster Manual Failover and Slave Migration in Detail
1: Manual Failover
A Redis cluster supports manual failover: sending the CLUSTER FAILOVER command to a slave makes it start a failover while its master is not failed, promoting itself to the new master while the original master is demoted to a slave.
To avoid losing data, once a slave receives CLUSTER FAILOVER the flow is as follows:
a: upon receiving the command, the slave sends a CLUSTERMSG_TYPE_MFSTART packet to its master;
b: upon receiving that packet, the master pauses all of its clients, that is, for 10 seconds it stops processing commands from clients; the heartbeat packets it sends during that window carry the CLUSTERMSG_FLAG0_PAUSED flag;
c: when the slave receives a heartbeat from its master that carries the CLUSTERMSG_FLAG0_PAUSED flag, it reads the master's current replication offset from it. Only after its own replication offset has caught up with that value does the slave start the failover procedure: initiate an election, count the votes, win the election, promote itself to master, and update the configuration;
CLUSTER FAILOVER supports two options, FORCE and TAKEOVER, which change the flow above.
With FORCE, the slave does not exchange packets with its master and the master does not pause its clients; the slave starts the failover procedure immediately: initiate an election, count the votes, win the election, promote itself to master, and update the configuration.
With TAKEOVER, things are even more blunt: the slave holds no election at all; it simply promotes itself to master, takes over its master's slots, bumps its own configEpoch, and updates the configuration.
Hence with FORCE or TAKEOVER the master may already be down, whereas a plain CLUSTER FAILOVER with no option requires the master to be online.
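The three variants can be exercised with redis-cli; a quick illustration (the port is hypothetical, assuming a slave listens on 7001):

redis-cli -p 7001 CLUSTER FAILOVER           # coordinated, lossless; master must be online
redis-cli -p 7001 CLUSTER FAILOVER FORCE     # skip coordination with the master, still hold an election
redis-cli -p 7001 CLUSTER FAILOVER TAKEOVER  # no election: bump configEpoch and claim the slots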
The part of clusterCommand() that handles CLUSTER FAILOVER is shown below:
else if (!strcasecmp(c->argv[1]->ptr,"failover") && (c->argc == 2 || c->argc == 3)) { /* cluster failover [force|takeover] */ int force = 0, takeover = 0; if (c->argc == 3) { if (!strcasecmp(c->argv[2]->ptr,"force")) { force = 1; } else if (!strcasecmp(c->argv[2]->ptr,"takeover")) { takeover = 1; force = 1; /* takeover also implies force. */ } else { addreply(c,shared.syntaxerr); return; } } /* check preconditions. */ if (nodeismaster(myself)) { addreplyerror(c,"you should send cluster failover to a slave"); return; } else if (myself->slaveof == null) { addreplyerror(c,"i'm a slave but my master is unknown to me"); return; } else if (!force && (nodefailed(myself->slaveof) || myself->slaveof->link == null)) { addreplyerror(c,"master is down or failed, " "please use cluster failover force"); return; } resetmanualfailover(); server.cluster->mf_end = mstime() + redis_cluster_mf_timeout; if (takeover) { /* a takeover does not perform any initial check. it just * generates a new configuration epoch for this node without * consensus, claims the master's slots, and broadcast the new * configuration. */ redislog(redis_warning,"taking over the master (user request)."); clusterbumpconfigepochwithoutconsensus(); clusterfailoverreplaceyourmaster(); } else if (force) { /* if this is a forced failover, we don't need to talk with our * master to agree about the offset. we just failover taking over * it without coordination. */ redislog(redis_warning,"forced failover user request accepted."); server.cluster->mf_can_start = 1; } else { redislog(redis_warning,"manual failover user request accepted."); clustersendmfstart(myself->slaveof); } addreply(c,shared.ok); }
It first checks whether the last argument of the command is FORCE or TAKEOVER;
If the current node is a master; or it is a slave whose master is unknown; or its master is failed or disconnected while the command carries neither FORCE nor TAKEOVER, then an error is replied to the client and the function returns;
Otherwise it calls resetManualFailover() to reset any previous manual failover state;
mf_end is set to the current time plus 5 seconds (REDIS_CLUSTER_MF_TIMEOUT); this field is both the deadline of the manual failover and the flag that one is in progress;
If the last argument is TAKEOVER, the slave that received the command takes over its master's slots and becomes the new master without going through an election. It therefore first calls clusterBumpConfigEpochWithoutConsensus() to generate a new configEpoch for the subsequent configuration update, then calls clusterFailoverReplaceYourMaster() to turn itself into the new master and broadcast the change to every node in the cluster;
If the last argument is FORCE, the slave may start the election at once instead of waiting until it reaches the master's replication offset, so mf_can_start is set to 1; with it set, clusterHandleSlaveFailover() starts the failover even though the master is not failed and the slave's replication data may be stale;
If neither FORCE nor TAKEOVER is given, the slave must first send a CLUSTERMSG_TYPE_MFSTART packet to its master, which it does by calling clusterSendMFStart(). The mf_* fields used throughout this flow are declared in clusterState; see the sketch below.
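For reference, here is a sketch of the manual-failover fields of clusterState as declared in cluster.h (quoted from the Redis 3.0 source; comments slightly abridged):

/* Manual failover state in common. */
mstime_t mf_end;            /* Manual failover time limit (ms unixtime).
                               It is zero if there is no MF in progress. */
/* Manual failover state of master. */
clusterNode *mf_slave;      /* Slave performing the manual failover. */
/* Manual failover state of slave. */
long long mf_master_offset; /* Master offset the slave needs to start MF
                               or zero if still not received. */
int mf_can_start;           /* If non-zero signal that the manual failover
                               can start requesting masters vote. */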
When the master receives the CLUSTERMSG_TYPE_MFSTART packet, it handles it in clusterProcessPacket() as follows:
else if (type == CLUSTERMSG_TYPE_MFSTART) {
    /* This message is acceptable only if I'm a master and the sender
     * is one of my slaves. */
    if (!sender || sender->slaveof != myself) return 1;
    /* Manual failover requested from slaves. Initialize the state
     * accordingly. */
    resetManualFailover();
    server.cluster->mf_end = mstime() + REDIS_CLUSTER_MF_TIMEOUT;
    server.cluster->mf_slave = sender;
    pauseClients(mstime()+(REDIS_CLUSTER_MF_TIMEOUT*2));
    redisLog(REDIS_WARNING,"Manual failover requested by slave %.40s.",
        sender->name);
}
If the sending node cannot be found in the nodes dictionary, or its master is not the current node, the function returns immediately;
Otherwise it calls resetManualFailover() to reset any previous manual failover state;
Then mf_end is set to the current time plus 5 seconds; this field is both the deadline of the manual failover and the flag that one is in progress;
Then mf_slave is set to sender; this field records the slave that is performing the manual failover;
Then pauseClients() is called so that all clients are paused for the next 10 seconds. Note that both the slave-side and the master-side paths begin with resetManualFailover(); its implementation is sketched below.
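resetManualFailover() clears all the mf_* state and, if clients were paused, lifts the pause; its implementation in the Redis 3.0 source looks like this:

void resetManualFailover(void) {
    if (server.cluster->mf_end && clientsArePaused()) {
        server.clients_pause_end_time = 0;
        clientsArePaused(); /* Just use the side effect of the function. */
    }
    server.cluster->mf_end = 0; /* No manual failover in progress. */
    server.cluster->mf_can_start = 0;
    server.cluster->mf_slave = NULL;
    server.cluster->mf_master_offset = 0;
}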
When the master builds the header of an outgoing heartbeat packet, if it finds that a manual failover is currently in progress, it adds the CLUSTERMSG_FLAG0_PAUSED flag to the header:
void clusterBuildMessageHdr(clusterMsg *hdr, int type) {
    ...
    /* Set the message flags. */
    if (nodeIsMaster(myself) && server.cluster->mf_end)
        hdr->mflags[0] |= CLUSTERMSG_FLAG0_PAUSED;
    ...
}
The slave handles incoming packets in clusterProcessPacket(); as soon as it sees a packet from its master carrying the CLUSTERMSG_FLAG0_PAUSED flag, it records the master's replication offset into server.cluster->mf_master_offset:
int clusterProcessPacket(clusterLink *link) {
    ...
    /* Check if the sender is a known node. */
    sender = clusterLookupNode(hdr->sender);
    if (sender && !nodeInHandshake(sender)) {
        ...
        /* Update the replication offset info for this node. */
        sender->repl_offset = ntohu64(hdr->offset);
        sender->repl_offset_time = mstime();
        /* If we are a slave performing a manual failover and our master
         * sent its offset while already paused, populate the MF state. */
        if (server.cluster->mf_end &&
            nodeIsSlave(myself) &&
            myself->slaveof == sender &&
            hdr->mflags[0] & CLUSTERMSG_FLAG0_PAUSED &&
            server.cluster->mf_master_offset == 0)
        {
            server.cluster->mf_master_offset = sender->repl_offset;
            redisLog(REDIS_WARNING,
                "Received replication offset for paused "
                "master manual failover: %lld",
                server.cluster->mf_master_offset);
        }
    }
}
In the cluster cron job clusterCron(), the slave calls clusterHandleManualFailover(), which sets server.cluster->mf_can_start to 1 as soon as the slave's replication offset reaches server.cluster->mf_master_offset. The subsequent call to clusterHandleSlaveFailover() then starts the failover procedure immediately.
The code of clusterHandleManualFailover() is as follows:
void clusterHandleManualFailover(void) {
    /* Return ASAP if no manual failover is in progress. */
    if (server.cluster->mf_end == 0) return;

    /* If mf_can_start is non-zero, the failover was already triggered so the
     * next steps are performed by clusterHandleSlaveFailover(). */
    if (server.cluster->mf_can_start) return;

    if (server.cluster->mf_master_offset == 0) return; /* Wait for offset... */

    if (server.cluster->mf_master_offset == replicationGetSlaveOffset()) {
        /* Our replication offset matches the master replication offset
         * announced after clients were paused. We can start the failover. */
        server.cluster->mf_can_start = 1;
        redisLog(REDIS_WARNING,
            "All master replication stream processed, "
            "manual failover can start.");
    }
}
Both slaves and masters call manualFailoverCheckTimeout() from the cluster cron job clusterCron(); once the manual failover deadline has passed, it resets the manual failover state, aborting the procedure. The code of manualFailoverCheckTimeout() is as follows:
/* If a manual failover timed out, abort it. */
void manualFailoverCheckTimeout(void) {
    if (server.cluster->mf_end && server.cluster->mf_end < mstime()) {
        redisLog(REDIS_WARNING,"Manual failover timed out.");
        resetManualFailover();
    }
}
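For context, here is a rough sketch of how these functions are driven from clusterCron() (abridged from the Redis 3.0 source; unrelated code elided):

void clusterCron(void) {
    ...
    /* Abort a manual failover if the timeout is reached. */
    manualFailoverCheckTimeout();

    if (nodeIsSlave(myself)) {
        clusterHandleManualFailover();
        clusterHandleSlaveFailover();
        ...
    }
    ...
}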
2: Slave Migration
In a Redis cluster, to improve availability, each master is normally given several slaves. But if these master-slave relationships never changed, then over time orphaned masters could appear, i.e. masters left with no slave available for failover; once such a master goes down, the whole cluster becomes unavailable.
Redis Cluster therefore provides slave migration. In short: whenever an orphaned master appears in the cluster, some slave A automatically becomes a slave of that orphaned master. Slave A satisfies these conditions: A's master has the largest number of attached slaves, and among those slaves A has the smallest node ID ("the acting slave is the slave among the masters with the maximum number of attached slaves, that is not in FAIL state and has the smallest node ID").
The feature is implemented in the cluster cron job clusterCron(). The relevant code is:
void clusterCron(void) {
    ...
    orphaned_masters = 0;
    max_slaves = 0;
    this_slaves = 0;
    di = dictGetSafeIterator(server.cluster->nodes);
    while((de = dictNext(di)) != NULL) {
        clusterNode *node = dictGetVal(de);
        now = mstime(); /* Use an updated time at every iteration. */
        mstime_t delay;

        if (node->flags &
            (REDIS_NODE_MYSELF|REDIS_NODE_NOADDR|REDIS_NODE_HANDSHAKE))
                continue;

        /* Orphaned master check, useful only if the current instance
         * is a slave that may migrate to another master. */
        if (nodeIsSlave(myself) && nodeIsMaster(node) && !nodeFailed(node)) {
            int okslaves = clusterCountNonFailingSlaves(node);

            /* A master is orphaned if it is serving a non-zero number of
             * slots, have no working slaves, but used to have at least one
             * slave. */
            if (okslaves == 0 && node->numslots > 0 && node->numslaves)
                orphaned_masters++;
            if (okslaves > max_slaves) max_slaves = okslaves;
            if (nodeIsSlave(myself) && myself->slaveof == node)
                this_slaves = okslaves;
        }
        ...
    }
    ...
    if (nodeIsSlave(myself)) {
        ...
        /* If there are orphaned slaves, and we are a slave among the masters
         * with the max number of non-failing slaves, consider migrating to
         * the orphaned masters. Note that it does not make sense to try
         * a migration if there is no master with at least *two* working
         * slaves. */
        if (orphaned_masters && max_slaves >= 2 && this_slaves == max_slaves)
            clusterHandleSlaveMigration(max_slaves);
    }
    ...
}
It iterates over the dictionary server.cluster->nodes and, for every node that is not the current node, not flagged REDIS_NODE_NOADDR, and not in the handshake state, processes it as follows:
If the current node is a slave and node is a master not flagged as failed, it first calls clusterCountNonFailingSlaves() to compute okslaves, the number of node's slaves that are not failed. If okslaves is 0 while node serves a non-zero number of slots and used to have at least one slave, node is an orphaned master, so orphaned_masters is incremented. If okslaves is greater than max_slaves, max_slaves is updated, so after the loop max_slaves holds the largest number of non-failing slaves owned by any single master. If the current node happens to be one of node's slaves, okslaves is recorded in this_slaves. All of this is preparation for the slave migration that follows;
After all nodes have been visited, if at least one orphaned master exists, max_slaves is at least 2, and the current node is one of the slaves of the master with the most non-failing slaves, clusterHandleSlaveMigration() is called; if its own conditions are met, it performs the migration, turning the current slave into a slave of some orphaned master.
The code of clusterHandleSlaveMigration() is as follows:
void clusterHandleSlaveMigration(int max_slaves) {
    int j, okslaves = 0;
    clusterNode *mymaster = myself->slaveof, *target = NULL, *candidate = NULL;
    dictIterator *di;
    dictEntry *de;

    /* Step 1: Don't migrate if the cluster state is not ok. */
    if (server.cluster->state != REDIS_CLUSTER_OK) return;

    /* Step 2: Don't migrate if my master will not be left with at least
     *         'migration-barrier' slaves after my migration. */
    if (mymaster == NULL) return;
    for (j = 0; j < mymaster->numslaves; j++)
        if (!nodeFailed(mymaster->slaves[j]) &&
            !nodeTimedOut(mymaster->slaves[j])) okslaves++;
    if (okslaves <= server.cluster_migration_barrier) return;

    /* Step 3: Idenitfy a candidate for migration, and check if among the
     * masters with the greatest number of ok slaves, I'm the one with the
     * smaller node ID.
     *
     * Note that this means that eventually a replica migration will occurr
     * since slaves that are reachable again always have their FAIL flag
     * cleared. At the same time this does not mean that there are no
     * race conditions possible (two slaves migrating at the same time), but
     * this is extremely unlikely to happen, and harmless. */
    candidate = myself;
    di = dictGetSafeIterator(server.cluster->nodes);
    while((de = dictNext(di)) != NULL) {
        clusterNode *node = dictGetVal(de);
        int okslaves;

        /* Only iterate over working masters. */
        if (nodeIsSlave(node) || nodeFailed(node)) continue;
        /* If this master never had slaves so far, don't migrate. We want
         * to migrate to a master that remained orphaned, not masters that
         * were never configured to have slaves. */
        if (node->numslaves == 0) continue;
        okslaves = clusterCountNonFailingSlaves(node);

        if (okslaves == 0 && target == NULL && node->numslots > 0)
            target = node;

        if (okslaves == max_slaves) {
            for (j = 0; j < node->numslaves; j++) {
                if (memcmp(node->slaves[j]->name,
                           candidate->name,
                           REDIS_CLUSTER_NAMELEN) < 0)
                {
                    candidate = node->slaves[j];
                }
            }
        }
    }
    dictReleaseIterator(di);

    /* Step 4: perform the migration if there is a target, and if I'm the
     * candidate. */
    if (target && candidate == myself) {
        redisLog(REDIS_WARNING,"Migrating to orphaned master %.40s",
            target->name);
        clusterSetMaster(target);
    }
}
If the current cluster state is not REDIS_CLUSTER_OK, it returns immediately; if the current slave has no master, it returns immediately;
Next it computes okslaves, the number of non-failing slaves of the current slave's master; if okslaves is less than or equal to the migration threshold server.cluster_migration_barrier, it returns immediately. The threshold is a configuration directive, shown below.
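The threshold comes from the cluster-migration-barrier directive in redis.conf (1 by default): a master must be left with at least this many working slaves for one of its slaves to be allowed to migrate away. For example:

# A slave migrates to an orphaned master only if its old master remains
# covered by at least this many working slaves.
cluster-migration-barrier 1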
Next it iterates over the dictionary server.cluster->nodes and, for each node:
If node is a slave, or is flagged as failed, it skips to the next node; if node has never had any slaves configured, it also skips to the next node;
It calls clusterCountNonFailingSlaves() to compute okslaves, the number of node's slaves that are not failed; if okslaves is 0 while node->numslots is greater than 0, the master once had slaves but they have all failed, so an orphaned master has been found and recorded in target;
If okslaves equals max_slaves, node is one of the masters with the most non-failing slaves, so the current candidate's node ID is compared with the IDs of all of node's slaves, and candidate is replaced whenever a slave with a smaller name is found. (In fact, once candidate is no longer myself, the function could just as well return early.)
After the loop, if an orphaned master was found and the current node holds the smallest node ID, clusterSetMaster() is called to make target the current node's master and kick off the master-slave replication.
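As a concrete example (node names are hypothetical): master M1 has three healthy slaves S1, S2 and S3, while master M2 serves slots but its only slave has failed. In clusterCron(), each of S1, S2 and S3 computes orphaned_masters == 1, max_slaves == 3 and this_slaves == 3, so each calls clusterHandleSlaveMigration(3); inside it, only the slave with the lexicographically smallest node ID ends the loop with candidate == myself and actually calls clusterSetMaster(M2). With the default cluster-migration-barrier of 1, the migration is permitted because M1 still keeps two working slaves.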
3: The configEpoch Collision Problem
Within a cluster, masters responsible for different slots sharing the same configEpoch is in itself not a problem; but through operator intervention or bugs, masters with the same configEpoch may end up claiming the same slots, which is a fatal condition in a distributed system. Redis therefore requires that all nodes in the cluster have distinct configEpochs.
When a slave is promoted to a new master, it obtains a configEpoch greater than that of every other node at the time, so slave promotion by itself cannot produce duplicates (in a single election round, two slaves cannot both win). However, at the end of an administrator-driven resharding, the node that imported the slots bumps its own configEpoch without the agreement of the other nodes; likewise, a manual forced failover lets a slave bump its configEpoch without consensus. Either case can leave multiple masters with the same configEpoch.
An algorithm is therefore needed to guarantee that the configEpochs of all nodes in the cluster stay distinct. It works as follows: when a master receives a heartbeat packet from another master and finds that the configEpoch in the packet equals its own, it calls clusterHandleConfigEpochCollision() to resolve the collision.
The code of clusterHandleConfigEpochCollision() is as follows:
void clusterHandleConfigEpochCollision(clusterNode *sender) {
    /* Prerequisites: nodes have the same configEpoch and are both masters. */
    if (sender->configEpoch != myself->configEpoch ||
        !nodeIsMaster(sender) || !nodeIsMaster(myself)) return;
    /* Don't act if the colliding node has a smaller Node ID. */
    if (memcmp(sender->name,myself->name,REDIS_CLUSTER_NAMELEN) <= 0) return;
    /* Get the next ID available at the best of this node knowledge. */
    server.cluster->currentEpoch++;
    myself->configEpoch = server.cluster->currentEpoch;
    clusterSaveConfigOrDie(1);
    redisLog(REDIS_VERBOSE,
        "WARNING: configEpoch collision with node %.40s."
        " configEpoch set to %llu",
        sender->name,
        (unsigned long long) myself->configEpoch);
}
If the sending node's configEpoch differs from the current node's, or the sender is not a master, or the current node is not a master, the function returns immediately;
If the sending node's node ID is smaller than (or equal to) the current node's, the function returns immediately;
So it is the node with the smaller name that ends up with the larger configEpoch: it increments its own currentEpoch and then assigns that value to its configEpoch.
In this way, even if several nodes share a configEpoch, ultimately only the node with the largest node ID keeps its configEpoch unchanged; every other node bumps its own, each by a different amount, and the node with the smallest node ID ends up with the largest configEpoch.
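A quick worked example (hypothetical node IDs a1 < b2 < c3, all masters claiming configEpoch 10, with currentEpoch 10): c3 never acts, because every colliding sender has a smaller ID than its own. When b2 receives c3's heartbeat it bumps currentEpoch to 11 and takes configEpoch 11; when a1 subsequently collides with c3 (and later with b2) it bumps as well, ending at, say, configEpoch 12. The epochs 10, 11 and 12 are now all distinct, and the smallest-ID node a1 holds the largest one.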
Summary
That concludes this article's walk through the Redis source for cluster manual failover, slave migration, and configEpoch collision handling. If anything above is inaccurate or incomplete, corrections from readers are welcome.