
Summary of problems encountered when creating a Redis cluster on Windows


I. Preparation

1. Prepare one Redis instance: delete the dump.rdb file in its directory and edit its configuration file, redis.windows.conf:

1. Change the port; I set it to: port 7001
2. Enable the cluster settings:
 cluster-enabled yes
 cluster-config-file nodes-7001.conf   (the file name can be changed)
 cluster-node-timeout 15000

 


Copy this Redis instance five more times and adjust each copy's configuration accordingly, setting the ports to 7001, 7002, 7003, 7004, 7005 and 7006; an example of what changes per instance is shown below.
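For example, the instance listening on port 7002 would differ from the first one only in these lines of its redis.windows.conf (following the same naming pattern as above; the other four instances are analogous):

 port 7002
 cluster-enabled yes
 cluster-config-file nodes-7002.conf
 cluster-node-timeout 15000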

2. Set up the Ruby environment, the Ruby driver for Redis (the redis gem), and redis-trib.rb, the tool used to create the Redis cluster.

(1) Download the Redis package from http://download.redis.io/releases/; here we use the zip build of the redis-x64-3.2.1 release.


(2) Download the Ruby installer: http://dl.bintray.com/oneclick/rubyinstaller/rubyinstaller-2.2.4-x64.exe

(3) Download the Redis driver for the Ruby environment: https://rubygems.org/gems/redis/versions/3.2.2. For compatibility, version 3.2.2 is used here.

Download link: https://rubygems.org/downloads/redis-3.2.2.gem


(4) Download redis-trib.rb, the Ruby script officially provided by Redis for creating clusters (the copy I use is included at the end of this article).

3. Put all the Redis instance directories and redis-trib.rb under a single redis directory:


II. Starting the Redis cluster

1. In the directory containing redis-trib.rb, run the command below; a prompt will appear afterwards, just answer yes:

redis-trib.rb create --replicas 0 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006
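Note that all six instances must already be running before the command above is executed; with the stock Windows binaries, a typical way is to open a console in each instance directory and start it with its own configuration file:

 redis-server.exe redis.windows.conf

Depending on how Ruby is installed, the script may also need an explicit interpreter, i.e. ruby redis-trib.rb create ... With --replicas 0 all six nodes become masters; running the same command with --replicas 1 would instead create three masters, each with one replica.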

III. Problems encountered

1. The redis library is missing:

 Fix: gem install redis (installs the redis library).
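If the machine cannot reach rubygems.org, the redis-3.2.2.gem file downloaded in the preparation step can be installed locally instead (standard RubyGems usage; run it from the directory containing the .gem file):

 gem install --local redis-3.2.2.gem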


2. Slot 15 is reported as already occupied. Fix: connect to every server and run the flushall, flushdb and cluster reset commands; do this for each Redis instance:
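One way to do this from the command line is with redis-cli against each node (ports as configured above); this is just a sketch of the three commands mentioned:

 redis-cli -p 7001 flushall
 redis-cli -p 7001 flushdb
 redis-cli -p 7001 cluster reset

Repeat for ports 7002 through 7006, then rerun the redis-trib.rb create command.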


IV. Resources

redis-trib.rb: copy the script below into a text file and rename it to redis-trib.rb:

#!/usr/bin/env ruby
 
# todo (temporary here, we'll move this into the github issues once
# redis-trib initial implementation is completed).
#
# - make sure that if the rehashing fails in the middle redis-trib will try
# to recover.
# - when redis-trib performs a cluster check, if it detects a slot move in
# progress it should prompt the user to continue the move from where it
# stopped.
# - gracefully handle ctrl+c in move_slot to prompt the user if really stop
# while rehashing, and performing the best cleanup possible if the user
# forces the quit.
# - when doing "fix" set a global fix to true, and prompt the user to
# fix the problem if automatically fixable every time there is something
# to fix. for instance:
# 1) if there is a node that pretend to receive a slot, or to migrate a
# slot, but has no entries in that slot, fix it.
# 2) if there is a node having keys in slots that are not owned by it
# fix this condition moving the entries in the same node.
# 3) perform more possibly slow tests about the state of the cluster.
# 4) when aborted slot migration is detected, fix it.
 
require 'rubygems'
require 'redis'
 
ClusterHashSlots = 16384
MigrateDefaultTimeout = 60000
MigrateDefaultPipeline = 10
RebalanceDefaultThreshold = 2
 
$verbose = false
 
def xputs(s)
 case s[0..2]
 when ">>>"
 color="29;1"
 when "[er"
 color="31;1"
 when "[wa"
 color="31;1"
 when "[ok"
 color="32"
 when "[fa","***"
 color="33"
 else
 color=nil
 end
 
 color = nil if ENV['TERM'] != "xterm"
 print "\033[#{color}m" if color
 print s
 print "\033[0m" if color
 print "\n"
end
 
class ClusterNode
 def initialize(addr)
 s = addr.split(":")
 if s.length < 2
  puts "invalid ip or port (given as #{addr}) - use ip:port format"
  exit 1
 end
 port = s.pop # removes port from split array
 ip = s.join(":") # if s.length > 1 here, it's ipv6, so restore address
 @r = nil
 @info = {}
 @info[:host] = ip
 @info[:port] = port
 @info[:slots] = {}
 @info[:migrating] = {}
 @info[:importing] = {}
 @info[:replicate] = false
 @dirty = false # true if we need to flush slots info into node.
 @friends = []
 end
 
 def friends
 @friends
 end
 
 def slots
 @info[:slots]
 end
 
 def has_flag?(flag)
 @info[:flags].index(flag)
 end
 
 def to_s
 "#{@info[:host]}:#{@info[:port]}"
 end
 
 def connect(o={})
 return if @r
 print "connecting to node #{self}: " if $verbose
 STDOUT.flush
 begin
  @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60)
  @r.ping
 rescue
  xputs "[err] sorry, can't connect to node #{self}"
  exit 1 if o[:abort]
  @r = nil
 end
 xputs "ok" if $verbose
 end
 
 def assert_cluster
 info = @r.info
 if !info["cluster_enabled"] || info["cluster_enabled"].to_i == 0
  xputs "[err] node #{self} is not configured as a cluster node."
  exit 1
 end
 end
 
 def assert_empty
 if !(@r.cluster("info").split("\r\n").index("cluster_known_nodes:1")) ||
  (@r.info['db0'])
  xputs "[err] node #{self} is not empty. either the node already knows other nodes (check with cluster nodes) or contains some key in database 0."
  exit 1
 end
 end
 
 def load_info(o={})
 self.connect
 nodes = @r.cluster("nodes").split("\n")
 nodes.each{|n|
  # name addr flags role ping_sent ping_recv link_status slots
  split = n.split
  name,addr,flags,master_id,ping_sent,ping_recv,config_epoch,link_status = split[0..6]
  slots = split[8..-1]
  info = {
  :name => name,
  :addr => addr,
  :flags => flags.split(","),
  :replicate => master_id,
  :ping_sent => ping_sent.to_i,
  :ping_recv => ping_recv.to_i,
  :link_status => link_status
  }
  info[:replicate] = false if master_id == "-"
 
  if info[:flags].index("myself")
  @info = @info.merge(info)
  @info[:slots] = {}
  slots.each{|s|
   if s[0..0] == '['
   if s.index("->-") # migrating
    slot,dst = s[1..-1].split("->-")
    @info[:migrating][slot.to_i] = dst
   elsif s.index("-<-") # importing
    slot,src = s[1..-1].split("-<-")
    @info[:importing][slot.to_i] = src
   end
   elsif s.index("-")
   start,stop = s.split("-")
   self.add_slots((start.to_i)..(stop.to_i))
   else
   self.add_slots((s.to_i)..(s.to_i))
   end
  } if slots
  @dirty = false
  @r.cluster("info").split("\n").each{|e|
   k,v=e.split(":")
   k = k.to_sym
   v.chop!
   if k != :cluster_state
   @info[k] = v.to_i
   else
   @info[k] = v
   end
  }
  elsif o[:getfriends]
  @friends << info
  end
 }
 end
 
 def add_slots(slots)
 slots.each{|s|
  @info[:slots][s] = :new
 }
 @dirty = true
 end
 
 def set_as_replica(node_id)
 @info[:replicate] = node_id
 @dirty = true
 end
 
 def flush_node_config
 return if !@dirty
 if @info[:replicate]
  begin
  @r.cluster("replicate",@info[:replicate])
  rescue
  # if the cluster did not already joined it is possible that
  # the slave does not know the master node yet. so on errors
  # we return asap leaving the dirty flag set, to flush the
  # config later.
  return
  end
 else
  new = []
  @info[:slots].each{|s,val|
  if val == :new
   new << s
   @info[:slots][s] = true
  end
  }
  @r.cluster("addslots",*new)
 end
 @dirty = false
 end
 
 def info_string
 # we want to display the hash slots assigned to this node
 # as ranges, like in: "1-5,8-9,20-25,30"
 #
 # note: this could be easily written without side effects,
 # we use 'slots' just to split the computation into steps.
 
 # first step: we want an increasing array of integers
 # for instance: [1,2,3,4,5,8,9,20,21,22,23,24,25,30]
 slots = @info[:slots].keys.sort
 
 # as we want to aggregate adjacent slots we convert all the
 # slot integers into ranges (with just one element)
 # so we have something like [1..1,2..2, ... and so forth.
 slots.map!{|x| x..x}
 
 # finally we group ranges with adjacent elements.
 slots = slots.reduce([]) {|a,b|
  if !a.empty? && b.first == (a[-1].last)+1
  a[0..-2] + [(a[-1].first)..(b.last)]
  else
  a + [b]
  end
 }
 
 # now our task is easy, we just convert ranges with just one
 # element into a number, and a real range into a start-end format.
 # finally we join the array using the comma as separator.
 slots = slots.map{|x|
  x.count == 1 ? x.first.to_s : "#{x.first}-#{x.last}"
 }.join(",")
 
 role = self.has_flag?("master") ? "m" : "s"
 
 if self.info[:replicate] and @dirty
  is = "s: #{self.info[:name]} #{self.to_s}"
 else
  is = "#{role}: #{self.info[:name]} #{self.to_s}\n"+
  " slots:#{slots} (#{self.slots.length} slots) "+
  "#{(self.info[:flags]-["myself"]).join(",")}"
 end
 if self.info[:replicate]
  is += "\n replicates #{info[:replicate]}"
 elsif self.has_flag?("master") && self.info[:replicas]
  is += "\n #{info[:replicas].length} additional replica(s)"
 end
 is
 end
 
 # return a single string representing nodes and associated slots.
 # todo: remove slaves from config when slaves will be handled
 # by redis cluster.
 def get_config_signature
 config = []
 @r.cluster("nodes").each_line{|l|
  s = l.split
  slots = s[8..-1].select {|x| x[0..0] != "["}
  next if slots.length == 0
  config << s[0]+":"+(slots.sort.join(","))
 }
 config.sort.join("|")
 end
 
 def info
 @info
 end
 
 def is_dirty?
 @dirty
 end
 
 def r
 @r
 end
end
 
class RedisTrib
 def initialize
 @nodes = []
 @fix = false
 @errors = []
 @timeout = MigrateDefaultTimeout
 end
 
 def check_arity(req_args, num_args)
 if ((req_args > 0 and num_args != req_args) ||
  (req_args < 0 and num_args < req_args.abs))
  xputs "[err] wrong number of arguments for specified sub command"
  exit 1
 end
 end
 
 def add_node(node)
 @nodes << node
 end
 
 def reset_nodes
 @nodes = []
 end
 
 def cluster_error(msg)
 @errors << msg
 xputs msg
 end
 
 # return the node with the specified id or nil.
 def get_node_by_name(name)
 @nodes.each{|n|
  return n if n.info[:name] == name.downcase
 }
 return nil
 end
 
 # like get_node_by_name but the specified name can be just the first
 # part of the node id as long as the prefix in unique across the
 # cluster.
 def get_node_by_abbreviated_name(name)
 l = name.length
 candidates = []
 @nodes.each{|n|
  if n.info[:name][0...l] == name.downcase
  candidates << n
  end
 }
 return nil if candidates.length != 1
 candidates[0]
 end
 
 # this function returns the master that has the least number of replicas
 # in the cluster. if there are multiple masters with the same smaller
 # number of replicas, one at random is returned.
 def get_master_with_least_replicas
 masters = @nodes.select{|n| n.has_flag? "master"}
 sorted = masters.sort{|a,b|
  a.info[:replicas].length <=> b.info[:replicas].length
 }
 sorted[0]
 end
 
 def check_cluster(opt={})
 xputs ">>> performing cluster check (using node #{@nodes[0]})"
 show_nodes if !opt[:quiet]
 check_config_consistency
 check_open_slots
 check_slots_coverage
 end
 
 def show_cluster_info
 masters = 0
 keys = 0
 @nodes.each{|n|
  if n.has_flag?("master")
  puts "#{n} (#{n.info[:name][0...8]}...) -> #{n.r.dbsize} keys | #{n.slots.length} slots | "+
   "#{n.info[:replicas].length} slaves."
  masters += 1
  keys += n.r.dbsize
  end
 }
 xputs "[ok] #{keys} keys in #{masters} masters."
 keys_per_slot = sprintf("%.2f",keys/16384.0)
 puts "#{keys_per_slot} keys per slot on average."
 end
 
 # merge slots of every known node. if the resulting slots are equal
 # to clusterhashslots, then all slots are served.
 def covered_slots
 slots = {}
 @nodes.each{|n|
  slots = slots.merge(n.slots)
 }
 slots
 end
 
 def check_slots_coverage
 xputs ">>> check slots coverage..."
 slots = covered_slots
 if slots.length == ClusterHashSlots
  xputs "[ok] all #{ClusterHashSlots} slots covered."
 else
  cluster_error \
  "[err] not all #{clusterhashslots} slots are covered by nodes."
  fix_slots_coverage if @fix
 end
 end
 
 def check_open_slots
 xputs ">>> check for open slots..."
 open_slots = []
 @nodes.each{|n|
  if n.info[:migrating].size > 0
  cluster_error \
   "[warning] node #{n} has slots in migrating state (#{n.info[:migrating].keys.join(",")})."
  open_slots += n.info[:migrating].keys
  end
  if n.info[:importing].size > 0
  cluster_error \
   "[warning] node #{n} has slots in importing state (#{n.info[:importing].keys.join(",")})."
  open_slots += n.info[:importing].keys
  end
 }
 open_slots.uniq!
 if open_slots.length > 0
  xputs "[warning] the following slots are open: #{open_slots.join(",")}"
 end
 if @fix
  open_slots.each{|slot| fix_open_slot slot}
 end
 end
 
 def nodes_with_keys_in_slot(slot)
 nodes = []
 @nodes.each{|n|
  next if n.has_flag?("slave")
  nodes << n if n.r.cluster("getkeysinslot",slot,1).length > 0
 }
 nodes
 end
 
 def fix_slots_coverage
 not_covered = (0...ClusterHashSlots).to_a - covered_slots.keys
 xputs ">>> fixing slots coverage..."
 xputs "list of not covered slots: " + not_covered.join(",")
 
 # for every slot, take action depending on the actual condition:
 # 1) no node has keys for this slot.
 # 2) a single node has keys for this slot.
 # 3) multiple nodes have keys for this slot.
 slots = {}
 not_covered.each{|slot|
  nodes = nodes_with_keys_in_slot(slot)
  slots[slot] = nodes
  xputs "slot #{slot} has keys in #{nodes.length} nodes: #{nodes.join(", ")}"
 }
 
 none = slots.select {|k,v| v.length == 0}
 single = slots.select {|k,v| v.length == 1}
 multi = slots.select {|k,v| v.length > 1}
 
 # handle case "1": keys in no node.
 if none.length > 0
  xputs "the folowing uncovered slots have no keys across the cluster:"
  xputs none.keys.join(",")
  yes_or_die "fix these slots by covering with a random node?"
  none.each{|slot,nodes|
  node = @nodes.sample
  xputs ">>> covering slot #{slot} with #{node}"
  node.r.cluster("addslots",slot)
  }
 end
 
 # handle case "2": keys only in one node.
 if single.length > 0
  xputs "the folowing uncovered slots have keys in just one node:"
  puts single.keys.join(",")
  yes_or_die "fix these slots by covering with those nodes?"
  single.each{|slot,nodes|
  xputs ">>> covering slot #{slot} with #{nodes[0]}"
  nodes[0].r.cluster("addslots",slot)
  }
 end
 
 # handle case "3": keys in multiple nodes.
 if multi.length > 0
  xputs "the folowing uncovered slots have keys in multiple nodes:"
  xputs multi.keys.join(",")
  yes_or_die "fix these slots by moving keys into a single node?"
  multi.each{|slot,nodes|
  target = get_node_with_most_keys_in_slot(nodes,slot)
  xputs ">>> covering slot #{slot} moving keys to #{target}"
 
  target.r.cluster('addslots',slot)
  target.r.cluster('setslot',slot,'stable')
  nodes.each{|src|
   next if src == target
   # set the source node in 'importing' state (even if we will
   # actually migrate keys away) in order to avoid receiving
   # redirections for migrate.
   src.r.cluster('setslot',slot,'importing',target.info[:name])
   move_slot(src,target,slot,:dots=>true,:fix=>true,:cold=>true)
   src.r.cluster('setslot',slot,'stable')
  }
  }
 end
 end
 
 # return the owner of the specified slot
 def get_slot_owners(slot)
 owners = []
 @nodes.each{|n|
  next if n.has_flag?("slave")
  n.slots.each{|s,_|
  owners << n if s == slot
  }
 }
 owners
 end
 
 # return the node, among 'nodes' with the greatest number of keys
 # in the specified slot.
 def get_node_with_most_keys_in_slot(nodes,slot)
 best = nil
 best_numkeys = 0
 @nodes.each{|n|
  next if n.has_flag?("slave")
  numkeys = n.r.cluster("countkeysinslot",slot)
  if numkeys > best_numkeys || best == nil
  best = n
  best_numkeys = numkeys
  end
 }
 return best
 end
 
 # slot 'slot' was found to be in importing or migrating state in one or
 # more nodes. this function fixes this condition by migrating keys where
 # it seems more sensible.
 def fix_open_slot(slot)
 puts ">>> fixing open slot #{slot}"
 
 # try to obtain the current slot owner, according to the current
 # nodes configuration.
 owners = get_slot_owners(slot)
 owner = owners[0] if owners.length == 1
 
 migrating = []
 importing = []
 @nodes.each{|n|
  next if n.has_flag? "slave"
  if n.info[:migrating][slot]
  migrating << n
  elsif n.info[:importing][slot]
  importing << n
  elsif n.r.cluster("countkeysinslot",slot) > 0 && n != owner
  xputs "*** found keys about slot #{slot} in node #{n}!"
  importing << n
  end
 }
 puts "set as migrating in: #{migrating.join(",")}"
 puts "set as importing in: #{importing.join(",")}"
 
 # if there is no slot owner, set as owner the slot with the biggest
 # number of keys, among the set of migrating / importing nodes.
 if !owner
  xputs ">>> nobody claims ownership, selecting an owner..."
  owner = get_node_with_most_keys_in_slot(@nodes,slot)
 
  # if we still don't have an owner, we can't fix it.
  if !owner
  xputs "[err] can't select a slot owner. impossible to fix."
  exit 1
  end
 
  # use addslots to assign the slot.
  puts "*** configuring #{owner} as the slot owner"
  owner.r.cluster("setslot",slot,"stable")
  owner.r.cluster("addslots",slot)
  # make sure this information will propagate. not strictly needed
  # since there is no past owner, so all the other nodes will accept
  # whatever epoch this node will claim the slot with.
  owner.r.cluster("bumpepoch")
 
  # remove the owner from the list of migrating/importing
  # nodes.
  migrating.delete(owner)
  importing.delete(owner)
 end
 
 # if there are multiple owners of the slot, we need to fix it
 # so that a single node is the owner and all the other nodes
 # are in importing state. later the fix can be handled by one
 # of the base cases above.
 #
 # note that this case also covers multiple nodes having the slot
 # in migrating state, since migrating is a valid state only for
 # slot owners.
 if owners.length > 1
  owner = get_node_with_most_keys_in_slot(owners,slot)
  owners.each{|n|
  next if n == owner
  n.r.cluster('delslots',slot)
  n.r.cluster('setslot',slot,'importing',owner.info[:name])
  importing.delete(n) # avoid duplicates
  importing << n
  }
  owner.r.cluster('bumpepoch')
 end
 
 # case 1: the slot is in migrating state in one slot, and in
 #  importing state in 1 slot. that's trivial to address.
 if migrating.length == 1 && importing.length == 1
  move_slot(migrating[0],importing[0],slot,:dots=>true,:fix=>true)
 # case 2: there are multiple nodes that claim the slot as importing,
 # they probably got keys about the slot after a restart so opened
 # the slot. in this case we just move all the keys to the owner
 # according to the configuration.
 elsif migrating.length == 0 && importing.length > 0
  xputs ">>> moving all the #{slot} slot keys to its owner #{owner}"
  importing.each {|node|
  next if node == owner
  move_slot(node,owner,slot,:dots=>true,:fix=>true,:cold=>true)
  xputs ">>> setting #{slot} as stable in #{node}"
  node.r.cluster("setslot",slot,"stable")
  }
 # case 3: there are no slots claiming to be in importing state, but
 # there is a migrating node that actually don't have any key. we
 # can just close the slot, probably a reshard interrupted in the middle.
 elsif importing.length == 0 && migrating.length == 1 &&
  migrating[0].r.cluster("getkeysinslot",slot,10).length == 0
  migrating[0].r.cluster("setslot",slot,"stable")
 else
  xputs "[err] sorry, redis-trib can't fix this slot yet (work in progress). slot is set as migrating in #{migrating.join(",")}, as importing in #{importing.join(",")}, owner is #{owner}"
 end
 end
 
 # check if all the nodes agree about the cluster configuration
 def check_config_consistency
 if !is_config_consistent?
  cluster_error "[err] nodes don't agree about configuration!"
 else
  xputs "[ok] all nodes agree about slots configuration."
 end
 end
 
 def is_config_consistent?
 signatures=[]
 @nodes.each{|n|
  signatures << n.get_config_signature
 }
 return signatures.uniq.length == 1
 end
 
 def wait_cluster_join
 print "waiting for the cluster to join"
 while !is_config_consistent?
  print "."
  STDOUT.flush
  sleep 1
 end
 print "\n"
 end
 
 def alloc_slots
 nodes_count = @nodes.length
 masters_count = @nodes.length / (@replicas+1)
 masters = []
 
 # the first step is to split instances by ip. this is useful as
 # we'll try to allocate master nodes in different physical machines
 # (as much as possible) and to allocate slaves of a given master in
 # different physical machines as well.
 #
 # this code assumes just that if the ip is different, than it is more
 # likely that the instance is running in a different physical host
 # or at least a different virtual machine.
 ips = {}
 @nodes.each{|n|
  ips[n.info[:host]] = [] if !ips[n.info[:host]]
  ips[n.info[:host]] << n
 }
 
 # select master instances
 puts "using #{masters_count} masters:"
 interleaved = []
 stop = false
 while not stop do
  # take one node from each ip until we run out of nodes
  # across every ip.
  ips.each do |ip,nodes|
  if nodes.empty?
   # if this ip has no remaining nodes, check for termination
   if interleaved.length == nodes_count
   # stop when 'interleaved' has accumulated all nodes
   stop = true
   next
   end
  else
   # else, move one node from this ip to 'interleaved'
   interleaved.push nodes.shift
  end
  end
 end
 
 masters = interleaved.slice!(0, masters_count)
 nodes_count -= masters.length
 
 masters.each{|m| puts m}
 
 # alloc slots on masters
 slots_per_node = ClusterHashSlots.to_f / masters_count
 first = 0
 cursor = 0.0
 masters.each_with_index{|n,masternum|
  last = (cursor+slots_per_node-1).round
  if last > ClusterHashSlots || masternum == masters.length-1
  last = ClusterHashSlots-1
  end
  last = first if last < first # min step is 1.
  n.add_slots first..last
  first = last+1
  cursor += slots_per_node
 }
 
 # select n replicas for every master.
 # we try to split the replicas among all the ips with spare nodes
 # trying to avoid the host where the master is running, if possible.
 #
 # note we loop two times. the first loop assigns the requested
 # number of replicas to each master. the second loop assigns any
 # remaining instances as extra replicas to masters. some masters
 # may end up with more than their requested number of replicas, but
 # all nodes will be used.
 assignment_verbose = false
 
 [:requested,:unused].each do |assign|
  masters.each do |m|
  assigned_replicas = 0
  while assigned_replicas < @replicas
   break if nodes_count == 0
   if assignment_verbose
   if assign == :requested
    puts "requesting total of #{@replicas} replicas " \
     "(#{assigned_replicas} replicas assigned " \
     "so far with #{nodes_count} total remaining)."
   elsif assign == :unused
    puts "assigning extra instance to replication " \
     "role too (#{nodes_count} remaining)."
   end
   end
 
   # return the first node not matching our current master
   node = interleaved.find{|n| n.info[:host] != m.info[:host]}
 
   # if we found a node, use it as a best-first match.
   # otherwise, we didn't find a node on a different ip, so we
   # go ahead and use a same-ip replica.
   if node
   slave = node
   interleaved.delete node
   else
   slave = interleaved.shift
   end
   slave.set_as_replica(m.info[:name])
   nodes_count -= 1
   assigned_replicas += 1
   puts "adding replica #{slave} to #{m}"
 
   # if we are in the "assign extra nodes" loop,
   # we want to assign one extra replica to each
   # master before repeating masters.
   # this break lets us assign extra replicas to masters
   # in a round-robin way.
   break if assign == :unused
  end
  end
 end
 end
 
 def flush_nodes_config
 @nodes.each{|n|
  n.flush_node_config
 }
 end
 
 def show_nodes
 @nodes.each{|n|
  xputs n.info_string
 }
 end
 
 # redis cluster config epoch collision resolution code is able to eventually
 # set a different epoch to each node after a new cluster is created, but
 # it is slow compared to assign a progressive config epoch to each node
 # before joining the cluster. however we do just a best-effort try here
 # since if we fail is not a problem.
 def assign_config_epoch
 config_epoch = 1
 @nodes.each{|n|
  begin
  n.r.cluster("set-config-epoch",config_epoch)
  rescue
  end
  config_epoch += 1
 }
 end
 
 def join_cluster
 # we use a brute force approach to make sure the node will meet
 # each other, that is, sending cluster meet messages to all the nodes
 # about the very same node.
 # thanks to gossip this information should propagate across all the
 # cluster in a matter of seconds.
 first = false
 @nodes.each{|n|
  if !first then first = n.info; next; end # skip the first node
  n.r.cluster("meet",first[:host],first[:port])
 }
 end
 
 def yes_or_die(msg)
 print "#{msg} (type 'yes' to accept): "
 STDOUT.flush
 if !(STDIN.gets.chomp.downcase == "yes")
  xputs "*** aborting..."
  exit 1
 end
 end
 
 def load_cluster_info_from_node(nodeaddr)
 node = ClusterNode.new(nodeaddr)
 node.connect(:abort => true)
 node.assert_cluster
 node.load_info(:getfriends => true)
 add_node(node)
 node.friends.each{|f|
  next if f[:flags].index("noaddr") ||
   f[:flags].index("disconnected") ||
   f[:flags].index("fail")
  fnode = ClusterNode.new(f[:addr])
  fnode.connect()
  next if !fnode.r
  begin
  fnode.load_info()
  add_node(fnode)
  rescue => e
  xputs "[err] unable to load info for node #{fnode}"
  end
 }
 populate_nodes_replicas_info
 end
 
 # this function is called by load_cluster_info_from_node in order to
 # add additional information to every node as a list of replicas.
 def populate_nodes_replicas_info
 # start adding the new field to every node.
 @nodes.each{|n|
  n.info[:replicas] = []
 }
 
 # populate the replicas field using the replicate field of slave
 # nodes.
 @nodes.each{|n|
  if n.info[:replicate]
  master = get_node_by_name(n.info[:replicate])
  if !master
   xputs "*** warning: #{n} claims to be slave of unknown node id #{n.info[:replicate]}."
  else
   master.info[:replicas] << n
  end
  end
 }
 end
 
 # given a list of source nodes return a "resharding plan"
 # with what slots to move in order to move "numslots" slots to another
 # instance.
 def compute_reshard_table(sources,numslots)
 moved = []
 # sort from bigger to smaller instance, for two reasons:
 # 1) if we take less slots than instances it is better to start
 # getting from the biggest instances.
 # 2) we take one slot more from the first instance in the case of not
 # perfect divisibility. like we have 3 nodes and need to get 10
 # slots, we take 4 from the first, and 3 from the rest. so the
 # biggest is always the first.
 sources = sources.sort{|a,b| b.slots.length <=> a.slots.length}
 source_tot_slots = sources.inject(0) {|sum,source|
  sum+source.slots.length
 }
 sources.each_with_index{|s,i|
  # every node will provide a number of slots proportional to the
  # slots it has assigned.
  n = (numslots.to_f/source_tot_slots*s.slots.length)
  if i == 0
  n = n.ceil
  else
  n = n.floor
  end
  s.slots.keys.sort[(0...n)].each{|slot|
  if moved.length < numslots
   moved << {:source => s, :slot => slot}
  end
  }
 }
 return moved
 end
 
 def show_reshard_table(table)
 table.each{|e|
  puts " moving slot #{e[:slot]} from #{e[:source].info[:name]}"
 }
 end
 
 # move slots between source and target nodes using migrate.
 #
 # options:
 # :verbose -- print a dot for every moved key.
 # :fix -- we are moving in the context of a fix. use replace.
 # :cold -- move keys without opening slots / reconfiguring the nodes.
 # :update -- update nodes.info[:slots] for source/target nodes.
 # :quiet -- don't print info messages.
 def move_slot(source,target,slot,o={})
 o = {:pipeline => MigrateDefaultPipeline}.merge(o)
 
 # we start marking the slot as importing in the destination node,
 # and the slot as migrating in the target host. note that the order of
 # the operations is important, as otherwise a client may be redirected
 # to the target node that does not yet know it is importing this slot.
 if !o[:quiet]
  print "moving slot #{slot} from #{source} to #{target}: "
  STDOUT.flush
 end
 
 if !o[:cold]
  target.r.cluster("setslot",slot,"importing",source.info[:name])
  source.r.cluster("setslot",slot,"migrating",target.info[:name])
 end
 # migrate all the keys from source to target using the migrate command
 while true
  keys = source.r.cluster("getkeysinslot",slot,o[:pipeline])
  break if keys.length == 0
  begin
  source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:keys,*keys])
  rescue => e
  if o[:fix] && e.to_s =~ /busykey/
   xputs "*** target key exists. replacing it for fix."
   source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])
  else
   puts ""
   xputs "[err] calling migrate: #{e}"
   exit 1
  end
  end
  print "."*keys.length if o[:dots]
  STDOUT.flush
 end
 
 puts if !o[:quiet]
 # set the new node as the owner of the slot in all the known nodes.
 if !o[:cold]
  @nodes.each{|n|
  next if n.has_flag?("slave")
  n.r.cluster("setslot",slot,"node",target.info[:name])
  }
 end
 
 # update the node logical config
 if o[:update] then
  source.info[:slots].delete(slot)
  target.info[:slots][slot] = true
 end
 end
 
 # redis-trib subcommands implementations.
 
 def check_cluster_cmd(argv,opt)
 load_cluster_info_from_node(argv[0])
 check_cluster
 end
 
 def info_cluster_cmd(argv,opt)
 load_cluster_info_from_node(argv[0])
 show_cluster_info
 end
 
 def rebalance_cluster_cmd(argv,opt)
 opt = {
  'pipeline' => MigrateDefaultPipeline,
  'threshold' => RebalanceDefaultThreshold
 }.merge(opt)
 
 # load nodes info before parsing options, otherwise we can't
 # handle --weight.
 load_cluster_info_from_node(argv[0])
 
 # options parsing
 threshold = opt['threshold'].to_i
 autoweights = opt['auto-weights']
 weights = {}
 opt['weight'].each{|w|
  fields = w.split("=")
  node = get_node_by_abbreviated_name(fields[0])
  if !node || !node.has_flag?("master")
  puts "*** no such master node #{fields[0]}"
  exit 1
  end
  weights[node.info[:name]] = fields[1].to_f
 } if opt['weight']
 useempty = opt['use-empty-masters']
 
 # assign a weight to each node, and compute the total cluster weight.
 total_weight = 0
 nodes_involved = 0
 @nodes.each{|n|
  if n.has_flag?("master")
  next if !useempty && n.slots.length == 0
  n.info[:w] = weights[n.info[:name]] ? weights[n.info[:name]] : 1
  total_weight += n.info[:w]
  nodes_involved += 1
  end
 }
 
 # check cluster, only proceed if it looks sane.
 check_cluster(:quiet => true)
 if @errors.length != 0
  puts "*** please fix your cluster problems before rebalancing"
  exit 1
 end
 
 # calculate the slots balance for each node. it's the number of
 # slots the node should lose (if positive) or gain (if negative)
 # in order to be balanced.
 threshold = opt['threshold'].to_f
 threshold_reached = false
 @nodes.each{|n|
  if n.has_flag?("master")
  next if !n.info[:w]
  expected = ((ClusterHashSlots.to_f / total_weight) *
    n.info[:w]).to_i
  n.info[:balance] = n.slots.length - expected
  # compute the percentage of difference between the
  # expected number of slots and the real one, to see
  # if it's over the threshold specified by the user.
  over_threshold = false
  if threshold > 0
   if n.slots.length > 0
   err_perc = (100-(100.0*expected/n.slots.length)).abs
   over_threshold = true if err_perc > threshold
   elsif expected > 0
   over_threshold = true
   end
  end
  threshold_reached = true if over_threshold
  end
 }
 if !threshold_reached
  xputs "*** no rebalancing needed! all nodes are within the #{threshold}% threshold."
  return
 end
 
 # only consider nodes we want to change
 sn = @nodes.select{|n|
  n.has_flag?("master") && n.info[:w]
 }
 
 # because of rounding, it is possible that the balance of all nodes
 # summed does not give 0. make sure that nodes that have to provide
 # slots are always matched by nodes receiving slots.
 total_balance = sn.map{|x| x.info[:balance]}.reduce{|a,b| a+b}
 while total_balance > 0
  sn.each{|n|
  if n.info[:balance] < 0 && total_balance > 0
   n.info[:balance] -= 1
   total_balance -= 1
  end
  }
 end
 
 # sort nodes by their slots balance.
 sn = sn.sort{|a,b|
  a.info[:balance] <=> b.info[:balance]
 }
 
 xputs ">>> rebalancing across #{nodes_involved} nodes. total weight = #{total_weight}"
 
 if $verbose
  sn.each{|n|
  puts "#{n} balance is #{n.info[:balance]} slots"
  }
 end
 
 # now we have at the start of the 'sn' array nodes that should get
 # slots, at the end nodes that must give slots.
 # we take two indexes, one at the start, and one at the end,
 # incrementing or decrementing the indexes accordingly til we
 # find nodes that need to get/provide slots.
 dst_idx = 0
 src_idx = sn.length - 1
 
 while dst_idx < src_idx
  dst = sn[dst_idx]
  src = sn[src_idx]
  numslots = [dst.info[:balance],src.info[:balance]].map{|n|
  n.abs
  }.min
 
  if numslots > 0
  puts "moving #{numslots} slots from #{src} to #{dst}"
 
  # actually move the slots.
  reshard_table = compute_reshard_table([src],numslots)
  if reshard_table.length != numslots
   xputs "*** assertio failed: reshard table != number of slots"
   exit 1
  end
  if opt['simulate']
   print "#"*reshard_table.length
  else
   reshard_table.each{|e|
   move_slot(e[:source],dst,e[:slot],
    :quiet=>true,
    :dots=>false,
    :update=>true,
    :pipeline=>opt['pipeline'])
   print "#"
   STDOUT.flush
   }
  end
  puts
  end
 
  # update nodes balance.
  dst.info[:balance] += numslots
  src.info[:balance] -= numslots
  dst_idx += 1 if dst.info[:balance] == 0
  src_idx -= 1 if src.info[:balance] == 0
 end
 end
 
 def fix_cluster_cmd(argv,opt)
 @fix = true
 @timeout = opt['timeout'].to_i if opt['timeout']
 
 load_cluster_info_from_node(argv[0])
 check_cluster
 end
 
 def reshard_cluster_cmd(argv,opt)
 opt = {'pipeline' => MigrateDefaultPipeline}.merge(opt)
 
 load_cluster_info_from_node(argv[0])
 check_cluster
 if @errors.length != 0
  puts "*** please fix your cluster problems before resharding"
  exit 1
 end
 
 @timeout = opt['timeout'].to_i if opt['timeout'].to_i
 
 # get number of slots
 if opt['slots']
  numslots = opt['slots'].to_i
 else
  numslots = 0
  while numslots <= 0 or numslots > ClusterHashSlots
  print "how many slots do you want to move (from 1 to #{ClusterHashSlots})? "
  numslots = STDIN.gets.to_i
  end
 end
 
 # get the target instance
 if opt['to']
  target = get_node_by_name(opt['to'])
  if !target || target.has_flag?("slave")
  xputs "*** the specified node is not known or not a master, please retry."
  exit 1
  end
 else
  target = nil
  while not target
  print "what is the receiving node id? "
  target = get_node_by_name(STDIN.gets.chop)
  if !target || target.has_flag?("slave")
   xputs "*** the specified node is not known or not a master, please retry."
   target = nil
  end
  end
 end
 
 # get the source instances
 sources = []
 if opt['from']
  opt['from'].split(',').each{|node_id|
  if node_id == "all"
   sources = "all"
   break
  end
  src = get_node_by_name(node_id)
  if !src || src.has_flag?("slave")
   xputs "*** the specified node is not known or is not a master, please retry."
   exit 1
  end
  sources << src
  }
 else
  xputs "please enter all the source node ids."
  xputs " type 'all' to use all the nodes as source nodes for the hash slots."
  xputs " type 'done' once you entered all the source nodes ids."
  while true
  print "source node ##{sources.length+1}:"
  line = STDIN.gets.chop
  src = get_node_by_name(line)
  if line == "done"
   break
  elsif line == "all"
   sources = "all"
   break
  elsif !src || src.has_flag?("slave")
   xputs "*** the specified node is not known or is not a master, please retry."
  elsif src.info[:name] == target.info[:name]
   xputs "*** it is not possible to use the target node as source node."
  else
   sources << src
  end
  end
 end
 
 if sources.length == 0
  puts "*** no source nodes given, operation aborted"
  exit 1
 end
 
 # handle sources == all.
 if sources == "all"
  sources = []
  @nodes.each{|n|
  next if n.info[:name] == target.info[:name]
  next if n.has_flag?("slave")
  sources << n
  }
 end
 
 # check if the destination node is the same of any source nodes.
 if sources.index(target)
  xputs "*** target node is also listed among the source nodes!"
  exit 1
 end
 
 puts "\nready to move #{numslots} slots."
 puts " source nodes:"
 sources.each{|s| puts " "+s.info_string}
 puts " destination node:"
 puts " #{target.info_string}"
 reshard_table = compute_reshard_table(sources,numslots)
 puts " resharding plan:"
 show_reshard_table(reshard_table)
 if !opt['yes']
  print "do you want to proceed with the proposed reshard plan (yes/no)? "
  yesno = STDIN.gets.chop
  exit(1) if (yesno != "yes")
 end
 reshard_table.each{|e|
  move_slot(e[:source],target,e[:slot],
  :dots=>true,
  :pipeline=>opt['pipeline'])
 }
 end
 
 # this is an helper function for create_cluster_cmd that verifies if
 # the number of nodes and the specified replicas have a valid configuration
 # where there are at least three master nodes and enough replicas per node.
 def check_create_parameters
 masters = @nodes.length/(@replicas+1)
 if masters < 3
  puts "*** error: invalid configuration for cluster creation."
  puts "*** redis cluster requires at least 3 master nodes."
  puts "*** this is not possible with #{@nodes.length} nodes and #{@replicas} replicas per node."
  puts "*** at least #{3*(@replicas+1)} nodes are required."
  exit 1
 end
 end
 
 def create_cluster_cmd(argv,opt)
 opt = {'replicas' => 0}.merge(opt)
 @replicas = opt['replicas'].to_i
 
 xputs ">>> creating cluster"
 argv[0..-1].each{|n|
  node = ClusterNode.new(n)
  node.connect(:abort => true)
  node.assert_cluster
  node.load_info
  node.assert_empty
  add_node(node)
 }
 check_create_parameters
 xputs ">>> performing hash slots allocation on #{@nodes.length} nodes..."
 alloc_slots
 show_nodes
 yes_or_die "can i set the above configuration?"
 flush_nodes_config
 xputs ">>> nodes configuration updated"
 xputs ">>> assign a different config epoch to each node"
 assign_config_epoch
 xputs ">>> sending cluster meet messages to join the cluster"
 join_cluster
 # give one second for the join to start, in order to avoid that
 # wait_cluster_join will find all the nodes agree about the config as
 # they are still empty with unassigned slots.
 sleep 1
 wait_cluster_join
 flush_nodes_config # useful for the replicas
 check_cluster
 end
 
 def addnode_cluster_cmd(argv,opt)
 xputs ">>> adding node #{argv[0]} to cluster #{argv[1]}"
 
 # check the existing cluster
 load_cluster_info_from_node(argv[1])
 check_cluster
 
 # if --master-id was specified, try to resolve it now so that we
 # abort before starting with the node configuration.
 if opt['slave']
  if opt['master-id']
  master = get_node_by_name(opt['master-id'])
  if !master
   xputs "[err] no such master id #{opt['master-id']}"
  end
  else
  master = get_master_with_least_replicas
  xputs "automatically selected master #{master}"
  end
 end
 
 # add the new node
 new = ClusterNode.new(argv[0])
 new.connect(:abort => true)
 new.assert_cluster
 new.load_info
 new.assert_empty
 first = @nodes.first.info
 add_node(new)
 
 # send cluster meet command to the new node
 xputs ">>> send cluster meet to node #{new} to make it join the cluster."
 new.r.cluster("meet",first[:host],first[:port])
 
 # additional configuration is needed if the node is added as
 # a slave.
 if opt['slave']
  wait_cluster_join
  xputs ">>> configure node as replica of #{master}."
  new.r.cluster("replicate",master.info[:name])
 end
 xputs "[ok] new node added correctly."
 end
 
 def delnode_cluster_cmd(argv,opt)
 id = argv[1].downcase
 xputs ">>> removing node #{id} from cluster #{argv[0]}"
 
 # load cluster information
 load_cluster_info_from_node(argv[0])
 
 # check if the node exists and is not empty
 node = get_node_by_name(id)
 
 if !node
  xputs "[err] no such node id #{id}"
  exit 1
 end
 
 if node.slots.length != 0
  xputs "[err] node #{node} is not empty! reshard data away and try again."
  exit 1
 end
 
 # send cluster forget to all the nodes but the node to remove
 xputs ">>> sending cluster forget messages to the cluster..."
 @nodes.each{|n|
  next if n == node
  if n.info[:replicate] && n.info[:replicate].downcase == id
  # reconfigure the slave to replicate with some other node
  master = get_master_with_least_replicas
  xputs ">>> #{n} as replica of #{master}"
  n.r.cluster("replicate",master.info[:name])
  end
  n.r.cluster("forget",argv[1])
 }
 
 # finally shutdown the node
 xputs ">>> shutdown the node."
 node.r.shutdown
 end
 
 def set_timeout_cluster_cmd(argv,opt)
 timeout = argv[1].to_i
 if timeout < 100
  puts "setting a node timeout of less than 100 milliseconds is a bad idea."
  exit 1
 end
 
 # load cluster information
 load_cluster_info_from_node(argv[0])
 ok_count = 0
 err_count = 0
 
 # send cluster forget to all the nodes but the node to remove
 xputs ">>> reconfiguring node timeout in every cluster node..."
 @nodes.each{|n|
  begin
  n.r.config("set","cluster-node-timeout",timeout)
  n.r.config("rewrite")
  ok_count += 1
  xputs "*** new timeout set for #{n}"
  rescue => e
  puts "err setting node-timeot for #{n}: #{e}"
  err_count += 1
  end
 }
 xputs ">>> new node timeout set. #{ok_count} ok, #{err_count} err."
 end
 
 def call_cluster_cmd(argv,opt)
 cmd = argv[1..-1]
 cmd[0] = cmd[0].upcase
 
 # load cluster information
 load_cluster_info_from_node(argv[0])
 xputs ">>> calling #{cmd.join(" ")}"
 @nodes.each{|n|
  begin
  res = n.r.send(*cmd)
  puts "#{n}: #{res}"
  rescue => e
  puts "#{n}: #{e}"
  end
 }
 end
 
 def import_cluster_cmd(argv,opt)
 source_addr = opt['from']
 xputs ">>> importing data from #{source_addr} to cluster #{argv[1]}"
 use_copy = opt['copy']
 use_replace = opt['replace']
  
 # check the existing cluster.
 load_cluster_info_from_node(argv[0])
 check_cluster
 
 # connect to the source node.
 xputs ">>> connecting to the source redis instance"
 src_host,src_port = source_addr.split(":")
 source = Redis.new(:host =>src_host, :port =>src_port)
 if source.info['cluster_enabled'].to_i == 1
  xputs "[err] the source node should not be a cluster node."
 end
 xputs "*** importing #{source.dbsize} keys from db 0"
 
 # build a slot -> node map
 slots = {}
 @nodes.each{|n|
  n.slots.each{|s,_|
  slots[s] = n
  }
 }
 
 # use scan to iterate over the keys, migrating to the
 # right node as needed.
 cursor = nil
 while cursor != 0
  cursor,keys = source.scan(cursor, :count => 1000)
  cursor = cursor.to_i
  keys.each{|k|
  # migrate keys using the migrate command.
  slot = key_to_slot(k)
  target = slots[slot]
  print "migrating #{k} to #{target}: "
  STDOUT.flush
  begin
   cmd = ["migrate",target.info[:host],target.info[:port],k,0,@timeout]
   cmd << :copy if use_copy
   cmd << :replace if use_replace
   source.client.call(cmd)
  rescue => e
   puts e
  else
   puts "ok"
  end
  }
 end
 end
 
 def help_cluster_cmd(argv,opt)
 show_help
 exit 0
 end
 
 # parse the options for the specific command "cmd".
 # returns an hash populate with option => value pairs, and the index of
 # the first non-option argument in argv.
 def parse_options(cmd)
 idx = 1 ; # current index into argv
 options={}
 while idx < ARGV.length && ARGV[idx][0..1] == '--'
  if ARGV[idx][0..1] == "--"
  option = ARGV[idx][2..-1]
  idx += 1
 
  # --verbose is a global option
  if option == "verbose"
   $verbose = true
   next
  end
 
  if ALLOWED_OPTIONS[cmd] == nil || ALLOWED_OPTIONS[cmd][option] == nil
   puts "unknown option '#{option}' for command '#{cmd}'"
   exit 1
  end
  if ALLOWED_OPTIONS[cmd][option] != false
   value = ARGV[idx]
   idx += 1
  else
   value = true
  end
 
  # if the option is set to [], it's a multiple arguments
  # option. we just queue every new value into an array.
  if ALLOWED_OPTIONS[cmd][option] == []
   options[option] = [] if !options[option]
   options[option] << value
  else
   options[option] = value
  end
  else
  # remaining arguments are not options.
  break
  end
 end
 
 # enforce mandatory options
 if ALLOWED_OPTIONS[cmd]
  ALLOWED_OPTIONS[cmd].each {|option,val|
  if !options[option] && val == :required
   puts "option '--#{option}' is required "+ \
    "for subcommand '#{cmd}'"
   exit 1
  end
  }
 end
 return options,idx
 end
end
 
#################################################################################
# libraries
#
# we try to don't depend on external libs since this is a critical part
# of redis cluster.
#################################################################################
 
# this is the crc16 algorithm used by redis cluster to hash keys.
# implementation according to ccitt standards.
#
# this is actually the xmodem crc 16 algorithm, using the
# following parameters:
#
# name   : "xmodem", also known as "zmodem", "crc-16/acorn"
# width   : 16 bit
# poly   : 1021 (that is actually x^16 + x^12 + x^5 + 1)
# initialization  : 0000
# reflect input byte  : false
# reflect output crc  : false
# xor constant to output crc : 0000
# output for "123456789" : 31c3
 
module RedisClusterCRC16
 def RedisClusterCRC16.crc16(bytes)
 crc = 0
 bytes.each_byte{|b|
  crc = ((crc<<8) & 0xffff) ^ XMODEMCRC16Lookup[((crc>>8)^b) & 0xff]
 }
 crc
 end
 
private
 XMODEMCRC16Lookup = [
 0x0000,0x1021,0x2042,0x3063,0x4084,0x50a5,0x60c6,0x70e7,
 0x8108,0x9129,0xa14a,0xb16b,0xc18c,0xd1ad,0xe1ce,0xf1ef,
 0x1231,0x0210,0x3273,0x2252,0x52b5,0x4294,0x72f7,0x62d6,
 0x9339,0x8318,0xb37b,0xa35a,0xd3bd,0xc39c,0xf3ff,0xe3de,
 0x2462,0x3443,0x0420,0x1401,0x64e6,0x74c7,0x44a4,0x5485,
 0xa56a,0xb54b,0x8528,0x9509,0xe5ee,0xf5cf,0xc5ac,0xd58d,
 0x3653,0x2672,0x1611,0x0630,0x76d7,0x66f6,0x5695,0x46b4,
 0xb75b,0xa77a,0x9719,0x8738,0xf7df,0xe7fe,0xd79d,0xc7bc,
 0x48c4,0x58e5,0x6886,0x78a7,0x0840,0x1861,0x2802,0x3823,
 0xc9cc,0xd9ed,0xe98e,0xf9af,0x8948,0x9969,0xa90a,0xb92b,
 0x5af5,0x4ad4,0x7ab7,0x6a96,0x1a71,0x0a50,0x3a33,0x2a12,
 0xdbfd,0xcbdc,0xfbbf,0xeb9e,0x9b79,0x8b58,0xbb3b,0xab1a,
 0x6ca6,0x7c87,0x4ce4,0x5cc5,0x2c22,0x3c03,0x0c60,0x1c41,
 0xedae,0xfd8f,0xcdec,0xddcd,0xad2a,0xbd0b,0x8d68,0x9d49,
 0x7e97,0x6eb6,0x5ed5,0x4ef4,0x3e13,0x2e32,0x1e51,0x0e70,
 0xff9f,0xefbe,0xdfdd,0xcffc,0xbf1b,0xaf3a,0x9f59,0x8f78,
 0x9188,0x81a9,0xb1ca,0xa1eb,0xd10c,0xc12d,0xf14e,0xe16f,
 0x1080,0x00a1,0x30c2,0x20e3,0x5004,0x4025,0x7046,0x6067,
 0x83b9,0x9398,0xa3fb,0xb3da,0xc33d,0xd31c,0xe37f,0xf35e,
 0x02b1,0x1290,0x22f3,0x32d2,0x4235,0x5214,0x6277,0x7256,
 0xb5ea,0xa5cb,0x95a8,0x8589,0xf56e,0xe54f,0xd52c,0xc50d,
 0x34e2,0x24c3,0x14a0,0x0481,0x7466,0x6447,0x5424,0x4405,
 0xa7db,0xb7fa,0x8799,0x97b8,0xe75f,0xf77e,0xc71d,0xd73c,
 0x26d3,0x36f2,0x0691,0x16b0,0x6657,0x7676,0x4615,0x5634,
 0xd94c,0xc96d,0xf90e,0xe92f,0x99c8,0x89e9,0xb98a,0xa9ab,
 0x5844,0x4865,0x7806,0x6827,0x18c0,0x08e1,0x3882,0x28a3,
 0xcb7d,0xdb5c,0xeb3f,0xfb1e,0x8bf9,0x9bd8,0xabbb,0xbb9a,
 0x4a75,0x5a54,0x6a37,0x7a16,0x0af1,0x1ad0,0x2ab3,0x3a92,
 0xfd2e,0xed0f,0xdd6c,0xcd4d,0xbdaa,0xad8b,0x9de8,0x8dc9,
 0x7c26,0x6c07,0x5c64,0x4c45,0x3ca2,0x2c83,0x1ce0,0x0cc1,
 0xef1f,0xff3e,0xcf5d,0xdf7c,0xaf9b,0xbfba,0x8fd9,0x9ff8,
 0x6e17,0x7e36,0x4e55,0x5e74,0x2e93,0x3eb2,0x0ed1,0x1ef0
 ]
end
 
# turn a key name into the corresponding redis cluster slot.
def key_to_slot(key)
 # only hash what is inside {...} if there is such a pattern in the key.
 # note that the specification requires the content that is between
 # the first { and the first } after the first {. if we found {} without
 # nothing in the middle, the whole key is hashed as usually.
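 # for example, "{user1000}.following" and "{user1000}.followers" hash only the
 # tag "user1000" between the braces and therefore always map to the same slot.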
 s = key.index "{"
 if s
 e = key.index "}",s+1
 if e && e != s+1
  key = key[s+1..e-1]
 end
 end
 RedisClusterCRC16.crc16(key) % 16384
end
 
#################################################################################
# definition of commands
#################################################################################
 
COMMANDS={
 "create" => ["create_cluster_cmd", -2, "host1:port1 ... hostn:portn"],
 "check" => ["check_cluster_cmd", 2, "host:port"],
 "info" => ["info_cluster_cmd", 2, "host:port"],
 "fix" => ["fix_cluster_cmd", 2, "host:port"],
 "reshard" => ["reshard_cluster_cmd", 2, "host:port"],
 "rebalance" => ["rebalance_cluster_cmd", -2, "host:port"],
 "add-node" => ["addnode_cluster_cmd", 3, "new_host:new_port existing_host:existing_port"],
 "del-node" => ["delnode_cluster_cmd", 3, "host:port node_id"],
 "set-timeout" => ["set_timeout_cluster_cmd", 3, "host:port milliseconds"],
 "call" => ["call_cluster_cmd", -3, "host:port command arg arg .. arg"],
 "import" => ["import_cluster_cmd", 2, "host:port"],
 "help" => ["help_cluster_cmd", 1, "(show this help)"]
}
 
ALLOWED_OPTIONS={
 "create" => {"replicas" => true},
 "add-node" => {"slave" => false, "master-id" => true},
 "import" => {"from" => :required, "copy" => false, "replace" => false},
 "reshard" => {"from" => true, "to" => true, "slots" => true, "yes" => false, "timeout" => true, "pipeline" => true},
 "rebalance" => {"weight" => [], "auto-weights" => false, "use-empty-masters" => false, "timeout" => true, "simulate" => false, "pipeline" => true, "threshold" => true},
 "fix" => {"timeout" => migratedefaulttimeout},
}
 
def show_help
 puts "usage: redis-trib <command> <options> <arguments ...>\n\n"
 COMMANDS.each{|k,v|
 o = ""
 puts " #{k.ljust(15)} #{v[2]}"
 if ALLOWED_OPTIONS[k]
  ALLOWED_OPTIONS[k].each{|optname,has_arg|
  puts "   --#{optname}" + (has_arg ? " <arg>" : "")
  }
 end
 }
 puts "\nfor check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.\n"
end
 
# sanity check
if ARGV.length == 0
 show_help
 exit 1
end
 
rt = RedisTrib.new
cmd_spec = COMMANDS[ARGV[0].downcase]
if !cmd_spec
 puts "unknown redis-trib subcommand '#{ARGV[0]}'"
 exit 1
end
 
# parse options
cmd_options,first_non_option = rt.parse_options(ARGV[0].downcase)
rt.check_arity(cmd_spec[1],ARGV.length-(first_non_option-1))
 
# dispatch
rt.send(cmd_spec[0],ARGV[first_non_option..-1],cmd_options)
