Redis 3.0.6 Cluster Installation
程序员文章站
2024-02-16 19:16:58
(A cluster needs at least three master nodes to work. Here we create six Redis nodes, three masters and three slaves, with the following IP/port mapping:)
127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
1. Install Redis
wget http://download.redis.io/releases/redis-3.0.6.tar.gz
tar -zxvf redis-3.0.6.tar.gz
mv redis-3.0.6 redis
mv redis /data/setup/
cd /data/setup/redis
make
make install
2. Install the Ruby environment (needed by redis-trib.rb)
yum install ruby
yum install rubygems
gem install redis
3. Create the cluster directories
cd /data/setup/redis/
mkdir cluster
cd cluster
mkdir 6379 6380 6381 6382 6383 6384
4. Edit the configuration file redis.conf
cp /data/setup/redis/redis.conf /data/setup/redis/cluster/
cd /data/setup/redis/cluster/
vi redis.conf
# Change the following settings in the configuration file
daemonize yes
port 6379
pidfile /data/setup/redis/cluster/6379/redis-6379.pid
dbfilename dump-6379.rdb
dir /data/setup/redis/cluster/6379/
logfile /data/setup/redis/cluster/6379/redis-6379.logs
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
# After editing redis.conf, copy it into each instance directory,
# naming each copy after its port so it matches the start commands in step 5
cp redis.conf /data/setup/redis/cluster/6379/redis-6379.conf
cp redis.conf /data/setup/redis/cluster/6380/redis-6380.conf
cp redis.conf /data/setup/redis/cluster/6381/redis-6381.conf
cp redis.conf /data/setup/redis/cluster/6382/redis-6382.conf
cp redis.conf /data/setup/redis/cluster/6383/redis-6383.conf
cp redis.conf /data/setup/redis/cluster/6384/redis-6384.conf
# Note: after copying, edit the file in each of the 6379/6380/6381/6382/6383/6384
# directories and change every port-specific setting (port, pidfile, dbfilename,
# dir, logfile) from 6379 to that directory's port
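The copy-then-edit step is easy to get wrong by hand: every port-specific line (port, pidfile, dbfilename, dir, logfile) embeds the port number, so one `sed` substitution per instance covers them all. A minimal sketch of scripting it, demonstrated here against a cut-down template in a temp directory so it can run anywhere; on the real host you would point it at /data/setup/redis/cluster/redis.conf instead:

```shell
#!/bin/bash
set -e

# Stand-in for the edited template from step 4 (real host: use
# /data/setup/redis/cluster/redis.conf).
tmp=$(mktemp -d)
cat > "$tmp/redis.conf" <<'EOF'
daemonize yes
port 6379
pidfile /data/setup/redis/cluster/6379/redis-6379.pid
dbfilename dump-6379.rdb
dir /data/setup/redis/cluster/6379/
logfile /data/setup/redis/cluster/6379/redis-6379.logs
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
EOF

# One config per instance: rewriting every 6379 fixes port, pidfile,
# dbfilename, dir and logfile in a single pass.
for port in 6379 6380 6381 6382 6383 6384; do
  mkdir -p "$tmp/$port"
  sed "s/6379/$port/g" "$tmp/redis.conf" > "$tmp/$port/redis-$port.conf"
done

grep '^port' "$tmp/6384/redis-6384.conf"   # -> port 6384
```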
5. Start each Redis instance
redis-server /data/setup/redis/cluster/6379/redis-6379.conf
redis-server /data/setup/redis/cluster/6380/redis-6380.conf
redis-server /data/setup/redis/cluster/6381/redis-6381.conf
redis-server /data/setup/redis/cluster/6382/redis-6382.conf
redis-server /data/setup/redis/cluster/6383/redis-6383.conf
redis-server /data/setup/redis/cluster/6384/redis-6384.conf
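The six start commands can be collapsed into a loop. A minimal sketch, assuming the base directory and file names used above; `start_all` is just an illustrative helper name, and the `REDIS_SERVER` override exists only so the loop can be dry-run without a Redis binary installed:

```shell
#!/bin/bash
# Start all six cluster instances from their per-port config files.
# Set REDIS_SERVER=echo for a dry run; the default is the real binary.
start_all() {
  local base=${1:-/data/setup/redis/cluster}
  local port
  for port in 6379 6380 6381 6382 6383 6384; do
    "${REDIS_SERVER:-redis-server}" "$base/$port/redis-$port.conf"
  done
}

# On the real host:
#   start_all
# then confirm each instance answers, e.g.: redis-cli -p 6379 ping
```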
6. Run the command to create the cluster
# --replicas specifies how many slaves each master in the Redis Cluster gets
[root@localhost cluster]# /data/setup/redis/src/redis-trib.rb create --replicas 1 172.16.5.240:6379 172.16.5.240:6380 172.16.5.240:6381 172.16.5.240:6382 172.16.5.240:6383 172.16.5.240:6384
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.5.240:6379
172.16.5.240:6380
172.16.5.240:6381
Adding replica 172.16.5.240:6382 to 172.16.5.240:6379
Adding replica 172.16.5.240:6383 to 172.16.5.240:6380
Adding replica 172.16.5.240:6384 to 172.16.5.240:6381
M: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots:0-5460 (5461 slots) master
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
S: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   replicates 89d58edb12b775a5be489690b1955990271af896
S: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
S: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 172.16.5.240:6379)
M: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots:0-5460 (5461 slots) master
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
M: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   slots: (0 slots) master
   replicates 89d58edb12b775a5be489690b1955990271af896
M: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   slots: (0 slots) master
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
M: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   slots: (0 slots) master
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
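The three slot ranges above partition Redis Cluster's fixed keyspace of 16384 hash slots: each key is assigned to slot CRC16(key) mod 16384, where CRC16 is the CRC-16/XMODEM variant (and for keys containing a {tag}, only the text between the braces is hashed). A minimal sketch of the mapping, as a bash function:

```shell
#!/bin/bash
# CRC-16/XMODEM (poly 0x1021, init 0) -- the variant Redis Cluster uses.
crc16() {
  local s=$1 crc=0 i c j
  for ((i = 0; i < ${#s}; i++)); do
    printf -v c '%d' "'${s:i:1}"          # ASCII code of the character
    crc=$(( crc ^ (c << 8) ))
    for ((j = 0; j < 8; j++)); do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo "$crc"
}

# Slot = CRC16(key) mod 16384 (16384 = 2^14, so a bitmask works).
keyslot() { echo $(( $(crc16 "$1") & 16383 )); }

keyslot "123456789"   # CRC-16/XMODEM check value 0x31C3 -> slot 12739
```

A slot in 0-5460 is served by the first master above, 5461-10922 by the second, 10923-16383 by the third.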
7. Check the cluster status
[root@localhost cluster]# /data/setup/redis/src/redis-trib.rb check 172.16.5.240:6379
>>> Performing Cluster Check (using node 172.16.5.240:6379)
S: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots: (0 slots) slave
   replicates 7d138fe67343c13be4b78fe6a969088b08d48cc0
M: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   slots: (0 slots) slave
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
S: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   slots: (0 slots) slave
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
8. Client operations
1. Connect with the client
[root@localhost cluster]# redis-cli -c -p 6379
2. View node status
127.0.0.1:6379> cluster nodes
7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382 master - 0 1452915979733 7 connected 0-5460
a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380 master - 0 1452915980741 2 connected 5461-10922
89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379 myself,slave 7d138fe67343c13be4b78fe6a969088b08d48cc0 0 0 1 connected
4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381 master - 0 1452915977717 3 connected 10923-16383
22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383 slave a795943a5cba83c8ec8cf81146c9e2e4233d2a97 0 1452915976709 5 connected
4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384 slave 4ad16d3551d88a11edef03711a0c451ef38d89f0 0 1452915978725 6 connected
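Each `cluster nodes` line is space-separated: node id, address, flags, master id (or `-`), ping-sent, pong-recv, config epoch, link state, and then the slot ranges served (masters only). Note that 6379 shows up here as a slave of 6382: a failover evidently happened between cluster creation and this check. A small sketch that filters this output down to masters and their slot ranges; on a live node you would pipe in `redis-cli -c -p 6379 cluster nodes` instead of the sample below, and `masters_with_slots` is just an illustrative helper name:

```shell
#!/bin/bash
# Print "address slot-range" for every master in CLUSTER NODES output.
masters_with_slots() {
  awk '$3 ~ /master/ { print $2, $9 }'
}

# Sample lines taken from the output above.
nodes='7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382 master - 0 1452915979733 7 connected 0-5460
a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380 master - 0 1452915980741 2 connected 5461-10922
89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379 myself,slave 7d138fe67343c13be4b78fe6a969088b08d48cc0 0 0 1 connected
4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381 master - 0 1452915977717 3 connected 10923-16383'

printf '%s\n' "$nodes" | masters_with_slots
```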
9. Start Redis at boot
cd /etc/init.d/
# Create redis-6379, redis-6380, redis-6381, redis-6382, redis-6383 and redis-6384 scripts, changing the port in each
vi redis-6379
# Add the following
#!/bin/sh
# chkconfig: 2345 10 90
# description: Start and Stop redis

PATH=/usr/local/bin:/sbin:/usr/bin:/bin
REDISPORT=6379                                           # adjust for your environment
EXEC=/usr/local/bin/redis-server                         # adjust for your environment
REDIS_CLI=/usr/local/bin/redis-cli                       # adjust for your environment
PIDFILE=/data/setup/redis/cluster/6379/redis-6379.pid
CONF="/data/setup/redis/cluster/6379/redis-6379.conf"    # adjust for your environment

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed."
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        if [ "$?" = "0" ]
        then
            echo "Redis is running..."
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running."
        else
            PID=$(cat $PIDFILE)
            echo "Stopping..."
            $REDIS_CLI -p $REDISPORT SHUTDOWN
            # Wait until the pid file is removed on shutdown
            while [ -f $PIDFILE ]
            do
                echo "Waiting for Redis to shutdown..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    restart|force-reload)
        ${0} stop
        ${0} start
        ;;
    *)
        echo "Usage: /etc/init.d/redis-6379 {start|stop|restart|force-reload}" >&2
        exit 1
esac
# Register the service
chkconfig redis-6379 on
# List the registered services
chkconfig --list
10. Test with Jedis
package my.redis.demo;

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterTest {

    private static JedisCluster jc;

    static {
        Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>();
        jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6379));
        jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6380));
        jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6381));
        jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6382));
        jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6383));
        jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6384));
        // timeout 5000 ms, at most 1000 redirections
        jc = new JedisCluster(jedisClusterNodes, 5000, 1000);
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        System.out.println("##########################################");
        Thread t0 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    String key = "key:" + i;
                    jc.del(key);
                    System.out.println("delete:" + key);
                }
            }
        });
        t0.start();
        t0.join();

        System.out.println("##########################################");
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    String key = "key:" + i;
                    jc.set(key, key);
                    System.out.println("write:" + key);
                }
            }
        });
        t1.start();
        t1.join();

        System.out.println("##########################################");
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    String key = "key:" + i;
                    jc.get(key);
                    System.out.println("read:" + key);
                }
            }
        });
        t2.start();
    }
}