Redis Cluster Setup

2020/01/05 Redis


Script-based deployment

#!/bin/bash 
# Build, configure, and start a Redis cluster node
mkdir /usr/local/redis
cd /usr/local/redis
tar -zxvf /redis-5.0.4.tar.gz
cd redis-5.0.4/src
# Compile (MALLOC=libc is the usual fallback if the default jemalloc build fails)
make
make MALLOC=libc
# Open the Redis port and the cluster bus port in the firewall
firewall-cmd --add-port=16379/tcp --permanent
firewall-cmd --add-port=6379/tcp --permanent
mkdir /usr/local/redis/data
# Copy the redis.conf configuration file into the src directory
cp /usr/local/redis/redis-5.0.4/redis.conf /usr/local/redis/redis-5.0.4/src 
mkdir /usr/local/redis/log
# Set the log file path
sed -i 's/logfile ""/logfile \/usr\/local\/redis\/log\/redis.log/g' redis.conf     
# Run Redis as a daemon (in the background)
sed -i 's/daemonize no/daemonize yes/g' redis.conf
# Bind to all interfaces instead of 127.0.0.1
sed -i 's/bind 127.0.0.1/bind 0.0.0.0/g' redis.conf 
# Time a node may be unreachable before it is considered failed and failover starts
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' redis.conf
# Factor used to decide whether a replica's data is fresh enough to qualify for failover
sed -i 's/# cluster-replica-validity-factor 10/cluster-replica-validity-factor 10/g' redis.conf
# Minimum number of replicas a master must keep before one of its replicas may migrate to an orphaned master
sed -i 's/# cluster-migration-barrier 1/cluster-migration-barrier 1/g' redis.conf
# Set the data directory
sed -i 's/dir .\//dir \/usr\/local\/redis\/data/g' redis.conf
# Enable cluster mode
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' redis.conf
# Start the Redis server
/usr/local/redis/redis-5.0.4/src/redis-server /usr/local/redis/redis-5.0.4/src/redis.conf
# Create the cluster
/usr/local/redis/redis-5.0.4/src/redis-cli --cluster create 192.100.3.90:6379 192.100.3.91:6379 192.100.3.92:6379 --cluster-replicas 0
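
The script above prepares a single node; given the three addresses in the create command, it is presumably meant to be run on each of the three hosts (192.100.3.90/91/92), with the --cluster create step executed on only one of them. Once the cluster is up, a quick sanity check (a minimal sketch, assuming the same addresses and install path):

# Check slot coverage and node state
/usr/local/redis/redis-5.0.4/src/redis-cli --cluster check 192.100.3.90:6379
# Write and read a key in cluster mode (-c follows MOVED redirects)
/usr/local/redis/redis-5.0.4/src/redis-cli -c -h 192.100.3.90 -p 6379 set hello world
/usr/local/redis/redis-5.0.4/src/redis-cli -c -h 192.100.3.90 -p 6379 get hello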

Docker deployment

# Pull the image
docker pull redis:5.0.2
# Create the containers
docker create --name redis-node01 --net host -v /data/redis-data/node01:/data redis:5.0.2 --cluster-enabled yes --cluster-config-file nodes-node-01.conf --port 6379
docker create --name redis-node02 --net host -v /data/redis-data/node02:/data redis:5.0.2 --cluster-enabled yes --cluster-config-file nodes-node-02.conf --port 6380
docker create --name redis-node03 --net host -v /data/redis-data/node03:/data redis:5.0.2 --cluster-enabled yes --cluster-config-file nodes-node-03.conf --port 6381
# Start the containers
docker start redis-node01 redis-node02 redis-node03
# Enter the redis-node01 container
docker exec -it redis-node01 /bin/bash
# 172.16.55.185 is the host machine's IP address
redis-cli --cluster create 172.16.55.185:6379 172.16.55.185:6380 172.16.55.185:6381 --cluster-replicas 0
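
To confirm the three nodes joined correctly, a quick check from inside the container (a minimal sketch, reusing the host IP 172.16.55.185 from above):

# cluster_state should report ok
redis-cli -p 6379 cluster info
# -c makes redis-cli follow MOVED redirects between nodes
redis-cli -c -h 172.16.55.185 -p 6379 set hello world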

Redis 5 High-Availability Cluster

Overview

HA (High Availability) clustering is an effective way to guarantee business continuity. An HA cluster usually has two or more nodes, divided into active and standby nodes. The node currently serving the business is the active node, and its backup is the standby node. When the active node fails and the running service can no longer operate normally, the standby node detects this and immediately takes over, so the service continues with no interruption or only a brief one.

Redis is usually deployed in a master/replica configuration (in the setup discussed here the replica is mainly used as a backup, while the master serves reads and writes). The main options for making such a deployment highly available are:

  • keepalived: a virtual IP managed by keepalived provides a single access point for the master and replica; when the master fails, keepalived runs a script that promotes the replica to master (a sketch of such a script follows this list), and once the old master recovers it first resynchronizes and then automatically becomes master again. The advantage is that applications need no changes, because the virtual IP they access never changes; the drawbacks are the extra deployment complexity of keepalived and possible data loss in some scenarios
  • zookeeper: ZooKeeper monitors the master and replica instances and keeps track of the latest valid IP; applications fetch that IP from ZooKeeper and use it to access Redis. This approach requires writing a large amount of monitoring code
  • sentinel: Sentinel monitors the master and replica instances and performs failover automatically. Its drawback is that, because the master and replica have different addresses (IP & port), applications cannot know the new address after a failover. Jedis 2.2.2 therefore added Sentinel support: a Jedis instance obtained via redis.clients.jedis.JedisSentinelPool.getResource() is promptly updated to the new master address
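
For the keepalived option above, the failover script that keepalived invokes essentially tells the surviving replica to stop replicating and become a master; a minimal sketch (purely illustrative, the address is a placeholder):

#!/bin/bash
# Invoked by keepalived when this node takes over the virtual IP:
# promote the local replica to master
redis-cli -h 127.0.0.1 -p 6379 REPLICAOF NO ONE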


Note: Sentinel solves the HA problem for a master/replica deployment, while Cluster solves data sharding (and brings its own failover); the two address different problems and do not replace each other

Redis Sentinel

A Redis cluster can provide high availability and sharding across a group of Redis nodes. In such a group there is one master and several replica nodes; when the master fails, one of the replicas should be elected as the new master. However, Redis itself (and many of its clients) does not implement automatic failure detection and master/replica switchover, so an external monitoring solution is needed to perform automatic failover.

Redis Sentinel is the officially recommended high-availability solution. It is a monitoring and management tool for Redis deployments that provides node monitoring, notification, automatic failover, and configuration discovery for clients.
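
Client-side configuration discovery boils down to asking any Sentinel for the current master's address, for example (a minimal sketch, using the master name rmaster configured later in this post):

redis-cli -p 26379 SENTINEL get-master-addr-by-name rmaster
# Returns the current master's IP and port, e.g.
# 1) "192.168.141.220"
# 2) "6379"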


Installing a Redis Cluster with Docker

In October 2018 Redis released the stable 5.0 version with a number of new features. One of them is dropping the Ruby-based cluster tooling in favour of redis-cli, written in C, which greatly reduces the complexity of building a cluster.
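
The steps below use the zvelo/redis-trib image, but with Redis 5 the same cluster can also be created directly with redis-cli; a minimal equivalent (assuming the node names and addresses used later in this section):

docker exec -it renode1 redis-cli --cluster create 192.168.141.220:6379 192.168.141.220:6380 192.168.141.220:6381 --cluster-replicas 0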

Download the required images

  • Redis (tested with 5.x): docker pull redis
  • Redis Trib (used to create the cluster): docker pull zvelo/redis-trib

Create the configuration files

The official configuration file is available at http://download.redis.io/redis-stable/redis.conf . We deploy 3 nodes, so we need to create 3 configuration files at the following paths:

  • renode1:cluster/node1/redis.conf
  • renode2:cluster/node2/redis.conf
  • renode3:cluster/node3/redis.conf

The configuration file looks as follows (port numbers must not be reused across nodes):

bind 0.0.0.0
# Disable protected mode
protected-mode no
# Port for this node
port 6379
# Do not daemonize (Redis must run in the foreground inside Docker)
# daemonize yes
pidfile /var/run/redis_6379.pid
# Enable cluster mode
cluster-enabled yes
# Cluster state file, generated automatically on first start
cluster-config-file nodes_6379.conf
# Enable AOF persistence (left commented out here)
# appendonly yes
# IP announced to the other cluster nodes
cluster-announce-ip 192.168.x.x
# Port announced to the other cluster nodes
cluster-announce-port 6379
# Cluster bus port announced to the other cluster nodes
cluster-announce-bus-port 16379
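
Only the port-related lines differ between the three nodes. For example, node2's redis.conf would change these lines (an illustrative variant, assuming ports 6380/16380 as mapped in the Compose file below):

port 6380
pidfile /var/run/redis_6380.pid
cluster-config-file nodes_6380.conf
cluster-announce-port 6380
cluster-announce-bus-port 16380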

Create the resource configuration

The docker-compose.yml file looks as follows:

version: '3.1'
services:
  renode1:
    image: redis
    container_name: renode1
    restart: always
    ports:
      - 6379:6379
      - 16379:16379
    volumes:
      - ./cluster/node1:/usr/local/etc/redis
    command:
      redis-server /usr/local/etc/redis/redis.conf
  renode2:
    image: redis
    container_name: renode2
    restart: always
    ports:
      - 6380:6380
      - 16380:16380
    volumes:
      - ./cluster/node2:/usr/local/etc/redis
    command:
      redis-server /usr/local/etc/redis/redis.conf
  renode3:
    image: redis
    container_name: renode3
    restart: always
    ports:
      - 6381:6381
      - 16381:16381
    volumes:
      - ./cluster/node3:/usr/local/etc/redis
    command:
      redis-server /usr/local/etc/redis/redis.conf

Start the containers:

docker-compose up -d
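
Before creating the cluster, it is worth confirming that all three nodes are up (a quick check, not part of the original steps):

docker-compose ps
# Expected reply: PONG
docker exec renode1 redis-cli -p 6379 ping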

Create the cluster

docker run --rm -it zvelo/redis-trib create 192.168.141.220:6379 192.168.141.220:6380 192.168.141.220:6381
# Output:
>>> Creating cluster
>>> Performing hash slots allocation on 3 nodes...
Using 3 masters:
192.168.141.220:6379
192.168.141.220:6380
192.168.141.220:6381
M: 9250c85592b7bb8a19636b90e4cf22590bd3334f 192.168.141.220:6379
   slots:0-5460 (5461 slots) master
M: 7373de7b44ee50a1e4f653bfba1bb808138e7e3a 192.168.141.220:6380
   slots:5461-10922 (5462 slots) master
M: ec7e6e4cdd142249e3aa83541044932655d75b66 192.168.141.220:6381
   slots:10923-16383 (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.141.220:6379)
M: 9250c85592b7bb8a19636b90e4cf22590bd3334f 192.168.141.220:6379
   slots:0-5460 (5461 slots) master
M: 7373de7b44ee50a1e4f653bfba1bb808138e7e3a 192.168.141.220:6380
   slots:5461-10922 (5462 slots) master
M: ec7e6e4cdd142249e3aa83541044932655d75b66 192.168.141.220:6381
   slots:10923-16383 (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verify that the cluster works

  • Enter the container interactively
docker exec -it renode1 /bin/bash
  • Connect to Redis
redis-cli -p 6379
  • View cluster information
127.0.0.1:6379> cluster info
# Output:
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:1
cluster_stats_messages_ping_sent:860
cluster_stats_messages_pong_sent:870
cluster_stats_messages_sent:1730
cluster_stats_messages_ping_received:868
cluster_stats_messages_pong_received:860
cluster_stats_messages_meet_received:2
cluster_stats_messages_received:1730
  • View cluster nodes
127.0.0.1:6379> cluster nodes
# Output:
ec7e6e4cdd142249e3aa83541044932655d75b66 192.168.141.220:6381@16381 master - 0 1569665070102 3 connected 10923-16383
7373de7b44ee50a1e4f653bfba1bb808138e7e3a 192.168.141.220:6380@16380 master - 0 1569665069094 2 connected 5461-10922
9250c85592b7bb8a19636b90e4cf22590bd3334f 192.168.141.220:6379@16379 myself,master - 0 1569665069000 1 connected 0-5460
  • View cluster slots
127.0.0.1:6379> cluster slots
# Output:
1) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "192.168.141.220"
      2) (integer) 6381
      3) "ec7e6e4cdd142249e3aa83541044932655d75b66"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "192.168.141.220"
      2) (integer) 6380
      3) "7373de7b44ee50a1e4f653bfba1bb808138e7e3a"
3) 1) (integer) 0
   2) (integer) 5460
   3) 1) "192.168.141.220"
      2) (integer) 6379
      3) "9250c85592b7bb8a19636b90e4cf22590bd3334f"

Installing Redis Sentinel with Docker

Create the configuration files

We deploy 3 Sentinel nodes, so we need to create 3 configuration files at the following paths:

  • resentinel1:cluster/node1/sentinel.conf
  • resentinel2:cluster/node2/sentinel.conf
  • resentinel3:cluster/node3/sentinel.conf

The configuration file looks as follows (port numbers must not be reused across nodes):

bind 0.0.0.0
port 26379
dir /tmp
# Name of the monitored master: 192.168.141.220 is the Redis master's IP, 6379 its port, and 2 is the quorum (with 3 Sentinels, 2 votes are enough to consider the master down)
sentinel monitor rmaster 192.168.141.220 6379 2
sentinel down-after-milliseconds rmaster 30000
sentinel parallel-syncs rmaster 1
sentinel failover-timeout rmaster 180000
sentinel deny-scripts-reconfig yes
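
As with the Redis nodes, essentially only the port differs between the three Sentinel configs; for example resentinel2 (an illustrative variant, assuming port 26380):

port 26380
# the remaining lines are identical to the sample above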

Create the resource configuration

The docker-compose.yml file looks as follows:

version: '3.1'
services:
  resentinel1:
    image: redis
    container_name: resentinel1
    ports:
      - 26379:26379
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./cluster/node1/sentinel.conf:/usr/local/etc/redis/sentinel.conf
  resentinel2:
    image: redis
    container_name: resentinel2
    ports:
      - 26380:26380
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./cluster/node2/sentinel.conf:/usr/local/etc/redis/sentinel.conf
  resentinel3:
    image: redis
    container_name: resentinel3
    ports:
      - 26381:26381
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    volumes:
      - ./cluster/node3/sentinel.conf:/usr/local/etc/redis/sentinel.conf
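
Start the Sentinel containers the same way (this step is implied above but not shown):

docker-compose up -d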

Verify that Sentinel works

  • Enter the container interactively
docker exec -it resentinel1 /bin/bash
  • Connect to Redis Sentinel
redis-cli -p 26379
  • View the monitored master's information
127.0.0.1:26379> sentinel master rmaster
# Output:
 1) "name"
 2) "rmaster"
 3) "ip"
 4) "192.168.141.220"
 5) "port"
 6) "6379"
 7) "runid"
 8) "30055483aeb9d75f35c0046aaec03440731e3e88"
 9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "413"
19) "last-ping-reply"
20) "413"
21) "down-after-milliseconds"
22) "30000"
23) "info-refresh"
24) "2878"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "143537"
29) "config-epoch"
30) "0"
31) "num-slaves"
32) "0"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "180000"
39) "parallel-syncs"
40) "1"
