Redis Basics 01: Installation

Reference: Redis Quick Start Guide (Chinese)
Reference: Redis Tutorial

I. Starting and Connecting

Start a local Redis instance or use a free Redis Cloud account; either works.

```bash
docker run --name CommonTrain -p 6379:6379 -itd redis:7.2
```

Then download RedisInsight.

II. Supported Data Types

  • String
  • Hash
  • List
  • Set
  • Sorted set
  • Bitmap
  • HyperLogLog
Redis: Solutions to the Cache Breakdown Problem

I. Previously

Redis: Solutions to the Cache Avalanche Problem

II. Introduction

Cache breakdown happens when many concurrent requests target the same hot key: the moment that key's cache entry expires, all of that request pressure lands on the database.

III. Solutions

1. Double-checked locking

```java
@Override
public CoursePublish getCoursePublishCache(Long courseId) {
    String key = "content:course:publish:" + courseId;
    // Bloom filter: reject keys that cannot exist
    boolean contains = bloomFilter.contains(key);
    if (!contains) {
        return null;
    }
    // Check Redis first
    Object object = redisTemplate.opsForValue().get(key);
    if (object != null) {
        String string = object.toString();
        return JSON.parseObject(string, CoursePublish.class);
    } else {
        // Otherwise query the database; lock to prevent cache breakdown
        synchronized (this) {
            // Double check: another thread may have filled the cache while we waited
            object = redisTemplate.opsForValue().get(key);
            if (object != null) {
                String string = object.toString();
                return JSON.parseObject(string, CoursePublish.class);
            }
            CoursePublish coursePublish = getCoursePublish(courseId);
            if (coursePublish != null) {
                bloomFilter.add(key);
                redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
            } else {
                // Cache the null result with a short, randomized TTL
                int timeout = 10 + new Random().nextInt(20);
                redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish), timeout, TimeUnit.SECONDS);
            }
            return coursePublish;
        }
    }
}
```

2. Cache warm-up and scheduled tasks

Warm the cache by loading data into it ahead of time, then schedule sensible periodic tasks, tuned to the TTLs, that proactively refresh the cache so hot data effectively never expires.
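The warm-up-then-refresh loop above can be sketched in a few lines of plain Java. This is a hypothetical stand-alone illustration: ConcurrentHashMaps stand in for Redis and the database, and the class and key names are made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class HotKeyRefresher {
    private final Map<String, String> db = new ConcurrentHashMap<>();    // stand-in for the database
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    HotKeyRefresher() {
        db.put("content:course:publish:1", "v1"); // pretend database row
    }

    // Warm-up: copy hot data into the cache before traffic arrives
    void warmUp() {
        db.forEach(cache::put);
    }

    // Re-run the warm-up on a period shorter than the TTL, so hot keys never go cold
    void startPeriodicRefresh(long periodSeconds) {
        scheduler.scheduleAtFixedRate(this::warmUp, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    String get(String key) {
        return cache.get(key);
    }
}
```

In a real service the scheduled task would re-read the hot rows and `SET` them back into Redis before their TTLs elapse.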

Redis: Cache Penetration Solutions in Practice (Spring Boot 3 + Docker)

I. Introduction

Cache penetration is when a class of requests always slips past the cache and hits the database.

For example, if a request asks for data the database does not have, the cache can never hold it either, so the requests (possibly at high concurrency) keep going straight to the database and its load keeps growing.

II. Solutions

  1. If the keys follow some pattern, add a validation step and reject malformed keys immediately.
  2. Use a Redisson Bloom filter.
  3. Change the logic: when the database has no such record, cache a null value in Redis anyway, but with a short TTL.
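Before wiring in Redisson, the idea behind option 2 can be shown with a minimal stand-alone Bloom filter sketch. This is not Redisson's implementation, just an illustration of k-hash bit testing: no false negatives, and a false-positive rate tuned by the bit-array size and hash count.

```java
import java.util.BitSet;

// Minimal Bloom filter sketch (illustration only, NOT Redisson's RBloomFilter)
class TinyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    TinyBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th bit position from two base hash values (double hashing)
    private int position(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(position(key, i));
    }

    // May return a false positive, but never a false negative
    boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(position(key, i))) return false;
        }
        return true;
    }
}
```

A key the filter rejects is guaranteed absent, so the request can be turned away without touching Redis or the database.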

III. Deploying Redis with Docker

A docker-compose example follows; redis.conf can be downloaded from here.

```yml
redis:
  container_name: redis
  image: redis:7.2
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
  command: [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
```

IV. Spring Boot 3 Base Code

1. Dependencies and configuration

```xml
<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- redis connection pool -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.11.1</version>
</dependency>
<!-- redisson -->
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.24.3</version>
</dependency>
```

```yaml
spring:
  data:
    redis:
      host: 192.168.101.65 # Redis host name or IP address
      port: 6379 # Redis port
      password: # password for connecting to Redis
      database: 0 # index of the Redis database to use
      lettuce:
        pool:
          max-active: 20 # maximum number of active connections in the pool
          max-idle: 10 # maximum number of idle connections in the pool
          min-idle: 0 # minimum number of idle connections in the pool
      timeout: 10000 # connection timeout (ms)
      lock-watchdog-timeout: 100 # Redisson distributed-lock watchdog timeout (ms)
```

2. Base code

The demo is simple: a request carrying a courseId calls the service method below, which queries the database.

```java
@Override
public CoursePublish getCoursePublish(Long courseId) {
    return coursePublishMapper.selectById(courseId);
}
```

After a first pass of Redis caching, the code looks like this:

```java
@Override
public CoursePublish getCoursePublishCache(Long courseId) {
    String key = "content:course:publish:" + courseId;
    // Check Redis first
    Object object = redisTemplate.opsForValue().get(key);
    if (object != null) {
        String string = object.toString();
        return JSON.parseObject(string, CoursePublish.class);
    } else {
        // Otherwise query the database
        CoursePublish coursePublish = getCoursePublish(courseId);
        if (coursePublish != null) {
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        }
        return coursePublish;
    }
}
```

V. Cache Optimization Code

1. Key validation

The IDs here follow no pattern, so there is nothing to validate; skipped.
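For completeness, if the IDs did follow a pattern, the validation step could look like the following hypothetical sketch. It assumes, purely for illustration, that valid IDs are positive decimal integers of at most ten digits; the real IDs in this article have no such rule.

```java
import java.util.regex.Pattern;

// Hypothetical sketch: reject malformed course IDs before touching the cache.
class KeyValidator {
    // Assumed format (illustration only): positive integer, no leading zero, <= 10 digits
    private static final Pattern VALID_ID = Pattern.compile("^[1-9][0-9]{0,9}$");

    static boolean isValidCourseId(String rawId) {
        return rawId != null && VALID_ID.matcher(rawId).matches();
    }
}
```

Requests failing the check return immediately, so garbage keys never reach Redis or the database.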

2. Bloom filter

Read the YAML configuration:

```java
@Data
@Component
@ConfigurationProperties(prefix = "spring.data.redis")
public class RedisProperties {
    private String host;
    private int port;
    private String password;
    private int database;
    private int lockWatchdogTimeout;
}
```

Configure the RedissonClient:

```java
@Slf4j
@Configuration
public class RedissionConfig {

    @Autowired
    private RedisProperties redisProperties;

    @Bean
    public RedissonClient redissonClient() {
        Config config = new Config();
        // With the Redisson starter the address must begin with redis://; plain Redisson does not require it
        String url = "redis://" + redisProperties.getHost() + ":" + redisProperties.getPort();
        config.useSingleServer().setAddress(url)
                //.setPassword(redisProperties.getPassword())
                .setDatabase(redisProperties.getDatabase());
        try {
            return Redisson.create(config);
        } catch (Exception e) {
            log.error("RedissonClient init redis url:[{}], Exception:", url, e);
            return null;
        }
    }
}
```

Wire the Bloom filter into the service as follows:

```java
private RBloomFilter<String> bloomFilter;

@PostConstruct
public void init() {
    // Initialize the Bloom filter and preload every existing key
    bloomFilter = redissonClient.getBloomFilter("bloom-filter");
    bloomFilter.tryInit(100, 0.003);
    List<CoursePublish> coursePublishList = coursePublishMapper.selectList(new LambdaQueryWrapper<CoursePublish>());
    coursePublishList.forEach(coursePublish -> {
        String key = "content:course:publish:" + coursePublish.getId();
        bloomFilter.add(key);
    });
}

@Override
public CoursePublish getCoursePublishCache(Long courseId) {
    String key = "content:course:publish:" + courseId;
    // Bloom filter: reject keys that cannot exist
    boolean contains = bloomFilter.contains(key);
    if (!contains) {
        return null;
    }
    // Check Redis first
    Object object = redisTemplate.opsForValue().get(key);
    if (object != null) {
        String string = object.toString();
        return JSON.parseObject(string, CoursePublish.class);
    } else {
        // Otherwise query the database
        CoursePublish coursePublish = getCoursePublish(courseId);
        if (coursePublish != null) {
            bloomFilter.add(key);
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        }
        return coursePublish;
    }
}
```

3. Logic optimization

When the database has no such record, cache the null value in Redis anyway, but with a short TTL.

```java
// Query the database
CoursePublish coursePublish = getCoursePublish(courseId);
if (coursePublish != null) {
    bloomFilter.add(key);
    redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
} else {
    // coursePublish is null here, so this caches the null result for 10 seconds
    redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish), 10, TimeUnit.SECONDS);
}
return coursePublish;
```
Redis: Solutions to the Cache Avalanche Problem

I. Previously

Redis: Cache Penetration Solutions in Practice (Spring Boot 3 + Docker)

II. Introduction

A cache avalanche is when a large number of cache entries expire at the same time and the resulting flood of requests hits the database all at once. A common trigger is giving every key the same TTL.

III. Solutions

1. Locking

Take a lock so that only one thread at a time may query the database and then populate the cache. Performance is poor.
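The idea can be sketched with plain maps standing in for Redis and the database (all names hypothetical): a lock plus an inner re-check guarantees the database is queried once, no matter how many threads miss the cache at the same moment.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: serialize database access behind a lock so that, however many
// threads miss the cache at once, only the first one actually hits the DB.
class LockedCacheLoader {
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    final AtomicInteger dbHits = new AtomicInteger();

    private String queryDb(String key) {
        dbHits.incrementAndGet(); // stand-in for the real database query
        return "value-for-" + key;
    }

    String get(String key) {
        String v = cache.get(key);
        if (v != null) return v;
        synchronized (this) {
            v = cache.get(key); // re-check inside the lock
            if (v == null) {
                v = queryDb(key);
                cache.put(key, v);
            }
            return v;
        }
    }
}
```

The cost is obvious: every cache miss funnels through one lock, which is why the article calls the performance poor.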

2. Varied expiration times

The simplest effective fix is to give keys different TTLs, for example:

```java
int timeout = 10 + new Random().nextInt(20);
redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish), timeout, TimeUnit.SECONDS);
```
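As a quick sanity check on that formula, the jittered TTL always lands in the range [10, 30) seconds, so expirations spread out instead of firing together:

```java
import java.util.Random;

class TtlJitter {
    // Base TTL of 10 s plus up to 19 s of random jitter, as in the snippet above
    static int jitteredTimeout(Random random) {
        return 10 + random.nextInt(20);
    }
}
```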

3. Cache warm-up and scheduled tasks

Warm the cache by loading data into it ahead of time, then schedule sensible periodic tasks, tuned to the TTLs, that proactively refresh the cache.
Reference warm-up code follows.

```java
@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    RedisTemplate redisTemplate;
    @Autowired
    CoursePublishMapper coursePublishMapper;

    @Override
    public void afterPropertiesSet() throws Exception {
        List<CoursePublish> coursePublishList = coursePublishMapper.selectList(new LambdaQueryWrapper<CoursePublish>());
        // Warm up the cache
        coursePublishList.forEach(coursePublish -> {
            String key = "content:course:publish:" + coursePublish.getId();
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        });
    }
}
```

For the scheduled tasks themselves, xxl-job is an option; for details see this article:
Deploying the xxl-job Scheduler with Docker and Testing with Spring Boot

Spring Boot Read/Write Splitting on a Redis (7.2) Sharded Cluster

I. Overview

From Spring Boot's point of view, accessing a Redis sharded cluster is no different from accessing a sentinel-mode Redis setup. The only difference is the application.yml configuration.

II. Building the Cluster

First, you need a Redis sharded cluster; see the following article for how to build one.

Once built, you end up with a cluster roughly like the one described in the figure below.

III. Accessing the Sharded Cluster from Spring Boot

Next, for how to combine IntelliJ IDEA and Docker so a locally developed Spring Boot project can reach the Redis sharded cluster, see the following article.

Note that the YAML has to change from

```yaml
spring:
  redis:
    sentinel:
      master: mymaster
      nodes:
        - 172.30.1.11:26379
        - 172.30.1.12:26379
        - 172.30.1.13:26379
      password: 1009
    password: 1009
```

to

```yaml
spring:
  redis:
    cluster:
      nodes:
        - 172.30.2.11:6379
        - 172.30.2.12:6379
        - 172.30.2.13:6379
        - 172.30.2.21:6379
        - 172.30.2.22:6379
        - 172.30.2.23:6379
```

Everything else stays essentially the same.

Deploying a Redis (v7.2) Sharded Cluster (with Replication) via Docker Compose

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2

Goal

Build the sharded + replicated cluster shown in the figure below.

I. Prerequisites

1. Folder structure

The Redis 7.2 Docker image does not ship a config file, so download Redis from the official site and copy the redis.conf out of it.
I used the redis.conf from version 7.2.3; the file sits in the top-level folder after extraction.

Then build the following folder structure.

```txt
sharding/
├── docker-compose.yaml
├── master1
│   └── conf
│       └── redis.conf
├── master2
│   └── conf
│       └── redis.conf
├── master3
│   └── conf
│       └── redis.conf
├── replica1
│   └── conf
│       └── redis.conf
├── replica2
│   └── conf
│       └── redis.conf
└── replica3
    └── conf
        └── redis.conf
```

II. Configuration Files

1. redis.conf

Apply the following changes to every redis.conf. At this stage, the masters' and replicas' redis.conf files are identical.

```bash
port 6379
# enable cluster mode
cluster-enabled yes
# the cluster config file; redis creates and maintains it itself
cluster-config-file /data/nodes.conf
# node heartbeat failure timeout
cluster-node-timeout 5000
# directory for persistence files
dir /data
# keep redis in the foreground (Docker needs a foreground process)
daemonize no
# bind address
bind 0.0.0.0
# protected mode
protected-mode no
# number of databases
databases 1
# log file
logfile /data/run.log
```

2. docker-compose file

```yaml
version: '3.8'

networks:
  redis-sharding:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.2.0/24

services:
  master1:
    container_name: master1
    image: redis:7.2
    volumes:
      - ./master1/conf:/usr/local/etc/redis
    ports:
      - "7001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.11

  master2:
    container_name: master2
    image: redis:7.2
    volumes:
      - ./master2/conf:/usr/local/etc/redis
    ports:
      - "7002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.12

  master3:
    container_name: master3
    image: redis:7.2
    volumes:
      - ./master3/conf:/usr/local/etc/redis
    ports:
      - "7003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.13

  replica1:
    container_name: replica1
    image: redis:7.2
    volumes:
      - ./replica1/conf:/usr/local/etc/redis
    ports:
      - "8001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.21

  replica2:
    container_name: replica2
    image: redis:7.2
    volumes:
      - ./replica2/conf:/usr/local/etc/redis
    ports:
      - "8002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.22

  replica3:
    container_name: replica3
    image: redis:7.2
    volumes:
      - ./replica3/conf:/usr/local/etc/redis
    ports:
      - "8003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.23
```


A few points to note:

  • The compose file defines a custom bridge subnet with a fixed range; if that range is already in use, change it.
  • /data is not volume-mounted here; if you mount it, watch the host folder permissions.

Then run:

```bash
docker-compose -p redis-sharding up -d
```

III. Building the Cluster

All commands below are run inside the master1 container's shell.

1. Automatic master/replica assignment

This command creates a cluster of three masters and three replicas, with one replica assigned to each master: the first three IPs become masters, the last three become replicas, and which replica serves which master is chosen at random.

```bash
redis-cli --cluster create 172.30.2.11:6379 172.30.2.12:6379 172.30.2.13:6379 172.30.2.21:6379 172.30.2.22:6379 172.30.2.23:6379 --cluster-replicas 1
```

If you want to assign the master/replica relationships by hand, read on; otherwise skip the rest of this section.

2.1 Create a 3-master cluster

```bash
redis-cli --cluster create 172.30.2.11:6379 172.30.2.12:6379 172.30.2.13:6379 --cluster-replicas 0
```

2.2 Configure replicas manually

Look up the IDs of the three masters:

```bash
redis-cli -h 172.30.2.11 -p 6379 cluster nodes
```

The next three commands join the three replicas to the cluster; 172.30.2.11 can be any one of the three masters.

```bash
redis-cli -h 172.30.2.21 -p 6379 cluster meet 172.30.2.11 6379
redis-cli -h 172.30.2.22 -p 6379 cluster meet 172.30.2.11 6379
redis-cli -h 172.30.2.23 -p 6379 cluster meet 172.30.2.11 6379
```

Then assign each replica its master:

```bash
redis-cli -h 172.30.2.21 -p 6379 cluster replicate <master-ID>
redis-cli -h 172.30.2.22 -p 6379 cluster replicate <master-ID>
redis-cli -h 172.30.2.23 -p 6379 cluster replicate <master-ID>
```

IV. Testing

1. Cluster structure

The following command shows each node's ID, role, IP, port, slot ranges, and more:

```bash
redis-cli -h 172.30.2.11 -p 6379 cluster nodes
```

2. Sharding test

Store four key-value pairs in the cluster:

```bash
redis-cli -c -h 172.30.2.11 -p 6379 set key1 value1
redis-cli -c -h 172.30.2.11 -p 6379 set key2 value2
redis-cli -c -h 172.30.2.11 -p 6379 set key3 value3
redis-cli -c -h 172.30.2.11 -p 6379 set key4 value4
```

List each master's keys; you will find every node holds only a subset of them.

```bash
redis-cli -h 172.30.2.11 -p 6379 --scan
redis-cli -h 172.30.2.12 -p 6379 --scan
redis-cli -h 172.30.2.13 -p 6379 --scan
```
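The keys scatter because Redis Cluster maps every key to one of 16384 hash slots using CRC16 (the XMODEM variant) modulo 16384, and each master owns a range of slots. A minimal sketch of the slot computation (hash-tag handling omitted):

```java
import java.nio.charset.StandardCharsets;

class ClusterSlot {
    // CRC-16/XMODEM: polynomial 0x1021, initial value 0x0000, no reflection
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Hash slot for a key (ignoring {hash tag} extraction for simplicity)
    static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }
}
```

Clients in cluster mode (redis-cli -c, Lettuce, Jedis) compute the same slot to route each command to the right master.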
Spring Boot Read/Write Splitting on a Sentinel-Mode Redis (7.2) Cluster

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2
  • IntelliJ IDEA

I. Prerequisites

First build a Redis cluster by following the article below.

Once deployed, the Redis cluster looks roughly like the figure below.

II. Accessing the Redis Cluster from Spring Boot

1. Dependencies

Mind the lettuce-core version: if it is too old, it is incompatible with newer Redis releases.

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.1.4.RELEASE</version> <!-- or newer -->
</dependency>
```

2. YAML configuration

Add the following to application.yml. The first password authenticates against the sentinel nodes; the second against the data nodes.

```yaml
spring:
  redis:
    sentinel:
      master: mymaster
      nodes:
        - 172.30.1.11:26379
        - 172.30.1.12:26379
        - 172.30.1.13:26379
      password: 1009
    password: 1009
```

The question of the sentinels' IPs is covered below.

3. Configure read/write splitting

Declare a bean in any configuration class; for simplicity, this article puts it directly in the Spring Boot application class.

```java
@Bean
public LettuceClientConfigurationBuilderCustomizer clientConfigurationBuilderCustomizer() {
    return clientConfigurationBuilder -> clientConfigurationBuilder.readFrom(ReadFrom.REPLICA_PREFERRED);
}
```

ReadFrom is Lettuce's read-routing policy, an enum with the following options:

  • MASTER: read from the master node
  • MASTER_PREFERRED: prefer the master; read from a replica only when the master is unavailable
  • REPLICA: read from replica (slave) nodes
  • REPLICA_PREFERRED: prefer replicas; read from the master only when no replica is available

As for which nodes accept reads and which accept writes: Redis 7 marks replicas read-only by default, so in practice only the master has read/write access and the others are read-only. If your setup differs, add this line to every redis-server's config file:

```txt
replica-read-only yes
```

4. A simple controller

Write a simple controller for testing later.

```java
@RestController
public class HelloController {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @GetMapping("/get/{key}")
    public String hi(@PathVariable String key) {
        return redisTemplate.opsForValue().get(key);
    }

    @GetMapping("/set/{key}/{value}")
    public String hi(@PathVariable String key, @PathVariable String value) {
        redisTemplate.opsForValue().set(key, value);
        return "success";
    }
}
```

III. Running

First, because all of the Redis nodes sit in one Docker bridge network, a Spring Boot project developed in IntelliJ IDEA and run on the host (Windows) cannot fully interact with the Redis cluster.

Both the sentinels and the redis-servers do expose ports to the host, so we can reach each of them individually through the mapped ports. But our program only talks to the sentinels; the sentinels manage the redis-servers and reply with the redis-servers' IPs for the program to connect to, and those IPs belong to the Docker bridge network. So even with an IP in hand, the program cannot reach the redis-server.

The fix is to run the project in a Docker container attached to the same network as Redis, as in the figure below.

For a quick way to run a Spring Boot program from IntelliJ IDEA via Docker, see the article below.

Remember to expose your application's port to the host so testing stays convenient.

IV. Testing

1. Writes

Visit localhost:8080/set/num/7799 in a browser.

The Spring Boot container log shows the write request going to the master at 172.30.1.2:6379.

```txt
01-06 07:23:59:848 DEBUG 1 --- [nio-8080-exec-6] io.lettuce.core.RedisChannelHandler      : dispatching command AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:23:59:848 DEBUG 1 --- [nio-8080-exec-6] i.l.c.m.MasterReplicaConnectionProvider : getConnectionAsync(WRITE)
01-06 07:23:59:848 DEBUG 1 --- [nio-8080-exec-6] io.lettuce.core.RedisChannelHandler : dispatching command AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:23:59:848 DEBUG 1 --- [nio-8080-exec-6] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379, epid=0xf] write() writeAndFlush command AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:23:59:848 DEBUG 1 --- [nio-8080-exec-6] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379, epid=0xf] write() done
01-06 07:23:59:848 DEBUG 1 --- [oEventLoop-4-10] io.lettuce.core.protocol.CommandHandler : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379, epid=0xf, chid=0x16] write(ctx, AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], promise)
01-06 07:23:59:849 DEBUG 1 --- [oEventLoop-4-10] io.lettuce.core.protocol.CommandEncoder : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379] writing command AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:23:59:851 DEBUG 1 --- [oEventLoop-4-10] io.lettuce.core.protocol.CommandHandler : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379, epid=0xf, chid=0x16] Received: 5 bytes, 1 commands in the stack
01-06 07:23:59:851 DEBUG 1 --- [oEventLoop-4-10] io.lettuce.core.protocol.CommandHandler : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379, epid=0xf, chid=0x16] Stack contains: 1 commands
01-06 07:23:59:851 DEBUG 1 --- [oEventLoop-4-10] i.l.core.protocol.RedisStateMachine : Decode done, empty stack: true
01-06 07:23:59:852 DEBUG 1 --- [oEventLoop-4-10] io.lettuce.core.protocol.CommandHandler : [channel=0x9b4ebc85, /172.30.1.5:46700 -> /172.30.1.2:6379, epid=0xf, chid=0x16] Completing command AsyncCommand [type=SET, output=StatusOutput [output=OK, error='null'], commandType=io.lettuce.core.protocol.Command]
```

2. Reads

Visit localhost:8080/get/num in a browser.

The Spring Boot container log shows the read request going to one of the two replicas.

```txt
01-06 07:25:45:342 DEBUG 1 --- [io-8080-exec-10] io.lettuce.core.RedisChannelHandler      : dispatching command AsyncCommand [type=GET, output=ValueOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:25:45:342 DEBUG 1 --- [io-8080-exec-10] i.l.c.m.MasterReplicaConnectionProvider : getConnectionAsync(READ)
01-06 07:25:45:342 DEBUG 1 --- [io-8080-exec-10] io.lettuce.core.RedisChannelHandler : dispatching command AsyncCommand [type=GET, output=ValueOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:25:45:342 DEBUG 1 --- [io-8080-exec-10] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379, epid=0x1c] write() writeAndFlush command AsyncCommand [type=GET, output=ValueOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:25:45:342 DEBUG 1 --- [io-8080-exec-10] i.lettuce.core.protocol.DefaultEndpoint : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379, epid=0x1c] write() done
01-06 07:25:45:342 DEBUG 1 --- [oEventLoop-4-11] io.lettuce.core.protocol.CommandHandler : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379, epid=0x1c, chid=0x23] write(ctx, AsyncCommand [type=GET, output=ValueOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], promise)
01-06 07:25:45:343 DEBUG 1 --- [oEventLoop-4-11] io.lettuce.core.protocol.CommandEncoder : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379] writing command AsyncCommand [type=GET, output=ValueOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command]
01-06 07:25:45:346 DEBUG 1 --- [oEventLoop-4-11] io.lettuce.core.protocol.CommandHandler : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379, epid=0x1c, chid=0x23] Received: 10 bytes, 1 commands in the stack
01-06 07:25:45:346 DEBUG 1 --- [oEventLoop-4-11] io.lettuce.core.protocol.CommandHandler : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379, epid=0x1c, chid=0x23] Stack contains: 1 commands
01-06 07:25:45:346 DEBUG 1 --- [oEventLoop-4-11] i.l.core.protocol.RedisStateMachine : Decode done, empty stack: true
01-06 07:25:45:346 DEBUG 1 --- [oEventLoop-4-11] io.lettuce.core.protocol.CommandHandler : [channel=0x96ae68cf, /172.30.1.5:38102 -> /172.30.1.4:6379, epid=0x1c, chid=0x23] Completing command AsyncCommand [type=GET, output=ValueOutput [output=[B@7427ef47, error='null'], commandType=io.lettuce.core.protocol.Command]
```

3. Further tests

There are more experiments you can run and verify yourself; here are a few, without going into detail:

  1. Stop both replica containers. After the sentinels finish their bookkeeping and notifications, which node serves reads and which serves writes?
  2. Start the two replicas again and let the sentinels catch up, then stop the master and wait for the sentinels again. Now which node serves reads and which serves writes?
  3. Start the master again and, once the sentinels settle, check one more time which node serves reads and which serves writes.
Deploying Redis (v7.2) Master-Replica Mode via Docker Compose

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2

I. Prerequisites

1. Redis config file

The Redis 7.2 Docker image does not ship a config file, so download Redis from the official site and copy the redis.conf out of it.
I used the redis.conf from version 7.2.3; the file sits in the top-level folder after extraction.

2. Pull the Redis image

```bash
docker pull redis:7.2
```

3. Folder structure

Create a cluster folder as shown and copy the conf file into the three locations pictured.

II. docker-compose

The full docker-compose file follows.

```yaml
version: '3.8'

networks:
  redis-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.1.0/24

services:
  redis-master:
    container_name: redis-master
    image: redis:7.2
    volumes:
      - ./master/redis.conf:/usr/local/etc/redis/redis.conf
      # - ./master/data:/data
    ports:
      - "7001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-network:
        ipv4_address: 172.30.1.2

  redis-replica1:
    container_name: redis-replica1
    image: redis:7.2
    volumes:
      - ./replica1/redis.conf:/usr/local/etc/redis/redis.conf
      # - ./replica1/data:/data
    ports:
      - "7002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.3

  redis-replica2:
    container_name: redis-replica2
    image: redis:7.2
    volumes:
      - ./replica2/redis.conf:/usr/local/etc/redis/redis.conf
      # - ./replica2/data:/data
    ports:
      - "7003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.4
```

A few points to note:

  1. The compose file defines a custom bridge subnet with a fixed range; if that range is already in use, change it.
  2. /data is not volume-mounted here; if you mount it, watch the host folder permissions.

III. Master/Replica Configuration

1. Master config file

The master's config file is master/redis.conf; make the following changes:

  1. bind
    Change bind 127.0.0.1 -::1 to bind 0.0.0.0 to listen on all network interfaces.

  2. protected-mode
    Set protected-mode to no, disabling protected mode so remote connections are accepted.

  3. masterauth
    Set masterauth to 1009; this is the password replicas use to authenticate against the master. Pick another value if you like.

  4. requirepass
    Set requirepass to 1009; this is the password clients use to authenticate against this node. Pick another value if you like.

2. Replica config file

Copy the master's config file, then make the following additional change to turn it into a replica config:

  1. replicaof
    Add the line replicaof redis-master 6379, marking this node as a replica whose master has host redis-master and port 6379. You could also write the IP 172.30.1.2, since docker-compose pins a fixed IP for each node; and note the port is 6379, not the mapped 700x. These are Docker details, not covered again here.

Redis 5.0 introduced the term replica to replace slave, so newer versions recommend replicaof, even though slaveof still works.

IV. Running

With all three nodes' config files in place, start the whole service with:

```shell
docker-compose -p redis-cluster up -d
```

Check the master's log: the master syncs data to the two replicas at 172.30.1.3 and 172.30.1.4, the connections are healthy, and a string of success messages follows.

```bash
2024-01-05 15:12:59 1:M 05 Jan 2024 07:12:59.008 * Opening AOF incr file appendonly.aof.1.incr.aof on server start
2024-01-05 15:12:59 1:M 05 Jan 2024 07:12:59.008 * Ready to accept connections tcp
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Replica 172.30.1.4:6379 asks for synchronization
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Full resync requested by replica 172.30.1.4:6379
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Replication backlog created, my new replication IDs are '5bef8fa8e58042f1aee8eae528c6e10228a0c96b' and '0000000000000000000000000000000000000000'
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Delay next BGSAVE for diskless SYNC
2024-01-05 15:13:01 1:M 05 Jan 2024 07:13:01.167 * Replica 172.30.1.3:6379 asks for synchronization
2024-01-05 15:13:01 1:M 05 Jan 2024 07:13:01.167 * Full resync requested by replica 172.30.1.3:6379
2024-01-05 15:13:01 1:M 05 Jan 2024 07:13:01.167 * Delay next BGSAVE for diskless SYNC
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.033 * Starting BGSAVE for SYNC with target: replicas sockets
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.033 * Background RDB transfer started by pid 20
2024-01-05 15:13:05 20:C 05 Jan 2024 07:13:05.035 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.035 * Diskless rdb transfer, done reading from pipe, 2 replicas still up.
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Background RDB transfer terminated with success
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Streamed RDB transfer with replica 172.30.1.4:6379 succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Synchronization with replica 172.30.1.4:6379 succeeded
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Streamed RDB transfer with replica 172.30.1.3:6379 succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Synchronization with replica 172.30.1.3:6379 succeeded
```

Now the replica log: Connecting to MASTER redis-master:6379 shows the replica connecting to the master and requesting a sync, again followed by a string of success messages.

```bash
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.166 * Connecting to MASTER redis-master:6379
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.166 * MASTER <-> REPLICA sync started
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.166 * Non blocking connect for SYNC fired the event.
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.167 * Master replied to PING, replication can continue...
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.167 * Partial resynchronization not possible (no cached master)
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.033 * Full resync from master: 5bef8fa8e58042f1aee8eae528c6e10228a0c96b:0
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.035 * MASTER <-> REPLICA sync: receiving streamed RDB from master with EOF to disk
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.038 * MASTER <-> REPLICA sync: Flushing old data
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.038 * MASTER <-> REPLICA sync: Loading DB in memory
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * Loading RDB produced by version 7.2.3
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * RDB age 0 seconds
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * RDB memory usage when created 0.90 Mb
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * Done loading RDB, keys loaded: 1, keys expired: 0.
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.057 * MASTER <-> REPLICA sync: Finished with success
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.057 * Creating AOF incr file temp-appendonly.aof.incr on background rewrite
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.057 * Background append only file rewriting started by pid 21
2024-01-05 15:13:05 21:C 05 Jan 2024 07:13:05.067 * Successfully created the temporary AOF base file temp-rewriteaof-bg-21.aof
2024-01-05 15:13:05 21:C 05 Jan 2024 07:13:05.068 * Fork CoW for AOF rewrite: current 0 MB, peak 0 MB, average 0 MB
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.084 * Background AOF rewrite terminated with success
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.084 * Successfully renamed the temporary AOF base file temp-rewriteaof-bg-21.aof into appendonly.aof.5.base.rdb
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.084 * Successfully renamed the temporary AOF incr file temp-appendonly.aof.incr into appendonly.aof.5.incr.aof
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.093 * Removing the history file appendonly.aof.4.incr.aof in the background
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.093 * Removing the history file appendonly.aof.4.base.rdb in the background
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.101 * Background AOF rewrite finished successfully
```

V. Testing

Use whatever Docker or Redis client tool you like to connect to the master's Redis service; anything that gets you into redis-cli works. Here we attach to the Docker container.

  1. Set a value on the master and inspect the replication info

```txt
root@ac1ecfc4e3a5:/data# redis-cli 
127.0.0.1:6379> auth 1009
OK
127.0.0.1:6379> set num 67899
OK
127.0.0.1:6379> get num
"67899"
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.30.1.4,port=6379,state=online,offset=3388,lag=1
slave1:ip=172.30.1.3,port=6379,state=online,offset=3388,lag=1
master_failover_state:no-failover
master_replid:5bef8fa8e58042f1aee8eae528c6e10228a0c96b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:3388
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:3388
```

  2. Read it back from a replica

```txt
root@a3016db388e3:/data# redis-cli 
127.0.0.1:6379> auth 1009
OK
127.0.0.1:6379> get num
"67899"
```

The test succeeds.

Deploying Redis (v7.2) Sentinel Mode via Docker Compose

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2

I. Prerequisites

1. A master-replica cluster

Sentinel builds on an existing Redis master-replica cluster; see this article first:
Deploying Redis (v7.2) Master-Replica Mode via Docker Compose (referred to below as "the master-replica article")

2. Folder structure

Unlike plain master-replica mode, Redis Sentinel rewrites your conf files: both the redis-server nodes' configs and the sentinels' own may be modified, so watch the file permissions here. Otherwise you will keep getting the warning "Sentinel was not able to save the new configuration on disk".

A few related threads are worth reading if you are interested, or just follow along with this article.

In short, the folder structure from the master-replica article needs some changes and additions, as follows:

```txt
cluster/
├── docker-compose.yaml
├── master
│   └── conf
│       └── redis.conf
├── replica1
│   └── conf
│       └── redis.conf
├── replica2
│   └── conf
│       └── redis.conf
├── sentinel1
│   └── conf
│       └── sentinel.conf
├── sentinel2
│   └── conf
│       └── sentinel.conf
└── sentinel3
    └── conf
        └── sentinel.conf
```

The redis.conf files and docker-compose.yaml keep the same content as in the master-replica article for now; everything else is new and can stay empty for the moment.

II. Configuration Files

1. redis-server config files

Unchanged.

2. Sentinel config files

Put the following into each of the three sentinel.conf files:

```txt
sentinel monitor mymaster 172.30.1.2 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel auth-pass mymaster 1009
dir "/data"
```

Line by line, this configures:

  • The monitored master: sentinel monitor names the master to watch, with a user-defined name (here mymaster), its address and port, and a quorum: the minimum number of sentinel votes needed to trigger a failover.
  • Failure detection: how long a sentinel waits before judging the master down.
  • Failover behavior, such as the failover timeout.
  • Authentication (when the master has a password): the password the sentinels need to connect to the master and its replicas.
  • The sentinel's working directory.

3. docker-compose file

```yaml
version: '3.8'

networks:
  redis-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.1.0/24

services:
  redis-master:
    container_name: redis-master
    image: redis:7.2
    volumes:
      - ./master/conf:/usr/local/etc/redis
      # - ./master/data:/data
    ports:
      - "7001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-network:
        ipv4_address: 172.30.1.2

  redis-replica1:
    container_name: redis-replica1
    image: redis:7.2
    volumes:
      - ./replica1/conf:/usr/local/etc/redis
      # - ./replica1/data:/data
    ports:
      - "7002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.3

  redis-replica2:
    container_name: redis-replica2
    image: redis:7.2
    volumes:
      - ./replica2/conf:/usr/local/etc/redis
      # - ./replica2/data:/data
    ports:
      - "7003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.4

  redis-sentinel1:
    container_name: redis-sentinel1
    image: redis:7.2
    volumes:
      - ./sentinel1/conf:/usr/local/etc/redis
    ports:
      - "27001:26379"
    command: ["redis-sentinel", "/usr/local/etc/redis/sentinel.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.11

  redis-sentinel2:
    container_name: redis-sentinel2
    image: redis:7.2
    volumes:
      - ./sentinel2/conf:/usr/local/etc/redis
    ports:
      - "27002:26379"
    command: ["redis-sentinel", "/usr/local/etc/redis/sentinel.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.12

  redis-sentinel3:
    container_name: redis-sentinel3
    image: redis:7.2
    volumes:
      - ./sentinel3/conf:/usr/local/etc/redis
    ports:
      - "27003:26379"
    command: ["redis-sentinel", "/usr/local/etc/redis/sentinel.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.13
```
A few points to note:

  • Unlike the master-replica post, every configuration file here is mounted as a directory rather than as a single file.
  • A custom bridge subnet with a fixed range is defined; if that range is already in use, change it.
  • The data directories are not mounted with -v; if you do mount them, mind the permissions of the corresponding host directories.
  • The master's address is 172.30.1.2; if you change it, remember to update sentinel.conf to match.

III. Running

Before starting, remember to back up all the conf files, because Sentinel rewrites the conf files mounted into its containers.

language-bash
docker-compose -p redis-cluster up -d

Looking at the log of one of the sentinel nodes, you can see that it listens on port 26379, monitors the master mymaster 172.30.1.2 6379, registers the two replicas 172.30.1.4 6379 and 172.30.1.3 6379, and discovers the other two sentinel services at 172.30.1.13 26379 and 172.30.1.12 26379.

language-txt
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.758 * Running mode=sentinel, port=26379.
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.789 * Sentinel new configuration saved on disk
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.790 * Sentinel ID is 499007c98c0a165b13e026a4443ceb890695c191
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.790 # +monitor master mymaster 172.30.1.2 6379 quorum 2
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.791 * +slave slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.815 * Sentinel new configuration saved on disk
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.055 * +sentinel sentinel bcfaed15fb01e7ad03b013fe5e964479c1a1f138 172.30.1.13 26379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.093 * Sentinel new configuration saved on disk
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.356 * +sentinel sentinel 92d9a1419be1256d1715df2aa17cea4bbacfdf60 172.30.1.12 26379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.376 * Sentinel new configuration saved on disk
2024-01-05 18:06:50 1:X 05 Jan 2024 10:06:50.823 * +slave slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:50 1:X 05 Jan 2024 10:06:50.837 * Sentinel new configuration saved on disk

IV. Testing

Stop the redis-master container directly and watch the sentinel log: after detecting that the master is down, Sentinel elects 172.30.1.3 as the new master and reconfigures the remaining two nodes as its replicas.

language-txt
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.896 # +sdown master mymaster 172.30.1.2 6379
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.968 # +odown master mymaster 172.30.1.2 6379 #quorum 2/2
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.968 # +new-epoch 1
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.968 # +try-failover master mymaster 172.30.1.2 6379
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.987 * Sentinel new configuration saved on disk
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.987 # +vote-for-leader 499007c98c0a165b13e026a4443ceb890695c191 1
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.990 * 92d9a1419be1256d1715df2aa17cea4bbacfdf60 voted for 92d9a1419be1256d1715df2aa17cea4bbacfdf60 1
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.021 * bcfaed15fb01e7ad03b013fe5e964479c1a1f138 voted for 499007c98c0a165b13e026a4443ceb890695c191 1
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.054 # +elected-leader master mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.054 # +failover-state-select-slave master mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.125 # +selected-slave slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.125 * +failover-state-send-slaveof-noone slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.209 * +failover-state-wait-promotion slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.033 * Sentinel new configuration saved on disk
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.033 # +promoted-slave slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.033 # +failover-state-reconf-slaves master mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.094 * +slave-reconf-sent slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.262 * +slave-reconf-inprog slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.262 * +slave-reconf-done slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 # +failover-end master mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 # +switch-master mymaster 172.30.1.2 6379 172.30.1.3 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 * +slave slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.3 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 * +slave slave 172.30.1.2:6379 172.30.1.2 6379 @ mymaster 172.30.1.3 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.373 * Sentinel new configuration saved on disk
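The decisive event in that log is +switch-master. A quick sketch of pulling the old and new master addresses out of such an event line (field layout assumed from the log output above):

```python
def parse_switch_master(event: str):
    """Parse '+switch-master <name> <old_ip> <old_port> <new_ip> <new_port>'."""
    fields = event.split()
    assert fields[0] == "+switch-master"
    name = fields[1]
    old = (fields[2], int(fields[3]))  # demoted master
    new = (fields[4], int(fields[5]))  # promoted replica
    return name, old, new

name, old, new = parse_switch_master(
    "+switch-master mymaster 172.30.1.2 6379 172.30.1.3 6379")
```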

Next, look at the log of 172.30.1.3, i.e. redis-replica1. After its connection to the old master failed, it was promoted to master mode (MASTER MODE enabled):

language-txt
2024-01-05 18:10:03 1:S 05 Jan 2024 10:10:03.812 * Reconnecting to MASTER 172.30.1.2:6379
2024-01-05 18:10:03 1:S 05 Jan 2024 10:10:03.813 * MASTER <-> REPLICA sync started
2024-01-05 18:10:03 1:S 05 Jan 2024 10:10:03.813 # Error condition on socket for SYNC: Connection refused
2024-01-05 18:10:04 1:S 05 Jan 2024 10:10:04.582 * Connecting to MASTER 172.30.1.2:6379
2024-01-05 18:10:04 1:S 05 Jan 2024 10:10:04.582 * MASTER <-> REPLICA sync started
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.209 * Discarding previously cached master state.
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.209 * Setting secondary replication ID to 5032654a1279c56d758c93a4eb1c4b89c99975a9, valid up to offset: 40756. New replication ID is d3464601d550e1159d91234567a366fa1f1a0b5e
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.209 * MASTER MODE enabled (user request from 'id=8 addr=172.30.1.11:55710 laddr=172.30.1.3:6379 fd=13 name=sentinel-499007c9-cmd age=199 idle=0 flags=x db=0 sub=0 psub=0 ssub=0 multi=4 qbuf=188 qbuf-free=20286 argv-mem=4 multi-mem=169 rbs=2048 rbp=1024 obl=45 oll=0 omem=0 tot-mem=23717 events=r cmd=exec user=default redir=-1 resp=2 lib-name= lib-ver=')
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.229 * CONFIG REWRITE executed with success.
2024-01-05 18:10:10 1:M 05 Jan 2024 10:10:10.120 * Replica 172.30.1.4:6379 asks for synchronization
2024-01-05 18:10:10 1:M 05 Jan 2024 10:10:10.120 * Partial resynchronization request from 172.30.1.4:6379 accepted. Sending 567 bytes of backlog starting from offset 40756.

redis-replica2's log, in turn, shows its sync requests now going to 172.30.1.3 instead of the previous 172.30.1.2.

Now connect to the redis-replica1 container: while it was a replica this node was read-only, but now writes succeed.

language-txt
root@1eefea35001f:/data# redis-cli 
127.0.0.1:6379> auth 1009
OK
127.0.0.1:6379> set num 8766
OK
127.0.0.1:6379> get num
"8766"

You will also find that the other two nodes are now read-only, and even if the former master comes back online, it does not reclaim the master role.
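Because the master can move at any time, application clients should ask Sentinel for the current master rather than hard-code an address. A sketch using redis-py's Sentinel helper (assuming redis-py is installed; the localhost ports come from the host port mappings in the compose file above, and the password from sentinel.conf):

```python
def get_master_client(password: str = "1009"):
    """Return a Redis client that always points at the current master."""
    from redis.sentinel import Sentinel  # requires the redis-py package

    sentinel = Sentinel(
        [("localhost", 27001), ("localhost", 27002), ("localhost", 27003)],
        socket_timeout=0.5,
    )
    # master_for resolves mymaster via the sentinels, so the client follows
    # +switch-master events instead of sticking to a dead address.
    return sentinel.master_for("mymaster", password=password, socket_timeout=0.5)
```

The function is only a sketch and is not invoked here, since it needs the running containers to connect to.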

Test passed.