Redis Basics 01: Installation

Reference: Redis Quick Start Guide (Chinese)
Reference: Redis Tutorial

I. Start and Connect

Start a local Redis instance, or use a free Redis Cloud account; either works.

language-bash
docker run --name CommonTrain -p 6379:6379 -itd redis:7.2

Then download RedisInsight.

II. Supported Data Types

  • String (string)
  • Hash (hash)
  • List (list)
  • Set (set)
  • Sorted set (sorted set)
  • Bitmap (bitmaps)
  • HyperLogLog (hyperLogLogs); a short RedisTemplate sketch covering these types follows this list.
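
For quick orientation, here is a minimal sketch (not from the original article) showing one operation per data type through Spring Data Redis, which the later articles use anyway; the StringRedisTemplate bean and the demo key names are assumptions for illustration.

language-java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class RedisTypesDemo {

    @Autowired
    private StringRedisTemplate redisTemplate;

    public void demo() {
        redisTemplate.opsForValue().set("demo:string", "hello");           // string
        redisTemplate.opsForHash().put("demo:hash", "field", "value");     // hash
        redisTemplate.opsForList().rightPush("demo:list", "a");            // list
        redisTemplate.opsForSet().add("demo:set", "a", "b");               // set
        redisTemplate.opsForZSet().add("demo:zset", "member", 1.0);        // sorted set
        redisTemplate.opsForValue().setBit("demo:bitmap", 7, true);        // bitmap
        redisTemplate.opsForHyperLogLog().add("demo:hll", "a", "b");       // HyperLogLog
    }
}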

Redis Cache Breakdown (Hotspot Invalidation): Solutions

I. Continuing from the Previous Article

Redis Cache Avalanche: Solutions

II. Introduction

Cache breakdown happens when heavy concurrent traffic targets the same hot key: the moment that key's cache entry expires, all of the load lands on the database.

III. Solutions

1. Double-checked locking

language-java
@Override
public CoursePublish getCoursePublishCache(Long courseId) {
    String key = "content:course:publish:" + courseId;
    // Bloom filter: reject keys that cannot exist
    boolean contains = bloomFilter.contains(key);
    if (!contains) {
        return null;
    }
    // Query Redis first
    Object object = redisTemplate.opsForValue().get(key);
    if (object != null) {
        String string = object.toString();
        CoursePublish coursePublish = JSON.parseObject(string, CoursePublish.class);
        return coursePublish;
    } else {
        // Fall back to the database
        // Lock to prevent cache breakdown
        synchronized (this) {
            // Double-checked locking: re-check the cache inside the lock
            object = redisTemplate.opsForValue().get(key);
            if (object != null) {
                String string = object.toString();
                CoursePublish coursePublish = JSON.parseObject(string, CoursePublish.class);
                return coursePublish;
            }
            CoursePublish coursePublish = getCoursePublish(courseId);
            if (coursePublish != null) {
                bloomFilter.add(key);
                redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
            } else {
                // Cache the null result with a short, randomized TTL
                int timeout = 10 + new Random().nextInt(20);
                redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish), timeout, TimeUnit.SECONDS);
            }
            return coursePublish;
        }
    }
}

2. Cache warm-up and scheduled tasks

Warm the cache by loading the data ahead of time, then schedule reasonable recurring jobs based on the expiration times to refresh the cache proactively, so that hot data effectively never expires.
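
The article does not show the refresh job itself, so the following is only a minimal sketch using Spring's @Scheduled, reusing the RedisTemplate and CoursePublishMapper beans from the code above; the class name and the 30-minute interval are assumptions, and a dedicated scheduler such as xxl-job could be used instead. @EnableScheduling must be present on a configuration class for @Scheduled to run.

language-java
import com.alibaba.fastjson.JSON; // or com.alibaba.fastjson2.JSON, depending on the project
import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class CoursePublishCacheRefresher {

    @Autowired
    private RedisTemplate redisTemplate;
    @Autowired
    private CoursePublishMapper coursePublishMapper;

    // Rewrite the hot keys every 30 minutes so they are refreshed before they could expire.
    @Scheduled(fixedRate = 30 * 60 * 1000)
    public void refreshHotCourses() {
        List<CoursePublish> list = coursePublishMapper.selectList(new LambdaQueryWrapper<CoursePublish>());
        for (CoursePublish coursePublish : list) {
            String key = "content:course:publish:" + coursePublish.getId();
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        }
    }
}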

Redis Cache Penetration: Solutions in Practice with SpringBoot 3 + Docker

I. Introduction

Cache penetration is when a class of requests always slips past the cache and hits the database.

For example, when a request asks for data that does not exist in the database, the cache will never hold it either, so the requests keep coming, possibly at high concurrency, and the pressure on the database keeps growing.

II. Solution Overview

  1. If the keys follow some pattern, add a validation step and reject keys that do not match.
  2. Use a Redisson Bloom filter.
  3. Change the logic: when the database has no such record, cache a null value in Redis as well, but with a short expiration time.

III. Redis Docker Deployment

A docker-compose example is shown below; redis.conf can be downloaded from here.

language-yml
redis:
  container_name: redis
  image: redis:7.2
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
  command: [ "redis-server", "/usr/local/etc/redis/redis.conf" ]

IV. SpringBoot 3 Base Code

1. Dependencies and configuration

language-xml
<!-- redis -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- redis connection pool -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
    <version>2.11.1</version>
</dependency>
<!-- redisson -->
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.24.3</version>
</dependency>
language-yaml
spring:
  data:
    redis:
      host: 192.168.101.65   # Redis server hostname or IP
      port: 6379             # Redis server port
      password:              # password used to connect to Redis
      database: 0            # index of the Redis database to use
      lettuce:
        pool:
          max-active: 20     # maximum number of active connections in the pool
          max-idle: 10       # maximum number of idle connections in the pool
          min-idle: 0        # minimum number of idle connections in the pool
      timeout: 10000         # connection timeout (ms)
      lock-watchdog-timeout: 100 # Redisson distributed-lock watchdog timeout (ms)

2. Base code

The code to demonstrate is simple: a request carrying a courseId arrives, the service method below is called, and it queries the database.

language-java
@Override
public CoursePublish getCoursePublish(Long courseId) {
    return coursePublishMapper.selectById(courseId);
}

After reworking it with Redis, the base code looks like this:

language-java
@Override
public CoursePublish getCoursePublishCache(Long courseId) {
    String key = "content:course:publish:" + courseId;
    // Query Redis first
    Object object = redisTemplate.opsForValue().get(key);
    if (object != null) {
        String string = object.toString();
        CoursePublish coursePublish = JSON.parseObject(string, CoursePublish.class);
        return coursePublish;
    } else {
        // Fall back to the database
        CoursePublish coursePublish = getCoursePublish(courseId);
        if (coursePublish != null) {
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        }
        return coursePublish;
    }
}

V. Cache Optimization Code

1. Key validation

The IDs here follow no particular pattern, so a validation rule cannot be added; this step is skipped. A purely hypothetical sketch of what such a check could look like is shown below.
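
For illustration only, assuming keys did follow a rule (the "positive, at most 10 digits" constraint below is invented, not taken from the article), a guard like this could reject malformed requests before they ever reach Redis or the database:

language-java
// Hypothetical sketch: the rule is invented for illustration.
private boolean isValidCourseId(Long courseId) {
    return courseId != null && courseId > 0 && String.valueOf(courseId).length() <= 10;
}

// At the top of getCoursePublishCache:
// if (!isValidCourseId(courseId)) {
//     return null;
// }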

2. Bloom filter

Read the YAML configuration:

language-java
@Data
@Component
@ConfigurationProperties(prefix = "spring.data.redis")
public class RedisProperties {

    private String host;
    private int port;
    private String password;
    private int database;
    private int lockWatchdogTimeout;
}

Configure the RedissonClient:

language-java
@Slf4j
@Configuration
public class RedissionConfig {

    @Autowired
    private RedisProperties redisProperties;

    @Bean
    public RedissonClient redissonClient() {
        RedissonClient redissonClient;

        Config config = new Config();
        // The Redisson pulled in by the starter requires the redis:// prefix; other setups do not
        String url = "redis://" + redisProperties.getHost() + ":" + redisProperties.getPort();
        config.useSingleServer().setAddress(url)
                //.setPassword(redisProperties.getPassword())
                .setDatabase(redisProperties.getDatabase());

        try {
            redissonClient = Redisson.create(config);
            return redissonClient;
        } catch (Exception e) {
            log.error("RedissonClient init redis url:[{}], Exception:", url, e);
            return null;
        }
    }
}

Add the Bloom filter to the service as follows:

language-java
private RBloomFilter<String> bloomFilter;

@PostConstruct
public void init() {
    // Initialize the Bloom filter
    bloomFilter = redissonClient.getBloomFilter("bloom-filter");
    bloomFilter.tryInit(100, 0.003);
    List<CoursePublish> coursePublishList = coursePublishMapper.selectList(new LambdaQueryWrapper<CoursePublish>());
    coursePublishList.forEach(coursePublish -> {
        String key = "content:course:publish:" + coursePublish.getId();
        bloomFilter.add(key);
    });
}

@Override
public CoursePublish getCoursePublishCache(Long courseId) {
    String key = "content:course:publish:" + courseId;
    // Bloom filter: reject keys that cannot exist
    boolean contains = bloomFilter.contains(key);
    if (!contains) {
        return null;
    }
    // Query Redis first
    Object object = redisTemplate.opsForValue().get(key);
    if (object != null) {
        String string = object.toString();
        CoursePublish coursePublish = JSON.parseObject(string, CoursePublish.class);
        return coursePublish;
    } else {
        // Fall back to the database
        CoursePublish coursePublish = getCoursePublish(courseId);
        if (coursePublish != null) {
            bloomFilter.add(key);
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        }
        return coursePublish;
    }
}

3. Logic optimization

When the database has no such record, cache a null value in Redis as well, but with a short expiration time.

language-java
// Fall back to the database
CoursePublish coursePublish = getCoursePublish(courseId);
if (coursePublish != null) {
    bloomFilter.add(key);
    redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
} else {
    // Cache the null value with a short TTL
    redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish), 10, TimeUnit.SECONDS);
}
return coursePublish;

Redis Cache Avalanche: Solutions

I. Continuing from the Previous Article

Redis Cache Penetration: Solutions in Practice with SpringBoot 3 + Docker

II. Introduction

A cache avalanche is when a large number of cache entries expire at once and the corresponding requests all land on the database at the same time. One common trigger is giving every key the same expiration time.

III. Solutions

1. Locking

Add a lock so that only one thread at a time can query the database and then fill the cache. Performance is poor. A distributed-lock sketch follows.
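
The article only states the idea, so here is a minimal sketch of it using the RedissonClient configured in the cache penetration article; the lock key, wait/lease times, and method name are illustrative assumptions rather than the article's code (imports needed: org.redisson.api.RLock, java.util.concurrent.TimeUnit).

language-java
// Hedged sketch: only one thread rebuilds the cache entry; the others wait briefly or give up.
public CoursePublish getWithLock(Long courseId) {
    String key = "content:course:publish:" + courseId;
    Object cached = redisTemplate.opsForValue().get(key);
    if (cached != null) {
        return JSON.parseObject(cached.toString(), CoursePublish.class);
    }
    RLock lock = redissonClient.getLock("lock:" + key);
    try {
        // wait up to 3s for the lock and hold it for at most 10s
        if (lock.tryLock(3, 10, TimeUnit.SECONDS)) {
            try {
                // re-check the cache: another thread may have filled it while we waited
                cached = redisTemplate.opsForValue().get(key);
                if (cached != null) {
                    return JSON.parseObject(cached.toString(), CoursePublish.class);
                }
                CoursePublish coursePublish = getCoursePublish(courseId);
                if (coursePublish != null) {
                    redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
                }
                return coursePublish;
            } finally {
                lock.unlock();
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    return null;
}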

2. Different expiration times

The simplest effective fix is to give keys different expiration times. For example:

language-java
int timeout = 10 + new Random().nextInt(20);
redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish), timeout, TimeUnit.SECONDS);

3. Cache warm-up and scheduled tasks

Warm the cache by loading the data in advance, then schedule reasonable recurring jobs based on the expiration times to refresh the cache proactively.
Reference code for the warm-up follows.

language-java
@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    RedisTemplate redisTemplate;
    @Autowired
    CoursePublishMapper coursePublishMapper;

    @Override
    public void afterPropertiesSet() throws Exception {
        List<CoursePublish> coursePublishList = coursePublishMapper.selectList(new LambdaQueryWrapper<CoursePublish>());
        // Cache warm-up
        coursePublishList.forEach(coursePublish -> {
            String key = "content:course:publish:" + coursePublish.getId();
            redisTemplate.opsForValue().set(key, JSON.toJSONString(coursePublish));
        });
    }
}

As for the scheduled task itself, xxl-job can be used. For details, see the article
"Deploying the xxl-job Scheduler with Docker and Testing It with SpringBoot".

Multi-Level Cache Architecture (5): Cache Synchronization

This article completes the cache-synchronization part of the multi-level cache architecture.

I. Canal Service

1. Add a canal user to MySQL

Connect to the MySQL container running from the previous multiCache article and create the canal user.

language-sql
CREATE USER canal IDENTIFIED BY 'canal';  
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- GRANT ALL PRIVILEGES ON *.* TO 'canal'@'%' ;
FLUSH PRIVILEGES;

2. MySQL configuration file

Add the following to docker/mysql/conf/my.cnf:

language-txt
server-id=1000
log-bin=/var/lib/mysql/mysql-bin
binlog-do-db=heima
binlog_format=row

3. Canal configuration file

Add the canal service block to docker-compose.yml as follows:

language-yml
canal:
  container_name: canal
  image: canal/canal-server:v1.1.7
  volumes:
    - ./canal/logs:/home/admin/canal-server/logs
    - ./canal/conf:/home/admin/canal-server/conf
  ports:
    - "11111:11111"
  depends_on:
    - mysql
  networks:
    multi-cache:
      ipv4_address: 172.30.3.7
language-bash
docker pull canal/canal-server:v1.1.7

Start a throwaway canal-server container, copy its /home/admin/canal-server/conf folder to the host as the docker/canal/conf folder, then delete the temporary container.

Modify the following entries in docker/canal/conf/canal.properties:

language-properties
canal.destinations=example
canal.instance.tsdb.enable=true

Modify the following entries in docker/canal/conf/example/instance.properties:

language-properties
canal.instance.master.address=172.30.3.2:3306
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset = UTF-8
canal.instance.tsdb.enable=true
canal.instance.gtidon=false
canal.instance.filter.regex=heima\\..*

II. Add Dependencies

pom.xml

language-xml
<dependency>
    <groupId>top.javatool</groupId>
    <artifactId>canal-spring-boot-starter</artifactId>
    <version>1.2.1-RELEASE</version>
</dependency>

application.yml

language-yml
canal:
  destination: example
  server: 172.30.3.7:11111

III. Listen for Canal Messages

This is the official canal-spring-boot-starter repository, which includes usage documentation.

Create a new canal.ItemHandler class with the following content:

language-java
package com.heima.item.canal;

import com.github.benmanes.caffeine.cache.Cache;
import com.heima.item.config.RedisHandler;
import com.heima.item.pojo.Item;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import top.javatool.canal.client.annotation.CanalTable;
import top.javatool.canal.client.handler.EntryHandler;

@CanalTable(value = "tb_item")
@Component
public class ItemHandler implements EntryHandler<Item> {

    @Autowired
    private RedisHandler redisHandler;
    @Autowired
    private Cache<Long, Item> itemCache;

    @Override
    public void insert(Item item) {
        itemCache.put(item.getId(), item);
        redisHandler.saveItem(item);
    }

    @Override
    public void update(Item before, Item after) {
        itemCache.put(after.getId(), after);
        redisHandler.saveItem(after);
    }

    @Override
    public void delete(Item item) {
        itemCache.invalidate(item.getId());
        redisHandler.deleteItemById(item.getId());
    }
}

Modify the pojo.Item class as follows:

language-java
package com.heima.item.pojo;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;

import java.util.Date;

@Data
@TableName("tb_item")
public class Item {

    @TableId(type = IdType.AUTO)
    @Id
    private Long id;          // item id
    private String name;      // item name
    private String title;     // item title
    private Long price;       // price (in cents)
    private String image;     // item image
    private String category;  // category name
    private String brand;     // brand name
    private String spec;      // specification
    private Integer status;   // item status: 1 - normal, 2 - delisted
    private Date createTime;  // creation time
    private Date updateTime;  // update time
    @TableField(exist = false)
    @Transient
    private Integer stock;
    @TableField(exist = false)
    @Transient
    private Integer sold;
}

IV. Run

At this point, docker-compose.yml should look like this:

language-yml
version: '3.8'

networks:
  multi-cache:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.3.0/24

services:
  mysql:
    container_name: mysql
    image: mysql:8
    volumes:
      - ./mysql/conf/my.cnf:/etc/mysql/conf.d/my.cnf
      - ./mysql/data:/var/lib/mysql
      - ./mysql/logs:/logs
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=1009
    networks:
      multi-cache:
        ipv4_address: 172.30.3.2

  nginx:
    container_name: nginx
    image: nginx:stable
    volumes:
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/dist:/usr/share/nginx/dist
    ports:
      - "8080:8080"
    networks:
      multi-cache:
        ipv4_address: 172.30.3.3

  canal:
    container_name: canal
    image: canal/canal-server:v1.1.7
    volumes:
      - ./canal/logs:/home/admin/canal-server/logs
      - ./canal/conf:/home/admin/canal-server/conf
    ports:
      - "11111:11111"
    depends_on:
      - mysql
    networks:
      multi-cache:
        ipv4_address: 172.30.3.7

  openresty1:
    container_name: openresty1
    image: openresty/openresty:1.21.4.3-3-jammy-amd64
    volumes:
      - ./openresty1/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./openresty1/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./openresty1/lua:/usr/local/openresty/nginx/lua
      - ./openresty1/lualib/common.lua:/usr/local/openresty/lualib/common.lua
    networks:
      multi-cache:
        ipv4_address: 172.30.3.11

  redis:
    container_name: redis
    image: redis:7.2
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
    command: [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
    networks:
      multi-cache:
        ipv4_address: 172.30.3.21

Delete the old multiCache deployment and restart all services.

language-bash
docker-compose -p multi-cache up -d

Start the SpringBoot application.

V. Test

The SpringBoot application keeps printing logs like the following, which means it is listening for Canal messages normally.

language-txt
09:27:17:175  INFO 1 --- [l-client-thread] t.j.c.client.client.AbstractCanalClient  : 获取消息 Message[id=-1,entries=[],raw=false,rawEntries=[]]
09:27:18:177 INFO 1 --- [l-client-thread] t.j.c.client.client.AbstractCanalClient : 获取消息 Message[id=-1,entries=[],raw=false,rawEntries=[]]
09:27:19:178 INFO 1 --- [l-client-thread] t.j.c.client.client.AbstractCanalClient : 获取消息 Message[id=-1,entries=[],raw=false,rawEntries=[]]

Visit http://localhost:8081/item/10001; the response comes from Tomcat querying the database, and the result is then stored in the Caffeine cache.
Visit http://localhost:8080/item.html?id=10001; the data shown comes from the Redis cache.

Then visit http://localhost:8081/ to open the item management page.

Change the category of the item with id=10001.

After confirming, the SpringBoot log prints something like the following.

language-txt
09:31:29:234  INFO 1 --- [l-client-thread] t.j.c.client.client.AbstractCanalClient  : 获取消息 Message[id=1,entries=[header {
version: 1
logfileName: "binlog.000007"
logfileOffset: 236
serverId: 1
serverenCode: "UTF-8"
executeTime: 1705051889000
sourceType: MYSQL
schemaName: ""
tableName: ""
eventLength: 93
}
entryType: TRANSACTIONBEGIN
storeValue: " \r"
, header {
version: 1
logfileName: "binlog.000007"
logfileOffset: 411
serverId: 1
serverenCode: "UTF-8"
executeTime: 1705051889000
sourceType: MYSQL
schemaName: "heima"
tableName: "tb_item"
eventLength: 626
eventType: UPDATE
props {
key: "rowsCount"
value: "1"
}
}
entryType: ROWDATA
storeValue: "\bV\020\002P\000b\332\n\n&\b\000\020\373\377\377\377\377\377\377\377\377\001\032\002id \001(\0000\000B\00510001R\006bigint\nd\b\001\020\f\032\005title \000(\0000\000BCRIMOWA 21\345\257\270\346\211\230\350\277\220\347\256\261\346\213\211\346\235\206\347\256\261 SALSA AIR\347\263\273\345\210\227\346\236\234\347\273\277\350\211\262 820.70.36.4R\fvarchar(264)\n)\b\002\020\f\032\004name \000(\0000\000B\tSALSA AIRR\fvarchar(128)\n)\b\003\020\373\377\377\377\377\377\377\377\377\001\032\005price \000(\0000\000B\00516900R\006bigint\n\226\001\b\004\020\f\032\005image \000(\0000\000Buhttps://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webpR\fvarchar(200)\n0\b\005\020\f\032\bcategory \000(\0000\000B\f\346\213\211\346\235\206\347\256\261777R\fvarchar(200)\n\'\b\006\020\f\032\005brand \000(\0000\000B\006RIMOWAR\fvarchar(100)\nG\b\a\020\f\032\004spec \000(\0000\000B\'{\"\351\242\234\350\211\262\": \"\347\272\242\350\211\262\", \"\345\260\272\347\240\201\": \"26\345\257\270\"}R\fvarchar(200)\n\032\b\b\020\004\032\006status \000(\0000\000B\0011R\003int\n6\b\t\020]\032\vcreate_time \000(\0000\000B\0232019-05-01 00:00:00R\bdatetime\n6\b\n\020]\032\vupdate_time \000(\0000\000B\0232019-05-01 00:00:00R\bdatetime\022&\b\000\020\373\377\377\377\377\377\377\377\377\001\032\002id \001(\0000\000B\00510001R\006bigint\022d\b\001\020\f\032\005title \000(\0000\000BCRIMOWA 21\345\257\270\346\211\230\350\277\220\347\256\261\346\213\211\346\235\206\347\256\261 SALSA AIR\347\263\273\345\210\227\346\236\234\347\273\277\350\211\262 820.70.36.4R\fvarchar(264)\022)\b\002\020\f\032\004name \000(\0000\000B\tSALSA AIRR\fvarchar(128)\022)\b\003\020\373\377\377\377\377\377\377\377\377\001\032\005price \000(\0000\000B\00516900R\006bigint\022\226\001\b\004\020\f\032\005image \000(\0000\000Buhttps://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webpR\fvarchar(200)\0220\b\005\020\f\032\bcategory \000(\0010\000B\f\346\213\211\346\235\206\347\256\261888R\fvarchar(200)\022\'\b\006\020\f\032\005brand \000(\0000\000B\006RIMOWAR\fvarchar(100)\022G\b\a\020\f\032\004spec \000(\0000\000B\'{\"\351\242\234\350\211\262\": \"\347\272\242\350\211\262\", \"\345\260\272\347\240\201\": \"26\345\257\270\"}R\fvarchar(200)\022\032\b\b\020\004\032\006status \000(\0000\000B\0011R\003int\0226\b\t\020]\032\vcreate_time \000(\0000\000B\0232019-05-01 00:00:00R\bdatetime\0226\b\n\020]\032\vupdate_time \000(\0000\000B\0232019-05-01 00:00:00R\bdatetime"
],raw=false,rawEntries=[]]
09:31:30:572 INFO 1 --- [l-client-thread] t.j.c.client.client.AbstractCanalClient : 获取消息 Message[id=2,entries=[header {
version: 1
logfileName: "binlog.000007"
logfileOffset: 1037
serverId: 1
serverenCode: "UTF-8"
executeTime: 1705051889000
sourceType: MYSQL
schemaName: ""
tableName: ""
eventLength: 31
}
entryType: TRANSACTIONEND
storeValue: "\022\00287"
],raw=false,rawEntries=[]]

At this point you can check the data with a Redis client and see that Redis has already been updated.

Visit http://localhost:8081/item/10001 again, sending the request directly to the SpringBoot controller: the Caffeine data has been updated, and the SpringBoot log shows no new query, which means the response came from Caffeine.

Multi-Level Cache Architecture (3): OpenResty Lua Cache

This article completes the Lua-cache part of the multi-level cache architecture.

I. Nginx Service

Add the nginx service block to docker/docker-compose.yml.

language-yaml
nginx:
  container_name: nginx
  image: nginx:stable
  volumes:
    - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
    - ./nginx/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
    - ./nginx/dist:/usr/share/nginx/dist
  ports:
    - "8080:8080"
  networks:
    multi-cache:
      ipv4_address: 172.30.3.3

Delete the old multiCache project in Docker and stop the SpringBoot application.

Part of the nginx configuration is shown below: it listens on port 8080 and reverse-proxies requests to 172.30.3.11. In the next subsection, OpenResty is pinned to 172.30.3.11.

language-txt
upstream nginx-cluster {
    server 172.30.3.11;
}

server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;

    location /api {
        proxy_pass http://nginx-cluster;
    }
}

Restart multiCache and check the nginx front-end page.

language-bash
docker-compose -p multi-cache up -d

Visit http://localhost:8080/item.html?id=10001 to open the page for the item with id=10001.

The data here is fake; the front-end page sends a data request to /api/item/10001.

II. OpenResty Service

1. Service block definition

Add the openresty1 service block to docker/docker-compose.yml.

language-yaml
openresty1:
  container_name: openresty1
  image: openresty/openresty:1.21.4.3-3-jammy-amd64
  volumes:
    - ./openresty1/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
    - ./openresty1/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
    - ./openresty1/lua:/usr/local/openresty/nginx/lua
    - ./openresty1/lualib/common.lua:/usr/local/openresty/lualib/common.lua
  networks:
    multi-cache:
      ipv4_address: 172.30.3.11

2. Configuration changes

The front end sends an /api/item/10001 request to the back end for information about the item with id=10001.

According to the nginx configuration, this request is first intercepted by nginx and reverse-proxied to 172.30.3.11 (i.e., openresty1).

language-txt
upstream nginx-cluster {
    server 172.30.3.11;
}

server {
    location /api {
        proxy_pass http://nginx-cluster;
    }
}

openresty1 also receives /api/item/10001. OpenResty routes /api/item/(\d+) requests to a designated Lua program, and the data caching is done inside that Lua program.

Therefore, OpenResty's conf/conf.d/default.conf looks like this:

language-txt
upstream tomcat-cluster {
    hash $request_uri;
    server 172.30.3.4:8081;
    # server 172.30.3.5:8081;
}

server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    # intercept /item and join lua
    location ~ /api/item/(\d+) {
        default_type application/json;
        content_by_lua_file lua/item.lua;
    }

    # intercept lua and redirect to back-end
    location /path/ {
        rewrite ^/path/(.*)$ /$1 break;
        proxy_pass http://tomcat-cluster;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/dist;
    }
}

At the end of the http block in conf/nginx.conf, add the following lines to load the dependencies.

language-txt
# Lua module path
lua_package_path "/usr/local/openresty/lualib/?.lua;;";
# C module path
lua_package_cpath "/usr/local/openresty/lualib/?.so;;";
# shared-memory local cache
lua_shared_dict item_cache 150m;

3. Writing the Lua programs

common.lua is mounted into lualib, which means other Lua programs can require it as a library. Its content is:

language-lua
-- the shared local cache object item_cache
local item_cache = ngx.shared.item_cache;

-- Send a request such as /path/item/10001 to OpenResty itself; per the conf,
-- the /path prefix is stripped and the request is proxied to the Tomcat application
local function read_get(path, params)
    local rsp = ngx.location.capture('/path'..path, {
        method = ngx.HTTP_GET,
        args = params,
    })
    if not rsp then
        ngx.log(ngx.ERR, "http not found, path: ", path, ", args: ", params);
        ngx.exit(404)
    end
    return rsp.body
end

-- If the local cache has the data, use it; otherwise proxy to Tomcat and store the result in the cache
local function read_data(key, expire, path, params)
    -- query local cache
    local rsp = item_cache:get(key)
    -- query tomcat
    if not rsp then
        ngx.log(ngx.ERR, "redis cache miss, try tomcat, key: ", key)
        rsp = read_get(path, params)
    end
    -- write into local cache
    item_cache:set(key, rsp, expire)
    return rsp
end

local _M = {
    read_data = read_data
}

return _M

item.lua handles requests of the form /api/item/10001. Its content is:

language-lua
-- include
local commonUtils = require('common')
local cjson = require("cjson")

-- get the url parameter, e.g. 10001
local id = ngx.var[1]
-- fetch item; cache expires after 1800s, suited to data that rarely changes
local itemJson = commonUtils.read_data("item:id:"..id, 1800, "/item/"..id, nil)
-- fetch item/stock; cache expires after 4s, suited to data that changes often
local stockJson = commonUtils.read_data("item:stock:id:"..id, 4, "/item/stock/"..id, nil)
-- json2table
local item = cjson.decode(itemJson)
local stock = cjson.decode(stockJson)
-- combine item and stock
item.stock = stock.stock
item.sold = stock.sold
-- return result
ngx.say(cjson.encode(item))

4. Summary

  1. The Lua program caches both item (tb_item table) and stock (tb_stock table) data, and uses the cjson library to merge the two before returning the result to the front end.
  2. Regarding expire and freshness: if the back end changes the data while OpenResty's cached copy has not yet expired, the front end will see stale data.
  3. Roughly speaking, OpenResty = nginx + Lua: it has nginx's reverse-proxy capabilities and can also hook in Lua programs for extension.

III. Run

At this point, docker-compose.yml should look like this:

language-yml
version: '3.8'

networks:
  multi-cache:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.3.0/24

services:
  mysql:
    container_name: mysql
    image: mysql:8
    volumes:
      - ./mysql/conf/my.cnf:/etc/mysql/conf.d/my.cnf
      - ./mysql/data:/var/lib/mysql
      - ./mysql/logs:/logs
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=1009
    networks:
      multi-cache:
        ipv4_address: 172.30.3.2

  nginx:
    container_name: nginx
    image: nginx:stable
    volumes:
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/dist:/usr/share/nginx/dist
    ports:
      - "8080:8080"
    networks:
      multi-cache:
        ipv4_address: 172.30.3.3

  openresty1:
    container_name: openresty1
    image: openresty/openresty:1.21.4.3-3-jammy-amd64
    volumes:
      - ./openresty1/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./openresty1/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./openresty1/lua:/usr/local/openresty/nginx/lua
      - ./openresty1/lualib/common.lua:/usr/local/openresty/lualib/common.lua
    networks:
      multi-cache:
        ipv4_address: 172.30.3.11

Start all services:

language-bash
docker-compose -p multi-cache up -d

Start the SpringBoot application.

IV. Test

Clear the openresty container log.
Visit http://localhost:8080/item.html?id=10001.
In the openresty container log, both commonUtils.read_data calls miss the cache and therefore proxy to Tomcat, and the SpringBoot log shows the corresponding database queries.

language-txt
2024-01-12 11:45:53 2024/01/12 03:45:53 [error] 7#7: *1 [lua] common.lua:99: read_data(): redis cache miss, try tomcat, key: item:id:10001, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 11:45:53 2024/01/12 03:45:53 [error] 7#7: *1 [lua] common.lua:99: read_data(): redis cache miss, try tomcat, key: item:stock:id:10001 while sending to client, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 11:45:53 172.30.3.3 - - [12/Jan/2024:03:45:53 +0000] "GET /api/item/10001 HTTP/1.0" 200 486 "http://localhost:8080/item.html?id=10001" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"

Visit the URL again (force refresh + browser cache disabled + a different browser).
When the interval is longer than 4s but shorter than 1800s, the log looks like this, with only one miss.

language-txt
2024-01-12 11:48:04 2024/01/12 03:48:04 [error] 7#7: *4 [lua] common.lua:99: read_data(): redis cache miss, try tomcat, key: item:stock:id:10001, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 11:48:04 172.30.3.3 - - [12/Jan/2024:03:48:04 +0000] "GET /api/item/10001 HTTP/1.0" 200 486 "http://localhost:8080/item.html?id=10001" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"

Visit the URL again (force refresh + browser cache disabled + a different browser).
When the interval is shorter than 4s, the log looks like this, with no miss at all.

language-txt
2024-01-12 11:49:16 172.30.3.3 - - [12/Jan/2024:03:49:16 +0000] "GET /api/item/10001 HTTP/1.0" 200 486 "http://localhost:8080/item.html?id=10001" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"

V. High-Availability Cluster

1. openresty

For OpenResty high availability, deploy several OpenResty Docker instances and add their addresses to the upstream nginx-cluster block in nginx's docker/nginx/conf/conf.d/default.conf. For example:

language-txt
upstream nginx-cluster {
    hash $request_uri;
    # hash $request_uri consistent;
    server 172.30.3.11;
    server 172.30.3.12;
    server 172.30.3.13;
}

All OpenResty instances should share identical conf and lua files.
Use hash $request_uri as the load-balancing strategy for the reverse proxy, so the same request is not cached by multiple instances.

2. tomcat

High availability for the SpringBoot application is similar: deploy several SpringBoot Docker instances and add their addresses to the upstream tomcat-cluster block in OpenResty's docker/openresty1/conf/conf.d/default.conf. For example:

language-txt
upstream tomcat-cluster {
    hash $request_uri;
    server 172.30.3.4:8081;
    server 172.30.3.5:8081;
}

Multi-Level Cache Architecture (2): Caffeine In-Process Cache

This article completes the in-process cache part of the multi-level cache architecture.

I. Add Dependencies

Add the Caffeine dependency to item-service:

language-xml
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>

II. Implement the In-Process Cache

This is the official Caffeine documentation.

1. Configuration class

Create config.CaffeineConfig:

language-java
@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<Long, Item> itemCache() {
        return Caffeine.newBuilder()
                .initialCapacity(100)
                .maximumSize(10_000)
                .build();
    }

    @Bean
    public Cache<Long, ItemStock> stockCache() {
        return Caffeine.newBuilder()
                .initialCapacity(100)
                .maximumSize(10_000)
                .build();
    }
}

2. Modify the controller

Inject the two Cache beans into ItemController and modify the business logic:

language-java
@RestController
@RequestMapping("item")
public class ItemController {

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;
    @Autowired
    private Cache<Long, Item> itemCache;
    @Autowired
    private Cache<Long, ItemStock> stockCache;

    @GetMapping("/{id}")
    public Item findById(@PathVariable("id") Long id) {
        return itemCache.get(id, key ->
                itemService.query()
                        .ne("status", 3).eq("id", id)
                        .one()
        );
        // return itemService.query()
        //         .ne("status", 3).eq("id", id)
        //         .one();
    }

    @GetMapping("/stock/{id}")
    public ItemStock findStockById(@PathVariable("id") Long id) {
        return stockCache.get(id, key ->
                stockService.getById(id)
        );
        // return stockService.getById(id);
    }
}

III. Run

Use IDEA together with Docker to run the SpringBoot application inside a Docker container, attached to the multi-cache_multi-cache network with the fixed address 172.30.3.4.
See the following article for details.

Once it is up, you can see that the SpringBoot container and the MySQL container are on the same network. (Docker Desktop for Windows plugin: PortNavigator)

IV. Test

Visit http://localhost:8081/item/10001 and the SpringBoot log shows the following:

language-txt
02:45:58:841 DEBUG 1 --- [nio-8081-exec-1] c.h.item.mapper.ItemMapper.selectOne     : ==>  Preparing: SELECT id,name,title,price,image,category,brand,spec,status,create_time,update_time FROM tb_item WHERE (status <> ? AND id = ?)
02:45:58:889 DEBUG 1 --- [nio-8081-exec-1] c.h.item.mapper.ItemMapper.selectOne : ==> Parameters: 3(Integer), 10001(Long)
02:45:58:951 DEBUG 1 --- [nio-8081-exec-1] c.h.item.mapper.ItemMapper.selectOne : <== Total: 1

When we visit the URL a second time (force refresh + browser cache disabled + a different browser), the SpringBoot log shows no new query, which means the Caffeine cache is being used.

Multi-Level Cache Architecture on a Single Machine with Docker Compose (2024)

I. Environment Reference

Name Version
Docker Desktop for Windows 4.23.0
Openjdk 8
MySQL 8.2.0
Redis 7.2
Canal 1.1.7
OpenResty 1.21.4.3-3-jammy-amd64
Lua -
Caffeine -

II. Series Overview

Building the multi-level cache takes a while, so it is split across several articles. If everything goes smoothly, you end up with roughly the following multi-level cache architecture:

This series covers the Lua cache, the Redis cache, and the Caffeine cache in practice, plus cache synchronization, in the following articles:

  1. Multi-Level Cache Architecture (1): Project Initialization
  2. Multi-Level Cache Architecture (2): Caffeine In-Process Cache
  3. Multi-Level Cache Architecture (3): OpenResty Lua Cache
  4. Multi-Level Cache Architecture (4): Redis Cache
  5. Multi-Level Cache Architecture (5): Cache Synchronization

III. Extensions

For high availability, clustering, and other extensions (for example, the structure shown below), this series only touches on parts of them and does not provide hands-on guidance.

Multi-Level Cache Architecture (4): Redis Cache

This article completes the Redis-cache part of the multi-level cache architecture.

I. Redis Service

Add the redis service block to docker/docker-compose.yml:

language-yml
redis:
  container_name: redis
  image: redis:7.2
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
  command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
  networks:
    multi-cache:
      ipv4_address: 172.30.3.21

II. Redis Cache Warm-up

When the SpringBoot project starts, the fixed hot data is loaded into Redis ahead of time.

1. Add dependencies

Add the following dependencies to pom.xml:

language-xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.1.4.RELEASE</version> <!-- or a newer version -->
</dependency>
<!-- https://mvnrepository.com/artifact/com.alibaba.fastjson2/fastjson2 -->
<dependency>
    <groupId>com.alibaba.fastjson2</groupId>
    <artifactId>fastjson2</artifactId>
    <version>2.0.41</version>
</dependency>

Add the following configuration to application.yml:

language-yml
spring:
  redis:
    host: 172.30.3.21

2. Handler class implementation

Create a new config.RedisHandler class with the following content. It mainly overrides afterPropertiesSet to perform the cache warm-up; the saveItem and deleteItemById methods are used in a later article.

language-java
@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    private StringRedisTemplate redisTemplate;
    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @Override
    public void afterPropertiesSet() throws Exception {
        List<Item> itemList = itemService.list();
        for (Item item : itemList) {
            String json = JSON.toJSONString(item);
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        }
        List<ItemStock> stockList = stockService.list();
        for (ItemStock stock : stockList) {
            String json = JSON.toJSONString(stock);
            redisTemplate.opsForValue().set("item:stock:id:" + stock.getId(), json);
        }
    }

    public void saveItem(Item item) {
        String json = JSON.toJSONString(item);
        redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
    }

    public void deleteItemById(Long id) {
        redisTemplate.delete("item:id:" + id);
    }
}

III. Integrate the Redis Cache

Improve OpenResty's docker/openresty1/lualib/common.lua as follows:

language-lua
local redis = require('resty.redis')
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)
-- the shared local cache object item_cache
local item_cache = ngx.shared.item_cache;

-- Utility to "close" the Redis connection; it actually returns it to the connection pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- idle time of a connection, in milliseconds
    local pool_size = 100 -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to put the connection back into the redis pool: ", err)
    end
end

-- Query Redis; ip and port are the Redis address, key is the key to look up
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "failed to query redis: ", err, ", key = ", key)
    end
    -- handle empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis returned no data, key = ", key)
    end
    close_redis(red)
    return resp
end

-- Send a request such as /path/item/10001 to OpenResty itself; per the conf,
-- the /path prefix is stripped and the request is proxied to the Tomcat application
local function read_get(path, params)
    local rsp = ngx.location.capture('/path'..path, {
        method = ngx.HTTP_GET,
        args = params,
    })
    if not rsp then
        ngx.log(ngx.ERR, "http not found, path: ", path, ", args: ", params);
        ngx.exit(404)
    end
    return rsp.body
end

-- If the local cache has the data, use it; otherwise try Redis, then Tomcat, and store the result in the cache
local function read_data(key, expire, path, params)
    -- query local cache
    local rsp = item_cache:get(key)
    -- query redis
    if not rsp then
        ngx.log(ngx.ERR, "local cache miss, try redis, key: ", key)
        rsp = read_redis("172.30.3.21", 6379, key)
        if not rsp then
            ngx.log(ngx.ERR, "redis cache miss, try tomcat, key: ", key)
            rsp = read_get(path, params)
        end
    end
    -- write into local cache
    item_cache:set(key, rsp, expire)
    return rsp
end

local _M = {
    read_get = read_get,
    read_redis = read_redis,
    read_data = read_data
}

return _M

item.lua does not need any changes.

IV. Run

At this point, docker-compose.yml should look like this:

language-yml
version: '3.8'

networks:
  multi-cache:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.3.0/24

services:
  mysql:
    container_name: mysql
    image: mysql:8
    volumes:
      - ./mysql/conf/my.cnf:/etc/mysql/conf.d/my.cnf
      - ./mysql/data:/var/lib/mysql
      - ./mysql/logs:/logs
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=1009
    networks:
      multi-cache:
        ipv4_address: 172.30.3.2

  nginx:
    container_name: nginx
    image: nginx:stable
    volumes:
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/dist:/usr/share/nginx/dist
    ports:
      - "8080:8080"
    networks:
      multi-cache:
        ipv4_address: 172.30.3.3

  openresty1:
    container_name: openresty1
    image: openresty/openresty:1.21.4.3-3-jammy-amd64
    volumes:
      - ./openresty1/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./openresty1/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./openresty1/lua:/usr/local/openresty/nginx/lua
      - ./openresty1/lualib/common.lua:/usr/local/openresty/lualib/common.lua
    networks:
      multi-cache:
        ipv4_address: 172.30.3.11

  redis:
    container_name: redis
    image: redis:7.2
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
    command: [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
    networks:
      multi-cache:
        ipv4_address: 172.30.3.21

Delete the old multiCache deployment and restart all services.

language-bash
docker-compose -p multi-cache up -d

Start the SpringBoot application.

V. Test

1. Redis cache warm-up

After the SpringBoot application starts, query logs appear, and checking the Redis database shows that the data was stored automatically.

2. Redis cache hit

Clear the openresty container log and visit http://localhost:8080/item.html?id=10001. The log shows that both commonUtils.read_data calls only go as far as querying Redis and never reach Tomcat.

language-txt
2024-01-12 16:06:18 2024/01/12 08:06:18 [error] 7#7: *1 [lua] common.lua:59: read_data(): local cache miss, try redis, key: item:id:10001, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 16:06:18 2024/01/12 08:06:18 [error] 7#7: *1 [lua] common.lua:59: read_data(): local cache miss, try redis, key: item:stock:id:10001, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 16:06:18 172.30.3.3 - - [12/Jan/2024:08:06:18 +0000] "GET /api/item/10001 HTTP/1.0" 200 466 "http://localhost:8080/item.html?id=10001" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"

The SpringBoot log shows no query either, which means the Redis cache was hit.

VI. High-Availability Cluster

For a highly available Redis cluster, see the articles in the following series:
https://blog.csdn.net/m0_51390969/category_12546314.html?spm=1001.2014.3001.5482

Multi-Level Cache Architecture (1): Project Initialization

I. Clone the Project

Clone this project locally:
https://github.com/Xiamu-ssr/MultiCache
In the start directory there are the following folders:

  • docker: Docker-related files
  • item-service: the SpringBoot project

II. Database Setup

The following mysql service is already defined in docker/docker-compose.yml:

language-yaml
mysql:
  container_name: mysql
  image: mysql:8
  volumes:
    - ./mysql/conf/my.cnf:/etc/mysql/conf.d/my.cnf
    - ./mysql/data:/var/lib/mysql
    - ./mysql/logs:/logs
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=1009
  networks:
    multi-cache:
      ipv4_address: 172.30.3.2

my.cnf is as follows:

language-bash
[mysqld]
bind-address=0.0.0.0
skip-name-resolve
character_set_server=utf8
datadir=/var/lib/mysql

Run the following command to start docker-compose:

language-bash
docker-compose -p multi-cache up -d

Then use a database client to connect to the MySQL container, create the heima database, and run the docker/mysql/item.sql script against it.

III. Project Setup

Open the item-service folder in IDEA and wait for it to load the SpringBoot project.

If you change a service IP in docker-compose, note that related places must be changed in the same way, for example item-service's application.yml:

language-yaml
spring:
  application:
    name: itemservice
  datasource:
    url: jdbc:mysql://172.30.3.2:3306/heima?useSSL=false&allowPublicKeyRetrieval=true
    username: root
    password: 1009
    driver-class-name: com.mysql.cj.jdbc.Driver

Take a look at the controller:

language-java
package com.heima.item.web;

import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import com.heima.item.pojo.PageDTO;
import com.heima.item.service.IItemService;
import com.heima.item.service.IItemStockService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;
import java.util.stream.Collectors;

@RestController
@RequestMapping("item")
public class ItemController {

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @GetMapping("list")
    public PageDTO queryItemPage(
            @RequestParam(value = "page", defaultValue = "1") Integer page,
            @RequestParam(value = "size", defaultValue = "5") Integer size) {

        // paginated item query
        Page<Item> result = itemService.query()
                .ne("status", 3)
                .page(new Page<>(page, size));

        // query the stock for each item
        List<Item> list = result.getRecords().stream().peek(item -> {
            ItemStock stock = stockService.getById(item.getId());
            item.setStock(stock.getStock());
            item.setSold(stock.getSold());
        }).collect(Collectors.toList());

        // wrap and return
        return new PageDTO(result.getTotal(), list);
    }

    @PostMapping
    public void saveItem(@RequestBody Item item) {
        itemService.saveItem(item);
    }

    @PutMapping
    public void updateItem(@RequestBody Item item) {
        itemService.updateById(item);
    }

    @PutMapping("stock")
    public void updateStock(@RequestBody ItemStock itemStock) {
        stockService.updateById(itemStock);
    }

    @DeleteMapping("/{id}")
    public void deleteById(@PathVariable("id") Long id) {
        itemService.update().set("status", 3).eq("id", id).update();
    }

    @GetMapping("/{id}")
    public Item findById(@PathVariable("id") Long id) {
        return itemService.query()
                .ne("status", 3).eq("id", id)
                .one();
    }

    @GetMapping("/stock/{id}")
    public ItemStock findStockById(@PathVariable("id") Long id) {
        return stockService.getById(id);
    }
}