Implementing a Multi-Level Cache Architecture on a Single Host with Docker Compose (2024)

I. Environment

Name Version
Docker Desktop for Windows 4.23.0
Openjdk 8
MySQL 8.2.0
Redis 7.2
Canal 1.1.7
OpenResty 1.21.4.3-3-jammy-amd64
Lua -
Caffeine -

II. About This Series

Building the multi-level cache is a long process, so it is split across several articles. If everything goes well, you will end up with a multi-level cache architecture roughly like the one in the figure below.

This series focuses on hands-on work with the Lua cache, the Redis cache, and the Caffeine cache, plus cache synchronization, in the following order:

  1. Multi-Level Cache Architecture (1): Project Initialization
  2. Multi-Level Cache Architecture (2): Caffeine In-Process Cache
  3. Multi-Level Cache Architecture (3): OpenResty Lua Cache
  4. Multi-Level Cache Architecture (4): Redis Cache
  5. Multi-Level Cache Architecture (5): Cache Synchronization

III. Extensions

For extensions such as high availability and clustering (for example, the structure in the figure below), this series only touches on parts of them and does not provide hands-on guidance.

Multi-Level Cache Architecture (4): Redis Cache

This article walks through adding the Redis layer of the multi-level cache architecture.

I. Redis Service

In docker/docker-compose.yml, add a redis service block:

language-yml
redis:
  container_name: redis
  image: redis:7.2
  volumes:
    - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
  ports:
    - "6379:6379"
  command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
  networks:
    multi-cache:
      ipv4_address: 172.30.3.21
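
To make sure the new service comes up cleanly, you can start it on its own and ping it (a quick sketch; the service and container are both named redis as defined above):

language-bash
docker-compose -p multi-cache up -d redis
# expect PONG
docker exec -it redis redis-cli ping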

II. Redis Cache Warm-Up

When the Spring Boot application starts, preload the known hot data into Redis.

1. Add dependencies

Add the following dependencies to pom.xml:

language-xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.1.4.RELEASE</version> <!-- or a later version -->
</dependency>
<!-- https://mvnrepository.com/artifact/com.alibaba.fastjson2/fastjson2 -->
<dependency>
    <groupId>com.alibaba.fastjson2</groupId>
    <artifactId>fastjson2</artifactId>
    <version>2.0.41</version>
</dependency>

Add the following to application.yml:

language-yml
spring:
  redis:
    host: 172.30.3.21

2. Handler class implementation

Create a config.RedisHandler class with the content below. The key part is overriding afterPropertiesSet, which performs the cache warm-up; the saveItem and deleteItemById methods are used in a later article.

language-java
import com.alibaba.fastjson2.JSON;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import com.heima.item.service.IItemService;
import com.heima.item.service.IItemStockService;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    private StringRedisTemplate redisTemplate;
    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @Override
    public void afterPropertiesSet() throws Exception {
        // cache warm-up: load all items and stock records into Redis at startup
        List<Item> itemList = itemService.list();
        for (Item item : itemList) {
            String json = JSON.toJSONString(item);
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        }
        List<ItemStock> stockList = stockService.list();
        for (ItemStock stock : stockList) {
            String json = JSON.toJSONString(stock);
            redisTemplate.opsForValue().set("item:stock:id:" + stock.getId(), json);
        }
    }

    public void saveItem(Item item) {
        String json = JSON.toJSONString(item);
        redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
    }

    public void deleteItemById(Long id) {
        redisTemplate.delete("item:id:" + id);
    }
}
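
Once the application has started, you can spot-check the warm-up from the redis container (a quick sketch; assumes the container name redis and the sample item id 10001):

language-bash
# list the warmed-up keys
docker exec -it redis redis-cli keys 'item:*'
# inspect one cached item
docker exec -it redis redis-cli get item:id:10001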

III. Integrating the Redis Cache

Update OpenResty's docker/openresty1/lualib/common.lua as follows:

language-lua
local redis = require('resty.redis')
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)
-- local shared-dict cache object item_cache
local item_cache = ngx.shared.item_cache

-- utility to "close" the redis connection: it actually returns it to the connection pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- max idle time of a pooled connection, in milliseconds
    local pool_size = 100            -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to return connection to the redis pool: ", err)
    end
end

-- query redis; ip and port are the redis address, key is the key to read
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "failed to query redis: ", err, ", key = ", key)
    end
    -- handle an empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis returned no data, key = ", key)
    end
    close_redis(red)
    return resp
end

-- send a request such as /path/item/10001 back to openresty itself; per the nginx conf,
-- the /path prefix is stripped and the request is proxied to the tomcat application
local function read_get(path, params)
    local rsp = ngx.location.capture('/path'..path, {
        method = ngx.HTTP_GET,
        args = params,
    })
    if not rsp then
        ngx.log(ngx.ERR, "http not found, path: ", path, ", args: ", params)
        ngx.exit(404)
    end
    return rsp.body
end

-- use the local cache if present; otherwise fall back to redis, then to tomcat,
-- and finally store the result in the local cache
local function read_data(key, expire, path, params)
    -- query local cache
    local rsp = item_cache:get(key)
    -- query redis
    if not rsp then
        ngx.log(ngx.ERR, "local cache miss, try redis, key: ", key)
        rsp = read_redis("172.30.3.21", 6379, key)
        if not rsp then
            ngx.log(ngx.ERR, "redis cache miss, try tomcat, key: ", key)
            rsp = read_get(path, params)
        end
    end
    -- write into local cache
    item_cache:set(key, rsp, expire)
    return rsp
end

local _M = {
    read_get = read_get,
    read_redis = read_redis,
    read_data = read_data
}

return _M

item.lua does not need any changes.
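
While testing the later sections, it helps to follow the OpenResty error log to see which cache level answered each request (container name openresty1 from the compose file):

language-bash
docker logs -f openresty1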

IV. Running

At this point, docker-compose.yml should look like this:

language-yml
version: '3.8'

networks:
  multi-cache:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.3.0/24

services:
  mysql:
    container_name: mysql
    image: mysql:8
    volumes:
      - ./mysql/conf/my.cnf:/etc/mysql/conf.d/my.cnf
      - ./mysql/data:/var/lib/mysql
      - ./mysql/logs:/logs
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=1009
    networks:
      multi-cache:
        ipv4_address: 172.30.3.2

  nginx:
    container_name: nginx
    image: nginx:stable
    volumes:
      - ./nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./nginx/dist:/usr/share/nginx/dist
    ports:
      - "8080:8080"
    networks:
      multi-cache:
        ipv4_address: 172.30.3.3

  openresty1:
    container_name: openresty1
    image: openresty/openresty:1.21.4.3-3-jammy-amd64
    volumes:
      - ./openresty1/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./openresty1/conf/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./openresty1/lua:/usr/local/openresty/nginx/lua
      - ./openresty1/lualib/common.lua:/usr/local/openresty/lualib/common.lua
    networks:
      multi-cache:
        ipv4_address: 172.30.3.11

  redis:
    container_name: redis
    image: redis:7.2
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      multi-cache:
        ipv4_address: 172.30.3.21

Delete the old multiCache stack and start all services again:

language-bash
docker-compose -p multi-cache up -d

Then start the Spring Boot application.

V. Testing

1. Redis cache warm-up

After the Spring Boot application starts, the query log appears and the data shows up in the Redis database automatically.

2. Redis cache hit

Clear the openresty container log, then visit http://localhost:8080/item.html?id=10001 and check the log again: both commonUtils.read_data calls only go as far as querying Redis and never fall through to Tomcat.

language-txt
2024-01-12 16:06:18 2024/01/12 08:06:18 [error] 7#7: *1 [lua] common.lua:59: read_data(): local cache miss, try redis, key: item:id:10001, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 16:06:18 2024/01/12 08:06:18 [error] 7#7: *1 [lua] common.lua:59: read_data(): local cache miss, try redis, key: item:stock:id:10001, client: 172.30.3.3, server: localhost, request: "GET /api/item/10001 HTTP/1.0", host: "nginx-cluster", referrer: "http://localhost:8080/item.html?id=10001"
2024-01-12 16:06:18 172.30.3.3 - - [12/Jan/2024:08:06:18 +0000] "GET /api/item/10001 HTTP/1.0" 200 466 "http://localhost:8080/item.html?id=10001" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36 Edg/121.0.0.0"

The Spring Boot application log shows no query either, which confirms the Redis cache was hit.
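
If you want extra confirmation that the hits really land on Redis, you can watch the commands Redis receives while you refresh the page (a sketch; container name redis):

language-bash
docker exec -it redis redis-cli monitor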

VI. High-Availability Cluster

For a highly available Redis cluster, see the following series:
https://blog.csdn.net/m0_51390969/category_12546314.html?spm=1001.2014.3001.5482

Multi-Level Cache Architecture (1): Project Initialization

I. Cloning the Project

Clone this project locally:
https://github.com/Xiamu-ssr/MultiCache
In the start directory you will find the following folders:

  • docker: Docker-related files
  • item-service: the Spring Boot project

II. Database Preparation

docker/docker-compose.yml already defines the following mysql service:

language-yaml
mysql:
  container_name: mysql
  image: mysql:8
  volumes:
    - ./mysql/conf/my.cnf:/etc/mysql/conf.d/my.cnf
    - ./mysql/data:/var/lib/mysql
    - ./mysql/logs:/logs
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=1009
  networks:
    multi-cache:
      ipv4_address: 172.30.3.2

my.cnf is as follows:

language-bash
[mysqld]
bind-address=0.0.0.0
skip-name-resolve
character_set_server=utf8
datadir=/var/lib/mysql

Run the following command to bring up the compose stack:

language-bash
docker-compose -p multi-cache up -d

Then connect to the mysql container with a database client, create the heima database, and run the docker/mysql/item.sql script against it.
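
If you prefer the command line, the same step can be done straight through the container (a sketch; adjust the password, charset, and paths to your setup):

language-bash
# create the database
docker exec -i mysql mysql -uroot -p1009 -e "CREATE DATABASE IF NOT EXISTS heima DEFAULT CHARSET utf8mb4;"
# import the tables and sample data
docker exec -i mysql mysql -uroot -p1009 heima < docker/mysql/item.sql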

III. Project Setup

Open the item-service folder in IDEA and wait for it to import the Spring Boot project.

If you change any service IPs in docker-compose, remember that related places need the same change, for example item-service's application.yml:

language-yaml
spring:
  application:
    name: itemservice
  datasource:
    url: jdbc:mysql://172.30.3.2:3306/heima?useSSL=false&allowPublicKeyRetrieval=true
    username: root
    password: 1009
    driver-class-name: com.mysql.cj.jdbc.Driver

Take a look at the controller:

language-java
package com.heima.item.web;

import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import com.heima.item.pojo.PageDTO;
import com.heima.item.service.IItemService;
import com.heima.item.service.IItemStockService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;
import java.util.stream.Collectors;

@RestController
@RequestMapping("item")
public class ItemController {

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @GetMapping("list")
    public PageDTO queryItemPage(
            @RequestParam(value = "page", defaultValue = "1") Integer page,
            @RequestParam(value = "size", defaultValue = "5") Integer size) {
        // page through the items
        Page<Item> result = itemService.query()
                .ne("status", 3)
                .page(new Page<>(page, size));

        // look up stock for each item
        List<Item> list = result.getRecords().stream().peek(item -> {
            ItemStock stock = stockService.getById(item.getId());
            item.setStock(stock.getStock());
            item.setSold(stock.getSold());
        }).collect(Collectors.toList());

        // wrap and return
        return new PageDTO(result.getTotal(), list);
    }

    @PostMapping
    public void saveItem(@RequestBody Item item) {
        itemService.saveItem(item);
    }

    @PutMapping
    public void updateItem(@RequestBody Item item) {
        itemService.updateById(item);
    }

    @PutMapping("stock")
    public void updateStock(@RequestBody ItemStock itemStock) {
        stockService.updateById(itemStock);
    }

    @DeleteMapping("/{id}")
    public void deleteById(@PathVariable("id") Long id) {
        itemService.update().set("status", 3).eq("id", id).update();
    }

    @GetMapping("/{id}")
    public Item findById(@PathVariable("id") Long id) {
        return itemService.query()
                .ne("status", 3).eq("id", id)
                .one();
    }

    @GetMapping("/stock/{id}")
    public ItemStock findStockById(@PathVariable("id") Long id) {
        return stockService.getById(id);
    }
}
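
Once the application is running, you can poke these endpoints directly (a sketch; the port is whatever server.port is set to in item-service, assumed to be 8081 here purely as an example):

language-bash
# first page of items
curl "http://localhost:8081/item/list?page=1&size=5"
# one item and its stock record
curl "http://localhost:8081/item/10001"
curl "http://localhost:8081/item/stock/10001"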

Deploying a Spring Boot + Vue Front End and Back End with Docker on a Cloud Server (Ubuntu)

Environment for this article: Huawei Cloud, Ubuntu 22.04.
You should already have some familiarity with:

  • Basic Linux and Docker usage
  • Basic Vue usage
  • Basic Spring Boot usage

I. Getting Started: Environment Setup

1. Passwordless login to the remote server

On the remote server run ssh-keygen -t rsa, which produces the following three files:

language-bash
root@hecs-295176:~# ls -lh .ssh/
total 12K
-rw------- 1 root root 570 Oct 21 15:14 authorized_keys
-rw------- 1 root root 2.6K Oct 21 15:09 id_rsa
-rw-r--r-- 1 root root 570 Oct 21 15:09 id_rsa.pub

id_rsa.pub内容复制到authorized_keys,然后将id_rsa下载到本地。
在VsCode使用Remote-SSH配置远程免密登录,例如

language-bash
1
2
3
4
5
Host huaweiYun
HostName xxx.xxx.xxx.xxx
User root
Port 22
IdentityFile "C:\Users\mumu\.ssh\id_rsa"

2. Install Docker

Run the following commands to install Docker and confirm that the docker command works:

language-bash
apt update
apt upgrade
apt install docker.io
docker ps -a
docker images -a

II. Deploying the Vue Front End

1. Reference folder structure

Don't create the files yet; just create the folders according to the structure below.
Then replace the dist folder below with your built Vue dist folder (if you don't have a Vue build yet, put something simple in index.html so you can check access later).

language-txt
/root
├── conf
│   └── nginx
│       ├── default.conf
│       └── nginx.conf
└── Vue
    ├── MyTest01
    │   ├── dist
    │   │   └── index.html
    │   └── logs
    │       ├── access.log
    │       └── error.log
    └── nginxDocker.sh

2. nginx

Pull the nginx image and create an nginx container:

language-bash
docker pull nginx
docker run -itd nginx

Copy its two configuration files out to the host:

language-bash
docker cp containerName:/etc/nginx/nginx.conf ~/conf/nginx/nginx.conf
docker cp containerName:/etc/nginx/conf.d/default.conf ~/conf/nginx/default.conf

Edit default.conf and change the following:

  • listen 80; to listen 8080;, so the nginx container listens on port 8080.
  • root /usr/share/nginx/html; to root /usr/share/nginx/dist;, so the container's web root is /usr/share/nginx/dist.
  • index index.html index.htm; to index index.html;, so the container's default page is index.html.

Following the folder structure from step 1, put the following into nginxDocker.sh:

language-bash
#!/bin/bash

containerName="Test01"
nginxConf="/root/conf/nginx/nginx.conf"
defaultConf="/root/conf/nginx/default.conf"
logsPath="/root/Vue/MyTest01/logs"
vuePath="/root/Vue/MyTest01/dist"

docker run -d --name "$containerName" \
    -v "$nginxConf":/etc/nginx/nginx.conf \
    -v "$defaultConf":/etc/nginx/conf.d/default.conf \
    -v "$logsPath":/var/log/nginx \
    -v "$vuePath":/usr/share/nginx/dist \
    -p 8080:8080 \
    nginx

Run the script and list the containers to confirm it is up:

language-bash
bash Vue/nginxDocker.sh
docker ps -a

3. Open port 8080

If you connect to the server through VS Code Remote-SSH, you can use port forwarding first and access the front end locally.
If you want others to reach it over the public internet, go to your cloud provider's console, edit the server's security group, and add an inbound rule that opens port 8080.
After that, the front end served from the Docker container is reachable at IP:8080.
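
A quick check from the server itself (a sketch; assumes the container from nginxDocker.sh is running):

language-bash
curl -I http://localhost:8080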

4. Review

First, updates are easy: because the files are mounted into the container with -v, you can edit the dist folder (or any other mounted file) directly on the host without touching the container, and the front end is updated automatically; at most a container restart is needed for everything to reload.
Second, running several front ends is easy: each Docker container wraps one front end on one port, so multiple Vue apps simply use different ports.
Both of these are exactly the advantages of containerizing with Docker.

III. Deploying the Spring Boot Back End

1. Reference folder structure

Put the jar built by Maven into the location shown below:

language-bash
SpringBoot
├── javaDocker.sh
└── MyTest01
    └── demo-0.0.1-SNAPSHOT.jar

2. openjdk17

Taking Java 17 as the example, pull the corresponding Docker image (for Java 8 the image is java:8):

language-bash
docker pull openjdk:17

Write javaDocker.sh as follows:

language-bash
#!/bin/bash

containerName="JavaTest01"
SpringBootPath="/root/SpringBoot/MyTest01/demo-0.0.1-SNAPSHOT.jar"

docker run -d --name "$containerName" \
    -p 8081:8081 \
    -v "$SpringBootPath":/app/your-app.jar \
    openjdk:17 java -jar /app/your-app.jar

Run the script and list the containers to confirm it is up:

language-bash
bash SpringBoot/javaDocker.sh
docker ps -a

3. Open port 8081

Go to your cloud provider's console, edit the server's security group, and add an inbound rule that opens port 8081.
Then open a browser and request IP:8081 followed by one of the API paths defined in your application to verify that it responds.

IV. A Simple Vue -> Axios -> Spring Boot Round Trip

The Vue example below uses TypeScript with the setup/composition API. It sends a request for /Home/Kmo to port 8081 of the cloud server at IP xxx.xxx.xxx.xxx:

language-html
<template>
  <div>
    Your Remote JavaDocker State : {{ line }}
  </div>
</template>

<script setup lang="ts">
import { ref } from "vue";
import axios from "axios";

const line = ref("fail");
axios.get("http://xxx.xxx.xxx.xxx:8081/Home/Kmo").then(rp => {
  line.value = rp.data
})
</script>

<style scoped>
</style>

On the Spring Boot side, write a simple controller class:

language-java
package com.kmo.demo.controller;

import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@CrossOrigin(originPatterns = "*", allowCredentials = "true")
@RestController
@RequestMapping("Home")
public class TestController {

    @GetMapping("/Kmo")
    public String test() {
        return "Success!";
    }
}
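
Before wiring up the front end, you can hit the endpoint directly (a sketch; replace the address with your server's public IP):

language-bash
# expected output: Success!
curl http://xxx.xxx.xxx.xxx:8081/Home/Kmo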

Build both sides, put them into the corresponding folders on the cloud server, restart the two Docker containers, and open IP:8080 in your local browser to see the result.

(End)

Deploying a Redis (v7.2) Sharded Cluster (with Replication) Using Docker Compose

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2

Goal

Build the sharded + replicated cluster shown in the figure.

I. Preparation

1. Folder structure

The Redis 7.2 Docker image does not ship with a configuration file, so download Redis from the official site and copy the redis.conf out of it.
I used the redis.conf from version 7.2.3; the file sits in the top-level folder of the extracted archive.

Then build the following folder structure:

language-txt
sharding/
├── docker-compose.yaml
├── master1
│   └── conf
│       └── redis.conf
├── master2
│   └── conf
│       └── redis.conf
├── master3
│   └── conf
│       └── redis.conf
├── replica1
│   └── conf
│       └── redis.conf
├── replica2
│   └── conf
│       └── redis.conf
└── replica3
    └── conf
        └── redis.conf

II. Configuration Files

1. redis.conf

Make the following changes to every redis.conf. At this stage the redis.conf of the shard masters and of the replicas are identical.

language-bash
port 6379
# enable cluster mode
cluster-enabled yes
# cluster config file name; redis maintains it itself, you do not create it
cluster-config-file /data/nodes.conf
# node heartbeat timeout
cluster-node-timeout 5000
# directory for persistence files
dir /data
# bind address
bind 0.0.0.0
# run in the foreground (do not daemonize inside Docker)
daemonize no
# protected mode
protected-mode no
# number of databases
databases 1
# log file
logfile /data/run.log

2. The docker-compose file

language-yaml
version: '3.8'

networks:
  redis-sharding:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.2.0/24

services:
  master1:
    container_name: master1
    image: redis:7.2
    volumes:
      - ./master1/conf:/usr/local/etc/redis
    ports:
      - "7001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.11

  master2:
    container_name: master2
    image: redis:7.2
    volumes:
      - ./master2/conf:/usr/local/etc/redis
    ports:
      - "7002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.12

  master3:
    container_name: master3
    image: redis:7.2
    volumes:
      - ./master3/conf:/usr/local/etc/redis
    ports:
      - "7003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.13

  replica1:
    container_name: replica1
    image: redis:7.2
    volumes:
      - ./replica1/conf:/usr/local/etc/redis
    ports:
      - "8001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.21

  replica2:
    container_name: replica2
    image: redis:7.2
    volumes:
      - ./replica2/conf:/usr/local/etc/redis
    ports:
      - "8002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.22

  replica3:
    container_name: replica3
    image: redis:7.2
    volumes:
      - ./replica3/conf:/usr/local/etc/redis
    ports:
      - "8003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-sharding:
        ipv4_address: 172.30.2.23

Note the following:

  • The compose file defines a custom bridge subnet with a fixed range; if that range is already in use, change it.
  • /data is not mounted with -v here; if you want to mount it, watch out for permissions on the host folders.

Then run:

language-bash
docker-compose -p redis-sharding up -d

III. Building the Cluster

All of the following commands are executed inside the master1 container.

1. Automatic master/replica assignment

The command below creates a cluster with three masters and three replicas, each master getting one replica. The first three IPs become masters and the last three become replicas; replicas are assigned to masters at random.

language-bash
redis-cli --cluster create 172.30.2.11:6379 172.30.2.12:6379 172.30.2.13:6379 172.30.2.21:6379 172.30.2.22:6379 172.30.2.23:6379 --cluster-replicas 1

If you want to assign the master/replica relationships by hand, read on; otherwise you can skip the rest of this section.

2.1 Create a cluster of 3 masters

language-bash
redis-cli --cluster create 172.30.2.11:6379 172.30.2.12:6379 172.30.2.13:6379 --cluster-replicas 0

2.2 Configure the replicas manually

Look up the IDs of the three master nodes:

language-bash
redis-cli -h 172.30.2.11 -p 6379 cluster nodes

The next three commands add the three replicas to the cluster; 172.30.2.11 here can be any of the three masters.

language-bash
redis-cli -h 172.30.2.21 -p 6379 cluster meet 172.30.2.11 6379
redis-cli -h 172.30.2.22 -p 6379 cluster meet 172.30.2.11 6379
redis-cli -h 172.30.2.23 -p 6379 cluster meet 172.30.2.11 6379

Then point each replica at its master:

language-bash
redis-cli -h 172.30.2.21 -p 6379 cluster replicate <master-ID>
redis-cli -h 172.30.2.22 -p 6379 cluster replicate <master-ID>
redis-cli -h 172.30.2.23 -p 6379 cluster replicate <master-ID>

IV. Testing

1. Cluster structure

The following command shows each node's id, role, ip, port, slot range, and so on:

language-bash
redis-cli -h 172.30.2.11 -p 6379 cluster nodes

2. Sharding test

Store four keys in the cluster:

language-bash
redis-cli -c -h 172.30.2.11 -p 6379 set key1 value1
redis-cli -c -h 172.30.2.11 -p 6379 set key2 value2
redis-cli -c -h 172.30.2.11 -p 6379 set key3 value3
redis-cli -c -h 172.30.2.11 -p 6379 set key4 value4

List the keys on each master; you will see that each node holds only part of them.

language-bash
redis-cli -h 172.30.2.11 -p 6379 --scan
redis-cli -h 172.30.2.12 -p 6379 --scan
redis-cli -h 172.30.2.13 -p 6379 --scan
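
To see why a key landed on a particular node, you can ask the cluster which hash slot it maps to and compare that with the slot ranges from cluster nodes (a quick sketch):

language-bash
redis-cli -c -h 172.30.2.11 -p 6379 cluster keyslot key1
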
Developing Spring Boot Locally on Windows with IDEA Connected to Docker

When the services you depend on run in Docker containers, Docker networking and similar issues sometimes force you to run the Spring Boot project you are developing in IDEA inside a Docker container as well before you can test or run it.

1. Create a new run configuration

2. Change the run target

3. Set the new target to Docker

The plain openjdk image is recommended. The run options are the usual docker run flags: --rm removes the container when it stops, -p exposes ports (you almost always need it), and --network can be added if the container needs to join a specific network.
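
For reference, the run-options field ends up holding something like this (a sketch; the port and network name are placeholders for your own setup):

language-bash
# example contents of the IDEA "Run options" field
--rm -p 8080:8080 --network my-bridge-net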

Wait for IDEA to finish its automatic setup, then continue.

Keep the defaults and create the configuration.

4. Choose the main class

Pick the one that matches your project.

5. Run

Success.

Deploying Redis (v7.2) in Master-Replica Mode with Docker Compose

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2

I. Preparation

1. Redis configuration file

The Redis 7.2 Docker image does not ship with a configuration file, so download Redis from the official site and copy the redis.conf out of it.
I used the redis.conf from version 7.2.3; the file sits in the top-level folder of the extracted archive.

2. Pull the redis image

language-bash
docker pull redis:7.2

3. Folder structure

Create a cluster folder as shown and copy the conf file into the three locations shown in the figure.

II. docker-compose

The docker-compose file looks like this:

language-yaml
version: '3.8'

networks:
  redis-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.1.0/24

services:
  redis-master:
    container_name: redis-master
    image: redis:7.2
    volumes:
      - ./master/redis.conf:/usr/local/etc/redis/redis.conf
      # - ./master/data:/data
    ports:
      - "7001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-network:
        ipv4_address: 172.30.1.2

  redis-replica1:
    container_name: redis-replica1
    image: redis:7.2
    volumes:
      - ./replica1/redis.conf:/usr/local/etc/redis/redis.conf
      # - ./replica1/data:/data
    ports:
      - "7002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.3

  redis-replica2:
    container_name: redis-replica2
    image: redis:7.2
    volumes:
      - ./replica2/redis.conf:/usr/local/etc/redis/redis.conf
      # - ./replica2/data:/data
    ports:
      - "7003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.4

Note the following:

  1. The compose file defines a custom bridge subnet with a fixed range; if that range is already in use, change it.
  2. /data is not mounted with -v here; if you want to mount it, watch out for permissions on the host folders.

III. Master/Replica Configuration

1. Master configuration file

The master's configuration file is master/redis.conf; make the following changes (see the condensed sketch after this list):

  1. bind
    Change bind 127.0.0.1 -::1 to bind 0.0.0.0 so it listens on every network interface.

  2. protected-mode
    Set protected-mode to no to turn off protected mode and accept remote connections.

  3. masterauth
    Set masterauth to 1009; this is the password replicas use to authenticate against the master. You can pick a different one.

  4. requirepass
    Set requirepass to 1009; this is the password clients use to authenticate against this node. You can pick a different one.
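
Putting the four changes together, the relevant lines in master/redis.conf end up looking like this (a condensed sketch; everything else keeps its default):

language-bash
bind 0.0.0.0
protected-mode no
masterauth 1009
requirepass 1009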

2. Replica configuration files

Copy the master's configuration file and make the following additional change to turn it into a replica configuration:

  1. replicaof
    Add the line replicaof redis-master 6379, meaning this node is a replica whose master host is redis-master and port is 6379. You could also write the IP 172.30.1.2 here, since docker-compose assigns each node a fixed IP; note the port is 6379, not the mapped 700x. Both are Docker details, so I won't expand on them here.

Redis 5.0 introduced replica as the replacement for slave, so newer versions recommend replicaof, even though slaveof is still supported.

IV. Running

Once all three configuration files are ready, start the whole stack:

language-shell
docker-compose -p redis-cluster up -d

Looking at the master's log, you can see it syncing data to the two replicas at 172.30.1.3 and 172.30.1.4, the connections staying up, and a series of success messages.

language-bash
2024-01-05 15:12:59 1:M 05 Jan 2024 07:12:59.008 * Opening AOF incr file appendonly.aof.1.incr.aof on server start
2024-01-05 15:12:59 1:M 05 Jan 2024 07:12:59.008 * Ready to accept connections tcp
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Replica 172.30.1.4:6379 asks for synchronization
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Full resync requested by replica 172.30.1.4:6379
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Replication backlog created, my new replication IDs are '5bef8fa8e58042f1aee8eae528c6e10228a0c96b' and '0000000000000000000000000000000000000000'
2024-01-05 15:13:00 1:M 05 Jan 2024 07:13:00.996 * Delay next BGSAVE for diskless SYNC
2024-01-05 15:13:01 1:M 05 Jan 2024 07:13:01.167 * Replica 172.30.1.3:6379 asks for synchronization
2024-01-05 15:13:01 1:M 05 Jan 2024 07:13:01.167 * Full resync requested by replica 172.30.1.3:6379
2024-01-05 15:13:01 1:M 05 Jan 2024 07:13:01.167 * Delay next BGSAVE for diskless SYNC
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.033 * Starting BGSAVE for SYNC with target: replicas sockets
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.033 * Background RDB transfer started by pid 20
2024-01-05 15:13:05 20:C 05 Jan 2024 07:13:05.035 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.035 * Diskless rdb transfer, done reading from pipe, 2 replicas still up.
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Background RDB transfer terminated with success
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Streamed RDB transfer with replica 172.30.1.4:6379 succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Synchronization with replica 172.30.1.4:6379 succeeded
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Streamed RDB transfer with replica 172.30.1.3:6379 succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming
2024-01-05 15:13:05 1:M 05 Jan 2024 07:13:05.052 * Synchronization with replica 172.30.1.3:6379 succeeded

Now look at a replica's log: it shows Connecting to MASTER redis-master:6379, the connection to the master and the sync request, and a series of success messages.

language-bash
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.166 * Connecting to MASTER redis-master:6379
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.166 * MASTER <-> REPLICA sync started
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.166 * Non blocking connect for SYNC fired the event.
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.167 * Master replied to PING, replication can continue...
2024-01-05 15:13:01 1:S 05 Jan 2024 07:13:01.167 * Partial resynchronization not possible (no cached master)
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.033 * Full resync from master: 5bef8fa8e58042f1aee8eae528c6e10228a0c96b:0
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.035 * MASTER <-> REPLICA sync: receiving streamed RDB from master with EOF to disk
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.038 * MASTER <-> REPLICA sync: Flushing old data
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.038 * MASTER <-> REPLICA sync: Loading DB in memory
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * Loading RDB produced by version 7.2.3
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * RDB age 0 seconds
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * RDB memory usage when created 0.90 Mb
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.056 * Done loading RDB, keys loaded: 1, keys expired: 0.
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.057 * MASTER <-> REPLICA sync: Finished with success
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.057 * Creating AOF incr file temp-appendonly.aof.incr on background rewrite
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.057 * Background append only file rewriting started by pid 21
2024-01-05 15:13:05 21:C 05 Jan 2024 07:13:05.067 * Successfully created the temporary AOF base file temp-rewriteaof-bg-21.aof
2024-01-05 15:13:05 21:C 05 Jan 2024 07:13:05.068 * Fork CoW for AOF rewrite: current 0 MB, peak 0 MB, average 0 MB
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.084 * Background AOF rewrite terminated with success
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.084 * Successfully renamed the temporary AOF base file temp-rewriteaof-bg-21.aof into appendonly.aof.5.base.rdb
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.084 * Successfully renamed the temporary AOF incr file temp-appendonly.aof.incr into appendonly.aof.5.incr.aof
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.093 * Removing the history file appendonly.aof.4.incr.aof in the background
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.093 * Removing the history file appendonly.aof.4.base.rdb in the background
2024-01-05 15:13:05 1:S 05 Jan 2024 07:13:05.101 * Background AOF rewrite finished successfully

V. Testing

Use whichever Docker or Redis client you prefer to connect to the master node; anything that gets you into redis-cli will do. Here I connect through the container.

  1. Set a key on the master and check the replication info:
language-txt
root@ac1ecfc4e3a5:/data# redis-cli 
127.0.0.1:6379> auth 1009
OK
127.0.0.1:6379> set num 67899
OK
127.0.0.1:6379> get num
"67899"
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.30.1.4,port=6379,state=online,offset=3388,lag=1
slave1:ip=172.30.1.3,port=6379,state=online,offset=3388,lag=1
master_failover_state:no-failover
master_replid:5bef8fa8e58042f1aee8eae528c6e10228a0c96b
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:3388
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:3388
  2. Read it back from a replica:
language-txt
root@a3016db388e3:/data# redis-cli 
127.0.0.1:6379> auth 1009
OK
127.0.0.1:6379> get num
"67899"

The test succeeds.

Deploying Redis (v7.2) in Sentinel Mode with Docker Compose

Environment

  • Docker Desktop for Windows 4.23.0
  • Redis 7.2

I. Preparation

1. Master-replica cluster

You first need a Redis master-replica setup before adding sentinels. See this article for details:
"Deploying Redis (v7.2) in Master-Replica Mode with Docker Compose" (referred to below as the master-replica article)

2. Folder structure

Unlike plain master-replica mode, Redis Sentinel rewrites your conf files; both the redis server nodes and the sentinel nodes themselves may have their configs modified, so pay attention to file permissions here. Otherwise you will keep getting the warning "Sentinel was not able to save the new configuration on disk".

There are a few threads worth reading on this, or you can simply keep following this article.

In short, the folder structure from the master-replica article needs some changes and additions, as follows:

language-txt
cluster/
├── docker-compose.yaml
├── master
│   └── conf
│       └── redis.conf
├── replica1
│   └── conf
│       └── redis.conf
├── replica2
│   └── conf
│       └── redis.conf
├── sentinel1
│   └── conf
│       └── sentinel.conf
├── sentinel2
│   └── conf
│       └── sentinel.conf
└── sentinel3
    └── conf
        └── sentinel.conf

The redis.conf files and docker-compose.yaml keep the same content as in the master-replica article for now; everything else is new and can stay empty for the moment.

II. Configuration Files

1. redis server configuration

Unchanged.

2. redis sentinel configuration

Put the following into each of the three sentinel.conf files:

language-bash
sentinel monitor mymaster 172.30.1.2 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel auth-pass mymaster 1009
dir "/data"

These lines mean:

  • Monitored master: sentinel monitor names the master to watch. It takes a user-defined name (mymaster here), the master's address and port, and a "quorum", the minimum number of sentinel votes required to trigger a failover.
  • Failure detection: how long a sentinel waits before it considers the master down.
  • Failover settings: failover behaviour such as the failover timeout.
  • Authentication password (if the master has one): the password sentinel needs to connect to the master and the replicas.
  • The sentinel working directory.

3. The docker compose file

language-yaml
version: '3.8'

networks:
  redis-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.1.0/24

services:
  redis-master:
    container_name: redis-master
    image: redis:7.2
    volumes:
      - ./master/conf:/usr/local/etc/redis
      # - ./master/data:/data
    ports:
      - "7001:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    networks:
      redis-network:
        ipv4_address: 172.30.1.2

  redis-replica1:
    container_name: redis-replica1
    image: redis:7.2
    volumes:
      - ./replica1/conf:/usr/local/etc/redis
      # - ./replica1/data:/data
    ports:
      - "7002:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.3

  redis-replica2:
    container_name: redis-replica2
    image: redis:7.2
    volumes:
      - ./replica2/conf:/usr/local/etc/redis
      # - ./replica2/data:/data
    ports:
      - "7003:6379"
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.4

  redis-sentinel1:
    container_name: redis-sentinel1
    image: redis:7.2
    volumes:
      - ./sentinel1/conf:/usr/local/etc/redis
    ports:
      - "27001:26379"
    command: ["redis-sentinel", "/usr/local/etc/redis/sentinel.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.11

  redis-sentinel2:
    container_name: redis-sentinel2
    image: redis:7.2
    volumes:
      - ./sentinel2/conf:/usr/local/etc/redis
    ports:
      - "27002:26379"
    command: ["redis-sentinel", "/usr/local/etc/redis/sentinel.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.12

  redis-sentinel3:
    container_name: redis-sentinel3
    image: redis:7.2
    volumes:
      - ./sentinel3/conf:/usr/local/etc/redis
    ports:
      - "27003:26379"
    command: ["redis-sentinel", "/usr/local/etc/redis/sentinel.conf"]
    depends_on:
      - redis-master
    networks:
      redis-network:
        ipv4_address: 172.30.1.13



Note the following:

  • Unlike the master-replica article, every config here is mounted as a folder rather than a single file.
  • The compose file defines a custom bridge subnet with a fixed range; if that range is already in use, change it.
  • /data is not mounted with -v; if you want to mount it, watch out for permissions on the host folders.
  • The master's address is 172.30.1.2; if you change it, remember to change sentinel.conf as well.

III. Running

Before starting, back up all the conf files, because sentinel rewrites the mounted configs.

language-bash
docker-compose -p redis-cluster up -d

Looking at the log of one sentinel node: it listens on port 26379, monitors the master mymaster 172.30.1.2 6379, registers the two replicas 172.30.1.4 6379 and 172.30.1.3 6379, and discovers the other two sentinels at 172.30.1.13 26379 and 172.30.1.12 26379.

language-txt
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.758 * Running mode=sentinel, port=26379.
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.789 * Sentinel new configuration saved on disk
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.790 * Sentinel ID is 499007c98c0a165b13e026a4443ceb890695c191
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.790 # +monitor master mymaster 172.30.1.2 6379 quorum 2
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.791 * +slave slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:40 1:X 05 Jan 2024 10:06:40.815 * Sentinel new configuration saved on disk
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.055 * +sentinel sentinel bcfaed15fb01e7ad03b013fe5e964479c1a1f138 172.30.1.13 26379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.093 * Sentinel new configuration saved on disk
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.356 * +sentinel sentinel 92d9a1419be1256d1715df2aa17cea4bbacfdf60 172.30.1.12 26379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:42 1:X 05 Jan 2024 10:06:42.376 * Sentinel new configuration saved on disk
2024-01-05 18:06:50 1:X 05 Jan 2024 10:06:50.823 * +slave slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:06:50 1:X 05 Jan 2024 10:06:50.837 * Sentinel new configuration saved on disk

IV. Testing

Stop the redis-master container and watch the sentinel log: after detecting that the master is down, the sentinels elect 172.30.1.3 as the new master and reconfigure the other two nodes as its replicas.

language-txt
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.896 # +sdown master mymaster 172.30.1.2 6379
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.968 # +odown master mymaster 172.30.1.2 6379 #quorum 2/2
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.968 # +new-epoch 1
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.968 # +try-failover master mymaster 172.30.1.2 6379
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.987 * Sentinel new configuration saved on disk
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.987 # +vote-for-leader 499007c98c0a165b13e026a4443ceb890695c191 1
2024-01-05 18:10:08 1:X 05 Jan 2024 10:10:08.990 * 92d9a1419be1256d1715df2aa17cea4bbacfdf60 voted for 92d9a1419be1256d1715df2aa17cea4bbacfdf60 1
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.021 * bcfaed15fb01e7ad03b013fe5e964479c1a1f138 voted for 499007c98c0a165b13e026a4443ceb890695c191 1
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.054 # +elected-leader master mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.054 # +failover-state-select-slave master mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.125 # +selected-slave slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.125 * +failover-state-send-slaveof-noone slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:09 1:X 05 Jan 2024 10:10:09.209 * +failover-state-wait-promotion slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.033 * Sentinel new configuration saved on disk
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.033 # +promoted-slave slave 172.30.1.3:6379 172.30.1.3 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.033 # +failover-state-reconf-slaves master mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.094 * +slave-reconf-sent slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.262 * +slave-reconf-inprog slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.262 * +slave-reconf-done slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 # +failover-end master mymaster 172.30.1.2 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 # +switch-master mymaster 172.30.1.2 6379 172.30.1.3 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 * +slave slave 172.30.1.4:6379 172.30.1.4 6379 @ mymaster 172.30.1.3 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.338 * +slave slave 172.30.1.2:6379 172.30.1.2 6379 @ mymaster 172.30.1.3 6379
2024-01-05 18:10:10 1:X 05 Jan 2024 10:10:10.373 * Sentinel new configuration saved on disk

Now look at the log of 172.30.1.3, i.e. redis-replica1: after it fails to reconnect to the old master, it switches itself into master mode (MASTER MODE enabled).

language-txt
2024-01-05 18:10:03 1:S 05 Jan 2024 10:10:03.812 * Reconnecting to MASTER 172.30.1.2:6379
2024-01-05 18:10:03 1:S 05 Jan 2024 10:10:03.813 * MASTER <-> REPLICA sync started
2024-01-05 18:10:03 1:S 05 Jan 2024 10:10:03.813 # Error condition on socket for SYNC: Connection refused
2024-01-05 18:10:04 1:S 05 Jan 2024 10:10:04.582 * Connecting to MASTER 172.30.1.2:6379
2024-01-05 18:10:04 1:S 05 Jan 2024 10:10:04.582 * MASTER <-> REPLICA sync started
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.209 * Discarding previously cached master state.
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.209 * Setting secondary replication ID to 5032654a1279c56d758c93a4eb1c4b89c99975a9, valid up to offset: 40756. New replication ID is d3464601d550e1159d91234567a366fa1f1a0b5e
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.209 * MASTER MODE enabled (user request from 'id=8 addr=172.30.1.11:55710 laddr=172.30.1.3:6379 fd=13 name=sentinel-499007c9-cmd age=199 idle=0 flags=x db=0 sub=0 psub=0 ssub=0 multi=4 qbuf=188 qbuf-free=20286 argv-mem=4 multi-mem=169 rbs=2048 rbp=1024 obl=45 oll=0 omem=0 tot-mem=23717 events=r cmd=exec user=default redir=-1 resp=2 lib-name= lib-ver=')
2024-01-05 18:10:09 1:M 05 Jan 2024 10:10:09.229 * CONFIG REWRITE executed with success.
2024-01-05 18:10:10 1:M 05 Jan 2024 10:10:10.120 * Replica 172.30.1.4:6379 asks for synchronization
2024-01-05 18:10:10 1:M 05 Jan 2024 10:10:10.120 * Partial resynchronization request from 172.30.1.4:6379 accepted. Sending 567 bytes of backlog starting from offset 40756.

redis-replica2's log shows the same change from the other side: its sync requests now go to 172.30.1.3 instead of the old 172.30.1.2.

Now connect to the redis-replica1 container: while it used to be a read-only replica, it now accepts writes.

language-txt
root@1eefea35001f:/data# redis-cli 
127.0.0.1:6379> auth 1009
OK
127.0.0.1:6379> set num 8766
OK
127.0.0.1:6379> get num
"8766"

You will also find that the other two nodes are now read-only, and even if the old master comes back up, it does not reclaim the master role.

Test complete.

Single-Node Docker Deployment of Seata (2.0.0) + Nacos (v2.3.0) + MySQL (5.7)

Environment:
Docker Desktop for Windows v4.23.0
nacos, mysql, and seata all sit on the bridge network.

I. Deploying Nacos

language-bash
docker run -itd \
    -e MODE=standalone \
    -e NACOS_SERVER_PORT=8848 \
    -p 8848:8848 \
    --name=nacos_standalone \
    nacos/nacos-server:v2.3.0

II. Deploying MySQL

language-bash
docker run -itd \
    -e MYSQL_ROOT_PASSWORD=1009 \
    -p 3306:3306 \
    --name=mysql_itcast \
    mysql:5.7

III. Seata Preparation

1. Note down the IPs of nacos, mysql, and the host

language-bash
docker network inspect bridge

Assume here that nacos is 172.17.0.3 and mysql is 172.17.0.2.

language-bash
ipconfig /all

Assume the host IP is 192.168.1.102.

Wherever these three IPs appear below, substitute your own.
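
If you only need a single container's address, you can also query it directly (a sketch; container names as created above):

language-bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nacos_standalone
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql_itcast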

2. Create the database

Create a seata database in mysql_itcast, then run the following script in it:

language-sql
-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
`xid` VARCHAR(128) NOT NULL,
`transaction_id` BIGINT,
`status` TINYINT NOT NULL,
`application_id` VARCHAR(32),
`transaction_service_group` VARCHAR(32),
`transaction_name` VARCHAR(128),
`timeout` INT,
`begin_time` BIGINT,
`application_data` VARCHAR(2000),
`gmt_create` DATETIME,
`gmt_modified` DATETIME,
PRIMARY KEY (`xid`),
KEY `idx_status_gmt_modified` (`status` , `gmt_modified`),
KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
`branch_id` BIGINT NOT NULL,
`xid` VARCHAR(128) NOT NULL,
`transaction_id` BIGINT,
`resource_group_id` VARCHAR(32),
`resource_id` VARCHAR(256),
`branch_type` VARCHAR(8),
`status` TINYINT,
`client_id` VARCHAR(64),
`application_data` VARCHAR(2000),
`gmt_create` DATETIME(6),
`gmt_modified` DATETIME(6),
PRIMARY KEY (`branch_id`),
KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
`row_key` VARCHAR(128) NOT NULL,
`xid` VARCHAR(128),
`transaction_id` BIGINT,
`branch_id` BIGINT NOT NULL,
`resource_id` VARCHAR(256),
`table_name` VARCHAR(32),
`pk` VARCHAR(36),
`status` TINYINT NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
`gmt_create` DATETIME,
`gmt_modified` DATETIME,
PRIMARY KEY (`row_key`),
KEY `idx_status` (`status`),
KEY `idx_branch_id` (`branch_id`),
KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
`lock_key` CHAR(20) NOT NULL,
`lock_value` VARCHAR(20) NOT NULL,
`expire` BIGINT,
primary key (`lock_key`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);

3. Remote configuration in Nacos

Open the Nacos console (usually http://localhost:8848/nacos/) and create a new configuration named seataServer.properties.

Its content is:

language-properties
store.mode=db
#-----db-----
store.db.datasource=druid
store.db.dbType=mysql
# adjust driverClassName to your mysql version
# mysql 8 and above: com.mysql.cj.jdbc.Driver
# below mysql 8:     com.mysql.jdbc.Driver
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://172.17.0.2:3306/seata?useUnicode=true&characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false
store.db.user=root
store.db.password=1009
# initial number of db connections
store.db.minConn=1
# maximum number of db connections
store.db.maxConn=20
# max wait time when acquiring a connection, default 5000 ms
store.db.maxWait=5000
# global transaction table name, default global_table
store.db.globalTable=global_table
# branch transaction table name, default branch_table
store.db.branchTable=branch_table
# global lock table name, default lock_table
store.db.lockTable=lock_table
store.db.distributedLockTable=distributed_lock
# max rows returned per global transaction query, default 100
store.db.queryLimit=100


# days to keep undo logs, default 7 (covers log_status=1 and undo logs not cleaned up normally)
server.undo.logSaveDays=7
# interval of the undo-log cleanup thread, default 86400000 ms
server.undo.logDeletePeriod=86400000
# phase-2 commit retry timeout; units ms,s,m,h,d, default ms; -1 means retry indefinitely
# formula: timeout >= now - globalTransactionBeginTime; once exceeded, no more retries
# note: after the timeout no further retries happen, which risks data inconsistency;
# use with care unless your business can reconcile the data itself
server.maxCommitRetryTimeout=-1
# phase-2 rollback retry timeout
server.maxRollbackRetryTimeout=-1
# retry interval for committing global transactions stuck in phase 2, default 1000 ms
server.recovery.committingRetryPeriod=1000
# retry interval for the async-committing status, default 1000 ms
server.recovery.asynCommittingRetryPeriod=1000
# retry interval for the rollbacking status, default 1000 ms
server.recovery.rollbackingRetryPeriod=1000
# interval of the timeout-check thread, default 1000 ms; timed-out global transactions are moved to the rollback session manager
server.recovery.timeoutRetryPeriod=1000

IV. Deploying Seata

Create an application.yml file on the host with the following content:

language-yaml
server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

console:
  user:
    username: seata
    password: seata

seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      server-addr: 172.17.0.3:8848
      namespace:
      group: DEFAULT_GROUP
      username: nacos
      password: nacos
      data-id: seataServer.properties

  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    nacos:
      application: seata-tc-server
      server-addr: 172.17.0.3:8848
      group: DEFAULT_GROUP
      namespace:
      # tc cluster name
      cluster: SH
      username: nacos
      password: nacos
  # server:
  #   service-port: 8091 #If not configured, the default is '${server.port} + 1000'
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
Then start the seata container:

language-bash
docker run --name seata-server \
    -itd \
    -p 8091:8091 \
    -p 7091:7091 \
    -e STORE_MODE=db \
    -e SEATA_IP="192.168.1.102" \
    -e SEATA_PORT=8091 \
    -v "path/to/application.yml:/seata-server/resources/application.yml" \
    seataio/seata-server:2.0.0

V. A First Check of the Seata Deployment

Open the Seata console, here http://192.168.1.102:7091/, and log in by entering seata for both the username and the password.

On the Nacos console, check the seata service details: the IP should be the host IP, not a Docker-internal container IP.

VI. Using Seata from a Microservice

1. Add dependencies

language-xml
<!--seata-->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <exclusions>
        <!--the bundled version is old (1.3.0), so exclude it-->
        <exclusion>
            <artifactId>seata-spring-boot-starter</artifactId>
            <groupId>io.seata</groupId>
        </exclusion>
    </exclusions>
</dependency>
<!--use version 1.4.2 of the seata starter-->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.4.2</version>
    <!--2.0.0 seems to have an issue: BusinessActionContextParameter does not work in TCC mode-->
</dependency>

2. application.yml configuration

language-yaml
seata:
  registry:
    type: nacos
    nacos: # tc
      server-addr: localhost:8848
      namespace: ""
      group: DEFAULT_GROUP
      application: seata-tc-server # service name of the tc server in nacos
      cluster: SH
      username: nacos
      password: nacos
  tx-service-group: seata-demo # transaction group, used to look up the tc cluster name
  service:
    vgroup-mapping: # mapping from transaction group to TC cluster
      seata-demo: SH

After starting the microservice, besides its own log you can also check the Seata container log; output like the following means everything is working.

language-txt
2023-12-30 21:37:35 digest=seata-demo,192.168.222.1,1703943453643
2023-12-30 21:37:35 timestamp=1703943453643
2023-12-30 21:37:35 authVersion=V4
2023-12-30 21:37:35 vgroup=seata-demo
2023-12-30 21:37:35 ip=192.168.222.1
2023-12-30 21:37:35 '},channel:[id: 0x7f82356a, L:/172.17.0.4:8091 - R:/172.17.0.1:35092],client version:2.0.0
2023-12-30 21:37:36 21:37:36.389 INFO --- [rverHandlerThread_1_6_500] [rocessor.server.RegRmProcessor] [ onRegRmMessage] [] : RM register success,message:RegisterRMRequest{resourceIds='jdbc:mysql://localhost:3306/seata_demo', version='2.0.0', applicationId='order-service', transactionServiceGroup='seata-demo', extraData='null'},channel:[id: 0x3a9f4e29, L:/172.17.0.4:8091 - R:/172.17.0.1:35096],client version:2.0.0

VII. Pitfalls

1. Nacos shows the Seata service with a container-internal IP, so microservices cannot reach it

None of the following, suggested in various places online, worked:

  1. Using the host network
  2. Setting spring.cloud.nacos.discovery.ip in application.yml

What did work:

  1. Passing -e SEATA_IP="<host ip>" when creating the container

2. Using the host network

To save effort I initially gave Nacos the host network; the container ran fine but the web console would not open, which was baffling.
I also tried the host network for Seata so I would not have to look up the nacos and mysql IPs for the config file; that did not help either.

3. seata: "The distribute lock table is not config, please create the target table and config it"

This happens because many guides only create 3 tables and one is missing.
The official docs say store.db.distributedLockTable is a parameter added in version 1.5.1:
https://seata.io/zh-cn/docs/user/configurations
But many guides and blog posts still only have 3 tables, so where is the fourth?
Here:
https://seata.io/zh-cn/docs/ops/deploy-by-docker-compose.html
which, in the section on the nacos registry with db storage, provides the table-creation script.
Most importantly, when configuring seataServer.properties in the Nacos config center you must add the extra line

language-bash
store.db.distributedLockTable=distributed_lock

which the official docs do not mention there.

4. In newer versions, BusinessActionContextParameter and TwoPhaseBusinessAction should both be placed on the implementation class; putting them on the interface will be deprecated

See this issue for details:
"2.0.0 TCC mode: parameters annotated with @BusinessActionContextParameter have no effect and cannot be read from BusinessActionContext"

Single-Node Docker Deployment of Elasticsearch 8.11.3 + Kibana + ik Analyzer + pinyin Analyzer

This is a record of a setup that actually achieved a simple, successful Kibana login.

I. Elasticsearch

Run the Elasticsearch container:

language-shell
# the two -v options mount absolute host paths for data and plugins
docker run -d \
    --name es \
    -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" \
    -e "discovery.type=single-node" \
    -e "xpack.security.enabled=true" \
    -e "xpack.security.enrollment.enabled=true" \
    -v your_host_es_data_path:/usr/share/elasticsearch/data \
    -v your_host_es_plugins_path:/usr/share/elasticsearch/plugins \
    --privileged \
    --network es-net \
    -p 9200:9200 \
    -p 9300:9300 \
    elasticsearch:8.11.3

Reset the elastic password and note it down:

language-shell
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

Reset the kibana_system password and note it down as well:

language-shell
docker exec -it es /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

II. Kibana

Run the Kibana container, authenticating with the kibana_system account:

language-shell
# use the kibana_system password obtained above
docker run -d \
    --name kibana \
    -e ELASTICSEARCH_HOSTS=http://es:9200 \
    -e ELASTICSEARCH_USERNAME=kibana_system \
    -e ELASTICSEARCH_PASSWORD=kibana_system_password \
    --network=es-net \
    -p 5601:5601 \
    kibana:8.11.3

III. Access

Open http://localhost:5601 and log in with the elastic account and password.
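
You can also confirm that Elasticsearch itself is healthy before logging in (a sketch; substitute the elastic password you noted down):

language-bash
curl -u elastic:<elastic-password> http://localhost:9200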

IV. Other Notes

About a few errors you may hit:

  1. The kibana container is not allowed to connect to elasticsearch with the elastic user.
  2. Running docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana fails with an SSL error.
  3. Various other errors caused by using security setup steps that do not match version 8.11.3.

The official "install with Docker" guide is here (not great, frankly):
https://www.elastic.co/guide/en/kibana/current/docker.html

The official security configuration docs are here; consult them whenever you hit a problem:
https://www.elastic.co/guide/en/elasticsearch/reference/master/manually-configure-security.html

Or discuss it in the community:
https://discuss.elastic.co/latest

V. The ik Analyzer

The official repository is here:
https://github.com/medcl/elasticsearch-analysis-ik
There are two recommended ways to install it.

Option 1: online install

language-bash
# enter the container
docker exec -it es /bin/bash

# download and install the plugin online
# (bin/elasticsearch-plugin install <ik release url matching your ES version>)

# exit the container
exit
# restart the containers
docker restart es
docker restart kibana

If the ik version does not match your ES version, see below.

Option 2: offline install

  1. On the releases page, find the ik version closest to your ES version (here: ik 8.11.1 with ES 8.11.3):
    https://github.com/medcl/elasticsearch-analysis-ik/releases

  2. In your your_host_es_plugins_path directory, create a new folder named ik.

  3. Extract the downloaded elasticsearch-analysis-ik-8.11.1.zip into that ik folder.

  4. Edit the plugin-descriptor.properties file as shown below.

Skip this step if you don't need it; it can cause unpredictable bugs.

language-bash
# 'version': plugin's version
version=8.11.3
# 'elasticsearch.version' version of elasticsearch compiled against
# You will have to release a new version of the plugin for each new
# elasticsearch release. This version is checked when the plugin
# is loaded so Elasticsearch will refuse to start in the presence of
# plugins with the incorrect elasticsearch.version.
elasticsearch.version=8.11.3
  5. Restart the containers:

language-bash
docker restart es
docker restart kibana

Once installed, log in to Kibana and open Dev Tools - Console:

language-bash
# test the analyzer
GET /_analyze
{
  "text":"我爱吃冰淇淋,也喜欢小淇,i want to eat her",
  "analyzer":"ik_smart"
}

# test the analyzer
GET /_analyze
{
  "text":"我爱吃冰淇淋,也喜欢小淇,i want to eat her",
  "analyzer":"ik_max_word"
}

With this sentence the difference between ik_smart and ik_max_word is not obvious; try "程序员" instead.

VI. Extension and Stopword Dictionaries for the ik Analyzer

1. Configuration

Out of the box, the ik analyzer cannot correctly recognize the latest internet slang, nor block sensitive words.
Both can be handled with manual configuration.

Edit the IKAnalyzer.cfg.xml file as follows:

language-xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- users can configure their own extension dictionary here -->
    <entry key="ext_dict">ext.dic</entry>
    <!-- users can configure their own extension stopword dictionary here -->
    <entry key="ext_stopwords">stopword.dic</entry>
    <!-- users can configure a remote extension dictionary here -->
    <!-- <entry key="remote_ext_dict">words_location</entry> -->
    <!-- users can configure a remote extension stopword dictionary here -->
    <!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>

This means: ext.dic in the same directory is used as the extension dictionary, and stopword.dic in the same directory as the stopword dictionary. If these two files exist they are used; otherwise create them.

Finally, remember to restart the es container.
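
As an illustration only, hypothetical dictionary contents matching the test below could be written like this, one word per line (whether a word needs to be added depends on ik's built-in dictionary):

language-bash
cat > ext.dic <<'EOF'
墨扛教育
奥利给
白嫖
EOF

cat > stopword.dic <<'EOF'
嘤
哦
啊
EOF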

2. Testing

language-bash
# test the analyzer
GET /_analyze
{
  "text":"程序员墨扛教育的课程可以白嫖啊,而且就业率高达95%哦,奥利给!嘤",
  "analyzer":"ik_smart"
}
language-json
{
  "tokens": [
    { "token": "程序员",   "start_offset": 0,  "end_offset": 3,  "type": "CN_WORD", "position": 0 },
    { "token": "墨扛教育", "start_offset": 3,  "end_offset": 7,  "type": "CN_WORD", "position": 1 },
    { "token": "课程",     "start_offset": 8,  "end_offset": 10, "type": "CN_WORD", "position": 2 },
    { "token": "可以",     "start_offset": 10, "end_offset": 12, "type": "CN_WORD", "position": 3 },
    { "token": "白嫖",     "start_offset": 12, "end_offset": 14, "type": "CN_WORD", "position": 4 },
    { "token": "而且",     "start_offset": 16, "end_offset": 18, "type": "CN_WORD", "position": 5 },
    { "token": "就业率",   "start_offset": 18, "end_offset": 21, "type": "CN_WORD", "position": 6 },
    { "token": "高达",     "start_offset": 21, "end_offset": 23, "type": "CN_WORD", "position": 7 },
    { "token": "95",       "start_offset": 23, "end_offset": 25, "type": "ARABIC",  "position": 8 },
    { "token": "奥利给",   "start_offset": 28, "end_offset": 31, "type": "CN_WORD", "position": 9 }
  ]
}

VII. The pinyin Analyzer

Offline install:

  1. On the releases page, find the version closest to your ES version (here: pinyin 8.11.1 with ES 8.11.3):
    https://github.com/medcl/elasticsearch-analysis-pinyin

  2. In your your_host_es_plugins_path directory, create a new folder named py.

  3. Extract the downloaded elasticsearch-analysis-pinyin-8.11.1.zip into that py folder.

  4. Edit the plugin-descriptor.properties file as shown below.

Skip this step if you don't need it; it can cause unpredictable bugs.

language-bash
# 'version': plugin's version
version=8.11.3
# 'elasticsearch.version' version of elasticsearch compiled against
# You will have to release a new version of the plugin for each new
# elasticsearch release. This version is checked when the plugin
# is loaded so Elasticsearch will refuse to start in the presence of
# plugins with the incorrect elasticsearch.version.
elasticsearch.version=8.11.3

  5. Restart the containers:

language-bash
docker restart es
docker restart kibana

Once installed, log in to Kibana and open Dev Tools - Console:

language-bash
# test the analyzer
POST /_analyze
{
  "text":"如家酒店还不错",
  "analyzer":"pinyin"
}

Caveats

By default the pinyin analyzer has several drawbacks: it converts every single character into pinyin, which is rarely what you want, and once you use it the default Chinese terms disappear from the index, leaving only pinyin terms. So the following needs to be addressed:

  1. Index both the hanzi and the pinyin for each token
  2. Tokenize by word, not by character
  3. When searching with hanzi, do not match documents that merely share the same pronunciation

To achieve this, define a custom analyzer when creating the index, as follows:

language-json
PUT /test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "ik_max_word",
          "filter": "py"
        }
      },
      "filter": {
        "py": {
          "type": "pinyin",
          "keep_full_pinyin": false,
          "keep_joined_full_pinyin": true,
          "keep_original": true,
          "limit_first_letter_length": 16,
          "remove_duplicated_term": true,
          "none_chinese_pinyin_tokenize": false
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "my_analyzer",
        "search_analyzer": "ik_smart"
      }
    }
  }
}

Of the analysis chain we customized the tokenizer and the filter: the former tokenizes with ik, the latter converts to pinyin, and the pinyin filter options keep both hanzi and pinyin at index time (see the pinyin plugin's GitHub page for the full option list). The mapping also sets my_analyzer as the index-time analyzer and ik_smart as the search-time analyzer.

Store two documents:

language-json
POST /test/_doc/1
{
  "id": 1,
  "name": "狮子"
}

POST /test/_doc/2
{
  "id": 2,
  "name": "虱子"
}

At this point the index stores both the hanzi terms and their pinyin forms for each document.
Because searching uses the ik_smart analyzer rather than the custom analyzer, the homophone problem is already solved here.
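
As a quick check of the search behaviour, you can query the name field (a sketch using curl against the index defined above; add -u elastic:<password> if security is enabled):

language-bash
# searching by hanzi should match only document 1 (狮子), not its homophone 虱子
curl -X GET "localhost:9200/test/_search" -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "name": "狮子" } }
}'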