81 Commits

Author SHA1 Message Date
许晓东
60fb02d6d4 Specify UTF-8 encoding. 2023-05-19 15:35:44 +08:00
许晓东
6f093fbb27 Fixed access-permission issue after packaging; upgrade to 1.0.8. 2023-05-19 15:20:15 +08:00
许晓东
e186fff939 contact update. 2023-05-18 23:00:32 +08:00
Xiaodong Xu
c4676fb51a Merge pull request #24 from xxd763795151/user-auth-dev
User auth dev
2023-05-18 22:57:26 +08:00
许晓东
571efe6ddc API permission filtering. 2023-05-18 22:56:00 +08:00
许晓东
7e98a58f60 eslint fixes, page permission configuration, and default permission data loading. 2023-05-17 22:29:04 +08:00
许晓东
b08be2aa65 Permission configuration, default users, role configuration. 2023-05-16 21:23:45 +08:00
许晓东
238507de19 Permission configuration. 2023-05-15 21:58:55 +08:00
许晓东
435a5ca2bc Support login 2023-05-14 21:25:13 +08:00
许晓东
be8e567684 Update password via personal settings. 2023-05-09 20:44:54 +08:00
许晓东
da1ddeb1e7 User deletion, password reset, role assignment. 2023-05-09 07:16:55 +08:00
许晓东
38ca2cfc52 User creation and deletion. 2023-05-07 22:55:01 +08:00
许晓东
cc8f671fdb Permission assignment on the role list page. 2023-05-04 16:24:34 +08:00
许晓东
e3b0dd5d2a First cut of permission assignment on the role list page. 2023-04-24 22:08:05 +08:00
许晓东
a37664f6d5 First cut of the role list page. 2023-04-19 22:10:08 +08:00
许晓东
af52e6bc61 Display user permissions. 2023-04-17 22:24:51 +08:00
许晓东
fce1154c36 Add user management: creation APIs for users, roles, and permissions 2023-04-11 22:01:16 +08:00
许晓东
0bf5c6f46c Update download links. 2023-04-11 13:30:53 +08:00
许晓东
fe759aaf74 Increase cluster field length to 1024; fix vulnerability (CVE-2023-20860) by upgrading Spring to 5.3.26 2023-04-11 12:25:24 +08:00
许晓东
1e6a7bb269 Polish docs. 2023-02-10 20:41:15 +08:00
许晓东
fb440ae153 Polish docs. 2023-02-10 20:28:23 +08:00
许晓东
07c6714fd9 Replace Taobao npm mirror; limit display width of overly long client IDs. 2023-02-09 22:23:34 +08:00
许晓东
5e9304efc2 Add throttling docs; fixed the ACL authorization-enabled hint. 2023-02-07 22:24:37 +08:00
许晓东
fc7f05cf4e Throttling: support user and client ID together. 2023-02-06 22:11:56 +08:00
许晓东
5a87e9cad8 Throttling: support user. 2023-02-05 23:08:14 +08:00
许晓东
608f7cdc47 Throttling: support editing and deleting by client ID. 2023-02-05 22:01:37 +08:00
许晓东
ee6defe5d2 Throttling: support querying by client ID. 2023-02-04 21:28:51 +08:00
许晓东
56621e0b8c Add the client throttling menu page. 2023-01-30 21:40:11 +08:00
许晓东
832b20a83e Client throttling query API. 2023-01-09 22:11:50 +08:00
许晓东
4dbadee0d4 Client throttling service. 2023-01-03 22:01:43 +08:00
许晓东
7d76632f08 Client throttling console 2023-01-03 21:28:11 +08:00
许晓东
d5102f626c Client throttling console. 2023-01-02 21:28:47 +08:00
许晓东
daf77290da Client throttling console. 2023-01-02 21:16:22 +08:00
许晓东
4df20f9ca5 contact jpg 2022-12-09 09:21:10 +08:00
许晓东
b465ba78b8 update contact 2022-11-01 19:05:40 +08:00
许晓东
9a69bad93a wechat contact. 2022-10-16 13:32:41 +08:00
许晓东
3785e9aaca wechat contact. 2022-10-16 13:31:35 +08:00
许晓东
d502da1b39 update weixin contact 2022-09-20 20:07:06 +08:00
许晓东
d6282cb902 Display when ACL is not enabled on the separated authentication/authorization pages. 2022-08-28 22:48:21 +08:00
许晓东
ca4dc2ebc9 Polish README. 2022-08-28 22:04:31 +08:00
许晓东
50775994b5 Polish README. 2022-08-28 22:02:21 +08:00
许晓东
3bd14a35d6 Separate ACL authentication and authorization management. 2022-08-28 21:39:08 +08:00
许晓东
4c3fe5230c update contact jpg. 2022-08-04 09:12:39 +08:00
许晓东
57d549635b Restore the interceptor path. 2022-07-27 18:19:50 +08:00
许晓东
923b89b6bd Update download links. 2022-07-24 17:33:22 +08:00
许晓东
e9f34e1d19 Support batch topic deletion. 2022-07-24 11:52:48 +08:00
许晓东
ccdcebb24d Update icon. 2022-07-24 11:09:25 +08:00
许晓东
7ddd75e34f Update icon. 2022-07-24 11:09:21 +08:00
许晓东
aebea435fa Update overview. 2022-07-14 21:59:25 +08:00
许晓东
ea788313c6 Update overview. 2022-07-14 21:59:18 +08:00
许晓东
727edfcca8 Support caching producer connections; connection caching off by default 2022-07-09 18:54:20 +08:00
Xiaodong Xu
cc1989a74b Merge pull request #17 from comdotwww/main
Fix CMD path escaping on Windows
2022-07-07 22:46:39 +08:00
comdotwww
0196a90b69 Update start.bat 2022-07-07 22:05:46 +08:00
许晓东
9c3e3988e0 Handle consumer connection properties; update contact info 2022-07-07 20:09:27 +08:00
许晓东
458e13c9e0 Cache connections 2022-07-05 10:19:51 +08:00
许晓东
979859b232 Support deleting messages online 2022-07-04 17:16:00 +08:00
许晓东
b163e5f776 Upgrade Kafka from 2.8.0 to 3.2.0; add Docker Compose deployment docs 2022-06-30 20:11:29 +08:00
Xiaodong Xu
d062e18940 Merge pull request #16 from wdkang123/main
new(md): Docker/Docker Compose deployment
2022-06-30 19:42:14 +08:00
武子康
87c1e7ba4a new(md): Docker/Docker Compose deployment 2022-06-30 19:12:42 +08:00
许晓东
5194c952f2 polish README. 2022-06-29 19:17:21 +08:00
许晓东
c1cc44d32f Fix NPE when the cluster has no active nodes; update README. 2022-06-29 17:22:29 +08:00
许晓东
82fafe980d Fix NPE when the cluster has no active nodes; update README. 2022-06-29 17:20:57 +08:00
许晓东
34752deca2 update wechat contact. 2022-06-17 10:10:00 +08:00
yinuo
9e42e2c72a Update contact info 2022-05-06 11:02:28 +08:00
dongyinuo
e531f5d786 Delete weixin_contact.jpeg 2022-05-06 11:00:56 +08:00
dongyinuo
10e75ac55d Update README.md
Update contact info
2022-05-06 10:58:55 +08:00
yinuo
4a8d09dc89 Update contact info 2022-05-06 10:56:22 +08:00
dongyinuo
116bc100a7 Add files via upload
Replace WeChat group image
2022-05-06 10:48:00 +08:00
Xiaodong Xu
b1feaad9f7 Merge pull request #14 from dongyinuo/feature/dongyinuo/add/contact
Feature/dongyinuo/add/contact
2022-04-29 17:27:19 +08:00
yinuo
4d372f8374 Add contact info 2022-04-29 17:22:44 +08:00
yinuo
4b2c544c0d Add contact info 2022-04-29 17:21:32 +08:00
许晓东
8131cb1a42 Publish the v1.0.4 package download link 2022-02-16 20:01:06 +08:00
许晓东
1dd6466261 Replica reassignment 2022-02-16 19:50:35 +08:00
许晓东
dda08a2152 Replica reassignment: generate the assignment plan 2022-02-15 20:13:07 +08:00
许晓东
01c7121ee4 Keep the cluster node list ordered 2022-01-22 23:33:13 +08:00
许晓东
d939d7653c Show broker API version compatibility info on the home page 2022-01-22 23:07:41 +08:00
许晓东
058cd5a24e Query current reassignment; handle unsupported-version exceptions 2022-01-20 13:44:37 +08:00
许晓东
db3f55ac4a polish README 2022-01-19 19:05:00 +08:00
许晓东
a311a34537 Fix stack overflow bug in partition comparison 2022-01-18 20:42:11 +08:00
许晓东
e8fe2ea1c7 Support Chinese cluster names; message query can choose the time display order 2022-01-13 14:19:17 +08:00
许晓东
10302dd39c v1.0.3 package download link 2022-01-09 23:57:00 +08:00
155 changed files with 9010 additions and 878 deletions

View File

@@ -1,13 +1,16 @@
# Kafka visual management platform
A lightweight Kafka visual management platform: quick to install and configure, simple to use.
To keep development simple there is no i18n support; only Chinese display is supported.
If you have used rocketmq-console, the frontend style is similar to it.
To keep development simple there is no i18n support; pages are displayed in Chinese only.
If you have used rocketmq-console (rocketmq-dashboard): yes, the frontend style is similar to that.
## Page preview
If GitHub can display the images, click [view the menu pages](./document/overview/概览.md) to see what each page looks like
## Cluster migration support
The current main branch and future versions no longer ship a message-sync / cluster-migration solution; if you need one, see: [cluster migration notes](./document/datasync/集群迁移.md)
## ACL notes
Run the latest code and you will see the ACL menu. Permission management and authentication user management (SASL_SCRAM) are now separated. After the separation you can manage users when only SASL_SCRAM authentication is enabled without authorization, and you can use the visual permission management under other authentication mechanisms; visual management of authentication users currently supports SCRAM only.
Before v1.0.6: if the Kafka cluster has ACL enabled but the console shows no ACL menu, see the [ACL setup notes](./document/acl/Acl.md)
## Features
* Multi-cluster support
* Cluster info
@@ -15,13 +18,18 @@
* Consumer group management
* Message management
* ACL
* Client throttling
* Operations
See this mind map for the feature details:
![Features](./document/img/功能特性.png)
## Package download
Click to download (v1.0.2): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip)
Click to download (v1.0.8): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.8/kafka-console-ui.zip)
If the package downloads slowly, see the source packaging notes below: clone the code and build it locally.
If GitHub is slow, you can also try Gitee: click to download [kafka-console-ui.zip (Gitee mirror)](https://gitee.com/xiaodong_xu/kafka-console-ui/releases/download/v1.0.8/kafka-console-ui.zip)
## Quick start
### Windows
@@ -60,7 +68,7 @@ sh bin/shutdown.sh
When adding a cluster, besides the cluster address you can also supply other cluster properties, such as request timeouts and ACL settings. If ACL is enabled, the ACL menu appears in the navigation bar after switching to that cluster and the related operations become available. SASL_SCRAM-based authentication/authorization management is the most complete; I have not verified the other mechanisms (I wrote this part but have not fully verified all of it), though the authorization part should be generic
## Kafka version
* Currently built against Kafka 2.8.0
* Currently built against Kafka 3.2.0
## Monitoring
Only operations features are provided; for monitoring and alerting, pair it with other components. If needed, see https://blog.csdn.net/x763795151/article/details/119705372
@@ -69,3 +77,35 @@ sh bin/shutdown.sh
## Local development
For local development environment setup, see: [local development](./document/develop/开发配置.md)
## Login authentication and permissions
Up to and including v1.0.7, the main branch did not support login authentication. Thanks to @dongyinuo for developing a version with login authentication and the related button-level permissions (two main roles: administrator and regular developer).
It lives on the branch feature/dongyinuo/20220501/devops.
If you need console login authentication, switch to that branch and build it; see the source packaging notes.
Default login account: admin/kafka-console-ui521
The latest main branch now includes login authentication plus user/role permission configuration. To enable login authentication, edit config/application.yml and set auth.enable=true (default false), e.g.:
```yaml
# Authentication settings; when true, you must log in before accessing the console
auth:
  enable: true
  # Expiration time of the login user's token, in hours
  expire-hours: 24
```
There are two default login users: super-admin/123465 and admin/123456; after logging in, change the password in personal settings. The only difference between super-admin and admin is that super-admin can add and delete users while admin cannot. If this does not suit you, delete the relevant users or roles under the user menu and create your own roles or users.
Note: when login authentication is disabled, the user menu is not shown; this is expected.
## Docker Compose deployment
Thanks to @wdkang123 for sharing this deployment method; if needed, see [Docker Compose deployment](./document/deploy/docker部署.md)
## Contact
+ WeChat group
<img src="./document/contact/weixin_contact.jpg" width="40%"/>
[//]: # (<img src="https://github.com/xxd763795151/kafka-console-ui/blob/main/document/contact/weixin_contact.jpg" width="40%"/>)
+ If the contact above no longer works, add one of the WeChat IDs below and state your intent
  - xxd763795151
  - wxid_7jy2ezljvebt12

View File

@@ -1,7 +1,7 @@
#!/bin/bash
SCRIPT_DIR=`dirname $0`
PROJECT_DIR="$SCRIPT_DIR/.."
SCRIPT_DIR=$(dirname "`pwd`/$0")
PROJECT_DIR=`dirname "$SCRIPT_DIR"`
# Do not modify the process flag; it is used as a process attribute for shutdown
PROCESS_FLAG="kafka-console-ui-process-flag:${PROJECT_DIR}"
pkill -f $PROCESS_FLAG

View File

@@ -1,8 +1,8 @@
rem MAIN_CLASS=org.springframework.boot.loader.JarLauncher
rem JAVA_HOME=jre1.8.0_66
set JAVA_CMD=%JAVA_HOME%\bin\java
set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k
set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k -Dfile.encoding=utf-8
set CONFIG_FILE=../config/application.yml
set TARGET=../lib/kafka-console-ui.jar
set DATA_DIR=..
%JAVA_CMD% -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%
"%JAVA_CMD%" -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%

View File

@@ -3,8 +3,8 @@
# Set the JVM heap and stack sizes; the stack must be at least 256K, never lower (e.g. 128 is too small)
JAVA_MEM_OPTS="-Xmx512m -Xms512m -Xmn256m -Xss256k"
SCRIPT_DIR=`dirname $0`
PROJECT_DIR="$SCRIPT_DIR/.."
SCRIPT_DIR=$(dirname "`pwd`/$0")
PROJECT_DIR=`dirname "$SCRIPT_DIR"`
CONF_FILE="$PROJECT_DIR/config/application.yml"
TARGET="$PROJECT_DIR/lib/kafka-console-ui.jar"
@@ -17,7 +17,7 @@ ERROR_OUT="$PROJECT_DIR/error.out"
# Do not modify the process flag; it is used as a process attribute for shutdown. If you do change it, keep the value in stop.sh consistent
PROCESS_FLAG="kafka-console-ui-process-flag:${PROJECT_DIR}"
JAVA_OPTS="$JAVA_OPTS $JAVA_MEM_OPTS"
JAVA_OPTS="$JAVA_OPTS $JAVA_MEM_OPTS -Dfile.encoding=utf-8"
nohup java -jar $JAVA_OPTS $TARGET --spring.config.location="$CONF_FILE" --logging.home="$PROJECT_DIR" --data.dir=$DATA_DIR $PROCESS_FLAG 1>/dev/null 2>$ERROR_OUT &

document/acl/Acl.md (new file, 36 lines)
View File

@@ -0,0 +1,36 @@
# ACL configuration notes
## Preface
Some readers may have come here from this article: [how to manage Kafka ACL configuration visually](https://blog.csdn.net/x763795151/article/details/120200119)
That article may say that ACL is enabled by editing the application.yml configuration file, for example:
```yaml
kafka:
  config:
    # kafka broker addresses, comma-separated
    bootstrap-server: 'localhost:9092'
    # whether the broker has ACL enabled; if not, ignore all of the settings below
    enable-acl: true
    # only 2 security protocols are supported: SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise ignore this
    security-protocol: SASL_PLAINTEXT
    sasl-mechanism: SCRAM-SHA-256
    # super admin username, already configured as a super user on the broker
    admin-username: admin
    # super admin password
    admin-password: admin
    # automatically create the configured super admin user on startup
    admin-create: true
    # zookeeper address the broker connects to
    zookeeper-addr: localhost:2181
    sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
```
It said the kafka.config.enable-acl item must be true.
Note: **this approach is no longer supported**
## Notes for versions before v1.0.6
Because multi-cluster configuration is now supported (for details, see the cluster configuration section of the home page README),
those extra configuration items have been removed.
If ACL is enabled, configure the cluster's ACL-related info in the properties when adding the cluster on the page, like this: ![add cluster](./新增集群.png)
If the console detects ACL-related properties on a cluster, the ACL menu appears automatically after switching to it.
Note: only SASL is supported.

View File

Binary image file added (245 KiB).

View File

Binary image file added (178 KiB).

View File

@@ -0,0 +1,189 @@
# Docker / Docker Compose deployment
# 1. Quick start
## 1.1 Pull the image
```shell
docker pull wdkang/kafka-console-ui
```
## 1.2 List images
```shell
docker images
```
## 1.3 Start the service
Since data inside the container is not persisted, mapping the data directories to the host is recommended;
see **2. Data persistence** for details
```shell
docker run -d -p 7766:7766 wdkang/kafka-console-ui
```
## 1.4 Check the status
```shell
docker ps -a
```
## 1.5 View the logs
```shell
docker logs -f ${containerId}
```
## 1.6 Access the service
```shell
http://localhost:7766
```
# 2. Data persistence
Persisting the data is recommended
## 2.1 Create the directories
```shell
mkdir -p /home/kafka-console-ui/data /home/kafka-console-ui/log
cd /home/kafka-console-ui
```
## 2.2 Start the service
```shell
docker run -d -p 7766:7766 -v $PWD/data:/app/data -v $PWD/log:/app/log wdkang/kafka-console-ui
```
# 3. Build your own image
## 3.1 Build the image
**Prerequisites**
(adjust the Dockerfile to your own needs)
Download the [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases) package
After unpacking, put the Dockerfile in the root of the folder
**Dockerfile**
```dockerfile
# jdk
FROM openjdk:8-jdk-alpine
# label
LABEL by="https://github.com/xxd763795151/kafka-console-ui"
# root
RUN mkdir -p /app && cd /app
WORKDIR /app
# config log data
RUN mkdir -p /app/config && mkdir -p /app/log && mkdir -p /app/data && mkdir -p /app/lib
# add file
ADD ./lib/kafka-console-ui.jar /app/lib
ADD ./config /app/config
# port
EXPOSE 7766
# start server
CMD java -jar -Xmx512m -Xms512m -Xmn256m -Xss256k /app/lib/kafka-console-ui.jar --spring.config.location="/app/config/" --logging.home="/app/log" --data.dir="/app/data"
```
**Build**
In the root of the folder
(note the trailing dot)
```shell
docker build -t ${your_docker_hub_addr} .
```
## 3.2 Push the image
```shell
docker push ${your_docker_hub_addr}
```
# 4. Container orchestration
```yaml
# docker-compose definition
version: '3'
services:
  # service name
  kafka-console-ui:
    # container name
    container_name: "kafka-console-ui"
    # ports
    ports:
      - "7766:7766"
    # persistence
    volumes:
      - ./data:/app/data
      - ./log:/app/log
    # avoid file read/write permission problems
    privileged: true
    user: root
    # image
    image: "wdkang/kafka-console-ui"
```
## 4.1 Pull the image
```shell
docker-compose pull kafka-console-ui
```
## 4.2 Build and start
```shell
docker-compose up --detach --build kafka-console-ui
```
## 4.3 Check the status
```shell
docker-compose ps -a
```
## 4.4 Stop the service
```shell
docker-compose down
```

View File

@@ -12,8 +12,11 @@
* scala 2.13
* maven >=3.6+
* webstorm
* Node
Apart from webstorm, which is a frontend IDE you can replace as you like, the jdk and scala are required.
My local Node version during development is v14.16.0; download: https://nodejs.org/download/release/v14.16.0/ . I have not tested whether much higher or lower versions work.
The scala 2.13 download link is at the bottom of this page: https://www.scala-lang.org/download/scala2.html
## Clone the code
@@ -21,7 +24,8 @@ The scala 2.13 download link is at the bottom of this page: https://www.scala-lang.org/d
## Backend setup
1. Open the project in IDEA
2. Open IDEA's Project Structure (Settings) -> Modules -> mark src/main/scala as Sources (by convention src/main/java is the source directory, so this extra source directory must be added)
3. Open IDEA's Project Structure (Settings) -> Libraries, add a Scala SDK and point it at the locally downloaded scala 2.13 directory (with IDEA you can also just tick it directly instead of downloading first)
3. Open IDEA's Settings -> Plugins, search for the Scala plugin and install it (IDEA likely needs a restart for it to take effect; this step must come before step 4)
4. Open IDEA's Project Structure (Settings) -> Libraries, add a Scala SDK and point it at the locally downloaded scala 2.13 directory (with IDEA you can also just tick it directly instead of downloading first)
## Frontend
The frontend code is under the ui directory; open it with a frontend IDE such as WebStorm and develop there.

View File

Binary image file added (99 KiB).

View File

Binary image file changed (before: 99 KiB, after: 439 KiB).

View File

@@ -29,4 +29,6 @@ package.bat
cd kafka-console-ui
# run on linux or mac
sh package.sh
```
```
After packaging, a kafka-console-ui.zip package is generated in the target directory

pom.xml (67 changed lines)
View File

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
<version>1.0.3</version>
<version>1.0.8</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>
@@ -21,10 +21,11 @@
<ui.path>${project.basedir}/ui</ui.path>
<frontend-maven-plugin.version>1.11.0</frontend-maven-plugin.version>
<compiler.version>1.8</compiler.version>
<kafka.version>2.8.0</kafka.version>
<kafka.version>3.2.0</kafka.version>
<maven.assembly.plugin.version>3.0.0</maven.assembly.plugin.version>
<mybatis-plus-boot-starter.version>3.4.2</mybatis-plus-boot-starter.version>
<scala.version>2.13.6</scala.version>
<spring-framework.version>5.3.26</spring-framework.version>
</properties>
<dependencies>
<dependency>
@@ -48,6 +49,11 @@
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
@@ -76,6 +82,18 @@
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
<version>3.9.2</version>
</dependency>
<dependency>
@@ -156,26 +174,26 @@
</executions>
</plugin>
<!-- <plugin>-->
<!-- <groupId>org.codehaus.mojo</groupId>-->
<!-- <artifactId>build-helper-maven-plugin</artifactId>-->
<!-- <version>3.2.0</version>-->
<!-- <executions>-->
<!-- <execution>-->
<!-- <id>add-source</id>-->
<!-- <phase>generate-sources</phase>-->
<!-- <goals>-->
<!-- <goal>add-source</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <sources>-->
<!-- <source>src/main/java</source>-->
<!-- <source>src/main/scala</source>-->
<!-- </sources>-->
<!-- </configuration>-->
<!-- </execution>-->
<!-- </executions>-->
<!-- </plugin>-->
<!-- <plugin>-->
<!-- <groupId>org.codehaus.mojo</groupId>-->
<!-- <artifactId>build-helper-maven-plugin</artifactId>-->
<!-- <version>3.2.0</version>-->
<!-- <executions>-->
<!-- <execution>-->
<!-- <id>add-source</id>-->
<!-- <phase>generate-sources</phase>-->
<!-- <goals>-->
<!-- <goal>add-source</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <sources>-->
<!-- <source>src/main/java</source>-->
<!-- <source>src/main/scala</source>-->
<!-- </sources>-->
<!-- </configuration>-->
<!-- </execution>-->
<!-- </executions>-->
<!-- </plugin>-->
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
@@ -183,6 +201,8 @@
<source>${compiler.version}</source>
<target>${compiler.version}</target>
<encoding>${project.build.sourceEncoding}</encoding>
<!-- <debug>false</debug>-->
<!-- <parameters>false</parameters>-->
</configuration>
</plugin>
<plugin>
@@ -207,7 +227,8 @@
<goal>npm</goal>
</goals>
<configuration>
<arguments>install --registry=https://registry.npmjs.org/</arguments>
<!-- <arguments>install &#45;&#45;registry=https://registry.npmjs.org/</arguments>-->
<arguments>install --registry=https://registry.npm.taobao.org</arguments>
</configuration>
</execution>
<execution>

View File

@@ -6,6 +6,9 @@ import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.scheduling.annotation.EnableScheduling;
/**
* @author 晓东哥哥
*/
@MapperScan("com.xuxd.kafka.console.dao")
@SpringBootApplication
@EnableScheduling

View File

@@ -0,0 +1,117 @@
package com.xuxd.kafka.console.aspect;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;
/**
* @author 晓东哥哥
*/
@Slf4j
@Order(-1)
@Aspect
@Component
public class ControllerLogAspect {
private Map<String, String> descMap = new HashMap<>();
private ReentrantLock lock = new ReentrantLock();
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.ControllerLog)")
private void pointcut() {
}
@Around("pointcut()")
public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
StringBuilder params = new StringBuilder("[");
try {
String methodName = getMethodFullName(joinPoint.getTarget().getClass().getName(), joinPoint.getSignature().getName());
if (!descMap.containsKey(methodName)) {
cacheDescInfo(joinPoint);
}
Object[] args = joinPoint.getArgs();
long startTime = System.currentTimeMillis();
Object res = joinPoint.proceed();
long endTime = System.currentTimeMillis();
for (int i = 0; i < args.length; i++) {
params.append(args[i]);
}
params.append("]");
String resStr = "[" + (res != null ? res.toString() : "") + "]";
StringBuilder sb = new StringBuilder();
String shortMethodName = descMap.getOrDefault(methodName, ".-");
shortMethodName = shortMethodName.substring(shortMethodName.lastIndexOf(".") + 1);
sb.append("[").append(shortMethodName)
.append("调用完成: ")
.append("请求参数=").append(params).append(", ")
.append("响应值=").append(resStr).append(", ")
.append("耗时=").append(endTime - startTime)
.append(" ms");
log.info(sb.toString());
return res;
} catch (Throwable e) {
log.error("调用方法异常, 请求参数:" + params, e);
throw e;
}
}
private void cacheDescInfo(ProceedingJoinPoint joinPoint) {
lock.lock();
try {
String methodName = joinPoint.getSignature().getName();
Class<?> aClass = joinPoint.getTarget().getClass();
Method method = null;
try {
Object[] args = joinPoint.getArgs();
Class<?>[] clzArr = new Class[args.length];
for (int i = 0; i < args.length; i++) {
clzArr[i] = args[i].getClass();
}
method = aClass.getDeclaredMethod(methodName, clzArr);
} catch (Exception e) {
log.warn("cacheDescInfo error: {}", e.getMessage());
}
String fullMethodName = getMethodFullName(aClass.getName(), methodName);
String desc = "[" + fullMethodName + "]";
if (method == null) {
descMap.put(fullMethodName, desc);
return;
}
ControllerLog controllerLog = method.getAnnotation(ControllerLog.class);
String value = controllerLog.value();
if (StringUtils.isBlank(value)) {
descMap.put(fullMethodName, desc);
} else {
descMap.put(fullMethodName, value);
}
} finally {
lock.unlock();
}
}
private String getMethodFullName(String className, String methodName) {
return className + "#" + methodName;
}
}

View File

@@ -0,0 +1,127 @@
package com.xuxd.kafka.console.aspect;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.cache.RolePermCache;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.exception.UnAuthorizedException;
import com.xuxd.kafka.console.filter.CredentialsContext;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/17 22:32
**/
@Slf4j
@Order(1)
@Aspect
@Component
public class PermissionAspect {
private Map<String, Set<String>> permMap = new HashMap<>();
private final AuthConfig authConfig;
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final RolePermCache rolePermCache;
public PermissionAspect(AuthConfig authConfig,
SysUserMapper userMapper,
SysRoleMapper roleMapper,
SysPermissionMapper permissionMapper,
RolePermCache rolePermCache) {
this.authConfig = authConfig;
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.permissionMapper = permissionMapper;
this.rolePermCache = rolePermCache;
}
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.Permission)")
private void pointcut() {
}
@Before(value = "pointcut()")
public void before(JoinPoint joinPoint) {
if (!authConfig.isEnable()) {
return;
}
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
Method method = signature.getMethod();
Permission permission = method.getAnnotation(Permission.class);
if (permission == null) {
return;
}
String[] value = permission.value();
if (value == null || value.length == 0) {
return;
}
String name = method.getName() + "@" + method.hashCode();
Map<String, Set<String>> pm = checkPermMap(name, value);
Set<String> allowPermSet = pm.get(name);
if (allowPermSet == null) {
log.error("解析权限出现意外啦!!!");
return;
}
Credentials credentials = CredentialsContext.get();
if (credentials == null || credentials.isInvalid()) {
throw new UnAuthorizedException("credentials is invalid");
}
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("username", credentials.getUsername());
SysUserDO userDO = userMapper.selectOne(queryWrapper);
if (userDO == null) {
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
String roleIds = userDO.getRoleIds();
List<Long> roleIdList = Arrays.stream(roleIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
for (Long roleId : roleIdList) {
Set<String> permSet = rolePermCache.getRolePermCache().getOrDefault(roleId, Collections.emptySet());
for (String p : allowPermSet) {
if (permSet.contains(p)) {
return;
}
}
}
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
private Map<String, Set<String>> checkPermMap(String methodName, String[] value) {
if (!permMap.containsKey(methodName)) {
Map<String, Set<String>> map = new HashMap<>(permMap);
map.put(methodName, new HashSet<>(Arrays.asList(value)));
permMap = map;
return map;
}
return permMap;
}
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.aspect.annotation;
import java.lang.annotation.*;
/**
* This annotation is applied to controller-layer methods.
* @author 晓东哥哥
*/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ControllerLog {
String value() default "";
}
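A usage sketch (the endpoint and topicService below are illustrative, not taken from this changeset): any controller method carrying @ControllerLog is intercepted by ControllerLogAspect above, which logs its parameters, result, and elapsed time.
```java
// Hypothetical endpoint: the aspect logs params, result and timing for it.
@ControllerLog("delete topic")
@DeleteMapping("/topic")
public Object deleteTopic(@RequestParam("topic") String topic) {
    return topicService.deleteTopic(topic); // topicService is illustrative
}
```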

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.aspect.annotation;
import java.lang.annotation.*;
/**
* Permission annotation: when authentication is enabled, only users holding one of the listed permissions can access the endpoint.
*
* @author: xuxd
* @date: 2023/5/17 22:30
**/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Permission {
String[] value() default {};
}
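Usage matches the controller diffs later in this changeset: each endpoint declares the permission keys allowed to call it, and PermissionAspect enforces them when auth.enable=true. A representative example from the consumer controller:
```java
// Only users whose role grants "group:del" may call this endpoint
// when authentication is enabled (see PermissionAspect above).
@Permission("group:del")
@DeleteMapping("/group")
public Object deleteConsumerGroup(@RequestParam("groupId") String groupId) {
    return consumerService.deleteConsumerGroup(groupId);
}
```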

View File

@@ -1,18 +1,15 @@
package com.xuxd.kafka.console.beans;
import java.util.Objects;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.acl.*;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.utils.SecurityUtils;
import java.util.Objects;
/**
* kafka-console-ui.
@@ -41,7 +38,9 @@ public class AclEntry {
entry.setResourceType(binding.pattern().resourceType().name());
entry.setName(binding.pattern().name());
entry.setPatternType(binding.pattern().patternType().name());
entry.setPrincipal(KafkaPrincipal.fromString(binding.entry().principal()).getName());
// entry.setPrincipal(KafkaPrincipal.fromString(binding.entry().principal()).getName());
// use this method with Kafka 3.x
entry.setPrincipal(SecurityUtils.parseKafkaPrincipal(binding.entry().principal()).getName());
entry.setHost(binding.entry().host());
entry.setOperation(binding.entry().operation().name());
entry.setPermissionType(binding.entry().permissionType().name());

View File

@@ -8,7 +8,7 @@ import org.apache.kafka.common.Node;
* @author xuxd
* @date 2021-10-08 14:03:21
**/
public class BrokerNode {
public class BrokerNode implements Comparable{
private int id;
@@ -80,4 +80,8 @@ public class BrokerNode {
public void setController(boolean controller) {
isController = controller;
}
@Override public int compareTo(Object o) {
return this.id - ((BrokerNode)o).id;
}
}

View File

@@ -0,0 +1,21 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/14 19:37
**/
@Data
public class Credentials {
public static final Credentials INVALID = new Credentials();
private String username;
private long expiration;
public boolean isInvalid() {
return this == INVALID;
}
}

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/5/14 20:44
**/
@Data
public class LoginResult {
private String token;
private List<String> permissions;
}

View File

@@ -0,0 +1,22 @@
package com.xuxd.kafka.console.beans;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import org.springframework.context.ApplicationEvent;
/**
* @author: xuxd
* @date: 2023/5/18 15:49
**/
@ToString
public class RolePermUpdateEvent extends ApplicationEvent {
@Getter
@Setter
private boolean reload = false;
public RolePermUpdateEvent(Object source) {
super(source);
}
}

View File

@@ -21,7 +21,7 @@ public class TopicPartition implements Comparable {
}
TopicPartition other = (TopicPartition) o;
if (!this.topic.equals(other.getTopic())) {
return this.compareTo(other);
return this.topic.compareTo(other.topic);
}
return this.partition - other.partition;

View File

@@ -0,0 +1,29 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_permission")
public class SysPermissionDO {
@TableId(type = IdType.AUTO)
private Long id;
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
}

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_role")
public class SysRoleDO {
@TableId(type = IdType.AUTO)
private Long id;
private String roleName;
private String description;
private String permissionIds;
}

View File

@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_user")
public class SysUserDO {
@TableId(type = IdType.AUTO)
private Long id;
private String username;
private String password;
private String salt;
private String roleIds;
}

View File

@@ -0,0 +1,27 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/10 20:12
**/
@Data
public class AlterClientQuotaDTO {
private String type;
private List<String> types;
private List<String> names;
private String consumerRate;
private String producerRate;
private String requestPercentage;
private List<String> deleteConfigs;
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/14 18:59
**/
@Data
public class LoginUserDTO {
private String username;
private String password;
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.beans.dto;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-02-15 19:08:13
**/
@Data
public class ProposedAssignmentDTO {
private String topic;
private List<Integer> brokers;
}

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/9 21:53
**/
@Data
public class QueryClientQuotaDTO {
private List<String> types;
private List<String> names;
}

View File

@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysPermissionDTO {
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
public SysPermissionDO toSysPermissionDO() {
SysPermissionDO permissionDO = new SysPermissionDO();
permissionDO.setName(this.name);
permissionDO.setType(this.type);
permissionDO.setParentId(this.parentId);
permissionDO.setPermission(this.permission);
return permissionDO;
}
}

View File

@@ -0,0 +1,35 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import lombok.Data;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysRoleDTO {
private Long id;
private String roleName;
private String description;
private List<String> permissionIds;
public SysRoleDO toDO() {
SysRoleDO roleDO = new SysRoleDO();
roleDO.setId(this.id);
roleDO.setRoleName(this.roleName);
roleDO.setDescription(this.description);
if (CollectionUtils.isNotEmpty(permissionIds)) {
roleDO.setPermissionIds(StringUtils.join(this.permissionIds, ","));
}
return roleDO;
}
}

View File

@@ -0,0 +1,34 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysUserDTO {
private Long id;
private String username;
private String password;
private String salt;
private String roleIds;
private Boolean resetPassword = false;
public SysUserDO toDO() {
SysUserDO userDO = new SysUserDO();
userDO.setId(this.id);
userDO.setUsername(this.username);
userDO.setPassword(this.password);
userDO.setSalt(this.salt);
userDO.setRoleIds(this.roleIds);
return userDO;
}
}

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-22 16:24:58
**/
@Data
public class BrokerApiVersionVO {
private int brokerId;
private String host;
private int supportNums;
private int unSupportNums;
private List<String> versionInfo;
}

View File

@@ -0,0 +1,82 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import java.util.List;
import java.util.Map;
/**
* @author 晓东哥哥
*/
@Data
public class ClientQuotaEntityVO {
private String user;
private String client;
private String ip;
private String consumerRate;
private String producerRate;
private String requestPercentage;
public static ClientQuotaEntityVO from(ClientQuotaEntity entity, List<String> entityTypes, Map<String, Object> config) {
ClientQuotaEntityVO entityVO = new ClientQuotaEntityVO();
Map<String, String> entries = entity.entries();
entityTypes.forEach(type -> {
switch (type) {
case ClientQuotaEntity.USER:
entityVO.setUser(entries.get(type));
break;
case ClientQuotaEntity.CLIENT_ID:
entityVO.setClient(entries.get(type));
break;
case ClientQuotaEntity.IP:
entityVO.setIp(entries.get(type));
break;
default:
break;
}
});
entityVO.setConsumerRate(convert(config.getOrDefault(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setProducerRate(convert(config.getOrDefault(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setRequestPercentage(config.getOrDefault(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, "").toString());
return entityVO;
}
public static String convert(Object num) {
if (num == null) {
return null;
}
if (num instanceof String) {
if ((StringUtils.isBlank((String) num))) {
return (String) num;
}
}
if (num instanceof Number) {
Number number = (Number) num;
double value = number.doubleValue();
double _1kb = 1024;
double _1mb = 1024 * _1kb;
if (value < _1kb) {
return value + " Byte";
}
if (value < _1mb) {
return String.format("%.1f KB", (value / _1kb));
}
if (value >= _1mb) {
return String.format("%.1f MB", (value / _1mb));
}
}
return String.valueOf(num);
}
}
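A quick illustration of what convert() produces (sample values chosen here; units are binary, formatted to one decimal):
```java
ClientQuotaEntityVO.convert(512);             // "512.0 Byte"
ClientQuotaEntityVO.convert(2048);            // "2.0 KB"
ClientQuotaEntityVO.convert(3 * 1024 * 1024); // "3.0 MB"
```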

View File

@@ -0,0 +1,43 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/4/17 21:18
**/
@Data
public class SysPermissionVO {
private Long id;
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
private Long key;
private List<SysPermissionVO> children;
public static SysPermissionVO from(SysPermissionDO permissionDO) {
SysPermissionVO permissionVO = new SysPermissionVO();
permissionVO.setPermission(permissionDO.getPermission());
permissionVO.setType(permissionDO.getType());
permissionVO.setName(permissionDO.getName());
permissionVO.setParentId(permissionDO.getParentId());
permissionVO.setKey(permissionDO.getId());
permissionVO.setId(permissionDO.getId());
return permissionVO;
}
}
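A sketch of how the children tree can be assembled from a flat permission list via parentId, assuming a List<SysPermissionDO> named permissionDOs and the usual java.util imports; the actual service-layer code is not shown in this changeset, so this is illustrative only:
```java
// Hypothetical tree assembly: index VOs by id, then attach each VO to
// its parent's children list, collecting parentless entries as roots.
Map<Long, SysPermissionVO> byId = permissionDOs.stream()
        .map(SysPermissionVO::from)
        .collect(Collectors.toMap(SysPermissionVO::getId, v -> v));
List<SysPermissionVO> roots = new ArrayList<>();
for (SysPermissionVO v : byId.values()) {
    SysPermissionVO parent = v.getParentId() == null ? null : byId.get(v.getParentId());
    if (parent == null) {
        roots.add(v); // top-level menu entries
    } else {
        if (parent.getChildren() == null) {
            parent.setChildren(new ArrayList<>());
        }
        parent.getChildren().add(v);
    }
}
```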

View File

@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/4/19 21:12
**/
@Data
public class SysRoleVO {
private Long id;
private String roleName;
private String description;
private List<Long> permissionIds;
public static SysRoleVO from(SysRoleDO roleDO) {
SysRoleVO roleVO = new SysRoleVO();
roleVO.setId(roleDO.getId());
roleVO.setRoleName(roleDO.getRoleName());
roleVO.setDescription(roleDO.getDescription());
if (StringUtils.isNotEmpty(roleDO.getPermissionIds())) {
List<Long> list = Arrays.stream(roleDO.getPermissionIds().split(",")).
filter(StringUtils::isNotEmpty).map(e -> Long.valueOf(e.trim())).collect(Collectors.toList());
roleVO.setPermissionIds(list);
}
return roleVO;
}
}

View File

@@ -0,0 +1,31 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/6 13:06
**/
@Data
public class SysUserVO {
private Long id;
private String username;
private String password;
private String roleIds;
private String roleNames;
public static SysUserVO from(SysUserDO userDO) {
SysUserVO userVO = new SysUserVO();
userVO.setId(userDO.getId());
userVO.setUsername(userDO.getUsername());
userVO.setRoleIds(userDO.getRoleIds());
userVO.setPassword(userDO.getPassword());
return userVO;
}
}

View File

@@ -0,0 +1,91 @@
package com.xuxd.kafka.console.cache;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Component;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/18 15:47
**/
@DependsOn("dataInit")
@Slf4j
@Component
public class RolePermCache implements ApplicationListener<RolePermUpdateEvent>, SmartInitializingSingleton {
private Map<Long, SysPermissionDO> permCache = new HashMap<>();
private Map<Long, Set<String>> rolePermCache = new HashMap<>();
private final SysPermissionMapper permissionMapper;
private final SysRoleMapper roleMapper;
public RolePermCache(SysPermissionMapper permissionMapper, SysRoleMapper roleMapper) {
this.permissionMapper = permissionMapper;
this.roleMapper = roleMapper;
}
@Override
public void onApplicationEvent(RolePermUpdateEvent event) {
log.info("更新角色权限信息:{}", event);
if (event.isReload()) {
this.loadPermCache();
}
refresh();
}
public Map<Long, SysPermissionDO> getPermCache() {
return permCache;
}
public Map<Long, Set<String>> getRolePermCache() {
return rolePermCache;
}
private void refresh() {
List<SysRoleDO> roleDOS = roleMapper.selectList(null);
Map<Long, Set<String>> tmp = new HashMap<>();
for (SysRoleDO roleDO : roleDOS) {
String permissionIds = roleDO.getPermissionIds();
if (StringUtils.isEmpty(permissionIds)) {
continue;
}
List<Long> list = Arrays.stream(permissionIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
Set<String> permSet = tmp.getOrDefault(roleDO.getId(), new HashSet<>());
for (Long permId : list) {
SysPermissionDO permissionDO = permCache.get(permId);
if (permissionDO != null) {
permSet.add(permissionDO.getPermission());
}
}
tmp.put(roleDO.getId(), permSet);
}
rolePermCache = tmp;
}
private void loadPermCache() {
List<SysPermissionDO> roleDOS = permissionMapper.selectList(null);
Map<Long, SysPermissionDO> map = roleDOS.stream().collect(Collectors.toMap(SysPermissionDO::getId, Function.identity(), (e1, e2) -> e1));
permCache = map;
}
@Override
public void afterSingletonsInstantiated() {
this.loadPermCache();
this.refresh();
}
}
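The cache listens for RolePermUpdateEvent (defined earlier in this changeset). A sketch of triggering a refresh after editing roles or permissions, assuming a Spring ApplicationEventPublisher is injected as applicationEventPublisher in the surrounding service (illustrative, not project code):
```java
// Hypothetical sketch: notify RolePermCache after a permission change.
RolePermUpdateEvent event = new RolePermUpdateEvent(this);
event.setReload(true); // permission definitions changed, reload them too
applicationEventPublisher.publishEvent(event);
```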

View File

@@ -0,0 +1,33 @@
package com.xuxd.kafka.console.cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import kafka.console.KafkaConsole;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
public class TimeBasedCache<K, V> {
private LoadingCache<K, V> cache;
private KafkaConsole console;
public TimeBasedCache(CacheLoader<K, V> loader, RemovalListener<K, V> listener) {
cache = CacheBuilder.newBuilder()
.maximumSize(50) // at most 50 entries can be cached
.expireAfterAccess(30, TimeUnit.MINUTES) // an entry expires 30 minutes after its last access
.removalListener(listener)
.build(loader);
}
public V get(K k) {
try {
return cache.get(k);
} catch (ExecutionException e) {
throw new RuntimeException("Get connection from cache error.", e);
}
}
}
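A usage sketch, assuming the cache holds Kafka AdminClient connections keyed by bootstrap address; the loader and listener below are illustrative, not project code, and use Guava's CacheLoader and RemovalListener as imported above plus java.util.Properties:
```java
// Hypothetical usage: one AdminClient per bootstrap address, closed on eviction.
CacheLoader<String, AdminClient> loader = new CacheLoader<String, AdminClient>() {
    @Override
    public AdminClient load(String bootstrapServer) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServer);
        return AdminClient.create(props);
    }
};
RemovalListener<String, AdminClient> listener =
        notification -> notification.getValue().close();
TimeBasedCache<String, AdminClient> cache = new TimeBasedCache<>(loader, listener);
AdminClient admin = cache.get("localhost:9092"); // created once, then cached
```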

View File

@@ -0,0 +1,21 @@
package com.xuxd.kafka.console.config;
import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* @author: xuxd
* @date: 2023/5/9 21:08
**/
@Data
@Configuration
@ConfigurationProperties(prefix = "auth")
public class AuthConfig {
private boolean enable;
private String secret = "kafka-console-ui-default-secret";
private long expireHours;
}

View File

@@ -20,6 +20,12 @@ public class KafkaConfig {
private Properties properties;
private boolean cacheAdminConnection;
private boolean cacheProducerConnection;
private boolean cacheConsumerConnection;
public String getBootstrapServer() {
return bootstrapServer;
}
@@ -43,4 +49,28 @@ public class KafkaConfig {
public void setProperties(Properties properties) {
this.properties = properties;
}
public boolean isCacheAdminConnection() {
return cacheAdminConnection;
}
public void setCacheAdminConnection(boolean cacheAdminConnection) {
this.cacheAdminConnection = cacheAdminConnection;
}
public boolean isCacheProducerConnection() {
return cacheProducerConnection;
}
public void setCacheProducerConnection(boolean cacheProducerConnection) {
this.cacheProducerConnection = cacheProducerConnection;
}
public boolean isCacheConsumerConnection() {
return cacheConsumerConnection;
}
public void setCacheConsumerConnection(boolean cacheConsumerConnection) {
this.cacheConsumerConnection = cacheConsumerConnection;
}
}

View File

@@ -1,13 +1,6 @@
package com.xuxd.kafka.console.config;
import kafka.console.ClusterConsole;
import kafka.console.ConfigConsole;
import kafka.console.ConsumerConsole;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import kafka.console.MessageConsole;
import kafka.console.OperationConsole;
import kafka.console.TopicConsole;
import kafka.console.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -52,7 +45,7 @@ public class KafkaConfiguration {
@Bean
public OperationConsole operationConsole(KafkaConfig config, TopicConsole topicConsole,
ConsumerConsole consumerConsole) {
ConsumerConsole consumerConsole) {
return new OperationConsole(config, topicConsole, consumerConsole);
}
@@ -60,4 +53,9 @@ public class KafkaConfiguration {
public MessageConsole messageConsole(KafkaConfig config) {
return new MessageConsole(config);
}
@Bean
public ClientQuotaConsole clientQuotaConsole(KafkaConfig config) {
return new ClientQuotaConsole(config);
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.dto.AddAuthDTO;
import com.xuxd.kafka.console.beans.dto.ConsumerAuthDTO;
@@ -28,6 +29,7 @@ public class AclAuthController {
@Autowired
private AclService aclService;
@Permission({"acl:authority:detail", "acl:sasl-scram:detail"})
@PostMapping("/detail")
public Object getAclDetailList(@RequestBody QueryAclDTO param) {
return aclService.getAclDetailList(param.toEntry());
@@ -38,11 +40,13 @@ public class AclAuthController {
return aclService.getOperationList();
}
@Permission("acl:authority")
@PostMapping("/list")
public Object getAclList(@RequestBody QueryAclDTO param) {
return aclService.getAclList(param.toEntry());
}
@Permission({"acl:authority:add-principal", "acl:authority:add", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addAcl(@RequestBody AddAuthDTO param) {
return aclService.addAcl(param.toAclEntry());
@@ -54,6 +58,7 @@ public class AclAuthController {
* @param param entry.topic && entry.username must.
* @return
*/
@Permission({"acl:authority:producer", "acl:sasl-scram:producer"})
@PostMapping("/producer")
public Object addProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -66,6 +71,7 @@ public class AclAuthController {
* @param param entry.topic && entry.groupId entry.username must.
* @return
*/
@Permission({"acl:authority:consumer", "acl:sasl-scram:consumer"})
@PostMapping("/consumer")
public Object addConsumerAcl(@RequestBody ConsumerAuthDTO param) {
@@ -78,6 +84,7 @@ public class AclAuthController {
* @param entry entry
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteAclByUser(@RequestBody AclEntry entry) {
return aclService.deleteAcl(entry);
@@ -89,6 +96,7 @@ public class AclAuthController {
* @param param entry.username
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/user")
public Object deleteAclByUser(@RequestBody DeleteAclDTO param) {
return aclService.deleteUserAcl(param.toUserEntry());
@@ -100,6 +108,7 @@ public class AclAuthController {
* @param param entry.topic && entry.username must.
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/producer")
public Object deleteProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -112,10 +121,22 @@ public class AclAuthController {
* @param param entry.topic && entry.groupId entry.username must.
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/consumer")
public Object deleteConsumerAcl(@RequestBody ConsumerAuthDTO param) {
return aclService.deleteConsumerAcl(param.toTopicEntry(), param.toGroupEntry());
}
/**
* clear principal acls.
*
* @param param acl principal.
* @return true or false.
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/clear")
public Object clearAcl(@RequestBody DeleteAclDTO param) {
return aclService.clearAcl(param.toUserEntry());
}
}

View File

@@ -1,7 +1,10 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.AclUser;
import com.xuxd.kafka.console.service.AclService;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
@@ -12,7 +15,7 @@ import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui.
* kafka-console-ui. sasl scram user.
*
* @author xuxd
* @date 2021-08-28 21:13:05
@@ -24,29 +27,41 @@ public class AclUserController {
@Autowired
private AclService aclService;
@Permission("acl:sasl-scram")
@GetMapping
public Object getUserList() {
return aclService.getUserList();
}
@Permission({"acl:sasl-scram:add-update", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addOrUpdateUser(@RequestBody AclUser user) {
return aclService.addOrUpdateUser(user.getUsername(), user.getPassword());
}
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteUser(@RequestBody AclUser user) {
return aclService.deleteUser(user.getUsername());
}
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping("/auth")
public Object deleteUserAndAuth(@RequestBody AclUser user) {
return aclService.deleteUserAndAuth(user.getUsername());
}
@Permission("acl:sasl-scram:detail")
@GetMapping("/detail")
public Object getUserDetail(@RequestParam String username) {
public Object getUserDetail(@RequestParam("username") String username) {
return aclService.getUserDetail(username);
}
@GetMapping("/scram")
public Object getSaslScramUserList(@RequestParam(required = false, name = "username") String username) {
AclEntry entry = new AclEntry();
entry.setPrincipal(StringUtils.isNotBlank(username) ? username : null);
return aclService.getSaslScramUserList(entry);
}
}

View File

@@ -0,0 +1,36 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.service.AuthService;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/5/11 18:54
**/
@RestController
@RequestMapping("/auth")
public class AuthController {
private final AuthConfig authConfig;
private final AuthService authService;
public AuthController(AuthConfig authConfig, AuthService authService) {
this.authConfig = authConfig;
this.authService = authService;
}
@GetMapping("/enable")
public boolean enable() {
return authConfig.isEnable();
}
@PostMapping("/login")
public ResponseData login(@RequestBody LoginUserDTO userDTO) {
return authService.login(userDTO);
}
}

View File

@@ -0,0 +1,56 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.dto.QueryClientQuotaDTO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/1/9 21:50
**/
@RestController
@RequestMapping("/client/quota")
public class ClientQuotaController {
private final ClientQuotaService clientQuotaService;
public ClientQuotaController(ClientQuotaService clientQuotaService) {
this.clientQuotaService = clientQuotaService;
}
@Permission({"quota:user", "quota:client", "quota:user-client"})
@PostMapping("/list")
public Object getClientQuotaConfigs(@RequestBody QueryClientQuotaDTO request) {
return clientQuotaService.getClientQuotaConfigs(request.getTypes(), request.getNames());
}
@Permission({"quota:user:add", "quota:client:add", "quota:user-client:add", "quota:edit"})
@PostMapping
public Object alterClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
if (request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types length and names length is invalid.");
}
}
return clientQuotaService.alterClientQuotaConfigs(request);
}
@Permission("quota:del")
@DeleteMapping
public Object deleteClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
if (request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types length and names length is invalid.");
}
}
return clientQuotaService.deleteClientQuotaConfigs(request);
}
}
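For reference, the size checks above accept either a single entity type or a two-entry user plus client-id pair with matching names. A hypothetical payload for the combined case; the entity type strings follow Kafka's ClientQuotaEntity constants "user" and "client-id", while the names and rate are made up for illustration:
```java
// Illustrative request: throttle producer traffic for one user+client pair.
AlterClientQuotaDTO dto = new AlterClientQuotaDTO();
dto.setTypes(Arrays.asList("user", "client-id"));
dto.setNames(Arrays.asList("alice", "order-service-producer"));
dto.setProducerRate("1048576"); // 1 MiB/s byte rate
```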

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
@@ -34,16 +35,19 @@ public class ClusterController {
return clusterService.getClusterInfoList();
}
@Permission("op:cluster-switch:add")
@PostMapping("/info")
public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.addClusterInfo(dto.to());
}
@Permission("op:cluster-switch:del")
@DeleteMapping("/info")
public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.deleteClusterInfo(dto.getId());
}
@Permission("op:cluster-switch:edit")
@PutMapping("/info")
public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.updateClusterInfo(dto.to());
@@ -53,4 +57,9 @@ public class ClusterController {
public Object peekClusterInfo() {
return clusterService.peekClusterInfo();
}
@GetMapping("/info/api/version")
public Object getBrokerApiVersionInfo() {
return clusterService.getBrokerApiVersionInfo();
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterConfigDTO;
import com.xuxd.kafka.console.beans.enums.AlterType;
@@ -41,46 +42,55 @@ public class ConfigController {
return ResponseData.create().data(configMap).success();
}
@Permission("topic:property-config")
@GetMapping("/topic")
public Object getTopicConfig(String topic) {
return configService.getTopicConfig(topic);
}
@Permission("topic:property-config:edit")
@PostMapping("/topic")
public Object setTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@Permission("topic:property-config:del")
@DeleteMapping("/topic")
public Object deleteTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@Permission("cluster:property-config")
@GetMapping("/broker")
public Object getBrokerConfig(String brokerId) {
return configService.getBrokerConfig(brokerId);
}
@Permission("cluster:edit")
@PostMapping("/broker")
public Object setBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@Permission("cluster:edit")
@DeleteMapping("/broker")
public Object deleteBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@Permission("cluster:log-config")
@GetMapping("/broker/logger")
public Object getBrokerLoggerConfig(String brokerId) {
return configService.getBrokerLoggerConfig(brokerId);
}
@Permission("cluster:edit")
@PostMapping("/broker/logger")
public Object setBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@Permission("cluster:edit")
@DeleteMapping("/broker/logger")
public Object deleteBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);

View File

@@ -1,28 +1,20 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AddSubscriptionDTO;
import com.xuxd.kafka.console.beans.dto.QueryConsumerGroupDTO;
import com.xuxd.kafka.console.beans.dto.ResetOffsetDTO;
import com.xuxd.kafka.console.service.ConsumerService;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
import java.util.*;
/**
* kafka-console-ui.
@@ -51,26 +43,34 @@ public class ConsumerController {
return consumerService.getConsumerGroupList(groupIdList, stateSet);
}
@Permission("group:del")
@DeleteMapping("/group")
public Object deleteConsumerGroup(@RequestParam String groupId) {
public Object deleteConsumerGroup(@RequestParam("groupId") String groupId) {
return consumerService.deleteConsumerGroup(groupId);
}
@Permission("group:client")
@GetMapping("/member")
public Object getConsumerMembers(@RequestParam String groupId) {
public Object getConsumerMembers(@RequestParam("groupId") String groupId) {
return consumerService.getConsumerMembers(groupId);
}
@Permission("group:consumer-detail")
@GetMapping("/detail")
public Object getConsumerDetail(@RequestParam String groupId) {
public Object getConsumerDetail(@RequestParam("groupId") String groupId) {
return consumerService.getConsumerDetail(groupId);
}
@Permission("group:add")
@PostMapping("/subscription")
public Object addSubscription(@RequestBody AddSubscriptionDTO subscriptionDTO) {
return consumerService.addSubscription(subscriptionDTO.getGroupId(), subscriptionDTO.getTopic());
}
@Permission({"group:consumer-detail:min",
"group:consumer-detail:last",
"group:consumer-detail:timestamp",
"group:consumer-detail:any"})
@PostMapping("/reset/offset")
public Object restOffset(@RequestBody ResetOffsetDTO offsetDTO) {
ResponseData res = ResponseData.create().failed("unknown");
@@ -78,7 +78,7 @@ public class ConsumerController {
case ResetOffsetDTO.Level.TOPIC:
switch (offsetDTO.getType()) {
case ResetOffsetDTO.Type.EARLIEST:
res = consumerService.resetOffsetToEndpoint(offsetDTO.getGroupId(), offsetDTO.getTopic(), OffsetResetStrategy.EARLIEST);
break;
case ResetOffsetDTO.Type.LATEST:
@@ -94,7 +94,7 @@ public class ConsumerController {
case ResetOffsetDTO.Level.PARTITION:
switch (offsetDTO.getType()) {
case ResetOffsetDTO.Type.SPECIAL:
res = consumerService.resetPartitionToTargetOffset(offsetDTO.getGroupId(), new TopicPartition(offsetDTO.getTopic(), offsetDTO.getPartition()), offsetDTO.getOffset());
break;
default:
@@ -114,17 +114,19 @@ public class ConsumerController {
}
@GetMapping("/topic/list")
public Object getSubscribeTopicList(@RequestParam String groupId) {
public Object getSubscribeTopicList(@RequestParam("groupId") String groupId) {
return consumerService.getSubscribeTopicList(groupId);
}
@Permission({"topic:consumer-detail"})
@GetMapping("/topic/subscribed")
public Object getTopicSubscribedByGroups(@RequestParam String topic) {
public Object getTopicSubscribedByGroups(@RequestParam("topic") String topic) {
return consumerService.getTopicSubscribedByGroups(topic);
}
@Permission("group:offset-partition")
@GetMapping("/offset/partition")
public Object getOffsetPartition(@RequestParam String groupId) {
public Object getOffsetPartition(@RequestParam("groupId") String groupId) {
return consumerService.getOffsetPartition(groupId);
}
}
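The reset-offset switch above implies the shape of ResetOffsetDTO. A sketch under those assumptions; the field names come from the getters used in the controller, while the constant types and values are guesses, since the DTO itself is not in this diff:

public class ResetOffsetDTO {
    // Constant holders referenced as case labels; the actual values are assumptions.
    public interface Level { int TOPIC = 0; int PARTITION = 1; }
    public interface Type { int EARLIEST = 0; int LATEST = 1; int TIMESTAMP = 2; int SPECIAL = 3; }

    private String groupId;  // getGroupId()
    private String topic;    // getTopic()
    private int partition;   // getPartition(), used at the PARTITION level
    private long offset;     // getOffset(), used with Type.SPECIAL
    private int level;       // switch (offsetDTO.getLevel())
    private int type;        // switch (offsetDTO.getType())
    // Getters and setters omitted.
}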

View File

@@ -1,14 +1,16 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QueryMessageDTO;
import com.xuxd.kafka.console.service.MessageService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
import java.util.List;
/**
* kafka-console-ui.
@@ -23,16 +25,19 @@ public class MessageController {
@Autowired
private MessageService messageService;
@Permission("message:search-time")
@PostMapping("/search/time")
public Object searchByTime(@RequestBody QueryMessageDTO dto) {
return messageService.searchByTime(dto.toQueryMessage());
}
@Permission("message:search-offset")
@PostMapping("/search/offset")
public Object searchByOffset(@RequestBody QueryMessageDTO dto) {
return messageService.searchByOffset(dto.toQueryMessage());
}
@Permission("message:detail")
@PostMapping("/search/detail")
public Object searchDetail(@RequestBody QueryMessageDTO dto) {
return messageService.searchDetail(dto.toQueryMessage());
@@ -43,13 +48,24 @@ public class MessageController {
return messageService.deserializerList();
}
@Permission("message:send")
@PostMapping("/send")
public Object send(@RequestBody SendMessage message) {
return messageService.send(message);
}
@Permission("message:resend")
@PostMapping("/resend")
public Object resend(@RequestBody SendMessage message) {
return messageService.resend(message);
}
@Permission("message:del")
@DeleteMapping
public Object delete(@RequestBody List<QueryMessage> messages) {
if (CollectionUtils.isEmpty(messages)) {
return ResponseData.create().failed("params is null");
}
return messageService.delete(messages);
}
}
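The new delete endpoint expects a JSON array of QueryMessage objects in the body of a DELETE request. A hedged client-side sketch; the host, port, and /message mapping are assumptions, while the topic/partition/offset fields mirror the getters used in MessageServiceImpl.delete further below:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeleteMessagesExample {
    public static void main(String[] args) throws Exception {
        String body = "[{\"topic\":\"test\",\"partition\":0,\"offset\":42}]";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:7766/message")) // hypothetical address
            .header("Content-Type", "application/json")
            .method("DELETE", HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}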

View File

@@ -1,19 +1,15 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.TopicPartition;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
import com.xuxd.kafka.console.beans.dto.ProposedAssignmentDTO;
import com.xuxd.kafka.console.beans.dto.ReplicationDTO;
import com.xuxd.kafka.console.beans.dto.SyncDataDTO;
import com.xuxd.kafka.console.service.OperationService;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
/**
* kafka-console-ui.
@@ -46,32 +42,42 @@ public class OperationController {
}
@DeleteMapping("/sync/alignment")
public Object deleteAlignment(@RequestParam Long id) {
public Object deleteAlignment(@RequestParam("id") Long id) {
return operationService.deleteAlignmentById(id);
}
@Permission({"topic:partition-detail:preferred", "op:replication-preferred"})
@PostMapping("/replication/preferred")
public Object electPreferredLeader(@RequestBody ReplicationDTO dto) {
return operationService.electPreferredLeader(dto.getTopic(), dto.getPartition());
}
@Permission("op:config-throttle")
@PostMapping("/broker/throttle")
public Object configThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.configThrottle(dto.getBrokerList(), dto.getUnit().toKb(dto.getThrottle()));
}
@Permission("op:remove-throttle")
@DeleteMapping("/broker/throttle")
public Object removeThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.removeThrottle(dto.getBrokerList());
}
@Permission("op:replication-update-detail")
@GetMapping("/replication/reassignments")
public Object currentReassignments() {
return operationService.currentReassignments();
}
@Permission("op:replication-update-detail:cancel")
@DeleteMapping("/replication/reassignments")
public Object cancelReassignment(@RequestBody TopicPartition partition) {
return operationService.cancelReassignment(new org.apache.kafka.common.TopicPartition(partition.getTopic(), partition.getPartition()));
}
@PostMapping("/replication/reassignments/proposed")
public Object proposedAssignments(@RequestBody ProposedAssignmentDTO dto) {
return operationService.proposedAssignments(dto.getTopic(), dto.getBrokers());
}
}

View File

@@ -1,23 +1,19 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.dto.AddPartitionDTO;
import com.xuxd.kafka.console.beans.dto.NewTopicDTO;
import com.xuxd.kafka.console.beans.dto.TopicThrottleDTO;
import com.xuxd.kafka.console.beans.enums.TopicType;
import com.xuxd.kafka.console.service.TopicService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui.
@@ -37,26 +33,31 @@ public class TopicController {
return topicService.getTopicNameList(false);
}
@Permission("topic:load")
@GetMapping("/list")
public Object getTopicList(@RequestParam(required = false) String topic, @RequestParam String type) {
public Object getTopicList(@RequestParam(required = false, name = "topic") String topic, @RequestParam("type") String type) {
return topicService.getTopicList(topic, TopicType.valueOf(type.toUpperCase()));
}
@Permission({"topic:batch-del", "topic:del"})
@DeleteMapping
public Object deleteTopic(@RequestParam String topic) {
return topicService.deleteTopic(topic);
public Object deleteTopic(@RequestBody List<String> topics) {
return topicService.deleteTopics(topics);
}
@Permission("topic:partition-detail")
@GetMapping("/partition")
public Object getTopicPartitionInfo(@RequestParam String topic) {
public Object getTopicPartitionInfo(@RequestParam("topic") String topic) {
return topicService.getTopicPartitionInfo(topic.trim());
}
@Permission("topic:add")
@PostMapping("/new")
public Object createNewTopic(@RequestBody NewTopicDTO topicDTO) {
return topicService.createTopic(topicDTO.toNewTopic());
}
@Permission("topic:partition-add")
@PostMapping("/partition/new")
public Object addPartition(@RequestBody AddPartitionDTO partitionDTO) {
String topic = partitionDTO.getTopic().trim();
@@ -75,22 +76,25 @@ public class TopicController {
}
@GetMapping("/replica/assignment")
public Object getCurrentReplicaAssignment(@RequestParam String topic) {
public Object getCurrentReplicaAssignment(@RequestParam("topic") String topic) {
return topicService.getCurrentReplicaAssignment(topic);
}
@Permission({"topic:replication-modify", "op:replication-reassign"})
@PostMapping("/replica/assignment")
public Object updateReplicaAssignment(@RequestBody ReplicaAssignment assignment) {
return topicService.updateReplicaAssignment(assignment);
}
@Permission("topic:replication-sync-throttle")
@PostMapping("/replica/throttle")
public Object configThrottle(@RequestBody TopicThrottleDTO dto) {
return topicService.configThrottle(dto.getTopic(), dto.getPartitions(), dto.getOperation());
}
@Permission("topic:send-count")
@GetMapping("/send/stats")
public Object sendStats(@RequestParam String topic) {
public Object sendStats(@RequestParam("topic") String topic) {
return topicService.sendStats(topic);
}
}

View File

@@ -0,0 +1,97 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
import com.xuxd.kafka.console.service.UserManageService;
import org.springframework.web.bind.annotation.*;
import javax.servlet.http.HttpServletRequest;
/**
* @author: xuxd
* @date: 2023/4/11 21:34
**/
@RestController
@RequestMapping("/sys/user/manage")
public class UserManageController {
private final UserManageService userManageService;
public UserManageController(UserManageService userManageService) {
this.userManageService = userManageService;
}
@Permission({"user-manage:user:add", "user-manage:user:change-role", "user-manage:user:reset-pass"})
@ControllerLog("新增/更新用户")
@PostMapping("/user")
public Object addOrUpdateUser(@RequestBody SysUserDTO userDTO) {
return userManageService.addOrUpdateUser(userDTO);
}
@Permission("user-manage:role:save")
@ControllerLog("新增/更新角色")
@PostMapping("/role")
public Object addOrUpdateRole(@RequestBody SysRoleDTO roleDTO) {
return userManageService.addOrUdpateRole(roleDTO);
}
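// Note: unlike the other write endpoints in this controller, addPermission
// below carries no @Permission guard, so any logged-in user can reach it.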
@ControllerLog("新增权限")
@PostMapping("/permission")
public Object addPermission(@RequestBody SysPermissionDTO permissionDTO) {
return userManageService.addPermission(permissionDTO);
}
@Permission("user-manage:role:save")
@ControllerLog("更新角色")
@PutMapping("/role")
public Object updateRole(@RequestBody SysRoleDTO roleDTO) {
return userManageService.updateRole(roleDTO);
}
@Permission({"user-manage:role"})
@GetMapping("/role")
public Object selectRole() {
return userManageService.selectRole();
}
@Permission({"user-manage:permission"})
@GetMapping("/permission")
public Object selectPermission() {
return userManageService.selectPermission();
}
@Permission({"user-manage:user"})
@GetMapping("/user")
public Object selectUser() {
return userManageService.selectUser();
}
@Permission("user-manage:role:del")
@ControllerLog("删除角色")
@DeleteMapping("/role")
public Object deleteRole(@RequestParam("id") Long id) {
return userManageService.deleteRole(id);
}
@Permission("user-manage:user:del")
@ControllerLog("删除用户")
@DeleteMapping("/user")
public Object deleteUser(@RequestParam("id") Long id) {
return userManageService.deleteUser(id);
}
@Permission("user-manage:setting")
@ControllerLog("更新密码")
@PostMapping("/user/password")
public Object updatePassword(@RequestBody SysUserDTO userDTO, HttpServletRequest request) {
Credentials credentials = (Credentials) request.getAttribute("credentials");
if (credentials != null && !credentials.isInvalid()) {
userDTO.setUsername(credentials.getUsername());
}
return userManageService.updatePassword(userDTO);
}
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import org.apache.ibatis.annotations.Mapper;
/**
* System permission mapper.
*
* @author: xuxd
* @date: 2023/4/11 21:21
**/
@Mapper
public interface SysPermissionMapper extends BaseMapper<SysPermissionDO> {
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import org.apache.ibatis.annotations.Mapper;
/**
* @author: xuxd
* @date: 2023/4/11 21:22
**/
@Mapper
public interface SysRoleMapper extends BaseMapper<SysRoleDO> {
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import org.apache.ibatis.annotations.Mapper;
/**
* @author: xuxd
* @date: 2023/4/11 21:22
**/
@Mapper
public interface SysUserMapper extends BaseMapper<SysUserDO> {
}

View File

@@ -0,0 +1,93 @@
package com.xuxd.kafka.console.dao.init;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
/**
* @author: xuxd
* @date: 2023/5/17 13:10
**/
@Slf4j
@Component
public class DataInit implements SmartInitializingSingleton {
private final AuthConfig authConfig;
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final DataSource dataSource;
private final SqlParse sqlParse;
private final ApplicationEventPublisher publisher;
public DataInit(AuthConfig authConfig,
SysUserMapper userMapper,
SysRoleMapper roleMapper,
SysPermissionMapper permissionMapper,
DataSource dataSource,
ApplicationEventPublisher publisher) {
this.authConfig = authConfig;
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.permissionMapper = permissionMapper;
this.dataSource = dataSource;
this.publisher = publisher;
this.sqlParse = new SqlParse();
}
@Override
public void afterSingletonsInstantiated() {
if (!authConfig.isEnable()) {
log.info("Disable login authentication, no longer try to initialize the data");
return;
}
try (Connection connection = dataSource.getConnection()) {
Integer userCount = userMapper.selectCount(null);
if (userCount == null || userCount == 0) {
initData(connection, SqlParse.USER_TABLE);
}
Integer roleCount = roleMapper.selectCount(null);
if (roleCount == null || roleCount == 0) {
initData(connection, SqlParse.ROLE_TABLE);
}
Integer permCount = permissionMapper.selectCount(null);
if (permCount == null || permCount == 0) {
initData(connection, SqlParse.PERM_TABLE);
}
RolePermUpdateEvent event = new RolePermUpdateEvent(this);
event.setReload(true);
publisher.publishEvent(event);
} catch (SQLException e) {
throw new RuntimeException(e);
}
}
private void initData(Connection connection, String table) throws SQLException {
log.info("Init default data for {}", table);
String sql = sqlParse.getMergeSql(table);
try (PreparedStatement statement = connection.prepareStatement(sql)) {
    statement.execute();
}
}
}

View File

@@ -0,0 +1,96 @@
package com.xuxd.kafka.console.dao.init;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* @author: xuxd
* @date: 2023/5/17 21:22
**/
@Slf4j
public class SqlParse {
private final String FILE = "db/data-h2.sql";
private final Map<String, List<String>> sqlMap = new HashMap<>();
public static final String ROLE_TABLE = "t_sys_role";
public static final String USER_TABLE = "t_sys_user";
public static final String PERM_TABLE = "t_sys_permission";
public SqlParse() {
sqlMap.put(ROLE_TABLE, new ArrayList<>());
sqlMap.put(USER_TABLE, new ArrayList<>());
sqlMap.put(PERM_TABLE, new ArrayList<>());
String table = null;
try {
List<String> lines = getSqlLines();
for (String str : lines) {
if (StringUtils.isNotEmpty(str)) {
if (str.indexOf("start--") > 0) {
if (str.indexOf(ROLE_TABLE) > 0) {
table = ROLE_TABLE;
}
if (str.indexOf(USER_TABLE) > 0) {
table = USER_TABLE;
}
if (str.indexOf(PERM_TABLE) > 0) {
table = PERM_TABLE;
}
}
if (isSql(str)) {
if (table == null) {
log.error("Table is null, can not load sql: {}", str);
continue;
}
sqlMap.get(table).add(str);
}
}
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public List<String> getSqlList(String table) {
return sqlMap.get(table);
}
public String getMergeSql(String table) {
List<String> list = getSqlList(table);
StringBuilder sb = new StringBuilder();
list.forEach(sql -> sb.append(sql));
return sb.toString();
}
private boolean isSql(String str) {
return StringUtils.isNotEmpty(str) && str.startsWith("insert");
}
private List<String> getSqlLines() throws Exception {
// File file = ResourceUtils.getFile(FILE);
// List<String> lines = Files.readLines(file, Charset.forName("UTF-8"));
List<String> lines = new ArrayList<>();
try (InputStream inputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(FILE)) {
try (InputStreamReader inputStreamReader = new InputStreamReader(inputStream, "UTF-8")) {
try (BufferedReader reader = new BufferedReader(inputStreamReader)) {
String line;
while ((line = reader.readLine()) != null) {
lines.add(line);
}
}
}
}
return lines;
}
}
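SqlParse keys on two conventions in db/data-h2.sql: a marker line containing "start--" together with a table name, and statement lines beginning with "insert". An illustrative layout under that assumption (the actual file is not part of this diff):

public class SqlParseDemo {
    public static void main(String[] args) {
        // Assumed layout of db/data-h2.sql, derived from the parsing rules above.
        String[] sample = {
            "-- t_sys_role start--",
            "insert into t_sys_role(id, name) values (1, 'admin');",
            "-- t_sys_user start--",
            "insert into t_sys_user(id, username) values (1, 'super-admin');"
        };
        // The constructor would file the first insert under t_sys_role and the
        // second under t_sys_user; getMergeSql("t_sys_role") then concatenates
        // that table's statements into one executable string.
        for (String line : sample) {
            System.out.println(line);
        }
    }
}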

View File

@@ -1,11 +1,14 @@
package com.xuxd.kafka.console.interceptor;
package com.xuxd.kafka.console.exception;
import com.xuxd.kafka.console.beans.ResponseData;
import javax.servlet.http.HttpServletRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import javax.servlet.http.HttpServletRequest;
/**
* kafka-console-ui.
@@ -17,6 +20,14 @@ import org.springframework.web.bind.annotation.ResponseBody;
@ControllerAdvice(basePackages = "com.xuxd.kafka.console.controller")
public class GlobalExceptionHandler {
@ResponseStatus(code = HttpStatus.FORBIDDEN)
@ExceptionHandler(value = UnAuthorizedException.class)
@ResponseBody
public Object unAuthorizedExceptionHandler(HttpServletRequest req, Exception ex) throws Exception {
log.error("unAuthorized: {}", ex.getMessage());
return ResponseData.create().failed("UnAuthorized: " + ex.getMessage());
}
@ExceptionHandler(value = Exception.class)
@ResponseBody
public Object exceptionHandler(HttpServletRequest req, Exception ex) throws Exception {

View File

@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.exception;
/**
* @author: xuxd
* @date: 2023/5/17 23:08
**/
public class UnAuthorizedException extends RuntimeException {
public UnAuthorizedException(String message) {
super(message);
}
}

View File

@@ -0,0 +1,78 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.utils.AuthUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.core.annotation.Order;
import org.springframework.http.HttpStatus;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
/**
* @author: xuxd
* @date: 2023/5/9 21:20
**/
@Order(1)
@WebFilter(filterName = "auth-filter", urlPatterns = {"/*"})
@Slf4j
public class AuthFilter implements Filter {
private final AuthConfig authConfig;
private final String TOKEN_HEADER = "X-Auth-Token";
private final String AUTH_URI_PREFIX = "/auth";
public AuthFilter(AuthConfig authConfig) {
this.authConfig = authConfig;
}
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
if (!authConfig.isEnable()) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
HttpServletRequest request = (HttpServletRequest) servletRequest;
HttpServletResponse response = (HttpServletResponse) servletResponse;
String accessToken = request.getHeader(TOKEN_HEADER);
String requestURI = request.getRequestURI();
if (isResourceRequest(requestURI)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
if (requestURI.startsWith(AUTH_URI_PREFIX)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
if (StringUtils.isEmpty(accessToken)) {
response.setStatus(HttpStatus.UNAUTHORIZED.value());
return;
}
Credentials credentials = AuthUtil.parseToken(authConfig.getSecret(), accessToken);
if (credentials.isInvalid()) {
response.setStatus(HttpStatus.UNAUTHORIZED.value());
return;
}
request.setAttribute("credentials", credentials);
try {
CredentialsContext.set(credentials);
filterChain.doFilter(servletRequest, servletResponse);
} finally {
CredentialsContext.remove();
}
}
private boolean isResourceRequest(String requestURI) {
return requestURI.contains(".") || requestURI.equals("/");
}
}
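Spring Boot only honors @WebFilter when servlet component scanning is enabled, and AuthFilter also needs its AuthConfig constructor argument. One way the wiring could look if the filter is registered programmatically; the project's actual registration code is not shown in this diff:

import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AuthFilterConfig {
    @Bean
    public FilterRegistrationBean<AuthFilter> authFilterRegistration(AuthConfig authConfig) {
        FilterRegistrationBean<AuthFilter> bean =
            new FilterRegistrationBean<>(new AuthFilter(authConfig));
        bean.addUrlPatterns("/*");
        bean.setOrder(1); // mirrors @Order(1) on the filter
        return bean;
    }
}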

View File

@@ -1,4 +1,4 @@
package com.xuxd.kafka.console.interceptor;
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
@@ -6,28 +6,27 @@ import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.annotation.Order;
import org.springframework.http.MediaType;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-05 19:56:25
**/
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*","/user/*","/cluster/*","/config/*","/consumer/*","/message/*","/topic/*","/op/*"})
@Order(100)
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*", "/user/*", "/cluster/*", "/config/*", "/consumer/*", "/message/*", "/topic/*", "/op/*", "/client/*"})
@Slf4j
public class ContextSetFilter implements Filter {
@@ -42,8 +41,9 @@ public class ContextSetFilter implements Filter {
@Autowired
private ClusterInfoMapper clusterInfoMapper;
@Override public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
@Override
public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
HttpServletRequest request = (HttpServletRequest) req;
String uri = request.getRequestURI();
@@ -72,6 +72,7 @@ public class ContextSetFilter implements Filter {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
// log.info("current kafka config: {}", config);
}
}
chain.doFilter(req, response);

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.Credentials;
/**
* @author: xuxd
* @date: 2023/5/17 23:02
**/
public class CredentialsContext {
private static final ThreadLocal<Credentials> CREDENTIALS = new ThreadLocal<>();
public static void set(Credentials credentials) {
CREDENTIALS.set(credentials);
}
public static Credentials get() {
return CREDENTIALS.get();
}
public static void remove() {
CREDENTIALS.remove();
}
}

View File

@@ -42,4 +42,7 @@ public interface AclService {
ResponseData getUserDetail(String username);
ResponseData clearAcl(AclEntry entry);
ResponseData getSaslScramUserList(AclEntry entry);
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
/**
* @author: xuxd
* @date: 2023/5/14 19:00
**/
public interface AuthService {
ResponseData login(LoginUserDTO userDTO);
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import java.util.List;
/**
* @author 晓东哥哥
*/
public interface ClientQuotaService {
ResponseData getClientQuotaConfigs(List<String> types, List<String> names);
ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request);
ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request);
}

View File

@@ -21,4 +21,6 @@ public interface ClusterService {
ResponseData updateClusterInfo(ClusterInfoDO infoDO);
ResponseData peekClusterInfo();
ResponseData getBrokerApiVersionInfo();
}

View File

@@ -4,6 +4,8 @@ import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import java.util.List;
/**
* kafka-console-ui.
*
@@ -23,4 +25,6 @@ public interface MessageService {
ResponseData send(SendMessage message);
ResponseData resend(SendMessage message);
ResponseData delete(List<QueryMessage> messages);
}

View File

@@ -30,4 +30,6 @@ public interface OperationService {
ResponseData currentReassignments();
ResponseData cancelReassignment(TopicPartition partition);
ResponseData proposedAssignments(String topic, List<Integer> brokerList);
}

View File

@@ -4,6 +4,8 @@ import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.enums.TopicThrottleSwitch;
import com.xuxd.kafka.console.beans.enums.TopicType;
import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.admin.NewTopic;
@@ -19,7 +21,7 @@ public interface TopicService {
ResponseData getTopicList(String topic, TopicType type);
ResponseData deleteTopic(String topic);
ResponseData deleteTopics(Collection<String> topics);
ResponseData getTopicPartitionInfo(String topic);

View File

@@ -0,0 +1,40 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
/**
* Login user and permission management.
*
* @author: xuxd
* @date: 2023/4/11 21:24
**/
public interface UserManageService {
/**
* Add a permission.
*/
ResponseData addPermission(SysPermissionDTO permissionDTO);
ResponseData addOrUdpateRole(SysRoleDTO roleDTO);
ResponseData addOrUpdateUser(SysUserDTO userDTO);
ResponseData selectRole();
ResponseData selectPermission();
ResponseData selectUser();
ResponseData updateUser(SysUserDTO userDTO);
ResponseData updateRole(SysRoleDTO roleDTO);
ResponseData deleteRole(Long id);
ResponseData deleteUser(Long id);
ResponseData updatePassword(SysUserDTO userDTO);
}

View File

@@ -10,30 +10,23 @@ import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.service.AclService;
import com.xuxd.kafka.console.utils.SaslUtil;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialsDescription;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.auth.SecurityProtocol;
import org.apache.kafka.common.errors.SecurityDisabledException;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableSasl;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableScram;
@@ -139,39 +132,52 @@ public class AclServiceImpl implements AclService {
}
@Override public ResponseData getAclList(AclEntry entry) {
List<AclBinding> aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
List<AclBinding> aclBindingList = Collections.emptyList();
try {
aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
} catch (Exception ex) {
if (ex.getCause() instanceof SecurityDisabledException) {
Throwable e = ex.getCause();
log.info("SecurityDisabledException: {}", e.getMessage());
Map<String, String> hint = new HashMap<>(2);
hint.put("hint", "Security Disabled: " + e.getMessage());
return ResponseData.create().data(hint).success();
}
throw new RuntimeException(ex.getCause());
}
// List<AclBinding> aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
List<AclEntry> entryList = aclBindingList.stream().map(x -> AclEntry.valueOf(x)).collect(Collectors.toList());
Map<String, List<AclEntry>> entryMap = entryList.stream().collect(Collectors.groupingBy(AclEntry::getPrincipal));
Map<String, Object> resultMap = new HashMap<>();
entryMap.forEach((k, v) -> {
Map<String, List<AclEntry>> map = v.stream().collect(Collectors.groupingBy(e -> e.getResourceType() + "#" + e.getName()));
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (k.equals(username)) {
Map<String, Object> map2 = new HashMap<>(map);
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
}
// String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
// if (k.equals(username)) {
// Map<String, Object> map2 = new HashMap<>(map);
// Map<String, Object> userMap = new HashMap<>();
// userMap.put("role", "admin");
// map2.put("USER", userMap);
// }
resultMap.put(k, map);
});
if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (!u.name().equals(username)) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
resultMap.put(u.name(), map2);
}
}
});
}
// if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
// Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
//
// detailList.values().forEach(u -> {
// if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
// String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
// if (!u.name().equals(username)) {
// resultMap.put(u.name(), Collections.emptyMap());
// } else {
// Map<String, Object> map2 = new HashMap<>();
// Map<String, Object> userMap = new HashMap<>();
// userMap.put("role", "admin");
// map2.put("USER", userMap);
// resultMap.put(u.name(), map2);
// }
// }
// });
// }
return ResponseData.create().data(new CounterMap<>(resultMap)).success();
}
@@ -236,6 +242,37 @@ public class AclServiceImpl implements AclService {
return ResponseData.create().data(vo).success();
}
@Override
public ResponseData clearAcl(AclEntry entry) {
log.info("Start clear acl, principal: {}", entry);
return aclConsole.deleteUserAcl(entry) ? ResponseData.create().success() : ResponseData.create().failed("Operation failed");
}
@Override
public ResponseData getSaslScramUserList(AclEntry entry) {
Map<String, Object> resultMap = new HashMap<>();
if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (!u.name().equals(username)) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
resultMap.put(u.name(), map2);
}
}
});
}
return ResponseData.create().data(new CounterMap<>(resultMap)).success();
}
// @Override public void afterSingletonsInstantiated() {
// if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
// log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());

View File

@@ -0,0 +1,96 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.LoginResult;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
import com.xuxd.kafka.console.cache.RolePermCache;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.service.AuthService;
import com.xuxd.kafka.console.utils.AuthUtil;
import com.xuxd.kafka.console.utils.UUIDStrUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/14 19:01
**/
@Slf4j
@Service
public class AuthServiceImpl implements AuthService {
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final AuthConfig authConfig;
private final RolePermCache rolePermCache;
public AuthServiceImpl(SysUserMapper userMapper,
SysRoleMapper roleMapper,
AuthConfig authConfig,
RolePermCache rolePermCache) {
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.authConfig = authConfig;
this.rolePermCache = rolePermCache;
}
@Override
public ResponseData login(LoginUserDTO userDTO) {
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("username", userDTO.getUsername());
SysUserDO userDO = userMapper.selectOne(queryWrapper);
if (userDO == null) {
return ResponseData.create().failed("用户名/密码不正确");
}
String encrypt = UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt());
if (!userDO.getPassword().equals(encrypt)) {
return ResponseData.create().failed("用户名/密码不正确");
}
Credentials credentials = new Credentials();
credentials.setUsername(userDO.getUsername());
credentials.setExpiration(System.currentTimeMillis() + authConfig.getExpireHours() * 3600L * 1000);
String token = AuthUtil.generateToken(authConfig.getSecret(), credentials);
LoginResult loginResult = new LoginResult();
List<String> permissions = new ArrayList<>();
String roleIds = userDO.getRoleIds();
if (StringUtils.isNotEmpty(roleIds)) {
List<String> roleIdList = Arrays.stream(roleIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).collect(Collectors.toList());
roleIdList.forEach(roleId -> {
Long rId = Long.valueOf(roleId);
SysRoleDO roleDO = roleMapper.selectById(rId);
if (roleDO == null) {
    log.error("Role does not exist: {}", roleId);
    return;
}
String permissionIds = roleDO.getPermissionIds();
if (StringUtils.isNotEmpty(permissionIds)) {
List<Long> permIds = Arrays.stream(permissionIds.split(",")).map(String::trim).
filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
permIds.forEach(id -> {
String permission = rolePermCache.getPermCache().get(id).getPermission();
if (StringUtils.isNotEmpty(permission)) {
permissions.add(permission);
} else {
log.error("角色:{}权限id: {},不存在", roleId, id);
}
});
}
});
}
loginResult.setToken(token);
loginResult.setPermissions(permissions);
return ResponseData.create().data(loginResult).success();
}
}
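AuthUtil.generateToken and AuthUtil.parseToken are used here but not shown. Purely illustrative: one way a token pair with those signatures could work, signing "username:expiration" with HMAC-SHA256; the real AuthUtil may differ entirely:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class TokenSketch {

    private static String sign(String secret, String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
            .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }

    public static String generateToken(String secret, String username, long expiration) throws Exception {
        String payload = username + ":" + expiration;
        String encoded = Base64.getUrlEncoder().withoutPadding()
            .encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return encoded + "." + sign(secret, payload);
    }

    // Returns the username, or null where the real code would mark the
    // Credentials as invalid (bad signature or expired).
    public static String parseToken(String secret, String token) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 2) {
            return null;
        }
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        if (!sign(secret, payload).equals(parts[1])) {
            return null;
        }
        int sep = payload.lastIndexOf(':');
        long expiration = Long.parseLong(payload.substring(sep + 1));
        return expiration < System.currentTimeMillis() ? null : payload.substring(0, sep);
    }
}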

View File

@@ -0,0 +1,172 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.vo.ClientQuotaEntityVO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import kafka.console.ClientQuotaConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author 晓东哥哥
*/
@Slf4j
@Service
public class ClientQuotaServiceImpl implements ClientQuotaService {
private final ClientQuotaConsole clientQuotaConsole;
private final Map<String, String> typeDict = new HashMap<>();
private final Map<String, String> configDict = new HashMap<>();
private final String USER = "user";
private final String CLIENT_ID = "client-id";
private final String IP = "ip";
private final String USER_CLIENT = "user&client-id";
{
typeDict.put(USER, ClientQuotaEntity.USER);
typeDict.put(CLIENT_ID, ClientQuotaEntity.CLIENT_ID);
typeDict.put(IP, ClientQuotaEntity.IP);
typeDict.put(USER_CLIENT, USER_CLIENT);
configDict.put("producerRate", QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG);
configDict.put("consumerRate", QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG);
configDict.put("requestPercentage", QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG);
}
public ClientQuotaServiceImpl(ClientQuotaConsole clientQuotaConsole) {
this.clientQuotaConsole = clientQuotaConsole;
}
@Override
public ResponseData getClientQuotaConfigs(List<String> types, List<String> names) {
List<String> entityNames = names == null ? Collections.emptyList() : new ArrayList<>(names);
List<String> entityTypes = types.stream().map(e -> typeDict.get(e)).filter(e -> e != null).collect(Collectors.toList());
if (entityTypes.isEmpty() || entityTypes.size() != types.size()) {
throw new IllegalArgumentException("types illegal.");
}
boolean userAndClientFilterClientOnly = false;
// only type: [user and client-id], type.size == 2
if (entityTypes.size() == 2) {
if (entityNames.size() == 2 && StringUtils.isBlank(entityNames.get(0)) && StringUtils.isNotBlank(entityNames.get(1))) {
userAndClientFilterClientOnly = true;
}
}
Map<ClientQuotaEntity, Map<String, Object>> clientQuotasConfigs = clientQuotaConsole.getClientQuotasConfigs(entityTypes,
userAndClientFilterClientOnly ? Collections.emptyList() : entityNames);
List<ClientQuotaEntityVO> voList = clientQuotasConfigs.entrySet().stream().map(entry -> ClientQuotaEntityVO.from(
entry.getKey(), entityTypes, entry.getValue())).collect(Collectors.toList());
if (!userAndClientFilterClientOnly) {
return ResponseData.create().data(voList).success();
}
List<ClientQuotaEntityVO> list = voList.stream().filter(e -> entityNames.get(1).equals(e.getClient())).collect(Collectors.toList());
return ResponseData.create().data(list).success();
}
@Override
public ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request) {
if (StringUtils.isEmpty(request.getType()) || !typeDict.containsKey(request.getType())) {
return ResponseData.create().failed("Unknown type.");
}
List<String> types = new ArrayList<>();
List<String> names = new ArrayList<>();
parseTypesAndNames(request, types, names, request.getType());
Map<String, String> configsToBeAddedMap = new HashMap<>();
if (StringUtils.isNotEmpty(request.getProducerRate())) {
configsToBeAddedMap.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getProducerRate()))));
}
if (StringUtils.isNotEmpty(request.getConsumerRate())) {
configsToBeAddedMap.put(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getConsumerRate()))));
}
if (StringUtils.isNotEmpty(request.getRequestPercentage())) {
configsToBeAddedMap.put(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getRequestPercentage()))));
}
Tuple2<Object, String> tuple2 = clientQuotaConsole.addQuotaConfigs(types, names, configsToBeAddedMap);
if (!(Boolean) tuple2._1) {
return ResponseData.create().failed(tuple2._2);
}
if (CollectionUtils.isNotEmpty(request.getDeleteConfigs())) {
List<String> delete = request.getDeleteConfigs().stream().map(key -> configDict.get(key)).collect(Collectors.toList());
Tuple2<Object, String> tuple2Del = clientQuotaConsole.deleteQuotaConfigs(types, names, delete);
if (!(Boolean) tuple2Del._1) {
return ResponseData.create().failed(tuple2Del._2);
}
}
return ResponseData.create().success();
}
@Override
public ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request) {
if (StringUtils.isEmpty(request.getType()) || !typeDict.containsKey(request.getType())) {
return ResponseData.create().failed("Unknown type.");
}
List<String> types = new ArrayList<>();
List<String> names = new ArrayList<>();
parseTypesAndNames(request, types, names, request.getType());
List<String> configs = new ArrayList<>();
configs.add(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG);
configs.add(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG);
configs.add(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG);
Tuple2<Object, String> tuple2 = clientQuotaConsole.deleteQuotaConfigs(types, names, configs);
if (!(Boolean) tuple2._1) {
return ResponseData.create().failed(tuple2._2);
}
return ResponseData.create().success();
}
private void parseTypesAndNames(AlterClientQuotaDTO request, List<String> types, List<String> names, String type) {
switch (request.getType()) {
case USER:
getTypesAndNames(request, types, names, USER);
break;
case CLIENT_ID:
getTypesAndNames(request, types, names, CLIENT_ID);
break;
case IP:
getTypesAndNames(request, types, names, IP);
break;
case USER_CLIENT:
getTypesAndNames(request, types, names, USER);
getTypesAndNames(request, types, names, CLIENT_ID);
break;
}
}
private void getTypesAndNames(AlterClientQuotaDTO request, List<String> types, List<String> names, String type) {
int index = request.getTypes().indexOf(type);
if (index < 0) {
throw new IllegalArgumentException("Does not contain the type: " + type);
}
types.add(request.getTypes().get(index));
if (CollectionUtils.isNotEmpty(request.getNames()) && request.getNames().size() > index) {
names.add(request.getNames().get(index));
} else {
names.add("");
}
}
}
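For the combined "user&client-id" type, getTypesAndNames expects request.getTypes() and request.getNames() to be parallel lists. A hypothetical request under that contract; the setter names are assumptions mirroring the getters used above:

import java.util.Arrays;

public class QuotaRequestExample {
    public static void main(String[] args) {
        // All setter names below are assumptions (e.g. Lombok-generated).
        AlterClientQuotaDTO dto = new AlterClientQuotaDTO();
        dto.setType("user&client-id");
        dto.setTypes(Arrays.asList("user", "client-id"));
        dto.setNames(Arrays.asList("alice", "client-1")); // parallel to types
        dto.setProducerRate("1048576"); // bytes/second, applied as producer_byte_rate
    }
}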

View File

@@ -1,15 +1,23 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.BrokerNode;
import com.xuxd.kafka.console.beans.ClusterInfo;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.vo.BrokerApiVersionVO;
import com.xuxd.kafka.console.beans.vo.ClusterInfoVO;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.service.ClusterService;
import java.util.List;
import java.util.*;
import java.util.stream.Collectors;
import kafka.console.ClusterConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.NodeApiVersions;
import org.apache.kafka.common.Node;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Service;
@@ -19,6 +27,7 @@ import org.springframework.stereotype.Service;
* @author xuxd
* @date 2021-10-08 14:23:09
**/
@Slf4j
@Service
public class ClusterServiceImpl implements ClusterService {
@@ -33,7 +42,14 @@ public class ClusterServiceImpl implements ClusterService {
}
@Override public ResponseData getClusterInfo() {
return ResponseData.create().data(clusterConsole.clusterInfo()).success();
ClusterInfo clusterInfo = clusterConsole.clusterInfo();
Set<BrokerNode> nodes = clusterInfo.getNodes();
if (nodes == null) {
log.error("集群节点信息为空,集群地址可能不正确或集群内没有活跃节点");
return ResponseData.create().failed("集群节点信息为空,集群地址可能不正确或集群内没有活跃节点");
}
clusterInfo.setNodes(new TreeSet<>(nodes));
return ResponseData.create().data(clusterInfo).success();
}
@Override public ResponseData getClusterInfoList() {
@@ -57,6 +73,10 @@ public class ClusterServiceImpl implements ClusterService {
}
@Override public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
if (infoDO.getProperties() == null) {
// A null value would be skipped on update (a bug here), so set an empty string as a workaround.
infoDO.setProperties("");
}
clusterInfoMapper.updateById(infoDO);
return ResponseData.create().success();
}
@@ -69,4 +89,29 @@ public class ClusterServiceImpl implements ClusterService {
return ResponseData.create().data(dos.stream().findFirst().map(ClusterInfoVO::from)).success();
}
@Override public ResponseData getBrokerApiVersionInfo() {
HashMap<Node, NodeApiVersions> map = clusterConsole.listBrokerVersionInfo();
List<BrokerApiVersionVO> list = new ArrayList<>(map.size());
map.forEach(((node, versions) -> {
BrokerApiVersionVO vo = new BrokerApiVersionVO();
vo.setBrokerId(node.id());
vo.setHost(node.host() + ":" + node.port());
vo.setSupportNums(versions.allSupportedApiVersions().size());
String versionInfo = versions.toString(true);
int from = 0;
int count = 0;
int index = -1;
while ((index = versionInfo.indexOf("UNSUPPORTED", from)) >= 0 && from < versionInfo.length()) {
count++;
from = index + 1;
}
vo.setUnSupportNums(count);
versionInfo = versionInfo.substring(1, versionInfo.length() - 2);
vo.setVersionInfo(Arrays.asList(StringUtils.split(versionInfo, ",")));
list.add(vo);
}));
Collections.sort(list, Comparator.comparingInt(BrokerApiVersionVO::getBrokerId));
return ResponseData.create().data(list).success();
}
}
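The indexOf loop in getBrokerApiVersionInfo counts occurrences of "UNSUPPORTED" by hand. The StringUtils already imported in this class offers a drop-in equivalent:

// Equivalent to the counting loop above:
vo.setUnSupportNums(StringUtils.countMatches(versionInfo, "UNSUPPORTED"));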

View File

@@ -24,6 +24,7 @@ import kafka.console.TopicConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
@@ -242,6 +243,18 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return success ? ResponseData.create().success("success: " + tuple2._2()) : ResponseData.create().failed(tuple2._2());
}
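// Note: RecordsToDelete.beforeOffset(offset) truncates the partition up to
// (and excluding) the given offset, so deleting a "message" this way removes
// every earlier record in that partition as well, not just a single record.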
@Override
public ResponseData delete(List<QueryMessage> messages) {
Map<TopicPartition, RecordsToDelete> params = new HashMap<>(messages.size(), 1f);
messages.forEach(message -> {
params.put(new TopicPartition(message.getTopic(), message.getPartition()), RecordsToDelete.beforeOffset(message.getOffset()));
});
Tuple2<Object, String> tuple2 = messageConsole.delete(params);
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
private Map<TopicPartition, ConsumerRecord<byte[], byte[]>> searchRecordByOffset(QueryMessage queryMessage) {
Set<TopicPartition> partitions = getPartitions(queryMessage);

View File

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.google.common.collect.Lists;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import com.xuxd.kafka.console.beans.ResponseData;
@@ -10,6 +11,7 @@ import com.xuxd.kafka.console.beans.vo.OffsetAlignmentVO;
import com.xuxd.kafka.console.dao.MinOffsetAlignmentMapper;
import com.xuxd.kafka.console.service.OperationService;
import com.xuxd.kafka.console.utils.GsonUtil;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
@@ -19,6 +21,7 @@ import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.OperationConsole;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.PartitionReassignment;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.ObjectProvider;
@@ -162,4 +165,21 @@ public class OperationServiceImpl implements OperationService {
}
return ResponseData.create().success();
}
@Override public ResponseData proposedAssignments(String topic, List<Integer> brokerList) {
Map<String, Object> params = new HashMap<>();
params.put("version", 1);
Map<String, String> topicMap = new HashMap<>(1, 1.0f);
topicMap.put("topic", topic);
params.put("topics", Lists.newArrayList(topicMap));
List<String> list = brokerList.stream().map(String::valueOf).collect(Collectors.toList());
Map<TopicPartition, List<Object>> assignments = operationConsole.proposedAssignments(gson.toJson(params), StringUtils.join(list, ","));
List<CurrentReassignmentVO> res = new ArrayList<>(assignments.size());
assignments.forEach((tp, replicas) -> {
CurrentReassignmentVO vo = new CurrentReassignmentVO(tp.topic(), tp.partition(),
replicas.stream().map(x -> (Integer) x).collect(Collectors.toList()), null, null);
res.add(vo);
});
return ResponseData.create().data(res).success();
}
}

View File

@@ -9,16 +9,6 @@ import com.xuxd.kafka.console.beans.vo.TopicDescriptionVO;
import com.xuxd.kafka.console.beans.vo.TopicPartitionVO;
import com.xuxd.kafka.console.service.TopicService;
import com.xuxd.kafka.console.utils.GsonUtil;
import java.util.Calendar;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
import lombok.extern.slf4j.Slf4j;
@@ -33,6 +23,10 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
/**
* kafka-console-ui.
*
@@ -87,8 +81,8 @@ public class TopicServiceImpl implements TopicService {
return ResponseData.create().data(topicDescriptions.stream().map(d -> TopicDescriptionVO.from(d))).success();
}
@Override public ResponseData deleteTopic(String topic) {
Tuple2<Object, String> tuple2 = topicConsole.deleteTopic(topic);
@Override public ResponseData deleteTopics(Collection<String> topics) {
Tuple2<Object, String> tuple2 = topicConsole.deleteTopics(topics);
return (Boolean) tuple2._1 ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2);
}

View File

@@ -0,0 +1,230 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
import com.xuxd.kafka.console.beans.vo.SysPermissionVO;
import com.xuxd.kafka.console.beans.vo.SysRoleVO;
import com.xuxd.kafka.console.beans.vo.SysUserVO;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.service.UserManageService;
import com.xuxd.kafka.console.utils.RandomStringUtil;
import com.xuxd.kafka.console.utils.UUIDStrUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/4/11 21:24
**/
@Slf4j
@Service
public class UserManageServiceImpl implements UserManageService {
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final ApplicationEventPublisher publisher;
public UserManageServiceImpl(ObjectProvider<SysUserMapper> userMapper,
ObjectProvider<SysRoleMapper> roleMapper,
ObjectProvider<SysPermissionMapper> permissionMapper,
ApplicationEventPublisher publisher) {
this.userMapper = userMapper.getIfAvailable();
this.roleMapper = roleMapper.getIfAvailable();
this.permissionMapper = permissionMapper.getIfAvailable();
this.publisher = publisher;
}
@Override
public ResponseData addPermission(SysPermissionDTO permissionDTO) {
permissionMapper.insert(permissionDTO.toSysPermissionDO());
return ResponseData.create().success();
}
@Override
public ResponseData addOrUdpateRole(SysRoleDTO roleDTO) {
SysRoleDO roleDO = roleDTO.toDO();
if (roleDO.getId() == null) {
roleMapper.insert(roleDO);
} else {
roleMapper.updateById(roleDO);
}
publisher.publishEvent(new RolePermUpdateEvent(this));
return ResponseData.create().success();
}
@Override
public ResponseData addOrUpdateUser(SysUserDTO userDTO) {
if (userDTO.getId() == null) {
if (StringUtils.isEmpty(userDTO.getPassword())) {
userDTO.setPassword(RandomStringUtil.random6Str());
}
SysUserDO userDO = userDTO.toDO();
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq(true, "username", userDO.getUsername());
SysUserDO exist = userMapper.selectOne(queryWrapper);
if (exist != null) {
return ResponseData.create().failed("用户已存在:" + userDO.getUsername());
}
userDO.setSalt(UUIDStrUtil.random());
userDO.setPassword(UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt()));
userMapper.insert(userDO);
} else {
SysUserDO userDO = userMapper.selectById(userDTO.getId());
if (userDO == null) {
log.error("查不到用户: {}", userDTO.getId());
return ResponseData.create().failed("Unknown User.");
}
// Decide whether the password should be reset
if (userDTO.getResetPassword()) {
userDTO.setPassword(RandomStringUtil.random6Str());
userDO.setSalt(UUIDStrUtil.random());
userDO.setPassword(UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt()));
}
userDO.setRoleIds(userDTO.getRoleIds());
userDO.setUsername(userDTO.getUsername());
userMapper.updateById(userDO);
}
return ResponseData.create().data(userDTO.getPassword()).success();
}
@Override
public ResponseData selectRole() {
List<SysRoleDO> dos = roleMapper.selectList(new QueryWrapper<>());
return ResponseData.create().data(dos.stream().map(SysRoleVO::from).collect(Collectors.toList())).success();
}
@Override
public ResponseData selectPermission() {
QueryWrapper<SysPermissionDO> queryWrapper = new QueryWrapper<>();
List<SysPermissionDO> permissionDOS = permissionMapper.selectList(queryWrapper);
List<SysPermissionVO> vos = new ArrayList<>();
Map<Long, Integer> posMap = new HashMap<>();
Map<Long, SysPermissionVO> voMap = new HashMap<>();
Iterator<SysPermissionDO> iterator = permissionDOS.iterator();
while (iterator.hasNext()) {
SysPermissionDO permissionDO = iterator.next();
if (permissionDO.getParentId() == null) {
// Top-level menu entry (no parent)
SysPermissionVO vo = SysPermissionVO.from(permissionDO);
vos.add(vo);
int index = vos.size() - 1;
// Remember its position in the result list
posMap.put(permissionDO.getId(), index);
iterator.remove();
}
}
// All menus were handled above; the remaining records are buttons that must be
// attached to their parents. Repeat until everything is placed, but guard
// against orphan records whose parent never appears.
while (!permissionDOS.isEmpty()) {
boolean attachedAny = false;
iterator = permissionDOS.iterator();
while (iterator.hasNext()) {
SysPermissionDO permissionDO = iterator.next();
Long parentId = permissionDO.getParentId();
if (posMap.containsKey(parentId)) {
// Button directly under a menu
SysPermissionVO vo = SysPermissionVO.from(permissionDO);
Integer index = posMap.get(parentId);
SysPermissionVO menuVO = vos.get(index);
if (menuVO.getChildren() == null) {
menuVO.setChildren(new ArrayList<>());
}
menuVO.getChildren().add(vo);
voMap.put(permissionDO.getId(), vo);
iterator.remove();
attachedAny = true;
} else if (voMap.containsKey(parentId)) {
// Button nested under another button
SysPermissionVO vo = SysPermissionVO.from(permissionDO);
SysPermissionVO buttonVO = voMap.get(parentId);
if (buttonVO.getChildren() == null) {
buttonVO.setChildren(new ArrayList<>());
}
buttonVO.getChildren().add(vo);
voMap.put(permissionDO.getId(), vo);
iterator.remove();
attachedAny = true;
}
}
if (!attachedAny) {
// Without this guard, orphan permissions would make the loop spin forever.
log.warn("Skip orphan permissions whose parent cannot be resolved: {}", permissionDOS);
break;
}
}
return ResponseData.create().data(vos).success();
}
@Override
public ResponseData selectUser() {
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
List<SysUserDO> userDOS = userMapper.selectList(queryWrapper);
List<SysRoleDO> roleDOS = roleMapper.selectList(null);
Map<Long, SysRoleDO> roleDOMap = roleDOS.stream().collect(Collectors.toMap(SysRoleDO::getId, Function.identity(), (e1, e2) -> e1));
List<SysUserVO> voList = userDOS.stream().map(SysUserVO::from).collect(Collectors.toList());
voList.forEach(vo -> {
if (vo.getRoleIds() != null) {
Long roleId = Long.valueOf(vo.getRoleIds());
vo.setRoleNames(roleDOMap.containsKey(roleId) ? roleDOMap.get(roleId).getRoleName() : null);
}
});
return ResponseData.create().data(voList).success();
}
@Override
public ResponseData updateUser(SysUserDTO userDTO) {
userMapper.updateById(userDTO.toDO());
return ResponseData.create().success();
}
@Override
public ResponseData updateRole(SysRoleDTO roleDTO) {
roleMapper.updateById(roleDTO.toDO());
publisher.publishEvent(new RolePermUpdateEvent(this));
return ResponseData.create().success();
}
@Override
public ResponseData deleteRole(Long id) {
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq(true, "role_ids", id);
Integer count = userMapper.selectCount(queryWrapper);
if (count > 0) {
return ResponseData.create().failed("存在用户被分配为当前角色,不允许删除");
}
roleMapper.deleteById(id);
publisher.publishEvent(new RolePermUpdateEvent(this));
return ResponseData.create().success();
}
@Override
public ResponseData deleteUser(Long id) {
userMapper.deleteById(id);
return ResponseData.create().success();
}
@Override
public ResponseData updatePassword(SysUserDTO userDTO) {
SysUserDO userDO = userDTO.toDO();
userDO.setSalt(UUIDStrUtil.random());
userDO.setPassword(UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt()));
QueryWrapper<SysUserDO> wrapper = new QueryWrapper<>();
wrapper.eq("username", userDTO.getUsername());
userMapper.update(userDO, wrapper);
return ResponseData.create().success();
}
}


@@ -0,0 +1,54 @@
package com.xuxd.kafka.console.utils;
import com.google.gson.Gson;
import com.xuxd.kafka.console.beans.Credentials;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.Base64Utils;
import java.nio.charset.StandardCharsets;
/**
* @author: xuxd
* @date: 2023/5/14 19:34
**/
@Slf4j
public class AuthUtil {
private static final Gson gson = GsonUtil.INSTANCE.get();
public static String generateToken(String secret, Credentials info) {
String json = gson.toJson(info);
String str = json + secret;
String signature = MD5Util.md5(str);
return Base64Utils.encodeToString(json.getBytes(StandardCharsets.UTF_8)) + "." +
Base64Utils.encodeToString(signature.getBytes(StandardCharsets.UTF_8));
}
public static boolean isToken(String token) {
return token.split("\\.").length == 2;
}
public static Credentials parseToken(String secret, String token) {
if (!isToken(token)) {
return Credentials.INVALID;
}
String[] arr = token.split("\\.");
String infoStr = new String(Base64Utils.decodeFromString(arr[0]), StandardCharsets.UTF_8);
String signature = new String(Base64Utils.decodeFromString(arr[1]), StandardCharsets.UTF_8);
String encrypt = MD5Util.md5(infoStr + secret);
if (!encrypt.equals(signature)) {
return Credentials.INVALID;
}
try {
Credentials credentials = gson.fromJson(infoStr, Credentials.class);
if (credentials.getExpiration() < System.currentTimeMillis()) {
return Credentials.INVALID;
}
return credentials;
} catch (Exception e) {
log.error("解析token失败: {}", token, e);
return Credentials.INVALID;
}
}
}
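
For illustration, here is a minimal round-trip of the token scheme above. This is a sketch: the Credentials setters are assumptions (only getExpiration() is visible in this diff), and the secret is a placeholder for the configured server-side value.

import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.utils.AuthUtil;

public class AuthUtilDemo {
    public static void main(String[] args) {
        Credentials info = new Credentials();
        // Assumed accessors; the actual field names may differ.
        info.setUsername("admin");
        info.setExpiration(System.currentTimeMillis() + 24 * 60 * 60 * 1000L);
        String secret = "server-side-secret";
        // Token layout: base64(json) + "." + base64(md5(json + secret)).
        String token = AuthUtil.generateToken(secret, info);
        // Tampering with either part, or an expired timestamp, yields Credentials.INVALID.
        Credentials parsed = AuthUtil.parseToken(secret, token);
        System.out.println(parsed != Credentials.INVALID);
    }
}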


@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.utils;
import lombok.extern.slf4j.Slf4j;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
/**
* @author: xuxd
* @date: 2023/5/14 20:25
**/
@Slf4j
public class MD5Util {
public static MessageDigest getInstance() {
try {
return MessageDigest.getInstance("MD5");
} catch (NoSuchAlgorithmException e) {
// Every standard JVM ships MD5, so this should be unreachable.
log.error("MD5 algorithm not available.", e);
return null;
}
}
public static String md5(String str) {
MessageDigest digest = getInstance();
if (digest == null) {
return null;
}
// Hex-encode the digest: decoding raw MD5 bytes as UTF-8 is lossy and
// would corrupt the signature comparison in AuthUtil.
byte[] bytes = digest.digest(str.getBytes(StandardCharsets.UTF_8));
StringBuilder sb = new StringBuilder(bytes.length * 2);
for (byte b : bytes) {
sb.append(String.format("%02x", b));
}
return sb.toString();
}
}


@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.utils;
import java.security.SecureRandom;
/**
* @author: xuxd
* @date: 2023/5/8 9:19
**/
public class RandomStringUtil {
private final static String ALLOWED_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
public static String random6Str() {
return generateRandomString(6);
}
public static String generateRandomString(int length) {
// Use a cryptographically strong generator: these strings become default passwords.
SecureRandom random = new SecureRandom();
StringBuilder sb = new StringBuilder(length);
for (int i = 0; i < length; i++) {
int index = random.nextInt(ALLOWED_CHARS.length());
sb.append(ALLOWED_CHARS.charAt(index));
}
return sb.toString();
}
}


@@ -0,0 +1,22 @@
package com.xuxd.kafka.console.utils;
import java.util.UUID;
/**
* @author: xuxd
* @date: 2023/5/6 13:30
**/
public class UUIDStrUtil {
public static String random() {
return UUID.randomUUID().toString();
}
public static String generate(String ... strs) {
StringBuilder sb = new StringBuilder();
for (String str : strs) {
sb.append(str);
}
return UUID.nameUUIDFromBytes(sb.toString().getBytes()).toString();
}
}
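
Read together with UserManageServiceImpl above, this is the whole password scheme: a random salt is stored next to generate(plaintext, salt), and a login check simply recomputes and compares. A minimal sketch of that assumed flow (the login code itself is not part of this diff):

import com.xuxd.kafka.console.utils.UUIDStrUtil;

public class PasswordHashDemo {
    public static void main(String[] args) {
        // What addOrUpdateUser persists for a new user.
        String salt = UUIDStrUtil.random();
        String stored = UUIDStrUtil.generate("initial-password", salt);
        // Verification: recompute the name-based UUID with the stored salt.
        boolean ok = stored.equals(UUIDStrUtil.generate("initial-password", salt));
        System.out.println(ok); // true
    }
}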


@@ -12,6 +12,14 @@ kafka:
# Other cluster properties
properties:
# request.timeout.ms: 5000
# Connection caching. Without caching, a new connection is created for every
# request. That is usually fast, but with ACL enabled some queries can be slow;
# set the options below to true to reuse connections and speed up queries.
# Cache the admin client connection
cache-admin-connection: false
# Cache the producer connection
cache-producer-connection: false
# Cache the consumer connection
cache-consumer-connection: false
spring:
application:
@@ -39,3 +47,9 @@ logging:
cron:
# clear-dirty-user: 0 * * * * ?
clear-dirty-user: 0 0 1 * * ?
# Authentication: when set to true, login is required before the console can be accessed
auth:
enable: false
# Expiration time of a login user's token, in hours
expire-hours: 24


@@ -1,5 +1,109 @@
-- DELETE FROM t_kafka_user;
--
-- INSERT INTO t_kafka_user (id, username, password) VALUES
-- (1, 'Jone', 'p1'),
-- (2, 'Jack', 'p2');
-- Do not modify the comments below casually: initial data loading is driven by them.
-- t_sys_permission start--
insert into t_sys_permission(id, name,type,parent_id,permission) values(0,'Home',0,null,'home');
insert into t_sys_permission(id, name,type,parent_id,permission) values(11,'Cluster',0,null,'cluster');
insert into t_sys_permission(id, name,type,parent_id,permission) values(12,'Property config',1,11,'cluster:property-config');
insert into t_sys_permission(id, name,type,parent_id,permission) values(13,'Log config',1,11,'cluster:log-config');
insert into t_sys_permission(id, name,type,parent_id,permission) values(14,'Edit config',1,11,'cluster:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(21,'Topic',0,null,'topic');
insert into t_sys_permission(id, name,type,parent_id,permission) values(22,'Refresh',1,21,'topic:load');
insert into t_sys_permission(id, name,type,parent_id,permission) values(23,'Add',1,21,'topic:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(24,'Batch delete',1,21,'topic:batch-del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(25,'Delete',1,21,'topic:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(26,'Partition detail',1,21,'topic:partition-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(27,'Preferred replica as leader',1,26,'topic:partition-detail:preferred');
insert into t_sys_permission(id, name,type,parent_id,permission) values(28,'Add partitions',1,21,'topic:partition-add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(29,'Consumption detail',1,21,'topic:consumer-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(30,'Property config',1,21,'topic:property-config');
insert into t_sys_permission(id, name,type,parent_id,permission) values(31,'Modify replicas',1,21,'topic:replication-modify');
insert into t_sys_permission(id, name,type,parent_id,permission) values(32,'Send statistics',1,21,'topic:send-count');
insert into t_sys_permission(id, name,type,parent_id,permission) values(33,'Throttle',1,21,'topic:replication-sync-throttle');
insert into t_sys_permission(id, name,type,parent_id,permission) values(34,'Edit property config',1,30,'topic:property-config:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(35,'Delete property config',1,30,'topic:property-config:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(41,'Consumer group',0,null,'group');
insert into t_sys_permission(id, name,type,parent_id,permission) values(42,'Add subscription',1,41,'group:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(43,'Delete',1,41,'group:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(44,'Consumer clients',1,41,'group:client');
insert into t_sys_permission(id, name,type,parent_id,permission) values(45,'Consumption detail',1,41,'group:consumer-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(46,'Earliest offset',1,45,'group:consumer-detail:min');
insert into t_sys_permission(id, name,type,parent_id,permission) values(47,'Latest offset',1,45,'group:consumer-detail:last');
insert into t_sys_permission(id, name,type,parent_id,permission) values(48,'Timestamp',1,45,'group:consumer-detail:timestamp');
insert into t_sys_permission(id, name,type,parent_id,permission) values(49,'Reset offset',1,45,'group:consumer-detail:any');
insert into t_sys_permission(id, name,type,parent_id,permission) values(50,'Offset partitions',1,41,'group:offset-partition');
insert into t_sys_permission(id, name,type,parent_id,permission) values(61,'Message',0,null,'message');
insert into t_sys_permission(id, name,type,parent_id,permission) values(62,'Search by time',1,61,'message:search-time');
insert into t_sys_permission(id, name,type,parent_id,permission) values(63,'Search by offset',1,61,'message:search-offset');
insert into t_sys_permission(id, name,type,parent_id,permission) values(64,'Send online',1,61,'message:send');
insert into t_sys_permission(id, name,type,parent_id,permission) values(65,'Delete online',1,61,'message:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(66,'Message detail',1,61,'message:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(67,'Resend',1,61,'message:resend');
insert into t_sys_permission(id, name,type,parent_id,permission) values(80,'Quota',0,null,'quota');
insert into t_sys_permission(id, name,type,parent_id,permission) values(81,'User',1,80,'quota:user');
insert into t_sys_permission(id, name,type,parent_id,permission) values(82,'Add config',1,81,'quota:user:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(83,'Client ID',1,80,'quota:client');
insert into t_sys_permission(id, name,type,parent_id,permission) values(84,'Add config',1,83,'quota:client:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(85,'User and client ID',1,80,'quota:user-client');
insert into t_sys_permission(id, name,type,parent_id,permission) values(86,'Add config',1,85,'quota:user-client:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(87,'Delete',1,80,'quota:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(88,'Edit',1,80,'quota:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(100,'Acl',0,null,'acl');
insert into t_sys_permission(id, name,type,parent_id,permission) values(101,'Resource authorization',1,100,'acl:authority');
insert into t_sys_permission(id, name,type,parent_id,permission) values(102,'Add principal permission',1,101,'acl:authority:add-principal');
insert into t_sys_permission(id, name,type,parent_id,permission) values(103,'Permission detail',1,101,'acl:authority:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(104,'Manage produce permission',1,101,'acl:authority:producer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(105,'Manage consume permission',1,101,'acl:authority:consumer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(106,'Add permission',1,101,'acl:authority:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(107,'Clear permissions',1,101,'acl:authority:clean');
insert into t_sys_permission(id, name,type,parent_id,permission) values(108,'SaslScram user management',1,100,'acl:sasl-scram');
insert into t_sys_permission(id, name,type,parent_id,permission) values(109,'Add/update user',1,108,'acl:sasl-scram:add-update');
insert into t_sys_permission(id, name,type,parent_id,permission) values(110,'Detail',1,108,'acl:sasl-scram:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(111,'Delete',1,108,'acl:sasl-scram:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(112,'Manage produce permission',1,108,'acl:sasl-scram:producer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(113,'Manage consume permission',1,108,'acl:sasl-scram:consumer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(114,'Add permission',1,108,'acl:sasl-scram:add-auth');
insert into t_sys_permission(id, name,type,parent_id,permission) values(115,'Purge',1,108,'acl:sasl-scram:pure');
insert into t_sys_permission(id, name,type,parent_id,permission) values(140,'User',0,null,'user-manage');
insert into t_sys_permission(id, name,type,parent_id,permission) values(141,'User list',1,140,'user-manage:user');
insert into t_sys_permission(id, name,type,parent_id,permission) values(142,'Add',1,141,'user-manage:user:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(143,'Delete',1,141,'user-manage:user:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(144,'Reset password',1,141,'user-manage:user:reset-pass');
insert into t_sys_permission(id, name,type,parent_id,permission) values(145,'Assign role',1,141,'user-manage:user:change-role');
insert into t_sys_permission(id, name,type,parent_id,permission) values(146,'Role list',1,140,'user-manage:role');
insert into t_sys_permission(id, name,type,parent_id,permission) values(147,'Save',1,146,'user-manage:role:save');
insert into t_sys_permission(id, name,type,parent_id,permission) values(148,'Delete',1,146,'user-manage:role:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(149,'Permission list',1,140,'user-manage:permission');
insert into t_sys_permission(id, name,type,parent_id,permission) values(150,'Personal settings',1,140,'user-manage:setting');
insert into t_sys_permission(id, name,type,parent_id,permission) values(160,'Operations',0,null,'op');
insert into t_sys_permission(id, name,type,parent_id,permission) values(161,'Cluster switch',1,160,'op:cluster-switch');
insert into t_sys_permission(id, name,type,parent_id,permission) values(162,'Add cluster',1,161,'op:cluster-switch:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(163,'Switch',1,161,'op:cluster-switch:switch');
insert into t_sys_permission(id, name,type,parent_id,permission) values(164,'Edit',1,161,'op:cluster-switch:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(165,'Delete',1,161,'op:cluster-switch:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(166,'Configure throttle',1,160,'op:config-throttle');
insert into t_sys_permission(id, name,type,parent_id,permission) values(167,'Remove throttle',1,160,'op:remove-throttle');
insert into t_sys_permission(id, name,type,parent_id,permission) values(168,'Preferred replica as leader',1,160,'op:replication-preferred');
insert into t_sys_permission(id, name,type,parent_id,permission) values(169,'Replica change detail',1,160,'op:replication-update-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(170,'Replica reassignment',1,160,'op:replication-reassign');
insert into t_sys_permission(id, name,type,parent_id,permission) values(171,'Cancel replica reassignment',1,169,'op:replication-update-detail:cancel');
-- t_sys_permission end--
-- t_sys_role start--
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (1,'Super admin','Super administrator','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,142,143,144,145,146,147,148,149,150,161,162,163,164,165,166,167,168,169,171,170');
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'Ordinary admin','Ordinary administrator; cannot modify user information','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,146,149,150,161,162,163,164,165,166,167,168,169,171,170');
-- insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'Guest','Guest','12,13,22,26,29,32,44,45,50,62,63,81,83,85,141,146,149,150,161,163');
-- t_sys_role end--
-- t_sys_user start--
insert into t_sys_user(id, username, password, salt, role_ids) VALUES (1,'super-admin','3a3e4d32-5247-321b-9efb-9cbf60b2bf6c','e6973cfc-7583-4baa-8802-65ded1268ab6','1' );
insert into t_sys_user(id, username, password, salt, role_ids) VALUES (2,'admin','3a3e4d32-5247-321b-9efb-9cbf60b2bf6c','e6973cfc-7583-4baa-8802-65ded1268ab6','2' );
-- t_sys_user end--


@@ -29,9 +29,40 @@ CREATE TABLE IF NOT EXISTS T_CLUSTER_INFO
(
ID IDENTITY NOT NULL COMMENT 'Primary key ID',
CLUSTER_NAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'Cluster name',
ADDRESS VARCHAR(256) NOT NULL DEFAULT '' COMMENT 'Cluster address',
PROPERTIES VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'Other cluster properties',
ADDRESS VARCHAR(1024) NOT NULL DEFAULT '' COMMENT 'Cluster address',
PROPERTIES VARCHAR(1024) NOT NULL DEFAULT '' COMMENT 'Other cluster properties',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'Update time',
PRIMARY KEY (ID),
UNIQUE (CLUSTER_NAME)
);
-- Role and permission configuration for console login users
CREATE TABLE IF NOT EXISTS t_sys_permission
(
ID IDENTITY NOT NULL COMMENT 'Primary key ID',
name varchar(100) DEFAULT NULL COMMENT 'Permission name',
type tinyint(1) NOT NULL DEFAULT 0 COMMENT 'Permission type: 0 menu, 1 button',
parent_id bigint(20) DEFAULT NULL COMMENT 'Parent permission ID',
permission varchar(100) DEFAULT NULL COMMENT 'Permission string',
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS t_sys_role
(
ID IDENTITY NOT NULL COMMENT 'Primary key ID',
role_name varchar(100) NOT NULL COMMENT 'Role name',
description varchar(100) DEFAULT NULL COMMENT 'Role description',
permission_ids varchar(500) DEFAULT NULL COMMENT 'Assigned permission IDs',
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS t_sys_user
(
ID IDENTITY NOT NULL COMMENT 'Primary key ID',
username varchar(100) DEFAULT NULL COMMENT 'Username',
password varchar(100) DEFAULT NULL COMMENT 'User password',
salt varchar(100) DEFAULT NULL COMMENT 'Password salt',
role_ids varchar(100) DEFAULT NULL COMMENT 'Assigned role IDs',
PRIMARY KEY (id),
UNIQUE (username)
);


@@ -3,9 +3,9 @@
<springProperty scope="context" name="LOGGING_HOME" source="logging.home"/>
<!-- 日志目录 -->
<property name="LOG_HOME" value="${LOGGING_HOME}/logs"/>
<property name="LOG_HOME" value="${LOGGING_HOME:-${user.dir}}/logs"/>
<!-- 日志文件名-->
<property name="APP_NAME" value="${APPLICATION_NAME:-.}"/>
<property name="APP_NAME" value="${APPLICATION_NAME:-kafka-console-ui}"/>
<!-- 使用默认的输出格式-->
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>


@@ -0,0 +1,333 @@
package kafka.console
import com.xuxd.kafka.console.config.ContextConfigHolder
import kafka.utils.Implicits.MapExtensionMethods
import kafka.utils.Logging
import org.apache.kafka.clients._
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.consumer.internals.{ConsumerNetworkClient, RequestFuture}
import org.apache.kafka.common.Node
import org.apache.kafka.common.config.ConfigDef.ValidString.in
import org.apache.kafka.common.config.ConfigDef.{Importance, Type}
import org.apache.kafka.common.config.{AbstractConfig, ConfigDef}
import org.apache.kafka.common.errors.AuthenticationException
import org.apache.kafka.common.internals.ClusterResourceListeners
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersionCollection
import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.network.Selector
import org.apache.kafka.common.protocol.Errors
import org.apache.kafka.common.requests._
import org.apache.kafka.common.utils.{KafkaThread, LogContext, Time}
import org.slf4j.{Logger, LoggerFactory}
import java.io.IOException
import java.util.Properties
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{ConcurrentLinkedQueue, TimeUnit}
import scala.jdk.CollectionConverters.{ListHasAsScala, MapHasAsJava, PropertiesHasAsScala, SetHasAsScala}
import scala.util.{Failure, Success, Try}
/**
* kafka-console-ui.
*
* Copy from {@link kafka.admin.BrokerApiVersionsCommand}.
*
* @author xuxd
* @date 2022-01-22 15:15:57
* */
object BrokerApiVersion {
protected lazy val log: Logger = LoggerFactory.getLogger(this.getClass)
def listAllBrokerApiVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
val res = new java.util.HashMap[Node, NodeApiVersions]()
val adminClient = createAdminClient()
try {
adminClient.awaitBrokers()
val brokerMap = adminClient.listAllBrokerVersionInfo()
brokerMap.forKeyValue {
(broker, versionInfoOrError) =>
versionInfoOrError match {
case Success(v) => {
res.put(broker, v)
}
case Failure(v) => log.error(s"${broker} -> ERROR: ${v}\n")
}
}
} finally {
adminClient.close()
}
res
}
private def createAdminClient(): AdminClient = {
val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
AdminClient.create(props)
}
// org.apache.kafka.clients.admin.AdminClient doesn't currently expose a way to retrieve the supported api versions.
// We inline the bits we need from kafka.admin.AdminClient so that we can delete it.
private class AdminClient(val time: Time,
val client: ConsumerNetworkClient,
val bootstrapBrokers: List[Node]) extends Logging {
@volatile var running = true
val pendingFutures = new ConcurrentLinkedQueue[RequestFuture[ClientResponse]]()
val networkThread = new KafkaThread("admin-client-network-thread", () => {
try {
while (running)
client.poll(time.timer(Long.MaxValue))
} catch {
case t: Throwable =>
error("admin-client-network-thread exited", t)
} finally {
pendingFutures.forEach { future =>
try {
future.raise(Errors.UNKNOWN_SERVER_ERROR)
} catch {
case _: IllegalStateException => // It is OK if the future has been completed
}
}
pendingFutures.clear()
}
}, true)
networkThread.start()
private def send(target: Node,
request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
val future = client.send(target, request)
pendingFutures.add(future)
future.awaitDone(Long.MaxValue, TimeUnit.MILLISECONDS)
pendingFutures.remove(future)
if (future.succeeded())
future.value().responseBody()
else
throw future.exception()
}
private def sendAnyNode(request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
bootstrapBrokers.foreach { broker =>
try {
return send(broker, request)
} catch {
case e: AuthenticationException =>
throw e
case e: Exception =>
debug(s"Request ${request.apiKey()} failed against node $broker", e)
}
}
throw new RuntimeException(s"Request ${request.apiKey()} failed on brokers $bootstrapBrokers")
}
private def getApiVersions(node: Node): ApiVersionCollection = {
val response = send(node, new ApiVersionsRequest.Builder()).asInstanceOf[ApiVersionsResponse]
Errors.forCode(response.data.errorCode).maybeThrow()
response.data.apiKeys
}
/**
* Wait until there is a non-empty list of brokers in the cluster.
*/
def awaitBrokers(): Unit = {
var nodes = List[Node]()
val start = System.currentTimeMillis()
val maxWait = 30 * 1000
do {
nodes = findAllBrokers()
if (nodes.isEmpty) {
Thread.sleep(50)
}
}
while (nodes.isEmpty && (System.currentTimeMillis() - start < maxWait))
}
private def findAllBrokers(): List[Node] = {
val request = MetadataRequest.Builder.allTopics()
val response = sendAnyNode(request).asInstanceOf[MetadataResponse]
val errors = response.errors
if (!errors.isEmpty) {
log.info(s"Metadata request contained errors: $errors")
}
// In 3.x this method is buildCluster, which replaced cluster()
response.buildCluster.nodes.asScala.toList
// response.cluster().nodes.asScala.toList
}
def listAllBrokerVersionInfo(): Map[Node, Try[NodeApiVersions]] =
findAllBrokers().map { broker =>
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker)))
}.toMap
def close(): Unit = {
running = false
try {
client.close()
} catch {
case e: IOException =>
error("Exception closing nioSelector:", e)
}
}
}
private object AdminClient {
val DefaultConnectionMaxIdleMs = 9 * 60 * 1000
val DefaultRequestTimeoutMs = 5000
val DefaultSocketConnectionSetupMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG
val DefaultSocketConnectionSetupMaxMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG
val DefaultMaxInFlightRequestsPerConnection = 100
val DefaultReconnectBackoffMs = 50
val DefaultReconnectBackoffMax = 50
val DefaultSendBufferBytes = 128 * 1024
val DefaultReceiveBufferBytes = 32 * 1024
val DefaultRetryBackoffMs = 100
val AdminClientIdSequence = new AtomicInteger(1)
val AdminConfigDef = {
val config = new ConfigDef()
.define(
CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
Type.LIST,
Importance.HIGH,
CommonClientConfigs.BOOTSTRAP_SERVERS_DOC)
.define(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG,
Type.STRING,
ClientDnsLookup.USE_ALL_DNS_IPS.toString,
in(ClientDnsLookup.USE_ALL_DNS_IPS.toString,
ClientDnsLookup.RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY.toString),
Importance.MEDIUM,
CommonClientConfigs.CLIENT_DNS_LOOKUP_DOC)
.define(
CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
ConfigDef.Type.STRING,
CommonClientConfigs.DEFAULT_SECURITY_PROTOCOL,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SECURITY_PROTOCOL_DOC)
.define(
CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG,
ConfigDef.Type.INT,
DefaultRequestTimeoutMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.REQUEST_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_DOC)
.define(
CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG,
ConfigDef.Type.LONG,
DefaultRetryBackoffMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.RETRY_BACKOFF_MS_DOC)
.withClientSslSupport()
.withClientSaslSupport()
config
}
class AdminConfig(originals: Map[_, _]) extends AbstractConfig(AdminConfigDef, originals.asJava, false)
def create(props: Properties): AdminClient = {
val properties = new Properties()
val names = props.stringPropertyNames()
for (name <- names.asScala.toSet) {
properties.put(name, props.get(name).toString())
}
create(properties.asScala.toMap)
}
def create(props: Map[String, _]): AdminClient = create(new AdminConfig(props))
def create(config: AdminConfig): AdminClient = {
val clientId = "admin-" + AdminClientIdSequence.getAndIncrement()
val logContext = new LogContext(s"[LegacyAdminClient clientId=$clientId] ")
val time = Time.SYSTEM
val metrics = new Metrics(time)
val metadata = new Metadata(100L, 60 * 60 * 1000L, logContext,
new ClusterResourceListeners)
val channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext)
val requestTimeoutMs = config.getInt(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMaxMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG)
val retryBackoffMs = config.getLong(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG)
val brokerUrls = config.getList(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)
val clientDnsLookup = config.getString(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG)
val brokerAddresses = ClientUtils.parseAndValidateAddresses(brokerUrls, clientDnsLookup)
metadata.bootstrap(brokerAddresses)
val selector = new Selector(
DefaultConnectionMaxIdleMs,
metrics,
time,
"admin",
channelBuilder,
logContext)
// The compatibility issue here differs between Kafka versions.
// For 3.x, use this constructor:
val networkClient = new NetworkClient(
selector,
metadata,
clientId,
DefaultMaxInFlightRequestsPerConnection,
DefaultReconnectBackoffMs,
DefaultReconnectBackoffMax,
DefaultSendBufferBytes,
DefaultReceiveBufferBytes,
requestTimeoutMs,
connectionSetupTimeoutMs,
connectionSetupTimeoutMaxMs,
time,
true,
new ApiVersions,
logContext)
// val networkClient = new NetworkClient(
// selector,
// metadata,
// clientId,
// DefaultMaxInFlightRequestsPerConnection,
// DefaultReconnectBackoffMs,
// DefaultReconnectBackoffMax,
// DefaultSendBufferBytes,
// DefaultReceiveBufferBytes,
// requestTimeoutMs,
// connectionSetupTimeoutMs,
// connectionSetupTimeoutMaxMs,
// ClientDnsLookup.USE_ALL_DNS_IPS,
// time,
// true,
// new ApiVersions,
// logContext)
val highLevelClient = new ConsumerNetworkClient(
logContext,
networkClient,
metadata,
time,
retryBackoffMs,
requestTimeoutMs,
Integer.MAX_VALUE)
new AdminClient(
time,
highLevelClient,
metadata.fetch.nodes.asScala.toList)
}
}
}


@@ -0,0 +1,84 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import org.apache.kafka.clients.admin.{Admin, AlterClientQuotasOptions}
import org.apache.kafka.common.quota.{ClientQuotaAlteration, ClientQuotaEntity, ClientQuotaFilter, ClientQuotaFilterComponent}
import java.util.Collections
import java.util.concurrent.TimeUnit
import scala.jdk.CollectionConverters.{IterableHasAsJava, ListHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
* client quota console.
*
* @author xuxd
* @date 2022-12-30 10:55:56
* */
class ClientQuotaConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig) with Logging {
def getClientQuotasConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String]): java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]] = {
withAdminClientAndCatchError(admin => getAllClientQuotasConfigs(admin, entityTypes.asScala.toList, entityNames.asScala.toList),
e => {
log.error("getAllClientQuotasConfigs error.", e)
java.util.Collections.emptyMap()
})
}.asInstanceOf[java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]]]
def addQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeAddedMap: java.util.Map[String, String]): (Boolean, String) = {
alterQuotaConfigs(entityTypes, entityNames, configsToBeAddedMap, Collections.emptyList())
}
def deleteQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeDeleted: java.util.List[String]): (Boolean, String) = {
alterQuotaConfigs(entityTypes, entityNames, Collections.emptyMap(), configsToBeDeleted)
}
def alterQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeAddedMap: java.util.Map[String, String], configsToBeDeleted: java.util.List[String]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
alterQuotaConfigsInner(admin, entityTypes.asScala.toList, entityNames.asScala.toList, configsToBeAddedMap.asScala.toMap, configsToBeDeleted.asScala.toSeq)
(true, "")
},
e => {
log.error("getAllClientQuotasConfigs error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
private def getAllClientQuotasConfigs(adminClient: Admin, entityTypes: List[String], entityNames: List[String]): java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]] = {
val components = entityTypes.map(Some(_)).zipAll(entityNames.map(Some(_)), None, None).map { case (entityType, entityNameOpt) =>
entityNameOpt match {
case Some("") => ClientQuotaFilterComponent.ofDefaultEntity(entityType.get)
case Some(name) => ClientQuotaFilterComponent.ofEntity(entityType.get, name)
case None => ClientQuotaFilterComponent.ofEntityType(entityType.get)
}
}
adminClient.describeClientQuotas(ClientQuotaFilter.containsOnly(components.asJava)).entities.get(30, TimeUnit.SECONDS)
}.asInstanceOf[java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]]]
private def alterQuotaConfigsInner(adminClient: Admin, entityTypes: List[String], entityNames: List[String], configsToBeAddedMap: Map[String, String], configsToBeDeleted: Seq[String]) = {
// handle altering client/user quota configs
// val oldConfig = getAllClientQuotasConfigs(adminClient, entityTypes, entityNames)
// val invalidConfigs = configsToBeDeleted.filterNot(oldConfig.asScala.toMap.contains)
// if (invalidConfigs.nonEmpty)
// throw new InvalidConfigurationException(s"Invalid config(s): ${invalidConfigs.mkString(",")}")
val alterEntityNames = entityNames.map(en => if (en.nonEmpty) en else null)
// Explicitly populate a HashMap to ensure nulls are recorded properly.
val alterEntityMap = new java.util.HashMap[String, String]
entityTypes.zip(alterEntityNames).foreach { case (k, v) => alterEntityMap.put(k, v) }
val entity = new ClientQuotaEntity(alterEntityMap)
val alterOptions = new AlterClientQuotasOptions().validateOnly(false)
val alterOps = (configsToBeAddedMap.map { case (key, value) =>
val doubleValue = try value.toDouble catch {
case _: NumberFormatException =>
throw new IllegalArgumentException(s"Cannot parse quota configuration value for $key: $value")
}
new ClientQuotaAlteration.Op(key, doubleValue)
} ++ configsToBeDeleted.map(key => new ClientQuotaAlteration.Op(key, null))).asJavaCollection
adminClient.alterClientQuotas(Collections.singleton(new ClientQuotaAlteration(entity, alterOps)), alterOptions)
.all().get(60, TimeUnit.SECONDS)
}
}


@@ -1,11 +1,13 @@
package kafka.console
import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.kafka.clients.NodeApiVersions
import org.apache.kafka.clients.admin.DescribeClusterResult
import org.apache.kafka.common.Node
import java.util.Collections
import java.util.concurrent.TimeUnit
import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.kafka.clients.admin.DescribeClusterResult
import scala.jdk.CollectionConverters.{CollectionHasAsScala, SetHasAsJava, SetHasAsScala}
/**
@@ -41,4 +43,8 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
new ClusterInfo
}).asInstanceOf[ClusterInfo]
}
def listBrokerVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
BrokerApiVersion.listAllBrokerApiVersionInfo()
}
}


@@ -1,18 +1,21 @@
package kafka.console
import com.google.common.cache.{CacheLoader, RemovalListener, RemovalNotification}
import com.xuxd.kafka.console.cache.TimeBasedCache
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.zk.{AdminZkClient, KafkaZkClient}
import kafka.zk.AdminZkClient
import org.apache.kafka.clients.admin._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.requests.ListOffsetsResponse
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, ByteArraySerializer, StringSerializer}
import org.apache.kafka.common.utils.Time
import org.slf4j.{Logger, LoggerFactory}
import java.util.Properties
import java.util.concurrent.Executors
import scala.collection.{Map, Seq}
import scala.concurrent.{ExecutionContext, Future}
import scala.jdk.CollectionConverters.{MapHasAsJava, MapHasAsScala}
/**
@@ -27,11 +30,13 @@ class KafkaConsole(config: KafkaConfig) {
protected def withAdminClient(f: Admin => Any): Any = {
val admin = createAdminClient()
val admin = if (config.isCacheAdminConnection()) AdminCache.cache.get(ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer()) else createAdminClient()
try {
f(admin)
} finally {
admin.close()
if (!config.isCacheAdminConnection) {
admin.close()
}
}
}
@@ -45,33 +50,40 @@ class KafkaConsole(config: KafkaConfig) {
protected def withConsumerAndCatchError(f: KafkaConsumer[Array[Byte], Array[Byte]] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
val consumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
// val props = getProps()
// props.putAll(extra)
// props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
// val consumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
ConsumerCache.setProperties(extra)
val consumer = if (config.isCacheConsumerConnection) ConsumerCache.cache.get(ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer()) else KafkaConsole.createByteArrayKVConsumer(extra)
try {
f(consumer)
} catch {
case er: Exception => eh(er)
}
finally {
consumer.close()
ConsumerCache.clearProperties()
if (!config.isCacheConsumerConnection) {
consumer.close()
}
}
}
protected def withProducerAndCatchError(f: KafkaProducer[String, String] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
val producer = new KafkaProducer[String, String](props, new StringSerializer, new StringSerializer)
ProducerCache.setProperties(extra)
val producer = if (config.isCacheProducerConnection) ProducerCache.cache.get(ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer) else KafkaConsole.createProducer(extra)
try {
f(producer)
} catch {
case er: Exception => eh(er)
}
finally {
producer.close()
ProducerCache.clearProperties()
if (!config.isCacheProducerConnection) {
producer.close()
}
}
}
@@ -79,7 +91,6 @@ class KafkaConsole(config: KafkaConfig) {
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
val producer = new KafkaProducer[Array[Byte], Array[Byte]](props, new ByteArraySerializer, new ByteArraySerializer)
try {
f(producer)
@@ -91,14 +102,17 @@ class KafkaConsole(config: KafkaConfig) {
}
}
@Deprecated
protected def withZKClient(f: AdminZkClient => Any): Any = {
val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM)
val adminZkClient = new AdminZkClient(zkClient)
try {
f(adminZkClient)
} finally {
zkClient.close()
}
// val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM)
// 3.x
// val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM, new ZKClientConfig(), "KafkaZkClient")
// val adminZkClient = new AdminZkClient(zkClient)
// try {
// f(adminZkClient)
// } finally {
// zkClient.close()
// }
}
protected def createAdminClient(props: Properties): Admin = {
@@ -110,20 +124,47 @@ class KafkaConsole(config: KafkaConfig) {
}
private def createAdminClient(): Admin = {
Admin.create(getProps())
KafkaConsole.createAdminClient()
}
private def getProps(): Properties = {
KafkaConsole.getProps()
}
}
object KafkaConsole {
val log: Logger = LoggerFactory.getLogger(this.getClass)
def createAdminClient(): Admin = {
Admin.create(getProps())
}
def createByteArrayKVConsumer(extra: Properties) : KafkaConsumer[Array[Byte], Array[Byte]] = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
}
def createProducer(extra: Properties) : KafkaProducer[String, String] = {
val props = getProps()
props.putAll(extra)
new KafkaProducer(props, new StringSerializer, new StringSerializer)
}
def createByteArrayStringProducer(extra: Properties) : KafkaProducer[Array[Byte], Array[Byte]] = {
val props = getProps()
props.putAll(extra)
new KafkaProducer(props, new ByteArraySerializer, new ByteArraySerializer)
}
def getProps(): Properties = {
val props: Properties = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
props
}
}
object KafkaConsole {
val log: Logger = LoggerFactory.getLogger(this.getClass)
def getCommittedOffsets(admin: Admin, groupId: String,
timeoutMs: Integer): Map[TopicPartition, OffsetAndMetadata] = {
@@ -174,4 +215,88 @@ object KafkaConsole {
}.toMap
res
}
implicit val ec = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))
}
object AdminCache {
private val log: Logger = LoggerFactory.getLogger(this.getClass)
private val cacheLoader = new CacheLoader[String, Admin] {
override def load(key: String): Admin = KafkaConsole.createAdminClient()
}
private val removeListener = new RemovalListener[String, Admin] {
override def onRemoval(notification: RemovalNotification[String, Admin]): Unit = {
Future {
log.warn("Close expired admin connection: {}", notification.getKey)
notification.getValue.close()
log.warn("Close expired admin connection complete: {}", notification.getKey)
}(KafkaConsole.ec)
}
}
val cache = new TimeBasedCache[String, Admin](cacheLoader, removeListener)
}
object ConsumerCache {
private val log: Logger = LoggerFactory.getLogger(this.getClass)
private val threadLocal = new ThreadLocal[Properties]
private val cacheLoader = new CacheLoader[String, KafkaConsumer[Array[Byte], Array[Byte]]] {
override def load(key: String): KafkaConsumer[Array[Byte], Array[Byte]] = KafkaConsole.createByteArrayKVConsumer(threadLocal.get())
}
private val removeListener = new RemovalListener[String, KafkaConsumer[Array[Byte], Array[Byte]]] {
override def onRemoval(notification: RemovalNotification[String, KafkaConsumer[Array[Byte], Array[Byte]]]): Unit = {
Future {
log.warn("Close expired consumer connection: {}", notification.getKey)
notification.getValue.close()
log.warn("Close expired consumer connection complete: {}", notification.getKey)
}(KafkaConsole.ec)
}
}
val cache = new TimeBasedCache[String, KafkaConsumer[Array[Byte], Array[Byte]]](cacheLoader, removeListener)
def setProperties(props : Properties) : Unit = {
threadLocal.set(props)
}
def clearProperties() : Unit = {
threadLocal.remove()
}
}
object ProducerCache {
private val log: Logger = LoggerFactory.getLogger(this.getClass)
private val threadLocal = new ThreadLocal[Properties]
private val cacheLoader = new CacheLoader[String, KafkaProducer[String, String]] {
override def load(key: String): KafkaProducer[String, String] = KafkaConsole.createProducer(threadLocal.get())
}
private val removeListener = new RemovalListener[String, KafkaProducer[String, String]] {
override def onRemoval(notification: RemovalNotification[String, KafkaProducer[String, String]]): Unit = {
Future {
log.warn("Close expired producer connection: {}", notification.getKey)
notification.getValue.close()
log.warn("Close expired producer connection complete: {}", notification.getKey)
}(KafkaConsole.ec)
}
}
val cache = new TimeBasedCache[String, KafkaProducer[String, String]](cacheLoader, removeListener)
def setProperties(props : Properties) : Unit = {
threadLocal.set(props)
}
def clearProperties() : Unit = {
threadLocal.remove()
}
}


@@ -4,13 +4,14 @@ import com.xuxd.kafka.console.beans.MessageFilter
import com.xuxd.kafka.console.beans.enums.FilterType
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.clients.admin.{DeleteRecordsOptions, RecordsToDelete}
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.TopicPartition
import java.time.Duration
import java.util
import java.util.Properties
import java.util.{Properties}
import scala.collection.immutable
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsScala, SeqHasAsJava}
@@ -127,7 +128,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
record.offset(),
record.timestamp(),
record.timestampType(),
record.checksum(),
// record.checksum(),
record.serializedKeySize(),
record.serializedValueSize(),
record.key(),
@@ -236,4 +237,14 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def delete(recordsToDelete: util.Map[TopicPartition, RecordsToDelete]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
admin.deleteRecords(recordsToDelete, withTimeoutMs(new DeleteRecordsOptions())).all().get()
(true, "")
}, e => {
log.error("delete message error.", e)
(false, "delete error :" + e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
}
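
The recordsToDelete argument uses Kafka's standard admin types: RecordsToDelete.beforeOffset marks everything before the given offset in each partition for deletion. A caller-side sketch (assumed usage; the topic name and offset are placeholders):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsDemo {
    public static void main(String[] args) {
        // Delete every record before offset 100 in partition 0 of "test-topic".
        Map<TopicPartition, RecordsToDelete> recordsToDelete = new HashMap<>();
        recordsToDelete.put(new TopicPartition("test-topic", 0),
            RecordsToDelete.beforeOffset(100L));
        // messageConsole.delete(recordsToDelete) then issues Admin#deleteRecords
        // with the configured request timeout.
    }
}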


@@ -242,8 +242,8 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
withAdminClientAndCatchError(admin => {
admin.listPartitionReassignments(withTimeoutMs(new ListPartitionReassignmentsOptions)).reassignments().get()
}, e => {
Collections.emptyMap()
log.error("listPartitionReassignments error.", e)
Collections.emptyMap()
}).asInstanceOf[util.Map[TopicPartition, PartitionReassignment]]
}
@@ -256,4 +256,20 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
throw e
}).asInstanceOf[util.Map[TopicPartition, Throwable]]
}
def proposedAssignments(reassignmentJson: String,
brokerListString: String): util.Map[TopicPartition, util.List[Int]] = {
withAdminClientAndCatchError(admin => {
val map = ReassignPartitionsCommand.generateAssignment(admin, reassignmentJson, brokerListString, true)._1
val res = new util.HashMap[TopicPartition, util.List[Int]]()
for (tp <- map.keys) {
res.put(tp, map(tp).asJava)
// res.put(tp, map.getOrElse(tp, Seq.empty).asJava)
}
res
}, e => {
log.error("proposedAssignments error.", e)
throw e
})
}.asInstanceOf[util.Map[TopicPartition, util.List[Int]]]
}
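
proposedAssignments feeds ReassignPartitionsCommand.generateAssignment, which takes the same inputs as the kafka-reassign-partitions tool: a topics-to-move JSON document and a comma-separated broker list. A caller-side sketch (assumed usage; topic name and broker ids are placeholders):

public class ProposedAssignmentsDemo {
    public static void main(String[] args) {
        // The same JSON the CLI accepts via --topics-to-move-json-file.
        String reassignmentJson = "{\"topics\":[{\"topic\":\"test-topic\"}],\"version\":1}";
        String brokerList = "1,2,3"; // broker ids to spread replicas across
        // operationConsole.proposedAssignments(reassignmentJson, brokerList)
        // returns the proposed TopicPartition -> replica-list map.
    }
}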


@@ -66,17 +66,17 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
/**
* delete topic by topic name.
*
* @param topic topic name.
* @param topics topic name list.
* @return result or : fail message.
*/
def deleteTopic(topic: String): (Boolean, String) = {
def deleteTopics(topics: util.Collection[String]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.deleteTopics(Collections.singleton(topic), new DeleteTopicsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
admin.deleteTopics(topics, new DeleteTopicsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
},
e => {
log.error("delete topic error, topic: " + topic, e)
log.error("delete topic error, topic: " + topics, e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}


@@ -0,0 +1,48 @@
package com.xuxd.kafka.console.scala;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.config.KafkaConfig;
import kafka.console.ClientQuotaConsole;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.junit.jupiter.api.Test;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
public class ClientQuotaConsoleTest {
String bootstrapServer = "localhost:9092";
@Test
void testGetClientQuotasConfigs() {
ClientQuotaConsole console = new ClientQuotaConsole(new KafkaConfig());
ContextConfig config = new ContextConfig();
config.setBootstrapServer(bootstrapServer);
ContextConfigHolder.CONTEXT_CONFIG.set(config);
Map<ClientQuotaEntity, Map<String, Object>> configs = console.getClientQuotasConfigs(Arrays.asList(ClientQuotaEntity.USER, ClientQuotaEntity.CLIENT_ID), Arrays.asList("user1", "clientA"));
configs.forEach((k, v) -> {
System.out.println(k);
System.out.println(v);
});
}
@Test
void testAlterClientQuotasConfigs() {
ClientQuotaConsole console = new ClientQuotaConsole(new KafkaConfig());
ContextConfig config = new ContextConfig();
config.setBootstrapServer(bootstrapServer);
ContextConfigHolder.CONTEXT_CONFIG.set(config);
Map<String, String> configsToBeAddedMap = new HashMap<>();
configsToBeAddedMap.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "1024000000");
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER), Arrays.asList("user-test"), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER), Arrays.asList(""), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList(""), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList("clientA"), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER, ClientQuotaEntity.CLIENT_ID), Arrays.asList("", ""), configsToBeAddedMap);
// console.deleteQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList(""), Arrays.asList(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG));
}
}

ui/package-lock.json (generated)

@@ -1820,6 +1820,63 @@
"integrity": "sha1-/q7SVZc9LndVW4PbwIhRpsY1IPo=",
"dev": true
},
"ansi-styles": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
"dev": true,
"optional": true,
"requires": {
"color-convert": "^2.0.1"
}
},
"chalk": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
"integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
"dev": true,
"optional": true,
"requires": {
"ansi-styles": "^4.1.0",
"supports-color": "^7.1.0"
}
},
"color-convert": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"dev": true,
"optional": true,
"requires": {
"color-name": "~1.1.4"
}
},
"color-name": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"dev": true,
"optional": true
},
"has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"optional": true
},
"loader-utils": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.4.tgz",
"integrity": "sha512-xXqpXoINfFhgua9xiqD8fPFHgkoq1mmmpE92WlDbm9rNRd/EbRb+Gqf908T2DMfuHjjJlksiK2RbHVOdD/MqSw==",
"dev": true,
"optional": true,
"requires": {
"big.js": "^5.2.2",
"emojis-list": "^3.0.0",
"json5": "^2.1.2"
}
},
"ssri": {
"version": "8.0.1",
"resolved": "https://registry.nlark.com/ssri/download/ssri-8.0.1.tgz",
@@ -1828,6 +1885,28 @@
"requires": {
"minipass": "^3.1.1"
}
},
"supports-color": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"optional": true,
"requires": {
"has-flag": "^4.0.0"
}
},
"vue-loader-v16": {
"version": "npm:vue-loader@16.8.3",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.8.3.tgz",
"integrity": "sha512-7vKN45IxsKxe5GcVCbc2qFU5aWzyiLrYJyUuMz4BQLKctCj/fmCa0w6fGiiQ2cLFetNcek1ppGJQDCup0c1hpA==",
"dev": true,
"optional": true,
"requires": {
"chalk": "^4.1.0",
"hash-sum": "^2.0.0",
"loader-utils": "^2.0.0"
}
}
}
},
@@ -12097,87 +12176,6 @@
}
}
},
"vue-loader-v16": {
"version": "npm:vue-loader@16.8.3",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.8.3.tgz",
"integrity": "sha512-7vKN45IxsKxe5GcVCbc2qFU5aWzyiLrYJyUuMz4BQLKctCj/fmCa0w6fGiiQ2cLFetNcek1ppGJQDCup0c1hpA==",
"dev": true,
"optional": true,
"requires": {
"chalk": "^4.1.0",
"hash-sum": "^2.0.0",
"loader-utils": "^2.0.0"
},
"dependencies": {
"ansi-styles": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
"dev": true,
"optional": true,
"requires": {
"color-convert": "^2.0.1"
}
},
"chalk": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
"integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
"dev": true,
"optional": true,
"requires": {
"ansi-styles": "^4.1.0",
"supports-color": "^7.1.0"
}
},
"color-convert": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"dev": true,
"optional": true,
"requires": {
"color-name": "~1.1.4"
}
},
"color-name": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"dev": true,
"optional": true
},
"has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"optional": true
},
"loader-utils": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.2.tgz",
"integrity": "sha512-TM57VeHptv569d/GKh6TAYdzKblwDNiumOdkFnejjD0XwTH87K90w3O7AiJRqdQoXygvi1VQTJTLGhJl7WqA7A==",
"dev": true,
"optional": true,
"requires": {
"big.js": "^5.2.2",
"emojis-list": "^3.0.0",
"json5": "^2.1.2"
}
},
"supports-color": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"optional": true,
"requires": {
"has-flag": "^4.0.0"
}
}
}
},
"vue-ref": {
"version": "2.0.0",
"resolved": "https://registry.npm.taobao.org/vue-ref/download/vue-ref-2.0.0.tgz",


Binary file changed (not shown): 4.2 KiB -> 5.4 KiB.

ui/public/vue.ico (new binary file, 4.2 KiB; not shown).

Some files were not shown because too many files have changed in this diff.