81 Commits

Author SHA1 Message Date
许晓东
19cc635151 Upgrade to version 1.0.11 2024-10-06 11:30:08 +08:00
许晓东
a1206fb094 Show broker version on the home page 2024-09-01 21:47:35 +08:00
许晓东
83c018f6a7 Compute broker version 2024-08-30 22:57:06 +08:00
许晓东
2729627a80 Update to the latest JetBrains logo 2024-08-14 21:42:20 +08:00
许晓东
096792beb0 Cluster info/topic: replica and config modals close when clicking the backdrop #26 2024-07-07 21:10:00 +08:00
许晓东
fd495351bc Release v1.0.10. 2024-06-16 21:06:54 +08:00
许晓东
ecbd9dda6a Fixed: with cluster data permissions enabled, different accounts logged in from the same browser could see each other's cluster info. 2024-06-16 20:36:00 +08:00
许晓东
30707e84a7 Fixed: topic management -> change replicas failed when brokerId does not start from 0. 2024-06-11 20:00:23 +08:00
许晓东
f98cca8727 Upgrade Kafka to 3.5.0. 2024-05-23 21:37:38 +08:00
许晓东
ea95697e88 Upgrade Kafka to 3.5.0. 2024-05-23 21:35:34 +08:00
许晓东
e90268a56e issue #36: make searchByTime maxNums configurable. 2024-03-03 21:46:45 +08:00
许晓东
fcc315782c Update WeChat. 2024-02-25 21:47:06 +08:00
许晓东
b79529f607 Adjust ACL user username/password length. 2024-01-15 20:38:30 +08:00
许晓东
f49e2f7d0c Update feature mind map 2024-01-06 20:48:50 +08:00
许晓东
4929953acd Login button supports Enter-key login. 2023-12-23 21:14:38 +08:00
许晓东
8c946a96ba Release version 1.0.9. 2023-12-06 20:41:37 +08:00
许晓东
364a716388 Add access permission for send statistics under the message menu. 2023-12-03 22:04:29 +08:00
许晓东
2ef8573a5b Add query of message send statistics by time range. 2023-12-03 19:25:39 +08:00
许晓东
e368ba9527 Add query of message send statistics by time range. 2023-12-03 19:24:16 +08:00
许晓东
dc84144443 Online message sending supports choosing sync or async send. 2023-11-28 21:21:37 +08:00
许晓东
471e1e4962 Add support logo. 2023-11-09 21:45:34 +08:00
许晓东
b2773b596c logback-test.xml rename to logback-spring.xml. 2023-10-25 16:10:40 +08:00
许晓东
ecd55b7517 Packaging config now builds tar.gz and zip archives. 2023-10-25 15:23:03 +08:00
许晓东
fed61394bc Consumer detail: remove left padding from subscribed partition info. 2023-10-24 21:16:33 +08:00
许晓东
1a80c37d83 Fixed consumer detail; adjust online send page. 2023-09-24 21:33:06 +08:00
许晓东
34cdc33617 Merge branch 'main' of https://github.com/xxd763795151/kafka-console-ui 2023-09-21 16:52:46 +08:00
Xiaodong Xu
0041bafb83 Merge pull request #32 from zph95/feature/send_with_header
send message with header
2023-09-20 15:32:27 +08:00
zph
b27db32ee2 send message with header 2023-09-19 02:54:28 +08:00
许晓东
5c5d4fd3a1 Upgrade to 1.0.9; add version number to the archive name. 2023-09-11 13:23:29 +08:00
许晓东
93c1d3cd9d Support different roles viewing different clusters. 2023-08-27 22:06:05 +08:00
许晓东
5a28adfa6b Cluster-role binding page. 2023-08-27 12:17:09 +08:00
许晓东
dac9295fab Add cluster-role binding API. 2023-08-23 22:13:22 +08:00
许晓东
f5b27d9b40 Hide cluster properties for users without edit permission; log all operations. 2023-08-20 20:04:47 +08:00
许晓东
b529dc313e Query dialogs under topic and consumer group can be closed by clicking the overlay. 2023-07-23 14:03:39 +08:00
许晓东
848c8913f2 update wechat info. 2023-07-21 22:27:10 +08:00
许晓东
3ef9f1e012 Fix start/stop paths in shell scripts. 2023-05-22 23:17:41 +08:00
许晓东
60fb02d6d4 Specify UTF-8 encoding. 2023-05-19 15:35:44 +08:00
许晓东
6f093fbb27 Fixed access permission issue after packaging; upgrade to 1.0.8. 2023-05-19 15:20:15 +08:00
许晓东
e186fff939 contact update. 2023-05-18 23:00:32 +08:00
Xiaodong Xu
c4676fb51a Merge pull request #24 from xxd763795151/user-auth-dev
User auth dev
2023-05-18 22:57:26 +08:00
许晓东
571efe6ddc API permission filtering. 2023-05-18 22:56:00 +08:00
许晓东
7e98a58f60 eslint fixed, page permission config, and default permission data loading. 2023-05-17 22:29:04 +08:00
许晓东
b08be2aa65 Permission config, default users, role config. 2023-05-16 21:23:45 +08:00
许晓东
238507de19 Permission config. 2023-05-15 21:58:55 +08:00
许晓东
435a5ca2bc Support login 2023-05-14 21:25:13 +08:00
许晓东
be8e567684 Update password in personal settings. 2023-05-09 20:44:54 +08:00
许晓东
da1ddeb1e7 User deletion, password reset, role assignment. 2023-05-09 07:16:55 +08:00
许晓东
38ca2cfc52 User creation and deletion. 2023-05-07 22:55:01 +08:00
许晓东
cc8f671fdb Permission assignment on the role list page. 2023-05-04 16:24:34 +08:00
许晓东
e3b0dd5d2a Prototype of permission assignment on the role list page. 2023-04-24 22:08:05 +08:00
许晓东
a37664f6d5 Prototype of the role list page. 2023-04-19 22:10:08 +08:00
许晓东
af52e6bc61 Display user permissions. 2023-04-17 22:24:51 +08:00
许晓东
fce1154c36 Add user management: APIs for creating users, roles, and permissions 2023-04-11 22:01:16 +08:00
许晓东
0bf5c6f46c Update download URL. 2023-04-11 13:30:53 +08:00
许晓东
fe759aaf74 Increase cluster field length to 1024; fix vulnerability (CVE-2023-20860): upgrade Spring to 5.3.26 2023-04-11 12:25:24 +08:00
许晓东
1e6a7bb269 Polish docs. 2023-02-10 20:41:15 +08:00
许晓东
fb440ae153 Polish docs. 2023-02-10 20:28:23 +08:00
许晓东
07c6714fd9 Replace Taobao npm mirror; limit display width of overly long client IDs. 2023-02-09 22:23:34 +08:00
许晓东
5e9304efc2 Add throttling docs; fixed ACL authorization-enabled prompt. 2023-02-07 22:24:37 +08:00
许晓东
fc7f05cf4e Throttling: support user and client ID together. 2023-02-06 22:11:56 +08:00
许晓东
5a87e9cad8 Throttling: support user. 2023-02-05 23:08:14 +08:00
许晓东
608f7cdc47 Throttling: support client ID update and delete. 2023-02-05 22:01:37 +08:00
许晓东
ee6defe5d2 Throttling: support client ID query. 2023-02-04 21:28:51 +08:00
许晓东
56621e0b8c Add client throttling menu page. 2023-01-30 21:40:11 +08:00
许晓东
832b20a83e Client throttling query API. 2023-01-09 22:11:50 +08:00
许晓东
4dbadee0d4 Client throttling service. 2023-01-03 22:01:43 +08:00
许晓东
7d76632f08 Client throttling console 2023-01-03 21:28:11 +08:00
许晓东
d5102f626c Client throttling console. 2023-01-02 21:28:47 +08:00
许晓东
daf77290da Client throttling console. 2023-01-02 21:16:22 +08:00
许晓东
4df20f9ca5 contact jpg 2022-12-09 09:21:10 +08:00
许晓东
b465ba78b8 update contact 2022-11-01 19:05:40 +08:00
许晓东
9a69bad93a wechat contact. 2022-10-16 13:32:41 +08:00
许晓东
3785e9aaca wechat contact. 2022-10-16 13:31:35 +08:00
许晓东
d502da1b39 update weixin contact 2022-09-20 20:07:06 +08:00
许晓东
d6282cb902 Display of the split authentication/authorization page when ACL is not enabled. 2022-08-28 22:48:21 +08:00
许晓东
ca4dc2ebc9 Polish README. 2022-08-28 22:04:31 +08:00
许晓东
50775994b5 Polish README. 2022-08-28 22:02:21 +08:00
许晓东
3bd14a35d6 Separate ACL authentication and authorization management. 2022-08-28 21:39:08 +08:00
许晓东
4c3fe5230c update contact jpg. 2022-08-04 09:12:39 +08:00
许晓东
57d549635b Restore interceptor path. 2022-07-27 18:19:50 +08:00
许晓东
923b89b6bd Update download URL. 2022-07-24 17:33:22 +08:00
160 changed files with 9083 additions and 877 deletions

View File

@@ -1,14 +1,16 @@
# Kafka visual management platform
A lightweight visual Kafka management platform: quick to install and configure, simple and easy to use.
To keep development simple there is no i18n support; pages are displayed in Chinese only.
If you have used rocketmq-console, the front-end style is quite similar to it.
If you have used rocketmq-console (rocketmq-dashboard), then yes, the front-end style is quite similar to it.
## Page preview
If GitHub can display the images, click [view the menu pages](./document/overview/概览.md) to see what each page looks like
## Cluster migration support
The current main branch and later versions no longer ship a message-sync / cluster-migration solution; if you need one, see: [cluster migration notes](./document/datasync/集群迁移.md)
## ACL notes
ACL configuration: if the Kafka cluster has ACL enabled but the console does not show the Acl menu, see the [Acl configuration guide](./document/acl/Acl.md)
Running the latest code shows the ACL menu. Permission management and authenticated-user management (SASL_SCRAM) have been separated. After the split, user changes are supported when only SASL_SCRAM authentication is enabled without authorization, and permission management works under other authentication mechanisms as well; visual authenticated-user management, however, currently only supports SCRAM.
Before v1.0.6: if the Kafka cluster has ACL enabled but the console does not show the Acl menu, see the [Acl configuration guide](./document/acl/Acl.md)
## Features
* Multi-cluster support
* Cluster info
@@ -16,15 +18,18 @@ ACL configuration: if the Kafka cluster has ACL enabled but the console does not show the Acl…
* Consumer group management
* Message management
* ACL
* Client throttling
* Operations
For feature details see this mind map:
![Feature overview](./document/img/功能特性.png)
## Package download
Click to download (v1.0.4): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.4/kafka-console-ui.zip)
Click to download (v1.0.10): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.10/kafka-console-ui-1.0.10.zip)
If the package downloads slowly, check the source packaging notes below and build it yourself quickly. Note that the latest main branch has just upgraded Kafka to 3.2.0 and is not yet fully tested; if you need a stable version, download the 1.0.4-release branch
If the package downloads slowly, check the source packaging notes below, download the code, and build it locally.
If GitHub downloads are slow, you can also try Gitee: click to download [kafka-console-ui.zip from Gitee](https://gitee.com/xiaodong_xu/kafka-console-ui/releases/download/v1.0.10/kafka-console-ui-1.0.10.zip)
## Quick start
### Windows
@@ -74,20 +79,46 @@ sh bin/shutdown.sh
For local development, see the dev environment setup guide: [local development](./document/develop/开发配置.md)
## Login authentication and permissions
The main branch currently does not support login authentication. Thanks to @dongyinuo for developing a version with login authentication and the related button permissions (two main roles: administrator and regular developer).
In 1.0.7 and earlier, the main branch did not support login authentication. Thanks to @dongyinuo for developing a version with login authentication and the related button permissions (two main roles: administrator and regular developer).
It lives on the branch feature/dongyinuo/20220501/devops.
If you need console login authentication, you can switch to that branch and build it; see the source packaging notes for how.
Default login account: admin/kafka-console-ui521
The latest main branch now ships login authentication plus user and role permission configuration. To enable login authentication, edit config/application.yml and set auth.enable=true (default false), as follows:
```yaml
# Auth settings: when true, users must log in before they can access the console
auth:
  enable: true
  # Expiration time of the login user's token, in hours
  expire-hours: 24
```
There are two default login users: super-admin/123456 and admin/123456; change the passwords under personal settings after logging in. The only difference between super-admin and admin is that super-admin can add and delete users while admin cannot. If this setup doesn't suit you, delete these users or roles under the user menu and create roles or users of your own.
Note: when login authentication is disabled the user menu is not shown on the page; this is expected.
## Docker Compose deployment
Thanks to @wdkang123 for sharing this deployment method; if you need it, see [Docker Compose deployment](./document/deploy/docker部署.md)
## Acknowledgements
Thanks to JetBrains for the open source support. If anyone would like to help maintain the project, PRs are very welcome.
[![jetbrains](./document/img/jb_beam.svg "jetbrains")](https://jb.gg/OpenSourceSupport)
JetBrains official site: https://www.jetbrains.com/
## Contact
+ WeChat group
<img src="./document/contact/weixin_contact.jpg" width="40%"/>
[//]: # (<img src="https://github.com/xxd763795151/kafka-console-ui/blob/main/document/contact/weixin_contact.jpeg" width="40%"/>)
[//]: # (<img src="https://github.com/xxd763795151/kafka-console-ui/blob/main/document/contact/weixin_contact.jpg" width="40%"/>)
+ If the contact info above no longer works, add me on WeChat and state your purpose
- xxd763795151
- wxid_7jy2ezljvebt12
Sorry, no new contact info for joining the group will be provided from now on.
Long ago, someone suggested creating a group where everyone could discuss things, so a group was set up; at the time I didn't expect many people would want to join.
But quite a few people did join, which made me uneasy. Honestly, I don't think this platform has much technical depth, yet people still came to support it, which leaves me feeling humbled. So, having thought it over: if you genuinely run into usage problems, please open an issue.
Also, I'm really quite busy and haven't handled requests for this project promptly; my time and energy are limited, so some capabilities people hoped to see added have dragged on without any follow-up from me. My sincere apologies.
For features that aren't too time-consuming, I can still take care of them actively.
And some features were intentionally deferred to a later stage, which is why I haven't started on them yet.

View File

@@ -2,10 +2,10 @@
xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>rocketmq-reput</id>
<id>${project.version}</id>
<formats>
<!-- <format>tar.gz</format>-->
<format>zip</format>
<format>tar.gz</format>
</formats>
<fileSets>

View File

@@ -1,7 +1,12 @@
#!/bin/bash
SCRIPT_DIR=`dirname $0`
PROJECT_DIR="$SCRIPT_DIR/.."
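# Resolve the script's absolute directory: strip a leading "./" from $0, then anchor it at pwd so shutdown works from any invocation path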
PREFIX='./'
CMD=$0
if [[ $CMD == $PREFIX* ]]; then
CMD=${CMD:2}
fi
SCRIPT_DIR=$(dirname "`pwd`/$CMD")
PROJECT_DIR=`dirname "$SCRIPT_DIR"`
# Do not modify the process flag; it is attached as a process property and used for shutdown
PROCESS_FLAG="kafka-console-ui-process-flag:${PROJECT_DIR}"
pkill -f $PROCESS_FLAG

View File

@@ -1,7 +1,7 @@
rem MAIN_CLASS=org.springframework.boot.loader.JarLauncher
rem JAVA_HOME=jre1.8.0_66
set JAVA_CMD=%JAVA_HOME%\bin\java
set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k
set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k -Dfile.encoding=utf-8
set CONFIG_FILE=../config/application.yml
set TARGET=../lib/kafka-console-ui.jar
set DATA_DIR=..

View File

@@ -3,8 +3,13 @@
# Set the JVM heap and stack sizes; the stack size must be at least 256K, no lower (128K, for example, is too small)
JAVA_MEM_OPTS="-Xmx512m -Xms512m -Xmn256m -Xss256k"
SCRIPT_DIR=`dirname $0`
PROJECT_DIR="$SCRIPT_DIR/.."
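# Resolve the script's absolute directory: strip a leading "./" from $0, then anchor it at pwd so startup works from any invocation path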
PREFIX='./'
CMD=$0
if [[ $CMD == $PREFIX* ]]; then
CMD=${CMD:2}
fi
SCRIPT_DIR=$(dirname "`pwd`/$CMD")
PROJECT_DIR=`dirname "$SCRIPT_DIR"`
CONF_FILE="$PROJECT_DIR/config/application.yml"
TARGET="$PROJECT_DIR/lib/kafka-console-ui.jar"
@@ -12,12 +17,12 @@ TARGET="$PROJECT_DIR/lib/kafka-console-ui.jar"
DATA_DIR=$PROJECT_DIR
# Log directory, defaults to the current project directory
# This is the error output: if the startup command is wrong, output goes to this file; application logs are not written to error.out but to the rocketmq-reput.log mentioned above
# This is the error output: if the startup command is wrong, output goes to this file; application logs are not written to error.out but to the kafka-console-ui.log mentioned above
ERROR_OUT="$PROJECT_DIR/error.out"
# Do not modify the process flag; it is used as a process property for shutdown. If you do change it, keep the value in stop.sh consistent
# Do not modify the process flag; it is used as a process property for shutdown. If you do change it, keep the value in shutdown.sh consistent
PROCESS_FLAG="kafka-console-ui-process-flag:${PROJECT_DIR}"
JAVA_OPTS="$JAVA_OPTS $JAVA_MEM_OPTS"
JAVA_OPTS="$JAVA_OPTS $JAVA_MEM_OPTS -Dfile.encoding=utf-8"
nohup java -jar $JAVA_OPTS $TARGET --spring.config.location="$CONF_FILE" --logging.home="$PROJECT_DIR" --data.dir=$DATA_DIR $PROCESS_FLAG 1>/dev/null 2>$ERROR_OUT &

View File

@@ -26,7 +26,7 @@ kafka:
It notes that the kafka.config.enable-acl config item must be set to true.
Note: **this approach is no longer supported**
## Version notes
## Version notes for releases before v1.0.6
Multi-cluster configuration is now supported; for details, see the cluster configuration introduction on the home page.
So these extra config items have all been removed here.

View File

Binary file not shown (before: 205 KiB, after: 127 KiB)

View File

@@ -85,7 +85,7 @@ docker run -d -p 7766:7766 -v $PWD/data:/app/data -v $PWD/log:/app/log wdkang/ka
After extracting, put the Dockerfile in the root of the folder
**Dockefile**
**Dockerfile**
```dockerfile
# jdk

13
document/img/jb_beam.svg Normal file
View File

@@ -0,0 +1,13 @@
<svg xmlns="http://www.w3.org/2000/svg" width="298" height="64" fill="none" viewBox="0 0 298 64">
<defs>
<linearGradient id="a" x1=".850001" x2="62.62" y1="62.72" y2="1.81" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF9419"/>
<stop offset=".43" stop-color="#FF021D"/>
<stop offset=".99" stop-color="#E600FF"/>
</linearGradient>
</defs>
<path fill="#000" d="M86.4844 40.5858c0 .8464-.1792 1.5933-.5377 2.2505-.3585.6573-.8564 1.1651-1.5137 1.5236-.6572.3585-1.3941.5378-2.2406.5378H78v6.1044h5.0787c1.912 0 3.6248-.4282 5.1484-1.2846 1.5236-.8564 2.7186-2.0415 3.585-3.5452.8663-1.5037 1.3045-3.1966 1.3045-5.0886V21.0178h-6.6322v19.568Zm17.8556-1.8224h13.891v-5.6065H104.34v-6.3633h15.355v-5.7758H97.8766v29.9743h22.2464v-5.7757H104.34v-6.453Zm17.865-11.8005h8.882v24.0193h6.633V26.9629h8.842v-5.9451h-24.367v5.9551l.01-.01Zm47.022 9.0022c-.517-.2788-1.085-.4879-1.673-.6472.449-.1295.877-.2888 1.275-.488 1.096-.5676 1.962-1.3643 2.579-2.39.618-1.0257.936-2.2007.936-3.5351 0-1.5237-.418-2.8879-1.244-4.0929-.827-1.195-1.992-2.131-3.486-2.8082-1.494-.6672-3.206-1.0058-5.118-1.0058h-13.315v29.9743h13.574c2.011 0 3.804-.3485 5.387-1.0556 1.573-.707 2.798-1.6829 3.675-2.9476.866-1.2547 1.304-2.6887 1.304-4.302 0-1.4837-.338-2.8082-1.026-3.9833-.687-1.175-1.633-2.0812-2.858-2.7285l-.01.0099Zm-13.603-9.9184h5.886c.816 0 1.533.1494 2.161.4382.627.2888 1.115.707 1.464 1.2547.348.5378.527 1.1751.527 1.9021 0 .7269-.179 1.414-.527 1.9817-.349.5676-.837.9958-1.464 1.3045-.628.3087-1.345.4581-2.161.4581h-5.886v-7.3492.0099Zm10.138 18.134c-.378.5676-.916 1.0058-1.603 1.3145-.697.3087-1.484.4581-2.39.4581h-6.145v-7.6878h6.145c.886 0 1.673.1693 2.37.4979.687.3286 1.235.7867 1.613 1.3842.378.5975.578 1.2747.578 2.0414 0 .7668-.19 1.4241-.568 1.9917Zm29.596-5.3077c1.663-.7967 2.947-1.922 3.864-3.3659.916-1.444 1.374-3.117 1.374-5.0289 0-1.912-.448-3.5253-1.344-4.9592-.897-1.434-2.171-2.5394-3.814-3.3261-1.644-.7867-3.546-1.1751-5.717-1.1751h-13.124v29.9743h6.642V40.0779h4.322l6.084 10.9142h7.578l-6.851-11.7208c.339-.1195.677-.249.996-.3983h-.01Zm-2.151-6.1244c-.369.6274-.896 1.1154-1.583 1.444-.688.3386-1.494.5079-2.42.5079h-5.975v-8.2953h5.975c.926 0 1.732.1693 2.42.4979.687.3287 1.214.8166 1.583 1.434.368.6174.558 1.3544.558 2.1908 0 .8365-.19 1.5734-.558 2.2008v.0199Zm20.594-11.7308-10.706 29.9743h6.742l2.121-6.6122h11.114l2.27 6.6122h6.612L220.99 21.0178h-7.189Zm-.339 18.3431 3.445-10.5756.409-1.922.408 1.922 3.685 10.5756h-7.947Zm20.693 11.6312h6.851V21.0178h-6.851v29.9743Zm31.02-9.6993-12.896-20.275h-6.463v29.9743h6.055V30.7172l12.826 20.2749h6.533V21.0178h-6.055v20.275Zm31.528-3.3559c-.647-1.2448-1.564-2.2904-2.729-3.1369-1.165-.8464-2.509-1.4041-4.023-1.6929l-5.098-1.0456c-.797-.1892-1.434-.5178-1.902-.9958-.469-.478-.708-1.0755-.708-1.7825 0-.6473.17-1.205.518-1.683.339-.478.827-.8464 1.444-1.1153.618-.2689 1.335-.3983 2.151-.3983.817 0 1.554.1394 2.181.4182.627.2788 1.115.6672 1.464 1.1751s.528 1.0755.528 1.7228h6.642c-.04-1.7427-.528-3.2863-1.444-4.6207-.916-1.3443-2.201-2.3899-3.834-3.1468-1.633-.7568-3.505-1.1352-5.597-1.1352-2.091 0-3.943.3884-5.566 1.1751-1.623.7867-2.898 1.8721-3.804 3.2663-.906 1.3941-1.364 2.9775-1.364 4.76 0 1.444.288 2.7485.876 3.9036.587 1.1652 1.414 2.1311 2.479 2.8979 1.076.7668 2.311 1.3045 3.725 1.6033l5.397 1.1153c.886.2091 1.584.5975 2.101 1.1551.518.5577.767 1.2448.767 2.0813 0 .6672-.189 1.2747-.567 1.8025-.379.5277-.907.936-1.584 1.2248-.677.2888-1.474.4282-2.39.4282-.916 0-1.782-.1593-2.529-.478-.747-.3186-1.325-.7767-1.733-1.3742-.418-.5875-.617-1.2747-.617-2.0414h-6.642c.029 1.8721.527 3.5152 1.513 4.9492.976 1.424 2.32 2.5394 4.033 3.336 1.713.7967 3.675 1.195 5.886 1.195 2.21 0 4.202-.4083 5.915-1.2249 1.723-.8165 3.057-1.9418 4.023-3.3758.966-1.434 1.444-3.0572 1.444-4.8696 0-1.4838-.329-2.848-.976-4.1028l.02.01Z"/>
<path fill="url(#a)" d="M20.34 3.66 3.66 20.34C1.32 22.68 0 25.86 0 29.18V59c0 2.76 2.24 5 5 5h29.82c3.32 0 6.49-1.32 8.84-3.66l16.68-16.68c2.34-2.34 3.66-5.52 3.66-8.84V5c0-2.76-2.24-5-5-5H29.18c-3.32 0-6.49 1.32-8.84 3.66Z"/>
<path fill="#000" d="M48 16H8v40h40V16Z"/>
<path fill="#fff" d="M30 47H13v4h17v-4Z"/>
</svg>

New file: 4.1 KiB

View File

Binary file not shown (before: 103 KiB, after: 605 KiB)

63
pom.xml
View File

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
<version>1.0.5</version>
<version>1.0.11</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>
@@ -21,10 +21,11 @@
<ui.path>${project.basedir}/ui</ui.path>
<frontend-maven-plugin.version>1.11.0</frontend-maven-plugin.version>
<compiler.version>1.8</compiler.version>
<kafka.version>3.2.0</kafka.version>
<kafka.version>3.5.0</kafka.version>
<maven.assembly.plugin.version>3.0.0</maven.assembly.plugin.version>
<mybatis-plus-boot-starter.version>3.4.2</mybatis-plus-boot-starter.version>
<scala.version>2.13.6</scala.version>
<spring-framework.version>5.3.26</spring-framework.version>
</properties>
<dependencies>
<dependency>
@@ -48,6 +49,11 @@
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
@@ -84,6 +90,12 @@
</exclusions>
</dependency>
<!-- <dependency>-->
<!-- <groupId>org.apache.kafka</groupId>-->
<!-- <artifactId>kafka-tools</artifactId>-->
<!-- <version>${kafka.version}</version>-->
<!-- </dependency>-->
<dependency>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
@@ -168,26 +180,26 @@
</executions>
</plugin>
<!-- <plugin>-->
<!-- <groupId>org.codehaus.mojo</groupId>-->
<!-- <artifactId>build-helper-maven-plugin</artifactId>-->
<!-- <version>3.2.0</version>-->
<!-- <executions>-->
<!-- <execution>-->
<!-- <id>add-source</id>-->
<!-- <phase>generate-sources</phase>-->
<!-- <goals>-->
<!-- <goal>add-source</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <sources>-->
<!-- <source>src/main/java</source>-->
<!-- <source>src/main/scala</source>-->
<!-- </sources>-->
<!-- </configuration>-->
<!-- </execution>-->
<!-- </executions>-->
<!-- </plugin>-->
<!-- <plugin>-->
<!-- <groupId>org.codehaus.mojo</groupId>-->
<!-- <artifactId>build-helper-maven-plugin</artifactId>-->
<!-- <version>3.2.0</version>-->
<!-- <executions>-->
<!-- <execution>-->
<!-- <id>add-source</id>-->
<!-- <phase>generate-sources</phase>-->
<!-- <goals>-->
<!-- <goal>add-source</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <sources>-->
<!-- <source>src/main/java</source>-->
<!-- <source>src/main/scala</source>-->
<!-- </sources>-->
<!-- </configuration>-->
<!-- </execution>-->
<!-- </executions>-->
<!-- </plugin>-->
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
@@ -195,6 +207,8 @@
<source>${compiler.version}</source>
<target>${compiler.version}</target>
<encoding>${project.build.sourceEncoding}</encoding>
<!-- <debug>false</debug>-->
<!-- <parameters>false</parameters>-->
</configuration>
</plugin>
<plugin>
@@ -219,7 +233,8 @@
<goal>npm</goal>
</goals>
<configuration>
<arguments>install --registry=https://registry.npmjs.org/</arguments>
<!-- <arguments>install &#45;&#45;registry=https://registry.npmjs.org/</arguments>-->
<arguments>install --registry=https://registry.npm.taobao.org</arguments>
</configuration>
</execution>
<execution>
@@ -272,7 +287,7 @@
<attach>true</attach>
<tarLongFileMode>posix</tarLongFileMode>
<runOnlyAtExecutionRoot>false</runOnlyAtExecutionRoot>
<appendAssemblyId>false</appendAssemblyId>
<appendAssemblyId>true</appendAssemblyId>
</configuration>
</plugin>
</plugins>

View File

@@ -6,6 +6,9 @@ import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.scheduling.annotation.EnableScheduling;
/**
* @author 晓东哥哥
*/
@MapperScan("com.xuxd.kafka.console.dao")
@SpringBootApplication
@EnableScheduling

View File

@@ -0,0 +1,137 @@
package com.xuxd.kafka.console.aspect;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.config.LogConfig;
import com.xuxd.kafka.console.filter.CredentialsContext;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;
/**
* @author 晓东哥哥
*/
@Slf4j
@Order(-1)
@Aspect
@Component
public class ControllerLogAspect {
private Map<String, String> descMap = new HashMap<>();
private ReentrantLock lock = new ReentrantLock();
private final LogConfig logConfig;
public ControllerLogAspect(LogConfig logConfig) {
this.logConfig = logConfig;
}
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.ControllerLog)")
private void pointcut() {
}
@Around("pointcut()")
public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
if (!logConfig.isPrintControllerLog()) {
return joinPoint.proceed();
}
StringBuilder params = new StringBuilder("[");
try {
String methodName = getMethodFullName(joinPoint.getTarget().getClass().getName(), joinPoint.getSignature().getName());
if (!descMap.containsKey(methodName)) {
cacheDescInfo(joinPoint);
}
Object[] args = joinPoint.getArgs();
long startTime = System.currentTimeMillis();
Object res = joinPoint.proceed();
long endTime = System.currentTimeMillis();
for (int i = 0; i < args.length; i++) {
params.append(args[i]);
}
params.append("]");
String resStr = "[" + (res != null ? res.toString() : "") + "]";
StringBuilder sb = new StringBuilder();
Credentials credentials = CredentialsContext.get();
if (credentials != null) {
sb.append("[").append(credentials.getUsername()).append("] ");
}
String shortMethodName = descMap.getOrDefault(methodName, ".-");
shortMethodName = shortMethodName.substring(shortMethodName.lastIndexOf(".") + 1);
sb.append("[").append(shortMethodName)
.append("调用完成: ")
.append("请求参数=").append(params).append(", ")
.append("响应值=").append(resStr).append(", ")
.append("耗时=").append(endTime - startTime)
.append(" ms");
log.info(sb.toString());
return res;
} catch (Throwable e) {
log.error("调用方法异常, 请求参数:" + params, e);
throw e;
}
}
private void cacheDescInfo(ProceedingJoinPoint joinPoint) {
lock.lock();
try {
String methodName = joinPoint.getSignature().getName();
Class<?> aClass = joinPoint.getTarget().getClass();
Method method = null;
try {
Object[] args = joinPoint.getArgs();
Class<?>[] clzArr = new Class[args.length];
for (int i = 0; i < args.length; i++) {
clzArr[i] = args[i].getClass();
if (List.class.isAssignableFrom(clzArr[i])) {
clzArr[i] = List.class;
}
}
method = aClass.getDeclaredMethod(methodName, clzArr);
} catch (Exception e) {
log.warn("cacheDescInfo error: {}", e.getMessage());
}
String fullMethodName = getMethodFullName(aClass.getName(), methodName);
String desc = "[" + fullMethodName + "]";
if (method == null) {
descMap.put(fullMethodName, desc);
return;
}
ControllerLog controllerLog = method.getAnnotation(ControllerLog.class);
String value = controllerLog.value();
if (StringUtils.isBlank(value)) {
descMap.put(fullMethodName, desc);
} else {
descMap.put(fullMethodName, value);
}
} finally {
lock.unlock();
}
}
private String getMethodFullName(String className, String methodName) {
return className + "#" + methodName;
}
}

View File

@@ -0,0 +1,140 @@
package com.xuxd.kafka.console.aspect;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.cache.RolePermCache;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.exception.UnAuthorizedException;
import com.xuxd.kafka.console.filter.CredentialsContext;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/17 22:32
**/
@Slf4j
@Order(1)
@Aspect
@Component
public class PermissionAspect {
private Map<String, Set<String>> permMap = new HashMap<>();
private final AuthConfig authConfig;
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final RolePermCache rolePermCache;
public PermissionAspect(AuthConfig authConfig,
SysUserMapper userMapper,
SysRoleMapper roleMapper,
SysPermissionMapper permissionMapper,
RolePermCache rolePermCache) {
this.authConfig = authConfig;
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.permissionMapper = permissionMapper;
this.rolePermCache = rolePermCache;
}
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.Permission)")
private void pointcut() {
}
@Before(value = "pointcut()")
public void before(JoinPoint joinPoint) {
if (!authConfig.isEnable()) {
return;
}
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
Method method = signature.getMethod();
Permission permission = method.getAnnotation(Permission.class);
if (permission == null) {
return;
}
String[] value = permission.value();
if (value == null || value.length == 0) {
return;
}
String name = method.getName() + "@" + method.hashCode();
Map<String, Set<String>> pm = checkPermMap(name, value);
Set<String> allowPermSet = pm.get(name);
if (allowPermSet == null) {
log.error("解析权限出现意外啦!!!");
return;
}
Credentials credentials = CredentialsContext.get();
if (credentials == null || credentials.isInvalid()) {
throw new UnAuthorizedException("credentials is invalid");
}
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("username", credentials.getUsername());
SysUserDO userDO = userMapper.selectOne(queryWrapper);
if (userDO == null) {
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
boolean unauthorized = true;
boolean notFoundHideProperty = true;
String roleIds = userDO.getRoleIds();
List<Long> roleIdList = Arrays.stream(roleIds.split(",")).
map(String::trim).filter(StringUtils::isNotEmpty).
map(Long::valueOf).collect(Collectors.toList());
for (Long roleId : roleIdList) {
Set<String> permSet = rolePermCache.getRolePermCache().getOrDefault(roleId, Collections.emptySet());
for (String p : allowPermSet) {
if (permSet.contains(p)) {
unauthorized = false;
}
}
if (permSet.contains(authConfig.getHideClusterPropertyPerm())) {
notFoundHideProperty = false;
}
}
if (unauthorized) {
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
if (authConfig.isHideClusterProperty() && notFoundHideProperty) {
credentials.setHideClusterProperty(true);
}
credentials.setRoleIdList(roleIdList);
}
private Map<String, Set<String>> checkPermMap(String methodName, String[] value) {
if (!permMap.containsKey(methodName)) {
Map<String, Set<String>> map = new HashMap<>(permMap);
map.put(methodName, new HashSet<>(Arrays.asList(value)));
permMap = map;
return map;
}
return permMap;
}
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.aspect.annotation;
import java.lang.annotation.*;
/**
* This annotation is applied to methods in the controller layer
* @author 晓东哥哥
*/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ControllerLog {
String value() default "";
}

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.aspect.annotation;
import java.lang.annotation.*;
/**
* Permission annotation: when authentication is enabled, only users holding this permission can access the corresponding API.
*
* @author: xuxd
* @date: 2023/5/17 22:30
**/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Permission {
String[] value() default {};
}

View File

@@ -0,0 +1,30 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/5/14 19:37
**/
@Data
public class Credentials {
public static final Credentials INVALID = new Credentials();
private String username;
private long expiration;
/**
* Whether to hide cluster properties
*/
private boolean hideClusterProperty;
private List<Long> roleIdList;
public boolean isInvalid() {
return this == INVALID;
}
}

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/5/14 20:44
**/
@Data
public class LoginResult {
private String token;
private List<String> permissions;
}

View File

@@ -33,4 +33,6 @@ public class QueryMessage {
private String headerKey;
private String headerValue;
private int filterNumber;
}

View File

@@ -0,0 +1,22 @@
package com.xuxd.kafka.console.beans;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import org.springframework.context.ApplicationEvent;
/**
* @author: xuxd
* @date: 2023/5/18 15:49
**/
@ToString
public class RolePermUpdateEvent extends ApplicationEvent {
@Getter
@Setter
private boolean reload = false;
public RolePermUpdateEvent(Object source) {
super(source);
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.beans;
import java.util.List;
import lombok.Data;
/**
@@ -22,4 +23,18 @@ public class SendMessage {
private int num;
private long offset;
private List<Header> headers;
/**
* true: sync send.
*/
private boolean sync;
@Data
public static class Header{
private String headerKey;
private String headerValue;
}
}
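The `sync` flag and `Header` list added here map directly onto the standard Kafka producer API: `send()` returns a `Future<RecordMetadata>`, so blocking on `get()` makes the send synchronous, while passing a callback keeps it asynchronous. A minimal sketch of a backend honoring these fields (hypothetical helper; the topic/key/body accessors are assumptions, since the diff elides those fields):

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

class SendSketch {
    // Hypothetical helper honoring SendMessage.sync and SendMessage.headers.
    static void send(KafkaProducer<String, String> producer, SendMessage msg) throws Exception {
        ProducerRecord<String, String> record =
                new ProducerRecord<>(msg.getTopic(), msg.getKey(), msg.getBody());
        if (msg.getHeaders() != null) {
            // Attach each DTO header as a Kafka record header.
            for (SendMessage.Header h : msg.getHeaders()) {
                record.headers().add(h.getHeaderKey(),
                        h.getHeaderValue().getBytes(StandardCharsets.UTF_8));
            }
        }
        if (msg.isSync()) {
            producer.send(record).get();          // block until the broker acks
        } else {
            producer.send(record, (meta, e) -> {  // async with error callback
                if (e != null) e.printStackTrace();
            });
        }
    }
}
```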

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @since: 2023/8/23 21:35
**/
@Data
@TableName("t_cluster_role_relation")
public class ClusterRoleRelationDO {
@TableId(type = IdType.AUTO)
private Long id;
private Long roleId;
private Long clusterInfoId;
private String updateTime;
}

View File

@@ -0,0 +1,29 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_permission")
public class SysPermissionDO {
@TableId(type = IdType.AUTO)
private Long id;
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
}

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_role")
public class SysRoleDO {
@TableId(type = IdType.AUTO)
private Long id;
private String roleName;
private String description;
private String permissionIds;
}

View File

@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_user")
public class SysUserDO {
@TableId(type = IdType.AUTO)
private Long id;
private String username;
private String password;
private String salt;
private String roleIds;
}

View File

@@ -0,0 +1,27 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/10 20:12
**/
@Data
public class AlterClientQuotaDTO {
private String type;
private List<String> types;
private List<String> names;
private String consumerRate;
private String producerRate;
private String requestPercentage;
private List<String> deleteConfigs;
}

View File

@@ -0,0 +1,28 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import lombok.Data;
/**
* @author: xuxd
* @since: 2023/8/23 21:42
**/
@Data
public class ClusterRoleRelationDTO {
private Long id;
private Long roleId;
private Long clusterInfoId;
private String updateTime;
public ClusterRoleRelationDO toDO() {
ClusterRoleRelationDO aDo = new ClusterRoleRelationDO();
aDo.setId(id);
aDo.setRoleId(roleId);
aDo.setClusterInfoId(clusterInfoId);
return aDo;
}
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/14 18:59
**/
@Data
public class LoginUserDTO {
private String username;
private String password;
}

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/9 21:53
**/
@Data
public class QueryClientQuotaDTO {
private List<String> types;
private List<String> names;
}

View File

@@ -37,6 +37,8 @@ public class QueryMessageDTO {
private String headerValue;
private int filterNumber;
public QueryMessage toQueryMessage() {
QueryMessage queryMessage = new QueryMessage();
queryMessage.setTopic(topic);
@@ -69,6 +71,7 @@ public class QueryMessageDTO {
if (StringUtils.isNotBlank(headerValue)) {
queryMessage.setHeaderValue(headerValue.trim());
}
queryMessage.setFilterNumber(filterNumber);
return queryMessage;
}

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.Date;
import java.util.Set;
/**
* Send-statistics query request: how many messages were sent within a given time range.
* @author: xuxd
* @since: 2023/12/1 22:01
**/
@Data
public class QuerySendStatisticsDTO {
private String topic;
private Set<Integer> partition;
private Date startTime;
private Date endTime;
}

View File

@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysPermissionDTO {
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
public SysPermissionDO toSysPermissionDO() {
SysPermissionDO permissionDO = new SysPermissionDO();
permissionDO.setName(this.name);
permissionDO.setType(this.type);
permissionDO.setParentId(this.parentId);
permissionDO.setPermission(this.permission);
return permissionDO;
}
}

View File

@@ -0,0 +1,35 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import lombok.Data;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysRoleDTO {
private Long id;
private String roleName;
private String description;
private List<String> permissionIds;
public SysRoleDO toDO() {
SysRoleDO roleDO = new SysRoleDO();
roleDO.setId(this.id);
roleDO.setRoleName(this.roleName);
roleDO.setDescription(this.description);
if (CollectionUtils.isNotEmpty(permissionIds)) {
roleDO.setPermissionIds(StringUtils.join(this.permissionIds, ","));
}
return roleDO;
}
}

View File

@@ -0,0 +1,34 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysUserDTO {
private Long id;
private String username;
private String password;
private String salt;
private String roleIds;
private Boolean resetPassword = false;
public SysUserDO toDO() {
SysUserDO userDO = new SysUserDO();
userDO.setId(this.id);
userDO.setUsername(this.username);
userDO.setPassword(this.password);
userDO.setSalt(this.salt);
userDO.setRoleIds(this.roleIds);
return userDO;
}
}

View File

@@ -21,4 +21,6 @@ public class BrokerApiVersionVO {
private int unSupportNums;
private List<String> versionInfo;
private String brokerVersion;
}

View File

@@ -0,0 +1,82 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import java.util.List;
import java.util.Map;
/**
* @author 晓东哥哥
*/
@Data
public class ClientQuotaEntityVO {
private String user;
private String client;
private String ip;
private String consumerRate;
private String producerRate;
private String requestPercentage;
public static ClientQuotaEntityVO from(ClientQuotaEntity entity, List<String> entityTypes, Map<String, Object> config) {
ClientQuotaEntityVO entityVO = new ClientQuotaEntityVO();
Map<String, String> entries = entity.entries();
entityTypes.forEach(type -> {
switch (type) {
case ClientQuotaEntity.USER:
entityVO.setUser(entries.get(type));
break;
case ClientQuotaEntity.CLIENT_ID:
entityVO.setClient(entries.get(type));
break;
case ClientQuotaEntity.IP:
entityVO.setIp(entries.get(type));
break;
default:
break;
}
});
entityVO.setConsumerRate(convert(config.getOrDefault(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setProducerRate(convert(config.getOrDefault(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setRequestPercentage(config.getOrDefault(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, "").toString());
return entityVO;
}
public static String convert(Object num) {
if (num == null) {
return null;
}
if (num instanceof String) {
if ((StringUtils.isBlank((String) num))) {
return (String) num;
}
}
if (num instanceof Number) {
Number number = (Number) num;
double value = number.doubleValue();
double _1kb = 1024;
double _1mb = 1024 * _1kb;
if (value < _1kb) {
return value + " Byte";
}
if (value < _1mb) {
return String.format("%.1f KB", (value / _1kb));
}
if (value >= _1mb) {
return String.format("%.1f MB", (value / _1mb));
}
}
return String.valueOf(num);
}
}
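As a usage note, `convert` renders raw byte-rate quota values human-readable. A quick sanity check of the branches above:

```java
// 1536 bytes/s is >= 1 KB and < 1 MB, so the KB branch formats 1536 / 1024:
assert "1.5 KB".equals(ClientQuotaEntityVO.convert(1536));
// 2097152 bytes/s is >= 1 MB, so the MB branch formats 2097152 / 1048576:
assert "2.0 MB".equals(ClientQuotaEntityVO.convert(2 * 1024 * 1024));
```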

View File

@@ -0,0 +1,33 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import lombok.Data;
/**
* @author: xuxd
* @since: 2023/8/23 21:45
**/
@Data
public class ClusterRoleRelationVO {
private Long id;
private Long roleId;
private Long clusterInfoId;
private String updateTime;
private String roleName;
private String clusterName;
public static ClusterRoleRelationVO from(ClusterRoleRelationDO relationDO) {
ClusterRoleRelationVO vo = new ClusterRoleRelationVO();
vo.setId(relationDO.getId());
vo.setRoleId(relationDO.getRoleId());
vo.setClusterInfoId(relationDO.getClusterInfoId());
vo.setUpdateTime(relationDO.getUpdateTime());
return vo;
}
}

View File

@@ -0,0 +1,36 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Map;
/**
* @author: xuxd
* @since: 2023/12/1 17:49
**/
@Data
public class QuerySendStatisticsVO {
private static final String FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSSXXX";
private static final DateFormat DATE_FORMAT = new SimpleDateFormat(FORMAT);
private String topic;
private Long total;
private Map<Integer, Long> detail;
private String startTime;
private String endTime;
private String searchTime = format(new Date());
public static String format(Date date) {
return DATE_FORMAT.format(date);
}
}

View File

@@ -0,0 +1,43 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/4/17 21:18
**/
@Data
public class SysPermissionVO {
private Long id;
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
private Long key;
private List<SysPermissionVO> children;
public static SysPermissionVO from(SysPermissionDO permissionDO) {
SysPermissionVO permissionVO = new SysPermissionVO();
permissionVO.setPermission(permissionDO.getPermission());
permissionVO.setType(permissionDO.getType());
permissionVO.setName(permissionDO.getName());
permissionVO.setParentId(permissionDO.getParentId());
permissionVO.setKey(permissionDO.getId());
permissionVO.setId(permissionDO.getId());
return permissionVO;
}
}

View File

@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/4/19 21:12
**/
@Data
public class SysRoleVO {
private Long id;
private String roleName;
private String description;
private List<Long> permissionIds;
public static SysRoleVO from(SysRoleDO roleDO) {
SysRoleVO roleVO = new SysRoleVO();
roleVO.setId(roleDO.getId());
roleVO.setRoleName(roleDO.getRoleName());
roleVO.setDescription(roleDO.getDescription());
if (StringUtils.isNotEmpty(roleDO.getPermissionIds())) {
List<Long> list = Arrays.stream(roleDO.getPermissionIds().split(",")).
filter(StringUtils::isNotEmpty).map(e -> Long.valueOf(e.trim())).collect(Collectors.toList());
roleVO.setPermissionIds(list);
}
return roleVO;
}
}

View File

@@ -0,0 +1,31 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/6 13:06
**/
@Data
public class SysUserVO {
private Long id;
private String username;
private String password;
private String roleIds;
private String roleNames;
public static SysUserVO from(SysUserDO userDO) {
SysUserVO userVO = new SysUserVO();
userVO.setId(userDO.getId());
userVO.setUsername(userDO.getUsername());
userVO.setRoleIds(userDO.getRoleIds());
userVO.setPassword(userDO.getPassword());
return userVO;
}
}

View File

@@ -0,0 +1,91 @@
package com.xuxd.kafka.console.cache;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Component;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/18 15:47
**/
@DependsOn("dataInit")
@Slf4j
@Component
public class RolePermCache implements ApplicationListener<RolePermUpdateEvent>, SmartInitializingSingleton {
private Map<Long, SysPermissionDO> permCache = new HashMap<>();
private Map<Long, Set<String>> rolePermCache = new HashMap<>();
private final SysPermissionMapper permissionMapper;
private final SysRoleMapper roleMapper;
public RolePermCache(SysPermissionMapper permissionMapper, SysRoleMapper roleMapper) {
this.permissionMapper = permissionMapper;
this.roleMapper = roleMapper;
}
@Override
public void onApplicationEvent(RolePermUpdateEvent event) {
log.info("更新角色权限信息:{}", event);
if (event.isReload()) {
this.loadPermCache();
}
refresh();
}
public Map<Long, SysPermissionDO> getPermCache() {
return permCache;
}
public Map<Long, Set<String>> getRolePermCache() {
return rolePermCache;
}
private void refresh() {
List<SysRoleDO> roleDOS = roleMapper.selectList(null);
Map<Long, Set<String>> tmp = new HashMap<>();
for (SysRoleDO roleDO : roleDOS) {
String permissionIds = roleDO.getPermissionIds();
if (StringUtils.isEmpty(permissionIds)) {
continue;
}
List<Long> list = Arrays.stream(permissionIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
Set<String> permSet = tmp.getOrDefault(roleDO.getId(), new HashSet<>());
for (Long permId : list) {
SysPermissionDO permissionDO = permCache.get(permId);
if (permissionDO != null) {
permSet.add(permissionDO.getPermission());
}
}
tmp.put(roleDO.getId(), permSet);
}
rolePermCache = tmp;
}
private void loadPermCache() {
List<SysPermissionDO> roleDOS = permissionMapper.selectList(null);
Map<Long, SysPermissionDO> map = roleDOS.stream().collect(Collectors.toMap(SysPermissionDO::getId, Function.identity(), (e1, e2) -> e1));
permCache = map;
}
@Override
public void afterSingletonsInstantiated() {
this.loadPermCache();
this.refresh();
}
}
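`RolePermCache` rebuilds its role-to-permission map whenever a `RolePermUpdateEvent` is published. A sketch of how a role-editing service might trigger the refresh (the `applicationEventPublisher` field is an assumption; any injected Spring `ApplicationEventPublisher` works):

```java
// After persisting a role or permission change, publish the event so
// RolePermCache.onApplicationEvent() runs refresh() against the new data.
RolePermUpdateEvent event = new RolePermUpdateEvent(this);
event.setReload(true); // true also reloads the permission table, e.g. after inserting new permission rows
applicationEventPublisher.publishEvent(event);
```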

View File

@@ -0,0 +1,54 @@
package com.xuxd.kafka.console.config;
import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* @author: xuxd
* @date: 2023/5/9 21:08
**/
@Data
@Configuration
@ConfigurationProperties(prefix = "auth")
public class AuthConfig {
/**
* Whether to enable login authentication.
*/
private boolean enable;
/**
* Used to sign the JWT token for authentication; any value will do.
*/
private String secret = "kafka-console-ui-default-secret";
/**
* Token validity period, in hours.
*/
private long expireHours;
/**
* Hide the cluster's property info: if the current user lacks the edit permission under cluster switching, they cannot see the cluster's properties. Clusters with ACL enabled need this turned on.
* Not hiding the properties is not an option when ACL is enabled: the properties contain authentication info such as the superuser's username and password, which must not be seen by ordinary roles.
*/
private boolean hideClusterProperty;
/**
* Do not modify; just keep it consistent with the value configured in data-h2.sql.
*/
private String hideClusterPropertyPerm = "op:cluster-switch:edit";
/**
* Whether to enable cluster data permissions; if enabled, you can configure which roles can see which clusters.
* Defaults to false for compatibility with older versions.
*
* @since 1.0.9
*/
private boolean enableClusterAuthority;
/**
* Reload permission info: set this to true when upgrading (replacing the jar) and the new version adds new permission menus.
*/
private boolean reloadPermission;
}
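Under Spring Boot's relaxed binding, the fields above surface as `auth.*` keys in config/application.yml. A sketch listing every available key (values are illustrative, not recommendations; only `enable` and `expire-hours` appear in the README example above):

```yaml
auth:
  enable: true                     # require login before access (default false)
  secret: change-me                # JWT signing secret (a built-in default exists)
  expire-hours: 24                 # login token validity, in hours
  hide-cluster-property: true      # hide cluster properties from users without edit permission
  enable-cluster-authority: false  # per-role cluster visibility (since 1.0.9)
  reload-permission: false         # set true once after an upgrade adds new permission menus
```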

View File

@@ -1,13 +1,6 @@
package com.xuxd.kafka.console.config;
import kafka.console.ClusterConsole;
import kafka.console.ConfigConsole;
import kafka.console.ConsumerConsole;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import kafka.console.MessageConsole;
import kafka.console.OperationConsole;
import kafka.console.TopicConsole;
import kafka.console.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -52,7 +45,7 @@ public class KafkaConfiguration {
@Bean
public OperationConsole operationConsole(KafkaConfig config, TopicConsole topicConsole,
ConsumerConsole consumerConsole) {
ConsumerConsole consumerConsole) {
return new OperationConsole(config, topicConsole, consumerConsole);
}
@@ -60,4 +53,9 @@ public class KafkaConfiguration {
public MessageConsole messageConsole(KafkaConfig config) {
return new MessageConsole(config);
}
@Bean
public ClientQuotaConsole clientQuotaConsole(KafkaConfig config) {
return new ClientQuotaConsole(config);
}
}

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.config;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* @author: xuxd
* @since: 2023/8/20 20:00
**/
@Configuration
@ConfigurationProperties(prefix = "log")
public class LogConfig {
private boolean printControllerLog = true;
public boolean isPrintControllerLog() {
return printControllerLog;
}
public void setPrintControllerLog(boolean printControllerLog) {
this.printControllerLog = printControllerLog;
}
}

View File

@@ -1,5 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.dto.AddAuthDTO;
import com.xuxd.kafka.console.beans.dto.ConsumerAuthDTO;
@@ -28,6 +30,7 @@ public class AclAuthController {
@Autowired
private AclService aclService;
@Permission({"acl:authority:detail", "acl:sasl-scram:detail"})
@PostMapping("/detail")
public Object getAclDetailList(@RequestBody QueryAclDTO param) {
return aclService.getAclDetailList(param.toEntry());
@@ -38,11 +41,14 @@ public class AclAuthController {
return aclService.getOperationList();
}
@Permission("acl:authority")
@PostMapping("/list")
public Object getAclList(@RequestBody QueryAclDTO param) {
return aclService.getAclList(param.toEntry());
}
@ControllerLog("增加Acl")
@Permission({"acl:authority:add-principal", "acl:authority:add", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addAcl(@RequestBody AddAuthDTO param) {
return aclService.addAcl(param.toAclEntry());
@@ -54,6 +60,8 @@ public class AclAuthController {
* @param param entry.topic && entry.username must.
* @return
*/
@ControllerLog("增加ProducerAcl")
@Permission({"acl:authority:producer", "acl:sasl-scram:producer"})
@PostMapping("/producer")
public Object addProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -66,6 +74,8 @@ public class AclAuthController {
* @param param entry.topic && entry.groupId entry.username must.
* @return
*/
@ControllerLog("增加ConsumerAcl")
@Permission({"acl:authority:consumer", "acl:sasl-scram:consumer"})
@PostMapping("/consumer")
public Object addConsumerAcl(@RequestBody ConsumerAuthDTO param) {
@@ -78,6 +88,8 @@ public class AclAuthController {
* @param entry entry
* @return
*/
@ControllerLog("删除Acl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteAclByUser(@RequestBody AclEntry entry) {
return aclService.deleteAcl(entry);
@@ -89,17 +101,21 @@ public class AclAuthController {
* @param param entry.username
* @return
*/
@ControllerLog("删除Acl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/user")
public Object deleteAclByUser(@RequestBody DeleteAclDTO param) {
return aclService.deleteUserAcl(param.toUserEntry());
}
/**
* add producer acl.
* delete producer acl.
*
* @param param entry.topic && entry.username must.
* @return
*/
@ControllerLog("删除ProducerAcl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/producer")
public Object deleteProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -107,15 +123,29 @@ public class AclAuthController {
}
/**
* add consumer acl.
* delete consumer acl.
*
* @param param entry.topic && entry.groupId entry.username must.
* @return
*/
@ControllerLog("删除ConsumerAcl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/consumer")
public Object deleteConsumerAcl(@RequestBody ConsumerAuthDTO param) {
return aclService.deleteConsumerAcl(param.toTopicEntry(), param.toGroupEntry());
}
/**
* clear principal acls.
*
* @param param acl principal.
* @return true or false.
*/
@ControllerLog("清除Acl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/clear")
public Object clearAcl(@RequestBody DeleteAclDTO param) {
return aclService.clearAcl(param.toUserEntry());
}
}

View File

@@ -1,7 +1,11 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.AclUser;
import com.xuxd.kafka.console.service.AclService;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
@@ -12,7 +16,7 @@ import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui.
* kafka-console-ui. sasl scram user.
*
* @author xuxd
* @date 2021-08-28 21:13:05
@@ -24,29 +28,44 @@ public class AclUserController {
@Autowired
private AclService aclService;
@Permission("acl:sasl-scram")
@GetMapping
public Object getUserList() {
return aclService.getUserList();
}
@ControllerLog("增加SaslUser")
@Permission({"acl:sasl-scram:add-update", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addOrUpdateUser(@RequestBody AclUser user) {
return aclService.addOrUpdateUser(user.getUsername(), user.getPassword());
}
@ControllerLog("删除SaslUser")
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteUser(@RequestBody AclUser user) {
return aclService.deleteUser(user.getUsername());
}
@ControllerLog("删除SaslUser和Acl")
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping("/auth")
public Object deleteUserAndAuth(@RequestBody AclUser user) {
return aclService.deleteUserAndAuth(user.getUsername());
}
@Permission("acl:sasl-scram:detail")
@GetMapping("/detail")
public Object getUserDetail(@RequestParam String username) {
public Object getUserDetail(@RequestParam("username") String username) {
return aclService.getUserDetail(username);
}
@GetMapping("/scram")
public Object getSaslScramUserList(@RequestParam(required = false, name = "username") String username) {
AclEntry entry = new AclEntry();
entry.setPrincipal(StringUtils.isNotBlank(username) ? username : null);
return aclService.getSaslScramUserList(entry);
}
}

View File

@@ -0,0 +1,41 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.service.AuthService;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/5/11 18:54
**/
@RestController
@RequestMapping("/auth")
public class AuthController {
private final AuthConfig authConfig;
private final AuthService authService;
public AuthController(AuthConfig authConfig, AuthService authService) {
this.authConfig = authConfig;
this.authService = authService;
}
@GetMapping("/enable")
public boolean enable() {
return authConfig.isEnable();
}
@PostMapping("/login")
public ResponseData login(@RequestBody LoginUserDTO userDTO) {
return authService.login(userDTO);
}
@GetMapping("/own/data/auth")
public boolean ownDataAuthority() {
return authService.ownDataAuthority();
}
}

View File

@@ -0,0 +1,59 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.dto.QueryClientQuotaDTO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/1/9 21:50
**/
@RestController
@RequestMapping("/client/quota")
public class ClientQuotaController {
private final ClientQuotaService clientQuotaService;
public ClientQuotaController(ClientQuotaService clientQuotaService) {
this.clientQuotaService = clientQuotaService;
}
@Permission({"quota:user", "quota:client", "quota:user-client"})
@PostMapping("/list")
public Object getClientQuotaConfigs(@RequestBody QueryClientQuotaDTO request) {
return clientQuotaService.getClientQuotaConfigs(request.getTypes(), request.getNames());
}
@ControllerLog("增加限流配额")
@Permission({"quota:user:add", "quota:client:add", "quota:user-client:add", "quota:edit"})
@PostMapping
public Object alterClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
if (request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types length and names length is invalid.");
}
}
return clientQuotaService.alterClientQuotaConfigs(request);
}
@ControllerLog("删除限流配额")
@Permission("quota:del")
@DeleteMapping
public Object deleteClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
if (request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types length and names length is invalid.");
}
}
return clientQuotaService.deleteClientQuotaConfigs(request);
}
}

View File

@@ -1,15 +1,11 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
/**
* kafka-console-ui.
@@ -29,21 +25,34 @@ public class ClusterController {
return clusterService.getClusterInfo();
}
@Permission({"op:cluster-switch", "user-manage:cluster-role:add"})
@GetMapping("/info")
public Object getClusterInfoList() {
return clusterService.getClusterInfoList();
}
@Permission({"user-manage:cluster-role:add"})
@GetMapping("/info/select")
public Object getClusterInfoListForSelect() {
return clusterService.getClusterInfoListForSelect();
}
@ControllerLog("增加集群信息")
@Permission("op:cluster-switch:add")
@PostMapping("/info")
public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.addClusterInfo(dto.to());
}
@ControllerLog("删除集群信息")
@Permission("op:cluster-switch:del")
@DeleteMapping("/info")
public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.deleteClusterInfo(dto.getId());
}
@ControllerLog("编辑集群信息")
@Permission("op:cluster-switch:edit")
@PutMapping("/info")
public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.updateClusterInfo(dto.to());

View File

@@ -0,0 +1,42 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.dto.ClusterRoleRelationDTO;
import com.xuxd.kafka.console.service.ClusterRoleRelationService;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @since: 2023/8/23 22:01
**/
@RestController
@RequestMapping("/cluster-role/relation")
public class ClusterRoleRelationController {
private final ClusterRoleRelationService service;
public ClusterRoleRelationController(ClusterRoleRelationService service) {
this.service = service;
}
@Permission("user-manage:cluster-role")
@GetMapping
public Object select() {
return service.select();
}
@ControllerLog("增加集群归属角色信息")
@Permission("user-manage:cluster-role:add")
@PostMapping
public Object add(@RequestBody ClusterRoleRelationDTO dto) {
return service.add(dto);
}
@ControllerLog("删除集群归属角色信息")
@Permission("user-manage:cluster-role:delete")
@DeleteMapping
public Object delete(@RequestParam("id") Long id) {
return service.delete(id);
}
}

View File

@@ -1,5 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterConfigDTO;
import com.xuxd.kafka.console.beans.enums.AlterType;
@@ -41,46 +43,61 @@ public class ConfigController {
return ResponseData.create().data(configMap).success();
}
@Permission("topic:property-config")
@GetMapping("/topic")
public Object getTopicConfig(String topic) {
return configService.getTopicConfig(topic);
}
@ControllerLog("编辑topic配置")
@Permission("topic:property-config:edit")
@PostMapping("/topic")
public Object setTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@ControllerLog("删除topic配置")
@Permission("topic:property-config:del")
@DeleteMapping("/topic")
public Object deleteTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@Permission("cluster:property-config")
@GetMapping("/broker")
public Object getBrokerConfig(String brokerId) {
return configService.getBrokerConfig(brokerId);
}
@ControllerLog("设置broker配置")
@Permission("cluster:edit")
@PostMapping("/broker")
public Object setBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@ControllerLog("编辑broker配置")
@Permission("cluster:edit")
@DeleteMapping("/broker")
public Object deleteBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@Permission("cluster:log-config")
@GetMapping("/broker/logger")
public Object getBrokerLoggerConfig(String brokerId) {
return configService.getBrokerLoggerConfig(brokerId);
}
@ControllerLog("编辑broker日志配置")
@Permission("cluster:edit")
@PostMapping("/broker/logger")
public Object setBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@ControllerLog("删除broker日志配置")
@Permission("cluster:edit")
@DeleteMapping("/broker/logger")
public Object deleteBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);

View File

@@ -1,28 +1,21 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AddSubscriptionDTO;
import com.xuxd.kafka.console.beans.dto.QueryConsumerGroupDTO;
import com.xuxd.kafka.console.beans.dto.ResetOffsetDTO;
import com.xuxd.kafka.console.service.ConsumerService;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
import java.util.*;
/**
* kafka-console-ui.
@@ -51,26 +44,37 @@ public class ConsumerController {
return consumerService.getConsumerGroupList(groupIdList, stateSet);
}
@ControllerLog("删除消费组")
@Permission("group:del")
@DeleteMapping("/group")
public Object deleteConsumerGroup(@RequestParam String groupId) {
public Object deleteConsumerGroup(@RequestParam("groupId") String groupId) {
return consumerService.deleteConsumerGroup(groupId);
}
@Permission("group:client")
@GetMapping("/member")
public Object getConsumerMembers(@RequestParam String groupId) {
public Object getConsumerMembers(@RequestParam("groupId") String groupId) {
return consumerService.getConsumerMembers(groupId);
}
@Permission("group:consumer-detail")
@GetMapping("/detail")
public Object getConsumerDetail(@RequestParam String groupId) {
public Object getConsumerDetail(@RequestParam("groupId") String groupId) {
return consumerService.getConsumerDetail(groupId);
}
@ControllerLog("新增消费组")
@Permission("group:add")
@PostMapping("/subscription")
public Object addSubscription(@RequestBody AddSubscriptionDTO subscriptionDTO) {
return consumerService.addSubscription(subscriptionDTO.getGroupId(), subscriptionDTO.getTopic());
}
@ControllerLog("重置消费位点")
@Permission({"group:consumer-detail:min",
"group:consumer-detail:last",
"group:consumer-detail:timestamp",
"group:consumer-detail:any"})
@PostMapping("/reset/offset")
public Object restOffset(@RequestBody ResetOffsetDTO offsetDTO) {
ResponseData res = ResponseData.create().failed("unknown");
@@ -78,7 +82,7 @@ public class ConsumerController {
case ResetOffsetDTO.Level.TOPIC:
switch (offsetDTO.getType()) {
case ResetOffsetDTO.Type.EARLIEST:
res = consumerService.resetOffsetToEndpoint(offsetDTO.getGroupId(), offsetDTO.getTopic(), OffsetResetStrategy.EARLIEST);
break;
case ResetOffsetDTO.Type.LATEST:
@@ -94,7 +98,7 @@ public class ConsumerController {
case ResetOffsetDTO.Level.PARTITION:
switch (offsetDTO.getType()) {
case ResetOffsetDTO.Type.SPECIAL:
res = consumerService.resetPartitionToTargetOffset(offsetDTO.getGroupId(), new TopicPartition(offsetDTO.getTopic(), offsetDTO.getPartition()), offsetDTO.getOffset());
break;
default:
@@ -114,17 +118,19 @@ public class ConsumerController {
}
@GetMapping("/topic/list")
public Object getSubscribeTopicList(@RequestParam String groupId) {
public Object getSubscribeTopicList(@RequestParam("groupId") String groupId) {
return consumerService.getSubscribeTopicList(groupId);
}
@Permission({"topic:consumer-detail"})
@GetMapping("/topic/subscribed")
public Object getTopicSubscribedByGroups(@RequestParam String topic) {
public Object getTopicSubscribedByGroups(@RequestParam("topic") String topic) {
return consumerService.getTopicSubscribedByGroups(topic);
}
@Permission("group:offset-partition")
@GetMapping("/offset/partition")
public Object getOffsetPartition(@RequestParam String groupId) {
public Object getOffsetPartition(@RequestParam("groupId") String groupId) {
return consumerService.getOffsetPartition(groupId);
}
}
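
A change repeated throughout this controller (and in TopicController below) is giving @RequestParam an explicit name. Spring can only infer the binding name from the Java parameter when classes are compiled with -parameters or full debug info; naming the parameter in the annotation removes that dependency on compiler flags. A minimal sketch with hypothetical endpoints:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NameBindingSketch {
    // Implicit: only works if the compiled class still records the name "groupId".
    @GetMapping("/sketch/implicit")
    public String implicit(@RequestParam String groupId) {
        return groupId;
    }

    // Explicit: binds the "groupId" query parameter regardless of build flags.
    @GetMapping("/sketch/explicit")
    public String explicit(@RequestParam("groupId") String groupId) {
        return groupId;
    }
}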

View File

@@ -1,11 +1,15 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QueryMessageDTO;
import com.xuxd.kafka.console.beans.dto.QuerySendStatisticsDTO;
import com.xuxd.kafka.console.service.MessageService;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
@@ -24,16 +28,19 @@ public class MessageController {
@Autowired
private MessageService messageService;
@Permission("message:search-time")
@PostMapping("/search/time")
public Object searchByTime(@RequestBody QueryMessageDTO dto) {
return messageService.searchByTime(dto.toQueryMessage());
}
@Permission("message:search-offset")
@PostMapping("/search/offset")
public Object searchByOffset(@RequestBody QueryMessageDTO dto) {
return messageService.searchByOffset(dto.toQueryMessage());
}
@Permission("message:detail")
@PostMapping("/search/detail")
public Object searchDetail(@RequestBody QueryMessageDTO dto) {
return messageService.searchDetail(dto.toQueryMessage());
@@ -45,15 +52,21 @@ public class MessageController {
}
@PostMapping("/send")
@ControllerLog("在线发送消息")
@Permission("message:send")
public Object send(@RequestBody SendMessage message) {
return messageService.send(message);
return messageService.sendWithHeader(message);
}
@ControllerLog("重新发送消息")
@Permission("message:resend")
@PostMapping("/resend")
public Object resend(@RequestBody SendMessage message) {
return messageService.resend(message);
}
@ControllerLog("在线删除消息")
@Permission("message:del")
@DeleteMapping
public Object delete(@RequestBody List<QueryMessage> messages) {
if (CollectionUtils.isEmpty(messages)) {
@@ -61,4 +74,13 @@ public class MessageController {
}
return messageService.delete(messages);
}
@Permission("message:send-statistics")
@PostMapping("/send/statistics")
public Object sendStatistics(@RequestBody QuerySendStatisticsDTO dto) {
if (StringUtils.isEmpty(dto.getTopic())) {
return ResponseData.create().failed("Topic is null");
}
return messageService.sendStatisticsByTime(dto);
}
}
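
The /send endpoint now delegates to sendWithHeader. At the kafka-clients level a header is just a key plus raw bytes attached to the record; a minimal standalone sketch of that mechanism (plain producer API against an assumed local broker, not the project's MessageService implementation):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class SendWithHeaderSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("demo-topic", "key", "value");
            // Headers travel with the record as raw bytes.
            record.headers().add("trace-id", "abc-123".getBytes());
            producer.send(record);
        }
    }
}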

View File

@@ -1,5 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.TopicPartition;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
import com.xuxd.kafka.console.beans.dto.ProposedAssignmentDTO;
@@ -8,13 +10,7 @@ import com.xuxd.kafka.console.beans.dto.SyncDataDTO;
import com.xuxd.kafka.console.service.OperationService;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
/**
* kafka-console-ui.
@@ -29,12 +25,14 @@ public class OperationController {
@Autowired
private OperationService operationService;
@ControllerLog("同步消费位点")
@PostMapping("/sync/consumer/offset")
public Object syncConsumerOffset(@RequestBody SyncDataDTO dto) {
dto.getProperties().put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, dto.getAddress());
return operationService.syncConsumerOffset(dto.getGroupId(), dto.getTopic(), dto.getProperties());
}
@ControllerLog("重新位点对齐")
@PostMapping("/sync/min/offset/alignment")
public Object minOffsetAlignment(@RequestBody SyncDataDTO dto) {
dto.getProperties().put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, dto.getAddress());
@@ -46,31 +44,41 @@ public class OperationController {
return operationService.getAlignmentList();
}
@ControllerLog("deleteAlignment")
@DeleteMapping("/sync/alignment")
public Object deleteAlignment(@RequestParam Long id) {
public Object deleteAlignment(@RequestParam("id") Long id) {
return operationService.deleteAlignmentById(id);
}
@ControllerLog("优先副本leader")
@Permission({"topic:partition-detail:preferred", "op:replication-preferred"})
@PostMapping("/replication/preferred")
public Object electPreferredLeader(@RequestBody ReplicationDTO dto) {
return operationService.electPreferredLeader(dto.getTopic(), dto.getPartition());
}
@ControllerLog("配置同步限流")
@Permission("op:config-throttle")
@PostMapping("/broker/throttle")
public Object configThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.configThrottle(dto.getBrokerList(), dto.getUnit().toKb(dto.getThrottle()));
}
@ControllerLog("移除限流配置")
@Permission("op:remove-throttle")
@DeleteMapping("/broker/throttle")
public Object removeThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.removeThrottle(dto.getBrokerList());
}
@Permission("op:replication-update-detail")
@GetMapping("/replication/reassignments")
public Object currentReassignments() {
return operationService.currentReassignments();
}
@ControllerLog("取消副本重分配")
@Permission("op:replication-update-detail:cancel")
@DeleteMapping("/replication/reassignments")
public Object cancelReassignment(@RequestBody TopicPartition partition) {
return operationService.cancelReassignment(new org.apache.kafka.common.TopicPartition(partition.getTopic(), partition.getPartition()));

View File

@@ -1,23 +1,20 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.dto.AddPartitionDTO;
import com.xuxd.kafka.console.beans.dto.NewTopicDTO;
import com.xuxd.kafka.console.beans.dto.TopicThrottleDTO;
import com.xuxd.kafka.console.beans.enums.TopicType;
import com.xuxd.kafka.console.service.TopicService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui.
@@ -37,26 +34,34 @@ public class TopicController {
return topicService.getTopicNameList(false);
}
@Permission("topic:load")
@GetMapping("/list")
public Object getTopicList(@RequestParam(required = false) String topic, @RequestParam String type) {
public Object getTopicList(@RequestParam(required = false, name = "topic") String topic, @RequestParam("type") String type) {
return topicService.getTopicList(topic, TopicType.valueOf(type.toUpperCase()));
}
@ControllerLog("删除topic")
@Permission({"topic:batch-del", "topic:del"})
@DeleteMapping
public Object deleteTopic(@RequestBody List<String> topics) {
return topicService.deleteTopics(topics);
}
@Permission("topic:partition-detail")
@GetMapping("/partition")
public Object getTopicPartitionInfo(@RequestParam String topic) {
public Object getTopicPartitionInfo(@RequestParam("topic") String topic) {
return topicService.getTopicPartitionInfo(topic.trim());
}
@ControllerLog("创建topic")
@Permission("topic:add")
@PostMapping("/new")
public Object createNewTopic(@RequestBody NewTopicDTO topicDTO) {
return topicService.createTopic(topicDTO.toNewTopic());
}
@ControllerLog("增加topic分区")
@Permission("topic:partition-add")
@PostMapping("/partition/new")
public Object addPartition(@RequestBody AddPartitionDTO partitionDTO) {
String topic = partitionDTO.getTopic().trim();
@@ -75,22 +80,27 @@ public class TopicController {
}
@GetMapping("/replica/assignment")
public Object getCurrentReplicaAssignment(@RequestParam String topic) {
public Object getCurrentReplicaAssignment(@RequestParam("topic") String topic) {
return topicService.getCurrentReplicaAssignment(topic);
}
@ControllerLog("更新副本")
@Permission({"topic:replication-modify", "op:replication-reassign"})
@PostMapping("/replica/assignment")
public Object updateReplicaAssignment(@RequestBody ReplicaAssignment assignment) {
return topicService.updateReplicaAssignment(assignment);
}
@ControllerLog("配置限流")
@Permission("topic:replication-sync-throttle")
@PostMapping("/replica/throttle")
public Object configThrottle(@RequestBody TopicThrottleDTO dto) {
return topicService.configThrottle(dto.getTopic(), dto.getPartitions(), dto.getOperation());
}
@Permission("topic:send-count")
@GetMapping("/send/stats")
public Object sendStats(@RequestParam String topic) {
public Object sendStats(@RequestParam("topic") String topic) {
return topicService.sendStats(topic);
}
}

View File

@@ -0,0 +1,97 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
import com.xuxd.kafka.console.service.UserManageService;
import org.springframework.web.bind.annotation.*;
import javax.servlet.http.HttpServletRequest;
/**
* @author: xuxd
* @date: 2023/4/11 21:34
**/
@RestController
@RequestMapping("/sys/user/manage")
public class UserManageController {
private final UserManageService userManageService;
public UserManageController(UserManageService userManageService) {
this.userManageService = userManageService;
}
@Permission({"user-manage:user:add", "user-manage:user:change-role", "user-manage:user:reset-pass"})
@ControllerLog("新增/更新用户")
@PostMapping("/user")
public Object addOrUpdateUser(@RequestBody SysUserDTO userDTO) {
return userManageService.addOrUpdateUser(userDTO);
}
@Permission("user-manage:role:save")
@ControllerLog("新增/更新角色")
@PostMapping("/role")
public Object addOrUpdateRole(@RequestBody SysRoleDTO roleDTO) {
return userManageService.addOrUpdateRole(roleDTO);
}
@ControllerLog("新增权限")
@PostMapping("/permission")
public Object addPermission(@RequestBody SysPermissionDTO permissionDTO) {
return userManageService.addPermission(permissionDTO);
}
@Permission("user-manage:role:save")
@ControllerLog("更新角色")
@PutMapping("/role")
public Object updateRole(@RequestBody SysRoleDTO roleDTO) {
return userManageService.updateRole(roleDTO);
}
@Permission({"user-manage:role"})
@GetMapping("/role")
public Object selectRole() {
return userManageService.selectRole();
}
@Permission({"user-manage:permission"})
@GetMapping("/permission")
public Object selectPermission() {
return userManageService.selectPermission();
}
@Permission({"user-manage:user"})
@GetMapping("/user")
public Object selectUser() {
return userManageService.selectUser();
}
@Permission("user-manage:role:del")
@ControllerLog("删除角色")
@DeleteMapping("/role")
public Object deleteRole(@RequestParam("id") Long id) {
return userManageService.deleteRole(id);
}
@Permission("user-manage:user:del")
@ControllerLog("删除用户")
@DeleteMapping("/user")
public Object deleteUser(@RequestParam("id") Long id) {
return userManageService.deleteUser(id);
}
@Permission("user-manage:setting")
@ControllerLog("更新密码")
@PostMapping("/user/password")
public Object updatePassword(@RequestBody SysUserDTO userDTO, HttpServletRequest request) {
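// Bind the password change to the caller: the username comes from the
// verified token credentials, not from the request body.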
Credentials credentials = (Credentials) request.getAttribute("credentials");
if (credentials != null && !credentials.isInvalid()) {
userDTO.setUsername(credentials.getUsername());
}
return userManageService.updatePassword(userDTO);
}
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import org.apache.ibatis.annotations.Mapper;
/**
* Cluster info and role relation.
*
* @author: xuxd
* @since: 2023/8/23 21:40
**/
@Mapper
public interface ClusterRoleRelationMapper extends BaseMapper<ClusterRoleRelationDO> {
}
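
The mapper declares no methods of its own: extending BaseMapper<ClusterRoleRelationDO> is enough for MyBatis-Plus to generate single-table CRUD, which ClusterRoleRelationServiceImpl later in this diff calls directly. A minimal usage sketch (column names follow the insertIfNotExist query below; the getId accessor is assumed):

import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import java.util.List;

public class MapperUsageSketch {
    private final ClusterRoleRelationMapper mapper;

    public MapperUsageSketch(ClusterRoleRelationMapper mapper) {
        this.mapper = mapper;
    }

    void demo() {
        ClusterRoleRelationDO relation = new ClusterRoleRelationDO();
        relation.setRoleId(1L);
        relation.setClusterInfoId(2L);
        mapper.insert(relation);                 // generated INSERT

        QueryWrapper<ClusterRoleRelationDO> qw = new QueryWrapper<>();
        qw.eq("role_id", 1L);                    // WHERE role_id = 1
        List<ClusterRoleRelationDO> rows = mapper.selectList(qw);
        rows.forEach(r -> mapper.deleteById(r.getId())); // generated DELETE
    }
}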

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import org.apache.ibatis.annotations.Mapper;
/**
* System permission.
*
* @author: xuxd
* @date: 2023/4/11 21:21
**/
@Mapper
public interface SysPermissionMapper extends BaseMapper<SysPermissionDO> {
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import org.apache.ibatis.annotations.Mapper;
/**
* @author: xuxd
* @date: 2023/4/11 21:22
**/
@Mapper
public interface SysRoleMapper extends BaseMapper<SysRoleDO> {
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import org.apache.ibatis.annotations.Mapper;
/**
* @author: xuxd
* @date: 2023/4/11 21:22
**/
@Mapper
public interface SysUserMapper extends BaseMapper<SysUserDO> {
}

View File

@@ -0,0 +1,96 @@
package com.xuxd.kafka.console.dao.init;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
/**
* @author: xuxd
* @date: 2023/5/17 13:10
**/
@Slf4j
@Component
public class DataInit implements SmartInitializingSingleton {
private final AuthConfig authConfig;
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final DataSource dataSource;
private final SqlParse sqlParse;
private final ApplicationEventPublisher publisher;
public DataInit(AuthConfig authConfig,
SysUserMapper userMapper,
SysRoleMapper roleMapper,
SysPermissionMapper permissionMapper,
DataSource dataSource,
ApplicationEventPublisher publisher) {
this.authConfig = authConfig;
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.permissionMapper = permissionMapper;
this.dataSource = dataSource;
this.publisher = publisher;
this.sqlParse = new SqlParse();
}
@Override
public void afterSingletonsInstantiated() {
if (!authConfig.isEnable()) {
log.info("Disable login authentication, no longer try to initialize the data");
return;
}
try (Connection connection = dataSource.getConnection()) {
Integer userCount = userMapper.selectCount(null);
if (userCount == null || userCount == 0) {
initData(connection, SqlParse.USER_TABLE);
}
Integer roleCount = roleMapper.selectCount(null);
if (roleCount == null || roleCount == 0) {
initData(connection, SqlParse.ROLE_TABLE);
}
if (authConfig.isReloadPermission()) {
permissionMapper.delete(null);
}
Integer permCount = permissionMapper.selectCount(null);
if (permCount == null || permCount == 0) {
initData(connection, SqlParse.PERM_TABLE);
}
RolePermUpdateEvent event = new RolePermUpdateEvent(this);
event.setReload(true);
publisher.publishEvent(event);
} catch (SQLException e) {
throw new RuntimeException(e);
}
}
private void initData(Connection connection, String table) throws SQLException {
log.info("Init default data for {}", table);
String sql = sqlParse.getMergeSql(table);
try (PreparedStatement statement = connection.prepareStatement(sql)) {
statement.execute();
}
}
}

View File

@@ -0,0 +1,96 @@
package com.xuxd.kafka.console.dao.init;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* @author: xuxd
* @date: 2023/5/17 21:22
**/
@Slf4j
public class SqlParse {
private final String FILE = "db/data-h2.sql";
private final Map<String, List<String>> sqlMap = new HashMap<>();
public static final String ROLE_TABLE = "t_sys_role";
public static final String USER_TABLE = "t_sys_user";
public static final String PERM_TABLE = "t_sys_permission";
public SqlParse() {
sqlMap.put(ROLE_TABLE, new ArrayList<>());
sqlMap.put(USER_TABLE, new ArrayList<>());
sqlMap.put(PERM_TABLE, new ArrayList<>());
String table = null;
try {
List<String> lines = getSqlLines();
for (String str : lines) {
if (StringUtils.isNotEmpty(str)) {
if (str.indexOf("start--") > 0) {
if (str.indexOf(ROLE_TABLE) > 0) {
table = ROLE_TABLE;
}
if (str.indexOf(USER_TABLE) > 0) {
table = USER_TABLE;
}
if (str.indexOf(PERM_TABLE) > 0) {
table = PERM_TABLE;
}
}
if (isSql(str)) {
if (table == null) {
log.error("Table is null, can not load sql: {}", str);
continue;
}
sqlMap.get(table).add(str);
}
}
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
public List<String> getSqlList(String table) {
return sqlMap.get(table);
}
public String getMergeSql(String table) {
List<String> list = getSqlList(table);
StringBuilder sb = new StringBuilder();
list.forEach(sql -> sb.append(sql));
return sb.toString();
}
private boolean isSql(String str) {
return StringUtils.isNotEmpty(str) && str.startsWith("insert");
}
private List<String> getSqlLines() throws Exception {
// File file = ResourceUtils.getFile(FILE);
// List<String> lines = Files.readLines(file, Charset.forName("UTF-8"));
List<String> lines = new ArrayList<>();
try (InputStream inputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(FILE)) {
try (InputStreamReader inputStreamReader = new InputStreamReader(inputStream, "UTF-8")) {
try (BufferedReader reader = new BufferedReader(inputStreamReader)) {
String line;
while ((line = reader.readLine()) != null) {
lines.add(line);
}
}
}
}
return lines;
}
}
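
The parser's contract with db/data-h2.sql is implicit in the constructor: a line containing "start--" (not at column 0) switches the current table, and only lines beginning with "insert" are collected under it. A hypothetical fragment that satisfies the contract; the real seed file is not part of this diff and its columns are inferred from the DO classes used elsewhere:

-- t_sys_role start--
insert into t_sys_role (id, role_name, permission_ids) values (1, 'admin', '1,2');
-- t_sys_user start--
insert into t_sys_user (id, username, password, salt, role_ids) values (1, 'admin', '<hash>', '<salt>', '1');
-- t_sys_permission start--
insert into t_sys_permission (id, permission) values (1, 'topic:add');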

View File

@@ -1,11 +1,14 @@
package com.xuxd.kafka.console.service.interceptor;
package com.xuxd.kafka.console.exception;
import com.xuxd.kafka.console.beans.ResponseData;
import javax.servlet.http.HttpServletRequest;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import javax.servlet.http.HttpServletRequest;
/**
* kafka-console-ui.
@@ -17,6 +20,14 @@ import org.springframework.web.bind.annotation.ResponseBody;
@ControllerAdvice(basePackages = "com.xuxd.kafka.console.controller")
public class GlobalExceptionHandler {
@ResponseStatus(code = HttpStatus.FORBIDDEN)
@ExceptionHandler(value = UnAuthorizedException.class)
@ResponseBody
public Object unAuthorizedExceptionHandler(HttpServletRequest req, Exception ex) throws Exception {
log.error("unAuthorized: {}", ex.getMessage());
return ResponseData.create().failed("UnAuthorized: " + ex.getMessage());
}
@ExceptionHandler(value = Exception.class)
@ResponseBody
public Object exceptionHandler(HttpServletRequest req, Exception ex) throws Exception {

View File

@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.exception;
/**
* @author: xuxd
* @date: 2023/5/17 23:08
**/
public class UnAuthorizedException extends RuntimeException {
public UnAuthorizedException(String message) {
super(message);
}
}
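
Read together with the handler above: code running below a controller signals a failed authorization by throwing this exception, and the @ControllerAdvice maps it to HTTP 403 with a ResponseData body instead of a generic 500. A minimal sketch; the project's real enforcement point is the @Permission aspect, which is outside this diff:

import com.xuxd.kafka.console.exception.UnAuthorizedException;

public class PermissionCheckSketch {
    // Hypothetical guard used somewhere in the call chain.
    static void require(boolean allowed, String permission) {
        if (!allowed) {
            throw new UnAuthorizedException("permission " + permission + " required");
        }
    }

    public static void main(String[] args) {
        require(true, "topic:add");  // passes silently
        require(false, "topic:del"); // throws; mapped to 403 by GlobalExceptionHandler
    }
}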

View File

@@ -0,0 +1,78 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.utils.AuthUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.core.annotation.Order;
import org.springframework.http.HttpStatus;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
/**
* @author: xuxd
* @date: 2023/5/9 21:20
**/
@Order(1)
@WebFilter(filterName = "auth-filter", urlPatterns = {"/*"})
@Slf4j
public class AuthFilter implements Filter {
private final AuthConfig authConfig;
private final String TOKEN_HEADER = "X-Auth-Token";
private final String AUTH_URI_PREFIX = "/auth";
public AuthFilter(AuthConfig authConfig) {
this.authConfig = authConfig;
}
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
if (!authConfig.isEnable()) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
HttpServletRequest request = (HttpServletRequest) servletRequest;
HttpServletResponse response = (HttpServletResponse) servletResponse;
String accessToken = request.getHeader(TOKEN_HEADER);
String requestURI = request.getRequestURI();
if (isResourceRequest(requestURI)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
if (requestURI.startsWith(AUTH_URI_PREFIX)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
if (StringUtils.isEmpty(accessToken)) {
response.setStatus(HttpStatus.UNAUTHORIZED.value());
return;
}
Credentials credentials = AuthUtil.parseToken(authConfig.getSecret(), accessToken);
if (credentials.isInvalid()) {
response.setStatus(HttpStatus.UNAUTHORIZED.value());
return;
}
request.setAttribute("credentials", credentials);
try {
CredentialsContext.set(credentials);
filterChain.doFilter(servletRequest, servletResponse);
} finally {
CredentialsContext.remove();
}
}
private boolean isResourceRequest(String requestURI) {
return requestURI.contains(".") || requestURI.equals("/");
}
}
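
From a client's point of view the filter defines a simple contract: every request outside /auth and static resources must carry a valid X-Auth-Token header or it is rejected with 401 before any controller runs. A minimal sketch with Java 11's HttpClient (host, port, and the login flow that yields the token are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenHeaderSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:7766/topic/list")) // assumed local deployment
                .header("X-Auth-Token", "<token-from-login>")        // placeholder token
                .build();                                            // GET by default
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());                   // 401 without a valid token
    }
}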

View File

@@ -1,4 +1,4 @@
package com.xuxd.kafka.console.service.interceptor;
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
@@ -6,28 +6,27 @@ import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.annotation.Order;
import org.springframework.http.MediaType;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-05 19:56:25
**/
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*","/user/*","/cluster/*","/config/*","/consumer/*","/message/*","/topic/*","/op/*"})
@Order(100)
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*", "/user/*", "/cluster/*", "/config/*", "/consumer/*", "/message/*", "/topic/*", "/op/*", "/client/*"})
@Slf4j
public class ContextSetFilter implements Filter {
@@ -42,8 +41,9 @@ public class ContextSetFilter implements Filter {
@Autowired
private ClusterInfoMapper clusterInfoMapper;
@Override public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
@Override
public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
HttpServletRequest request = (HttpServletRequest) req;
String uri = request.getRequestURI();
@@ -72,6 +72,7 @@ public class ContextSetFilter implements Filter {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
// log.info("current kafka config: {}", config);
}
}
chain.doFilter(req, response);

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.Credentials;
/**
* @author: xuxd
* @date: 2023/5/17 23:02
**/
public class CredentialsContext {
private static final ThreadLocal<Credentials> CREDENTIALS = new ThreadLocal<>();
public static void set(Credentials credentials) {
CREDENTIALS.set(credentials);
}
public static Credentials get() {
return CREDENTIALS.get();
}
public static void remove() {
CREDENTIALS.remove();
}
}
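
The holder exists so the caller's identity does not have to be threaded through every service signature: AuthFilter sets it per request, anything on that thread can read it, and the remove() in the filter's finally block keeps pooled servlet threads from leaking the previous user. A minimal consumer sketch (the helper is hypothetical):

import com.xuxd.kafka.console.beans.Credentials;

public class CurrentUserSketch {
    static String currentUsername() {
        Credentials credentials = CredentialsContext.get();
        return credentials == null ? "anonymous" : credentials.getUsername();
    }
}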

View File

@@ -42,4 +42,7 @@ public interface AclService {
ResponseData getUserDetail(String username);
ResponseData clearAcl(AclEntry entry);
ResponseData getSaslScramUserList(AclEntry entry);
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
/**
* @author: xuxd
* @date: 2023/5/14 19:00
**/
public interface AuthService {
ResponseData login(LoginUserDTO userDTO);
boolean ownDataAuthority();
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import java.util.List;
/**
* @author 晓东哥哥
*/
public interface ClientQuotaService {
ResponseData getClientQuotaConfigs(List<String> types, List<String> names);
ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request);
ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request);
}

View File

@@ -0,0 +1,19 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.ClusterRoleRelationDTO;
/**
* Cluster info and role relation.
*
* @author: xuxd
* @since: 2023/8/23 21:42
**/
public interface ClusterRoleRelationService {
ResponseData select();
ResponseData add(ClusterRoleRelationDTO dto);
ResponseData delete(Long id);
}

View File

@@ -12,6 +12,8 @@ import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
public interface ClusterService {
ResponseData getClusterInfo();
ResponseData getClusterInfoListForSelect();
ResponseData getClusterInfoList();
ResponseData addClusterInfo(ClusterInfoDO infoDO);

View File

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.dto.QuerySendStatisticsDTO;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
@@ -24,7 +25,11 @@ public interface MessageService {
ResponseData send(SendMessage message);
ResponseData sendWithHeader(SendMessage message);
ResponseData resend(SendMessage message);
ResponseData delete(List<QueryMessage> messages);
ResponseData sendStatisticsByTime(QuerySendStatisticsDTO request);
}

View File

@@ -0,0 +1,40 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
/**
* Login user and permission management.
*
* @author: xuxd
* @date: 2023/4/11 21:24
**/
public interface UserManageService {
/**
* Add a permission.
*/
ResponseData addPermission(SysPermissionDTO permissionDTO);
ResponseData addOrUpdateRole(SysRoleDTO roleDTO);
ResponseData addOrUpdateUser(SysUserDTO userDTO);
ResponseData selectRole();
ResponseData selectPermission();
ResponseData selectUser();
ResponseData updateUser(SysUserDTO userDTO);
ResponseData updateRole(SysRoleDTO roleDTO);
ResponseData deleteRole(Long id);
ResponseData deleteUser(Long id);
ResponseData updatePassword(SysUserDTO userDTO);
}

View File

@@ -10,30 +10,23 @@ import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.service.AclService;
import com.xuxd.kafka.console.utils.SaslUtil;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialsDescription;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.auth.SecurityProtocol;
import org.apache.kafka.common.errors.SecurityDisabledException;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableSasl;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableScram;
@@ -139,39 +132,52 @@ public class AclServiceImpl implements AclService {
}
@Override public ResponseData getAclList(AclEntry entry) {
List<AclBinding> aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
List<AclBinding> aclBindingList = Collections.emptyList();
try {
aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
} catch (Exception ex) {
if (ex.getCause() instanceof SecurityDisabledException) {
Throwable e = ex.getCause();
log.info("SecurityDisabledException: {}", e.getMessage());
Map<String, String> hint = new HashMap<>(2);
hint.put("hint", "Security Disabled: " + e.getMessage());
return ResponseData.create().data(hint).success();
}
throw new RuntimeException(ex.getCause());
}
// List<AclBinding> aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
List<AclEntry> entryList = aclBindingList.stream().map(x -> AclEntry.valueOf(x)).collect(Collectors.toList());
Map<String, List<AclEntry>> entryMap = entryList.stream().collect(Collectors.groupingBy(AclEntry::getPrincipal));
Map<String, Object> resultMap = new HashMap<>();
entryMap.forEach((k, v) -> {
Map<String, List<AclEntry>> map = v.stream().collect(Collectors.groupingBy(e -> e.getResourceType() + "#" + e.getName()));
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (k.equals(username)) {
Map<String, Object> map2 = new HashMap<>(map);
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
}
// String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
// if (k.equals(username)) {
// Map<String, Object> map2 = new HashMap<>(map);
// Map<String, Object> userMap = new HashMap<>();
// userMap.put("role", "admin");
// map2.put("USER", userMap);
// }
resultMap.put(k, map);
});
if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (!u.name().equals(username)) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
resultMap.put(u.name(), map2);
}
}
});
}
// if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
// Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
//
// detailList.values().forEach(u -> {
// if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
// String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
// if (!u.name().equals(username)) {
// resultMap.put(u.name(), Collections.emptyMap());
// } else {
// Map<String, Object> map2 = new HashMap<>();
// Map<String, Object> userMap = new HashMap<>();
// userMap.put("role", "admin");
// map2.put("USER", userMap);
// resultMap.put(u.name(), map2);
// }
// }
// });
// }
return ResponseData.create().data(new CounterMap<>(resultMap)).success();
}
@@ -236,6 +242,37 @@ public class AclServiceImpl implements AclService {
return ResponseData.create().data(vo).success();
}
@Override
public ResponseData clearAcl(AclEntry entry) {
log.info("Start clear acl, principal: {}", entry);
return aclConsole.deleteUserAcl(entry) ? ResponseData.create().success() : ResponseData.create().failed("操作失败");
}
@Override
public ResponseData getSaslScramUserList(AclEntry entry) {
Map<String, Object> resultMap = new HashMap<>();
if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (!u.name().equals(username)) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
resultMap.put(u.name(), map2);
}
}
});
}
return ResponseData.create().data(new CounterMap<>(resultMap)).success();
}
// @Override public void afterSingletonsInstantiated() {
// if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
// log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());

View File

@@ -0,0 +1,108 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.LoginResult;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
import com.xuxd.kafka.console.cache.RolePermCache;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.service.AuthService;
import com.xuxd.kafka.console.utils.AuthUtil;
import com.xuxd.kafka.console.utils.UUIDStrUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/14 19:01
**/
@Slf4j
@Service
public class AuthServiceImpl implements AuthService {
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final AuthConfig authConfig;
private final RolePermCache rolePermCache;
public AuthServiceImpl(SysUserMapper userMapper,
SysRoleMapper roleMapper,
AuthConfig authConfig,
RolePermCache rolePermCache) {
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.authConfig = authConfig;
this.rolePermCache = rolePermCache;
}
@Override
public ResponseData login(LoginUserDTO userDTO) {
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("username", userDTO.getUsername());
SysUserDO userDO = userMapper.selectOne(queryWrapper);
if (userDO == null) {
return ResponseData.create().failed("用户名/密码不正确");
}
String encrypt = UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt());
if (!userDO.getPassword().equals(encrypt)) {
return ResponseData.create().failed("用户名/密码不正确");
}
Credentials credentials = new Credentials();
credentials.setUsername(userDO.getUsername());
credentials.setExpiration(System.currentTimeMillis() + authConfig.getExpireHours() * 3600L * 1000);
String token = AuthUtil.generateToken(authConfig.getSecret(), credentials);
LoginResult loginResult = new LoginResult();
List<String> permissions = new ArrayList<>();
String roleIds = userDO.getRoleIds();
if (StringUtils.isNotEmpty(roleIds)) {
List<String> roleIdList = Arrays.stream(roleIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).collect(Collectors.toList());
roleIdList.forEach(roleId -> {
Long rId = Long.valueOf(roleId);
SysRoleDO roleDO = roleMapper.selectById(rId);
String permissionIds = roleDO.getPermissionIds();
if (StringUtils.isNotEmpty(permissionIds)) {
List<Long> permIds = Arrays.stream(permissionIds.split(",")).map(String::trim).
filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
permIds.forEach(id -> {
String permission = rolePermCache.getPermCache().get(id).getPermission();
if (StringUtils.isNotEmpty(permission)) {
permissions.add(permission);
} else {
log.error("角色:{}权限id: {},不存在", roleId, id);
}
});
}
});
}
loginResult.setToken(token);
loginResult.setPermissions(permissions);
return ResponseData.create().data(loginResult).success();
}
@Override
public boolean ownDataAuthority() {
if (!authConfig.isEnable()) {
return true;
}
if (!authConfig.isEnableClusterAuthority()) {
return true;
}
return false;
}
}
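
The login path above is mostly comma-separated-ID plumbing: user.role_ids selects role rows, then each role's permission_ids selects permission strings via the cache. A condensed, dependency-free sketch of that resolution (the map stands in for RolePermCache):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PermResolutionSketch {
    public static void main(String[] args) {
        // Stand-in for RolePermCache: permission id -> permission string.
        Map<Long, String> permTable = new HashMap<>();
        permTable.put(10L, "topic:add");
        permTable.put(11L, "group:del");

        // CSV as stored on a role row; blanks and a trailing comma are tolerated.
        String permissionIds = "10, 11,";
        List<String> permissions = Arrays.stream(permissionIds.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .map(Long::valueOf)
                .map(permTable::get)
                .collect(Collectors.toList());
        System.out.println(permissions); // [topic:add, group:del]
    }
}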

View File

@@ -0,0 +1,172 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.vo.ClientQuotaEntityVO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import kafka.console.ClientQuotaConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author 晓东哥哥
*/
@Slf4j
@Service
public class ClientQuotaServiceImpl implements ClientQuotaService {
private final ClientQuotaConsole clientQuotaConsole;
private final Map<String, String> typeDict = new HashMap<>();
private final Map<String, String> configDict = new HashMap<>();
private final String USER = "user";
private final String CLIENT_ID = "client-id";
private final String IP = "ip";
private final String USER_CLIENT = "user&client-id";
{
typeDict.put(USER, ClientQuotaEntity.USER);
typeDict.put(CLIENT_ID, ClientQuotaEntity.CLIENT_ID);
typeDict.put(IP, ClientQuotaEntity.IP);
typeDict.put(USER_CLIENT, USER_CLIENT);
configDict.put("producerRate", QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG);
configDict.put("consumerRate", QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG);
configDict.put("requestPercentage", QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG);
}
public ClientQuotaServiceImpl(ClientQuotaConsole clientQuotaConsole) {
this.clientQuotaConsole = clientQuotaConsole;
}
@Override
public ResponseData getClientQuotaConfigs(List<String> types, List<String> names) {
List<String> entityNames = names == null ? Collections.emptyList() : new ArrayList<>(names);
List<String> entityTypes = types.stream().map(e -> typeDict.get(e)).filter(e -> e != null).collect(Collectors.toList());
if (entityTypes.isEmpty() || entityTypes.size() != types.size()) {
throw new IllegalArgumentException("types illegal.");
}
boolean userAndClientFilterClientOnly = false;
// only the combined [user, client-id] type arrives as a two-entry list
if (entityTypes.size() == 2) {
if (names.size() == 2 && StringUtils.isBlank(names.get(0)) && StringUtils.isNotBlank(names.get(1))) {
userAndClientFilterClientOnly = true;
}
}
Map<ClientQuotaEntity, Map<String, Object>> clientQuotasConfigs = clientQuotaConsole.getClientQuotasConfigs(entityTypes,
userAndClientFilterClientOnly ? Collections.emptyList() : entityNames);
List<ClientQuotaEntityVO> voList = clientQuotasConfigs.entrySet().stream().map(entry -> ClientQuotaEntityVO.from(
entry.getKey(), entityTypes, entry.getValue())).collect(Collectors.toList());
if (!userAndClientFilterClientOnly) {
return ResponseData.create().data(voList).success();
}
List<ClientQuotaEntityVO> list = voList.stream().filter(e -> names.get(1).equals(e.getClient())).collect(Collectors.toList());
return ResponseData.create().data(list).success();
}
@Override
public ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request) {
if (StringUtils.isEmpty(request.getType()) || !typeDict.containsKey(request.getType())) {
return ResponseData.create().failed("Unknown type.");
}
List<String> types = new ArrayList<>();
List<String> names = new ArrayList<>();
parseTypesAndNames(request, types, names, request.getType());
Map<String, String> configsToBeAddedMap = new HashMap<>();
if (StringUtils.isNotEmpty(request.getProducerRate())) {
configsToBeAddedMap.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getProducerRate()))));
}
if (StringUtils.isNotEmpty(request.getConsumerRate())) {
configsToBeAddedMap.put(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getConsumerRate()))));
}
if (StringUtils.isNotEmpty(request.getRequestPercentage())) {
configsToBeAddedMap.put(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getRequestPercentage()))));
}
Tuple2<Object, String> tuple2 = clientQuotaConsole.addQuotaConfigs(types, names, configsToBeAddedMap);
if (!(Boolean) tuple2._1) {
return ResponseData.create().failed(tuple2._2);
}
if (CollectionUtils.isNotEmpty(request.getDeleteConfigs())) {
List<String> delete = request.getDeleteConfigs().stream().map(key -> configDict.get(key)).collect(Collectors.toList());
Tuple2<Object, String> tuple2Del = clientQuotaConsole.deleteQuotaConfigs(types, names, delete);
if (!(Boolean) tuple2Del._1) {
return ResponseData.create().failed(tuple2Del._2);
}
}
return ResponseData.create().success();
}
@Override
public ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request) {
if (StringUtils.isEmpty(request.getType()) || !typeDict.containsKey(request.getType())) {
return ResponseData.create().failed("Unknown type.");
}
List<String> types = new ArrayList<>();
List<String> names = new ArrayList<>();
parseTypesAndNames(request, types, names, request.getType());
List<String> configs = new ArrayList<>();
configs.add(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG);
configs.add(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG);
configs.add(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG);
Tuple2<Object, String> tuple2 = clientQuotaConsole.deleteQuotaConfigs(types, names, configs);
if (!(Boolean) tuple2._1) {
return ResponseData.create().failed(tuple2._2);
}
return ResponseData.create().success();
}
private void parseTypesAndNames(AlterClientQuotaDTO request, List<String> types, List<String> names, String type) {
switch (request.getType()) {
case USER:
getTypesAndNames(request, types, names, USER);
break;
case CLIENT_ID:
getTypesAndNames(request, types, names, CLIENT_ID);
break;
case IP:
getTypesAndNames(request, types, names, IP);
break;
case USER_CLIENT:
getTypesAndNames(request, types, names, USER);
getTypesAndNames(request, types, names, CLIENT_ID);
break;
}
}
private void getTypesAndNames(AlterClientQuotaDTO request, List<String> types, List<String> names, String type) {
int index = -1;
for (int i = 0; i < request.getTypes().size(); i++) {
if (type.equals(request.getTypes().get(i))) {
index = i;
break;
}
}
if (index < 0) {
throw new IllegalArgumentException("Does not contain the type" + type);
}
types.add(request.getTypes().get(index));
if (CollectionUtils.isNotEmpty(request.getNames()) && request.getNames().size() > index) {
names.add(request.getNames().get(index));
} else {
names.add("");
}
}
}
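
Both query paths above rely on a positional convention: for the combined user&client-id type, types and names are parallel two-entry lists, and a blank user name next to a concrete client name flips getClientQuotaConfigs into fetch-everything-then-filter-by-client mode. Hypothetical payloads under that convention:

import java.util.Arrays;
import java.util.List;

public class QuotaConventionSketch {
    public static void main(String[] args) {
        // Quota scoped to a single user.
        List<String> userTypes = Arrays.asList("user");
        List<String> userNames = Arrays.asList("alice");

        // Combined quota filtered by client only: the blank user entry makes the
        // service list all user&client-id quotas and filter by client in memory.
        List<String> comboTypes = Arrays.asList("user", "client-id");
        List<String> comboNames = Arrays.asList("", "my-producer");

        System.out.println(userTypes + " / " + userNames);
        System.out.println(comboTypes + " / " + comboNames);
    }
}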

View File

@@ -0,0 +1,115 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.beans.dto.ClusterRoleRelationDTO;
import com.xuxd.kafka.console.beans.vo.ClusterRoleRelationVO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.ClusterRoleRelationMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.service.ClusterRoleRelationService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @since: 2023/8/23 21:50
**/
@Slf4j
@Service
public class ClusterRoleRelationServiceImpl implements ClusterRoleRelationService {
private final ClusterRoleRelationMapper mapper;
private final SysRoleMapper roleMapper;
private final ClusterInfoMapper clusterInfoMapper;
private final AuthConfig authConfig;
public ClusterRoleRelationServiceImpl(final ClusterRoleRelationMapper mapper,
final SysRoleMapper roleMapper,
final ClusterInfoMapper clusterInfoMapper,
final AuthConfig authConfig) {
this.mapper = mapper;
this.roleMapper = roleMapper;
this.clusterInfoMapper = clusterInfoMapper;
this.authConfig = authConfig;
}
@Override
public ResponseData select() {
if (!authConfig.isEnableClusterAuthority()) {
return ResponseData.create().data(Collections.emptyList()).success();
}
List<ClusterRoleRelationDO> dos = mapper.selectList(null);
Map<Long, SysRoleDO> roleMap = roleMapper.selectList(null).stream().
collect(Collectors.toMap(SysRoleDO::getId, Function.identity(), (e1, e2) -> e2));
Map<Long, ClusterInfoDO> clusterMap = clusterInfoMapper.selectList(null).stream().
collect(Collectors.toMap(ClusterInfoDO::getId, Function.identity(), (e1, e2) -> e2));
List<ClusterRoleRelationVO> vos = dos.stream().
map(aDo -> {
ClusterRoleRelationVO vo = ClusterRoleRelationVO.from(aDo);
if (roleMap.containsKey(vo.getRoleId())) {
vo.setRoleName(roleMap.get(vo.getRoleId()).getRoleName());
}
if (clusterMap.containsKey(vo.getClusterInfoId())) {
vo.setClusterName(clusterMap.get(vo.getClusterInfoId()).getClusterName());
}
return vo;
}).collect(Collectors.toList());
return ResponseData.create().data(vos).success();
}
@Override
public ResponseData add(ClusterRoleRelationDTO dto) {
if (!authConfig.isEnableClusterAuthority()) {
return ResponseData.create().failed("未启用集群的数据权限管理");
}
ClusterRoleRelationDO relationDO = dto.toDO();
if (relationDO.getClusterInfoId() == -1L) {
// clusterInfoId == -1 means all clusters: insert one relation row per cluster
for (ClusterInfoDO clusterInfoDO : clusterInfoMapper.selectList(null)) {
ClusterRoleRelationDO aDo = new ClusterRoleRelationDO();
aDo.setRoleId(relationDO.getRoleId());
aDo.setClusterInfoId(clusterInfoDO.getId());
insertIfNotExist(aDo);
}
} else {
insertIfNotExist(relationDO);
}
return ResponseData.create().success();
}
@Override
public ResponseData delete(Long id) {
if (!authConfig.isEnableClusterAuthority()) {
return ResponseData.create().failed("未启用集群的数据权限管理");
}
mapper.deleteById(id);
return ResponseData.create().success();
}
private void insertIfNotExist(ClusterRoleRelationDO relationDO) {
QueryWrapper<ClusterRoleRelationDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("role_id", relationDO.getRoleId()).
eq("cluster_info_id", relationDO.getClusterInfoId());
Integer count = mapper.selectCount(queryWrapper);
if (count > 0) {
log.info("已存在,不再增加:{}", relationDO);
return;
}
mapper.insert(relationDO);
}
}

View File

@@ -3,15 +3,18 @@ package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.BrokerNode;
import com.xuxd.kafka.console.beans.ClusterInfo;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import com.xuxd.kafka.console.beans.vo.BrokerApiVersionVO;
import com.xuxd.kafka.console.beans.vo.ClusterInfoVO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.ClusterRoleRelationMapper;
import com.xuxd.kafka.console.filter.CredentialsContext;
import com.xuxd.kafka.console.service.ClusterService;
import java.util.*;
import java.util.stream.Collectors;
import kafka.console.BrokerVersion;
import kafka.console.ClusterConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
@@ -21,6 +24,9 @@ import org.apache.kafka.common.Node;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.stream.Collectors;
/**
* kafka-console-ui.
*
@@ -35,13 +41,22 @@ public class ClusterServiceImpl implements ClusterService {
private final ClusterInfoMapper clusterInfoMapper;
public ClusterServiceImpl(ObjectProvider<ClusterConsole> clusterConsole,
ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
private final AuthConfig authConfig;
private final ClusterRoleRelationMapper clusterRoleRelationMapper;
public ClusterServiceImpl(final ObjectProvider<ClusterConsole> clusterConsole,
final ObjectProvider<ClusterInfoMapper> clusterInfoMapper,
final AuthConfig authConfig,
final ClusterRoleRelationMapper clusterRoleRelationMapper) {
this.clusterConsole = clusterConsole.getIfAvailable();
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
this.authConfig = authConfig;
this.clusterRoleRelationMapper = clusterRoleRelationMapper;
}
@Override public ResponseData getClusterInfo() {
@Override
public ResponseData getClusterInfo() {
ClusterInfo clusterInfo = clusterConsole.clusterInfo();
Set<BrokerNode> nodes = clusterInfo.getNodes();
if (nodes == null) {
@@ -52,32 +67,100 @@ public class ClusterServiceImpl implements ClusterService {
return ResponseData.create().data(clusterInfo).success();
}
@Override public ResponseData getClusterInfoList() {
return ResponseData.create().data(clusterInfoMapper.selectList(null)
.stream().map(ClusterInfoVO::from).collect(Collectors.toList())).success();
@Override
public ResponseData getClusterInfoListForSelect() {
return ResponseData.create().
data(clusterInfoMapper.selectList(null).stream().
map(e -> {
ClusterInfoVO vo = ClusterInfoVO.from(e);
vo.setProperties(Collections.emptyList());
vo.setAddress("");
return vo;
}).collect(Collectors.toList())).success();
}
@Override public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
@Override
public ResponseData getClusterInfoList() {
// With authority management enabled, a user lacking the edit permission under cluster switch -> cluster info must not see cluster properties, so ACL credentials are not exposed
Credentials credentials = CredentialsContext.get();
boolean enableClusterAuthority = credentials != null && authConfig.isEnableClusterAuthority();
final Set<Long> clusterInfoIdSet = new HashSet<>();
if (enableClusterAuthority) {
List<Long> roleIdList = credentials.getRoleIdList();
QueryWrapper<ClusterRoleRelationDO> queryWrapper = new QueryWrapper<>();
queryWrapper.in("role_id", roleIdList);
clusterInfoIdSet.addAll(clusterRoleRelationMapper.selectList(queryWrapper).
stream().map(ClusterRoleRelationDO::getClusterInfoId).
collect(Collectors.toSet()));
}
return ResponseData.create().
data(clusterInfoMapper.selectList(null).stream().
filter(e -> !enableClusterAuthority || clusterInfoIdSet.contains(e.getId())).
map(e -> {
ClusterInfoVO vo = ClusterInfoVO.from(e);
if (credentials != null && credentials.isHideClusterProperty()) {
vo.setProperties(Collections.emptyList());
}
return vo;
}).collect(Collectors.toList())).success();
}
@Override
public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
QueryWrapper<ClusterInfoDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_name", infoDO.getClusterName());
if (clusterInfoMapper.selectCount(queryWrapper) > 0) {
return ResponseData.create().failed("cluster name exist.");
}
clusterInfoMapper.insert(infoDO);
Credentials credentials = CredentialsContext.get();
boolean enableClusterAuthority = credentials != null && authConfig.isEnableClusterAuthority();
if (enableClusterAuthority) {
for (Long roleId : credentials.getRoleIdList()) {
// With cluster data permissions enabled, adding a cluster must also record a role-cluster relation
QueryWrapper<ClusterRoleRelationDO> relationQueryWrapper = new QueryWrapper<>();
relationQueryWrapper.eq("role_id", roleId).
eq("cluster_info_id", infoDO.getId());
Integer count = clusterRoleRelationMapper.selectCount(relationQueryWrapper);
if (count <= 0) {
ClusterRoleRelationDO relationDO = new ClusterRoleRelationDO();
relationDO.setRoleId(roleId);
relationDO.setClusterInfoId(infoDO.getId());
clusterRoleRelationMapper.insert(relationDO);
}
}
}
return ResponseData.create().success();
}
@Override public ResponseData deleteClusterInfo(Long id) {
@Override
public ResponseData deleteClusterInfo(Long id) {
clusterInfoMapper.deleteById(id);
Credentials credentials = CredentialsContext.get();
boolean enableClusterAuthority = credentials != null && authConfig.isEnableClusterAuthority();
if (enableClusterAuthority) {
for (Long roleId : credentials.getRoleIdList()) {
// With cluster data permissions enabled, deleting a cluster must also remove its data-permission relations
QueryWrapper<ClusterRoleRelationDO> relationQueryWrapper = new QueryWrapper<>();
relationQueryWrapper.eq("role_id", roleId).eq("cluster_info_id", id);
clusterRoleRelationMapper.delete(relationQueryWrapper);
}
}
return ResponseData.create().success();
}
@Override public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
@Override
public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
if (infoDO.getProperties() == null) {
// a null value is skipped on update (a bug for this field); store an empty string instead
infoDO.setProperties("");
}
clusterInfoMapper.updateById(infoDO);
return ResponseData.create().success();
}
@Override public ResponseData peekClusterInfo() {
@Override
public ResponseData peekClusterInfo() {
List<ClusterInfoDO> dos = clusterInfoMapper.selectList(null);
if (CollectionUtils.isEmpty(dos)) {
return ResponseData.create().failed("No Cluster Info.");
@@ -85,7 +168,8 @@ public class ClusterServiceImpl implements ClusterService {
return ResponseData.create().data(dos.stream().findFirst().map(ClusterInfoVO::from)).success();
}
@Override public ResponseData getBrokerApiVersionInfo() {
@Override
public ResponseData getBrokerApiVersionInfo() {
HashMap<Node, NodeApiVersions> map = clusterConsole.listBrokerVersionInfo();
List<BrokerApiVersionVO> list = new ArrayList<>(map.size());
map.forEach(((node, versions) -> {
@@ -105,6 +189,9 @@ public class ClusterServiceImpl implements ClusterService {
versionInfo = versionInfo.substring(1, versionInfo.length() - 2);
vo.setVersionInfo(Arrays.asList(StringUtils.split(versionInfo, ",")));
list.add(vo);
// guess the broker version
String vs = BrokerVersion.guessBrokerVersion(versions);
vo.setBrokerVersion(vs);
}));
Collections.sort(list, Comparator.comparingInt(BrokerApiVersionVO::getBrokerId));
return ResponseData.create().data(list).success();

View File

@@ -4,20 +4,13 @@ import com.xuxd.kafka.console.beans.MessageFilter;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QuerySendStatisticsDTO;
import com.xuxd.kafka.console.beans.enums.FilterType;
import com.xuxd.kafka.console.beans.vo.ConsumerRecordVO;
import com.xuxd.kafka.console.beans.vo.MessageDetailVO;
import com.xuxd.kafka.console.beans.vo.QuerySendStatisticsVO;
import com.xuxd.kafka.console.service.ConsumerService;
import com.xuxd.kafka.console.service.MessageService;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.ConsumerConsole;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
@@ -29,14 +22,7 @@ import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.BytesDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.DoubleDeserializer;
import org.apache.kafka.common.serialization.FloatDeserializer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.*;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
@@ -44,6 +30,9 @@ import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
/**
* kafka-console-ui.
*
@@ -79,8 +68,9 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
public static String defaultDeserializer = "String";
@Override public ResponseData searchByTime(QueryMessage queryMessage) {
int maxNums = 5000;
@Override
public ResponseData searchByTime(QueryMessage queryMessage) {
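// Cap the result size at the requested filter number, falling back to 5000 when unset.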
int maxNums = queryMessage.getFilterNumber() <= 0 ? 5000 : queryMessage.getFilterNumber();
Object searchContent = null;
String headerKey = null;
@@ -144,7 +134,7 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
List<ConsumerRecord<byte[], byte[]>> records = tuple2._1();
log.info("search message by time, cost time: {}", (System.currentTimeMillis() - startTime));
List<ConsumerRecordVO> vos = records.stream().filter(record -> record.timestamp() <= queryMessage.getEndTime())
.map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());
.map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());
Map<String, Object> res = new HashMap<>();
vos = vos.subList(0, Math.min(maxNums, vos.size()));
res.put("maxNum", maxNums);
@@ -154,13 +144,15 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return ResponseData.create().data(res).success();
}
@Override public ResponseData searchByOffset(QueryMessage queryMessage) {
@Override
public ResponseData searchByOffset(QueryMessage queryMessage) {
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = searchRecordByOffset(queryMessage);
return ResponseData.create().data(recordMap.values().stream().map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList())).success();
}
@Override public ResponseData searchDetail(QueryMessage queryMessage) {
@Override
public ResponseData searchDetail(QueryMessage queryMessage) {
if (queryMessage.getPartition() == -1) {
throw new IllegalArgumentException();
}
@@ -219,16 +211,35 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return ResponseData.create().failed("Not found message detail.");
}
@Override public ResponseData deserializerList() {
@Override
public ResponseData deserializerList() {
return ResponseData.create().data(deserializerDict.keySet()).success();
}
@Override public ResponseData send(SendMessage message) {
@Override
public ResponseData send(SendMessage message) {
messageConsole.send(message.getTopic(), message.getPartition(), message.getKey(), message.getBody(), message.getNum());
return ResponseData.create().success();
}
@Override public ResponseData resend(SendMessage message) {
@Override
public ResponseData sendWithHeader(SendMessage message) {
String[] headerKeys = message.getHeaders().stream().map(SendMessage.Header::getHeaderKey).toArray(String[]::new);
String[] headerValues = message.getHeaders().stream().map(SendMessage.Header::getHeaderValue).toArray(String[]::new);
// log.info("send with header:keys{},values{}",headerKeys, headerValues);
Tuple2<Object, String> tuple2 = messageConsole.send(message.getTopic(),
message.getPartition(),
message.getKey(),
message.getBody(),
message.getNum(),
headerKeys,
headerValues,
message.isSync());
return (boolean) tuple2._1 ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2);
}
@Override
public ResponseData resend(SendMessage message) {
TopicPartition partition = new TopicPartition(message.getTopic(), message.getPartition());
Map<TopicPartition, Object> offsetTable = new HashMap<>(1, 1.0f);
offsetTable.put(partition, message.getOffset());
@@ -255,6 +266,44 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override
public ResponseData sendStatisticsByTime(QuerySendStatisticsDTO request) {
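// Per-partition sent-message count over [startTime, endTime]: resolve the offset at each
// boundary timestamp and take the difference; a requested partition of -1 means all partitions.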
if (request.getPartition() != null && request.getPartition().contains(-1)) {
request.setPartition(Collections.emptySet());
}
Map<Integer, Long> startOffsetMap = topicConsole.getOffsetForTimestamp(request.getTopic(), request.getStartTime().getTime()).
entrySet().stream().collect(Collectors.toMap(e -> e.getKey().partition(), Map.Entry::getValue, (e1, e2) -> e2));
Map<Integer, Long> endOffsetMap = topicConsole.getOffsetForTimestamp(request.getTopic(), request.getEndTime().getTime()).
entrySet().stream().collect(Collectors.toMap(e -> e.getKey().partition(), Map.Entry::getValue, (e1, e2) -> e2));
Map<Integer, Long> diffOffsetMap = endOffsetMap.entrySet().stream().
collect(Collectors.toMap(e -> e.getKey(),
e -> Arrays.asList(e.getValue(), startOffsetMap.getOrDefault(e.getKey(), 0L)).
stream().reduce((a, b) -> a - b).get()));
if (CollectionUtils.isNotEmpty(request.getPartition())) {
Iterator<Map.Entry<Integer, Long>> iterator = diffOffsetMap.entrySet().iterator();
while (iterator.hasNext()) {
Integer partition = iterator.next().getKey();
if (!request.getPartition().contains(partition)) {
iterator.remove();
}
}
}
Long total = diffOffsetMap.values().stream().reduce(0L, Long::sum);
QuerySendStatisticsVO vo = new QuerySendStatisticsVO();
vo.setTopic(request.getTopic());
vo.setTotal(total);
vo.setDetail(diffOffsetMap);
vo.setStartTime(QuerySendStatisticsVO.format(request.getStartTime()));
vo.setEndTime(QuerySendStatisticsVO.format(request.getEndTime()));
return ResponseData.create().data(vo).success();
}
private Map<TopicPartition, ConsumerRecord<byte[], byte[]>> searchRecordByOffset(QueryMessage queryMessage) {
Set<TopicPartition> partitions = getPartitions(queryMessage);
@@ -276,13 +325,14 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
throw new IllegalArgumentException("Can not find topic info.");
}
Set<TopicPartition> set = list.get(0).partitions().stream()
.map(tp -> new TopicPartition(queryMessage.getTopic(), tp.partition())).collect(Collectors.toSet());
.map(tp -> new TopicPartition(queryMessage.getTopic(), tp.partition())).collect(Collectors.toSet());
partitions.addAll(set);
}
return partitions;
}
@Override public void setApplicationContext(ApplicationContext context) throws BeansException {
@Override
public void setApplicationContext(ApplicationContext context) throws BeansException {
this.applicationContext = context;
}
}

View File

@@ -0,0 +1,230 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
import com.xuxd.kafka.console.beans.vo.SysPermissionVO;
import com.xuxd.kafka.console.beans.vo.SysRoleVO;
import com.xuxd.kafka.console.beans.vo.SysUserVO;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.service.UserManageService;
import com.xuxd.kafka.console.utils.RandomStringUtil;
import com.xuxd.kafka.console.utils.UUIDStrUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/4/11 21:24
**/
@Slf4j
@Service
public class UserManageServiceImpl implements UserManageService {
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final ApplicationEventPublisher publisher;
public UserManageServiceImpl(ObjectProvider<SysUserMapper> userMapper,
ObjectProvider<SysRoleMapper> roleMapper,
ObjectProvider<SysPermissionMapper> permissionMapper,
ApplicationEventPublisher publisher) {
this.userMapper = userMapper.getIfAvailable();
this.roleMapper = roleMapper.getIfAvailable();
this.permissionMapper = permissionMapper.getIfAvailable();
this.publisher = publisher;
}
@Override
public ResponseData addPermission(SysPermissionDTO permissionDTO) {
permissionMapper.insert(permissionDTO.toSysPermissionDO());
return ResponseData.create().success();
}
@Override
public ResponseData addOrUdpateRole(SysRoleDTO roleDTO) {
SysRoleDO roleDO = roleDTO.toDO();
if (roleDO.getId() == null) {
roleMapper.insert(roleDO);
} else {
roleMapper.updateById(roleDO);
}
publisher.publishEvent(new RolePermUpdateEvent(this));
return ResponseData.create().success();
}
@Override
public ResponseData addOrUpdateUser(SysUserDTO userDTO) {
if (userDTO.getId() == null) {
if (StringUtils.isEmpty(userDTO.getPassword())) {
userDTO.setPassword(RandomStringUtil.random6Str());
}
SysUserDO userDO = userDTO.toDO();
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq(true, "username", userDO.getUsername());
SysUserDO exist = userMapper.selectOne(queryWrapper);
if (exist != null) {
return ResponseData.create().failed("用户已存在:" + userDO.getUsername());
}
userDO.setSalt(UUIDStrUtil.random());
userDO.setPassword(UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt()));
userMapper.insert(userDO);
} else {
SysUserDO userDO = userMapper.selectById(userDTO.getId());
if (userDO == null) {
log.error("查不到用户: {}", userDTO.getId());
return ResponseData.create().failed("Unknown User.");
}
// check whether the password should be reset
if (userDTO.getResetPassword()) {
userDTO.setPassword(RandomStringUtil.random6Str());
userDO.setSalt(UUIDStrUtil.random());
userDO.setPassword(UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt()));
}
userDO.setRoleIds(userDTO.getRoleIds());
userDO.setUsername(userDTO.getUsername());
userMapper.updateById(userDO);
}
return ResponseData.create().data(userDTO.getPassword()).success();
}
@Override
public ResponseData selectRole() {
List<SysRoleDO> dos = roleMapper.selectList(new QueryWrapper<>());
return ResponseData.create().data(dos.stream().map(SysRoleVO::from).collect(Collectors.toList())).success();
}
@Override
public ResponseData selectPermission() {
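// Build the permission tree in passes: place top-level menus first, then repeatedly attach the
// remaining nodes under an already-placed parent (assumes every node's parent eventually resolves).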
QueryWrapper<SysPermissionDO> queryWrapper = new QueryWrapper<>();
List<SysPermissionDO> permissionDOS = permissionMapper.selectList(queryWrapper);
List<SysPermissionVO> vos = new ArrayList<>();
Map<Long, Integer> posMap = new HashMap<>();
Map<Long, SysPermissionVO> voMap = new HashMap<>();
Iterator<SysPermissionDO> iterator = permissionDOS.iterator();
while (iterator.hasNext()) {
SysPermissionDO permissionDO = iterator.next();
if (permissionDO.getParentId() == null) {
// top-level menu
SysPermissionVO vo = SysPermissionVO.from(permissionDO);
vos.add(vo);
int index = vos.size() - 1;
// record the menu's index in the result list
posMap.put(permissionDO.getId(), index);
iterator.remove();
}
}
// all menus are placed above; now attach the remaining nodes to their parents
while (!permissionDOS.isEmpty()) {
iterator = permissionDOS.iterator();
while (iterator.hasNext()) {
SysPermissionDO permissionDO = iterator.next();
Long parentId = permissionDO.getParentId();
if (posMap.containsKey(parentId)) {
// button under a menu
SysPermissionVO vo = SysPermissionVO.from(permissionDO);
Integer index = posMap.get(parentId);
SysPermissionVO menuVO = vos.get(index);
if (menuVO.getChildren() == null) {
menuVO.setChildren(new ArrayList<>());
}
menuVO.getChildren().add(vo);
voMap.put(permissionDO.getId(), vo);
iterator.remove();
} else if (voMap.containsKey(parentId)) {
// button nested under another button
SysPermissionVO vo = SysPermissionVO.from(permissionDO);
SysPermissionVO buttonVO = voMap.get(parentId);
if (buttonVO.getChildren() == null) {
buttonVO.setChildren(new ArrayList<>());
}
buttonVO.getChildren().add(vo);
voMap.put(permissionDO.getId(), vo);
iterator.remove();
}
}
}
return ResponseData.create().data(vos).success();
}
@Override
public ResponseData selectUser() {
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
List<SysUserDO> userDOS = userMapper.selectList(queryWrapper);
List<SysRoleDO> roleDOS = roleMapper.selectList(null);
Map<Long, SysRoleDO> roleDOMap = roleDOS.stream().collect(Collectors.toMap(SysRoleDO::getId, Function.identity(), (e1, e2) -> e1));
List<SysUserVO> voList = userDOS.stream().map(SysUserVO::from).collect(Collectors.toList());
voList.forEach(vo -> {
if (vo.getRoleIds() != null) {
Long roleId = Long.valueOf(vo.getRoleIds());
vo.setRoleNames(roleDOMap.containsKey(roleId) ? roleDOMap.get(roleId).getRoleName() : null);
}
});
return ResponseData.create().data(voList).success();
}
@Override
public ResponseData updateUser(SysUserDTO userDTO) {
userMapper.updateById(userDTO.toDO());
return ResponseData.create().success();
}
@Override
public ResponseData updateRole(SysRoleDTO roleDTO) {
roleMapper.updateById(roleDTO.toDO());
publisher.publishEvent(new RolePermUpdateEvent(this));
return ResponseData.create().success();
}
@Override
public ResponseData deleteRole(Long id) {
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq(true, "role_ids", id);
Integer count = userMapper.selectCount(queryWrapper);
if (count > 0) {
return ResponseData.create().failed("存在用户被分配为当前角色,不允许删除");
}
roleMapper.deleteById(id);
publisher.publishEvent(new RolePermUpdateEvent(this));
return ResponseData.create().success();
}
@Override
public ResponseData deleteUser(Long id) {
userMapper.deleteById(id);
return ResponseData.create().success();
}
@Override
public ResponseData updatePassword(SysUserDTO userDTO) {
SysUserDO userDO = userDTO.toDO();
userDO.setSalt(UUIDStrUtil.random());
userDO.setPassword(UUIDStrUtil.generate(userDTO.getPassword(), userDO.getSalt()));
QueryWrapper<SysUserDO> wrapper = new QueryWrapper<>();
wrapper.eq("username", userDTO.getUsername());
userMapper.update(userDO, wrapper);
return ResponseData.create().success();
}
}

View File

@@ -0,0 +1,54 @@
package com.xuxd.kafka.console.utils;
import com.google.gson.Gson;
import com.xuxd.kafka.console.beans.Credentials;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.Base64Utils;
import java.nio.charset.StandardCharsets;
/**
* @author: xuxd
* @date: 2023/5/14 19:34
**/
@Slf4j
public class AuthUtil {
private static Gson gson = GsonUtil.INSTANCE.get();
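// Token layout: base64(payload JSON) + "." + base64(signature), where signature = md5(payloadJson + secret).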
public static String generateToken(String secret, Credentials info) {
String json = gson.toJson(info);
String str = json + secret;
String signature = MD5Util.md5(str);
return Base64Utils.encodeToString(json.getBytes(StandardCharsets.UTF_8)) + "." +
Base64Utils.encodeToString(signature.getBytes(StandardCharsets.UTF_8));
}
public static boolean isToken(String token) {
return token.split("\\.").length == 2;
}
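// Verify the signature, then the expiration; any mismatch or parse failure yields Credentials.INVALID.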
public static Credentials parseToken(String secret, String token) {
if (!isToken(token)) {
return Credentials.INVALID;
}
String[] arr = token.split("\\.");
String infoStr = new String(Base64Utils.decodeFromString(arr[0]), StandardCharsets.UTF_8);
String signature = new String(Base64Utils.decodeFromString(arr[1]), StandardCharsets.UTF_8);
String encrypt = MD5Util.md5(infoStr + secret);
if (!encrypt.equals(signature)) {
return Credentials.INVALID;
}
try {
Credentials credentials = gson.fromJson(infoStr, Credentials.class);
if (credentials.getExpiration() < System.currentTimeMillis()) {
return Credentials.INVALID;
}
return credentials;
} catch (Exception e) {
log.error("解析token失败: {}", token, e);
return Credentials.INVALID;
}
}
}

View File

@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.utils;
import lombok.extern.slf4j.Slf4j;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
/**
* @author: xuxd
* @date: 2023/5/14 20:25
**/
@Slf4j
public class MD5Util {
public static MessageDigest getInstance() {
try {
return MessageDigest.getInstance("MD5");
} catch (NoSuchAlgorithmException e) {
log.error("MD5 algorithm is not available", e);
return null;
}
}
public static String md5(String str) {
MessageDigest digest = getInstance();
if (digest == null) {
return null;
}
// Encode the raw digest as hex: building a UTF-8 String directly from the digest bytes
// is lossy and can map different digests to the same string.
byte[] bytes = digest.digest(str.getBytes(StandardCharsets.UTF_8));
StringBuilder sb = new StringBuilder(bytes.length * 2);
for (byte b : bytes) {
sb.append(String.format("%02x", b));
}
return sb.toString();
}
}

View File

@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.utils;
import java.util.Random;
/**
* @author: xuxd
* @date: 2023/5/8 9:19
**/
public class RandomStringUtil {
private final static String ALLOWED_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
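// Generates short random alphanumeric strings, e.g. the 6-character initial/reset passwords.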
public static String random6Str() {
return generateRandomString(6);
}
public static String generateRandomString(int length) {
Random random = new Random();
StringBuilder sb = new StringBuilder(length);
for (int i = 0; i < length; i++) {
int index = random.nextInt(ALLOWED_CHARS.length());
sb.append(ALLOWED_CHARS.charAt(index));
}
return sb.toString();
}
}

View File

@@ -0,0 +1,22 @@
package com.xuxd.kafka.console.utils;
import java.util.UUID;
/**
* @author: xuxd
* @date: 2023/5/6 13:30
**/
public class UUIDStrUtil {
public static String random() {
return UUID.randomUUID().toString();
}
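// Deterministic: nameUUIDFromBytes derives a type-3 (MD5-based) UUID from the concatenated
// inputs, so hashing the same password with the same salt always yields the same value.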
public static String generate(String ... strs) {
StringBuilder sb = new StringBuilder();
for (String str : strs) {
sb.append(str);
}
return UUID.nameUUIDFromBytes(sb.toString().getBytes()).toString();
}
}

View File

@@ -47,3 +47,18 @@ logging:
cron:
# clear-dirty-user: 0 * * * * ?
clear-dirty-user: 0 0 1 * * ?
# Authentication settings: when enable is true, login is required before access
auth:
enable: false
# Expiration time of the login token, in hours
expire-hours: 24
# Hide cluster properties: users without the edit permission under cluster switch cannot see cluster properties. Enable this for clusters with ACLs.
hide-cluster-property: true
# Whether to enable cluster-level data permissions. If enabled, you can configure which roles see which clusters; if disabled, the configuration has no effect and every role sees all clusters.
enable-cluster-authority: false
# Reload permission data: after upgrading to a jar whose new version adds permission menus, set this to true, then assign the new menu permissions in the role list.
reload-permission: true
log:
# Whether to log write operations (add, delete, edit)
print-controller-log: true

View File

@@ -1,5 +1,113 @@
-- DELETE FROM t_kafka_user;
--
-- INSERT INTO t_kafka_user (id, username, password) VALUES
-- (1, 'Jone', 'p1'),
-- (2, 'Jack', 'p2');
-- Do not casually modify the marker comments below; initial data loading is driven by them
-- t_sys_permission start--
insert into t_sys_permission(id, name,type,parent_id,permission) values(0,'主页',0,null,'home');
insert into t_sys_permission(id, name,type,parent_id,permission) values(11,'集群',0,null,'cluster');
insert into t_sys_permission(id, name,type,parent_id,permission) values(12,'属性配置',1,11,'cluster:property-config');
insert into t_sys_permission(id, name,type,parent_id,permission) values(13,'日志配置',1,11,'cluster:log-config');
insert into t_sys_permission(id, name,type,parent_id,permission) values(14,'编辑配置',1,11,'cluster:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(21,'Topic',0,null,'topic');
insert into t_sys_permission(id, name,type,parent_id,permission) values(22,'刷新',1,21,'topic:load');
insert into t_sys_permission(id, name,type,parent_id,permission) values(23,'新增',1,21,'topic:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(24,'批量删除',1,21,'topic:batch-del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(25,'删除',1,21,'topic:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(26,'分区详情',1,21,'topic:partition-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(27,'首选副本作leader',1,26,'topic:partition-detail:preferred');
insert into t_sys_permission(id, name,type,parent_id,permission) values(28,'增加分区',1,21,'topic:partition-add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(29,'消费详情',1,21,'topic:consumer-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(30,'属性配置',1,21,'topic:property-config');
insert into t_sys_permission(id, name,type,parent_id,permission) values(31,'变更副本',1,21,'topic:replication-modify');
insert into t_sys_permission(id, name,type,parent_id,permission) values(32,'发送统计',1,21,'topic:send-count');
insert into t_sys_permission(id, name,type,parent_id,permission) values(33,'限流',1,21,'topic:replication-sync-throttle');
insert into t_sys_permission(id, name,type,parent_id,permission) values(34,'编辑属性配置',1,30,'topic:property-config:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(35,'删除属性配置',1,30,'topic:property-config:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(41,'消费组',0,null,'group');
insert into t_sys_permission(id, name,type,parent_id,permission) values(42,'新增订阅',1,41,'group:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(43,'删除',1,41,'group:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(44,'消费端',1,41,'group:client');
insert into t_sys_permission(id, name,type,parent_id,permission) values(45,'消费详情',1,41,'group:consumer-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(46,'最小位点',1,45,'group:consumer-detail:min');
insert into t_sys_permission(id, name,type,parent_id,permission) values(47,'最新位点',1,45,'group:consumer-detail:last');
insert into t_sys_permission(id, name,type,parent_id,permission) values(48,'时间戳',1,45,'group:consumer-detail:timestamp');
insert into t_sys_permission(id, name,type,parent_id,permission) values(49,'重置位点',1,45,'group:consumer-detail:any');
insert into t_sys_permission(id, name,type,parent_id,permission) values(50,'位移分区',1,41,'group:offset-partition');
insert into t_sys_permission(id, name,type,parent_id,permission) values(61,'消息',0,null,'message');
insert into t_sys_permission(id, name,type,parent_id,permission) values(62,'根据时间查询',1,61,'message:search-time');
insert into t_sys_permission(id, name,type,parent_id,permission) values(63,'根据偏移查询',1,61,'message:search-offset');
insert into t_sys_permission(id, name,type,parent_id,permission) values(64,'在线发送',1,61,'message:send');
insert into t_sys_permission(id, name,type,parent_id,permission) values(65,'在线删除',1,61,'message:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(66,'消息详情',1,61,'message:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(67,'重新发送',1,61,'message:resend');
insert into t_sys_permission(id, name,type,parent_id,permission) values(68,'发送统计',1,61,'message:send-statistics');
insert into t_sys_permission(id, name,type,parent_id,permission) values(80,'限流',0,null,'quota');
insert into t_sys_permission(id, name,type,parent_id,permission) values(81,'用户',1,80,'quota:user');
insert into t_sys_permission(id, name,type,parent_id,permission) values(82,'新增配置',1,81,'quota:user:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(83,'客户端ID',1,80,'quota:client');
insert into t_sys_permission(id, name,type,parent_id,permission) values(84,'新增配置',1,83,'quota:client:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(85,'用户和客户端ID',1,80,'quota:user-client');
insert into t_sys_permission(id, name,type,parent_id,permission) values(86,'新增配置',1,85,'quota:user-client:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(87,'删除',1,80,'quota:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(88,'修改',1,80,'quota:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(100,'Acl',0,null,'acl');
insert into t_sys_permission(id, name,type,parent_id,permission) values(101,'资源授权',1,100,'acl:authority');
insert into t_sys_permission(id, name,type,parent_id,permission) values(102,'新增主体权限',1,101,'acl:authority:add-principal');
insert into t_sys_permission(id, name,type,parent_id,permission) values(103,'权限详情',1,101,'acl:authority:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(104,'管理生产权限',1,101,'acl:authority:producer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(105,'管理消费权限',1,101,'acl:authority:consumer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(106,'增加权限',1,101,'acl:authority:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(107,'清除权限',1,101,'acl:authority:clean');
insert into t_sys_permission(id, name,type,parent_id,permission) values(108,'SaslScram用户管理',1,100,'acl:sasl-scram');
insert into t_sys_permission(id, name,type,parent_id,permission) values(109,'新增/更新用户',1,108,'acl:sasl-scram:add-update');
insert into t_sys_permission(id, name,type,parent_id,permission) values(110,'详情',1,108,'acl:sasl-scram:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(111,'删除',1,108,'acl:sasl-scram:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(112,'管理生产权限',1,108,'acl:sasl-scram:producer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(113,'管理消费权限',1,108,'acl:sasl-scram:consumer');
insert into t_sys_permission(id, name,type,parent_id,permission) values(114,'增加权限',1,108,'acl:sasl-scram:add-auth');
insert into t_sys_permission(id, name,type,parent_id,permission) values(115,'彻底删除',1,108,'acl:sasl-scram:pure');
insert into t_sys_permission(id, name,type,parent_id,permission) values(140,'用户',0,null,'user-manage');
insert into t_sys_permission(id, name,type,parent_id,permission) values(141,'用户列表',1,140,'user-manage:user');
insert into t_sys_permission(id, name,type,parent_id,permission) values(142,'新增',1,141,'user-manage:user:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(143,'删除',1,141,'user-manage:user:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(144,'重置密码',1,141,'user-manage:user:reset-pass');
insert into t_sys_permission(id, name,type,parent_id,permission) values(145,'分配角色',1,141,'user-manage:user:change-role');
insert into t_sys_permission(id, name,type,parent_id,permission) values(146,'角色列表',1,140,'user-manage:role');
insert into t_sys_permission(id, name,type,parent_id,permission) values(147,'保存',1,146,'user-manage:role:save');
insert into t_sys_permission(id, name,type,parent_id,permission) values(148,'删除',1,146,'user-manage:role:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(149,'权限列表',1,140,'user-manage:permission');
insert into t_sys_permission(id, name,type,parent_id,permission) values(150,'个人设置',1,140,'user-manage:setting');
insert into t_sys_permission(id, name,type,parent_id,permission) values(151,'集群权限',1,140,'user-manage:cluster-role');
insert into t_sys_permission(id, name,type,parent_id,permission) values(152,'新增',1,151,'user-manage:cluster-role:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(153,'删除',1,151,'user-manage:cluster-role:delete');
insert into t_sys_permission(id, name,type,parent_id,permission) values(160,'运维',0,null,'op');
insert into t_sys_permission(id, name,type,parent_id,permission) values(161,'集群切换',1,160,'op:cluster-switch');
insert into t_sys_permission(id, name,type,parent_id,permission) values(162,'新增集群',1,161,'op:cluster-switch:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(163,'切换',1,161,'op:cluster-switch:switch');
insert into t_sys_permission(id, name,type,parent_id,permission) values(164,'编辑',1,161,'op:cluster-switch:edit');
insert into t_sys_permission(id, name,type,parent_id,permission) values(165,'删除',1,161,'op:cluster-switch:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(166,'配置限流',1,160,'op:config-throttle');
insert into t_sys_permission(id, name,type,parent_id,permission) values(167,'解除限流',1,160,'op:remove-throttle');
insert into t_sys_permission(id, name,type,parent_id,permission) values(168,'首选副本作leader',1,160,'op:replication-preferred');
insert into t_sys_permission(id, name,type,parent_id,permission) values(169,'副本变更详情',1,160,'op:replication-update-detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(170,'副本重分配',1,160,'op:replication-reassign');
insert into t_sys_permission(id, name,type,parent_id,permission) values(171,'取消副本重分配',1,169,'op:replication-update-detail:cancel');
-- t_sys_permission end--
-- t_sys_role start--
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (1,'超级管理员','超级管理员','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,68,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,142,143,144,145,146,147,148,149,150,151,152,153,161,162,163,164,165,166,167,168,169,171,170');
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'普通管理员','普通管理员,不能更改用户信息','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,68,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,146,149,150,161,162,163,164,165,166,167,168,169,171,170');
-- insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'访客','访客','12,13,22,26,29,32,44,45,50,62,63,81,83,85,141,146,149,150,161,163');
-- t_sys_role end--
-- t_sys_user start--
insert into t_sys_user(id, username, password, salt, role_ids) VALUES (1,'super-admin','3a3e4d32-5247-321b-9efb-9cbf60b2bf6c','e6973cfc-7583-4baa-8802-65ded1268ab6','1' );
insert into t_sys_user(id, username, password, salt, role_ids) VALUES (2,'admin','3a3e4d32-5247-321b-9efb-9cbf60b2bf6c','e6973cfc-7583-4baa-8802-65ded1268ab6','2' );
-- t_sys_user end--

View File

@@ -4,8 +4,8 @@
CREATE TABLE IF NOT EXISTS T_KAFKA_USER
(
ID IDENTITY NOT NULL COMMENT '主键ID',
USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT '用户名',
PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT '密码',
USERNAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT '用户名',
PASSWORD VARCHAR(128) NOT NULL DEFAULT '' COMMENT '密码',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT '更新时间',
CLUSTER_INFO_ID BIGINT NOT NULL COMMENT '集群信息里的集群ID',
PRIMARY KEY (ID),
@@ -29,9 +29,51 @@ CREATE TABLE IF NOT EXISTS T_CLUSTER_INFO
(
ID IDENTITY NOT NULL COMMENT '主键ID',
CLUSTER_NAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT '集群名',
ADDRESS VARCHAR(256) NOT NULL DEFAULT '' COMMENT '集群地址',
PROPERTIES VARCHAR(512) NOT NULL DEFAULT '' COMMENT '集群的其它属性配置',
ADDRESS VARCHAR(1024) NOT NULL DEFAULT '' COMMENT '集群地址',
PROPERTIES VARCHAR(1024) NOT NULL DEFAULT '' COMMENT '集群的其它属性配置',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT '更新时间',
PRIMARY KEY (ID),
UNIQUE (CLUSTER_NAME)
);
-- Role/permission tables for console login users
CREATE TABLE IF NOT EXISTS t_sys_permission
(
ID IDENTITY NOT NULL COMMENT '主键ID',
name varchar(100) DEFAULT NULL COMMENT '权限名称',
type tinyint(1) NOT NULL DEFAULT 0 COMMENT '权限类型: 0菜单1按钮',
parent_id bigint(20) DEFAULT NULL COMMENT '所属父权限ID',
permission varchar(100) DEFAULT NULL COMMENT '权限字符串',
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS t_sys_role
(
ID IDENTITY NOT NULL COMMENT '主键ID',
role_name varchar(100) NOT NULL COMMENT '角色名称',
description varchar(100) DEFAULT NULL COMMENT '角色描述',
permission_ids varchar(500) DEFAULT NULL COMMENT '分配的权限ID',
PRIMARY KEY (id)
);
CREATE TABLE IF NOT EXISTS t_sys_user
(
ID IDENTITY NOT NULL COMMENT '主键ID',
username varchar(100) DEFAULT NULL COMMENT '用户名',
password varchar(100) DEFAULT NULL COMMENT '用户密码',
salt varchar(100) DEFAULT NULL COMMENT '加密的盐值',
role_ids varchar(100) DEFAULT NULL COMMENT '分配角色的ID',
PRIMARY KEY (id),
UNIQUE (username)
);
-- Binds cluster data permissions to roles
CREATE TABLE IF NOT EXISTS t_cluster_role_relation
(
ID IDENTITY NOT NULL COMMENT '主键ID',
ROLE_ID bigint(20) NOT NULL COMMENT '角色ID',
CLUSTER_INFO_ID bigint(20) NOT NULL COMMENT '集群信息的ID',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT '更新时间',
PRIMARY KEY (id),
UNIQUE (ROLE_ID, CLUSTER_INFO_ID)
);

View File

@@ -3,9 +3,9 @@
<springProperty scope="context" name="LOGGING_HOME" source="logging.home"/>
<!-- 日志目录 -->
<property name="LOG_HOME" value="${LOGGING_HOME}/logs"/>
<property name="LOG_HOME" value="${LOGGING_HOME:-${user.dir}}/logs"/>
<!-- 日志文件名-->
<property name="APP_NAME" value="${APPLICATION_NAME:-.}"/>
<property name="APP_NAME" value="${APPLICATION_NAME:-kafka-console-ui}"/>
<!-- 使用默认的输出格式-->
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
@@ -45,7 +45,12 @@
<appender-ref ref="AsyncFileAppender"/>
</logger>
<logger name="org.apache.kafka.clients.consumer" level="warn" additivity="false">
<logger name="org.apache.kafka.common" level="warn" additivity="false">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="AsyncFileAppender"/>
</logger>
<logger name="org.apache.kafka.clients.Metadata" level="warn" additivity="false">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="AsyncFileAppender"/>
</logger>
@@ -59,4 +64,6 @@
<appender-ref ref="CONSOLE"/>
<appender-ref ref="AsyncFileAppender"/>
</logger>
<logger name="org.apache.kafka" level="warn"/>
</configuration>

View File

@@ -21,7 +21,7 @@ import org.apache.kafka.common.utils.{KafkaThread, LogContext, Time}
import org.slf4j.{Logger, LoggerFactory}
import java.io.IOException
import java.util.Properties
import java.util.{Collections, Properties}
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{ConcurrentLinkedQueue, TimeUnit}
import scala.jdk.CollectionConverters.{ListHasAsScala, MapHasAsJava, PropertiesHasAsScala, SetHasAsScala}
@@ -162,7 +162,7 @@ object BrokerApiVersion{
def listAllBrokerVersionInfo(): Map[Node, Try[NodeApiVersions]] =
findAllBrokers().map { broker =>
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker)))
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker), Collections.emptyList(), false))
}.toMap
def close(): Unit = {

View File

@@ -0,0 +1,101 @@
package kafka.console
import org.apache.kafka.clients.NodeApiVersions
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersion
import java.util.Collections
import scala.jdk.CollectionConverters.SeqHasAsJava
import scala.reflect.runtime.universe._
import scala.reflect.runtime.{universe => ru}
/**
* broker version with api version.
*
* @author: xuxd
* @since: 2024/8/29 16:12
* */
class BrokerVersion(val apiVersion: Int, val brokerVersion: String)
object BrokerVersion {
val define: List[BrokerVersion] = List(
new BrokerVersion(33, "0.11"),
new BrokerVersion(37, "1.0"),
new BrokerVersion(42, "1.1"),
new BrokerVersion(43, "2.2"),
new BrokerVersion(44, "2.3"),
new BrokerVersion(47, "2.4~2.5"),
new BrokerVersion(49, "2.6"),
new BrokerVersion(57, "2.7"),
new BrokerVersion(64, "2.8"),
new BrokerVersion(67, "3.0~3.4"),
new BrokerVersion(68, "3.5~3.6"),
new BrokerVersion(74, "3.7"),
new BrokerVersion(75, "3.8")
)
def getDefineWithJavaList(): java.util.List[BrokerVersion] = {
define.toBuffer.asJava
}
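// Infer the broker version from the highest api key the node advertises: brokers newer than the
// client report apis the client cannot parse (collected in unknownApis); otherwise the max
// supported api key is matched against the table above.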
def guessBrokerVersion(nodeVersion: NodeApiVersions): String = {
val unknown = "unknown"
var guessVersion = unknown
var maxApiKey: Short = -1
if (nodeVersion.toString().contains("UNKNOWN")) {
val unknownApis = getFieldValueByName(nodeVersion, "unknownApis")
unknownApis match {
case Some(unknownApis: java.util.List[ApiVersion]) => {
if (unknownApis.size > 0) {
maxApiKey = unknownApis.get(unknownApis.size() - 1).apiKey()
}
}
case _ => -1
}
}
if (maxApiKey < 0) {
val versions = new java.util.ArrayList[ApiVersion](nodeVersion.allSupportedApiVersions().values())
Collections.sort(versions, (o1: ApiVersion, o2: ApiVersion) => o2.apiKey() - o1.apiKey)
maxApiKey = versions.get(0).apiKey()
}
if (maxApiKey > 0) {
if (maxApiKey > define.last.apiVersion) {
guessVersion = "> " + define.last.brokerVersion
} else if (maxApiKey < define.head.apiVersion) {
guessVersion = "< " + define.head.brokerVersion
} else {
for (i <- define.indices) {
if (maxApiKey <= define(i).apiVersion && guessVersion == unknown) {
guessVersion = define(i).brokerVersion
}
}
}
}
guessVersion
}
def getFieldValueByName(obj: Object, fieldName: String): Option[Any] = {
val runtimeMirror = ru.runtimeMirror(obj.getClass.getClassLoader)
val instanceMirror = runtimeMirror.reflect(obj)
val typeOfObj = instanceMirror.symbol.toType
// look up the member named fieldName
val fieldSymbol = typeOfObj.member(newTermName(fieldName)).asTerm
// runtime reflection can read private members too, so a single path resolves the field value
val fieldMirror = instanceMirror.reflectField(fieldSymbol)
Some(fieldMirror.get)
}
}

View File

@@ -0,0 +1,84 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import org.apache.kafka.clients.admin.{Admin, AlterClientQuotasOptions}
import org.apache.kafka.common.quota.{ClientQuotaAlteration, ClientQuotaEntity, ClientQuotaFilter, ClientQuotaFilterComponent}
import java.util.Collections
import java.util.concurrent.TimeUnit
import scala.jdk.CollectionConverters.{IterableHasAsJava, ListHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
* client quota console.
*
* @author xuxd
* @date 2022-12-30 10:55:56
* */
class ClientQuotaConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig) with Logging {
def getClientQuotasConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String]): java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]] = {
withAdminClientAndCatchError(admin => getAllClientQuotasConfigs(admin, entityTypes.asScala.toList, entityNames.asScala.toList),
e => {
log.error("getAllClientQuotasConfigs error.", e)
java.util.Collections.emptyMap()
})
}.asInstanceOf[java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]]]
def addQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeAddedMap: java.util.Map[String, String]): (Boolean, String) = {
alterQuotaConfigs(entityTypes, entityNames, configsToBeAddedMap, Collections.emptyList())
}
def deleteQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeDeleted: java.util.List[String]): (Boolean, String) = {
alterQuotaConfigs(entityTypes, entityNames, Collections.emptyMap(), configsToBeDeleted)
}
def alterQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeAddedMap: java.util.Map[String, String], configsToBeDeleted: java.util.List[String]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
alterQuotaConfigsInner(admin, entityTypes.asScala.toList, entityNames.asScala.toList, configsToBeAddedMap.asScala.toMap, configsToBeDeleted.asScala.toSeq)
(true, "")
},
e => {
log.error("getAllClientQuotasConfigs error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
private def getAllClientQuotasConfigs(adminClient: Admin, entityTypes: List[String], entityNames: List[String]): java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]] = {
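// An empty entity name selects the default entity, a concrete name selects that entity,
// and a missing name (more types than names) selects every entity of the type.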
val components = entityTypes.map(Some(_)).zipAll(entityNames.map(Some(_)), None, None).map { case (entityType, entityNameOpt) =>
entityNameOpt match {
case Some("") => ClientQuotaFilterComponent.ofDefaultEntity(entityType.get)
case Some(name) => ClientQuotaFilterComponent.ofEntity(entityType.get, name)
case None => ClientQuotaFilterComponent.ofEntityType(entityType.get)
}
}
adminClient.describeClientQuotas(ClientQuotaFilter.containsOnly(components.asJava)).entities.get(30, TimeUnit.SECONDS)
}.asInstanceOf[java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]]]
private def alterQuotaConfigsInner(adminClient: Admin, entityTypes: List[String], entityNames: List[String], configsToBeAddedMap: Map[String, String], configsToBeDeleted: Seq[String]) = {
// handle altering client/user quota configs
// val oldConfig = getAllClientQuotasConfigs(adminClient, entityTypes, entityNames)
// val invalidConfigs = configsToBeDeleted.filterNot(oldConfig.asScala.toMap.contains)
// if (invalidConfigs.nonEmpty)
// throw new InvalidConfigurationException(s"Invalid config(s): ${invalidConfigs.mkString(",")}")
val alterEntityNames = entityNames.map(en => if (en.nonEmpty) en else null)
// Explicitly populate a HashMap to ensure nulls are recorded properly.
val alterEntityMap = new java.util.HashMap[String, String]
entityTypes.zip(alterEntityNames).foreach { case (k, v) => alterEntityMap.put(k, v) }
val entity = new ClientQuotaEntity(alterEntityMap)
val alterOptions = new AlterClientQuotasOptions().validateOnly(false)
val alterOps = (configsToBeAddedMap.map { case (key, value) =>
val doubleValue = try value.toDouble catch {
case _: NumberFormatException =>
throw new IllegalArgumentException(s"Cannot parse quota configuration value for $key: $value")
}
new ClientQuotaAlteration.Op(key, doubleValue)
} ++ configsToBeDeleted.map(key => new ClientQuotaAlteration.Op(key, null))).asJavaCollection
adminClient.alterClientQuotas(Collections.singleton(new ClientQuotaAlteration(entity, alterOps)), alterOptions)
.all().get(60, TimeUnit.SECONDS)
}
}

View File

@@ -94,7 +94,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
t.lag = t.logEndOffset - t.consumerOffset
}
}
t.lag = t.logEndOffset - t.consumerOffset
// t.lag = t.logEndOffset - t.consumerOffset
(topicPartition, t)
}).toMap

View File

@@ -189,10 +189,10 @@ object KafkaConsole {
case (topicPartition, listOffsetsResultInfo) => topicPartition -> new OffsetAndMetadata(listOffsetsResultInfo.offset)
}.toMap
unsuccessfulOffsetsForTimes.foreach { entry =>
log.warn(s"\nWarn: Partition " + entry._1.partition() + " from topic " + entry._1.topic() +
" is empty. Falling back to latest known offset.")
}
// unsuccessfulOffsetsForTimes.foreach { entry =>
// log.warn(s"\nWarn: Partition " + entry._1.partition() + " from topic " + entry._1.topic() +
// " is empty. Falling back to latest known offset.")
// }
successfulLogTimestampOffsets ++ getLogEndOffsets(admin, unsuccessfulOffsetsForTimes.keySet.toSeq, timeoutMs)
}

View File

@@ -6,13 +6,17 @@ import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.clients.admin.{DeleteRecordsOptions, RecordsToDelete}
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.clients.producer.{ProducerRecord, RecordMetadata}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.header.Header
import org.apache.kafka.common.header.internals.RecordHeader
import java.time.Duration
import java.util
import java.util.{Properties}
import java.util.Properties
import java.util.concurrent.Future
import scala.collection.immutable
import scala.collection.mutable.ArrayBuffer
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
@@ -218,7 +222,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
def send(topic: String, partition: Int, key: String, value: String, num: Int): Unit = {
withProducerAndCatchError(producer => {
val nullKey = if (key != null && key.trim().length() == 0) null else key
val nullKey = if (key != null && key.trim().isEmpty) null else key
for (a <- 1 to num) {
val record = if (partition != -1) new ProducerRecord[String, String](topic, partition, nullKey, value)
else new ProducerRecord[String, String](topic, nullKey, value)
@@ -228,6 +232,39 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
}
def send(topic: String,
partition: Int,
key: String,
value: String,
num: Int,
headerKeys: Array[String],
headerValues: Array[String],
sync: Boolean): (Boolean, String) = {
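// Attach one header per (key, value) pair to every record; in sync mode, block on each
// send future so broker-side failures surface before returning.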
withProducerAndCatchError(producer => {
val nullKey = if (key != null && key.trim().isEmpty) null else key
val results = ArrayBuffer.empty[Future[RecordMetadata]]
for (a <- 1 to num) {
val record = if (partition != -1) new ProducerRecord[String, String](topic, partition, nullKey, value)
else new ProducerRecord[String, String](topic, nullKey, value)
if (headerKeys.nonEmpty && headerKeys.length == headerValues.length) {
val headers: Array[Header] = headerKeys.zip(headerValues).map { case (hk, hv) =>
new RecordHeader(hk, hv.getBytes())
}
headers.foreach(record.headers().add)
}
results += producer.send(record)
}
if (sync) {
results.foreach(_.get())
}
(true, "")
}, e => {
log.error("send error.", e)
(false, e.getMessage)
})
}.asInstanceOf[(Boolean, String)]
def sendSync(record: ProducerRecord[Array[Byte], Array[Byte]]): (Boolean, String) = {
withByteProducerAndCatchError(producer => {
val metadata = producer.send(record).get()

View File

@@ -0,0 +1,48 @@
package com.xuxd.kafka.console.scala;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.config.KafkaConfig;
import kafka.console.ClientQuotaConsole;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.junit.jupiter.api.Test;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
public class ClientQuotaConsoleTest {
String bootstrapServer = "localhost:9092";
@Test
void testGetClientQuotasConfigs() {
ClientQuotaConsole console = new ClientQuotaConsole(new KafkaConfig());
ContextConfig config = new ContextConfig();
config.setBootstrapServer(bootstrapServer);
ContextConfigHolder.CONTEXT_CONFIG.set(config);
Map<ClientQuotaEntity, Map<String, Object>> configs = console.getClientQuotasConfigs(Arrays.asList(ClientQuotaEntity.USER, ClientQuotaEntity.CLIENT_ID), Arrays.asList("user1", "clientA"));
configs.forEach((k, v) -> {
System.out.println(k);
System.out.println(v);
});
}
@Test
void testAlterClientQuotasConfigs() {
ClientQuotaConsole console = new ClientQuotaConsole(new KafkaConfig());
ContextConfig config = new ContextConfig();
config.setBootstrapServer(bootstrapServer);
ContextConfigHolder.CONTEXT_CONFIG.set(config);
Map<String, String> configsToBeAddedMap = new HashMap<>();
configsToBeAddedMap.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "1024000000");
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER), Arrays.asList("user-test"), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER), Arrays.asList(""), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList(""), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList("clientA"), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER, ClientQuotaEntity.CLIENT_ID), Arrays.asList("", ""), configsToBeAddedMap);
// console.deleteQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList(""), Arrays.asList(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG));
}
}

ui/package-lock.json generated
View File

@@ -1820,6 +1820,63 @@
"integrity": "sha1-/q7SVZc9LndVW4PbwIhRpsY1IPo=",
"dev": true
},
"ansi-styles": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
"dev": true,
"optional": true,
"requires": {
"color-convert": "^2.0.1"
}
},
"chalk": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
"integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
"dev": true,
"optional": true,
"requires": {
"ansi-styles": "^4.1.0",
"supports-color": "^7.1.0"
}
},
"color-convert": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"dev": true,
"optional": true,
"requires": {
"color-name": "~1.1.4"
}
},
"color-name": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"dev": true,
"optional": true
},
"has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"optional": true
},
"loader-utils": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.4.tgz",
"integrity": "sha512-xXqpXoINfFhgua9xiqD8fPFHgkoq1mmmpE92WlDbm9rNRd/EbRb+Gqf908T2DMfuHjjJlksiK2RbHVOdD/MqSw==",
"dev": true,
"optional": true,
"requires": {
"big.js": "^5.2.2",
"emojis-list": "^3.0.0",
"json5": "^2.1.2"
}
},
"ssri": {
"version": "8.0.1",
"resolved": "https://registry.nlark.com/ssri/download/ssri-8.0.1.tgz",
@@ -1828,6 +1885,28 @@
"requires": {
"minipass": "^3.1.1"
}
},
"supports-color": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"optional": true,
"requires": {
"has-flag": "^4.0.0"
}
},
"vue-loader-v16": {
"version": "npm:vue-loader@16.8.3",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.8.3.tgz",
"integrity": "sha512-7vKN45IxsKxe5GcVCbc2qFU5aWzyiLrYJyUuMz4BQLKctCj/fmCa0w6fGiiQ2cLFetNcek1ppGJQDCup0c1hpA==",
"dev": true,
"optional": true,
"requires": {
"chalk": "^4.1.0",
"hash-sum": "^2.0.0",
"loader-utils": "^2.0.0"
}
}
}
},
@@ -12097,87 +12176,6 @@
}
}
},
"vue-loader-v16": {
"version": "npm:vue-loader@16.8.3",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.8.3.tgz",
"integrity": "sha512-7vKN45IxsKxe5GcVCbc2qFU5aWzyiLrYJyUuMz4BQLKctCj/fmCa0w6fGiiQ2cLFetNcek1ppGJQDCup0c1hpA==",
"dev": true,
"optional": true,
"requires": {
"chalk": "^4.1.0",
"hash-sum": "^2.0.0",
"loader-utils": "^2.0.0"
},
"dependencies": {
"ansi-styles": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
"dev": true,
"optional": true,
"requires": {
"color-convert": "^2.0.1"
}
},
"chalk": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
"integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
"dev": true,
"optional": true,
"requires": {
"ansi-styles": "^4.1.0",
"supports-color": "^7.1.0"
}
},
"color-convert": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"dev": true,
"optional": true,
"requires": {
"color-name": "~1.1.4"
}
},
"color-name": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"dev": true,
"optional": true
},
"has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"dev": true,
"optional": true
},
"loader-utils": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.2.tgz",
"integrity": "sha512-TM57VeHptv569d/GKh6TAYdzKblwDNiumOdkFnejjD0XwTH87K90w3O7AiJRqdQoXygvi1VQTJTLGhJl7WqA7A==",
"dev": true,
"optional": true,
"requires": {
"big.js": "^5.2.2",
"emojis-list": "^3.0.0",
"json5": "^2.1.2"
}
},
"supports-color": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"dev": true,
"optional": true,
"requires": {
"has-flag": "^4.0.0"
}
}
}
},
"vue-ref": {
"version": "2.0.0",
"resolved": "https://registry.npm.taobao.org/vue-ref/download/vue-ref-2.0.0.tgz",


@@ -11,24 +11,50 @@
><router-link to="/group-page" class="pad-l-r">消费组</router-link>
<span>|</span
><router-link to="/message-page" class="pad-l-r">消息</router-link>
<span v-show="enableSasl">|</span
><router-link to="/acl-page" class="pad-l-r" v-show="enableSasl"
>Acl</router-link
<span>|</span
><router-link to="/client-quota-page" class="pad-l-r">限流</router-link>
<span>|</span
><router-link to="/acl-page" class="pad-l-r">Acl</router-link>
<span v-show="showUserMenu">|</span
><router-link to="/user-page" class="pad-l-r" v-show="showUserMenu"
>用户</router-link
>
<span>|</span
><router-link to="/op-page" class="pad-l-r">运维</router-link>
<span class="right">集群{{ clusterName }}</span>
<div class="right">
<span>集群{{ clusterName }} </span>
<a-dropdown v-show="showUsername">
<span>
<span> | </span><a-icon type="smile"></a-icon>
<span>{{ username }}</span>
</span>
<a-menu slot="overlay">
<a-menu-item key="1">
<a href="javascript:;" @click="logout">
<!-- <a-icon type="logout"/>-->
<span>退出</span>
</a>
</a-menu-item>
</a-menu>
</a-dropdown>
</div>
</div>
<router-view class="content" />
</div>
</template>
<script>
import { KafkaClusterApi } from "@/utils/api";
import { KafkaClusterApi, AuthApi } from "@/utils/api";
import request from "@/utils/request";
import { mapMutations, mapState } from "vuex";
import { getClusterInfo } from "@/utils/local-cache";
import {
  deleteToken,
  deleteUsername,
  getClusterInfo,
  getPermissions,
  getUsername,
} from "@/utils/local-cache";
import notification from "ant-design-vue/lib/notification";
import { CLUSTER } from "@/store/mutation-types";
import { AUTH, CLUSTER } from "@/store/mutation-types";
export default {
  data() {
@@ -37,35 +63,69 @@ export default {
    };
  },
  created() {
    const clusterInfo = getClusterInfo();
    if (!clusterInfo) {
      request({
        url: KafkaClusterApi.peekClusterInfo.url,
        method: KafkaClusterApi.peekClusterInfo.method,
      }).then((res) => {
        if (res.code == 0) {
          this.switchCluster(res.data);
        } else {
          notification.error({
            message: "error",
            description: res.msg,
          });
        }
      });
    } else {
      this.switchCluster(clusterInfo);
    }
    this.intAuthState();
    this.initClusterInfo();
  },
  computed: {
    ...mapState({
      clusterName: (state) => state.clusterInfo.clusterName,
      enableSasl: (state) => state.clusterInfo.enableSasl,
      showUsername: (state) => state.auth.enable && state.auth.username,
      username: (state) => state.auth.username,
      showUserMenu: (state) => state.auth.enable,
    }),
  },
  methods: {
    ...mapMutations({
      switchCluster: CLUSTER.SWITCH,
      enableAuth: AUTH.ENABLE,
      setUsername: AUTH.SET_USERNAME,
      setPermissions: AUTH.SET_PERMISSIONS,
    }),
    beforeLoadFn() {
      this.setUsername(getUsername());
      this.setPermissions(getPermissions());
    },
    intAuthState() {
      request({
        url: AuthApi.enable.url,
        method: AuthApi.enable.method,
      }).then((res) => {
        const enable = res;
        this.enableAuth(enable);
        // if (!enable){
        //   this.initClusterInfo();
        // }
      });
    },
    initClusterInfo() {
      const clusterInfo = getClusterInfo();
      if (!clusterInfo) {
        request({
          url: KafkaClusterApi.peekClusterInfo.url,
          method: KafkaClusterApi.peekClusterInfo.method,
        }).then((res) => {
          if (res.code == 0) {
            this.switchCluster(res.data);
          } else {
            notification.error({
              message: "error",
              description: res.msg,
            });
          }
        });
      } else {
        this.switchCluster(clusterInfo);
      }
    },
    logout() {
      deleteToken();
      deleteUsername();
      this.$router.push("/login-page");
    },
  },
  mounted() {
    this.beforeLoadFn();
  },
};
</script>


@@ -1,32 +0,0 @@
<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
  </div>
</template>

<script>
export default {
  name: "HelloWorld",
  props: {
    msg: String,
  },
};
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
  margin: 40px 0 0;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>

Some files were not shown because too many files have changed in this diff.