88 Commits

Author SHA1 Message Date
许晓东
571efe6ddc API permission filtering. 2023-05-18 22:56:00 +08:00
许晓东
7e98a58f60 eslint fixes, page permission configuration, and default permission data loading. 2023-05-17 22:29:04 +08:00
许晓东
b08be2aa65 Permission configuration, default users, role configuration. 2023-05-16 21:23:45 +08:00
许晓东
238507de19 Permission configuration. 2023-05-15 21:58:55 +08:00
许晓东
435a5ca2bc Support login 2023-05-14 21:25:13 +08:00
许晓东
be8e567684 Update password in personal settings. 2023-05-09 20:44:54 +08:00
许晓东
da1ddeb1e7 User deletion, password reset, role assignment. 2023-05-09 07:16:55 +08:00
许晓东
38ca2cfc52 User creation and deletion. 2023-05-07 22:55:01 +08:00
许晓东
cc8f671fdb Permission assignment on the role list page. 2023-05-04 16:24:34 +08:00
许晓东
e3b0dd5d2a First cut of permission assignment on the role list page. 2023-04-24 22:08:05 +08:00
许晓东
a37664f6d5 First cut of the role list page. 2023-04-19 22:10:08 +08:00
许晓东
af52e6bc61 Display user permissions. 2023-04-17 22:24:51 +08:00
许晓东
fce1154c36 Add user management: APIs for creating users, roles and permissions 2023-04-11 22:01:16 +08:00
许晓东
0bf5c6f46c Update download links. 2023-04-11 13:30:53 +08:00
许晓东
fe759aaf74 Increase cluster field length to 1024; fix vulnerability (CVE-2023-20860): upgrade Spring to 5.3.26 2023-04-11 12:25:24 +08:00
许晓东
1e6a7bb269 Polish docs. 2023-02-10 20:41:15 +08:00
许晓东
fb440ae153 Polish docs. 2023-02-10 20:28:23 +08:00
许晓东
07c6714fd9 Switch to the Taobao npm mirror; cap the display width of overly long client IDs. 2023-02-09 22:23:34 +08:00
许晓东
5e9304efc2 Add quota documentation; fix the prompt shown when ACL authorization is enabled. 2023-02-07 22:24:37 +08:00
许晓东
fc7f05cf4e Quotas: support user and client ID at the same time. 2023-02-06 22:11:56 +08:00
许晓东
5a87e9cad8 Quotas: support users. 2023-02-05 23:08:14 +08:00
许晓东
608f7cdc47 Quotas: support editing and deleting client ID quotas. 2023-02-05 22:01:37 +08:00
许晓东
ee6defe5d2 Quotas: support querying by client ID. 2023-02-04 21:28:51 +08:00
许晓东
56621e0b8c Add client quota menu page. 2023-01-30 21:40:11 +08:00
许晓东
832b20a83e Client quota query API. 2023-01-09 22:11:50 +08:00
许晓东
4dbadee0d4 Client quota service. 2023-01-03 22:01:43 +08:00
许晓东
7d76632f08 Client quota console 2023-01-03 21:28:11 +08:00
许晓东
d5102f626c Client quota console. 2023-01-02 21:28:47 +08:00
许晓东
daf77290da Client quota console. 2023-01-02 21:16:22 +08:00
许晓东
4df20f9ca5 contact jpg 2022-12-09 09:21:10 +08:00
许晓东
b465ba78b8 update contact 2022-11-01 19:05:40 +08:00
许晓东
9a69bad93a wechat contact. 2022-10-16 13:32:41 +08:00
许晓东
3785e9aaca wechat contact. 2022-10-16 13:31:35 +08:00
许晓东
d502da1b39 update weixin contact 2022-09-20 20:07:06 +08:00
许晓东
d6282cb902 Display of the authentication/authorization separation page when ACL is not enabled. 2022-08-28 22:48:21 +08:00
许晓东
ca4dc2ebc9 Polish README. 2022-08-28 22:04:31 +08:00
许晓东
50775994b5 Polish README. 2022-08-28 22:02:21 +08:00
许晓东
3bd14a35d6 Separate ACL authentication and authorization management. 2022-08-28 21:39:08 +08:00
许晓东
4c3fe5230c update contact jpg. 2022-08-04 09:12:39 +08:00
许晓东
57d549635b Restore the interceptory path. 2022-07-27 18:19:50 +08:00
许晓东
923b89b6bd Update download links. 2022-07-24 17:33:22 +08:00
许晓东
e9f34e1d19 Support batch topic deletion. 2022-07-24 11:52:48 +08:00
许晓东
ccdcebb24d Update icons. 2022-07-24 11:09:25 +08:00
许晓东
7ddd75e34f Update icons. 2022-07-24 11:09:21 +08:00
许晓东
aebea435fa Update overview. 2022-07-14 21:59:25 +08:00
许晓东
ea788313c6 Update overview. 2022-07-14 21:59:18 +08:00
许晓东
727edfcca8 Support caching producer connections; connection caching disabled by default 2022-07-09 18:54:20 +08:00
Xiaodong Xu
cc1989a74b Merge pull request #17 from comdotwww/main
Fix CMD path escaping on Windows
2022-07-07 22:46:39 +08:00
comdotwww
0196a90b69 Update start.bat 2022-07-07 22:05:46 +08:00
许晓东
9c3e3988e0 Handle consumer connection properties; update contact info 2022-07-07 20:09:27 +08:00
许晓东
458e13c9e0 Cache connections 2022-07-05 10:19:51 +08:00
许晓东
979859b232 Support deleting messages online 2022-07-04 17:16:00 +08:00
许晓东
b163e5f776 Upgrade Kafka from 2.8.0 to 3.2.0; add DockerCompose deployment guide 2022-06-30 20:11:29 +08:00
Xiaodong Xu
d062e18940 Merge pull request #16 from wdkang123/main
new(md): Docker/DockerCompose deployment guide
2022-06-30 19:42:14 +08:00
武子康
87c1e7ba4a new(md): Docker/DockerCompose deployment guide 2022-06-30 19:12:42 +08:00
许晓东
5194c952f2 polish README. 2022-06-29 19:17:21 +08:00
许晓东
c1cc44d32f Fix NPE when the cluster has no active nodes; update README. 2022-06-29 17:22:29 +08:00
许晓东
82fafe980d Fix NPE when the cluster has no active nodes; update README. 2022-06-29 17:20:57 +08:00
许晓东
34752deca2 update wechat contact. 2022-06-17 10:10:00 +08:00
yinuo
9e42e2c72a Update contact info 2022-05-06 11:02:28 +08:00
dongyinuo
e531f5d786 Delete weixin_contact.jpeg 2022-05-06 11:00:56 +08:00
dongyinuo
10e75ac55d Update README.md
Update contact info
2022-05-06 10:58:55 +08:00
yinuo
4a8d09dc89 Update contact info 2022-05-06 10:56:22 +08:00
dongyinuo
116bc100a7 Add files via upload
Replace the WeChat group image
2022-05-06 10:48:00 +08:00
Xiaodong Xu
b1feaad9f7 Merge pull request #14 from dongyinuo/feature/dongyinuo/add/contact
Feature/dongyinuo/add/contact
2022-04-29 17:27:19 +08:00
yinuo
4d372f8374 Add contact info 2022-04-29 17:22:44 +08:00
yinuo
4b2c544c0d Add contact info 2022-04-29 17:21:32 +08:00
许晓东
8131cb1a42 Publish v1.0.4 package download link 2022-02-16 20:01:06 +08:00
许晓东
1dd6466261 Replica reassignment 2022-02-16 19:50:35 +08:00
许晓东
dda08a2152 Replica reassignment: generate reassignment plan 2022-02-15 20:13:07 +08:00
许晓东
01c7121ee4 Keep the cluster node list ordered 2022-01-22 23:33:13 +08:00
许晓东
d939d7653c Show broker API version compatibility on the home page 2022-01-22 23:07:41 +08:00
许晓东
058cd5a24e Query current reassignment; handle unsupported-version exception 2022-01-20 13:44:37 +08:00
许晓东
db3f55ac4a polish README 2022-01-19 19:05:00 +08:00
许晓东
a311a34537 Fix stack overflow bug in partition comparison 2022-01-18 20:42:11 +08:00
许晓东
e8fe2ea1c7 Support Chinese cluster names; allow choosing time order in message query results 2022-01-13 14:19:17 +08:00
许晓东
10302dd39c v1.0.3 package download link 2022-01-09 23:57:00 +08:00
许晓东
55a4483fcc Package only the zip archive 2022-01-09 23:45:39 +08:00
许晓东
4dd2412b78 polish README 2022-01-09 23:40:20 +08:00
许晓东
387c714072 polish README 2022-01-09 23:27:23 +08:00
许晓东
6a2d876d50 Multi-cluster support, cluster switching 2022-01-06 19:31:44 +08:00
许晓东
f5fb2c4f88 Cluster switching 2022-01-05 21:19:46 +08:00
许晓东
6f9676e259 Cluster list, add cluster 2022-01-04 21:06:50 +08:00
许晓东
2427ce2c1e Prepare for multi-cluster support 2022-01-03 22:02:03 +08:00
许晓东
02abe67fce Message query filtering 2021-12-30 14:17:47 +08:00
许晓东
ad39f4e82c Message query filtering 2021-12-29 21:15:56 +08:00
许晓东
243c89b459 Fuzzy topic search; message filter page configuration 2021-12-28 20:39:07 +08:00
许晓东
11418cd6e0 Remove the message sync solution entry 2021-12-27 20:41:19 +08:00
193 changed files with 10716 additions and 1016 deletions

README.md

@@ -1,90 +1,99 @@
# Kafka visual management platform
A lightweight Kafka visual management platform that is quick to install and configure, and simple to use.
To keep development simple there is no internationalization support; only Chinese display is supported.
To keep development simple there is no internationalization support; the pages only display Chinese.
If you have used rocketmq-console, the front-end style is somewhat similar to it.
## Installation package download
Choose one of the following two options: download the installation package directly, or download the source code and package it yourself.
* Click to download (v1.0.2): [kafka-console-ui.tar.gz](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.tar.gz) or [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip)
* Refer to the packaging and deployment section below to download the source and repackage it (includes the latest committed features)
## Page preview
If GitHub displays images for you, click [view the menu pages](./document/overview/概览.md) to see what each page looks like
## Cluster migration support
The current main branch and future versions no longer provide message sync or cluster migration solutions; if needed, see: [cluster migration notes](./document/datasync/集群迁移.md)
## ACL notes
Run the latest code and you will see the ACL menu. Permission management and authentication user management (SASL_SCRAM) have been separated. After the separation, user changes are supported when only SASL_SCRAM authentication is enabled (authorization off), as is permission management under other authentication mechanisms (visual permission management); visual management of authentication users, however, currently only supports SCRAM.
Before v1.0.6, if the Kafka cluster has ACL enabled but the console shows no ACL menu, see the [ACL configuration guide](./document/acl/Acl.md)
## Features
* Multi-cluster support
* Cluster info
* Topic management
* Consumer group management
* Message management
* SASL_SCRAM-based authentication and authorization management
* ACL
* Client quotas
* Operations
![功能特性](./document/功能特性.png)
## Tech stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Kafka version
* Currently built against Kafka 2.8.0
## Monitoring
Only operations and management features are provided; for monitoring and alerting, use other components alongside it, see https://blog.csdn.net/x763795151/article/details/119705372
# Packaging and deployment
## Packaging
Requirements:
* maven 3.6+
* jdk 8
* git
See this mind map for the feature breakdown:
![功能特性](./document/img/功能特性.png)
## Installation package download
Click to download (v1.0.7): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.7/kafka-console-ui.zip)
If the package downloads slowly, see the source packaging notes below: clone the code and build it locally, which is quick.
If GitHub is slow, you can also try Gitee; click to download: [kafka-console-ui.zip (Gitee mirror)](https://gitee.com/xiaodong_xu/kafka-console-ui/releases/download/v1.0.7/kafka-console-ui.zip)
## Quick start
### Windows
1. Unzip the installation package
2. Enter the bin directory (you must be in the bin directory) and double-click `start.bat` to start
3. Stop: just close the launched command-line window
### Linux or macOS
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
cd kafka-console-ui
# run on Linux or macOS
sh package.sh
# run on Windows
package.bat
```
After a successful build, the output files are (two archive types):
* target/kafka-console-ui.tar.gz
* target/kafka-console-ui.zip
## Deployment
### macOS or Linux
```
# extract (tar.gz for example)
tar -zxvf kafka-console-ui.tar.gz
# extract
unzip kafka-console-ui.zip
# enter the extracted directory
cd kafka-console-ui
# edit the configuration
vim config/application.yml
# start
sh bin/start.sh
# stop
sh bin/shutdown.sh
```
### Windows
1. Unzip the installation package
2. Edit the configuration file `config/application.yml`
3. Enter the bin directory (you must be in the bin directory) and run `start.bat` to start
After startup, visit http://127.0.0.1:7766
### Access URL
After startup, visit http://127.0.0.1:7766
# Development environment
* jdk 8
* idea
* scala 2.13
* maven 3.6+
* webstorm
Apart from WebStorm, the front-end IDE, which you can replace as you prefer, the JDK and Scala are required.
# Local development setup
Taking myself as an example: get the tools of the development environment ready, then clone the code locally.
## Backend setup
1. Open the project in IDEA
2. In IDEA's Project Structure (Settings) -> Modules, mark src/main/scala as Sources (by convention src/main/java is the source directory, so this one has to be added as well)
3. In IDEA's Project Structure (Settings) -> Libraries, add a Scala SDK, select the locally downloaded Scala 2.13 directory, and confirm to add it
## Front end
The front-end code is in the ui directory of the project; open it with a front-end IDE and develop there.
## Notes
The front end and back end are separate. If you start only the back end without having built the front-end code, there are no pages; build first with `sh package.sh` to compile the front end, then start via IDEA, or start the front-end part separately
# Page examples
If ACL is not enabled in the configuration, the ACL menu pages are not shown, so the navigation bar has no ACL entry
### Configuring a cluster
On first startup, since no Kafka cluster information has been configured yet, the top-right corner of the page may show errors such as "No Cluster Info" or a prompt like "no cluster information, please switch cluster first".
![集群](./document/集群.png)
![Topic](./document/Topic.png)
![消费组](./document/消费组.png)
![运维](./document/运维.png)
A message search page has been added:
![消息](./document/消息.png)
Configure the cluster as follows:
1. Click the [运维] (Operations) menu in the top navigation bar
2. Click the [集群切换] (Switch cluster) button under cluster management
3. Click [新增集群] (Add cluster) in the dialog
4. Enter the Kafka cluster address and a name (any name will do)
5. Click submit and the cluster is added
6. Once added, the dialog lists this cluster; click the [切换] (Switch) button on the right to make it the current cluster
To add more clusters later, follow the same flow; to switch to a given cluster, click its switch button. The top-right corner of the page shows which cluster is currently in use; refresh the page if unsure.
When adding a cluster, besides the cluster address you can also enter other cluster properties, such as the request timeout and ACL settings. If ACL is enabled, the ACL menu appears in the navigation bar after switching to that cluster and the related operations become available. SASL_SCRAM-based authentication/authorization management is supported most completely; I have not verified the others (although I developed this, I have not fully verified this part), but the authorization part should be generic.
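For illustration, the extra properties for a cluster might look like the following (a sketch using standard Kafka client configuration names; the exact keys depend on your brokers and are not prescribed by this project):
```
request.timeout.ms=10000
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";
```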
## kafka版本
* 当前使用的kafka 3.2.0
## 监控
仅提供运维管理功能监控、告警需要配合其它组件如有需要建议请查看https://blog.csdn.net/x763795151/article/details/119705372
## 源码打包
如果想通过源码打包,查看:[源码打包说明](./document/package/源码打包.md)
## 本地开发
如果需要本地开发,开发环境配置查看:[本地开发](./document/develop/开发配置.md)
## 登录认证和权限
目前主分支不支持登录认证,感谢@dongyinuo 同学开发了一版支持登录认证,及相关的按钮权限(主要有两个角色:管理员和普通开发人员)。
在分支feature/dongyinuo/20220501/devops 上。
如果有需要使用管理台登录认证的,可以切换到这个分支上进行打包,打包方式看 源码打包 说明。
默认登录账户admin/kafka-console-ui521
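A minimal application.yml sketch for the auth settings, based on the AuthConfig class added in this changeset (property names come from `@ConfigurationProperties(prefix = "auth")`; the values are placeholders):
```
auth:
  enable: true
  secret: replace-with-your-own-secret
  expire-hours: 24
```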
## DockerCompose deployment
Thanks to @wdkang123 for sharing this deployment method; if needed, see [DockerCompose deployment](./document/deploy/docker部署.md)
## Contact
+ WeChat group
<img src="./document/contact/weixin_contact.jpg" width="40%"/>
[//]: # (<img src="https://github.com/xxd763795151/kafka-console-ui/blob/main/document/contact/weixin_contact.jpg" width="40%"/>)
+ If the contact info above no longer works, add me on WeChat and state your purpose
- xxd763795151
- wxid_7jy2ezljvebt12


@@ -4,7 +4,7 @@
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>rocketmq-reput</id>
<formats>
<format>tar.gz</format>
<!-- <format>tar.gz</format>-->
<format>zip</format>
</formats>


@@ -5,4 +5,4 @@ set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k
set CONFIG_FILE=../config/application.yml
set TARGET=../lib/kafka-console-ui.jar
set DATA_DIR=..
%JAVA_CMD% -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%
"%JAVA_CMD%" -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%


document/acl/Acl.md

@@ -0,0 +1,36 @@
# ACL configuration guide
## Preface
You may have come here from this article: [How to quickly manage Kafka ACL configuration visually](https://blog.csdn.net/x763795151/article/details/120200119)
That article may have said that ACL is enabled by editing the application.yml configuration file, for example:
```yaml
kafka:
  config:
    # Kafka broker addresses, comma separated
    bootstrap-server: 'localhost:9092'
    # whether the brokers have ACL enabled; if not, the remaining settings can be ignored
    enable-acl: true
    # only two security protocols are supported, SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise this setting does not matter
    security-protocol: SASL_PLAINTEXT
    sasl-mechanism: SCRAM-SHA-256
    # super admin username (already configured as a super admin on the brokers)
    admin-username: admin
    # super admin password
    admin-password: admin
    # automatically create the configured super admin user on startup
    admin-create: true
    # ZooKeeper address the brokers connect to
    zookeeper-addr: localhost:2181
    sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
```
It stated that kafka.config.enable-acl must be set to true.
Note: **this approach is no longer supported.**
## Notes for versions before v1.0.6
Multi-cluster configuration is now supported; see the "Configuring a cluster" section on the project home page.
So these extra configuration items have been removed.
If ACL is enabled, configure the cluster's ACL-related properties when adding the cluster on the page, like this: ![新增集群](./新增集群.png)
If the console detects ACL-related properties among them, the ACL menu appears automatically after switching to that cluster.
Note: only SASL is supported.



@@ -0,0 +1,11 @@
# Cluster migration
You may have seen my earlier article on an on-cloud/off-cloud cluster migration solution and found your way here out of curiosity.
The original article is here: [Smooth migration practice between old and new Kafka clusters](https://blog.csdn.net/x763795151/article/details/121070563)
However, since these features carry business-specific semantics, they have all been removed in the new version.
The current main branch and future versions no longer provide message sync or cluster migration solutions. If needed, use the code on the single-data-sync branch, or the historical v1.0.2 release package (direct download): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip)
v1.0.2 and earlier only support a single cluster configuration, but their SASL_SCRAM authentication and authorization management is quite complete.
Later versions will support multi-cluster management and remove or refine some pre-v1.0.2 features, aiming to be a sufficiently lightweight management tool without other business concerns.


@@ -0,0 +1,189 @@
# Docker/DockerCompose deployment
# 1. Quick start
## 1.1 Pull the image
```shell
docker pull wdkang/kafka-console-ui
```
## 1.2 List images
```shell
docker images
```
## 1.3 Start the service
Since data inside the Docker container is not persisted, it is recommended to map the data directory to the host.
See **2. Data persistence** for details.
```shell
docker run -d -p 7766:7766 wdkang/kafka-console-ui
```
## 1.4 Check status
```shell
docker ps -a
```
## 1.5 View logs
```shell
docker logs -f ${containerId}
```
## 1.6 Access the service
```shell
http://localhost:7766
```
# 2. Data persistence
Persisting the data is recommended.
## 2.1 Create directories
```shell
mkdir -p /home/kafka-console-ui/data /home/kafka-console-ui/log
cd /home/kafka-console-ui
```
## 2.2 Start the service
```shell
docker run -d -p 7766:7766 -v $PWD/data:/app/data -v $PWD/log:/app/log wdkang/kafka-console-ui
```
# 3. Build your own image
## 3.1 Build the image
**Prerequisites**
(adjust the Dockerfile to your own needs)
Download the [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases) package.
After extracting, place the Dockerfile in the root of the folder.
**Dockerfile**
```dockerfile
# jdk
FROM openjdk:8-jdk-alpine
# label
LABEL by="https://github.com/xxd763795151/kafka-console-ui"
# root
RUN mkdir -p /app && cd /app
WORKDIR /app
# config log data
RUN mkdir -p /app/config && mkdir -p /app/log && mkdir -p /app/data && mkdir -p /app/lib
# add file
ADD ./lib/kafka-console-ui.jar /app/lib
ADD ./config /app/config
# port
EXPOSE 7766
# start server
CMD java -jar -Xmx512m -Xms512m -Xmn256m -Xss256k /app/lib/kafka-console-ui.jar --spring.config.location="/app/config/" --logging.home="/app/log" --data.dir="/app/data"
```
**Build**
In the root of the folder
(note the trailing dot)
```shell
docker build -t ${your_docker_hub_addr} .
```
## 3.2 Push the image
```shell
docker push ${your_docker_hub_addr}
```
# 4. Container orchestration
```yaml
# docker-compose file
version: '3'
services:
  # service name
  kafka-console-ui:
    # container name
    container_name: "kafka-console-ui"
    # ports
    ports:
      - "7766:7766"
    # persistence
    volumes:
      - ./data:/app/data
      - ./log:/app/log
    # run as root to avoid file read/write permission issues
    privileged: true
    user: root
    # image
    image: "wdkang/kafka-console-ui"
```
## 4.1 Pull the image
```shell
docker-compose pull kafka-console-ui
```
## 4.2 Build and start
```shell
docker-compose up --detach --build kafka-console-ui
```
## 4.3 Check status
```shell
docker-compose ps -a
```
## 4.4 Stop the service
```shell
docker-compose down
```


@@ -0,0 +1,35 @@
# Local development setup guide
## Tech stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Development environment
* jdk 8
* idea
* scala 2.13
* maven 3.6+
* webstorm
* Node
Apart from WebStorm, the front-end IDE, which you can replace as you prefer, the JDK and Scala are required.
For development I use Node v14.16.0 locally; download: https://nodejs.org/download/release/v14.16.0/ . I have not tested whether higher or lower versions work.
Scala 2.13 can be downloaded at the bottom of this page: https://www.scala-lang.org/download/scala2.html
## Clone the code
Taking myself as an example: get the tools of the development environment ready, then clone the code locally.
## Backend setup
1. Open the project in IDEA
2. In IDEA's Project Structure (Settings) -> Modules, mark src/main/scala as Sources (by convention src/main/java is the source directory, so this additional source directory has to be added)
3. In IDEA's Settings -> Plugins, search for the Scala plugin and install it; IDEA probably needs a restart for it to take effect. This step must come before step 4
4. In IDEA's Project Structure (Settings) -> Libraries, add a Scala SDK and select the locally downloaded Scala 2.13 directory to add it (depending on your IDEA you may be able to pick one directly instead of downloading it first)
## Front end
The front-end code is in the ui directory of the project; open it with a front-end IDE such as WebStorm and develop there.
## Notes
The front end and back end are separate: if you start only the backend project, the src/main/resources directory may contain no static files, so the browser will show no pages.
You can build the front-end files first, e.g. by running `sh package.sh`, and then start via IDEA. Alternatively, after starting the backend in IDEA, open the front-end project in the ui directory with a front-end IDE such as WebStorm and start it separately for development.
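If you prefer the command line to a front-end IDE, a minimal sketch for starting the front end on its own, assuming the ui module is a standard vue-cli project (check ui/package.json for the actual script names):
```
cd ui
npm install
# assumed script name; verify in ui/package.json
npm run serve
```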



@@ -0,0 +1,32 @@
# Menu preview
If ACL is not enabled in the configuration, the ACL menu pages are not shown, so the navigation bar has no ACL entry
## Cluster
* Cluster list
![集群](./img/集群.png)
* View or modify cluster configuration
![Broker配置](./img/Broker配置.png)
## Topic
* Topic list
![Topic](./img/Topic.png)
## Consumer groups
* Consumer group list
![消费组](./img/消费组.png)
## Messages
* Search or filter messages by time
![消息时间查询](./img/消息时间查询.png)
* Message details
![消息详情](./img/消息详情.png)
## Operations
* Operations page
![运维](./img/运维.png)
* Cluster switching
![集群切换](./img/集群切换.png)


@@ -0,0 +1,34 @@
# Building from source
You can download the latest code and build it directly; compared with the released packages, the latest code may include the newest features
## Requirements
* maven 3.6+
* jdk 8
* git (optional)
Maven 3.6+ is recommended: I have not tried 3.4.x or 3.5.x, and with 3.3.x on Windows I hit a packaging bug that may put Spring Boot's application.yml into the wrong directory.
If 3.6+ also has this problem on macOS, try the latest version.
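A quick way to confirm the toolchain before building (standard commands, nothing project-specific):
```
mvn -v        # should report Maven 3.6+ running on JDK 8
java -version
```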
## Download the source
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
```
Or download the source directly from the page
## Build
I have written a simple build script; just run it.
### Windows
```
cd kafka-console-ui
# run on Windows
package.bat
```
### Linux or macOS
```
cd kafka-console-ui
# run on Linux or macOS
sh package.sh
```
After a successful build, a kafka-console-ui.zip installation package is generated in the target directory


pom.xml

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
<version>1.0.2</version>
<version>1.0.7</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>
@@ -21,10 +21,11 @@
<ui.path>${project.basedir}/ui</ui.path>
<frontend-maven-plugin.version>1.11.0</frontend-maven-plugin.version>
<compiler.version>1.8</compiler.version>
<kafka.version>2.8.0</kafka.version>
<kafka.version>3.2.0</kafka.version>
<maven.assembly.plugin.version>3.0.0</maven.assembly.plugin.version>
<mybatis-plus-boot-starter.version>3.4.2</mybatis-plus-boot-starter.version>
<scala.version>2.13.6</scala.version>
<spring-framework.version>5.3.26</spring-framework.version>
</properties>
<dependencies>
<dependency>
@@ -48,6 +49,11 @@
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
@@ -76,6 +82,18 @@
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
<version>3.9.2</version>
</dependency>
<dependency>
@@ -207,7 +225,8 @@
<goal>npm</goal>
</goals>
<configuration>
<arguments>install --registry=https://registry.npmjs.org/</arguments>
<!-- <arguments>install &#45;&#45;registry=https://registry.npmjs.org/</arguments>-->
<arguments>install --registry=https://registry.npm.taobao.org</arguments>
</configuration>
</execution>
<execution>


@@ -3,11 +3,16 @@ package com.xuxd.kafka.console;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.scheduling.annotation.EnableScheduling;
/**
* @author 晓东哥哥
*/
@MapperScan("com.xuxd.kafka.console.dao")
@SpringBootApplication
@EnableScheduling
@ServletComponentScan
public class KafkaConsoleUiApplication {
public static void main(String[] args) {


@@ -0,0 +1,117 @@
package com.xuxd.kafka.console.aspect;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;
/**
* @author 晓东哥哥
*/
@Slf4j
@Order(-1)
@Aspect
@Component
public class ControllerLogAspect {
private Map<String, String> descMap = new HashMap<>();
private ReentrantLock lock = new ReentrantLock();
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.ControllerLog)")
private void pointcut() {
}
@Around("pointcut()")
public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
StringBuilder params = new StringBuilder("[");
try {
String methodName = getMethodFullName(joinPoint.getTarget().getClass().getName(), joinPoint.getSignature().getName());
if (!descMap.containsKey(methodName)) {
cacheDescInfo(joinPoint);
}
Object[] args = joinPoint.getArgs();
long startTime = System.currentTimeMillis();
Object res = joinPoint.proceed();
long endTime = System.currentTimeMillis();
for (int i = 0; i < args.length; i++) {
params.append(args[i]);
}
params.append("]");
String resStr = "[" + (res != null ? res.toString() : "") + "]";
StringBuilder sb = new StringBuilder();
String shortMethodName = descMap.getOrDefault(methodName, ".-");
shortMethodName = shortMethodName.substring(shortMethodName.lastIndexOf(".") + 1);
sb.append("[").append(shortMethodName)
.append("调用完成: ")
.append("请求参数=").append(params).append(", ")
.append("响应值=").append(resStr).append(", ")
.append("耗时=").append(endTime - startTime)
.append(" ms");
log.info(sb.toString());
return res;
} catch (Throwable e) {
log.error("调用方法异常, 请求参数:" + params, e);
throw e;
}
}
private void cacheDescInfo(ProceedingJoinPoint joinPoint) {
lock.lock();
try {
String methodName = joinPoint.getSignature().getName();
Class<?> aClass = joinPoint.getTarget().getClass();
Method method = null;
try {
Object[] args = joinPoint.getArgs();
Class<?>[] clzArr = new Class[args.length];
for (int i = 0; i < args.length; i++) {
clzArr[i] = args[i].getClass();
}
method = aClass.getDeclaredMethod(methodName, clzArr);
} catch (Exception e) {
log.warn("cacheDescInfo error: {}", e.getMessage());
}
String fullMethodName = getMethodFullName(aClass.getName(), methodName);
String desc = "[" + fullMethodName + "]";
if (method == null) {
descMap.put(fullMethodName, desc);
return;
}
ControllerLog controllerLog = method.getAnnotation(ControllerLog.class);
String value = controllerLog.value();
if (StringUtils.isBlank(value)) {
descMap.put(fullMethodName, desc);
} else {
descMap.put(fullMethodName, value);
}
} finally {
lock.unlock();
}
}
private String getMethodFullName(String className, String methodName) {
return className + "#" + methodName;
}
}


@@ -0,0 +1,127 @@
package com.xuxd.kafka.console.aspect;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import com.xuxd.kafka.console.cache.RolePermCache;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import com.xuxd.kafka.console.exception.UnAuthorizedException;
import com.xuxd.kafka.console.filter.CredentialsContext;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/17 22:32
**/
@Slf4j
@Order(1)
@Aspect
@Component
public class PermissionAspect {
private Map<String, Set<String>> permMap = new HashMap<>();
private final AuthConfig authConfig;
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final RolePermCache rolePermCache;
public PermissionAspect(AuthConfig authConfig,
SysUserMapper userMapper,
SysRoleMapper roleMapper,
SysPermissionMapper permissionMapper,
RolePermCache rolePermCache) {
this.authConfig = authConfig;
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.permissionMapper = permissionMapper;
this.rolePermCache = rolePermCache;
}
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.Permission)")
private void pointcut() {
}
@Before(value = "pointcut()")
public void before(JoinPoint joinPoint) {
if (!authConfig.isEnable()) {
return;
}
MethodSignature signature = (MethodSignature) joinPoint.getSignature();
Method method = signature.getMethod();
Permission permission = method.getAnnotation(Permission.class);
if (permission == null) {
return;
}
String[] value = permission.value();
if (value == null || value.length == 0) {
return;
}
String name = method.getName() + "@" + method.hashCode();
Map<String, Set<String>> pm = checkPermMap(name, value);
Set<String> allowPermSet = pm.get(name);
if (allowPermSet == null) {
log.error("解析权限出现意外啦!!!");
return;
}
Credentials credentials = CredentialsContext.get();
if (credentials == null || credentials.isInvalid()) {
throw new UnAuthorizedException("credentials is invalid");
}
QueryWrapper<SysUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("username", credentials.getUsername());
SysUserDO userDO = userMapper.selectOne(queryWrapper);
if (userDO == null) {
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
String roleIds = userDO.getRoleIds();
List<Long> roleIdList = Arrays.stream(roleIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
for (Long roleId : roleIdList) {
Set<String> permSet = rolePermCache.getRolePermCache().getOrDefault(roleId, Collections.emptySet());
for (String p : allowPermSet) {
if (permSet.contains(p)) {
return;
}
}
}
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
private Map<String, Set<String>> checkPermMap(String methodName, String[] value) {
if (!permMap.containsKey(methodName)) {
Map<String, Set<String>> map = new HashMap<>(permMap);
map.put(methodName, new HashSet<>(Arrays.asList(value)));
permMap = map;
return map;
}
return permMap;
}
}


@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.aspect.annotation;
import java.lang.annotation.*;
/**
* Applied to methods on the controller layer
* @author 晓东哥哥
*/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface ControllerLog {
String value() default "";
}


@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.aspect.annotation;
import java.lang.annotation.*;
/**
* Permission annotation: when authentication is enabled, only users holding one of these permissions can access the annotated endpoint.
*
* @author: xuxd
* @date: 2023/5/17 22:30
**/
@Inherited
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Permission {
String[] value() default {};
}
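For reference, a usage sketch of this annotation on a controller method; the permission strings mirror the real usages visible in the AclAuthController diff below:
```java
@Permission({"acl:authority"})
@PostMapping("/list")
public Object getAclList(@RequestBody QueryAclDTO param) {
    return aclService.getAclList(param.toEntry());
}
```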


@@ -1,18 +1,15 @@
package com.xuxd.kafka.console.beans;
import java.util.Objects;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.acl.*;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.utils.SecurityUtils;
import java.util.Objects;
/**
* kafka-console-ui.
@@ -41,7 +38,9 @@ public class AclEntry {
entry.setResourceType(binding.pattern().resourceType().name());
entry.setName(binding.pattern().name());
entry.setPatternType(binding.pattern().patternType().name());
entry.setPrincipal(KafkaPrincipal.fromString(binding.entry().principal()).getName());
// entry.setPrincipal(KafkaPrincipal.fromString(binding.entry().principal()).getName());
// use this method with Kafka 3.x
entry.setPrincipal(SecurityUtils.parseKafkaPrincipal(binding.entry().principal()).getName());
entry.setHost(binding.entry().host());
entry.setOperation(binding.entry().operation().name());
entry.setPermissionType(binding.entry().permissionType().name());


@@ -8,7 +8,7 @@ import org.apache.kafka.common.Node;
* @author xuxd
* @date 2021-10-08 14:03:21
**/
public class BrokerNode {
public class BrokerNode implements Comparable{
private int id;
@@ -80,4 +80,8 @@ public class BrokerNode {
public void setController(boolean controller) {
isController = controller;
}
@Override public int compareTo(Object o) {
return this.id - ((BrokerNode)o).id;
}
}


@@ -0,0 +1,21 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/14 19:37
**/
@Data
public class Credentials {
public static final Credentials INVALID = new Credentials();
private String username;
private long expiration;
public boolean isInvalid() {
return this == INVALID;
}
}


@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/5/14 20:44
**/
@Data
public class LoginResult {
private String token;
private List<String> permissions;
}


@@ -0,0 +1,73 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import org.apache.kafka.common.serialization.Deserializer;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 15:30:08
**/
public class MessageFilter {
private FilterType filterType = FilterType.NONE;
private Object searchContent = null;
private String headerKey = null;
private String headerValue = null;
private Deserializer deserializer = null;
private boolean isContainsValue = false;
public FilterType getFilterType() {
return filterType;
}
public void setFilterType(FilterType filterType) {
this.filterType = filterType;
}
public Object getSearchContent() {
return searchContent;
}
public void setSearchContent(Object searchContent) {
this.searchContent = searchContent;
}
public String getHeaderKey() {
return headerKey;
}
public void setHeaderKey(String headerKey) {
this.headerKey = headerKey;
}
public String getHeaderValue() {
return headerValue;
}
public void setHeaderValue(String headerValue) {
this.headerValue = headerValue;
}
public Deserializer getDeserializer() {
return deserializer;
}
public void setDeserializer(Deserializer deserializer) {
this.deserializer = deserializer;
}
public boolean isContainsValue() {
return isContainsValue;
}
public void setContainsValue(boolean containsValue) {
isContainsValue = containsValue;
}
}


@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import lombok.Data;
/**
@@ -24,4 +25,12 @@ public class QueryMessage {
private String keyDeserializer;
private String valueDeserializer;
private FilterType filter;
private String value;
private String headerKey;
private String headerValue;
}


@@ -0,0 +1,22 @@
package com.xuxd.kafka.console.beans;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
import org.springframework.context.ApplicationEvent;
/**
* @author: xuxd
* @date: 2023/5/18 15:49
**/
@ToString
public class RolePermUpdateEvent extends ApplicationEvent {
@Getter
@Setter
private boolean reload = false;
public RolePermUpdateEvent(Object source) {
super(source);
}
}


@@ -21,7 +21,7 @@ public class TopicPartition implements Comparable {
}
TopicPartition other = (TopicPartition) o;
if (!this.topic.equals(other.getTopic())) {
return this.compareTo(other);
return this.topic.compareTo(other.topic);
}
return this.partition - other.partition;


@@ -0,0 +1,28 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:54:24
**/
@Data
@TableName("t_cluster_info")
public class ClusterInfoDO {
@TableId(type = IdType.AUTO)
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
}


@@ -23,4 +23,6 @@ public class KafkaUserDO {
private String password;
private String updateTime;
private Long clusterInfoId;
}


@@ -0,0 +1,29 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_permission")
public class SysPermissionDO {
@TableId(type = IdType.AUTO)
private Long id;
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
}


@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_role")
public class SysRoleDO {
@TableId(type = IdType.AUTO)
private Long id;
private String roleName;
private String description;
private String permissionIds;
}


@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
@TableName("t_sys_user")
public class SysUserDO {
@TableId(type = IdType.AUTO)
private Long id;
private String username;
private String password;
private String salt;
private String roleIds;
}


@@ -0,0 +1,27 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/10 20:12
**/
@Data
public class AlterClientQuotaDTO {
private String type;
private List<String> types;
private List<String> names;
private String consumerRate;
private String producerRate;
private String requestPercentage;
private List<String> deleteConfigs;
}


@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 20:19:03
**/
@Data
public class ClusterInfoDTO {
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
public ClusterInfoDO to() {
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setId(id);
infoDO.setClusterName(clusterName);
infoDO.setAddress(address);
if (StringUtils.isNotBlank(properties)) {
infoDO.setProperties(ConvertUtil.propertiesStr2JsonStr(properties));
}
return infoDO;
}
}


@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/14 18:59
**/
@Data
public class LoginUserDTO {
private String username;
private String password;
}


@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.beans.dto;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-02-15 19:08:13
**/
@Data
public class ProposedAssignmentDTO {
private String topic;
private List<Integer> brokers;
}


@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/9 21:53
**/
@Data
public class QueryClientQuotaDTO {
private List<String> types;
private List<String> names;
}


@@ -1,8 +1,10 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.enums.FilterType;
import java.util.Date;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
@@ -27,6 +29,14 @@ public class QueryMessageDTO {
private String valueDeserializer;
private String filter;
private String value;
private String headerKey;
private String headerValue;
public QueryMessage toQueryMessage() {
QueryMessage queryMessage = new QueryMessage();
queryMessage.setTopic(topic);
@@ -45,6 +55,21 @@ public class QueryMessageDTO {
queryMessage.setKeyDeserializer(keyDeserializer);
queryMessage.setValueDeserializer(valueDeserializer);
if (StringUtils.isNotBlank(filter)) {
queryMessage.setFilter(FilterType.valueOf(filter.toUpperCase()));
} else {
queryMessage.setFilter(FilterType.NONE);
}
if (StringUtils.isNotBlank(value)) {
queryMessage.setValue(value.trim());
}
if (StringUtils.isNotBlank(headerKey)) {
queryMessage.setHeaderKey(headerKey.trim());
}
if (StringUtils.isNotBlank(headerValue)) {
queryMessage.setHeaderValue(headerValue.trim());
}
return queryMessage;
}
}


@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysPermissionDTO {
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
public SysPermissionDO toSysPermissionDO() {
SysPermissionDO permissionDO = new SysPermissionDO();
permissionDO.setName(this.name);
permissionDO.setType(this.type);
permissionDO.setParentId(this.parentId);
permissionDO.setPermission(this.permission);
return permissionDO;
}
}


@@ -0,0 +1,35 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import lombok.Data;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysRoleDTO {
private Long id;
private String roleName;
private String description;
private List<String> permissionIds;
public SysRoleDO toDO() {
SysRoleDO roleDO = new SysRoleDO();
roleDO.setId(this.id);
roleDO.setRoleName(this.roleName);
roleDO.setDescription(this.description);
if (CollectionUtils.isNotEmpty(permissionIds)) {
roleDO.setPermissionIds(StringUtils.join(this.permissionIds, ","));
}
return roleDO;
}
}


@@ -0,0 +1,34 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/4/11 21:17
**/
@Data
public class SysUserDTO {
private Long id;
private String username;
private String password;
private String salt;
private String roleIds;
private Boolean resetPassword = false;
public SysUserDO toDO() {
SysUserDO userDO = new SysUserDO();
userDO.setId(this.id);
userDO.setUsername(this.username);
userDO.setPassword(this.password);
userDO.setSalt(this.salt);
userDO.setRoleIds(this.roleIds);
return userDO;
}
}


@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.enums;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 14:36:01
**/
public enum FilterType {
NONE, BODY, HEADER
}


@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-22 16:24:58
**/
@Data
public class BrokerApiVersionVO {
private int brokerId;
private String host;
private int supportNums;
private int unSupportNums;
private List<String> versionInfo;
}


@@ -0,0 +1,82 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import java.util.List;
import java.util.Map;
/**
* @author 晓东哥哥
*/
@Data
public class ClientQuotaEntityVO {
private String user;
private String client;
private String ip;
private String consumerRate;
private String producerRate;
private String requestPercentage;
public static ClientQuotaEntityVO from(ClientQuotaEntity entity, List<String> entityTypes, Map<String, Object> config) {
ClientQuotaEntityVO entityVO = new ClientQuotaEntityVO();
Map<String, String> entries = entity.entries();
entityTypes.forEach(type -> {
switch (type) {
case ClientQuotaEntity.USER:
entityVO.setUser(entries.get(type));
break;
case ClientQuotaEntity.CLIENT_ID:
entityVO.setClient(entries.get(type));
break;
case ClientQuotaEntity.IP:
entityVO.setIp(entries.get(type));
break;
default:
break;
}
});
entityVO.setConsumerRate(convert(config.getOrDefault(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setProducerRate(convert(config.getOrDefault(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setRequestPercentage(config.getOrDefault(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, "").toString());
return entityVO;
}
public static String convert(Object num) {
if (num == null) {
return null;
}
if (num instanceof String) {
if ((StringUtils.isBlank((String) num))) {
return (String) num;
}
}
if (num instanceof Number) {
Number number = (Number) num;
double value = number.doubleValue();
double _1kb = 1024;
double _1mb = 1024 * _1kb;
if (value < _1kb) {
return value + " Byte";
}
if (value < _1mb) {
return String.format("%.1f KB", (value / _1kb));
}
if (value >= _1mb) {
return String.format("%.1f MB", (value / _1mb));
}
}
return String.valueOf(num);
}
}


@@ -0,0 +1,40 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 19:16:11
**/
@Data
public class ClusterInfoVO {
private Long id;
private String clusterName;
private String address;
private List<String> properties;
private String updateTime;
public static ClusterInfoVO from(ClusterInfoDO infoDO) {
ClusterInfoVO vo = new ClusterInfoVO();
vo.setId(infoDO.getId());
vo.setClusterName(infoDO.getClusterName());
vo.setAddress(infoDO.getAddress());
vo.setUpdateTime(infoDO.getUpdateTime());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
vo.setProperties(ConvertUtil.jsonStr2List(infoDO.getProperties()));
}
return vo;
}
}


@@ -0,0 +1,43 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/4/17 21:18
**/
@Data
public class SysPermissionVO {
private Long id;
private String name;
/**
* Permission type: 0 = menu, 1 = button
*/
private Integer type;
private Long parentId;
private String permission;
private Long key;
private List<SysPermissionVO> children;
public static SysPermissionVO from(SysPermissionDO permissionDO) {
SysPermissionVO permissionVO = new SysPermissionVO();
permissionVO.setPermission(permissionDO.getPermission());
permissionVO.setType(permissionDO.getType());
permissionVO.setName(permissionDO.getName());
permissionVO.setParentId(permissionDO.getParentId());
permissionVO.setKey(permissionDO.getId());
permissionVO.setId(permissionDO.getId());
return permissionVO;
}
}


@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/4/19 21:12
**/
@Data
public class SysRoleVO {
private Long id;
private String roleName;
private String description;
private List<Long> permissionIds;
public static SysRoleVO from(SysRoleDO roleDO) {
SysRoleVO roleVO = new SysRoleVO();
roleVO.setId(roleDO.getId());
roleVO.setRoleName(roleDO.getRoleName());
roleVO.setDescription(roleDO.getDescription());
if (StringUtils.isNotEmpty(roleDO.getPermissionIds())) {
List<Long> list = Arrays.stream(roleDO.getPermissionIds().split(",")).
filter(StringUtils::isNotEmpty).map(e -> Long.valueOf(e.trim())).collect(Collectors.toList());
roleVO.setPermissionIds(list);
}
return roleVO;
}
}


@@ -0,0 +1,31 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import lombok.Data;
/**
* @author: xuxd
* @date: 2023/5/6 13:06
**/
@Data
public class SysUserVO {
private Long id;
private String username;
private String password;
private String roleIds;
private String roleNames;
public static SysUserVO from(SysUserDO userDO) {
SysUserVO userVO = new SysUserVO();
userVO.setId(userDO.getId());
userVO.setUsername(userDO.getUsername());
userVO.setRoleIds(userDO.getRoleIds());
userVO.setPassword(userDO.getPassword());
return userVO;
}
}


@@ -0,0 +1,66 @@
package com.xuxd.kafka.console.boot;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.KafkaConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.stereotype.Component;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 19:16:50
**/
@Slf4j
@Component
public class Bootstrap implements SmartInitializingSingleton {
public static final String DEFAULT_CLUSTER_NAME = "default";
private final KafkaConfig config;
private final ClusterInfoMapper clusterInfoMapper;
public Bootstrap(KafkaConfig config, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.config = config;
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
private void initialize() {
loadDefaultClusterConfig();
}
private void loadDefaultClusterConfig() {
log.info("load default kafka config.");
if (StringUtils.isBlank(config.getBootstrapServer())) {
return;
}
QueryWrapper<ClusterInfoDO> clusterInfoDOQueryWrapper = new QueryWrapper<>();
clusterInfoDOQueryWrapper.eq("cluster_name", DEFAULT_CLUSTER_NAME);
List<Object> objects = clusterInfoMapper.selectObjs(clusterInfoDOQueryWrapper);
if (CollectionUtils.isNotEmpty(objects)) {
log.warn("default kafka cluster config has existed[any of cluster name or address].");
return;
}
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setClusterName(DEFAULT_CLUSTER_NAME);
infoDO.setAddress(config.getBootstrapServer().trim());
infoDO.setProperties(ConvertUtil.toJsonString(config.getProperties()));
clusterInfoMapper.insert(infoDO);
log.info("Insert default config: {}", infoDO);
}
@Override public void afterSingletonsInstantiated() {
initialize();
}
}


@@ -0,0 +1,91 @@
package com.xuxd.kafka.console.cache;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Component;
import java.util.*;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @date: 2023/5/18 15:47
**/
@DependsOn("dataInit")
@Slf4j
@Component
public class RolePermCache implements ApplicationListener<RolePermUpdateEvent>, SmartInitializingSingleton {
private Map<Long, SysPermissionDO> permCache = new HashMap<>();
private Map<Long, Set<String>> rolePermCache = new HashMap<>();
private final SysPermissionMapper permissionMapper;
private final SysRoleMapper roleMapper;
public RolePermCache(SysPermissionMapper permissionMapper, SysRoleMapper roleMapper) {
this.permissionMapper = permissionMapper;
this.roleMapper = roleMapper;
}
@Override
public void onApplicationEvent(RolePermUpdateEvent event) {
log.info("更新角色权限信息:{}", event);
if (event.isReload()) {
this.loadPermCache();
}
refresh();
}
public Map<Long, SysPermissionDO> getPermCache() {
return permCache;
}
public Map<Long, Set<String>> getRolePermCache() {
return rolePermCache;
}
private void refresh() {
List<SysRoleDO> roleDOS = roleMapper.selectList(null);
Map<Long, Set<String>> tmp = new HashMap<>();
for (SysRoleDO roleDO : roleDOS) {
String permissionIds = roleDO.getPermissionIds();
if (StringUtils.isEmpty(permissionIds)) {
continue;
}
List<Long> list = Arrays.stream(permissionIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
Set<String> permSet = tmp.getOrDefault(roleDO.getId(), new HashSet<>());
for (Long permId : list) {
SysPermissionDO permissionDO = permCache.get(permId);
if (permissionDO != null) {
permSet.add(permissionDO.getPermission());
}
}
tmp.put(roleDO.getId(), permSet);
}
rolePermCache = tmp;
}
private void loadPermCache() {
List<SysPermissionDO> roleDOS = permissionMapper.selectList(null);
Map<Long, SysPermissionDO> map = roleDOS.stream().collect(Collectors.toMap(SysPermissionDO::getId, Function.identity(), (e1, e2) -> e1));
permCache = map;
}
@Override
public void afterSingletonsInstantiated() {
this.loadPermCache();
this.refresh();
}
}


@@ -0,0 +1,33 @@
package com.xuxd.kafka.console.cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import kafka.console.KafkaConsole;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
public class TimeBasedCache<K, V> {
private LoadingCache<K, V> cache;
private KafkaConsole console;
public TimeBasedCache(CacheLoader<K, V> loader, RemovalListener<K, V> listener) {
cache = CacheBuilder.newBuilder()
.maximumSize(50) // cache at most 50 entries
.expireAfterAccess(30, TimeUnit.MINUTES) // entries expire 30 minutes after the last access
.removalListener(listener)
.build(loader);
}
public V get(K k) {
try {
return cache.get(k);
} catch (ExecutionException e) {
throw new RuntimeException("Get connection from cache error.", e);
}
}
}
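A usage sketch for this cache; the key/value types, loader, and listener here are hypothetical, only the TimeBasedCache API above comes from the change:
```java
// hypothetical example: cache "connections" keyed by cluster address
TimeBasedCache<String, String> cache = new TimeBasedCache<>(
        CacheLoader.from(addr -> "connection-for-" + addr),            // value built on first access
        notification -> System.out.println("evicted " + notification.getKey())); // cleanup hook on eviction
String conn = cache.get("localhost:9092");
```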


@@ -0,0 +1,21 @@
package com.xuxd.kafka.console.config;
import lombok.Data;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* @author: xuxd
* @date: 2023/5/9 21:08
**/
@Data
@Configuration
@ConfigurationProperties(prefix = "auth")
public class AuthConfig {
private boolean enable;
private String secret = "kafka-console-ui-default-secret";
private long expireHours;
}


@@ -0,0 +1,74 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 15:46:55
**/
public class ContextConfig {
public static final int DEFAULT_REQUEST_TIMEOUT_MS = 5000;
private Long clusterInfoId;
private String clusterName;
private String bootstrapServer;
private int requestTimeoutMs = DEFAULT_REQUEST_TIMEOUT_MS;
private Properties properties = new Properties();
public String getBootstrapServer() {
return bootstrapServer;
}
public void setBootstrapServer(String bootstrapServer) {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return properties.containsKey(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG) ?
Integer.parseInt(properties.getProperty(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)) : requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public Properties getProperties() {
return properties;
}
public Long getClusterInfoId() {
return clusterInfoId;
}
public void setClusterInfoId(Long clusterInfoId) {
this.clusterInfoId = clusterInfoId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
@Override public String toString() {
return "KafkaContextConfig{" +
"bootstrapServer='" + bootstrapServer + '\'' +
", requestTimeoutMs=" + requestTimeoutMs +
", properties=" + properties +
'}';
}
}


@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.config;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 18:55:28
**/
public class ContextConfigHolder {
public static final ThreadLocal<ContextConfig> CONTEXT_CONFIG = new ThreadLocal<>();
}


@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
@@ -15,23 +16,15 @@ public class KafkaConfig {
private String bootstrapServer;
private int requestTimeoutMs;
private String securityProtocol;
private String saslMechanism;
private String saslJaasConfig;
private String adminUsername;
private String adminPassword;
private boolean adminCreate;
private String zookeeperAddr;
private boolean enableAcl;
private Properties properties;
private boolean cacheAdminConnection;
private boolean cacheProducerConnection;
private boolean cacheConsumerConnection;
public String getBootstrapServer() {
return bootstrapServer;
@@ -41,62 +34,6 @@ public class KafkaConfig {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public String getSecurityProtocol() {
return securityProtocol;
}
public void setSecurityProtocol(String securityProtocol) {
this.securityProtocol = securityProtocol;
}
public String getSaslMechanism() {
return saslMechanism;
}
public void setSaslMechanism(String saslMechanism) {
this.saslMechanism = saslMechanism;
}
public String getSaslJaasConfig() {
return saslJaasConfig;
}
public void setSaslJaasConfig(String saslJaasConfig) {
this.saslJaasConfig = saslJaasConfig;
}
public String getAdminUsername() {
return adminUsername;
}
public void setAdminUsername(String adminUsername) {
this.adminUsername = adminUsername;
}
public String getAdminPassword() {
return adminPassword;
}
public void setAdminPassword(String adminPassword) {
this.adminPassword = adminPassword;
}
public boolean isAdminCreate() {
return adminCreate;
}
public void setAdminCreate(boolean adminCreate) {
this.adminCreate = adminCreate;
}
public String getZookeeperAddr() {
return zookeeperAddr;
}
@@ -105,11 +42,35 @@ public class KafkaConfig {
this.zookeeperAddr = zookeeperAddr;
}
public boolean isEnableAcl() {
return enableAcl;
public Properties getProperties() {
return properties;
}
public void setEnableAcl(boolean enableAcl) {
this.enableAcl = enableAcl;
public void setProperties(Properties properties) {
this.properties = properties;
}
public boolean isCacheAdminConnection() {
return cacheAdminConnection;
}
public void setCacheAdminConnection(boolean cacheAdminConnection) {
this.cacheAdminConnection = cacheAdminConnection;
}
public boolean isCacheProducerConnection() {
return cacheProducerConnection;
}
public void setCacheProducerConnection(boolean cacheProducerConnection) {
this.cacheProducerConnection = cacheProducerConnection;
}
public boolean isCacheConsumerConnection() {
return cacheConsumerConnection;
}
public void setCacheConsumerConnection(boolean cacheConsumerConnection) {
this.cacheConsumerConnection = cacheConsumerConnection;
}
}


@@ -1,13 +1,6 @@
package com.xuxd.kafka.console.config;
import kafka.console.ClusterConsole;
import kafka.console.ConfigConsole;
import kafka.console.ConsumerConsole;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import kafka.console.MessageConsole;
import kafka.console.OperationConsole;
import kafka.console.TopicConsole;
import kafka.console.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -52,7 +45,7 @@ public class KafkaConfiguration {
@Bean
public OperationConsole operationConsole(KafkaConfig config, TopicConsole topicConsole,
ConsumerConsole consumerConsole) {
return new OperationConsole(config, topicConsole, consumerConsole);
}
@@ -60,4 +53,9 @@ public class KafkaConfiguration {
public MessageConsole messageConsole(KafkaConfig config) {
return new MessageConsole(config);
}
@Bean
public ClientQuotaConsole clientQuotaConsole(KafkaConfig config) {
return new ClientQuotaConsole(config);
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.dto.AddAuthDTO;
import com.xuxd.kafka.console.beans.dto.ConsumerAuthDTO;
@@ -28,6 +29,7 @@ public class AclAuthController {
@Autowired
private AclService aclService;
@Permission({"acl:authority:detail", "acl:sasl-scram:detail"})
@PostMapping("/detail")
public Object getAclDetailList(@RequestBody QueryAclDTO param) {
return aclService.getAclDetailList(param.toEntry());
@@ -38,11 +40,13 @@ public class AclAuthController {
return aclService.getOperationList();
}
@Permission("acl:authority")
@PostMapping("/list")
public Object getAclList(@RequestBody QueryAclDTO param) {
return aclService.getAclList(param.toEntry());
}
@Permission({"acl:authority:add-principal", "acl:authority:add", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addAcl(@RequestBody AddAuthDTO param) {
return aclService.addAcl(param.toAclEntry());
@@ -54,6 +58,7 @@ public class AclAuthController {
* @param param entry.topic and entry.username are required.
* @return
*/
@Permission({"acl:authority:producer", "acl:sasl-scram:producer"})
@PostMapping("/producer")
public Object addProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -66,6 +71,7 @@ public class AclAuthController {
* @param param entry.topic, entry.groupId and entry.username are required.
* @return
*/
@Permission({"acl:authority:consumer", "acl:sasl-scram:consumer"})
@PostMapping("/consumer")
public Object addConsumerAcl(@RequestBody ConsumerAuthDTO param) {
@@ -78,6 +84,7 @@ public class AclAuthController {
* @param entry entry
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteAclByUser(@RequestBody AclEntry entry) {
return aclService.deleteAcl(entry);
@@ -89,6 +96,7 @@ public class AclAuthController {
* @param param entry.username is required.
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/user")
public Object deleteAclByUser(@RequestBody DeleteAclDTO param) {
return aclService.deleteUserAcl(param.toUserEntry());
@@ -100,6 +108,7 @@ public class AclAuthController {
* @param param entry.topic and entry.username are required.
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/producer")
public Object deleteProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -112,10 +121,22 @@ public class AclAuthController {
* @param param entry.topic, entry.groupId and entry.username are required.
* @return
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/consumer")
public Object deleteConsumerAcl(@RequestBody ConsumerAuthDTO param) {
return aclService.deleteConsumerAcl(param.toTopicEntry(), param.toGroupEntry());
}
/**
* Clear all ACLs of the given principal.
*
* @param param acl principal.
* @return true or false.
*/
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/clear")
public Object clearAcl(@RequestBody DeleteAclDTO param) {
return aclService.clearAcl(param.toUserEntry());
}
}
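
The @Permission values used throughout this controller are checked by an aspect that is not part of this diff. For orientation, a minimal sketch of what the annotation declaration likely looks like, given that it is applied to methods and accepts both a single string and an array of permission codes (the exact declaration is an assumption):

package com.xuxd.kafka.console.aspect.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Sketch only: a String[] value() lets callers write either
 * @Permission("acl:authority") or @Permission({"a", "b"}), matching the usages above.
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Permission {
    String[] value() default {};
}

An enclosing aspect would then intercept annotated methods, compare these codes against the current user's permissions, and raise UnAuthorizedException on a miss (see the exception handler later in this diff).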

View File

@@ -1,7 +1,10 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.AclUser;
import com.xuxd.kafka.console.service.AclService;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
@@ -12,7 +15,7 @@ import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui. SASL SCRAM user management.
*
* @author xuxd
* @date 2021-08-28 21:13:05
@@ -24,29 +27,41 @@ public class AclUserController {
@Autowired
private AclService aclService;
@Permission("acl:sasl-scram")
@GetMapping
public Object getUserList() {
return aclService.getUserList();
}
@Permission({"acl:sasl-scram:add-update", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addOrUpdateUser(@RequestBody AclUser user) {
return aclService.addOrUpdateUser(user.getUsername(), user.getPassword());
}
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteUser(@RequestBody AclUser user) {
return aclService.deleteUser(user.getUsername());
}
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping("/auth")
public Object deleteUserAndAuth(@RequestBody AclUser user) {
return aclService.deleteUserAndAuth(user.getUsername());
}
@Permission("acl:sasl-scram:detail")
@GetMapping("/detail")
public Object getUserDetail(@RequestParam String username) {
return aclService.getUserDetail(username);
}
@GetMapping("/scram")
public Object getSaslScramUserList(@RequestParam(required = false) String username) {
AclEntry entry = new AclEntry();
entry.setPrincipal(StringUtils.isNotBlank(username) ? username : null);
return aclService.getSaslScramUserList(entry);
}
}

View File

@@ -0,0 +1,36 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.service.AuthService;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/5/11 18:54
**/
@RestController
@RequestMapping("/auth")
public class AuthController {
private final AuthConfig authConfig;
private final AuthService authService;
public AuthController(AuthConfig authConfig, AuthService authService) {
this.authConfig = authConfig;
this.authService = authService;
}
@GetMapping("/enable")
public boolean enable() {
return authConfig.isEnable();
}
@PostMapping("/login")
public ResponseData login(@RequestBody LoginUserDTO userDTO) {
return authService.login(userDTO);
}
}
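
A quick way to exercise these two endpoints from a plain JDK client. The JSON field names username/password are an assumption, since LoginUserDTO is not shown in this diff, and the host, port, and credentials are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoginSmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Check whether login authentication is switched on at all.
        HttpRequest enable = HttpRequest.newBuilder(URI.create("http://localhost:7766/auth/enable"))
            .GET().build();
        System.out.println(client.send(enable, HttpResponse.BodyHandlers.ofString()).body());

        // Field names below are assumed; adjust to the real LoginUserDTO.
        String body = "{\"username\":\"admin\",\"password\":\"admin\"}";
        HttpRequest login = HttpRequest.newBuilder(URI.create("http://localhost:7766/auth/login"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        System.out.println(client.send(login, HttpResponse.BodyHandlers.ofString()).body());
    }
}

Note that the AuthFilter shown later in this diff lets any /auth request through without a token, which is what makes this login call possible in the first place.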

View File

@@ -0,0 +1,56 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.dto.QueryClientQuotaDTO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/1/9 21:50
**/
@RestController
@RequestMapping("/client/quota")
public class ClientQuotaController {
private final ClientQuotaService clientQuotaService;
public ClientQuotaController(ClientQuotaService clientQuotaService) {
this.clientQuotaService = clientQuotaService;
}
@Permission({"quota:user", "quota:client", "quota:user-client"})
@PostMapping("/list")
public Object getClientQuotaConfigs(@RequestBody QueryClientQuotaDTO request) {
return clientQuotaService.getClientQuotaConfigs(request.getTypes(), request.getNames());
}
@Permission({"quota:user:add", "quota:client:add", "quota:user-client:add", "quota:edit"})
@PostMapping
public Object alterClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
// A user + client-id pair (two types) is validated downstream; otherwise types
// and names must be non-empty and align one-to-one. Checking emptiness first
// also avoids an NPE when the request omits types entirely.
if (CollectionUtils.isEmpty(request.getTypes()) || request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types and names must be non-empty and of equal length.");
}
}
return clientQuotaService.alterClientQuotaConfigs(request);
}
@Permission("quota:del")
@DeleteMapping
public Object deleteClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
// Same request contract as alterClientQuotaConfigs above.
if (CollectionUtils.isEmpty(request.getTypes()) || request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types and names must be non-empty and of equal length.");
}
}
return clientQuotaService.deleteClientQuotaConfigs(request);
}
}
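
The size-based guard above encodes the supported quota targets. A small self-contained sketch of the same contract; the entity-type strings follow Kafka's org.apache.kafka.common.quota.ClientQuotaEntity convention ("user", "client-id"), and the names are made-up examples:

import java.util.List;

public class QuotaRequestShapes {
    static boolean valid(List<String> types, List<String> names) {
        // Mirrors the controller: a user + client-id pair (size == 2) skips the
        // per-element check; otherwise types and names must align one-to-one.
        if (types != null && types.size() == 2) {
            return true;
        }
        return types != null && !types.isEmpty()
            && names != null && names.size() == types.size();
    }

    public static void main(String[] args) {
        System.out.println(valid(List.of("user"), List.of("alice")));                     // true
        System.out.println(valid(List.of("client-id"), List.of("my-producer")));         // true
        System.out.println(valid(List.of("user", "client-id"), List.of("alice", "p1"))); // true
        System.out.println(valid(List.of(), List.of()));                                 // false
    }
}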

View File

@@ -1,8 +1,14 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@@ -23,4 +29,37 @@ public class ClusterController {
public Object getClusterInfo() {
return clusterService.getClusterInfo();
}
@GetMapping("/info")
public Object getClusterInfoList() {
return clusterService.getClusterInfoList();
}
@Permission("op:cluster-switch:add")
@PostMapping("/info")
public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.addClusterInfo(dto.to());
}
@Permission("op:cluster-switch:del")
@DeleteMapping("/info")
public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.deleteClusterInfo(dto.getId());
}
@Permission("op:cluster-switch:edit")
@PutMapping("/info")
public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.updateClusterInfo(dto.to());
}
@GetMapping("/info/peek")
public Object peekClusterInfo() {
return clusterService.peekClusterInfo();
}
@GetMapping("/info/api/version")
public Object getBrokerApiVersionInfo() {
return clusterService.getBrokerApiVersionInfo();
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterConfigDTO;
import com.xuxd.kafka.console.beans.enums.AlterType;
@@ -36,51 +37,60 @@ public class ConfigController {
this.configService = configService;
}
@GetMapping("/console")
public Object getConfig() {
return ResponseData.create().data(configMap).success();
}
@Permission("topic:property-config")
@GetMapping("/topic")
public Object getTopicConfig(String topic) {
return configService.getTopicConfig(topic);
}
@Permission("topic:property-config:edit")
@PostMapping("/topic")
public Object setTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@Permission("topic:property-config:del")
@DeleteMapping("/topic")
public Object deleteTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@Permission("cluster:property-config")
@GetMapping("/broker")
public Object getBrokerConfig(String brokerId) {
return configService.getBrokerConfig(brokerId);
}
@Permission("cluster:edit")
@PostMapping("/broker")
public Object setBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@Permission("cluster:edit")
@DeleteMapping("/broker")
public Object deleteBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@Permission("cluster:log-config")
@GetMapping("/broker/logger")
public Object getBrokerLoggerConfig(String brokerId) {
return configService.getBrokerLoggerConfig(brokerId);
}
@Permission("cluster:edit")
@PostMapping("/broker/logger")
public Object setBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@Permission("cluster:edit")
@DeleteMapping("/broker/logger")
public Object deleteBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);

View File

@@ -1,28 +1,20 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AddSubscriptionDTO;
import com.xuxd.kafka.console.beans.dto.QueryConsumerGroupDTO;
import com.xuxd.kafka.console.beans.dto.ResetOffsetDTO;
import com.xuxd.kafka.console.service.ConsumerService;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.ConsumerGroupState;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.*;
/**
* kafka-console-ui.
@@ -51,26 +43,34 @@ public class ConsumerController {
return consumerService.getConsumerGroupList(groupIdList, stateSet);
}
@Permission("group:del")
@DeleteMapping("/group")
public Object deleteConsumerGroup(@RequestParam String groupId) {
return consumerService.deleteConsumerGroup(groupId);
}
@Permission("group:client")
@GetMapping("/member")
public Object getConsumerMembers(@RequestParam String groupId) {
return consumerService.getConsumerMembers(groupId);
}
@Permission("group:consumer-detail")
@GetMapping("/detail")
public Object getConsumerDetail(@RequestParam String groupId) {
return consumerService.getConsumerDetail(groupId);
}
@Permission("group:add")
@PostMapping("/subscription")
public Object addSubscription(@RequestBody AddSubscriptionDTO subscriptionDTO) {
return consumerService.addSubscription(subscriptionDTO.getGroupId(), subscriptionDTO.getTopic());
}
@Permission({"group:consumer-detail:min",
"group:consumer-detail:last",
"group:consumer-detail:timestamp",
"group:consumer-detail:any"})
@PostMapping("/reset/offset")
public Object restOffset(@RequestBody ResetOffsetDTO offsetDTO) {
ResponseData res = ResponseData.create().failed("unknown");
@@ -78,7 +78,7 @@ public class ConsumerController {
case ResetOffsetDTO.Level.TOPIC:
switch (offsetDTO.getType()) {
case ResetOffsetDTO.Type.EARLIEST:
res = consumerService.resetOffsetToEndpoint(offsetDTO.getGroupId(), offsetDTO.getTopic(), OffsetResetStrategy.EARLIEST);
break;
case ResetOffsetDTO.Type.LATEST:
@@ -94,7 +94,7 @@ public class ConsumerController {
case ResetOffsetDTO.Level.PARTITION:
switch (offsetDTO.getType()) {
case ResetOffsetDTO.Type.SPECIAL:
res = consumerService.resetPartitionToTargetOffset(offsetDTO.getGroupId(), new TopicPartition(offsetDTO.getTopic(), offsetDTO.getPartition()), offsetDTO.getOffset());
break;
default:
@@ -118,11 +118,13 @@ public class ConsumerController {
return consumerService.getSubscribeTopicList(groupId);
}
@Permission({"topic:consumer-detail"})
@GetMapping("/topic/subscribed")
public Object getTopicSubscribedByGroups(@RequestParam String topic) {
return consumerService.getTopicSubscribedByGroups(topic);
}
@Permission("group:offset-partition")
@GetMapping("/offset/partition")
public Object getOffsetPartition(@RequestParam String groupId) {
return consumerService.getOffsetPartition(groupId);

View File

@@ -1,14 +1,16 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QueryMessageDTO;
import com.xuxd.kafka.console.service.MessageService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.List;
/**
* kafka-console-ui.
@@ -23,16 +25,19 @@ public class MessageController {
@Autowired
private MessageService messageService;
@Permission("message:search-time")
@PostMapping("/search/time")
public Object searchByTime(@RequestBody QueryMessageDTO dto) {
return messageService.searchByTime(dto.toQueryMessage());
}
@Permission("message:search-offset")
@PostMapping("/search/offset")
public Object searchByOffset(@RequestBody QueryMessageDTO dto) {
return messageService.searchByOffset(dto.toQueryMessage());
}
@Permission("message:detail")
@PostMapping("/search/detail")
public Object searchDetail(@RequestBody QueryMessageDTO dto) {
return messageService.searchDetail(dto.toQueryMessage());
@@ -43,13 +48,24 @@ public class MessageController {
return messageService.deserializerList();
}
@Permission("message:send")
@PostMapping("/send")
public Object send(@RequestBody SendMessage message) {
return messageService.send(message);
}
@Permission("message:resend")
@PostMapping("/resend")
public Object resend(@RequestBody SendMessage message) {
return messageService.resend(message);
}
@Permission("message:del")
@DeleteMapping
public Object delete(@RequestBody List<QueryMessage> messages) {
if (CollectionUtils.isEmpty(messages)) {
return ResponseData.create().failed("params is null");
}
return messageService.delete(messages);
}
}

View File

@@ -1,19 +1,15 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.TopicPartition;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
import com.xuxd.kafka.console.beans.dto.ProposedAssignmentDTO;
import com.xuxd.kafka.console.beans.dto.ReplicationDTO;
import com.xuxd.kafka.console.beans.dto.SyncDataDTO;
import com.xuxd.kafka.console.service.OperationService;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
/**
* kafka-console-ui.
@@ -50,28 +46,38 @@ public class OperationController {
return operationService.deleteAlignmentById(id);
}
@Permission({"topic:partition-detail:preferred", "op:replication-preferred"})
@PostMapping("/replication/preferred")
public Object electPreferredLeader(@RequestBody ReplicationDTO dto) {
return operationService.electPreferredLeader(dto.getTopic(), dto.getPartition());
}
@Permission("op:config-throttle")
@PostMapping("/broker/throttle")
public Object configThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.configThrottle(dto.getBrokerList(), dto.getUnit().toKb(dto.getThrottle()));
}
@Permission("op:remove-throttle")
@DeleteMapping("/broker/throttle")
public Object removeThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.removeThrottle(dto.getBrokerList());
}
@Permission("op:replication-update-detail")
@GetMapping("/replication/reassignments")
public Object currentReassignments() {
return operationService.currentReassignments();
}
@Permission("op:replication-update-detail:cancel")
@DeleteMapping("/replication/reassignments")
public Object cancelReassignment(@RequestBody TopicPartition partition) {
return operationService.cancelReassignment(new org.apache.kafka.common.TopicPartition(partition.getTopic(), partition.getPartition()));
}
@PostMapping("/replication/reassignments/proposed")
public Object proposedAssignments(@RequestBody ProposedAssignmentDTO dto) {
return operationService.proposedAssignments(dto.getTopic(), dto.getBrokers());
}
}

View File

@@ -1,23 +1,19 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.dto.AddPartitionDTO;
import com.xuxd.kafka.console.beans.dto.NewTopicDTO;
import com.xuxd.kafka.console.beans.dto.TopicThrottleDTO;
import com.xuxd.kafka.console.beans.enums.TopicType;
import com.xuxd.kafka.console.service.TopicService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
/**
* kafka-console-ui.
@@ -37,26 +33,31 @@ public class TopicController {
return topicService.getTopicNameList(false);
}
@Permission("topic:load")
@GetMapping("/list")
public Object getTopicList(@RequestParam(required = false) String topic, @RequestParam String type) {
return topicService.getTopicList(topic, TopicType.valueOf(type.toUpperCase()));
}
@Permission({"topic:batch-del", "topic:del"})
@DeleteMapping
public Object deleteTopic(@RequestBody List<String> topics) {
return topicService.deleteTopics(topics);
}
@Permission("topic:partition-detail")
@GetMapping("/partition")
public Object getTopicPartitionInfo(@RequestParam String topic) {
return topicService.getTopicPartitionInfo(topic.trim());
}
@Permission("topic:add")
@PostMapping("/new")
public Object createNewTopic(@RequestBody NewTopicDTO topicDTO) {
return topicService.createTopic(topicDTO.toNewTopic());
}
@Permission("topic:partition-add")
@PostMapping("/partition/new")
public Object addPartition(@RequestBody AddPartitionDTO partitionDTO) {
String topic = partitionDTO.getTopic().trim();
@@ -79,16 +80,19 @@ public class TopicController {
return topicService.getCurrentReplicaAssignment(topic);
}
@Permission({"topic:replication-modify", "op:replication-reassign"})
@PostMapping("/replica/assignment")
public Object updateReplicaAssignment(@RequestBody ReplicaAssignment assignment) {
return topicService.updateReplicaAssignment(assignment);
}
@Permission("topic:replication-sync-throttle")
@PostMapping("/replica/throttle")
public Object configThrottle(@RequestBody TopicThrottleDTO dto) {
return topicService.configThrottle(dto.getTopic(), dto.getPartitions(), dto.getOperation());
}
@Permission("topic:send-count")
@GetMapping("/send/stats")
public Object sendStats(@RequestParam String topic) {
return topicService.sendStats(topic);

View File

@@ -0,0 +1,97 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.dto.SysPermissionDTO;
import com.xuxd.kafka.console.beans.dto.SysRoleDTO;
import com.xuxd.kafka.console.beans.dto.SysUserDTO;
import com.xuxd.kafka.console.service.UserManageService;
import org.springframework.web.bind.annotation.*;
import javax.servlet.http.HttpServletRequest;
/**
* @author: xuxd
* @date: 2023/4/11 21:34
**/
@RestController
@RequestMapping("/sys/user/manage")
public class UserManageController {
private final UserManageService userManageService;
public UserManageController(UserManageService userManageService) {
this.userManageService = userManageService;
}
@Permission({"user-manage:user:add", "user-manage:user:change-role", "user-manage:user:reset-pass"})
@ControllerLog("新增/更新用户")
@PostMapping("/user")
public Object addOrUpdateUser(@RequestBody SysUserDTO userDTO) {
return userManageService.addOrUpdateUser(userDTO);
}
@Permission("user-manage:role:save")
@ControllerLog("新增/更新角色")
@PostMapping("/role")
public Object addOrUpdateRole(@RequestBody SysRoleDTO roleDTO) {
return userManageService.addOrUdpateRole(roleDTO);
}
@ControllerLog("新增权限")
@PostMapping("/permission")
public Object addPermission(@RequestBody SysPermissionDTO permissionDTO) {
return userManageService.addPermission(permissionDTO);
}
@Permission("user-manage:role:save")
@ControllerLog("更新角色")
@PutMapping("/role")
public Object updateRole(@RequestBody SysRoleDTO roleDTO) {
return userManageService.updateRole(roleDTO);
}
@Permission({"user-manage:role"})
@GetMapping("/role")
public Object selectRole() {
return userManageService.selectRole();
}
@Permission({"user-manage:permission"})
@GetMapping("/permission")
public Object selectPermission() {
return userManageService.selectPermission();
}
@Permission({"user-manage:user"})
@GetMapping("/user")
public Object selectUser() {
return userManageService.selectUser();
}
@Permission("user-manage:role:del")
@ControllerLog("删除角色")
@DeleteMapping("/role")
public Object deleteRole(@RequestParam Long id) {
return userManageService.deleteRole(id);
}
@Permission("user-manage:user:del")
@ControllerLog("删除用户")
@DeleteMapping("/user")
public Object deleteUser(@RequestParam Long id) {
return userManageService.deleteUser(id);
}
@Permission("user-manage:setting")
@ControllerLog("更新密码")
@PostMapping("/user/password")
public Object updatePassword(@RequestBody SysUserDTO userDTO, HttpServletRequest request) {
Credentials credentials = (Credentials) request.getAttribute("credentials");
if (credentials != null && !credentials.isInvalid()) {
userDTO.setUsername(credentials.getUsername());
}
return userManageService.updatePassword(userDTO);
}
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:58:52
**/
public interface ClusterInfoMapper extends BaseMapper<ClusterInfoDO> {
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysPermissionDO;
import org.apache.ibatis.annotations.Mapper;
/**
* System permission.
*
* @author: xuxd
* @date: 2023/4/11 21:21
**/
@Mapper
public interface SysPermissionMapper extends BaseMapper<SysPermissionDO> {
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import org.apache.ibatis.annotations.Mapper;
/**
* @author: xuxd
* @date: 2023/4/11 21:22
**/
@Mapper
public interface SysRoleMapper extends BaseMapper<SysRoleDO> {
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.SysUserDO;
import org.apache.ibatis.annotations.Mapper;
/**
* @author: xuxd
* @date: 2023/4/11 21:22
**/
@Mapper
public interface SysUserMapper extends BaseMapper<SysUserDO> {
}

View File

@@ -0,0 +1,93 @@
package com.xuxd.kafka.console.dao.init;
import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.SysPermissionMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.dao.SysUserMapper;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Component;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
/**
* @author: xuxd
* @date: 2023/5/17 13:10
**/
@Slf4j
@Component
public class DataInit implements SmartInitializingSingleton {
private final AuthConfig authConfig;
private final SysUserMapper userMapper;
private final SysRoleMapper roleMapper;
private final SysPermissionMapper permissionMapper;
private final DataSource dataSource;
private final SqlParse sqlParse;
private final ApplicationEventPublisher publisher;
public DataInit(AuthConfig authConfig,
SysUserMapper userMapper,
SysRoleMapper roleMapper,
SysPermissionMapper permissionMapper,
DataSource dataSource,
ApplicationEventPublisher publisher) {
this.authConfig = authConfig;
this.userMapper = userMapper;
this.roleMapper = roleMapper;
this.permissionMapper = permissionMapper;
this.dataSource = dataSource;
this.publisher = publisher;
this.sqlParse = new SqlParse();
}
@Override
public void afterSingletonsInstantiated() {
if (!authConfig.isEnable()) {
log.info("Disable login authentication, no longer try to initialize the data");
return;
}
// try-with-resources so the connection is always released.
try (Connection connection = dataSource.getConnection()) {
Integer userCount = userMapper.selectCount(null);
if (userCount == null || userCount == 0) {
initData(connection, SqlParse.USER_TABLE);
}
Integer roleCount = roleMapper.selectCount(null);
if (roleCount == null || roleCount == 0) {
initData(connection, SqlParse.ROLE_TABLE);
}
Integer permCount = permissionMapper.selectCount(null);
if (permCount == null || permCount == 0) {
initData(connection, SqlParse.PERM_TABLE);
}
RolePermUpdateEvent event = new RolePermUpdateEvent(this);
event.setReload(true);
publisher.publishEvent(event);
} catch (SQLException e) {
throw new RuntimeException(e);
}
}
private void initData(Connection connection, String table) throws SQLException {
log.info("Init default data for {}", table);
String sql = sqlParse.getMergeSql(table);
try (PreparedStatement statement = connection.prepareStatement(sql)) {
statement.execute();
}
}
}
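
DataInit finishes by publishing a RolePermUpdateEvent with the reload flag set. The consumer of that event is not part of this diff; a hypothetical listener (the isReload() accessor is assumed from the setReload(true) call above) might look like:

import com.xuxd.kafka.console.beans.RolePermUpdateEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class RolePermReloadListener {

    @EventListener
    public void onRolePermUpdate(RolePermUpdateEvent event) {
        if (event.isReload()) { // assumed accessor
            // e.g. rebuild the cached role -> permission mapping from the database
        }
    }
}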

View File

@@ -0,0 +1,85 @@
package com.xuxd.kafka.console.dao.init;
import com.google.common.io.Files;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.util.ResourceUtils;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* @author: xuxd
* @date: 2023/5/17 21:22
**/
@Slf4j
public class SqlParse {
private final String FILE = "classpath:db/data-h2.sql";
private final Map<String, List<String>> sqlMap = new HashMap<>();
public static final String ROLE_TABLE = "t_sys_role";
public static final String USER_TABLE = "t_sys_user";
public static final String PERM_TABLE = "t_sys_permission";
public SqlParse() {
sqlMap.put(ROLE_TABLE, new ArrayList<>());
sqlMap.put(USER_TABLE, new ArrayList<>());
sqlMap.put(PERM_TABLE, new ArrayList<>());
String table = null;
try {
File file = ResourceUtils.getFile(FILE);
List<String> lines = Files.readLines(file, Charset.forName("UTF-8"));
for (String str : lines) {
if (StringUtils.isNotEmpty(str)) {
if (str.indexOf("start--") > 0) {
if (str.indexOf(ROLE_TABLE) > 0) {
table = ROLE_TABLE;
}
if (str.indexOf(USER_TABLE) > 0) {
table = USER_TABLE;
}
if (str.indexOf(PERM_TABLE) > 0) {
table = PERM_TABLE;
}
}
if (isSql(str)) {
if (table == null) {
log.error("Table is null, can not load sql: {}", str);
continue;
}
sqlMap.get(table).add(str);
}
}
}
} catch (IOException e) {
// FileNotFoundException is an IOException, so one catch covers both.
throw new RuntimeException(e);
}
}
public List<String> getSqlList(String table) {
return sqlMap.get(table);
}
public String getMergeSql(String table) {
List<String> list = getSqlList(table);
StringBuilder sb = new StringBuilder();
list.forEach(sb::append);
return sb.toString();
}
private boolean isSql(String str) {
return StringUtils.isNotEmpty(str) && str.startsWith("insert");
}
}
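
SqlParse keys its buckets off sentinel comments in db/data-h2.sql and only collects lines that start with insert. A usage sketch, with the file layout it expects spelled out in comments (the exact sentinel wording in the real file is an assumption beyond the "start--" and table-name substrings checked above):

// Expected shape of db/data-h2.sql (assumed; only "start--" plus the table
// name, and a leading "insert", actually matter to the parser):
//   -- t_sys_user start--
//   insert into t_sys_user (...) values (...);
//   -- t_sys_role start--
//   insert into t_sys_role (...) values (...);
//   -- t_sys_permission start--
//   insert into t_sys_permission (...) values (...);

SqlParse sqlParse = new SqlParse();
String userInserts = sqlParse.getMergeSql(SqlParse.USER_TABLE);
// userInserts now holds every insert line of the t_sys_user block, concatenated
// in file order, ready to run as one statement the way DataInit does above.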

View File

@@ -1,11 +1,14 @@
package com.xuxd.kafka.console.exception;
import com.xuxd.kafka.console.beans.ResponseData;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import javax.servlet.http.HttpServletRequest;
/**
* kafka-console-ui.
@@ -17,6 +20,14 @@ import org.springframework.web.bind.annotation.ResponseBody;
@ControllerAdvice(basePackages = "com.xuxd.kafka.console.controller")
public class GlobalExceptionHandler {
@ResponseStatus(code = HttpStatus.FORBIDDEN)
@ExceptionHandler(value = UnAuthorizedException.class)
@ResponseBody
public Object unAuthorizedExceptionHandler(HttpServletRequest req, Exception ex) throws Exception {
log.error("unAuthorized: {}", ex.getMessage());
return ResponseData.create().failed("UnAuthorized: " + ex.getMessage());
}
@ExceptionHandler(value = Exception.class)
@ResponseBody
public Object exceptionHandler(HttpServletRequest req, Exception ex) throws Exception {

View File

@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.exception;
/**
* @author: xuxd
* @date: 2023/5/17 23:08
**/
public class UnAuthorizedException extends RuntimeException {
public UnAuthorizedException(String message) {
super(message);
}
}

View File

@@ -0,0 +1,70 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.utils.AuthUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.core.annotation.Order;
import org.springframework.http.HttpStatus;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
/**
* @author: xuxd
* @date: 2023/5/9 21:20
**/
@Order(1)
@WebFilter(filterName = "auth-filter", urlPatterns = {"/*"})
@Slf4j
public class AuthFilter implements Filter {
private final AuthConfig authConfig;
private final String TOKEN_HEADER = "X-Auth-Token";
private final String AUTH_URI_PREFIX = "/auth";
public AuthFilter(AuthConfig authConfig) {
this.authConfig = authConfig;
}
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
if (!authConfig.isEnable()) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
HttpServletRequest request = (HttpServletRequest) servletRequest;
HttpServletResponse response = (HttpServletResponse) servletResponse;
String accessToken = request.getHeader(TOKEN_HEADER);
String requestURI = request.getRequestURI();
if (requestURI.startsWith(AUTH_URI_PREFIX)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
if (StringUtils.isEmpty(accessToken)) {
response.setStatus(HttpStatus.UNAUTHORIZED.value());
return;
}
Credentials credentials = AuthUtil.parseToken(authConfig.getSecret(), accessToken);
if (credentials.isInvalid()) {
response.setStatus(HttpStatus.UNAUTHORIZED.value());
return;
}
request.setAttribute("credentials", credentials);
try {
CredentialsContext.set(credentials);
filterChain.doFilter(servletRequest, servletResponse);
} finally {
CredentialsContext.remove();
}
}
}
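
AuthUtil.parseToken is not included in this diff, so the token format is unknown here. As a rough, hypothetical illustration of the kind of check it could perform with the configured shared secret, using only JDK crypto (this is not the project's actual scheme):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class HmacTokenCheck {

    // Verifies a token of the form "<payload>.<base64url signature>" against a
    // shared secret. Purely illustrative; AuthUtil's real format may differ.
    static boolean verify(String secret, String token) throws Exception {
        int dot = token.lastIndexOf('.');
        if (dot < 0) {
            return false;
        }
        String payload = token.substring(0, dot);
        String signature = token.substring(dot + 1);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] expected = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        String expectedB64 = Base64.getUrlEncoder().withoutPadding().encodeToString(expected);
        // Constant-time comparison to avoid leaking signature prefixes.
        return MessageDigest.isEqual(expectedB64.getBytes(StandardCharsets.UTF_8),
            signature.getBytes(StandardCharsets.UTF_8));
    }
}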

View File

@@ -0,0 +1,88 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.annotation.Order;
import org.springframework.http.MediaType;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-05 19:56:25
**/
@Order(100)
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*", "/user/*", "/cluster/*", "/config/*", "/consumer/*", "/message/*", "/topic/*", "/op/*", "/client/*"})
@Slf4j
public class ContextSetFilter implements Filter {
private Set<String> excludes = new HashSet<>();
{
excludes.add("/cluster/info/peek");
excludes.add("/cluster/info");
excludes.add("/config/console");
}
@Autowired
private ClusterInfoMapper clusterInfoMapper;
@Override
public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
HttpServletRequest request = (HttpServletRequest) req;
String uri = request.getRequestURI();
if (!excludes.contains(uri)) {
String headerId = request.getHeader(Header.ID);
if (StringUtils.isBlank(headerId)) {
// ResponseData failed = ResponseData.create().failed("Cluster info is null.");
ResponseData failed = ResponseData.create().failed("没有集群信息,请先切换集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
} else {
ClusterInfoDO infoDO = clusterInfoMapper.selectById(Long.valueOf(headerId));
if (infoDO == null) {
ResponseData failed = ResponseData.create().failed("该集群找不到信息,请切换一个有效集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
}
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
// log.info("current kafka config: {}", config);
}
}
chain.doFilter(req, response);
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
interface Header {
String ID = "X-Cluster-Info-Id";
String NAME = "X-Cluster-Info-Name";
}
}
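
Every request hitting the filtered paths (other than the excluded ones) must carry the cluster-id header, or the filter short-circuits with the "no cluster info" response above. A minimal client-side sketch; the URL, id value, and token are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClusterHeaderExample {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:7766/cluster/info/api/version"))
            .header("X-Cluster-Info-Id", "1") // id of a row in the cluster info table
            .header("X-Auth-Token", "<token from /auth/login>") // needed once login auth is enabled
            .GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}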

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.filter;
import com.xuxd.kafka.console.beans.Credentials;
/**
* @author: xuxd
* @date: 2023/5/17 23:02
**/
public class CredentialsContext {
private static final ThreadLocal<Credentials> CREDENTIALS = new ThreadLocal<>();
public static void set(Credentials credentials) {
CREDENTIALS.set(credentials);
}
public static Credentials get() {
return CREDENTIALS.get();
}
public static void remove() {
CREDENTIALS.remove();
}
}

View File

@@ -1,9 +1,22 @@
package com.xuxd.kafka.console.schedule;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import com.xuxd.kafka.console.utils.SaslUtil;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@@ -21,25 +34,58 @@ public class KafkaAclSchedule {
private final KafkaConfigConsole configConsole;
private final ClusterInfoMapper clusterInfoMapper;
public KafkaAclSchedule(ObjectProvider<KafkaUserMapper> userMapper,
    ObjectProvider<KafkaConfigConsole> configConsole, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
    this.userMapper = userMapper.getIfAvailable();
    this.configConsole = configConsole.getIfAvailable();
    this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
@Scheduled(cron = "${cron.clear-dirty-user}")
public void clearDirtyKafkaUser() {
log.info("Start clear dirty data for kafka user from database.");
Set<String> userSet = configConsole.getUserList(null);
userMapper.selectList(null).forEach(u -> {
if (!userSet.contains(u.getUsername())) {
log.info("clear user: {} from database.", u.getUsername());
try {
userMapper.deleteById(u.getId());
} catch (Exception e) {
log.error("userMapper.deleteById error, user: " + u, e);
try {
log.info("Start clear dirty data for kafka user from database.");
List<ClusterInfoDO> clusterInfoDOS = clusterInfoMapper.selectList(null);
List<Long> clusterInfoIds = new ArrayList<>();
for (ClusterInfoDO infoDO : clusterInfoDOS) {
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
if (SaslUtil.isEnableSasl() && SaslUtil.isEnableScram()) {
log.info("Start clear cluster: {}", infoDO.getClusterName());
Set<String> userSet = configConsole.getUserList(null);
QueryWrapper<KafkaUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_info_id", infoDO.getId());
userMapper.selectList(queryWrapper).forEach(u -> {
if (!userSet.contains(u.getUsername())) {
log.info("clear user: {} from database.", u.getUsername());
try {
userMapper.deleteById(u.getId());
} catch (Exception e) {
log.error("userMapper.deleteById error, user: " + u, e);
}
}
});
clusterInfoIds.add(infoDO.getId());
}
}
});
log.info("Clear end.");
if (CollectionUtils.isNotEmpty(clusterInfoIds)) {
log.info("Clear the cluster id {}, which not found.", clusterInfoIds);
QueryWrapper<KafkaUserDO> wrapper = new QueryWrapper<>();
wrapper.notIn("cluster_info_id", clusterInfoIds);
userMapper.delete(wrapper);
}
log.info("Clear end.");
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
}

View File

@@ -42,4 +42,7 @@ public interface AclService {
ResponseData getUserDetail(String username);
ResponseData clearAcl(AclEntry entry);
ResponseData getSaslScramUserList(AclEntry entry);
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
/**
* @author: xuxd
* @date: 2023/5/14 19:00
**/
public interface AuthService {
ResponseData login(LoginUserDTO userDTO);
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import java.util.List;
/**
* @author 晓东哥哥
*/
public interface ClientQuotaService {
ResponseData getClientQuotaConfigs(List<String> types, List<String> names);
ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request);
ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request);
}

View File

@@ -0,0 +1,10 @@
package com.xuxd.kafka.console.service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:43
**/
public interface ClusterInfoService {
}

View File

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
@@ -10,4 +11,16 @@ import com.xuxd.kafka.console.beans.ResponseData;
**/
public interface ClusterService {
ResponseData getClusterInfo();
ResponseData getClusterInfoList();
ResponseData addClusterInfo(ClusterInfoDO infoDO);
ResponseData deleteClusterInfo(Long id);
ResponseData updateClusterInfo(ClusterInfoDO infoDO);
ResponseData peekClusterInfo();
ResponseData getBrokerApiVersionInfo();
}

View File

@@ -4,6 +4,8 @@ import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import java.util.List;
/**
* kafka-console-ui.
*
@@ -23,4 +25,6 @@ public interface MessageService {
ResponseData send(SendMessage message);
ResponseData resend(SendMessage message);
ResponseData delete(List<QueryMessage> messages);
}

View File

@@ -30,4 +30,6 @@ public interface OperationService {
ResponseData currentReassignments();
ResponseData cancelReassignment(TopicPartition partition);
ResponseData proposedAssignments(String topic, List<Integer> brokerList);
}

Some files were not shown because too many files have changed in this diff.