86 Commits

Author SHA1 Message Date
许晓东
07c6714fd9 Replace the Taobao npm mirror; limit the display width of overly long client IDs. 2023-02-09 22:23:34 +08:00
许晓东
5e9304efc2 Add quota (rate limit) documentation; fix the ACL authorization-enabled prompt. 2023-02-07 22:24:37 +08:00
许晓东
fc7f05cf4e Quotas: support user and client ID at the same time. 2023-02-06 22:11:56 +08:00
许晓东
5a87e9cad8 Quotas: support users. 2023-02-05 23:08:14 +08:00
许晓东
608f7cdc47 Quotas: support editing and deleting client ID quotas. 2023-02-05 22:01:37 +08:00
许晓东
ee6defe5d2 Quotas: support querying by client ID. 2023-02-04 21:28:51 +08:00
许晓东
56621e0b8c Add the client quota menu page. 2023-01-30 21:40:11 +08:00
许晓东
832b20a83e Client quota query API. 2023-01-09 22:11:50 +08:00
许晓东
4dbadee0d4 Client quota service. 2023-01-03 22:01:43 +08:00
许晓东
7d76632f08 Client quota console 2023-01-03 21:28:11 +08:00
许晓东
d5102f626c Client quota console. 2023-01-02 21:28:47 +08:00
许晓东
daf77290da Client quota console. 2023-01-02 21:16:22 +08:00
许晓东
4df20f9ca5 contact jpg 2022-12-09 09:21:10 +08:00
许晓东
b465ba78b8 update contact 2022-11-01 19:05:40 +08:00
许晓东
9a69bad93a wechat contact. 2022-10-16 13:32:41 +08:00
许晓东
3785e9aaca wechat contact. 2022-10-16 13:31:35 +08:00
许晓东
d502da1b39 update weixin contact 2022-09-20 20:07:06 +08:00
许晓东
d6282cb902 Display behavior of the separated authentication/authorization pages when ACL is not enabled. 2022-08-28 22:48:21 +08:00
许晓东
ca4dc2ebc9 Polish README. 2022-08-28 22:04:31 +08:00
许晓东
50775994b5 Polish README. 2022-08-28 22:02:21 +08:00
许晓东
3bd14a35d6 Separate ACL authentication and authorization management. 2022-08-28 21:39:08 +08:00
许晓东
4c3fe5230c update contact jpg. 2022-08-04 09:12:39 +08:00
许晓东
57d549635b Restore the interceptor path. 2022-07-27 18:19:50 +08:00
许晓东
923b89b6bd Update download links. 2022-07-24 17:33:22 +08:00
许晓东
e9f34e1d19 Support batch deletion of topics. 2022-07-24 11:52:48 +08:00
许晓东
ccdcebb24d Update icon. 2022-07-24 11:09:25 +08:00
许晓东
7ddd75e34f Update icon. 2022-07-24 11:09:21 +08:00
许晓东
aebea435fa Update overview. 2022-07-14 21:59:25 +08:00
许晓东
ea788313c6 Update overview. 2022-07-14 21:59:18 +08:00
许晓东
727edfcca8 Support caching producer connections; connection caching is disabled by default 2022-07-09 18:54:20 +08:00
Xiaodong Xu
cc1989a74b Merge pull request #17 from comdotwww/main
Fix the CMD path escaping problem on Windows
2022-07-07 22:46:39 +08:00
comdotwww
0196a90b69 Update start.bat 2022-07-07 22:05:46 +08:00
许晓东
9c3e3988e0 Handle consumer connection properties; update contact info 2022-07-07 20:09:27 +08:00
许晓东
458e13c9e0 Cache connections 2022-07-05 10:19:51 +08:00
许晓东
979859b232 Support deleting messages online 2022-07-04 17:16:00 +08:00
许晓东
b163e5f776 Upgrade Kafka from 2.8.0 to 3.2.0; add Docker Compose deployment notes 2022-06-30 20:11:29 +08:00
Xiaodong Xu
d062e18940 Merge pull request #16 from wdkang123/main
new(md): Docker / Docker Compose deployment
2022-06-30 19:42:14 +08:00
武子康
87c1e7ba4a new(md): Docker / Docker Compose deployment 2022-06-30 19:12:42 +08:00
许晓东
5194c952f2 polish README. 2022-06-29 19:17:21 +08:00
许晓东
c1cc44d32f Fix NPE when the cluster has no active nodes; update README. 2022-06-29 17:22:29 +08:00
许晓东
82fafe980d Fix NPE when the cluster has no active nodes; update README. 2022-06-29 17:20:57 +08:00
许晓东
34752deca2 update wechat contact. 2022-06-17 10:10:00 +08:00
yinuo
9e42e2c72a Update contact info 2022-05-06 11:02:28 +08:00
dongyinuo
e531f5d786 Delete weixin_contact.jpeg 2022-05-06 11:00:56 +08:00
dongyinuo
10e75ac55d Update README.md
Update contact info
2022-05-06 10:58:55 +08:00
yinuo
4a8d09dc89 Update contact info 2022-05-06 10:56:22 +08:00
dongyinuo
116bc100a7 Add files via upload
Replace the WeChat group image
2022-05-06 10:48:00 +08:00
Xiaodong Xu
b1feaad9f7 Merge pull request #14 from dongyinuo/feature/dongyinuo/add/contact
Feature/dongyinuo/add/contact
2022-04-29 17:27:19 +08:00
yinuo
4d372f8374 Add contact info 2022-04-29 17:22:44 +08:00
yinuo
4b2c544c0d Add contact info 2022-04-29 17:21:32 +08:00
许晓东
8131cb1a42 Publish the v1.0.4 package download link 2022-02-16 20:01:06 +08:00
许晓东
1dd6466261 Replica reassignment 2022-02-16 19:50:35 +08:00
许晓东
dda08a2152 Replica reassignment -> generate an assignment plan 2022-02-15 20:13:07 +08:00
许晓东
01c7121ee4 Sort the cluster node list 2022-01-22 23:33:13 +08:00
许晓东
d939d7653c Show broker API version compatibility info on the home page 2022-01-22 23:07:41 +08:00
许晓东
058cd5a24e Query current reassignments; handle unsupported-version exceptions 2022-01-20 13:44:37 +08:00
许晓东
db3f55ac4a polish README 2022-01-19 19:05:00 +08:00
许晓东
a311a34537 Fix stack overflow bug in partition comparison 2022-01-18 20:42:11 +08:00
许晓东
e8fe2ea1c7 Support Chinese cluster names; allow choosing the time display order in message queries 2022-01-13 14:19:17 +08:00
许晓东
10302dd39c v1.0.3 package download link 2022-01-09 23:57:00 +08:00
许晓东
55a4483fcc Package only the zip archive 2022-01-09 23:45:39 +08:00
许晓东
4dd2412b78 polish README 2022-01-09 23:40:20 +08:00
许晓东
387c714072 polish README 2022-01-09 23:27:23 +08:00
许晓东
6a2d876d50 Multi-cluster support; cluster switching 2022-01-06 19:31:44 +08:00
许晓东
f5fb2c4f88 Cluster switching 2022-01-05 21:19:46 +08:00
许晓东
6f9676e259 Cluster list; add cluster 2022-01-04 21:06:50 +08:00
许晓东
2427ce2c1e Prepare for multi-cluster support 2022-01-03 22:02:03 +08:00
许晓东
02abe67fce Message query filtering 2021-12-30 14:17:47 +08:00
许晓东
ad39f4e82c Message query filtering 2021-12-29 21:15:56 +08:00
许晓东
243c89b459 Fuzzy topic search; message filter page configuration 2021-12-28 20:39:07 +08:00
许晓东
11418cd6e0 Remove the message sync solution entry 2021-12-27 20:41:19 +08:00
许晓东
b19c6200d2 polish README 2021-12-21 14:57:16 +08:00
许晓东
5f6a06c100 v1.0.2 package download link 2021-12-21 14:44:55 +08:00
许晓东
5930e44fdf Support resending from the message detail view 2021-12-21 14:08:36 +08:00
许晓东
98f33bb2cc Show each partition's valid time range in the partition info 2021-12-20 20:29:34 +08:00
许晓东
0ec3bac6c2 Send messages online 2021-12-20 00:09:20 +08:00
许晓东
bd814d550d Release memory promptly when querying messages by time 2021-12-17 20:06:23 +08:00
许晓东
b9548d1640 Fix bug in querying messages by time; increase the page request timeout 2021-12-13 19:05:57 +08:00
许晓东
57a41e087f Show consumption status when viewing message details 2021-12-12 23:35:17 +08:00
许晓东
54cd402810 Query message detail info 2021-12-12 18:53:29 +08:00
许晓东
c17b0aa4b9 Query messages by offset 2021-12-11 23:56:18 +08:00
Xiaodong Xu
8169ddb019 Merge pull request #5 from xxd763795151/master
Query messages by time
2021-12-11 14:55:07 +08:00
许晓东
5f24c62855 Query messages by time 2021-12-11 14:53:54 +08:00
许晓东
3b21fc4cd8 Add a message page 2021-12-05 23:18:51 +08:00
许晓东
d15ec4a2db Add a logo to the top-left corner of the page 2021-12-04 14:47:32 +08:00
许晓东
12431db525 Update the latest release package download link 2021-11-30 20:20:03 +08:00
142 changed files with 8693 additions and 880 deletions

146
README.md
View File

@@ -1,86 +1,98 @@
# Kafka visual management platform
A lightweight visual management platform for Kafka: quick to install and configure, and easy to use.
To keep development simple, there is no multi-language support; only Chinese is displayed.
To keep development simple, there is no internationalization support; the pages are displayed in Chinese only.
If you have used rocketmq-console, the front-end style is somewhat similar to it.
## Package download
* Download: [kafka-console-ui.tar.gz](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.0/kafka-console-ui.tar.gz) or [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.0/kafka-console-ui.zip)
* Or follow the packaging and deployment steps below to download the source and rebuild it (latest features)
## Page preview
If GitHub can display the images, click [view the menu pages](./document/overview/概览.md) to see what each page looks like
## Cluster migration support
The current main branch and future versions no longer provide the message sync / cluster migration solution; if you need it, see: [cluster migration notes](./document/datasync/集群迁移.md)
## ACL notes
Run the latest code and you will see that the ACL menu separates permission (authorization) management from authentication user management (SASL_SCRAM). After this separation, user changes are supported when only SASL_SCRAM authentication is enabled without authorization, and visual permission management works under other authentication mechanisms as well; visual management of authentication users, however, currently only supports SCRAM.
Before v1.0.6, if the Kafka cluster has ACL enabled but the console does not show the ACL menu, see the [ACL configuration notes](./document/acl/Acl.md)
## Features
* Multi-cluster support
* Cluster info
* Topic management
* Consumer group management
* SASL_SCRAM-based authentication and authorization management
* Message management
* ACL
* Operations
![Features](./document/功能特性.png)
## Tech stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Kafka version
* Currently uses Kafka 2.8.0
## Monitoring
Only operations and management features are provided; monitoring and alerting need to be handled by other components. See https://blog.csdn.net/x763795151/article/details/119705372
# Packaging and deployment
## Packaging
Requirements
maven 3.6+
jdk 8
git
For a detailed feature breakdown, see this mind map:
![Features](./document/img/功能特性.png)
## Package download
Download (v1.0.5): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.5/kafka-console-ui.zip)
If the package downloads slowly, see the source packaging notes below to clone the code and build it quickly yourself. Note that the latest main branch has just upgraded Kafka to 3.2.0 and has not been fully tested; if you need a stable version, use the 1.0.4-release branch.
If GitHub is also slow, you can try downloading from Gitee: [kafka-console-ui.zip from Gitee](https://gitee.com/xiaodong_xu/kafka-console-ui/attach_files/969018/download/kafka-console-ui.zip)
## Quick start
### Windows
1. Unzip the zip package
2. Enter the bin directory (you must be in the bin directory) and double-click `start.bat` to start
3. To stop, simply close the command-line window that was opened
### Linux or macOS
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
cd kafka-console-ui
# run on Linux or macOS
sh package.sh
# run on Windows
package.bat
```
After a successful build, the following output files are produced (two archive types):
* target/kafka-console-ui.tar.gz
* target/kafka-console-ui.zip
## Deployment
### macOS or Linux
```
# extract (using tar.gz as an example)
tar -zxvf kafka-console-ui.tar.gz
# or extract the zip package
unzip kafka-console-ui.zip
# enter the extracted directory
cd kafka-console-ui
# edit the configuration
vim config/application.yml
# start
sh bin/start.sh
# stop
sh bin/shutdown.sh
```
### Windows
1. Unzip the zip package
2. Edit the configuration file `config/application.yml`
3. Enter the bin directory (you must be in the bin directory) and run `start.bat` to start
Once started, open http://127.0.0.1:7766
### Access URL
Once started, open http://127.0.0.1:7766
# Development environment
* jdk 8
* idea
* scala 2.13
* maven >= 3.6
* webstorm
Apart from WebStorm, which is the IDE for front-end development and can be replaced as you prefer, the JDK and Scala are required.
# Local development setup
Taking my own setup as an example: get the tools of the development environment ready, then clone the code locally.
## Back-end setup
1. Open the project in IDEA
2. In IDEA, open Project Structure (Settings) -> Modules -> mark src/main/scala as Sources (by convention src/main/java is the source directory, so this one has to be added as well)
3. In IDEA, open Project Structure (Settings) -> Libraries, add a Scala SDK, then select the locally downloaded Scala 2.13 directory and confirm
## Front end
The front-end code is in the project's ui directory; open it with any front-end IDE and develop there.
## Note
The front end and back end are separate. If you start only the back end without building the front end, there are no front-end pages; you can build first with `sh package.sh` and then start from IDEA, or start the front end separately.
# Page examples
If ACL is not enabled in the configuration, the ACL menu pages are not shown, so there is no Acl item in the navigation bar
### Configure a cluster
On first startup, after opening the browser, the top-right corner of the page may show an error such as "No Cluster Info" or a prompt like "no cluster info, please switch the cluster first", because no Kafka cluster has been configured yet.
![Cluster](./document/集群.png)
![Topic](./document/Topic.png)
![Consumer groups](./document/消费组.png)
![Operations](./document/运维.png)
Configure a cluster as follows:
1. Click the [Ops] menu in the navigation bar at the top of the page
2. Click the [Switch Cluster] button under cluster management
3. In the dialog, click [Add Cluster]
4. Enter the Kafka cluster address and a name (any name will do)
5. Click submit and the cluster is added
6. Once added, the dialog lists this cluster; click the [Switch] button on the right to make it the current cluster
To add more clusters later, follow the same steps. Click the switch button of whichever cluster you want to work with and it becomes the current one; the top-right corner of the page shows which cluster is currently in use (refresh the page if you are not sure).
When adding a cluster, besides the address you can also enter other cluster properties, such as the request timeout and ACL settings (see the sketch below). If ACL is enabled, the ACL menu appears in the navigation bar after switching to that cluster and the related operations become available. SASL_SCRAM-based authentication and authorization management is supported most completely; I have not verified the other mechanisms in detail (even though I wrote this part, I have not fully tested it all), but the authorization part should be generic.
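For illustration only, here is a minimal sketch of what the extra properties might look like, assuming the properties field accepts standard Kafka client settings in key=value form (request.timeout.ms is a regular Kafka client option; the value is a placeholder and the exact format the console expects may differ):
```
# hypothetical example: raise the client request timeout for this cluster
request.timeout.ms=10000
```
ACL-related entries (security protocol, SASL mechanism, JAAS config) can be added in the same way; see the ACL notes above.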
## Kafka version
* Currently uses Kafka 3.2.0
## Monitoring
Only operations and management features are provided; monitoring and alerting need to be handled by other components. If you need that, see https://blog.csdn.net/x763795151/article/details/119705372
## Building from source
To build from source, see: [source packaging notes](./document/package/源码打包.md)
## Local development
For local development, see the environment setup in: [local development](./document/develop/开发配置.md)
## Login authentication and permissions
The main branch currently does not support login authentication. Thanks to @dongyinuo for developing a version that supports login authentication and the related button-level permissions (two main roles: administrator and regular developer).
It lives on the branch feature/dongyinuo/20220501/devops.
If you need console login authentication, switch to that branch and build it; see the source packaging notes for how.
Default login account: admin/kafka-console-ui521
## Docker Compose deployment
Thanks to @wdkang123 for sharing this deployment method; if needed, see [Docker Compose deployment](./document/deploy/docker部署.md)
## Contact
+ WeChat group
<img src="./document/contact/weixin_contact.jpg" width="40%"/>
[//]: # (<img src="https://github.com/xxd763795151/kafka-console-ui/blob/main/document/contact/weixin_contact.jpg" width="40%"/>)
+ If the contact above no longer works, add me on WeChat and state your purpose
- xxd763795151
- wxid_7jy2ezljvebt12

View File

@@ -4,7 +4,7 @@
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>rocketmq-reput</id>
<formats>
<format>tar.gz</format>
<!-- <format>tar.gz</format>-->
<format>zip</format>
</formats>

View File

@@ -5,4 +5,4 @@ set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k
set CONFIG_FILE=../config/application.yml
set TARGET=../lib/kafka-console-ui.jar
set DATA_DIR=..
%JAVA_CMD% -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%
"%JAVA_CMD%" -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%

View File

Binary file not shown (image, before: 42 KiB).

36
document/acl/Acl.md Normal file
View File

@@ -0,0 +1,36 @@
# ACL configuration notes
## Preface
You may have come here after reading this article: [How to quickly manage Kafka ACL configuration in a visual way](https://blog.csdn.net/x763795151/article/details/120200119)
That article says ACL is enabled by editing the application.yml configuration file, for example:
```yaml
kafka:
  config:
    # kafka broker addresses, comma separated
    bootstrap-server: 'localhost:9092'
    # whether the broker side has ACL enabled; if not, the items below can be ignored
    enable-acl: true
    # only 2 security protocols are supported: SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise this setting does not matter
    security-protocol: SASL_PLAINTEXT
    sasl-mechanism: SCRAM-SHA-256
    # super admin username (already configured as a super admin on the broker)
    admin-username: admin
    # super admin password
    admin-password: admin
    # automatically create the configured super admin user at startup
    admin-create: true
    # ZooKeeper address used by the broker
    zookeeper-addr: localhost:2181
    sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
```
It states that the kafka.config.enable-acl item must be set to true.
Note: **this approach is no longer supported**
## Notes for versions before v1.0.6
Multi-cluster configuration is now supported; for details, see the "Configure a cluster" section in the main README.
So these extra configuration items have all been removed.
If ACL is enabled, configure the cluster's ACL-related settings in the properties field when adding the cluster on the page, as shown: ![Add cluster](./新增集群.png)
If the console detects ACL-related properties in that configuration, the ACL menu appears automatically after switching to the cluster.
Note: only SASL is supported.
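As an illustration only (the actual keys expected are the ones shown in the 新增集群.png screenshot above), the ACL-related entries are typically the standard Kafka client SASL settings; the values below are placeholders and should be adapted to your broker:
```
# hypothetical example of ACL-related cluster properties (standard Kafka client keys; values are placeholders)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";
```
When the console sees SASL-related properties like these on a cluster, the ACL menu should appear after switching to it, as described above.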

View File

Binary file not shown (image, after: 245 KiB).

View File

Binary file not shown (image, after: 238 KiB).

View File

@@ -0,0 +1,11 @@
# Cluster migration
You may have come here out of curiosity after reading my earlier post about the solution for migrating clusters between on-premises and the cloud.
The article is here: [Smooth migration between old and new Kafka clusters in practice](https://blog.csdn.net/x763795151/article/details/121070563)
However, since those features are tied to business-specific concerns, they have all been removed in the newer versions.
The current main branch and future versions no longer provide the message sync / cluster migration solution. If you need it, use the code on the single-data-sync branch, or the historical v1.0.2 release package (direct download): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip).
v1.0.2 and earlier only support a single-cluster configuration, but their SASL_SCRAM authentication/authorization management is quite complete.
Later versions support multi-cluster management and remove or refine some of the pre-v1.0.2 features, the goal being a sufficiently lightweight management tool that no longer carries other concerns.

View File

@@ -0,0 +1,189 @@
# Docker / Docker Compose deployment
# 1. Quick start
## 1.1 Pull the image
```shell
docker pull wdkang/kafka-console-ui
```
## 1.2 List images
```shell
docker images
```
## 1.3 Start the service
Since data inside the Docker container is not persisted, it is recommended to map the data directories to the host machine;
see **2. Data persistence** for details
```shell
docker run -d -p 7766:7766 wdkang/kafka-console-ui
```
## 1.4 Check the status
```shell
docker ps -a
```
## 1.5 View the logs
```shell
docker logs -f ${containerId}
```
## 1.6 Access the service
```shell
http://localhost:7766
```
# 2. Data persistence
Persisting the data is recommended
## 2.1 Create the directories
```shell
mkdir -p /home/kafka-console-ui/data /home/kafka-console-ui/log
cd /home/kafka-console-ui
```
## 2.2 Start the service
```shell
docker run -d -p 7766:7766 -v $PWD/data:/app/data -v $PWD/log:/app/log wdkang/kafka-console-ui
```
# 3. Build your own image
## 3.1 Build the image
**Prerequisites**
(adjust the Dockerfile to your own needs)
Download the [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases) package
After unzipping, place the Dockerfile in the root of the extracted folder
**Dockerfile**
```dockerfile
# jdk
FROM openjdk:8-jdk-alpine
# label
LABEL by="https://github.com/xxd763795151/kafka-console-ui"
# root
RUN mkdir -p /app && cd /app
WORKDIR /app
# config log data
RUN mkdir -p /app/config && mkdir -p /app/log && mkdir -p /app/data && mkdir -p /app/lib
# add file
ADD ./lib/kafka-console-ui.jar /app/lib
ADD ./config /app/config
# port
EXPOSE 7766
# start server
CMD java -jar -Xmx512m -Xms512m -Xmn256m -Xss256k /app/lib/kafka-console-ui.jar --spring.config.location="/app/config/" --logging.home="/app/log" --data.dir="/app/data"
```
**Build**
In the root of the folder
(note the trailing dot)
```shell
docker build -t ${your_docker_hub_addr} .
```
## 3.2 Push the image
```shell
docker push ${your_docker_hub_addr}
```
# 4. Container orchestration
```yaml
# docker-compose orchestration
version: '3'
services:
  # service name
  kafka-console-ui:
    # container name
    container_name: "kafka-console-ui"
    # ports
    ports:
      - "7766:7766"
    # persistence
    volumes:
      - ./data:/app/data
      - ./log:/app/log
    # avoid file read/write permission problems
    privileged: true
    user: root
    # image
    image: "wdkang/kafka-console-ui"
```
## 4.1 Pull the image
```shell
docker-compose pull kafka-console-ui
```
## 4.2 Build and start
```shell
docker-compose up --detach --build kafka-console-ui
```
## 4.3 Check the status
```shell
docker-compose ps -a
```
## 4.4 Stop the service
```shell
docker-compose down
```

View File

@@ -0,0 +1,35 @@
# Local development setup
## Tech stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Development environment
* jdk 8
* idea
* scala 2.13
* maven >= 3.6
* webstorm
* Node
Apart from WebStorm, which is the IDE for front-end development and can be replaced as you prefer, the JDK and Scala are required.
During development I used Node v14.16.0 locally (downloads: https://nodejs.org/download/release/v14.16.0/). I have not tested whether a much newer or older version works.
The Scala 2.13 download is at the bottom of this page: https://www.scala-lang.org/download/scala2.html
## Clone the code
Taking my own setup as an example: get the development tools ready, then clone the code locally.
## Back-end setup
1. Open the project in IDEA
2. In IDEA, open Project Structure (Settings) -> Modules -> mark src/main/scala as Sources (by convention src/main/java is the source directory, so this one has to be added as an extra source directory)
3. In IDEA, open Settings -> Plugins, search for the Scala plugin and install it (IDEA probably needs a restart for it to take effect); this step must come before step 4
4. In IDEA, open Project Structure (Settings) -> Libraries, add a Scala SDK, then select the locally downloaded Scala 2.13 directory and confirm (recent IDEA versions may let you pick it directly without downloading it beforehand)
## Front end
The front-end code is in the project's ui directory; open it with a front-end IDE such as WebStorm and develop there.
## Note
The front end and back end are separate: if you start only the back-end project, there may be no static files under src/main/resources, so opening it in a browser shows no page.
You can first build the front-end files, for example by running `sh package.sh`, and then start from IDEA; or, after starting the back end from IDEA, open the front-end project under the ui directory with a front-end IDE such as WebStorm and run it separately for development.

View File

Binary file not shown (image, after: 99 KiB).

View File

Binary file not shown (image, after: 103 KiB).

View File

Binary file not shown (image, after: 45 KiB).

View File

Binary file not shown (image, after: 38 KiB).

View File

Binary file not shown (image, after: 75 KiB).

View File

Binary file not shown (image, after: 36 KiB).

View File

Binary file not shown (image, after: 45 KiB).

View File

Binary file not shown (image, after: 58 KiB).

View File

Binary file not shown (image, after: 26 KiB).

View File

Binary file not shown (image, after: 57 KiB).

View File

@@ -0,0 +1,32 @@
# Menu preview
If ACL is not enabled in the configuration, the ACL menu pages are not shown, so there is no Acl item in the navigation bar
## Cluster
* Cluster list
![Cluster](./img/集群.png)
* View or modify cluster configuration
![Broker config](./img/Broker配置.png)
## Topic
* Topic list
![Topic](./img/Topic.png)
## Consumer groups
* Consumer group list
![Consumer groups](./img/消费组.png)
## Messages
* Search or filter messages by time
![Query messages by time](./img/消息时间查询.png)
* Message details
![Message details](./img/消息详情.png)
## Operations
* Operations page
![Operations](./img/运维.png)
* Cluster switching
![Cluster switching](./img/集群切换.png)

View File

@@ -0,0 +1,34 @@
# Building from source
You can download the latest code and build it yourself; compared with the released packages, the latest code may include the newest features
## Requirements
* maven 3.6+
* jdk 8
* git (optional)
Maven >= 3.6 is recommended. I have not tried 3.4.x or 3.5.x; I did try a 3.3.x version on Windows and the build had a bug where the Spring Boot application.yml may not be placed in the right directory.
If 3.6+ also shows the same problem on macOS, try the latest Maven version.
## Download the source
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
```
Or download the source directly from the web page
## Build
A simple build script is provided; just run it directly.
### Windows
```
cd kafka-console-ui
# run on Windows
package.bat
```
### Linux or macOS
```
cd kafka-console-ui
# run on Linux or macOS
sh package.sh
```
After the build completes, a kafka-console-ui.zip package is generated under the target directory

View File

Binary file not shown (image, before: 204 KiB).

View File

Binary file not shown (image, before: 53 KiB).

View File

Binary file not shown (image, before: 126 KiB).

View File

Binary file not shown (image, before: 30 KiB).

19
pom.xml
View File

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
<version>1.0.1</version>
<version>1.0.6</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>
@@ -21,7 +21,7 @@
<ui.path>${project.basedir}/ui</ui.path>
<frontend-maven-plugin.version>1.11.0</frontend-maven-plugin.version>
<compiler.version>1.8</compiler.version>
<kafka.version>2.8.0</kafka.version>
<kafka.version>3.2.0</kafka.version>
<maven.assembly.plugin.version>3.0.0</maven.assembly.plugin.version>
<mybatis-plus-boot-starter.version>3.4.2</mybatis-plus-boot-starter.version>
<scala.version>2.13.6</scala.version>
@@ -76,6 +76,18 @@
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka.version}</version>
<exclusions>
<exclusion>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
<version>3.9.2</version>
</dependency>
<dependency>
@@ -207,7 +219,8 @@
<goal>npm</goal>
</goals>
<configuration>
<arguments>install --registry=https://registry.npmjs.org/</arguments>
<!-- <arguments>install &#45;&#45;registry=https://registry.npmjs.org/</arguments>-->
<arguments>install --registry=https://registry.npm.taobao.org</arguments>
</configuration>
</execution>
<execution>

View File

@@ -3,11 +3,13 @@ package com.xuxd.kafka.console;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.scheduling.annotation.EnableScheduling;
@MapperScan("com.xuxd.kafka.console.dao")
@SpringBootApplication
@EnableScheduling
@ServletComponentScan
public class KafkaConsoleUiApplication {
public static void main(String[] args) {

View File

@@ -1,18 +1,15 @@
package com.xuxd.kafka.console.beans;
import java.util.Objects;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.acl.*;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.utils.SecurityUtils;
import java.util.Objects;
/**
* kafka-console-ui.
@@ -41,7 +38,9 @@ public class AclEntry {
entry.setResourceType(binding.pattern().resourceType().name());
entry.setName(binding.pattern().name());
entry.setPatternType(binding.pattern().patternType().name());
entry.setPrincipal(KafkaPrincipal.fromString(binding.entry().principal()).getName());
// entry.setPrincipal(KafkaPrincipal.fromString(binding.entry().principal()).getName());
// use this method with Kafka 3.x
entry.setPrincipal(SecurityUtils.parseKafkaPrincipal(binding.entry().principal()).getName());
entry.setHost(binding.entry().host());
entry.setOperation(binding.entry().operation().name());
entry.setPermissionType(binding.entry().permissionType().name());

View File

@@ -8,7 +8,7 @@ import org.apache.kafka.common.Node;
* @author xuxd
* @date 2021-10-08 14:03:21
**/
public class BrokerNode {
public class BrokerNode implements Comparable{
private int id;
@@ -80,4 +80,8 @@ public class BrokerNode {
public void setController(boolean controller) {
isController = controller;
}
@Override public int compareTo(Object o) {
return this.id - ((BrokerNode)o).id;
}
}

View File

@@ -0,0 +1,73 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import org.apache.kafka.common.serialization.Deserializer;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 15:30:08
**/
public class MessageFilter {
private FilterType filterType = FilterType.NONE;
private Object searchContent = null;
private String headerKey = null;
private String headerValue = null;
private Deserializer deserializer = null;
private boolean isContainsValue = false;
public FilterType getFilterType() {
return filterType;
}
public void setFilterType(FilterType filterType) {
this.filterType = filterType;
}
public Object getSearchContent() {
return searchContent;
}
public void setSearchContent(Object searchContent) {
this.searchContent = searchContent;
}
public String getHeaderKey() {
return headerKey;
}
public void setHeaderKey(String headerKey) {
this.headerKey = headerKey;
}
public String getHeaderValue() {
return headerValue;
}
public void setHeaderValue(String headerValue) {
this.headerValue = headerValue;
}
public Deserializer getDeserializer() {
return deserializer;
}
public void setDeserializer(Deserializer deserializer) {
this.deserializer = deserializer;
}
public boolean isContainsValue() {
return isContainsValue;
}
public void setContainsValue(boolean containsValue) {
isContainsValue = containsValue;
}
}

View File

@@ -0,0 +1,36 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:45:49
**/
@Data
public class QueryMessage {
private String topic;
private int partition;
private long startTime;
private long endTime;
private long offset;
private String keyDeserializer;
private String valueDeserializer;
private FilterType filter;
private String value;
private String headerKey;
private String headerValue;
}

View File

@@ -0,0 +1,25 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-19 23:28:31
**/
@Data
public class SendMessage {
private String topic;
private int partition;
private String key;
private String body;
private int num;
private long offset;
}

View File

@@ -21,7 +21,7 @@ public class TopicPartition implements Comparable {
}
TopicPartition other = (TopicPartition) o;
if (!this.topic.equals(other.getTopic())) {
return this.compareTo(other);
return this.topic.compareTo(other.topic);
}
return this.partition - other.partition;

View File

@@ -0,0 +1,28 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:54:24
**/
@Data
@TableName("t_cluster_info")
public class ClusterInfoDO {
@TableId(type = IdType.AUTO)
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
}

View File

@@ -23,4 +23,6 @@ public class KafkaUserDO {
private String password;
private String updateTime;
private Long clusterInfoId;
}

View File

@@ -0,0 +1,27 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/10 20:12
**/
@Data
public class AlterClientQuotaDTO {
private String type;
private List<String> types;
private List<String> names;
private String consumerRate;
private String producerRate;
private String requestPercentage;
private List<String> deleteConfigs;
}

View File

@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 20:19:03
**/
@Data
public class ClusterInfoDTO {
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
public ClusterInfoDO to() {
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setId(id);
infoDO.setClusterName(clusterName);
infoDO.setAddress(address);
if (StringUtils.isNotBlank(properties)) {
infoDO.setProperties(ConvertUtil.propertiesStr2JsonStr(properties));
}
return infoDO;
}
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.beans.dto;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-02-15 19:08:13
**/
@Data
public class ProposedAssignmentDTO {
private String topic;
private List<Integer> brokers;
}

View File

@@ -0,0 +1,17 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/1/9 21:53
**/
@Data
public class QueryClientQuotaDTO {
private List<String> types;
private List<String> names;
}

View File

@@ -0,0 +1,75 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.enums.FilterType;
import java.util.Date;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:17:59
**/
@Data
public class QueryMessageDTO {
private String topic;
private int partition;
private Date startTime;
private Date endTime;
private Long offset;
private String keyDeserializer;
private String valueDeserializer;
private String filter;
private String value;
private String headerKey;
private String headerValue;
public QueryMessage toQueryMessage() {
QueryMessage queryMessage = new QueryMessage();
queryMessage.setTopic(topic);
queryMessage.setPartition(partition);
if (startTime != null) {
queryMessage.setStartTime(startTime.getTime());
}
if (endTime != null) {
queryMessage.setEndTime(endTime.getTime());
}
if (offset != null) {
queryMessage.setOffset(offset);
}
queryMessage.setKeyDeserializer(keyDeserializer);
queryMessage.setValueDeserializer(valueDeserializer);
if (StringUtils.isNotBlank(filter)) {
queryMessage.setFilter(FilterType.valueOf(filter.toUpperCase()));
} else {
queryMessage.setFilter(FilterType.NONE);
}
if (StringUtils.isNotBlank(value)) {
queryMessage.setValue(value.trim());
}
if (StringUtils.isNotBlank(headerKey)) {
queryMessage.setHeaderKey(headerKey.trim());
}
if (StringUtils.isNotBlank(headerValue)) {
queryMessage.setHeaderValue(headerValue.trim());
}
return queryMessage;
}
}

View File

@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.enums;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 14:36:01
**/
public enum FilterType {
NONE, BODY, HEADER
}

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-22 16:24:58
**/
@Data
public class BrokerApiVersionVO {
private int brokerId;
private String host;
private int supportNums;
private int unSupportNums;
private List<String> versionInfo;
}

View File

@@ -0,0 +1,82 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import java.util.List;
import java.util.Map;
/**
* @author 晓东哥哥
*/
@Data
public class ClientQuotaEntityVO {
private String user;
private String client;
private String ip;
private String consumerRate;
private String producerRate;
private String requestPercentage;
public static ClientQuotaEntityVO from(ClientQuotaEntity entity, List<String> entityTypes, Map<String, Object> config) {
ClientQuotaEntityVO entityVO = new ClientQuotaEntityVO();
Map<String, String> entries = entity.entries();
entityTypes.forEach(type -> {
switch (type) {
case ClientQuotaEntity.USER:
entityVO.setUser(entries.get(type));
break;
case ClientQuotaEntity.CLIENT_ID:
entityVO.setClient(entries.get(type));
break;
case ClientQuotaEntity.IP:
entityVO.setIp(entries.get(type));
break;
default:
break;
}
});
entityVO.setConsumerRate(convert(config.getOrDefault(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setProducerRate(convert(config.getOrDefault(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "")));
entityVO.setRequestPercentage(config.getOrDefault(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, "").toString());
return entityVO;
}
public static String convert(Object num) {
if (num == null) {
return null;
}
if (num instanceof String) {
if ((StringUtils.isBlank((String) num))) {
return (String) num;
}
}
if (num instanceof Number) {
Number number = (Number) num;
double value = number.doubleValue();
double _1kb = 1024;
double _1mb = 1024 * _1kb;
if (value < _1kb) {
return value + " Byte";
}
if (value < _1mb) {
return String.format("%.1f KB", (value / _1kb));
}
if (value >= _1mb) {
return String.format("%.1f MB", (value / _1mb));
}
}
return String.valueOf(num);
}
}

View File

@@ -0,0 +1,40 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 19:16:11
**/
@Data
public class ClusterInfoVO {
private Long id;
private String clusterName;
private String address;
private List<String> properties;
private String updateTime;
public static ClusterInfoVO from(ClusterInfoDO infoDO) {
ClusterInfoVO vo = new ClusterInfoVO();
vo.setId(infoDO.getId());
vo.setClusterName(infoDO.getClusterName());
vo.setAddress(infoDO.getAddress());
vo.setUpdateTime(infoDO.getUpdateTime());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
vo.setProperties(ConvertUtil.jsonStr2List(infoDO.getProperties()));
}
return vo;
}
}

View File

@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import org.apache.kafka.clients.consumer.ConsumerRecord;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 14:19:35
**/
@Data
public class ConsumerRecordVO {
private String topic;
private int partition;
private long offset;
private long timestamp;
public static ConsumerRecordVO fromConsumerRecord(ConsumerRecord record) {
ConsumerRecordVO vo = new ConsumerRecordVO();
vo.setTopic(record.topic());
vo.setPartition(record.partition());
vo.setOffset(record.offset());
vo.setTimestamp(record.timestamp());
return vo;
}
}

View File

@@ -0,0 +1,47 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.ArrayList;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-12 12:45:23
**/
@Data
public class MessageDetailVO {
private String topic;
private int partition;
private long offset;
private long timestamp;
private String timestampType;
private List<HeaderVO> headers = new ArrayList<>();
private Object key;
private Object value;
private List<ConsumerVO> consumers;
@Data
public static class HeaderVO {
String key;
String value;
}
@Data
public static class ConsumerVO {
String groupId;
String status;
}
}

View File

@@ -29,6 +29,10 @@ public class TopicPartitionVO {
private long diff;
private long beginTime;
private long endTime;
public static TopicPartitionVO from(TopicPartitionInfo partitionInfo) {
TopicPartitionVO partitionVO = new TopicPartitionVO();
partitionVO.setPartition(partitionInfo.partition());

View File

@@ -0,0 +1,66 @@
package com.xuxd.kafka.console.boot;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.KafkaConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.stereotype.Component;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 19:16:50
**/
@Slf4j
@Component
public class Bootstrap implements SmartInitializingSingleton {
public static final String DEFAULT_CLUSTER_NAME = "default";
private final KafkaConfig config;
private final ClusterInfoMapper clusterInfoMapper;
public Bootstrap(KafkaConfig config, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.config = config;
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
private void initialize() {
loadDefaultClusterConfig();
}
private void loadDefaultClusterConfig() {
log.info("load default kafka config.");
if (StringUtils.isBlank(config.getBootstrapServer())) {
return;
}
QueryWrapper<ClusterInfoDO> clusterInfoDOQueryWrapper = new QueryWrapper<>();
clusterInfoDOQueryWrapper.eq("cluster_name", DEFAULT_CLUSTER_NAME);
List<Object> objects = clusterInfoMapper.selectObjs(clusterInfoDOQueryWrapper);
if (CollectionUtils.isNotEmpty(objects)) {
log.warn("default kafka cluster config has existed[any of cluster name or address].");
return;
}
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setClusterName(DEFAULT_CLUSTER_NAME);
infoDO.setAddress(config.getBootstrapServer().trim());
infoDO.setProperties(ConvertUtil.toJsonString(config.getProperties()));
clusterInfoMapper.insert(infoDO);
log.info("Insert default config: {}", infoDO);
}
@Override public void afterSingletonsInstantiated() {
initialize();
}
}

View File

@@ -0,0 +1,33 @@
package com.xuxd.kafka.console.cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import kafka.console.KafkaConsole;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
public class TimeBasedCache<K, V> {
private LoadingCache<K, V> cache;
private KafkaConsole console;
public TimeBasedCache(CacheLoader<K, V> loader, RemovalListener<K, V> listener) {
cache = CacheBuilder.newBuilder()
.maximumSize(50) // at most 50 entries can be cached
.expireAfterAccess(30, TimeUnit.MINUTES) // entries expire 30 minutes after the last access
.removalListener(listener)
.build(loader);
}
public V get(K k) {
try {
return cache.get(k);
} catch (ExecutionException e) {
throw new RuntimeException("Get connection from cache error.", e);
}
}
}

View File

@@ -0,0 +1,74 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 15:46:55
**/
public class ContextConfig {
public static final int DEFAULT_REQUEST_TIMEOUT_MS = 5000;
private Long clusterInfoId;
private String clusterName;
private String bootstrapServer;
private int requestTimeoutMs = DEFAULT_REQUEST_TIMEOUT_MS;
private Properties properties = new Properties();
public String getBootstrapServer() {
return bootstrapServer;
}
public void setBootstrapServer(String bootstrapServer) {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return properties.containsKey(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG) ?
Integer.parseInt(properties.getProperty(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)) : requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public Properties getProperties() {
return properties;
}
public Long getClusterInfoId() {
return clusterInfoId;
}
public void setClusterInfoId(Long clusterInfoId) {
this.clusterInfoId = clusterInfoId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
@Override public String toString() {
return "KafkaContextConfig{" +
"bootstrapServer='" + bootstrapServer + '\'' +
", requestTimeoutMs=" + requestTimeoutMs +
", properties=" + properties +
'}';
}
}

View File

@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.config;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 18:55:28
**/
public class ContextConfigHolder {
public static final ThreadLocal<ContextConfig> CONTEXT_CONFIG = new ThreadLocal<>();
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
@@ -15,23 +16,15 @@ public class KafkaConfig {
private String bootstrapServer;
private int requestTimeoutMs;
private String securityProtocol;
private String saslMechanism;
private String saslJaasConfig;
private String adminUsername;
private String adminPassword;
private boolean adminCreate;
private String zookeeperAddr;
private boolean enableAcl;
private Properties properties;
private boolean cacheAdminConnection;
private boolean cacheProducerConnection;
private boolean cacheConsumerConnection;
public String getBootstrapServer() {
return bootstrapServer;
@@ -41,62 +34,6 @@ public class KafkaConfig {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public String getSecurityProtocol() {
return securityProtocol;
}
public void setSecurityProtocol(String securityProtocol) {
this.securityProtocol = securityProtocol;
}
public String getSaslMechanism() {
return saslMechanism;
}
public void setSaslMechanism(String saslMechanism) {
this.saslMechanism = saslMechanism;
}
public String getSaslJaasConfig() {
return saslJaasConfig;
}
public void setSaslJaasConfig(String saslJaasConfig) {
this.saslJaasConfig = saslJaasConfig;
}
public String getAdminUsername() {
return adminUsername;
}
public void setAdminUsername(String adminUsername) {
this.adminUsername = adminUsername;
}
public String getAdminPassword() {
return adminPassword;
}
public void setAdminPassword(String adminPassword) {
this.adminPassword = adminPassword;
}
public boolean isAdminCreate() {
return adminCreate;
}
public void setAdminCreate(boolean adminCreate) {
this.adminCreate = adminCreate;
}
public String getZookeeperAddr() {
return zookeeperAddr;
}
@@ -105,11 +42,35 @@ public class KafkaConfig {
this.zookeeperAddr = zookeeperAddr;
}
public boolean isEnableAcl() {
return enableAcl;
public Properties getProperties() {
return properties;
}
public void setEnableAcl(boolean enableAcl) {
this.enableAcl = enableAcl;
public void setProperties(Properties properties) {
this.properties = properties;
}
public boolean isCacheAdminConnection() {
return cacheAdminConnection;
}
public void setCacheAdminConnection(boolean cacheAdminConnection) {
this.cacheAdminConnection = cacheAdminConnection;
}
public boolean isCacheProducerConnection() {
return cacheProducerConnection;
}
public void setCacheProducerConnection(boolean cacheProducerConnection) {
this.cacheProducerConnection = cacheProducerConnection;
}
public boolean isCacheConsumerConnection() {
return cacheConsumerConnection;
}
public void setCacheConsumerConnection(boolean cacheConsumerConnection) {
this.cacheConsumerConnection = cacheConsumerConnection;
}
}

View File

@@ -1,12 +1,6 @@
package com.xuxd.kafka.console.config;
import kafka.console.ClusterConsole;
import kafka.console.ConfigConsole;
import kafka.console.ConsumerConsole;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import kafka.console.OperationConsole;
import kafka.console.TopicConsole;
import kafka.console.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@@ -51,7 +45,17 @@ public class KafkaConfiguration {
@Bean
public OperationConsole operationConsole(KafkaConfig config, TopicConsole topicConsole,
ConsumerConsole consumerConsole) {
ConsumerConsole consumerConsole) {
return new OperationConsole(config, topicConsole, consumerConsole);
}
@Bean
public MessageConsole messageConsole(KafkaConfig config) {
return new MessageConsole(config);
}
@Bean
public ClientQuotaConsole clientQuotaConsole(KafkaConfig config) {
return new ClientQuotaConsole(config);
}
}

View File

@@ -118,4 +118,14 @@ public class AclAuthController {
return aclService.deleteConsumerAcl(param.toTopicEntry(), param.toGroupEntry());
}
/**
* clear principal acls.
*
* @param param acl principal.
* @return true or false.
*/
@DeleteMapping("/clear")
public Object clearAcl(@RequestBody DeleteAclDTO param) {
return aclService.clearAcl(param.toUserEntry());
}
}

View File

@@ -1,7 +1,9 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.AclUser;
import com.xuxd.kafka.console.service.AclService;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
@@ -12,7 +14,7 @@ import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui.
* kafka-console-ui. sasl scram user.
*
* @author xuxd
* @date 2021-08-28 21:13:05
@@ -49,4 +51,11 @@ public class AclUserController {
public Object getUserDetail(@RequestParam String username) {
return aclService.getUserDetail(username);
}
@GetMapping("/scram")
public Object getSaslScramUserList(@RequestParam(required = false) String username) {
AclEntry entry = new AclEntry();
entry.setPrincipal(StringUtils.isNotBlank(username) ? username : null);
return aclService.getSaslScramUserList(entry);
}
}

View File

@@ -0,0 +1,52 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.dto.QueryClientQuotaDTO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @date: 2023/1/9 21:50
**/
@RestController
@RequestMapping("/client/quota")
public class ClientQuotaController {
private final ClientQuotaService clientQuotaService;
public ClientQuotaController(ClientQuotaService clientQuotaService) {
this.clientQuotaService = clientQuotaService;
}
@PostMapping("/list")
public Object getClientQuotaConfigs(@RequestBody QueryClientQuotaDTO request) {
return clientQuotaService.getClientQuotaConfigs(request.getTypes(), request.getNames());
}
@PostMapping
public Object alterClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
if (request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types length and names length is invalid.");
}
}
return clientQuotaService.alterClientQuotaConfigs(request);
}
@DeleteMapping
public Object deleteClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
if (request.getTypes().size() != 2) {
if (CollectionUtils.isEmpty(request.getTypes())
|| CollectionUtils.isEmpty(request.getNames())
|| request.getTypes().size() != request.getNames().size()) {
return ResponseData.create().failed("types length and names length is invalid.");
}
}
return clientQuotaService.deleteClientQuotaConfigs(request);
}
}

View File

@@ -1,8 +1,13 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@@ -23,4 +28,34 @@ public class ClusterController {
public Object getClusterInfo() {
return clusterService.getClusterInfo();
}
@GetMapping("/info")
public Object getClusterInfoList() {
return clusterService.getClusterInfoList();
}
@PostMapping("/info")
public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.addClusterInfo(dto.to());
}
@DeleteMapping("/info")
public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.deleteClusterInfo(dto.getId());
}
@PutMapping("/info")
public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.updateClusterInfo(dto.to());
}
@GetMapping("/info/peek")
public Object peekClusterInfo() {
return clusterService.peekClusterInfo();
}
@GetMapping("/info/api/version")
public Object getBrokerApiVersionInfo() {
return clusterService.getBrokerApiVersionInfo();
}
}

View File

@@ -36,7 +36,7 @@ public class ConfigController {
this.configService = configService;
}
@GetMapping
@GetMapping("/console")
public Object getConfig() {
return ResponseData.create().data(configMap).success();
}

View File

@@ -0,0 +1,64 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QueryMessageDTO;
import com.xuxd.kafka.console.service.MessageService;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.List;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:22:19
**/
@RestController
@RequestMapping("/message")
public class MessageController {
@Autowired
private MessageService messageService;
@PostMapping("/search/time")
public Object searchByTime(@RequestBody QueryMessageDTO dto) {
return messageService.searchByTime(dto.toQueryMessage());
}
@PostMapping("/search/offset")
public Object searchByOffset(@RequestBody QueryMessageDTO dto) {
return messageService.searchByOffset(dto.toQueryMessage());
}
@PostMapping("/search/detail")
public Object searchDetail(@RequestBody QueryMessageDTO dto) {
return messageService.searchDetail(dto.toQueryMessage());
}
@GetMapping("/deserializer/list")
public Object deserializerList() {
return messageService.deserializerList();
}
@PostMapping("/send")
public Object send(@RequestBody SendMessage message) {
return messageService.send(message);
}
@PostMapping("/resend")
public Object resend(@RequestBody SendMessage message) {
return messageService.resend(message);
}
@DeleteMapping
public Object delete(@RequestBody List<QueryMessage> messages) {
if (CollectionUtils.isEmpty(messages)) {
return ResponseData.create().failed("params is null");
}
return messageService.delete(messages);
}
}

View File

@@ -2,6 +2,7 @@ package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.TopicPartition;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
import com.xuxd.kafka.console.beans.dto.ProposedAssignmentDTO;
import com.xuxd.kafka.console.beans.dto.ReplicationDTO;
import com.xuxd.kafka.console.beans.dto.SyncDataDTO;
import com.xuxd.kafka.console.service.OperationService;
@@ -74,4 +75,9 @@ public class OperationController {
public Object cancelReassignment(@RequestBody TopicPartition partition) {
return operationService.cancelReassignment(new org.apache.kafka.common.TopicPartition(partition.getTopic(), partition.getPartition()));
}
@PostMapping("/replication/reassignments/proposed")
public Object proposedAssignments(@RequestBody ProposedAssignmentDTO dto) {
return operationService.proposedAssignments(dto.getTopic(), dto.getBrokers());
}
}

View File

@@ -43,8 +43,8 @@ public class TopicController {
}
@DeleteMapping
public Object deleteTopic(@RequestParam String topic) {
return topicService.deleteTopic(topic);
public Object deleteTopic(@RequestBody List<String> topics) {
return topicService.deleteTopics(topics);
}
@GetMapping("/partition")

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:58:52
**/
public interface ClusterInfoMapper extends BaseMapper<ClusterInfoDO> {
}

View File

@@ -0,0 +1,86 @@
package com.xuxd.kafka.console.interceptor;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-05 19:56:25
**/
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*", "/user/*", "/cluster/*", "/config/*", "/consumer/*", "/message/*", "/topic/*", "/op/*", "/client/*"})
@Slf4j
public class ContextSetFilter implements Filter {
private Set<String> excludes = new HashSet<>();
{
excludes.add("/cluster/info/peek");
excludes.add("/cluster/info");
excludes.add("/config/console");
}
@Autowired
private ClusterInfoMapper clusterInfoMapper;
@Override
public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
HttpServletRequest request = (HttpServletRequest) req;
String uri = request.getRequestURI();
if (!excludes.contains(uri)) {
String headerId = request.getHeader(Header.ID);
if (StringUtils.isBlank(headerId)) {
// ResponseData failed = ResponseData.create().failed("Cluster info is null.");
ResponseData failed = ResponseData.create().failed("没有集群信息,请先切换集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
} else {
ClusterInfoDO infoDO = clusterInfoMapper.selectById(Long.valueOf(headerId));
if (infoDO == null) {
ResponseData failed = ResponseData.create().failed("该集群找不到信息,请切换一个有效集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
}
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
// log.info("current kafka config: {}", config);
}
}
chain.doFilter(req, response);
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
interface Header {
String ID = "X-Cluster-Info-Id";
String NAME = "X-Cluster-Info-Name";
}
}

View File

@@ -1,9 +1,22 @@
package com.xuxd.kafka.console.schedule;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import com.xuxd.kafka.console.utils.SaslUtil;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@@ -21,25 +34,58 @@ public class KafkaAclSchedule {
private final KafkaConfigConsole configConsole;
public KafkaAclSchedule(KafkaUserMapper userMapper, KafkaConfigConsole configConsole) {
this.userMapper = userMapper;
this.configConsole = configConsole;
private final ClusterInfoMapper clusterInfoMapper;
public KafkaAclSchedule(ObjectProvider<KafkaUserMapper> userMapper,
ObjectProvider<KafkaConfigConsole> configConsole, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.userMapper = userMapper.getIfAvailable();
this.configConsole = configConsole.getIfAvailable();
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
@Scheduled(cron = "${cron.clear-dirty-user}")
public void clearDirtyKafkaUser() {
log.info("Start clear dirty data for kafka user from database.");
Set<String> userSet = configConsole.getUserList(null);
userMapper.selectList(null).forEach(u -> {
if (!userSet.contains(u.getUsername())) {
log.info("clear user: {} from database.", u.getUsername());
try {
userMapper.deleteById(u.getId());
} catch (Exception e) {
log.error("userMapper.deleteById error, user: " + u, e);
try {
log.info("Start clear dirty data for kafka user from database.");
List<ClusterInfoDO> clusterInfoDOS = clusterInfoMapper.selectList(null);
List<Long> clusterInfoIds = new ArrayList<>();
for (ClusterInfoDO infoDO : clusterInfoDOS) {
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
if (SaslUtil.isEnableSasl() && SaslUtil.isEnableScram()) {
log.info("Start clear cluster: {}", infoDO.getClusterName());
Set<String> userSet = configConsole.getUserList(null);
QueryWrapper<KafkaUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_info_id", infoDO.getId());
userMapper.selectList(queryWrapper).forEach(u -> {
if (!userSet.contains(u.getUsername())) {
log.info("clear user: {} from database.", u.getUsername());
try {
userMapper.deleteById(u.getId());
} catch (Exception e) {
log.error("userMapper.deleteById error, user: " + u, e);
}
}
});
clusterInfoIds.add(infoDO.getId());
}
}
});
log.info("Clear end.");
if (CollectionUtils.isNotEmpty(clusterInfoIds)) {
log.info("Clear the cluster id {}, which not found.", clusterInfoIds);
QueryWrapper<KafkaUserDO> wrapper = new QueryWrapper<>();
wrapper.notIn("cluster_info_id", clusterInfoIds);
userMapper.delete(wrapper);
}
log.info("Clear end.");
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
}

View File

@@ -42,4 +42,7 @@ public interface AclService {
ResponseData getUserDetail(String username);
ResponseData clearAcl(AclEntry entry);
ResponseData getSaslScramUserList(AclEntry entry);
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import java.util.List;
/**
* @author 晓东哥哥
*/
public interface ClientQuotaService {
ResponseData getClientQuotaConfigs(List<String> types, List<String> names);
ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request);
ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request);
}

View File

@@ -0,0 +1,10 @@
package com.xuxd.kafka.console.service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:43
**/
public interface ClusterInfoService {
}

View File

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
@@ -10,4 +11,16 @@ import com.xuxd.kafka.console.beans.ResponseData;
**/
public interface ClusterService {
ResponseData getClusterInfo();
ResponseData getClusterInfoList();
ResponseData addClusterInfo(ClusterInfoDO infoDO);
ResponseData deleteClusterInfo(Long id);
ResponseData updateClusterInfo(ClusterInfoDO infoDO);
ResponseData peekClusterInfo();
ResponseData getBrokerApiVersionInfo();
}

View File

@@ -38,4 +38,6 @@ public interface ConsumerService {
ResponseData getTopicSubscribedByGroups(String topic);
ResponseData getOffsetPartition(String groupId);
ResponseData<Set<String>> getSubscribedGroups(String topic);
}

View File

@@ -0,0 +1,30 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import java.util.List;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:43:26
**/
public interface MessageService {
ResponseData searchByTime(QueryMessage queryMessage);
ResponseData searchByOffset(QueryMessage queryMessage);
ResponseData searchDetail(QueryMessage queryMessage);
ResponseData deserializerList();
ResponseData send(SendMessage message);
ResponseData resend(SendMessage message);
ResponseData delete(List<QueryMessage> messages);
}

View File

@@ -30,4 +30,6 @@ public interface OperationService {
ResponseData currentReassignments();
ResponseData cancelReassignment(TopicPartition partition);
ResponseData proposedAssignments(String topic, List<Integer> brokerList);
}

View File

@@ -4,6 +4,8 @@ import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.enums.TopicThrottleSwitch;
import com.xuxd.kafka.console.beans.enums.TopicType;
import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.admin.NewTopic;
@@ -19,7 +21,7 @@ public interface TopicService {
ResponseData getTopicList(String topic, TopicType type);
ResponseData deleteTopic(String topic);
ResponseData deleteTopics(Collection<String> topics);
ResponseData getTopicPartitionInfo(String topic);

View File

@@ -6,16 +6,10 @@ import com.xuxd.kafka.console.beans.CounterMap;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
import com.xuxd.kafka.console.beans.vo.KafkaUserDetailVO;
import com.xuxd.kafka.console.config.KafkaConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.service.AclService;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import com.xuxd.kafka.console.utils.SaslUtil;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
@@ -23,12 +17,19 @@ import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.UserScramCredentialsDescription;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.errors.SecurityDisabledException;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableSasl;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableScram;
/**
* kafka-console-ui.
*
@@ -37,7 +38,7 @@ import scala.Tuple2;
**/
@Slf4j
@Service
public class AclServiceImpl implements AclService, SmartInitializingSingleton {
public class AclServiceImpl implements AclService {
@Autowired
private KafkaConfigConsole configConsole;
@@ -45,9 +46,6 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
@Autowired
private KafkaAclConsole aclConsole;
@Autowired
private KafkaConfig kafkaConfig;
private final KafkaUserMapper kafkaUserMapper;
public AclServiceImpl(ObjectProvider<KafkaUserMapper> kafkaUserMapper) {
@@ -64,15 +62,23 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
@Override public ResponseData addOrUpdateUser(String name, String pass) {
if (!isEnableSasl()) {
return ResponseData.create().failed("Only support SASL protocol.");
}
if (!isEnableScram()) {
return ResponseData.create().failed("Only support SASL_SCRAM.");
}
log.info("add or update user, username: {}, password: {}", name, pass);
if (!configConsole.addOrUpdateUser(name, pass)) {
Tuple2<Object, String> tuple2 = configConsole.addOrUpdateUser(name, pass);
if (!(boolean) tuple2._1()) {
log.error("add user to kafka failed.");
return ResponseData.create().failed("add user to kafka failed");
return ResponseData.create().failed("add user to kafka failed: " + tuple2._2());
}
// save user info to database.
KafkaUserDO userDO = new KafkaUserDO();
userDO.setUsername(name);
userDO.setPassword(pass);
userDO.setClusterInfoId(ContextConfigHolder.CONTEXT_CONFIG.get().getClusterInfoId());
try {
Map<String, Object> map = new HashMap<>();
map.put("username", name);
@@ -86,12 +92,24 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
@Override public ResponseData deleteUser(String name) {
if (!isEnableSasl()) {
return ResponseData.create().failed("Only support SASL protocol.");
}
if (!isEnableScram()) {
return ResponseData.create().failed("Only support SASL_SCRAM.");
}
log.info("delete user: {}", name);
Tuple2<Object, String> tuple2 = configConsole.deleteUser(name);
return (boolean) tuple2._1() ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData deleteUserAndAuth(String name) {
if (!isEnableSasl()) {
return ResponseData.create().failed("Only support SASL protocol.");
}
if (!isEnableScram()) {
return ResponseData.create().failed("Only support SASL_SCRAM.");
}
log.info("delete user and authority: {}", name);
AclEntry entry = new AclEntry();
entry.setPrincipal(name);
@@ -114,37 +132,52 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
@Override public ResponseData getAclList(AclEntry entry) {
List<AclBinding> aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
List<AclBinding> aclBindingList = Collections.emptyList();
try {
aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
}catch (Exception ex) {
if (ex.getCause() instanceof SecurityDisabledException) {
Throwable e = ex.getCause();
log.info("SecurityDisabledException: {}", e.getMessage());
Map<String, String> hint = new HashMap<>(2);
hint.put("hint", "Security Disabled: " + e.getMessage());
return ResponseData.create().data(hint).success();
}
throw new RuntimeException(ex.getCause());
}
// List<AclBinding> aclBindingList = entry.isNull() ? aclConsole.getAclList(null) : aclConsole.getAclList(entry);
List<AclEntry> entryList = aclBindingList.stream().map(x -> AclEntry.valueOf(x)).collect(Collectors.toList());
Map<String, List<AclEntry>> entryMap = entryList.stream().collect(Collectors.groupingBy(AclEntry::getPrincipal));
Map<String, Object> resultMap = new HashMap<>();
entryMap.forEach((k, v) -> {
Map<String, List<AclEntry>> map = v.stream().collect(Collectors.groupingBy(e -> e.getResourceType() + "#" + e.getName()));
if (k.equals(kafkaConfig.getAdminUsername())) {
Map<String, Object> map2 = new HashMap<>(map);
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
}
// String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
// if (k.equals(username)) {
// Map<String, Object> map2 = new HashMap<>(map);
// Map<String, Object> userMap = new HashMap<>();
// userMap.put("role", "admin");
// map2.put("USER", userMap);
// }
resultMap.put(k, map);
});
if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
if (!u.name().equals(kafkaConfig.getAdminUsername())) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
resultMap.put(u.name(), map2);
}
}
});
}
// if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
// Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
//
// detailList.values().forEach(u -> {
// if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
// String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
// if (!u.name().equals(username)) {
// resultMap.put(u.name(), Collections.emptyMap());
// } else {
// Map<String, Object> map2 = new HashMap<>();
// Map<String, Object> userMap = new HashMap<>();
// userMap.put("role", "admin");
// map2.put("USER", userMap);
// resultMap.put(u.name(), map2);
// }
// }
// });
// }
return ResponseData.create().data(new CounterMap<>(resultMap)).success();
}
@@ -194,27 +227,60 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
Map<String, Object> param = new HashMap<>();
param.put("username", username);
param.put("cluster_info_id", ContextConfigHolder.CONTEXT_CONFIG.get().getClusterInfoId());
List<KafkaUserDO> dos = kafkaUserMapper.selectByMap(param);
if (dos.isEmpty()) {
vo.setConsistencyDescription("Password is null.");
} else {
vo.setPassword(dos.stream().findFirst().get().getPassword());
// check for consistency.
boolean consistent = configConsole.isPassConsistent(username, vo.getPassword());
vo.setConsistencyDescription(consistent ? "Consistent" : "Password is not consistent.");
// boolean consistent = configConsole.isPassConsistent(username, vo.getPassword());
// vo.setConsistencyDescription(consistent ? "Consistent" : "Password is not consistent.");
vo.setConsistencyDescription("Can not check password consistent.");
}
return ResponseData.create().data(vo).success();
}
@Override public void afterSingletonsInstantiated() {
if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
boolean done = configConsole.addOrUpdateUserWithZK(kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
if (!done) {
log.error("Create admin failed.");
throw new IllegalStateException();
}
}
@Override
public ResponseData clearAcl(AclEntry entry) {
log.info("Start clear acl, principal: {}", entry);
return aclConsole.deleteUserAcl(entry) ? ResponseData.create().success() : ResponseData.create().failed("operation failed");
}
@Override
public ResponseData getSaslScramUserList(AclEntry entry) {
Map<String, Object> resultMap = new HashMap<>();
if (entry.isNull() || StringUtils.isNotBlank(entry.getPrincipal())) {
Map<String, UserScramCredentialsDescription> detailList = configConsole.getUserDetailList(StringUtils.isNotBlank(entry.getPrincipal()) ? Collections.singletonList(entry.getPrincipal()) : null);
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (!u.name().equals(username)) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
map2.put("USER", userMap);
resultMap.put(u.name(), map2);
}
}
});
}
return ResponseData.create().data(new CounterMap<>(resultMap)).success();
}
// @Override public void afterSingletonsInstantiated() {
// if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
// log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
// boolean done = configConsole.addOrUpdateUserWithZK(kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
// if (!done) {
// log.error("Create admin failed.");
// throw new IllegalStateException();
// }
// }
// }
}

View File

@@ -0,0 +1,172 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
import com.xuxd.kafka.console.beans.vo.ClientQuotaEntityVO;
import com.xuxd.kafka.console.service.ClientQuotaService;
import kafka.console.ClientQuotaConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
/**
* @author 晓东哥哥
*/
@Slf4j
@Service
public class ClientQuotaServiceImpl implements ClientQuotaService {
private final ClientQuotaConsole clientQuotaConsole;
private final Map<String, String> typeDict = new HashMap<>();
private final Map<String, String> configDict = new HashMap<>();
private final String USER = "user";
private final String CLIENT_ID = "client-id";
private final String IP = "ip";
private final String USER_CLIENT = "user&client-id";
{
typeDict.put(USER, ClientQuotaEntity.USER);
typeDict.put(CLIENT_ID, ClientQuotaEntity.CLIENT_ID);
typeDict.put(IP, ClientQuotaEntity.IP);
typeDict.put(USER_CLIENT, USER_CLIENT);
configDict.put("producerRate", QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG);
configDict.put("consumerRate", QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG);
configDict.put("requestPercentage", QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG);
}
public ClientQuotaServiceImpl(ClientQuotaConsole clientQuotaConsole) {
this.clientQuotaConsole = clientQuotaConsole;
}
@Override
public ResponseData getClientQuotaConfigs(List<String> types, List<String> names) {
List<String> entityNames = names == null ? Collections.emptyList() : new ArrayList<>(names);
List<String> entityTypes = types.stream().map(e -> typeDict.get(e)).filter(e -> e != null).collect(Collectors.toList());
if (entityTypes.isEmpty() || entityTypes.size() != types.size()) {
throw new IllegalArgumentException("types illegal.");
}
boolean userAndClientFilterClientOnly = false;
// only applies when the type is user&client-id, i.e. types.size() == 2
if (entityTypes.size() == 2) {
if (names != null && names.size() == 2 && StringUtils.isBlank(names.get(0)) && StringUtils.isNotBlank(names.get(1))) {
userAndClientFilterClientOnly = true;
}
}
Map<ClientQuotaEntity, Map<String, Object>> clientQuotasConfigs = clientQuotaConsole.getClientQuotasConfigs(entityTypes,
userAndClientFilterClientOnly ? Collections.emptyList() : entityNames);
List<ClientQuotaEntityVO> voList = clientQuotasConfigs.entrySet().stream().map(entry -> ClientQuotaEntityVO.from(
entry.getKey(), entityTypes, entry.getValue())).collect(Collectors.toList());
if (!userAndClientFilterClientOnly) {
return ResponseData.create().data(voList).success();
}
List<ClientQuotaEntityVO> list = voList.stream().filter(e -> names.get(1).equals(e.getClient())).collect(Collectors.toList());
return ResponseData.create().data(list).success();
}
@Override
public ResponseData alterClientQuotaConfigs(AlterClientQuotaDTO request) {
if (StringUtils.isEmpty(request.getType()) || !typeDict.containsKey(request.getType())) {
return ResponseData.create().failed("Unknown type.");
}
List<String> types = new ArrayList<>();
List<String> names = new ArrayList<>();
parseTypesAndNames(request, types, names, request.getType());
Map<String, String> configsToBeAddedMap = new HashMap<>();
if (StringUtils.isNotEmpty(request.getProducerRate())) {
configsToBeAddedMap.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getProducerRate()))));
}
if (StringUtils.isNotEmpty(request.getConsumerRate())) {
configsToBeAddedMap.put(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getConsumerRate()))));
}
if (StringUtils.isNotEmpty(request.getRequestPercentage())) {
configsToBeAddedMap.put(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG, String.valueOf(Math.floor(Double.valueOf(request.getRequestPercentage()))));
}
Tuple2<Object, String> tuple2 = clientQuotaConsole.addQuotaConfigs(types, names, configsToBeAddedMap);
if (!(Boolean) tuple2._1) {
return ResponseData.create().failed(tuple2._2);
}
if (CollectionUtils.isNotEmpty(request.getDeleteConfigs())) {
List<String> delete = request.getDeleteConfigs().stream().map(key -> configDict.get(key)).collect(Collectors.toList());
Tuple2<Object, String> tuple2Del = clientQuotaConsole.deleteQuotaConfigs(types, names, delete);
if (!(Boolean) tuple2Del._1) {
return ResponseData.create().failed(tuple2Del._2);
}
}
return ResponseData.create().success();
}
@Override
public ResponseData deleteClientQuotaConfigs(AlterClientQuotaDTO request) {
if (StringUtils.isEmpty(request.getType()) || !typeDict.containsKey(request.getType())) {
return ResponseData.create().failed("Unknown type.");
}
List<String> types = new ArrayList<>();
List<String> names = new ArrayList<>();
parseTypesAndNames(request, types, names, request.getType());
List<String> configs = new ArrayList<>();
configs.add(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG);
configs.add(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG);
configs.add(QuotaConfigs.REQUEST_PERCENTAGE_OVERRIDE_CONFIG);
Tuple2<Object, String> tuple2 = clientQuotaConsole.deleteQuotaConfigs(types, names, configs);
if (!(Boolean) tuple2._1) {
return ResponseData.create().failed(tuple2._2);
}
return ResponseData.create().success();
}
private void parseTypesAndNames(AlterClientQuotaDTO request, List<String> types, List<String> names, String type) {
switch (request.getType()) {
case USER:
getTypesAndNames(request, types, names, USER);
break;
case CLIENT_ID:
getTypesAndNames(request, types, names, CLIENT_ID);
break;
case IP:
getTypesAndNames(request, types, names, IP);
break;
case USER_CLIENT:
getTypesAndNames(request, types, names, USER);
getTypesAndNames(request, types, names, CLIENT_ID);
break;
}
}
private void getTypesAndNames(AlterClientQuotaDTO request, List<String> types, List<String> names, String type) {
int index = -1;
for (int i = 0; i < request.getTypes().size(); i++) {
if (type.equals(request.getTypes().get(i))) {
index = i;
break;
}
}
if (index < 0) {
throw new IllegalArgumentException("Does not contain the type" + type);
}
types.add(request.getTypes().get(index));
if (CollectionUtils.isNotEmpty(request.getNames()) && request.getNames().size() > index) {
names.add(request.getNames().get(index));
} else {
names.add("");
}
}
}
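A minimal illustration (my addition, not part of the commit) of the contract parseTypesAndNames builds for a combined user&client-id quota: the types and names lists stay index-aligned before being handed to clientQuotaConsole.addQuotaConfigs. The user "alice", client id "report-client" and byte-rate values are made-up sample data.
import org.apache.kafka.common.config.internals.QuotaConfigs;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class UserClientQuotaExample {
    public static void main(String[] args) {
        // For request.getType() == "user&client-id" the two entity types are resolved in order,
        // so the names list must line up with the types list.
        List<String> types = Arrays.asList("user", "client-id");
        List<String> names = Arrays.asList("alice", "report-client");
        // Values are floored and rendered the same way alterClientQuotaConfigs renders them.
        Map<String, String> configs = new HashMap<>();
        configs.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "1048576.0");
        configs.put(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG, "2097152.0");
        System.out.println(types + " / " + names + " -> " + configs);
    }
}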

View File

@@ -0,0 +1,14 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.service.ClusterInfoService;
import org.springframework.stereotype.Service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:59
**/
@Service
public class ClusterInfoServiceImpl implements ClusterInfoService {
}

View File

@@ -1,9 +1,24 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.BrokerNode;
import com.xuxd.kafka.console.beans.ClusterInfo;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.vo.BrokerApiVersionVO;
import com.xuxd.kafka.console.beans.vo.ClusterInfoVO;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.service.ClusterService;
import java.util.*;
import java.util.stream.Collectors;
import kafka.console.ClusterConsole;
import org.springframework.beans.factory.annotation.Autowired;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.NodeApiVersions;
import org.apache.kafka.common.Node;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Service;
/**
@@ -12,13 +27,91 @@ import org.springframework.stereotype.Service;
* @author xuxd
* @date 2021-10-08 14:23:09
**/
@Slf4j
@Service
public class ClusterServiceImpl implements ClusterService {
@Autowired
private ClusterConsole clusterConsole;
private final ClusterConsole clusterConsole;
private final ClusterInfoMapper clusterInfoMapper;
public ClusterServiceImpl(ObjectProvider<ClusterConsole> clusterConsole,
ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.clusterConsole = clusterConsole.getIfAvailable();
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
@Override public ResponseData getClusterInfo() {
return ResponseData.create().data(clusterConsole.clusterInfo()).success();
ClusterInfo clusterInfo = clusterConsole.clusterInfo();
Set<BrokerNode> nodes = clusterInfo.getNodes();
if (nodes == null) {
log.error("集群节点信息为空,集群地址可能不正确或集群内没有活跃节点");
return ResponseData.create().failed("集群节点信息为空,集群地址可能不正确或集群内没有活跃节点");
}
clusterInfo.setNodes(new TreeSet<>(nodes));
return ResponseData.create().data(clusterInfo).success();
}
@Override public ResponseData getClusterInfoList() {
return ResponseData.create().data(clusterInfoMapper.selectList(null)
.stream().map(ClusterInfoVO::from).collect(Collectors.toList())).success();
}
@Override public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
QueryWrapper<ClusterInfoDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_name", infoDO.getClusterName());
if (clusterInfoMapper.selectCount(queryWrapper) > 0) {
return ResponseData.create().failed("cluster name exist.");
}
clusterInfoMapper.insert(infoDO);
return ResponseData.create().success();
}
@Override public ResponseData deleteClusterInfo(Long id) {
clusterInfoMapper.deleteById(id);
return ResponseData.create().success();
}
@Override public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
if (infoDO.getProperties() == null) {
// a null value would not be updated (a bug); set an empty string as a workaround
infoDO.setProperties("");
}
clusterInfoMapper.updateById(infoDO);
return ResponseData.create().success();
}
@Override public ResponseData peekClusterInfo() {
List<ClusterInfoDO> dos = clusterInfoMapper.selectList(null);
if (CollectionUtils.isEmpty(dos)) {
return ResponseData.create().failed("No Cluster Info.");
}
return ResponseData.create().data(dos.stream().findFirst().map(ClusterInfoVO::from)).success();
}
@Override public ResponseData getBrokerApiVersionInfo() {
HashMap<Node, NodeApiVersions> map = clusterConsole.listBrokerVersionInfo();
List<BrokerApiVersionVO> list = new ArrayList<>(map.size());
map.forEach(((node, versions) -> {
BrokerApiVersionVO vo = new BrokerApiVersionVO();
vo.setBrokerId(node.id());
vo.setHost(node.host() + ":" + node.port());
vo.setSupportNums(versions.allSupportedApiVersions().size());
String versionInfo = versions.toString(true);
int from = 0;
int count = 0;
int index = -1;
while ((index = versionInfo.indexOf("UNSUPPORTED", from)) >= 0 && from < versionInfo.length()) {
count++;
from = index + 1;
}
vo.setUnSupportNums(count);
versionInfo = versionInfo.substring(1, versionInfo.length() - 2);
vo.setVersionInfo(Arrays.asList(StringUtils.split(versionInfo, ",")));
list.add(vo);
}));
Collections.sort(list, Comparator.comparingInt(BrokerApiVersionVO::getBrokerId));
return ResponseData.create().data(list).success();
}
}

View File

@@ -15,6 +15,7 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import kafka.console.ConsumerConsole;
import kafka.console.TopicConsole;
@@ -48,6 +49,8 @@ public class ConsumerServiceImpl implements ConsumerService {
@Autowired
private TopicConsole topicConsole;
private ReentrantLock lock = new ReentrantLock();
@Override public ResponseData getConsumerGroupList(List<String> groupIds, Set<ConsumerGroupState> states) {
String simulateGroup = "inner_xxx_not_exit_group_###" + System.currentTimeMillis();
Set<String> groupList = new HashSet<>();
@@ -167,25 +170,7 @@ public class ConsumerServiceImpl implements ConsumerService {
}
@Override public ResponseData getTopicSubscribedByGroups(String topic) {
if (topicSubscribedInfo.isNeedRefresh(topic)) {
Set<String> groupIdList = consumerConsole.getConsumerGroupIdList(Collections.emptySet());
Map<String, Set<String>> cache = new HashMap<>();
Map<String, List<TopicPartition>> subscribeTopics = consumerConsole.listSubscribeTopics(groupIdList);
subscribeTopics.forEach((groupId, tl) -> {
tl.forEach(topicPartition -> {
String t = topicPartition.topic();
if (!cache.containsKey(t)) {
cache.put(t, new HashSet<>());
}
cache.get(t).add(groupId);
});
});
topicSubscribedInfo.refresh(cache);
}
Set<String> groups = topicSubscribedInfo.getSubscribedGroups(topic);
Set<String> groups = this.getSubscribedGroups(topic).getData();
Map<String, Object> res = new HashMap<>();
Collection<ConsumerConsole.TopicPartitionConsumeInfo> consumerDetail = consumerConsole.getConsumerDetail(groups);
@@ -212,6 +197,34 @@ public class ConsumerServiceImpl implements ConsumerService {
return ResponseData.create().data(Utils.abs(groupId.hashCode()) % size);
}
@Override public ResponseData<Set<String>> getSubscribedGroups(String topic) {
if (topicSubscribedInfo.isNeedRefresh(topic) && !lock.isLocked()) {
try {
lock.lock();
Set<String> groupIdList = consumerConsole.getConsumerGroupIdList(Collections.emptySet());
Map<String, Set<String>> cache = new HashMap<>();
Map<String, List<TopicPartition>> subscribeTopics = consumerConsole.listSubscribeTopics(groupIdList);
subscribeTopics.forEach((groupId, tl) -> {
tl.forEach(topicPartition -> {
String t = topicPartition.topic();
if (!cache.containsKey(t)) {
cache.put(t, new HashSet<>());
}
cache.get(t).add(groupId);
});
});
topicSubscribedInfo.refresh(cache);
} finally {
lock.unlock();
}
}
Set<String> groups = topicSubscribedInfo.getSubscribedGroups(topic);
return ResponseData.create(Set.class).data(groups).success();
}
class TopicSubscribedInfo {
long lastTime = System.currentTimeMillis();

View File

@@ -0,0 +1,288 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.beans.MessageFilter;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.enums.FilterType;
import com.xuxd.kafka.console.beans.vo.ConsumerRecordVO;
import com.xuxd.kafka.console.beans.vo.MessageDetailVO;
import com.xuxd.kafka.console.service.ConsumerService;
import com.xuxd.kafka.console.service.MessageService;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.ConsumerConsole;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.BytesDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.DoubleDeserializer;
import org.apache.kafka.common.serialization.FloatDeserializer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Service;
import scala.Tuple2;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:43:44
**/
@Slf4j
@Service
public class MessageServiceImpl implements MessageService, ApplicationContextAware {
@Autowired
private MessageConsole messageConsole;
@Autowired
private TopicConsole topicConsole;
@Autowired
private ConsumerConsole consumerConsole;
private ApplicationContext applicationContext;
private Map<String, Deserializer> deserializerDict = new HashMap<>();
{
deserializerDict.put("ByteArray", new ByteArrayDeserializer());
deserializerDict.put("Integer", new IntegerDeserializer());
deserializerDict.put("String", new StringDeserializer());
deserializerDict.put("Float", new FloatDeserializer());
deserializerDict.put("Double", new DoubleDeserializer());
deserializerDict.put("Byte", new BytesDeserializer());
deserializerDict.put("Long", new LongDeserializer());
}
public static String defaultDeserializer = "String";
@Override public ResponseData searchByTime(QueryMessage queryMessage) {
int maxNums = 5000;
Object searchContent = null;
String headerKey = null;
String headerValue = null;
MessageFilter filter = new MessageFilter();
switch (queryMessage.getFilter()) {
case BODY:
if (StringUtils.isBlank(queryMessage.getValue())) {
queryMessage.setFilter(FilterType.NONE);
} else {
if (StringUtils.isBlank(queryMessage.getValueDeserializer())) {
queryMessage.setValueDeserializer(defaultDeserializer);
}
switch (queryMessage.getValueDeserializer()) {
case "String":
searchContent = String.valueOf(queryMessage.getValue());
filter.setContainsValue(true);
break;
case "Integer":
searchContent = Integer.valueOf(queryMessage.getValue());
break;
case "Float":
searchContent = Float.valueOf(queryMessage.getValue());
break;
case "Double":
searchContent = Double.valueOf(queryMessage.getValue());
break;
case "Long":
searchContent = Long.valueOf(queryMessage.getValue());
break;
default:
throw new IllegalArgumentException("Message body type not support.");
}
}
break;
case HEADER:
headerKey = queryMessage.getHeaderKey();
if (StringUtils.isBlank(headerKey)) {
queryMessage.setFilter(FilterType.NONE);
} else {
if (StringUtils.isNotBlank(queryMessage.getHeaderValue())) {
headerValue = String.valueOf(queryMessage.getHeaderValue());
}
}
break;
default:
break;
}
FilterType filterType = queryMessage.getFilter();
Deserializer deserializer = deserializerDict.get(queryMessage.getValueDeserializer());
filter.setFilterType(filterType);
filter.setSearchContent(searchContent);
filter.setDeserializer(deserializer);
filter.setHeaderKey(headerKey);
filter.setHeaderValue(headerValue);
Set<TopicPartition> partitions = getPartitions(queryMessage);
long startTime = System.currentTimeMillis();
Tuple2<List<ConsumerRecord<byte[], byte[]>>, Object> tuple2 = messageConsole.searchBy(partitions, queryMessage.getStartTime(), queryMessage.getEndTime(), maxNums, filter);
List<ConsumerRecord<byte[], byte[]>> records = tuple2._1();
log.info("search message by time, cost time: {}", (System.currentTimeMillis() - startTime));
List<ConsumerRecordVO> vos = records.stream().filter(record -> record.timestamp() <= queryMessage.getEndTime())
.map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());
Map<String, Object> res = new HashMap<>();
vos = vos.subList(0, Math.min(maxNums, vos.size()));
res.put("maxNum", maxNums);
res.put("realNum", vos.size());
res.put("searchNum", tuple2._2());
res.put("data", vos);
return ResponseData.create().data(res).success();
}
@Override public ResponseData searchByOffset(QueryMessage queryMessage) {
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = searchRecordByOffset(queryMessage);
return ResponseData.create().data(recordMap.values().stream().map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList())).success();
}
@Override public ResponseData searchDetail(QueryMessage queryMessage) {
if (queryMessage.getPartition() == -1) {
throw new IllegalArgumentException();
}
if (StringUtils.isBlank(queryMessage.getKeyDeserializer())) {
queryMessage.setKeyDeserializer(defaultDeserializer);
}
if (StringUtils.isBlank(queryMessage.getValueDeserializer())) {
queryMessage.setValueDeserializer(defaultDeserializer);
}
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = searchRecordByOffset(queryMessage);
ConsumerRecord<byte[], byte[]> record = recordMap.get(new TopicPartition(queryMessage.getTopic(), queryMessage.getPartition()));
if (record != null) {
MessageDetailVO vo = new MessageDetailVO();
vo.setTopic(record.topic());
vo.setPartition(record.partition());
vo.setOffset(record.offset());
vo.setTimestamp(record.timestamp());
vo.setTimestampType(record.timestampType().name());
try {
vo.setKey(deserializerDict.get(queryMessage.getKeyDeserializer()).deserialize(queryMessage.getTopic(), record.key()));
} catch (Exception e) {
vo.setKey("KeyDeserializer Error: " + e.getMessage());
}
try {
vo.setValue(deserializerDict.get(queryMessage.getValueDeserializer()).deserialize(queryMessage.getTopic(), record.value()));
} catch (Exception e) {
vo.setValue("ValueDeserializer Error: " + e.getMessage());
}
record.headers().forEach(header -> {
MessageDetailVO.HeaderVO headerVO = new MessageDetailVO.HeaderVO();
headerVO.setKey(header.key());
headerVO.setValue(new String(header.value()));
vo.getHeaders().add(headerVO);
});
// to keep the code clean, look up the other service from the application context instead of injecting its implementation directly
Set<String> groupIds = applicationContext.getBean(ConsumerService.class).getSubscribedGroups(record.topic()).getData();
Collection<ConsumerConsole.TopicPartitionConsumeInfo> consumerDetail = consumerConsole.getConsumerDetail(groupIds);
List<MessageDetailVO.ConsumerVO> consumerVOS = new LinkedList<>();
consumerDetail.forEach(consumerInfo -> {
if (consumerInfo.topicPartition().equals(new TopicPartition(record.topic(), record.partition()))) {
MessageDetailVO.ConsumerVO consumerVO = new MessageDetailVO.ConsumerVO();
consumerVO.setGroupId(consumerInfo.getGroupId());
consumerVO.setStatus(consumerInfo.getConsumerOffset() <= record.offset() ? "unconsume" : "consumed");
consumerVOS.add(consumerVO);
}
});
vo.setConsumers(consumerVOS);
return ResponseData.create().data(vo).success();
}
return ResponseData.create().failed("Not found message detail.");
}
@Override public ResponseData deserializerList() {
return ResponseData.create().data(deserializerDict.keySet()).success();
}
@Override public ResponseData send(SendMessage message) {
messageConsole.send(message.getTopic(), message.getPartition(), message.getKey(), message.getBody(), message.getNum());
return ResponseData.create().success();
}
@Override public ResponseData resend(SendMessage message) {
TopicPartition partition = new TopicPartition(message.getTopic(), message.getPartition());
Map<TopicPartition, Object> offsetTable = new HashMap<>(1, 1.0f);
offsetTable.put(partition, message.getOffset());
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = messageConsole.searchBy(offsetTable);
if (recordMap.isEmpty()) {
return ResponseData.create().failed("Get message failed.");
}
ConsumerRecord<byte[], byte[]> record = recordMap.get(partition);
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(record.topic(), record.partition(), record.key(), record.value(), record.headers());
Tuple2<Object, String> tuple2 = messageConsole.sendSync(producerRecord);
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().success("success: " + tuple2._2()) : ResponseData.create().failed(tuple2._2());
}
@Override
public ResponseData delete(List<QueryMessage> messages) {
Map<TopicPartition, RecordsToDelete> params = new HashMap<>(messages.size(), 1f);
messages.forEach(message -> {
params.put(new TopicPartition(message.getTopic(), message.getPartition()), RecordsToDelete.beforeOffset(message.getOffset()));
});
Tuple2<Object, String> tuple2 = messageConsole.delete(params);
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
private Map<TopicPartition, ConsumerRecord<byte[], byte[]>> searchRecordByOffset(QueryMessage queryMessage) {
Set<TopicPartition> partitions = getPartitions(queryMessage);
Map<TopicPartition, Object> offsetTable = new HashMap<>();
partitions.forEach(tp -> {
offsetTable.put(tp, queryMessage.getOffset());
});
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = messageConsole.searchBy(offsetTable);
return recordMap;
}
private Set<TopicPartition> getPartitions(QueryMessage queryMessage) {
Set<TopicPartition> partitions = new HashSet<>();
if (queryMessage.getPartition() != -1) {
partitions.add(new TopicPartition(queryMessage.getTopic(), queryMessage.getPartition()));
} else {
List<TopicDescription> list = topicConsole.getTopicList(Collections.singleton(queryMessage.getTopic()));
if (CollectionUtils.isEmpty(list)) {
throw new IllegalArgumentException("Can not find topic info.");
}
Set<TopicPartition> set = list.get(0).partitions().stream()
.map(tp -> new TopicPartition(queryMessage.getTopic(), tp.partition())).collect(Collectors.toSet());
partitions.addAll(set);
}
return partitions;
}
@Override public void setApplicationContext(ApplicationContext context) throws BeansException {
this.applicationContext = context;
}
}

View File

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.google.common.collect.Lists;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import com.xuxd.kafka.console.beans.ResponseData;
@@ -10,6 +11,7 @@ import com.xuxd.kafka.console.beans.vo.OffsetAlignmentVO;
import com.xuxd.kafka.console.dao.MinOffsetAlignmentMapper;
import com.xuxd.kafka.console.service.OperationService;
import com.xuxd.kafka.console.utils.GsonUtil;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
@@ -19,6 +21,7 @@ import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.OperationConsole;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.PartitionReassignment;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.ObjectProvider;
@@ -162,4 +165,21 @@ public class OperationServiceImpl implements OperationService {
}
return ResponseData.create().success();
}
@Override public ResponseData proposedAssignments(String topic, List<Integer> brokerList) {
Map<String, Object> params = new HashMap<>();
params.put("version", 1);
Map<String, String> topicMap = new HashMap<>(1, 1.0f);
topicMap.put("topic", topic);
params.put("topics", Lists.newArrayList(topicMap));
List<String> list = brokerList.stream().map(String::valueOf).collect(Collectors.toList());
Map<TopicPartition, List<Object>> assignments = operationConsole.proposedAssignments(gson.toJson(params), StringUtils.join(list, ","));
List<CurrentReassignmentVO> res = new ArrayList<>(assignments.size());
assignments.forEach((tp, replicas) -> {
CurrentReassignmentVO vo = new CurrentReassignmentVO(tp.topic(), tp.partition(),
replicas.stream().map(x -> (Integer) x).collect(Collectors.toList()), null, null);
res.add(vo);
});
return ResponseData.create().data(res).success();
}
}

View File

@@ -9,28 +9,24 @@ import com.xuxd.kafka.console.beans.vo.TopicDescriptionVO;
import com.xuxd.kafka.console.beans.vo.TopicPartitionVO;
import com.xuxd.kafka.console.service.TopicService;
import com.xuxd.kafka.console.utils.GsonUtil;
import java.util.Calendar;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.TopicPartitionInfo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
/**
* kafka-console-ui.
*
@@ -44,6 +40,9 @@ public class TopicServiceImpl implements TopicService {
@Autowired
private TopicConsole topicConsole;
@Autowired
private MessageConsole messageConsole;
private Gson gson = GsonUtil.INSTANCE.get();
@Override public ResponseData getTopicNameList(boolean internal) {
@@ -82,8 +81,8 @@ public class TopicServiceImpl implements TopicService {
return ResponseData.create().data(topicDescriptions.stream().map(d -> TopicDescriptionVO.from(d))).success();
}
@Override public ResponseData deleteTopic(String topic) {
Tuple2<Object, String> tuple2 = topicConsole.deleteTopic(topic);
@Override public ResponseData deleteTopics(Collection<String> topics) {
Tuple2<Object, String> tuple2 = topicConsole.deleteTopics(topics);
return (Boolean) tuple2._1 ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2);
}
@@ -106,6 +105,10 @@ public class TopicServiceImpl implements TopicService {
mapTuple2._2().forEach((k, v) -> {
endTable.put(k.partition(), (Long) v);
});
// compute the valid time range.
Map<TopicPartition, Object> beginOffsetTable = new HashMap<>();
Map<TopicPartition, Object> endOffsetTable = new HashMap<>();
Map<Integer, TopicPartition> partitionCache = new HashMap<>();
for (TopicPartitionVO partitionVO : voList) {
long begin = beginTable.get(partitionVO.getPartition());
@@ -113,7 +116,29 @@ public class TopicServiceImpl implements TopicService {
partitionVO.setBeginOffset(begin);
partitionVO.setEndOffset(end);
partitionVO.setDiff(end - begin);
if (begin != end) {
TopicPartition partition = new TopicPartition(topic, partitionVO.getPartition());
partitionCache.put(partitionVO.getPartition(), partition);
beginOffsetTable.put(partition, begin);
endOffsetTable.put(partition, end - 1); // the searched offset must be < the end offset, so use end - 1
} else {
partitionVO.setBeginTime(-1L);
partitionVO.setEndTime(-1L);
}
}
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> beginRecordMap = messageConsole.searchBy(beginOffsetTable);
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> endRecordMap = messageConsole.searchBy(endOffsetTable);
for (TopicPartitionVO partitionVO : voList) {
if (partitionVO.getBeginTime() != -1L) {
TopicPartition partition = partitionCache.get(partitionVO.getPartition());
partitionVO.setBeginTime(beginRecordMap.containsKey(partition) ? beginRecordMap.get(partition).timestamp() : -1L);
partitionVO.setEndTime(endRecordMap.containsKey(partition) ? endRecordMap.get(partition).timestamp() : -1L);
}
}
return ResponseData.create().data(voList).success();
}

View File

@@ -1,9 +1,15 @@
package com.xuxd.kafka.console.utils;
import com.google.common.base.Preconditions;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.ClassUtils;
@@ -35,6 +41,52 @@ public class ConvertUtil {
}
});
}
Iterator<Map.Entry<String, Object>> iterator = res.entrySet().iterator();
while (iterator.hasNext()) {
if (iterator.next().getValue() == null) {
iterator.remove();
}
}
return res;
}
public static String toJsonString(Object src) {
return GsonUtil.INSTANCE.get().toJson(src);
}
public static Properties toProperties(String jsonStr) {
return GsonUtil.INSTANCE.get().fromJson(jsonStr, Properties.class);
}
public static String jsonStr2PropertiesStr(String jsonStr) {
StringBuilder sb = new StringBuilder();
Map<String, Object> map = GsonUtil.INSTANCE.get().fromJson(jsonStr, Map.class);
map.keySet().forEach(k -> {
sb.append(k).append("=").append(map.get(k).toString()).append(System.lineSeparator());
});
return sb.toString();
}
public static List<String> jsonStr2List(String jsonStr) {
List<String> res = new LinkedList<>();
Map<String, Object> map = GsonUtil.INSTANCE.get().fromJson(jsonStr, Map.class);
map.forEach((k, v) -> {
res.add(k + "=" + v);
});
return res;
}
public static String propertiesStr2JsonStr(String propertiesStr) {
String res = "{}";
try (ByteArrayInputStream baos = new ByteArrayInputStream(propertiesStr.getBytes())) {
Properties properties = new Properties();
properties.load(baos);
res = toJsonString(properties);
} catch (IOException e) {
log.error("propertiesStr2JsonStr error.", e);
}
return res;
}
}
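A small usage sketch (my addition, not part of the diff) of the new helpers: a properties-style string entered on the cluster page can be stored as the JSON kept in T_CLUSTER_INFO.PROPERTIES and later rebuilt into java.util.Properties, as KafkaAclSchedule does with ConvertUtil.toProperties. The property values are arbitrary examples.
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.Properties;
public class ConvertUtilExample {
    public static void main(String[] args) {
        // properties-file text -> JSON string (the form that gets persisted)
        String json = ConvertUtil.propertiesStr2JsonStr(
            "security.protocol=SASL_PLAINTEXT\nsasl.mechanism=SCRAM-SHA-256");
        // JSON string -> Properties (the form passed to the Kafka clients)
        Properties props = ConvertUtil.toProperties(json);
        System.out.println(props.getProperty("sasl.mechanism")); // SCRAM-SHA-256
    }
}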

View File

@@ -0,0 +1,61 @@
package com.xuxd.kafka.console.utils;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.auth.SecurityProtocol;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-06 11:07:41
**/
public class SaslUtil {
public static final Pattern JAAS_PATTERN = Pattern.compile("^.*(username=\"(.*)\"[ \t]+).*$");
private SaslUtil() {
}
public static String findUsername(String saslJaasConfig) {
Matcher matcher = JAAS_PATTERN.matcher(saslJaasConfig);
return matcher.find() ? matcher.group(2) : "";
}
public static boolean isEnableSasl() {
Properties properties = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties();
if (!properties.containsKey(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG)) {
return false;
}
String s = properties.getProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG);
SecurityProtocol protocol = SecurityProtocol.valueOf(s);
switch (protocol) {
case SASL_SSL:
case SASL_PLAINTEXT:
return true;
default:
return false;
}
}
public static boolean isEnableScram() {
Properties properties = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties();
if (!properties.containsKey(SaslConfigs.SASL_MECHANISM)) {
return false;
}
String s = properties.getProperty(SaslConfigs.SASL_MECHANISM);
ScramMechanism mechanism = ScramMechanism.fromMechanismName(s);
switch (mechanism) {
case UNKNOWN:
return false;
default:
return true;
}
}
}
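For reference, a standalone sketch (my addition) of what JAAS_PATTERN extracts; isEnableSasl and isEnableScram are not shown because they read the per-request ContextConfigHolder. The JAAS value below is only a representative example.
import com.xuxd.kafka.console.utils.SaslUtil;
public class SaslUtilExample {
    public static void main(String[] args) {
        // A typical SCRAM sasl.jaas.config value; findUsername returns the username attribute.
        String jaas = "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";";
        System.out.println(SaslUtil.findUsername(jaas)); // admin
    }
}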

View File

@@ -6,23 +6,20 @@ server:
kafka:
config:
# Kafka broker addresses, comma-separated
bootstrap-server: 'localhost:9092'
request-timeout-ms: 5000
# Whether ACL is enabled on the brokers; if not, all of the settings below can be ignored and only the cluster address above is needed
enable-acl: false
# Only two security protocols are supported: SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise this setting does not matter
security-protocol: SASL_PLAINTEXT
sasl-mechanism: SCRAM-SHA-256
# Super admin username (already configured as a super user on the broker)
admin-username: admin
# Super admin password
admin-password: admin
# Automatically create the configured super admin user at startup
admin-create: false
# ZooKeeper address used by the broker; required when the super admin user is created automatically at startup, otherwise ignored
zookeeper-addr: 'localhost:2181'
sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
# If the "default" cluster does not exist, it is loaded at startup (when a cluster address is configured here); if it already exists, it is not loaded
# Kafka broker addresses, comma-separated; configuring them here is optional, cluster info can also be added on the UI after startup
bootstrap-server:
# Other cluster properties
properties:
# request.timeout.ms: 5000
# Cache connections. Without caching, a connection is created per request, which is usually still fast; in some cases (e.g. with ACL enabled) queries can be slow, so the caches below can be set to true,
# or set them to true simply to speed up queries
# cache the admin client connection
cache-admin-connection: false
# cache the producer connection
cache-producer-connection: false
# cache the consumer connection
cache-consumer-connection: false
spring:
application:
@@ -46,6 +43,7 @@ spring:
logging:
home: ./
# For SCRAM-based ACL, created usernames/passwords are recorded here; a scheduled scan removes entries from the DB for users that no longer exist in the cluster
cron:
# clear-dirty-user: 0 * * * * ?
clear-dirty-user: 0 0 1 * * ?

View File

@@ -1,23 +1,37 @@
-- DROP TABLE IF EXISTS T_KAFKA_USER;
-- Users for Kafka ACL when SASL_SCRAM is enabled
CREATE TABLE IF NOT EXISTS T_KAFKA_USER
(
ID IDENTITY NOT NULL COMMENT 'primary key',
USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT '',
PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'age',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
ID IDENTITY NOT NULL COMMENT 'primary key',
USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'username',
PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'password',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
CLUSTER_INFO_ID BIGINT NOT NULL COMMENT 'cluster ID from the cluster info table',
PRIMARY KEY (ID),
UNIQUE (USERNAME)
);
-- Offset alignment info used by the message sync solution
CREATE TABLE IF NOT EXISTS T_MIN_OFFSET_ALIGNMENT
(
ID IDENTITY NOT NULL COMMENT 'primary key',
GROUP_ID VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'groupId',
TOPIC VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'topic',
THAT_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for that kafka cluster',
THIS_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for this kafka cluster',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
ID IDENTITY NOT NULL COMMENT 'primary key',
GROUP_ID VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'groupId',
TOPIC VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'topic',
THAT_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for that kafka cluster',
THIS_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for this kafka cluster',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
PRIMARY KEY (ID),
UNIQUE (GROUP_ID, TOPIC)
);
-- Multi-cluster management: per-cluster configuration info
CREATE TABLE IF NOT EXISTS T_CLUSTER_INFO
(
ID IDENTITY NOT NULL COMMENT 'primary key',
CLUSTER_NAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'cluster name',
ADDRESS VARCHAR(256) NOT NULL DEFAULT '' COMMENT 'cluster address',
PROPERTIES VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'other cluster properties',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
PRIMARY KEY (ID),
UNIQUE (CLUSTER_NAME)
);
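A hypothetical sketch (my addition) of how a T_CLUSTER_INFO row maps onto ClusterInfoDO as used by ClusterService.addClusterInfo; the setter names are assumed from the Lombok-style data object and all values are placeholders.
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
public class ClusterInfoExample {
    public static ClusterInfoDO sampleCluster() {
        ClusterInfoDO infoDO = new ClusterInfoDO();
        infoDO.setClusterName("default");    // CLUSTER_NAME, unique (setter assumed)
        infoDO.setAddress("localhost:9092"); // ADDRESS: comma-separated broker list (setter assumed)
        // PROPERTIES is stored as a JSON string and rebuilt via ConvertUtil.toProperties(...)
        infoDO.setProperties("{\"security.protocol\":\"SASL_PLAINTEXT\",\"sasl.mechanism\":\"SCRAM-SHA-256\"}");
        return infoDO;
    }
}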

View File

@@ -0,0 +1,333 @@
package kafka.console
import com.xuxd.kafka.console.config.ContextConfigHolder
import kafka.utils.Implicits.MapExtensionMethods
import kafka.utils.Logging
import org.apache.kafka.clients._
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.consumer.internals.{ConsumerNetworkClient, RequestFuture}
import org.apache.kafka.common.Node
import org.apache.kafka.common.config.ConfigDef.ValidString.in
import org.apache.kafka.common.config.ConfigDef.{Importance, Type}
import org.apache.kafka.common.config.{AbstractConfig, ConfigDef}
import org.apache.kafka.common.errors.AuthenticationException
import org.apache.kafka.common.internals.ClusterResourceListeners
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersionCollection
import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.network.Selector
import org.apache.kafka.common.protocol.Errors
import org.apache.kafka.common.requests._
import org.apache.kafka.common.utils.{KafkaThread, LogContext, Time}
import org.slf4j.{Logger, LoggerFactory}
import java.io.IOException
import java.util.Properties
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{ConcurrentLinkedQueue, TimeUnit}
import scala.jdk.CollectionConverters.{ListHasAsScala, MapHasAsJava, PropertiesHasAsScala, SetHasAsScala}
import scala.util.{Failure, Success, Try}
/**
* kafka-console-ui.
*
* Copied from {@link kafka.admin.BrokerApiVersionsCommand}.
*
* @author xuxd
* @date 2022-01-22 15:15:57
* */
object BrokerApiVersion{
protected lazy val log : Logger = LoggerFactory.getLogger(this.getClass)
def listAllBrokerApiVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
val res = new java.util.HashMap[Node, NodeApiVersions]()
val adminClient = createAdminClient()
try {
adminClient.awaitBrokers()
val brokerMap = adminClient.listAllBrokerVersionInfo()
brokerMap.forKeyValue {
(broker, versionInfoOrError) =>
versionInfoOrError match {
case Success(v) => {
res.put(broker, v)
}
case Failure(v) => log.error(s"${broker} -> ERROR: ${v}\n")
}
}
} finally {
adminClient.close()
}
res
}
private def createAdminClient(): AdminClient = {
val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
AdminClient.create(props)
}
// org.apache.kafka.clients.admin.AdminClient doesn't currently expose a way to retrieve the supported api versions.
// We inline the bits we need from kafka.admin.AdminClient so that we can delete it.
private class AdminClient(val time: Time,
val client: ConsumerNetworkClient,
val bootstrapBrokers: List[Node]) extends Logging {
@volatile var running = true
val pendingFutures = new ConcurrentLinkedQueue[RequestFuture[ClientResponse]]()
val networkThread = new KafkaThread("admin-client-network-thread", () => {
try {
while (running)
client.poll(time.timer(Long.MaxValue))
} catch {
case t: Throwable =>
error("admin-client-network-thread exited", t)
} finally {
pendingFutures.forEach { future =>
try {
future.raise(Errors.UNKNOWN_SERVER_ERROR)
} catch {
case _: IllegalStateException => // It is OK if the future has been completed
}
}
pendingFutures.clear()
}
}, true)
networkThread.start()
private def send(target: Node,
request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
val future = client.send(target, request)
pendingFutures.add(future)
future.awaitDone(Long.MaxValue, TimeUnit.MILLISECONDS)
pendingFutures.remove(future)
if (future.succeeded())
future.value().responseBody()
else
throw future.exception()
}
private def sendAnyNode(request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
bootstrapBrokers.foreach { broker =>
try {
return send(broker, request)
} catch {
case e: AuthenticationException =>
throw e
case e: Exception =>
debug(s"Request ${request.apiKey()} failed against node $broker", e)
}
}
throw new RuntimeException(s"Request ${request.apiKey()} failed on brokers $bootstrapBrokers")
}
private def getApiVersions(node: Node): ApiVersionCollection = {
val response = send(node, new ApiVersionsRequest.Builder()).asInstanceOf[ApiVersionsResponse]
Errors.forCode(response.data.errorCode).maybeThrow()
response.data.apiKeys
}
/**
* Wait until there is a non-empty list of brokers in the cluster.
*/
def awaitBrokers(): Unit = {
var nodes = List[Node]()
val start = System.currentTimeMillis()
val maxWait = 30 * 1000
do {
nodes = findAllBrokers()
if (nodes.isEmpty) {
Thread.sleep(50)
}
}
while (nodes.isEmpty && (System.currentTimeMillis() - start < maxWait))
}
private def findAllBrokers(): List[Node] = {
val request = MetadataRequest.Builder.allTopics()
val response = sendAnyNode(request).asInstanceOf[MetadataResponse]
val errors = response.errors
if (!errors.isEmpty) {
log.info(s"Metadata request contained errors: $errors")
}
// in Kafka 3.x this method is buildCluster, which replaces cluster()
response.buildCluster.nodes.asScala.toList
// response.cluster().nodes.asScala.toList
}
def listAllBrokerVersionInfo(): Map[Node, Try[NodeApiVersions]] =
findAllBrokers().map { broker =>
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker)))
}.toMap
def close(): Unit = {
running = false
try {
client.close()
} catch {
case e: IOException =>
error("Exception closing nioSelector:", e)
}
}
}
private object AdminClient {
val DefaultConnectionMaxIdleMs = 9 * 60 * 1000
val DefaultRequestTimeoutMs = 5000
val DefaultSocketConnectionSetupMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG
val DefaultSocketConnectionSetupMaxMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG
val DefaultMaxInFlightRequestsPerConnection = 100
val DefaultReconnectBackoffMs = 50
val DefaultReconnectBackoffMax = 50
val DefaultSendBufferBytes = 128 * 1024
val DefaultReceiveBufferBytes = 32 * 1024
val DefaultRetryBackoffMs = 100
val AdminClientIdSequence = new AtomicInteger(1)
val AdminConfigDef = {
val config = new ConfigDef()
.define(
CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
Type.LIST,
Importance.HIGH,
CommonClientConfigs.BOOTSTRAP_SERVERS_DOC)
.define(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG,
Type.STRING,
ClientDnsLookup.USE_ALL_DNS_IPS.toString,
in(ClientDnsLookup.USE_ALL_DNS_IPS.toString,
ClientDnsLookup.RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY.toString),
Importance.MEDIUM,
CommonClientConfigs.CLIENT_DNS_LOOKUP_DOC)
.define(
CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
ConfigDef.Type.STRING,
CommonClientConfigs.DEFAULT_SECURITY_PROTOCOL,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SECURITY_PROTOCOL_DOC)
.define(
CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG,
ConfigDef.Type.INT,
DefaultRequestTimeoutMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.REQUEST_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_DOC)
.define(
CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG,
ConfigDef.Type.LONG,
DefaultRetryBackoffMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.RETRY_BACKOFF_MS_DOC)
.withClientSslSupport()
.withClientSaslSupport()
config
}
class AdminConfig(originals: Map[_, _]) extends AbstractConfig(AdminConfigDef, originals.asJava, false)
def create(props: Properties): AdminClient = {
val properties = new Properties()
val names = props.stringPropertyNames()
for (name <- names.asScala.toSet) {
properties.put(name, props.get(name).toString())
}
create(properties.asScala.toMap)
}
def create(props: Map[String, _]): AdminClient = create(new AdminConfig(props))
def create(config: AdminConfig): AdminClient = {
val clientId = "admin-" + AdminClientIdSequence.getAndIncrement()
val logContext = new LogContext(s"[LegacyAdminClient clientId=$clientId] ")
val time = Time.SYSTEM
val metrics = new Metrics(time)
val metadata = new Metadata(100L, 60 * 60 * 1000L, logContext,
new ClusterResourceListeners)
val channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext)
val requestTimeoutMs = config.getInt(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMaxMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG)
val retryBackoffMs = config.getLong(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG)
val brokerUrls = config.getList(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)
val clientDnsLookup = config.getString(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG)
val brokerAddresses = ClientUtils.parseAndValidateAddresses(brokerUrls, clientDnsLookup)
metadata.bootstrap(brokerAddresses)
val selector = new Selector(
DefaultConnectionMaxIdleMs,
metrics,
time,
"admin",
channelBuilder,
logContext)
// The NetworkClient constructor signature differs between client versions, so the compatibility issue here changes with the version.
// For 3.x clients, use this constructor:
val networkClient = new NetworkClient(
selector,
metadata,
clientId,
DefaultMaxInFlightRequestsPerConnection,
DefaultReconnectBackoffMs,
DefaultReconnectBackoffMax,
DefaultSendBufferBytes,
DefaultReceiveBufferBytes,
requestTimeoutMs,
connectionSetupTimeoutMs,
connectionSetupTimeoutMaxMs,
time,
true,
new ApiVersions,
logContext)
// val networkClient = new NetworkClient(
// selector,
// metadata,
// clientId,
// DefaultMaxInFlightRequestsPerConnection,
// DefaultReconnectBackoffMs,
// DefaultReconnectBackoffMax,
// DefaultSendBufferBytes,
// DefaultReceiveBufferBytes,
// requestTimeoutMs,
// connectionSetupTimeoutMs,
// connectionSetupTimeoutMaxMs,
// ClientDnsLookup.USE_ALL_DNS_IPS,
// time,
// true,
// new ApiVersions,
// logContext)
val highLevelClient = new ConsumerNetworkClient(
logContext,
networkClient,
metadata,
time,
retryBackoffMs,
requestTimeoutMs,
Integer.MAX_VALUE)
new AdminClient(
time,
highLevelClient,
metadata.fetch.nodes.asScala.toList)
}
}
}
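The inlined legacy client above is what backs the broker API compatibility view on the overview page. A minimal usage sketch, assuming the per-request ContextConfig has already been populated and that the enclosing wrapper object is BrokerApiVersion in package kafka.console (as referenced from ClusterConsole below):
val versions = BrokerApiVersion.listAllBrokerApiVersionInfo()
versions.forEach((node, apiVersions) => {
  // NodeApiVersions.toString(true) prints one line per supported API range,
  // similar to the kafka-broker-api-versions.sh output.
  println(s"broker ${node.idString()}: ${apiVersions.toString(true)}")
})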

View File

@@ -0,0 +1,84 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import org.apache.kafka.clients.admin.{Admin, AlterClientQuotasOptions}
import org.apache.kafka.common.quota.{ClientQuotaAlteration, ClientQuotaEntity, ClientQuotaFilter, ClientQuotaFilterComponent}
import java.util.Collections
import java.util.concurrent.TimeUnit
import scala.jdk.CollectionConverters.{IterableHasAsJava, ListHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
* client quota console.
*
* @author xuxd
* @date 2022-12-30 10:55:56
* */
class ClientQuotaConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig) with Logging {
def getClientQuotasConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String]): java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]] = {
withAdminClientAndCatchError(admin => getAllClientQuotasConfigs(admin, entityTypes.asScala.toList, entityNames.asScala.toList),
e => {
log.error("getAllClientQuotasConfigs error.", e)
java.util.Collections.emptyMap()
})
}.asInstanceOf[java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]]]
def addQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeAddedMap: java.util.Map[String, String]): (Boolean, String) = {
alterQuotaConfigs(entityTypes, entityNames, configsToBeAddedMap, Collections.emptyList())
}
def deleteQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeDeleted: java.util.List[String]): (Boolean, String) = {
alterQuotaConfigs(entityTypes, entityNames, Collections.emptyMap(), configsToBeDeleted)
}
def alterQuotaConfigs(entityTypes: java.util.List[String], entityNames: java.util.List[String], configsToBeAddedMap: java.util.Map[String, String], configsToBeDeleted: java.util.List[String]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
alterQuotaConfigsInner(admin, entityTypes.asScala.toList, entityNames.asScala.toList, configsToBeAddedMap.asScala.toMap, configsToBeDeleted.asScala.toSeq)
(true, "")
},
e => {
log.error("getAllClientQuotasConfigs error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
private def getAllClientQuotasConfigs(adminClient: Admin, entityTypes: List[String], entityNames: List[String]): java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]] = {
val components = entityTypes.map(Some(_)).zipAll(entityNames.map(Some(_)), None, None).map { case (entityType, entityNameOpt) =>
entityNameOpt match {
case Some("") => ClientQuotaFilterComponent.ofDefaultEntity(entityType.get)
case Some(name) => ClientQuotaFilterComponent.ofEntity(entityType.get, name)
case None => ClientQuotaFilterComponent.ofEntityType(entityType.get)
}
}
adminClient.describeClientQuotas(ClientQuotaFilter.containsOnly(components.asJava)).entities.get(30, TimeUnit.SECONDS)
}.asInstanceOf[java.util.Map[ClientQuotaEntity, java.util.Map[String, Double]]]
private def alterQuotaConfigsInner(adminClient: Admin, entityTypes: List[String], entityNames: List[String], configsToBeAddedMap: Map[String, String], configsToBeDeleted: Seq[String]) = {
// handle altering client/user quota configs
// val oldConfig = getAllClientQuotasConfigs(adminClient, entityTypes, entityNames)
// val invalidConfigs = configsToBeDeleted.filterNot(oldConfig.asScala.toMap.contains)
// if (invalidConfigs.nonEmpty)
// throw new InvalidConfigurationException(s"Invalid config(s): ${invalidConfigs.mkString(",")}")
val alterEntityNames = entityNames.map(en => if (en.nonEmpty) en else null)
// Explicitly populate a HashMap to ensure nulls are recorded properly.
val alterEntityMap = new java.util.HashMap[String, String]
entityTypes.zip(alterEntityNames).foreach { case (k, v) => alterEntityMap.put(k, v) }
val entity = new ClientQuotaEntity(alterEntityMap)
val alterOptions = new AlterClientQuotasOptions().validateOnly(false)
val alterOps = (configsToBeAddedMap.map { case (key, value) =>
val doubleValue = try value.toDouble catch {
case _: NumberFormatException =>
throw new IllegalArgumentException(s"Cannot parse quota configuration value for $key: $value")
}
new ClientQuotaAlteration.Op(key, doubleValue)
} ++ configsToBeDeleted.map(key => new ClientQuotaAlteration.Op(key, null))).asJavaCollection
adminClient.alterClientQuotas(Collections.singleton(new ClientQuotaAlteration(entity, alterOps)), alterOptions)
.all().get(60, TimeUnit.SECONDS)
}
}
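Like the other consoles, ClientQuotaConsole relies on the per-request ContextConfig. A minimal sketch (bootstrap address, client id and rate value are placeholders) mirroring the test further down, which adds a producer byte-rate quota for a single client id:
import com.xuxd.kafka.console.config.{ContextConfig, ContextConfigHolder, KafkaConfig}
import org.apache.kafka.common.config.internals.QuotaConfigs
import org.apache.kafka.common.quota.ClientQuotaEntity

val console = new ClientQuotaConsole(new KafkaConfig())
val ctx = new ContextConfig()
ctx.setBootstrapServer("localhost:9092") // placeholder address
ContextConfigHolder.CONTEXT_CONFIG.set(ctx)

val toAdd = new java.util.HashMap[String, String]()
toAdd.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "1048576") // ~1 MB/s
console.addQuotaConfigs(
  java.util.Arrays.asList(ClientQuotaEntity.CLIENT_ID),
  java.util.Arrays.asList("clientA"),
  toAdd)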

View File

@@ -1,12 +1,13 @@
package kafka.console
import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.kafka.clients.NodeApiVersions
import org.apache.kafka.clients.admin.DescribeClusterResult
import org.apache.kafka.common.Node
import java.util.Collections
import java.util.concurrent.TimeUnit
import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
import com.xuxd.kafka.console.config.KafkaConfig
import org.apache.kafka.clients.admin.DescribeClusterResult
import scala.jdk.CollectionConverters.{CollectionHasAsScala, SetHasAsJava, SetHasAsScala}
/**
@@ -19,6 +20,7 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
def clusterInfo(): ClusterInfo = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val clusterResult: DescribeClusterResult = admin.describeCluster()
val clusterInfo = new ClusterInfo
clusterInfo.setClusterId(clusterResult.clusterId().get(timeoutMs, TimeUnit.MILLISECONDS))
@@ -41,4 +43,8 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
new ClusterInfo
}).asInstanceOf[ClusterInfo]
}
def listBrokerVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
BrokerApiVersion.listAllBrokerApiVersionInfo()
}
}

View File

@@ -3,8 +3,7 @@ package kafka.console
import java.util
import java.util.Collections
import java.util.concurrent.TimeUnit
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.console.ConfigConsole.BrokerLoggerConfigType
import kafka.server.ConfigType
import org.apache.kafka.clients.admin.{AlterConfigOp, Config, ConfigEntry, DescribeConfigsOptions}
@@ -69,6 +68,7 @@ class ConfigConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfi
val configResource = new ConfigResource(getResourceTypeAndValidate(entityType, entityName), entityName)
val config = Map(configResource -> Collections.singletonList(new AlterConfigOp(entry, opType)).asInstanceOf[util.Collection[AlterConfigOp]])
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.incrementalAlterConfigs(config.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
}, e => {

View File

@@ -1,6 +1,6 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo
import org.apache.kafka.clients.admin._
import org.apache.kafka.clients.consumer.{ConsumerConfig, OffsetAndMetadata, OffsetResetStrategy}
@@ -75,6 +75,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
val endOffsets = commitOffsets.keySet.map { topicPartition =>
topicPartition -> OffsetSpec.latest
}.toMap
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.listOffsets(endOffsets.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
}, e => {
log.error("listOffsets error.", e)
@@ -166,6 +167,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def resetPartitionToTargetOffset(groupId: String, partition: TopicPartition, offset: Long): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.alterConsumerGroupOffsets(groupId, Map(partition -> new OffsetAndMetadata(offset)).asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
}, e => {
@@ -178,7 +180,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
timestamp: java.lang.Long): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val logOffsets = getLogTimestampOffsets(admin, groupId, topicPartitions.asScala, timestamp)
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.alterConsumerGroupOffsets(groupId, logOffsets.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
}, e => {
@@ -256,6 +258,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
val timestampOffsets = topicPartitions.map { topicPartition =>
topicPartition -> OffsetSpec.forTimestamp(timestamp)
}.toMap
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val offsets = admin.listOffsets(
timestampOffsets.asJava,
new ListOffsetsOptions().timeoutMs(timeoutMs)
@@ -280,6 +283,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
val endOffsets = topicPartitions.map { topicPartition =>
topicPartition -> OffsetSpec.latest
}.toMap
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val offsets = admin.listOffsets(
endOffsets.asJava,
new ListOffsetsOptions().timeoutMs(timeoutMs)

View File

@@ -3,9 +3,8 @@ package kafka.console
import java.util
import java.util.concurrent.TimeUnit
import java.util.{Collections, List}
import com.xuxd.kafka.console.beans.AclEntry
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.common.acl._
import org.apache.kafka.common.resource.{ResourcePattern, ResourcePatternFilter, ResourceType}
@@ -58,6 +57,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def addAcl(acls: List[AclBinding]): Boolean = {
withAdminClient(adminClient => {
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
adminClient.createAcls(acls).all().get(timeoutMs, TimeUnit.MILLISECONDS)
true
} catch {
@@ -100,6 +100,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def deleteAcl(entry: AclEntry, allResource: Boolean, allPrincipal: Boolean, allOperation: Boolean): Boolean = {
withAdminClient(adminClient => {
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val result = adminClient.deleteAcls(Collections.singleton(entry.toAclBindingFilter(allResource, allPrincipal, allOperation))).all().get(timeoutMs, TimeUnit.MILLISECONDS)
log.info("delete acl: {}", result)
true
@@ -113,6 +114,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def deleteAcl(filters: util.Collection[AclBindingFilter]): Boolean = {
withAdminClient(adminClient => {
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val result = adminClient.deleteAcls(filters).all().get(timeoutMs, TimeUnit.MILLISECONDS)
log.info("delete acl: {}", result)
true

View File

@@ -1,16 +1,16 @@
package kafka.console
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.server.ConfigType
import kafka.utils.Implicits.PropertiesOps
import org.apache.kafka.clients.admin._
import org.apache.kafka.common.config.SaslConfigs
import org.apache.kafka.common.security.scram.internals.{ScramCredentialUtils, ScramFormatter}
import java.security.MessageDigest
import java.util
import java.util.concurrent.TimeUnit
import java.util.{Properties, Set}
import com.xuxd.kafka.console.config.KafkaConfig
import kafka.server.ConfigType
import kafka.utils.Implicits.PropertiesOps
import org.apache.kafka.clients.admin._
import org.apache.kafka.common.security.scram.internals.{ScramCredentialUtils, ScramFormatter}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, DictionaryHasAsScala, SeqHasAsJava}
/**
@@ -35,31 +35,32 @@ class KafkaConfigConsole(config: KafkaConfig) extends KafkaConsole(config: Kafka
}).asInstanceOf[util.Map[String, UserScramCredentialsDescription]]
}
def addOrUpdateUser(name: String, pass: String): Boolean = {
def addOrUpdateUser(name: String, pass: String): (Boolean, String) = {
withAdminClient(adminClient => {
try {
adminClient.alterUserScramCredentials(util.Arrays.asList(
new UserScramCredentialUpsertion(name,
new ScramCredentialInfo(ScramMechanism.fromMechanismName(config.getSaslMechanism), defaultIterations), pass)))
.all().get(timeoutMs, TimeUnit.MILLISECONDS)
true
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val mechanisms = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM).split(",").toSeq
val scrams = mechanisms.map(m => new UserScramCredentialUpsertion(name,
new ScramCredentialInfo(ScramMechanism.fromMechanismName(m), defaultIterations), pass))
adminClient.alterUserScramCredentials(scrams.asInstanceOf[Seq[UserScramCredentialAlteration]].asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
} catch {
case ex: Exception => log.error("addOrUpdateUser error", ex)
false
(false, ex.getMessage)
}
}).asInstanceOf[Boolean]
}).asInstanceOf[(Boolean, String)]
}
def addOrUpdateUserWithZK(name: String, pass: String): Boolean = {
withZKClient(adminZkClient => {
try {
val credential = new ScramFormatter(org.apache.kafka.common.security.scram.internals.ScramMechanism.forMechanismName(config.getSaslMechanism))
val credential = new ScramFormatter(org.apache.kafka.common.security.scram.internals.ScramMechanism.forMechanismName(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM)))
.generateCredential(pass, defaultIterations)
val credentialStr = ScramCredentialUtils.credentialToString(credential)
val userConfig: Properties = new Properties()
userConfig.put(config.getSaslMechanism, credentialStr)
userConfig.put(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM), credentialStr)
val configs = adminZkClient.fetchEntityConfig(ConfigType.User, name)
userConfig ++= configs
@@ -101,6 +102,7 @@ class KafkaConfigConsole(config: KafkaConfig) extends KafkaConsole(config: Kafka
// .all().get(timeoutMs, TimeUnit.MILLISECONDS)
// all delete
val userDetail = getUserDetailList(util.Collections.singletonList(name))
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
userDetail.values().asScala.foreach(u => {
adminClient.alterUserScramCredentials(u.credentialInfos().asScala.map(s => new UserScramCredentialDeletion(u.name(), s.mechanism())
.asInstanceOf[UserScramCredentialAlteration]).toList.asJava)

View File

@@ -1,19 +1,21 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import kafka.zk.{AdminZkClient, KafkaZkClient}
import org.apache.kafka.clients.CommonClientConfigs
import com.google.common.cache.{CacheLoader, RemovalListener, RemovalNotification}
import com.xuxd.kafka.console.cache.TimeBasedCache
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.zk.AdminZkClient
import org.apache.kafka.clients.admin._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.config.SaslConfigs
import org.apache.kafka.common.requests.ListOffsetsResponse
import org.apache.kafka.common.serialization.ByteArrayDeserializer
import org.apache.kafka.common.utils.Time
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, ByteArraySerializer, StringSerializer}
import org.slf4j.{Logger, LoggerFactory}
import java.util.Properties
import java.util.concurrent.Executors
import scala.collection.{Map, Seq}
import scala.concurrent.{ExecutionContext, Future}
import scala.jdk.CollectionConverters.{MapHasAsJava, MapHasAsScala}
/**
@@ -24,15 +26,17 @@ import scala.jdk.CollectionConverters.{MapHasAsJava, MapHasAsScala}
* */
class KafkaConsole(config: KafkaConfig) {
protected val timeoutMs: Int = config.getRequestTimeoutMs
// protected val timeoutMs: Int = config.getRequestTimeoutMs
protected def withAdminClient(f: Admin => Any): Any = {
val admin = createAdminClient()
val admin = if (config.isCacheAdminConnection()) AdminCache.cache.get(ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer()) else createAdminClient()
try {
f(admin)
} finally {
admin.close()
if (!config.isCacheAdminConnection) {
admin.close()
}
}
}
@@ -46,28 +50,69 @@ class KafkaConsole(config: KafkaConfig) {
protected def withConsumerAndCatchError(f: KafkaConsumer[Array[Byte], Array[Byte]] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
val consumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
// val props = getProps()
// props.putAll(extra)
// props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
// val consumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
ConsumerCache.setProperties(extra)
val consumer = if (config.isCacheConsumerConnection) ConsumerCache.cache.get(ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer()) else KafkaConsole.createByteArrayKVConsumer(extra)
try {
f(consumer)
} catch {
case er: Exception => eh(er)
}
finally {
consumer.close()
ConsumerCache.clearProperties()
if (!config.isCacheConsumerConnection) {
consumer.close()
}
}
}
protected def withZKClient(f: AdminZkClient => Any): Any = {
val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM)
val adminZkClient = new AdminZkClient(zkClient)
protected def withProducerAndCatchError(f: KafkaProducer[String, String] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
ProducerCache.setProperties(extra)
val producer = if (config.isCacheProducerConnection) ProducerCache.cache.get(ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer) else KafkaConsole.createProducer(extra)
try {
f(adminZkClient)
} finally {
zkClient.close()
f(producer)
} catch {
case er: Exception => eh(er)
}
finally {
ProducerCache.clearProperties()
if (!config.isCacheProducerConnection) {
producer.close()
}
}
}
protected def withByteProducerAndCatchError(f: KafkaProducer[Array[Byte], Array[Byte]] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
val producer = new KafkaProducer[Array[Byte], Array[Byte]](props, new ByteArraySerializer, new ByteArraySerializer)
try {
f(producer)
} catch {
case er: Exception => eh(er)
}
finally {
producer.close()
}
}
@Deprecated
protected def withZKClient(f: AdminZkClient => Any): Any = {
// val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM)
// 3.x
// val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM, new ZKClientConfig(), "KafkaZkClient")
// val adminZkClient = new AdminZkClient(zkClient)
// try {
// f(adminZkClient)
// } finally {
// zkClient.close()
// }
}
protected def createAdminClient(props: Properties): Admin = {
@@ -75,29 +120,52 @@ class KafkaConsole(config: KafkaConfig) {
}
protected def withTimeoutMs[T <: AbstractOptions[T]](options: T) = {
options.timeoutMs(timeoutMs)
options.timeoutMs(ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
}
private def createAdminClient(): Admin = {
Admin.create(getProps())
KafkaConsole.createAdminClient()
}
private def getProps(): Properties = {
val props: Properties = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, config.getBootstrapServer)
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, config.getRequestTimeoutMs())
if (config.isEnableAcl) {
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, config.getSecurityProtocol())
props.put(SaslConfigs.SASL_MECHANISM, config.getSaslMechanism())
props.put(SaslConfigs.SASL_JAAS_CONFIG, config.getSaslJaasConfig())
}
props
KafkaConsole.getProps()
}
}
object KafkaConsole {
val log: Logger = LoggerFactory.getLogger(this.getClass)
def createAdminClient(): Admin = {
Admin.create(getProps())
}
def createByteArrayKVConsumer(extra: Properties) : KafkaConsumer[Array[Byte], Array[Byte]] = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
}
def createProducer(extra: Properties) : KafkaProducer[String, String] = {
val props = getProps()
props.putAll(extra)
new KafkaProducer(props, new StringSerializer, new StringSerializer)
}
def createByteArrayStringProducer(extra: Properties) : KafkaProducer[Array[Byte], Array[Byte]] = {
val props = getProps()
props.putAll(extra)
new KafkaProducer(props, new ByteArraySerializer, new ByteArraySerializer)
}
def getProps(): Properties = {
val props: Properties = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
props
}
def getCommittedOffsets(admin: Admin, groupId: String,
timeoutMs: Integer): Map[TopicPartition, OffsetAndMetadata] = {
admin.listConsumerGroupOffsets(
@@ -147,4 +215,88 @@ object KafkaConsole {
}.toMap
res
}
implicit val ec = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))
}
object AdminCache {
private val log: Logger = LoggerFactory.getLogger(this.getClass)
private val cacheLoader = new CacheLoader[String, Admin] {
override def load(key: String): Admin = KafkaConsole.createAdminClient()
}
private val removeListener = new RemovalListener[String, Admin] {
override def onRemoval(notification: RemovalNotification[String, Admin]): Unit = {
Future {
log.warn("Close expired admin connection: {}", notification.getKey)
notification.getValue.close()
log.warn("Close expired admin connection complete: {}", notification.getKey)
}(KafkaConsole.ec)
}
}
val cache = new TimeBasedCache[String, Admin](cacheLoader, removeListener)
}
object ConsumerCache {
private val log: Logger = LoggerFactory.getLogger(this.getClass)
private val threadLocal = new ThreadLocal[Properties]
private val cacheLoader = new CacheLoader[String, KafkaConsumer[Array[Byte], Array[Byte]]] {
override def load(key: String): KafkaConsumer[Array[Byte], Array[Byte]] = KafkaConsole.createByteArrayKVConsumer(threadLocal.get())
}
private val removeListener = new RemovalListener[String, KafkaConsumer[Array[Byte], Array[Byte]]] {
override def onRemoval(notification: RemovalNotification[String, KafkaConsumer[Array[Byte], Array[Byte]]]): Unit = {
Future {
log.warn("Close expired consumer connection: {}", notification.getKey)
notification.getValue.close()
log.warn("Close expired consumer connection complete: {}", notification.getKey)
}(KafkaConsole.ec)
}
}
val cache = new TimeBasedCache[String, KafkaConsumer[Array[Byte], Array[Byte]]](cacheLoader, removeListener)
def setProperties(props : Properties) : Unit = {
threadLocal.set(props)
}
def clearProperties() : Unit = {
threadLocal.remove()
}
}
object ProducerCache {
private val log: Logger = LoggerFactory.getLogger(this.getClass)
private val threadLocal = new ThreadLocal[Properties]
private val cacheLoader = new CacheLoader[String, KafkaProducer[String, String]] {
override def load(key: String): KafkaProducer[String, String] = KafkaConsole.createProducer(threadLocal.get())
}
private val removeListener = new RemovalListener[String, KafkaProducer[String, String]] {
override def onRemoval(notification: RemovalNotification[String, KafkaProducer[String, String]]): Unit = {
Future {
log.warn("Close expired producer connection: {}", notification.getKey)
notification.getValue.close()
log.warn("Close expired producer connection complete: {}", notification.getKey)
}(KafkaConsole.ec)
}
}
val cache = new TimeBasedCache[String, KafkaProducer[String, String]](cacheLoader, removeListener)
def setProperties(props : Properties) : Unit = {
threadLocal.set(props)
}
def clearProperties() : Unit = {
threadLocal.remove()
}
}
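The TimeBasedCache used by AdminCache, ConsumerCache and ProducerCache is not part of this diff. A rough sketch of the idea, assuming Guava expire-after-access semantics (the class name, expiry window and wiring below are assumptions; the real class lives in com.xuxd.kafka.console.cache and may differ):
import java.util.concurrent.TimeUnit
import com.google.common.cache.{CacheBuilder, CacheLoader, LoadingCache, RemovalListener}

// Hypothetical stand-in for TimeBasedCache: entries expire a fixed time after
// last access, and eviction fires the RemovalListener, which the cache objects
// above use to close stale Kafka clients asynchronously.
class TimeBasedCacheSketch[K <: AnyRef, V <: AnyRef](loader: CacheLoader[K, V],
    listener: RemovalListener[K, V]) {
  private val delegate: LoadingCache[K, V] = CacheBuilder.newBuilder()
    .expireAfterAccess(10, TimeUnit.MINUTES) // assumed expiry window
    .removalListener[K, V](listener)
    .build[K, V](loader)

  def get(key: K): V = delegate.get(key)
}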

View File

@@ -0,0 +1,250 @@
package kafka.console
import com.xuxd.kafka.console.beans.MessageFilter
import com.xuxd.kafka.console.beans.enums.FilterType
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.clients.admin.{DeleteRecordsOptions, RecordsToDelete}
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.TopicPartition
import java.time.Duration
import java.util
import java.util.{Properties}
import scala.collection.immutable
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:39:40
* */
class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig) with Logging {
def searchBy(partitions: util.Collection[TopicPartition], startTime: Long, endTime: Long,
maxNums: Int, filter: MessageFilter): (util.List[ConsumerRecord[Array[Byte], Array[Byte]]], Int) = {
var startOffTable: immutable.Map[TopicPartition, Long] = Map.empty
var endOffTable: immutable.Map[TopicPartition, Long] = Map.empty
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val startTable = KafkaConsole.getLogTimestampOffsets(admin, partitions.asScala.toSeq, startTime, timeoutMs)
startOffTable = startTable.map(t2 => (t2._1, t2._2.offset())).toMap
endOffTable = KafkaConsole.getLogTimestampOffsets(admin, partitions.asScala.toSeq, endTime, timeoutMs)
.map(t2 => (t2._1, t2._2.offset())).toMap
}, e => {
log.error("getLogTimestampOffsets error.", e)
throw new RuntimeException("getLogTimestampOffsets error", e)
})
val headerValueBytes = if (StringUtils.isNotEmpty(filter.getHeaderValue())) filter.getHeaderValue().getBytes() else None
def filterMessage(record: ConsumerRecord[Array[Byte], Array[Byte]]): Boolean = {
filter.getFilterType() match {
case FilterType.BODY => {
val body = filter.getDeserializer().deserialize(record.topic(), record.value())
var contains = false
if (filter.isContainsValue) {
contains = body.asInstanceOf[String].contains(filter.getSearchContent().asInstanceOf[String])
} else {
contains = body.equals(filter.getSearchContent)
}
contains
}
case FilterType.HEADER => {
if (StringUtils.isNotEmpty(filter.getHeaderKey()) && StringUtils.isNotEmpty(filter.getHeaderValue())) {
val iterator = record.headers().headers(filter.getHeaderKey()).iterator()
var contains = false
while (iterator.hasNext() && !contains) {
val next = iterator.next().value()
contains = (next.sameElements(headerValueBytes.asInstanceOf[Array[Byte]]))
}
contains
} else if (StringUtils.isNotEmpty(filter.getHeaderKey()) && StringUtils.isEmpty(filter.getHeaderValue())) {
record.headers().headers(filter.getHeaderKey()).iterator().hasNext()
} else {
true
}
}
case FilterType.NONE => true
}
}
var terminate: Boolean = (startOffTable == endOffTable)
val res = new util.LinkedList[ConsumerRecord[Array[Byte], Array[Byte]]]()
// Number of records scanned so far.
var searchNums = 0
// If the start and end offsets already match, there is nothing to scan.
if (!terminate) {
val arrive = new util.HashSet[TopicPartition](partitions)
val props = new Properties()
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
withConsumerAndCatchError(consumer => {
consumer.assign(partitions)
for ((tp, off) <- startOffTable) {
consumer.seek(tp, off)
}
// Termination conditions:
// 1. All queried partitions have reached their end offsets.
while (!terminate) {
// The maximum number of records to search has been reached.
if (searchNums >= maxNums) {
terminate = true
} else {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val records = consumer.poll(Duration.ofMillis(timeoutMs))
if (records.isEmpty) {
terminate = true
} else {
for ((tp, endOff) <- endOffTable) {
if (!terminate) {
var recordList = records.records(tp)
if (!recordList.isEmpty) {
val first = recordList.get(0)
if (first.offset() >= endOff) {
arrive.remove(tp)
} else {
searchNums += recordList.size()
//
// (String topic,
// int partition,
// long offset,
// long timestamp,
// TimestampType timestampType,
// Long checksum,
// int serializedKeySize,
// int serializedValueSize,
// K key,
// V value,
// Headers headers,
// Optional<Integer> leaderEpoch)
val nullVList = recordList.asScala.filter(filterMessage(_)).map(record => new ConsumerRecord[Array[Byte], Array[Byte]](record.topic(),
record.partition(),
record.offset(),
record.timestamp(),
record.timestampType(),
// record.checksum(),
record.serializedKeySize(),
record.serializedValueSize(),
record.key(),
null,
record.headers(),
record.leaderEpoch())).toSeq.asJava
res.addAll(nullVList)
if (recordList.get(recordList.size() - 1).offset() >= endOff) {
arrive.remove(tp)
}
if (recordList != null) {
recordList = null
}
}
}
if (arrive.isEmpty) {
terminate = true
}
}
}
}
}
}
}, e => {
log.error("searchBy time error.", e)
})
}
(res, searchNums)
}
def searchBy(
tp2o: util.Map[TopicPartition, Long]): util.Map[TopicPartition, ConsumerRecord[Array[Byte], Array[Byte]]] = {
val props = new Properties()
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
val res = new util.HashMap[TopicPartition, ConsumerRecord[Array[Byte], Array[Byte]]]()
withConsumerAndCatchError(consumer => {
var tpSet = tp2o.keySet()
val tpSetCopy = new util.HashSet[TopicPartition](tpSet)
val endOffsets = consumer.endOffsets(tpSet)
val beginOffsets = consumer.beginningOffsets(tpSet)
for ((tp, off) <- tp2o.asScala) {
val endOff = endOffsets.get(tp)
// if (endOff <= off) {
// consumer.seek(tp, endOff)
// tpSetCopy.remove(tp)
// } else {
// consumer.seek(tp, off)
// }
val beginOff = beginOffsets.get(tp)
if (off < beginOff || off >= endOff) {
tpSetCopy.remove(tp)
}
}
tpSet = tpSetCopy
consumer.assign(tpSet)
tpSet.asScala.foreach(tp => {
consumer.seek(tp, tp2o.get(tp))
})
var terminate = tpSet.isEmpty
while (!terminate) {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val records = consumer.poll(Duration.ofMillis(timeoutMs))
val tps = new util.HashSet(tpSet).asScala
for (tp <- tps) {
if (!res.containsKey(tp)) {
val recordList = records.records(tp)
if (!recordList.isEmpty) {
val record = recordList.get(0)
res.put(tp, record)
tpSet.remove(tp)
}
}
if (tpSet.isEmpty) {
terminate = true
}
}
}
}, e => {
log.error("searchBy offset error.", e)
})
res
}
def send(topic: String, partition: Int, key: String, value: String, num: Int): Unit = {
withProducerAndCatchError(producer => {
val nullKey = if (key != null && key.trim().length() == 0) null else key
for (a <- 1 to num) {
val record = if (partition != -1) new ProducerRecord[String, String](topic, partition, nullKey, value)
else new ProducerRecord[String, String](topic, nullKey, value)
producer.send(record)
}
}, e => log.error("send error.", e))
}
def sendSync(record: ProducerRecord[Array[Byte], Array[Byte]]): (Boolean, String) = {
withByteProducerAndCatchError(producer => {
val metadata = producer.send(record).get()
(true, metadata.toString())
}, e => {
log.error("send error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def delete(recordsToDelete: util.Map[TopicPartition, RecordsToDelete]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
admin.deleteRecords(recordsToDelete, withTimeoutMs(new DeleteRecordsOptions())).all().get()
(true, "")
}, e => {
log.error("delete message error.", e)
(false, "delete error :" + e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
}
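A quick usage sketch of the producer path above (topic name and the messageConsole instance are placeholders); passing partition -1 lets the producer's partitioner choose, and a blank key is normalized to null:
// Send three copies of a test message, letting Kafka pick the partition.
messageConsole.send("test-topic", -1, "demo-key", "hello from kafka-console-ui", 3)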

View File

@@ -1,6 +1,6 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.admin.ReassignPartitionsCommand
import org.apache.kafka.clients.admin.{ElectLeadersOptions, ListPartitionReassignmentsOptions, PartitionReassignment}
import org.apache.kafka.clients.consumer.KafkaConsumer
@@ -34,6 +34,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
throw new UnsupportedOperationException("exist consumer client.")
}
}
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val thatGroupDescriptionList = thatAdmin.describeConsumerGroups(searchGroupIds).all().get(timeoutMs, TimeUnit.MILLISECONDS).values()
if (groupDescriptionList.isEmpty) {
throw new IllegalArgumentException("that consumer group info is null.")
@@ -101,6 +102,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
thatMinOffset: util.Map[TopicPartition, Long]): (Boolean, String) = {
val thatAdmin = createAdminClient(props)
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val searchGroupIds = Collections.singleton(groupId)
val groupDescriptionList = consumerConsole.getConsumerGroupList(searchGroupIds)
if (groupDescriptionList.isEmpty) {
@@ -178,6 +180,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
val thatConsumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val thisTopicPartitions = consumerConsole.listSubscribeTopics(groupId).get(topic).asScala.sortBy(_.partition())
val thatTopicPartitionMap = thatAdmin.listConsumerGroupOffsets(
groupId
@@ -239,8 +242,8 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
withAdminClientAndCatchError(admin => {
admin.listPartitionReassignments(withTimeoutMs(new ListPartitionReassignmentsOptions)).reassignments().get()
}, e => {
Collections.emptyMap()
log.error("listPartitionReassignments error.", e)
Collections.emptyMap()
}).asInstanceOf[util.Map[TopicPartition, PartitionReassignment]]
}
@@ -253,4 +256,20 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
throw e
}).asInstanceOf[util.Map[TopicPartition, Throwable]]
}
def proposedAssignments(reassignmentJson: String,
brokerListString: String): util.Map[TopicPartition, util.List[Int]] = {
withAdminClientAndCatchError(admin => {
val map = ReassignPartitionsCommand.generateAssignment(admin, reassignmentJson, brokerListString, true)._1
val res = new util.HashMap[TopicPartition, util.List[Int]]()
for (tp <- map.keys) {
res.put(tp, map(tp).asJava)
// res.put(tp, map.getOrElse(tp, Seq.empty).asJava)
}
res
}, e => {
log.error("proposedAssignments error.", e)
throw e
})
}.asInstanceOf[util.Map[TopicPartition, util.List[Int]]]
}
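For reference, generateAssignment consumes the standard topics-to-move JSON plus a comma-separated list of target broker ids. A minimal calling sketch (topic name, broker ids and the operationConsole instance are placeholders):
// Standard kafka-reassign-partitions "topics to move" JSON.
val topicsToMoveJson = """{"topics":[{"topic":"test-topic"}],"version":1}"""
val proposed = operationConsole.proposedAssignments(topicsToMoveJson, "0,1,2")
proposed.forEach((tp, replicas) => println(s"$tp -> $replicas"))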

View File

@@ -1,6 +1,6 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.admin.ReassignPartitionsCommand._
import kafka.utils.Json
import org.apache.kafka.clients.admin._
@@ -28,6 +28,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
* @return all topic name set.
*/
def getTopicNameList(internal: Boolean = true): Set[String] = {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
withAdminClientAndCatchError(admin => admin.listTopics(new ListTopicsOptions().listInternal(internal)).names()
.get(timeoutMs, TimeUnit.MILLISECONDS),
e => {
@@ -42,6 +43,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
* @return internal topic name set.
*/
def getInternalTopicNameList(): Set[String] = {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
withAdminClientAndCatchError(admin => admin.listTopics(new ListTopicsOptions().listInternal(true)).listings()
.get(timeoutMs, TimeUnit.MILLISECONDS).asScala.filter(_.isInternal).map(_.name()).toSet[String].asJava,
e => {
@@ -64,16 +66,17 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
/**
* delete topic by topic name.
*
* @param topic topic name.
* @param topics topic name list.
* @return result or : fail message.
*/
def deleteTopic(topic: String): (Boolean, String) = {
def deleteTopics(topics: util.Collection[String]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
admin.deleteTopics(Collections.singleton(topic), new DeleteTopicsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.deleteTopics(topics, new DeleteTopicsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
},
e => {
log.error("delete topic error, topic: " + topic, e)
log.error("delete topic error, topic: " + topics, e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
@@ -103,6 +106,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
*/
def createTopic(topic: NewTopic): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val createResult = admin.createTopics(Collections.singleton(topic), new CreateTopicsOptions().retryOnQuotaViolation(false))
createResult.all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
@@ -117,6 +121,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
*/
def createPartitions(newPartitions: util.Map[String, NewPartitions]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.createPartitions(newPartitions,
new CreatePartitionsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
@@ -241,6 +246,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
.asScala.map(info => new TopicPartition(topic, info.partition())).toSeq
case None => throw new IllegalArgumentException("topic is not exist.")
}
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val offsetMap = KafkaConsole.getLogTimestampOffsets(admin, partitions, timestamp, timeoutMs)
offsetMap.map(tuple2 => (tuple2._1, tuple2._2.offset())).toMap.asJava
}, e => {

View File

@@ -0,0 +1,48 @@
package com.xuxd.kafka.console.scala;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.config.KafkaConfig;
import kafka.console.ClientQuotaConsole;
import org.apache.kafka.common.config.internals.QuotaConfigs;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.junit.jupiter.api.Test;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
public class ClientQuotaConsoleTest {
String bootstrapServer = "localhost:9092";
@Test
void testGetClientQuotasConfigs() {
ClientQuotaConsole console = new ClientQuotaConsole(new KafkaConfig());
ContextConfig config = new ContextConfig();
config.setBootstrapServer(bootstrapServer);
ContextConfigHolder.CONTEXT_CONFIG.set(config);
Map<ClientQuotaEntity, Map<String, Object>> configs = console.getClientQuotasConfigs(Arrays.asList(ClientQuotaEntity.USER, ClientQuotaEntity.CLIENT_ID), Arrays.asList("user1", "clientA"));
configs.forEach((k, v) -> {
System.out.println(k);
System.out.println(v);
});
}
@Test
void testAlterClientQuotasConfigs() {
ClientQuotaConsole console = new ClientQuotaConsole(new KafkaConfig());
ContextConfig config = new ContextConfig();
config.setBootstrapServer(bootstrapServer);
ContextConfigHolder.CONTEXT_CONFIG.set(config);
Map<String, String> configsToBeAddedMap = new HashMap<>();
configsToBeAddedMap.put(QuotaConfigs.PRODUCER_BYTE_RATE_OVERRIDE_CONFIG, "1024000000");
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER), Arrays.asList("user-test"), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER), Arrays.asList(""), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList(""), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList("clientA"), configsToBeAddedMap);
console.addQuotaConfigs(Arrays.asList(ClientQuotaEntity.USER, ClientQuotaEntity.CLIENT_ID), Arrays.asList("", ""), configsToBeAddedMap);
// console.deleteQuotaConfigs(Arrays.asList(ClientQuotaEntity.CLIENT_ID), Arrays.asList(""), Arrays.asList(QuotaConfigs.CONSUMER_BYTE_RATE_OVERRIDE_CONFIG));
}
}

ui/package-lock.json generated
View File

@@ -1866,9 +1866,9 @@
"optional": true
},
"loader-utils": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz",
"integrity": "sha512-rP4F0h2RaWSvPEkD7BLDFQnvSf+nK+wr3ESUjNTyAGobqrijmW92zc+SO6d4p4B1wh7+B/Jg1mkQe5NYUEHtHQ==",
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.4.tgz",
"integrity": "sha512-xXqpXoINfFhgua9xiqD8fPFHgkoq1mmmpE92WlDbm9rNRd/EbRb+Gqf908T2DMfuHjjJlksiK2RbHVOdD/MqSw==",
"dev": true,
"optional": true,
"requires": {
@@ -1897,9 +1897,9 @@
}
},
"vue-loader-v16": {
"version": "npm:vue-loader@16.5.0",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.5.0.tgz",
"integrity": "sha512-WXh+7AgFxGTgb5QAkQtFeUcHNIEq3PGVQ8WskY5ZiFbWBkOwcCPRs4w/2tVyTbh2q6TVRlO3xfvIukUtjsu62A==",
"version": "npm:vue-loader@16.8.3",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.8.3.tgz",
"integrity": "sha512-7vKN45IxsKxe5GcVCbc2qFU5aWzyiLrYJyUuMz4BQLKctCj/fmCa0w6fGiiQ2cLFetNcek1ppGJQDCup0c1hpA==",
"dev": true,
"optional": true,
"requires": {

View File

Binary image file changed (not shown): 4.2 KiB before, 5.4 KiB after.

ui/public/vue.ico: new binary file, 4.2 KiB (not shown).

Some files were not shown because too many files have changed in this diff.