55 Commits

Author SHA1 Message Date
许晓东 f0369c6246 Improve Linux start/stop scripts 2025-08-06 20:21:02 +08:00
许晓东 0027f55184 Windows start script supports arbitrary paths; add a PowerShell start script. 2025-07-30 20:57:56 +08:00
许晓东 745aa6e9bc Upgrade to v1.0.13 2025-07-01 20:34:22 +08:00
许晓东 1e86aa4569 Add message forwarding feature. 2025-06-16 20:28:26 +08:00
许晓东 780d05bc06 Add message forwarding API. 2025-06-11 20:12:09 +08:00
许晓东 a755ffb010 Message detail dialog: clicking the modal mask closes the modal. 2025-05-27 20:52:37 +08:00
许晓东 39544327ee Add a comment to TopicConsole. 2025-05-20 21:07:08 +08:00
许晓东 9b7263241c Broker config modal: mask close defaults to true 2025-04-24 20:28:33 +08:00
许晓东 680630d372 Topic menu: consumption detail supports click-to-refresh 2025-03-16 19:47:42 +08:00
许晓东 d554053e44 Release v1.0.12 2025-02-24 22:53:02 +08:00
许晓东 ba0a12fd5f Upgrade to v1.0.12 2025-02-24 22:39:27 +08:00
许晓东 a927dd412e Consumer groups support fuzzy filter queries. 2025-02-22 20:56:00 +08:00
许晓东 a48812ed0e Message toast shows for 1 second, at most 1 at a time. 2025-01-12 22:02:49 +08:00
许晓东 dd2863e51d Topic - send statistics: add refresh button 2024-12-16 20:50:54 +08:00
许晓东 89846018cc Release 1.0.11 2024-10-06 12:55:43 +08:00
许晓东 19cc635151 Upgrade to 1.0.11 2024-10-06 11:30:08 +08:00
许晓东 a1206fb094 Show broker version on the home page 2024-09-01 21:47:35 +08:00
许晓东 83c018f6a7 Compute broker version 2024-08-30 22:57:06 +08:00
许晓东 2729627a80 Update to the latest JetBrains logo 2024-08-14 21:42:20 +08:00
许晓东 096792beb0 Cluster info/topic: replica and config modals close on background click #26 2024-07-07 21:10:00 +08:00
许晓东 fd495351bc Release v1.0.10. 2024-06-16 21:06:54 +08:00
许晓东 ecbd9dda6a Fixed: with cluster data permissions enabled, different accounts logged in from the same browser could see each other's cluster info. 2024-06-16 20:36:00 +08:00
许晓东 30707e84a7 Fixed: topic management -> change replicas failed when brokerId does not start from 0. 2024-06-11 20:00:23 +08:00
许晓东 f98cca8727 Upgrade Kafka to 3.5.0. 2024-05-23 21:37:38 +08:00
许晓东 ea95697e88 Upgrade Kafka to 3.5.0. 2024-05-23 21:35:34 +08:00
许晓东 e90268a56e Issue #36: make searchByTime maxNums configurable. 2024-03-03 21:46:45 +08:00
许晓东 fcc315782c Update WeChat info. 2024-02-25 21:47:06 +08:00
许晓东 b79529f607 Adjust ACL user username/password length. 2024-01-15 20:38:30 +08:00
许晓东 f49e2f7d0c Update feature mind map 2024-01-06 20:48:50 +08:00
许晓东 4929953acd Login button supports login via Enter key. 2023-12-23 21:14:38 +08:00
许晓东 8c946a96ba Release 1.0.9. 2023-12-06 20:41:37 +08:00
许晓东 364a716388 Add access permission for send statistics in the message menu. 2023-12-03 22:04:29 +08:00
许晓东 2ef8573a5b Add query for message send statistics by time range. 2023-12-03 19:25:39 +08:00
许晓东 e368ba9527 Add query for message send statistics by time range. 2023-12-03 19:24:16 +08:00
许晓东 dc84144443 Online message sending supports choosing sync or async send. 2023-11-28 21:21:37 +08:00
许晓东 471e1e4962 Add support logo. 2023-11-09 21:45:34 +08:00
许晓东 b2773b596c logback-test.xml rename to logback-spring.xml. 2023-10-25 16:10:40 +08:00
许晓东 ecd55b7517 Packaging config builds tar.gz and zip archives. 2023-10-25 15:23:03 +08:00
许晓东 fed61394bc Consumer detail: remove left padding from subscribed partition info. 2023-10-24 21:16:33 +08:00
许晓东 1a80c37d83 Fixed consumption detail; adjust online send page. 2023-09-24 21:33:06 +08:00
许晓东 34cdc33617 Merge branch 'main' of https://github.com/xxd763795151/kafka-console-ui 2023-09-21 16:52:46 +08:00
Xiaodong Xu 0041bafb83 Merge pull request #32 from zph95/feature/send_with_header: send message with header 2023-09-20 15:32:27 +08:00
zph b27db32ee2 send message with header 2023-09-19 02:54:28 +08:00
许晓东 5c5d4fd3a1 Upgrade to 1.0.9; include the version number in the archive name. 2023-09-11 13:23:29 +08:00
许晓东 93c1d3cd9d Support different roles viewing different clusters. 2023-08-27 22:06:05 +08:00
许晓东 5a28adfa6b Cluster-role binding page. 2023-08-27 12:17:09 +08:00
许晓东 dac9295fab Add cluster-role binding API. 2023-08-23 22:13:22 +08:00
许晓东 f5b27d9b40 Hide cluster properties without edit permission; log all operations. 2023-08-20 20:04:47 +08:00
许晓东 b529dc313e Query dialogs under topic and consumer group close on mask click. 2023-07-23 14:03:39 +08:00
许晓东 848c8913f2 update wechat info. 2023-07-21 22:27:10 +08:00
许晓东 3ef9f1e012 Fix start/stop paths in shell scripts. 2023-05-22 23:17:41 +08:00
许晓东 60fb02d6d4 Specify UTF-8 encoding. 2023-05-19 15:35:44 +08:00
许晓东 6f093fbb27 Fixed access permission issue after packaging; upgrade to 1.0.8. 2023-05-19 15:20:15 +08:00
许晓东 e186fff939 contact update. 2023-05-18 23:00:32 +08:00
Xiaodong Xu c4676fb51a Merge pull request #24 from xxd763795151/user-auth-dev: User auth dev 2023-05-18 22:57:26 +08:00
94 changed files with 2618 additions and 202 deletions

View File

@@ -1,7 +1,7 @@
# Kafka Visual Management Platform
A lightweight Kafka visual management platform: quick to install and configure, simple and easy to use.
To keep development simple there is no internationalization support; pages are displayed in Chinese only.
If you have used rocketmq-console, the front-end style is somewhat similar.
If you have used rocketmq-console (rocketmq-dashboard), yes, the front-end style is somewhat similar to it.
## Page Preview
If GitHub can display the images, click [view the menu pages](./document/overview/概览.md) to see what each page looks like
@@ -25,16 +25,16 @@ Before v1.0.6, if the Kafka cluster had ACL enabled but the console did not show the Acl
![功能特性](./document/img/功能特性.png)
## Package Download
Click to download (v1.0.7): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.7/kafka-console-ui.zip)
Click to download (v1.0.13): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.13/kafka-console-ui-1.0.13.zip)
If the package downloads slowly, see the source packaging instructions below: download the code and build locally.
If GitHub is slow, you can also try Gitee: click to download [kafka-console-ui.zip from Gitee](https://gitee.com/xiaodong_xu/kafka-console-ui/releases/download/v1.0.7/kafka-console-ui.zip)
If GitHub is slow, you can also try Gitee: click to download [kafka-console-ui.zip from Gitee](https://gitee.com/xiaodong_xu/kafka-console-ui/releases/download/v1.0.13/kafka-console-ui-1.0.13.zip)
## Quick Start
### Windows
1. Unzip the zip package
2. Enter the bin directory (you must be in the bin directory) and double-click `start.bat` to start
2. Enter the bin directory and double-click `start.bat` to start; if you use PowerShell, you can also run `start.ps1` (see the sketch below)
3. Stop: just close the command-line window that was opened
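A minimal sketch of the PowerShell route, assuming the default execution policy blocks local scripts (the relaxed policy below applies to the current process only, mirroring the note inside `start.ps1`):

```powershell
# Allow script execution for this PowerShell process only, then start from bin
Set-ExecutionPolicy Bypass -Scope Process -Force
cd .\bin
.\start.ps1
```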
### Linux or macOS
@@ -68,7 +68,7 @@ sh bin/shutdown.sh
When adding a cluster, besides the cluster address you can also enter other cluster properties, such as request timeouts and ACL configuration. If ACL is enabled, an ACL menu appears in the navigation bar after switching to that cluster. SASL_SCRAM authentication and authorization is currently the most completely supported; I have not verified the others (although I developed this, I have not fully verified every part), but the authorization part should be generic.
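As an illustration, the extra properties for a SASL_SCRAM-secured cluster might look like the sketch below; the keys are standard Kafka client settings, while the username and password values are placeholders:

```properties
request.timeout.ms=10000
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
```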
## Kafka Version
* Currently built against Kafka 3.2.0
* Currently built against Kafka 3.5.0
## Monitoring
Only operations and management features are provided; monitoring and alerting need to be paired with other components. If required, see https://blog.csdn.net/x763795151/article/details/119705372
@@ -79,14 +79,31 @@ sh bin/shutdown.sh
For local development, see the environment setup: [Local Development](./document/develop/开发配置.md)
## Login Authentication and Permissions
The main branch currently does not support login authentication. Thanks to @dongyinuo for developing a version with login authentication and related button permissions (mainly two roles: administrator and ordinary developer).
Up to and including v1.0.7, the main branch did not support login authentication. Thanks to @dongyinuo for developing a version with login authentication and related button permissions (mainly two roles: administrator and ordinary developer).
It lives on the branch feature/dongyinuo/20220501/devops.
If you need console login authentication, you can switch to that branch and build it; see the source packaging instructions for how.
Default login account: admin/kafka-console-ui521
The latest main-branch version now ships login authentication plus user/role permission configuration. To enable login authentication, edit config/application.yml and set auth.enable=true (default false), as follows:
```yaml
# Authentication setting: when true, login is required before access
auth:
  enable: true
  # Expiration time of the login user token, in hours
  expire-hours: 24
```
There are two default login users: super-admin/123456 and admin/123456; after logging in, change the password under personal settings. The only difference between super-admin and admin is that super-admin can add and delete users while admin cannot. If this does not suit you, delete these users or roles in the user menu and create roles or users that fit.
Note: when login authentication is not enabled, the user menu is not shown on the page; this is normal.
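For illustration only, assuming the console listens on its default port 7766 and the login endpoint is mounted at /auth/login (the AuthController mapping is not shown in this diff), a login request might look like:

```bash
curl -X POST http://localhost:7766/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"123456"}'
```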
## Docker Compose Deployment
Thanks to @wdkang123 for sharing this deployment method; if needed, see [Docker Compose deployment](./document/deploy/docker部署.md)
## Acknowledgements
Thanks to JetBrains for the open source support. If anyone is willing to help maintain this project, PRs are very welcome.
[![jetbrains](./document/img/jb_beam.svg "jetbrains")](https://jb.gg/OpenSourceSupport)
JetBrains official site: https://www.jetbrains.com/
## Contact
+ WeChat group
@@ -94,6 +111,14 @@ sh bin/shutdown.sh
[//]: # (<img src="https://github.com/xxd763795151/kafka-console-ui/blob/main/document/contact/weixin_contact.jpg" width="40%"/>)
+ If the contact info above no longer works, please add the WeChat account below and state your intent
- xxd763795151
- wxid_7jy2ezljvebt12
Sorry, no new contact info for joining the group will be provided going forward.
A long while ago someone suggested creating a group where everyone could chat, so I set one up; at the time I assumed few people would want to join.
But quite a few did join, which made me uneasy. I honestly do not think this platform has much technical depth, yet people were willing to show up and support it, which made me feel somewhat ashamed. So, having thought it over: if you genuinely hit usage problems, please open an issue.
Also, I am genuinely busy and have not handled requests for this project promptly; my time and energy are limited, so some capabilities people hoped for have dragged on without follow-up, and I am sorry for that.
Features that are not too time-consuming I can still handle actively.
Some features I intended to add later anyway, which is why I have not started on them.

View File

@@ -2,10 +2,10 @@
xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>rocketmq-reput</id>
<id>${project.version}</id>
<formats>
<!-- <format>tar.gz</format>-->
<format>zip</format>
<format>tar.gz</format>
</formats>
<fileSets>
@@ -23,6 +23,7 @@
<includes>
<include>*.sh</include>
<include>*.bat</include>
<include>*.ps1</include>
</includes>
<outputDirectory>bin</outputDirectory>
<fileMode>0755</fileMode>

View File

@@ -1,8 +1,61 @@
#!/bin/bash
SCRIPT_DIR=`dirname $0`
PROJECT_DIR="$SCRIPT_DIR/.."
# Resolve the script's real path (compatible with Linux and macOS)
if [ -L "$0" ]; then
    # Resolve symlinks
    if command -v greadlink >/dev/null 2>&1; then
        SCRIPT_PATH=$(greadlink -f "$0")
    elif command -v readlink >/dev/null 2>&1; then
        SCRIPT_PATH=$(readlink -f "$0")
    else
        # Fall back to perl
        SCRIPT_PATH=$(perl -e 'use Cwd "abs_path"; print abs_path(shift)' "$0" 2>/dev/null)
    fi
else
    SCRIPT_PATH="$0"
fi
# Final fallback
if [ -z "$SCRIPT_PATH" ] || [ ! -f "$SCRIPT_PATH" ]; then
    SCRIPT_PATH="$0"
fi
# Compute the project root directory
SCRIPT_DIR=$(dirname "$SCRIPT_PATH")
PROJECT_DIR=$(cd "$SCRIPT_DIR" && cd .. && pwd 2>/dev/null)
# Verify the project directory exists
if [ ! -d "$PROJECT_DIR" ]; then
    echo "ERROR: Failed to determine project directory!" >&2
    exit 1
fi
# Do not modify the process flag; it is used as a process attribute for shutdown
PROCESS_FLAG="kafka-console-ui-process-flag:${PROJECT_DIR}"
pkill -f $PROCESS_FLAG
echo 'Stop Kafka-console-ui!'
# Check whether the process exists before stopping
if pgrep -f "$PROCESS_FLAG" >/dev/null; then
    # Try a graceful stop first
    pkill -f "$PROCESS_FLAG"
    # Wait for the process to exit
    TIMEOUT=10
    while [ $TIMEOUT -gt 0 ]; do
        if ! pgrep -f "$PROCESS_FLAG" >/dev/null; then
            break
        fi
        sleep 1
        TIMEOUT=$((TIMEOUT - 1))
    done
    # Check whether the process still exists
    if pgrep -f "$PROCESS_FLAG" >/dev/null; then
        # Force kill
        pkill -9 -f "$PROCESS_FLAG"
        echo "Stop Kafka-console-ui! (force killed)"
    else
        echo "Stop Kafka-console-ui! (gracefully stopped)"
    fi
else
    echo "Kafka-console-ui is not running."
fi
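Typical invocation of the script above; each output line comes from one of its echo branches:

```bash
sh bin/shutdown.sh
# "Stop Kafka-console-ui! (gracefully stopped)" - process exited within the 10s timeout
# "Stop Kafka-console-ui! (force killed)"       - pkill -9 was required
# "Kafka-console-ui is not running."            - no process matched the flag
```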

View File

@@ -1,8 +1,38 @@
@echo off
rem MAIN_CLASS=org.springframework.boot.loader.JarLauncher
rem JAVA_HOME=jre1.8.0_66
set JAVA_CMD=%JAVA_HOME%\bin\java
set JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k
set CONFIG_FILE=../config/application.yml
set TARGET=../lib/kafka-console-ui.jar
set DATA_DIR=..
"%JAVA_CMD%" -jar %TARGET% --spring.config.location=%CONFIG_FILE% --data.dir=%DATA_DIR%
setlocal enabledelayedexpansion
set "BIN_DIR=%~dp0"
if not "%BIN_DIR:~-1%"=="\" set "BIN_DIR=%BIN_DIR%\"
set "BASE_DIR=%BIN_DIR%.."
for %%I in ("%BASE_DIR%") do set "BASE_DIR=%%~fI"
if defined JAVA_HOME (
    set "JAVA_CMD=%JAVA_HOME%\bin\java"
) else (
    echo ERROR: JAVA_HOME is not defined
    exit /b 1
)
set "JAVA_OPTS=-Xmx512m -Xms512m -Xmn256m -Xss256k -Dfile.encoding=utf-8"
set "CONFIG_FILE=%BASE_DIR%\config\application.yml"
set "TARGET=%BASE_DIR%\lib\kafka-console-ui.jar"
set "DATA_DIR=%BASE_DIR%"
set "LOG_HOME=%BASE_DIR%"
if not exist "%TARGET%" (
    echo ERROR: Jar file not found at [%TARGET%]
    exit /b 1
)
if not exist "%CONFIG_FILE%" (
    echo WARNING: Config file not found at [%CONFIG_FILE%]
)
"%JAVA_CMD%" %JAVA_OPTS% -jar "%TARGET%" --spring.config.location="%CONFIG_FILE%" --data.dir="%DATA_DIR%" --logging.home="%LOG_HOME%"
endlocal

bin/start.ps1 (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
# PowerShell
# Set the script execution policy. If necessary, execute this command in PowerShell and then run the script.
# Set-ExecutionPolicy Bypass -Scope Process -Force
param()
$BIN_DIR = $PSScriptRoot
if (-not $BIN_DIR.EndsWith('\')) {
    $BIN_DIR += '\'
}
$BASE_DIR = (Get-Item (Join-Path $BIN_DIR "..")).FullName
if (-not $env:JAVA_HOME) {
    Write-Error "ERROR: JAVA_HOME is not defined"
    exit 1
}
$JAVA_OPTS = "-Xmx512m -Xms512m -Xmn256m -Xss256k -Dfile.encoding=utf-8"
$CONFIG_FILE = Join-Path $BASE_DIR "config\application.yml"
$TARGET = Join-Path $BASE_DIR "lib\kafka-console-ui.jar"
$DATA_DIR = $BASE_DIR
$LOG_HOME = $BASE_DIR
if (-not (Test-Path $TARGET -PathType Leaf)) {
    Write-Error "ERROR: Jar file not found at [$TARGET]"
    exit 1
}
if (-not (Test-Path $CONFIG_FILE -PathType Leaf)) {
    Write-Warning "WARNING: Config file not found at [$CONFIG_FILE]"
}
$javaCmd = Join-Path $env:JAVA_HOME "bin\java.exe"
& $javaCmd $JAVA_OPTS.Split() `
-jar $TARGET `
"--spring.config.location=$CONFIG_FILE" `
"--data.dir=$DATA_DIR" `
"--logging.home=$LOG_HOME"

View File

@@ -1,24 +1,55 @@
#!/bin/bash
# Set the JVM heap and stack sizes; the stack size must be at least 256K, do not go below that (e.g. 128 is too small)
# Set the JVM heap and stack sizes
JAVA_MEM_OPTS="-Xmx512m -Xms512m -Xmn256m -Xss256k"
SCRIPT_DIR=`dirname $0`
PROJECT_DIR="$SCRIPT_DIR/.."
# Resolve the script's real path (compatible with Linux and macOS)
if [ -L "$0" ]; then
    # Resolve symlinks
    if command -v greadlink >/dev/null 2>&1; then
        # macOS: use greadlink (requires `brew install coreutils`)
        SCRIPT_PATH=$(greadlink -f "$0")
    else
        # Linux: use readlink
        SCRIPT_PATH=$(readlink -f "$0")
    fi
else
    SCRIPT_PATH="$0"
fi
# If the above failed (e.g. macOS without greadlink), use an alternative
if [ -z "$SCRIPT_PATH" ] || [ ! -f "$SCRIPT_PATH" ]; then
    # Cross-platform solution using perl
    SCRIPT_PATH=$(perl -e 'use Cwd "abs_path"; print abs_path(shift)' "$0" 2>/dev/null)
fi
# Final fallback
if [ -z "$SCRIPT_PATH" ] || [ ! -f "$SCRIPT_PATH" ]; then
    SCRIPT_PATH="$0"
fi
# Compute the project root directory
SCRIPT_DIR=$(dirname "$SCRIPT_PATH")
PROJECT_DIR=$(cd "$SCRIPT_DIR" && cd .. && pwd)
CONF_FILE="$PROJECT_DIR/config/application.yml"
TARGET="$PROJECT_DIR/lib/kafka-console-ui.jar"
# Set the h2 data root directory
DATA_DIR=$PROJECT_DIR
# Set the h2 data root directory
DATA_DIR="$PROJECT_DIR"
# Log directory; defaults to the project directory
# Error output: if the start command is wrong, it goes to this file; application logs do not go to error.out but to the rocketmq-reput.log mentioned above
# Error output: if the start command is wrong, it goes to this file; application logs do not go to error.out but to the kafka-console-ui.log mentioned above
ERROR_OUT="$PROJECT_DIR/error.out"
# Do not modify the process flag; it is used as a process attribute for shutdown. If you change it, keep the value in stop.sh consistent
# Log directory
LOG_HOME="$PROJECT_DIR"
PROCESS_FLAG="kafka-console-ui-process-flag:${PROJECT_DIR}"
JAVA_OPTS="$JAVA_OPTS $JAVA_MEM_OPTS"
JAVA_OPTS="$JAVA_OPTS $JAVA_MEM_OPTS -Dfile.encoding=utf-8"
nohup java -jar $JAVA_OPTS $TARGET --spring.config.location="$CONF_FILE" --logging.home="$PROJECT_DIR" --data.dir=$DATA_DIR $PROCESS_FLAG 1>/dev/null 2>$ERROR_OUT &
# Start the application
nohup java -jar $JAVA_OPTS "$TARGET" \
--spring.config.location="$CONF_FILE" \
--logging.home="$LOG_HOME" \
--data.dir="$DATA_DIR" \
"$PROCESS_FLAG" >/dev/null 2>"$ERROR_OUT" &
echo "Kafka-console-ui Started!"
echo "Kafka-console-ui Started! PID: $!"

View File

Binary file not shown. (Before: 178 KiB, After: 127 KiB)

document/img/jb_beam.svg (new file, 13 lines)
View File

@@ -0,0 +1,13 @@
<svg xmlns="http://www.w3.org/2000/svg" width="298" height="64" fill="none" viewBox="0 0 298 64">
<defs>
<linearGradient id="a" x1=".850001" x2="62.62" y1="62.72" y2="1.81" gradientUnits="userSpaceOnUse">
<stop stop-color="#FF9419"/>
<stop offset=".43" stop-color="#FF021D"/>
<stop offset=".99" stop-color="#E600FF"/>
</linearGradient>
</defs>
<path fill="#000" d="M86.4844 40.5858c0 .8464-.1792 1.5933-.5377 2.2505-.3585.6573-.8564 1.1651-1.5137 1.5236-.6572.3585-1.3941.5378-2.2406.5378H78v6.1044h5.0787c1.912 0 3.6248-.4282 5.1484-1.2846 1.5236-.8564 2.7186-2.0415 3.585-3.5452.8663-1.5037 1.3045-3.1966 1.3045-5.0886V21.0178h-6.6322v19.568Zm17.8556-1.8224h13.891v-5.6065H104.34v-6.3633h15.355v-5.7758H97.8766v29.9743h22.2464v-5.7757H104.34v-6.453Zm17.865-11.8005h8.882v24.0193h6.633V26.9629h8.842v-5.9451h-24.367v5.9551l.01-.01Zm47.022 9.0022c-.517-.2788-1.085-.4879-1.673-.6472.449-.1295.877-.2888 1.275-.488 1.096-.5676 1.962-1.3643 2.579-2.39.618-1.0257.936-2.2007.936-3.5351 0-1.5237-.418-2.8879-1.244-4.0929-.827-1.195-1.992-2.131-3.486-2.8082-1.494-.6672-3.206-1.0058-5.118-1.0058h-13.315v29.9743h13.574c2.011 0 3.804-.3485 5.387-1.0556 1.573-.707 2.798-1.6829 3.675-2.9476.866-1.2547 1.304-2.6887 1.304-4.302 0-1.4837-.338-2.8082-1.026-3.9833-.687-1.175-1.633-2.0812-2.858-2.7285l-.01.0099Zm-13.603-9.9184h5.886c.816 0 1.533.1494 2.161.4382.627.2888 1.115.707 1.464 1.2547.348.5378.527 1.1751.527 1.9021 0 .7269-.179 1.414-.527 1.9817-.349.5676-.837.9958-1.464 1.3045-.628.3087-1.345.4581-2.161.4581h-5.886v-7.3492.0099Zm10.138 18.134c-.378.5676-.916 1.0058-1.603 1.3145-.697.3087-1.484.4581-2.39.4581h-6.145v-7.6878h6.145c.886 0 1.673.1693 2.37.4979.687.3286 1.235.7867 1.613 1.3842.378.5975.578 1.2747.578 2.0414 0 .7668-.19 1.4241-.568 1.9917Zm29.596-5.3077c1.663-.7967 2.947-1.922 3.864-3.3659.916-1.444 1.374-3.117 1.374-5.0289 0-1.912-.448-3.5253-1.344-4.9592-.897-1.434-2.171-2.5394-3.814-3.3261-1.644-.7867-3.546-1.1751-5.717-1.1751h-13.124v29.9743h6.642V40.0779h4.322l6.084 10.9142h7.578l-6.851-11.7208c.339-.1195.677-.249.996-.3983h-.01Zm-2.151-6.1244c-.369.6274-.896 1.1154-1.583 1.444-.688.3386-1.494.5079-2.42.5079h-5.975v-8.2953h5.975c.926 0 1.732.1693 2.42.4979.687.3287 1.214.8166 1.583 1.434.368.6174.558 1.3544.558 2.1908 0 .8365-.19 1.5734-.558 2.2008v.0199Zm20.594-11.7308-10.706 29.9743h6.742l2.121-6.6122h11.114l2.27 6.6122h6.612L220.99 21.0178h-7.189Zm-.339 18.3431 3.445-10.5756.409-1.922.408 1.922 3.685 10.5756h-7.947Zm20.693 11.6312h6.851V21.0178h-6.851v29.9743Zm31.02-9.6993-12.896-20.275h-6.463v29.9743h6.055V30.7172l12.826 20.2749h6.533V21.0178h-6.055v20.275Zm31.528-3.3559c-.647-1.2448-1.564-2.2904-2.729-3.1369-1.165-.8464-2.509-1.4041-4.023-1.6929l-5.098-1.0456c-.797-.1892-1.434-.5178-1.902-.9958-.469-.478-.708-1.0755-.708-1.7825 0-.6473.17-1.205.518-1.683.339-.478.827-.8464 1.444-1.1153.618-.2689 1.335-.3983 2.151-.3983.817 0 1.554.1394 2.181.4182.627.2788 1.115.6672 1.464 1.1751s.528 1.0755.528 1.7228h6.642c-.04-1.7427-.528-3.2863-1.444-4.6207-.916-1.3443-2.201-2.3899-3.834-3.1468-1.633-.7568-3.505-1.1352-5.597-1.1352-2.091 0-3.943.3884-5.566 1.1751-1.623.7867-2.898 1.8721-3.804 3.2663-.906 1.3941-1.364 2.9775-1.364 4.76 0 1.444.288 2.7485.876 3.9036.587 1.1652 1.414 2.1311 2.479 2.8979 1.076.7668 2.311 1.3045 3.725 1.6033l5.397 1.1153c.886.2091 1.584.5975 2.101 1.1551.518.5577.767 1.2448.767 2.0813 0 .6672-.189 1.2747-.567 1.8025-.379.5277-.907.936-1.584 1.2248-.677.2888-1.474.4282-2.39.4282-.916 0-1.782-.1593-2.529-.478-.747-.3186-1.325-.7767-1.733-1.3742-.418-.5875-.617-1.2747-.617-2.0414h-6.642c.029 1.8721.527 3.5152 1.513 4.9492.976 1.424 2.32 2.5394 4.033 3.336 1.713.7967 3.675 1.195 5.886 1.195 2.21 0 4.202-.4083 5.915-1.2249 1.723-.8165 3.057-1.9418 4.023-3.3758.966-1.434 1.444-3.0572 1.444-4.8696 0-1.4838-.329-2.848-.976-4.1028l.02.01Z"/>
<path fill="url(#a)" d="M20.34 3.66 3.66 20.34C1.32 22.68 0 25.86 0 29.18V59c0 2.76 2.24 5 5 5h29.82c3.32 0 6.49-1.32 8.84-3.66l16.68-16.68c2.34-2.34 3.66-5.52 3.66-8.84V5c0-2.76-2.24-5-5-5H29.18c-3.32 0-6.49 1.32-8.84 3.66Z"/>
<path fill="#000" d="M48 16H8v40h40V16Z"/>
<path fill="#fff" d="M30 47H13v4h17v-4Z"/>
</svg>

(After: 4.1 KiB)

View File

Binary file not shown. (Before: 439 KiB, After: 605 KiB)

pom.xml (56 changed lines)
View File

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
<version>1.0.7</version>
<version>1.0.13</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>
@@ -21,7 +21,7 @@
<ui.path>${project.basedir}/ui</ui.path>
<frontend-maven-plugin.version>1.11.0</frontend-maven-plugin.version>
<compiler.version>1.8</compiler.version>
<kafka.version>3.2.0</kafka.version>
<kafka.version>3.5.0</kafka.version>
<maven.assembly.plugin.version>3.0.0</maven.assembly.plugin.version>
<mybatis-plus-boot-starter.version>3.4.2</mybatis-plus-boot-starter.version>
<scala.version>2.13.6</scala.version>
@@ -90,6 +90,12 @@
</exclusions>
</dependency>
<!-- <dependency>-->
<!-- <groupId>org.apache.kafka</groupId>-->
<!-- <artifactId>kafka-tools</artifactId>-->
<!-- <version>${kafka.version}</version>-->
<!-- </dependency>-->
<dependency>
<groupId>com.typesafe.scala-logging</groupId>
<artifactId>scala-logging_2.13</artifactId>
@@ -174,26 +180,26 @@
</executions>
</plugin>
<!-- <plugin>-->
<!-- <groupId>org.codehaus.mojo</groupId>-->
<!-- <artifactId>build-helper-maven-plugin</artifactId>-->
<!-- <version>3.2.0</version>-->
<!-- <executions>-->
<!-- <execution>-->
<!-- <id>add-source</id>-->
<!-- <phase>generate-sources</phase>-->
<!-- <goals>-->
<!-- <goal>add-source</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <sources>-->
<!-- <source>src/main/java</source>-->
<!-- <source>src/main/scala</source>-->
<!-- </sources>-->
<!-- </configuration>-->
<!-- </execution>-->
<!-- </executions>-->
<!-- </plugin>-->
<!-- <plugin>-->
<!-- <groupId>org.codehaus.mojo</groupId>-->
<!-- <artifactId>build-helper-maven-plugin</artifactId>-->
<!-- <version>3.2.0</version>-->
<!-- <executions>-->
<!-- <execution>-->
<!-- <id>add-source</id>-->
<!-- <phase>generate-sources</phase>-->
<!-- <goals>-->
<!-- <goal>add-source</goal>-->
<!-- </goals>-->
<!-- <configuration>-->
<!-- <sources>-->
<!-- <source>src/main/java</source>-->
<!-- <source>src/main/scala</source>-->
<!-- </sources>-->
<!-- </configuration>-->
<!-- </execution>-->
<!-- </executions>-->
<!-- </plugin>-->
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
@@ -201,6 +207,8 @@
<source>${compiler.version}</source>
<target>${compiler.version}</target>
<encoding>${project.build.sourceEncoding}</encoding>
<!-- <debug>false</debug>-->
<!-- <parameters>false</parameters>-->
</configuration>
</plugin>
<plugin>
@@ -225,7 +233,7 @@
<goal>npm</goal>
</goals>
<configuration>
<!-- <arguments>install &#45;&#45;registry=https://registry.npmjs.org/</arguments>-->
<!-- <arguments>install &#45;&#45;registry=https://registry.npmjs.org/</arguments>-->
<arguments>install --registry=https://registry.npm.taobao.org</arguments>
</configuration>
</execution>
@@ -279,7 +287,7 @@
<attach>true</attach>
<tarLongFileMode>posix</tarLongFileMode>
<runOnlyAtExecutionRoot>false</runOnlyAtExecutionRoot>
<appendAssemblyId>false</appendAssemblyId>
<appendAssemblyId>true</appendAssemblyId>
</configuration>
</plugin>
</plugins>

View File

@@ -1,6 +1,9 @@
package com.xuxd.kafka.console.aspect;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.config.LogConfig;
import com.xuxd.kafka.console.filter.CredentialsContext;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.ProceedingJoinPoint;
@@ -12,6 +15,7 @@ import org.springframework.stereotype.Component;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;
@@ -28,6 +32,12 @@ public class ControllerLogAspect {
private ReentrantLock lock = new ReentrantLock();
private final LogConfig logConfig;
public ControllerLogAspect(LogConfig logConfig) {
this.logConfig = logConfig;
}
@Pointcut("@annotation(com.xuxd.kafka.console.aspect.annotation.ControllerLog)")
private void pointcut() {
@@ -35,6 +45,9 @@ public class ControllerLogAspect {
@Around("pointcut()")
public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
if (!logConfig.isPrintControllerLog()) {
return joinPoint.proceed();
}
StringBuilder params = new StringBuilder("[");
try {
String methodName = getMethodFullName(joinPoint.getTarget().getClass().getName(), joinPoint.getSignature().getName());
@@ -56,6 +69,10 @@ public class ControllerLogAspect {
String resStr = "[" + (res != null ? res.toString() : "") + "]";
StringBuilder sb = new StringBuilder();
Credentials credentials = CredentialsContext.get();
if (credentials != null) {
sb.append("[").append(credentials.getUsername()).append("] ");
}
String shortMethodName = descMap.getOrDefault(methodName, ".-");
shortMethodName = shortMethodName.substring(shortMethodName.lastIndexOf(".") + 1);
sb.append("[").append(shortMethodName)
@@ -85,6 +102,9 @@ public class ControllerLogAspect {
Class<?>[] clzArr = new Class[args.length];
for (int i = 0; i < args.length; i++) {
clzArr[i] = args[i].getClass();
if (List.class.isAssignableFrom(clzArr[i])) {
clzArr[i] = List.class;
}
}
method = aClass.getDeclaredMethod(methodName, clzArr);

View File

@@ -101,17 +101,30 @@ public class PermissionAspect {
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
boolean unauthorized = true;
boolean notFoundHideProperty = true;
String roleIds = userDO.getRoleIds();
List<Long> roleIdList = Arrays.stream(roleIds.split(",")).map(String::trim).filter(StringUtils::isNotEmpty).map(Long::valueOf).collect(Collectors.toList());
List<Long> roleIdList = Arrays.stream(roleIds.split(",")).
map(String::trim).filter(StringUtils::isNotEmpty).
map(Long::valueOf).collect(Collectors.toList());
for (Long roleId : roleIdList) {
Set<String> permSet = rolePermCache.getRolePermCache().getOrDefault(roleId, Collections.emptySet());
for (String p : allowPermSet) {
if (permSet.contains(p)) {
return;
unauthorized = false;
}
}
if (permSet.contains(authConfig.getHideClusterPropertyPerm())) {
notFoundHideProperty = false;
}
}
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
if (unauthorized) {
throw new UnAuthorizedException(credentials.getUsername() + ":" + allowPermSet);
}
if (authConfig.isHideClusterProperty() && notFoundHideProperty) {
credentials.setHideClusterProperty(true);
}
credentials.setRoleIdList(roleIdList);
}
private Map<String, Set<String>> checkPermMap(String methodName, String[] value) {

View File

@@ -2,6 +2,8 @@ package com.xuxd.kafka.console.beans;
import lombok.Data;
import java.util.List;
/**
* @author: xuxd
* @date: 2023/5/14 19:37
@@ -15,6 +17,13 @@ public class Credentials {
private long expiration;
/**
* Whether to hide cluster properties
*/
private boolean hideClusterProperty;
private List<Long> roleIdList;
public boolean isInvalid() {
return this == INVALID;
}

View File

@@ -0,0 +1,30 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
/**
* Message forward request parameters.
*
* @author: xuxd
* @since: 2025/6/5 16:52
**/
@Data
public class ForwardMessage {
private SendMessage message;
/**
* Whether to also send to the same partition.
*/
private boolean samePartition;
/**
* Target cluster id.
*/
private long targetClusterId;
/**
* Target topic.
*/
private String targetTopic;
}
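A hypothetical request body for the forward endpoint added later in this diff (@PostMapping("/forward") on MessageController); the field names follow ForwardMessage above, while the port, the /message prefix, and the topic field on the nested SendMessage are assumptions:

```bash
curl -X POST http://localhost:7766/message/forward \
  -H 'Content-Type: application/json' \
  -d '{
        "message": {"topic": "orders", "sync": true},
        "samePartition": true,
        "targetClusterId": 2,
        "targetTopic": "orders-backup"
      }'
```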

View File

@@ -33,4 +33,6 @@ public class QueryMessage {
private String headerKey;
private String headerValue;
private int filterNumber;
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.beans;
import java.util.List;
import lombok.Data;
/**
@@ -22,4 +23,18 @@ public class SendMessage {
private int num;
private long offset;
private List<Header> headers;
/**
* true: sync send.
*/
private boolean sync;
@Data
public static class Header{
private String headerKey;
private String headerValue;
}
}

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* @author: xuxd
* @since: 2023/8/23 21:35
**/
@Data
@TableName("t_cluster_role_relation")
public class ClusterRoleRelationDO {
@TableId(type = IdType.AUTO)
private Long id;
private Long roleId;
private Long clusterInfoId;
private String updateTime;
}

View File

@@ -0,0 +1,28 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import lombok.Data;
/**
* @author: xuxd
* @since: 2023/8/23 21:42
**/
@Data
public class ClusterRoleRelationDTO {
private Long id;
private Long roleId;
private Long clusterInfoId;
private String updateTime;
public ClusterRoleRelationDO toDO() {
ClusterRoleRelationDO aDo = new ClusterRoleRelationDO();
aDo.setId(id);
aDo.setRoleId(roleId);
aDo.setClusterInfoId(clusterInfoId);
return aDo;
}
}

View File

@@ -37,6 +37,8 @@ public class QueryMessageDTO {
private String headerValue;
private int filterNumber;
public QueryMessage toQueryMessage() {
QueryMessage queryMessage = new QueryMessage();
queryMessage.setTopic(topic);
@@ -69,6 +71,7 @@ public class QueryMessageDTO {
if (StringUtils.isNotBlank(headerValue)) {
queryMessage.setHeaderValue(headerValue.trim());
}
queryMessage.setFilterNumber(filterNumber);
return queryMessage;
}

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.beans.dto;
import lombok.Data;
import java.util.Date;
import java.util.Set;
/**
* Send statistics query request: how many messages were sent within a given time range.
* @author: xuxd
* @since: 2023/12/1 22:01
**/
@Data
public class QuerySendStatisticsDTO {
private String topic;
private Set<Integer> partition;
private Date startTime;
private Date endTime;
}
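A sketch of the corresponding query, matching the @PostMapping("/send/statistics") handler shown later in this diff; the port and /message prefix are assumptions, and the dates use the ISO-8601 form that Jackson's default Date binding accepts:

```bash
curl -X POST http://localhost:7766/message/send/statistics \
  -H 'Content-Type: application/json' \
  -d '{
        "topic": "orders",
        "partition": [0, 1, 2],
        "startTime": "2023-12-01T00:00:00.000+08:00",
        "endTime": "2023-12-01T23:59:59.000+08:00"
      }'
```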

View File

@@ -21,4 +21,6 @@ public class BrokerApiVersionVO {
private int unSupportNums;
private List<String> versionInfo;
private String brokerVersion;
}

View File

@@ -0,0 +1,33 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import lombok.Data;
/**
* @author: xuxd
* @since: 2023/8/23 21:45
**/
@Data
public class ClusterRoleRelationVO {
private Long id;
private Long roleId;
private Long clusterInfoId;
private String updateTime;
private String roleName;
private String clusterName;
public static ClusterRoleRelationVO from(ClusterRoleRelationDO relationDO) {
ClusterRoleRelationVO vo = new ClusterRoleRelationVO();
vo.setId(relationDO.getId());
vo.setRoleId(relationDO.getRoleId());
vo.setClusterInfoId(relationDO.getClusterInfoId());
vo.setUpdateTime(relationDO.getUpdateTime());
return vo;
}
}

View File

@@ -0,0 +1,36 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Map;
/**
* @author: xuxd
* @since: 2023/12/1 17:49
**/
@Data
public class QuerySendStatisticsVO {
private static final String FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSSXXX";
private static final DateFormat DATE_FORMAT = new SimpleDateFormat(FORMAT);
private String topic;
private Long total;
private Map<Integer, Long> detail;
private String startTime;
private String endTime;
private String searchTime = format(new Date());
public static String format(Date date) {
return DATE_FORMAT.format(date);
}
}

View File

@@ -13,9 +13,42 @@ import org.springframework.context.annotation.Configuration;
@ConfigurationProperties(prefix = "auth")
public class AuthConfig {
/**
* Whether to enable login authentication.
*/
private boolean enable;
/**
* Used to sign the JWT token for authentication; any value will do.
*/
private String secret = "kafka-console-ui-default-secret";
/**
* Token validity period, in hours.
*/
private long expireHours;
/**
* Hide cluster property information: users without the edit permission under cluster switching cannot see cluster properties. Clusters with ACL enabled need this.
* Not hiding the properties is unacceptable: with ACL enabled, the properties contain authentication info such as the super admin username and password, which must not be visible to ordinary roles.
*/
private boolean hideClusterProperty;
/**
* Do not modify; just keep it consistent with the value configured in data-h2.sql.
*/
private String hideClusterPropertyPerm = "op:cluster-switch:edit";
/**
* Whether to enable cluster data permissions; if enabled, you can configure which roles can see which clusters.
* Defaults to false for compatibility with older versions.
*
* @since 1.0.9
*/
private boolean enableClusterAuthority;
/**
* Reload permission info: set this to true when replacing the jar during a version upgrade and the new version adds new permission menus.
*/
private boolean reloadPermission;
}
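A sketch of how these fields map into config/application.yml through Spring Boot's relaxed binding (values are illustrative):

```yaml
auth:
  enable: true
  secret: change-me                # JWT signing secret; any value will do
  expire-hours: 24
  hide-cluster-property: true      # hide cluster properties from roles without edit permission
  enable-cluster-authority: true   # per-role cluster visibility (since 1.0.9)
  reload-permission: false         # set true once after an upgrade that adds permission menus
```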

View File

@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.config;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* @author: xuxd
* @since: 2023/8/20 20:00
**/
@Configuration
@ConfigurationProperties(prefix = "log")
public class LogConfig {
private boolean printControllerLog = true;
public boolean isPrintControllerLog() {
return printControllerLog;
}
public void setPrintControllerLog(boolean printControllerLog) {
this.printControllerLog = printControllerLog;
}
}
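Correspondingly, the controller operation log can be switched off in config/application.yml; the prefix comes from the @ConfigurationProperties annotation above:

```yaml
log:
  print-controller-log: false
```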

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.dto.AddAuthDTO;
@@ -46,6 +47,7 @@ public class AclAuthController {
return aclService.getAclList(param.toEntry());
}
@ControllerLog("增加Acl")
@Permission({"acl:authority:add-principal", "acl:authority:add", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addAcl(@RequestBody AddAuthDTO param) {
@@ -58,6 +60,7 @@ public class AclAuthController {
* @param param entry.topic && entry.username must.
* @return
*/
@ControllerLog("增加ProducerAcl")
@Permission({"acl:authority:producer", "acl:sasl-scram:producer"})
@PostMapping("/producer")
public Object addProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -71,6 +74,7 @@ public class AclAuthController {
* @param param entry.topic && entry.groupId entry.username must.
* @return
*/
@ControllerLog("增加ConsumerAcl")
@Permission({"acl:authority:consumer", "acl:sasl-scram:consumer"})
@PostMapping("/consumer")
public Object addConsumerAcl(@RequestBody ConsumerAuthDTO param) {
@@ -84,6 +88,7 @@ public class AclAuthController {
* @param entry entry
* @return
*/
@ControllerLog("删除Acl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteAclByUser(@RequestBody AclEntry entry) {
@@ -96,6 +101,7 @@ public class AclAuthController {
* @param param entry.username
* @return
*/
@ControllerLog("删除Acl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/user")
public Object deleteAclByUser(@RequestBody DeleteAclDTO param) {
@@ -103,11 +109,12 @@ public class AclAuthController {
}
/**
* add producer acl.
* delete producer acl.
*
* @param param entry.topic && entry.username must.
* @return
*/
@ControllerLog("删除ProducerAcl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/producer")
public Object deleteProducerAcl(@RequestBody ProducerAuthDTO param) {
@@ -116,11 +123,12 @@ public class AclAuthController {
}
/**
* add consumer acl.
* delete consumer acl.
*
* @param param entry.topic && entry.groupId entry.username must.
* @return
*/
@ControllerLog("删除ConsumerAcl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/consumer")
public Object deleteConsumerAcl(@RequestBody ConsumerAuthDTO param) {
@@ -134,6 +142,7 @@ public class AclAuthController {
* @param param acl principal.
* @return true or false.
*/
@ControllerLog("清除Acl")
@Permission({"acl:authority:clean", "acl:sasl-scram:pure"})
@DeleteMapping("/clear")
public Object clearAcl(@RequestBody DeleteAclDTO param) {

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.AclEntry;
import com.xuxd.kafka.console.beans.AclUser;
@@ -33,12 +34,14 @@ public class AclUserController {
return aclService.getUserList();
}
@ControllerLog("增加SaslUser")
@Permission({"acl:sasl-scram:add-update", "acl:sasl-scram:add-auth"})
@PostMapping
public Object addOrUpdateUser(@RequestBody AclUser user) {
return aclService.addOrUpdateUser(user.getUsername(), user.getPassword());
}
@ControllerLog("删除SaslUser")
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping
public Object deleteUser(@RequestBody AclUser user) {
@@ -46,6 +49,7 @@ public class AclUserController {
}
@ControllerLog("删除SaslUser和Acl")
@Permission({"acl:sasl-scram:del", "acl:sasl-scram:pure"})
@DeleteMapping("/auth")
public Object deleteUserAndAuth(@RequestBody AclUser user) {
@@ -54,12 +58,12 @@ public class AclUserController {
@Permission("acl:sasl-scram:detail")
@GetMapping("/detail")
public Object getUserDetail(@RequestParam String username) {
public Object getUserDetail(@RequestParam("username") String username) {
return aclService.getUserDetail(username);
}
@GetMapping("/scram")
public Object getSaslScramUserList(@RequestParam(required = false) String username) {
public Object getSaslScramUserList(@RequestParam(required = false, name = "username") String username) {
AclEntry entry = new AclEntry();
entry.setPrincipal(StringUtils.isNotBlank(username) ? username : null);
return aclService.getSaslScramUserList(entry);

View File

@@ -33,4 +33,9 @@ public class AuthController {
public ResponseData login(@RequestBody LoginUserDTO userDTO) {
return authService.login(userDTO);
}
@GetMapping("/own/data/auth")
public boolean ownDataAuthority() {
return authService.ownDataAuthority();
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterClientQuotaDTO;
@@ -28,6 +29,7 @@ public class ClientQuotaController {
return clientQuotaService.getClientQuotaConfigs(request.getTypes(), request.getNames());
}
@ControllerLog("增加限流配额")
@Permission({"quota:user:add", "quota:client:add", "quota:user-client:add", "quota:edit"})
@PostMapping
public Object alterClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {
@@ -41,6 +43,7 @@ public class ClientQuotaController {
return clientQuotaService.alterClientQuotaConfigs(request);
}
@ControllerLog("删除限流配额")
@Permission("quota:del")
@DeleteMapping
public Object deleteClientQuotaConfigs(@RequestBody AlterClientQuotaDTO request) {

View File

@@ -1,16 +1,11 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.*;
/**
* kafka-console-ui.
@@ -30,23 +25,33 @@ public class ClusterController {
return clusterService.getClusterInfo();
}
@Permission({"op:cluster-switch", "user-manage:cluster-role:add"})
@GetMapping("/info")
public Object getClusterInfoList() {
return clusterService.getClusterInfoList();
}
@Permission({"user-manage:cluster-role:add"})
@GetMapping("/info/select")
public Object getClusterInfoListForSelect() {
return clusterService.getClusterInfoListForSelect();
}
@ControllerLog("增加集群信息")
@Permission("op:cluster-switch:add")
@PostMapping("/info")
public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.addClusterInfo(dto.to());
}
@ControllerLog("删除集群信息")
@Permission("op:cluster-switch:del")
@DeleteMapping("/info")
public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.deleteClusterInfo(dto.getId());
}
@ControllerLog("编辑集群信息")
@Permission("op:cluster-switch:edit")
@PutMapping("/info")
public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {

View File

@@ -0,0 +1,42 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.dto.ClusterRoleRelationDTO;
import com.xuxd.kafka.console.service.ClusterRoleRelationService;
import org.springframework.web.bind.annotation.*;
/**
* @author: xuxd
* @since: 2023/8/23 22:01
**/
@RestController
@RequestMapping("/cluster-role/relation")
public class ClusterRoleRelationController {
private final ClusterRoleRelationService service;
public ClusterRoleRelationController(ClusterRoleRelationService service) {
this.service = service;
}
@Permission("user-manage:cluster-role")
@GetMapping
public Object select() {
return service.select();
}
@ControllerLog("增加集群归属角色信息")
@Permission("user-manage:cluster-role:add")
@PostMapping
public Object add(@RequestBody ClusterRoleRelationDTO dto) {
return service.add(dto);
}
@ControllerLog("删除集群归属角色信息")
@Permission("user-manage:cluster-role:delete")
@DeleteMapping
public Object delete(@RequestParam("id") Long id) {
return service.delete(id);
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AlterConfigDTO;
@@ -48,12 +49,14 @@ public class ConfigController {
return configService.getTopicConfig(topic);
}
@ControllerLog("编辑topic配置")
@Permission("topic:property-config:edit")
@PostMapping("/topic")
public Object setTopicConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@ControllerLog("删除topic配置")
@Permission("topic:property-config:del")
@DeleteMapping("/topic")
public Object deleteTopicConfig(@RequestBody AlterConfigDTO dto) {
@@ -66,12 +69,14 @@ public class ConfigController {
return configService.getBrokerConfig(brokerId);
}
@ControllerLog("设置broker配置")
@Permission("cluster:edit")
@PostMapping("/broker")
public Object setBrokerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@ControllerLog("编辑broker配置")
@Permission("cluster:edit")
@DeleteMapping("/broker")
public Object deleteBrokerConfig(@RequestBody AlterConfigDTO dto) {
@@ -84,12 +89,14 @@ public class ConfigController {
return configService.getBrokerLoggerConfig(brokerId);
}
@ControllerLog("编辑broker日志配置")
@Permission("cluster:edit")
@PostMapping("/broker/logger")
public Object setBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@ControllerLog("删除broker日志配置")
@Permission("cluster:edit")
@DeleteMapping("/broker/logger")
public Object deleteBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.AddSubscriptionDTO;
@@ -43,30 +44,33 @@ public class ConsumerController {
return consumerService.getConsumerGroupList(groupIdList, stateSet);
}
@ControllerLog("删除消费组")
@Permission("group:del")
@DeleteMapping("/group")
public Object deleteConsumerGroup(@RequestParam String groupId) {
public Object deleteConsumerGroup(@RequestParam("groupId") String groupId) {
return consumerService.deleteConsumerGroup(groupId);
}
@Permission("group:client")
@GetMapping("/member")
public Object getConsumerMembers(@RequestParam String groupId) {
public Object getConsumerMembers(@RequestParam("groupId") String groupId) {
return consumerService.getConsumerMembers(groupId);
}
@Permission("group:consumer-detail")
@GetMapping("/detail")
public Object getConsumerDetail(@RequestParam String groupId) {
public Object getConsumerDetail(@RequestParam("groupId") String groupId) {
return consumerService.getConsumerDetail(groupId);
}
@ControllerLog("新增消费组")
@Permission("group:add")
@PostMapping("/subscription")
public Object addSubscription(@RequestBody AddSubscriptionDTO subscriptionDTO) {
return consumerService.addSubscription(subscriptionDTO.getGroupId(), subscriptionDTO.getTopic());
}
@ControllerLog("重置消费位点")
@Permission({"group:consumer-detail:min",
"group:consumer-detail:last",
"group:consumer-detail:timestamp",
@@ -114,19 +118,19 @@ public class ConsumerController {
}
@GetMapping("/topic/list")
public Object getSubscribeTopicList(@RequestParam String groupId) {
public Object getSubscribeTopicList(@RequestParam("groupId") String groupId) {
return consumerService.getSubscribeTopicList(groupId);
}
@Permission({"topic:consumer-detail"})
@GetMapping("/topic/subscribed")
public Object getTopicSubscribedByGroups(@RequestParam String topic) {
public Object getTopicSubscribedByGroups(@RequestParam("topic") String topic) {
return consumerService.getTopicSubscribedByGroups(topic);
}
@Permission("group:offset-partition")
@GetMapping("/offset/partition")
public Object getOffsetPartition(@RequestParam String groupId) {
public Object getOffsetPartition(@RequestParam("groupId") String groupId) {
return consumerService.getOffsetPartition(groupId);
}
}

View File

@@ -1,12 +1,16 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ForwardMessage;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QueryMessageDTO;
import com.xuxd.kafka.console.beans.dto.QuerySendStatisticsDTO;
import com.xuxd.kafka.console.service.MessageService;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
@@ -48,18 +52,21 @@ public class MessageController {
return messageService.deserializerList();
}
@Permission("message:send")
@PostMapping("/send")
@ControllerLog("在线发送消息")
@Permission("message:send")
public Object send(@RequestBody SendMessage message) {
return messageService.send(message);
return messageService.sendWithHeader(message);
}
@ControllerLog("重新发送消息")
@Permission("message:resend")
@PostMapping("/resend")
public Object resend(@RequestBody SendMessage message) {
return messageService.resend(message);
}
@ControllerLog("在线删除消息")
@Permission("message:del")
@DeleteMapping
public Object delete(@RequestBody List<QueryMessage> messages) {
@@ -68,4 +75,20 @@ public class MessageController {
}
return messageService.delete(messages);
}
@Permission("message:send-statistics")
@PostMapping("/send/statistics")
public Object sendStatistics(@RequestBody QuerySendStatisticsDTO dto) {
if (StringUtils.isEmpty(dto.getTopic())) {
return ResponseData.create().failed("Topic is null");
}
return messageService.sendStatisticsByTime(dto);
}
@Permission("message:forward")
@ControllerLog("消息转发")
@PostMapping("/forward")
public Object forward(@RequestBody ForwardMessage message) {
return messageService.forward(message);
}
}

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.TopicPartition;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
@@ -24,12 +25,14 @@ public class OperationController {
@Autowired
private OperationService operationService;
@ControllerLog("同步消费位点")
@PostMapping("/sync/consumer/offset")
public Object syncConsumerOffset(@RequestBody SyncDataDTO dto) {
dto.getProperties().put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, dto.getAddress());
return operationService.syncConsumerOffset(dto.getGroupId(), dto.getTopic(), dto.getProperties());
}
@ControllerLog("重新位点对齐")
@PostMapping("/sync/min/offset/alignment")
public Object minOffsetAlignment(@RequestBody SyncDataDTO dto) {
dto.getProperties().put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, dto.getAddress());
@@ -41,23 +44,27 @@ public class OperationController {
return operationService.getAlignmentList();
}
@ControllerLog("deleteAlignment")
@DeleteMapping("/sync/alignment")
public Object deleteAlignment(@RequestParam Long id) {
public Object deleteAlignment(@RequestParam("id") Long id) {
return operationService.deleteAlignmentById(id);
}
@ControllerLog("优先副本leader")
@Permission({"topic:partition-detail:preferred", "op:replication-preferred"})
@PostMapping("/replication/preferred")
public Object electPreferredLeader(@RequestBody ReplicationDTO dto) {
return operationService.electPreferredLeader(dto.getTopic(), dto.getPartition());
}
@ControllerLog("配置同步限流")
@Permission("op:config-throttle")
@PostMapping("/broker/throttle")
public Object configThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.configThrottle(dto.getBrokerList(), dto.getUnit().toKb(dto.getThrottle()));
}
@ControllerLog("移除限流配置")
@Permission("op:remove-throttle")
@DeleteMapping("/broker/throttle")
public Object removeThrottle(@RequestBody BrokerThrottleDTO dto) {
@@ -70,6 +77,7 @@ public class OperationController {
return operationService.currentReassignments();
}
@ControllerLog("取消副本重分配")
@Permission("op:replication-update-detail:cancel")
@DeleteMapping("/replication/reassignments")
public Object cancelReassignment(@RequestBody TopicPartition partition) {

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.aspect.annotation.ControllerLog;
import com.xuxd.kafka.console.aspect.annotation.Permission;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.dto.AddPartitionDTO;
@@ -35,10 +36,11 @@ public class TopicController {
@Permission("topic:load")
@GetMapping("/list")
public Object getTopicList(@RequestParam(required = false) String topic, @RequestParam String type) {
public Object getTopicList(@RequestParam(required = false, name = "topic") String topic, @RequestParam("type") String type) {
return topicService.getTopicList(topic, TopicType.valueOf(type.toUpperCase()));
}
@ControllerLog("删除topic")
@Permission({"topic:batch-del", "topic:del"})
@DeleteMapping
public Object deleteTopic(@RequestBody List<String> topics) {
@@ -47,16 +49,18 @@ public class TopicController {
@Permission("topic:partition-detail")
@GetMapping("/partition")
public Object getTopicPartitionInfo(@RequestParam String topic) {
public Object getTopicPartitionInfo(@RequestParam("topic") String topic) {
return topicService.getTopicPartitionInfo(topic.trim());
}
@ControllerLog("创建topic")
@Permission("topic:add")
@PostMapping("/new")
public Object createNewTopic(@RequestBody NewTopicDTO topicDTO) {
return topicService.createTopic(topicDTO.toNewTopic());
}
@ControllerLog("增加topic分区")
@Permission("topic:partition-add")
@PostMapping("/partition/new")
public Object addPartition(@RequestBody AddPartitionDTO partitionDTO) {
@@ -76,16 +80,18 @@ public class TopicController {
}
@GetMapping("/replica/assignment")
public Object getCurrentReplicaAssignment(@RequestParam String topic) {
public Object getCurrentReplicaAssignment(@RequestParam("topic") String topic) {
return topicService.getCurrentReplicaAssignment(topic);
}
@ControllerLog("更新副本")
@Permission({"topic:replication-modify", "op:replication-reassign"})
@PostMapping("/replica/assignment")
public Object updateReplicaAssignment(@RequestBody ReplicaAssignment assignment) {
return topicService.updateReplicaAssignment(assignment);
}
@ControllerLog("配置限流")
@Permission("topic:replication-sync-throttle")
@PostMapping("/replica/throttle")
public Object configThrottle(@RequestBody TopicThrottleDTO dto) {
@@ -94,7 +100,7 @@ public class TopicController {
@Permission("topic:send-count")
@GetMapping("/send/stats")
public Object sendStats(@RequestParam String topic) {
public Object sendStats(@RequestParam("topic") String topic) {
return topicService.sendStats(topic);
}
}

View File

@@ -73,14 +73,14 @@ public class UserManageController {
@Permission("user-manage:role:del")
@ControllerLog("删除角色")
@DeleteMapping("/role")
public Object deleteRole(@RequestParam Long id) {
public Object deleteRole(@RequestParam("id") Long id) {
return userManageService.deleteRole(id);
}
@Permission("user-manage:user:del")
@ControllerLog("删除用户")
@DeleteMapping("/user")
public Object deleteUser(@RequestParam Long id) {
public Object deleteUser(@RequestParam("id") Long id) {
return userManageService.deleteUser(id);
}

View File

@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import org.apache.ibatis.annotations.Mapper;
/**
* Cluster info and role relation.
*
* @author: xuxd
* @since: 2023/8/23 21:40
**/
@Mapper
public interface ClusterRoleRelationMapper extends BaseMapper<ClusterRoleRelationDO> {
}

View File

@@ -71,6 +71,9 @@ public class DataInit implements SmartInitializingSingleton {
initData(connection, SqlParse.ROLE_TABLE);
}
if (authConfig.isReloadPermission()) {
permissionMapper.delete(null);
}
Integer permCount = permissionMapper.selectCount(null);
if (permCount == null || permCount == 0) {
initData(connection, SqlParse.PERM_TABLE);

View File

@@ -1,15 +1,12 @@
package com.xuxd.kafka.console.dao.init;
import com.google.common.io.Files;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.util.ResourceUtils;
import scala.collection.mutable.StringBuilder;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.charset.Charset;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
@@ -22,7 +19,7 @@ import java.util.Map;
@Slf4j
public class SqlParse {
private final String FILE = "classpath:db/data-h2.sql";
private final String FILE = "db/data-h2.sql";
private final Map<String, List<String>> sqlMap = new HashMap<>();
@@ -37,8 +34,7 @@ public class SqlParse {
String table = null;
try {
File file = ResourceUtils.getFile(FILE);
List<String> lines = Files.readLines(file, Charset.forName("UTF-8"));
List<String> lines = getSqlLines();
for (String str : lines) {
if (StringUtils.isNotEmpty(str)) {
if (str.indexOf("start--") > 0) {
@@ -61,9 +57,7 @@ public class SqlParse {
}
}
}
} catch (FileNotFoundException e) {
throw new RuntimeException(e);
} catch (IOException e) {
} catch (Exception e) {
throw new RuntimeException(e);
}
}
@@ -82,4 +76,21 @@ public class SqlParse {
private boolean isSql(String str) {
return StringUtils.isNotEmpty(str) && str.startsWith("insert");
}
private List<String> getSqlLines() throws Exception {
// File file = ResourceUtils.getFile(FILE);
// List<String> lines = Files.readLines(file, Charset.forName("UTF-8"));
List<String> lines = new ArrayList<>();
try (InputStream inputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(FILE)) {
try (InputStreamReader inputStreamReader = new InputStreamReader(inputStream, "UTF-8")) {
try (BufferedReader reader = new BufferedReader(inputStreamReader)) {
String line;
while ((line = reader.readLine()) != null) {
lines.add(line);
}
}
}
}
return lines;
}
}

View File

@@ -44,6 +44,10 @@ public class AuthFilter implements Filter {
String accessToken = request.getHeader(TOKEN_HEADER);
String requestURI = request.getRequestURI();
if (isResourceRequest(requestURI)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
}
if (requestURI.startsWith(AUTH_URI_PREFIX)) {
filterChain.doFilter(servletRequest, servletResponse);
return;
@@ -63,8 +67,12 @@ public class AuthFilter implements Filter {
try {
CredentialsContext.set(credentials);
filterChain.doFilter(servletRequest, servletResponse);
}finally {
} finally {
CredentialsContext.remove();
}
}
private boolean isResourceRequest(String requestURI) {
return requestURI.contains(".") || requestURI.equals("/");
}
}

View File

@@ -49,6 +49,10 @@ public class ContextSetFilter implements Filter {
String uri = request.getRequestURI();
if (!excludes.contains(uri)) {
String headerId = request.getHeader(Header.ID);
String specificId = request.getHeader(Header.SPECIFIC_ID);
if (StringUtils.isNotBlank(specificId)) {
headerId = specificId;
}
if (StringUtils.isBlank(headerId)) {
// ResponseData failed = ResponseData.create().failed("Cluster info is null.");
ResponseData failed = ResponseData.create().failed("没有集群信息,请先切换集群");
@@ -84,5 +88,6 @@ public class ContextSetFilter implements Filter {
interface Header {
String ID = "X-Cluster-Info-Id";
String NAME = "X-Cluster-Info-Name";
String SPECIFIC_ID = "X-Specific-Cluster-Info-Id";
}
}

View File

@@ -10,4 +10,6 @@ import com.xuxd.kafka.console.beans.dto.LoginUserDTO;
public interface AuthService {
ResponseData login(LoginUserDTO userDTO);
boolean ownDataAuthority();
}

View File

@@ -0,0 +1,19 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.ClusterRoleRelationDTO;
/**
* Cluster info and role relation.
*
* @author: xuxd
* @since: 2023/8/23 21:42
**/
public interface ClusterRoleRelationService {
ResponseData select();
ResponseData add(ClusterRoleRelationDTO dto);
ResponseData delete(Long id);
}

View File

@@ -12,6 +12,8 @@ import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
public interface ClusterService {
ResponseData getClusterInfo();
ResponseData getClusterInfoListForSelect();
ResponseData getClusterInfoList();
ResponseData addClusterInfo(ClusterInfoDO infoDO);

View File

@@ -1,6 +1,8 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ForwardMessage;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.dto.QuerySendStatisticsDTO;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
@@ -24,7 +26,13 @@ public interface MessageService {
ResponseData send(SendMessage message);
ResponseData sendWithHeader(SendMessage message);
ResponseData resend(SendMessage message);
ResponseData delete(List<QueryMessage> messages);
ResponseData sendStatisticsByTime(QuerySendStatisticsDTO request);
ResponseData forward(ForwardMessage message);
}

View File

@@ -93,4 +93,16 @@ public class AuthServiceImpl implements AuthService {
return ResponseData.create().data(loginResult).success();
}
@Override
public boolean ownDataAuthority() {
if (!authConfig.isEnable()) {
return true;
}
if (!authConfig.isEnableClusterAuthority()) {
return true;
}
return false;
}
}

View File

@@ -0,0 +1,115 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import com.xuxd.kafka.console.beans.dos.SysRoleDO;
import com.xuxd.kafka.console.beans.dto.ClusterRoleRelationDTO;
import com.xuxd.kafka.console.beans.vo.ClusterRoleRelationVO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.ClusterRoleRelationMapper;
import com.xuxd.kafka.console.dao.SysRoleMapper;
import com.xuxd.kafka.console.service.ClusterRoleRelationService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
/**
* @author: xuxd
* @since: 2023/8/23 21:50
**/
@Slf4j
@Service
public class ClusterRoleRelationServiceImpl implements ClusterRoleRelationService {
private final ClusterRoleRelationMapper mapper;
private final SysRoleMapper roleMapper;
private final ClusterInfoMapper clusterInfoMapper;
private final AuthConfig authConfig;
public ClusterRoleRelationServiceImpl(final ClusterRoleRelationMapper mapper,
final SysRoleMapper roleMapper,
final ClusterInfoMapper clusterInfoMapper,
final AuthConfig authConfig) {
this.mapper = mapper;
this.roleMapper = roleMapper;
this.clusterInfoMapper = clusterInfoMapper;
this.authConfig = authConfig;
}
@Override
public ResponseData select() {
if (!authConfig.isEnableClusterAuthority()) {
return ResponseData.create().data(Collections.emptyList()).success();
}
List<ClusterRoleRelationDO> dos = mapper.selectList(null);
Map<Long, SysRoleDO> roleMap = roleMapper.selectList(null).stream().
collect(Collectors.toMap(SysRoleDO::getId, Function.identity(), (e1, e2) -> e2));
Map<Long, ClusterInfoDO> clusterMap = clusterInfoMapper.selectList(null).stream().
collect(Collectors.toMap(ClusterInfoDO::getId, Function.identity(), (e1, e2) -> e2));
List<ClusterRoleRelationVO> vos = dos.stream().
map(aDo -> {
ClusterRoleRelationVO vo = ClusterRoleRelationVO.from(aDo);
if (roleMap.containsKey(vo.getRoleId())) {
vo.setRoleName(roleMap.get(vo.getRoleId()).getRoleName());
}
if (clusterMap.containsKey(vo.getClusterInfoId())) {
vo.setClusterName(clusterMap.get(vo.getClusterInfoId()).getClusterName());
}
return vo;
}).collect(Collectors.toList());
return ResponseData.create().data(vos).success();
}
@Override
public ResponseData add(ClusterRoleRelationDTO dto) {
if (!authConfig.isEnableClusterAuthority()) {
return ResponseData.create().failed("未启用集群的数据权限管理");
}
ClusterRoleRelationDO relationDO = dto.toDO();
if (relationDO.getClusterInfoId() == -1L) {
// all insert
for (ClusterInfoDO clusterInfoDO : clusterInfoMapper.selectList(null)) {
ClusterRoleRelationDO aDo = new ClusterRoleRelationDO();
aDo.setRoleId(relationDO.getRoleId());
aDo.setClusterInfoId(clusterInfoDO.getId());
insertIfNotExist(aDo);
}
} else {
insertIfNotExist(relationDO);
}
return ResponseData.create().success();
}
@Override
public ResponseData delete(Long id) {
if (!authConfig.isEnableClusterAuthority()) {
return ResponseData.create().failed("未启用集群的数据权限管理");
}
mapper.deleteById(id);
return ResponseData.create().success();
}
private void insertIfNotExist(ClusterRoleRelationDO relationDO) {
QueryWrapper<ClusterRoleRelationDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("role_id", relationDO.getRoleId()).
eq("cluster_info_id", relationDO.getClusterInfoId());
Integer count = mapper.selectCount(queryWrapper);
if (count > 0) {
log.info("已存在,不再增加:{}", relationDO);
return;
}
mapper.insert(relationDO);
}
}
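
The selectCount-then-insert pair in insertIfNotExist is not atomic, so two concurrent requests can both pass the count check; the unique (ROLE_ID, CLUSTER_INFO_ID) constraint added in the H2 schema later in this diff is the real guard. A hedged variant (hypothetical helper, assuming Spring's exception translation is active for the mapper):

    private void insertIgnoringDuplicate(ClusterRoleRelationDO relationDO) {
        try {
            mapper.insert(relationDO);
        } catch (org.springframework.dao.DuplicateKeyException e) {
            // A concurrent request won the race; the unique key keeps the table consistent.
            log.info("relation already exists, skip: {}", relationDO);
        }
    }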

View File

@@ -3,15 +3,18 @@ package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.BrokerNode;
import com.xuxd.kafka.console.beans.ClusterInfo;
import com.xuxd.kafka.console.beans.Credentials;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.ClusterRoleRelationDO;
import com.xuxd.kafka.console.beans.vo.BrokerApiVersionVO;
import com.xuxd.kafka.console.beans.vo.ClusterInfoVO;
import com.xuxd.kafka.console.config.AuthConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.ClusterRoleRelationMapper;
import com.xuxd.kafka.console.filter.CredentialsContext;
import com.xuxd.kafka.console.service.ClusterService;
import java.util.*;
import java.util.stream.Collectors;
import kafka.console.BrokerVersion;
import kafka.console.ClusterConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
@@ -21,6 +24,9 @@ import org.apache.kafka.common.Node;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Service;
import java.util.*;
import java.util.stream.Collectors;
/**
* kafka-console-ui.
*
@@ -35,13 +41,22 @@ public class ClusterServiceImpl implements ClusterService {
private final ClusterInfoMapper clusterInfoMapper;
public ClusterServiceImpl(ObjectProvider<ClusterConsole> clusterConsole,
ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
private final AuthConfig authConfig;
private final ClusterRoleRelationMapper clusterRoleRelationMapper;
public ClusterServiceImpl(final ObjectProvider<ClusterConsole> clusterConsole,
final ObjectProvider<ClusterInfoMapper> clusterInfoMapper,
final AuthConfig authConfig,
final ClusterRoleRelationMapper clusterRoleRelationMapper) {
this.clusterConsole = clusterConsole.getIfAvailable();
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
this.authConfig = authConfig;
this.clusterRoleRelationMapper = clusterRoleRelationMapper;
}
@Override public ResponseData getClusterInfo() {
@Override
public ResponseData getClusterInfo() {
ClusterInfo clusterInfo = clusterConsole.clusterInfo();
Set<BrokerNode> nodes = clusterInfo.getNodes();
if (nodes == null) {
@@ -52,27 +67,90 @@ public class ClusterServiceImpl implements ClusterService {
return ResponseData.create().data(clusterInfo).success();
}
@Override public ResponseData getClusterInfoList() {
return ResponseData.create().data(clusterInfoMapper.selectList(null)
.stream().map(ClusterInfoVO::from).collect(Collectors.toList())).success();
@Override
public ResponseData getClusterInfoListForSelect() {
return ResponseData.create().
data(clusterInfoMapper.selectList(null).stream().
map(e -> {
ClusterInfoVO vo = ClusterInfoVO.from(e);
vo.setProperties(Collections.emptyList());
vo.setAddress("");
return vo;
}).collect(Collectors.toList())).success();
}
@Override public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
@Override
public ResponseData getClusterInfoList() {
// If permission management is enabled and the current user lacks the edit permission under cluster switching -> cluster info, hide the cluster properties so ACL settings are not exposed
Credentials credentials = CredentialsContext.get();
boolean enableClusterAuthority = credentials != null && authConfig.isEnableClusterAuthority();
final Set<Long> clusterInfoIdSet = new HashSet<>();
if (enableClusterAuthority) {
List<Long> roleIdList = credentials.getRoleIdList();
QueryWrapper<ClusterRoleRelationDO> queryWrapper = new QueryWrapper<>();
queryWrapper.in("role_id", roleIdList);
clusterInfoIdSet.addAll(clusterRoleRelationMapper.selectList(queryWrapper).
stream().map(ClusterRoleRelationDO::getClusterInfoId).
collect(Collectors.toSet()));
}
return ResponseData.create().
data(clusterInfoMapper.selectList(null).stream().
filter(e -> !enableClusterAuthority || clusterInfoIdSet.contains(e.getId())).
map(e -> {
ClusterInfoVO vo = ClusterInfoVO.from(e);
if (credentials != null && credentials.isHideClusterProperty()) {
vo.setProperties(Collections.emptyList());
}
return vo;
}).collect(Collectors.toList())).success();
}
@Override
public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
QueryWrapper<ClusterInfoDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_name", infoDO.getClusterName());
if (clusterInfoMapper.selectCount(queryWrapper) > 0) {
return ResponseData.create().failed("cluster name exist.");
}
clusterInfoMapper.insert(infoDO);
Credentials credentials = CredentialsContext.get();
boolean enableClusterAuthority = credentials != null && authConfig.isEnableClusterAuthority();
if (enableClusterAuthority) {
for (Long roleId : credentials.getRoleIdList()) {
// With cluster data authority enabled, a relation record must be written when a cluster is added
QueryWrapper<ClusterRoleRelationDO> relationQueryWrapper = new QueryWrapper<>();
relationQueryWrapper.eq("role_id", roleId).
eq("cluster_info_id", infoDO.getId());
Integer count = clusterRoleRelationMapper.selectCount(relationQueryWrapper);
if (count <= 0) {
ClusterRoleRelationDO relationDO = new ClusterRoleRelationDO();
relationDO.setRoleId(roleId);
relationDO.setClusterInfoId(infoDO.getId());
clusterRoleRelationMapper.insert(relationDO);
}
}
}
return ResponseData.create().success();
}
@Override public ResponseData deleteClusterInfo(Long id) {
@Override
public ResponseData deleteClusterInfo(Long id) {
clusterInfoMapper.deleteById(id);
Credentials credentials = CredentialsContext.get();
boolean enableClusterAuthority = credentials != null && authConfig.isEnableClusterAuthority();
if (enableClusterAuthority) {
for (Long roleId : credentials.getRoleIdList()) {
// With cluster data authority enabled, the matching data-authority records must be removed when a cluster is deleted
QueryWrapper<ClusterRoleRelationDO> relationQueryWrapper = new QueryWrapper<>();
relationQueryWrapper.eq("role_id", roleId).eq("cluster_info_id", id);
clusterRoleRelationMapper.delete(relationQueryWrapper);
}
}
return ResponseData.create().success();
}
@Override public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
@Override
public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
if (infoDO.getProperties() == null) {
// If properties is null the field is not updated (a bug); setting an empty string works around it
infoDO.setProperties("");
@@ -81,7 +159,8 @@ public class ClusterServiceImpl implements ClusterService {
return ResponseData.create().success();
}
@Override public ResponseData peekClusterInfo() {
@Override
public ResponseData peekClusterInfo() {
List<ClusterInfoDO> dos = clusterInfoMapper.selectList(null);
if (CollectionUtils.isEmpty(dos)) {
return ResponseData.create().failed("No Cluster Info.");
@@ -89,7 +168,8 @@ public class ClusterServiceImpl implements ClusterService {
return ResponseData.create().data(dos.stream().findFirst().map(ClusterInfoVO::from)).success();
}
@Override public ResponseData getBrokerApiVersionInfo() {
@Override
public ResponseData getBrokerApiVersionInfo() {
HashMap<Node, NodeApiVersions> map = clusterConsole.listBrokerVersionInfo();
List<BrokerApiVersionVO> list = new ArrayList<>(map.size());
map.forEach(((node, versions) -> {
@@ -109,6 +189,9 @@ public class ClusterServiceImpl implements ClusterService {
versionInfo = versionInfo.substring(1, versionInfo.length() - 2);
vo.setVersionInfo(Arrays.asList(StringUtils.split(versionInfo, ",")));
list.add(vo);
// Guess the broker version
String vs = BrokerVersion.guessBrokerVersion(versions);
vo.setBrokerVersion(vs);
}));
Collections.sort(list, Comparator.comparingInt(BrokerApiVersionVO::getBrokerId));
return ResponseData.create().data(list).success();
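
One edge worth guarding in getClusterInfoList (a hedged note, not from the diff): credentials.getRoleIdList() may be empty, and depending on the MyBatis-Plus version, queryWrapper.in(...) over an empty collection can render invalid SQL such as role_id IN (). A minimal sketch of the visible-cluster lookup with that guard:

    // Hypothetical variant of the diff's lookup; CollectionUtils is already imported in this file.
    Set<Long> clusterInfoIdSet = new HashSet<>();
    if (enableClusterAuthority && CollectionUtils.isNotEmpty(credentials.getRoleIdList())) {
        QueryWrapper<ClusterRoleRelationDO> queryWrapper = new QueryWrapper<>();
        queryWrapper.in("role_id", credentials.getRoleIdList());
        clusterRoleRelationMapper.selectList(queryWrapper).stream()
                .map(ClusterRoleRelationDO::getClusterInfoId)
                .forEach(clusterInfoIdSet::add);
    }
    // A user with no bound clusters then simply sees an empty list.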

View File

@@ -1,23 +1,17 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.beans.MessageFilter;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.*;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dto.QuerySendStatisticsDTO;
import com.xuxd.kafka.console.beans.enums.FilterType;
import com.xuxd.kafka.console.beans.vo.ConsumerRecordVO;
import com.xuxd.kafka.console.beans.vo.MessageDetailVO;
import com.xuxd.kafka.console.beans.vo.QuerySendStatisticsVO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.service.ConsumerService;
import com.xuxd.kafka.console.service.MessageService;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.ConsumerConsole;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
@@ -29,14 +23,7 @@ import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.BytesDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.DoubleDeserializer;
import org.apache.kafka.common.serialization.FloatDeserializer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.*;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
@@ -44,6 +31,9 @@ import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import java.util.*;
import java.util.stream.Collectors;
/**
* kafka-console-ui.
*
@@ -63,6 +53,9 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
@Autowired
private ConsumerConsole consumerConsole;
@Autowired
private ClusterInfoMapper clusterInfoMapper;
private ApplicationContext applicationContext;
private Map<String, Deserializer> deserializerDict = new HashMap<>();
@@ -79,8 +72,9 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
public static String defaultDeserializer = "String";
@Override public ResponseData searchByTime(QueryMessage queryMessage) {
int maxNums = 5000;
@Override
public ResponseData searchByTime(QueryMessage queryMessage) {
int maxNums = queryMessage.getFilterNumber() <= 0 ? 5000 : queryMessage.getFilterNumber();
Object searchContent = null;
String headerKey = null;
@@ -144,7 +138,7 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
List<ConsumerRecord<byte[], byte[]>> records = tuple2._1();
log.info("search message by time, cost time: {}", (System.currentTimeMillis() - startTime));
List<ConsumerRecordVO> vos = records.stream().filter(record -> record.timestamp() <= queryMessage.getEndTime())
.map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());
.map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());
Map<String, Object> res = new HashMap<>();
vos = vos.subList(0, Math.min(maxNums, vos.size()));
res.put("maxNum", maxNums);
@@ -154,13 +148,15 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return ResponseData.create().data(res).success();
}
@Override public ResponseData searchByOffset(QueryMessage queryMessage) {
@Override
public ResponseData searchByOffset(QueryMessage queryMessage) {
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = searchRecordByOffset(queryMessage);
return ResponseData.create().data(recordMap.values().stream().map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList())).success();
}
@Override public ResponseData searchDetail(QueryMessage queryMessage) {
@Override
public ResponseData searchDetail(QueryMessage queryMessage) {
if (queryMessage.getPartition() == -1) {
throw new IllegalArgumentException();
}
@@ -219,16 +215,35 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return ResponseData.create().failed("Not found message detail.");
}
@Override public ResponseData deserializerList() {
@Override
public ResponseData deserializerList() {
return ResponseData.create().data(deserializerDict.keySet()).success();
}
@Override public ResponseData send(SendMessage message) {
@Override
public ResponseData send(SendMessage message) {
messageConsole.send(message.getTopic(), message.getPartition(), message.getKey(), message.getBody(), message.getNum());
return ResponseData.create().success();
}
@Override public ResponseData resend(SendMessage message) {
@Override
public ResponseData sendWithHeader(SendMessage message) {
String[] headerKeys = message.getHeaders().stream().map(SendMessage.Header::getHeaderKey).toArray(String[]::new);
String[] headerValues = message.getHeaders().stream().map(SendMessage.Header::getHeaderValue).toArray(String[]::new);
// log.info("send with header:keys{},values{}",headerKeys, headerValues);
Tuple2<Object, String> tuple2 = messageConsole.send(message.getTopic(),
message.getPartition(),
message.getKey(),
message.getBody(),
message.getNum(),
headerKeys,
headerValues,
message.isSync());
return (boolean) tuple2._1 ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2);
}
@Override
public ResponseData resend(SendMessage message) {
TopicPartition partition = new TopicPartition(message.getTopic(), message.getPartition());
Map<TopicPartition, Object> offsetTable = new HashMap<>(1, 1.0f);
offsetTable.put(partition, message.getOffset());
@@ -255,6 +270,85 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override
public ResponseData sendStatisticsByTime(QuerySendStatisticsDTO request) {
if (request.getPartition() != null && request.getPartition().contains(-1)) {
request.setPartition(Collections.emptySet());
}
Map<Integer, Long> startOffsetMap = topicConsole.getOffsetForTimestamp(request.getTopic(), request.getStartTime().getTime()).
entrySet().stream().collect(Collectors.toMap(e -> e.getKey().partition(), Map.Entry::getValue, (e1, e2) -> e2));
Map<Integer, Long> endOffsetMap = topicConsole.getOffsetForTimestamp(request.getTopic(), request.getEndTime().getTime()).
entrySet().stream().collect(Collectors.toMap(e -> e.getKey().partition(), Map.Entry::getValue, (e1, e2) -> e2));
Map<Integer, Long> diffOffsetMap = endOffsetMap.entrySet().stream().
collect(Collectors.toMap(e -> e.getKey(),
e -> Arrays.asList(e.getValue(), startOffsetMap.getOrDefault(e.getKey(), 0L)).
stream().reduce((a, b) -> a - b).get()));
if (CollectionUtils.isNotEmpty(request.getPartition())) {
Iterator<Map.Entry<Integer, Long>> iterator = diffOffsetMap.entrySet().iterator();
while (iterator.hasNext()) {
Integer partition = iterator.next().getKey();
if (!request.getPartition().contains(partition)) {
iterator.remove();
}
}
}
Long total = diffOffsetMap.values().stream().reduce(0L, (a, b) -> a + b);
QuerySendStatisticsVO vo = new QuerySendStatisticsVO();
vo.setTopic(request.getTopic());
vo.setTotal(total);
vo.setDetail(diffOffsetMap);
vo.setStartTime(QuerySendStatisticsVO.format(request.getStartTime()));
vo.setEndTime(QuerySendStatisticsVO.format(request.getEndTime()));
return ResponseData.create().data(vo).success();
}
@Override
public ResponseData forward(ForwardMessage message) {
ClusterInfoDO clusterInfoDO = clusterInfoMapper.selectById(message.getTargetClusterId());
if (clusterInfoDO == null) {
return ResponseData.create().failed("Target cluster not found.");
}
SendMessage sendMessage = message.getMessage();
// first, search message detail
Map<TopicPartition, Object> offsetTable = new HashMap<>();
TopicPartition topicPartition = new TopicPartition(sendMessage.getTopic(), sendMessage.getPartition());
offsetTable.put(topicPartition, sendMessage.getOffset());
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = messageConsole.searchBy(offsetTable);
ConsumerRecord<byte[], byte[]> consumerRecord = recordMap.get(topicPartition);
if (consumerRecord == null) {
return ResponseData.create().failed("Source message not found.");
}
String topic = message.getTargetTopic();
if (StringUtils.isEmpty(topic)) {
topic = sendMessage.getTopic();
}
// copy from consumer record.
ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(topic,
message.isSamePartition() ? consumerRecord.partition() : null,
consumerRecord.key(),
consumerRecord.value(),
consumerRecord.headers());
ContextConfig config = ContextConfigHolder.CONTEXT_CONFIG.get();
config.setClusterInfoId(clusterInfoDO.getId());
config.setClusterName(clusterInfoDO.getClusterName());
config.setBootstrapServer(clusterInfoDO.getAddress());
// send.
Tuple2<Object, String> tuple2 = messageConsole.sendSync(record);
boolean success = (boolean) tuple2._1;
if (!success) {
return ResponseData.create().failed(tuple2._2);
}
return ResponseData.create().success();
}
private Map<TopicPartition, ConsumerRecord<byte[], byte[]>> searchRecordByOffset(QueryMessage queryMessage) {
Set<TopicPartition> partitions = getPartitions(queryMessage);
@@ -276,13 +370,14 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
throw new IllegalArgumentException("Can not find topic info.");
}
Set<TopicPartition> set = list.get(0).partitions().stream()
.map(tp -> new TopicPartition(queryMessage.getTopic(), tp.partition())).collect(Collectors.toSet());
.map(tp -> new TopicPartition(queryMessage.getTopic(), tp.partition())).collect(Collectors.toSet());
partitions.addAll(set);
}
return partitions;
}
@Override public void setApplicationContext(ApplicationContext context) throws BeansException {
@Override
public void setApplicationContext(ApplicationContext context) throws BeansException {
this.applicationContext = context;
}
}
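
sendStatisticsByTime computes, per partition, the difference between the earliest offsets at endTime and startTime; because offsets are monotonically increasing per partition, that difference is the number of records appended in the window, and the sum over partitions is the total. A self-contained sketch of the same idea with the plain kafka-clients consumer (illustrative; the diff uses the project's TopicConsole.getOffsetForTimestamp for this, and the handling of missing timestamps here is an assumption):

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;
    import java.util.*;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    static long countProducedInWindow(KafkaConsumer<?, ?> consumer,
                                      List<TopicPartition> partitions,
                                      long startMs, long endMs) {
        Map<TopicPartition, Long> atStart = partitions.stream()
                .collect(Collectors.toMap(Function.identity(), tp -> startMs));
        Map<TopicPartition, Long> atEnd = partitions.stream()
                .collect(Collectors.toMap(Function.identity(), tp -> endMs));
        Map<TopicPartition, OffsetAndTimestamp> startOffsets = consumer.offsetsForTimes(atStart);
        Map<TopicPartition, OffsetAndTimestamp> endOffsets = consumer.offsetsForTimes(atEnd);
        Map<TopicPartition, Long> logEnd = consumer.endOffsets(partitions);
        long total = 0;
        for (TopicPartition tp : partitions) {
            // offsetsForTimes returns the first offset with timestamp >= t, or null past the log end.
            long start = startOffsets.get(tp) == null ? logEnd.get(tp) : startOffsets.get(tp).offset();
            long end = endOffsets.get(tp) == null ? logEnd.get(tp) : endOffsets.get(tp).offset();
            total += end - start; // records appended to tp within [startMs, endMs)
        }
        return total;
    }

The forward() method above follows the same console pattern in reverse: look up the source ConsumerRecord by offset, copy its key, value, and headers into a ProducerRecord, then point ContextConfigHolder at the target cluster before the synchronous send.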

View File

@@ -15,7 +15,7 @@ kafka:
# Cache connections. Without caching, a connection is created per request. Even per-request connections are usually fast, but with ACL enabled queries can be slow, in which case set connection caching to true,
# or set it to true simply to speed up queries.
# Cache the admin client connection
cache-admin-connection: false
cache-admin-connection: true
# Cache the producer connection
cache-producer-connection: false
# Cache the consumer connection
@@ -52,4 +52,13 @@ cron:
auth:
enable: false
# Expiration time of a login user's token, in hours
expire-hours: 24
expire-hours: 24
# Hide cluster properties: if the current user lacks the edit permission under cluster switching, the cluster's property info is not shown. Enable this for clusters with ACL configured.
hide-cluster-property: true
# Whether to enable cluster data authority. If enabled, you can configure which roles see which clusters; if disabled, the bindings have no effect and users of every role see all clusters.
enable-cluster-authority: false
# Reload permission info: when upgrading by replacing the jar, if the new version adds permission menus, set this to true, then assign the new menus in the role list.
reload-permission: true
log:
# Whether to print operation logs (add, delete, edit)
print-controller-log: true

View File

@@ -42,6 +42,8 @@ insert into t_sys_permission(id, name,type,parent_id,permission) values(64,'在
insert into t_sys_permission(id, name,type,parent_id,permission) values(65,'在线删除',1,61,'message:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(66,'消息详情',1,61,'message:detail');
insert into t_sys_permission(id, name,type,parent_id,permission) values(67,'重新发送',1,61,'message:resend');
insert into t_sys_permission(id, name,type,parent_id,permission) values(68,'发送统计',1,61,'message:send-statistics');
insert into t_sys_permission(id, name,type,parent_id,permission) values(69,'消息转发',1,61,'message:forward');
insert into t_sys_permission(id, name,type,parent_id,permission) values(80,'限流',0,null,'quota');
insert into t_sys_permission(id, name,type,parent_id,permission) values(81,'用户',1,80,'quota:user');
@@ -82,6 +84,9 @@ insert into t_sys_permission(id, name,type,parent_id,permission) values(147,'保
insert into t_sys_permission(id, name,type,parent_id,permission) values(148,'删除',1,146,'user-manage:role:del');
insert into t_sys_permission(id, name,type,parent_id,permission) values(149,'权限列表',1,140,'user-manage:permission');
insert into t_sys_permission(id, name,type,parent_id,permission) values(150,'个人设置',1,140,'user-manage:setting');
insert into t_sys_permission(id, name,type,parent_id,permission) values(151,'集群权限',1,140,'user-manage:cluster-role');
insert into t_sys_permission(id, name,type,parent_id,permission) values(152,'新增',1,151,'user-manage:cluster-role:add');
insert into t_sys_permission(id, name,type,parent_id,permission) values(153,'删除',1,151,'user-manage:cluster-role:delete');
insert into t_sys_permission(id, name,type,parent_id,permission) values(160,'运维',0,null,'op');
insert into t_sys_permission(id, name,type,parent_id,permission) values(161,'集群切换',1,160,'op:cluster-switch');
@@ -98,8 +103,8 @@ insert into t_sys_permission(id, name,type,parent_id,permission) values(171,'取
-- t_sys_permission end--
-- t_sys_role start--
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (1,'超级管理员','超级管理员','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,142,143,144,145,146,147,148,149,150,161,162,163,164,165,166,167,168,169,171,170');
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'普通管理员','普通管理员,不能更改用户信息','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,146,149,150,161,162,163,164,165,166,167,168,169,171,170');
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (1,'超级管理员','超级管理员','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,68,69,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,142,143,144,145,146,147,148,149,150,151,152,153,161,162,163,164,165,166,167,168,169,171,170');
insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'普通管理员','普通管理员,不能更改用户信息','12,13,14,22,23,24,25,26,27,28,29,30,34,35,31,32,33,42,43,44,45,46,47,48,49,50,62,63,64,65,66,67,68,69,81,82,83,84,85,86,87,88,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,141,146,149,150,161,162,163,164,165,166,167,168,169,171,170');
-- insert into t_sys_role(id, role_name, description, permission_ids) VALUES (2,'访客','访客','12,13,22,26,29,32,44,45,50,62,63,81,83,85,141,146,149,150,161,163');
-- t_sys_role end--

View File

@@ -4,8 +4,8 @@
CREATE TABLE IF NOT EXISTS T_KAFKA_USER
(
ID IDENTITY NOT NULL COMMENT '主键ID',
USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT '用户名',
PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT '密码',
USERNAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT '用户名',
PASSWORD VARCHAR(128) NOT NULL DEFAULT '' COMMENT '密码',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT '更新时间',
CLUSTER_INFO_ID BIGINT NOT NULL COMMENT '集群信息里的集群ID',
PRIMARY KEY (ID),
@@ -65,4 +65,15 @@ CREATE TABLE IF NOT EXISTS t_sys_user
role_ids varchar(100) DEFAULT NULL COMMENT '分配角色的ID',
PRIMARY KEY (id),
UNIQUE (username)
);
-- Bind cluster data authority to roles
CREATE TABLE IF NOT EXISTS t_cluster_role_relation
(
ID IDENTITY NOT NULL COMMENT '主键ID',
ROLE_ID bigint(20) NOT NULL COMMENT '角色ID',
CLUSTER_INFO_ID bigint(20) NOT NULL COMMENT '集群信息的ID',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT '更新时间',
PRIMARY KEY (id),
UNIQUE (ROLE_ID, CLUSTER_INFO_ID)
);

View File

@@ -45,7 +45,12 @@
<appender-ref ref="AsyncFileAppender"/>
</logger>
<logger name="org.apache.kafka.clients.consumer" level="warn" additivity="false">
<logger name="org.apache.kafka.common" level="warn" additivity="false">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="AsyncFileAppender"/>
</logger>
<logger name="org.apache.kafka.clients.Metadata" level="warn" additivity="false">
<appender-ref ref="CONSOLE"/>
<appender-ref ref="AsyncFileAppender"/>
</logger>
@@ -59,4 +64,6 @@
<appender-ref ref="CONSOLE"/>
<appender-ref ref="AsyncFileAppender"/>
</logger>
<logger name="org.apache.kafka" level="warn"/>
</configuration>

View File

@@ -21,7 +21,7 @@ import org.apache.kafka.common.utils.{KafkaThread, LogContext, Time}
import org.slf4j.{Logger, LoggerFactory}
import java.io.IOException
import java.util.Properties
import java.util.{Collections, Properties}
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{ConcurrentLinkedQueue, TimeUnit}
import scala.jdk.CollectionConverters.{ListHasAsScala, MapHasAsJava, PropertiesHasAsScala, SetHasAsScala}
@@ -162,7 +162,7 @@ object BrokerApiVersion{
def listAllBrokerVersionInfo(): Map[Node, Try[NodeApiVersions]] =
findAllBrokers().map { broker =>
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker)))
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker), Collections.emptyList(), false))
}.toMap
def close(): Unit = {

View File

@@ -0,0 +1,101 @@
package kafka.console
import org.apache.kafka.clients.NodeApiVersions
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersion
import java.util.Collections
import scala.jdk.CollectionConverters.SeqHasAsJava
import scala.reflect.runtime.universe._
import scala.reflect.runtime.{universe => ru}
/**
* broker version with api version.
*
* @author: xuxd
* @since: 2024/8/29 16:12
* */
class BrokerVersion(val apiVersion: Int, val brokerVersion: String)
object BrokerVersion {
val define: List[BrokerVersion] = List(
new BrokerVersion(33, "0.11"),
new BrokerVersion(37, "1.0"),
new BrokerVersion(42, "1.1"),
new BrokerVersion(43, "2.2"),
new BrokerVersion(44, "2.3"),
new BrokerVersion(47, "2.4~2.5"),
new BrokerVersion(49, "2.6"),
new BrokerVersion(57, "2.7"),
new BrokerVersion(64, "2.8"),
new BrokerVersion(67, "3.0~3.4"),
new BrokerVersion(68, "3.5~3.6"),
new BrokerVersion(74, "3.7"),
new BrokerVersion(75, "3.8")
)
def getDefineWithJavaList(): java.util.List[BrokerVersion] = {
define.toBuffer.asJava
}
def guessBrokerVersion(nodeVersion: NodeApiVersions): String = {
// if (nodeVersion.)
val unknown = "unknown";
var guessVersion = unknown
var maxApiKey: Short = -1
if (nodeVersion.toString().contains("UNKNOWN")) {
val unknownApis = getFieldValueByName(nodeVersion, "unknownApis")
unknownApis match {
case Some(unknownApis: java.util.List[ApiVersion]) => {
if (unknownApis.size > 0) {
maxApiKey = unknownApis.get(unknownApis.size() - 1).apiKey()
}
}
case _ => -1
}
}
if (maxApiKey < 0) {
val versions = new java.util.ArrayList[ApiVersion](nodeVersion.allSupportedApiVersions().values())
Collections.sort(versions, (o1: ApiVersion, o2: ApiVersion) => o2.apiKey() - o1.apiKey)
maxApiKey = versions.get(0).apiKey()
}
if (maxApiKey > 0) {
if (maxApiKey > define.last.apiVersion) {
guessVersion = "> " + define.last.brokerVersion
} else if (maxApiKey < define.head.apiVersion) {
guessVersion = "< " + define.head.brokerVersion
} else {
for (i <- define.indices) {
if (maxApiKey <= define(i).apiVersion && guessVersion == unknown) {
guessVersion = define(i).brokerVersion
}
}
}
}
guessVersion
}
def getFieldValueByName(obj: Object, fieldName: String): Option[Any] = {
val runtimeMirror = ru.runtimeMirror(obj.getClass.getClassLoader)
val instanceMirror = runtimeMirror.reflect(obj)
val typeOfObj = instanceMirror.symbol.toType
// Look up the field named fieldName
val fieldSymbol = typeOfObj.member(newTermName(fieldName)).asTerm
// Check whether the field is private; private fields are still reflected below
if (fieldSymbol.isPrivate || fieldSymbol.isPrivateThis) {
// None // originally: return None for private fields
val fieldMirror = runtimeMirror.reflect(obj).reflectField(fieldSymbol)
Some(fieldMirror.get)
} else {
// Fetch the field value via reflection
val fieldMirror = instanceMirror.reflectField(fieldSymbol)
Some(fieldMirror.get)
}
}
}
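
The define table maps the highest API key a broker advertises to a release range, and guessBrokerVersion picks the first entry whose apiVersion covers the broker's max key. A compact Java restatement of that lookup (illustrative, not from the diff):

    import java.util.TreeMap;

    static String guess(int maxApiKey, TreeMap<Integer, String> define) {
        if (maxApiKey > define.lastKey())  return "> " + define.lastEntry().getValue();  // newer than known
        if (maxApiKey < define.firstKey()) return "< " + define.firstEntry().getValue(); // older than known
        return define.ceilingEntry(maxApiKey).getValue(); // first range whose top api key covers the broker
    }

Seeding the map from the list above, guess(67, m) returns "3.0~3.4", guess(69, m) returns "3.7", and guess(80, m) returns "> 3.8"; as the UI note later in this diff says, this is a range guess, not an exact broker version.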

View File

@@ -94,7 +94,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
t.lag = t.logEndOffset - t.consumerOffset
}
}
t.lag = t.logEndOffset - t.consumerOffset
// t.lag = t.logEndOffset - t.consumerOffset
(topicPartition, t)
}).toMap

View File

@@ -189,10 +189,10 @@ object KafkaConsole {
case (topicPartition, listOffsetsResultInfo) => topicPartition -> new OffsetAndMetadata(listOffsetsResultInfo.offset)
}.toMap
unsuccessfulOffsetsForTimes.foreach { entry =>
log.warn(s"\nWarn: Partition " + entry._1.partition() + " from topic " + entry._1.topic() +
" is empty. Falling back to latest known offset.")
}
// unsuccessfulOffsetsForTimes.foreach { entry =>
// log.warn(s"\nWarn: Partition " + entry._1.partition() + " from topic " + entry._1.topic() +
// " is empty. Falling back to latest known offset.")
// }
successfulLogTimestampOffsets ++ getLogEndOffsets(admin, unsuccessfulOffsetsForTimes.keySet.toSeq, timeoutMs)
}

View File

@@ -6,13 +6,17 @@ import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.clients.admin.{DeleteRecordsOptions, RecordsToDelete}
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.clients.producer.{ProducerRecord, RecordMetadata}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.header.Header
import org.apache.kafka.common.header.internals.RecordHeader
import java.time.Duration
import java.util
import java.util.{Properties}
import java.util.Properties
import java.util.concurrent.Future
import scala.collection.immutable
import scala.collection.mutable.ArrayBuffer
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
@@ -218,7 +222,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
def send(topic: String, partition: Int, key: String, value: String, num: Int): Unit = {
withProducerAndCatchError(producer => {
val nullKey = if (key != null && key.trim().length() == 0) null else key
val nullKey = if (key != null && key.trim().isEmpty) null else key
for (a <- 1 to num) {
val record = if (partition != -1) new ProducerRecord[String, String](topic, partition, nullKey, value)
else new ProducerRecord[String, String](topic, nullKey, value)
@@ -228,6 +232,39 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
}
def send(topic: String,
partition: Int,
key: String,
value: String,
num: Int,
headerKeys: Array[String],
headerValues: Array[String],
sync: Boolean): (Boolean, String) = {
withProducerAndCatchError(producer => {
val nullKey = if (key != null && key.trim().isEmpty) null else key
val results = ArrayBuffer.empty[Future[RecordMetadata]]
for (a <- 1 to num) {
val record = if (partition != -1) new ProducerRecord[String, String](topic, partition, nullKey, value)
else new ProducerRecord[String, String](topic, nullKey, value)
if (!headerKeys.isEmpty && headerKeys.length == headerValues.length) {
val headers: Array[Header] = headerKeys.zip(headerValues).map { case (key, value) =>
new RecordHeader(key, value.getBytes())
}
headers.foreach(record.headers().add)
}
results += producer.send(record)
}
if (sync) {
results.foreach(_.get())
}
(true, "")
}, e => {
log.error("send error.", e)
(false, e.getMessage)
})
}.asInstanceOf[(Boolean, String)]
def sendSync(record: ProducerRecord[Array[Byte], Array[Byte]]): (Boolean, String) = {
withByteProducerAndCatchError(producer => {
val metadata = producer.send(record).get()
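
sendWithHeader attaches RecordHeader entries to each record and, for sync sends, blocks on the returned futures so broker errors surface to the caller. The same pattern with the plain Java producer (a minimal sketch; bootstrap address, topic, and header values are placeholders):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.header.internals.RecordHeader;
    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import java.util.concurrent.Future;

    static void sendWithHeaderSync() throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("demo-topic", "key", "value");
            record.headers().add(new RecordHeader("trace-id", "abc123".getBytes(StandardCharsets.UTF_8)));
            Future<RecordMetadata> future = producer.send(record);
            future.get(); // sync send: wait for the ack so a failure is reported to the caller
        }
    }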

View File

@@ -15,7 +15,7 @@ import scala.collection.{Map, Seq}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsJava, MapHasAsScala, SeqHasAsJava, SetHasAsJava}
/**
* kafka-console-ui.
* kafka topic console.
*
* @author xuxd
* @date 2021-09-08 19:52:27
@@ -52,6 +52,12 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
}).asInstanceOf[Set[String]]
}
/**
* get topic list by topic name list.
*
* @param topics topic name list.
* @return topic list.
*/
def getTopicList(topics: Set[String]): List[TopicDescription] = {
if (topics == null || topics.isEmpty) {
Collections.emptyList()

View File

@@ -17,3 +17,8 @@ new Vue({
store,
render: (h) => h(App),
}).$mount("#app");
Vue.prototype.$message.config({
duration: 1,
maxCount: 1,
});

View File

@@ -6,6 +6,7 @@ import {
setPermissions,
setToken,
setUsername,
deleteClusterInfo,
} from "@/utils/local-cache";
Vue.use(Vuex);
@@ -38,6 +39,9 @@ export default new Vuex.Store({
state.clusterInfo.enableSasl = enableSasl;
setClusterInfo(clusterInfo);
},
[CLUSTER.DELETE]() {
deleteClusterInfo();
},
[AUTH.ENABLE](state, enable) {
state.auth.enable = enable;
},

View File

@@ -1,5 +1,6 @@
export const CLUSTER = {
SWITCH: "switchCluster",
DELETE: "deleteClusterInfo",
};
export const AUTH = {

View File

@@ -199,6 +199,10 @@ export const KafkaClusterApi = {
url: "/cluster/info",
method: "get",
},
getClusterInfoListForSelect: {
url: "/cluster/info/select",
method: "get",
},
addClusterInfo: {
url: "/cluster/info",
method: "post",
@@ -292,6 +296,14 @@ export const KafkaMessageApi = {
url: "/message",
method: "delete",
},
sendStatistics: {
url: "/message/send/statistics",
method: "post",
},
forward: {
url: "/message/forward",
method: "post",
},
};
export const KafkaClientQuotaApi = {
@@ -357,4 +369,23 @@ export const AuthApi = {
url: "/auth/login",
method: "post",
},
ownDataAuthority: {
url: "/auth/own/data/auth",
method: "get",
},
};
export const ClusterRoleRelationApi = {
select: {
url: "/cluster-role/relation",
method: "get",
},
add: {
url: "/cluster-role/relation",
method: "post",
},
delete: {
url: "/cluster-role/relation",
method: "delete",
},
};

View File

@@ -4,6 +4,10 @@ export function setClusterInfo(clusterInfo) {
localStorage.setItem(Cache.clusterInfo, JSON.stringify(clusterInfo));
}
export function deleteClusterInfo() {
localStorage.removeItem(Cache.clusterInfo);
}
export function getClusterInfo() {
const str = localStorage.getItem(Cache.clusterInfo);
return str ? JSON.parse(str) : undefined;

View File

@@ -22,11 +22,11 @@ const errorHandler = (error) => {
});
Router.push({ path: "/login-page" });
} else if (error.response.status == 403) {
// const data = error.response.data;
// notification.error({
// message: error.response.status,
// description: data.msg,
// });
const data = error.response.data;
notification.error({
message: error.response.status,
description: data.msg,
});
} else {
const data = error.response.data;
notification.error({

View File

@@ -6,6 +6,10 @@
<p></p>
<hr />
<h3>kafka API 版本兼容性</h3>
<div class="green">
broker版本说明:
该值并不保证broker实际版本一定是该值(大概是这个版本范围)broker使用不同的模式(如kraft)可能显示不同的值
</div>
<a-spin :spinning="apiVersionInfoLoading">
<a-table
:columns="columns"
@@ -104,6 +108,11 @@ const columns = [
dataIndex: "host",
key: "host",
},
{
title: "broker版本",
dataIndex: "brokerVersion",
key: "brokerVersion",
},
{
title: "支持的api数量",
dataIndex: "supportNums",
@@ -125,4 +134,7 @@ const columns = [
.card-style {
width: 100%;
}
.green {
color: green;
}
</style>

View File

@@ -17,7 +17,7 @@
<a-input
placeholder="username"
:allowClear="true"
:maxLength="30"
:maxLength="100"
v-decorator="[
'username',
{
@@ -30,7 +30,7 @@
<a-input
placeholder="password"
:allowClear="true"
:maxLength="30"
:maxLength="100"
v-decorator="[
'password',
{

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -58,6 +58,22 @@
</a-form-item>
</a-col>
</a-row>
<hr class="hr" />
<a-row :gutter="24">
<a-col :span="24">
<a-form-item label="过滤消费组">
<a-input
placeholder="groupId 模糊过滤"
class="input-w"
v-decorator="['filterGroupId']"
@change="onFilterGroupIdUpdate"
/>
<span>
仅过滤当前已查出来的消费组如果要查询服务端最新消费组请点击查询按钮</span
>
</a-form-item>
</a-col>
</a-row>
</a-form>
</div>
<div class="operation-row-button">
@@ -70,7 +86,7 @@
</div>
<a-table
:columns="columns"
:data-source="data"
:data-source="filteredData"
bordered
row-key="groupId"
>
@@ -160,6 +176,7 @@ import Member from "@/views/group/Member";
import ConsumerDetail from "@/views/group/ConsumerDetail";
import AddSupscription from "@/views/group/AddSupscription";
import OffsetTopicPartition from "@/views/group/OffsetTopicPartition";
import { isAuthorized } from "@/utils/auth";
export default {
name: "ConsumerGroup",
@@ -168,6 +185,7 @@ export default {
return {
queryParam: {},
data: [],
filteredData: [],
columns,
selectRow: {},
form: this.$form.createForm(this, {
@@ -185,6 +203,7 @@ export default {
showConsumerDetailDialog: false,
showAddSubscriptionDialog: false,
showOffsetPartitionDialog: false,
filterGroupId: "",
};
},
methods: {
@@ -208,6 +227,7 @@ export default {
this.loading = false;
if (res.code == 0) {
this.data = res.data.list;
this.filter();
} else {
notification.error({
message: "error",
@@ -235,6 +255,9 @@ export default {
});
},
openConsumerMemberDialog(groupId) {
if (!isAuthorized("group:client")) {
return;
}
this.showConsumerGroupDialog = true;
this.selectDetail.resourceName = groupId;
},
@@ -264,6 +287,19 @@ export default {
closeOffsetPartitionDialog() {
this.showOffsetPartitionDialog = false;
},
onFilterGroupIdUpdate(input) {
this.filterGroupId = input.target.value;
this.filter();
},
filter() {
if (this.filterGroupId) {
this.filteredData = this.data.filter(
(e) => e.groupId.indexOf(this.filterGroupId) != -1
);
} else {
this.filteredData = this.data;
}
},
},
created() {
this.getConsumerGroupList();
@@ -368,4 +404,9 @@ const columns = [
.type-select {
width: 200px !important;
}
.hr {
height: 1px;
border: none;
border-top: 1px dashed #0066cc;
}
</style>

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>
@@ -125,4 +125,11 @@ const columns = [
];
</script>
<style scoped></style>
<style scoped>
ul ol {
padding-inline-start: 0px;
}
.ant-table-row-cell-break-word ul {
padding-inline-start: 0px;
}
</style>

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -9,6 +9,7 @@
<h3 class="login-title">登录kafka-console-ui</h3>
<a-form-item label="账号">
<a-input
@keyup.enter="handleSubmit"
style="width: 200px"
allowClear
v-decorator="[
@@ -20,6 +21,7 @@
</a-form-item>
<a-form-item label="密码">
<a-input-password
@keyup.enter="handleSubmit"
style="width: 200px"
v-decorator="[
'password',
@@ -28,7 +30,11 @@
/>
</a-form-item>
<a-form-item :wrapper-col="{ span: 16, offset: 5 }">
<a-button type="primary" @click="handleSubmit" :loading="loading"
<a-button
type="primary"
@click="handleSubmit"
:loading="loading"
@keyup.enter="handleSubmit"
>登录</a-button
>
</a-form-item>
@@ -40,7 +46,7 @@ import request from "@/utils/request";
import { AuthApi } from "@/utils/api";
import notification from "ant-design-vue/lib/notification";
import { mapMutations } from "vuex";
import { AUTH } from "@/store/mutation-types";
import { AUTH, CLUSTER } from "@/store/mutation-types";
export default {
name: "Login",
@@ -82,8 +88,19 @@ export default {
setToken: AUTH.SET_TOKEN,
setUsername: AUTH.SET_USERNAME,
setPermissions: AUTH.SET_PERMISSIONS,
deleteClusterInfo: CLUSTER.DELETE,
}),
},
created() {
request({
url: AuthApi.ownDataAuthority.url,
method: AuthApi.ownDataAuthority.method,
}).then((res) => {
if (!res) {
this.deleteClusterInfo();
}
});
},
};
</script>

View File

@@ -0,0 +1,248 @@
<template>
<a-modal
title="转发消息"
:visible="show"
:width="600"
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="true"
@cancel="handleCancel"
>
<div>
<a-spin :spinning="loading">
<div>
<h4>选择集群</h4>
<hr />
<div class="message-detail" id="message-detail">
<a-form
:form="form"
:label-col="{ span: 5 }"
:wrapper-col="{ span: 18 }"
@submit="handleSubmit"
>
<a-form-item label="集群">
<a-select
class="select-width"
@change="clusterChange"
v-decorator="[
'targetClusterId',
{
rules: [{ required: true, message: '请选择一个集群!' }],
},
]"
placeholder="请选择一个集群"
>
<a-select-option
v-for="v in clusterList"
:key="v.id"
:value="v.id"
>
{{ v.clusterName }}
</a-select-option>
</a-select>
</a-form-item>
<a-form-item label="Topic">
<a-select
class="select-width"
show-search
option-filter-prop="children"
v-decorator="[
'targetTopic',
{
rules: [{ required: true, message: '请选择一个topic!' }],
},
]"
placeholder="请选择一个topic"
>
<a-select-option v-for="v in topicList" :key="v" :value="v">
{{ v }}
</a-select-option>
</a-select>
</a-form-item>
<a-form-item label="相同分区">
<a-radio-group
v-decorator="[
'samePartition',
{
initialValue: 'false',
rules: [{ required: true, message: '请选择!' }],
},
]"
>
<a-radio value="false"> 否</a-radio>
<a-radio value="true"> 是</a-radio>
</a-radio-group>
<span class="mar-left">和原消息保持同一个分区</span>
</a-form-item>
<a-form-item>
<div class="form-footer">
<a-button type="primary" html-type="submit"> 提交</a-button>
</div>
</a-form-item>
</a-form>
</div>
</div>
</a-spin>
</div>
</a-modal>
</template>
<script>
import request from "@/utils/request";
import { KafkaClusterApi, KafkaMessageApi, KafkaTopicApi } from "@/utils/api";
import notification from "ant-design-vue/lib/notification";
import moment from "moment";
export default {
name: "ForwardMessage",
props: {
record: {},
visible: {
type: Boolean,
default: false,
},
},
data() {
return {
show: this.visible,
data: {},
loading: false,
showForwardDialog: false,
targetClusterId: -1,
clusterList: [],
partition: -1,
topicList: [],
form: this.$form.createForm(this, { name: "ForwardMessageForm" }),
};
},
watch: {
visible(v) {
this.show = v;
if (this.show) {
this.getClusterList();
}
},
},
methods: {
getClusterList() {
this.loading = true;
request({
url: KafkaClusterApi.getClusterInfoList.url,
method: KafkaClusterApi.getClusterInfoList.method,
}).then((res) => {
this.loading = false;
if (res.code != 0) {
notification.error({
message: "error",
description: res.msg,
});
} else {
this.clusterList = res.data;
this.targetClusterId = this.clusterList[0].id;
}
});
},
handleSubmit(e) {
e.preventDefault();
this.form.validateFields((err, values) => {
if (!err) {
const params = {
message: Object.assign({}, this.record),
};
this.forward({ ...params, ...values });
}
});
},
handleCancel() {
this.$emit("closeForwardDialog", { refresh: false });
},
formatTime(time) {
return time == -1 ? -1 : moment(time).format("YYYY-MM-DD HH:mm:ss:SSS");
},
clusterChange(e) {
this.getTopicNameList(e);
},
forward(params) {
this.loading = true;
request({
url: KafkaMessageApi.forward.url,
method: KafkaMessageApi.forward.method,
data: params,
}).then((res) => {
this.loading = false;
if (res.code != 0) {
notification.error({
message: "error",
description: res.msg,
});
} else {
this.$message.success(res.msg);
}
});
},
openForwardDialog() {
this.showForwardDialog = true;
},
closeForwardDialog() {
this.showForwardDialog = false;
},
getTopicNameList(clusterInfoId) {
this.loading = true;
request({
url: KafkaTopicApi.getTopicNameList.url,
method: KafkaTopicApi.getTopicNameList.method,
headers: {
"X-Specific-Cluster-Info-Id": clusterInfoId,
},
}).then((res) => {
this.loading = false;
if (res.code == 0) {
this.topicList = res.data;
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
},
};
</script>
<style scoped>
.m-info {
/*text-decoration: underline;*/
}
.title {
width: 15%;
display: inline-block;
text-align: right;
margin-right: 2%;
font-weight: bold;
}
.ant-spin-container #message-detail textarea {
max-width: 80% !important;
vertical-align: top !important;
}
.center {
text-align: center;
}
.mar-left {
margin-left: 1%;
}
.select-width {
width: 80%;
}
.form-footer {
text-align: center;
margin-top: 3%;
}
</style>

View File

@@ -23,6 +23,14 @@
<a-tab-pane key="4" tab="在线删除" v-if="isAuthorized('message:del')">
<DeleteMessage :topic-list="topicList"></DeleteMessage>
</a-tab-pane>
<a-tab-pane
key="5"
tab="发送统计"
v-if="isAuthorized('message:send-statistics')"
>
<SendStatistics :topic-list="topicList"></SendStatistics>
</a-tab-pane>
</a-tabs>
</a-spin>
</div>
@@ -31,6 +39,7 @@
<script>
import SearchByTime from "@/views/message/SearchByTime";
import SearchByOffset from "@/views/message/SearchByOffset";
import SendStatistics from "@/views/message/SendStatistics.vue";
import request from "@/utils/request";
import { KafkaTopicApi } from "@/utils/api";
import notification from "ant-design-vue/lib/notification";
@@ -39,7 +48,13 @@ import DeleteMessage from "./DeleteMessage";
import { isAuthorized, isUnauthorized } from "@/utils/auth";
export default {
name: "Message",
components: { DeleteMessage, SearchByTime, SearchByOffset, SendMessage },
components: {
DeleteMessage,
SearchByTime,
SearchByOffset,
SendMessage,
SendStatistics,
},
data() {
return {
loading: false,

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>
@@ -120,8 +120,22 @@
重新发送
</a-button>
</a-popconfirm>
<a-button
type="dashed"
class="mar-left"
icon="plus"
v-action:message:forward
@click="openForwardDialog()"
>
转发消息
</a-button>
</div>
</a-spin>
<ForwardMessage
:visible="showForwardDialog"
:record="data"
@closeForwardDialog="closeForwardDialog"
></ForwardMessage>
</div>
</a-modal>
</template>
@@ -131,9 +145,11 @@ import request from "@/utils/request";
import { KafkaMessageApi } from "@/utils/api";
import notification from "ant-design-vue/lib/notification";
import moment from "moment";
import ForwardMessage from "@/views/message/ForwardMessage.vue";
export default {
name: "MessageDetail",
components: { ForwardMessage },
props: {
record: {},
visible: {
@@ -151,6 +167,7 @@ export default {
valueDeserializer: "String",
consumerDetail: [],
columns,
showForwardDialog: false,
};
},
watch: {
@@ -232,6 +249,12 @@ export default {
}
});
},
openForwardDialog() {
this.showForwardDialog = true;
},
closeForwardDialog() {
this.showForwardDialog = false;
},
},
};
const columns = [
@@ -264,4 +287,7 @@ const columns = [
max-width: 80% !important;
vertical-align: top !important;
}
.mar-left {
margin-left: 1%;
}
</style>

View File

@@ -60,6 +60,33 @@
</a-col>
</a-row>
<hr class="hr" />
<a-row :gutter="24">
<a-col :span="24">
<a-form-item label="最大检索数">
<a-input-number
v-decorator="[
'filterNumber',
{
initialValue: 5000,
rules: [
{
required: true,
message: '输入消息数!',
},
],
},
]"
:min="1"
:max="100000"
/>
<span
>条
注意这里允许最多检索10万条但是不建议将该值设置过大这意味着一次查询要在内存里缓存这么多的数据可能导致内存溢出并且更大的消息量会导致更长的检索时间</span
>
</a-form-item>
</a-col>
</a-row>
<hr class="hr" />
<a-row :gutter="24">
<a-col :span="5">
<a-form-item label="消息过滤">

View File

@@ -1,7 +1,12 @@
<template>
<div class="content">
<a-spin :spinning="loading">
<a-form :form="form" @submit="handleSubmit">
<a-form
:form="form"
:label-col="{ span: 5 }"
:wrapper-col="{ span: 12 }"
@submit="handleSubmit"
>
<a-form-item label="Topic">
<a-select
class="topic-select"
@@ -34,11 +39,42 @@
</a-select-option>
</a-select>
</a-form-item>
<a-form-item label="消息头">
<table>
<tbody>
<tr v-for="(row, index) in rows" :key="index">
<td class="w-30">
<a-input v-model="row.headerKey" placeholder="key" />
</td>
<td class="w-60">
<a-input v-model="row.headerValue" placeholder="value" />
</td>
<td>
<a-button
type="danger"
@click="deleteRow(index)"
v-show="rows.length > 1"
>删除</a-button
>
</td>
<td>
<a-button
type="primary"
@click="addRow"
v-show="index == rows.length - 1"
>添加</a-button
>
</td>
</tr>
</tbody>
</table>
</a-form-item>
<a-form-item label="消息Key">
<a-input v-decorator="['key', { initialValue: 'key' }]" />
</a-form-item>
<a-form-item label="消息体" has-feedback>
<a-textarea
:autosize="{ minRows: 5 }"
v-decorator="[
'body',
{
@@ -71,6 +107,22 @@
:max="32"
/>
</a-form-item>
<a-form-item label="发送类型">
<a-radio-group
v-decorator="[
'sync',
{
initialValue: 'false',
rules: [{ required: true, message: '请选择一个发送类型!' }],
},
]"
>
<a-radio value="false"> 异步发送 </a-radio>
<a-radio value="true">
同步发送(发送失败,会返回错误信息)
</a-radio>
</a-radio-group>
</a-form-item>
<a-form-item :wrapper-col="{ span: 12, offset: 5 }">
<a-button type="primary" html-type="submit"> 提交 </a-button>
</a-form-item>
@@ -86,17 +138,15 @@ import notification from "ant-design-vue/lib/notification";
export default {
name: "SendMessage",
components: {},
props: {
topicList: {
type: Array,
},
},
props: {},
data() {
return {
form: this.$form.createForm(this, { name: "message_send" }),
loading: false,
partitions: [],
selectPartition: undefined,
rows: [{ headerKey: "", headerValue: "" }],
topicList: [],
};
},
methods: {
@@ -137,12 +187,21 @@ export default {
this.selectPartition = -1;
this.getPartitionInfo(topic);
},
addRow() {
if (this.rows.length < 32) {
this.rows.push({ headerKey: "", headerValue: "" });
}
},
deleteRow(index) {
this.rows.splice(index, 1);
},
handleSubmit(e) {
e.preventDefault();
this.form.validateFields((err, values) => {
if (!err) {
const param = Object.assign({}, values, {
partition: this.selectPartition,
headers: this.rows,
});
this.loading = true;
request({
@@ -169,5 +228,11 @@ export default {
},
};
</script>
<style scoped></style>
<style scoped>
.w-30 {
width: 300px;
}
.w-60 {
width: 500px;
}
</style>

View File

@@ -0,0 +1,293 @@
<template>
<div class="tab-content">
<a-spin :spinning="loading">
<div id="search-time-form-advanced-search">
<a-form
class="ant-advanced-search-form"
:form="form"
@submit="handleSearch"
>
<a-row>
<a-col :span="16">
<a-form-item label="topic">
<a-select
class="topic-select"
@change="handleTopicChange"
show-search
option-filter-prop="children"
v-decorator="[
'topic',
{
rules: [{ required: true, message: '请选择一个topic!' }],
},
]"
placeholder="请选择一个topic"
>
<a-select-option v-for="v in topicList" :key="v" :value="v">
{{ v }}
</a-select-option>
</a-select>
</a-form-item>
</a-col>
<a-col :span="8">
<a-form-item label="分区">
<a-select
class="type-select"
show-search
mode="multiple"
option-filter-prop="children"
v-model="selectPartition"
placeholder="请选择分区"
>
<a-select-option v-for="v in partitions" :key="v" :value="v">
<span v-if="v == -1">全部</span> <span v-else>{{ v }}</span>
</a-select-option>
</a-select>
</a-form-item>
</a-col>
</a-row>
<a-row :gutter="24">
<a-col :span="20">
<a-form-item label="时间">
<a-range-picker
v-decorator="['time', rangeConfig]"
format="YYYY-MM-DD HH:mm:ss.SSS"
:show-time="{
hideDisabledOptions: true,
defaultValue: [
moment('00:00:00.000', 'HH:mm:ss.SSS'),
moment('23:59:59.999', 'HH:mm:ss.SSS'),
],
}"
/>
</a-form-item>
</a-col>
<a-col :span="2" :style="{ textAlign: 'right' }">
<a-form-item>
<a-button type="primary" html-type="submit"> 查询</a-button>
</a-form-item>
</a-col>
</a-row>
</a-form>
</div>
<div id="search-result-view">
<!-- <ul>-->
<!-- <li v-for="(item, index) in data" :key="index">-->
<!-- <fieldset>-->
<!-- <legend>-->
<!-- {{ item.topic }}, 时间: [{{ item.startTime }} ~-->
<!-- {{ item.endTime }}], 总数: {{ item.total }}, 查询时间:-->
<!-- {{ item.searchTime }}-->
<!-- </legend>-->
<!-- </fieldset>-->
<!-- </li>-->
<!-- </ul>-->
<a-collapse>
<a-collapse-panel
v-for="(item, index) in data"
:key="index"
:header="
item.topic +
', 时间[' +
item.startTime +
' ~ ' +
item.endTime +
'], 总数' +
item.total +
', 查询时间' +
item.searchTime
"
>
<ul>
<li v-for="(value, key) in item.detail" :key="key">
分区:{{ key }}, 数量: {{ value }}
</li>
</ul>
</a-collapse-panel>
</a-collapse>
</div>
</a-spin>
</div>
</template>
<script>
import request from "@/utils/request";
import { KafkaMessageApi, KafkaTopicApi } from "@/utils/api";
import notification from "ant-design-vue/lib/notification";
import locale from "ant-design-vue/lib/date-picker/locale/zh_CN";
import moment from "moment";
export default {
name: "SendStatistics",
props: {
topicList: {
type: Array,
},
},
data() {
return {
moment,
locale,
loading: false,
form: this.$form.createForm(this, { name: "message_send_statistics" }),
partitions: [],
selectPartition: [],
rangeConfig: {
rules: [{ type: "array", required: true, message: "请选择时间!" }],
},
data: [],
};
},
methods: {
handleSearch(e) {
e.preventDefault();
this.form.validateFields((err, values) => {
if (!err) {
const data = Object.assign({}, values);
delete data.time;
data.startTime = values.time[0];
data.endTime = values.time[1];
data.partition = Array.isArray(this.selectPartition)
? this.selectPartition
: [this.selectPartition];
this.loading = true;
request({
url: KafkaMessageApi.sendStatistics.url,
method: KafkaMessageApi.sendStatistics.method,
data: data,
}).then((res) => {
this.loading = false;
if (res.code == 0) {
this.data.splice(0, 0, res.data);
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
}
});
},
getPartitionInfo(topic) {
this.loading = true;
request({
url: KafkaTopicApi.getPartitionInfo.url + "?topic=" + topic,
method: KafkaTopicApi.getPartitionInfo.method,
}).then((res) => {
this.loading = false;
if (res.code != 0) {
notification.error({
message: "error",
description: res.msg,
});
} else {
this.partitions = res.data.map((v) => v.partition);
this.partitions.splice(0, 0, -1);
}
});
},
handleTopicChange(topic) {
this.selectPartition = -1;
this.getPartitionInfo(topic);
},
getCurrentTime() {
const date = new Date();
const yy = date.getFullYear();
const month = date.getMonth() + 1;
const mm = month < 10 ? "0" + month : month;
const day = date.getDate();
const dd = day < 10 ? "0" + day : day;
const hh = date.getHours();
const minutes = date.getMinutes();
const mf = minutes < 10 ? "0" + minutes : minutes;
const seconds = date.getSeconds();
const ss = seconds < 10 ? "0" + seconds : seconds;
return yy + "-" + mm + "-" + dd + " " + hh + ":" + mf + ":" + ss;
},
},
};
</script>
<style scoped>
.tab-content {
width: 100%;
height: 100%;
}
.ant-advanced-search-form {
padding: 24px;
background: #fbfbfb;
border: 1px solid #d9d9d9;
border-radius: 6px;
}
.ant-advanced-search-form .ant-form-item {
display: flex;
}
.ant-advanced-search-form .ant-form-item-control-wrapper {
flex: 1;
}
#components-form-topic-advanced-search .ant-form {
max-width: none;
margin-bottom: 1%;
}
#search-time-form-advanced-search .search-result-list {
margin-top: 16px;
border: 1px dashed #e9e9e9;
border-radius: 6px;
background-color: #fafafa;
min-height: 200px;
text-align: center;
padding-top: 80px;
}
.topic-select {
width: 500px !important;
}
.ant-calendar-picker {
width: 500px !important;
}
.type-select {
width: 150px !important;
}
.hint {
font-size: smaller;
color: green;
}
.ant-advanced-search-form {
padding-bottom: 0px;
}
.hr {
height: 1px;
border: none;
border-top: 1px dashed #0066cc;
padding-right: 10%;
}
#search-result-view ul {
list-style-type: none;
padding-left: 0px;
margin-top: 1%;
}
#search-result-view ul li {
margin-top: 1%;
}
#search-result-view fieldset {
border: 1px solid #333;
border-radius: 5px; /* rounded corners */
}
#search-result-view legend {
padding: 0.5em; /* inner padding */
}
#search-result-view .ant-collapse {
margin-top: 1%;
}
</style>
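
For orientation, each entry the collapse panel renders corresponds to one query result prepended to `data`. A hypothetical `res.data` payload, inferred purely from what the template reads — field values are illustrative, not taken from the real backend:

// Hypothetical sendStatistics result shape, inferred from the template above.
// All values are made up; only the field names are assumed.
const sampleResult = {
  topic: "demo-topic",
  startTime: "2023-12-03 00:00:00",
  endTime: "2023-12-03 23:59:59",
  total: 1024,
  searchTime: "2023-12-03 19:25:39",
  detail: { 0: 300, 1: 360, 2: 364 }, // partition -> message count
};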

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>
@@ -14,6 +14,15 @@
<div v-for="(v, k) in data" :key="k">
<strong>消费组: </strong><span class="color-font">{{ k }}</span
><strong> | 积压: </strong><span class="color-font">{{ v.lag }}</span>
<a-button
type="primary"
icon="reload"
size="small"
style="float: right"
@click="getConsumerDetail"
>
刷新
</a-button>
<hr />
<a-table
:columns="columns"

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -6,12 +6,23 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>
<a-spin :spinning="loading">
<h4>今天发送消息数{{ today.total }}</h4>
<h4>
今天发送消息数{{ today.total
}}<a-button
type="primary"
icon="reload"
size="small"
style="float: right"
@click="sendStatus"
>
刷新
</a-button>
</h4>
<a-table
:columns="columns"
:data-source="today.detail"

View File

@@ -1,5 +1,5 @@
<template>
<div class="content">
<div class="content" v-action:topic:load>
<a-spin :spinning="loading">
<div class="topic">
<div id="components-form-topic-advanced-search">

View File

@@ -6,7 +6,7 @@
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
>
<div>

View File

@@ -5,7 +5,7 @@
:width="1200"
:mask="false"
:destroyOnClose="true"
:maskClosable="false"
:maskClosable="true"
@cancel="handleCancel"
okText="确认"
cancelText="取消"
@@ -91,6 +91,7 @@ export default {
loading: false,
form: this.$form.createForm(this, { name: "coordinated" }),
brokerSize: 0,
brokerIdList: [],
replicaNums: 0,
defaultReplicaNums: 0,
};
@@ -136,6 +137,8 @@ export default {
method: KafkaClusterApi.getClusterInfo.method,
}).then((res) => {
this.brokerSize = res.data.nodes.length;
this.brokerIdList = res.data.nodes.map((o) => o.id);
this.brokerIdList.sort((a, b) => a - b);
});
},
handleCancel() {
@@ -149,9 +152,16 @@ export default {
if (this.data.partitions.length > 0) {
this.data.partitions.forEach((p) => {
if (value > p.replicas.length) {
let min = this.brokerIdList[0];
let max = this.brokerIdList[this.brokerSize - 1] + 1;
let num = p.replicas[p.replicas.length - 1];
for (let i = p.replicas.length; i < value; i++) {
p.replicas.push(++num % this.brokerSize);
++num;
if (num < max) {
p.replicas.push(num);
} else {
p.replicas.push((num % max) + min);
}
}
}
if (value < p.replicas.length) {
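
The hunk above walks broker ids upward from the partition's last replica and wraps around once it passes the largest id, so newly added replicas stay inside the cluster's actual broker-id range instead of assuming ids start at 0. A standalone sketch of the same idea, written index-based so it also holds when broker ids are non-contiguous (sample data is hypothetical, not the component's code):

// Sketch only: extend a replica list by cycling through the sorted broker ids.
function extendReplicas(replicas, brokerIdList, targetSize) {
  // if the last replica id is unknown, indexOf yields -1 and cycling starts at the first broker
  let idx = brokerIdList.indexOf(replicas[replicas.length - 1]);
  const result = replicas.slice();
  while (result.length < targetSize) {
    idx = (idx + 1) % brokerIdList.length; // wrap around past the last broker
    result.push(brokerIdList[idx]);
  }
  return result;
}
// e.g. extendReplicas([2], [0, 2, 5], 3) -> [2, 5, 0]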

View File

@@ -0,0 +1,235 @@
<template>
<div class="tab-content">
<a-spin :spinning="loading">
<div id="search-offset-form-advanced-search">
<a-form
class="ant-advanced-search-form"
:form="form"
@submit="handleSearch"
>
<a-row :gutter="24">
<a-col :span="16">
<a-form-item label="角色">
<a-input
v-decorator="['roleName']"
placeholder="请输入角色名!"
@change="onRoleNameChange"
/>
</a-form-item>
</a-col>
<a-col :span="2" :style="{ textAlign: 'right' }">
<a-form-item>
<a-button type="primary" html-type="submit">
刷新
</a-button>
</a-form-item>
</a-col>
</a-row>
</a-form>
</div>
<div class="operation-row-button">
<a-button
type="primary"
@click="openCreateUserDialog()"
v-action:user-manage:user:add
>新增集群归属权限
</a-button>
</div>
<a-table
:columns="columns"
:data-source="filteredData"
bordered
row-key="id"
>
<div slot="operation" slot-scope="record">
<a-popconfirm
title="确认删除?"
ok-text="确认"
cancel-text="取消"
@confirm="deleteRelation(record)"
>
<a-button
size="small"
href="javascript:;"
class="operation-btn"
v-action:user-manage:user:del
>删除
</a-button>
</a-popconfirm>
</div>
</a-table>
<CreateClusterRoleRelation
@closeCreateClusterRoleRelationDialog="
closeCreateClusterRoleRelationDialog
"
:visible="showCreateClusterRoleRelationDialog"
></CreateClusterRoleRelation>
</a-spin>
</div>
</template>
<script>
import request from "@/utils/request";
import notification from "ant-design-vue/lib/notification";
import { ClusterRoleRelationApi } from "@/utils/api";
import CreateClusterRoleRelation from "@/views/user/CreateClusterRoleRelation.vue";
export default {
name: "ClusterRoleRelation",
components: { CreateClusterRoleRelation },
data() {
return {
loading: false,
form: this.$form.createForm(this, { name: "user" }),
data: [],
filteredData: [],
filterRoleName: "",
showCreateClusterRoleRelationDialog: false,
columns: [
{
title: "角色",
dataIndex: "roleName",
key: "roleName",
},
{
title: "集群",
dataIndex: "clusterName",
key: "clusterName",
},
{
title: "操作",
key: "operation",
scopedSlots: { customRender: "operation" },
},
],
};
},
methods: {
handleSearch(e) {
// also invoked programmatically (refresh, created) without an event
if (e) e.preventDefault();
this.form.validateFields((err) => {
if (!err) {
this.loading = true;
request({
url: ClusterRoleRelationApi.select.url,
method: ClusterRoleRelationApi.select.method,
}).then((res) => {
this.loading = false;
if (res.code == 0) {
this.data = res.data;
this.filter();
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
}
});
},
refresh() {
this.handleSearch();
},
filter() {
this.filteredData = this.data.filter(
(e) => e && e.roleName && e.roleName.indexOf(this.filterRoleName) != -1
);
},
onRoleNameChange(input) {
this.filterRoleName = input.target.value;
this.filter();
},
openCreateRelationDialog() {
this.showCreateClusterRoleRelationDialog = true;
},
closeCreateClusterRoleRelationDialog(p) {
this.showCreateClusterRoleRelationDialog = false;
if (p.refresh) {
this.refresh();
}
},
deleteRelation(record) {
this.loading = true;
request({
url: ClusterRoleRelationApi.delete.url + "?id=" + record.id,
method: ClusterRoleRelationApi.delete.method,
}).then((res) => {
this.loading = false;
if (res.code == 0) {
this.refresh();
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
},
created() {
this.handleSearch();
},
};
</script>
<style scoped>
.tab-content {
width: 100%;
height: 100%;
}
.ant-advanced-search-form {
padding: 24px;
background: #fbfbfb;
border: 1px solid #d9d9d9;
border-radius: 6px;
}
.ant-advanced-search-form .ant-form-item {
display: flex;
}
.ant-advanced-search-form input {
width: 400px;
}
.ant-advanced-search-form .ant-form-item-control-wrapper {
flex: 1;
}
#components-form-topic-advanced-search .ant-form {
max-width: none;
margin-bottom: 1%;
}
#search-offset-form-advanced-search .search-result-list {
margin-top: 16px;
border: 1px dashed #e9e9e9;
border-radius: 6px;
background-color: #fafafa;
min-height: 200px;
text-align: center;
padding-top: 80px;
}
.operation-row-button {
height: 4%;
text-align: left;
margin-bottom: 5px;
margin-top: 5px;
}
.operation-btn {
margin-right: 3%;
}
</style>
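
The component assumes `ClusterRoleRelationApi` exposes `select`, `add`, and `delete` endpoint descriptors. A hypothetical entry in `@/utils/api`, following the `{ url, method }` pattern used throughout — the paths and verbs below are placeholders, not the project's real routes:

// Placeholder sketch of ClusterRoleRelationApi; only the shape is assumed.
export const ClusterRoleRelationApi = {
  select: { url: "/cluster-role/select", method: "get" },
  add: { url: "/cluster-role/add", method: "post" },
  delete: { url: "/cluster-role/delete", method: "delete" },
};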

View File

@@ -0,0 +1,175 @@
<template>
<a-modal
title="新增集群归属权限"
:visible="show"
:width="800"
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
@cancel="handleCancel"
>
<div>
<a-spin :spinning="loading">
<a-form
:form="form"
:label-col="{ span: 5 }"
:wrapper-col="{ span: 12 }"
@submit="handleSubmit"
>
<a-form-item label="角色">
<a-select
show-search
option-filter-prop="children"
v-decorator="[
'roleId',
{ rules: [{ required: true, message: '请选择一个角色!' }] },
]"
placeholder="请选择一个角色"
>
<a-select-option
v-for="role in roles"
:key="role.id"
:value="role.id"
>
{{ role.roleName }}
</a-select-option>
</a-select>
</a-form-item>
<a-form-item label="集群">
<a-select
show-search
option-filter-prop="children"
v-decorator="[
'clusterInfoId',
{ rules: [{ required: true, message: '请选择集群!' }] },
]"
placeholder="请选择集群"
>
<a-select-option
v-for="clusterInfo in clusterInfoList"
:key="clusterInfo.id"
:value="clusterInfo.id"
>
{{ clusterInfo.clusterName }}
</a-select-option>
</a-select>
</a-form-item>
<a-form-item :wrapper-col="{ span: 12, offset: 5 }">
<a-button type="primary" html-type="submit"> 提交</a-button>
</a-form-item>
</a-form>
</a-spin>
</div>
</a-modal>
</template>
<script>
import request from "@/utils/request";
import notification from "ant-design-vue/es/notification";
import {
UserManageApi,
KafkaClusterApi,
ClusterRoleRelationApi,
} from "@/utils/api";
export default {
name: "CreateClusterRoleRelation",
props: {
visible: {
type: Boolean,
default: false,
},
},
data() {
return {
show: this.visible,
data: [],
loading: false,
form: this.$form.createForm(this, { name: "coordinated" }),
roles: [],
clusterInfoList: [],
};
},
watch: {
visible(v) {
this.show = v;
if (this.show) {
this.getRoles();
this.getClusterInfoList();
}
},
},
methods: {
handleSubmit(e) {
e.preventDefault();
this.form.validateFields((err, values) => {
if (!err) {
this.loading = true;
request({
url: ClusterRoleRelationApi.add.url,
method: ClusterRoleRelationApi.add.method,
data: values,
}).then((res) => {
this.loading = false;
if (res.code == 0) {
this.$message.success(res.msg);
this.$emit("closeCreateClusterRoleRelationDialog", {
refresh: true,
data: res.data,
});
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
}
});
},
getRoles() {
this.loading = true;
request({
url: UserManageApi.getRole.url,
method: UserManageApi.getRole.method,
}).then((res) => {
this.loading = false;
if (res.code == 0) {
this.roles = res.data;
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
getClusterInfoList() {
request({
url: KafkaClusterApi.getClusterInfoListForSelect.url,
method: KafkaClusterApi.getClusterInfoListForSelect.method,
}).then((res) => {
if (res.code == 0) {
this.clusterInfoList = res.data;
this.clusterInfoList.splice(0, 0, { id: -1, clusterName: "全部" });
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
handleCancel() {
this.data = [];
this.$emit("closeCreateClusterRoleRelationDialog", { refresh: true });
},
},
created() {
this.getRoles();
this.getClusterInfoList();
},
};
</script>
<style scoped></style>

View File

@@ -46,6 +46,7 @@
</div>
<div class="role-info" v-if="selectedRole.roleName">
<a-form :form="form">
<h2>角色信息配置</h2>
<a-form-item label="角色名称">
<a-input
v-decorator="[
@@ -73,7 +74,8 @@
/>
</a-form-item>
<a-form-item label="权限配置">
<a-form-item>
<h2>功能权限配置</h2>
<div
v-for="(menuPermission, index) in selectedPermissions"
:key="index"

View File

@@ -45,7 +45,11 @@
bordered
row-key="id"
>
<div slot="operation" slot-scope="record" v-show="record.username != 'super-admin'">
<div
slot="operation"
slot-scope="record"
v-show="record.username != 'super-admin'"
>
<a-popconfirm
:title="'删除用户: ' + record.username + ''"
ok-text="确认"
@@ -178,7 +182,7 @@ export default {
},
filter() {
this.filteredData = this.data.filter(
(e) => e.username.indexOf(this.filterUsername) != -1
(e) => e && e.username && e.username.indexOf(this.filterUsername) != -1
);
},
onUsernameChange(input) {

View File

@@ -18,13 +18,20 @@
</a-tab-pane>
<a-tab-pane
key="3"
tab="集群权限"
v-if="isAuthorized('user-manage:cluster-role')"
>
<ClusterRoleRelation></ClusterRoleRelation>
</a-tab-pane>
<a-tab-pane
key="4"
tab="权限列表"
v-if="isAuthorized('user-manage:permission')"
>
<Permission></Permission>
</a-tab-pane>
<a-tab-pane
key="4"
key="5"
tab="个人设置"
v-if="isAuthorized('user-manage:setting')"
>
@@ -40,10 +47,11 @@ import Permission from "@/views/user/Permission.vue";
import Role from "@/views/user/Role.vue";
import User from "@/views/user/User.vue";
import UserSetting from "@/views/user/UserSetting.vue";
import ClusterRoleRelation from "@/views/user/ClusterRoleRelation.vue";
import { isAuthorized } from "@/utils/auth";
export default {
name: "UserManage",
components: { Permission, Role, User, UserSetting },
components: { Permission, Role, User, UserSetting, ClusterRoleRelation },
data() {
return {
loading: false,