25 Commits

Author SHA1 Message Date
yinuo
8942760a5a basic user auth 2022-05-06 10:36:00 +08:00
Xiaodong Xu
b1feaad9f7 Merge pull request #14 from dongyinuo/feature/dongyinuo/add/contact
Feature/dongyinuo/add/contact
2022-04-29 17:27:19 +08:00
yinuo
4d372f8374 Add contact info 2022-04-29 17:22:44 +08:00
yinuo
4b2c544c0d Add contact info 2022-04-29 17:21:32 +08:00
许晓东
8131cb1a42 Publish the v1.0.4 package download link 2022-02-16 20:01:06 +08:00
许晓东
1dd6466261 Replica reassignment 2022-02-16 19:50:35 +08:00
许晓东
dda08a2152 Replica reassignment: generate the reassignment plan 2022-02-15 20:13:07 +08:00
许晓东
01c7121ee4 Keep the cluster node list sorted 2022-01-22 23:33:13 +08:00
许晓东
d939d7653c Show Broker API version compatibility info on the home page 2022-01-22 23:07:41 +08:00
许晓东
058cd5a24e Query current reassignments; handle unsupported-version exceptions 2022-01-20 13:44:37 +08:00
许晓东
db3f55ac4a polish README 2022-01-19 19:05:00 +08:00
许晓东
a311a34537 Fix stack-overflow bug in partition comparison 2022-01-18 20:42:11 +08:00
许晓东
e8fe2ea1c7 Support Chinese cluster names; selectable time display order in message query 2022-01-13 14:19:17 +08:00
许晓东
10302dd39c v1.0.3 package download link 2022-01-09 23:57:00 +08:00
许晓东
55a4483fcc Package only the zip archive 2022-01-09 23:45:39 +08:00
许晓东
4dd2412b78 polish README 2022-01-09 23:40:20 +08:00
许晓东
387c714072 polish README 2022-01-09 23:27:23 +08:00
许晓东
6a2d876d50 Multi-cluster support; cluster switching 2022-01-06 19:31:44 +08:00
许晓东
f5fb2c4f88 Cluster switching 2022-01-05 21:19:46 +08:00
许晓东
6f9676e259 Cluster list; add cluster 2022-01-04 21:06:50 +08:00
许晓东
2427ce2c1e Prepare for multi-cluster support development 2022-01-03 22:02:03 +08:00
许晓东
02abe67fce Message query filtering 2021-12-30 14:17:47 +08:00
许晓东
ad39f4e82c Message query filtering 2021-12-29 21:15:56 +08:00
许晓东
243c89b459 Fuzzy topic search; message filter page configuration 2021-12-28 20:39:07 +08:00
许晓东
11418cd6e0 Remove the message-sync solution entry 2021-12-27 20:41:19 +08:00
134 changed files with 5151 additions and 14232 deletions

README.md

@@ -2,89 +2,79 @@
A lightweight Kafka visual management platform: quick to install and configure, and simple to use.
To keep development simple there is no internationalization support; the UI is Chinese only.
If you have used rocketmq-console, the front-end style is quite similar.
-## Package Download
-Choose one of these two options: download the package directly, or download the source and build it yourself
-* Download (v1.0.2): [kafka-console-ui.tar.gz](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.tar.gz) or [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip)
-* Follow the packaging and deployment steps below and rebuild from source (includes the latest committed features)
+## Page Preview
+If GitHub shows images for you, click [view the menu pages](./document/overview/概览.md) to see what each page looks like
+## Cluster Migration Support
+The main branch and future releases no longer provide the message-sync / cluster-migration solution; if you need it, see: [cluster migration notes](./document/datasync/集群迁移.md)
+## ACL Notes
+ACL configuration: if the Kafka cluster has ACL enabled but the console does not show the Acl menu, see the [ACL setup notes](./document/acl/Acl.md)
## Features
+* Multi-cluster support
* Cluster info
* Topic management
* Consumer group management
* Message management
-* SASL_SCRAM-based authentication and authorization management
+* ACL
* Operations
-![功能特性](./document/功能特性.png)
+See this mind map for the detailed feature list:
+![功能特性](./document/img/功能特性.png)
-## Tech Stack
-* spring boot
-* java, scala
-* kafka
-* h2
-* vue
-## Kafka Version
-* Currently built against kafka 2.8.0
-## Monitoring
-Only operations management is provided; monitoring and alerting need other components, see: https://blog.csdn.net/x763795151/article/details/119705372
-# Packaging and Deployment
-## Packaging
-Requirements
-* maven 3.6+
-* jdk 8
-* git
-```
-git clone https://github.com/xxd763795151/kafka-console-ui.git
-cd kafka-console-ui
-# on linux or mac
-sh package.sh
-# on windows
-package.bat
-```
-On success the build outputs these two archives:
-* target/kafka-console-ui.tar.gz
-* target/kafka-console-ui.zip
-## Deployment
-### Mac OS or Linux
+## Package Download
+Download (v1.0.4): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.4/kafka-console-ui.zip)
+## Quick Start
+### Windows
+1. Unpack the zip package
+2. Enter the bin directory (you must be inside bin) and double-click `start.bat` to start
+3. To stop, simply close the command-line window that was opened
+### Linux or Mac OS
```
-# unpack (tar.gz shown here)
-tar -zxvf kafka-console-ui.tar.gz
+# unpack
+unzip kafka-console-ui.zip
# enter the unpacked directory
cd kafka-console-ui
-# edit the configuration
-vim config/application.yml
# start
sh bin/start.sh
# stop
sh bin/shutdown.sh
```
-### Windows
-1. Unpack the zip package
-2. Edit the configuration file `config/application.yml`
-3. Enter the bin directory (you must be inside bin) and run `start.bat` to start
+### Access
Once started, open: http://127.0.0.1:7766
-# Development Environment
-* jdk 8
-* idea
-* scala 2.13
-* maven >= 3.6
-* webstorm
-Apart from webstorm, the front-end IDE, which you may replace as you like, jdk and scala are required.
-# Local Development Setup
-Taking myself as an example: get the development tools ready, then clone the code locally.
-## Backend Setup
-1. Open the project with IDEA
-2. In IDEA's Project Structure (Settings) -> Modules, mark src/main/scala as Sources (by convention src/main/java is the source directory, so this one must be added as well)
-3. In IDEA's Project Structure (Settings) -> Libraries, add the Scala SDK, pointing it at your locally downloaded Scala 2.13 directory
-## Frontend
-The front-end code lives in the project's ui directory; open it with any front-end IDE and develop there.
-## Notes
-The front and back ends are separate; if you start only the backend without building the front end, there are no pages. Build first with `sh package.sh` and then start from IDEA, or run the front end separately
-# Page Examples
-If ACL is not enabled in the configuration, the ACL menu pages are not shown, so the navigation bar has no Acl entry
-![集群](./document/集群.png)
-![Topic](./document/Topic.png)
-![消费组](./document/消费组.png)
-![运维](./document/运维.png)
-Added a message search page:
-![消息](./document/消息.png)
+### Configuring Clusters
+On first startup, because no Kafka cluster has been configured yet, the top-right corner of the page may show an error such as "No Cluster Info" or a prompt like "no cluster info, please switch cluster first".
+Configure the cluster as follows:
+1. Click the [Ops] menu in the navigation bar at the top of the page
+2. Click the [Switch Cluster] button under cluster management
+3. In the dialog, click [Add Cluster]
+4. Enter the Kafka cluster address and a name (any name will do)
+5. Click submit and the cluster is added
+6. Once added, the cluster appears in the dialog; click the [Switch] button on its right to make it the current cluster
+To add more clusters later, follow the same flow. To switch to a cluster, click its switch button; the top-right corner of the page shows which cluster is currently in use. If unsure, refresh the page.
+When adding a cluster, besides the cluster address you can also enter other properties for it, such as the request timeout and ACL settings (a sketch follows below). If ACL is enabled, the ACL menu appears in the navigation bar after switching to that cluster. The best-supported setup is SASL_SCRAM-based authentication and authorization management; I have not verified the others (although I developed it, I have not fully verified this part), but the authorization side should be generic
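As a rough sketch of those extra properties (this uses the standard Kafka client configuration key; exactly which keys the console passes through per cluster is an assumption here, so treat it as illustrative):

```properties
# client/admin request timeout in milliseconds (illustrative value)
request.timeout.ms=10000
```

For the ACL-related keys that can go in the same properties box, see the [ACL setup notes](./document/acl/Acl.md).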
+## Kafka Version
+* Currently built against kafka 2.8.0
+## Monitoring
+Only operations management is provided; monitoring and alerting require other components. If you need them, see: https://blog.csdn.net/x763795151/article/details/119705372
+## Building from Source
+To build from source, see the [build notes](./document/package/源码打包.md)
+## Local Development
+For local development environment setup, see [development setup](./document/develop/开发配置.md)
+## Contact
++ WeChat group
+<img src="https://github.com/dongyinuo/kafka-console-ui/blob/feature/dongyinuo/add/contact/document/contact/weixin_contact.jpeg" width="40%"/>
++ If the contact above no longer works, add one of these WeChat accounts and state your intent
+    - xxd763795151
+    - wxid_7jy2ezljvebt12

Maven assembly descriptor (file name not shown in this view)

@@ -4,7 +4,7 @@
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>rocketmq-reput</id>
<formats>
-    <format>tar.gz</format>
+    <!-- <format>tar.gz</format>-->
    <format>zip</format>
</formats>

Binary image file removed (not shown): 42 KiB

document/acl/Acl.md (new file)

@@ -0,0 +1,36 @@
# ACL Setup Notes
## Preface
You may have come here from this article: [how to manage kafka ACL configuration in a visual way](https://blog.csdn.net/x763795151/article/details/120200119)
That article describes enabling ACL support by editing the configuration file application.yml, for example:
```yaml
kafka:
  config:
    # kafka broker addresses, comma separated
    bootstrap-server: 'localhost:9092'
    # whether the broker has ACL enabled; if not, ignore all the settings below
    enable-acl: true
    # only two security protocols are supported, SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise this setting does not matter
    security-protocol: SASL_PLAINTEXT
    sasl-mechanism: SCRAM-SHA-256
    # super admin username, already configured as a super user on the broker
    admin-username: admin
    # super admin password
    admin-password: admin
    # automatically create the configured super admin user at startup
    admin-create: true
    # zookeeper address the broker connects to
    zookeeper-addr: localhost:2181
    sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
```
It stated that kafka.config.enable-acl must be set to true.
Note: **this approach is no longer supported.**
## Current Version
The console now supports multi-cluster configuration (see the "Configuring Clusters" section of the main README), so these extra configuration options have been removed.
If ACL is enabled, enter the cluster's ACL-related settings in the properties box when adding the cluster on the page, like this: ![新增集群](./新增集群.png)
If the console detects ACL-related properties, the ACL menu appears automatically after you switch to that cluster.
Note: only SASL is supported.

Two binary image files added (not shown): 245 KiB and 128 KiB

document/datasync/集群迁移.md (new file)

@@ -0,0 +1,11 @@
# Cluster Migration
You may have read my earlier posts on migrating between on-cloud and off-cloud clusters and, out of curiosity, ended up here.
The article from back then: [smooth migration between old and new kafka clusters in practice](https://blog.csdn.net/x763795151/article/details/121070563)
However, because those features carry business-specific concerns, they have all been removed in the new version.
The main branch and future releases no longer provide the message-sync / cluster-migration solution. If you need it, use the code on the single-data-sync branch, or the historical v1.0.2 release package (direct download): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip)
v1.0.2 and earlier support only a single cluster, but their SASL_SCRAM authentication and authorization management is quite complete.
Later versions add multi-cluster management and remove or streamline some pre-v1.0.2 features; the goal is a sufficiently lightweight management tool that stays out of other business concerns.

document/develop/开发配置.md (new file)

@@ -0,0 +1,31 @@
# Local Development Setup
## Tech Stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Development Environment
* jdk 8
* idea
* scala 2.13
* maven >= 3.6
* webstorm
Apart from webstorm, the front-end IDE, which you may replace as you like, jdk and scala are required.
Scala 2.13 can be downloaded at the bottom of this page: https://www.scala-lang.org/download/scala2.html
## Cloning the Code
Taking myself as an example: get the development tools ready, then clone the code locally.
## Backend Setup
1. Open the project with IDEA
2. In IDEA's Project Structure (Settings) -> Modules, mark src/main/scala as Sources (by convention src/main/java is the source directory, so this extra source directory must be added)
3. In IDEA's Project Structure (Settings) -> Libraries, add the Scala SDK and point it at your locally downloaded Scala 2.13 directory (recent IDEA versions let you pick it directly, without downloading first)
## Frontend
The front-end code is in the project's ui directory; open it with a front-end IDE such as WebStorm and develop there.
## Notes
The front and back ends are separate: if you start only the backend project, src/main/resources may contain no static files, so the browser will show no pages.
Either build the front end first, e.g. by running `sh package.sh`, and then start from IDEA; or start the backend in IDEA and run the front-end project under ui standalone in a front-end IDE such as WebStorm (a sketch follows below).
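A minimal sketch of running the front end standalone, assuming the ui directory is a standard Vue CLI project with the usual npm scripts (an assumption, not verified here):

```
# from the project root
cd ui
# install dependencies
npm install
# start the dev server with hot reload
npm run serve
```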

Nine binary image files added (not shown): 99, 45, 38, 75, 36, 45, 58, 26 and 57 KiB

document/overview/概览.md (new file)

@@ -0,0 +1,32 @@
# Menu Preview
If ACL is not enabled in the configuration, the ACL menu pages are not shown, so the navigation bar has no Acl entry
## Cluster
* Cluster list
![集群](./img/集群.png)
* View or modify cluster configuration
![Broker配置](./img/Broker配置.png)
## Topic
* Topic list
![Topic](./img/Topic.png)
## Consumer Groups
* Consumer group list
![消费组](./img/消费组.png)
## Messages
* Search or filter messages by time
![消息时间查询](./img/消息时间查询.png)
* Message detail
![消息详情](./img/消息详情.png)
## Operations
* Operations page
![运维](./img/运维.png)
* Cluster switching
![集群切换](./img/集群切换.png)

document/package/源码打包.md (new file)

@@ -0,0 +1,32 @@
# Building from Source
You can download the latest code and build it yourself; compared with the released packages, the latest code may contain the newest features
## Requirements
* maven 3.6+
* jdk 8
* git (optional)
Maven >= 3.6 is the recommended version. I have not tried 3.4.x or 3.5.x; with 3.3.x on Windows I hit a packaging bug where spring boot's application.yml may not land in the right directory.
If a 3.6+ version also shows this problem on Mac, try the latest release.
## Getting the Source
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
```
Or download the source directly from the page
## Build
I have written a simple build script; just run it.
### Windows
```
cd kafka-console-ui
# on windows
package.bat
```
### Linux or Mac OS
```
cd kafka-console-ui
# on linux or mac
sh package.sh
```

Five binary image files removed (not shown): 492, 400, 53, 126 and 30 KiB

pom.xml

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
-<version>1.0.2</version>
+<version>1.0.4</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>
@@ -25,8 +25,14 @@
<maven.assembly.plugin.version>3.0.0</maven.assembly.plugin.version>
<mybatis-plus-boot-starter.version>3.4.2</mybatis-plus-boot-starter.version>
<scala.version>2.13.6</scala.version>
+<jwt.version>0.9.0</jwt.version>
</properties>
<dependencies>
+<dependency>
+    <groupId>io.jsonwebtoken</groupId>
+    <artifactId>jjwt</artifactId>
+    <version>${jwt.version}</version>
+</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>

src/main/java/com/xuxd/kafka/console/KafkaConsoleUiApplication.java

@@ -3,11 +3,13 @@ package com.xuxd.kafka.console;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.scheduling.annotation.EnableScheduling;
@MapperScan("com.xuxd.kafka.console.dao")
@SpringBootApplication
@EnableScheduling
+@ServletComponentScan
public class KafkaConsoleUiApplication {
public static void main(String[] args) {

src/main/java/com/xuxd/kafka/console/beans/BrokerNode.java

@@ -8,7 +8,7 @@ import org.apache.kafka.common.Node;
* @author xuxd
* @date 2021-10-08 14:03:21
**/
-public class BrokerNode {
+public class BrokerNode implements Comparable {
private int id;
@@ -80,4 +80,8 @@ public class BrokerNode {
public void setController(boolean controller) {
    isController = controller;
}
+@Override public int compareTo(Object o) {
+    return this.id - ((BrokerNode) o).id;
+}
}

src/main/java/com/xuxd/kafka/console/beans/KafkaConsoleException.java (new file)

@@ -0,0 +1,7 @@
package com.xuxd.kafka.console.beans;
public class KafkaConsoleException extends RuntimeException{
public KafkaConsoleException(String msg){
super(msg);
}
}

src/main/java/com/xuxd/kafka/console/beans/MessageFilter.java (new file)

@@ -0,0 +1,73 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import org.apache.kafka.common.serialization.Deserializer;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 15:30:08
**/
public class MessageFilter {
private FilterType filterType = FilterType.NONE;
private Object searchContent = null;
private String headerKey = null;
private String headerValue = null;
private Deserializer deserializer = null;
private boolean isContainsValue = false;
public FilterType getFilterType() {
return filterType;
}
public void setFilterType(FilterType filterType) {
this.filterType = filterType;
}
public Object getSearchContent() {
return searchContent;
}
public void setSearchContent(Object searchContent) {
this.searchContent = searchContent;
}
public String getHeaderKey() {
return headerKey;
}
public void setHeaderKey(String headerKey) {
this.headerKey = headerKey;
}
public String getHeaderValue() {
return headerValue;
}
public void setHeaderValue(String headerValue) {
this.headerValue = headerValue;
}
public Deserializer getDeserializer() {
return deserializer;
}
public void setDeserializer(Deserializer deserializer) {
this.deserializer = deserializer;
}
public boolean isContainsValue() {
return isContainsValue;
}
public void setContainsValue(boolean containsValue) {
isContainsValue = containsValue;
}
}

src/main/java/com/xuxd/kafka/console/beans/QueryMessage.java

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.beans;
+import com.xuxd.kafka.console.beans.enums.FilterType;
import lombok.Data;
/**
@@ -24,4 +25,12 @@ public class QueryMessage {
private String keyDeserializer;
private String valueDeserializer;
+private FilterType filter;
+private String value;
+private String headerKey;
+private String headerValue;
}

src/main/java/com/xuxd/kafka/console/beans/ResponseData.java

@@ -11,7 +11,7 @@ import lombok.Setter;
**/
public class ResponseData<T> {
-public static final int SUCCESS_CODE = 0, FAILED_CODE = -9999;
+public static final int SUCCESS_CODE = 0, TOKEN_ILLEGAL = -5000, FAILED_CODE = -9999;
public static final String SUCCESS_MSG = "success", FAILED_MSG = "failed";
@@ -58,6 +58,12 @@ public class ResponseData<T> {
    return this;
}
+public ResponseData<T> failed(int code) {
+    this.code = code;
+    this.msg = FAILED_MSG;
+    return this;
+}
public ResponseData<T> failed(String msg) {
    this.code = FAILED_CODE;
    this.msg = msg;

src/main/java/com/xuxd/kafka/console/beans/TopicPartition.java

@@ -21,7 +21,7 @@ public class TopicPartition implements Comparable {
}
TopicPartition other = (TopicPartition) o;
if (!this.topic.equals(other.getTopic())) {
-    return this.compareTo(other);
+    return this.topic.compareTo(other.topic);
}
return this.partition - other.partition;

src/main/java/com/xuxd/kafka/console/beans/annotation/RequiredAuthorize.java (new file)

@@ -0,0 +1,9 @@
package com.xuxd.kafka.console.beans.annotation;
import java.lang.annotation.*;
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface RequiredAuthorize {
}

src/main/java/com/xuxd/kafka/console/beans/dos/ClusterInfoDO.java (new file)

@@ -0,0 +1,28 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:54:24
**/
@Data
@TableName("t_cluster_info")
public class ClusterInfoDO {
@TableId(type = IdType.AUTO)
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
}

src/main/java/com/xuxd/kafka/console/beans/dos/DevOpsUserDO.java (new file)

@@ -0,0 +1,29 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import com.xuxd.kafka.console.beans.enums.Role;
import lombok.Data;
import java.util.Date;
@Data
@TableName("t_devops_user")
public class DevOpsUserDO {
@TableId(type = IdType.AUTO)
private Long id;
private String username;
private String password;
private Role role;
private boolean delete;
private Date createTime;
private Date updateTime;
}

src/main/java/com/xuxd/kafka/console/beans/dos/KafkaUserDO.java

@@ -23,4 +23,6 @@ public class KafkaUserDO {
private String password;
private String updateTime;
+private Long clusterInfoId;
}

src/main/java/com/xuxd/kafka/console/beans/dto/ClusterInfoDTO.java (new file)

@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 20:19:03
**/
@Data
public class ClusterInfoDTO {
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
public ClusterInfoDO to() {
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setId(id);
infoDO.setClusterName(clusterName);
infoDO.setAddress(address);
if (StringUtils.isNotBlank(properties)) {
infoDO.setProperties(ConvertUtil.propertiesStr2JsonStr(properties));
}
return infoDO;
}
}

src/main/java/com/xuxd/kafka/console/beans/dto/ProposedAssignmentDTO.java (new file)

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.beans.dto;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-02-15 19:08:13
**/
@Data
public class ProposedAssignmentDTO {
private String topic;
private List<Integer> brokers;
}

src/main/java/com/xuxd/kafka/console/beans/dto/QueryMessageDTO.java

@@ -1,8 +1,10 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.QueryMessage;
+import com.xuxd.kafka.console.beans.enums.FilterType;
import java.util.Date;
import lombok.Data;
+import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
@@ -27,6 +29,14 @@ public class QueryMessageDTO {
private String valueDeserializer;
+private String filter;
+private String value;
+private String headerKey;
+private String headerValue;
public QueryMessage toQueryMessage() {
    QueryMessage queryMessage = new QueryMessage();
    queryMessage.setTopic(topic);
@@ -45,6 +55,21 @@ public class QueryMessageDTO {
    queryMessage.setKeyDeserializer(keyDeserializer);
    queryMessage.setValueDeserializer(valueDeserializer);
+    if (StringUtils.isNotBlank(filter)) {
+        queryMessage.setFilter(FilterType.valueOf(filter.toUpperCase()));
+    } else {
+        queryMessage.setFilter(FilterType.NONE);
+    }
+    if (StringUtils.isNotBlank(value)) {
+        queryMessage.setValue(value.trim());
+    }
+    if (StringUtils.isNotBlank(headerKey)) {
+        queryMessage.setHeaderKey(headerKey.trim());
+    }
+    if (StringUtils.isNotBlank(headerValue)) {
+        queryMessage.setHeaderValue(headerValue.trim());
+    }
    return queryMessage;
}
}

src/main/java/com/xuxd/kafka/console/beans/dto/user/AddUserDTO.java (new file)

@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.dto.user;
import com.xuxd.kafka.console.beans.enums.Role;
import lombok.Data;
@Data
public class AddUserDTO {
private String username;
private String password;
private Role role;
}

src/main/java/com/xuxd/kafka/console/beans/dto/user/ListUserDTO.java (new file)

@@ -0,0 +1,9 @@
package com.xuxd.kafka.console.beans.dto.user;
import lombok.Data;
@Data
public class ListUserDTO {
private Long id;
private String username;
}

src/main/java/com/xuxd/kafka/console/beans/dto/user/LoginDTO.java (new file)

@@ -0,0 +1,9 @@
package com.xuxd.kafka.console.beans.dto.user;
import lombok.Data;
@Data
public class LoginDTO {
private String username;
private String password;
}

src/main/java/com/xuxd/kafka/console/beans/dto/user/PasswordDTO.java (new file)

@@ -0,0 +1,9 @@
package com.xuxd.kafka.console.beans.dto.user;
import lombok.Data;
@Data
public class PasswordDTO {
private Long userId;
private String password;
}

src/main/java/com/xuxd/kafka/console/beans/dto/user/UpdateUserDTO.java (new file)

@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.dto.user;
import com.xuxd.kafka.console.beans.enums.Role;
import lombok.Data;
@Data
public class UpdateUserDTO {
private String username;
private String password;
private Role role;
}

src/main/java/com/xuxd/kafka/console/beans/enums/FilterType.java (new file)

@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.enums;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 14:36:01
**/
public enum FilterType {
NONE, BODY, HEADER
}

src/main/java/com/xuxd/kafka/console/beans/enums/Role.java (new file)

@@ -0,0 +1,6 @@
package com.xuxd.kafka.console.beans.enums;
public enum Role {
developer,
manager
}

src/main/java/com/xuxd/kafka/console/beans/vo/BrokerApiVersionVO.java (new file)

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-22 16:24:58
**/
@Data
public class BrokerApiVersionVO {
private int brokerId;
private String host;
private int supportNums;
private int unSupportNums;
private List<String> versionInfo;
}

src/main/java/com/xuxd/kafka/console/beans/vo/ClusterInfoVO.java (new file)

@@ -0,0 +1,40 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 19:16:11
**/
@Data
public class ClusterInfoVO {
private Long id;
private String clusterName;
private String address;
private List<String> properties;
private String updateTime;
public static ClusterInfoVO from(ClusterInfoDO infoDO) {
ClusterInfoVO vo = new ClusterInfoVO();
vo.setId(infoDO.getId());
vo.setClusterName(infoDO.getClusterName());
vo.setAddress(infoDO.getAddress());
vo.setUpdateTime(infoDO.getUpdateTime());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
vo.setProperties(ConvertUtil.jsonStr2List(infoDO.getProperties()));
}
return vo;
}
}

src/main/java/com/xuxd/kafka/console/beans/vo/DevOpsUserVO.java (new file)

@@ -0,0 +1,16 @@
package com.xuxd.kafka.console.beans.vo;
import com.fasterxml.jackson.annotation.JsonFormat;
import com.xuxd.kafka.console.beans.enums.Role;
import lombok.Data;
import java.util.Date;
@Data
public class DevOpsUserVO {
private Long id;
private String username;
private Role role;
@JsonFormat(pattern="yyyy-MM-dd HH:mm:ss",timezone="GMT+8")
private Date createTime;
}

src/main/java/com/xuxd/kafka/console/beans/vo/LoginVO.java (new file)

@@ -0,0 +1,16 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.enums.Role;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class LoginVO {
private String token;
private Role role;
}

src/main/java/com/xuxd/kafka/console/boot/Bootstrap.java (new file)

@@ -0,0 +1,71 @@
package com.xuxd.kafka.console.boot;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.DevOpsUserDO;
import com.xuxd.kafka.console.config.KafkaConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.DevOpsUserMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import com.xuxd.kafka.console.utils.Md5Utils;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 19:16:50
**/
@Slf4j
@Component
public class Bootstrap implements SmartInitializingSingleton {
public static final String DEFAULT_CLUSTER_NAME = "default";
private final KafkaConfig config;
private final ClusterInfoMapper clusterInfoMapper;
public Bootstrap(KafkaConfig config, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.config = config;
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
private void initialize() {
loadDefaultClusterConfig();
}
private void loadDefaultClusterConfig() {
log.info("load default kafka config.");
if (StringUtils.isBlank(config.getBootstrapServer())) {
return;
}
QueryWrapper<ClusterInfoDO> clusterInfoDOQueryWrapper = new QueryWrapper<>();
clusterInfoDOQueryWrapper.eq("cluster_name", DEFAULT_CLUSTER_NAME);
List<Object> objects = clusterInfoMapper.selectObjs(clusterInfoDOQueryWrapper);
if (CollectionUtils.isNotEmpty(objects)) {
log.warn("default kafka cluster config has existed[any of cluster name or address].");
return;
}
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setClusterName(DEFAULT_CLUSTER_NAME);
infoDO.setAddress(config.getBootstrapServer().trim());
infoDO.setProperties(ConvertUtil.toJsonString(config.getProperties()));
clusterInfoMapper.insert(infoDO);
log.info("Insert default config: {}", infoDO);
}
@Override public void afterSingletonsInstantiated() {
initialize();
}
}

src/main/java/com/xuxd/kafka/console/boot/InitSuperDevOpsUser.java (new file)

@@ -0,0 +1,42 @@
package com.xuxd.kafka.console.boot;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.DevOpsUserDO;
import com.xuxd.kafka.console.beans.enums.Role;
import com.xuxd.kafka.console.dao.DevOpsUserMapper;
import com.xuxd.kafka.console.utils.Md5Utils;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
@Slf4j
@RequiredArgsConstructor
public class InitSuperDevOpsUser implements SmartInitializingSingleton {
private final DevOpsUserMapper devOpsUserMapper;
public final static String SUPER_USERNAME = "admin";
@Value("${devops.password:kafka-console-ui521}")
private String password;
@Override
public void afterSingletonsInstantiated() {
QueryWrapper<DevOpsUserDO> userDOQueryWrapper = new QueryWrapper<>();
userDOQueryWrapper.eq("username", SUPER_USERNAME);
DevOpsUserDO userDO = devOpsUserMapper.selectOne(userDOQueryWrapper);
if (userDO == null){
DevOpsUserDO devOpsUserDO = new DevOpsUserDO();
devOpsUserDO.setUsername(SUPER_USERNAME);
devOpsUserDO.setPassword(Md5Utils.MD5(password));
devOpsUserDO.setRole(Role.manager);
devOpsUserMapper.insert(devOpsUserDO);
} else {
userDO.setPassword(Md5Utils.MD5(password));
devOpsUserMapper.updateById(userDO);
}
log.info("init super devops user done, username = {}", SUPER_USERNAME);
}
}

src/main/java/com/xuxd/kafka/console/config/ContextConfig.java (new file)

@@ -0,0 +1,74 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 15:46:55
**/
public class ContextConfig {
public static final int DEFAULT_REQUEST_TIMEOUT_MS = 5000;
private Long clusterInfoId;
private String clusterName;
private String bootstrapServer;
private int requestTimeoutMs = DEFAULT_REQUEST_TIMEOUT_MS;
private Properties properties = new Properties();
public String getBootstrapServer() {
return bootstrapServer;
}
public void setBootstrapServer(String bootstrapServer) {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return properties.containsKey(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG) ?
Integer.parseInt(properties.getProperty(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)) : requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public Properties getProperties() {
return properties;
}
public Long getClusterInfoId() {
return clusterInfoId;
}
public void setClusterInfoId(Long clusterInfoId) {
this.clusterInfoId = clusterInfoId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
@Override public String toString() {
return "KafkaContextConfig{" +
"bootstrapServer='" + bootstrapServer + '\'' +
", requestTimeoutMs=" + requestTimeoutMs +
", properties=" + properties +
'}';
}
}

src/main/java/com/xuxd/kafka/console/config/ContextConfigHolder.java (new file)

@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.config;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 18:55:28
**/
public class ContextConfigHolder {
public static final ThreadLocal<ContextConfig> CONTEXT_CONFIG = new ThreadLocal<>();
}

src/main/java/com/xuxd/kafka/console/config/KafkaConfig.java

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.config;
+import java.util.Properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
@@ -15,23 +16,9 @@ public class KafkaConfig {
private String bootstrapServer;
-private int requestTimeoutMs;
-private String securityProtocol;
-private String saslMechanism;
-private String saslJaasConfig;
-private String adminUsername;
-private String adminPassword;
-private boolean adminCreate;
private String zookeeperAddr;
-private boolean enableAcl;
+private Properties properties;
public String getBootstrapServer() {
    return bootstrapServer;
@@ -41,62 +28,6 @@ public class KafkaConfig {
    this.bootstrapServer = bootstrapServer;
}
-public int getRequestTimeoutMs() {
-    return requestTimeoutMs;
-}
-public void setRequestTimeoutMs(int requestTimeoutMs) {
-    this.requestTimeoutMs = requestTimeoutMs;
-}
-public String getSecurityProtocol() {
-    return securityProtocol;
-}
-public void setSecurityProtocol(String securityProtocol) {
-    this.securityProtocol = securityProtocol;
-}
-public String getSaslMechanism() {
-    return saslMechanism;
-}
-public void setSaslMechanism(String saslMechanism) {
-    this.saslMechanism = saslMechanism;
-}
-public String getSaslJaasConfig() {
-    return saslJaasConfig;
-}
-public void setSaslJaasConfig(String saslJaasConfig) {
-    this.saslJaasConfig = saslJaasConfig;
-}
-public String getAdminUsername() {
-    return adminUsername;
-}
-public void setAdminUsername(String adminUsername) {
-    this.adminUsername = adminUsername;
-}
-public String getAdminPassword() {
-    return adminPassword;
-}
-public void setAdminPassword(String adminPassword) {
-    this.adminPassword = adminPassword;
-}
-public boolean isAdminCreate() {
-    return adminCreate;
-}
-public void setAdminCreate(boolean adminCreate) {
-    this.adminCreate = adminCreate;
-}
public String getZookeeperAddr() {
    return zookeeperAddr;
}
@@ -105,11 +36,11 @@ public class KafkaConfig {
    this.zookeeperAddr = zookeeperAddr;
}
-public boolean isEnableAcl() {
-    return enableAcl;
-}
-public void setEnableAcl(boolean enableAcl) {
-    this.enableAcl = enableAcl;
-}
+public Properties getProperties() {
+    return properties;
+}
+public void setProperties(Properties properties) {
+    this.properties = properties;
+}
}

src/main/java/com/xuxd/kafka/console/controller/AclUserController.java

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.AclUser;
+import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.service.AclService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
@@ -30,17 +31,20 @@ public class AclUserController {
}
@PostMapping
+@RequiredAuthorize
public Object addOrUpdateUser(@RequestBody AclUser user) {
    return aclService.addOrUpdateUser(user.getUsername(), user.getPassword());
}
@DeleteMapping
+@RequiredAuthorize
public Object deleteUser(@RequestBody AclUser user) {
    return aclService.deleteUser(user.getUsername());
}
@DeleteMapping("/auth")
+@RequiredAuthorize
public Object deleteUserAndAuth(@RequestBody AclUser user) {
    return aclService.deleteUserAndAuth(user.getUsername());
}

src/main/java/com/xuxd/kafka/console/controller/ClusterController.java

@@ -1,10 +1,9 @@
package com.xuxd.kafka.console.controller;
+import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.web.bind.annotation.GetMapping;
-import org.springframework.web.bind.annotation.RequestMapping;
-import org.springframework.web.bind.annotation.RestController;
+import org.springframework.web.bind.annotation.*;
/**
* kafka-console-ui.
@@ -23,4 +22,34 @@ public class ClusterController {
public Object getClusterInfo() {
    return clusterService.getClusterInfo();
}
+@GetMapping("/info")
+public Object getClusterInfoList() {
+    return clusterService.getClusterInfoList();
+}
+@PostMapping("/info")
+public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
+    return clusterService.addClusterInfo(dto.to());
+}
+@DeleteMapping("/info")
+public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
+    return clusterService.deleteClusterInfo(dto.getId());
+}
+@PutMapping("/info")
+public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {
+    return clusterService.updateClusterInfo(dto.to());
+}
+@GetMapping("/info/peek")
+public Object peekClusterInfo() {
+    return clusterService.peekClusterInfo();
+}
+@GetMapping("/info/api/version")
+public Object getBrokerApiVersionInfo() {
+    return clusterService.getBrokerApiVersionInfo();
+}
}

src/main/java/com/xuxd/kafka/console/controller/ConfigController.java

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
+import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.beans.dto.AlterConfigDTO;
import com.xuxd.kafka.console.beans.enums.AlterType;
import com.xuxd.kafka.console.config.KafkaConfig;
@@ -36,7 +37,7 @@ public class ConfigController {
    this.configService = configService;
}
-@GetMapping
+@GetMapping("/console")
public Object getConfig() {
    return ResponseData.create().data(configMap).success();
}
@@ -47,11 +48,13 @@ public class ConfigController {
}
@PostMapping("/topic")
+@RequiredAuthorize
public Object setTopicConfig(@RequestBody AlterConfigDTO dto) {
    return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@DeleteMapping("/topic")
+@RequiredAuthorize
public Object deleteTopicConfig(@RequestBody AlterConfigDTO dto) {
    return configService.alterTopicConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@@ -62,11 +65,13 @@ public class ConfigController {
}
@PostMapping("/broker")
+@RequiredAuthorize
public Object setBrokerConfig(@RequestBody AlterConfigDTO dto) {
    return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@DeleteMapping("/broker")
+@RequiredAuthorize
public Object deleteBrokerConfig(@RequestBody AlterConfigDTO dto) {
    return configService.alterBrokerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}
@@ -77,11 +82,13 @@ public class ConfigController {
}
@PostMapping("/broker/logger")
+@RequiredAuthorize
public Object setBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
    return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.SET);
}
@DeleteMapping("/broker/logger")
+@RequiredAuthorize
public Object deleteBrokerLoggerConfig(@RequestBody AlterConfigDTO dto) {
    return configService.alterBrokerLoggerConfig(dto.getEntity(), dto.to(), AlterType.DELETE);
}

src/main/java/com/xuxd/kafka/console/controller/ConsumerController.java

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
+import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.beans.dto.AddSubscriptionDTO;
import com.xuxd.kafka.console.beans.dto.QueryConsumerGroupDTO;
import com.xuxd.kafka.console.beans.dto.ResetOffsetDTO;
@@ -67,11 +68,13 @@ public class ConsumerController {
}
@PostMapping("/subscription")
+@RequiredAuthorize
public Object addSubscription(@RequestBody AddSubscriptionDTO subscriptionDTO) {
    return consumerService.addSubscription(subscriptionDTO.getGroupId(), subscriptionDTO.getTopic());
}
@PostMapping("/reset/offset")
+@RequiredAuthorize
public Object restOffset(@RequestBody ResetOffsetDTO offsetDTO) {
    ResponseData res = ResponseData.create().failed("unknown");
    switch (offsetDTO.getLevel()) {

src/main/java/com/xuxd/kafka/console/controller/DevOpsUserController.java (new file)

@@ -0,0 +1,53 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.beans.dto.user.*;
import com.xuxd.kafka.console.beans.vo.DevOpsUserVO;
import com.xuxd.kafka.console.beans.vo.LoginVO;
import com.xuxd.kafka.console.service.DevOpsUserService;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.*;
import java.util.List;
/**
* User management
* @author dongyinuo
*/
@RestController
@RequiredArgsConstructor
@RequestMapping("/devops/user")
public class DevOpsUserController {
private final DevOpsUserService devOpsUserService;
@PostMapping("add")
@RequiredAuthorize
public ResponseData<Boolean> add(@RequestBody AddUserDTO addUserDTO){
return devOpsUserService.add(addUserDTO);
}
@PostMapping("update")
@RequiredAuthorize
public ResponseData<Boolean> update(@RequestBody UpdateUserDTO updateUserDTO){
return devOpsUserService.update(updateUserDTO);
}
@DeleteMapping
@RequiredAuthorize
public ResponseData<Boolean> delete(@RequestParam Long id){
return devOpsUserService.delete(id);
}
@GetMapping("list")
@RequiredAuthorize
public ResponseData<List<DevOpsUserVO>> list(@ModelAttribute ListUserDTO listUserDTO){
return devOpsUserService.list(listUserDTO);
}
@PostMapping("login")
public ResponseData<LoginVO> login(@RequestBody LoginDTO loginDTO){
return devOpsUserService.login(loginDTO.getUsername(), loginDTO.getPassword());
}
}

src/main/java/com/xuxd/kafka/console/controller/OperationController.java

@@ -1,7 +1,9 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.TopicPartition;
+import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
+import com.xuxd.kafka.console.beans.dto.ProposedAssignmentDTO;
import com.xuxd.kafka.console.beans.dto.ReplicationDTO;
import com.xuxd.kafka.console.beans.dto.SyncDataDTO;
import com.xuxd.kafka.console.service.OperationService;
@@ -29,12 +31,14 @@ public class OperationController {
private OperationService operationService;
@PostMapping("/sync/consumer/offset")
+@RequiredAuthorize
public Object syncConsumerOffset(@RequestBody SyncDataDTO dto) {
    dto.getProperties().put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, dto.getAddress());
    return operationService.syncConsumerOffset(dto.getGroupId(), dto.getTopic(), dto.getProperties());
}
@PostMapping("/sync/min/offset/alignment")
+@RequiredAuthorize
public Object minOffsetAlignment(@RequestBody SyncDataDTO dto) {
    dto.getProperties().put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, dto.getAddress());
    return operationService.minOffsetAlignment(dto.getGroupId(), dto.getTopic(), dto.getProperties());
@@ -46,6 +50,7 @@ public class OperationController {
}
@DeleteMapping("/sync/alignment")
+@RequiredAuthorize
public Object deleteAlignment(@RequestParam Long id) {
    return operationService.deleteAlignmentById(id);
}
@@ -56,6 +61,7 @@ public class OperationController {
}
@PostMapping("/broker/throttle")
+@RequiredAuthorize
public Object configThrottle(@RequestBody BrokerThrottleDTO dto) {
    return operationService.configThrottle(dto.getBrokerList(), dto.getUnit().toKb(dto.getThrottle()));
}
@@ -74,4 +80,10 @@ public class OperationController {
public Object cancelReassignment(@RequestBody TopicPartition partition) {
    return operationService.cancelReassignment(new org.apache.kafka.common.TopicPartition(partition.getTopic(), partition.getPartition()));
}
+@PostMapping("/replication/reassignments/proposed")
+@RequiredAuthorize
+public Object proposedAssignments(@RequestBody ProposedAssignmentDTO dto) {
+    return operationService.proposedAssignments(dto.getTopic(), dto.getBrokers());
+}
}

src/main/java/com/xuxd/kafka/console/controller/TopicController.java

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
+import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.beans.dto.AddPartitionDTO;
import com.xuxd.kafka.console.beans.dto.NewTopicDTO;
import com.xuxd.kafka.console.beans.dto.TopicThrottleDTO;
@@ -43,6 +44,7 @@ public class TopicController {
}
@DeleteMapping
+@RequiredAuthorize
public Object deleteTopic(@RequestParam String topic) {
    return topicService.deleteTopic(topic);
}
@@ -53,11 +55,13 @@ public class TopicController {
}
@PostMapping("/new")
+@RequiredAuthorize
public Object createNewTopic(@RequestBody NewTopicDTO topicDTO) {
    return topicService.createTopic(topicDTO.toNewTopic());
}
@PostMapping("/partition/new")
+@RequiredAuthorize
public Object addPartition(@RequestBody AddPartitionDTO partitionDTO) {
    String topic = partitionDTO.getTopic().trim();
    int addNum = partitionDTO.getAddNum();
@@ -80,11 +84,13 @@ public class TopicController {
}
@PostMapping("/replica/assignment")
+@RequiredAuthorize
public Object updateReplicaAssignment(@RequestBody ReplicaAssignment assignment) {
    return topicService.updateReplicaAssignment(assignment);
}
@PostMapping("/replica/throttle")
+@RequiredAuthorize
public Object configThrottle(@RequestBody TopicThrottleDTO dto) {
    return topicService.configThrottle(dto.getTopic(), dto.getPartitions(), dto.getOperation());
}

src/main/java/com/xuxd/kafka/console/dao/ClusterInfoMapper.java (new file)

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:58:52
**/
public interface ClusterInfoMapper extends BaseMapper<ClusterInfoDO> {
}

src/main/java/com/xuxd/kafka/console/dao/DevOpsUserMapper.java (new file)

@@ -0,0 +1,7 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.DevOpsUserDO;
public interface DevOpsUserMapper extends BaseMapper<DevOpsUserDO> {
}

src/main/java/com/xuxd/kafka/console/interceptor/ContextSetFilter.java (new file)

@@ -0,0 +1,87 @@
package com.xuxd.kafka.console.interceptor;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-05 19:56:25
**/
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*","/user/*","/cluster/*","/config/*","/consumer/*","/message/*","/topic/*","/op/*"})
@Slf4j
public class ContextSetFilter implements Filter {
private Set<String> excludes = new HashSet<>();
{
excludes.add("/cluster/info/peek");
excludes.add("/cluster/info");
excludes.add("/config/console");
}
@Autowired
private ClusterInfoMapper clusterInfoMapper;
@Override public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
HttpServletRequest request = (HttpServletRequest) req;
String uri = request.getRequestURI();
if (!excludes.contains(uri)) {
String headerId = request.getHeader(Header.ID);
if (StringUtils.isBlank(headerId)) {
// ResponseData failed = ResponseData.create().failed("Cluster info is null.");
ResponseData failed = ResponseData.create().failed("没有集群信息,请先切换集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
} else {
ClusterInfoDO infoDO = clusterInfoMapper.selectById(Long.valueOf(headerId));
if (infoDO == null) {
ResponseData failed = ResponseData.create().failed("该集群找不到信息,请切换一个有效集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
}
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
}
}
chain.doFilter(req, response);
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
interface Header {
String ID = "X-Cluster-Info-Id";
String NAME = "X-Cluster-Info-Name";
}
}

src/main/java/com/xuxd/kafka/console/interceptor/GlobalExceptionHandler.java

@@ -1,7 +1,6 @@
package com.xuxd.kafka.console.interceptor;
-import com.xuxd.kafka.console.beans.ResponseData;
-import javax.servlet.http.HttpServletRequest;
+import com.xuxd.kafka.console.utils.ResponseUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
@@ -14,14 +13,13 @@ import org.springframework.web.bind.annotation.ResponseBody;
* @date 2021-10-19 14:32:18
**/
@Slf4j
-@ControllerAdvice(basePackages = "com.xuxd.kafka.console.controller")
+@ControllerAdvice(basePackages = "com.xuxd.kafka.console")
public class GlobalExceptionHandler {
@ExceptionHandler(value = Exception.class)
@ResponseBody
-public Object exceptionHandler(HttpServletRequest req, Exception ex) throws Exception {
+public Object exceptionHandler(Exception ex) {
    log.error("exception handle: ", ex);
-    return ResponseData.create().failed(ex.getMessage());
+    return ResponseUtil.error(ex.getMessage());
}
}

src/main/java/com/xuxd/kafka/console/interceptor/TokenInterceptor.java (new file)

@@ -0,0 +1,84 @@
package com.xuxd.kafka.console.interceptor;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.annotation.RequiredAuthorize;
import com.xuxd.kafka.console.beans.enums.Role;
import com.xuxd.kafka.console.beans.vo.DevOpsUserVO;
import com.xuxd.kafka.console.service.DevOpsUserService;
import com.xuxd.kafka.console.utils.ContextUtil;
import com.xuxd.kafka.console.utils.ConvertUtil;
import com.xuxd.kafka.console.utils.JwtUtils;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Component;
import org.springframework.web.method.HandlerMethod;
import org.springframework.web.servlet.AsyncHandlerInterceptor;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.PrintWriter;
import static com.xuxd.kafka.console.beans.ResponseData.TOKEN_ILLEGAL;
@Component
@Slf4j
@RequiredArgsConstructor
public class TokenInterceptor implements AsyncHandlerInterceptor {
private final static String TOKEN = "token";
private final DevOpsUserService devOpsUserService;
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
if (handler instanceof HandlerMethod){
String token = request.getHeader(TOKEN);
if (StringUtils.isBlank(token)){
log.info("token not exist");
write(response);
return false;
}
String username = JwtUtils.parse(token);
if (StringUtils.isBlank(username)){
log.info("{} is wrongful", token);
write(response);
return false;
}
ResponseData<DevOpsUserVO> userVORsp = devOpsUserService.detail(username);
if (userVORsp == null || userVORsp.getData() == null){
log.info("{} not exist", username);
write(response);
return false;
}
ContextUtil.set(ContextUtil.USERNAME, username);
HandlerMethod method = (HandlerMethod)handler;
RequiredAuthorize annotation = method.getMethodAnnotation(RequiredAuthorize.class);
if (annotation != null){
DevOpsUserVO userVO = userVORsp.getData();
if (!userVO.getRole().equals(Role.manager)){
log.info("{},{} no permission", username, request.getRequestURI());
write(response);
return false;
}
}
}
return true;
}
private void write(HttpServletResponse response){
PrintWriter writer = null;
try {
writer = response.getWriter();
writer.write(ConvertUtil.toJsonString(ResponseData.create().failed(TOKEN_ILLEGAL)));
} catch (Exception ignored){
} finally {
if (writer != null){
writer.flush();
writer.close();
}
}
}
}

src/main/java/com/xuxd/kafka/console/interceptor/WebMvcConfig.java (new file)

@@ -0,0 +1,20 @@
package com.xuxd.kafka.console.interceptor;
import lombok.RequiredArgsConstructor;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
@Configuration
@RequiredArgsConstructor
public class WebMvcConfig implements WebMvcConfigurer {
private final TokenInterceptor tokenInterceptor;
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(tokenInterceptor)
.addPathPatterns("/**")
.excludePathPatterns("/devops/user/login");
}
}

View File

@@ -1,9 +1,22 @@
 package com.xuxd.kafka.console.schedule;

+import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
+import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
+import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
+import com.xuxd.kafka.console.config.ContextConfig;
+import com.xuxd.kafka.console.config.ContextConfigHolder;
+import com.xuxd.kafka.console.dao.ClusterInfoMapper;
 import com.xuxd.kafka.console.dao.KafkaUserMapper;
+import com.xuxd.kafka.console.utils.ConvertUtil;
+import com.xuxd.kafka.console.utils.SaslUtil;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.Set;
 import kafka.console.KafkaConfigConsole;
 import lombok.extern.slf4j.Slf4j;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.springframework.beans.factory.ObjectProvider;
 import org.springframework.scheduling.annotation.Scheduled;
 import org.springframework.stereotype.Component;

@@ -21,25 +34,58 @@ public class KafkaAclSchedule {

     private final KafkaConfigConsole configConsole;
+    private final ClusterInfoMapper clusterInfoMapper;

-    public KafkaAclSchedule(KafkaUserMapper userMapper, KafkaConfigConsole configConsole) {
-        this.userMapper = userMapper;
-        this.configConsole = configConsole;
+    public KafkaAclSchedule(ObjectProvider<KafkaUserMapper> userMapper,
+        ObjectProvider<KafkaConfigConsole> configConsole, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
+        this.userMapper = userMapper.getIfAvailable();
+        this.configConsole = configConsole.getIfAvailable();
+        this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
     }

     @Scheduled(cron = "${cron.clear-dirty-user}")
     public void clearDirtyKafkaUser() {
-        log.info("Start clear dirty data for kafka user from database.");
-        Set<String> userSet = configConsole.getUserList(null);
-        userMapper.selectList(null).forEach(u -> {
-            if (!userSet.contains(u.getUsername())) {
-                log.info("clear user: {} from database.", u.getUsername());
-                try {
-                    userMapper.deleteById(u.getId());
-                } catch (Exception e) {
-                    log.error("userMapper.deleteById error, user: " + u, e);
-                }
-            }
-        });
-        log.info("Clear end.");
+        try {
+            log.info("Start clear dirty data for kafka user from database.");
+            List<ClusterInfoDO> clusterInfoDOS = clusterInfoMapper.selectList(null);
+            List<Long> clusterInfoIds = new ArrayList<>();
+            for (ClusterInfoDO infoDO : clusterInfoDOS) {
+                ContextConfig config = new ContextConfig();
+                config.setClusterInfoId(infoDO.getId());
+                config.setClusterName(infoDO.getClusterName());
+                config.setBootstrapServer(infoDO.getAddress());
+                if (StringUtils.isNotBlank(infoDO.getProperties())) {
+                    config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
+                }
+                ContextConfigHolder.CONTEXT_CONFIG.set(config);
+                if (SaslUtil.isEnableSasl() && SaslUtil.isEnableScram()) {
+                    log.info("Start clear cluster: {}", infoDO.getClusterName());
+                    Set<String> userSet = configConsole.getUserList(null);
+                    QueryWrapper<KafkaUserDO> queryWrapper = new QueryWrapper<>();
+                    queryWrapper.eq("cluster_info_id", infoDO.getId());
+                    userMapper.selectList(queryWrapper).forEach(u -> {
+                        if (!userSet.contains(u.getUsername())) {
+                            log.info("clear user: {} from database.", u.getUsername());
+                            try {
+                                userMapper.deleteById(u.getId());
+                            } catch (Exception e) {
+                                log.error("userMapper.deleteById error, user: " + u, e);
+                            }
+                        }
+                    });
+                    clusterInfoIds.add(infoDO.getId());
+                }
+            }
+            if (CollectionUtils.isNotEmpty(clusterInfoIds)) {
+                log.info("Clear the cluster id {}, which not found.", clusterInfoIds);
+                QueryWrapper<KafkaUserDO> wrapper = new QueryWrapper<>();
+                wrapper.notIn("cluster_info_id", clusterInfoIds);
+                userMapper.delete(wrapper);
+            }
+            log.info("Clear end.");
+        } finally {
+            ContextConfigHolder.CONTEXT_CONFIG.remove();
+        }
     }
 }
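
The rewritten schedule iterates over clusters by swapping a per-thread ContextConfig in and out; the try/finally above is what keeps the ThreadLocal from leaking across runs on the scheduler thread. The same pattern reduced to its core, assuming ContextConfigHolder.CONTEXT_CONFIG is a ThreadLocal<ContextConfig> as the set/get/remove calls above imply:

```java
// Minimal sketch of the per-cluster context pattern; the values are hypothetical.
void runAgainstCluster() {
    ContextConfig config = new ContextConfig();
    config.setClusterInfoId(1L);                 // hypothetical cluster id
    config.setBootstrapServer("localhost:9092"); // hypothetical address
    ContextConfigHolder.CONTEXT_CONFIG.set(config);
    try {
        // console and mapper calls made here resolve against this cluster
    } finally {
        ContextConfigHolder.CONTEXT_CONFIG.remove(); // mandatory on reused scheduler threads
    }
}
```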


@@ -0,0 +1,10 @@
package com.xuxd.kafka.console.service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:43
**/
public interface ClusterInfoService {
}


@@ -1,6 +1,7 @@
 package com.xuxd.kafka.console.service;

 import com.xuxd.kafka.console.beans.ResponseData;
+import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;

 /**
  * kafka-console-ui.
@@ -10,4 +11,16 @@ import com.xuxd.kafka.console.beans.ResponseData;
  **/
 public interface ClusterService {

     ResponseData getClusterInfo();
+
+    ResponseData getClusterInfoList();
+
+    ResponseData addClusterInfo(ClusterInfoDO infoDO);
+
+    ResponseData deleteClusterInfo(Long id);
+
+    ResponseData updateClusterInfo(ClusterInfoDO infoDO);
+
+    ResponseData peekClusterInfo();
+
+    ResponseData getBrokerApiVersionInfo();
 }


@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dto.user.AddUserDTO;
import com.xuxd.kafka.console.beans.dto.user.ListUserDTO;
import com.xuxd.kafka.console.beans.dto.user.UpdateUserDTO;
import com.xuxd.kafka.console.beans.vo.DevOpsUserVO;
import com.xuxd.kafka.console.beans.vo.LoginVO;
import java.util.List;
public interface DevOpsUserService {
ResponseData<Boolean> add(AddUserDTO addUserDTO);
ResponseData<Boolean> update(UpdateUserDTO updateUserDTO);
ResponseData<Boolean> delete(Long id);
ResponseData<List<DevOpsUserVO>> list(ListUserDTO listUserDTO);
ResponseData<DevOpsUserVO> detail(String username);
ResponseData<LoginVO> login(String username, String password);
}


@@ -30,4 +30,6 @@ public interface OperationService {
     ResponseData currentReassignments();

     ResponseData cancelReassignment(TopicPartition partition);
+
+    ResponseData proposedAssignments(String topic, List<Integer> brokerList);
 }


@@ -6,29 +6,37 @@ import com.xuxd.kafka.console.beans.CounterMap;
 import com.xuxd.kafka.console.beans.ResponseData;
 import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
 import com.xuxd.kafka.console.beans.vo.KafkaUserDetailVO;
-import com.xuxd.kafka.console.config.KafkaConfig;
+import com.xuxd.kafka.console.config.ContextConfigHolder;
 import com.xuxd.kafka.console.dao.KafkaUserMapper;
 import com.xuxd.kafka.console.service.AclService;
+import com.xuxd.kafka.console.utils.SaslUtil;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Properties;
 import java.util.Set;
 import java.util.stream.Collectors;
 import kafka.console.KafkaAclConsole;
 import kafka.console.KafkaConfigConsole;
 import lombok.extern.slf4j.Slf4j;
 import org.apache.commons.lang3.StringUtils;
+import org.apache.kafka.clients.CommonClientConfigs;
+import org.apache.kafka.clients.admin.ScramMechanism;
 import org.apache.kafka.clients.admin.UserScramCredentialsDescription;
 import org.apache.kafka.common.acl.AclBinding;
 import org.apache.kafka.common.acl.AclOperation;
+import org.apache.kafka.common.config.SaslConfigs;
+import org.apache.kafka.common.security.auth.SecurityProtocol;
 import org.springframework.beans.factory.ObjectProvider;
-import org.springframework.beans.factory.SmartInitializingSingleton;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;
 import scala.Tuple2;
+
+import static com.xuxd.kafka.console.utils.SaslUtil.isEnableSasl;
+import static com.xuxd.kafka.console.utils.SaslUtil.isEnableScram;

 /**
  * kafka-console-ui.
  *
@@ -37,7 +45,7 @@ import scala.Tuple2;
  **/
 @Slf4j
 @Service
-public class AclServiceImpl implements AclService, SmartInitializingSingleton {
+public class AclServiceImpl implements AclService {

     @Autowired
     private KafkaConfigConsole configConsole;
@@ -45,9 +53,6 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
     @Autowired
     private KafkaAclConsole aclConsole;

-    @Autowired
-    private KafkaConfig kafkaConfig;
-
     private final KafkaUserMapper kafkaUserMapper;

     public AclServiceImpl(ObjectProvider<KafkaUserMapper> kafkaUserMapper) {
@@ -64,15 +69,23 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
     }

     @Override public ResponseData addOrUpdateUser(String name, String pass) {
+        if (!isEnableSasl()) {
+            return ResponseData.create().failed("Only support SASL protocol.");
+        }
+        if (!isEnableScram()) {
+            return ResponseData.create().failed("Only support SASL_SCRAM.");
+        }
         log.info("add or update user, username: {}, password: {}", name, pass);
-        if (!configConsole.addOrUpdateUser(name, pass)) {
+        Tuple2<Object, String> tuple2 = configConsole.addOrUpdateUser(name, pass);
+        if (!(boolean) tuple2._1()) {
             log.error("add user to kafka failed.");
-            return ResponseData.create().failed("add user to kafka failed");
+            return ResponseData.create().failed("add user to kafka failed: " + tuple2._2());
         }
         // save user info to database.
         KafkaUserDO userDO = new KafkaUserDO();
         userDO.setUsername(name);
         userDO.setPassword(pass);
+        userDO.setClusterInfoId(ContextConfigHolder.CONTEXT_CONFIG.get().getClusterInfoId());
         try {
             Map<String, Object> map = new HashMap<>();
             map.put("username", name);
@@ -86,12 +99,24 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
     }

     @Override public ResponseData deleteUser(String name) {
+        if (!isEnableSasl()) {
+            return ResponseData.create().failed("Only support SASL protocol.");
+        }
+        if (!isEnableScram()) {
+            return ResponseData.create().failed("Only support SASL_SCRAM.");
+        }
         log.info("delete user: {}", name);
         Tuple2<Object, String> tuple2 = configConsole.deleteUser(name);
         return (boolean) tuple2._1() ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
     }

     @Override public ResponseData deleteUserAndAuth(String name) {
+        if (!isEnableSasl()) {
+            return ResponseData.create().failed("Only support SASL protocol.");
+        }
+        if (!isEnableScram()) {
+            return ResponseData.create().failed("Only support SASL_SCRAM.");
+        }
         log.info("delete user and authority: {}", name);
         AclEntry entry = new AclEntry();
         entry.setPrincipal(name);
@@ -120,7 +145,8 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
         Map<String, Object> resultMap = new HashMap<>();
         entryMap.forEach((k, v) -> {
             Map<String, List<AclEntry>> map = v.stream().collect(Collectors.groupingBy(e -> e.getResourceType() + "#" + e.getName()));
-            if (k.equals(kafkaConfig.getAdminUsername())) {
+            String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
+            if (k.equals(username)) {
                 Map<String, Object> map2 = new HashMap<>(map);
                 Map<String, Object> userMap = new HashMap<>();
                 userMap.put("role", "admin");
@@ -133,7 +159,8 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
         detailList.values().forEach(u -> {
             if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
-                if (!u.name().equals(kafkaConfig.getAdminUsername())) {
+                String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
+                if (!u.name().equals(username)) {
                     resultMap.put(u.name(), Collections.emptyMap());
                 } else {
                     Map<String, Object> map2 = new HashMap<>();
@@ -194,27 +221,29 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
         }
         Map<String, Object> param = new HashMap<>();
         param.put("username", username);
+        param.put("cluster_info_id", ContextConfigHolder.CONTEXT_CONFIG.get().getClusterInfoId());
         List<KafkaUserDO> dos = kafkaUserMapper.selectByMap(param);
         if (dos.isEmpty()) {
             vo.setConsistencyDescription("Password is null.");
         } else {
             vo.setPassword(dos.stream().findFirst().get().getPassword());
             // check for consistency.
-            boolean consistent = configConsole.isPassConsistent(username, vo.getPassword());
-            vo.setConsistencyDescription(consistent ? "Consistent" : "Password is not consistent.");
+            // boolean consistent = configConsole.isPassConsistent(username, vo.getPassword());
+            // vo.setConsistencyDescription(consistent ? "Consistent" : "Password is not consistent.");
+            vo.setConsistencyDescription("Can not check password consistent.");
         }

         return ResponseData.create().data(vo).success();
     }

-    @Override public void afterSingletonsInstantiated() {
-        if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
-            log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
-            boolean done = configConsole.addOrUpdateUserWithZK(kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
-            if (!done) {
-                log.error("Create admin failed.");
-                throw new IllegalStateException();
-            }
-        }
-    }
+    // @Override public void afterSingletonsInstantiated() {
+    //     if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
+    //         log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
+    //         boolean done = configConsole.addOrUpdateUserWithZK(kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
+    //         if (!done) {
+    //             log.error("Create admin failed.");
+    //             throw new IllegalStateException();
+    //         }
+    //     }
+    // }
 }
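
A recurring idiom in this service: the Scala console layer reports results as scala.Tuple2<Object, String>, a success flag plus an error message, and the Java side unboxes the flag with a cast. A hypothetical helper, not part of this commit, showing the shape of that handling:

```java
// Sketch only: converts a (success, errorMessage) tuple from the Scala console layer
// into the project's ResponseData. This helper itself does not exist in the codebase.
static ResponseData toResponse(scala.Tuple2<Object, String> result) {
    boolean ok = (boolean) result._1(); // _1 is typed Object on the Java side, hence the cast
    return ok ? ResponseData.create().success() : ResponseData.create().failed(result._2());
}
```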


@@ -0,0 +1,14 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.service.ClusterInfoService;
import org.springframework.stereotype.Service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:59
**/
@Service
public class ClusterInfoServiceImpl implements ClusterInfoService {
}


@@ -1,9 +1,27 @@
 package com.xuxd.kafka.console.service.impl;

+import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
+import com.xuxd.kafka.console.beans.ClusterInfo;
 import com.xuxd.kafka.console.beans.ResponseData;
+import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
+import com.xuxd.kafka.console.beans.vo.BrokerApiVersionVO;
+import com.xuxd.kafka.console.beans.vo.ClusterInfoVO;
+import com.xuxd.kafka.console.dao.ClusterInfoMapper;
 import com.xuxd.kafka.console.service.ClusterService;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
 import kafka.console.ClusterConsole;
-import org.springframework.beans.factory.annotation.Autowired;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.kafka.clients.NodeApiVersions;
+import org.apache.kafka.common.Node;
+import org.springframework.beans.factory.ObjectProvider;
 import org.springframework.stereotype.Service;

 /**
@@ -15,10 +33,78 @@
 @Service
 public class ClusterServiceImpl implements ClusterService {

-    @Autowired
-    private ClusterConsole clusterConsole;
+    private final ClusterConsole clusterConsole;
+
+    private final ClusterInfoMapper clusterInfoMapper;
+
+    public ClusterServiceImpl(ObjectProvider<ClusterConsole> clusterConsole,
+        ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
+        this.clusterConsole = clusterConsole.getIfAvailable();
+        this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
+    }

     @Override public ResponseData getClusterInfo() {
-        return ResponseData.create().data(clusterConsole.clusterInfo()).success();
+        ClusterInfo clusterInfo = clusterConsole.clusterInfo();
+        clusterInfo.setNodes(new TreeSet<>(clusterInfo.getNodes()));
+        return ResponseData.create().data(clusterInfo).success();
     }
+
+    @Override public ResponseData getClusterInfoList() {
+        return ResponseData.create().data(clusterInfoMapper.selectList(null)
+            .stream().map(ClusterInfoVO::from).collect(Collectors.toList())).success();
+    }
+
+    @Override public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
+        QueryWrapper<ClusterInfoDO> queryWrapper = new QueryWrapper<>();
+        queryWrapper.eq("cluster_name", infoDO.getClusterName());
+        if (clusterInfoMapper.selectCount(queryWrapper) > 0) {
+            return ResponseData.create().failed("cluster name exist.");
+        }
+        clusterInfoMapper.insert(infoDO);
+        return ResponseData.create().success();
+    }
+
+    @Override public ResponseData deleteClusterInfo(Long id) {
+        clusterInfoMapper.deleteById(id);
+        return ResponseData.create().success();
+    }
+
+    @Override public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
+        clusterInfoMapper.updateById(infoDO);
+        return ResponseData.create().success();
+    }
+
+    @Override public ResponseData peekClusterInfo() {
+        List<ClusterInfoDO> dos = clusterInfoMapper.selectList(null);
+        if (CollectionUtils.isEmpty(dos)) {
+            return ResponseData.create().failed("No Cluster Info.");
+        }
+        return ResponseData.create().data(dos.stream().findFirst().map(ClusterInfoVO::from)).success();
+    }
+
+    @Override public ResponseData getBrokerApiVersionInfo() {
+        HashMap<Node, NodeApiVersions> map = clusterConsole.listBrokerVersionInfo();
+        List<BrokerApiVersionVO> list = new ArrayList<>(map.size());
+        map.forEach(((node, versions) -> {
+            BrokerApiVersionVO vo = new BrokerApiVersionVO();
+            vo.setBrokerId(node.id());
+            vo.setHost(node.host() + ":" + node.port());
+            vo.setSupportNums(versions.allSupportedApiVersions().size());
+            String versionInfo = versions.toString(true);
+            int from = 0;
+            int count = 0;
+            int index = -1;
+            while ((index = versionInfo.indexOf("UNSUPPORTED", from)) >= 0 && from < versionInfo.length()) {
+                count++;
+                from = index + 1;
+            }
+            vo.setUnSupportNums(count);
+            versionInfo = versionInfo.substring(1, versionInfo.length() - 2);
+            vo.setVersionInfo(Arrays.asList(StringUtils.split(versionInfo, ",")));
+            list.add(vo);
+        }));
+        Collections.sort(list, Comparator.comparingInt(BrokerApiVersionVO::getBrokerId));
+        return ResponseData.create().data(list).success();
+    }
 }


@@ -0,0 +1,99 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.baomidou.mybatisplus.core.conditions.update.UpdateWrapper;
import com.xuxd.kafka.console.beans.KafkaConsoleException;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.DevOpsUserDO;
import com.xuxd.kafka.console.beans.dto.user.AddUserDTO;
import com.xuxd.kafka.console.beans.dto.user.ListUserDTO;
import com.xuxd.kafka.console.beans.dto.user.UpdateUserDTO;
import com.xuxd.kafka.console.beans.vo.DevOpsUserVO;
import com.xuxd.kafka.console.beans.vo.LoginVO;
import com.xuxd.kafka.console.boot.InitSuperDevOpsUser;
import com.xuxd.kafka.console.dao.DevOpsUserMapper;
import com.xuxd.kafka.console.service.DevOpsUserService;
import com.xuxd.kafka.console.utils.ConvertUtil;
import com.xuxd.kafka.console.utils.JwtUtils;
import com.xuxd.kafka.console.utils.Md5Utils;
import com.xuxd.kafka.console.utils.ResponseUtil;
import lombok.RequiredArgsConstructor;
import org.apache.commons.lang3.StringUtils;
import org.springframework.stereotype.Service;
import java.util.List;
@Service
@RequiredArgsConstructor
public class DevOpsServiceImpl implements DevOpsUserService {
private final DevOpsUserMapper devOpsUserMapper;
@Override
public ResponseData<Boolean> add(AddUserDTO addUserDTO) {
QueryWrapper<DevOpsUserDO> queryWrapper = new QueryWrapper<DevOpsUserDO>();
queryWrapper.eq("username", addUserDTO.getUsername());
if (devOpsUserMapper.selectOne(queryWrapper) != null){
throw new KafkaConsoleException("账号已存在");
}
addUserDTO.setPassword(Md5Utils.MD5(addUserDTO.getPassword()));
int ret = devOpsUserMapper.insert(ConvertUtil.copy(addUserDTO, DevOpsUserDO.class));
return ResponseUtil.success(ret > 0);
}
@Override
public ResponseData<Boolean> update(UpdateUserDTO updateUserDTO) {
UpdateWrapper<DevOpsUserDO> updateWrapper = new UpdateWrapper<>();
if (updateUserDTO.getRole() != null){
updateWrapper.set("role", updateUserDTO.getRole());
}
if (StringUtils.isNotBlank(updateUserDTO.getPassword())){
updateWrapper.set("password", Md5Utils.MD5(updateUserDTO.getPassword()));
}
updateWrapper.eq("username", updateUserDTO.getUsername());
int ret = devOpsUserMapper.update(null, updateWrapper);
return ResponseUtil.success(ret > 0);
}
@Override
public ResponseData<Boolean> delete(Long id) {
int ret = devOpsUserMapper.deleteById(id);
return ResponseUtil.success(ret > 0);
}
@Override
public ResponseData<List<DevOpsUserVO>> list(ListUserDTO listUserDTO) {
QueryWrapper<DevOpsUserDO> queryWrapper = new QueryWrapper<DevOpsUserDO>();
if (listUserDTO.getId() != null){
queryWrapper.eq("id", listUserDTO.getId());
}
if (StringUtils.isNotBlank(listUserDTO.getUsername())){
queryWrapper.eq("username", listUserDTO.getUsername());
}
queryWrapper.ne("username", InitSuperDevOpsUser.SUPER_USERNAME);
List<DevOpsUserDO> userDOS = devOpsUserMapper.selectList(queryWrapper);
return ResponseUtil.success(ConvertUtil.copyList(userDOS, DevOpsUserVO.class));
}
@Override
public ResponseData<DevOpsUserVO> detail(String username) {
QueryWrapper<DevOpsUserDO> queryWrapper = new QueryWrapper<DevOpsUserDO>();
queryWrapper.eq("username", username);
DevOpsUserDO userDO = devOpsUserMapper.selectOne(queryWrapper);
return ResponseUtil.success(ConvertUtil.copy(userDO, DevOpsUserVO.class));
}
@Override
public ResponseData<LoginVO> login(String username, String password) {
QueryWrapper<DevOpsUserDO> queryWrapper = new QueryWrapper<DevOpsUserDO>();
queryWrapper.eq("username", username);
queryWrapper.eq("password", Md5Utils.MD5(password));
DevOpsUserDO userDO = devOpsUserMapper.selectOne(queryWrapper);
if (userDO == null){
throw new KafkaConsoleException("用户名或密码错误");
}
LoginVO loginVO = LoginVO.builder().role(userDO.getRole()).token(JwtUtils.sign(username)).build();
return ResponseUtil.success(loginVO);
}
}


@@ -1,8 +1,10 @@
 package com.xuxd.kafka.console.service.impl;

+import com.xuxd.kafka.console.beans.MessageFilter;
 import com.xuxd.kafka.console.beans.QueryMessage;
 import com.xuxd.kafka.console.beans.ResponseData;
 import com.xuxd.kafka.console.beans.SendMessage;
+import com.xuxd.kafka.console.beans.enums.FilterType;
 import com.xuxd.kafka.console.beans.vo.ConsumerRecordVO;
 import com.xuxd.kafka.console.beans.vo.MessageDetailVO;
 import com.xuxd.kafka.console.service.ConsumerService;
@@ -32,6 +34,7 @@ import org.apache.kafka.common.serialization.Deserializer;
 import org.apache.kafka.common.serialization.DoubleDeserializer;
 import org.apache.kafka.common.serialization.FloatDeserializer;
 import org.apache.kafka.common.serialization.IntegerDeserializer;
+import org.apache.kafka.common.serialization.LongDeserializer;
 import org.apache.kafka.common.serialization.StringDeserializer;
 import org.springframework.beans.BeansException;
 import org.springframework.beans.factory.annotation.Autowired;
@@ -70,23 +73,83 @@ public class MessageServiceImpl implements MessageService, ApplicationContextAwa
         deserializerDict.put("Float", new FloatDeserializer());
         deserializerDict.put("Double", new DoubleDeserializer());
         deserializerDict.put("Byte", new BytesDeserializer());
+        deserializerDict.put("Long", new LongDeserializer());
     }

     public static String defaultDeserializer = "String";

     @Override public ResponseData searchByTime(QueryMessage queryMessage) {
-        int maxNums = 10000;
+        int maxNums = 5000;
+        Object searchContent = null;
+        String headerKey = null;
+        String headerValue = null;
+        MessageFilter filter = new MessageFilter();
+        switch (queryMessage.getFilter()) {
+            case BODY:
+                if (StringUtils.isBlank(queryMessage.getValue())) {
+                    queryMessage.setFilter(FilterType.NONE);
+                } else {
+                    if (StringUtils.isBlank(queryMessage.getValueDeserializer())) {
+                        queryMessage.setValueDeserializer(defaultDeserializer);
+                    }
+                    switch (queryMessage.getValueDeserializer()) {
+                        case "String":
+                            searchContent = String.valueOf(queryMessage.getValue());
+                            filter.setContainsValue(true);
+                            break;
+                        case "Integer":
+                            searchContent = Integer.valueOf(queryMessage.getValue());
+                            break;
+                        case "Float":
+                            searchContent = Float.valueOf(queryMessage.getValue());
+                            break;
+                        case "Double":
+                            searchContent = Double.valueOf(queryMessage.getValue());
+                            break;
+                        case "Long":
+                            searchContent = Long.valueOf(queryMessage.getValue());
+                            break;
+                        default:
+                            throw new IllegalArgumentException("Message body type not support.");
+                    }
+                }
+                break;
+            case HEADER:
+                headerKey = queryMessage.getHeaderKey();
+                if (StringUtils.isBlank(headerKey)) {
+                    queryMessage.setFilter(FilterType.NONE);
+                } else {
+                    if (StringUtils.isNotBlank(queryMessage.getHeaderValue())) {
+                        headerValue = String.valueOf(queryMessage.getHeaderValue());
+                    }
+                }
+                break;
+            default:
+                break;
+        }
+        FilterType filterType = queryMessage.getFilter();
+        Deserializer deserializer = deserializerDict.get(queryMessage.getValueDeserializer());
+        filter.setFilterType(filterType);
+        filter.setSearchContent(searchContent);
+        filter.setDeserializer(deserializer);
+        filter.setHeaderKey(headerKey);
+        filter.setHeaderValue(headerValue);
+
         Set<TopicPartition> partitions = getPartitions(queryMessage);
         long startTime = System.currentTimeMillis();
-        List<ConsumerRecord<byte[], byte[]>> records = messageConsole.searchBy(partitions, queryMessage.getStartTime(), queryMessage.getEndTime(), maxNums);
+        Tuple2<List<ConsumerRecord<byte[], byte[]>>, Object> tuple2 = messageConsole.searchBy(partitions, queryMessage.getStartTime(), queryMessage.getEndTime(), maxNums, filter);
+        List<ConsumerRecord<byte[], byte[]>> records = tuple2._1();
         log.info("search message by time, cost time: {}", (System.currentTimeMillis() - startTime));
         List<ConsumerRecordVO> vos = records.stream().filter(record -> record.timestamp() <= queryMessage.getEndTime())
             .map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());

         Map<String, Object> res = new HashMap<>();
+        vos = vos.subList(0, Math.min(maxNums, vos.size()));
         res.put("maxNum", maxNums);
         res.put("realNum", vos.size());
-        res.put("data", vos.subList(0, Math.min(maxNums, vos.size())));
+        res.put("searchNum", tuple2._2());
+        res.put("data", vos);
         return ResponseData.create().data(res).success();
     }


@@ -1,6 +1,7 @@
 package com.xuxd.kafka.console.service.impl;

 import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
+import com.google.common.collect.Lists;
 import com.google.gson.Gson;
 import com.google.gson.JsonObject;
 import com.xuxd.kafka.console.beans.ResponseData;
@@ -10,6 +11,7 @@ import com.xuxd.kafka.console.beans.vo.OffsetAlignmentVO;
 import com.xuxd.kafka.console.dao.MinOffsetAlignmentMapper;
 import com.xuxd.kafka.console.service.OperationService;
 import com.xuxd.kafka.console.utils.GsonUtil;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -19,6 +21,7 @@ import java.util.Properties;
 import java.util.Set;
 import java.util.stream.Collectors;
 import kafka.console.OperationConsole;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.kafka.clients.admin.PartitionReassignment;
 import org.apache.kafka.common.TopicPartition;
 import org.springframework.beans.factory.ObjectProvider;
@@ -162,4 +165,21 @@ public class OperationServiceImpl implements OperationService {
         }
         return ResponseData.create().success();
     }
+
+    @Override public ResponseData proposedAssignments(String topic, List<Integer> brokerList) {
+        Map<String, Object> params = new HashMap<>();
+        params.put("version", 1);
+        Map<String, String> topicMap = new HashMap<>(1, 1.0f);
+        topicMap.put("topic", topic);
+        params.put("topics", Lists.newArrayList(topicMap));
+        List<String> list = brokerList.stream().map(String::valueOf).collect(Collectors.toList());
+        Map<TopicPartition, List<Object>> assignments = operationConsole.proposedAssignments(gson.toJson(params), StringUtils.join(list, ","));
+        List<CurrentReassignmentVO> res = new ArrayList<>(assignments.size());
+        assignments.forEach((tp, replicas) -> {
+            CurrentReassignmentVO vo = new CurrentReassignmentVO(tp.topic(), tp.partition(),
+                replicas.stream().map(x -> (Integer) x).collect(Collectors.toList()), null, null);
+            res.add(vo);
+        });
+        return ResponseData.create().data(res).success();
+    }
 }
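
proposedAssignments assembles the same topics-to-move JSON that kafka-reassign-partitions.sh accepts, then lets the console layer compute a candidate plan for the given broker list. For a hypothetical topic the generated document looks like this:

```java
// Reproduces the JSON the method above assembles; the topic name is hypothetical.
// LinkedHashMap is used only to keep the key order stable for the printout.
Map<String, Object> params = new LinkedHashMap<>();
params.put("version", 1);
params.put("topics", Lists.newArrayList(Collections.singletonMap("topic", "test-topic")));
System.out.println(new Gson().toJson(params));
// -> {"version":1,"topics":[{"topic":"test-topic"}]}
```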


@@ -0,0 +1,23 @@
package com.xuxd.kafka.console.utils;
import java.util.HashMap;
import java.util.Map;
public class ContextUtil {
public static final String USERNAME = "username" ;
private static ThreadLocal<Map<String, Object>> context = ThreadLocal.withInitial(() -> new HashMap<>());
public static void set(String key, Object value){
context.get().put(key, value);
}
public static String get(String key){
return (String) context.get().get(key);
}
public static void clear(){
context.remove();
}
}
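
ContextUtil parks the username in a ThreadLocal and TokenInterceptor populates it on every authenticated request, but nothing in this commit calls clear() per request; on pooled servlet threads the value survives into later requests. One way to release it, a sketch that is not part of the diff, is the interceptor's afterCompletion hook:

```java
// Hypothetical addition to TokenInterceptor: release the per-request context once
// the request is fully handled, so pooled threads do not keep stale usernames.
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
    Object handler, Exception ex) throws Exception {
    ContextUtil.clear();
}
```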


@@ -1,12 +1,16 @@
 package com.xuxd.kafka.console.utils;

 import com.google.common.base.Preconditions;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.Map;
 import lombok.extern.slf4j.Slf4j;
+import org.springframework.cglib.beans.BeanCopier;
+import org.springframework.objenesis.ObjenesisStd;
 import org.springframework.util.ClassUtils;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;

 /**
  * kafka-console-ui.
  *
@@ -16,6 +20,47 @@ import org.springframework.util.ClassUtils;
 @Slf4j
 public class ConvertUtil {

+    private static ThreadLocal<ObjenesisStd> objenesisStdThreadLocal = ThreadLocal.withInitial(ObjenesisStd::new);
+
+    private static ConcurrentHashMap<Class<?>, ConcurrentHashMap<Class<?>, BeanCopier>> cache = new ConcurrentHashMap<>();
+
+    public static <T> T copy(Object source, Class<T> target) {
+        return copy(source, objenesisStdThreadLocal.get().newInstance(target));
+    }
+
+    public static <T> T copy(Object source, T target) {
+        if (null == source) {
+            return null;
+        }
+        BeanCopier beanCopier = getCacheBeanCopier(source.getClass(), target.getClass());
+        beanCopier.copy(source, target, null);
+        return target;
+    }
+
+    public static <T> List<T> copyList(List<?> sources, Class<T> target) {
+        if (sources.isEmpty()) {
+            return Collections.emptyList();
+        }
+        ArrayList<T> list = new ArrayList<>(sources.size());
+        ObjenesisStd objenesisStd = objenesisStdThreadLocal.get();
+        for (Object source : sources) {
+            if (source == null) {
+                break;
+            }
+            T newInstance = objenesisStd.newInstance(target);
+            BeanCopier beanCopier = getCacheBeanCopier(source.getClass(), target);
+            beanCopier.copy(source, newInstance, null);
+            list.add(newInstance);
+        }
+        return list;
+    }
+
+    private static <S, T> BeanCopier getCacheBeanCopier(Class<S> source, Class<T> target) {
+        ConcurrentHashMap<Class<?>, BeanCopier> copierConcurrentHashMap =
+            cache.computeIfAbsent(source, aClass -> new ConcurrentHashMap<>(16));
+        return copierConcurrentHashMap.computeIfAbsent(target, aClass -> BeanCopier.create(source, target, false));
+    }
+
     public static Map<String, Object> toMap(Object src) {
         Preconditions.checkNotNull(src);
         Map<String, Object> res = new HashMap<>();
@@ -35,6 +80,52 @@ public class ConvertUtil {
             }
         });
+        Iterator<Map.Entry<String, Object>> iterator = res.entrySet().iterator();
+        while (iterator.hasNext()) {
+            if (iterator.next().getValue() == null) {
+                iterator.remove();
+            }
+        }
+        return res;
+    }
+
+    public static String toJsonString(Object src) {
+        return GsonUtil.INSTANCE.get().toJson(src);
+    }
+
+    public static Properties toProperties(String jsonStr) {
+        return GsonUtil.INSTANCE.get().fromJson(jsonStr, Properties.class);
+    }
+
+    public static String jsonStr2PropertiesStr(String jsonStr) {
+        StringBuilder sb = new StringBuilder();
+        Map<String, Object> map = GsonUtil.INSTANCE.get().fromJson(jsonStr, Map.class);
+        map.keySet().forEach(k -> {
+            sb.append(k).append("=").append(map.get(k).toString()).append(System.lineSeparator());
+        });
+        return sb.toString();
+    }
+
+    public static List<String> jsonStr2List(String jsonStr) {
+        List<String> res = new LinkedList<>();
+        Map<String, Object> map = GsonUtil.INSTANCE.get().fromJson(jsonStr, Map.class);
+        map.forEach((k, v) -> {
+            res.add(k + "=" + v);
+        });
+        return res;
+    }
+
+    public static String propertiesStr2JsonStr(String propertiesStr) {
+        String res = "{}";
+        try (ByteArrayInputStream baos = new ByteArrayInputStream(propertiesStr.getBytes())) {
+            Properties properties = new Properties();
+            properties.load(baos);
+            res = toJsonString(properties);
+        } catch (IOException e) {
+            log.error("propertiesStr2JsonStr error.", e);
+        }
         return res;
     }
 }
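
The new copy/copyList helpers give the project a cached cglib BeanCopier mapper, which DevOpsServiceImpl above already uses for DO-to-VO conversion. Typical call sites look like this, a usage sketch with types from this commit:

```java
// Copies are shallow and matched by property name and type; BeanCopier
// instances are cached per (source, target) class pair.
DevOpsUserVO vo = ConvertUtil.copy(userDO, DevOpsUserVO.class);
List<DevOpsUserVO> vos = ConvertUtil.copyList(userDOS, DevOpsUserVO.class);
```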


@@ -0,0 +1,43 @@
package com.xuxd.kafka.console.utils;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;
public class JwtUtils {
private static final String ISSUER = "kafka-console-ui";
private static final long EXPIRE_TIME = 5 * 24 * 60 * 60 * 1000;
private static final String PRIVATE_KEY = "~hello!kafka=console^ui";
public static String sign(String username){
Map<String,Object> header = new HashMap<>();
header.put("typ","JWT");
header.put("alg","HS256");
Map<String,Object> claims = new HashMap<>();
claims.put("username", username);
return Jwts.builder()
.setIssuer(ISSUER)
.setHeader(header)
.setClaims(claims)
.setIssuedAt(new Date())
.setExpiration(new Date(System.currentTimeMillis() + EXPIRE_TIME))
.signWith(SignatureAlgorithm.HS256, PRIVATE_KEY)
.compact();
}
public static String parse(String token){
try{
Claims claims = Jwts.parser()
.setSigningKey(PRIVATE_KEY)
.parseClaimsJws(token).getBody();
return (String) claims.get("username");
}catch (Exception e){
return null;
}
}
}
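
sign and parse are inverses: sign embeds the username claim with a five-day expiry, parse returns it, or null for a tampered or expired token. A round trip:

```java
// Round trip with the utility above; the username is hypothetical.
String token = JwtUtils.sign("admin");
String username = JwtUtils.parse(token); // -> "admin"; null if invalid or expired
```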


@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.utils;
import org.springframework.util.DigestUtils;
import java.nio.charset.StandardCharsets;
public class Md5Utils {
public static String MD5(String s) {
return DigestUtils.md5DigestAsHex(s.getBytes(StandardCharsets.UTF_8));
}
}


@@ -0,0 +1,15 @@
package com.xuxd.kafka.console.utils;
import com.xuxd.kafka.console.beans.ResponseData;
public class ResponseUtil {
public static <T> ResponseData<T> success(T data) {
return ResponseData.create().data(data);
}
public static ResponseData<String> error(String msg) {
return ResponseData.create().failed(msg);
}
}


@@ -0,0 +1,61 @@
package com.xuxd.kafka.console.utils;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.auth.SecurityProtocol;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-06 11:07:41
**/
public class SaslUtil {
public static final Pattern JAAS_PATTERN = Pattern.compile("^.*(username=\"(.*)\"[ \t]+).*$");
private SaslUtil() {
}
public static String findUsername(String saslJaasConfig) {
Matcher matcher = JAAS_PATTERN.matcher(saslJaasConfig);
return matcher.find() ? matcher.group(2) : "";
}
public static boolean isEnableSasl() {
Properties properties = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties();
if (!properties.containsKey(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG)) {
return false;
}
String s = properties.getProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG);
SecurityProtocol protocol = SecurityProtocol.valueOf(s);
switch (protocol) {
case SASL_SSL:
case SASL_PLAINTEXT:
return true;
default:
return false;
}
}
public static boolean isEnableScram() {
Properties properties = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties();
if (!properties.containsKey(SaslConfigs.SASL_MECHANISM)) {
return false;
}
String s = properties.getProperty(SaslConfigs.SASL_MECHANISM);
ScramMechanism mechanism = ScramMechanism.fromMechanismName(s);
switch (mechanism) {
case UNKNOWN:
return false;
default:
return true;
}
}
}
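
findUsername extracts the principal from a sasl.jaas.config string with the regex above; AclServiceImpl relies on it to recognize the admin user of a cluster. For a SASL_SCRAM JAAS value of the form used elsewhere in this commit (the credentials are hypothetical):

```java
String jaas = "org.apache.kafka.common.security.scram.ScramLoginModule required "
    + "username=\"admin\" password=\"admin\";";
String user = SaslUtil.findUsername(jaas); // -> "admin"
```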


@@ -6,23 +6,12 @@ server:
 kafka:
   config:
-    # Kafka broker address(es), comma separated
-    bootstrap-server: 'localhost:9092'
-    request-timeout-ms: 5000
-    # Whether ACL is enabled on the brokers; if not, ignore all settings below and only configure the cluster address above
-    enable-acl: false
-    # Only two security protocols are supported, SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise ignore this setting
-    security-protocol: SASL_PLAINTEXT
-    sasl-mechanism: SCRAM-SHA-256
-    # Super admin username, already configured as a super user on the brokers
-    admin-username: admin
-    # Super admin password
-    admin-password: admin
-    # Automatically create the configured super admin user on startup
-    admin-create: false
-    # ZooKeeper address used by the brokers; required only when the super admin user is created automatically on startup, otherwise ignored
-    zookeeper-addr: 'localhost:2181'
-    sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
+    # If no 'default' cluster exists yet, this address is loaded as the default cluster on startup (when configured here); if one already exists, it is not loaded
+    # Kafka broker address(es), comma separated; not required here, clusters can also be added on the page after startup
+    bootstrap-server:
+    # Other client properties for the cluster
+    properties:
+    # request.timeout.ms: 5000

 spring:
   application:
@@ -46,6 +35,7 @@
 logging:
   home: ./

+# For SCRAM-based ACL the console records created users and passwords here; a periodic scan removes records from the DB for users that no longer exist in the cluster
 cron:
   # clear-dirty-user: 0 * * * * ?
   clear-dirty-user: 0 0 1 * * ?
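
With the global ACL block removed from application.yml, security settings now live in each cluster's properties (the PROPERTIES column of T_CLUSTER_INFO, see the schema below), stored as JSON and parsed via ConvertUtil.toProperties; SaslUtil then inspects the resulting keys. A hypothetical value for a SASL_SCRAM cluster:

```java
// Hypothetical per-cluster properties JSON; the keys are standard Kafka client
// configs that SaslUtil.isEnableSasl/isEnableScram and findUsername read back.
String propertiesJson = "{"
    + "\"security.protocol\": \"SASL_PLAINTEXT\","
    + "\"sasl.mechanism\": \"SCRAM-SHA-256\","
    + "\"sasl.jaas.config\": \"org.apache.kafka.common.security.scram.ScramLoginModule required username=\\\"admin\\\" password=\\\"admin\\\";\""
    + "}";
java.util.Properties props = ConvertUtil.toProperties(propertiesJson);
```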


@@ -1,23 +1,51 @@
 -- DROP TABLE IF EXISTS T_KAKFA_USER;
+-- Kafka users created for ACL when SASL_SCRAM is enabled
 CREATE TABLE IF NOT EXISTS T_KAFKA_USER
 (
     ID IDENTITY NOT NULL COMMENT 'primary key ID',
-    USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT '',
-    PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'age',
+    USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'username',
+    PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'password',
     UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
+    CLUSTER_INFO_ID BIGINT NOT NULL COMMENT 'cluster id from the cluster info table',
     PRIMARY KEY (ID),
     UNIQUE (USERNAME)
 );

+-- min-offset alignment info used by the message sync solution
 CREATE TABLE IF NOT EXISTS T_MIN_OFFSET_ALIGNMENT
 (
     ID IDENTITY NOT NULL COMMENT 'primary key ID',
     GROUP_ID VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'groupId',
     TOPIC VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'topic',
     THAT_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for that kafka cluster',
     THIS_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for this kafka cluster',
     UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
     PRIMARY KEY (ID),
     UNIQUE (GROUP_ID, TOPIC)
 );
+
+-- multi-cluster management: configuration of each cluster
+CREATE TABLE IF NOT EXISTS T_CLUSTER_INFO
+(
+    ID IDENTITY NOT NULL COMMENT 'primary key ID',
+    CLUSTER_NAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'cluster name',
+    ADDRESS VARCHAR(256) NOT NULL DEFAULT '' COMMENT 'cluster address',
+    PROPERTIES VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'other client properties of the cluster',
+    UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
+    PRIMARY KEY (ID),
+    UNIQUE (CLUSTER_NAME)
+);
+
+-- console (DevOps) user table
+CREATE TABLE IF NOT EXISTS T_DEVOPS_USER
+(
+    ID IDENTITY NOT NULL COMMENT 'primary key ID',
+    USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'username',
+    PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'password',
+    `ROLE` VARCHAR(16) NOT NULL DEFAULT 'developer' COMMENT 'role',
+    `DELETE` TINYINT(1) NOT NULL DEFAULT 0 COMMENT 'delete flag',
+    CREATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'create time',
+    UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
+    PRIMARY KEY (ID),
+    UNIQUE (USERNAME)
+);


@@ -0,0 +1,330 @@
package kafka.console
import com.xuxd.kafka.console.config.ContextConfigHolder
import kafka.utils.Implicits.MapExtensionMethods
import kafka.utils.Logging
import org.apache.kafka.clients._
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.consumer.internals.{ConsumerNetworkClient, RequestFuture}
import org.apache.kafka.common.Node
import org.apache.kafka.common.config.ConfigDef.ValidString.in
import org.apache.kafka.common.config.ConfigDef.{Importance, Type}
import org.apache.kafka.common.config.{AbstractConfig, ConfigDef}
import org.apache.kafka.common.errors.AuthenticationException
import org.apache.kafka.common.internals.ClusterResourceListeners
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersionCollection
import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.network.Selector
import org.apache.kafka.common.protocol.Errors
import org.apache.kafka.common.requests._
import org.apache.kafka.common.utils.{KafkaThread, LogContext, Time}
import java.io.IOException
import java.util.Properties
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{ConcurrentLinkedQueue, TimeUnit}
import scala.jdk.CollectionConverters.{ListHasAsScala, MapHasAsJava, PropertiesHasAsScala, SetHasAsScala}
import scala.util.{Failure, Success, Try}
/**
* kafka-console-ui.
*
* Copy from {@link kafka.admin.BrokerApiVersionsCommand}.
*
* @author xuxd
* @date 2022-01-22 15:15:57
**/
object BrokerApiVersion extends Logging {
def listAllBrokerApiVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
val res = new java.util.HashMap[Node, NodeApiVersions]()
val adminClient = createAdminClient()
try {
adminClient.awaitBrokers()
val brokerMap = adminClient.listAllBrokerVersionInfo()
brokerMap.forKeyValue {
(broker, versionInfoOrError) =>
versionInfoOrError match {
case Success(v) => {
res.put(broker, v)
}
case Failure(v) => logger.error(s"${broker} -> ERROR: ${v}\n")
}
}
} finally {
adminClient.close()
}
res
}
private def createAdminClient(): AdminClient = {
val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
AdminClient.create(props)
}
// org.apache.kafka.clients.admin.AdminClient doesn't currently expose a way to retrieve the supported api versions.
// We inline the bits we need from kafka.admin.AdminClient so that we can delete it.
private class AdminClient(val time: Time,
val client: ConsumerNetworkClient,
val bootstrapBrokers: List[Node]) extends Logging {
@volatile var running = true
val pendingFutures = new ConcurrentLinkedQueue[RequestFuture[ClientResponse]]()
val networkThread = new KafkaThread("admin-client-network-thread", () => {
try {
while (running)
client.poll(time.timer(Long.MaxValue))
} catch {
case t: Throwable =>
error("admin-client-network-thread exited", t)
} finally {
pendingFutures.forEach { future =>
try {
future.raise(Errors.UNKNOWN_SERVER_ERROR)
} catch {
case _: IllegalStateException => // It is OK if the future has been completed
}
}
pendingFutures.clear()
}
}, true)
networkThread.start()
private def send(target: Node,
request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
val future = client.send(target, request)
pendingFutures.add(future)
future.awaitDone(Long.MaxValue, TimeUnit.MILLISECONDS)
pendingFutures.remove(future)
if (future.succeeded())
future.value().responseBody()
else
throw future.exception()
}
private def sendAnyNode(request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
bootstrapBrokers.foreach { broker =>
try {
return send(broker, request)
} catch {
case e: AuthenticationException =>
throw e
case e: Exception =>
debug(s"Request ${request.apiKey()} failed against node $broker", e)
}
}
throw new RuntimeException(s"Request ${request.apiKey()} failed on brokers $bootstrapBrokers")
}
private def getApiVersions(node: Node): ApiVersionCollection = {
val response = send(node, new ApiVersionsRequest.Builder()).asInstanceOf[ApiVersionsResponse]
Errors.forCode(response.data.errorCode).maybeThrow()
response.data.apiKeys
}
/**
* Wait until there is a non-empty list of brokers in the cluster.
*/
def awaitBrokers(): Unit = {
var nodes = List[Node]()
val start = System.currentTimeMillis()
val maxWait = 30 * 1000
do {
nodes = findAllBrokers()
if (nodes.isEmpty) {
Thread.sleep(50)
}
}
while (nodes.isEmpty && (System.currentTimeMillis() - start < maxWait))
}
private def findAllBrokers(): List[Node] = {
val request = MetadataRequest.Builder.allTopics()
val response = sendAnyNode(request).asInstanceOf[MetadataResponse]
val errors = response.errors
if (!errors.isEmpty) {
logger.info(s"Metadata request contained errors: $errors")
}
// In Kafka 3.x this method is buildCluster, which replaced cluster()
// response.buildCluster.nodes.asScala.toList
response.cluster().nodes.asScala.toList
}
def listAllBrokerVersionInfo(): Map[Node, Try[NodeApiVersions]] =
findAllBrokers().map { broker =>
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker)))
}.toMap
def close(): Unit = {
running = false
try {
client.close()
} catch {
case e: IOException =>
error("Exception closing nioSelector:", e)
}
}
}
private object AdminClient {
val DefaultConnectionMaxIdleMs = 9 * 60 * 1000
val DefaultRequestTimeoutMs = 5000
val DefaultSocketConnectionSetupMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG
val DefaultSocketConnectionSetupMaxMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG
val DefaultMaxInFlightRequestsPerConnection = 100
val DefaultReconnectBackoffMs = 50
val DefaultReconnectBackoffMax = 50
val DefaultSendBufferBytes = 128 * 1024
val DefaultReceiveBufferBytes = 32 * 1024
val DefaultRetryBackoffMs = 100
val AdminClientIdSequence = new AtomicInteger(1)
val AdminConfigDef = {
val config = new ConfigDef()
.define(
CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
Type.LIST,
Importance.HIGH,
CommonClientConfigs.BOOTSTRAP_SERVERS_DOC)
.define(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG,
Type.STRING,
ClientDnsLookup.USE_ALL_DNS_IPS.toString,
in(ClientDnsLookup.USE_ALL_DNS_IPS.toString,
ClientDnsLookup.RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY.toString),
Importance.MEDIUM,
CommonClientConfigs.CLIENT_DNS_LOOKUP_DOC)
.define(
CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
ConfigDef.Type.STRING,
CommonClientConfigs.DEFAULT_SECURITY_PROTOCOL,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SECURITY_PROTOCOL_DOC)
.define(
CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG,
ConfigDef.Type.INT,
DefaultRequestTimeoutMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.REQUEST_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_DOC)
.define(
CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG,
ConfigDef.Type.LONG,
DefaultRetryBackoffMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.RETRY_BACKOFF_MS_DOC)
.withClientSslSupport()
.withClientSaslSupport()
config
}
class AdminConfig(originals: Map[_, _]) extends AbstractConfig(AdminConfigDef, originals.asJava, false)
def create(props: Properties): AdminClient = {
val properties = new Properties()
val names = props.stringPropertyNames()
for (name <- names.asScala.toSet) {
properties.put(name, props.get(name).toString())
}
create(properties.asScala.toMap)
}
def create(props: Map[String, _]): AdminClient = create(new AdminConfig(props))
def create(config: AdminConfig): AdminClient = {
val clientId = "admin-" + AdminClientIdSequence.getAndIncrement()
val logContext = new LogContext(s"[LegacyAdminClient clientId=$clientId] ")
val time = Time.SYSTEM
val metrics = new Metrics(time)
val metadata = new Metadata(100L, 60 * 60 * 1000L, logContext,
new ClusterResourceListeners)
val channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext)
val requestTimeoutMs = config.getInt(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMaxMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG)
val retryBackoffMs = config.getLong(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG)
val brokerUrls = config.getList(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)
val clientDnsLookup = config.getString(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG)
val brokerAddresses = ClientUtils.parseAndValidateAddresses(brokerUrls, clientDnsLookup)
metadata.bootstrap(brokerAddresses)
val selector = new Selector(
DefaultConnectionMaxIdleMs,
metrics,
time,
"admin",
channelBuilder,
logContext)
// The compatibility concerns here differ by client version.
// For 3.x, use this constructor instead:
// val networkClient = new NetworkClient(
// selector,
// metadata,
// clientId,
// DefaultMaxInFlightRequestsPerConnection,
// DefaultReconnectBackoffMs,
// DefaultReconnectBackoffMax,
// DefaultSendBufferBytes,
// DefaultReceiveBufferBytes,
// requestTimeoutMs,
// connectionSetupTimeoutMs,
// connectionSetupTimeoutMaxMs,
// time,
// true,
// new ApiVersions,
// logContext)
val networkClient = new NetworkClient(
selector,
metadata,
clientId,
DefaultMaxInFlightRequestsPerConnection,
DefaultReconnectBackoffMs,
DefaultReconnectBackoffMax,
DefaultSendBufferBytes,
DefaultReceiveBufferBytes,
requestTimeoutMs,
connectionSetupTimeoutMs,
connectionSetupTimeoutMaxMs,
ClientDnsLookup.USE_ALL_DNS_IPS,
time,
true,
new ApiVersions,
logContext)
val highLevelClient = new ConsumerNetworkClient(
logContext,
networkClient,
metadata,
time,
retryBackoffMs,
requestTimeoutMs,
Integer.MAX_VALUE)
new AdminClient(
time,
highLevelClient,
metadata.fetch.nodes.asScala.toList)
}
}
}


@@ -1,12 +1,13 @@
 package kafka.console

+import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
+import org.apache.kafka.clients.NodeApiVersions
+import org.apache.kafka.clients.admin.DescribeClusterResult
+import org.apache.kafka.common.Node
+
 import java.util.Collections
 import java.util.concurrent.TimeUnit

-import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
-import com.xuxd.kafka.console.config.KafkaConfig
-import org.apache.kafka.clients.admin.DescribeClusterResult
 import scala.jdk.CollectionConverters.{CollectionHasAsScala, SetHasAsJava, SetHasAsScala}

 /**
@@ -19,6 +20,7 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf

     def clusterInfo(): ClusterInfo = {
         withAdminClientAndCatchError(admin => {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             val clusterResult: DescribeClusterResult = admin.describeCluster()
             val clusterInfo = new ClusterInfo
             clusterInfo.setClusterId(clusterResult.clusterId().get(timeoutMs, TimeUnit.MILLISECONDS))
@@ -41,4 +43,8 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
             new ClusterInfo
         }).asInstanceOf[ClusterInfo]
     }
+
+    def listBrokerVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
+        BrokerApiVersion.listAllBrokerApiVersionInfo()
+    }
 }


@@ -3,8 +3,7 @@ package kafka.console
import java.util import java.util
import java.util.Collections import java.util.Collections
import java.util.concurrent.TimeUnit import java.util.concurrent.TimeUnit
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import com.xuxd.kafka.console.config.KafkaConfig
import kafka.console.ConfigConsole.BrokerLoggerConfigType import kafka.console.ConfigConsole.BrokerLoggerConfigType
import kafka.server.ConfigType import kafka.server.ConfigType
import org.apache.kafka.clients.admin.{AlterConfigOp, Config, ConfigEntry, DescribeConfigsOptions} import org.apache.kafka.clients.admin.{AlterConfigOp, Config, ConfigEntry, DescribeConfigsOptions}
@@ -69,6 +68,7 @@ class ConfigConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfi
val configResource = new ConfigResource(getResourceTypeAndValidate(entityType, entityName), entityName) val configResource = new ConfigResource(getResourceTypeAndValidate(entityType, entityName), entityName)
val config = Map(configResource -> Collections.singletonList(new AlterConfigOp(entry, opType)).asInstanceOf[util.Collection[AlterConfigOp]]) val config = Map(configResource -> Collections.singletonList(new AlterConfigOp(entry, opType)).asInstanceOf[util.Collection[AlterConfigOp]])
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.incrementalAlterConfigs(config.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS) admin.incrementalAlterConfigs(config.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "") (true, "")
}, e => { }, e => {
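`incrementalAlterConfigs`, used in the second hunk, changes one config entry at a time instead of replacing the whole config, via an `AlterConfigOp` of type SET, DELETE, APPEND, or SUBTRACT. A self-contained sketch with an illustrative topic setting:

```scala
import java.util.Collections
import org.apache.kafka.clients.admin.{Admin, AlterConfigOp, ConfigEntry}
import org.apache.kafka.common.config.ConfigResource

// One incremental change: SET retention.ms=86400000 (1 day) on a topic.
def setTopicRetention(admin: Admin, topic: String): Unit = {
  val resource = new ConfigResource(ConfigResource.Type.TOPIC, topic)
  val op = new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET)
  val ops: java.util.Collection[AlterConfigOp] = Collections.singletonList(op)
  admin.incrementalAlterConfigs(Collections.singletonMap(resource, ops)).all().get()
}
```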

ConsumerConsole.scala

@@ -1,6 +1,6 @@
 package kafka.console
-import com.xuxd.kafka.console.config.KafkaConfig
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
 import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo
 import org.apache.kafka.clients.admin._
 import org.apache.kafka.clients.consumer.{ConsumerConfig, OffsetAndMetadata, OffsetResetStrategy}
@@ -75,6 +75,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
         val endOffsets = commitOffsets.keySet.map { topicPartition =>
             topicPartition -> OffsetSpec.latest
         }.toMap
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         admin.listOffsets(endOffsets.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
     }, e => {
         log.error("listOffsets error.", e)
@@ -166,6 +167,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
     def resetPartitionToTargetOffset(groupId: String, partition: TopicPartition, offset: Long): (Boolean, String) = {
         withAdminClientAndCatchError(admin => {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             admin.alterConsumerGroupOffsets(groupId, Map(partition -> new OffsetAndMetadata(offset)).asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
             (true, "")
         }, e => {
@@ -178,7 +180,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
         timestamp: java.lang.Long): (Boolean, String) = {
         withAdminClientAndCatchError(admin => {
             val logOffsets = getLogTimestampOffsets(admin, groupId, topicPartitions.asScala, timestamp)
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             admin.alterConsumerGroupOffsets(groupId, logOffsets.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
             (true, "")
         }, e => {
@@ -256,6 +258,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
         val timestampOffsets = topicPartitions.map { topicPartition =>
             topicPartition -> OffsetSpec.forTimestamp(timestamp)
         }.toMap
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         val offsets = admin.listOffsets(
             timestampOffsets.asJava,
             new ListOffsetsOptions().timeoutMs(timeoutMs)
@@ -280,6 +283,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
         val endOffsets = topicPartitions.map { topicPartition =>
             topicPartition -> OffsetSpec.latest
         }.toMap
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         val offsets = admin.listOffsets(
             endOffsets.asJava,
             new ListOffsetsOptions().timeoutMs(timeoutMs)
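The reset-by-timestamp path above combines two Admin APIs: `listOffsets` with `OffsetSpec.forTimestamp` to translate a timestamp into per-partition offsets, then `alterConsumerGroupOffsets` to commit them on the group's behalf. A self-contained sketch of that flow for a single partition (names are illustrative; the group must have no active members or the alter call fails):

```scala
import java.util.concurrent.TimeUnit
import org.apache.kafka.clients.admin.{Admin, OffsetSpec}
import org.apache.kafka.clients.consumer.OffsetAndMetadata
import org.apache.kafka.common.TopicPartition
import scala.jdk.CollectionConverters._

def resetToTimestamp(admin: Admin, groupId: String, tp: TopicPartition,
    timestamp: Long, timeoutMs: Long): Unit = {
  // 1. translate the timestamp into an offset
  //    (offset() is -1 when no record exists at or after the timestamp)
  val info = admin.listOffsets(Map(tp -> OffsetSpec.forTimestamp(timestamp)).asJava)
    .partitionResult(tp).get(timeoutMs, TimeUnit.MILLISECONDS)
  // 2. commit that offset on behalf of the group
  admin.alterConsumerGroupOffsets(groupId,
    Map(tp -> new OffsetAndMetadata(info.offset())).asJava)
    .all().get(timeoutMs, TimeUnit.MILLISECONDS)
}
```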

KafkaAclConsole.scala

@@ -3,9 +3,8 @@ package kafka.console
 import java.util
 import java.util.concurrent.TimeUnit
 import java.util.{Collections, List}
 import com.xuxd.kafka.console.beans.AclEntry
-import com.xuxd.kafka.console.config.KafkaConfig
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
 import org.apache.commons.lang3.StringUtils
 import org.apache.kafka.common.acl._
 import org.apache.kafka.common.resource.{ResourcePattern, ResourcePatternFilter, ResourceType}
@@ -58,6 +57,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
     def addAcl(acls: List[AclBinding]): Boolean = {
         withAdminClient(adminClient => {
             try {
+                val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
                 adminClient.createAcls(acls).all().get(timeoutMs, TimeUnit.MILLISECONDS)
                 true
             } catch {
@@ -100,6 +100,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
     def deleteAcl(entry: AclEntry, allResource: Boolean, allPrincipal: Boolean, allOperation: Boolean): Boolean = {
         withAdminClient(adminClient => {
             try {
+                val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
                 val result = adminClient.deleteAcls(Collections.singleton(entry.toAclBindingFilter(allResource, allPrincipal, allOperation))).all().get(timeoutMs, TimeUnit.MILLISECONDS)
                 log.info("delete acl: {}", result)
                 true
@@ -113,6 +114,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
     def deleteAcl(filters: util.Collection[AclBindingFilter]): Boolean = {
         withAdminClient(adminClient => {
             try {
+                val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
                 val result = adminClient.deleteAcls(filters).all().get(timeoutMs, TimeUnit.MILLISECONDS)
                 log.info("delete acl: {}", result)
                 true
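`deleteAcls` takes `AclBindingFilter`s rather than concrete bindings, so one call can sweep many ACLs at once. A sketch of a filter matching every ACL of a single principal, of the kind the `deleteAcl(filters)` overload above consumes (principal name is illustrative):

```scala
import org.apache.kafka.common.acl.{AccessControlEntryFilter, AclBindingFilter, AclOperation, AclPermissionType}
import org.apache.kafka.common.resource.ResourcePatternFilter

val allAclsOfAlice = new AclBindingFilter(
  ResourcePatternFilter.ANY,          // any topic/group/cluster resource
  new AccessControlEntryFilter(
    "User:alice",                     // the principal to sweep
    null,                             // any host
    AclOperation.ANY,
    AclPermissionType.ANY))
```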

KafkaConfigConsole.scala

@@ -1,16 +1,16 @@
 package kafka.console
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
+import kafka.server.ConfigType
+import kafka.utils.Implicits.PropertiesOps
+import org.apache.kafka.clients.admin._
+import org.apache.kafka.common.config.SaslConfigs
+import org.apache.kafka.common.security.scram.internals.{ScramCredentialUtils, ScramFormatter}
 import java.security.MessageDigest
 import java.util
 import java.util.concurrent.TimeUnit
 import java.util.{Properties, Set}
-import com.xuxd.kafka.console.config.KafkaConfig
-import kafka.server.ConfigType
-import kafka.utils.Implicits.PropertiesOps
-import org.apache.kafka.clients.admin._
-import org.apache.kafka.common.security.scram.internals.{ScramCredentialUtils, ScramFormatter}
 import scala.jdk.CollectionConverters.{CollectionHasAsScala, DictionaryHasAsScala, SeqHasAsJava}
 /**
@@ -35,31 +35,32 @@ class KafkaConfigConsole(config: KafkaConfig) extends KafkaConsole(config: Kafka
         }).asInstanceOf[util.Map[String, UserScramCredentialsDescription]]
     }
-    def addOrUpdateUser(name: String, pass: String): Boolean = {
+    def addOrUpdateUser(name: String, pass: String): (Boolean, String) = {
         withAdminClient(adminClient => {
             try {
-                adminClient.alterUserScramCredentials(util.Arrays.asList(
-                    new UserScramCredentialUpsertion(name,
-                        new ScramCredentialInfo(ScramMechanism.fromMechanismName(config.getSaslMechanism), defaultIterations), pass)))
-                    .all().get(timeoutMs, TimeUnit.MILLISECONDS)
-                true
+                val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
+                val mechanisms = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM).split(",").toSeq
+                val scrams = mechanisms.map(m => new UserScramCredentialUpsertion(name,
+                    new ScramCredentialInfo(ScramMechanism.fromMechanismName(m), defaultIterations), pass))
+                adminClient.alterUserScramCredentials(scrams.asInstanceOf[Seq[UserScramCredentialAlteration]].asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
+                (true, "")
             } catch {
                 case ex: Exception => log.error("addOrUpdateUser error", ex)
-                    false
+                    (false, ex.getMessage)
             }
-        }).asInstanceOf[Boolean]
+        }).asInstanceOf[(Boolean, String)]
     }
     def addOrUpdateUserWithZK(name: String, pass: String): Boolean = {
         withZKClient(adminZkClient => {
             try {
-                val credential = new ScramFormatter(org.apache.kafka.common.security.scram.internals.ScramMechanism.forMechanismName(config.getSaslMechanism))
+                val credential = new ScramFormatter(org.apache.kafka.common.security.scram.internals.ScramMechanism.forMechanismName(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM)))
                     .generateCredential(pass, defaultIterations)
                 val credentialStr = ScramCredentialUtils.credentialToString(credential)
                 val userConfig: Properties = new Properties()
-                userConfig.put(config.getSaslMechanism, credentialStr)
+                userConfig.put(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM), credentialStr)
                 val configs = adminZkClient.fetchEntityConfig(ConfigType.User, name)
                 userConfig ++= configs
@@ -101,6 +102,7 @@ class KafkaConfigConsole(config: KafkaConfig) extends KafkaConsole(config: Kafka
         // .all().get(timeoutMs, TimeUnit.MILLISECONDS)
         // all delete
         val userDetail = getUserDetailList(util.Collections.singletonList(name))
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         userDetail.values().asScala.foreach(u => {
             adminClient.alterUserScramCredentials(u.credentialInfos().asScala.map(s => new UserScramCredentialDeletion(u.name(), s.mechanism())
                 .asInstanceOf[UserScramCredentialAlteration]).toList.asJava)
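The rewritten `addOrUpdateUser` now fans out one SCRAM upsert per mechanism listed in the cluster's `sasl.mechanism` property, so a single user gets credentials for, say, both SCRAM-SHA-256 and SCRAM-SHA-512. A standalone sketch of that fan-out against the plain Admin API (mechanism list and iteration count are example values; 4096 is Kafka's minimum for SCRAM-SHA-256):

```scala
import org.apache.kafka.clients.admin.{Admin, ScramCredentialInfo, ScramMechanism, UserScramCredentialAlteration, UserScramCredentialUpsertion}
import scala.jdk.CollectionConverters._

def upsertUser(admin: Admin, name: String, pass: String, iterations: Int = 4096): Unit = {
  // one upsert per mechanism the cluster is configured with
  val alterations: Seq[UserScramCredentialAlteration] =
    "SCRAM-SHA-256,SCRAM-SHA-512".split(",").toSeq.map { m =>
      new UserScramCredentialUpsertion(name,
        new ScramCredentialInfo(ScramMechanism.fromMechanismName(m), iterations), pass)
    }
  admin.alterUserScramCredentials(alterations.asJava).all().get()
}
```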

KafkaConsole.scala

@@ -1,13 +1,11 @@
 package kafka.console
-import com.xuxd.kafka.console.config.KafkaConfig
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
 import kafka.zk.{AdminZkClient, KafkaZkClient}
-import org.apache.kafka.clients.CommonClientConfigs
 import org.apache.kafka.clients.admin._
 import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer, OffsetAndMetadata}
 import org.apache.kafka.clients.producer.KafkaProducer
 import org.apache.kafka.common.TopicPartition
-import org.apache.kafka.common.config.SaslConfigs
 import org.apache.kafka.common.requests.ListOffsetsResponse
 import org.apache.kafka.common.serialization.{ByteArrayDeserializer, ByteArraySerializer, StringSerializer}
 import org.apache.kafka.common.utils.Time
@@ -25,7 +23,7 @@ import scala.jdk.CollectionConverters.{MapHasAsJava, MapHasAsScala}
 * */
 class KafkaConsole(config: KafkaConfig) {
-    protected val timeoutMs: Int = config.getRequestTimeoutMs
+    // protected val timeoutMs: Int = config.getRequestTimeoutMs
     protected def withAdminClient(f: Admin => Any): Any = {
@@ -108,7 +106,7 @@ class KafkaConsole(config: KafkaConfig) {
     }
     protected def withTimeoutMs[T <: AbstractOptions[T]](options: T) = {
-        options.timeoutMs(timeoutMs)
+        options.timeoutMs(ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
     }
     private def createAdminClient(): Admin = {
@@ -117,13 +115,9 @@ class KafkaConsole(config: KafkaConfig) {
     private def getProps(): Properties = {
         val props: Properties = new Properties();
-        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, config.getBootstrapServer)
-        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, config.getRequestTimeoutMs())
-        if (config.isEnableAcl) {
-            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, config.getSecurityProtocol())
-            props.put(SaslConfigs.SASL_MECHANISM, config.getSaslMechanism())
-            props.put(SaslConfigs.SASL_JAAS_CONFIG, config.getSaslJaasConfig())
-        }
+        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
+        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
+        props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
         props
     }
 }
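`getProps()` no longer special-cases ACL settings: everything beyond the bootstrap servers and timeout is copied wholesale from the per-cluster `Properties` held in the context, so each cluster can carry its own security settings. A sketch of such a property bag for a SASL/SCRAM cluster (values are examples; the keys are standard client configs):

```scala
import java.util.Properties
import org.apache.kafka.clients.CommonClientConfigs
import org.apache.kafka.common.config.SaslConfigs

val clusterProps = new Properties()
clusterProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT")
clusterProps.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256")
clusterProps.put(SaslConfigs.SASL_JAAS_CONFIG,
  """org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";""")
```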

MessageConsole.scala

@@ -1,6 +1,9 @@
 package kafka.console
-import com.xuxd.kafka.console.config.KafkaConfig
+import com.xuxd.kafka.console.beans.MessageFilter
+import com.xuxd.kafka.console.beans.enums.FilterType
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
+import org.apache.commons.lang3.StringUtils
 import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
 import org.apache.kafka.clients.producer.ProducerRecord
 import org.apache.kafka.common.TopicPartition
@@ -20,10 +23,11 @@ import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsScala, SeqH
 class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig) with Logging {
     def searchBy(partitions: util.Collection[TopicPartition], startTime: Long, endTime: Long,
-        maxNums: Int): util.List[ConsumerRecord[Array[Byte], Array[Byte]]] = {
+        maxNums: Int, filter: MessageFilter): (util.List[ConsumerRecord[Array[Byte], Array[Byte]]], Int) = {
         var startOffTable: immutable.Map[TopicPartition, Long] = Map.empty
         var endOffTable: immutable.Map[TopicPartition, Long] = Map.empty
         withAdminClientAndCatchError(admin => {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             val startTable = KafkaConsole.getLogTimestampOffsets(admin, partitions.asScala.toSeq, startTime, timeoutMs)
             startOffTable = startTable.map(t2 => (t2._1, t2._2.offset())).toMap
@@ -33,8 +37,44 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
             log.error("getLogTimestampOffsets error.", e)
             throw new RuntimeException("getLogTimestampOffsets error", e)
         })
+        val headerValueBytes = if (StringUtils.isNotEmpty(filter.getHeaderValue())) filter.getHeaderValue().getBytes() else None
+        def filterMessage(record: ConsumerRecord[Array[Byte], Array[Byte]]): Boolean = {
+            filter.getFilterType() match {
+                case FilterType.BODY => {
+                    val body = filter.getDeserializer().deserialize(record.topic(), record.value())
+                    var contains = false
+                    if (filter.isContainsValue) {
+                        contains = body.asInstanceOf[String].contains(filter.getSearchContent().asInstanceOf[String])
+                    } else {
+                        contains = body.equals(filter.getSearchContent)
+                    }
+                    contains
+                }
+                case FilterType.HEADER => {
+                    if (StringUtils.isNotEmpty(filter.getHeaderKey()) && StringUtils.isNotEmpty(filter.getHeaderValue())) {
+                        val iterator = record.headers().headers(filter.getHeaderKey()).iterator()
+                        var contains = false
+                        while (iterator.hasNext() && !contains) {
+                            val next = iterator.next().value()
+                            contains = (next.sameElements(headerValueBytes.asInstanceOf[Array[Byte]]))
+                        }
+                        contains
+                    } else if (StringUtils.isNotEmpty(filter.getHeaderKey()) && StringUtils.isEmpty(filter.getHeaderValue())) {
+                        record.headers().headers(filter.getHeaderKey()).iterator().hasNext()
+                    } else {
+                        true
+                    }
+                }
+                case FilterType.NONE => true
+            }
+        }
         var terminate: Boolean = (startOffTable == endOffTable)
         val res = new util.LinkedList[ConsumerRecord[Array[Byte], Array[Byte]]]()
+        // number of messages scanned so far
+        var searchNums = 0
         // stop right away if the start and end offsets are identical
         if (!terminate) {
@@ -51,9 +91,10 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
             // 1. when every queried partition has reached its end offset
             while (!terminate) {
                 // the maximum number of messages to scan has been reached
-                if (res.size() >= maxNums) {
+                if (searchNums >= maxNums) {
                     terminate = true
                 } else {
+                    val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
                     val records = consumer.poll(Duration.ofMillis(timeoutMs))
                     if (records.isEmpty) {
@@ -67,6 +108,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
                     if (first.offset() >= endOff) {
                         arrive.remove(tp)
                     } else {
+                        searchNums += recordList.size()
                         //
                         // (String topic,
                         // int partition,
@@ -80,7 +122,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
                         // V value,
                         // Headers headers,
                         // Optional<Integer> leaderEpoch)
-                        val nullVList = recordList.asScala.map(record => new ConsumerRecord[Array[Byte], Array[Byte]](record.topic(),
+                        val nullVList = recordList.asScala.filter(filterMessage(_)).map(record => new ConsumerRecord[Array[Byte], Array[Byte]](record.topic(),
                             record.partition(),
                             record.offset(),
                             record.timestamp(),
@@ -114,7 +156,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
             })
         }
-        res
+        (res, searchNums)
     }
     def searchBy(
@@ -149,6 +191,7 @@ class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
         var terminate = tpSet.isEmpty
         while (!terminate) {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             val records = consumer.poll(Duration.ofMillis(timeoutMs))
             val tps = new util.HashSet(tpSet).asScala
             for (tp <- tps) {
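The header branch of `filterMessage` walks `record.headers().headers(key)` — every header carrying that key — and compares raw bytes with `sameElements`. The same check, extracted as a standalone predicate (a sketch, not part of the project):

```scala
import org.apache.kafka.clients.consumer.ConsumerRecord
import scala.jdk.CollectionConverters._

// true when any header named `key` carries exactly the bytes of `value`
def headerMatches(record: ConsumerRecord[Array[Byte], Array[Byte]],
    key: String, value: String): Boolean = {
  val wanted = value.getBytes()
  record.headers().headers(key).asScala
    .exists(h => h.value().sameElements(wanted))
}
```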

OperationConsole.scala

@@ -1,6 +1,6 @@
 package kafka.console
-import com.xuxd.kafka.console.config.KafkaConfig
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
 import kafka.admin.ReassignPartitionsCommand
 import org.apache.kafka.clients.admin.{ElectLeadersOptions, ListPartitionReassignmentsOptions, PartitionReassignment}
 import org.apache.kafka.clients.consumer.KafkaConsumer
@@ -34,6 +34,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
                 throw new UnsupportedOperationException("exist consumer client.")
             }
         }
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         val thatGroupDescriptionList = thatAdmin.describeConsumerGroups(searchGroupIds).all().get(timeoutMs, TimeUnit.MILLISECONDS).values()
         if (groupDescriptionList.isEmpty) {
             throw new IllegalArgumentException("that consumer group info is null.")
@@ -101,6 +102,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
         thatMinOffset: util.Map[TopicPartition, Long]): (Boolean, String) = {
         val thatAdmin = createAdminClient(props)
         try {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             val searchGroupIds = Collections.singleton(groupId)
             val groupDescriptionList = consumerConsole.getConsumerGroupList(searchGroupIds)
             if (groupDescriptionList.isEmpty) {
@@ -178,6 +180,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
         val thatConsumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
         try {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             val thisTopicPartitions = consumerConsole.listSubscribeTopics(groupId).get(topic).asScala.sortBy(_.partition())
             val thatTopicPartitionMap = thatAdmin.listConsumerGroupOffsets(
                 groupId
@@ -239,8 +242,8 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
         withAdminClientAndCatchError(admin => {
             admin.listPartitionReassignments(withTimeoutMs(new ListPartitionReassignmentsOptions)).reassignments().get()
         }, e => {
-            Collections.emptyMap()
             log.error("listPartitionReassignments error.", e)
+            Collections.emptyMap()
         }).asInstanceOf[util.Map[TopicPartition, PartitionReassignment]]
     }
@@ -253,4 +256,20 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
             throw e
         }).asInstanceOf[util.Map[TopicPartition, Throwable]]
     }
+
+    def proposedAssignments(reassignmentJson: String,
+        brokerListString: String): util.Map[TopicPartition, util.List[Int]] = {
+        withAdminClientAndCatchError(admin => {
+            val map = ReassignPartitionsCommand.generateAssignment(admin, reassignmentJson, brokerListString, true)._1
+            val res = new util.HashMap[TopicPartition, util.List[Int]]()
+            for (tp <- map.keys) {
+                res.put(tp, map(tp).asJava)
+                // res.put(tp, map.getOrElse(tp, Seq.empty).asJava)
+            }
+            res
+        }, e => {
+            log.error("proposedAssignments error.", e)
+            throw e
+        })
+    }.asInstanceOf[util.Map[TopicPartition, util.List[Int]]]
 }
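`proposedAssignments` delegates to `ReassignPartitionsCommand.generateAssignment`, the same entry point `kafka-reassign-partitions.sh --generate` uses, so its inputs follow the CLI's format: a topics-to-move JSON document and a comma-separated target broker list. For example (topic and broker ids are illustrative):

```scala
// The topics-to-move JSON lists only topic names; the command computes
// a balanced assignment across the given brokers.
val reassignmentJson =
  """{
    |  "version": 1,
    |  "topics": [
    |    { "topic": "my-topic" }
    |  ]
    |}""".stripMargin
val brokerListString = "0,1,2"
```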

TopicConsole.scala

@@ -1,6 +1,6 @@
 package kafka.console
-import com.xuxd.kafka.console.config.KafkaConfig
+import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
 import kafka.admin.ReassignPartitionsCommand._
 import kafka.utils.Json
 import org.apache.kafka.clients.admin._
@@ -28,6 +28,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
     * @return all topic name set.
     */
     def getTopicNameList(internal: Boolean = true): Set[String] = {
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         withAdminClientAndCatchError(admin => admin.listTopics(new ListTopicsOptions().listInternal(internal)).names()
             .get(timeoutMs, TimeUnit.MILLISECONDS),
             e => {
@@ -42,6 +43,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
     * @return internal topic name set.
     */
     def getInternalTopicNameList(): Set[String] = {
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         withAdminClientAndCatchError(admin => admin.listTopics(new ListTopicsOptions().listInternal(true)).listings()
             .get(timeoutMs, TimeUnit.MILLISECONDS).asScala.filter(_.isInternal).map(_.name()).toSet[String].asJava,
             e => {
@@ -69,6 +71,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
     */
     def deleteTopic(topic: String): (Boolean, String) = {
         withAdminClientAndCatchError(admin => {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             admin.deleteTopics(Collections.singleton(topic), new DeleteTopicsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
             (true, "")
         },
@@ -103,6 +106,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
     */
     def createTopic(topic: NewTopic): (Boolean, String) = {
         withAdminClientAndCatchError(admin => {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             val createResult = admin.createTopics(Collections.singleton(topic), new CreateTopicsOptions().retryOnQuotaViolation(false))
             createResult.all().get(timeoutMs, TimeUnit.MILLISECONDS)
             (true, "")
@@ -117,6 +121,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
     */
     def createPartitions(newPartitions: util.Map[String, NewPartitions]): (Boolean, String) = {
         withAdminClientAndCatchError(admin => {
+            val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
             admin.createPartitions(newPartitions,
                 new CreatePartitionsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
             (true, "")
@@ -241,6 +246,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
                 .asScala.map(info => new TopicPartition(topic, info.partition())).toSeq
             case None => throw new IllegalArgumentException("topic is not exist.")
         }
+        val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
         val offsetMap = KafkaConsole.getLogTimestampOffsets(admin, partitions, timestamp, timeoutMs)
         offsetMap.map(tuple2 => (tuple2._1, tuple2._2.offset())).toMap.asJava
     }, e => {
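For reference, a call site feeding the `createTopic` wrapper above would build a `NewTopic` with a partition count, replication factor, and optional per-topic overrides — a sketch with example values:

```scala
import org.apache.kafka.clients.admin.NewTopic
import scala.jdk.CollectionConverters._

// 3 partitions, replication factor 2, retention override of 7 days
val topic = new NewTopic("my-topic", 3, 2.toShort)
  .configs(Map("retention.ms" -> "604800000").asJava)
```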

ui/package-lock.json (generated): file diff suppressed because it is too large (13125 lines changed).

ui/package.json

@@ -17,11 +17,6 @@
     "vuex": "^3.4.0"
   },
   "devDependencies": {
-    "@vue/cli-plugin-babel": "~4.5.0",
-    "@vue/cli-plugin-eslint": "~4.5.0",
-    "@vue/cli-plugin-router": "~4.5.0",
-    "@vue/cli-plugin-vuex": "~4.5.0",
-    "@vue/cli-service": "~4.5.0",
     "@vue/eslint-config-prettier": "^6.0.0",
     "babel-eslint": "^10.1.0",
     "eslint": "^6.7.2",

Some files were not shown because too many files have changed in this diff.