48 Commits

Author SHA1 Message Date
许晓东
8131cb1a42 Publish v1.0.4 installation package download link 2022-02-16 20:01:06 +08:00
许晓东
1dd6466261 Replica reassignment 2022-02-16 19:50:35 +08:00
许晓东
dda08a2152 Replica reassignment -> generate assignment plan 2022-02-15 20:13:07 +08:00
许晓东
01c7121ee4 Keep the cluster node list sorted 2022-01-22 23:33:13 +08:00
许晓东
d939d7653c Show Broker API version compatibility info on the home page 2022-01-22 23:07:41 +08:00
许晓东
058cd5a24e Query current reassignments; handle unsupported-version exception 2022-01-20 13:44:37 +08:00
许晓东
db3f55ac4a polish README 2022-01-19 19:05:00 +08:00
许晓东
a311a34537 Fix stack overflow bug in partition comparison 2022-01-18 20:42:11 +08:00
许晓东
e8fe2ea1c7 Support Chinese cluster names; selectable time order in message query 2022-01-13 14:19:17 +08:00
许晓东
10302dd39c v1.0.3 installation package download link 2022-01-09 23:57:00 +08:00
许晓东
55a4483fcc Package only the zip archive 2022-01-09 23:45:39 +08:00
许晓东
4dd2412b78 polish README 2022-01-09 23:40:20 +08:00
许晓东
387c714072 polish README 2022-01-09 23:27:23 +08:00
许晓东
6a2d876d50 Multi-cluster support, cluster switching 2022-01-06 19:31:44 +08:00
许晓东
f5fb2c4f88 Cluster switching 2022-01-05 21:19:46 +08:00
许晓东
6f9676e259 Cluster list, add cluster 2022-01-04 21:06:50 +08:00
许晓东
2427ce2c1e Prepare for multi-cluster support development 2022-01-03 22:02:03 +08:00
许晓东
02abe67fce Message query filtering 2021-12-30 14:17:47 +08:00
许晓东
ad39f4e82c Message query filtering 2021-12-29 21:15:56 +08:00
许晓东
243c89b459 Topic fuzzy search; message filter page configuration 2021-12-28 20:39:07 +08:00
许晓东
11418cd6e0 Remove the message sync solution entry 2021-12-27 20:41:19 +08:00
许晓东
b19c6200d2 polish README 2021-12-21 14:57:16 +08:00
许晓东
5f6a06c100 1.0.2 installation package download link 2021-12-21 14:44:55 +08:00
许晓东
5930e44fdf Support resending from message detail 2021-12-21 14:08:36 +08:00
许晓东
98f33bb2cc Show the valid time range of each partition in partition info 2021-12-20 20:29:34 +08:00
许晓东
0ec3bac6c2 Send messages online 2021-12-20 00:09:20 +08:00
许晓东
bd814d550d Release memory promptly when querying messages by time 2021-12-17 20:06:23 +08:00
许晓东
b9548d1640 Fix bug in querying messages by time; lengthen page request timeout 2021-12-13 19:05:57 +08:00
许晓东
57a41e087f Show consumption status when viewing message details 2021-12-12 23:35:17 +08:00
许晓东
54cd402810 Query message detail info 2021-12-12 18:53:29 +08:00
许晓东
c17b0aa4b9 Query messages by offset 2021-12-11 23:56:18 +08:00
Xiaodong Xu
8169ddb019 Merge pull request #5 from xxd763795151/master
Query messages by time
2021-12-11 14:55:07 +08:00
许晓东
5f24c62855 Query messages by time 2021-12-11 14:53:54 +08:00
许晓东
3b21fc4cd8 Add a message page 2021-12-05 23:18:51 +08:00
许晓东
d15ec4a2db Add a logo to the top-left corner of the page 2021-12-04 14:47:32 +08:00
许晓东
12431db525 Update download link for the latest installation package 2021-11-30 20:20:03 +08:00
许晓东
20535027bf Cancel in-progress replica reassignment; add refresh button to consumer group -> consumption detail 2021-11-30 19:49:02 +08:00
许晓东
222ba34702 Send statistics 2021-11-30 15:15:47 +08:00
许晓东
39e50a6589 Reset consumer offsets by timestamp, using UTC+8 time 2021-11-29 19:40:46 +08:00
许晓东
e881c58a8f Remove topic throttling 2021-11-27 20:49:46 +08:00
许晓东
34c87997d1 Topic throttling 2021-11-27 19:46:06 +08:00
许晓东
4639335a9d polish README.md 2021-11-25 19:07:51 +08:00
许晓东
73fed3face Remove throttle rate configuration 2021-11-25 10:55:37 +08:00
许晓东
1b028fcb4f Throttle configuration 2021-11-24 20:57:33 +08:00
许晓东
62569c4454 Change replicas 2021-11-23 19:59:52 +08:00
许晓东
a219551802 Change replica info 2021-11-20 22:35:38 +08:00
许晓东
7a98eb479f Query replica change info 2021-11-19 21:01:11 +08:00
许晓东
405f272fb7 Download link 2021-11-18 15:13:08 +08:00
118 changed files with 6574 additions and 365 deletions

115
README.md
View File

@@ -1,79 +1,72 @@
# Kafka visual management platform
A lightweight Kafka visual management platform that is quick to install and configure and easy to use.
To keep development simple there is no multi-language support; only Chinese is displayed.
To keep development simple there is no internationalization support; only Chinese is displayed.
If you have used rocketmq-console, the frontend style is somewhat similar.
## Installation package download
* Click to download: [kafka-console-ui.tar.gz](http://43.128.31.53/kafka-console-ui.tar.gz) or [kafka-console-ui.zip](http://43.128.31.53/kafka-console-ui.zip)
* Or follow the packaging and deployment section below to download the source and rebuild the package
## Page preview
If GitHub can display the images, click [view menu pages](./document/overview/概览.md) to see what each page looks like
## Cluster migration support
The current master branch and future versions no longer provide a message sync / cluster migration solution. If you need it, see: [Cluster migration notes](./document/datasync/集群迁移.md)
## ACL notes
If the Kafka cluster has ACL enabled but the console does not show the Acl menu, see the [ACL configuration guide](./document/acl/Acl.md)
## Features
* Multi-cluster support
* Cluster info
* Topic management
* Consumer group management
* SASL_SCRAM-based authentication and authorization management
* Message management
* ACL
* Operations
![Features](./document/功能特性.png)
## Tech stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Kafka version
* Currently built against Kafka 2.8.0
## Monitoring
Only operations and management features are provided; for monitoring and alerting, pair it with other components, see https://blog.csdn.net/x763795151/article/details/119705372
# Packaging and deployment
## Packaging
Requirements:
* maven 3.6+
* jdk 8
* git
For a detailed feature breakdown, see this mind map:
![Features](./document/img/功能特性.png)
## Installation package download
Click to download (v1.0.4): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.4/kafka-console-ui.zip)
## Quick start
### Windows
1. Unzip the zip package
2. Go into the bin directory (you must be in the bin directory) and double-click `start.bat` to start
3. Stop: simply close the command window that was opened
### Linux or macOS
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
cd kafka-console-ui
# run on Linux or macOS
sh package.sh
# run on Windows
package.bat
```
On success, the build outputs the following archives:
* target/kafka-console-ui.tar.gz
* target/kafka-console-ui.zip
## Deployment
### macOS or Linux
```
# extract (tar.gz for example)
tar -zxvf kafka-console-ui.tar.gz
# extract
unzip kafka-console-ui.zip
# enter the extracted directory
cd kafka-console-ui
# edit the configuration
vim config/application.yml
# start
sh bin/start.sh
# stop
sh bin/shutdown.sh
```
### Windows
1. Unzip the zip package
2. Edit the configuration file `config/application.yml`
3. Go into the bin directory (you must be in the bin directory) and run `start.bat` to start
After startup, open http://127.0.0.1:7766
### Access URL
After startup, open http://127.0.0.1:7766
# Development environment
* jdk 8
* idea
* scala 2.13
* maven >= 3.6
* webstorm
Except for webstorm, which is only the IDE for frontend work and can be replaced as you like, jdk and scala are required.
# Local development setup
Taking my own setup as an example: prepare the tools listed above, then clone the code locally.
## Backend setup
1. Open the project in IDEA
2. Open IDEA's Project Structure (Settings) -> Modules -> mark src/main/scala as Sources (src/main/java is the conventional source directory, so this directory has to be added as well)
3. Open IDEA's Project Structure (Settings) -> Libraries, add a Scala SDK and select the locally downloaded Scala 2.13 directory to add it
## Frontend
The frontend code is under the ui directory of the project; open it with a frontend IDE and develop as usual.
## Note
Frontend and backend are separated. If you start only the backend without building the frontend, there is no UI. Build once first with `sh package.sh` and then start from IDEA, or run the frontend part separately
### Configuring a cluster
On first startup there is no Kafka cluster configured yet, so the top-right corner of the page may show an error such as "No Cluster Info" or a hint like "no cluster info, please switch cluster first".
Configure a cluster as follows:
1. Click the [运维] (Operations) menu in the top navigation bar
2. Click the [集群切换] (Switch cluster) button under cluster management
3. In the dialog, click [新增集群] (Add cluster)
4. Enter the Kafka cluster address and a name (any name will do)
5. Click submit and the cluster is added
6. Once added, the cluster appears in the dialog; click the [切换] (Switch) button on the right to make it the current cluster
To add more clusters later, follow the same flow. To switch to a cluster, click its switch button; the top-right corner of the page shows which cluster is currently in use, and you can refresh the page if unsure.
When adding a cluster you can enter additional properties besides the address, such as the request timeout and ACL settings (see the sketch below). If ACL is enabled, the ACL menu appears in the navigation bar after switching to that cluster. Currently SASL_SCRAM-based authentication and authorization management is the most complete; I have not verified the others (even though I wrote it, I have not fully verified every part of this feature), but the authorization part should be generic
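As a rough illustration only: the ACL-related properties for a SASL_SCRAM-secured cluster are ordinary Kafka client settings. The keys below are standard Kafka client configs and the values are placeholders, not settings prescribed by this project:

```java
import java.util.Properties;

// Hypothetical example of the per-cluster properties entered in the "add cluster" dialog.
// These are standard Kafka client config keys; adjust the mechanism and credentials to your broker setup.
public class ClusterPropertiesExample {
    public static Properties saslScramProps() {
        Properties props = new Properties();
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"admin\" password=\"admin\";");
        props.put("request.timeout.ms", "5000");
        return props;
    }
}
```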
## Kafka version
* Currently built against Kafka 2.8.0
## Monitoring
Only operations and management features are provided; monitoring and alerting require other components. If needed, see https://blog.csdn.net/x763795151/article/details/119705372
## Building from source
To build from source, see: [Build from source](./document/package/源码打包.md)
## Local development
For local development setup, see: [Local development](./document/develop/开发配置.md)

View File

@@ -4,7 +4,7 @@
xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
<id>rocketmq-reput</id>
<formats>
<format>tar.gz</format>
<!-- <format>tar.gz</format>-->
<format>zip</format>
</formats>

36
document/acl/Acl.md Normal file
View File

@@ -0,0 +1,36 @@
# ACL configuration guide
## Background
You may have come here from this article: [How to manage Kafka ACL configuration quickly in a visual way](https://blog.csdn.net/x763795151/article/details/120200119)
That article said ACL is enabled by editing the application.yml configuration file, for example:
```yaml
kafka:
  config:
    # kafka broker addresses, comma separated
    bootstrap-server: 'localhost:9092'
    # whether ACL is enabled on the broker; if not, the following items can all be ignored
    enable-acl: true
    # only 2 security protocols are supported: SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT if ACL is enabled, otherwise this setting does not matter
    security-protocol: SASL_PLAINTEXT
    sasl-mechanism: SCRAM-SHA-256
    # super admin username, already configured as a super admin on the broker
    admin-username: admin
    # super admin password
    admin-password: admin
    # automatically create the configured super admin user at startup
    admin-create: true
    # zookeeper address the broker connects to
    zookeeper-addr: localhost:2181
    sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
```
It stated that kafka.config.enable-acl must be true.
Note: **this approach is no longer supported.**
## Current version
Because multi-cluster configuration is now supported (see the "Configuring a cluster" section of the main README), these extra configuration items have been removed.
If ACL is enabled, configure the cluster's ACL-related properties when adding the cluster on the page, like this: ![Add cluster](./新增集群.png)
If the console detects ACL-related properties for a cluster, the ACL menu appears automatically after switching to it.
Note: only SASL is supported.

View File

Binary file not shown (image added, 245 KiB)

View File

@@ -0,0 +1,11 @@
# Cluster migration
You may be here out of curiosity after reading the cluster migration solution (on-prem to cloud) I published earlier.
The article is here: [Smooth migration between old and new Kafka clusters in practice](https://blog.csdn.net/x763795151/article/details/121070563)
However, since those features are tied to business-specific concerns, they have been removed in the new version.
The current master branch and future versions no longer provide a message sync / cluster migration solution. If you need it, use the code on the single-data-sync branch, or the previously released v1.0.2 package (direct download): [kafka-console-ui.zip](https://github.com/xxd763795151/kafka-console-ui/releases/download/v1.0.2/kafka-console-ui.zip)
v1.0.2 and earlier only support a single cluster configuration, but their SASL_SCRAM authentication and authorization management is quite complete.
Later versions will support multi-cluster management and will remove or refine some pre-v1.0.2 features, with the goal of being a sufficiently lightweight management tool that carries no other business concerns.

View File

@@ -0,0 +1,31 @@
# Local development setup guide
## Tech stack
* spring boot
* java, scala
* kafka
* h2
* vue
## Development environment
* jdk 8
* idea
* scala 2.13
* maven >= 3.6
* webstorm
Except for webstorm, which is only the IDE for frontend work and can be replaced as you like, jdk and scala are required.
Scala 2.13 can be downloaded from the bottom of this page: https://www.scala-lang.org/download/scala2.html
## Clone the code
Taking my own setup as an example: prepare the tools above, then clone the code locally.
## Backend setup
1. Open the project in IDEA
2. Open IDEA's Project Structure (Settings) -> Modules -> mark src/main/scala as Sources (src/main/java is the conventional source directory, so this additional source directory has to be added)
3. Open IDEA's Project Structure (Settings) -> Libraries, add a Scala SDK and select the locally downloaded Scala 2.13 directory (in IDEA you can also simply select it there without downloading it beforehand)
## Frontend
The frontend code is under the ui directory of the project; open it with a frontend IDE such as WebStorm and develop as usual.
## Note
Frontend and backend are separated. If you start only the backend, src/main/resources may contain no static files, so opening it in a browser shows no page.
Build the frontend once first, e.g. run `sh package.sh`, then start from IDEA; or after starting the backend from IDEA, open the frontend project under the ui directory with a frontend IDE such as WebStorm and run it separately for development.

View File

Binary file not shown (image added, 99 KiB)

View File

Binary file not shown (image added, 45 KiB)

View File

Binary file not shown (image added, 38 KiB)

View File

Binary file not shown (image added, 75 KiB)

View File

Binary file not shown (image added, 36 KiB)

View File

Binary file not shown (image added, 45 KiB)

View File

Binary file not shown (image added, 58 KiB)

View File

Binary file not shown (image added, 26 KiB)

View File

Binary file not shown (image added, 57 KiB)

View File

@@ -0,0 +1,32 @@
# Menu preview
If ACL is not enabled in the configuration, the ACL menu pages are not shown (so there is no Acl item in the navigation bar)
## Cluster
* Cluster list
![集群](./img/集群.png)
* View or modify cluster configuration
![Broker配置](./img/Broker配置.png)
## Topic
* Topic list
![Topic](./img/Topic.png)
## Consumer groups
* Consumer group list
![消费组](./img/消费组.png)
## Messages
* Search or filter messages by time
![消息时间查询](./img/消息时间查询.png)
* Message detail
![消息详情](./img/消息详情.png)
## Operations
* Operations page
![运维](./img/运维.png)
* Cluster switching
![集群切换](./img/集群切换.png)

View File

@@ -0,0 +1,32 @@
# Build from source
You can download the latest code and build it yourself; the latest code may contain features newer than the released packages
## Requirements
* maven 3.6+
* jdk 8
* git (optional)
Maven >= 3.6 is recommended. I have not tried 3.4+ or 3.5+; with a 3.3+ version on Windows I ran into a packaging bug where Spring Boot's application.yml may not end up in the right directory.
If 3.6+ also fails on macOS with the same problem, try the latest version.
## Get the source
```
git clone https://github.com/xxd763795151/kafka-console-ui.git
```
Or download the source directly from the project page
## Build
I have written a simple packaging script; just run it.
### Windows
```
cd kafka-console-ui
# run on Windows
package.bat
```
### Linux or macOS
```
cd kafka-console-ui
# run on Linux or macOS
sh package.sh
```

View File

Binary file not shown (image removed, 354 KiB)

View File

@@ -10,7 +10,7 @@
</parent>
<groupId>com.xuxd</groupId>
<artifactId>kafka-console-ui</artifactId>
<version>1.0.0</version>
<version>1.0.4</version>
<name>kafka-console-ui</name>
<description>Kafka console manage ui</description>
<properties>

View File

@@ -3,11 +3,13 @@ package com.xuxd.kafka.console;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.scheduling.annotation.EnableScheduling;
@MapperScan("com.xuxd.kafka.console.dao")
@SpringBootApplication
@EnableScheduling
@ServletComponentScan
public class KafkaConsoleUiApplication {
public static void main(String[] args) {

View File

@@ -8,7 +8,7 @@ import org.apache.kafka.common.Node;
* @author xuxd
* @date 2021-10-08 14:03:21
**/
public class BrokerNode {
public class BrokerNode implements Comparable{
private int id;
@@ -80,4 +80,8 @@ public class BrokerNode {
public void setController(boolean controller) {
isController = controller;
}
@Override public int compareTo(Object o) {
return this.id - ((BrokerNode)o).id;
}
}
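Aside (not part of the diff): the raw `Comparable` with subtraction-based `compareTo` above works for realistic broker ids, but a typed, overflow-safe variant is the more idiomatic form. A minimal sketch, purely as an alternative:

```java
// Hypothetical alternative, not the project's code: generically typed Comparable
// with an overflow-safe comparison.
public class BrokerNodeSketch implements Comparable<BrokerNodeSketch> {
    private int id;

    @Override
    public int compareTo(BrokerNodeSketch o) {
        return Integer.compare(this.id, o.id); // avoids int-subtraction overflow
    }
}
```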

View File

@@ -0,0 +1,73 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import org.apache.kafka.common.serialization.Deserializer;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 15:30:08
**/
public class MessageFilter {
private FilterType filterType = FilterType.NONE;
private Object searchContent = null;
private String headerKey = null;
private String headerValue = null;
private Deserializer deserializer = null;
private boolean isContainsValue = false;
public FilterType getFilterType() {
return filterType;
}
public void setFilterType(FilterType filterType) {
this.filterType = filterType;
}
public Object getSearchContent() {
return searchContent;
}
public void setSearchContent(Object searchContent) {
this.searchContent = searchContent;
}
public String getHeaderKey() {
return headerKey;
}
public void setHeaderKey(String headerKey) {
this.headerKey = headerKey;
}
public String getHeaderValue() {
return headerValue;
}
public void setHeaderValue(String headerValue) {
this.headerValue = headerValue;
}
public Deserializer getDeserializer() {
return deserializer;
}
public void setDeserializer(Deserializer deserializer) {
this.deserializer = deserializer;
}
public boolean isContainsValue() {
return isContainsValue;
}
public void setContainsValue(boolean containsValue) {
isContainsValue = containsValue;
}
}
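For orientation only: the diff does not show how MessageFilter is evaluated, but a filter of this shape is typically applied per consumed record, roughly as in the sketch below. The matching rules here are an assumption for illustration, not the project's actual logic:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import java.nio.charset.StandardCharsets;

// Hypothetical matcher (assumes the same package as MessageFilter/FilterType)
// illustrating how a BODY/HEADER filter could be evaluated per record.
public final class MessageFilterMatcher {
    public static boolean matches(MessageFilter f, ConsumerRecord<byte[], byte[]> record) {
        switch (f.getFilterType()) {
            case BODY:
                // substring match against the message body
                String body = record.value() == null ? "" : new String(record.value(), StandardCharsets.UTF_8);
                return f.getSearchContent() != null && body.contains(f.getSearchContent().toString());
            case HEADER:
                // match on header key, optionally also on header value
                for (Header h : record.headers()) {
                    if (h.key().equals(f.getHeaderKey())) {
                        if (!f.isContainsValue()) {
                            return true;
                        }
                        String v = h.value() == null ? "" : new String(h.value(), StandardCharsets.UTF_8);
                        return v.contains(f.getHeaderValue());
                    }
                }
                return false;
            case NONE:
            default:
                return true; // no filtering
        }
    }
}
```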

View File

@@ -0,0 +1,36 @@
package com.xuxd.kafka.console.beans;
import com.xuxd.kafka.console.beans.enums.FilterType;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:45:49
**/
@Data
public class QueryMessage {
private String topic;
private int partition;
private long startTime;
private long endTime;
private long offset;
private String keyDeserializer;
private String valueDeserializer;
private FilterType filter;
private String value;
private String headerKey;
private String headerValue;
}

View File

@@ -0,0 +1,29 @@
package com.xuxd.kafka.console.beans;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-19 17:07:50
**/
@Data
public class ReplicaAssignment {
private long version = 1L;
private List<Partition> partitions;
private long interBrokerThrottle = -1;
@Data
static class Partition {
private String topic;
private int partition;
private List<Integer> replicas;
}
}

View File

@@ -0,0 +1,25 @@
package com.xuxd.kafka.console.beans;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-19 23:28:31
**/
@Data
public class SendMessage {
private String topic;
private int partition;
private String key;
private String body;
private int num;
private long offset;
}

View File

@@ -21,7 +21,7 @@ public class TopicPartition implements Comparable {
}
TopicPartition other = (TopicPartition) o;
if (!this.topic.equals(other.getTopic())) {
return this.compareTo(other);
return this.topic.compareTo(other.topic);
}
return this.partition - other.partition;

View File

@@ -0,0 +1,28 @@
package com.xuxd.kafka.console.beans.dos;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:54:24
**/
@Data
@TableName("t_cluster_info")
public class ClusterInfoDO {
@TableId(type = IdType.AUTO)
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
}

View File

@@ -23,4 +23,6 @@ public class KafkaUserDO {
private String password;
private String updateTime;
private Long clusterInfoId;
}

View File

@@ -0,0 +1,22 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.enums.ThrottleUnit;
import java.util.ArrayList;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-24 19:37:10
**/
@Data
public class BrokerThrottleDTO {
private List<Integer> brokerList = new ArrayList<>();
private long throttle;
private ThrottleUnit unit;
}

View File

@@ -0,0 +1,38 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 20:19:03
**/
@Data
public class ClusterInfoDTO {
private Long id;
private String clusterName;
private String address;
private String properties;
private String updateTime;
public ClusterInfoDO to() {
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setId(id);
infoDO.setClusterName(clusterName);
infoDO.setAddress(address);
if (StringUtils.isNotBlank(properties)) {
infoDO.setProperties(ConvertUtil.propertiesStr2JsonStr(properties));
}
return infoDO;
}
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.beans.dto;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-02-15 19:08:13
**/
@Data
public class ProposedAssignmentDTO {
private String topic;
private List<Integer> brokers;
}

View File

@@ -0,0 +1,75 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.enums.FilterType;
import java.util.Date;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:17:59
**/
@Data
public class QueryMessageDTO {
private String topic;
private int partition;
private Date startTime;
private Date endTime;
private Long offset;
private String keyDeserializer;
private String valueDeserializer;
private String filter;
private String value;
private String headerKey;
private String headerValue;
public QueryMessage toQueryMessage() {
QueryMessage queryMessage = new QueryMessage();
queryMessage.setTopic(topic);
queryMessage.setPartition(partition);
if (startTime != null) {
queryMessage.setStartTime(startTime.getTime());
}
if (endTime != null) {
queryMessage.setEndTime(endTime.getTime());
}
if (offset != null) {
queryMessage.setOffset(offset);
}
queryMessage.setKeyDeserializer(keyDeserializer);
queryMessage.setValueDeserializer(valueDeserializer);
if (StringUtils.isNotBlank(filter)) {
queryMessage.setFilter(FilterType.valueOf(filter.toUpperCase()));
} else {
queryMessage.setFilter(FilterType.NONE);
}
if (StringUtils.isNotBlank(value)) {
queryMessage.setValue(value.trim());
}
if (StringUtils.isNotBlank(headerKey)) {
queryMessage.setHeaderKey(headerKey.trim());
}
if (StringUtils.isNotBlank(headerValue)) {
queryMessage.setHeaderValue(headerValue.trim());
}
return queryMessage;
}
}

View File

@@ -0,0 +1,21 @@
package com.xuxd.kafka.console.beans.dto;
import com.xuxd.kafka.console.beans.enums.TopicThrottleSwitch;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-26 15:33:37
**/
@Data
public class TopicThrottleDTO {
private String topic;
private List<Integer> partitions;
private TopicThrottleSwitch operation;
}

View File

@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.enums;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-29 14:36:01
**/
public enum FilterType {
NONE, BODY, HEADER
}

View File

@@ -0,0 +1,18 @@
package com.xuxd.kafka.console.beans.enums;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-24 19:38:00
**/
public enum ThrottleUnit {
KB, MB;
public long toKb(long size) {
if (this == MB) {
return 1024 * size;
}
return size;
}
}
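A quick usage note, as a minimal sketch: the throttle value entered in the UI is normalized to KB before being passed on, which matches `dto.getUnit().toKb(dto.getThrottle())` in OperationController later in this diff.

```java
// Minimal usage sketch (assumes the same package as ThrottleUnit).
public class ThrottleUnitDemo {
    public static void main(String[] args) {
        System.out.println(ThrottleUnit.MB.toKb(5)); // 5120
        System.out.println(ThrottleUnit.KB.toKb(5)); // 5
    }
}
```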

View File

@@ -0,0 +1,11 @@
package com.xuxd.kafka.console.beans.enums;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-26 15:33:07
**/
public enum TopicThrottleSwitch {
ON,OFF;
}

View File

@@ -0,0 +1,24 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-22 16:24:58
**/
@Data
public class BrokerApiVersionVO {
private int brokerId;
private String host;
private int supportNums;
private int unSupportNums;
private List<String> versionInfo;
}

View File

@@ -0,0 +1,40 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.Data;
import org.apache.commons.lang3.StringUtils;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-04 19:16:11
**/
@Data
public class ClusterInfoVO {
private Long id;
private String clusterName;
private String address;
private List<String> properties;
private String updateTime;
public static ClusterInfoVO from(ClusterInfoDO infoDO) {
ClusterInfoVO vo = new ClusterInfoVO();
vo.setId(infoDO.getId());
vo.setClusterName(infoDO.getClusterName());
vo.setAddress(infoDO.getAddress());
vo.setUpdateTime(infoDO.getUpdateTime());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
vo.setProperties(ConvertUtil.jsonStr2List(infoDO.getProperties()));
}
return vo;
}
}

View File

@@ -0,0 +1,32 @@
package com.xuxd.kafka.console.beans.vo;
import lombok.Data;
import org.apache.kafka.clients.consumer.ConsumerRecord;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 14:19:35
**/
@Data
public class ConsumerRecordVO {
private String topic;
private int partition;
private long offset;
private long timestamp;
public static ConsumerRecordVO fromConsumerRecord(ConsumerRecord record) {
ConsumerRecordVO vo = new ConsumerRecordVO();
vo.setTopic(record.topic());
vo.setPartition(record.partition());
vo.setOffset(record.offset());
vo.setTimestamp(record.timestamp());
return vo;
}
}

View File

@@ -0,0 +1,25 @@
package com.xuxd.kafka.console.beans.vo;
import com.xuxd.kafka.console.beans.TopicPartition;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-30 16:03:41
**/
@Data
public class CurrentReassignmentVO {
private final String topic;
private final int partition;
private final List<Integer> replicas;
private final List<Integer> addingReplicas;
private final List<Integer> removingReplicas;
}

View File

@@ -0,0 +1,47 @@
package com.xuxd.kafka.console.beans.vo;
import java.util.ArrayList;
import java.util.List;
import lombok.Data;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-12 12:45:23
**/
@Data
public class MessageDetailVO {
private String topic;
private int partition;
private long offset;
private long timestamp;
private String timestampType;
private List<HeaderVO> headers = new ArrayList<>();
private Object key;
private Object value;
private List<ConsumerVO> consumers;
@Data
public static class HeaderVO {
String key;
String value;
}
@Data
public static class ConsumerVO {
String groupId;
String status;
}
}

View File

@@ -29,12 +29,16 @@ public class TopicPartitionVO {
private long diff;
private long beginTime;
private long endTime;
public static TopicPartitionVO from(TopicPartitionInfo partitionInfo) {
TopicPartitionVO partitionVO = new TopicPartitionVO();
partitionVO.setPartition(partitionInfo.partition());
partitionVO.setLeader(partitionInfo.leader().toString());
partitionVO.setReplicas(partitionInfo.replicas().stream().map(Node::toString).collect(Collectors.toList()));
partitionVO.setIsr(partitionInfo.isr().stream().map(Node::toString).collect(Collectors.toList()));
partitionVO.setReplicas(partitionInfo.replicas().stream().map(node -> node.host() + ":" + node.port() + " (id: " + node.idString() + ")").collect(Collectors.toList()));
partitionVO.setIsr(partitionInfo.isr().stream().map(Node::idString).collect(Collectors.toList()));
return partitionVO;
}
}

View File

@@ -0,0 +1,66 @@
package com.xuxd.kafka.console.boot;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.KafkaConfig;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.List;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.stereotype.Component;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 19:16:50
**/
@Slf4j
@Component
public class Bootstrap implements SmartInitializingSingleton {
public static final String DEFAULT_CLUSTER_NAME = "default";
private final KafkaConfig config;
private final ClusterInfoMapper clusterInfoMapper;
public Bootstrap(KafkaConfig config, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.config = config;
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
private void initialize() {
loadDefaultClusterConfig();
}
private void loadDefaultClusterConfig() {
log.info("load default kafka config.");
if (StringUtils.isBlank(config.getBootstrapServer())) {
return;
}
QueryWrapper<ClusterInfoDO> clusterInfoDOQueryWrapper = new QueryWrapper<>();
clusterInfoDOQueryWrapper.eq("cluster_name", DEFAULT_CLUSTER_NAME);
List<Object> objects = clusterInfoMapper.selectObjs(clusterInfoDOQueryWrapper);
if (CollectionUtils.isNotEmpty(objects)) {
log.warn("default kafka cluster config has existed[any of cluster name or address].");
return;
}
ClusterInfoDO infoDO = new ClusterInfoDO();
infoDO.setClusterName(DEFAULT_CLUSTER_NAME);
infoDO.setAddress(config.getBootstrapServer().trim());
infoDO.setProperties(ConvertUtil.toJsonString(config.getProperties()));
clusterInfoMapper.insert(infoDO);
log.info("Insert default config: {}", infoDO);
}
@Override public void afterSingletonsInstantiated() {
initialize();
}
}

View File

@@ -0,0 +1,74 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 15:46:55
**/
public class ContextConfig {
public static final int DEFAULT_REQUEST_TIMEOUT_MS = 5000;
private Long clusterInfoId;
private String clusterName;
private String bootstrapServer;
private int requestTimeoutMs = DEFAULT_REQUEST_TIMEOUT_MS;
private Properties properties = new Properties();
public String getBootstrapServer() {
return bootstrapServer;
}
public void setBootstrapServer(String bootstrapServer) {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return properties.containsKey(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG) ?
Integer.parseInt(properties.getProperty(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)) : requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public Properties getProperties() {
return properties;
}
public Long getClusterInfoId() {
return clusterInfoId;
}
public void setClusterInfoId(Long clusterInfoId) {
this.clusterInfoId = clusterInfoId;
}
public String getClusterName() {
return clusterName;
}
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public void setProperties(Properties properties) {
this.properties = properties;
}
@Override public String toString() {
return "KafkaContextConfig{" +
"bootstrapServer='" + bootstrapServer + '\'' +
", requestTimeoutMs=" + requestTimeoutMs +
", properties=" + properties +
'}';
}
}

View File

@@ -0,0 +1,12 @@
package com.xuxd.kafka.console.config;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-30 18:55:28
**/
public class ContextConfigHolder {
public static final ThreadLocal<ContextConfig> CONTEXT_CONFIG = new ThreadLocal<>();
}
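This ThreadLocal holder is the pivot of the multi-cluster support: whoever handles a request (or a scheduled job) sets the per-cluster ContextConfig before doing Kafka work and removes it afterwards, as ContextSetFilter and KafkaAclSchedule do later in this diff. A minimal sketch of the pattern:

```java
// Sketch of the set/use/remove pattern around ContextConfigHolder
// (assumes the same package; mirrors what ContextSetFilter does per request).
public class ContextUsageSketch {
    public void runWithCluster(ContextConfig config, Runnable kafkaWork) {
        try {
            ContextConfigHolder.CONTEXT_CONFIG.set(config); // bind the target cluster to this thread
            kafkaWork.run();                                // console/service code reads the holder
        } finally {
            ContextConfigHolder.CONTEXT_CONFIG.remove();    // always clear to avoid leaking across requests
        }
    }
}
```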

View File

@@ -1,5 +1,6 @@
package com.xuxd.kafka.console.config;
import java.util.Properties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
@@ -15,23 +16,9 @@ public class KafkaConfig {
private String bootstrapServer;
private int requestTimeoutMs;
private String securityProtocol;
private String saslMechanism;
private String saslJaasConfig;
private String adminUsername;
private String adminPassword;
private boolean adminCreate;
private String zookeeperAddr;
private boolean enableAcl;
private Properties properties;
public String getBootstrapServer() {
return bootstrapServer;
@@ -41,62 +28,6 @@ public class KafkaConfig {
this.bootstrapServer = bootstrapServer;
}
public int getRequestTimeoutMs() {
return requestTimeoutMs;
}
public void setRequestTimeoutMs(int requestTimeoutMs) {
this.requestTimeoutMs = requestTimeoutMs;
}
public String getSecurityProtocol() {
return securityProtocol;
}
public void setSecurityProtocol(String securityProtocol) {
this.securityProtocol = securityProtocol;
}
public String getSaslMechanism() {
return saslMechanism;
}
public void setSaslMechanism(String saslMechanism) {
this.saslMechanism = saslMechanism;
}
public String getSaslJaasConfig() {
return saslJaasConfig;
}
public void setSaslJaasConfig(String saslJaasConfig) {
this.saslJaasConfig = saslJaasConfig;
}
public String getAdminUsername() {
return adminUsername;
}
public void setAdminUsername(String adminUsername) {
this.adminUsername = adminUsername;
}
public String getAdminPassword() {
return adminPassword;
}
public void setAdminPassword(String adminPassword) {
this.adminPassword = adminPassword;
}
public boolean isAdminCreate() {
return adminCreate;
}
public void setAdminCreate(boolean adminCreate) {
this.adminCreate = adminCreate;
}
public String getZookeeperAddr() {
return zookeeperAddr;
}
@@ -105,11 +36,11 @@ public class KafkaConfig {
this.zookeeperAddr = zookeeperAddr;
}
public boolean isEnableAcl() {
return enableAcl;
public Properties getProperties() {
return properties;
}
public void setEnableAcl(boolean enableAcl) {
this.enableAcl = enableAcl;
public void setProperties(Properties properties) {
this.properties = properties;
}
}

View File

@@ -5,6 +5,7 @@ import kafka.console.ConfigConsole;
import kafka.console.ConsumerConsole;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import kafka.console.MessageConsole;
import kafka.console.OperationConsole;
import kafka.console.TopicConsole;
import org.springframework.context.annotation.Bean;
@@ -54,4 +55,9 @@ public class KafkaConfiguration {
ConsumerConsole consumerConsole) {
return new OperationConsole(config, topicConsole, consumerConsole);
}
@Bean
public MessageConsole messageConsole(KafkaConfig config) {
return new MessageConsole(config);
}
}

View File

@@ -1,8 +1,13 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.dto.ClusterInfoDTO;
import com.xuxd.kafka.console.service.ClusterService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@@ -23,4 +28,34 @@ public class ClusterController {
public Object getClusterInfo() {
return clusterService.getClusterInfo();
}
@GetMapping("/info")
public Object getClusterInfoList() {
return clusterService.getClusterInfoList();
}
@PostMapping("/info")
public Object addClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.addClusterInfo(dto.to());
}
@DeleteMapping("/info")
public Object deleteClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.deleteClusterInfo(dto.getId());
}
@PutMapping("/info")
public Object updateClusterInfo(@RequestBody ClusterInfoDTO dto) {
return clusterService.updateClusterInfo(dto.to());
}
@GetMapping("/info/peek")
public Object peekClusterInfo() {
return clusterService.peekClusterInfo();
}
@GetMapping("/info/api/version")
public Object getBrokerApiVersionInfo() {
return clusterService.getBrokerApiVersionInfo();
}
}

View File

@@ -36,7 +36,7 @@ public class ConfigController {
this.configService = configService;
}
@GetMapping
@GetMapping("/console")
public Object getConfig() {
return ResponseData.create().data(configMap).success();
}

View File

@@ -0,0 +1,55 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.dto.QueryMessageDTO;
import com.xuxd.kafka.console.service.MessageService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:22:19
**/
@RestController
@RequestMapping("/message")
public class MessageController {
@Autowired
private MessageService messageService;
@PostMapping("/search/time")
public Object searchByTime(@RequestBody QueryMessageDTO dto) {
return messageService.searchByTime(dto.toQueryMessage());
}
@PostMapping("/search/offset")
public Object searchByOffset(@RequestBody QueryMessageDTO dto) {
return messageService.searchByOffset(dto.toQueryMessage());
}
@PostMapping("/search/detail")
public Object searchDetail(@RequestBody QueryMessageDTO dto) {
return messageService.searchDetail(dto.toQueryMessage());
}
@GetMapping("/deserializer/list")
public Object deserializerList() {
return messageService.deserializerList();
}
@PostMapping("/send")
public Object send(@RequestBody SendMessage message) {
return messageService.send(message);
}
@PostMapping("/resend")
public Object resend(@RequestBody SendMessage message) {
return messageService.resend(message);
}
}

View File

@@ -1,5 +1,8 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.TopicPartition;
import com.xuxd.kafka.console.beans.dto.BrokerThrottleDTO;
import com.xuxd.kafka.console.beans.dto.ProposedAssignmentDTO;
import com.xuxd.kafka.console.beans.dto.ReplicationDTO;
import com.xuxd.kafka.console.beans.dto.SyncDataDTO;
import com.xuxd.kafka.console.service.OperationService;
@@ -52,4 +55,29 @@ public class OperationController {
public Object electPreferredLeader(@RequestBody ReplicationDTO dto) {
return operationService.electPreferredLeader(dto.getTopic(), dto.getPartition());
}
@PostMapping("/broker/throttle")
public Object configThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.configThrottle(dto.getBrokerList(), dto.getUnit().toKb(dto.getThrottle()));
}
@DeleteMapping("/broker/throttle")
public Object removeThrottle(@RequestBody BrokerThrottleDTO dto) {
return operationService.removeThrottle(dto.getBrokerList());
}
@GetMapping("/replication/reassignments")
public Object currentReassignments() {
return operationService.currentReassignments();
}
@DeleteMapping("/replication/reassignments")
public Object cancelReassignment(@RequestBody TopicPartition partition) {
return operationService.cancelReassignment(new org.apache.kafka.common.TopicPartition(partition.getTopic(), partition.getPartition()));
}
@PostMapping("/replication/reassignments/proposed")
public Object proposedAssignments(@RequestBody ProposedAssignmentDTO dto) {
return operationService.proposedAssignments(dto.getTopic(), dto.getBrokers());
}
}

View File

@@ -1,7 +1,9 @@
package com.xuxd.kafka.console.controller;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.dto.AddPartitionDTO;
import com.xuxd.kafka.console.beans.dto.NewTopicDTO;
import com.xuxd.kafka.console.beans.dto.TopicThrottleDTO;
import com.xuxd.kafka.console.beans.enums.TopicType;
import com.xuxd.kafka.console.service.TopicService;
import java.util.ArrayList;
@@ -71,4 +73,24 @@ public class TopicController {
return topicService.addPartitions(topic, addNum, assignment);
}
@GetMapping("/replica/assignment")
public Object getCurrentReplicaAssignment(@RequestParam String topic) {
return topicService.getCurrentReplicaAssignment(topic);
}
@PostMapping("/replica/assignment")
public Object updateReplicaAssignment(@RequestBody ReplicaAssignment assignment) {
return topicService.updateReplicaAssignment(assignment);
}
@PostMapping("/replica/throttle")
public Object configThrottle(@RequestBody TopicThrottleDTO dto) {
return topicService.configThrottle(dto.getTopic(), dto.getPartitions(), dto.getOperation());
}
@GetMapping("/send/stats")
public Object sendStats(@RequestParam String topic) {
return topicService.sendStats(topic);
}
}

View File

@@ -0,0 +1,13 @@
package com.xuxd.kafka.console.dao;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 09:58:52
**/
public interface ClusterInfoMapper extends BaseMapper<ClusterInfoDO> {
}

View File

@@ -0,0 +1,87 @@
package com.xuxd.kafka.console.interceptor;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.MediaType;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-05 19:56:25
**/
@WebFilter(filterName = "context-set-filter", urlPatterns = {"/acl/*","/user/*","/cluster/*","/config/*","/consumer/*","/message/*","/topic/*","/op/*"})
@Slf4j
public class ContextSetFilter implements Filter {
private Set<String> excludes = new HashSet<>();
{
excludes.add("/cluster/info/peek");
excludes.add("/cluster/info");
excludes.add("/config/console");
}
@Autowired
private ClusterInfoMapper clusterInfoMapper;
@Override public void doFilter(ServletRequest req, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
try {
HttpServletRequest request = (HttpServletRequest) req;
String uri = request.getRequestURI();
if (!excludes.contains(uri)) {
String headerId = request.getHeader(Header.ID);
if (StringUtils.isBlank(headerId)) {
// ResponseData failed = ResponseData.create().failed("Cluster info is null.");
ResponseData failed = ResponseData.create().failed("没有集群信息,请先切换集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
} else {
ClusterInfoDO infoDO = clusterInfoMapper.selectById(Long.valueOf(headerId));
if (infoDO == null) {
ResponseData failed = ResponseData.create().failed("该集群找不到信息,请切换一个有效集群");
response.setContentType(MediaType.APPLICATION_JSON_UTF8_VALUE);
response.getWriter().println(ConvertUtil.toJsonString(failed));
return;
}
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
}
}
chain.doFilter(req, response);
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
interface Header {
String ID = "X-Cluster-Info-Id";
String NAME = "X-Cluster-Info-Name";
}
}
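Every filtered API call therefore has to carry the X-Cluster-Info-Id header (the excluded endpoints above are the exception); the Vue frontend adds it automatically. Purely for illustration, a plain Java caller might look like the sketch below, where the host, port, endpoint path and cluster id are assumptions rather than values taken from this diff:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Hypothetical caller: any endpoint behind ContextSetFilter needs the cluster id header.
public class ClusterHeaderExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:7766/cluster/info/api/version"); // placeholder host/port/path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("X-Cluster-Info-Id", "1"); // id of the cluster row in t_cluster_info
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name())) {
            while (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        }
    }
}
```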

View File

@@ -1,9 +1,22 @@
package com.xuxd.kafka.console.schedule;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
import com.xuxd.kafka.console.config.ContextConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.utils.ConvertUtil;
import com.xuxd.kafka.console.utils.SaslUtil;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@@ -21,25 +34,58 @@ public class KafkaAclSchedule {
private final KafkaConfigConsole configConsole;
public KafkaAclSchedule(KafkaUserMapper userMapper, KafkaConfigConsole configConsole) {
this.userMapper = userMapper;
this.configConsole = configConsole;
private final ClusterInfoMapper clusterInfoMapper;
public KafkaAclSchedule(ObjectProvider<KafkaUserMapper> userMapper,
ObjectProvider<KafkaConfigConsole> configConsole, ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.userMapper = userMapper.getIfAvailable();
this.configConsole = configConsole.getIfAvailable();
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
@Scheduled(cron = "${cron.clear-dirty-user}")
public void clearDirtyKafkaUser() {
log.info("Start clear dirty data for kafka user from database.");
Set<String> userSet = configConsole.getUserList(null);
userMapper.selectList(null).forEach(u -> {
if (!userSet.contains(u.getUsername())) {
log.info("clear user: {} from database.", u.getUsername());
try {
userMapper.deleteById(u.getId());
} catch (Exception e) {
log.error("userMapper.deleteById error, user: " + u, e);
try {
log.info("Start clear dirty data for kafka user from database.");
List<ClusterInfoDO> clusterInfoDOS = clusterInfoMapper.selectList(null);
List<Long> clusterInfoIds = new ArrayList<>();
for (ClusterInfoDO infoDO : clusterInfoDOS) {
ContextConfig config = new ContextConfig();
config.setClusterInfoId(infoDO.getId());
config.setClusterName(infoDO.getClusterName());
config.setBootstrapServer(infoDO.getAddress());
if (StringUtils.isNotBlank(infoDO.getProperties())) {
config.setProperties(ConvertUtil.toProperties(infoDO.getProperties()));
}
ContextConfigHolder.CONTEXT_CONFIG.set(config);
if (SaslUtil.isEnableSasl() && SaslUtil.isEnableScram()) {
log.info("Start clear cluster: {}", infoDO.getClusterName());
Set<String> userSet = configConsole.getUserList(null);
QueryWrapper<KafkaUserDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_info_id", infoDO.getId());
userMapper.selectList(queryWrapper).forEach(u -> {
if (!userSet.contains(u.getUsername())) {
log.info("clear user: {} from database.", u.getUsername());
try {
userMapper.deleteById(u.getId());
} catch (Exception e) {
log.error("userMapper.deleteById error, user: " + u, e);
}
}
});
clusterInfoIds.add(infoDO.getId());
}
}
});
log.info("Clear end.");
if (CollectionUtils.isNotEmpty(clusterInfoIds)) {
log.info("Clear the cluster id {}, which not found.", clusterInfoIds);
QueryWrapper<KafkaUserDO> wrapper = new QueryWrapper<>();
wrapper.notIn("cluster_info_id", clusterInfoIds);
userMapper.delete(wrapper);
}
log.info("Clear end.");
} finally {
ContextConfigHolder.CONTEXT_CONFIG.remove();
}
}
}

View File

@@ -0,0 +1,10 @@
package com.xuxd.kafka.console.service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:43
**/
public interface ClusterInfoService {
}

View File

@@ -1,6 +1,7 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
/**
* kafka-console-ui.
@@ -10,4 +11,16 @@ import com.xuxd.kafka.console.beans.ResponseData;
**/
public interface ClusterService {
ResponseData getClusterInfo();
ResponseData getClusterInfoList();
ResponseData addClusterInfo(ClusterInfoDO infoDO);
ResponseData deleteClusterInfo(Long id);
ResponseData updateClusterInfo(ClusterInfoDO infoDO);
ResponseData peekClusterInfo();
ResponseData getBrokerApiVersionInfo();
}

View File

@@ -38,4 +38,6 @@ public interface ConsumerService {
ResponseData getTopicSubscribedByGroups(String topic);
ResponseData getOffsetPartition(String groupId);
ResponseData<Set<String>> getSubscribedGroups(String topic);
}

View File

@@ -0,0 +1,26 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:43:26
**/
public interface MessageService {
ResponseData searchByTime(QueryMessage queryMessage);
ResponseData searchByOffset(QueryMessage queryMessage);
ResponseData searchDetail(QueryMessage queryMessage);
ResponseData deserializerList();
ResponseData send(SendMessage message);
ResponseData resend(SendMessage message);
}

View File

@@ -1,7 +1,9 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ResponseData;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.common.TopicPartition;
/**
* kafka-console-ui.
@@ -20,4 +22,14 @@ public interface OperationService {
ResponseData deleteAlignmentById(Long id);
ResponseData electPreferredLeader(String topic, int partition);
ResponseData configThrottle(List<Integer> brokerList, long size);
ResponseData removeThrottle(List<Integer> brokerList);
ResponseData currentReassignments();
ResponseData cancelReassignment(TopicPartition partition);
ResponseData proposedAssignments(String topic, List<Integer> brokerList);
}

View File

@@ -1,10 +1,10 @@
package com.xuxd.kafka.console.service;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.enums.TopicThrottleSwitch;
import com.xuxd.kafka.console.beans.enums.TopicType;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.NewTopic;
/**
@@ -26,4 +26,12 @@ public interface TopicService {
ResponseData createTopic(NewTopic topic);
ResponseData addPartitions(String topic, int addNum, List<List<Integer>> newAssignmentst);
ResponseData getCurrentReplicaAssignment(String topic);
ResponseData updateReplicaAssignment(ReplicaAssignment assignment);
ResponseData configThrottle(String topic, List<Integer> partitions, TopicThrottleSwitch throttleSwitch);
ResponseData sendStats(String topic);
}

View File

@@ -6,29 +6,37 @@ import com.xuxd.kafka.console.beans.CounterMap;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.KafkaUserDO;
import com.xuxd.kafka.console.beans.vo.KafkaUserDetailVO;
import com.xuxd.kafka.console.config.KafkaConfig;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import com.xuxd.kafka.console.dao.KafkaUserMapper;
import com.xuxd.kafka.console.service.AclService;
import com.xuxd.kafka.console.utils.SaslUtil;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.KafkaAclConsole;
import kafka.console.KafkaConfigConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialsDescription;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.auth.SecurityProtocol;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.SmartInitializingSingleton;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import scala.Tuple2;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableSasl;
import static com.xuxd.kafka.console.utils.SaslUtil.isEnableScram;
/**
* kafka-console-ui.
*
@@ -37,7 +45,7 @@ import scala.Tuple2;
**/
@Slf4j
@Service
public class AclServiceImpl implements AclService, SmartInitializingSingleton {
public class AclServiceImpl implements AclService {
@Autowired
private KafkaConfigConsole configConsole;
@@ -45,9 +53,6 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
@Autowired
private KafkaAclConsole aclConsole;
@Autowired
private KafkaConfig kafkaConfig;
private final KafkaUserMapper kafkaUserMapper;
public AclServiceImpl(ObjectProvider<KafkaUserMapper> kafkaUserMapper) {
@@ -64,15 +69,23 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
@Override public ResponseData addOrUpdateUser(String name, String pass) {
if (!isEnableSasl()) {
return ResponseData.create().failed("Only support SASL protocol.");
}
if (!isEnableScram()) {
return ResponseData.create().failed("Only support SASL_SCRAM.");
}
log.info("add or update user, username: {}, password: {}", name, pass);
if (!configConsole.addOrUpdateUser(name, pass)) {
Tuple2<Object, String> tuple2 = configConsole.addOrUpdateUser(name, pass);
if (!(boolean) tuple2._1()) {
log.error("add user to kafka failed.");
return ResponseData.create().failed("add user to kafka failed");
return ResponseData.create().failed("add user to kafka failed: " + tuple2._2());
}
// save user info to database.
KafkaUserDO userDO = new KafkaUserDO();
userDO.setUsername(name);
userDO.setPassword(pass);
userDO.setClusterInfoId(ContextConfigHolder.CONTEXT_CONFIG.get().getClusterInfoId());
try {
Map<String, Object> map = new HashMap<>();
map.put("username", name);
@@ -86,12 +99,24 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
@Override public ResponseData deleteUser(String name) {
if (!isEnableSasl()) {
return ResponseData.create().failed("Only support SASL protocol.");
}
if (!isEnableScram()) {
return ResponseData.create().failed("Only support SASL_SCRAM.");
}
log.info("delete user: {}", name);
Tuple2<Object, String> tuple2 = configConsole.deleteUser(name);
return (boolean) tuple2._1() ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData deleteUserAndAuth(String name) {
if (!isEnableSasl()) {
return ResponseData.create().failed("Only support SASL protocol.");
}
if (!isEnableScram()) {
return ResponseData.create().failed("Only support SASL_SCRAM.");
}
log.info("delete user and authority: {}", name);
AclEntry entry = new AclEntry();
entry.setPrincipal(name);
@@ -120,7 +145,8 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
Map<String, Object> resultMap = new HashMap<>();
entryMap.forEach((k, v) -> {
Map<String, List<AclEntry>> map = v.stream().collect(Collectors.groupingBy(e -> e.getResourceType() + "#" + e.getName()));
if (k.equals(kafkaConfig.getAdminUsername())) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (k.equals(username)) {
Map<String, Object> map2 = new HashMap<>(map);
Map<String, Object> userMap = new HashMap<>();
userMap.put("role", "admin");
@@ -133,7 +159,8 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
detailList.values().forEach(u -> {
if (!resultMap.containsKey(u.name()) && !u.credentialInfos().isEmpty()) {
if (!u.name().equals(kafkaConfig.getAdminUsername())) {
String username = SaslUtil.findUsername(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_JAAS_CONFIG));
if (!u.name().equals(username)) {
resultMap.put(u.name(), Collections.emptyMap());
} else {
Map<String, Object> map2 = new HashMap<>();
@@ -194,27 +221,29 @@ public class AclServiceImpl implements AclService, SmartInitializingSingleton {
}
Map<String, Object> param = new HashMap<>();
param.put("username", username);
param.put("cluster_info_id", ContextConfigHolder.CONTEXT_CONFIG.get().getClusterInfoId());
List<KafkaUserDO> dos = kafkaUserMapper.selectByMap(param);
if (dos.isEmpty()) {
vo.setConsistencyDescription("Password is null.");
} else {
vo.setPassword(dos.stream().findFirst().get().getPassword());
// check for consistency.
boolean consistent = configConsole.isPassConsistent(username, vo.getPassword());
vo.setConsistencyDescription(consistent ? "Consistent" : "Password is not consistent.");
// boolean consistent = configConsole.isPassConsistent(username, vo.getPassword());
// vo.setConsistencyDescription(consistent ? "Consistent" : "Password is not consistent.");
vo.setConsistencyDescription("Can not check password consistent.");
}
return ResponseData.create().data(vo).success();
}
@Override public void afterSingletonsInstantiated() {
if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
boolean done = configConsole.addOrUpdateUserWithZK(kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
if (!done) {
log.error("Create admin failed.");
throw new IllegalStateException();
}
}
}
// @Override public void afterSingletonsInstantiated() {
// if (kafkaConfig.isEnableAcl() && kafkaConfig.isAdminCreate()) {
// log.info("Start create admin user, username: {}, password: {}", kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
// boolean done = configConsole.addOrUpdateUserWithZK(kafkaConfig.getAdminUsername(), kafkaConfig.getAdminPassword());
// if (!done) {
// log.error("Create admin failed.");
// throw new IllegalStateException();
// }
// }
// }
}

View File

@@ -0,0 +1,14 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.service.ClusterInfoService;
import org.springframework.stereotype.Service;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-31 11:42:59
**/
@Service
public class ClusterInfoServiceImpl implements ClusterInfoService {
}

View File

@@ -1,9 +1,27 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.xuxd.kafka.console.beans.ClusterInfo;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.ClusterInfoDO;
import com.xuxd.kafka.console.beans.vo.BrokerApiVersionVO;
import com.xuxd.kafka.console.beans.vo.ClusterInfoVO;
import com.xuxd.kafka.console.dao.ClusterInfoMapper;
import com.xuxd.kafka.console.service.ClusterService;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.TreeSet;
import java.util.stream.Collectors;
import kafka.console.ClusterConsole;
import org.springframework.beans.factory.annotation.Autowired;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.NodeApiVersions;
import org.apache.kafka.common.Node;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Service;
/**
@@ -15,10 +33,78 @@ import org.springframework.stereotype.Service;
@Service
public class ClusterServiceImpl implements ClusterService {
@Autowired
private ClusterConsole clusterConsole;
private final ClusterConsole clusterConsole;
private final ClusterInfoMapper clusterInfoMapper;
public ClusterServiceImpl(ObjectProvider<ClusterConsole> clusterConsole,
ObjectProvider<ClusterInfoMapper> clusterInfoMapper) {
this.clusterConsole = clusterConsole.getIfAvailable();
this.clusterInfoMapper = clusterInfoMapper.getIfAvailable();
}
@Override public ResponseData getClusterInfo() {
return ResponseData.create().data(clusterConsole.clusterInfo()).success();
ClusterInfo clusterInfo = clusterConsole.clusterInfo();
clusterInfo.setNodes(new TreeSet<>(clusterInfo.getNodes()));
return ResponseData.create().data(clusterInfo).success();
}
@Override public ResponseData getClusterInfoList() {
return ResponseData.create().data(clusterInfoMapper.selectList(null)
.stream().map(ClusterInfoVO::from).collect(Collectors.toList())).success();
}
@Override public ResponseData addClusterInfo(ClusterInfoDO infoDO) {
QueryWrapper<ClusterInfoDO> queryWrapper = new QueryWrapper<>();
queryWrapper.eq("cluster_name", infoDO.getClusterName());
if (clusterInfoMapper.selectCount(queryWrapper) > 0) {
return ResponseData.create().failed("cluster name exist.");
}
clusterInfoMapper.insert(infoDO);
return ResponseData.create().success();
}
@Override public ResponseData deleteClusterInfo(Long id) {
clusterInfoMapper.deleteById(id);
return ResponseData.create().success();
}
@Override public ResponseData updateClusterInfo(ClusterInfoDO infoDO) {
clusterInfoMapper.updateById(infoDO);
return ResponseData.create().success();
}
@Override public ResponseData peekClusterInfo() {
List<ClusterInfoDO> dos = clusterInfoMapper.selectList(null);
if (CollectionUtils.isEmpty(dos)) {
return ResponseData.create().failed("No Cluster Info.");
}
return ResponseData.create().data(dos.stream().findFirst().map(ClusterInfoVO::from)).success();
}
@Override public ResponseData getBrokerApiVersionInfo() {
HashMap<Node, NodeApiVersions> map = clusterConsole.listBrokerVersionInfo();
List<BrokerApiVersionVO> list = new ArrayList<>(map.size());
map.forEach(((node, versions) -> {
BrokerApiVersionVO vo = new BrokerApiVersionVO();
vo.setBrokerId(node.id());
vo.setHost(node.host() + ":" + node.port());
vo.setSupportNums(versions.allSupportedApiVersions().size());
String versionInfo = versions.toString(true);
int from = 0;
int count = 0;
int index = -1;
while ((index = versionInfo.indexOf("UNSUPPORTED", from)) >= 0 && from < versionInfo.length()) {
count++;
from = index + 1;
}
vo.setUnSupportNums(count);
versionInfo = versionInfo.substring(1, versionInfo.length() - 2);
vo.setVersionInfo(Arrays.asList(StringUtils.split(versionInfo, ",")));
list.add(vo);
}));
Collections.sort(list, Comparator.comparingInt(BrokerApiVersionVO::getBrokerId));
return ResponseData.create().data(list).success();
}
}
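For reference, the indexOf loop above simply counts how many API entries appear as "UNSUPPORTED" in the formatted NodeApiVersions string. A minimal equivalent sketch (hypothetical class name; assumes commons-lang3, which this file already imports):

```java
import org.apache.commons.lang3.StringUtils;

public class UnsupportedApiCountSketch {

    // Counts API entries reported as UNSUPPORTED in the formatted
    // NodeApiVersions string, mirroring the indexOf loop above.
    static int countUnsupported(String versionInfo) {
        return StringUtils.countMatches(versionInfo, "UNSUPPORTED");
    }

    public static void main(String[] args) {
        String sample = "Produce(0): 0 to 9 [usable: 9], DescribeQuorum(55): UNSUPPORTED, "
                + "BrokerHeartbeat(63): UNSUPPORTED";
        System.out.println(countUnsupported(sample)); // 2
    }
}
```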

View File

@@ -15,6 +15,7 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import kafka.console.ConsumerConsole;
import kafka.console.TopicConsole;
@@ -48,6 +49,8 @@ public class ConsumerServiceImpl implements ConsumerService {
@Autowired
private TopicConsole topicConsole;
private ReentrantLock lock = new ReentrantLock();
@Override public ResponseData getConsumerGroupList(List<String> groupIds, Set<ConsumerGroupState> states) {
String simulateGroup = "inner_xxx_not_exit_group_###" + System.currentTimeMillis();
Set<String> groupList = new HashSet<>();
@@ -142,7 +145,7 @@ public class ConsumerServiceImpl implements ConsumerService {
@Override public ResponseData resetOffsetByDate(String groupId, String topic, String dateStr) {
long timestamp = -1L;
try {
StringBuilder sb = new StringBuilder(dateStr.replace(" ", "T")).append(".000");
StringBuilder sb = new StringBuilder(dateStr.replace(" ", "T")).append(".000+08:00");// pinned to UTC+08:00 (China Standard Time) for the calculation
timestamp = Utils.getDateTime(sb.toString());
} catch (ParseException e) {
throw new IllegalArgumentException(e);
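A minimal sketch of what the appended "+08:00" suffix does, using java.time directly rather than the project's Utils.getDateTime (assumed here to accept an ISO-8601 offset string); the class name is hypothetical:

```java
import java.time.OffsetDateTime;

public class ResetTimeSketch {
    public static void main(String[] args) {
        // A date entered on the page, e.g. "2022-01-01 12:00:00", becomes
        // "2022-01-01T12:00:00.000+08:00", so it is always interpreted as UTC+8.
        String dateStr = "2022-01-01 12:00:00";
        String iso = dateStr.replace(" ", "T") + ".000+08:00";
        long timestamp = OffsetDateTime.parse(iso).toInstant().toEpochMilli();
        System.out.println(timestamp); // 1641009600000
    }
}
```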
@@ -167,25 +170,7 @@ public class ConsumerServiceImpl implements ConsumerService {
}
@Override public ResponseData getTopicSubscribedByGroups(String topic) {
if (topicSubscribedInfo.isNeedRefresh(topic)) {
Set<String> groupIdList = consumerConsole.getConsumerGroupIdList(Collections.emptySet());
Map<String, Set<String>> cache = new HashMap<>();
Map<String, List<TopicPartition>> subscribeTopics = consumerConsole.listSubscribeTopics(groupIdList);
subscribeTopics.forEach((groupId, tl) -> {
tl.forEach(topicPartition -> {
String t = topicPartition.topic();
if (!cache.containsKey(t)) {
cache.put(t, new HashSet<>());
}
cache.get(t).add(groupId);
});
});
topicSubscribedInfo.refresh(cache);
}
Set<String> groups = topicSubscribedInfo.getSubscribedGroups(topic);
Set<String> groups = this.getSubscribedGroups(topic).getData();
Map<String, Object> res = new HashMap<>();
Collection<ConsumerConsole.TopicPartitionConsumeInfo> consumerDetail = consumerConsole.getConsumerDetail(groups);
@@ -212,6 +197,34 @@ public class ConsumerServiceImpl implements ConsumerService {
return ResponseData.create().data(Utils.abs(groupId.hashCode()) % size);
}
@Override public ResponseData<Set<String>> getSubscribedGroups(String topic) {
if (topicSubscribedInfo.isNeedRefresh(topic) && !lock.isLocked()) {
try {
lock.lock();
Set<String> groupIdList = consumerConsole.getConsumerGroupIdList(Collections.emptySet());
Map<String, Set<String>> cache = new HashMap<>();
Map<String, List<TopicPartition>> subscribeTopics = consumerConsole.listSubscribeTopics(groupIdList);
subscribeTopics.forEach((groupId, tl) -> {
tl.forEach(topicPartition -> {
String t = topicPartition.topic();
if (!cache.containsKey(t)) {
cache.put(t, new HashSet<>());
}
cache.get(t).add(groupId);
});
});
topicSubscribedInfo.refresh(cache);
} finally {
lock.unlock();
}
}
Set<String> groups = topicSubscribedInfo.getSubscribedGroups(topic);
return ResponseData.create(Set.class).data(groups).success();
}
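The guard above combines an isLocked() check with lock(), so only one caller refreshes the subscription cache while others skip. A minimal alternative sketch of the same intent using tryLock (not the project's code; class name hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class RefreshGuardSketch {

    private final ReentrantLock lock = new ReentrantLock();

    // Only one caller refreshes at a time; others skip instead of waiting.
    // tryLock() folds the isLocked() check and the acquisition into one step.
    void refreshIfIdle(Runnable refresh) {
        if (lock.tryLock()) {
            try {
                refresh.run();
            } finally {
                lock.unlock();
            }
        }
    }
}
```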
class TopicSubscribedInfo {
long lastTime = System.currentTimeMillis();

View File

@@ -0,0 +1,275 @@
package com.xuxd.kafka.console.service.impl;
import com.xuxd.kafka.console.beans.MessageFilter;
import com.xuxd.kafka.console.beans.QueryMessage;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.SendMessage;
import com.xuxd.kafka.console.beans.enums.FilterType;
import com.xuxd.kafka.console.beans.vo.ConsumerRecordVO;
import com.xuxd.kafka.console.beans.vo.MessageDetailVO;
import com.xuxd.kafka.console.service.ConsumerService;
import com.xuxd.kafka.console.service.MessageService;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.ConsumerConsole;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.BytesDeserializer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.DoubleDeserializer;
import org.apache.kafka.common.serialization.FloatDeserializer;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Service;
import scala.Tuple2;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:43:44
**/
@Slf4j
@Service
public class MessageServiceImpl implements MessageService, ApplicationContextAware {
@Autowired
private MessageConsole messageConsole;
@Autowired
private TopicConsole topicConsole;
@Autowired
private ConsumerConsole consumerConsole;
private ApplicationContext applicationContext;
private Map<String, Deserializer> deserializerDict = new HashMap<>();
{
deserializerDict.put("ByteArray", new ByteArrayDeserializer());
deserializerDict.put("Integer", new IntegerDeserializer());
deserializerDict.put("String", new StringDeserializer());
deserializerDict.put("Float", new FloatDeserializer());
deserializerDict.put("Double", new DoubleDeserializer());
deserializerDict.put("Byte", new BytesDeserializer());
deserializerDict.put("Long", new LongDeserializer());
}
public static String defaultDeserializer = "String";
@Override public ResponseData searchByTime(QueryMessage queryMessage) {
int maxNums = 5000;
Object searchContent = null;
String headerKey = null;
String headerValue = null;
MessageFilter filter = new MessageFilter();
switch (queryMessage.getFilter()) {
case BODY:
if (StringUtils.isBlank(queryMessage.getValue())) {
queryMessage.setFilter(FilterType.NONE);
} else {
if (StringUtils.isBlank(queryMessage.getValueDeserializer())) {
queryMessage.setValueDeserializer(defaultDeserializer);
}
switch (queryMessage.getValueDeserializer()) {
case "String":
searchContent = String.valueOf(queryMessage.getValue());
filter.setContainsValue(true);
break;
case "Integer":
searchContent = Integer.valueOf(queryMessage.getValue());
break;
case "Float":
searchContent = Float.valueOf(queryMessage.getValue());
break;
case "Double":
searchContent = Double.valueOf(queryMessage.getValue());
break;
case "Long":
searchContent = Long.valueOf(queryMessage.getValue());
break;
default:
throw new IllegalArgumentException("Message body type not support.");
}
}
break;
case HEADER:
headerKey = queryMessage.getHeaderKey();
if (StringUtils.isBlank(headerKey)) {
queryMessage.setFilter(FilterType.NONE);
} else {
if (StringUtils.isNotBlank(queryMessage.getHeaderValue())) {
headerValue = String.valueOf(queryMessage.getHeaderValue());
}
}
break;
default:
break;
}
FilterType filterType = queryMessage.getFilter();
Deserializer deserializer = deserializerDict.get(queryMessage.getValueDeserializer());
filter.setFilterType(filterType);
filter.setSearchContent(searchContent);
filter.setDeserializer(deserializer);
filter.setHeaderKey(headerKey);
filter.setHeaderValue(headerValue);
Set<TopicPartition> partitions = getPartitions(queryMessage);
long startTime = System.currentTimeMillis();
Tuple2<List<ConsumerRecord<byte[], byte[]>>, Object> tuple2 = messageConsole.searchBy(partitions, queryMessage.getStartTime(), queryMessage.getEndTime(), maxNums, filter);
List<ConsumerRecord<byte[], byte[]>> records = tuple2._1();
log.info("search message by time, cost time: {}", (System.currentTimeMillis() - startTime));
List<ConsumerRecordVO> vos = records.stream().filter(record -> record.timestamp() <= queryMessage.getEndTime())
.map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList());
Map<String, Object> res = new HashMap<>();
vos = vos.subList(0, Math.min(maxNums, vos.size()));
res.put("maxNum", maxNums);
res.put("realNum", vos.size());
res.put("searchNum", tuple2._2());
res.put("data", vos);
return ResponseData.create().data(res).success();
}
@Override public ResponseData searchByOffset(QueryMessage queryMessage) {
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = searchRecordByOffset(queryMessage);
return ResponseData.create().data(recordMap.values().stream().map(ConsumerRecordVO::fromConsumerRecord).collect(Collectors.toList())).success();
}
@Override public ResponseData searchDetail(QueryMessage queryMessage) {
if (queryMessage.getPartition() == -1) {
throw new IllegalArgumentException();
}
if (StringUtils.isBlank(queryMessage.getKeyDeserializer())) {
queryMessage.setKeyDeserializer(defaultDeserializer);
}
if (StringUtils.isBlank(queryMessage.getValueDeserializer())) {
queryMessage.setValueDeserializer(defaultDeserializer);
}
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = searchRecordByOffset(queryMessage);
ConsumerRecord<byte[], byte[]> record = recordMap.get(new TopicPartition(queryMessage.getTopic(), queryMessage.getPartition()));
if (record != null) {
MessageDetailVO vo = new MessageDetailVO();
vo.setTopic(record.topic());
vo.setPartition(record.partition());
vo.setOffset(record.offset());
vo.setTimestamp(record.timestamp());
vo.setTimestampType(record.timestampType().name());
try {
vo.setKey(deserializerDict.get(queryMessage.getKeyDeserializer()).deserialize(queryMessage.getTopic(), record.key()));
} catch (Exception e) {
vo.setKey("KeyDeserializer Error: " + e.getMessage());
}
try {
vo.setValue(deserializerDict.get(queryMessage.getValueDeserializer()).deserialize(queryMessage.getTopic(), record.value()));
} catch (Exception e) {
vo.setValue("ValueDeserializer Error: " + e.getMessage());
}
record.headers().forEach(header -> {
MessageDetailVO.HeaderVO headerVO = new MessageDetailVO.HeaderVO();
headerVO.setKey(header.key());
headerVO.setValue(new String(header.value()));
vo.getHeaders().add(headerVO);
});
// To keep the code tidy, avoid injecting another service-layer implementation directly
Set<String> groupIds = applicationContext.getBean(ConsumerService.class).getSubscribedGroups(record.topic()).getData();
Collection<ConsumerConsole.TopicPartitionConsumeInfo> consumerDetail = consumerConsole.getConsumerDetail(groupIds);
List<MessageDetailVO.ConsumerVO> consumerVOS = new LinkedList<>();
consumerDetail.forEach(consumerInfo -> {
if (consumerInfo.topicPartition().equals(new TopicPartition(record.topic(), record.partition()))) {
MessageDetailVO.ConsumerVO consumerVO = new MessageDetailVO.ConsumerVO();
consumerVO.setGroupId(consumerInfo.getGroupId());
consumerVO.setStatus(consumerInfo.getConsumerOffset() <= record.offset() ? "unconsume" : "consumed");
consumerVOS.add(consumerVO);
}
});
vo.setConsumers(consumerVOS);
return ResponseData.create().data(vo).success();
}
return ResponseData.create().failed("Not found message detail.");
}
@Override public ResponseData deserializerList() {
return ResponseData.create().data(deserializerDict.keySet()).success();
}
@Override public ResponseData send(SendMessage message) {
messageConsole.send(message.getTopic(), message.getPartition(), message.getKey(), message.getBody(), message.getNum());
return ResponseData.create().success();
}
@Override public ResponseData resend(SendMessage message) {
TopicPartition partition = new TopicPartition(message.getTopic(), message.getPartition());
Map<TopicPartition, Object> offsetTable = new HashMap<>(1, 1.0f);
offsetTable.put(partition, message.getOffset());
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = messageConsole.searchBy(offsetTable);
if (recordMap.isEmpty()) {
return ResponseData.create().failed("Get message failed.");
}
ConsumerRecord<byte[], byte[]> record = recordMap.get(partition);
ProducerRecord<byte[], byte[]> producerRecord = new ProducerRecord<>(record.topic(), record.partition(), record.key(), record.value(), record.headers());
Tuple2<Object, String> tuple2 = messageConsole.sendSync(producerRecord);
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().success("success: " + tuple2._2()) : ResponseData.create().failed(tuple2._2());
}
private Map<TopicPartition, ConsumerRecord<byte[], byte[]>> searchRecordByOffset(QueryMessage queryMessage) {
Set<TopicPartition> partitions = getPartitions(queryMessage);
Map<TopicPartition, Object> offsetTable = new HashMap<>();
partitions.forEach(tp -> {
offsetTable.put(tp, queryMessage.getOffset());
});
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> recordMap = messageConsole.searchBy(offsetTable);
return recordMap;
}
private Set<TopicPartition> getPartitions(QueryMessage queryMessage) {
Set<TopicPartition> partitions = new HashSet<>();
if (queryMessage.getPartition() != -1) {
partitions.add(new TopicPartition(queryMessage.getTopic(), queryMessage.getPartition()));
} else {
List<TopicDescription> list = topicConsole.getTopicList(Collections.singleton(queryMessage.getTopic()));
if (CollectionUtils.isEmpty(list)) {
throw new IllegalArgumentException("Can not find topic info.");
}
Set<TopicPartition> set = list.get(0).partitions().stream()
.map(tp -> new TopicPartition(queryMessage.getTopic(), tp.partition())).collect(Collectors.toSet());
partitions.addAll(set);
}
return partitions;
}
@Override public void setApplicationContext(ApplicationContext context) throws BeansException {
this.applicationContext = context;
}
}
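The deserializerDict above maps the name selected on the page ("String", "Long", ...) to a Kafka Deserializer that turns the raw record bytes into a typed value. A minimal standalone sketch of that call shape (hypothetical class name, Kafka clients on the classpath):

```java
import java.nio.ByteBuffer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DeserializerSketch {
    public static void main(String[] args) {
        // Same call shape as deserializerDict.get(name).deserialize(topic, bytes) above.
        Deserializer<String> asString = new StringDeserializer();
        System.out.println(asString.deserialize("demo-topic", "hello".getBytes())); // hello

        Deserializer<Long> asLong = new LongDeserializer();
        byte[] eightBytes = ByteBuffer.allocate(8).putLong(42L).array();
        System.out.println(asLong.deserialize("demo-topic", eightBytes)); // 42
    }
}
```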

View File

@@ -1,20 +1,28 @@
package com.xuxd.kafka.console.service.impl;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.google.common.collect.Lists;
import com.google.gson.Gson;
import com.google.gson.JsonObject;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.dos.MinOffsetAlignmentDO;
import com.xuxd.kafka.console.beans.vo.CurrentReassignmentVO;
import com.xuxd.kafka.console.beans.vo.OffsetAlignmentVO;
import com.xuxd.kafka.console.dao.MinOffsetAlignmentMapper;
import com.xuxd.kafka.console.service.OperationService;
import com.xuxd.kafka.console.utils.GsonUtil;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import kafka.console.OperationConsole;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.PartitionReassignment;
import org.apache.kafka.common.TopicPartition;
import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Autowired;
@@ -30,7 +38,7 @@ import scala.Tuple2;
@Service
public class OperationServiceImpl implements OperationService {
private Gson gson = new Gson();
private Gson gson = GsonUtil.INSTANCE.get();
@Autowired
private OperationConsole operationConsole;
@@ -122,4 +130,56 @@ public class OperationServiceImpl implements OperationService {
return (boolean) tuple2._1() ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData configThrottle(List<Integer> brokerList, long size) {
Tuple2<Object, String> tuple2 = operationConsole.modifyInterBrokerThrottle(new HashSet<>(brokerList), size);
return (boolean) tuple2._1() ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData removeThrottle(List<Integer> brokerList) {
Tuple2<Object, String> tuple2 = operationConsole.clearBrokerLevelThrottles(new HashSet<>(brokerList));
return (boolean) tuple2._1() ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData currentReassignments() {
Map<TopicPartition, PartitionReassignment> reassignmentMap = operationConsole.currentReassignments();
List<CurrentReassignmentVO> vos = reassignmentMap.entrySet().stream().map(entry -> {
TopicPartition partition = entry.getKey();
PartitionReassignment reassignment = entry.getValue();
return new CurrentReassignmentVO(partition.topic(),
partition.partition(), reassignment.replicas(), reassignment.addingReplicas(), reassignment.removingReplicas());
}).collect(Collectors.toList());
return ResponseData.create().data(vos).success();
}
@Override public ResponseData cancelReassignment(TopicPartition partition) {
Map<TopicPartition, Throwable> res = operationConsole.cancelPartitionReassignments(Collections.singleton(partition));
if (!res.isEmpty()) {
StringBuilder sb = new StringBuilder("Failed: ");
res.forEach((p, t) -> {
sb.append(p.toString()).append(": ").append(t.getMessage()).append(System.lineSeparator());
});
return ResponseData.create().failed(sb.toString());
}
return ResponseData.create().success();
}
@Override public ResponseData proposedAssignments(String topic, List<Integer> brokerList) {
Map<String, Object> params = new HashMap<>();
params.put("version", 1);
Map<String, String> topicMap = new HashMap<>(1, 1.0f);
topicMap.put("topic", topic);
params.put("topics", Lists.newArrayList(topicMap));
List<String> list = brokerList.stream().map(String::valueOf).collect(Collectors.toList());
Map<TopicPartition, List<Object>> assignments = operationConsole.proposedAssignments(gson.toJson(params), StringUtils.join(list, ","));
List<CurrentReassignmentVO> res = new ArrayList<>(assignments.size());
assignments.forEach((tp, replicas) -> {
CurrentReassignmentVO vo = new CurrentReassignmentVO(tp.topic(), tp.partition(),
replicas.stream().map(x -> (Integer) x).collect(Collectors.toList()), null, null);
res.add(vo);
});
return ResponseData.create().data(res).success();
}
}
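proposedAssignments() hands the topics-to-move JSON to the console layer. A minimal sketch of the JSON that the params map above serializes to (hypothetical class and topic name):

```java
import com.google.common.collect.Lists;
import com.google.gson.Gson;
import java.util.HashMap;
import java.util.Map;

public class ReassignJsonSketch {
    public static void main(String[] args) {
        // Equivalent to {"version":1,"topics":[{"topic":"demo-topic"}]} (key order may vary).
        Map<String, Object> params = new HashMap<>();
        params.put("version", 1);
        Map<String, String> topicMap = new HashMap<>();
        topicMap.put("topic", "demo-topic");
        params.put("topics", Lists.newArrayList(topicMap));
        System.out.println(new Gson().toJson(params));
    }
}
```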

View File

@@ -1,10 +1,15 @@
package com.xuxd.kafka.console.service.impl;
import com.google.gson.Gson;
import com.xuxd.kafka.console.beans.ReplicaAssignment;
import com.xuxd.kafka.console.beans.ResponseData;
import com.xuxd.kafka.console.beans.enums.TopicThrottleSwitch;
import com.xuxd.kafka.console.beans.enums.TopicType;
import com.xuxd.kafka.console.beans.vo.TopicDescriptionVO;
import com.xuxd.kafka.console.beans.vo.TopicPartitionVO;
import com.xuxd.kafka.console.service.TopicService;
import com.xuxd.kafka.console.utils.GsonUtil;
import java.util.Calendar;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
@@ -12,13 +17,16 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Collectors;
import kafka.console.MessageConsole;
import kafka.console.TopicConsole;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.TopicPartitionInfo;
import org.springframework.beans.factory.annotation.Autowired;
@@ -38,6 +46,11 @@ public class TopicServiceImpl implements TopicService {
@Autowired
private TopicConsole topicConsole;
@Autowired
private MessageConsole messageConsole;
private Gson gson = GsonUtil.INSTANCE.get();
@Override public ResponseData getTopicNameList(boolean internal) {
return ResponseData.create().data(topicConsole.getTopicNameList(internal)).success();
}
@@ -98,6 +111,10 @@ public class TopicServiceImpl implements TopicService {
mapTuple2._2().forEach((k, v) -> {
endTable.put(k.partition(), (Long) v);
});
// compute the valid time range.
Map<TopicPartition, Object> beginOffsetTable = new HashMap<>();
Map<TopicPartition, Object> endOffsetTable = new HashMap<>();
Map<Integer, TopicPartition> partitionCache = new HashMap<>();
for (TopicPartitionVO partitionVO : voList) {
long begin = beginTable.get(partitionVO.getPartition());
@@ -105,7 +122,29 @@ public class TopicServiceImpl implements TopicService {
partitionVO.setBeginOffset(begin);
partitionVO.setEndOffset(end);
partitionVO.setDiff(end - begin);
if (begin != end) {
TopicPartition partition = new TopicPartition(topic, partitionVO.getPartition());
partitionCache.put(partitionVO.getPartition(), partition);
beginOffsetTable.put(partition, begin);
endOffsetTable.put(partition, end - 1); // the lookup offset must be < the end offset, so use end - 1
} else {
partitionVO.setBeginTime(-1L);
partitionVO.setEndTime(-1L);
}
}
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> beginRecordMap = messageConsole.searchBy(beginOffsetTable);
Map<TopicPartition, ConsumerRecord<byte[], byte[]>> endRecordMap = messageConsole.searchBy(endOffsetTable);
for (TopicPartitionVO partitionVO : voList) {
if (partitionVO.getBeginTime() != -1L) {
TopicPartition partition = partitionCache.get(partitionVO.getPartition());
partitionVO.setBeginTime(beginRecordMap.containsKey(partition) ? beginRecordMap.get(partition).timestamp() : -1L);
partitionVO.setEndTime(endRecordMap.containsKey(partition) ? endRecordMap.get(partition).timestamp() : -1L);
}
}
return ResponseData.create().data(voList).success();
}
@@ -130,4 +169,92 @@ public class TopicServiceImpl implements TopicService {
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData getCurrentReplicaAssignment(String topic) {
Tuple2<Object, String> tuple2 = topicConsole.getCurrentReplicaAssignmentJson(topic);
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().data(gson.fromJson(tuple2._2(), ReplicaAssignment.class)).success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData updateReplicaAssignment(ReplicaAssignment assignment) {
Tuple2<Object, String> tuple2 = topicConsole.updateReplicas(gson.toJson(assignment), assignment.getInterBrokerThrottle());
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override
public ResponseData configThrottle(String topic, List<Integer> partitions, TopicThrottleSwitch throttleSwitch) {
Tuple2<Object, String> tuple2 = null;
switch (throttleSwitch) {
case ON:
tuple2 = topicConsole.configThrottle(topic, partitions);
break;
case OFF:
tuple2 = topicConsole.clearThrottle(topic);
break;
default:
throw new IllegalArgumentException("switch is unknown.");
}
boolean success = (boolean) tuple2._1();
return success ? ResponseData.create().success() : ResponseData.create().failed(tuple2._2());
}
@Override public ResponseData sendStats(String topic) {
Calendar calendar = Calendar.getInstance();
long current = calendar.getTimeInMillis();
calendar.set(Calendar.HOUR_OF_DAY, 0);
calendar.set(Calendar.MINUTE, 0);
calendar.set(Calendar.SECOND, 0);
calendar.set(Calendar.MILLISECOND, 0);
long today = calendar.getTimeInMillis();
calendar.add(Calendar.DAY_OF_MONTH, -1);
long yesterday = calendar.getTimeInMillis();
Map<TopicPartition, Long> currentOffset = topicConsole.getOffsetForTimestamp(topic, current);
Map<TopicPartition, Long> todayOffset = topicConsole.getOffsetForTimestamp(topic, today);
Map<TopicPartition, Long> yesterdayOffset = topicConsole.getOffsetForTimestamp(topic, yesterday);
Map<String, Object> res = new HashMap<>();
// yesterday's message count = offset at today 00:00 minus offset at yesterday 00:00
AtomicLong yesterdayTotal = new AtomicLong(0L), todayTotal = new AtomicLong(0L);
Map<Integer, Long> yesterdayDetail = new HashMap<>(), todayDetail = new HashMap<>();
todayOffset.forEach(((partition, aLong) -> {
Long last = yesterdayOffset.get(partition);
long diff = last == null ? aLong : aLong - last;
yesterdayDetail.put(partition.partition(), diff);
yesterdayTotal.addAndGet(diff);
}));
currentOffset.forEach(((partition, aLong) -> {
Long last = todayOffset.get(partition);
long diff = last == null ? aLong : aLong - last;
todayDetail.put(partition.partition(), diff);
todayTotal.addAndGet(diff);
}));
Map<String, Object> yes = new HashMap<>(), to = new HashMap<>();
yes.put("detail", convertList(yesterdayDetail));
yes.put("total", yesterdayTotal.get());
to.put("detail", convertList(todayDetail));
to.put("total", todayTotal.get());
res.put("yesterday", yes);
res.put("today", to);
// today's message count = the current offset minus the offset at today 00:00
return ResponseData.create().data(res).success();
}
private List<Map<String, Object>> convertList(Map<Integer, Long> source) {
List<Map<String, Object>> collect = source.entrySet().stream().map(entry -> {
Map<String, Object> map = new HashMap<>(3, 1.0f);
map.put("partition", entry.getKey());
map.put("num", entry.getValue());
return map;
}).collect(Collectors.toList());
return collect;
}
}
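sendStats() derives the counts purely from end offsets resolved at three timestamps. A small worked sketch with concrete numbers (not the project's API; class name hypothetical):

```java
public class SendStatsSketch {
    public static void main(String[] args) {
        // End offsets of one partition resolved at three points in time.
        long atYesterdayMidnight = 100L; // offset at yesterday 00:00
        long atTodayMidnight = 160L;     // offset at today 00:00
        long now = 175L;                 // offset right now

        long yesterdayCount = atTodayMidnight - atYesterdayMidnight; // 60 messages yesterday
        long todayCount = now - atTodayMidnight;                     // 15 messages so far today
        System.out.println(yesterdayCount + " / " + todayCount);
    }
}
```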

View File

@@ -1,9 +1,15 @@
package com.xuxd.kafka.console.utils;
import com.google.common.base.Preconditions;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.ClassUtils;
@@ -35,6 +41,52 @@ public class ConvertUtil {
}
});
}
Iterator<Map.Entry<String, Object>> iterator = res.entrySet().iterator();
while (iterator.hasNext()) {
if (iterator.next().getValue() == null) {
iterator.remove();
}
}
return res;
}
public static String toJsonString(Object src) {
return GsonUtil.INSTANCE.get().toJson(src);
}
public static Properties toProperties(String jsonStr) {
return GsonUtil.INSTANCE.get().fromJson(jsonStr, Properties.class);
}
public static String jsonStr2PropertiesStr(String jsonStr) {
StringBuilder sb = new StringBuilder();
Map<String, Object> map = GsonUtil.INSTANCE.get().fromJson(jsonStr, Map.class);
map.keySet().forEach(k -> {
sb.append(k).append("=").append(map.get(k).toString()).append(System.lineSeparator());
});
return sb.toString();
}
public static List<String> jsonStr2List(String jsonStr) {
List<String> res = new LinkedList<>();
Map<String, Object> map = GsonUtil.INSTANCE.get().fromJson(jsonStr, Map.class);
map.forEach((k, v) -> {
res.add(k + "=" + v);
});
return res;
}
public static String propertiesStr2JsonStr(String propertiesStr) {
String res = "{}";
try (ByteArrayInputStream baos = new ByteArrayInputStream(propertiesStr.getBytes())) {
Properties properties = new Properties();
properties.load(baos);
res = toJsonString(properties);
} catch (IOException e) {
log.error("propertiesStr2JsonStr error.", e);
}
return res;
}
}
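A minimal usage sketch of the new properties/JSON helpers above (hypothetical class name; assumes it runs outside the utils package, and that Gson can populate java.util.Properties from a flat JSON object as these helpers rely on):

```java
import com.xuxd.kafka.console.utils.ConvertUtil;
import java.util.Properties;

public class ConvertUtilSketch {
    public static void main(String[] args) {
        // Properties text -> JSON string -> Properties object.
        String json = ConvertUtil.propertiesStr2JsonStr(
                "request.timeout.ms=5000\nsecurity.protocol=SASL_PLAINTEXT");
        Properties props = ConvertUtil.toProperties(json);
        System.out.println(props.getProperty("security.protocol")); // SASL_PLAINTEXT
    }
}
```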

View File

@@ -0,0 +1,19 @@
package com.xuxd.kafka.console.utils;
import com.google.gson.Gson;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-11-19 17:01:01
**/
public enum GsonUtil {
INSTANCE;
private Gson gson = new Gson();
public Gson get() {
return gson;
}
}

View File

@@ -0,0 +1,61 @@
package com.xuxd.kafka.console.utils;
import com.xuxd.kafka.console.config.ContextConfigHolder;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.auth.SecurityProtocol;
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2022-01-06 11:07:41
**/
public class SaslUtil {
public static final Pattern JAAS_PATTERN = Pattern.compile("^.*(username=\"(.*)\"[ \t]+).*$");
private SaslUtil() {
}
public static String findUsername(String saslJaasConfig) {
Matcher matcher = JAAS_PATTERN.matcher(saslJaasConfig);
return matcher.find() ? matcher.group(2) : "";
}
public static boolean isEnableSasl() {
Properties properties = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties();
if (!properties.containsKey(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG)) {
return false;
}
String s = properties.getProperty(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG);
SecurityProtocol protocol = SecurityProtocol.valueOf(s);
switch (protocol) {
case SASL_SSL:
case SASL_PLAINTEXT:
return true;
default:
return false;
}
}
public static boolean isEnableScram() {
Properties properties = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties();
if (!properties.containsKey(SaslConfigs.SASL_MECHANISM)) {
return false;
}
String s = properties.getProperty(SaslConfigs.SASL_MECHANISM);
ScramMechanism mechanism = ScramMechanism.fromMechanismName(s);
switch (mechanism) {
case UNKNOWN:
return false;
default:
return true;
}
}
}
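A minimal usage sketch of findUsername (hypothetical class name; isEnableSasl/isEnableScram are omitted because they read the thread-local ContextConfigHolder):

```java
import com.xuxd.kafka.console.utils.SaslUtil;

public class SaslUtilSketch {
    public static void main(String[] args) {
        // The regex captures the username value whose closing quote is followed by
        // whitespace, so a typical SCRAM JAAS line yields "admin".
        String jaas = "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";";
        System.out.println(SaslUtil.findUsername(jaas)); // admin
    }
}
```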

View File

@@ -6,23 +6,12 @@ server:
kafka:
config:
# Kafka broker addresses, comma-separated
bootstrap-server: 'localhost:9092'
request-timeout-ms: 5000
# Whether the brokers have ACL enabled; if not, all settings below can be ignored and only the cluster address above is needed
enable-acl: false
# Only two security protocols are supported, SASL_PLAINTEXT and PLAINTEXT; set SASL_PLAINTEXT when ACL is enabled, otherwise ignore this setting
security-protocol: SASL_PLAINTEXT
sasl-mechanism: SCRAM-SHA-256
# Super admin username, already configured as a super user on the broker
admin-username: admin
# Super admin password
admin-password: admin
# Automatically create the configured super admin user at startup
admin-create: false
# ZooKeeper address used by the brokers; required when the super admin user is created automatically at startup, otherwise ignored
zookeeper-addr: 'localhost:2181'
sasl-jaas-config: org.apache.kafka.common.security.scram.ScramLoginModule required username="${kafka.config.admin-username}" password="${kafka.config.admin-password}";
# If no "default" cluster exists yet, this one is loaded at startup (when a cluster address is configured here); if it already exists, it is not loaded
# Kafka broker addresses, comma-separated; not required here, cluster info can also be added on the web page after startup
bootstrap-server:
# Other cluster client properties
properties:
# request.timeout.ms: 5000
spring:
application:
@@ -46,6 +35,7 @@ spring:
logging:
home: ./
# For SCRAM-based ACL the created usernames/passwords are recorded here; a scheduled scan removes entries from the DB for users that no longer exist in the cluster
cron:
# clear-dirty-user: 0 * * * * ?
clear-dirty-user: 0 0 1 * * ?

View File

@@ -1,23 +1,37 @@
-- DROP TABLE IF EXISTS T_KAKFA_USER;
-- Users for SASL_SCRAM when Kafka ACL is enabled
CREATE TABLE IF NOT EXISTS T_KAFKA_USER
(
ID IDENTITY NOT NULL COMMENT 'primary key ID',
USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT '',
PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'age',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
ID IDENTITY NOT NULL COMMENT 'primary key ID',
USERNAME VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'username',
PASSWORD VARCHAR(50) NOT NULL DEFAULT '' COMMENT 'password',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
CLUSTER_INFO_ID BIGINT NOT NULL COMMENT 'cluster ID from the cluster info table',
PRIMARY KEY (ID),
UNIQUE (USERNAME)
);
-- Offset alignment info used by the message sync solution
CREATE TABLE IF NOT EXISTS T_MIN_OFFSET_ALIGNMENT
(
ID IDENTITY NOT NULL COMMENT 'primary key ID',
GROUP_ID VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'groupId',
TOPIC VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'topic',
THAT_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for that kafka cluster',
THIS_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for this kafka cluster',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
ID IDENTITY NOT NULL COMMENT 'primary key ID',
GROUP_ID VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'groupId',
TOPIC VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'topic',
THAT_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for that kafka cluster',
THIS_OFFSET VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'min offset for this kafka cluster',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
PRIMARY KEY (ID),
UNIQUE (GROUP_ID, TOPIC)
);
-- Multi-cluster management: configuration for each cluster
CREATE TABLE IF NOT EXISTS T_CLUSTER_INFO
(
ID IDENTITY NOT NULL COMMENT 'primary key ID',
CLUSTER_NAME VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'cluster name',
ADDRESS VARCHAR(256) NOT NULL DEFAULT '' COMMENT 'cluster address',
PROPERTIES VARCHAR(512) NOT NULL DEFAULT '' COMMENT 'other client properties for the cluster',
UPDATE_TIME TIMESTAMP NOT NULL DEFAULT NOW() COMMENT 'update time',
PRIMARY KEY (ID),
UNIQUE (CLUSTER_NAME)
);

View File

@@ -0,0 +1,330 @@
package kafka.console
import com.xuxd.kafka.console.config.ContextConfigHolder
import kafka.utils.Implicits.MapExtensionMethods
import kafka.utils.Logging
import org.apache.kafka.clients._
import org.apache.kafka.clients.admin.AdminClientConfig
import org.apache.kafka.clients.consumer.internals.{ConsumerNetworkClient, RequestFuture}
import org.apache.kafka.common.Node
import org.apache.kafka.common.config.ConfigDef.ValidString.in
import org.apache.kafka.common.config.ConfigDef.{Importance, Type}
import org.apache.kafka.common.config.{AbstractConfig, ConfigDef}
import org.apache.kafka.common.errors.AuthenticationException
import org.apache.kafka.common.internals.ClusterResourceListeners
import org.apache.kafka.common.message.ApiVersionsResponseData.ApiVersionCollection
import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.network.Selector
import org.apache.kafka.common.protocol.Errors
import org.apache.kafka.common.requests._
import org.apache.kafka.common.utils.{KafkaThread, LogContext, Time}
import java.io.IOException
import java.util.Properties
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{ConcurrentLinkedQueue, TimeUnit}
import scala.jdk.CollectionConverters.{ListHasAsScala, MapHasAsJava, PropertiesHasAsScala, SetHasAsScala}
import scala.util.{Failure, Success, Try}
/**
* kafka-console-ui.
*
* Copy from {@link kafka.admin.BrokerApiVersionsCommand}.
*
* @author xuxd
* @date 2022-01-22 15:15:57
* */
object BrokerApiVersion extends Logging {
def listAllBrokerApiVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
val res = new java.util.HashMap[Node, NodeApiVersions]()
val adminClient = createAdminClient()
try {
adminClient.awaitBrokers()
val brokerMap = adminClient.listAllBrokerVersionInfo()
brokerMap.forKeyValue {
(broker, versionInfoOrError) =>
versionInfoOrError match {
case Success(v) => {
res.put(broker, v)
}
case Failure(v) => logger.error(s"${broker} -> ERROR: ${v}\n")
}
}
} finally {
adminClient.close()
}
res
}
private def createAdminClient(): AdminClient = {
val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
AdminClient.create(props)
}
// org.apache.kafka.clients.admin.AdminClient doesn't currently expose a way to retrieve the supported api versions.
// We inline the bits we need from kafka.admin.AdminClient so that we can delete it.
private class AdminClient(val time: Time,
val client: ConsumerNetworkClient,
val bootstrapBrokers: List[Node]) extends Logging {
@volatile var running = true
val pendingFutures = new ConcurrentLinkedQueue[RequestFuture[ClientResponse]]()
val networkThread = new KafkaThread("admin-client-network-thread", () => {
try {
while (running)
client.poll(time.timer(Long.MaxValue))
} catch {
case t: Throwable =>
error("admin-client-network-thread exited", t)
} finally {
pendingFutures.forEach { future =>
try {
future.raise(Errors.UNKNOWN_SERVER_ERROR)
} catch {
case _: IllegalStateException => // It is OK if the future has been completed
}
}
pendingFutures.clear()
}
}, true)
networkThread.start()
private def send(target: Node,
request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
val future = client.send(target, request)
pendingFutures.add(future)
future.awaitDone(Long.MaxValue, TimeUnit.MILLISECONDS)
pendingFutures.remove(future)
if (future.succeeded())
future.value().responseBody()
else
throw future.exception()
}
private def sendAnyNode(request: AbstractRequest.Builder[_ <: AbstractRequest]): AbstractResponse = {
bootstrapBrokers.foreach { broker =>
try {
return send(broker, request)
} catch {
case e: AuthenticationException =>
throw e
case e: Exception =>
debug(s"Request ${request.apiKey()} failed against node $broker", e)
}
}
throw new RuntimeException(s"Request ${request.apiKey()} failed on brokers $bootstrapBrokers")
}
private def getApiVersions(node: Node): ApiVersionCollection = {
val response = send(node, new ApiVersionsRequest.Builder()).asInstanceOf[ApiVersionsResponse]
Errors.forCode(response.data.errorCode).maybeThrow()
response.data.apiKeys
}
/**
* Wait until there is a non-empty list of brokers in the cluster.
*/
def awaitBrokers(): Unit = {
var nodes = List[Node]()
val start = System.currentTimeMillis()
val maxWait = 30 * 1000
do {
nodes = findAllBrokers()
if (nodes.isEmpty) {
Thread.sleep(50)
}
}
while (nodes.isEmpty && (System.currentTimeMillis() - start < maxWait))
}
private def findAllBrokers(): List[Node] = {
val request = MetadataRequest.Builder.allTopics()
val response = sendAnyNode(request).asInstanceOf[MetadataResponse]
val errors = response.errors
if (!errors.isEmpty) {
logger.info(s"Metadata request contained errors: $errors")
}
// In client 3.x this method is buildCluster() instead of cluster()
// response.buildCluster.nodes.asScala.toList
response.cluster().nodes.asScala.toList
}
def listAllBrokerVersionInfo(): Map[Node, Try[NodeApiVersions]] =
findAllBrokers().map { broker =>
broker -> Try[NodeApiVersions](new NodeApiVersions(getApiVersions(broker)))
}.toMap
def close(): Unit = {
running = false
try {
client.close()
} catch {
case e: IOException =>
error("Exception closing nioSelector:", e)
}
}
}
private object AdminClient {
val DefaultConnectionMaxIdleMs = 9 * 60 * 1000
val DefaultRequestTimeoutMs = 5000
val DefaultSocketConnectionSetupMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG
val DefaultSocketConnectionSetupMaxMs = CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG
val DefaultMaxInFlightRequestsPerConnection = 100
val DefaultReconnectBackoffMs = 50
val DefaultReconnectBackoffMax = 50
val DefaultSendBufferBytes = 128 * 1024
val DefaultReceiveBufferBytes = 32 * 1024
val DefaultRetryBackoffMs = 100
val AdminClientIdSequence = new AtomicInteger(1)
val AdminConfigDef = {
val config = new ConfigDef()
.define(
CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
Type.LIST,
Importance.HIGH,
CommonClientConfigs.BOOTSTRAP_SERVERS_DOC)
.define(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG,
Type.STRING,
ClientDnsLookup.USE_ALL_DNS_IPS.toString,
in(ClientDnsLookup.USE_ALL_DNS_IPS.toString,
ClientDnsLookup.RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY.toString),
Importance.MEDIUM,
CommonClientConfigs.CLIENT_DNS_LOOKUP_DOC)
.define(
CommonClientConfigs.SECURITY_PROTOCOL_CONFIG,
ConfigDef.Type.STRING,
CommonClientConfigs.DEFAULT_SECURITY_PROTOCOL,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SECURITY_PROTOCOL_DOC)
.define(
CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG,
ConfigDef.Type.INT,
DefaultRequestTimeoutMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.REQUEST_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_DOC)
.define(
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG,
ConfigDef.Type.LONG,
CommonClientConfigs.DEFAULT_SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_DOC)
.define(
CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG,
ConfigDef.Type.LONG,
DefaultRetryBackoffMs,
ConfigDef.Importance.MEDIUM,
CommonClientConfigs.RETRY_BACKOFF_MS_DOC)
.withClientSslSupport()
.withClientSaslSupport()
config
}
class AdminConfig(originals: Map[_, _]) extends AbstractConfig(AdminConfigDef, originals.asJava, false)
def create(props: Properties): AdminClient = {
val properties = new Properties()
val names = props.stringPropertyNames()
for (name <- names.asScala.toSet) {
properties.put(name, props.get(name).toString())
}
create(properties.asScala.toMap)
}
def create(props: Map[String, _]): AdminClient = create(new AdminConfig(props))
def create(config: AdminConfig): AdminClient = {
val clientId = "admin-" + AdminClientIdSequence.getAndIncrement()
val logContext = new LogContext(s"[LegacyAdminClient clientId=$clientId] ")
val time = Time.SYSTEM
val metrics = new Metrics(time)
val metadata = new Metadata(100L, 60 * 60 * 1000L, logContext,
new ClusterResourceListeners)
val channelBuilder = ClientUtils.createChannelBuilder(config, time, logContext)
val requestTimeoutMs = config.getInt(CommonClientConfigs.REQUEST_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MS_CONFIG)
val connectionSetupTimeoutMaxMs = config.getLong(CommonClientConfigs.SOCKET_CONNECTION_SETUP_TIMEOUT_MAX_MS_CONFIG)
val retryBackoffMs = config.getLong(CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG)
val brokerUrls = config.getList(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG)
val clientDnsLookup = config.getString(CommonClientConfigs.CLIENT_DNS_LOOKUP_CONFIG)
val brokerAddresses = ClientUtils.parseAndValidateAddresses(brokerUrls, clientDnsLookup)
metadata.bootstrap(brokerAddresses)
val selector = new Selector(
DefaultConnectionMaxIdleMs,
metrics,
time,
"admin",
channelBuilder,
logContext)
// The compatibility issue here differs between client versions
// For 3.x use the (commented-out) constructor below
// val networkClient = new NetworkClient(
// selector,
// metadata,
// clientId,
// DefaultMaxInFlightRequestsPerConnection,
// DefaultReconnectBackoffMs,
// DefaultReconnectBackoffMax,
// DefaultSendBufferBytes,
// DefaultReceiveBufferBytes,
// requestTimeoutMs,
// connectionSetupTimeoutMs,
// connectionSetupTimeoutMaxMs,
// time,
// true,
// new ApiVersions,
// logContext)
val networkClient = new NetworkClient(
selector,
metadata,
clientId,
DefaultMaxInFlightRequestsPerConnection,
DefaultReconnectBackoffMs,
DefaultReconnectBackoffMax,
DefaultSendBufferBytes,
DefaultReceiveBufferBytes,
requestTimeoutMs,
connectionSetupTimeoutMs,
connectionSetupTimeoutMaxMs,
ClientDnsLookup.USE_ALL_DNS_IPS,
time,
true,
new ApiVersions,
logContext)
val highLevelClient = new ConsumerNetworkClient(
logContext,
networkClient,
metadata,
time,
retryBackoffMs,
requestTimeoutMs,
Integer.MAX_VALUE)
new AdminClient(
time,
highLevelClient,
metadata.fetch.nodes.asScala.toList)
}
}
}

View File

@@ -1,12 +1,13 @@
package kafka.console
import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.kafka.clients.NodeApiVersions
import org.apache.kafka.clients.admin.DescribeClusterResult
import org.apache.kafka.common.Node
import java.util.Collections
import java.util.concurrent.TimeUnit
import com.xuxd.kafka.console.beans.{BrokerNode, ClusterInfo}
import com.xuxd.kafka.console.config.KafkaConfig
import org.apache.kafka.clients.admin.DescribeClusterResult
import scala.jdk.CollectionConverters.{CollectionHasAsScala, SetHasAsJava, SetHasAsScala}
/**
@@ -19,6 +20,7 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
def clusterInfo(): ClusterInfo = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val clusterResult: DescribeClusterResult = admin.describeCluster()
val clusterInfo = new ClusterInfo
clusterInfo.setClusterId(clusterResult.clusterId().get(timeoutMs, TimeUnit.MILLISECONDS))
@@ -41,4 +43,8 @@ class ClusterConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConf
new ClusterInfo
}).asInstanceOf[ClusterInfo]
}
def listBrokerVersionInfo(): java.util.HashMap[Node, NodeApiVersions] = {
BrokerApiVersion.listAllBrokerApiVersionInfo()
}
}

View File

@@ -3,8 +3,7 @@ package kafka.console
import java.util
import java.util.Collections
import java.util.concurrent.TimeUnit
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.console.ConfigConsole.BrokerLoggerConfigType
import kafka.server.ConfigType
import org.apache.kafka.clients.admin.{AlterConfigOp, Config, ConfigEntry, DescribeConfigsOptions}
@@ -69,6 +68,7 @@ class ConfigConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfi
val configResource = new ConfigResource(getResourceTypeAndValidate(entityType, entityName), entityName)
val config = Map(configResource -> Collections.singletonList(new AlterConfigOp(entry, opType)).asInstanceOf[util.Collection[AlterConfigOp]])
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.incrementalAlterConfigs(config.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
}, e => {

View File

@@ -1,6 +1,6 @@
package kafka.console
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo
import org.apache.kafka.clients.admin._
import org.apache.kafka.clients.consumer.{ConsumerConfig, OffsetAndMetadata, OffsetResetStrategy}
@@ -75,6 +75,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
val endOffsets = commitOffsets.keySet.map { topicPartition =>
topicPartition -> OffsetSpec.latest
}.toMap
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.listOffsets(endOffsets.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
}, e => {
log.error("listOffsets error.", e)
@@ -166,6 +167,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def resetPartitionToTargetOffset(groupId: String, partition: TopicPartition, offset: Long): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.alterConsumerGroupOffsets(groupId, Map(partition -> new OffsetAndMetadata(offset)).asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
}, e => {
@@ -178,7 +180,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
timestamp: java.lang.Long): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val logOffsets = getLogTimestampOffsets(admin, groupId, topicPartitions.asScala, timestamp)
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.alterConsumerGroupOffsets(groupId, logOffsets.asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
}, e => {
@@ -256,6 +258,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
val timestampOffsets = topicPartitions.map { topicPartition =>
topicPartition -> OffsetSpec.forTimestamp(timestamp)
}.toMap
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val offsets = admin.listOffsets(
timestampOffsets.asJava,
new ListOffsetsOptions().timeoutMs(timeoutMs)
@@ -280,6 +283,7 @@ class ConsumerConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
val endOffsets = topicPartitions.map { topicPartition =>
topicPartition -> OffsetSpec.latest
}.toMap
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val offsets = admin.listOffsets(
endOffsets.asJava,
new ListOffsetsOptions().timeoutMs(timeoutMs)

View File

@@ -3,9 +3,8 @@ package kafka.console
import java.util
import java.util.concurrent.TimeUnit
import java.util.{Collections, List}
import com.xuxd.kafka.console.beans.AclEntry
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.common.acl._
import org.apache.kafka.common.resource.{ResourcePattern, ResourcePatternFilter, ResourceType}
@@ -58,6 +57,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def addAcl(acls: List[AclBinding]): Boolean = {
withAdminClient(adminClient => {
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
adminClient.createAcls(acls).all().get(timeoutMs, TimeUnit.MILLISECONDS)
true
} catch {
@@ -100,6 +100,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def deleteAcl(entry: AclEntry, allResource: Boolean, allPrincipal: Boolean, allOperation: Boolean): Boolean = {
withAdminClient(adminClient => {
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val result = adminClient.deleteAcls(Collections.singleton(entry.toAclBindingFilter(allResource, allPrincipal, allOperation))).all().get(timeoutMs, TimeUnit.MILLISECONDS)
log.info("delete acl: {}", result)
true
@@ -113,6 +114,7 @@ class KafkaAclConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaCon
def deleteAcl(filters: util.Collection[AclBindingFilter]): Boolean = {
withAdminClient(adminClient => {
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val result = adminClient.deleteAcls(filters).all().get(timeoutMs, TimeUnit.MILLISECONDS)
log.info("delete acl: {}", result)
true

View File

@@ -1,16 +1,16 @@
package kafka.console
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.server.ConfigType
import kafka.utils.Implicits.PropertiesOps
import org.apache.kafka.clients.admin._
import org.apache.kafka.common.config.SaslConfigs
import org.apache.kafka.common.security.scram.internals.{ScramCredentialUtils, ScramFormatter}
import java.security.MessageDigest
import java.util
import java.util.concurrent.TimeUnit
import java.util.{Properties, Set}
import com.xuxd.kafka.console.config.KafkaConfig
import kafka.server.ConfigType
import kafka.utils.Implicits.PropertiesOps
import org.apache.kafka.clients.admin._
import org.apache.kafka.common.security.scram.internals.{ScramCredentialUtils, ScramFormatter}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, DictionaryHasAsScala, SeqHasAsJava}
/**
@@ -35,31 +35,32 @@ class KafkaConfigConsole(config: KafkaConfig) extends KafkaConsole(config: Kafka
}).asInstanceOf[util.Map[String, UserScramCredentialsDescription]]
}
def addOrUpdateUser(name: String, pass: String): Boolean = {
def addOrUpdateUser(name: String, pass: String): (Boolean, String) = {
withAdminClient(adminClient => {
try {
adminClient.alterUserScramCredentials(util.Arrays.asList(
new UserScramCredentialUpsertion(name,
new ScramCredentialInfo(ScramMechanism.fromMechanismName(config.getSaslMechanism), defaultIterations), pass)))
.all().get(timeoutMs, TimeUnit.MILLISECONDS)
true
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val mechanisms = ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM).split(",").toSeq
val scrams = mechanisms.map(m => new UserScramCredentialUpsertion(name,
new ScramCredentialInfo(ScramMechanism.fromMechanismName(m), defaultIterations), pass))
adminClient.alterUserScramCredentials(scrams.asInstanceOf[Seq[UserScramCredentialAlteration]].asJava).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
} catch {
case ex: Exception => log.error("addOrUpdateUser error", ex)
false
(false, ex.getMessage)
}
}).asInstanceOf[Boolean]
}).asInstanceOf[(Boolean, String)]
}
def addOrUpdateUserWithZK(name: String, pass: String): Boolean = {
withZKClient(adminZkClient => {
try {
val credential = new ScramFormatter(org.apache.kafka.common.security.scram.internals.ScramMechanism.forMechanismName(config.getSaslMechanism))
val credential = new ScramFormatter(org.apache.kafka.common.security.scram.internals.ScramMechanism.forMechanismName(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM)))
.generateCredential(pass, defaultIterations)
val credentialStr = ScramCredentialUtils.credentialToString(credential)
val userConfig: Properties = new Properties()
userConfig.put(config.getSaslMechanism, credentialStr)
userConfig.put(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties().getProperty(SaslConfigs.SASL_MECHANISM), credentialStr)
val configs = adminZkClient.fetchEntityConfig(ConfigType.User, name)
userConfig ++= configs
@@ -101,6 +102,7 @@ class KafkaConfigConsole(config: KafkaConfig) extends KafkaConsole(config: Kafka
// .all().get(timeoutMs, TimeUnit.MILLISECONDS)
// all delete
val userDetail = getUserDetailList(util.Collections.singletonList(name))
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
userDetail.values().asScala.foreach(u => {
adminClient.alterUserScramCredentials(u.credentialInfos().asScala.map(s => new UserScramCredentialDeletion(u.name(), s.mechanism())
.asInstanceOf[UserScramCredentialAlteration]).toList.asJava)

View File

@@ -1,15 +1,19 @@
package kafka.console
import java.util.Properties
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.zk.{AdminZkClient, KafkaZkClient}
import org.apache.kafka.clients.CommonClientConfigs
import org.apache.kafka.clients.admin.{AbstractOptions, Admin, AdminClientConfig}
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.config.SaslConfigs
import org.apache.kafka.common.serialization.ByteArrayDeserializer
import org.apache.kafka.clients.admin._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer, OffsetAndMetadata}
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.requests.ListOffsetsResponse
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, ByteArraySerializer, StringSerializer}
import org.apache.kafka.common.utils.Time
import org.slf4j.{Logger, LoggerFactory}
import java.util.Properties
import scala.collection.{Map, Seq}
import scala.jdk.CollectionConverters.{MapHasAsJava, MapHasAsScala}
/**
* kafka-console-ui.
@@ -19,7 +23,7 @@ import org.apache.kafka.common.utils.Time
* */
class KafkaConsole(config: KafkaConfig) {
protected val timeoutMs: Int = config.getRequestTimeoutMs
// protected val timeoutMs: Int = config.getRequestTimeoutMs
protected def withAdminClient(f: Admin => Any): Any = {
@@ -55,6 +59,38 @@ class KafkaConsole(config: KafkaConfig) {
}
}
protected def withProducerAndCatchError(f: KafkaProducer[String, String] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
val producer = new KafkaProducer[String, String](props, new StringSerializer, new StringSerializer)
try {
f(producer)
} catch {
case er: Exception => eh(er)
}
finally {
producer.close()
}
}
protected def withByteProducerAndCatchError(f: KafkaProducer[Array[Byte], Array[Byte]] => Any, eh: Exception => Any,
extra: Properties = new Properties()): Any = {
val props = getProps()
props.putAll(extra)
props.put(ConsumerConfig.CLIENT_ID_CONFIG, String.valueOf(System.currentTimeMillis()))
val producer = new KafkaProducer[Array[Byte], Array[Byte]](props, new ByteArraySerializer, new ByteArraySerializer)
try {
f(producer)
} catch {
case er: Exception => eh(er)
}
finally {
producer.close()
}
}
protected def withZKClient(f: AdminZkClient => Any): Any = {
val zkClient = KafkaZkClient(config.getZookeeperAddr, false, 30000, 30000, Int.MaxValue, Time.SYSTEM)
val adminZkClient = new AdminZkClient(zkClient)
@@ -70,7 +106,7 @@ class KafkaConsole(config: KafkaConfig) {
}
protected def withTimeoutMs[T <: AbstractOptions[T]](options: T) = {
options.timeoutMs(timeoutMs)
options.timeoutMs(ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
}
private def createAdminClient(): Admin = {
@@ -79,13 +115,63 @@ class KafkaConsole(config: KafkaConfig) {
private def getProps(): Properties = {
val props: Properties = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, config.getBootstrapServer)
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, config.getRequestTimeoutMs())
if (config.isEnableAcl) {
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, config.getSecurityProtocol())
props.put(SaslConfigs.SASL_MECHANISM, config.getSaslMechanism())
props.put(SaslConfigs.SASL_JAAS_CONFIG, config.getSaslJaasConfig())
}
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getBootstrapServer())
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs())
props.putAll(ContextConfigHolder.CONTEXT_CONFIG.get().getProperties())
props
}
}
object KafkaConsole {
val log: Logger = LoggerFactory.getLogger(this.getClass)
def getCommittedOffsets(admin: Admin, groupId: String,
timeoutMs: Integer): Map[TopicPartition, OffsetAndMetadata] = {
admin.listConsumerGroupOffsets(
groupId, new ListConsumerGroupOffsetsOptions().timeoutMs(timeoutMs)
).partitionsToOffsetAndMetadata.get.asScala
}
def getLogTimestampOffsets(admin: Admin, topicPartitions: Seq[TopicPartition],
timestamp: java.lang.Long, timeoutMs: Integer): Map[TopicPartition, OffsetAndMetadata] = {
val timestampOffsets = topicPartitions.map { topicPartition =>
topicPartition -> OffsetSpec.forTimestamp(timestamp)
}.toMap
val offsets = admin.listOffsets(
timestampOffsets.asJava,
new ListOffsetsOptions().timeoutMs(timeoutMs)
).all.get
val (successfulOffsetsForTimes, unsuccessfulOffsetsForTimes) =
offsets.asScala.partition(_._2.offset != ListOffsetsResponse.UNKNOWN_OFFSET)
val successfulLogTimestampOffsets = successfulOffsetsForTimes.map {
case (topicPartition, listOffsetsResultInfo) => topicPartition -> new OffsetAndMetadata(listOffsetsResultInfo.offset)
}.toMap
unsuccessfulOffsetsForTimes.foreach { entry =>
log.warn(s"\nWarn: Partition " + entry._1.partition() + " from topic " + entry._1.topic() +
" is empty. Falling back to latest known offset.")
}
successfulLogTimestampOffsets ++ getLogEndOffsets(admin, unsuccessfulOffsetsForTimes.keySet.toSeq, timeoutMs)
}
def getLogEndOffsets(admin: Admin,
topicPartitions: Seq[TopicPartition], timeoutMs: Integer): Predef.Map[TopicPartition, OffsetAndMetadata] = {
val endOffsets = topicPartitions.map { topicPartition =>
topicPartition -> OffsetSpec.latest
}.toMap
val offsets = admin.listOffsets(
endOffsets.asJava,
new ListOffsetsOptions().timeoutMs(timeoutMs)
).all.get
val res = topicPartitions.map { topicPartition =>
Option(offsets.get(topicPartition)) match {
case Some(listOffsetsResultInfo) => topicPartition -> new OffsetAndMetadata(listOffsetsResultInfo.offset)
case _ =>
throw new IllegalArgumentException
}
}.toMap
res
}
}
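
The recurring ContextConfigHolder.CONTEXT_CONFIG.get() calls above are what make the console multi-cluster aware: connection settings are no longer read from the single injected KafkaConfig but from a per-request context selected by the frontend. The holder itself is not part of this file; the snippet below is only a minimal sketch, assuming a ThreadLocal-based design and nothing beyond the accessors used in this diff (getBootstrapServer, getRequestTimeoutMs, getProperties).

import java.util.Properties

// Hypothetical sketch only: the real classes live in com.xuxd.kafka.console.config
// and may differ in fields and behaviour.
class ContextConfig(bootstrapServer: String, requestTimeoutMs: Int, properties: Properties) {
  def getBootstrapServer(): String = bootstrapServer
  def getRequestTimeoutMs(): Int = requestTimeoutMs
  def getProperties(): Properties = properties
}

object ContextConfigHolder {
  // One ContextConfig per request-handling thread; a web-layer filter would populate it
  // (e.g. from the X-Cluster-Info-Id request header) before these console classes run.
  val CONTEXT_CONFIG: ThreadLocal[ContextConfig] = new ThreadLocal[ContextConfig]()
}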


@@ -0,0 +1,239 @@
package kafka.console
import com.xuxd.kafka.console.beans.MessageFilter
import com.xuxd.kafka.console.beans.enums.FilterType
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import org.apache.commons.lang3.StringUtils
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.TopicPartition
import java.time.Duration
import java.util
import java.util.Properties
import scala.collection.immutable
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsScala, SeqHasAsJava}
/**
* kafka-console-ui.
*
* @author xuxd
* @date 2021-12-11 09:39:40
* */
class MessageConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig) with Logging {
def searchBy(partitions: util.Collection[TopicPartition], startTime: Long, endTime: Long,
maxNums: Int, filter: MessageFilter): (util.List[ConsumerRecord[Array[Byte], Array[Byte]]], Int) = {
var startOffTable: immutable.Map[TopicPartition, Long] = Map.empty
var endOffTable: immutable.Map[TopicPartition, Long] = Map.empty
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val startTable = KafkaConsole.getLogTimestampOffsets(admin, partitions.asScala.toSeq, startTime, timeoutMs)
startOffTable = startTable.map(t2 => (t2._1, t2._2.offset())).toMap
endOffTable = KafkaConsole.getLogTimestampOffsets(admin, partitions.asScala.toSeq, endTime, timeoutMs)
.map(t2 => (t2._1, t2._2.offset())).toMap
}, e => {
log.error("getLogTimestampOffsets error.", e)
throw new RuntimeException("getLogTimestampOffsets error", e)
})
val headerValueBytes = if (StringUtils.isNotEmpty(filter.getHeaderValue())) filter.getHeaderValue().getBytes() else None
def filterMessage(record: ConsumerRecord[Array[Byte], Array[Byte]]): Boolean = {
filter.getFilterType() match {
case FilterType.BODY => {
val body = filter.getDeserializer().deserialize(record.topic(), record.value())
var contains = false
if (filter.isContainsValue) {
contains = body.asInstanceOf[String].contains(filter.getSearchContent().asInstanceOf[String])
} else {
contains = body.equals(filter.getSearchContent)
}
contains
}
case FilterType.HEADER => {
if (StringUtils.isNotEmpty(filter.getHeaderKey()) && StringUtils.isNotEmpty(filter.getHeaderValue())) {
val iterator = record.headers().headers(filter.getHeaderKey()).iterator()
var contains = false
while (iterator.hasNext() && !contains) {
val next = iterator.next().value()
contains = (next.sameElements(headerValueBytes.asInstanceOf[Array[Byte]]))
}
contains
} else if (StringUtils.isNotEmpty(filter.getHeaderKey()) && StringUtils.isEmpty(filter.getHeaderValue())) {
record.headers().headers(filter.getHeaderKey()).iterator().hasNext()
} else {
true
}
}
case FilterType.NONE => true
}
}
var terminate: Boolean = (startOffTable == endOffTable)
val res = new util.LinkedList[ConsumerRecord[Array[Byte], Array[Byte]]]()
// number of messages scanned so far
var searchNums = 0
// if the start and end offsets are identical there is nothing to search, so finish immediately
if (!terminate) {
val arrive = new util.HashSet[TopicPartition](partitions)
val props = new Properties()
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
withConsumerAndCatchError(consumer => {
consumer.assign(partitions)
for ((tp, off) <- startOffTable) {
consumer.seek(tp, off)
}
// termination condition:
// 1. all queried partitions have reached their end offsets
while (!terminate) {
// the maximum number of messages to query has been reached
if (searchNums >= maxNums) {
terminate = true
} else {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val records = consumer.poll(Duration.ofMillis(timeoutMs))
if (records.isEmpty) {
terminate = true
} else {
for ((tp, endOff) <- endOffTable) {
if (!terminate) {
var recordList = records.records(tp)
if (!recordList.isEmpty) {
val first = recordList.get(0)
if (first.offset() >= endOff) {
arrive.remove(tp)
} else {
searchNums += recordList.size()
//
// (String topic,
// int partition,
// long offset,
// long timestamp,
// TimestampType timestampType,
// Long checksum,
// int serializedKeySize,
// int serializedValueSize,
// K key,
// V value,
// Headers headers,
// Optional<Integer> leaderEpoch)
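// copy only metadata, key and headers: the value is replaced with null so large message
// bodies are not held in memory while paging through the search window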
val nullVList = recordList.asScala.filter(filterMessage(_)).map(record => new ConsumerRecord[Array[Byte], Array[Byte]](record.topic(),
record.partition(),
record.offset(),
record.timestamp(),
record.timestampType(),
record.checksum(),
record.serializedKeySize(),
record.serializedValueSize(),
record.key(),
null,
record.headers(),
record.leaderEpoch())).toSeq.asJava
res.addAll(nullVList)
if (recordList.get(recordList.size() - 1).offset() >= endOff) {
arrive.remove(tp)
}
if (recordList != null) {
recordList = null
}
}
}
if (arrive.isEmpty) {
terminate = true
}
}
}
}
}
}
}, e => {
log.error("searchBy time error.", e)
})
}
(res, searchNums)
}
def searchBy(
tp2o: util.Map[TopicPartition, Long]): util.Map[TopicPartition, ConsumerRecord[Array[Byte], Array[Byte]]] = {
val props = new Properties()
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
val res = new util.HashMap[TopicPartition, ConsumerRecord[Array[Byte], Array[Byte]]]()
withConsumerAndCatchError(consumer => {
var tpSet = tp2o.keySet()
val tpSetCopy = new util.HashSet[TopicPartition](tpSet)
val endOffsets = consumer.endOffsets(tpSet)
val beginOffsets = consumer.beginningOffsets(tpSet)
for ((tp, off) <- tp2o.asScala) {
val endOff = endOffsets.get(tp)
// if (endOff <= off) {
// consumer.seek(tp, endOff)
// tpSetCopy.remove(tp)
// } else {
// consumer.seek(tp, off)
// }
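// skip partitions whose requested offset lies outside the valid [beginningOffset, endOffset) range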
val beginOff = beginOffsets.get(tp)
if (off < beginOff || off >= endOff) {
tpSetCopy.remove(tp)
}
}
tpSet = tpSetCopy
consumer.assign(tpSet)
tpSet.asScala.foreach(tp => {
consumer.seek(tp, tp2o.get(tp))
})
var terminate = tpSet.isEmpty
while (!terminate) {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val records = consumer.poll(Duration.ofMillis(timeoutMs))
val tps = new util.HashSet(tpSet).asScala
for (tp <- tps) {
if (!res.containsKey(tp)) {
val recordList = records.records(tp)
if (!recordList.isEmpty) {
val record = recordList.get(0)
res.put(tp, record)
tpSet.remove(tp)
}
}
if (tpSet.isEmpty) {
terminate = true
}
}
}
}, e => {
log.error("searchBy offset error.", e)
})
res
}
def send(topic: String, partition: Int, key: String, value: String, num: Int): Unit = {
withProducerAndCatchError(producer => {
val nullKey = if (key != null && key.trim().length() == 0) null else key
for (a <- 1 to num) {
val record = if (partition != -1) new ProducerRecord[String, String](topic, partition, nullKey, value)
else new ProducerRecord[String, String](topic, nullKey, value)
producer.send(record)
}
}, e => log.error("send error.", e))
}
def sendSync(record: ProducerRecord[Array[Byte], Array[Byte]]): (Boolean, String) = {
withByteProducerAndCatchError(producer => {
val metadata = producer.send(record).get()
(true, metadata.toString())
}, e => {
log.error("send error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
}


@@ -1,15 +1,15 @@
package kafka.console
import java.util.concurrent.TimeUnit
import java.util.{Collections, Properties}
import com.xuxd.kafka.console.config.KafkaConfig
import org.apache.kafka.clients.admin.ElectLeadersOptions
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.admin.ReassignPartitionsCommand
import org.apache.kafka.clients.admin.{ElectLeadersOptions, ListPartitionReassignmentsOptions, PartitionReassignment}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.ByteArrayDeserializer
import org.apache.kafka.common.{ElectionType, TopicPartition}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, ListHasAsScala, MapHasAsScala, SeqHasAsJava, SetHasAsJava, SetHasAsScala}
import java.util.concurrent.TimeUnit
import java.util.{Collections, Properties}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, ListHasAsScala, MapHasAsJava, MapHasAsScala, SeqHasAsJava, SetHasAsJava, SetHasAsScala}
/**
* kafka-console-ui.
@@ -34,6 +34,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
throw new UnsupportedOperationException("exist consumer client.")
}
}
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val thatGroupDescriptionList = thatAdmin.describeConsumerGroups(searchGroupIds).all().get(timeoutMs, TimeUnit.MILLISECONDS).values()
if (groupDescriptionList.isEmpty) {
throw new IllegalArgumentException("that consumer group info is null.")
@@ -101,6 +102,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
thatMinOffset: util.Map[TopicPartition, Long]): (Boolean, String) = {
val thatAdmin = createAdminClient(props)
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val searchGroupIds = Collections.singleton(groupId)
val groupDescriptionList = consumerConsole.getConsumerGroupList(searchGroupIds)
if (groupDescriptionList.isEmpty) {
@@ -178,6 +180,7 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
val thatConsumer = new KafkaConsumer(props, new ByteArrayDeserializer, new ByteArrayDeserializer)
try {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val thisTopicPartitions = consumerConsole.listSubscribeTopics(groupId).get(topic).asScala.sortBy(_.partition())
val thatTopicPartitionMap = thatAdmin.listConsumerGroupOffsets(
groupId
@@ -210,4 +213,63 @@ class OperationConsole(config: KafkaConfig, topicConsole: TopicConsole,
val topicList = topicConsole.getTopicList(Collections.singleton(topic))
topicList.asScala.flatMap(_.partitions().asScala.map(t => new TopicPartition(topic, t.partition()))).toSet.asJava
}
}
def modifyInterBrokerThrottle(reassigningBrokers: util.Set[Int],
interBrokerThrottle: Long): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
ReassignPartitionsCommand.modifyInterBrokerThrottle(admin, reassigningBrokers.asScala.toSet, interBrokerThrottle)
(true, "")
}, e => {
log.error("modifyInterBrokerThrottle error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def clearBrokerLevelThrottles(brokers: util.Set[Int]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
ReassignPartitionsCommand.clearBrokerLevelThrottles(admin, brokers.asScala.toSet)
(true, "")
}, e => {
log.error("clearBrokerLevelThrottles error.", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
/**
* current reassigning is active.
*/
def currentReassignments(): util.Map[TopicPartition, PartitionReassignment] = {
withAdminClientAndCatchError(admin => {
admin.listPartitionReassignments(withTimeoutMs(new ListPartitionReassignmentsOptions)).reassignments().get()
}, e => {
log.error("listPartitionReassignments error.", e)
Collections.emptyMap()
}).asInstanceOf[util.Map[TopicPartition, PartitionReassignment]]
}
def cancelPartitionReassignments(reassignments: util.Set[TopicPartition]): util.Map[TopicPartition, Throwable] = {
withAdminClientAndCatchError(admin => {
val res = ReassignPartitionsCommand.cancelPartitionReassignments(admin, reassignments.asScala.toSet)
res.asJava
}, e => {
log.error("cancelPartitionReassignments error.", e)
throw e
}).asInstanceOf[util.Map[TopicPartition, Throwable]]
}
def proposedAssignments(reassignmentJson: String,
brokerListString: String): util.Map[TopicPartition, util.List[Int]] = {
withAdminClientAndCatchError(admin => {
val map = ReassignPartitionsCommand.generateAssignment(admin, reassignmentJson, brokerListString, true)._1
val res = new util.HashMap[TopicPartition, util.List[Int]]()
for (tp <- map.keys) {
res.put(tp, map(tp).asJava)
// res.put(tp, map.getOrElse(tp, Seq.empty).asJava)
}
res
}, e => {
log.error("proposedAssignments error.", e)
throw e
})
}.asInstanceOf[util.Map[TopicPartition, util.List[Int]]]
}
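
proposedAssignments above delegates to ReassignPartitionsCommand.generateAssignment, so its inputs follow the same conventions as kafka-reassign-partitions.sh --generate: a "topics to move" JSON document plus a comma-separated broker list. A hedged usage sketch (operationConsole, the topic name and the broker ids are illustrative assumptions):

// Hypothetical invocation; "my-topic" and brokers 0,1,2 are made up.
val topicsToMoveJson = """{"version":1,"topics":[{"topic":"my-topic"}]}"""

// Returns the proposed replica list per partition, e.g. my-topic-0 -> [1, 2].
val proposed = operationConsole.proposedAssignments(topicsToMoveJson, "0,1,2")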


@@ -1,14 +1,18 @@
package kafka.console
import java.util
import java.util.concurrent.TimeUnit
import java.util.{Collections, List, Set}
import com.xuxd.kafka.console.config.KafkaConfig
import com.xuxd.kafka.console.config.{ContextConfigHolder, KafkaConfig}
import kafka.admin.ReassignPartitionsCommand._
import kafka.utils.Json
import org.apache.kafka.clients.admin._
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException
import org.apache.kafka.common.utils.Time
import org.apache.kafka.common.{TopicPartition, TopicPartitionReplica}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, SetHasAsJava}
import java.util
import java.util.concurrent.{ExecutionException, TimeUnit}
import java.util.{Collections, List, Set}
import scala.collection.{Map, Seq}
import scala.jdk.CollectionConverters.{CollectionHasAsScala, MapHasAsJava, MapHasAsScala, SeqHasAsJava, SetHasAsJava}
/**
* kafka-console-ui.
@@ -24,6 +28,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
* @return all topic name set.
*/
def getTopicNameList(internal: Boolean = true): Set[String] = {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
withAdminClientAndCatchError(admin => admin.listTopics(new ListTopicsOptions().listInternal(internal)).names()
.get(timeoutMs, TimeUnit.MILLISECONDS),
e => {
@@ -38,6 +43,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
* @return internal topic name set.
*/
def getInternalTopicNameList(): Set[String] = {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
withAdminClientAndCatchError(admin => admin.listTopics(new ListTopicsOptions().listInternal(true)).listings()
.get(timeoutMs, TimeUnit.MILLISECONDS).asScala.filter(_.isInternal).map(_.name()).toSet[String].asJava,
e => {
@@ -65,6 +71,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
*/
def deleteTopic(topic: String): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.deleteTopics(Collections.singleton(topic), new DeleteTopicsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
},
@@ -99,6 +106,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
*/
def createTopic(topic: NewTopic): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val createResult = admin.createTopics(Collections.singleton(topic), new CreateTopicsOptions().retryOnQuotaViolation(false))
createResult.all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
@@ -113,6 +121,7 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
*/
def createPartitions(newPartitions: util.Map[String, NewPartitions]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
admin.createPartitions(newPartitions,
new CreatePartitionsOptions().retryOnQuotaViolation(false)).all().get(timeoutMs, TimeUnit.MILLISECONDS)
(true, "")
@@ -121,4 +130,181 @@ class TopicConsole(config: KafkaConfig) extends KafkaConsole(config: KafkaConfig
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def getCurrentReplicaAssignmentJson(topic: String): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
val json = formatAsReassignmentJson(getReplicaAssignmentForTopics(admin, Seq(topic)), Map.empty)
(true, json)
}, e => {
log.error("getCurrentReplicaAssignmentJson error, ", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def updateReplicas(reassignmentJson: String, interBrokerThrottle: Long = -1L): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
executeAssignment(admin, reassignmentJson, interBrokerThrottle)
(true, "")
}, e => {
log.error("executeAssignment error, ", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
/**
* Copied and modified from {@link kafka.admin.ReassignPartitionsCommand#executeAssignment}.
*/
def executeAssignment(adminClient: Admin,
reassignmentJson: String,
interBrokerThrottle: Long = -1L,
logDirThrottle: Long = -1L,
timeoutMs: Long = 30000L,
time: Time = Time.SYSTEM): Unit = {
val (proposedParts, proposedReplicas) = parseExecuteAssignmentArgs(reassignmentJson)
val currentReassignments = adminClient.
listPartitionReassignments().reassignments().get().asScala
// If there is an existing reassignment in progress, refuse to start a new one.
// This helps avoid surprising users.
if (currentReassignments.nonEmpty) {
throw new TerseReassignmentFailureException("Cannot execute because there is an existing partition assignment.")
}
verifyBrokerIds(adminClient, proposedParts.values.flatten.toSet)
val currentParts = getReplicaAssignmentForPartitions(adminClient, proposedParts.keySet.toSet)
log.info("currentPartitionReplicaAssignment: " + currentPartitionReplicaAssignmentToString(proposedParts, currentParts))
log.info(s"newPartitionReplicaAssignment: $reassignmentJson")
if (interBrokerThrottle >= 0 || logDirThrottle >= 0) {
if (interBrokerThrottle >= 0) {
val moveMap = calculateProposedMoveMap(currentReassignments, proposedParts, currentParts)
modifyReassignmentThrottle(adminClient, moveMap)
}
if (logDirThrottle >= 0) {
val movingBrokers = calculateMovingBrokers(proposedReplicas.keySet.toSet)
modifyLogDirThrottle(adminClient, movingBrokers, logDirThrottle)
}
}
// Execute the partition reassignments.
val errors = alterPartitionReassignments(adminClient, proposedParts)
if (errors.nonEmpty) {
throw new TerseReassignmentFailureException(
"Error reassigning partition(s):%n%s".format(
errors.keySet.toBuffer.sortWith(compareTopicPartitions).map { part =>
s"$part: ${errors(part).getMessage}"
}.mkString(System.lineSeparator())))
}
if (proposedReplicas.nonEmpty) {
executeMoves(adminClient, proposedReplicas, timeoutMs, time)
}
}
def configThrottle(topic: String, partitions: util.List[Integer]): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
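// Throttle values use the (leader|follower).replication.throttled.replicas format:
// "partition:brokerId" pairs such as "0:1,0:2", or "*" to throttle every replica of the topic.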
val throttles = {
if (partitions.get(0) == -1) {
Map(topic -> "*")
} else {
val topicDescription = admin.describeTopics(Collections.singleton(topic), withTimeoutMs(new DescribeTopicsOptions))
.all().get().values().asScala.toList
def convert(partition: Integer, replicas: scala.List[Int]): String = {
replicas.map("%d:%d".format(partition, _)).toSet.mkString(",")
}
val ptor = topicDescription.head.partitions().asScala.map(info => (info.partition(), info.replicas().asScala.map(_.id()))).toMap
val conf = partitions.asScala.map(partition => convert(partition, ptor.get(partition) match {
case Some(v) => v.toList
case None => throw new IllegalArgumentException
})).toList
Map(topic -> conf.mkString(","))
}
}
modifyTopicThrottles(admin, throttles, throttles)
(true, "")
}, e => {
log.error("configThrottle error, ", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def clearThrottle(topic: String): (Boolean, String) = {
withAdminClientAndCatchError(admin => {
clearTopicLevelThrottles(admin, Collections.singleton(topic).asScala.toSet)
(true, "")
}, e => {
log.error("clearThrottle error, ", e)
(false, e.getMessage)
}).asInstanceOf[(Boolean, String)]
}
def getOffsetForTimestamp(topic: String, timestamp: java.lang.Long): util.Map[TopicPartition, java.lang.Long] = {
withAdminClientAndCatchError(admin => {
val partitions = describeTopics(admin, Collections.singleton(topic)).get(topic) match {
case Some(topicDescription: TopicDescription) => topicDescription.partitions()
.asScala.map(info => new TopicPartition(topic, info.partition())).toSeq
case None => throw new IllegalArgumentException("topic is not exist.")
}
val timeoutMs = ContextConfigHolder.CONTEXT_CONFIG.get().getRequestTimeoutMs()
val offsetMap = KafkaConsole.getLogTimestampOffsets(admin, partitions, timestamp, timeoutMs)
offsetMap.map(tuple2 => (tuple2._1, tuple2._2.offset())).toMap.asJava
}, e => {
log.error("getOffsetForTimestamp error, ", e)
Collections.emptyMap()
}).asInstanceOf[util.Map[TopicPartition, java.lang.Long]]
}
/**
* Get the current replica assignments for some topics.
*
* @param adminClient The AdminClient to use.
* @param topics The topics to get information about.
* @return A map from partitions to broker assignments.
* If any topic can't be found, an exception will be thrown.
*/
private def getReplicaAssignmentForTopics(adminClient: Admin,
topics: Seq[String])
: Map[TopicPartition, Seq[Int]] = {
describeTopics(adminClient, topics.toSet.asJava).flatMap {
case (topicName, topicDescription) => topicDescription.partitions.asScala.map { info =>
(new TopicPartition(topicName, info.partition), info.replicas.asScala.map(_.id).toSeq)
}
}
}
private def formatAsReassignmentJson(partitionsToBeReassigned: Map[TopicPartition, Seq[Int]],
replicaLogDirAssignment: Map[TopicPartitionReplica, String]): String = {
Json.encodeAsString(Map(
"version" -> 1,
"partitions" -> partitionsToBeReassigned.keySet.toBuffer.sortWith(compareTopicPartitions).map {
tp =>
val replicas = partitionsToBeReassigned(tp)
Map(
"topic" -> tp.topic,
"partition" -> tp.partition,
"replicas" -> replicas.asJava
).asJava
}.asJava
).asJava)
}
private def describeTopics(adminClient: Admin,
topics: Set[String])
: Map[String, TopicDescription] = {
adminClient.describeTopics(topics).values.asScala.map { case (topicName, topicDescriptionFuture) =>
try topicName -> topicDescriptionFuture.get
catch {
case t: ExecutionException if t.getCause.isInstanceOf[UnknownTopicOrPartitionException] =>
throw new ExecutionException(
new UnknownTopicOrPartitionException(s"Topic $topicName not found."))
}
}
}
private def modifyReassignmentThrottle(admin: Admin, moveMap: MoveMap): Unit = {
val leaderThrottles = calculateLeaderThrottles(moveMap)
val followerThrottles = calculateFollowerThrottles(moveMap)
modifyTopicThrottles(admin, leaderThrottles, followerThrottles)
}
}
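
getCurrentReplicaAssignmentJson and updateReplicas above exchange the partition-reassignment JSON produced by formatAsReassignmentJson. A hedged sketch of a round trip (topicConsole, the topic name, broker ids and throttle value are illustrative assumptions):

// 1. Export the current assignment, e.g.
//    {"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[0,1]}]}
val (ok, currentJson) = topicConsole.getCurrentReplicaAssignmentJson("my-topic")

// 2. Edit the replica lists and execute, optionally throttling replication traffic (bytes/sec).
val newJson = """{"version":1,"partitions":[{"topic":"my-topic","partition":0,"replicas":[1,2]}]}"""
val (success, error) = topicConsole.updateReplicas(newJson, interBrokerThrottle = 10485760L)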

ui/package-lock.json (generated)

@@ -1866,9 +1866,9 @@
"optional": true
},
"loader-utils": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.0.tgz",
"integrity": "sha512-rP4F0h2RaWSvPEkD7BLDFQnvSf+nK+wr3ESUjNTyAGobqrijmW92zc+SO6d4p4B1wh7+B/Jg1mkQe5NYUEHtHQ==",
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.2.tgz",
"integrity": "sha512-TM57VeHptv569d/GKh6TAYdzKblwDNiumOdkFnejjD0XwTH87K90w3O7AiJRqdQoXygvi1VQTJTLGhJl7WqA7A==",
"dev": true,
"optional": true,
"requires": {
@@ -1897,9 +1897,9 @@
}
},
"vue-loader-v16": {
"version": "npm:vue-loader@16.5.0",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.5.0.tgz",
"integrity": "sha512-WXh+7AgFxGTgb5QAkQtFeUcHNIEq3PGVQ8WskY5ZiFbWBkOwcCPRs4w/2tVyTbh2q6TVRlO3xfvIukUtjsu62A==",
"version": "npm:vue-loader@16.8.3",
"resolved": "https://registry.npmjs.org/vue-loader/-/vue-loader-16.8.3.tgz",
"integrity": "sha512-7vKN45IxsKxe5GcVCbc2qFU5aWzyiLrYJyUuMz4BQLKctCj/fmCa0w6fGiiQ2cLFetNcek1ppGJQDCup0c1hpA==",
"dev": true,
"optional": true,
"requires": {


@@ -1,6 +1,7 @@
<template>
<div id="app">
<div id="nav">
<h2 class="logo">Kafka 控制台</h2>
<router-link to="/" class="pad-l-r">主页</router-link>
<span>|</span
><router-link to="/cluster-page" class="pad-l-r">集群</router-link>
@@ -8,19 +9,26 @@
><router-link to="/topic-page" class="pad-l-r">Topic</router-link>
<span>|</span
><router-link to="/group-page" class="pad-l-r">消费组</router-link>
<span v-show="config.enableAcl">|</span
><router-link to="/acl-page" class="pad-l-r" v-show="config.enableAcl"
<span>|</span
><router-link to="/message-page" class="pad-l-r">消息</router-link>
<span v-show="enableSasl">|</span
><router-link to="/acl-page" class="pad-l-r" v-show="enableSasl"
>Acl</router-link
>
<span>|</span
><router-link to="/op-page" class="pad-l-r">运维</router-link>
<span class="right">集群{{ clusterName }}</span>
</div>
<router-view class="content" />
</div>
</template>
<script>
import { KafkaConfigApi } from "@/utils/api";
import { KafkaClusterApi } from "@/utils/api";
import request from "@/utils/request";
import { mapMutations, mapState } from "vuex";
import { getClusterInfo } from "@/utils/local-cache";
import notification from "ant-design-vue/lib/notification";
import { CLUSTER } from "@/store/mutation-types";
export default {
data() {
@@ -29,12 +37,35 @@ export default {
};
},
created() {
request({
url: KafkaConfigApi.getConfig.url,
method: KafkaConfigApi.getConfig.method,
}).then((res) => {
this.config = res.data;
});
const clusterInfo = getClusterInfo();
if (!clusterInfo) {
request({
url: KafkaClusterApi.peekClusterInfo.url,
method: KafkaClusterApi.peekClusterInfo.method,
}).then((res) => {
if (res.code == 0) {
this.switchCluster(res.data);
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
} else {
this.switchCluster(clusterInfo);
}
},
computed: {
...mapState({
clusterName: (state) => state.clusterInfo.clusterName,
enableSasl: (state) => state.clusterInfo.enableSasl,
}),
},
methods: {
...mapMutations({
switchCluster: CLUSTER.SWITCH,
}),
},
};
</script>
@@ -44,7 +75,6 @@ export default {
font-family: Avenir, Helvetica, Arial, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
text-align: center;
color: #2c3e50;
}
@@ -59,6 +89,7 @@ export default {
padding-top: 1%;
padding-bottom: 1%;
margin-bottom: 1%;
text-align: center;
}
#nav a {
@@ -81,4 +112,16 @@ export default {
height: 90%;
width: 100%;
}
.logo {
float: left;
left: 1%;
top: 1%;
position: absolute;
}
.right {
float: right;
right: 1%;
top: 2%;
position: absolute;
}
</style>


@@ -43,6 +43,12 @@ const routes = [
component: () =>
import(/* webpackChunkName: "cluster" */ "../views/cluster/Cluster.vue"),
},
{
path: "/message-page",
name: "Message",
component: () =>
import(/* webpackChunkName: "cluster" */ "../views/message/Message.vue"),
},
];
const router = new VueRouter({


@@ -1,11 +1,34 @@
import Vue from "vue";
import Vuex from "vuex";
import { CLUSTER } from "@/store/mutation-types";
import { setClusterInfo } from "@/utils/local-cache";
Vue.use(Vuex);
export default new Vuex.Store({
state: {},
mutations: {},
state: {
clusterInfo: {
id: undefined,
clusterName: undefined,
enableSasl: false,
},
},
mutations: {
[CLUSTER.SWITCH](state, clusterInfo) {
state.clusterInfo.id = clusterInfo.id;
state.clusterInfo.clusterName = clusterInfo.clusterName;
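// a cluster counts as SASL-enabled if any of its connection properties contains security.protocol=SASL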
let enableSasl = false;
for (let p in clusterInfo.properties) {
if (enableSasl) {
break;
}
enableSasl =
clusterInfo.properties[p].indexOf("security.protocol=SASL") != -1;
}
state.clusterInfo.enableSasl = enableSasl;
setClusterInfo(clusterInfo);
},
},
actions: {},
modules: {},
});


@@ -0,0 +1,3 @@
export const CLUSTER = {
SWITCH: "switchCluster",
};


@@ -51,7 +51,7 @@ export const KafkaAclApi = {
export const KafkaConfigApi = {
getConfig: {
url: "/config",
url: "/config/console",
method: "get",
},
getTopicConfig: {
@@ -117,6 +117,22 @@ export const KafkaTopicApi = {
url: "/topic/partition/new",
method: "post",
},
getCurrentReplicaAssignment: {
url: "/topic/replica/assignment",
method: "get",
},
updateReplicaAssignment: {
url: "/topic/replica/assignment",
method: "post",
},
configThrottle: {
url: "/topic/replica/throttle",
method: "post",
},
sendStats: {
url: "/topic/send/stats",
method: "get",
},
};
export const KafkaConsumerApi = {
@@ -167,6 +183,30 @@ export const KafkaClusterApi = {
url: "/cluster",
method: "get",
},
getClusterInfoList: {
url: "/cluster/info",
method: "get",
},
addClusterInfo: {
url: "/cluster/info",
method: "post",
},
deleteClusterInfo: {
url: "/cluster/info",
method: "delete",
},
updateClusterInfo: {
url: "/cluster/info",
method: "put",
},
peekClusterInfo: {
url: "/cluster/info/peek",
method: "get",
},
getBrokerApiVersionInfo: {
url: "/cluster/info/api/version",
method: "get",
},
};
export const KafkaOpApi = {
@@ -190,4 +230,50 @@ export const KafkaOpApi = {
url: "/op/replication/preferred",
method: "post",
},
configThrottle: {
url: "/op/broker/throttle",
method: "post",
},
removeThrottle: {
url: "/op/broker/throttle",
method: "delete",
},
currentReassignments: {
url: "/op/replication/reassignments",
method: "get",
},
cancelReassignment: {
url: "/op/replication/reassignments",
method: "delete",
},
proposedAssignment: {
url: "/op/replication/reassignments/proposed",
method: "post",
},
};
export const KafkaMessageApi = {
searchByTime: {
url: "/message/search/time",
method: "post",
},
searchByOffset: {
url: "/message/search/offset",
method: "post",
},
searchDetail: {
url: "/message/search/detail",
method: "post",
},
deserializerList: {
url: "/message/deserializer/list",
method: "get",
},
send: {
url: "/message/send",
method: "post",
},
resend: {
url: "/message/resend",
method: "post",
},
};


@@ -1,3 +1,7 @@
export const ConstantEvent = {
updateUserDialogData: "updateUserDialogData",
};
export const Cache = {
clusterInfo: "clusterInfo",
};


@@ -0,0 +1,10 @@
import { Cache } from "@/utils/constants";
export function setClusterInfo(clusterInfo) {
localStorage.setItem(Cache.clusterInfo, JSON.stringify(clusterInfo));
}
export function getClusterInfo() {
const str = localStorage.getItem(Cache.clusterInfo);
return str ? JSON.parse(str) : undefined;
}


@@ -1,12 +1,13 @@
import axios from "axios";
import notification from "ant-design-vue/es/notification";
import { VueAxios } from "./axios";
import { getClusterInfo } from "@/utils/local-cache";
// create the axios instance
const request = axios.create({
// default prefix for API requests
baseURL: process.env.VUE_APP_API_BASE_URL,
timeout: 30000, // request timeout (ms)
timeout: 120000, // request timeout (ms)
});
// error interceptor handler
@@ -22,10 +23,14 @@ const errorHandler = (error) => {
};
// request interceptor
// request.interceptors.request.use(config => {
//
// return config
// }, errorHandler)
request.interceptors.request.use((config) => {
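// attach the id of the currently selected cluster so the backend knows which cluster to operate on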
const clusterInfo = getClusterInfo();
if (clusterInfo) {
config.headers["X-Cluster-Info-Id"] = clusterInfo.id;
// config.headers["X-Cluster-Info-Name"] = encodeURIComponent(clusterInfo.clusterName);
}
return config;
}, errorHandler);
// response interceptor
request.interceptors.response.use((response) => {


@@ -1,32 +1,128 @@
<template>
<div class="home">
<a-card title="kafka console 配置" style="width: 100%">
<!-- <a slot="extra" href="#">more</a>-->
<a-card title="控制台默认配置" class="card-style">
<p v-for="(v, k) in config" :key="k">{{ k }}={{ v }}</p>
</a-card>
<p></p>
<hr />
<h3>kafka API 版本兼容性</h3>
<a-spin :spinning="apiVersionInfoLoading">
<a-table
:columns="columns"
:data-source="brokerApiVersionInfo"
bordered
row-key="brokerId"
>
<div slot="operation" slot-scope="record">
<a-button
size="small"
href="javascript:;"
class="operation-btn"
@click="openApiVersionInfoDialog(record)"
>详情
</a-button>
</div>
</a-table>
</a-spin>
<VersionInfo
:version-info="apiVersionInfo"
:visible="showApiVersionInfoDialog"
@closeApiVersionInfoDialog="closeApiVersionInfoDialog"
>
</VersionInfo>
</div>
</template>
<script>
// @ is an alias to /src
import request from "@/utils/request";
import { KafkaConfigApi } from "@/utils/api";
import { KafkaConfigApi, KafkaClusterApi } from "@/utils/api";
import notification from "ant-design-vue/lib/notification";
import VersionInfo from "@/views/home/VersionInfo";
export default {
name: "Home",
components: {},
components: { VersionInfo },
data() {
return {
config: {},
columns,
brokerApiVersionInfo: [],
showApiVersionInfoDialog: false,
apiVersionInfo: [],
apiVersionInfoLoading: false,
};
},
methods: {
openApiVersionInfoDialog(record) {
this.apiVersionInfo = record.versionInfo;
this.showApiVersionInfoDialog = true;
},
closeApiVersionInfoDialog() {
this.showApiVersionInfoDialog = false;
},
},
created() {
request({
url: KafkaConfigApi.getConfig.url,
method: KafkaConfigApi.getConfig.method,
}).then((res) => {
this.config = res.data;
if (res.code == 0) {
this.config = res.data;
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
this.apiVersionInfoLoading = true;
request({
url: KafkaClusterApi.getBrokerApiVersionInfo.url,
method: KafkaClusterApi.getBrokerApiVersionInfo.method,
}).then((res) => {
this.apiVersionInfoLoading = false;
if (res.code == 0) {
this.brokerApiVersionInfo = res.data;
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
};
const columns = [
{
title: "id",
dataIndex: "brokerId",
key: "brokerId",
},
{
title: "地址",
dataIndex: "host",
key: "host",
},
{
title: "支持的api数量",
dataIndex: "supportNums",
key: "supportNums",
},
{
title: "不支持的api数量",
dataIndex: "unSupportNums",
key: "unSupportNums",
},
{
title: "操作",
key: "operation",
scopedSlots: { customRender: "operation" },
},
];
</script>
<style scoped>
.card-style {
width: 100%;
}
</style>


@@ -232,11 +232,13 @@ export default {
}
},
onDeleteUser(row) {
this.loading = true;
request({
url: KafkaAclApi.deleteKafkaUser.url,
method: KafkaAclApi.deleteKafkaUser.method,
data: { username: row.username },
}).then((res) => {
this.loading = false;
this.getAclList();
if (res.code == 0) {
this.$message.success(res.msg);


@@ -8,20 +8,22 @@
:footer="null"
@cancel="handleCancel"
>
<a-form :form="form" :label-col="{ span: 5 }" :wrapper-col="{ span: 12 }">
<a-form-item label="用户名">
<span>{{ user.username }}</span>
</a-form-item>
<a-form-item label="密码">
<span>{{ user.password }}</span>
</a-form-item>
<a-form-item label="凭证信息">
<span>{{ user.credentialInfos }}</span>
</a-form-item>
<a-form-item label="数据一致性说明">
<strong>{{ user.consistencyDescription }}</strong>
</a-form-item>
</a-form>
<a-spin :spinning="loading">
<a-form :form="form" :label-col="{ span: 5 }" :wrapper-col="{ span: 12 }">
<a-form-item label="用户名">
<span>{{ user.username }}</span>
</a-form-item>
<a-form-item label="密码">
<span>{{ user.password }}</span>
</a-form-item>
<a-form-item label="凭证信息">
<span>{{ user.credentialInfos }}</span>
</a-form-item>
<a-form-item label="数据一致性说明">
<strong>{{ user.consistencyDescription }}</strong>
</a-form-item>
</a-form>
</a-spin>
</a-modal>
</template>
@@ -47,6 +49,7 @@ export default {
show: this.visible,
form: this.$form.createForm(this, { name: "UserDetailForm" }),
user: {},
loading: false,
};
},
watch: {
@@ -63,11 +66,13 @@ export default {
},
getUserDetail() {
const api = KafkaAclApi.getKafkaUserDetail;
this.loading = true;
request({
url: api.url,
method: api.method,
params: { username: this.username },
}).then((res) => {
this.loading = false;
if (res.code != 0) {
this.$message.error(res.msg);
} else {


@@ -45,6 +45,7 @@
import request from "@/utils/request";
import { KafkaClusterApi } from "@/utils/api";
import BrokerConfig from "@/views/cluster/BrokerConfig";
import notification from "ant-design-vue/lib/notification";
export default {
name: "Topic",
@@ -68,8 +69,15 @@ export default {
method: KafkaClusterApi.getClusterInfo.method,
}).then((res) => {
this.loading = false;
this.data = res.data.nodes;
this.clusterId = res.data.clusterId;
if (res.code == 0) {
this.data = res.data.nodes;
this.clusterId = res.data.clusterId;
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
openBrokerConfigDialog(record, isLoggerConfig) {


@@ -47,6 +47,15 @@
@click="openResetOffsetByTimeDialog(k)"
>时间戳
</a-button>
<a-button
type="primary"
icon="reload"
size="small"
style="float: right"
@click="getConsumerDetail"
>
刷新
</a-button>
<hr />
<a-table
:columns="columns"
@@ -70,6 +79,11 @@
</a-button>
</div>
</a-table>
<p>
<strong style="color: red"
>Note: offsets can only be reset when no consumers in this group are currently running; otherwise the reset will fail and return an error message.</strong
>
</p>
</div>
<a-modal


@@ -196,7 +196,14 @@ export default {
data: this.queryParam,
}).then((res) => {
this.loading = false;
this.data = res.data.list;
if (res.code == 0) {
this.data = res.data.list;
} else {
notification.error({
message: "error",
description: res.msg,
});
}
});
},
deleteGroup(group) {


@@ -32,6 +32,10 @@
/>
</a-form-item>
</a-form>
<hr />
<p>
*Note: this timestamp is interpreted as Beijing time (the calculation is fixed to UTC+8). If your region does not use Beijing time (most of mainland China does), please convert it to your local time before resetting.
</p>
</a-spin>
</div>
</a-modal>


@@ -0,0 +1,61 @@
<template>
<a-modal
title="API版本信息"
:visible="show"
:width="600"
:mask="false"
:destroyOnClose="true"
:footer="null"
:maskClosable="false"
@cancel="handleCancel"
>
<div>
<h3>Format</h3>
<p>RequestType(1): 0 to n(2) [usage: v](3)</p>
<ol>
<li>The type of request sent by the client</li>
<li>The version range of this request that the broker supports</li>
<li>
The version (v) used by the console's Kafka client for this request; UNSUPPORTED means the broker is too old to handle it, which may affect the related features
</li>
</ol>
<hr />
<ol>
<li v-for="info in versionInfo" v-bind:key="info">{{ info }}</li>
</ol>
</div>
</a-modal>
</template>
<script>
export default {
name: "APIVersionInfo",
props: {
versionInfo: {
type: Array,
},
visible: {
type: Boolean,
default: false,
},
},
data() {
return {
show: false,
};
},
watch: {
visible(v) {
this.show = v;
},
},
methods: {
handleCancel() {
this.$emit("closeApiVersionInfoDialog", {});
},
},
};
</script>
<style scoped></style>

Some files were not shown because too many files have changed in this diff.