Compare commits

...

69 Commits

Author SHA1 Message Date
澄潭
e68a8ac25f add model-mapper plugin & optimize model-router plugin (#1538) 2024-11-22 22:24:42 +08:00
Kent Dong
96575b982e fix: Refresh go.mod and go.sum file contents (#1525) 2024-11-22 13:34:55 +08:00
EnableAsync
c2d405b2a7 feat: Enhance ai-cache Plugin with Vector Similarity-Based LLM Cache Recall and Multi-DB Support (#1248) 2024-11-21 16:57:41 +08:00
Jingze
6efb3109f2 fix: update oidc plugin go.mod dependencies (#1522) 2024-11-19 17:33:42 +08:00
Se7en
1b1c08afb7 fix: apitoken failover for coze (#1515) 2024-11-18 15:36:26 +08:00
Se7en
d24123a55f feat: implement apiToken failover mechanism (#1256) 2024-11-16 19:03:09 +08:00
澄潭
f2a5df3949 use the body returned by the ext auth server when auth fails (#1510) 2024-11-14 18:50:33 +08:00
澄潭
ebc5b2987e fix compile of wasm cpp plugins (#1511) 2024-11-14 18:49:21 +08:00
007gzs
ca97cbd75a fix workflows build-and-push-wasm-plugin-image (#1508) 2024-11-13 17:39:24 +08:00
hanans426
a787e237ce 增加快速部署到阿里云的部署方案 (#1506) 2024-11-13 16:26:55 +08:00
纪卓志
6a1bf90d42 feat: supports custom prepare build script (#1490) 2024-11-12 13:45:28 +08:00
007gzs
60e476da87 fix example sse build error (#1503) 2024-11-11 17:47:26 +08:00
rinfx
2cb8558cda Optimize AI security guard plugin (#1473)
Co-authored-by: Kent Dong <ch3cho@qq.com>
2024-11-11 14:49:17 +08:00
littlejian
4d1a037942 feat: Automatically generating markdown documentation for helm charts with helm-docs (#1496) 2024-11-11 11:34:38 +08:00
xingyunyang01
39b6eac9d0 AI Agent plugin adds JSON formatting output feature (#1374) 2024-11-11 11:11:02 +08:00
johnlanni
7697af9d2b update chart.lock 2024-11-08 14:07:12 +08:00
澄潭
3660715506 release 2.0.3 (#1494) 2024-11-08 13:59:12 +08:00
Squidward
7bd438877b add textin embedding for ai-cache (#1493) 2024-11-08 13:48:32 +08:00
Ranjana761
0fbeb39cac docs: Added back to top , contributors section and star history graph (#1440) 2024-11-08 10:18:16 +08:00
Kent Dong
d02c974af4 feat: Ensure all images are loaded to K8s before starting e2e tests (#1389) 2024-11-08 10:15:51 +08:00
澄潭
8ad4970231 update rust makefile (#1491) 2024-11-08 10:14:02 +08:00
007gzs
aee37c5e22 Implement Rust Wasm Plugin Build & Publish Action (#1483) 2024-11-07 20:20:42 +08:00
Kent Dong
73cf32aadd feat: Update istio codebase (#1485) 2024-11-07 11:40:57 +08:00
澄潭
1ab69fcf82 Update README.md 2024-11-07 11:20:25 +08:00
mamba
9b995321bb feat: 🎸 支持多版本能力:根据不同路由映射不同的Version版本。 (#1429) 2024-11-07 09:11:57 +08:00
澄潭
00cac813e3 add clusterrole for gateway api (#1480) 2024-11-06 13:37:18 +08:00
澄潭
548cf2f081 move nottinygc to proxy-wasm-go-sdk (#1477) 2024-11-05 20:59:51 +08:00
007gzs
c1f2504e87 Ai data mask deny word match optimize (#1453) 2024-11-05 15:26:55 +08:00
rinfx
7e8b0445ad modify log-format, then every plugin log is associated with access log (#1454) 2024-11-04 22:04:20 +08:00
littlejian
63d5422da6 feat:Support downstream and upstram, which can be configured through helm parameters (#1399) 2024-11-01 22:39:19 +08:00
韩贤涛
035e81a5ca fix: Control gateway-api Listener with global.enableGatewayAPI in Helm (#1461) 2024-10-31 21:36:40 +08:00
澄潭
9a1edcd4c8 Fix the issue where wasmplugin does not work when kingress exists (#1450) 2024-10-29 20:27:11 +08:00
007gzs
2219a17898 [feat] Support redis call with wasm-rust (#1417) 2024-10-29 19:35:02 +08:00
纪卓志
93c1e5c2bb feat: lazy formatted log (#1441) 2024-10-29 17:06:03 +08:00
澄潭
7c2d2b2855 fix destinationrule merge logic (#1439) 2024-10-29 09:01:06 +08:00
澄潭
b1550e91ab rel: Release v2.0.2 (#1438) 2024-10-28 19:14:57 +08:00
澄潭
0b42836e85 Update CODEOWNERS 2024-10-28 18:56:09 +08:00
tmsnan
7c33ebf6ea bugfix: remove config check and fix memory leak in traffic-tag (#1435) 2024-10-28 16:26:29 +08:00
Yang Beining
acec48ed8b [ai-cache] Implement a WASM plugin for LLM result retrieval based on vector similarity (#1290) 2024-10-27 16:21:04 +08:00
澄潭
d309bf2e25 fix model-router plugin (#1432) 2024-10-25 16:48:37 +08:00
韩贤涛
496d365a95 add PILOT_ENABLE_ALPHA_GATEWAY_API and fix gateway status update (#1421) 2024-10-24 20:57:46 +08:00
rinfx
d952fa562b bugfix: plugin will block GET request (#1428) 2024-10-24 17:34:26 +08:00
纪卓志
e7561c30e5 feat: implements text/event-stream(SSE) MIME parser (#1416)
Co-authored-by: 007gzs <007gzs@gmail.com>
2024-10-24 16:58:45 +08:00
澄潭
cdd71155a9 Update README.md 2024-10-24 11:25:59 +08:00
Ankur Singh
a5ccb90b28 Improve the grammar of the sentence (#1426) 2024-10-24 09:38:22 +08:00
007gzs
d76f574ab3 plugin ai-data-mask add log (#1423) 2024-10-23 13:19:02 +08:00
mamba
bb6c43c767 feat: 【frontend-gray】添加 skipedRoutes以及skipedByHeaders 配置 (#1409)
Co-authored-by: Kent Dong <ch3cho@qq.com>
2024-10-23 09:34:00 +08:00
fengxsong
b8f5826a32 fix: do not create ingressclass when it's empty (#1419)
Signed-off-by: fengxusong <fengxsong@outlook.com>
2024-10-23 09:03:07 +08:00
Bingkun Zhao
0d79386ce2 fix a bug of ip-restriction plugin (#1422) 2024-10-22 22:17:56 +08:00
007gzs
871ae179c3 Ai data masking fix (#1420) 2024-10-22 21:39:01 +08:00
澄潭
f8d62a8ac3 add model router plugin (#1414) 2024-10-21 16:46:18 +08:00
澄潭
badf4b7101 ai cache plugin support set skip ai cache header (#1380) 2024-10-21 15:43:01 +08:00
Ikko Eltociear Ashimine
fc6902ded2 docs: add Japanese README and CONTRIBUTING files (#1407) 2024-10-21 15:20:45 +08:00
007gzs
d96994767c Change http_content to Rc in HttpWrapper (#1391) 2024-10-21 09:44:01 +08:00
rinfx
32e5a59ae0 fix special charactor handle in ai-security-guard plugin (#1394) 2024-10-18 16:32:48 +08:00
Lisheng Zheng
49bb5ec2b9 fix: add HTTP2 protocol options to skywalking and otel cluster (#1379) 2024-10-18 15:34:34 +08:00
mamba
11ff2d1d31 [frontend-gray] support grayKey from localStorage (#1395) 2024-10-18 13:58:52 +08:00
澄潭
c67f494b49 Update README.md 2024-10-18 09:54:10 +08:00
澄潭
299621476f Update README.md 2024-10-17 14:33:08 +08:00
澄潭
7e6168a644 Update README.md 2024-10-17 14:31:26 +08:00
Smoothengineer
e923cbaecc Update README_EN.md (#1402) 2024-10-17 12:53:04 +08:00
Kent Dong
6f86c31bac feat: Update submodules: envoy/envoy, istio/isitio (#1398) 2024-10-16 19:00:18 +08:00
Kent Dong
51c956f0b3 fix: Fix clean targets in Makefile (#1397) 2024-10-16 18:42:49 +08:00
澄潭
d0693d8c4b Update SECURITY.md 2024-10-16 11:17:44 +08:00
Jun
e298078065 add dns&static registry e2e test (#1393) 2024-10-16 10:21:03 +08:00
澄潭
85f8eb5166 key-auth consumer support set independent key source (#1392) 2024-10-15 20:52:03 +08:00
澄潭
0a112d1a1e fix mcp service port protocol name (#1383) 2024-10-15 11:50:43 +08:00
Kent Dong
04ce776f14 feat: Support route fallback by default (#1381) 2024-10-14 18:50:45 +08:00
rinfx
952c9ec5dc Ai proxy support coze (#1387) 2024-10-14 12:45:53 +08:00
272 changed files with 11625 additions and 4623 deletions


@@ -3,9 +3,16 @@ name: Build and Push Wasm Plugin Image
on:
push:
tags:
- "wasm-go-*-v*.*.*" # 匹配 wasm-go-{pluginName}-vX.Y.Z 格式的标签
- "wasm-*-*-v*.*.*" # 匹配 wasm-{go|rust}-{pluginName}-vX.Y.Z 格式的标签
workflow_dispatch:
inputs:
plugin_type:
description: 'Type of the plugin'
required: true
type: choice
options:
- go
- rust
plugin_name:
description: 'Name of the plugin'
required: true
@@ -23,32 +30,42 @@ jobs:
env:
IMAGE_REGISTRY_SERVICE: ${{ vars.IMAGE_REGISTRY || 'higress-registry.cn-hangzhou.cr.aliyuncs.com' }}
IMAGE_REPOSITORY: ${{ vars.PLUGIN_IMAGE_REPOSITORY || 'plugins' }}
RUST_VERSION: 1.82
GO_VERSION: 1.19
TINYGO_VERSION: 0.28.1
ORAS_VERSION: 1.0.0
steps:
- name: Set plugin_name and version from inputs or ref_name
- name: Set plugin_type, plugin_name and version from inputs or ref_name
id: set_vars
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
plugin_type="${{ github.event.inputs.plugin_type }}"
plugin_name="${{ github.event.inputs.plugin_name }}"
version="${{ github.event.inputs.version }}"
else
ref_name=${{ github.ref_name }}
plugin_type=${ref_name#*-} # 删除插件类型前面的字段(wasm-)
plugin_type=${plugin_type%%-*} # 删除插件类型后面的字段(-{plugin_name}-vX.Y.Z)
plugin_name=${ref_name#*-*-} # 删除插件名前面的字段(wasm-go-)
plugin_name=${plugin_name%-*} # 删除插件名后面的字段(-vX.Y.Z)
version=$(echo "$ref_name" | awk -F'v' '{print $2}')
fi
if [[ "$plugin_type" == "rust" ]]; then
builder_image="higress-registry.cn-hangzhou.cr.aliyuncs.com/plugins/wasm-rust-builder:rust${{ env.RUST_VERSION }}-oras${{ env.ORAS_VERSION }}"
else
builder_image="higress-registry.cn-hangzhou.cr.aliyuncs.com/plugins/wasm-go-builder:go${{ env.GO_VERSION }}-tinygo${{ env.TINYGO_VERSION }}-oras${{ env.ORAS_VERSION }}"
fi
echo "PLUGIN_TYPE=$plugin_type" >> $GITHUB_ENV
echo "PLUGIN_NAME=$plugin_name" >> $GITHUB_ENV
echo "VERSION=$version" >> $GITHUB_ENV
echo "BUILDER_IMAGE=$builder_image" >> $GITHUB_ENV
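The parameter expansions in the step above can be sketched outside the workflow. The tag name below is hypothetical, following the `wasm-{go|rust}-{pluginName}-vX.Y.Z` convention:

```shell
# Hypothetical tag following the wasm-{go|rust}-{pluginName}-vX.Y.Z convention
ref_name="wasm-rust-request-block-v1.2.3"

plugin_type=${ref_name#*-}      # drop the leading "wasm-"          -> "rust-request-block-v1.2.3"
plugin_type=${plugin_type%%-*}  # keep text before the first "-"    -> "rust"

plugin_name=${ref_name#*-*-}    # drop "wasm-rust-"                 -> "request-block-v1.2.3"
plugin_name=${plugin_name%-*}   # drop the trailing "-vX.Y.Z"       -> "request-block"

version=$(echo "$ref_name" | awk -F'v' '{print $2}')   # -> "1.2.3"

echo "$plugin_type $plugin_name $version"
```

Note that the `awk -F'v'` split assumes the plugin name itself contains no letter `v`; a name like `model-router` would break the version extraction, so this is a latent constraint of the workflow's naming scheme.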
- name: Checkout code
uses: actions/checkout@v3
- name: File Check
run: |
workspace=${{ github.workspace }}/plugins/wasm-go/extensions/${PLUGIN_NAME}
workspace=${{ github.workspace }}/plugins/wasm-${PLUGIN_TYPE}/extensions/${PLUGIN_NAME}
push_command="./plugin.tar.gz:application/vnd.oci.image.layer.v1.tar+gzip"
# 查找spec.yaml
@@ -75,10 +92,10 @@ jobs:
echo "PUSH_COMMAND=\"$push_command\"" >> $GITHUB_ENV
- name: Run a wasm-go-builder
- name: Run a wasm-builder
env:
PLUGIN_NAME: ${{ env.PLUGIN_NAME }}
BUILDER_IMAGE: higress-registry.cn-hangzhou.cr.aliyuncs.com/plugins/wasm-go-builder:go${{ env.GO_VERSION }}-tinygo${{ env.TINYGO_VERSION }}-oras${{ env.ORAS_VERSION }}
BUILDER_IMAGE: ${{ env.BUILDER_IMAGE }}
run: |
docker run -itd --name builder -v ${{ github.workspace }}:/workspace -e PLUGIN_NAME=${{ env.PLUGIN_NAME }} --rm ${{ env.BUILDER_IMAGE }} /bin/bash
@@ -93,7 +110,7 @@ jobs:
echo "TargetImage=${target_image}"
echo "TargetImageLatest=${target_image_latest}"
cd ${{ github.workspace }}/plugins/wasm-go/extensions/${PLUGIN_NAME}
cd ${{ github.workspace }}/plugins/wasm-${PLUGIN_TYPE}/extensions/${PLUGIN_NAME}
if [ -f ./.buildrc ]; then
echo 'Found .buildrc file, sourcing it...'
. ./.buildrc
@@ -101,7 +118,7 @@ jobs:
echo '.buildrc file not found'
fi
echo "EXTRA_TAGS=${EXTRA_TAGS}"
if [ "${PLUGIN_TYPE}" == "go" ]; then
command="
set -e
cd /workspace/plugins/wasm-go/extensions/${PLUGIN_NAME}
@@ -112,4 +129,21 @@ jobs:
oras push ${target_image} ${push_command}
oras push ${target_image_latest} ${push_command}
"
elif [ "${PLUGIN_TYPE}" == "rust" ]; then
command="
set -e
cd /workspace/plugins/wasm-rust/extensions/${PLUGIN_NAME}
cargo build --target wasm32-wasi --release
cp target/wasm32-wasi/release/*.wasm plugin.wasm
tar czvf plugin.tar.gz plugin.wasm
echo ${{ secrets.REGISTRY_PASSWORD }} | oras login -u ${{ secrets.REGISTRY_USERNAME }} --password-stdin ${{ env.IMAGE_REGISTRY_SERVICE }}
oras push ${target_image} ${push_command}
oras push ${target_image_latest} ${push_command}
"
else
command="
echo "unkown type ${PLUGIN_TYPE}"
"
fi
docker exec builder bash -c "$command"
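Outside the builder container, the packaging half of the Rust branch above corresponds roughly to the following. The actual `cargo build` needs the `wasm32-wasi` toolchain installed, so it is shown commented out and replaced with a stand-in artifact:

```shell
# Hypothetical local equivalent of the Rust packaging steps.
# Real build (requires: rustup target add wasm32-wasi):
#   cargo build --target wasm32-wasi --release
#   cp target/wasm32-wasi/release/*.wasm plugin.wasm
printf 'stand-in wasm bytes' > plugin.wasm   # placeholder artifact for illustration
tar czf plugin.tar.gz plugin.wasm            # the single layer that oras will push
tar tzf plugin.tar.gz                        # lists the packaged file
```

The resulting `plugin.tar.gz` is what the `push_command` attaches with the `application/vnd.oci.image.layer.v1.tar+gzip` media type.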

.github/workflows/helm-docs.yaml vendored Normal file

@@ -0,0 +1,35 @@
name: "Helm Docs"
on:
pull_request:
branches:
- "*"
push:
jobs:
helm:
name: Helm Docs
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: '1.22.9'
- name: Run helm-docs
run: |
GOBIN=$PWD GO111MODULE=on go install github.com/norwoodj/helm-docs/cmd/helm-docs@v1.14.2
./helm-docs -c ${GITHUB_WORKSPACE}/helm/higress -f ../core/values.yaml
DIFF=$(git diff ${GITHUB_WORKSPACE}/helm/higress/*md)
if [ ! -z "$DIFF" ]; then
echo "Please use helm-docs in your clone, of your fork, of the project, and commit a updated README.md for the chart."
fi
git diff --exit-code
rm -f ./helm-docs
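The `git diff` check above is a common CI gate: regenerate an artifact in the workspace and fail the job if regeneration changed anything committed. A minimal standalone sketch of the pattern (repository contents here are made up):

```shell
# Throwaway repo with one committed "generated" file
repo=$(mktemp -d)
cd "$repo" && git init -q .
echo "generated docs v1" > README.md
git add README.md
git -c user.name=ci -c user.email=ci@example.com commit -qm "init"

# Simulate a regeneration step producing different output
echo "generated docs v2" > README.md

# The gate: a non-empty diff means the committed docs are stale
if ! git diff --quiet -- README.md; then
  echo "docs out of date"
fi
git checkout -q -- README.md   # restore, so `git diff --exit-code` would now pass
```

In the workflow, `git diff --exit-code` does the failing itself: a non-zero exit status fails the step, and the preceding `echo` only supplies a human-readable hint.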

.github/workflows/release-crd.yaml vendored Normal file

@@ -0,0 +1,24 @@
name: Release CRD to GitHub
on:
push:
tags:
- "v*.*.*"
workflow_dispatch: ~
jobs:
release-crd:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: generate crds
run: |
cat helm/core/crds/customresourcedefinitions.gen.yaml helm/core/crds/istio-envoyfilter.yaml > crd.yaml
- name: Upload hgctl packages to the GitHub release
uses: softprops/action-gh-release@v2
if: startsWith(github.ref, 'refs/tags/')
with:
files: |
crd.yaml
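The `cat` step above relies on the fact that concatenating YAML files yields a valid multi-document stream, assuming each source file begins with its own `---` document separator. A toy equivalent (file names are placeholders):

```shell
# Two single-document manifests, each starting with the YAML document marker
printf -- '---\nkind: CustomResourceDefinition\nname: first\n' > a.yaml
printf -- '---\nkind: EnvoyFilter\nname: second\n' > b.yaml

# Concatenation yields one multi-document YAML stream
cat a.yaml b.yaml > crd.yaml
grep -c '^---$' crd.yaml   # counts the document separators
```

If a source file lacked its leading `---`, the two documents would merge into one invalid document, so the separator convention is load-bearing here.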

.gitignore vendored

@@ -16,4 +16,5 @@ helm/**/charts/**.tgz
target/
tools/hack/cluster.conf
envoy/1.20
istio/1.12
istio/1.12
Cargo.lock


@@ -3,7 +3,7 @@
/istio @SpecialYang @johnlanni
/pkg @SpecialYang @johnlanni @CH3CHO
/plugins @johnlanni @WeixinX @CH3CHO
/plugins/wasm-rust @007gzs
/plugins/wasm-rust @007gzs @jizhuozhi
/registry @NameHaibinZhang @2456868764 @johnlanni
/test @Xunzhuo @2456868764 @CH3CHO
/tools @johnlanni @Xunzhuo @2456868764


@@ -1,6 +1,6 @@
# Contributing to Higress
It is warmly welcomed if you have interest to hack on Higress. First, we encourage this kind of willing very much. And here is a list of contributing guide for you.
Your interest in contributing to Higress is warmly welcomed. First, we encourage this kind of willing very much. And here is a list of contributing guide for you.
[[中文贡献文档](./CONTRIBUTING_CN.md)]

CONTRIBUTING_JP.md Normal file

@@ -0,0 +1,195 @@
# Higress への貢献
Higress のハッキングに興味がある場合は、温かく歓迎します。まず、このような意欲を非常に奨励します。そして、以下は貢献ガイドのリストです。
[[中文](./CONTRIBUTING.md)] | [[English Contributing Document](./CONTRIBUTING_EN.md)]
## トピック
- [Higress への貢献](#higress-への貢献)
- [トピック](#トピック)
- [セキュリティ問題の報告](#セキュリティ問題の報告)
- [一般的な問題の報告](#一般的な問題の報告)
- [コードとドキュメントの貢献](#コードとドキュメントの貢献)
- [ワークスペースの準備](#ワークスペースの準備)
- [ブランチの定義](#ブランチの定義)
- [コミットルール](#コミットルール)
- [コミットメッセージ](#コミットメッセージ)
- [コミット内容](#コミット内容)
- [PR 説明](#pr-説明)
- [テストケースの貢献](#テストケースの貢献)
- [何かを手伝うための参加](#何かを手伝うための参加)
- [コードスタイル](#コードスタイル)
## セキュリティ問題の報告
セキュリティ問題は常に真剣に扱われます。通常の原則として、セキュリティ問題を広めることは推奨しません。Higress のセキュリティ問題を発見した場合は、公開で議論せず、公開の問題を開かないでください。代わりに、[higress@googlegroups.com](mailto:higress@googlegroups.com) にプライベートなメールを送信して報告することをお勧めします。
## 一般的な問題の報告
正直なところ、Higress のすべてのユーザーを非常に親切な貢献者と見なしています。Higress を体験した後、プロジェクトに対するフィードバックがあるかもしれません。その場合は、[NEW ISSUE](https://github.com/alibaba/higress/issues/new/choose) を通じて問題を開くことを自由に行ってください。
Higress プロジェクトを分散型で協力しているため、**よく書かれた**、**詳細な**、**明確な**問題報告を高く評価します。コミュニケーションをより効率的にするために、問題が検索リストに存在するかどうかを検索することを希望します。存在する場合は、新しい問題を開くのではなく、既存の問題のコメントに詳細を追加してください。
問題の詳細をできるだけ標準化するために、問題報告者のために [ISSUE TEMPLATE](./.github/ISSUE_TEMPLATE) を設定しました。テンプレートのフィールドに従って指示に従って記入してください。
問題を開く場合は多くのケースがあります:
* バグ報告
* 機能要求
* パフォーマンス問題
* 機能提案
* 機能設計
* 助けが必要
* ドキュメントが不完全
* テストの改善
* プロジェクトに関する質問
* その他
また、新しい問題を記入する際には、投稿から機密データを削除することを忘れないでください。機密データには、パスワード、秘密鍵、ネットワークの場所、プライベートなビジネスデータなどが含まれる可能性があります。
## コードとドキュメントの貢献
Higress プロジェクトをより良くするためのすべての行動が奨励されます。GitHub では、Higress のすべての改善は PR(プルリクエストの略)を通じて行うことができます。
* タイプミスを見つけた場合は、修正してみてください!
* バグを見つけた場合は、修正してみてください!
* 冗長なコードを見つけた場合は、削除してみてください!
* 欠落しているテストケースを見つけた場合は、追加してみてください!
* 機能を強化できる場合は、**ためらわないでください**
* コードが不明瞭な場合は、コメントを追加して明確にしてください!
* コードが醜い場合は、リファクタリングしてみてください!
* ドキュメントの改善に役立つ場合は、さらに良いです!
* ドキュメントが不正確な場合は、修正してください!
* ...
実際には、それらを完全にリストすることは不可能です。1つの原則を覚えておいてください
> あなたからの PR を楽しみにしています。
Higress を PR で改善する準備ができたら、ここで PR ルールを確認することをお勧めします。
* [ワークスペースの準備](#ワークスペースの準備)
* [ブランチの定義](#ブランチの定義)
* [コミットルール](#コミットルール)
* [PR 説明](#pr-説明)
### ワークスペースの準備
PR を提出するために、GitHub ID に登録していることを前提とします。その後、以下の手順で準備を完了できます:
1. Higress を自分のリポジトリに **FORK** します。この作業を行うには、[alibaba/higress](https://github.com/alibaba/higress) のメインページの右上にある Fork ボタンをクリックするだけです。その後、`https://github.com/<your-username>/higress` に自分のリポジトリが作成されます。ここで、`your-username` はあなたの GitHub ユーザー名です。
2. 自分のリポジトリをローカルに **CLONE** します。`git clone git@github.com:<your-username>/higress.git` を使用してリポジトリをローカルマシンにクローンします。その後、新しいブランチを作成して、行いたい変更を完了できます。
3. リモートを `git@github.com:alibaba/higress.git` に設定します。以下の2つのコマンドを使用します
```bash
git remote add upstream git@github.com:alibaba/higress.git
git remote set-url --push upstream no-pushing
```
このリモート設定を使用すると、git リモート設定を次のように確認できます:
```shell
$ git remote -v
origin git@github.com:<your-username>/higress.git (fetch)
origin git@github.com:<your-username>/higress.git (push)
upstream git@github.com:alibaba/higress.git (fetch)
upstream no-pushing (push)
```
これを追加すると、ローカルブランチを上流ブランチと簡単に同期できます。
### ブランチの定義
現在、プルリクエストを通じたすべての貢献は Higress の [main ブランチ](https://github.com/alibaba/higress/tree/main) に対するものであると仮定します。貢献する前に、ブランチの定義を理解することは非常に役立ちます。
貢献者として、プルリクエストを通じたすべての貢献は main ブランチに対するものであることを再度覚えておいてください。Higress プロジェクトには、リリースブランチ(0.6.0、0.6.1)、機能ブランチ、ホットフィックスブランチなど、いくつかの他のブランチがあります。
正式にバージョンをリリースする際には、リリースブランチが作成され、バージョン番号で命名されます。
リリース後、リリースブランチのコミットを main ブランチにマージします。
特定のバージョンにバグがある場合、後のバージョンで修正するか、特定のホットフィックスバージョンで修正するかを決定します。ホットフィックスバージョンで修正することを決定した場合、対応するリリースブランチに基づいてホットフィックスブランチをチェックアウトし、コード修正と検証を行い、main ブランチにマージします。
大きな機能については、開発と検証のために機能ブランチを引き出します。
### コミットルール
実際には、Higress ではコミット時に2つのルールを真剣に考えています
* [コミットメッセージ](#コミットメッセージ)
* [コミット内容](#コミット内容)
#### コミットメッセージ
コミットメッセージは、提出された PR の目的をレビュアーがよりよく理解するのに役立ちます。また、コードレビューの手続きを加速するのにも役立ちます。貢献者には、曖昧なメッセージではなく、**明確な**コミットメッセージを使用することを奨励します。一般的に、以下のコミットメッセージタイプを推奨します:
* docs: xxxx. 例:"docs: add docs about Higress cluster installation".
* feature: xxxx. 例:"feature: use higress config instead of istio config".
* bugfix: xxxx. 例:"bugfix: fix panic when input nil parameter".
* refactor: xxxx. 例:"refactor: simplify to make codes more readable".
* test: xxx. 例:"test: add unit test case for func InsertIntoArray".
* その他の読みやすく明確な表現方法。
一方で、以下のような方法でのコミットメッセージは推奨しません:
* ~~バグ修正~~
* ~~更新~~
* ~~ドキュメント追加~~
迷った場合は、[Git コミットメッセージの書き方](http://chris.beams.io/posts/git-commit/) を参照してください。
#### コミット内容
コミット内容は、1つのコミットに含まれるすべての内容の変更を表します。1つのコミットに、他のコミットの助けを借りずにレビュアーが完全にレビューできる内容を含めるのが最善です。言い換えれば、1つのコミットの内容は CI を通過でき、コードの混乱を避けることができます。簡単に言えば、次の3つの小さなルールを覚えておく必要があります
* コミットで非常に大きな変更を避ける;
* 各コミットが完全でレビュー可能であること。
* コミット時に git config(`user.name`、`user.email`)を確認して、それが GitHub ID に関連付けられていることを確認します。
```bash
git config --get user.name
git config --get user.email
```
* pr を提出する際には、'changes/' フォルダーの下の XXX.md ファイルに現在の変更の簡単な説明を追加してください。
さらに、コード変更部分では、すべての貢献者が Higress の [コードスタイル](#コードスタイル) を読むことをお勧めします。
コミットメッセージやコミット内容に関係なく、コードレビューに重点を置いています。
### PR 説明
PR は Higress プロジェクトファイルを変更する唯一の方法です。レビュアーが目的をよりよく理解できるようにするために、PR 説明は詳細すぎることはありません。貢献者には、[PR テンプレート](./.github/PULL_REQUEST_TEMPLATE.md) に従ってプルリクエストを完了することを奨励します。
### 開発前の準備
```shell
make prebuild && go mod tidy
```
## テストケースの貢献
テストケースは歓迎されます。現在、Higress の機能テストケースが高優先度です。
* 単体テストの場合、同じモジュールの test ディレクトリに xxxTest.go という名前のテストファイルを作成する必要があります。
* 統合テストの場合、統合テストを test ディレクトリに配置できます。
//TBD
## 何かを手伝うための参加
GitHub を Higress の協力の主要な場所として選択しました。したがって、Higress の最新の更新は常にここにあります。PR を通じた貢献は明確な助けの方法ですが、他の方法も呼びかけています。
* 可能であれば、他の人の質問に返信する;
* 他のユーザーの問題を解決するのを手伝う;
* 他の人の PR 設計をレビューするのを手伝う;
* 他の人の PR のコードをレビューするのを手伝う;
* Higress について議論して、物事を明確にする;
* GitHub 以外で Higress 技術を宣伝する;
* Higress に関するブログを書くなど。
## コードスタイル
//TBD
要するに、**どんな助けも貢献です。**


@@ -72,17 +72,17 @@ go.test.coverage: prebuild
.PHONY: build
build: prebuild $(OUT)
GOPROXY=$(GOPROXY) GOOS=$(GOOS_LOCAL) GOARCH=$(GOARCH_LOCAL) LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh $(OUT)/ $(HIGRESS_BINARIES)
GOPROXY="$(GOPROXY)" GOOS=$(GOOS_LOCAL) GOARCH=$(GOARCH_LOCAL) LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh $(OUT)/ $(HIGRESS_BINARIES)
.PHONY: build-linux
build-linux: prebuild $(OUT)
GOPROXY=$(GOPROXY) GOOS=linux GOARCH=$(GOARCH_LOCAL) LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh $(OUT_LINUX)/ $(HIGRESS_BINARIES)
GOPROXY="$(GOPROXY)" GOOS=linux GOARCH=$(GOARCH_LOCAL) LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh $(OUT_LINUX)/ $(HIGRESS_BINARIES)
$(AMD64_OUT_LINUX)/higress:
GOPROXY=$(GOPROXY) GOOS=linux GOARCH=amd64 LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh ./out/linux_amd64/ $(HIGRESS_BINARIES)
GOPROXY="$(GOPROXY)" GOOS=linux GOARCH=amd64 LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh ./out/linux_amd64/ $(HIGRESS_BINARIES)
$(ARM64_OUT_LINUX)/higress:
GOPROXY=$(GOPROXY) GOOS=linux GOARCH=arm64 LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh ./out/linux_arm64/ $(HIGRESS_BINARIES)
GOPROXY="$(GOPROXY)" GOOS=linux GOARCH=arm64 LDFLAGS=$(RELEASE_LDFLAGS) tools/hack/gobuild.sh ./out/linux_arm64/ $(HIGRESS_BINARIES)
.PHONY: build-hgctl
build-hgctl: prebuild $(OUT)
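The quoting added to `$(GOPROXY)` in the hunk above guards against shell word splitting if the value ever contains whitespace. A quick illustration with a hypothetical value:

```shell
# Hypothetical GOPROXY value containing a space
val='https://mirror.example.com,direct extra'

set -- GOPROXY=$val      # unquoted: split into multiple words
unquoted_words=$#

set -- GOPROXY="$val"    # quoted: passed through as a single word
quoted_words=$#

echo "$unquoted_words vs $quoted_words"
```

Typical comma-separated `GOPROXY` values contain no spaces, so the unquoted form usually works by accident; the quotes simply make the Makefile robust to any value.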
@@ -221,11 +221,15 @@ clean-higress: ## Cleans all the intermediate files and folders previously gener
rm -rf $(DIRS_TO_CLEAN)
clean-istio:
rm -rf external/api
rm -rf external/client-go
rm -rf external/istio
rm -rf external/pkg
clean-gateway: clean-istio
rm -rf external/envoy
rm -rf external/proxy
rm -rf external/go-control-plane
rm -rf external/package/envoy.tar.gz
clean-env:
@@ -284,6 +288,8 @@ delete-cluster: $(tools/kind) ## Delete kind cluster.
.PHONY: kube-load-image
kube-load-image: $(tools/kind) ## Install the Higress image to a kind cluster using the provided $IMAGE and $TAG.
tools/hack/kind-load-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/higress $(TAG)
tools/hack/docker-pull-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/pilot $(ISTIO_LATEST_IMAGE_TAG)
tools/hack/docker-pull-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/gateway $(ENVOY_LATEST_IMAGE_TAG)
tools/hack/docker-pull-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/dubbo-provider-demo 0.0.3-x86
tools/hack/docker-pull-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/nacos-standlone-rc3 1.0.0-RC3
tools/hack/docker-pull-image.sh docker.io/hashicorp/consul 1.16.0
@@ -294,6 +300,7 @@ kube-load-image: $(tools/kind) ## Install the Higress image to a kind cluster us
tools/hack/docker-pull-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/echo-server v1.0
tools/hack/docker-pull-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/echo-body 1.0.0
tools/hack/docker-pull-image.sh openpolicyagent/opa latest
tools/hack/docker-pull-image.sh curlimages/curl latest
tools/hack/docker-pull-image.sh registry.cn-hangzhou.aliyuncs.com/2456868764/httpbin 1.0.2
tools/hack/docker-pull-image.sh registry.cn-hangzhou.aliyuncs.com/hinsteny/nacos-standlone-rc3 1.0.0-RC3
tools/hack/kind-load-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/dubbo-provider-demo 0.0.3-x86
@@ -306,6 +313,7 @@ kube-load-image: $(tools/kind) ## Install the Higress image to a kind cluster us
tools/hack/kind-load-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/echo-server v1.0
tools/hack/kind-load-image.sh higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/echo-body 1.0.0
tools/hack/kind-load-image.sh openpolicyagent/opa latest
tools/hack/kind-load-image.sh curlimages/curl latest
tools/hack/kind-load-image.sh registry.cn-hangzhou.aliyuncs.com/2456868764/httpbin 1.0.2
tools/hack/kind-load-image.sh registry.cn-hangzhou.aliyuncs.com/hinsteny/nacos-standlone-rc3 1.0.0-RC3


@@ -1,3 +1,4 @@
<a name="readme-top"></a>
<h1 align="center">
<img src="https://img.alicdn.com/imgextra/i2/O1CN01NwxLDd20nxfGBjxmZ_!!6000000006895-2-tps-960-290.png" alt="Higress" width="240" height="72.5">
<br>
@@ -9,7 +10,7 @@
[![license](https://img.shields.io/github/license/alibaba/higress.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[**官网**](https://higress.cn/) &nbsp; |
&nbsp; [**文档**](https://higress.cn/docs/latest/user/quickstart/) &nbsp; |
&nbsp; [**文档**](https://higress.cn/docs/latest/overview/what-is-higress/) &nbsp; |
&nbsp; [**博客**](https://higress.cn/blog/) &nbsp; |
&nbsp; [**电子书**](https://higress.cn/docs/ebook/wasm14/) &nbsp; |
&nbsp; [**开发指引**](https://higress.cn/docs/latest/dev/architecture/) &nbsp; |
@@ -17,18 +18,19 @@
<p>
<a href="README_EN.md"> English <a/> | 中文
<a href="README_EN.md"> English <a/>| 中文 | <a href="README_JP.md"> 日本語 <a/>
</p>
Higress 是基于阿里内部多年的 Envoy Gateway 实践沉淀,以开源 [Istio](https://github.com/istio/istio) 与 [Envoy](https://github.com/envoyproxy/envoy) 为核心构建的云原生 API 网关。
Higress 是一款云原生 API 网关,内核基于 Istio 和 Envoy,可以用 Go/Rust/JS 等编写 Wasm 插件,提供了数十个现成的通用插件,以及开箱即用的控制台(demo 点[这里](http://demo.higress.io/))
Higress 在阿里内部作为 AI 网关,承载了通义千问 APP、百炼大模型 API、机器学习 PAI 平台等 AI 业务的流量
Higress 在阿里内部为解决 Tengine reload 对长连接业务有损,以及 gRPC/Dubbo 负载均衡能力不足而诞生
Higress 能够用统一的协议对接国内外所有 LLM 模型厂商,同时具备丰富的 AI 可观测、多模型负载均衡/fallback、AI token 流控、AI 缓存等能力
阿里云基于 Higress 构建了云原生 API 网关产品,为大量企业客户提供 99.99% 的网关高可用保障服务能力
![](https://img.alicdn.com/imgextra/i1/O1CN01fNnhCp1cV8mYPRFeS_!!6000000003605-0-tps-1080-608.jpg)
Higress 基于 AI 网关能力,支撑了通义千问 APP、百炼大模型 API、机器学习 PAI 平台等 AI 业务。同时服务国内头部的 AIGC 企业(如零一万物),以及 AI 产品(如 FastGPT)
![](https://img.alicdn.com/imgextra/i2/O1CN011AbR8023V8R5N0HcA_!!6000000007260-2-tps-1080-606.png)
## Summary
@@ -58,32 +60,45 @@ docker run -d --rm --name higress-ai -v ${PWD}:/data \
- 8080 端口:网关 HTTP 协议入口
- 8443 端口:网关 HTTPS 协议入口
**Higress 的所有 Docker 镜像都一直使用自己独享的仓库,不受 Docker Hub 境内不可访问的影响**
**Higress 的所有 Docker 镜像都一直使用自己独享的仓库,不受 Docker Hub 境内访问受限的影响**
K8s 下使用 Helm 部署等其他安装方式可以参考官网 [Quick Start 文档](https://higress.cn/docs/latest/user/quickstart/)。
如果您是在云上部署,生产环境推荐使用[企业版](https://higress.io/cloud/),开发测试可以使用下面一键部署社区版:
[![Deploy on AlibabaCloud ComputeNest](https://service-info-public.oss-cn-hangzhou.aliyuncs.com/computenest.svg)](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Higress社区版)
## 使用场景
- **AI 网关**:
Higress 提供了一站式的 AI 插件集,可以增强依赖 AI 能力业务的稳定性、灵活性、可观测性,使得业务与 AI 的集成更加便捷和高效。
Higress 能够用统一的协议对接国内外所有 LLM 模型厂商,同时具备丰富的 AI 可观测、多模型负载均衡/fallback、AI token 流控、AI 缓存等能力:
![](https://img.alicdn.com/imgextra/i1/O1CN01fNnhCp1cV8mYPRFeS_!!6000000003605-0-tps-1080-608.jpg)
- **Kubernetes Ingress 网关**:
Higress 可以作为 K8s 集群的 Ingress 入口网关, 并且兼容了大量 K8s Nginx Ingress 的注解,可以从 K8s Nginx Ingress 快速平滑迁移到 Higress。
支持 [Gateway API](https://gateway-api.sigs.k8s.io/) 标准,支持用户从 Ingress API 平滑迁移到 Gateway API。
相比 ingress-nginx,资源开销大幅下降,路由变更生效速度有十倍提升:
![](https://img.alicdn.com/imgextra/i1/O1CN01bhEtb229eeMNBWmdP_!!6000000008093-2-tps-750-547.png)
![](https://img.alicdn.com/imgextra/i1/O1CN01bqRets1LsBGyitj4S_!!6000000001354-2-tps-887-489.png)
- **微服务网关**:
Higress 可以作为微服务网关, 能够对接多种类型的注册中心发现服务配置路由,例如 Nacos, ZooKeeper, Consul, Eureka 等。
并且深度集成了 [Dubbo](https://github.com/apache/dubbo), [Nacos](https://github.com/alibaba/nacos), [Sentinel](https://github.com/alibaba/Sentinel) 等微服务技术栈,基于 Envoy C++ 网关内核的出色性能,相比传统 Java 类微服务网关,可以显著降低资源使用率,减少成本。
![](https://img.alicdn.com/imgextra/i4/O1CN01v4ZbCj1dBjePSMZ17_!!6000000003698-0-tps-1613-926.jpg)
- **安全防护网关**:
Higress 可以作为安全防护网关, 提供 WAF 的能力,并且支持多种认证鉴权策略,例如 key-auth, hmac-auth, jwt-auth, basic-auth, oidc 等。
Higress 可以作为安全防护网关, 提供 WAF 的能力,并且支持多种认证鉴权策略,例如 key-auth, hmac-auth, jwt-auth, basic-auth, oidc 等。
## 核心优势
@@ -165,7 +180,7 @@ K8s 下使用 Helm 部署等其他安装方式可以参考官网 [Quick Start
### 交流群
![image](https://img.alicdn.com/imgextra/i2/O1CN01qPd7Ix1uZPVEsWjWp_!!6000000006051-0-tps-720-405.jpg)
![image](https://img.alicdn.com/imgextra/i2/O1CN01BkopaB22ZsvamFftE_!!6000000007135-0-tps-720-405.jpg)
### 技术分享
@@ -176,4 +191,20 @@ K8s 下使用 Helm 部署等其他安装方式可以参考官网 [Quick Start
### 关联仓库
- Higress 控制台:https://github.com/higress-group/higress-console
- Higress 独立运行版:https://github.com/higress-group/higress-standalone
- Higress 独立运行版:https://github.com/higress-group/higress-standalone
### 贡献者
<a href="https://github.com/alibaba/higress/graphs/contributors">
<img alt="contributors" src="https://contrib.rocks/image?repo=alibaba/higress"/>
</a>
### Star History
[![Star History](https://api.star-history.com/svg?repos=alibaba/higress&type=Date)](https://star-history.com/#alibaba/higress&Date)
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ 返回顶部 ↑
</a>
</p>


@@ -1,3 +1,4 @@
<a name="readme-top"></a>
<h1 align="center">
<img src="https://img.alicdn.com/imgextra/i2/O1CN01NwxLDd20nxfGBjxmZ_!!6000000006895-2-tps-960-290.png" alt="Higress" width="240" height="72.5">
<br>
@@ -15,7 +16,7 @@
<p>
English | <a href="README.md">中文<a/>
English | <a href="README.md">中文<a/> | <a href="README_JP.md">日本語<a/>
</p>
Higress is a cloud-native api gateway based on Alibaba's internal gateway practices.
@@ -47,7 +48,7 @@ Powered by [Istio](https://github.com/istio/istio) and [Envoy](https://github.co
Higress can function as a microservice gateway, which can discovery microservices from various service registries, such as Nacos, ZooKeeper, Consul, Eureka, etc.
It deeply integrates of [Dubbo](https://github.com/apache/dubbo), [Nacos](https://github.com/alibaba/nacos), [Sentinel](https://github.com/alibaba/Sentinel) and other microservice technology stacks.
It deeply integrates with [Dubbo](https://github.com/apache/dubbo), [Nacos](https://github.com/alibaba/nacos), [Sentinel](https://github.com/alibaba/Sentinel) and other microservice technology stacks.
- **Security gateway**:
@@ -57,7 +58,7 @@ Powered by [Istio](https://github.com/istio/istio) and [Envoy](https://github.co
- **Easy to use**
Provide one-stop gateway solutions for traffic scheduling, service management, and security protection, support Console, K8s Ingress, and Gateway API configuration methods, and also support HTTP to Dubbo protocol conversion, and easily complete protocol mapping configuration.
Provides one-stop gateway solutions for traffic scheduling, service management, and security protection, support Console, K8s Ingress, and Gateway API configuration methods, and also support HTTP to Dubbo protocol conversion, and easily complete protocol mapping configuration.
- **Easy to expand**
@@ -73,7 +74,7 @@ Powered by [Istio](https://github.com/istio/istio) and [Envoy](https://github.co
- **Security**
Provides JWT, OIDC, custom authentication and authentication, deeply integrates open source web application firewall.
Provides JWT, OIDC, custom authentication and authentication, deeply integrates open-source web application firewall.
## Community
@@ -81,9 +82,25 @@ Powered by [Istio](https://github.com/istio/istio) and [Envoy](https://github.co
### Thanks
Higress would not be possible without the valuable open-source work of projects in the community. We would like to extend a special thank-you to Envoy and Istio.
Higress would not be possible without the valuable open-source work of projects in the community. We would like to extend a special thank you to Envoy and Istio.
### Related Repositories
- Higress Console: https://github.com/higress-group/higress-console
- Higress Standalone: https://github.com/higress-group/higress-standalone
- Higress Standalone: https://github.com/higress-group/higress-standalone
### Contributors
<a href="https://github.com/alibaba/higress/graphs/contributors">
<img alt="contributors" src="https://contrib.rocks/image?repo=alibaba/higress"/>
</a>
### Star History
[![Star History Chart](https://api.star-history.com/svg?repos=alibaba/higress&type=Date)](https://star-history.com/#alibaba/higress&Date)
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>

README_JP.md Normal file

@@ -0,0 +1,206 @@
<a name="readme-top"></a>
<h1 align="center">
<img src="https://img.alicdn.com/imgextra/i2/O1CN01NwxLDd20nxfGBjxmZ_!!6000000006895-2-tps-960-290.png" alt="Higress" width="240" height="72.5">
<br>
AIゲートウェイ
</h1>
<h4 align="center"> AIネイティブAPIゲートウェイ </h4>
[![Build Status](https://github.com/alibaba/higress/actions/workflows/build-and-test.yaml/badge.svg?branch=main)](https://github.com/alibaba/higress/actions)
[![license](https://img.shields.io/github/license/alibaba/higress.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[**公式サイト**](https://higress.cn/) &nbsp; |
&nbsp; [**ドキュメント**](https://higress.cn/docs/latest/overview/what-is-higress/) &nbsp; |
&nbsp; [**ブログ**](https://higress.cn/blog/) &nbsp; |
&nbsp; [**電子書籍**](https://higress.cn/docs/ebook/wasm14/) &nbsp; |
&nbsp; [**開発ガイド**](https://higress.cn/docs/latest/dev/architecture/) &nbsp; |
&nbsp; [**AIプラグイン**](https://higress.cn/plugin/) &nbsp;
<p>
<a href="README_EN.md"> English <a/> | <a href="README.md">中文<a/> | 日本語
</p>
Higress is a cloud-native API gateway built on Istio and Envoy. It can be extended with Wasm plugins written in Go, Rust, JS, and more, and ships with dozens of ready-made general-purpose plugins and an out-of-the-box console (try the demo [here](http://demo.higress.io/)).
Higress was born inside Alibaba to solve the problems of Tengine reloads disrupting long-lived connections and insufficient load-balancing capabilities for gRPC/Dubbo.
Alibaba Cloud builds its cloud-native API gateway product on top of Higress, providing many enterprise customers with a 99.99% gateway availability SLA.
Building on its AI gateway capabilities, Higress supports AI businesses such as the Tongyi Qianwen APP, the Bailian large model API, and the Machine Learning PAI platform. It also serves leading domestic AIGC companies such as ZeroOne and AI products such as FastGPT.
![](https://img.alicdn.com/imgextra/i2/O1CN011AbR8023V8R5N0HcA_!!6000000007260-2-tps-1080-606.png)
## Table of Contents
- [**Quick Start**](#quick-start)
- [**Features**](#features)
- [**Use Cases**](#use-cases)
- [**Key Advantages**](#key-advantages)
- [**Community**](#community)
## Quick Start
Higress can be started with just Docker, making it convenient for individual developers to set it up locally for learning or to host a simple site.
```bash
# Create a working directory
mkdir higress; cd higress
# Start Higress; configuration files will be written to the working directory
docker run -d --rm --name higress-ai -v ${PWD}:/data \
        -p 8001:8001 -p 8080:8080 -p 8443:8443 \
        higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/all-in-one:latest
```
The listening ports are described below:
- Port 8001: entry point of the Higress UI console
- Port 8080: HTTP entry point of the gateway
- Port 8443: HTTPS entry point of the gateway
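The same container and port mappings can also be written declaratively. Below is a minimal sketch of an equivalent `docker-compose.yml`; the file itself is not part of the Higress docs, only the image, ports, and volume are taken from the quick start command:

```yaml
# Hypothetical docker-compose.yml mirroring the docker run command above
services:
  higress-ai:
    image: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/all-in-one:latest
    ports:
      - "8001:8001"   # Higress UI console
      - "8080:8080"   # gateway HTTP
      - "8443:8443"   # gateway HTTPS
    volumes:
      - ./:/data      # configuration is persisted to the working directory
```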
**All Higress Docker images are hosted in a dedicated registry, so they are unaffected by Docker Hub being inaccessible from mainland China.**
For other installation methods, such as Helm deployment on K8s, see the [Quick Start documentation](https://higress.cn/docs/latest/user/quickstart/) on the official site.
## Use Cases
- **AI Gateway**:
Higress connects to all LLM providers, both domestic and international, over a unified protocol, and offers rich AI observability, multi-model load balancing/fallback, AI token rate limiting, AI caching, and more.
![](https://img.alicdn.com/imgextra/i1/O1CN01fNnhCp1cV8mYPRFeS_!!6000000003605-0-tps-1080-608.jpg)
- **Kubernetes Ingress Gateway**:
Higress can serve as the Ingress gateway of a K8s cluster and is compatible with a large number of K8s Nginx Ingress annotations, enabling a smooth migration from K8s Nginx Ingress to Higress.
It supports the [Gateway API](https://gateway-api.sigs.k8s.io/) standard, allowing users to migrate smoothly from the Ingress API to the Gateway API.
Compared with ingress-nginx, it consumes significantly fewer resources and propagates route changes ten times faster.
![](https://img.alicdn.com/imgextra/i1/O1CN01bhEtb229eeMNBWmdP_!!6000000008093-2-tps-750-547.png)
![](https://img.alicdn.com/imgextra/i1/O1CN01bqRets1LsBGyitj4S_!!6000000001354-2-tps-887-489.png)
- **Microservice Gateway**:
Higress can serve as a microservice gateway, discovering services from various service registries such as Nacos, ZooKeeper, Consul, and Eureka and configuring routes to them.
It is also deeply integrated with microservice technology stacks such as [Dubbo](https://github.com/apache/dubbo), [Nacos](https://github.com/alibaba/nacos), and [Sentinel](https://github.com/alibaba/Sentinel). Thanks to the excellent performance of the Envoy C++ gateway core, it significantly reduces resource usage and cost compared with traditional Java-based microservice gateways.
![](https://img.alicdn.com/imgextra/i4/O1CN01v4ZbCj1dBjePSMZ17_!!6000000003698-0-tps-1613-926.jpg)
- **Security Gateway**:
Higress can serve as a security gateway, providing WAF capabilities and supporting various authentication strategies such as key-auth, hmac-auth, jwt-auth, basic-auth, and oidc.
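The Kubernetes Ingress gateway scenario can be sketched with a minimal Ingress resource. Everything below except the `higress` ingress class (the default class Higress watches) is an illustrative assumption: the resource name, host, backend service, and port are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-route              # hypothetical name
spec:
  ingressClassName: higress     # default ingress class watched by Higress
  rules:
    - host: demo.example.com    # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service  # hypothetical backend service
                port:
                  number: 80
```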
## Key Advantages
- **Production Grade**
Derived from an internal Alibaba product with more than two years of production validation, supporting large-scale scenarios with hundreds of thousands of requests per second.
Completely eliminates the traffic jitter caused by Nginx reloads: configuration changes take effect in milliseconds without impacting business traffic, making it especially suitable for long-lived connection scenarios such as AI workloads.
- **Streaming Processing**
Supports fully streaming processing of request/response bodies; Wasm plugins can customize the handling of streaming protocols such as SSE (Server-Sent Events).
Significantly reduces memory usage in high-bandwidth scenarios such as AI workloads.
- **Extensibility**
Offers a rich official plugin library covering AI, traffic management, security protection, and other common capabilities, meeting the needs of more than 90% of business scenarios.
Wasm plugins are the primary extension mechanism: sandbox isolation ensures memory safety, multiple programming languages are supported, plugin versions can be upgraded independently, and gateway logic can be hot-updated without affecting traffic.
- **Secure and Easy to Use**
Based on the Ingress API and Gateway API standards, with an out-of-the-box UI console and ready-to-use WAF protection and IP/Cookie CC protection plugins.
Supports automatic issuance and renewal of Let's Encrypt certificates, can be deployed without K8s, and starts with a single Docker command, which is convenient for individual developers.
## Features
### AI Gateway Demo
[Migrate from OpenAI to another large model in 30 seconds](https://www.bilibili.com/video/BV1dT421a7w7/?spm_id_from=333.788.recommend_more_video.14)
### Higress UI Console
- **Rich observability**
Provides out-of-the-box observability; for Grafana and Prometheus, you can use the built-in instances or connect your own.
![](./docs/images/monitor.gif)
- **Plugin extension mechanism**
A variety of plugins are provided officially. Users can also [develop their own plugins](./plugins/wasm-go), build them as Docker/OCI images, and configure them in the console to modify plugin logic in real time, with hot updates that take effect without affecting traffic.
![](./docs/images/plugin.gif)
- **Multiple service discovery mechanisms**
Provides K8s Service discovery by default; registries such as Nacos/ZooKeeper can be connected through configuration, and services can also be discovered from static IPs or DNS.
![](./docs/images/service-source.gif)
- **Domains and certificates**
Create and manage TLS certificates and configure HTTP/HTTPS behavior per domain. Domain policies allow plugins to be applied to a specific domain.
![](./docs/images/domain.gif)
- **Rich routing capabilities**
Services discovered through the mechanisms above appear in the service list. When creating a route, select a domain, define the route matching rules, and choose a target service. Route policies allow plugins to be applied to a specific route.
![](./docs/images/route-service.gif)
## Community
### Acknowledgements
Higress would not have been possible without the open-source efforts of Envoy and Istio. We offer these projects our most sincere respect.
### Discussion Group
![image](https://img.alicdn.com/imgextra/i2/O1CN01BkopaB22ZsvamFftE_!!6000000007135-0-tps-720-405.jpg)
### Tech Talks
WeChat official account:
![](https://img.alicdn.com/imgextra/i1/O1CN01WnQt0q1tcmqVDU73u_!!6000000005923-0-tps-258-258.jpg)
### Related Repositories
- Higress Console: https://github.com/higress-group/higress-console
- Higress Standalone: https://github.com/higress-group/higress-standalone
### Contributors
<a href="https://github.com/alibaba/higress/graphs/contributors">
<img alt="contributors" src="https://contrib.rocks/image?repo=alibaba/higress"/>
</a>
### Star History
[![Star History Chart](https://api.star-history.com/svg?repos=alibaba/higress&type=Date)](https://star-history.com/#alibaba/higress&Date)
<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>


@@ -4,6 +4,7 @@
| Version | Supported |
| ------- | ------------------ |
| 2.x.x | :white_check_mark: |
| 1.x.x | :white_check_mark: |
| < 1.0.0 | :x: |


@@ -1 +1 @@
v2.0.1
v2.0.3


@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 2.0.1
appVersion: 2.0.3
description: Helm chart for deploying higress gateways
icon: https://higress.io/img/higress_logo_small.png
home: http://higress.io/
@@ -10,4 +10,4 @@ name: higress-core
sources:
- http://github.com/alibaba/higress
type: application
version: 2.0.1
version: 2.0.3


@@ -1,3 +1,4 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:


@@ -116,6 +116,12 @@ data:
{{- $existingData = index $existingConfig.data "higress" | default "{}" | fromYaml }}
{{- end }}
{{- $newData := dict }}
{{- if hasKey .Values "upstream" }}
{{- $_ := set $newData "upstream" .Values.upstream }}
{{- end }}
{{- if hasKey .Values "downstream" }}
{{- $_ := set $newData "downstream" .Values.downstream }}
{{- end }}
{{- if and (hasKey .Values "tracing") .Values.tracing.enable }}
{{- $_ := set $newData "tracing" .Values.tracing }}
{{- end }}


@@ -129,3 +129,10 @@ rules:
- apiGroups: ["networking.internal.knative.dev"]
resources: ["ingresses/status"]
verbs: ["get","patch","update"]
# gateway api need
- apiGroups: ["apps"]
verbs: [ "get", "watch", "list", "update", "patch", "create", "delete" ]
resources: [ "deployments" ]
- apiGroups: [""]
verbs: [ "get", "watch", "list", "update", "patch", "create", "delete" ]
resources: [ "serviceaccounts"]


@@ -69,6 +69,12 @@ spec:
fieldPath: spec.serviceAccountName
- name: DOMAIN_SUFFIX
value: {{ .Values.global.proxy.clusterDomain }}
- name: GATEWAY_NAME
value: {{ include "gateway.name" . }}
- name: PILOT_ENABLE_GATEWAY_API
value: "{{ .Values.global.enableGatewayAPI }}"
- name: PILOT_ENABLE_ALPHA_GATEWAY_API
value: "{{ .Values.global.enableGatewayAPI }}"
{{- if .Values.controller.env }}
{{- range $key, $val := .Values.controller.env }}
- name: {{ $key }}
@@ -215,14 +221,14 @@ spec:
- name: HIGRESS_ENABLE_ISTIO_API
value: "true"
{{- end }}
{{- if .Values.global.enableGatewayAPI }}
- name: PILOT_ENABLE_GATEWAY_API
value: "true"
value: "false"
- name: PILOT_ENABLE_ALPHA_GATEWAY_API
value: "false"
- name: PILOT_ENABLE_GATEWAY_API_STATUS
value: "true"
value: "false"
- name: PILOT_ENABLE_GATEWAY_API_DEPLOYMENT_CONTROLLER
value: "false"
{{- end }}
{{- if not .Values.global.enableHigressIstio }}
- name: CUSTOM_CA_CERT_NAME
value: "higress-ca-root-cert"


@@ -0,0 +1,22 @@
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: {{ include "gateway.name" . }}-global-custom-response
namespace: {{ .Release.Namespace }}
labels:
{{- include "gateway.labels" . | nindent 4}}
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
patch:
operation: INSERT_FIRST
value:
name: envoy.filters.http.custom_response
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.custom_response.v3.CustomResponse


@@ -1,6 +1,8 @@
{{- if .Values.global.ingressClass }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: {{ .Values.global.ingressClass }}
spec:
controller: higress.io/higress-controller
controller: higress.io/higress-controller
{{- end }}


@@ -10,7 +10,7 @@ global:
onDemandRDS: false
hostRDSMergeSubset: false
onlyPushRouteCluster: true
# IngressClass filters which ingress resources the higress controller watches.
# -- IngressClass filters which ingress resources the higress controller watches.
# The default ingress class is higress.
# There are some special cases for special ingress class.
# 1. When the ingress class is set as nginx, the higress controller will watch ingress
@@ -18,28 +18,40 @@ global:
# 2. When the ingress class is set empty, the higress controller will watch all ingress
# resources in the k8s cluster.
ingressClass: "higress"
# -- If not empty, Higress Controller will only watch resources in the specified namespace.
# When isolating different business systems using K8s namespace,
# if each namespace requires a standalone gateway instance,
# this parameter can be used to confine the Ingress watching of Higress within the given namespace.
watchNamespace: ""
# -- Whether to disable HTTP/2 in ALPN
disableAlpnH2: false
# -- If true, Higress Controller will update the status field of Ingress resources.
# When migrating from Nginx Ingress, in order to avoid status field of Ingress objects being overwritten,
# this parameter needs to be set to false,
# so Higress won't write the entry IP to the status field of the corresponding Ingress object.
enableStatus: true
# whether to use autoscaling/v2 template for HPA settings
# -- whether to use autoscaling/v2 template for HPA settings
# for internal usage only, not to be configured by users.
autoscalingv2API: true
local: false # When deploying to a local cluster (e.g.: kind cluster), set this to true.
# -- When deploying to a local cluster (e.g.: kind cluster), set this to true.
local: false
kind: false # Deprecated. Please use "global.local" instead. Will be removed later.
enableIstioAPI: false
# -- If true, Higress Controller will monitor istio resources as well
enableIstioAPI: true
# -- If true, Higress Controller will monitor Gateway API resources as well
enableGatewayAPI: false
# Deprecated
enableHigressIstio: false
# Used to locate istiod.
# -- Used to locate istiod.
istioNamespace: istio-system
# enable pod disruption budget for the control plane, which is used to
# -- enable pod disruption budget for the control plane, which is used to
# ensure Istio control plane components are gradually upgraded or recovered.
defaultPodDisruptionBudget:
enabled: false
# The values aren't mutable due to a current PodDisruptionBudget limitation
# minAvailable: 1
# A minimal set of requested resources to applied to all deployments so that
# -- A minimal set of requested resources to applied to all deployments so that
# Horizontal Pod Autoscaler will be able to function (if set).
# Each component can overwrite these default values by adding its own resources
# block in the relevant section below and setting the desired resources values.
@@ -51,16 +63,16 @@ global:
# cpu: 100m
# memory: 128Mi
# Default hub for Istio images.
# -- Default hub for Istio images.
# Releases are published to docker hub under 'istio' project.
# Dev builds from prow are on gcr.io
hub: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress
# Specify image pull policy if default behavior isn't desired.
# -- Specify image pull policy if default behavior isn't desired.
# Default behavior: latest images will be Always else IfNotPresent.
imagePullPolicy: ""
# ImagePullSecrets for all ServiceAccount, list of secrets in the same namespace
# -- ImagePullSecrets for all ServiceAccount, list of secrets in the same namespace
# to use for pulling any images in pods that reference this ServiceAccount.
# For components that don't use ServiceAccounts (i.e. grafana, servicegraph, tracing)
# ImagePullSecrets will be added to the corresponding Deployment(StatefulSet) objects.
@@ -68,14 +80,14 @@ global:
imagePullSecrets: []
# - private-registry-key
# Enabled by default in master for maximising testing.
# -- Enabled by default in master for maximising testing.
istiod:
enableAnalysis: false
# To output all istio components logs in json format by adding --log_as_json argument to each container argument
# -- To output all istio components logs in json format by adding --log_as_json argument to each container argument
logAsJson: false
# Comma-separated minimum per-scope logging level of messages to output, in the form of <scope>:<level>,<scope>:<level>
# -- Comma-separated minimum per-scope logging level of messages to output, in the form of <scope>:<level>,<scope>:<level>
# The control plane has different scopes depending on component, but can configure default log level across all components
# If empty, default scope and level will be used as configured in code
logging:
@@ -83,11 +95,11 @@ global:
omitSidecarInjectorConfigMap: false
# Whether to restrict the applications namespace the controller manages;
# -- Whether to restrict the applications namespace the controller manages;
# If not set, controller watches all namespaces
oneNamespace: false
# Configure whether Operator manages webhook configurations. The current behavior
# -- Configure whether Operator manages webhook configurations. The current behavior
# of Istiod is to manage its own webhook configurations.
# When this option is set as true, Istio Operator, instead of webhooks, manages the
# webhook configurations. When this option is set as false, webhooks manage their
@@ -106,7 +118,7 @@ global:
#- global
#- "{{ valueOrDefault .DeploymentMeta.Namespace \"default\" }}.global"
# Kubernetes >=v1.11.0 will create two PriorityClass, including system-cluster-critical and
# -- Kubernetes >=v1.11.0 will create two PriorityClass, including system-cluster-critical and
# system-node-critical, it is better to configure this in order to make sure your Istio pods
# will not be killed because of low priority class.
# Refer to https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
@@ -116,18 +128,18 @@ global:
proxy:
image: proxyv2
# This controls the 'policy' in the sidecar injector.
# -- This controls the 'policy' in the sidecar injector.
autoInject: enabled
# CAUTION: It is important to ensure that all Istio helm charts specify the same clusterDomain value
# -- CAUTION: It is important to ensure that all Istio helm charts specify the same clusterDomain value
# cluster domain. Default value is "cluster.local".
clusterDomain: "cluster.local"
# Per Component log level for proxy, applies to gateways and sidecars. If a component level is
# -- Per Component log level for proxy, applies to gateways and sidecars. If a component level is
# not set, then the global "logLevel" will be used.
componentLogLevel: "misc:error"
# If set, newly injected sidecars will have core dumps enabled.
# -- If set, newly injected sidecars will have core dumps enabled.
enableCoreDump: false
# istio ingress capture allowlist
@@ -136,7 +148,7 @@ global:
excludeInboundPorts: ""
includeInboundPorts: "*"
# istio egress capture allowlist
# -- istio egress capture allowlist
# https://istio.io/docs/tasks/traffic-management/egress.html#calling-external-services-directly
# example: includeIPRanges: "172.30.0.0/16,172.20.0.0/16"
# would only capture egress traffic on those two IP Ranges, all other outbound traffic would
@@ -146,29 +158,29 @@ global:
includeOutboundPorts: ""
excludeOutboundPorts: ""
# Log level for proxy, applies to gateways and sidecars.
# -- Log level for proxy, applies to gateways and sidecars.
# Expected values are: trace|debug|info|warning|error|critical|off
logLevel: warning
#If set to true, istio-proxy container will have privileged securityContext
# -- If set to true, istio-proxy container will have privileged securityContext
privileged: false
# The number of successive failed probes before indicating readiness failure.
# -- The number of successive failed probes before indicating readiness failure.
readinessFailureThreshold: 30
# The number of successive successed probes before indicating readiness success.
# -- The number of successive successed probes before indicating readiness success.
readinessSuccessThreshold: 30
# The initial delay for readiness probes in seconds.
# -- The initial delay for readiness probes in seconds.
readinessInitialDelaySeconds: 1
# The period between readiness probes.
# -- The period between readiness probes.
readinessPeriodSeconds: 2
# The readiness timeout seconds
# -- The readiness timeout seconds
readinessTimeoutSeconds: 3
# Resources for the sidecar.
# -- Resources for the sidecar.
resources:
requests:
cpu: 100m
@@ -177,18 +189,18 @@ global:
cpu: 2000m
memory: 1024Mi
# Default port for Pilot agent health checks. A value of 0 will disable health checking.
# -- Default port for Pilot agent health checks. A value of 0 will disable health checking.
statusPort: 15020
# Specify which tracer to use. One of: lightstep, datadog, stackdriver.
# -- Specify which tracer to use. One of: lightstep, datadog, stackdriver.
# If using stackdriver tracer outside GCP, set env GOOGLE_APPLICATION_CREDENTIALS to the GCP credential file.
tracer: ""
# Controls if sidecar is injected at the front of the container list and blocks the start of the other containers until the proxy is ready
# -- Controls if sidecar is injected at the front of the container list and blocks the start of the other containers until the proxy is ready
holdApplicationUntilProxyStarts: false
proxy_init:
# Base name for the proxy_init container, used to configure iptables.
# -- Base name for the proxy_init container, used to configure iptables.
image: proxyv2
resources:
limits:
@@ -198,7 +210,7 @@ global:
cpu: 10m
memory: 10Mi
# configure remote pilot and istiod service and endpoint
# -- configure remote pilot and istiod service and endpoint
remotePilotAddress: ""
##############################################################################################
@@ -206,20 +218,20 @@ global:
# make sure they are consistent across your Istio helm charts #
##############################################################################################
# The customized CA address to retrieve certificates for the pods in the cluster.
# -- The customized CA address to retrieve certificates for the pods in the cluster.
# CSR clients such as the Istio Agent and ingress gateways can use this to specify the CA endpoint.
# If not set explicitly, default to the Istio discovery address.
caAddress: ""
# Configure a remote cluster data plane controlled by an external istiod.
# -- Configure a remote cluster data plane controlled by an external istiod.
# When set to true, istiod is not deployed locally and only a subset of the other
# discovery charts are enabled.
externalIstiod: false
# Configure a remote cluster as the config cluster for an external istiod.
# -- Configure a remote cluster as the config cluster for an external istiod.
configCluster: false
# Configure the policy for validating JWT.
# -- Configure the policy for validating JWT.
# Currently, two options are supported: "third-party-jwt" and "first-party-jwt".
jwtPolicy: "third-party-jwt"
@@ -241,7 +253,7 @@ global:
# of migration TBD, and it may be a disruptive operation to change the Mesh
# ID post-install.
#
# If the mesh admin does not specify a value, Istio will use the value of the
# -- If the mesh admin does not specify a value, Istio will use the value of the
# mesh's Trust Domain. The best practice is to select a proper Trust Domain
# value.
meshID: ""
@@ -275,68 +287,69 @@ global:
#
meshNetworks: {}
# Use the user-specified, secret volume mounted key and certs for Pilot and workloads.
# -- Use the user-specified, secret volume mounted key and certs for Pilot and workloads.
mountMtlsCerts: false
multiCluster:
# Set to true to connect two kubernetes clusters via their respective
# -- Set to true to connect two kubernetes clusters via their respective
# ingressgateway services when pods in each cluster cannot directly
# talk to one another. All clusters should be using Istio mTLS and must
# have a shared root CA for this model to work.
enabled: true
# Should be set to the name of the cluster this installation will run in. This is required for sidecar injection
# -- Should be set to the name of the cluster this installation will run in. This is required for sidecar injection
# to properly label proxies
clusterName: ""
# Network defines the network this cluster belong to. This name
# -- Network defines the network this cluster belong to. This name
# corresponds to the networks in the map of mesh networks.
network: ""
# Configure the certificate provider for control plane communication.
# -- Configure the certificate provider for control plane communication.
# Currently, two providers are supported: "kubernetes" and "istiod".
# As some platforms may not have kubernetes signing APIs,
# Istiod is the default
pilotCertProvider: istiod
sds:
# The JWT token for SDS and the aud field of such JWT. See RFC 7519, section 4.1.3.
# -- The JWT token for SDS and the aud field of such JWT. See RFC 7519, section 4.1.3.
# When a CSR is sent from Istio Agent to the CA (e.g. Istiod), this aud is to make sure the
# JWT is intended for the CA.
token:
aud: istio-ca
sts:
# The service port used by Security Token Service (STS) server to handle token exchange requests.
# -- The service port used by Security Token Service (STS) server to handle token exchange requests.
# Setting this port to a non-zero value enables STS server.
servicePort: 0
# Configuration for each of the supported tracers
# -- Configuration for each of the supported tracers
tracer:
# Configuration for envoy to send trace data to LightStep.
# -- Configuration for envoy to send trace data to LightStep.
# Disabled by default.
# address: the <host>:<port> of the satellite pool
# accessToken: required for sending data to the pool
#
datadog:
# Host:Port for submitting traces to the Datadog agent.
# -- Host:Port for submitting traces to the Datadog agent.
address: "$(HOST_IP):8126"
lightstep:
address: "" # example: lightstep-satellite:443
accessToken: "" # example: abcdefg1234567
# -- example: lightstep-satellite:443
address: ""
# -- example: abcdefg1234567
accessToken: ""
stackdriver:
# enables trace output to stdout.
# -- enables trace output to stdout.
debug: false
# The global default max number of message events per span.
# -- The global default max number of message events per span.
maxNumberOfMessageEvents: 200
# The global default max number of annotation events per span.
# -- The global default max number of annotation events per span.
maxNumberOfAnnotations: 200
# The global default max number of attributes per span.
# -- The global default max number of attributes per span.
maxNumberOfAttributes: 200
# Use the Mesh Control Protocol (MCP) for configuring Istiod. Requires an MCP source.
# -- Use the Mesh Control Protocol (MCP) for configuring Istiod. Requires an MCP source.
useMCP: false
# Observability (o11y) configurations
# -- Observability (o11y) configurations
o11y:
enabled: false
promtail:
@@ -350,7 +363,7 @@ global:
memory: 2Gi
securityContext: {}
# The name of the CA for workload certificates.
# -- The name of the CA for workload certificates.
# For example, when caName=GkeWorkloadCertificate, GKE workload certificates
# will be used as the certificates for workloads.
# The default value is "" and when caName="", the CA will be configured by other
@@ -359,7 +372,7 @@ global:
hub: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress
clusterName: ""
# meshConfig defines runtime configuration of components, including Istiod and istio-agent behavior
# -- meshConfig defines runtime configuration of components, including Istiod and istio-agent behavior
# See https://istio.io/docs/reference/config/istio.mesh.v1alpha1/ for all available options
meshConfig:
enablePrometheusMerge: true
@@ -370,14 +383,13 @@ meshConfig:
# and gradual adoption by setting capture only on specific workloads. It also allows
# VMs to use other DNS options, like dnsmasq or unbound.
# The namespace to treat as the administrative root namespace for Istio configuration.
# -- The namespace to treat as the administrative root namespace for Istio configuration.
# When processing a leaf namespace Istio will search for declarations in that namespace first
# and if none are found it will search in the root namespace. Any matching declaration found in the root namespace
# is processed as if it were declared in the leaf namespace.
rootNamespace:
# The trust domain corresponds to the trust root of a system
# -- The trust domain corresponds to the trust root of a system
# Refer to https://github.com/spiffe/spiffe/blob/master/standards/SPIFFE-ID.md#21-trust-domain
trustDomain: "cluster.local"
@@ -391,56 +403,57 @@ meshConfig:
gateway:
name: "higress-gateway"
# -- Number of Higress Gateway pods
replicas: 2
image: gateway
# -- Use a `DaemonSet` or `Deployment`
kind: Deployment
# The number of successive failed probes before indicating readiness failure.
# -- The number of successive failed probes before indicating readiness failure.
readinessFailureThreshold: 30
# The number of successive successed probes before indicating readiness success.
# -- The number of successive successed probes before indicating readiness success.
readinessSuccessThreshold: 1
# The initial delay for readiness probes in seconds.
# -- The initial delay for readiness probes in seconds.
readinessInitialDelaySeconds: 1
# The period between readiness probes.
# -- The period between readiness probes.
readinessPeriodSeconds: 2
# The readiness timeout seconds
# -- The readiness timeout seconds
readinessTimeoutSeconds: 3
hub: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress
tag: ""
# revision declares which revision this gateway is a part of
# -- revision declares which revision this gateway is a part of
revision: ""
rbac:
# If enabled, roles will be created to enable accessing certificates from Gateways. This is not needed
# -- If enabled, roles will be created to enable accessing certificates from Gateways. This is not needed
# when using http://gateway-api.org/.
enabled: true
serviceAccount:
# If set, a service account will be created. Otherwise, the default is used
# -- If set, a service account will be created. Otherwise, the default is used
create: true
# Annotations to add to the service account
# -- Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# -- The name of the service account to use.
# If not set, the release name is used
name: ""
# Pod environment variables
# -- Pod environment variables
env: {}
httpPort: 80
httpsPort: 443
hostNetwork: false
# Labels to apply to all resources
# -- Labels to apply to all resources
labels: {}
# Annotations to apply to all resources
# -- Annotations to apply to all resources
annotations: {}
podAnnotations:
@@ -449,14 +462,14 @@ gateway:
prometheus.io/path: "/stats/prometheus"
sidecar.istio.io/inject: "false"
# Define the security context for the pod.
# -- Define the security context for the pod.
# If unset, this will be automatically set to the minimum privileges required to bind to port 80 and 443.
# On Kubernetes 1.22+, this only requires the `net.ipv4.ip_unprivileged_port_start` sysctl.
securityContext: ~
containerSecurityContext: ~
service:
# Type of service. Set to "None" to disable the service entirely
# -- Type of service. Set to "None" to disable the service entirely
type: LoadBalancer
ports:
- name: http2
@@ -496,28 +509,29 @@ gateway:
affinity: {}
# If specified, the gateway will act as a network gateway for the given network.
# -- If specified, the gateway will act as a network gateway for the given network.
networkGateway: ""
metrics:
# If true, create PodMonitor or VMPodScrape for gateway
# -- If true, create PodMonitor or VMPodScrape for gateway
enabled: false
# provider group name for CustomResourceDefinition, can be monitoring.coreos.com or operator.victoriametrics.com
# -- provider group name for CustomResourceDefinition, can be monitoring.coreos.com or operator.victoriametrics.com
provider: monitoring.coreos.com
interval: ""
scrapeTimeout: ""
honorLabels: false
# for monitoring.coreos.com/v1.PodMonitor
# -- for monitoring.coreos.com/v1.PodMonitor
metricRelabelings: []
relabelings: []
# for operator.victoriametrics.com/v1beta1.VMPodScrape
# -- for operator.victoriametrics.com/v1beta1.VMPodScrape
metricRelabelConfigs: []
relabelConfigs: []
# some more raw podMetricsEndpoints spec
# -- some more raw podMetricsEndpoints spec
rawSpec: {}
controller:
name: "higress-controller"
# -- Number of Higress Controller pods
replicas: 1
image: higress
@@ -541,12 +555,12 @@ controller:
create: true
serviceAccount:
# Specifies whether a service account should be created
# -- Specifies whether a service account should be created
create: true
# Annotations to add to the service account
# -- Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# -- The name of the service account to use.
# -- If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
@@ -602,7 +616,7 @@ controller:
enabled: true
email: ""
## Discovery Settings
## -- Discovery Settings
pilot:
autoscaleEnabled: false
autoscaleMin: 1
@@ -614,11 +628,11 @@ pilot:
hub: higress-registry.cn-hangzhou.cr.aliyuncs.com/higress
tag: ""
# Can be a full hub/image:tag
# -- Can be a full hub/image:tag
image: pilot
traceSampling: 1.0
# Resources for a small pilot install
# -- Resources for a small pilot install
resources:
requests:
cpu: 500m
@@ -633,21 +647,21 @@ pilot:
cpu:
targetAverageUtilization: 80
# if protocol sniffing is enabled for outbound
# -- if protocol sniffing is enabled for outbound
enableProtocolSniffingForOutbound: true
# if protocol sniffing is enabled for inbound
# -- if protocol sniffing is enabled for inbound
enableProtocolSniffingForInbound: true
nodeSelector: {}
podAnnotations: {}
serviceAnnotations: {}
# You can use jwksResolverExtraRootCA to provide a root certificate
# -- You can use jwksResolverExtraRootCA to provide a root certificate
# in PEM format. This will then be trusted by pilot when resolving
# JWKS URIs.
jwksResolverExtraRootCA: ""
# This is used to set the source of configuration for
# -- This is used to set the source of configuration for
# the associated address in configSource, if nothing is specified
# the default MCP is assumed.
configSource:
@@ -655,21 +669,21 @@ pilot:
plugins: []
# The following is used to limit how long a sidecar can be connected
# -- The following is used to limit how long a sidecar can be connected
# to a pilot. It balances out load across pilot instances at the cost of
# increasing system churn.
keepaliveMaxServerConnectionAge: 30m
# Additional labels to apply to the deployment.
# -- Additional labels to apply to the deployment.
deploymentLabels: {}
## Mesh config settings
# Install the mesh config map, generated from values.yaml.
# -- Install the mesh config map, generated from values.yaml.
# If false, pilot wil use default values (by default) or user-supplied values.
configMap: true
# Additional labels to apply on the pod level for monitoring and logging configuration.
# -- Additional labels to apply on the pod level for monitoring and logging configuration.
podLabels: {}
# Tracing config settings
@@ -684,3 +698,19 @@ tracing:
# zipkin:
# service: ""
# port: 9411
# -- Downstream config settings
downstream:
idleTimeout: 180
maxRequestHeadersKb: 60
connectionBufferLimits: 32768
http2:
maxConcurrentStreams: 100
initialStreamWindowSize: 65535
initialConnectionWindowSize: 1048576
routeTimeout: 0
# -- Upstream config settings
upstream:
idleTimeout: 10
connectionBufferLimits: 10485760
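The new `downstream` and `upstream` blocks expose Envoy-facing connection settings (idle timeouts in seconds, buffer limits in bytes, and the downstream HTTP/2 window sizes). As a rough illustration of the schema and its defaults, they can be modeled as Go structs; the type and field names below are hypothetical mirrors of the YAML keys, not part of the chart or the higress codebase:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical Go mirrors of the new values.yaml blocks; field names
// follow the YAML keys, and the literal values in main() are the
// chart's new defaults.
type HTTP2Settings struct {
	MaxConcurrentStreams        int `json:"maxConcurrentStreams"`
	InitialStreamWindowSize     int `json:"initialStreamWindowSize"`     // bytes, per stream
	InitialConnectionWindowSize int `json:"initialConnectionWindowSize"` // bytes, per connection
}

type Downstream struct {
	IdleTimeout            int           `json:"idleTimeout"` // seconds
	MaxRequestHeadersKb    int           `json:"maxRequestHeadersKb"`
	ConnectionBufferLimits int           `json:"connectionBufferLimits"` // bytes
	HTTP2                  HTTP2Settings `json:"http2"`
	RouteTimeout           int           `json:"routeTimeout"` // 0 disables the per-route timeout
}

type Upstream struct {
	IdleTimeout            int `json:"idleTimeout"`            // seconds
	ConnectionBufferLimits int `json:"connectionBufferLimits"` // bytes
}

func main() {
	d := Downstream{180, 60, 32768, HTTP2Settings{100, 65535, 1048576}, 0}
	u := Upstream{10, 10485760}
	out, _ := json.Marshal(map[string]any{"downstream": d, "upstream": u})
	fmt.Println(string(out))
}
```

Note the asymmetry in the defaults: the downstream buffer limit stays small (32 KiB) while the upstream limit is generous (10 MiB), reflecting that untrusted clients sit downstream.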


@@ -1,9 +1,9 @@
dependencies:
- name: higress-core
repository: file://../core
version: 2.0.1
version: 2.0.3
- name: higress-console
repository: https://higress.io/helm-charts/
version: 1.4.4
digest: sha256:6e4d77c31c834a404a728ec5a8379dd5df27a7e9b998a08e6524dc6534b07c1d
generated: "2024-10-09T20:07:21.857942+08:00"
version: 1.4.5
digest: sha256:74b772113264168483961f5d0424459fd7359adc509a4b50400229581d7cddbf
generated: "2024-11-08T14:06:51.871719+08:00"


@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: 2.0.1
appVersion: 2.0.3
description: Helm chart for deploying Higress gateways
icon: https://higress.io/img/higress_logo_small.png
home: http://higress.io/
@@ -12,9 +12,9 @@ sources:
dependencies:
- name: higress-core
repository: "file://../core"
version: 2.0.1
version: 2.0.3
- name: higress-console
repository: "https://higress.io/helm-charts/"
version: 1.4.4
version: 1.4.5
type: application
version: 2.0.1
version: 2.0.3


@@ -1,57 +1,276 @@
# Higress Helm Chart
Installs the cloud-native gateway [Higress](http://higress.io/)
## Get Repo Info
```console
helm repo add higress.io https://higress.io/helm-charts
helm repo update
```
_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Installing the Chart
To install the chart with the release name `higress`:
```console
helm install higress -n higress-system higress.io/higress --create-namespace --render-subchart-notes
```
## Uninstalling the Chart
To uninstall/delete the higress deployment:
```console
helm delete higress -n higress-system
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
| **Parameter** | **Description** | **Default** |
|---|---|---|
| **Global Parameters** | | |
| global.local | Set to `true` if installing to a local K8s cluster (e.g.: Kind, Rancher Desktop, etc.) | false |
| global.ingressClass | [IngressClass](https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/#ingress-class) which is used to filter Ingress resources Higress Controller watches.<br />If there are multiple gateway instances deployed in the cluster, this parameter can be used to distinguish the scope of each gateway instance.<br />There are some special cases for special IngressClass values:<br />1. If set to "nginx", Higress Controller will watch Ingress resources with the `nginx` IngressClass or without any Ingress class.<br />2. If set to empty, Higress Controller will watch all Ingress resources in the K8s cluster. | higress |
| global.watchNamespace | If not empty, Higress Controller will only watch resources in the specified namespace. When isolating different business systems using K8s namespace, if each namespace requires a standalone gateway instance, this parameter can be used to confine the Ingress watching of Higress within the given namespace. | "" |
| global.disableAlpnH2 | Whether to disable HTTP/2 in ALPN | true |
| global.enableStatus | If `true`, Higress Controller will update the `status` field of Ingress resources.<br />When migrating from Nginx Ingress, in order to avoid `status` field of Ingress objects being overwritten, this parameter needs to be set to false, so Higress won't write the entry IP to the `status` field of the corresponding Ingress object. | true |
| global.enableIstioAPI | If `true`, Higress Controller will monitor istio resources as well | false |
| global.enableGatewayAPI | If `true`, Higress Controller will monitor Gateway API resources as well | false |
| global.istioNamespace | The namespace istio is installed to | istio-system |
| **Core Parameters** | | |
| higress-core.gateway.replicas | Number of Higress Gateway pods | 2 |
| higress-core.controller.replicas | Number of Higress Controller pods | 1 |
| **Console Parameters** | | |
| higress-console.replicaCount | Number of Higress Console pods | 1 |
| higress-console.service.type | K8s service type used by Higress Console | ClusterIP |
| higress-console.domain | Domain used to access Higress Console | console.higress.io |
| higress-console.tlsSecretName | Name of Secret resource used by TLS connections. | "" |
| higress-console.web.login.prompt | Prompt message to be displayed on the login page | "" |
| higress-console.admin.password.value | If not empty, the admin password will be configured to the specified value. | "" |
| higress-console.admin.password.length | The length of random admin password generated during installation. Only works when `higress-console.admin.password.value` is not set. | 8 |
| higress-console.o11y.enabled | If `true`, o11y suite (Grafana + Prometheus) will be installed. | false |
| higress-console.pvc.rwxSupported | Set to `false` when installing to a standard K8s cluster and the target cluster doesn't support the ReadWriteMany access mode of PersistentVolumeClaim. | true |
## Higress for Kubernetes
Higress is a cloud-native API gateway based on Alibaba's internal gateway practices.
Powered by Istio and Envoy, Higress integrates the triple gateway architecture of traffic gateway, microservice gateway, and security gateway, thereby greatly reducing the costs of deployment, operation, and maintenance.
## Setup Repo Info
```console
helm repo add higress.io https://higress.io/helm-charts
helm repo update
```
## Install
To install the chart with the release name `higress`:
```console
helm install higress -n higress-system higress.io/higress --create-namespace --render-subchart-notes
```
## Uninstall
To uninstall/delete the higress deployment:
```console
helm delete higress -n higress-system
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Parameters
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| clusterName | string | `""` | |
| controller.affinity | object | `{}` | |
| controller.automaticHttps.email | string | `""` | |
| controller.automaticHttps.enabled | bool | `true` | |
| controller.autoscaling.enabled | bool | `false` | |
| controller.autoscaling.maxReplicas | int | `5` | |
| controller.autoscaling.minReplicas | int | `1` | |
| controller.autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| controller.env | object | `{}` | |
| controller.hub | string | `"higress-registry.cn-hangzhou.cr.aliyuncs.com/higress"` | |
| controller.image | string | `"higress"` | |
| controller.imagePullSecrets | list | `[]` | |
| controller.labels | object | `{}` | |
| controller.name | string | `"higress-controller"` | |
| controller.nodeSelector | object | `{}` | |
| controller.podAnnotations | object | `{}` | |
| controller.podSecurityContext | object | `{}` | |
| controller.ports[0].name | string | `"http"` | |
| controller.ports[0].port | int | `8888` | |
| controller.ports[0].protocol | string | `"TCP"` | |
| controller.ports[0].targetPort | int | `8888` | |
| controller.ports[1].name | string | `"http-solver"` | |
| controller.ports[1].port | int | `8889` | |
| controller.ports[1].protocol | string | `"TCP"` | |
| controller.ports[1].targetPort | int | `8889` | |
| controller.ports[2].name | string | `"grpc"` | |
| controller.ports[2].port | int | `15051` | |
| controller.ports[2].protocol | string | `"TCP"` | |
| controller.ports[2].targetPort | int | `15051` | |
| controller.probe.httpGet.path | string | `"/ready"` | |
| controller.probe.httpGet.port | int | `8888` | |
| controller.probe.initialDelaySeconds | int | `1` | |
| controller.probe.periodSeconds | int | `3` | |
| controller.probe.timeoutSeconds | int | `5` | |
| controller.rbac.create | bool | `true` | |
| controller.replicas | int | `1` | Number of Higress Controller pods |
| controller.resources.limits.cpu | string | `"1000m"` | |
| controller.resources.limits.memory | string | `"2048Mi"` | |
| controller.resources.requests.cpu | string | `"500m"` | |
| controller.resources.requests.memory | string | `"2048Mi"` | |
| controller.securityContext | object | `{}` | |
| controller.service.type | string | `"ClusterIP"` | |
| controller.serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| controller.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| controller.serviceAccount.name | string | `""` | If not set and create is true, a name is generated using the fullname template |
| controller.tag | string | `""` | |
| controller.tolerations | list | `[]` | |
| downstream | object | `{"connectionBufferLimits":32768,"http2":{"initialConnectionWindowSize":1048576,"initialStreamWindowSize":65535,"maxConcurrentStreams":100},"idleTimeout":180,"maxRequestHeadersKb":60,"routeTimeout":0}` | Downstream config settings |
| gateway.affinity | object | `{}` | |
| gateway.annotations | object | `{}` | Annotations to apply to all resources |
| gateway.autoscaling.enabled | bool | `false` | |
| gateway.autoscaling.maxReplicas | int | `5` | |
| gateway.autoscaling.minReplicas | int | `1` | |
| gateway.autoscaling.targetCPUUtilizationPercentage | int | `80` | |
| gateway.containerSecurityContext | string | `nil` | |
| gateway.env | object | `{}` | Pod environment variables |
| gateway.hostNetwork | bool | `false` | |
| gateway.httpPort | int | `80` | |
| gateway.httpsPort | int | `443` | |
| gateway.hub | string | `"higress-registry.cn-hangzhou.cr.aliyuncs.com/higress"` | |
| gateway.image | string | `"gateway"` | |
| gateway.kind | string | `"Deployment"` | Use a `DaemonSet` or `Deployment` |
| gateway.labels | object | `{}` | Labels to apply to all resources |
| gateway.metrics.enabled | bool | `false` | If true, create PodMonitor or VMPodScrape for gateway |
| gateway.metrics.honorLabels | bool | `false` | |
| gateway.metrics.interval | string | `""` | |
| gateway.metrics.metricRelabelConfigs | list | `[]` | for operator.victoriametrics.com/v1beta1.VMPodScrape |
| gateway.metrics.metricRelabelings | list | `[]` | for monitoring.coreos.com/v1.PodMonitor |
| gateway.metrics.provider | string | `"monitoring.coreos.com"` | provider group name for CustomResourceDefinition, can be monitoring.coreos.com or operator.victoriametrics.com |
| gateway.metrics.rawSpec | object | `{}` | some more raw podMetricsEndpoints spec |
| gateway.metrics.relabelConfigs | list | `[]` | |
| gateway.metrics.relabelings | list | `[]` | |
| gateway.metrics.scrapeTimeout | string | `""` | |
| gateway.name | string | `"higress-gateway"` | |
| gateway.networkGateway | string | `""` | If specified, the gateway will act as a network gateway for the given network. |
| gateway.nodeSelector | object | `{}` | |
| gateway.podAnnotations."prometheus.io/path" | string | `"/stats/prometheus"` | |
| gateway.podAnnotations."prometheus.io/port" | string | `"15020"` | |
| gateway.podAnnotations."prometheus.io/scrape" | string | `"true"` | |
| gateway.podAnnotations."sidecar.istio.io/inject" | string | `"false"` | |
| gateway.rbac.enabled | bool | `true` | If enabled, roles will be created to enable accessing certificates from Gateways. This is not needed when using http://gateway-api.org/. |
| gateway.readinessFailureThreshold | int | `30` | The number of successive failed probes before indicating readiness failure. |
| gateway.readinessInitialDelaySeconds | int | `1` | The initial delay for readiness probes in seconds. |
| gateway.readinessPeriodSeconds | int | `2` | The period between readiness probes. |
| gateway.readinessSuccessThreshold | int | `1` | The number of successive successful probes before indicating readiness success. |
| gateway.readinessTimeoutSeconds | int | `3` | The readiness probe timeout in seconds |
| gateway.replicas | int | `2` | Number of Higress Gateway pods |
| gateway.resources.limits.cpu | string | `"2000m"` | |
| gateway.resources.limits.memory | string | `"2048Mi"` | |
| gateway.resources.requests.cpu | string | `"2000m"` | |
| gateway.resources.requests.memory | string | `"2048Mi"` | |
| gateway.revision | string | `""` | revision declares which revision this gateway is a part of |
| gateway.rollingMaxSurge | string | `"100%"` | |
| gateway.rollingMaxUnavailable | string | `"25%"` | |
| gateway.securityContext | string | `nil` | Define the security context for the pod. If unset, this will be automatically set to the minimum privileges required to bind to port 80 and 443. On Kubernetes 1.22+, this only requires the `net.ipv4.ip_unprivileged_port_start` sysctl. |
| gateway.service.annotations | object | `{}` | |
| gateway.service.externalTrafficPolicy | string | `""` | |
| gateway.service.loadBalancerClass | string | `""` | |
| gateway.service.loadBalancerIP | string | `""` | |
| gateway.service.loadBalancerSourceRanges | list | `[]` | |
| gateway.service.ports[0].name | string | `"http2"` | |
| gateway.service.ports[0].port | int | `80` | |
| gateway.service.ports[0].protocol | string | `"TCP"` | |
| gateway.service.ports[0].targetPort | int | `80` | |
| gateway.service.ports[1].name | string | `"https"` | |
| gateway.service.ports[1].port | int | `443` | |
| gateway.service.ports[1].protocol | string | `"TCP"` | |
| gateway.service.ports[1].targetPort | int | `443` | |
| gateway.service.type | string | `"LoadBalancer"` | Type of service. Set to "None" to disable the service entirely |
| gateway.serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| gateway.serviceAccount.create | bool | `true` | If set, a service account will be created. Otherwise, the default is used |
| gateway.serviceAccount.name | string | `""` | The name of the service account to use. If not set, the release name is used |
| gateway.tag | string | `""` | |
| gateway.tolerations | list | `[]` | |
| global.autoscalingv2API | bool | `true` | whether to use autoscaling/v2 template for HPA settings for internal usage only, not to be configured by users. |
| global.caAddress | string | `""` | The customized CA address to retrieve certificates for the pods in the cluster. CSR clients such as the Istio Agent and ingress gateways can use this to specify the CA endpoint. If not set explicitly, default to the Istio discovery address. |
| global.caName | string | `""` | The name of the CA for workload certificates. For example, when caName=GkeWorkloadCertificate, GKE workload certificates will be used as the certificates for workloads. The default value is "" and when caName="", the CA will be configured by other mechanisms (e.g., environmental variable CA_PROVIDER). |
| global.configCluster | bool | `false` | Configure a remote cluster as the config cluster for an external istiod. |
| global.defaultPodDisruptionBudget | object | `{"enabled":false}` | enable pod disruption budget for the control plane, which is used to ensure Istio control plane components are gradually upgraded or recovered. |
| global.defaultResources | object | `{"requests":{"cpu":"10m"}}` | A minimal set of requested resources to applied to all deployments so that Horizontal Pod Autoscaler will be able to function (if set). Each component can overwrite these default values by adding its own resources block in the relevant section below and setting the desired resources values. |
| global.defaultUpstreamConcurrencyThreshold | int | `10000` | |
| global.disableAlpnH2 | bool | `false` | Whether to disable HTTP/2 in ALPN |
| global.enableGatewayAPI | bool | `false` | If true, Higress Controller will monitor Gateway API resources as well |
| global.enableH3 | bool | `false` | |
| global.enableHigressIstio | bool | `false` | |
| global.enableIPv6 | bool | `false` | |
| global.enableIstioAPI | bool | `true` | If true, Higress Controller will monitor istio resources as well |
| global.enableProxyProtocol | bool | `false` | |
| global.enableSRDS | bool | `true` | |
| global.enableStatus | bool | `true` | If true, Higress Controller will update the status field of Ingress resources. When migrating from Nginx Ingress, in order to avoid status field of Ingress objects being overwritten, this parameter needs to be set to false, so Higress won't write the entry IP to the status field of the corresponding Ingress object. |
| global.externalIstiod | bool | `false` | Configure a remote cluster data plane controlled by an external istiod. When set to true, istiod is not deployed locally and only a subset of the other discovery charts are enabled. |
| global.hostRDSMergeSubset | bool | `false` | |
| global.hub | string | `"higress-registry.cn-hangzhou.cr.aliyuncs.com/higress"` | Default hub for Istio images. Releases are published to docker hub under 'istio' project. Dev builds from prow are on gcr.io |
| global.imagePullPolicy | string | `""` | Specify image pull policy if default behavior isn't desired. Default behavior: latest images will be Always else IfNotPresent. |
| global.imagePullSecrets | list | `[]` | ImagePullSecrets for all ServiceAccount, list of secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. For components that don't use ServiceAccounts (i.e. grafana, servicegraph, tracing) ImagePullSecrets will be added to the corresponding Deployment(StatefulSet) objects. Must be set for any cluster configured with private docker registry. |
| global.ingressClass | string | `"higress"` | IngressClass filters which ingress resources the higress controller watches. The default ingress class is higress. There are some special cases for special ingress class. 1. When the ingress class is set as nginx, the higress controller will watch ingress resources with the nginx ingress class or without any ingress class. 2. When the ingress class is set empty, the higress controller will watch all ingress resources in the k8s cluster. |
| global.istioNamespace | string | `"istio-system"` | Used to locate istiod. |
| global.istiod | object | `{"enableAnalysis":false}` | Enabled by default in master for maximising testing. |
| global.jwtPolicy | string | `"third-party-jwt"` | Configure the policy for validating JWT. Currently, two options are supported: "third-party-jwt" and "first-party-jwt". |
| global.kind | bool | `false` | |
| global.liteMetrics | bool | `true` | |
| global.local | bool | `false` | When deploying to a local cluster (e.g.: kind cluster), set this to true. |
| global.logAsJson | bool | `false` | |
| global.logging | object | `{"level":"default:info"}` | Comma-separated minimum per-scope logging level of messages to output, in the form of <scope>:<level>,<scope>:<level> The control plane has different scopes depending on component, but can configure default log level across all components If empty, default scope and level will be used as configured in code |
| global.meshID | string | `""` | If the mesh admin does not specify a value, Istio will use the value of the mesh's Trust Domain. The best practice is to select a proper Trust Domain value. |
| global.meshNetworks | object | `{}` | |
| global.mountMtlsCerts | bool | `false` | Use the user-specified, secret volume mounted key and certs for Pilot and workloads. |
| global.multiCluster.clusterName | string | `""` | Should be set to the name of the cluster this installation will run in. This is required for sidecar injection to properly label proxies |
| global.multiCluster.enabled | bool | `true` | Set to true to connect two kubernetes clusters via their respective ingressgateway services when pods in each cluster cannot directly talk to one another. All clusters should be using Istio mTLS and must have a shared root CA for this model to work. |
| global.network | string | `""` | Network defines the network this cluster belong to. This name corresponds to the networks in the map of mesh networks. |
| global.o11y | object | `{"enabled":false,"promtail":{"image":{"repository":"higress-registry.cn-hangzhou.cr.aliyuncs.com/higress/promtail","tag":"2.9.4"},"port":3101,"resources":{"limits":{"cpu":"500m","memory":"2Gi"}},"securityContext":{}}}` | Observability (o11y) configurations |
| global.omitSidecarInjectorConfigMap | bool | `false` | |
| global.onDemandRDS | bool | `false` | |
| global.oneNamespace | bool | `false` | Whether to restrict the applications namespace the controller manages; If not set, controller watches all namespaces |
| global.onlyPushRouteCluster | bool | `true` | |
| global.operatorManageWebhooks | bool | `false` | Configure whether Operator manages webhook configurations. The current behavior of Istiod is to manage its own webhook configurations. When this option is set as true, Istio Operator, instead of webhooks, manages the webhook configurations. When this option is set as false, webhooks manage their own webhook configurations. |
| global.pilotCertProvider | string | `"istiod"` | Configure the certificate provider for control plane communication. Currently, two providers are supported: "kubernetes" and "istiod". As some platforms may not have kubernetes signing APIs, Istiod is the default |
| global.priorityClassName | string | `""` | Kubernetes >=v1.11.0 will create two PriorityClass, including system-cluster-critical and system-node-critical, it is better to configure this in order to make sure your Istio pods will not be killed because of low priority class. Refer to https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass for more detail. |
| global.proxy.autoInject | string | `"enabled"` | This controls the 'policy' in the sidecar injector. |
| global.proxy.clusterDomain | string | `"cluster.local"` | CAUTION: It is important to ensure that all Istio helm charts specify the same clusterDomain value cluster domain. Default value is "cluster.local". |
| global.proxy.componentLogLevel | string | `"misc:error"` | Per Component log level for proxy, applies to gateways and sidecars. If a component level is not set, then the global "logLevel" will be used. |
| global.proxy.enableCoreDump | bool | `false` | If set, newly injected sidecars will have core dumps enabled. |
| global.proxy.excludeIPRanges | string | `""` | |
| global.proxy.excludeInboundPorts | string | `""` | |
| global.proxy.excludeOutboundPorts | string | `""` | |
| global.proxy.holdApplicationUntilProxyStarts | bool | `false` | Controls if sidecar is injected at the front of the container list and blocks the start of the other containers until the proxy is ready |
| global.proxy.image | string | `"proxyv2"` | |
| global.proxy.includeIPRanges | string | `"*"` | istio egress capture allowlist https://istio.io/docs/tasks/traffic-management/egress.html#calling-external-services-directly example: includeIPRanges: "172.30.0.0/16,172.20.0.0/16" would only capture egress traffic on those two IP Ranges, all other outbound traffic would be allowed by the sidecar |
| global.proxy.includeInboundPorts | string | `"*"` | |
| global.proxy.includeOutboundPorts | string | `""` | |
| global.proxy.logLevel | string | `"warning"` | Log level for proxy, applies to gateways and sidecars. Expected values are: trace|debug|info|warning|error|critical|off |
| global.proxy.privileged | bool | `false` | If set to true, istio-proxy container will have privileged securityContext |
| global.proxy.readinessFailureThreshold | int | `30` | The number of successive failed probes before indicating readiness failure. |
| global.proxy.readinessInitialDelaySeconds | int | `1` | The initial delay for readiness probes in seconds. |
| global.proxy.readinessPeriodSeconds | int | `2` | The period between readiness probes. |
| global.proxy.readinessSuccessThreshold | int | `30` | The number of successive successful probes before indicating readiness success. |
| global.proxy.readinessTimeoutSeconds | int | `3` | The readiness probe timeout in seconds |
| global.proxy.resources | object | `{"limits":{"cpu":"2000m","memory":"1024Mi"},"requests":{"cpu":"100m","memory":"128Mi"}}` | Resources for the sidecar. |
| global.proxy.statusPort | int | `15020` | Default port for Pilot agent health checks. A value of 0 will disable health checking. |
| global.proxy.tracer | string | `""` | Specify which tracer to use. One of: lightstep, datadog, stackdriver. If using stackdriver tracer outside GCP, set env GOOGLE_APPLICATION_CREDENTIALS to the GCP credential file. |
| global.proxy_init.image | string | `"proxyv2"` | Base name for the proxy_init container, used to configure iptables. |
| global.proxy_init.resources.limits.cpu | string | `"2000m"` | |
| global.proxy_init.resources.limits.memory | string | `"1024Mi"` | |
| global.proxy_init.resources.requests.cpu | string | `"10m"` | |
| global.proxy_init.resources.requests.memory | string | `"10Mi"` | |
| global.remotePilotAddress | string | `""` | configure remote pilot and istiod service and endpoint |
| global.sds.token | object | `{"aud":"istio-ca"}` | The JWT token for SDS and the aud field of such JWT. See RFC 7519, section 4.1.3. When a CSR is sent from Istio Agent to the CA (e.g. Istiod), this aud is to make sure the JWT is intended for the CA. |
| global.sts.servicePort | int | `0` | The service port used by Security Token Service (STS) server to handle token exchange requests. Setting this port to a non-zero value enables STS server. |
| global.tracer | object | `{"datadog":{"address":"$(HOST_IP):8126"},"lightstep":{"accessToken":"","address":""},"stackdriver":{"debug":false,"maxNumberOfAnnotations":200,"maxNumberOfAttributes":200,"maxNumberOfMessageEvents":200}}` | Configuration for each of the supported tracers |
| global.tracer.datadog | object | `{"address":"$(HOST_IP):8126"}` | Configuration for envoy to send trace data to the Datadog agent. Disabled by default. |
| global.tracer.datadog.address | string | `"$(HOST_IP):8126"` | Host:Port for submitting traces to the Datadog agent. |
| global.tracer.lightstep.accessToken | string | `""` | example: abcdefg1234567 |
| global.tracer.lightstep.address | string | `""` | example: lightstep-satellite:443 |
| global.tracer.stackdriver.debug | bool | `false` | enables trace output to stdout. |
| global.tracer.stackdriver.maxNumberOfAnnotations | int | `200` | The global default max number of annotation events per span. |
| global.tracer.stackdriver.maxNumberOfAttributes | int | `200` | The global default max number of attributes per span. |
| global.tracer.stackdriver.maxNumberOfMessageEvents | int | `200` | The global default max number of message events per span. |
| global.useMCP | bool | `false` | Use the Mesh Control Protocol (MCP) for configuring Istiod. Requires an MCP source. |
| global.watchNamespace | string | `""` | If not empty, Higress Controller will only watch resources in the specified namespace. When isolating different business systems using K8s namespace, if each namespace requires a standalone gateway instance, this parameter can be used to confine the Ingress watching of Higress within the given namespace. |
| global.xdsMaxRecvMsgSize | string | `"104857600"` | |
| hub | string | `"higress-registry.cn-hangzhou.cr.aliyuncs.com/higress"` | |
| meshConfig | object | `{"enablePrometheusMerge":true,"rootNamespace":null,"trustDomain":"cluster.local"}` | meshConfig defines runtime configuration of components, including Istiod and istio-agent behavior See https://istio.io/docs/reference/config/istio.mesh.v1alpha1/ for all available options |
| meshConfig.rootNamespace | string | `nil` | The namespace to treat as the administrative root namespace for Istio configuration. When processing a leaf namespace Istio will search for declarations in that namespace first and if none are found it will search in the root namespace. Any matching declaration found in the root namespace is processed as if it were declared in the leaf namespace. |
| meshConfig.trustDomain | string | `"cluster.local"` | The trust domain corresponds to the trust root of a system Refer to https://github.com/spiffe/spiffe/blob/master/standards/SPIFFE-ID.md#21-trust-domain |
| pilot.autoscaleEnabled | bool | `false` | |
| pilot.autoscaleMax | int | `5` | |
| pilot.autoscaleMin | int | `1` | |
| pilot.configMap | bool | `true` | Install the mesh config map, generated from values.yaml. If false, pilot will use default values (by default) or user-supplied values. |
| pilot.configSource | object | `{"subscribedResources":[]}` | This is used to set the source of configuration for the associated address in configSource, if nothing is specified the default MCP is assumed. |
| pilot.cpu.targetAverageUtilization | int | `80` | |
| pilot.deploymentLabels | object | `{}` | Additional labels to apply to the deployment. |
| pilot.enableProtocolSniffingForInbound | bool | `true` | if protocol sniffing is enabled for inbound |
| pilot.enableProtocolSniffingForOutbound | bool | `true` | if protocol sniffing is enabled for outbound |
| pilot.env.PILOT_ENABLE_CROSS_CLUSTER_WORKLOAD_ENTRY | string | `"false"` | |
| pilot.env.PILOT_ENABLE_METADATA_EXCHANGE | string | `"false"` | |
| pilot.env.PILOT_SCOPE_GATEWAY_TO_NAMESPACE | string | `"false"` | |
| pilot.env.VALIDATION_ENABLED | string | `"false"` | |
| pilot.hub | string | `"higress-registry.cn-hangzhou.cr.aliyuncs.com/higress"` | |
| pilot.image | string | `"pilot"` | Can be a full hub/image:tag |
| pilot.jwksResolverExtraRootCA | string | `""` | You can use jwksResolverExtraRootCA to provide a root certificate in PEM format. This will then be trusted by pilot when resolving JWKS URIs. |
| pilot.keepaliveMaxServerConnectionAge | string | `"30m"` | The following is used to limit how long a sidecar can be connected to a pilot. It balances out load across pilot instances at the cost of increasing system churn. |
| pilot.nodeSelector | object | `{}` | |
| pilot.plugins | list | `[]` | |
| pilot.podAnnotations | object | `{}` | |
| pilot.podLabels | object | `{}` | Additional labels to apply on the pod level for monitoring and logging configuration. |
| pilot.replicaCount | int | `1` | |
| pilot.resources | object | `{"requests":{"cpu":"500m","memory":"2048Mi"}}` | Resources for a small pilot install |
| pilot.rollingMaxSurge | string | `"100%"` | |
| pilot.rollingMaxUnavailable | string | `"25%"` | |
| pilot.serviceAnnotations | object | `{}` | |
| pilot.tag | string | `""` | |
| pilot.traceSampling | float | `1` | |
| revision | string | `""` | |
| tracing.enable | bool | `false` | |
| tracing.sampling | int | `100` | |
| tracing.skywalking.port | int | `11800` | |
| tracing.skywalking.service | string | `""` | |
| tracing.timeout | int | `500` | |
| upstream | object | `{"connectionBufferLimits":10485760,"idleTimeout":10}` | Upstream config settings |


@@ -0,0 +1,34 @@
## Higress for Kubernetes
Higress is a cloud-native API gateway based on Alibaba's internal gateway practices.
Powered by Istio and Envoy, Higress integrates the triple gateway architecture of traffic gateway, microservice gateway, and security gateway, thereby greatly reducing the costs of deployment, operation, and maintenance.
## Setup Repo Info
```console
helm repo add higress.io https://higress.io/helm-charts
helm repo update
```
## Install
To install the chart with the release name `higress`:
```console
helm install higress -n higress-system higress.io/higress --create-namespace --render-subchart-notes
```
## Uninstall
To uninstall/delete the higress deployment:
```console
helm delete higress -n higress-system
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Parameters
{{ template "chart.valuesSection" . }}


@@ -19,6 +19,7 @@ import "istio.io/pkg/env"
var (
PodNamespace = env.RegisterStringVar("POD_NAMESPACE", "higress-system", "").Get()
PodName = env.RegisterStringVar("POD_NAME", "", "").Get()
GatewayName = env.RegisterStringVar("GATEWAY_NAME", "higress-gateway", "").Get()
// Revision is the value of the Istio control plane revision, e.g. "canary",
// and is the value used by the "istio.io/rev" label.
Revision = env.Register("REVISION", "", "").Get()

View File

@@ -34,6 +34,7 @@ import (
extensions "istio.io/api/extensions/v1alpha1"
networking "istio.io/api/networking/v1alpha3"
istiotype "istio.io/api/type/v1beta1"
"istio.io/istio/pilot/pkg/features"
istiomodel "istio.io/istio/pilot/pkg/model"
"istio.io/istio/pilot/pkg/util/protoconv"
"istio.io/istio/pkg/cluster"
@@ -235,8 +236,9 @@ func (m *IngressConfig) AddLocalCluster(options common.Options) {
ingressController = ingressv1.NewController(m.localKubeClient, m.localKubeClient, options, secretController)
}
m.remoteIngressControllers[options.ClusterId] = ingressController
m.remoteGatewayControllers[options.ClusterId] = gateway.NewController(m.localKubeClient, options)
if features.EnableGatewayAPI {
m.remoteGatewayControllers[options.ClusterId] = gateway.NewController(m.localKubeClient, options)
}
}
func (m *IngressConfig) List(typ config.GroupVersionKind, namespace string) []config.Config {
@@ -719,9 +721,9 @@ func (m *IngressConfig) convertDestinationRule(configs []common.WrapperConfig) [
} else if dr.DestinationRule.TrafficPolicy != nil {
portTrafficPolicy := destinationRuleWrapper.DestinationRule.TrafficPolicy.PortLevelSettings[0]
portUpdated := false
for _, portTrafficPolicy := range dr.DestinationRule.TrafficPolicy.PortLevelSettings {
if portTrafficPolicy.Port.Number == portTrafficPolicy.Port.Number {
portTrafficPolicy.Tls = portTrafficPolicy.Tls
for _, policy := range dr.DestinationRule.TrafficPolicy.PortLevelSettings {
if policy.Port.Number == portTrafficPolicy.Port.Number {
policy.Tls = portTrafficPolicy.Tls
portUpdated = true
break
}

View File

@@ -255,6 +255,59 @@ func (t *TracingController) ConstructEnvoyFilters() ([]*config.Config, error) {
return configs, nil
}
configPatches := []*networking.EnvoyFilter_EnvoyConfigObjectPatch{
{
ApplyTo: networking.EnvoyFilter_NETWORK_FILTER,
Match: &networking.EnvoyFilter_EnvoyConfigObjectMatch{
Context: networking.EnvoyFilter_GATEWAY,
ObjectTypes: &networking.EnvoyFilter_EnvoyConfigObjectMatch_Listener{
Listener: &networking.EnvoyFilter_ListenerMatch{
FilterChain: &networking.EnvoyFilter_ListenerMatch_FilterChainMatch{
Filter: &networking.EnvoyFilter_ListenerMatch_FilterMatch{
Name: "envoy.filters.network.http_connection_manager",
},
},
},
},
},
Patch: &networking.EnvoyFilter_Patch{
Operation: networking.EnvoyFilter_Patch_MERGE,
Value: util.BuildPatchStruct(tracingConfig),
},
},
{
ApplyTo: networking.EnvoyFilter_HTTP_FILTER,
Match: &networking.EnvoyFilter_EnvoyConfigObjectMatch{
Context: networking.EnvoyFilter_GATEWAY,
ObjectTypes: &networking.EnvoyFilter_EnvoyConfigObjectMatch_Listener{
Listener: &networking.EnvoyFilter_ListenerMatch{
FilterChain: &networking.EnvoyFilter_ListenerMatch_FilterChainMatch{
Filter: &networking.EnvoyFilter_ListenerMatch_FilterMatch{
Name: "envoy.filters.network.http_connection_manager",
SubFilter: &networking.EnvoyFilter_ListenerMatch_SubFilterMatch{
Name: "envoy.filters.http.router",
},
},
},
},
},
},
Patch: &networking.EnvoyFilter_Patch{
Operation: networking.EnvoyFilter_Patch_MERGE,
Value: util.BuildPatchStruct(`{
"name":"envoy.filters.http.router",
"typed_config":{
"@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router",
"start_child_span": true
}
}`),
},
},
}
patches := t.constructTracingExtendPatches(tracing)
configPatches = append(configPatches, patches...)
config := &config.Config{
Meta: config.Meta{
GroupVersionKind: gvk.EnvoyFilter,
@@ -262,55 +315,7 @@ func (t *TracingController) ConstructEnvoyFilters() ([]*config.Config, error) {
Namespace: namespace,
},
Spec: &networking.EnvoyFilter{
ConfigPatches: []*networking.EnvoyFilter_EnvoyConfigObjectPatch{
{
ApplyTo: networking.EnvoyFilter_NETWORK_FILTER,
Match: &networking.EnvoyFilter_EnvoyConfigObjectMatch{
Context: networking.EnvoyFilter_GATEWAY,
ObjectTypes: &networking.EnvoyFilter_EnvoyConfigObjectMatch_Listener{
Listener: &networking.EnvoyFilter_ListenerMatch{
FilterChain: &networking.EnvoyFilter_ListenerMatch_FilterChainMatch{
Filter: &networking.EnvoyFilter_ListenerMatch_FilterMatch{
Name: "envoy.filters.network.http_connection_manager",
},
},
},
},
},
Patch: &networking.EnvoyFilter_Patch{
Operation: networking.EnvoyFilter_Patch_MERGE,
Value: util.BuildPatchStruct(tracingConfig),
},
},
{
ApplyTo: networking.EnvoyFilter_HTTP_FILTER,
Match: &networking.EnvoyFilter_EnvoyConfigObjectMatch{
Context: networking.EnvoyFilter_GATEWAY,
ObjectTypes: &networking.EnvoyFilter_EnvoyConfigObjectMatch_Listener{
Listener: &networking.EnvoyFilter_ListenerMatch{
FilterChain: &networking.EnvoyFilter_ListenerMatch_FilterChainMatch{
Filter: &networking.EnvoyFilter_ListenerMatch_FilterMatch{
Name: "envoy.filters.network.http_connection_manager",
SubFilter: &networking.EnvoyFilter_ListenerMatch_SubFilterMatch{
Name: "envoy.filters.http.router",
},
},
},
},
},
},
Patch: &networking.EnvoyFilter_Patch{
Operation: networking.EnvoyFilter_Patch_MERGE,
Value: util.BuildPatchStruct(`{
"name":"envoy.filters.http.router",
"typed_config":{
"@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router",
"start_child_span": true
}
}`),
},
},
},
ConfigPatches: configPatches,
},
}
@@ -318,6 +323,52 @@ func (t *TracingController) ConstructEnvoyFilters() ([]*config.Config, error) {
return configs, nil
}
func tracingClusterName(port, service string) string {
return fmt.Sprintf("outbound|%s||%s", port, service)
}
func (t *TracingController) constructHTTP2ProtocolOptionsPatch(port, service string) *networking.EnvoyFilter_EnvoyConfigObjectPatch {
http2ProtocolOptions := `{"typed_extension_protocol_options": {
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
"explicit_http_config": {
"http2_protocol_options": {}
}
}
}}`
return &networking.EnvoyFilter_EnvoyConfigObjectPatch{
ApplyTo: networking.EnvoyFilter_CLUSTER,
Match: &networking.EnvoyFilter_EnvoyConfigObjectMatch{
Context: networking.EnvoyFilter_GATEWAY,
ObjectTypes: &networking.EnvoyFilter_EnvoyConfigObjectMatch_Cluster{
Cluster: &networking.EnvoyFilter_ClusterMatch{
Name: tracingClusterName(port, service),
},
},
},
Patch: &networking.EnvoyFilter_Patch{
Operation: networking.EnvoyFilter_Patch_MERGE,
Value: util.BuildPatchStruct(http2ProtocolOptions),
},
}
}
func (t *TracingController) constructTracingExtendPatches(tracing *Tracing) []*networking.EnvoyFilter_EnvoyConfigObjectPatch {
if tracing == nil {
return nil
}
var patches []*networking.EnvoyFilter_EnvoyConfigObjectPatch
if skywalking := tracing.Skywalking; skywalking != nil {
patches = append(patches, t.constructHTTP2ProtocolOptionsPatch(skywalking.Port, skywalking.Service))
}
if otel := tracing.OpenTelemetry; otel != nil {
patches = append(patches, t.constructHTTP2ProtocolOptionsPatch(otel.Port, otel.Service))
}
return patches
}
func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace string) string {
tracingConfig := ""
timeout := float32(tracing.Timeout) / 1000
@@ -338,7 +389,7 @@ func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace s
},
"grpc_service": {
"envoy_grpc": {
"cluster_name": "outbound|%s||%s"
"cluster_name": "%s"
},
"timeout": "%.3fs"
}
@@ -349,7 +400,7 @@ func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace s
}
}
}
}`, namespace, skywalking.AccessToken, skywalking.Port, skywalking.Service, timeout, tracing.Sampling)
}`, namespace, skywalking.AccessToken, tracingClusterName(skywalking.Port, skywalking.Service), timeout, tracing.Sampling)
}
if tracing.Zipkin != nil {
@@ -363,7 +414,7 @@ func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace s
"name": "envoy.tracers.zipkin",
"typed_config": {
"@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig",
"collector_cluster": "outbound|%s||%s",
"collector_cluster": "%s",
"collector_endpoint": "/api/v2/spans",
"collector_hostname": "higress-gateway",
"collector_endpoint_version": "HTTP_JSON",
@@ -375,7 +426,7 @@ func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace s
}
}
}
}`, zipkin.Port, zipkin.Service, tracing.Sampling)
}`, tracingClusterName(zipkin.Port, zipkin.Service), tracing.Sampling)
}
if tracing.OpenTelemetry != nil {
@@ -392,7 +443,7 @@ func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace s
"service_name": "higress-gateway.%s",
"grpc_service": {
"envoy_grpc": {
"cluster_name": "outbound|%s||%s"
"cluster_name": "%s"
},
"timeout": "%.3fs"
}
@@ -403,7 +454,7 @@ func (t *TracingController) constructTracingTracer(tracing *Tracing, namespace s
}
}
}
}`, namespace, opentelemetry.Port, opentelemetry.Service, timeout, tracing.Sampling)
}`, namespace, tracingClusterName(opentelemetry.Port, opentelemetry.Service), timeout, tracing.Sampling)
}
return tracingConfig
}

View File

@@ -25,6 +25,7 @@ import (
"strings"
higressconfig "github.com/alibaba/higress/pkg/config"
"github.com/alibaba/higress/pkg/ingress/kube/util"
istio "istio.io/api/networking/v1alpha3"
"istio.io/istio/pilot/pkg/features"
"istio.io/istio/pilot/pkg/model"
@@ -1880,7 +1881,7 @@ func extractGatewayServices(r GatewayResources, kgw *k8s.GatewaySpec, obj config
if len(name) > 0 {
return []string{fmt.Sprintf("%s.%s.svc.%v", name, obj.Namespace, r.Domain)}, false, nil
}
return []string{}, true, nil
return []string{fmt.Sprintf("%s.%s.svc.%s", higressconfig.GatewayName, higressconfig.PodNamespace, util.GetDomainSuffix())}, true, nil
}
gatewayServices := []string{}
skippedAddresses := []string{}

View File

@@ -919,7 +919,7 @@ func TestExtractGatewayServices(t *testing.T) {
Namespace: "default",
},
},
gatewayServices: []string{},
gatewayServices: []string{"higress-gateway.higress-system.svc.cluster.local"},
useDefaultService: true,
},
{
@@ -1039,7 +1039,7 @@ func TestExtractGatewayServices(t *testing.T) {
Namespace: "default",
},
},
gatewayServices: []string{},
gatewayServices: []string{"higress-gateway.higress-system.svc.cluster.local"},
useDefaultService: true,
},
}

View File

@@ -187,10 +187,9 @@ func (m *IngressTranslation) List(typ config.GroupVersionKind, namespace string)
higressConfig = append(higressConfig, ingressConfig...)
if m.kingressConfig != nil {
kingressConfig := m.kingressConfig.List(typ, namespace)
if kingressConfig == nil {
return nil
if kingressConfig != nil {
higressConfig = append(higressConfig, kingressConfig...)
}
higressConfig = append(higressConfig, kingressConfig...)
}
return higressConfig
}

View File

@@ -1,5 +1,6 @@
## Wasm Plugins
Higress currently provides two Wasm plugin development frameworks, C++ and Golang, and supports applying Wasm plugins with route- and domain-level matching.
It also ships multiple built-in plugins, which users can use directly from the official image repository provided by Higress (taking the C++ version as an example):
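As a minimal sketch of how a built-in plugin image might be applied, assuming Higress's `WasmPlugin` custom resource (the image URL and the config fields below are illustrative placeholders, not a definitive reference):

```yaml
# Hypothetical example only: the image path and config fields are placeholders.
apiVersion: extensions.higress.io/v1alpha1
kind: WasmPlugin
metadata:
  name: key-auth
  namespace: higress-system
spec:
  # Illustrative image reference; substitute a real plugin image
  # from the official repository.
  url: oci://example-registry/plugins/key-auth:1.0.0
  defaultConfig:
    keys:
      - apiKey
```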

View File

@@ -16,24 +16,28 @@ load("@io_bazel_rules_docker//repositories:deps.bzl", container_deps = "deps")
container_deps()
PROXY_WASM_CPP_SDK_SHA = "fd0be8405db25de0264bdb78fae3a82668c03782"
PROXY_WASM_CPP_SDK_SHA = "eaec483b5b3c7bcb89fd208b5a1fa5d79d626f61"
PROXY_WASM_CPP_SDK_SHA256 = "c57de2425b5c61d7f630c5061e319b4557ae1f1c7526e5a51c33dc1299471b08"
PROXY_WASM_CPP_SDK_SHA256 = "1140bc8114d75db56a6ca6b18423d4df50d988d40b4cec929a1eb246cf5a4a3d"
http_archive(
name = "proxy_wasm_cpp_sdk",
sha256 = PROXY_WASM_CPP_SDK_SHA256,
strip_prefix = "proxy-wasm-cpp-sdk-" + PROXY_WASM_CPP_SDK_SHA,
url = "https://github.com/proxy-wasm/proxy-wasm-cpp-sdk/archive/" + PROXY_WASM_CPP_SDK_SHA + ".tar.gz",
url = "https://github.com/higress-group/proxy-wasm-cpp-sdk/archive/" + PROXY_WASM_CPP_SDK_SHA + ".tar.gz",
)
load("@proxy_wasm_cpp_sdk//bazel/dep:deps.bzl", "wasm_dependencies")
load("@proxy_wasm_cpp_sdk//bazel:repositories.bzl", "proxy_wasm_cpp_sdk_repositories")
wasm_dependencies()
proxy_wasm_cpp_sdk_repositories()
load("@proxy_wasm_cpp_sdk//bazel/dep:deps_extra.bzl", "wasm_dependencies_extra")
load("@proxy_wasm_cpp_sdk//bazel:dependencies.bzl", "proxy_wasm_cpp_sdk_dependencies")
wasm_dependencies_extra()
proxy_wasm_cpp_sdk_dependencies()
load("@proxy_wasm_cpp_sdk//bazel:dependencies_extra.bzl", "proxy_wasm_cpp_sdk_dependencies_extra")
proxy_wasm_cpp_sdk_dependencies_extra()
load("@istio_ecosystem_wasm_extensions//bazel:wasm.bzl", "wasm_libraries")

View File

@@ -2,16 +2,16 @@ diff --git a/absl/time/internal/cctz/src/time_zone_format.cc b/absl/time/interna
index d8cb047..0c5f182 100644
--- a/absl/time/internal/cctz/src/time_zone_format.cc
+++ b/absl/time/internal/cctz/src/time_zone_format.cc
@@ -18,6 +18,8 @@
#endif
#endif
@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+#define HAS_STRPTIME 0
+
#if defined(HAS_STRPTIME) && HAS_STRPTIME
#if !defined(_XOPEN_SOURCE)
#define _XOPEN_SOURCE // Definedness suffices for strptime.
@@ -58,7 +60,7 @@ namespace {
#if !defined(HAS_STRPTIME)
#if !defined(_MSC_VER) && !defined(__MINGW32__)
#define HAS_STRPTIME 1 // assume everyone has strptime() except windows
@@ -58,7 +60,7 @@
#if !HAS_STRPTIME
// Build a strptime() using C++11's std::get_time().
@@ -20,7 +20,7 @@ index d8cb047..0c5f182 100644
std::istringstream input(s);
input >> std::get_time(tm, fmt);
if (input.fail()) return nullptr;
@@ -648,7 +650,7 @@ const char* ParseSubSeconds(const char* dp, detail::femtoseconds* subseconds) {
@@ -648,7 +650,7 @@
// Parses a string into a std::tm using strptime(3).
const char* ParseTM(const char* dp, const char* fmt, std::tm* tm) {
if (dp != nullptr) {

View File

@@ -9,9 +9,9 @@ load(
def wasm_libraries():
http_archive(
name = "com_google_absl",
sha256 = "ec8ef47335310cc3382bdc0d0cc1097a001e67dc83fcba807845aa5696e7e1e4",
strip_prefix = "abseil-cpp-302b250e1d917ede77b5ff00a6fd9f28430f1563",
url = "https://github.com/abseil/abseil-cpp/archive/302b250e1d917ede77b5ff00a6fd9f28430f1563.tar.gz",
sha256 = "3a0bb3d2e6f53352526a8d1a7e7b5749c68cd07f2401766a404fb00d2853fa49",
strip_prefix = "abseil-cpp-4bbdb026899fea9f882a95cbd7d6a4adaf49b2dd",
url = "https://github.com/abseil/abseil-cpp/archive/4bbdb026899fea9f882a95cbd7d6a4adaf49b2dd.tar.gz",
patch_args = ["-p1"],
patches = ["//bazel:absl.patch"],
)
@@ -33,14 +33,14 @@ def wasm_libraries():
urls = ["https://github.com/google/googletest/archive/release-1.10.0.tar.gz"],
)
PROXY_WASM_CPP_HOST_SHA = "f38347360feaaf5b2a733f219c4d8c9660d626f0"
PROXY_WASM_CPP_HOST_SHA256 = "bf10de946eb5785813895c2bf16504afc0cd590b9655d9ee52fb1074d0825ea3"
PROXY_WASM_CPP_HOST_SHA = "ecf42a27fcf78f42e64037d4eff1a0ca5a61e403"
PROXY_WASM_CPP_HOST_SHA256 = "9748156731e9521837686923321bf12725c32c9fa8355218209831cc3ee87080"
http_archive(
name = "proxy_wasm_cpp_host",
sha256 = PROXY_WASM_CPP_HOST_SHA256,
strip_prefix = "proxy-wasm-cpp-host-" + PROXY_WASM_CPP_HOST_SHA,
url = "https://github.com/proxy-wasm/proxy-wasm-cpp-host/archive/" + PROXY_WASM_CPP_HOST_SHA +".tar.gz",
url = "https://github.com/higress-group/proxy-wasm-cpp-host/archive/" + PROXY_WASM_CPP_HOST_SHA +".tar.gz",
)
http_archive(

View File

@@ -19,7 +19,6 @@
#include "absl/strings/str_cat.h"
#include "absl/strings/str_format.h"
#include "absl/strings/str_split.h"
#include "common/common_util.h"
namespace Wasm::Common::Http {
@@ -190,7 +189,8 @@ std::vector<std::string> getAllOfHeader(std::string_view key) {
std::vector<std::string> result;
auto headers = getRequestHeaderPairs()->pairs();
for (auto& header : headers) {
if (absl::EqualsIgnoreCase(Wasm::Common::stdToAbsl(header.first), Wasm::Common::stdToAbsl(key))) {
if (absl::EqualsIgnoreCase(Wasm::Common::stdToAbsl(header.first),
Wasm::Common::stdToAbsl(key))) {
result.push_back(std::string(header.second));
}
}
@@ -225,7 +225,8 @@ void forEachCookie(
v = v.substr(1, v.size() - 2);
}
if (!cookie_consumer(Wasm::Common::abslToStd(k), Wasm::Common::abslToStd(v))) {
if (!cookie_consumer(Wasm::Common::abslToStd(k),
Wasm::Common::abslToStd(v))) {
return;
}
}
@@ -265,7 +266,63 @@ std::string buildOriginalUri(std::optional<uint32_t> max_path_length) {
auto scheme = scheme_ptr->view();
auto host_ptr = getRequestHeader(Header::Host);
auto host = host_ptr->view();
return absl::StrCat(Wasm::Common::stdToAbsl(scheme), "://", Wasm::Common::stdToAbsl(host), Wasm::Common::stdToAbsl(final_path));
return absl::StrCat(Wasm::Common::stdToAbsl(scheme), "://",
Wasm::Common::stdToAbsl(host),
Wasm::Common::stdToAbsl(final_path));
}
void extractHostPathFromUri(const absl::string_view& uri,
absl::string_view& host, absl::string_view& path) {
/**
* URI RFC: https://www.ietf.org/rfc/rfc2396.txt
*
* Example:
* uri = "https://example.com:8443/certs"
* pos: ^
* host_pos: ^
* path_pos: ^
* host = "example.com:8443"
* path = "/certs"
*/
const auto pos = uri.find("://");
// Start position of the host
const auto host_pos = (pos == std::string::npos) ? 0 : pos + 3;
// Start position of the path
const auto path_pos = uri.find('/', host_pos);
if (path_pos == std::string::npos) {
// If uri doesn't have "/", the whole string is treated as host.
host = uri.substr(host_pos);
path = "/";
} else {
host = uri.substr(host_pos, path_pos - host_pos);
path = uri.substr(path_pos);
}
}
void extractPathWithoutArgsFromUri(const std::string_view& uri,
std::string_view& path_without_args) {
auto params_pos = uri.find('?');
size_t uri_end;
if (params_pos == std::string::npos) {
uri_end = uri.size();
} else {
uri_end = params_pos;
}
path_without_args = uri.substr(0, uri_end);
}
bool hasRequestBody() {
auto contentType = getRequestHeader("content-type")->toString();
auto contentLengthStr = getRequestHeader("content-length")->toString();
auto transferEncoding = getRequestHeader("transfer-encoding")->toString();
if (!contentType.empty()) {
return true;
}
if (!contentLengthStr.empty()) {
return true;
}
return transferEncoding.find("chunked") != std::string::npos;
}
} // namespace Wasm::Common::Http

View File

@@ -42,6 +42,12 @@ namespace Wasm::Common::Http {
using QueryParams = std::map<std::string, std::string>;
using SystemTime = std::chrono::time_point<std::chrono::system_clock>;
namespace Status {
constexpr int OK = 200;
constexpr int InternalServerError = 500;
constexpr int Unauthorized = 401;
} // namespace Status
namespace Header {
constexpr std::string_view Scheme(":scheme");
constexpr std::string_view Method(":method");
@@ -52,14 +58,17 @@ constexpr std::string_view Accept("accept");
constexpr std::string_view ContentMD5("content-md5");
constexpr std::string_view ContentType("content-type");
constexpr std::string_view ContentLength("content-length");
constexpr std::string_view TransferEncoding("transfer-encoding");
constexpr std::string_view UserAgent("user-agent");
constexpr std::string_view Date("date");
constexpr std::string_view Cookie("cookie");
constexpr std::string_view StrictTransportSecurity("strict-transport-security");
} // namespace Header
namespace ContentTypeValues {
constexpr std::string_view Grpc{"application/grpc"};
}
constexpr std::string_view Json{"application/json"};
} // namespace ContentTypeValues
class PercentEncoding {
public:
@@ -142,4 +151,10 @@ std::unordered_map<std::string, std::string> parseCookies(
std::string buildOriginalUri(std::optional<uint32_t> max_path_length);
void extractHostPathFromUri(const absl::string_view& uri,
absl::string_view& host, absl::string_view& path);
void extractPathWithoutArgsFromUri(const std::string_view& uri,
std::string_view& path_without_args);
bool hasRequestBody();
} // namespace Wasm::Common::Http

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "bot_detect.wasm",
srcs = [
"plugin.cc",
@@ -28,7 +28,6 @@ wasm_cc_binary(
"//common:http_util",
"//common:regex_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "custom_response.wasm",
srcs = [
"plugin.cc",
@@ -27,7 +27,6 @@ wasm_cc_binary(
"//common:json_util",
"//common:http_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "hmac_auth.wasm",
srcs = [
"plugin.cc",
@@ -30,7 +30,6 @@ wasm_cc_binary(
"//common:crypto_util",
"//common:http_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
@@ -33,7 +33,6 @@ wasm_cc_binary(
"//common:json_util",
"//common:http_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "key_auth.wasm",
srcs = [
"plugin.cc",
@@ -28,7 +28,6 @@ wasm_cc_binary(
"//common:json_util",
"//common:http_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)

View File

@@ -42,10 +42,7 @@ static RegisterContextFactory register_KeyAuth(CONTEXT_FACTORY(PluginContext),
namespace {
void deniedNoKeyAuthData(const std::string& realm) {
sendLocalResponse(401, "No API key found in request", "",
{{"WWW-Authenticate", absl::StrCat("Key realm=", realm)}});
}
const std::string OriginalAuthKey("X-HI-ORIGINAL-AUTH");
void deniedInvalidCredentials(const std::string& realm) {
sendLocalResponse(401, "Request denied by Key Auth check. Invalid API key",
@@ -84,6 +81,7 @@ bool PluginRootContext::parsePluginConfig(const json& configuration,
}
if (!JsonArrayIterate(
configuration, "consumers", [&](const json& consumer) -> bool {
Consumer c;
auto item = consumer.find("name");
if (item == consumer.end()) {
LOG_WARN("can't find 'name' field in consumer.");
@@ -94,6 +92,7 @@ bool PluginRootContext::parsePluginConfig(const json& configuration,
!name.first) {
return false;
}
c.name = name.first.value();
item = consumer.find("credential");
if (item == consumer.end()) {
LOG_WARN("can't find 'credential' field in consumer.");
@@ -104,6 +103,7 @@ bool PluginRootContext::parsePluginConfig(const json& configuration,
!credential.first) {
return false;
}
c.credential = credential.first.value();
if (rule.credential_to_name.find(credential.first.value()) !=
rule.credential_to_name.end()) {
LOG_WARN(absl::StrCat("duplicate consumer credential: ",
@@ -113,15 +113,59 @@ bool PluginRootContext::parsePluginConfig(const json& configuration,
rule.credentials.insert(credential.first.value());
rule.credential_to_name.emplace(
std::make_pair(credential.first.value(), name.first.value()));
item = consumer.find("keys");
if (item != consumer.end()) {
c.keys = std::vector<std::string>{OriginalAuthKey};
if (!JsonArrayIterate(
consumer, "keys", [&](const json& key_json) -> bool {
auto key = JsonValueAs<std::string>(key_json);
if (key.second !=
Wasm::Common::JsonParserResultDetail::OK) {
return false;
}
c.keys->push_back(key.first.value());
return true;
})) {
LOG_WARN("failed to parse configuration for consumer keys.");
return false;
}
item = consumer.find("in_query");
if (item != consumer.end()) {
auto in_query = JsonValueAs<bool>(item.value());
if (in_query.second !=
Wasm::Common::JsonParserResultDetail::OK ||
!in_query.first) {
LOG_WARN(
"failed to parse 'in_query' field in consumer "
"configuration.");
return false;
}
c.in_query = in_query.first;
}
item = consumer.find("in_header");
if (item != consumer.end()) {
auto in_header = JsonValueAs<bool>(item.value());
if (in_header.second !=
Wasm::Common::JsonParserResultDetail::OK ||
!in_header.first) {
LOG_WARN(
"failed to parse 'in_header' field in consumer "
"configuration.");
return false;
}
c.in_header = in_header.first;
}
}
rule.consumers.push_back(std::move(c));
return true;
})) {
LOG_WARN("failed to parse configuration for credentials.");
return false;
}
if (rule.credentials.empty()) {
LOG_INFO("at least one credential has to be configured for a rule.");
return false;
}
// if (rule.credentials.empty()) {
// LOG_INFO("at least one credential has to be configured for a rule.");
// return false;
// }
if (!JsonArrayIterate(configuration, "keys", [&](const json& item) -> bool {
auto key = JsonValueAs<std::string>(item);
if (key.second != Wasm::Common::JsonParserResultDetail::OK) {
@@ -137,6 +181,7 @@ bool PluginRootContext::parsePluginConfig(const json& configuration,
LOG_WARN("at least one key has to be configured for a rule.");
return false;
}
rule.keys.push_back(OriginalAuthKey);
auto it = configuration.find("realm");
if (it != configuration.end()) {
auto realm_string = JsonValueAs<std::string>(it.value());
@@ -175,36 +220,102 @@ bool PluginRootContext::parsePluginConfig(const json& configuration,
bool PluginRootContext::checkPlugin(
const KeyAuthConfigRule& rule,
const std::optional<std::unordered_set<std::string>>& allow_set) {
auto credential = extractCredential(rule);
if (credential.empty()) {
LOG_DEBUG("empty credential");
deniedNoKeyAuthData(rule.realm);
return false;
}
auto auth_credential_iter = rule.credentials.find(std::string(credential));
// Check if the credential is part of the credentials
// set from our container to grant or deny access.
if (auth_credential_iter == rule.credentials.end()) {
LOG_DEBUG(absl::StrCat("api key not found: ", credential));
deniedInvalidCredentials(rule.realm);
return false;
}
// Check if this credential has a consumer name. If so, check if this
// consumer is allowed to access. If allow_set is empty, allow all consumers.
auto credential_to_name_iter =
rule.credential_to_name.find(std::string(std::string(credential)));
if (credential_to_name_iter != rule.credential_to_name.end()) {
if (allow_set && !allow_set.value().empty()) {
if (allow_set.value().find(credential_to_name_iter->second) ==
allow_set.value().end()) {
deniedUnauthorizedConsumer(rule.realm);
LOG_DEBUG(credential_to_name_iter->second);
return false;
if (rule.consumers.empty()) {
for (const auto& key : rule.keys) {
auto credential = extractCredential(rule.in_header, rule.in_query, key);
if (credential.empty()) {
LOG_DEBUG("empty credential for key: " + key);
continue;
}
auto auth_credential_iter = rule.credentials.find(credential);
if (auth_credential_iter == rule.credentials.end()) {
LOG_DEBUG("api key not found: " + credential);
continue;
}
auto credential_to_name_iter = rule.credential_to_name.find(credential);
if (credential_to_name_iter != rule.credential_to_name.end()) {
if (allow_set && !allow_set->empty()) {
if (allow_set->find(credential_to_name_iter->second) ==
allow_set->end()) {
deniedUnauthorizedConsumer(rule.realm);
LOG_DEBUG("unauthorized consumer: " +
credential_to_name_iter->second);
return false;
}
}
addRequestHeader("X-Mse-Consumer", credential_to_name_iter->second);
}
return true;
}
} else {
for (const auto& consumer : rule.consumers) {
std::vector<std::string> keys_to_check =
consumer.keys.value_or(rule.keys);
bool in_query = consumer.in_query.value_or(rule.in_query);
bool in_header = consumer.in_header.value_or(rule.in_header);
for (const auto& key : keys_to_check) {
auto credential = extractCredential(in_header, in_query, key);
if (credential.empty()) {
LOG_DEBUG("empty credential for key: " + key);
continue;
}
if (credential != consumer.credential) {
LOG_DEBUG("credential does not match the consumer's credential: " +
credential);
continue;
}
auto auth_credential_iter = rule.credentials.find(credential);
if (auth_credential_iter == rule.credentials.end()) {
LOG_DEBUG("api key not found: " + credential);
continue;
}
auto credential_to_name_iter = rule.credential_to_name.find(credential);
if (credential_to_name_iter != rule.credential_to_name.end()) {
if (allow_set && !allow_set->empty()) {
if (allow_set->find(credential_to_name_iter->second) ==
allow_set->end()) {
deniedUnauthorizedConsumer(rule.realm);
LOG_DEBUG("unauthorized consumer: " +
credential_to_name_iter->second);
return false;
}
}
addRequestHeader("X-Mse-Consumer", credential_to_name_iter->second);
}
return true;
}
}
addRequestHeader("X-Mse-Consumer", credential_to_name_iter->second);
}
return true;
LOG_DEBUG("No valid credentials were found after checking all consumers.");
deniedInvalidCredentials(rule.realm);
return false;
}
std::string PluginRootContext::extractCredential(bool in_header, bool in_query,
const std::string& key) {
if (in_header) {
auto header = getRequestHeader(key);
if (header->size() != 0) {
return header->toString();
}
}
if (in_query) {
auto request_path_header = getRequestHeader(":path");
auto path = request_path_header->view();
auto params = Wasm::Common::Http::parseAndDecodeQueryString(path);
auto it = params.find(key);
if (it != params.end()) {
return it->second;
}
}
return "";
}
bool PluginRootContext::onConfigure(size_t size) {
@@ -234,31 +345,6 @@ bool PluginRootContext::configure(size_t configuration_size) {
return true;
}
std::string PluginRootContext::extractCredential(
const KeyAuthConfigRule& rule) {
auto request_path_header = getRequestHeader(":path");
auto path = request_path_header->view();
LOG_DEBUG(std::string(path));
if (rule.in_query) {
auto params = Wasm::Common::Http::parseAndDecodeQueryString(path);
for (const auto& key : rule.keys) {
auto it = params.find(key);
if (it != params.end()) {
return it->second;
}
}
}
if (rule.in_header) {
for (const auto& key : rule.keys) {
auto header = getRequestHeader(key);
if (header->size() != 0) {
return header->toString();
}
}
}
return "";
}
FilterHeadersStatus PluginContext::onRequestHeaders(uint32_t, bool) {
auto* rootCtx = rootContext();
return rootCtx->checkAuthRule(

View File

@@ -36,7 +36,16 @@ namespace key_auth {
#endif
struct Consumer {
std::string name;
std::string credential;
std::optional<std::vector<std::string>> keys;
std::optional<bool> in_query = std::nullopt;
std::optional<bool> in_header = std::nullopt;
};
struct KeyAuthConfigRule {
std::vector<Consumer> consumers;
std::unordered_set<std::string> credentials;
std::unordered_map<std::string, std::string> credential_to_name;
std::string realm = "MSE Gateway";
@@ -61,7 +70,8 @@ class PluginRootContext : public RootContext,
private:
bool parsePluginConfig(const json&, KeyAuthConfigRule&) override;
std::string extractCredential(const KeyAuthConfigRule&);
std::string extractCredential(bool in_header, bool in_query,
const std::string& key);
};
// Per-stream context.

View File

@@ -121,7 +121,7 @@ TEST_F(KeyAuthTest, InQuery) {
"_rules_": [
{
"_match_route_": ["test"],
"credentials":["abc"],
"credentials":["abc","def"],
"keys": ["apiKey", "x-api-key"]
}
]
@@ -144,6 +144,10 @@ TEST_F(KeyAuthTest, InQuery) {
path_ = "/test?hello=123&apiKey=123";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
path_ = "/test?hello=123&apiKey=123&x-api-key=def";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::Continue);
}
TEST_F(KeyAuthTest, InQueryWithConsumer) {
@@ -173,6 +177,29 @@ TEST_F(KeyAuthTest, InQueryWithConsumer) {
FilterHeadersStatus::StopIteration);
}
TEST_F(KeyAuthTest, EmptyConsumer) {
std::string configuration = R"(
{
"consumers" : [],
"keys" : [ "apiKey", "x-api-key" ],
"_rules_" : [ {"_match_route_" : ["test"], "allow" : []} ]
})";
BufferBase buffer;
buffer.set(configuration);
EXPECT_CALL(*mock_context_, getBuffer(WasmBufferType::PluginConfiguration))
.WillOnce([&buffer](WasmBufferType) { return &buffer; });
EXPECT_TRUE(root_context_->configure(configuration.size()));
route_name_ = "test";
path_ = "/test?hello=1&apiKey=abc";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
route_name_ = "test2";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::Continue);
}
TEST_F(KeyAuthTest, InHeader) {
std::string configuration = R"(
{
@@ -240,6 +267,40 @@ TEST_F(KeyAuthTest, InHeaderWithConsumer) {
FilterHeadersStatus::StopIteration);
}
TEST_F(KeyAuthTest, ConsumerDifferentKey) {
std::string configuration = R"(
{
"consumers" : [ {"credential" : "abc", "name" : "consumer1", "keys" : [ "apiKey" ]}, {"credential" : "123", "name" : "consumer2"} ],
"keys" : [ "apiKey2" ],
"_rules_" : [ {"_match_route_" : ["test"], "allow" : ["consumer1"]}, {"_match_route_" : ["test2"], "allow" : ["consumer2"]} ]
})";
BufferBase buffer;
buffer.set(configuration);
EXPECT_CALL(*mock_context_, getBuffer(WasmBufferType::PluginConfiguration))
.WillOnce([&buffer](WasmBufferType) { return &buffer; });
EXPECT_TRUE(root_context_->configure(configuration.size()));
route_name_ = "test";
path_ = "/test?hello=1&apiKey=abc";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::Continue);
route_name_ = "test";
path_ = "/test?hello=1&apiKey2=abc";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
route_name_ = "test";
path_ = "/test?hello=123&apiKey2=123";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
route_name_ = "test2";
path_ = "/test?hello=123&apiKey2=123";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::Continue);
}
} // namespace key_auth
} // namespace null_plugin
} // namespace proxy_wasm

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "key_rate_limit.wasm",
srcs = [
"plugin.cc",
@@ -29,7 +29,6 @@ wasm_cc_binary(
"//common:json_util",
"//common:http_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)

View File

@@ -0,0 +1,70 @@
# Copyright (c) 2022 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
proxy_wasm_cc_binary(
name = "model_mapper.wasm",
srcs = [
"plugin.cc",
"plugin.h",
],
deps = [
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics_higress",
"@com_google_absl//absl/strings",
"@com_google_absl//absl/time",
"//common:json_util",
"//common:http_util",
"//common:rule_util",
],
)
cc_library(
name = "model_mapper_lib",
srcs = [
"plugin.cc",
],
hdrs = [
"plugin.h",
],
copts = ["-DNULL_PLUGIN"],
deps = [
"@com_google_absl//absl/strings",
"@com_google_absl//absl/time",
"//common:json_util",
"@proxy_wasm_cpp_host//:lib",
"//common:http_util_nullvm",
"//common:rule_util_nullvm",
],
)
cc_test(
name = "model_mapper_test",
srcs = [
"plugin_test.cc",
],
copts = ["-DNULL_PLUGIN"],
deps = [
":model_mapper_lib",
"@com_google_googletest//:gtest",
"@com_google_googletest//:gtest_main",
"@proxy_wasm_cpp_host//:lib",
],
)
declare_wasm_image_targets(
name = "model_mapper",
wasm_file = ":model_mapper.wasm",
)

View File

@@ -0,0 +1,63 @@
## Function Description
The `model-mapper` plugin implements the functionality of routing based on the model parameter in the LLM protocol.
## Configuration Fields
| Name | Data Type | Filling Requirement | Default Value | Description |
| ----------- | --------------- | ----------------------- | ------ | ------------------------------------------- |
| `modelKey` | string | Optional | model | The location of the model parameter in the request body |
| `modelMapping` | map of string | Optional | - | AI model mapping table, used to map model names in the request to model names supported by the service provider.<br/>1. Supports prefix matching. For example, use "gpt-3-*" to match all models whose names start with "gpt-3-";<br/>2. Supports using "*" as the key to configure a generic fallback mapping;<br/>3. If the mapped target name is an empty string "", the original model name is kept. |
| `enableOnPathSuffix` | array of string | Optional | ["/v1/chat/completions"] | Only applies to requests with these specific path suffixes |
## Runtime Properties
Plugin execution phase: Authentication phase
Plugin execution priority: 800
## Effect Description
With the following configuration:
```yaml
modelMapping:
'gpt-4-*': "qwen-max"
'gpt-4o': "qwen-vl-plus"
'*': "qwen-turbo"
```
Once enabled, model parameters starting with `gpt-4-` will be rewritten to `qwen-max`, `gpt-4o` will be rewritten to `qwen-vl-plus`, and all other models will be rewritten to `qwen-turbo`.
For example, if the original request is:
```json
{
"model": "gpt-4o",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "higress项目主仓库的github地址是什么"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
After processing by this plugin, the original LLM request body will be rewritten to:
```json
{
"model": "qwen-vl-plus",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "higress项目主仓库的github地址是什么"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```

View File

@@ -0,0 +1,65 @@
## Function Description
The `model-mapper` plugin implements the functionality of routing based on the model parameter in the LLM protocol.
## Configuration Fields
| Name | Data Type | Filling Requirement | Default Value | Description |
| ----------- | --------------- | ----------------------- | ------ | ------------------------------------------- |
| `modelKey` | string | Optional | model | The location of the model parameter in the request body. |
| `modelMapping` | map of string | Optional | - | AI model mapping table, used to map the model names in the request to the model names supported by the service provider.<br/>1. Supports prefix matching. For example, use "gpt-3-*" to match all models whose names start with “gpt-3-”;<br/>2. Supports using "*" as the key to configure a generic fallback mapping relationship;<br/>3. If the target name in the mapping is an empty string "", it means to keep the original model name. |
| `enableOnPathSuffix` | array of string | Optional | ["/v1/chat/completions"] | Only applies to requests with these specific path suffixes. |
## Runtime Properties
Plugin execution phase: Authentication phase
Plugin execution priority: 800
## Effect Description
With the following configuration:
```yaml
modelMapping:
'gpt-4-*': "qwen-max"
'gpt-4o': "qwen-vl-plus"
'*': "qwen-turbo"
```
After enabling, model parameters starting with `gpt-4-` will be rewritten to `qwen-max`, `gpt-4o` will be rewritten to `qwen-vl-plus`, and all other models will be rewritten to `qwen-turbo`.
For example, if the original request was:
```json
{
"model": "gpt-4o",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "What is the GitHub address of the main repository for the higress project?"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
After processing by this plugin, the original LLM request body will be modified to:
```json
{
"model": "qwen-vl-plus",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "What is the GitHub address of the main repository for the higress project?"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```

View File

@@ -0,0 +1,243 @@
// Copyright (c) 2022 Alibaba Group Holding Ltd.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "extensions/model_mapper/plugin.h"
#include <array>
#include <limits>
#include "absl/strings/str_cat.h"
#include "absl/strings/str_split.h"
#include "common/http_util.h"
#include "common/json_util.h"
using ::nlohmann::json;
using ::Wasm::Common::JsonArrayIterate;
using ::Wasm::Common::JsonGetField;
using ::Wasm::Common::JsonObjectIterate;
using ::Wasm::Common::JsonValueAs;
#ifdef NULL_PLUGIN
namespace proxy_wasm {
namespace null_plugin {
namespace model_mapper {
PROXY_WASM_NULL_PLUGIN_REGISTRY
#endif
static RegisterContextFactory register_ModelMapper(
CONTEXT_FACTORY(PluginContext), ROOT_FACTORY(PluginRootContext));
namespace {
constexpr std::string_view SetDecoderBufferLimitKey =
"SetRequestBodyBufferLimit";
constexpr std::string_view DefaultMaxBodyBytes = "10485760";
} // namespace
bool PluginRootContext::parsePluginConfig(const json& configuration,
ModelMapperConfigRule& rule) {
if (auto it = configuration.find("modelKey"); it != configuration.end()) {
if (it->is_string()) {
rule.model_key_ = it->get<std::string>();
} else {
LOG_ERROR("Invalid type for modelKey. Expected string.");
return false;
}
}
if (auto it = configuration.find("modelMapping"); it != configuration.end()) {
if (!it->is_object()) {
LOG_ERROR("Invalid type for modelMapping. Expected object.");
return false;
}
auto model_mapping = it->get<Wasm::Common::JsonObject>();
if (!JsonObjectIterate(model_mapping, [&](std::string key) -> bool {
auto model_json = model_mapping.find(key);
if (!model_json->is_string()) {
LOG_ERROR(
"Invalid type for item in modelMapping. Expected string.");
return false;
}
if (key == "*") {
rule.default_model_mapping_ = model_json->get<std::string>();
return true;
}
if (absl::EndsWith(key, "*")) {
rule.prefix_model_mapping_.emplace_back(
absl::StripSuffix(key, "*"), model_json->get<std::string>());
return true;
}
auto ret = rule.exact_model_mapping_.emplace(
key, model_json->get<std::string>());
if (!ret.second) {
LOG_ERROR("Duplicate key in modelMapping: " + key);
return false;
}
return true;
})) {
return false;
}
}
if (!JsonArrayIterate(
configuration, "enableOnPathSuffix", [&](const json& item) -> bool {
if (item.is_string()) {
rule.enable_on_path_suffix_.emplace_back(item.get<std::string>());
return true;
}
return false;
})) {
LOG_WARN("Invalid type for item in enableOnPathSuffix. Expected string.");
return false;
}
return true;
}
bool PluginRootContext::onConfigure(size_t size) {
// Parse configuration JSON string.
if (size > 0 && !configure(size)) {
LOG_WARN("configuration has errors; initialization will not continue.");
return false;
}
return true;
}
bool PluginRootContext::configure(size_t configuration_size) {
auto configuration_data = getBufferBytes(WasmBufferType::PluginConfiguration,
0, configuration_size);
// Parse configuration JSON string.
auto result = ::Wasm::Common::JsonParse(configuration_data->view());
if (!result) {
LOG_WARN(absl::StrCat("cannot parse plugin configuration JSON string: ",
configuration_data->view()));
return false;
}
if (!parseRuleConfig(result.value())) {
LOG_WARN(absl::StrCat("cannot parse plugin configuration JSON string: ",
configuration_data->view()));
return false;
}
return true;
}
FilterHeadersStatus PluginRootContext::onHeader(
const ModelMapperConfigRule& rule) {
if (!Wasm::Common::Http::hasRequestBody()) {
return FilterHeadersStatus::Continue;
}
auto path = getRequestHeader(Wasm::Common::Http::Header::Path)->toString();
auto params_pos = path.find('?');
size_t uri_end;
if (params_pos == std::string::npos) {
uri_end = path.size();
} else {
uri_end = params_pos;
}
bool enable = false;
for (const auto& enable_suffix : rule.enable_on_path_suffix_) {
if (absl::EndsWith({path.c_str(), uri_end}, enable_suffix)) {
enable = true;
break;
}
}
if (!enable) {
return FilterHeadersStatus::Continue;
}
auto content_type_value =
getRequestHeader(Wasm::Common::Http::Header::ContentType);
if (!absl::StrContains(content_type_value->view(),
Wasm::Common::Http::ContentTypeValues::Json)) {
return FilterHeadersStatus::Continue;
}
removeRequestHeader(Wasm::Common::Http::Header::ContentLength);
setFilterState(SetDecoderBufferLimitKey, DefaultMaxBodyBytes);
return FilterHeadersStatus::StopIteration;
}
FilterDataStatus PluginRootContext::onBody(const ModelMapperConfigRule& rule,
std::string_view body) {
const auto& exact_model_mapping = rule.exact_model_mapping_;
const auto& prefix_model_mapping = rule.prefix_model_mapping_;
const auto& default_model_mapping = rule.default_model_mapping_;
const auto& model_key = rule.model_key_;
auto body_json_opt = ::Wasm::Common::JsonParse(body);
if (!body_json_opt) {
LOG_WARN(absl::StrCat("cannot parse body to JSON string: ", body));
return FilterDataStatus::Continue;
}
auto body_json = body_json_opt.value();
std::string old_model;
if (body_json.contains(model_key)) {
old_model = body_json[model_key];
}
std::string model =
default_model_mapping.empty() ? old_model : default_model_mapping;
if (auto it = exact_model_mapping.find(old_model);
it != exact_model_mapping.end()) {
model = it->second;
} else {
for (auto& prefix_model_pair : prefix_model_mapping) {
if (absl::StartsWith(old_model, prefix_model_pair.first)) {
model = prefix_model_pair.second;
break;
}
}
}
if (!model.empty() && model != old_model) {
body_json[model_key] = model;
setBuffer(WasmBufferType::HttpRequestBody, 0,
std::numeric_limits<size_t>::max(), body_json.dump());
LOG_DEBUG(
absl::StrCat("model mapped, before:", old_model, ", after:", model));
}
return FilterDataStatus::Continue;
}
FilterHeadersStatus PluginContext::onRequestHeaders(uint32_t, bool) {
auto* rootCtx = rootContext();
return rootCtx->onHeaders([rootCtx, this](const auto& config) {
auto ret = rootCtx->onHeader(config);
if (ret == FilterHeadersStatus::StopIteration) {
this->config_ = &config;
}
return ret;
});
}
FilterDataStatus PluginContext::onRequestBody(size_t body_size,
bool end_stream) {
if (config_ == nullptr) {
return FilterDataStatus::Continue;
}
body_total_size_ += body_size;
if (!end_stream) {
return FilterDataStatus::StopIterationAndBuffer;
}
auto body =
getBufferBytes(WasmBufferType::HttpRequestBody, 0, body_total_size_);
auto* rootCtx = rootContext();
return rootCtx->onBody(*config_, body->view());
}
#ifdef NULL_PLUGIN
} // namespace model_mapper
} // namespace null_plugin
} // namespace proxy_wasm
#endif

View File

@@ -0,0 +1,87 @@
/*
* Copyright (c) 2022 Alibaba Group Holding Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <assert.h>
#include <string>
#include <unordered_set>
#include "common/route_rule_matcher.h"
#define ASSERT(_X) assert(_X)
#ifndef NULL_PLUGIN
#include "proxy_wasm_intrinsics.h"
#else
#include "include/proxy-wasm/null_plugin.h"
namespace proxy_wasm {
namespace null_plugin {
namespace model_mapper {
#endif
struct ModelMapperConfigRule {
std::string model_key_ = "model";
std::map<std::string, std::string> exact_model_mapping_;
std::vector<std::pair<std::string, std::string>> prefix_model_mapping_;
std::string default_model_mapping_;
std::vector<std::string> enable_on_path_suffix_ = {"/v1/chat/completions"};
};
// PluginRootContext is the root context for all streams processed by the
// thread. It has the same lifetime as the worker thread and acts as target for
// interactions that outlives individual stream, e.g. timer, async calls.
class PluginRootContext : public RootContext,
public RouteRuleMatcher<ModelMapperConfigRule> {
public:
PluginRootContext(uint32_t id, std::string_view root_id)
: RootContext(id, root_id) {}
~PluginRootContext() {}
bool onConfigure(size_t) override;
FilterHeadersStatus onHeader(const ModelMapperConfigRule&);
FilterDataStatus onBody(const ModelMapperConfigRule&, std::string_view);
bool configure(size_t);
private:
bool parsePluginConfig(const json&, ModelMapperConfigRule&) override;
};
// Per-stream context.
class PluginContext : public Context {
public:
explicit PluginContext(uint32_t id, RootContext* root) : Context(id, root) {}
FilterHeadersStatus onRequestHeaders(uint32_t, bool) override;
FilterDataStatus onRequestBody(size_t, bool) override;
private:
inline PluginRootContext* rootContext() {
return dynamic_cast<PluginRootContext*>(this->root());
}
size_t body_total_size_ = 0;
const ModelMapperConfigRule* config_ = nullptr;
};
#ifdef NULL_PLUGIN
} // namespace model_mapper
} // namespace null_plugin
} // namespace proxy_wasm
#endif

View File

@@ -0,0 +1,301 @@
// Copyright (c) 2022 Alibaba Group Holding Ltd.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "extensions/model_mapper/plugin.h"
#include <cstddef>
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include "include/proxy-wasm/context.h"
#include "include/proxy-wasm/null.h"
namespace proxy_wasm {
namespace null_plugin {
namespace model_mapper {
NullPluginRegistry* context_registry_;
RegisterNullVmPluginFactory register_model_mapper_plugin("model_mapper", []() {
return std::make_unique<NullPlugin>(model_mapper::context_registry_);
});
class MockContext : public proxy_wasm::ContextBase {
public:
MockContext(WasmBase* wasm) : ContextBase(wasm) {}
MOCK_METHOD(BufferInterface*, getBuffer, (WasmBufferType));
MOCK_METHOD(WasmResult, log, (uint32_t, std::string_view));
MOCK_METHOD(WasmResult, setBuffer,
(WasmBufferType, size_t, size_t, std::string_view));
MOCK_METHOD(WasmResult, getHeaderMapValue,
(WasmHeaderMapType /* type */, std::string_view /* key */,
std::string_view* /*result */));
MOCK_METHOD(WasmResult, replaceHeaderMapValue,
(WasmHeaderMapType /* type */, std::string_view /* key */,
std::string_view /* value */));
MOCK_METHOD(WasmResult, removeHeaderMapValue,
(WasmHeaderMapType /* type */, std::string_view /* key */));
MOCK_METHOD(WasmResult, addHeaderMapValue,
(WasmHeaderMapType, std::string_view, std::string_view));
MOCK_METHOD(WasmResult, getProperty, (std::string_view, std::string*));
MOCK_METHOD(WasmResult, setProperty, (std::string_view, std::string_view));
};
class ModelMapperTest : public ::testing::Test {
protected:
ModelMapperTest() {
// Initialize test VM
test_vm_ = createNullVm();
wasm_base_ = std::make_unique<WasmBase>(
std::move(test_vm_), "test-vm", "", "",
std::unordered_map<std::string, std::string>{},
AllowedCapabilitiesMap{});
wasm_base_->load("model_mapper");
wasm_base_->initialize();
// Initialize host side context
mock_context_ = std::make_unique<MockContext>(wasm_base_.get());
current_context_ = mock_context_.get();
// Initialize Wasm sandbox context
root_context_ = std::make_unique<PluginRootContext>(0, "");
context_ = std::make_unique<PluginContext>(1, root_context_.get());
ON_CALL(*mock_context_, log(testing::_, testing::_))
.WillByDefault([](uint32_t, std::string_view m) {
std::cerr << m << "\n";
return WasmResult::Ok;
});
ON_CALL(*mock_context_, getBuffer(testing::_))
.WillByDefault([&](WasmBufferType type) {
if (type == WasmBufferType::HttpRequestBody) {
return &body_;
}
return &config_;
});
ON_CALL(*mock_context_, getHeaderMapValue(WasmHeaderMapType::RequestHeaders,
testing::_, testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view header,
std::string_view* result) {
if (header == "content-type") {
*result = "application/json";
} else if (header == "content-length") {
*result = "1024";
} else if (header == ":path") {
*result = path_;
}
return WasmResult::Ok;
});
ON_CALL(*mock_context_,
replaceHeaderMapValue(WasmHeaderMapType::RequestHeaders, testing::_,
testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view key,
std::string_view value) { return WasmResult::Ok; });
ON_CALL(*mock_context_,
removeHeaderMapValue(WasmHeaderMapType::RequestHeaders, testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view key) {
return WasmResult::Ok;
});
ON_CALL(*mock_context_, addHeaderMapValue(WasmHeaderMapType::RequestHeaders,
testing::_, testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view header,
std::string_view value) { return WasmResult::Ok; });
ON_CALL(*mock_context_, getProperty(testing::_, testing::_))
.WillByDefault([&](std::string_view path, std::string* result) {
if (absl::StartsWith(path, "route_name")) {
*result = route_name_;
} else if (absl::StartsWith(path, "cluster_name")) {
*result = service_name_;
}
return WasmResult::Ok;
});
ON_CALL(*mock_context_, setProperty(testing::_, testing::_))
.WillByDefault(
[&](std::string_view, std::string_view) { return WasmResult::Ok; });
}
~ModelMapperTest() override {}
std::unique_ptr<WasmBase> wasm_base_;
std::unique_ptr<WasmVm> test_vm_;
std::unique_ptr<MockContext> mock_context_;
std::unique_ptr<PluginRootContext> root_context_;
std::unique_ptr<PluginContext> context_;
std::string route_name_;
std::string service_name_;
std::string path_;
BufferBase body_;
BufferBase config_;
};
TEST_F(ModelMapperTest, ModelMappingTest) {
std::string configuration = R"(
{
"modelMapping": {
"*": "qwen-long",
"gpt-4*": "qwen-max",
"gpt-4o": "qwen-turbo",
"gpt-4o-mini": "qwen-plus",
"text-embedding-v1": ""
}
})";
config_.set(configuration);
EXPECT_TRUE(root_context_->configure(configuration.size()));
path_ = "/v1/chat/completions";
std::string request_json = R"({"model": "gpt-3.5"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-long"})");
return WasmResult::Ok;
});
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
request_json = R"({"model": "gpt-4"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-max"})");
return WasmResult::Ok;
});
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
request_json = R"({"model": "gpt-4o"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-turbo"})");
return WasmResult::Ok;
});
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
request_json = R"({"model": "gpt-4o-mini"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-plus"})");
return WasmResult::Ok;
});
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
request_json = R"({"model": "text-embedding-v1"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.Times(0);
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
request_json = R"({})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-long"})");
return WasmResult::Ok;
});
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
}
TEST_F(ModelMapperTest, RouteLevelModelMappingTest) {
std::string configuration = R"(
{
"_rules_": [
{
"_match_route_": ["route-a"],
"_match_service_": ["service-1"],
"modelMapping": {
"*": "qwen-long"
}
},
{
"_match_route_": ["route-b"],
"_match_service_": ["service-2"],
"modelMapping": {
"*": "qwen-max"
}
},
{
"_match_route_": ["route-b"],
"_match_service_": ["service-3"],
"modelMapping": {
"*": "qwen-turbo"
}
}
]})";
config_.set(configuration);
EXPECT_TRUE(root_context_->configure(configuration.size()));
path_ = "/api/v1/chat/completions";
std::string request_json = R"({"model": "gpt-4"})";
body_.set(request_json);
route_name_ = "route-a";
service_name_ = "outbound|80||service-1";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-long"})");
return WasmResult::Ok;
});
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
route_name_ = "route-b";
service_name_ = "outbound|80||service-2";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-max"})");
return WasmResult::Ok;
});
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
route_name_ = "route-b";
service_name_ = "outbound|80||service-3";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-turbo"})");
return WasmResult::Ok;
});
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(20, true), FilterDataStatus::Continue);
}
} // namespace model_mapper
} // namespace null_plugin
} // namespace proxy_wasm

View File

@@ -0,0 +1,70 @@
# Copyright (c) 2022 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
proxy_wasm_cc_binary(
name = "model_router.wasm",
srcs = [
"plugin.cc",
"plugin.h",
],
deps = [
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics_higress",
"@com_google_absl//absl/strings",
"@com_google_absl//absl/time",
"//common:json_util",
"//common:http_util",
"//common:rule_util",
],
)
cc_library(
name = "model_router_lib",
srcs = [
"plugin.cc",
],
hdrs = [
"plugin.h",
],
copts = ["-DNULL_PLUGIN"],
deps = [
"@com_google_absl//absl/strings",
"@com_google_absl//absl/time",
"//common:json_util",
"@proxy_wasm_cpp_host//:lib",
"//common:http_util_nullvm",
"//common:rule_util_nullvm",
],
)
cc_test(
name = "model_router_test",
srcs = [
"plugin_test.cc",
],
copts = ["-DNULL_PLUGIN"],
deps = [
":model_router_lib",
"@com_google_googletest//:gtest",
"@com_google_googletest//:gtest_main",
"@proxy_wasm_cpp_host//:lib",
],
)
declare_wasm_image_targets(
name = "model_router",
wasm_file = ":model_router.wasm",
)

View File

@@ -0,0 +1,98 @@
## Function Description
The `model-router` plugin implements the functionality of routing based on the model parameter in the LLM protocol.
## Configuration Fields
| Name | Data Type | Filling Requirement | Default Value | Description |
| ----------- | --------------- | ----------------------- | ------ | ------------------------------------------- |
| `modelKey` | string | Optional | model | The location of the model parameter in the request body |
| `addProviderHeader` | string | Optional | - | The request header in which to place the provider name parsed from the model parameter |
| `modelToHeader` | string | Optional | - | The request header in which to place the model parameter directly |
| `enableOnPathSuffix` | array of string | Optional | ["/v1/chat/completions"] | Only applies to requests with these specific path suffixes |
## Runtime Properties
Plugin execution phase: Authentication phase
Plugin execution priority: 900
## Effect Description
### Routing Based on the model Parameter
The following configuration is required:
```yaml
modelToHeader: x-higress-llm-model
```
The plugin extracts the model parameter from the request and sets it in the x-higress-llm-model request header for subsequent routing. For example, the original LLM request body is:
```json
{
"model": "qwen-long",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "higress项目主仓库的github地址是什么"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
After processing by this plugin, the following request header (usable for route matching) will be added:
x-higress-llm-model: qwen-long
### Extracting the provider Field from the model Parameter for Routing
> Note that this mode requires the client to specify the provider in the model parameter, separated by `/`.
The following configuration is required:
```yaml
addProviderHeader: x-higress-llm-provider
```
The plugin extracts the provider part (if present) from the model parameter in the request, sets it in the x-higress-llm-provider request header for subsequent routing, and rewrites the model parameter to the model name part. For example, the original LLM request body is:
```json
{
"model": "dashscope/qwen-long",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "higress项目主仓库的github地址是什么"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
After processing by this plugin, the following request header (usable for route matching) will be added:
x-higress-llm-provider: dashscope
The original LLM request body will be rewritten to:
```json
{
"model": "qwen-long",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "higress项目主仓库的github地址是什么"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```

View File

@@ -0,0 +1,97 @@
## Function Description
The `model-router` plugin implements the function of routing based on the model parameter in the LLM protocol.
## Configuration Fields
| Name | Data Type | Filling Requirement | Default Value | Description |
| ----------- | --------------- | ----------------------- | ------ | ------------------------------------------- |
| `modelKey` | string | Optional | model | The location of the model parameter in the request body |
| `addProviderHeader` | string | Optional | - | The request header in which to place the provider name parsed from the model parameter |
| `modelToHeader` | string | Optional | - | The request header in which to place the model parameter directly |
| `enableOnPathSuffix` | array of string | Optional | ["/v1/chat/completions"] | Only effective for requests with these specific path suffixes |
## Runtime Attributes
Plugin execution phase: Authentication phase
Plugin execution priority: 900
## Effect Description
### Routing Based on the model Parameter
The following configuration is required:
```yaml
modelToHeader: x-higress-llm-model
```
The plugin will extract the model parameter from the request and set it in the x-higress-llm-model request header, which can be used for subsequent routing. For example, the original LLM request body:
```json
{
"model": "qwen-long",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "What is the GitHub address of the main repository for the higress project"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
After processing by this plugin, the following request header (which can be used for route matching) will be added:
x-higress-llm-model: qwen-long
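Downstream routes can then match on this header. As a sketch (assuming Higress's `higress.io/exact-match-header-*` Ingress annotation convention; the route and service names are hypothetical), a route that only accepts `qwen-long` traffic might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: qwen-long-route   # hypothetical route name
  annotations:
    higress.io/exact-match-header-x-higress-llm-model: qwen-long
spec:
  ingressClassName: higress
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: qwen-backend   # hypothetical upstream service
                port:
                  number: 8080
```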
### Extracting the provider Field from the model Parameter for Routing
> Note that this mode requires the client to specify the provider using a `/` separator in the model parameter.
The following configuration is required:
```yaml
addProviderHeader: x-higress-llm-provider
```
The plugin will extract the provider part (if present) from the model parameter in the request and set it in the x-higress-llm-provider request header, which can be used for subsequent routing, and rewrite the model parameter to the model name part. For example, the original LLM request body:
```json
{
"model": "dashscope/qwen-long",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "What is the GitHub address of the main repository for the higress project"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
After processing by this plugin, the following request header (which can be used for route matching) will be added:
x-higress-llm-provider: dashscope
The original LLM request body will be changed to:
```json
{
"model": "qwen-long",
"frequency_penalty": 0,
"max_tokens": 800,
"stream": false,
"messages": [{
"role": "user",
"content": "What is the GitHub address of the main repository for the higress project"
}],
"presence_penalty": 0,
"temperature": 0.7,
"top_p": 0.95
}
```
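The body rewrite described above can be sketched in Go (an illustrative helper mirroring the plugin's logic, not the plugin's actual C++ implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// splitModel mirrors the rewrite the plugin performs: "provider/model"
// becomes a provider header value plus a bare model name.
func splitModel(model string) (provider, name string) {
	if i := strings.Index(model, "/"); i >= 0 {
		return model[:i], model[i+1:]
	}
	return "", model // no provider part: leave the model untouched
}

// rewriteBody rewrites the "model" field of a request body and returns the
// provider value that would be placed into the configured request header.
func rewriteBody(body []byte) (newBody []byte, provider string, err error) {
	var m map[string]interface{}
	if err = json.Unmarshal(body, &m); err != nil {
		return nil, "", err
	}
	if model, ok := m["model"].(string); ok {
		if p, name := splitModel(model); p != "" {
			provider = p
			m["model"] = name
		}
	}
	newBody, err = json.Marshal(m)
	return newBody, provider, err
}

func main() {
	out, p, _ := rewriteBody([]byte(`{"model":"dashscope/qwen-long"}`))
	fmt.Println(p, string(out)) // dashscope {"model":"qwen-long"}
}
```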

@@ -0,0 +1,227 @@
// Copyright (c) 2022 Alibaba Group Holding Ltd.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "extensions/model_router/plugin.h"
#include <array>
#include <limits>
#include "absl/strings/str_cat.h"
#include "absl/strings/str_split.h"
#include "common/http_util.h"
#include "common/json_util.h"
using ::nlohmann::json;
using ::Wasm::Common::JsonArrayIterate;
using ::Wasm::Common::JsonGetField;
using ::Wasm::Common::JsonObjectIterate;
using ::Wasm::Common::JsonValueAs;
#ifdef NULL_PLUGIN
namespace proxy_wasm {
namespace null_plugin {
namespace model_router {
PROXY_WASM_NULL_PLUGIN_REGISTRY
#endif
static RegisterContextFactory register_ModelRouter(
CONTEXT_FACTORY(PluginContext), ROOT_FACTORY(PluginRootContext));
namespace {
constexpr std::string_view SetDecoderBufferLimitKey =
"SetRequestBodyBufferLimit";
constexpr std::string_view DefaultMaxBodyBytes = "10485760";
} // namespace
bool PluginRootContext::parsePluginConfig(const json& configuration,
ModelRouterConfigRule& rule) {
if (auto it = configuration.find("modelKey"); it != configuration.end()) {
if (it->is_string()) {
rule.model_key_ = it->get<std::string>();
} else {
LOG_ERROR("Invalid type for modelKey. Expected string.");
return false;
}
}
if (auto it = configuration.find("addProviderHeader");
it != configuration.end()) {
if (it->is_string()) {
rule.add_provider_header_ = it->get<std::string>();
} else {
LOG_ERROR("Invalid type for addProviderHeader. Expected string.");
return false;
}
}
if (auto it = configuration.find("modelToHeader");
it != configuration.end()) {
if (it->is_string()) {
rule.model_to_header_ = it->get<std::string>();
} else {
LOG_ERROR("Invalid type for modelToHeader. Expected string.");
return false;
}
}
if (!JsonArrayIterate(
configuration, "enableOnPathSuffix", [&](const json& item) -> bool {
if (item.is_string()) {
rule.enable_on_path_suffix_.emplace_back(item.get<std::string>());
return true;
}
return false;
})) {
LOG_ERROR("Invalid type for item in enableOnPathSuffix. Expected string.");
return false;
}
return true;
}
bool PluginRootContext::onConfigure(size_t size) {
// Parse configuration JSON string.
if (size > 0 && !configure(size)) {
LOG_ERROR("configuration has errors, initialization will not continue.");
return false;
}
return true;
}
bool PluginRootContext::configure(size_t configuration_size) {
auto configuration_data = getBufferBytes(WasmBufferType::PluginConfiguration,
0, configuration_size);
// Parse configuration JSON string.
auto result = ::Wasm::Common::JsonParse(configuration_data->view());
if (!result) {
LOG_ERROR(absl::StrCat("cannot parse plugin configuration JSON string: ",
configuration_data->view()));
return false;
}
if (!parseRuleConfig(result.value())) {
LOG_ERROR(absl::StrCat("cannot parse plugin configuration JSON string: ",
configuration_data->view()));
return false;
}
return true;
}
FilterHeadersStatus PluginRootContext::onHeader(
const ModelRouterConfigRule& rule) {
if (!Wasm::Common::Http::hasRequestBody()) {
return FilterHeadersStatus::Continue;
}
auto path = getRequestHeader(Wasm::Common::Http::Header::Path)->toString();
auto params_pos = path.find('?');
size_t uri_end;
if (params_pos == std::string::npos) {
uri_end = path.size();
} else {
uri_end = params_pos;
}
bool enable = false;
for (const auto& enable_suffix : rule.enable_on_path_suffix_) {
if (absl::EndsWith({path.c_str(), uri_end}, enable_suffix)) {
enable = true;
break;
}
}
if (!enable) {
return FilterHeadersStatus::Continue;
}
auto content_type_value =
getRequestHeader(Wasm::Common::Http::Header::ContentType);
if (!absl::StrContains(content_type_value->view(),
Wasm::Common::Http::ContentTypeValues::Json)) {
return FilterHeadersStatus::Continue;
}
removeRequestHeader(Wasm::Common::Http::Header::ContentLength);
setFilterState(SetDecoderBufferLimitKey, DefaultMaxBodyBytes);
return FilterHeadersStatus::StopIteration;
}
FilterDataStatus PluginRootContext::onBody(const ModelRouterConfigRule& rule,
std::string_view body) {
const auto& model_key = rule.model_key_;
const auto& add_provider_header = rule.add_provider_header_;
const auto& model_to_header = rule.model_to_header_;
auto body_json_opt = ::Wasm::Common::JsonParse(body);
if (!body_json_opt) {
LOG_WARN(absl::StrCat("cannot parse body to JSON string: ", body));
return FilterDataStatus::Continue;
}
auto body_json = body_json_opt.value();
if (body_json.contains(model_key)) {
std::string model_value = body_json[model_key];
if (!model_to_header.empty()) {
replaceRequestHeader(model_to_header, model_value);
}
if (!add_provider_header.empty()) {
auto pos = model_value.find('/');
if (pos != std::string::npos) {
const auto& provider = model_value.substr(0, pos);
const auto& model = model_value.substr(pos + 1);
replaceRequestHeader(add_provider_header, provider);
body_json[model_key] = model;
setBuffer(WasmBufferType::HttpRequestBody, 0,
std::numeric_limits<size_t>::max(), body_json.dump());
LOG_DEBUG(absl::StrCat("model route to provider:", provider,
", model:", model));
} else {
LOG_DEBUG(absl::StrCat("model route to provider not work, model:",
model_value));
}
}
}
return FilterDataStatus::Continue;
}
FilterHeadersStatus PluginContext::onRequestHeaders(uint32_t, bool) {
auto* rootCtx = rootContext();
return rootCtx->onHeaders([rootCtx, this](const auto& config) {
auto ret = rootCtx->onHeader(config);
if (ret == FilterHeadersStatus::StopIteration) {
this->config_ = &config;
}
return ret;
});
}
FilterDataStatus PluginContext::onRequestBody(size_t body_size,
bool end_stream) {
if (config_ == nullptr) {
return FilterDataStatus::Continue;
}
body_total_size_ += body_size;
if (!end_stream) {
return FilterDataStatus::StopIterationAndBuffer;
}
auto body =
getBufferBytes(WasmBufferType::HttpRequestBody, 0, body_total_size_);
auto* rootCtx = rootContext();
return rootCtx->onBody(*config_, body->view());
}
#ifdef NULL_PLUGIN
} // namespace model_router
} // namespace null_plugin
} // namespace proxy_wasm
#endif


@@ -0,0 +1,86 @@
/*
* Copyright (c) 2022 Alibaba Group Holding Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <assert.h>
#include <string>
#include <unordered_set>
#include "common/route_rule_matcher.h"
#define ASSERT(_X) assert(_X)
#ifndef NULL_PLUGIN
#include "proxy_wasm_intrinsics.h"
#else
#include "include/proxy-wasm/null_plugin.h"
namespace proxy_wasm {
namespace null_plugin {
namespace model_router {
#endif
struct ModelRouterConfigRule {
std::string model_key_ = "model";
std::string add_provider_header_;
std::string model_to_header_;
std::vector<std::string> enable_on_path_suffix_ = {"/v1/chat/completions"};
};
// PluginRootContext is the root context for all streams processed by the
// thread. It has the same lifetime as the worker thread and acts as target for
// interactions that outlive individual streams, e.g. timer, async calls.
class PluginRootContext : public RootContext,
public RouteRuleMatcher<ModelRouterConfigRule> {
public:
PluginRootContext(uint32_t id, std::string_view root_id)
: RootContext(id, root_id) {}
~PluginRootContext() {}
bool onConfigure(size_t) override;
FilterHeadersStatus onHeader(const ModelRouterConfigRule&);
FilterDataStatus onBody(const ModelRouterConfigRule&, std::string_view);
bool configure(size_t);
private:
bool parsePluginConfig(const json&, ModelRouterConfigRule&) override;
};
// Per-stream context.
class PluginContext : public Context {
public:
explicit PluginContext(uint32_t id, RootContext* root) : Context(id, root) {}
FilterHeadersStatus onRequestHeaders(uint32_t, bool) override;
FilterDataStatus onRequestBody(size_t, bool) override;
private:
inline PluginRootContext* rootContext() {
return dynamic_cast<PluginRootContext*>(this->root());
}
size_t body_total_size_ = 0;
const ModelRouterConfigRule* config_ = nullptr;
};
#ifdef NULL_PLUGIN
} // namespace model_router
} // namespace null_plugin
} // namespace proxy_wasm
#endif


@@ -0,0 +1,250 @@
// Copyright (c) 2022 Alibaba Group Holding Ltd.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "extensions/model_router/plugin.h"
#include <cstddef>
#include "gmock/gmock.h"
#include "gtest/gtest.h"
#include "include/proxy-wasm/context.h"
#include "include/proxy-wasm/null.h"
namespace proxy_wasm {
namespace null_plugin {
namespace model_router {
NullPluginRegistry* context_registry_;
RegisterNullVmPluginFactory register_model_router_plugin("model_router", []() {
return std::make_unique<NullPlugin>(model_router::context_registry_);
});
class MockContext : public proxy_wasm::ContextBase {
public:
MockContext(WasmBase* wasm) : ContextBase(wasm) {}
MOCK_METHOD(BufferInterface*, getBuffer, (WasmBufferType));
MOCK_METHOD(WasmResult, log, (uint32_t, std::string_view));
MOCK_METHOD(WasmResult, setBuffer,
(WasmBufferType, size_t, size_t, std::string_view));
MOCK_METHOD(WasmResult, getHeaderMapValue,
(WasmHeaderMapType /* type */, std::string_view /* key */,
std::string_view* /*result */));
MOCK_METHOD(WasmResult, replaceHeaderMapValue,
(WasmHeaderMapType /* type */, std::string_view /* key */,
std::string_view /* value */));
MOCK_METHOD(WasmResult, removeHeaderMapValue,
(WasmHeaderMapType /* type */, std::string_view /* key */));
MOCK_METHOD(WasmResult, addHeaderMapValue,
(WasmHeaderMapType, std::string_view, std::string_view));
MOCK_METHOD(WasmResult, getProperty, (std::string_view, std::string*));
MOCK_METHOD(WasmResult, setProperty, (std::string_view, std::string_view));
};
class ModelRouterTest : public ::testing::Test {
protected:
ModelRouterTest() {
// Initialize test VM
test_vm_ = createNullVm();
wasm_base_ = std::make_unique<WasmBase>(
std::move(test_vm_), "test-vm", "", "",
std::unordered_map<std::string, std::string>{},
AllowedCapabilitiesMap{});
wasm_base_->load("model_router");
wasm_base_->initialize();
// Initialize host side context
mock_context_ = std::make_unique<MockContext>(wasm_base_.get());
current_context_ = mock_context_.get();
// Initialize Wasm sandbox context
root_context_ = std::make_unique<PluginRootContext>(0, "");
context_ = std::make_unique<PluginContext>(1, root_context_.get());
ON_CALL(*mock_context_, log(testing::_, testing::_))
.WillByDefault([](uint32_t, std::string_view m) {
std::cerr << m << "\n";
return WasmResult::Ok;
});
ON_CALL(*mock_context_, getBuffer(testing::_))
.WillByDefault([&](WasmBufferType type) {
if (type == WasmBufferType::HttpRequestBody) {
return &body_;
}
return &config_;
});
ON_CALL(*mock_context_, getHeaderMapValue(WasmHeaderMapType::RequestHeaders,
testing::_, testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view header,
std::string_view* result) {
if (header == "content-type") {
*result = "application/json";
} else if (header == "content-length") {
*result = "1024";
} else if (header == ":path") {
*result = path_;
}
return WasmResult::Ok;
});
ON_CALL(*mock_context_,
replaceHeaderMapValue(WasmHeaderMapType::RequestHeaders, testing::_,
testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view key,
std::string_view value) { return WasmResult::Ok; });
ON_CALL(*mock_context_,
removeHeaderMapValue(WasmHeaderMapType::RequestHeaders, testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view key) {
return WasmResult::Ok;
});
ON_CALL(*mock_context_, addHeaderMapValue(WasmHeaderMapType::RequestHeaders,
testing::_, testing::_))
.WillByDefault([&](WasmHeaderMapType, std::string_view header,
std::string_view value) { return WasmResult::Ok; });
ON_CALL(*mock_context_, getProperty(testing::_, testing::_))
.WillByDefault([&](std::string_view path, std::string* result) {
*result = route_name_;
return WasmResult::Ok;
});
ON_CALL(*mock_context_, setProperty(testing::_, testing::_))
.WillByDefault(
[&](std::string_view, std::string_view) { return WasmResult::Ok; });
}
~ModelRouterTest() override {}
std::unique_ptr<WasmBase> wasm_base_;
std::unique_ptr<WasmVm> test_vm_;
std::unique_ptr<MockContext> mock_context_;
std::unique_ptr<PluginRootContext> root_context_;
std::unique_ptr<PluginContext> context_;
std::string route_name_;
std::string path_;
BufferBase body_;
BufferBase config_;
};
TEST_F(ModelRouterTest, RewriteModelAndHeader) {
std::string configuration = R"(
{
"addProviderHeader": "x-higress-llm-provider"
})";
config_.set(configuration);
EXPECT_TRUE(root_context_->configure(configuration.size()));
path_ = "/v1/chat/completions";
std::string request_json = R"({"model": "qwen/qwen-long"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-long"})");
return WasmResult::Ok;
});
EXPECT_CALL(*mock_context_,
replaceHeaderMapValue(testing::_,
std::string_view("x-higress-llm-provider"),
std::string_view("qwen")));
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(28, true), FilterDataStatus::Continue);
}
TEST_F(ModelRouterTest, ModelToHeader) {
std::string configuration = R"(
{
"modelToHeader": "x-higress-llm-model"
})";
config_.set(configuration);
EXPECT_TRUE(root_context_->configure(configuration.size()));
path_ = "/v1/chat/completions";
std::string request_json = R"({"model": "qwen-long"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.Times(0);
EXPECT_CALL(
*mock_context_,
replaceHeaderMapValue(testing::_, std::string_view("x-higress-llm-model"),
std::string_view("qwen-long")));
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(28, true), FilterDataStatus::Continue);
}
TEST_F(ModelRouterTest, IgnorePath) {
std::string configuration = R"(
{
"addProviderHeader": "x-higress-llm-provider"
})";
config_.set(configuration);
EXPECT_TRUE(root_context_->configure(configuration.size()));
path_ = "/v1/chat/xxxx";
std::string request_json = R"({"model": "qwen/qwen-long"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.Times(0);
EXPECT_CALL(*mock_context_,
replaceHeaderMapValue(testing::_,
std::string_view("x-higress-llm-provider"),
std::string_view("qwen")))
.Times(0);
body_.set(request_json);
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::Continue);
EXPECT_EQ(context_->onRequestBody(28, true), FilterDataStatus::Continue);
}
TEST_F(ModelRouterTest, RouteLevelRewriteModelAndHeader) {
std::string configuration = R"(
{
"_rules_": [
{
"_match_route_": ["route-a"],
"addProviderHeader": "x-higress-llm-provider"
}
]})";
config_.set(configuration);
EXPECT_TRUE(root_context_->configure(configuration.size()));
path_ = "/api/v1/chat/completions";
std::string request_json = R"({"model": "qwen/qwen-long"})";
EXPECT_CALL(*mock_context_,
setBuffer(testing::_, testing::_, testing::_, testing::_))
.WillOnce([&](WasmBufferType, size_t, size_t, std::string_view body) {
EXPECT_EQ(body, R"({"model":"qwen-long"})");
return WasmResult::Ok;
});
EXPECT_CALL(*mock_context_,
replaceHeaderMapValue(testing::_,
std::string_view("x-higress-llm-provider"),
std::string_view("qwen")));
body_.set(request_json);
route_name_ = "route-a";
EXPECT_EQ(context_->onRequestHeaders(0, false),
FilterHeadersStatus::StopIteration);
EXPECT_EQ(context_->onRequestBody(28, true), FilterDataStatus::Continue);
}
} // namespace model_router
} // namespace null_plugin
} // namespace proxy_wasm


@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "request_block.wasm",
srcs = [
"plugin.cc",
@@ -27,7 +27,6 @@ wasm_cc_binary(
"//common:json_util",
"//common:http_util",
"//common:rule_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)


@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
load("@proxy_wasm_cpp_sdk//bazel/wasm:wasm.bzl", "wasm_cc_binary")
load("@proxy_wasm_cpp_sdk//bazel:defs.bzl", "proxy_wasm_cc_binary")
load("//bazel:wasm.bzl", "declare_wasm_image_targets")
wasm_cc_binary(
proxy_wasm_cc_binary(
name = "sni_misdirect.wasm",
srcs = [
"plugin.cc",
@@ -25,7 +25,6 @@ wasm_cc_binary(
"@com_google_absl//absl/strings:str_format",
"@com_google_absl//absl/strings",
"//common:http_util",
"@proxy_wasm_cpp_sdk//:proxy_wasm_intrinsics",
],
)


@@ -55,6 +55,9 @@ output wasm file: extensions/request-block/plugin.wasm
tinygo build -o main.wasm -scheduler=none -target=wasi -gc=custom -tags='custommalloc nottinygc_finalizer' ./extensions/request-block/main.go
```
详细的编译说明,包括要使用更复杂的 Header 状态管理机制,请参考[ Go 开发插件的最佳实践](https://higress.io/docs/latest/user/wasm-go/#3-%E7%BC%96%E8%AF%91%E7%94%9F%E6%88%90-wasm-%E6%96%87%E4%BB%B6)。
### step2. 构建并推送插件的 docker 镜像
使用这份简单的 Dockerfile
@@ -201,4 +204,4 @@ cSuite.Setup(t)
```bash
PLUGIN_NAME=request-block make higress-wasmplugin-test
```
```


@@ -5,7 +5,7 @@ description: AI Agent插件配置参考
---
## 功能说明
一个可定制化的 API AI Agent支持配置 http method 类型为 GET 与 POST 的 API支持多轮对话支持流式与非流式模式。
一个可定制化的 API AI Agent支持配置 http method 类型为 GET 与 POST 的 API支持多轮对话支持流式与非流式模式,支持将结果格式化为自定义的 json
agent流程图如下
![ai-agent](https://img.alicdn.com/imgextra/i1/O1CN01PGSDW31WQfEPm173u_!!6000000002783-0-tps-2733-1473.jpg)
@@ -21,6 +21,7 @@ agent流程图如下
| `llm` | object | 必填 | - | 配置 AI 服务提供商的信息 |
| `apis` | object | 必填 | - | 配置外部 API 服务提供商的信息 |
| `promptTemplate` | object | 非必填 | - | 配置 Agent ReAct 模板的信息 |
| `jsonResp` | object | 非必填 | - | 配置 json 格式化的相关信息 |
`llm`的配置字段说明如下:
@@ -78,7 +79,14 @@ agent流程图如下
| `observation` | string | 非必填 | - | Agent ReAct 模板的 observation 部分 |
| `thought2` | string | 非必填 | - | Agent ReAct 模板的 thought2 部分 |
## 用法示例
`jsonResp`的配置字段说明如下:
| 名称 | 数据类型 | 填写要求 | 默认值 | 描述 |
|--------------------|-----------|---------|--------|-----------------------------------|
| `enable` | bool | 非必填 | false | 是否开启 json 格式化。 |
| `jsonSchema` | string | 非必填 | - | 自定义 json schema |
## 用法示例-不开启 json 格式化
**配置信息**
@@ -293,7 +301,7 @@ deepl提供了一个工具用于翻译给定的句子支持多语言。。
**请求示例**
```shell
curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"我想在济南市鑫盛大厦附近喝咖啡,给我推荐几个"}],"presence_penalty":0,"temperature":0,"top_p":0}'
@@ -308,7 +316,7 @@ curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
**请求示例**
```shell
curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"济南市现在的天气情况如何?"}],"presence_penalty":0,"temperature":0,"top_p":0}'
@@ -323,7 +331,7 @@ curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
**请求示例**
```shell
curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role": "user","content": "济南的天气如何?"},{ "role": "assistant","content": "目前济南市的天气为多云气温为24℃数据更新时间为2024年9月12日21时50分14秒。"},{"role": "user","content": "北京呢?"}],"presence_penalty":0,"temperature":0,"top_p":0}'
@@ -338,7 +346,7 @@ curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
**请求示例**
```shell
curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"济南市现在的天气情况如何?用华氏度表示,用日语回答"}],"presence_penalty":0,"temperature":0,"top_p":0}'
@@ -353,7 +361,7 @@ curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
**请求示例**
```shell
curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"帮我用德语翻译以下句子:九头蛇万岁!"}],"presence_penalty":0,"temperature":0,"top_p":0}'
@@ -364,3 +372,71 @@ curl 'http://<这里换成网关公网IP>/api/openai/v1/chat/completions' \
```json
{"id":"65dcf12c-61ff-9e68-bffa-44fc9e6070d5","choices":[{"index":0,"message":{"role":"assistant","content":" “九头蛇万岁!”的德语翻译为“Hoch lebe Hydra!”。"},"finish_reason":"stop"}],"created":1724043865,"model":"qwen-max-0403","object":"chat.completion","usage":{"prompt_tokens":908,"completion_tokens":52,"total_tokens":960}}
```
## 用法示例-开启 json 格式化
**配置信息**
在上述配置的基础上增加 jsonResp 配置
```yaml
jsonResp:
enable: true
```
**请求示例**
```shell
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"北京市现在的天气情况如何?"}],"presence_penalty":0,"temperature":0,"top_p":0}'
```
**响应示例**
```json
{"id":"ebd6ea91-8e38-9e14-9a5b-90178d2edea4","choices":[{"index":0,"message":{"role":"assistant","content": "{\"city\": \"北京市\", \"weather_condition\": \"多云\", \"temperature\": \"19℃\", \"data_update_time\": \"2024年10月9日16时37分53秒\"}"},"finish_reason":"stop"}],"created":1723187991,"model":"qwen-max-0403","object":"chat.completion","usage":{"prompt_tokens":890,"completion_tokens":56,"total_tokens":946}}
```
如果不自定义 json schema大模型会自动生成一个 json 格式
**配置信息**
增加自定义 json schema 配置
```yaml
jsonResp:
enable: true
  jsonSchema:
title: WeatherSchema
type: object
properties:
location:
type: string
description: 城市名称.
weather:
type: string
description: 天气情况.
temperature:
type: string
description: 温度.
update_time:
type: string
description: 数据更新时间.
required:
- location
- weather
- temperature
additionalProperties: false
```
**请求示例**
```shell
curl 'http://<这里换成网关地址>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"北京市现在的天气情况如何?"}],"presence_penalty":0,"temperature":0,"top_p":0}'
```
**响应示例**
```json
{"id":"ebd6ea91-8e38-9e14-9a5b-90178d2edea4","choices":[{"index":0,"message":{"role":"assistant","content": "{\"location\": \"北京市\", \"weather\": \"多云\", \"temperature\": \"19℃\", \"update_time\": \"2024年10月9日16时37分53秒\"}"},"finish_reason":"stop"}],"created":1723187991,"model":"qwen-max-0403","object":"chat.completion","usage":{"prompt_tokens":890,"completion_tokens":56,"total_tokens":946}}
```


@@ -4,7 +4,7 @@ keywords: [ AI Gateway, AI Agent ]
description: AI Agent plugin configuration reference
---
## Functional Description
A customizable API AI Agent that supports configuring HTTP method types as GET and POST APIs. Supports multiple dialogue rounds, streaming and non-streaming modes.
A customizable API AI Agent that supports configuring HTTP method types as GET and POST APIs. Supports multiple dialogue rounds, streaming and non-streaming modes, and formatting results as custom JSON.
The agent flow chart is as follows:
![ai-agent](https://github.com/user-attachments/assets/b0761a0c-1afa-496c-a98e-bb9f38b340f8)
@@ -20,6 +20,7 @@ Plugin execution priority: `200`
| `llm` | object | Required | - | Configuration information for AI service provider |
| `apis` | object | Required | - | Configuration information for external API service provider |
| `promptTemplate` | object | Optional | - | Configuration information for Agent ReAct template |
| `jsonResp` | object | Optional | - | Configuring json formatting information |
The configuration fields for `llm` are as follows:
| Name | Data Type | Requirement | Default Value | Description |
@@ -71,7 +72,13 @@ The configuration fields for `chTemplate` and `enTemplate` are as follows:
| `observation` | string | Optional | - | The observation part of the Agent ReAct template |
| `thought2` | string | Optional | - | The thought2 part of the Agent ReAct template |
## Usage Example
The configuration fields for `jsonResp` are as follows:
| Name | Data Type | Requirement | Default Value | Description |
|--------------------|-----------|-------------|---------------|------------------------------------|
| `enable`          | bool      | Optional    | false         | Whether to enable json formatting. |
| `jsonSchema` | string | Optional | - | Custom json schema |
## Usage Example - JSON Formatting Disabled
**Configuration Information**
```yaml
llm:
@@ -335,3 +342,68 @@ curl 'http://<replace with gateway public IP>/api/openai/v1/chat/completions' \
{"id":"65dcf12c-61ff-9e68-bffa-44fc9e6070d5","choices":[{"index":0,"message":{"role":"assistant","content":" The German translation of \"Hail Hydra!\" is \"Hoch lebe Hydra!\"."},"finish_reason":"stop"}],"created":1724043865,"model":"qwen-max-0403","object":"chat.completion","usage":{"prompt_tokens":908,"completion_tokens":52,"total_tokens":960}}
```
## Usage Example - JSON Formatting Enabled
**Configuration Information**
Add jsonResp configuration to the above configuration
```yaml
jsonResp:
enable: true
```
**Request Example**
```shell
curl 'http://<replace with gateway public IP>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"What is the current weather in Beijing?"}],"presence_penalty":0,"temperature":0,"top_p":0}'
```
**Response Example**
```json
{"id":"ebd6ea91-8e38-9e14-9a5b-90178d2edea4","choices":[{"index":0,"message":{"role":"assistant","content": "{\"city\": \"BeiJing\", \"weather_condition\": \"cloudy\", \"temperature\": \"19℃\", \"data_update_time\": \"Oct 9, 2024, at 16:37\"}"},"finish_reason":"stop"}],"created":1723187991,"model":"qwen-max-0403","object":"chat.completion","usage":{"prompt_tokens":890,"completion_tokens":56,"total_tokens":946}}
```
If you don't provide a custom JSON schema, the LLM will generate a JSON structure on its own.
**Configuration Information**
Add custom json schema configuration
```yaml
jsonResp:
enable: true
jsonSchema:
title: WeatherSchema
type: object
properties:
location:
type: string
description: city name.
weather:
type: string
description: weather conditions.
temperature:
type: string
description: temperature.
update_time:
type: string
description: the update time of data.
required:
- location
- weather
- temperature
additionalProperties: false
```
**Request Example**
```shell
curl 'http://<replace with gateway public IP>/api/openai/v1/chat/completions' \
-H 'Accept: application/json, text/event-stream' \
-H 'Content-Type: application/json' \
--data-raw '{"model":"qwen","frequency_penalty":0,"max_tokens":800,"stream":false,"messages":[{"role":"user","content":"What is the current weather in Beijing?"}],"presence_penalty":0,"temperature":0,"top_p":0}'
```
**Response Example**
```json
{"id":"ebd6ea91-8e38-9e14-9a5b-90178d2edea4","choices":[{"index":0,"message":{"role":"assistant","content": "{\"location\": \"Beijing\", \"weather\": \"cloudy\", \"temperature\": \"19℃\", \"update_time\": \"Oct 9, 2024, at 16:37\"}"},"finish_reason":"stop"}],"created":1723187991,"model":"qwen-max-0403","object":"chat.completion","usage":{"prompt_tokens":890,"completion_tokens":56,"total_tokens":946}}
```
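The JSON extraction used to post-process model output, as in the `extractJson` helper shown in the source below, can be sketched in Go (an illustrative standalone version; the validation step against a schema is omitted):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// extractJSON pulls the first '{' to last '}' span out of a model reply and
// checks that it parses as a JSON object.
func extractJSON(body string) (string, error) {
	start := strings.Index(body, "{")
	end := strings.LastIndex(body, "}") + 1
	if start == -1 || start >= end {
		return "", errors.New("cannot find json in the response body")
	}
	candidate := body[start:end]
	var v map[string]interface{}
	if err := json.Unmarshal([]byte(candidate), &v); err != nil {
		return "", err
	}
	return candidate, nil
}

func main() {
	out, err := extractJSON(`Here you go: {"location": "Beijing", "weather": "cloudy"} -- enjoy!`)
	fmt.Println(out, err)
}
```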


@@ -211,6 +211,15 @@ type LLMInfo struct {
MaxTokens int64 `yaml:"maxToken" json:"maxTokens"`
}
type JsonResp struct {
// @Title zh-CN Enable
// @Description zh-CN 是否要启用json格式化输出
Enable bool `yaml:"enable" json:"enable"`
// @Title zh-CN Json Schema
// @Description zh-CN 用以验证响应json的Json Schema, 为空则只验证返回的响应是否为合法json
JsonSchema map[string]interface{} `required:"false" json:"jsonSchema" yaml:"jsonSchema"`
}
type PluginConfig struct {
// @Title zh-CN 返回 HTTP 响应的模版
// @Description zh-CN 用 %s 标记需要被 cache value 替换的部分
@@ -225,6 +234,7 @@ type PluginConfig struct {
LLMClient wrapper.HttpClient `yaml:"-" json:"-"`
APIsParam []APIsParam `yaml:"-" json:"-"`
PromptTemplate PromptTemplate `yaml:"promptTemplate" json:"promptTemplate"`
JsonResp JsonResp `yaml:"jsonResp" json:"jsonResp"`
}
func initResponsePromptTpl(gjson gjson.Result, c *PluginConfig) {
@@ -402,3 +412,15 @@ func initLLMClient(gjson gjson.Result, c *PluginConfig) {
Host: c.LLMInfo.Domain,
})
}
func initJsonResp(gjson gjson.Result, c *PluginConfig) {
c.JsonResp.Enable = gjson.Get("jsonResp.enable").Bool()
if c.JsonResp.Enable {
if jsonSchemaValue := gjson.Get("jsonResp.jsonSchema"); jsonSchemaValue.Exists() {
if schemaValue, ok := jsonSchemaValue.Value().(map[string]interface{}); ok {
c.JsonResp.JsonSchema = schemaValue
}
}
}
}
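Putting the parsed fields together, a plugin configuration enabling JSON-formatted output could look like the following (the schema body is illustrative); per the annotations above, when `jsonSchema` is omitted the plugin only checks that the response is valid JSON:

```yaml
jsonResp:
  enable: true
  jsonSchema:
    type: object
    properties:
      weather:
        type: string
    required:
    - weather
    additionalProperties: false
```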

View File

@@ -2,8 +2,10 @@ package main
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"regexp"
"strings"
@@ -47,6 +49,8 @@ func parseConfig(gjson gjson.Result, c *PluginConfig, log wrapper.Log) error {
initLLMClient(gjson, c)
initJsonResp(gjson, c)
return nil
}
@@ -76,10 +80,10 @@ func firstReq(ctx wrapper.HttpContext, config PluginConfig, prompt string, rawRe
log.Debugf("[onHttpRequestBody] newRequestBody: %s", string(newbody))
err := proxywasm.ReplaceHttpRequestBody(newbody)
if err != nil {
log.Debug("replace failed")
log.Debugf("failed to replace request body: %s", err.Error())
proxywasm.SendHttpResponse(200, [][2]string{{"content-type", "application/json; charset=utf-8"}}, []byte(fmt.Sprintf(config.ReturnResponseTemplate, "replace failed: "+err.Error())), -1)
}
log.Debug("[onHttpRequestBody] request替换成功")
log.Debug("[onHttpRequestBody] replace request success")
return types.ActionContinue
}
}
@@ -175,11 +179,103 @@ func onHttpResponseHeaders(ctx wrapper.HttpContext, config PluginConfig, log wra
return types.ActionContinue
}
func toolsCallResult(ctx wrapper.HttpContext, config PluginConfig, content string, rawResponse Response, log wrapper.Log, statusCode int, responseBody []byte) {
func extractJson(bodyStr string) (string, error) {
// simply extract json from response body string
startIndex := strings.Index(bodyStr, "{")
endIndex := strings.LastIndex(bodyStr, "}") + 1
// if not found
if startIndex == -1 || startIndex >= endIndex {
return "", errors.New("cannot find json in the response body")
}
jsonStr := bodyStr[startIndex:endIndex]
// attempt to parse the JSON
var result map[string]interface{}
err := json.Unmarshal([]byte(jsonStr), &result)
if err != nil {
return "", err
}
return jsonStr, nil
}
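The extraction logic above can be exercised outside the wasm runtime; this standalone sketch mirrors `extractJson` (note the naive first-`{`/last-`}` slicing fails if prose after the JSON contains a stray `}`):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// extractJSON takes the substring between the first '{' and the last '}' and
// verifies that it parses as JSON, mirroring extractJson above.
func extractJSON(bodyStr string) (string, error) {
	startIndex := strings.Index(bodyStr, "{")
	endIndex := strings.LastIndex(bodyStr, "}") + 1
	if startIndex == -1 || startIndex >= endIndex {
		return "", errors.New("cannot find json in the response body")
	}
	jsonStr := bodyStr[startIndex:endIndex]
	var result map[string]interface{}
	if err := json.Unmarshal([]byte(jsonStr), &result); err != nil {
		return "", err
	}
	return jsonStr, nil
}

func main() {
	out, err := extractJSON(`Final Answer: {"weather": "cloudy"} done`)
	fmt.Println(out, err)
}
```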
func jsonFormat(llmClient wrapper.HttpClient, llmInfo LLMInfo, jsonSchema map[string]interface{}, assistantMessage Message, actionInput string, headers [][2]string, streamMode bool, rawResponse Response, log wrapper.Log) string {
prompt := fmt.Sprintf(prompttpl.Json_Resp_Template, jsonSchema, actionInput)
messages := []dashscope.Message{{Role: "user", Content: prompt}}
completion := dashscope.Completion{
Model: llmInfo.Model,
Messages: messages,
}
completionSerialized, _ := json.Marshal(completion)
var content string
err := llmClient.Post(
llmInfo.Path,
headers,
completionSerialized,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
// parse the completion returned by the LLM
var responseCompletion dashscope.CompletionResponse
_ = json.Unmarshal(responseBody, &responseCompletion)
log.Infof("[jsonFormat] content: %s", responseCompletion.Choices[0].Message.Content)
content = responseCompletion.Choices[0].Message.Content
jsonStr, err := extractJson(content)
if err != nil {
log.Debugf("[onHttpRequestBody] extractJson err: %s", err.Error())
jsonStr = content
}
if streamMode {
stream(jsonStr, rawResponse, log)
} else {
noneStream(assistantMessage, jsonStr, rawResponse, log)
}
}, uint32(llmInfo.MaxExecutionTime))
if err != nil {
log.Debugf("[onHttpRequestBody] completion err: %s", err.Error())
proxywasm.ResumeHttpRequest()
}
return content
}
func noneStream(assistantMessage Message, actionInput string, rawResponse Response, log wrapper.Log) {
assistantMessage.Role = "assistant"
assistantMessage.Content = actionInput
rawResponse.Choices[0].Message = assistantMessage
newbody, err := json.Marshal(rawResponse)
if err != nil {
proxywasm.ResumeHttpResponse()
return
}
proxywasm.ReplaceHttpResponseBody(newbody)
log.Debug("[onHttpResponseBody] replace response success")
proxywasm.ResumeHttpResponse()
}
func stream(actionInput string, rawResponse Response, log wrapper.Log) {
headers := [][2]string{{"content-type", "text/event-stream; charset=utf-8"}}
proxywasm.ReplaceHttpResponseHeaders(headers)
// Remove quotes from actionInput
actionInput = strings.Trim(actionInput, "\"")
returnStreamResponseTemplate := `data:{"id":"%s","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"%s","object":"chat.completion","usage":{"prompt_tokens":%d,"completion_tokens":%d,"total_tokens":%d}}` + "\n\ndata:[DONE]\n\n"
newbody := fmt.Sprintf(returnStreamResponseTemplate, rawResponse.ID, actionInput, rawResponse.Model, rawResponse.Usage.PromptTokens, rawResponse.Usage.CompletionTokens, rawResponse.Usage.TotalTokens)
log.Infof("[onHttpResponseBody] newResponseBody: %s", newbody)
proxywasm.ReplaceHttpResponseBody([]byte(newbody))
log.Debug("[onHttpResponseBody] replace response success")
proxywasm.ResumeHttpResponse()
}
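The single SSE frame that `stream` above emits can be sketched as a plain `fmt.Sprintf` over the same template (argument values here are illustrative stand-ins):

```go
package main

import "fmt"

// buildSSEFrame assembles the single SSE chat.completion frame that stream()
// above produces from its template; argument values are illustrative.
func buildSSEFrame(id, content, model string, promptTokens, completionTokens, totalTokens int64) string {
	tpl := `data:{"id":"%s","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"%s","object":"chat.completion","usage":{"prompt_tokens":%d,"completion_tokens":%d,"total_tokens":%d}}` + "\n\ndata:[DONE]\n\n"
	return fmt.Sprintf(tpl, id, content, model, promptTokens, completionTokens, totalTokens)
}

func main() {
	fmt.Print(buildSSEFrame("req-1", "cloudy", "qwen-max", 890, 56, 946))
}
```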
func toolsCallResult(ctx wrapper.HttpContext, llmClient wrapper.HttpClient, llmInfo LLMInfo, jsonResp JsonResp, aPIsParam []APIsParam, aPIClient []wrapper.HttpClient, content string, rawResponse Response, log wrapper.Log, statusCode int, responseBody []byte) {
if statusCode != http.StatusOK {
log.Debugf("statusCode: %d", statusCode)
}
log.Info("========function result========")
log.Info("========function result========")
log.Infof(string(responseBody))
observation := "Observation: " + string(responseBody)
@@ -187,15 +283,15 @@ func toolsCallResult(ctx wrapper.HttpContext, config PluginConfig, content strin
dashscope.MessageStore.AddForUser(observation)
completion := dashscope.Completion{
Model: config.LLMInfo.Model,
Model: llmInfo.Model,
Messages: dashscope.MessageStore,
MaxTokens: config.LLMInfo.MaxTokens,
MaxTokens: llmInfo.MaxTokens,
}
headers := [][2]string{{"Content-Type", "application/json"}, {"Authorization", "Bearer " + config.LLMInfo.APIKey}}
headers := [][2]string{{"Content-Type", "application/json"}, {"Authorization", "Bearer " + llmInfo.APIKey}}
completionSerialized, _ := json.Marshal(completion)
err := config.LLMClient.Post(
config.LLMInfo.Path,
err := llmClient.Post(
llmInfo.Path,
headers,
completionSerialized,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
@@ -205,42 +301,31 @@ func toolsCallResult(ctx wrapper.HttpContext, config PluginConfig, content strin
log.Infof("[toolsCall] content: %s", responseCompletion.Choices[0].Message.Content)
if responseCompletion.Choices[0].Message.Content != "" {
retType, actionInput := toolsCall(ctx, config, responseCompletion.Choices[0].Message.Content, rawResponse, log)
retType, actionInput := toolsCall(ctx, llmClient, llmInfo, jsonResp, aPIsParam, aPIClient, responseCompletion.Choices[0].Message.Content, rawResponse, log)
if retType == types.ActionContinue {
// got the Final Answer
var assistantMessage Message
var streamMode bool
if ctx.GetContext(StreamContextKey) == nil {
assistantMessage.Role = "assistant"
assistantMessage.Content = actionInput
rawResponse.Choices[0].Message = assistantMessage
newbody, err := json.Marshal(rawResponse)
if err != nil {
proxywasm.ResumeHttpResponse()
return
streamMode = false
if jsonResp.Enable {
jsonFormat(llmClient, llmInfo, jsonResp.JsonSchema, assistantMessage, actionInput, headers, streamMode, rawResponse, log)
} else {
proxywasm.ReplaceHttpResponseBody(newbody)
log.Debug("[onHttpResponseBody] replace response success")
proxywasm.ResumeHttpResponse()
noneStream(assistantMessage, actionInput, rawResponse, log)
}
} else {
headers := [][2]string{{"content-type", "text/event-stream; charset=utf-8"}}
proxywasm.ReplaceHttpResponseHeaders(headers)
// Remove quotes from actionInput
actionInput = strings.Trim(actionInput, "\"")
returnStreamResponseTemplate := `data:{"id":"%s","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"%s","object":"chat.completion","usage":{"prompt_tokens":%d,"completion_tokens":%d,"total_tokens":%d}}` + "\n\ndata:[DONE]\n\n"
newbody := fmt.Sprintf(returnStreamResponseTemplate, rawResponse.ID, actionInput, rawResponse.Model, rawResponse.Usage.PromptTokens, rawResponse.Usage.CompletionTokens, rawResponse.Usage.TotalTokens)
log.Infof("[onHttpResponseBody] newResponseBody: ", newbody)
proxywasm.ReplaceHttpResponseBody([]byte(newbody))
log.Debug("[onHttpResponseBody] replace response success")
proxywasm.ResumeHttpResponse()
streamMode = true
if jsonResp.Enable {
jsonFormat(llmClient, llmInfo, jsonResp.JsonSchema, assistantMessage, actionInput, headers, streamMode, rawResponse, log)
} else {
stream(actionInput, rawResponse, log)
}
}
}
} else {
proxywasm.ResumeHttpRequest()
}
}, uint32(config.LLMInfo.MaxExecutionTime))
}, uint32(llmInfo.MaxExecutionTime))
if err != nil {
log.Debugf("[onHttpRequestBody] completion err: %s", err.Error())
proxywasm.ResumeHttpRequest()
@@ -294,7 +379,7 @@ func outputParser(response string, log wrapper.Log) (string, string) {
return "", ""
}
func toolsCall(ctx wrapper.HttpContext, config PluginConfig, content string, rawResponse Response, log wrapper.Log) (types.Action, string) {
func toolsCall(ctx wrapper.HttpContext, llmClient wrapper.HttpClient, llmInfo LLMInfo, jsonResp JsonResp, aPIsParam []APIsParam, aPIClient []wrapper.HttpClient, content string, rawResponse Response, log wrapper.Log) (types.Action, string) {
dashscope.MessageStore.AddForAssistant(content)
action, actionInput := outputParser(content, log)
@@ -305,9 +390,9 @@ func toolsCall(ctx wrapper.HttpContext, config PluginConfig, content string, raw
}
count := ctx.GetContext(ToolCallsCount).(int)
count++
log.Debugf("toolCallsCount:%d, config.LLMInfo.MaxIterations=%d", count, config.LLMInfo.MaxIterations)
log.Debugf("toolCallsCount:%d, llmInfo.MaxIterations=%d", count, llmInfo.MaxIterations)
// the number of recursive tool calls reached the configured limit; force stop
if int64(count) > config.LLMInfo.MaxIterations {
if int64(count) > llmInfo.MaxIterations {
ctx.SetContext(ToolCallsCount, 0)
return types.ActionContinue, ""
} else {
@@ -316,15 +401,14 @@ func toolsCall(ctx wrapper.HttpContext, config PluginConfig, content string, raw
// no final answer yet
var url string
var urlStr string
var headers [][2]string
var apiClient wrapper.HttpClient
var method string
var reqBody []byte
var key string
var maxExecutionTime int64
for i, apisParam := range config.APIsParam {
for i, apisParam := range aPIsParam {
maxExecutionTime = apisParam.MaxExecutionTime
for _, tools_param := range apisParam.ToolsParam {
if action == tools_param.ToolName {
@@ -340,28 +424,37 @@ func toolsCall(ctx wrapper.HttpContext, config PluginConfig, content string, raw
method = tools_param.Method
// assemble headers and key
headers = [][2]string{{"Content-Type", "application/json"}}
if apisParam.APIKey.Name != "" {
if apisParam.APIKey.In == "query" {
key = "?" + apisParam.APIKey.Name + "=" + apisParam.APIKey.Value
} else if apisParam.APIKey.In == "header" {
headers = append(headers, [2]string{"Authorization", apisParam.APIKey.Name + " " + apisParam.APIKey.Value})
// assemble the URL and request body
urlStr = apisParam.URL + tools_param.Path
// parse the URL template to find path parameters
urlParts := strings.Split(urlStr, "/")
for i, part := range urlParts {
if strings.Contains(part, "{") && strings.Contains(part, "}") {
for _, param := range tools_param.ParamName {
paramNameInPath := part[1 : len(part)-1]
if paramNameInPath == param {
if value, ok := data[param]; ok {
// remove params that have already been used
delete(data, param)
// replace the placeholder in the template
urlParts[i] = url.QueryEscape(value.(string))
}
}
}
}
}
// assemble the URL and request body
url = apisParam.URL + tools_param.Path + key
// reassemble the URL
urlStr = strings.Join(urlParts, "/")
queryParams := make([][2]string, 0)
if method == "GET" {
queryParams := make([]string, 0, len(tools_param.ParamName))
for _, param := range tools_param.ParamName {
if value, ok := data[param]; ok {
queryParams = append(queryParams, fmt.Sprintf("%s=%v", param, value))
queryParams = append(queryParams, [2]string{param, fmt.Sprintf("%v", value)})
}
}
if len(queryParams) > 0 {
url += "&" + strings.Join(queryParams, "&")
}
} else if method == "POST" {
var err error
reqBody, err = json.Marshal(data)
@@ -371,9 +464,30 @@ func toolsCall(ctx wrapper.HttpContext, config PluginConfig, content string, raw
}
}
log.Infof("url: %s", url)
// assemble headers and key
headers = [][2]string{{"Content-Type", "application/json"}}
if apisParam.APIKey.Name != "" {
if apisParam.APIKey.In == "query" {
queryParams = append(queryParams, [2]string{apisParam.APIKey.Name, apisParam.APIKey.Value})
} else if apisParam.APIKey.In == "header" {
headers = append(headers, [2]string{"Authorization", apisParam.APIKey.Name + " " + apisParam.APIKey.Value})
}
}
apiClient = config.APIClient[i]
if len(queryParams) > 0 {
// append the query string to the url
urlStr += "?"
for i, param := range queryParams {
if i != 0 {
urlStr += "&"
}
urlStr += url.QueryEscape(param[0]) + "=" + url.QueryEscape(param[1])
}
}
log.Debugf("url: %s", urlStr)
apiClient = aPIClient[i]
break
}
}
@@ -382,11 +496,11 @@ func toolsCall(ctx wrapper.HttpContext, config PluginConfig, content string, raw
if apiClient != nil {
err := apiClient.Call(
method,
url,
urlStr,
headers,
reqBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
toolsCallResult(ctx, config, content, rawResponse, log, statusCode, responseBody)
toolsCallResult(ctx, llmClient, llmInfo, jsonResp, aPIsParam, aPIClient, content, rawResponse, log, statusCode, responseBody)
}, uint32(maxExecutionTime))
if err != nil {
log.Debugf("tool calls error: %s", err.Error())
@@ -415,7 +529,7 @@ func onHttpResponseBody(ctx wrapper.HttpContext, config PluginConfig, body []byt
// if the content returned by the LLM is not empty
if rawResponse.Choices[0].Message.Content != "" {
// enter the agent's loop of thinking and tool calling
retType, _ := toolsCall(ctx, config, rawResponse.Choices[0].Message.Content, rawResponse, log)
retType, _ := toolsCall(ctx, config.LLMClient, config.LLMInfo, config.JsonResp, config.APIsParam, config.APIClient, rawResponse.Choices[0].Message.Content, rawResponse, log)
return retType
} else {
return types.ActionContinue

View File

@@ -167,3 +167,7 @@ Action:` + "```" + `
%s
Question: %s
`
const Json_Resp_Template = `
Given the Json Schema: %s, please help me convert the following content to a pure json: %s
Do not respond other content except the pure json!!!!
`
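The template above is filled with `fmt.Sprintf`, so a schema passed as a Go map is rendered in Go's `map[...]` notation rather than as JSON; a small sketch of that rendering (marshaling the schema to JSON first would arguably yield a cleaner prompt):

```go
package main

import "fmt"

// The Json_Resp_Template above, reproduced verbatim; the schema is passed as a
// Go map, so %s renders it in Go's map[...] notation rather than as JSON.
const jsonRespTemplate = `
Given the Json Schema: %s, please help me convert the following content to a pure json: %s
Do not respond other content except the pure json!!!!
`

func buildJSONPrompt(schema map[string]interface{}, content string) string {
	return fmt.Sprintf(jsonRespTemplate, schema, content)
}

func main() {
	schema := map[string]interface{}{"type": "object"}
	fmt.Print(buildJSONPrompt(schema, "The weather in Beijing is cloudy."))
}
```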

View File

@@ -1,5 +1,5 @@
# File generated by hgctl. Modify as required.
docker-compose-test/
*
!/.gitignore

View File

@@ -1,46 +1,176 @@
## Introduction
---
title: AI Cache
keywords: [higress,ai cache]
description: AI cache plugin configuration reference
---
**Note**
> Requires a data-plane proxy wasm version of 0.2.100 or higher.
> The version tag must be included when compiling, e.g. `tinygo build -o main.wasm -scheduler=none -target=wasi -gc=custom -tags="custommalloc nottinygc_finalizer proxy_wasm_version_0_2_100" ./`
>
## Function Description
An LLM result caching plugin. The default configuration can be used directly for caching results under the OpenAI protocol, and both streaming and non-streaming responses can be cached.
**Tips**
When a request carries the header `x-higress-skip-ai-cache: on`, it will not use cached content and is forwarded directly to the backend service; the response to such a request is not cached either.
## Runtime Properties
Plugin execution phase: `Authentication Phase`
Plugin execution priority: `10`
## Configuration Description
The configuration has three parts: the vector database (vector), the text embedding service (embedding), and the cache database (cache); fine-grained options such as LLM request/response extraction parameters are also provided.
| Name | Type | Requirement | Default | Description |
| -------- | -------- | -------- | -------- | -------- |
| cacheKeyFrom.requestBody | string | optional | "messages.@reverse.0.content" | Extracts a string from the request body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| cacheValueFrom.responseBody | string | optional | "choices.0.message.content" | Extracts a string from the response body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| cacheStreamValueFrom.responseBody | string | optional | "choices.0.delta.content" | Extracts a string from the streaming response body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| cacheKeyPrefix | string | optional | "higress-ai-cache:" | Prefix for Redis cache keys |
| cacheTTL | integer | optional | 0 | Cache expiration time in seconds; the default 0 means never expire |
| redis.serviceName | string | required | - | Redis service name: the full FQDN including the service type, e.g. my-redis.dns or redis.my-ns.svc.cluster.local |
| redis.servicePort | integer | optional | 6379 | Redis service port |
| redis.timeout | integer | optional | 1000 | Timeout for Redis requests, in milliseconds |
| redis.username | string | optional | - | Username for logging in to Redis |
| redis.password | string | optional | - | Password for logging in to Redis |
| returnResponseTemplate | string | optional | `{"id":"from-cache","choices":[%s],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}` | Template for the HTTP response; %s marks the part replaced by the cache value |
| returnStreamResponseTemplate | string | optional | `data:{"id":"from-cache","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}\n\ndata:[DONE]\n\n` | Template for the streaming HTTP response; %s marks the part replaced by the cache value |
## Configuration Description
This plugin supports both semantic caching backed by a vector database and exact string-match caching. If both a vector database and a cache database are configured, the vector database takes precedence.
*Note*: vector and cache cannot both be empty, otherwise this plugin cannot provide any caching service.
| Name | Type | Requirement | Default | Description |
| --- | --- | --- | --- | --- |
| vector | string | optional | "" | Vector store provider type, e.g. dashvector |
| embedding | string | optional | "" | Text embedding service type, e.g. dashscope |
| cache | string | optional | "" | Cache service type, e.g. redis |
| cacheKeyStrategy | string | optional | "lastQuestion" | Strategy for generating the cache key from the question history. Valid values: "lastQuestion" (use the last question), "allQuestions" (concatenate all questions), or "disabled" (disable caching) |
| enableSemanticCache | bool | optional | true | Whether to enable semantic caching. If disabled, string matching is used to look up cache entries, which requires the cache service to be configured |
Depending on whether semantic caching is needed, the components can be combined as follows:
1. `cache`: exact string-match caching only
2. `vector (+ embedding)`: semantic caching; if the `vector` provider does not offer a text embedding service, a separate `embedding` service must be configured
3. `vector (+ embedding) + cache`: semantic caching, with the cache service storing LLM responses to speed up lookups
Note: if a component is not configured, its `required` fields can be ignored.
## Vector Database Service (vector)
| Name | Type | Requirement | Default | Description |
| --- | --- | --- | --- | --- |
| vector.type | string | required | "" | Vector store provider type, e.g. dashvector |
| vector.serviceName | string | required | "" | Vector store service name |
| vector.serviceHost | string | required | "" | Vector store service host |
| vector.servicePort | int64 | optional | 443 | Vector store service port |
| vector.apiKey | string | optional | "" | Vector store service API key |
| vector.topK | int | optional | 1 | Number of top-K results to return; defaults to 1 |
| vector.timeout | uint32 | optional | 10000 | Timeout for vector store requests, in milliseconds; defaults to 10000 (10 seconds) |
| vector.collectionID | string | optional | "" | Vector store collection ID |
| vector.threshold | float64 | optional | 1000 | Threshold for the vector similarity metric |
| vector.thresholdRelation | string | optional | lt | Comparison applied against the similarity metric. Metrics include `Cosine`, `DotProduct`, and `Euclidean`; for the first two, larger values mean higher similarity, for the last, smaller values mean higher similarity. Choose `gt` for `Cosine` and `DotProduct`, and `lt` for `Euclidean`. Defaults to `lt`. Valid values: `lt` (less than), `lte` (less than or equal to), `gt` (greater than), `gte` (greater than or equal to) |
## Text Embedding Service (embedding)
| Name | Type | Requirement | Default | Description |
| --- | --- | --- | --- | --- |
| embedding.type | string | required | "" | Text embedding service type, e.g. dashscope |
| embedding.serviceName | string | required | "" | Text embedding service name |
| embedding.serviceHost | string | optional | "" | Text embedding service host |
| embedding.servicePort | int64 | optional | 443 | Text embedding service port |
| embedding.apiKey | string | optional | "" | API key for the text embedding service |
| embedding.timeout | uint32 | optional | 10000 | Timeout for embedding requests, in milliseconds; defaults to 10000 (10 seconds) |
| embedding.model | string | optional | "" | Model name for the text embedding service |
## Cache Service (cache)
| Name | Type | Requirement | Default | Description |
| --- | --- | --- | --- | --- |
| cache.type | string | required | "" | Cache service type, e.g. redis |
| cache.serviceName | string | required | "" | Cache service name |
| cache.serviceHost | string | required | "" | Cache service host |
| cache.servicePort | int64 | optional | 6379 | Cache service port |
| cache.username | string | optional | "" | Cache service username |
| cache.password | string | optional | "" | Cache service password |
| cache.timeout | uint32 | optional | 10000 | Timeout for cache requests, in milliseconds; defaults to 10000 (10 seconds) |
| cache.cacheTTL | int | optional | 0 | Cache expiration time in seconds; the default 0 means never expire |
| cacheKeyPrefix | string | optional | "higress-ai-cache:" | Prefix for cache keys; defaults to "higress-ai-cache:" |
## Other Configuration
| Name | Type | Requirement | Default | Description |
| --- | --- | --- | --- | --- |
| cacheKeyFrom | string | optional | "messages.@reverse.0.content" | Extracts a string from the request body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| cacheValueFrom | string | optional | "choices.0.message.content" | Extracts a string from the response body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| cacheStreamValueFrom | string | optional | "choices.0.delta.content" | Extracts a string from the streaming response body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| cacheToolCallsFrom | string | optional | "choices.0.delta.content.tool_calls" | Extracts a string from the streaming response body using [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) syntax |
| responseTemplate | string | optional | `{"id":"ai-cache.hit","choices":[{"index":0,"message":{"role":"assistant","content":%s},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}` | Template for the HTTP response; %s marks the part replaced by the cache value |
| streamResponseTemplate | string | optional | `data:{"id":"ai-cache.hit","choices":[{"index":0,"delta":{"role":"assistant","content":%s},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}\n\ndata:[DONE]\n\n` | Template for the streaming HTTP response; %s marks the part replaced by the cache value |
# Vendor-Specific Vector Database Configuration
## Chroma
The `vector.type` for Chroma is `chroma`. It has no vendor-specific fields. The Collection must be created in advance and its Collection ID filled into `vector.collectionID`; an example Collection ID is `52bbb8b3-724c-477b-a4ce-d5b578214612`.
## DashVector
The `vector.type` for DashVector is `dashvector`. It has no vendor-specific fields. The Collection must be created in advance and its Collection name filled into `vector.collectionID`.
## ElasticSearch
The `vector.type` for ElasticSearch is `elasticsearch`. The Index must be created in advance and its Index name filled into `vector.collectionID`.
The plugin currently relies on the [KNN](https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html) method, so make sure your ES version supports `KNN`; it has been tested on version `8.16`.
Its vendor-specific fields are:
| Name | Type | Requirement | Default | Description |
|-------------------|----------|----------|--------|-------------------------------------------------------------------------------|
| `vector.esUsername` | string | optional | - | ElasticSearch username |
| `vector.esPassword` | string | optional | - | ElasticSearch password |
`vector.esUsername` and `vector.esPassword` are used for Basic authentication. API key authentication is also supported and is enabled when `vector.apiKey` is set; for the SaaS edition, fill in the `encoded` value.
## Milvus
The `vector.type` for Milvus is `milvus`. It has no vendor-specific fields. The Collection must be created in advance and its Collection name filled into `vector.collectionID`.
## Pinecone
The `vector.type` for Pinecone is `pinecone`. It has no vendor-specific fields. The Index must be created in advance and its access host filled into `vector.serviceHost`.
Pinecone's `Namespace` parameter is configured via the plugin's `vector.collectionID`; if `vector.collectionID` is left empty, the Default Namespace is used.
## Qdrant
The `vector.type` for Qdrant is `qdrant`. It has no vendor-specific fields. The Collection must be created in advance and its Collection name filled into `vector.collectionID`.
## Weaviate
The `vector.type` for Weaviate is `weaviate`. It has no vendor-specific fields.
The Collection must be created in advance and its Collection name filled into `vector.collectionID`.
Note that Weaviate automatically capitalizes the first letter, so the `collectionID` must be written with its first letter capitalized.
When using the SaaS edition, the `vector.serviceHost` parameter must be set.
## Configuration Examples
### Basic Configuration
```yaml
embedding:
  type: dashscope
  serviceName: my_dashscope.dns
  apiKey: [Your Key]
vector:
  type: dashvector
  serviceName: my_dashvector.dns
  collectionID: [Your Collection ID]
  serviceDomain: [Your domain]
  apiKey: [Your key]
cache:
  type: redis
  serviceName: my_redis.dns
  servicePort: 6379
  timeout: 100
```
Legacy configuration (kept for compatibility):
```yaml
redis:
  serviceName: my-redis.dns
  timeout: 2000
```
## Advanced Usage
The default cache key is extracted with the GJSON PATH expression `messages.@reverse.0.content`, which reverses the messages array and takes the content of its first element.
GJSON PATH supports conditional syntax; for example, to use the content of the last message whose role is user as the key, write `messages.@reverse.#(role=="user").content`.
@@ -50,3 +180,7 @@ GJSON PATH supports conditional syntax; for example, to use the content of the last message whose role is user
Pipe syntax is also supported; for example, to use the content of the second-to-last message whose role is user as the key, write `messages.@reverse.#(role=="user")#.content|1`.
For more usage, see the [official documentation](https://github.com/tidwall/gjson/blob/master/SYNTAX.md); expressions can be tested in the [GJSON Playground](https://gjson.dev/).
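To make the cache-key extraction concrete, here is a stdlib-only Go sketch of what the path `messages.@reverse.#(role=="user").content` resolves to (this is only an illustration of the path's semantics; the plugin itself uses the gjson library):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// lastUserContent walks the messages array from the end and returns the first
// user content, i.e. what messages.@reverse.#(role=="user").content selects.
func lastUserContent(body []byte) (string, bool) {
	var req struct {
		Messages []message `json:"messages"`
	}
	if err := json.Unmarshal(body, &req); err != nil {
		return "", false
	}
	for i := len(req.Messages) - 1; i >= 0; i-- {
		if req.Messages[i].Role == "user" {
			return req.Messages[i].Content, true
		}
	}
	return "", false
}

func main() {
	body := []byte(`{"messages":[{"role":"user","content":"hi"},{"role":"assistant","content":"hello"},{"role":"user","content":"weather in Beijing?"}]}`)
	fmt.Println(lastUserContent(body))
}
```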
## FAQ
1. If the error `error status returned by host: bad argument` is returned, check whether `serviceName` correctly includes the service-type suffix (such as .dns).

View File

@@ -6,6 +6,10 @@ description: AI Cache Plugin Configuration Reference
## Function Description
LLM result caching plugin, the default configuration can be directly used for result caching under the OpenAI protocol, and it supports caching of both streaming and non-streaming responses.
**Tips**
When carrying the request header `x-higress-skip-ai-cache: on`, the current request will not use content from the cache but will be directly forwarded to the backend service. Additionally, the response content from this request will not be cached.
## Runtime Properties
Plugin Execution Phase: `Authentication Phase`
Plugin Execution Priority: `10`

View File

@@ -0,0 +1,135 @@
package cache
import (
"errors"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
const (
PROVIDER_TYPE_REDIS = "redis"
DEFAULT_CACHE_PREFIX = "higress-ai-cache:"
)
type providerInitializer interface {
ValidateConfig(ProviderConfig) error
CreateProvider(ProviderConfig) (Provider, error)
}
var (
providerInitializers = map[string]providerInitializer{
PROVIDER_TYPE_REDIS: &redisProviderInitializer{},
}
)
type ProviderConfig struct {
// @Title zh-CN redis 缓存服务提供者类型
// @Description zh-CN 缓存服务提供者类型,例如 redis
typ string
// @Title zh-CN redis 缓存服务名称
// @Description zh-CN 缓存服务名称
serviceName string
// @Title zh-CN redis 缓存服务端口
// @Description zh-CN 缓存服务端口默认值为6379
servicePort int
// @Title zh-CN redis 缓存服务地址
// @Description zh-CN Cache 缓存服务地址,非必填
serviceHost string
// @Title zh-CN 缓存服务用户名
// @Description zh-CN 缓存服务用户名,非必填
username string
// @Title zh-CN 缓存服务密码
// @Description zh-CN 缓存服务密码,非必填
password string
// @Title zh-CN 请求超时
// @Description zh-CN 请求缓存服务的超时时间单位为毫秒。默认值是10000即10秒
timeout uint32
// @Title zh-CN 缓存过期时间
// @Description zh-CN 缓存过期时间单位为秒。默认值是0即永不过期
cacheTTL int
// @Title 缓存 Key 前缀
// @Description 缓存 Key 的前缀,默认值为 "higress-ai-cache:"
cacheKeyPrefix string
}
func (c *ProviderConfig) GetProviderType() string {
return c.typ
}
func (c *ProviderConfig) FromJson(json gjson.Result) {
c.typ = json.Get("type").String()
c.serviceName = json.Get("serviceName").String()
c.servicePort = int(json.Get("servicePort").Int())
if !json.Get("servicePort").Exists() {
c.servicePort = 6379
}
c.serviceHost = json.Get("serviceHost").String()
c.username = json.Get("username").String()
c.password = json.Get("password").String()
c.timeout = uint32(json.Get("timeout").Int())
if !json.Get("timeout").Exists() {
c.timeout = 10000
}
c.cacheTTL = int(json.Get("cacheTTL").Int())
if json.Get("cacheKeyPrefix").Exists() {
c.cacheKeyPrefix = json.Get("cacheKeyPrefix").String()
} else {
c.cacheKeyPrefix = DEFAULT_CACHE_PREFIX
}
}
func (c *ProviderConfig) ConvertLegacyJson(json gjson.Result) {
c.FromJson(json.Get("redis"))
c.typ = "redis"
if json.Get("cacheTTL").Exists() {
c.cacheTTL = int(json.Get("cacheTTL").Int())
}
}
func (c *ProviderConfig) Validate() error {
if c.typ == "" {
return errors.New("cache service type is required")
}
if c.serviceName == "" {
return errors.New("cache service name is required")
}
if c.cacheTTL < 0 {
return errors.New("cache TTL must be greater than or equal to 0")
}
initializer, has := providerInitializers[c.typ]
if !has {
return errors.New("unknown cache service provider type: " + c.typ)
}
if err := initializer.ValidateConfig(*c); err != nil {
return err
}
return nil
}
func CreateProvider(pc ProviderConfig) (Provider, error) {
initializer, has := providerInitializers[pc.typ]
if !has {
return nil, errors.New("unknown provider type: " + pc.typ)
}
return initializer.CreateProvider(pc)
}
type Provider interface {
GetProviderType() string
Init(username string, password string, timeout uint32) error
Get(key string, cb wrapper.RedisResponseCallback) error
Set(key string, value string, cb wrapper.RedisResponseCallback) error
GetCacheKeyPrefix() string
}

View File

@@ -0,0 +1,58 @@
package cache
import (
"errors"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
)
type redisProviderInitializer struct {
}
func (r *redisProviderInitializer) ValidateConfig(cf ProviderConfig) error {
if len(cf.serviceName) == 0 {
return errors.New("cache service name is required")
}
return nil
}
func (r *redisProviderInitializer) CreateProvider(cf ProviderConfig) (Provider, error) {
rp := redisProvider{
config: cf,
client: wrapper.NewRedisClusterClient(wrapper.FQDNCluster{
FQDN: cf.serviceName,
Host: cf.serviceHost,
Port: int64(cf.servicePort)}),
}
err := rp.Init(cf.username, cf.password, cf.timeout)
return &rp, err
}
type redisProvider struct {
config ProviderConfig
client wrapper.RedisClient
}
func (rp *redisProvider) GetProviderType() string {
return PROVIDER_TYPE_REDIS
}
func (rp *redisProvider) Init(username string, password string, timeout uint32) error {
return rp.client.Init(username, password, int64(timeout))
}
func (rp *redisProvider) Get(key string, cb wrapper.RedisResponseCallback) error {
return rp.client.Get(key, cb)
}
func (rp *redisProvider) Set(key string, value string, cb wrapper.RedisResponseCallback) error {
if rp.config.cacheTTL == 0 {
return rp.client.Set(key, value, cb)
} else {
return rp.client.SetEx(key, value, rp.config.cacheTTL, cb)
}
}
func (rp *redisProvider) GetCacheKeyPrefix() string {
return rp.config.cacheKeyPrefix
}

View File

@@ -0,0 +1,225 @@
package config
import (
"fmt"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/cache"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/embedding"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/vector"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
const (
CACHE_KEY_STRATEGY_LAST_QUESTION = "lastQuestion"
CACHE_KEY_STRATEGY_ALL_QUESTIONS = "allQuestions"
CACHE_KEY_STRATEGY_DISABLED = "disabled"
)
type PluginConfig struct {
// @Title zh-CN 返回 HTTP 响应的模版
// @Description zh-CN 用 %s 标记需要被 cache value 替换的部分
ResponseTemplate string
// @Title zh-CN 返回流式 HTTP 响应的模版
// @Description zh-CN 用 %s 标记需要被 cache value 替换的部分
StreamResponseTemplate string
cacheProvider cache.Provider
embeddingProvider embedding.Provider
vectorProvider vector.Provider
embeddingProviderConfig embedding.ProviderConfig
vectorProviderConfig vector.ProviderConfig
cacheProviderConfig cache.ProviderConfig
CacheKeyFrom string
CacheValueFrom string
CacheStreamValueFrom string
CacheToolCallsFrom string
// @Title zh-CN 启用语义化缓存
// @Description zh-CN 控制是否启用语义化缓存功能。true 表示启用false 表示禁用。
EnableSemanticCache bool
// @Title zh-CN 缓存键策略
// @Description zh-CN 决定如何生成缓存键的策略。可选值: "lastQuestion" (使用最后一个问题), "allQuestions" (拼接所有问题) 或 "disabled" (禁用缓存)
CacheKeyStrategy string
}
func (c *PluginConfig) FromJson(json gjson.Result, log wrapper.Log) {
c.vectorProviderConfig.FromJson(json.Get("vector"))
c.embeddingProviderConfig.FromJson(json.Get("embedding"))
c.cacheProviderConfig.FromJson(json.Get("cache"))
if json.Get("redis").Exists() {
// compatible with legacy config
c.cacheProviderConfig.ConvertLegacyJson(json)
}
c.CacheKeyStrategy = json.Get("cacheKeyStrategy").String()
if c.CacheKeyStrategy == "" {
c.CacheKeyStrategy = CACHE_KEY_STRATEGY_LAST_QUESTION // set default value
}
c.CacheKeyFrom = json.Get("cacheKeyFrom").String()
if c.CacheKeyFrom == "" {
c.CacheKeyFrom = "messages.@reverse.0.content"
}
c.CacheValueFrom = json.Get("cacheValueFrom").String()
if c.CacheValueFrom == "" {
c.CacheValueFrom = "choices.0.message.content"
}
c.CacheStreamValueFrom = json.Get("cacheStreamValueFrom").String()
if c.CacheStreamValueFrom == "" {
c.CacheStreamValueFrom = "choices.0.delta.content"
}
c.CacheToolCallsFrom = json.Get("cacheToolCallsFrom").String()
if c.CacheToolCallsFrom == "" {
c.CacheToolCallsFrom = "choices.0.delta.content.tool_calls"
}
c.StreamResponseTemplate = json.Get("streamResponseTemplate").String()
if c.StreamResponseTemplate == "" {
c.StreamResponseTemplate = `data:{"id":"from-cache","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}` + "\n\ndata:[DONE]\n\n"
}
c.ResponseTemplate = json.Get("responseTemplate").String()
if c.ResponseTemplate == "" {
c.ResponseTemplate = `{"id":"from-cache","choices":[{"index":0,"message":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}`
}
if json.Get("enableSemanticCache").Exists() {
c.EnableSemanticCache = json.Get("enableSemanticCache").Bool()
} else {
c.EnableSemanticCache = true // set default value to true
}
// compatible with legacy config
convertLegacyMapFields(c, json, log)
}
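For reference, a hypothetical configuration covering the fields parsed above. Field names follow the `json.Get` keys in `FromJson`; the service names, key, and the exact cache/vector provider fields are placeholders (they depend on `cache.ProviderConfig` and `vector.ProviderConfig`, which are defined elsewhere):

```yaml
embedding:
  type: dashscope              # provider type registered in embedding/provider.go
  serviceName: my-dashscope.dns
  apiKey: "<YOUR_API_KEY>"     # placeholder
cache:
  serviceName: my-redis.dns    # remaining fields depend on cache.ProviderConfig
vector:
  serviceName: my-vector.dns   # remaining fields depend on vector.ProviderConfig
cacheKeyStrategy: lastQuestion
cacheKeyFrom: "messages.@reverse.0.content"
cacheValueFrom: "choices.0.message.content"
cacheStreamValueFrom: "choices.0.delta.content"
enableSemanticCache: true
```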
func (c *PluginConfig) Validate() error {
// if cache provider is configured, validate it
if c.cacheProviderConfig.GetProviderType() != "" {
if err := c.cacheProviderConfig.Validate(); err != nil {
return err
}
}
if c.embeddingProviderConfig.GetProviderType() != "" {
if err := c.embeddingProviderConfig.Validate(); err != nil {
return err
}
}
if c.vectorProviderConfig.GetProviderType() != "" {
if err := c.vectorProviderConfig.Validate(); err != nil {
return err
}
}
// cache, vector, and embedding cannot all be empty
if c.vectorProviderConfig.GetProviderType() == "" &&
c.embeddingProviderConfig.GetProviderType() == "" &&
c.cacheProviderConfig.GetProviderType() == "" {
return fmt.Errorf("vector, embedding and cache provider cannot all be empty")
}
// Validate the value of CacheKeyStrategy
if c.CacheKeyStrategy != CACHE_KEY_STRATEGY_LAST_QUESTION &&
c.CacheKeyStrategy != CACHE_KEY_STRATEGY_ALL_QUESTIONS &&
c.CacheKeyStrategy != CACHE_KEY_STRATEGY_DISABLED {
return fmt.Errorf("invalid CacheKeyStrategy: %s", c.CacheKeyStrategy)
}
// If semantic cache is enabled, ensure necessary components are configured
// if c.EnableSemanticCache {
// if c.embeddingProviderConfig.GetProviderType() == "" {
// return fmt.Errorf("semantic cache is enabled but embedding provider is not configured")
// }
// // if only configure cache, just warn the user
// }
return nil
}
func (c *PluginConfig) Complete(log wrapper.Log) error {
var err error
if c.embeddingProviderConfig.GetProviderType() != "" {
log.Debugf("embedding provider is set to %s", c.embeddingProviderConfig.GetProviderType())
c.embeddingProvider, err = embedding.CreateProvider(c.embeddingProviderConfig)
if err != nil {
return err
}
} else {
log.Info("embedding provider is not configured")
c.embeddingProvider = nil
}
if c.cacheProviderConfig.GetProviderType() != "" {
log.Debugf("cache provider is set to %s", c.cacheProviderConfig.GetProviderType())
c.cacheProvider, err = cache.CreateProvider(c.cacheProviderConfig)
if err != nil {
return err
}
} else {
log.Info("cache provider is not configured")
c.cacheProvider = nil
}
if c.vectorProviderConfig.GetProviderType() != "" {
log.Debugf("vector provider is set to %s", c.vectorProviderConfig.GetProviderType())
c.vectorProvider, err = vector.CreateProvider(c.vectorProviderConfig)
if err != nil {
return err
}
} else {
log.Info("vector provider is not configured")
c.vectorProvider = nil
}
return nil
}
func (c *PluginConfig) GetEmbeddingProvider() embedding.Provider {
return c.embeddingProvider
}
func (c *PluginConfig) GetVectorProvider() vector.Provider {
return c.vectorProvider
}
func (c *PluginConfig) GetVectorProviderConfig() vector.ProviderConfig {
return c.vectorProviderConfig
}
func (c *PluginConfig) GetCacheProvider() cache.Provider {
return c.cacheProvider
}
func convertLegacyMapFields(c *PluginConfig, json gjson.Result, log wrapper.Log) {
keyMap := map[string]string{
"cacheKeyFrom.requestBody": "cacheKeyFrom",
"cacheValueFrom.requestBody": "cacheValueFrom",
"cacheStreamValueFrom.requestBody": "cacheStreamValueFrom",
"returnResponseTemplate": "responseTemplate",
"returnStreamResponseTemplate": "streamResponseTemplate",
}
for oldKey, newKey := range keyMap {
if json.Get(oldKey).Exists() {
log.Debugf("[convertLegacyMapFields] mapping %s to %s", oldKey, newKey)
setField(c, newKey, json.Get(oldKey).String(), log)
} else {
log.Debugf("[convertLegacyMapFields] %s does not exist", oldKey)
}
}
}
func setField(c *PluginConfig, fieldName string, value string, log wrapper.Log) {
switch fieldName {
case "cacheKeyFrom":
c.CacheKeyFrom = value
case "cacheValueFrom":
c.CacheValueFrom = value
case "cacheStreamValueFrom":
c.CacheStreamValueFrom = value
case "responseTemplate":
c.ResponseTemplate = value
case "streamResponseTemplate":
c.StreamResponseTemplate = value
}
log.Debugf("[setField] set %s to %s", fieldName, value)
}

View File

@@ -0,0 +1,275 @@
package main
import (
"errors"
"fmt"
"strconv"
"strings"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/config"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/vector"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/higress-group/proxy-wasm-go-sdk/proxywasm"
"github.com/tidwall/resp"
)
// CheckCacheForKey checks if the key is in the cache, or triggers similarity search if not found.
func CheckCacheForKey(key string, ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log, stream bool, useSimilaritySearch bool) error {
activeCacheProvider := c.GetCacheProvider()
if activeCacheProvider == nil {
log.Debugf("[%s] [CheckCacheForKey] no cache provider configured, performing similarity search", PLUGIN_NAME)
return performSimilaritySearch(key, ctx, c, log, key, stream)
}
queryKey := activeCacheProvider.GetCacheKeyPrefix() + key
log.Debugf("[%s] [CheckCacheForKey] querying cache with key: %s", PLUGIN_NAME, queryKey)
err := activeCacheProvider.Get(queryKey, func(response resp.Value) {
handleCacheResponse(key, response, ctx, log, stream, c, useSimilaritySearch)
})
if err != nil {
log.Errorf("[%s] [CheckCacheForKey] failed to retrieve key: %s from cache, error: %v", PLUGIN_NAME, key, err)
return err
}
return nil
}
// handleCacheResponse processes cache response and handles cache hits and misses.
func handleCacheResponse(key string, response resp.Value, ctx wrapper.HttpContext, log wrapper.Log, stream bool, c config.PluginConfig, useSimilaritySearch bool) {
if err := response.Error(); err == nil && !response.IsNull() {
log.Infof("[%s] cache hit for key: %s", PLUGIN_NAME, key)
processCacheHit(key, response.String(), stream, ctx, c, log)
return
}
log.Infof("[%s] [handleCacheResponse] cache miss for key: %s", PLUGIN_NAME, key)
if err := response.Error(); err != nil {
log.Errorf("[%s] [handleCacheResponse] error retrieving key: %s from cache, error: %v", PLUGIN_NAME, key, err)
}
if useSimilaritySearch && c.EnableSemanticCache {
if err := performSimilaritySearch(key, ctx, c, log, key, stream); err != nil {
log.Errorf("[%s] [handleCacheResponse] failed to perform similarity search for key: %s, error: %v", PLUGIN_NAME, key, err)
proxywasm.ResumeHttpRequest()
}
} else {
proxywasm.ResumeHttpRequest()
}
}
// processCacheHit handles a successful cache hit.
func processCacheHit(key string, response string, stream bool, ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log) {
if strings.TrimSpace(response) == "" {
log.Warnf("[%s] [processCacheHit] cached response for key %s is empty", PLUGIN_NAME, key)
proxywasm.ResumeHttpRequest()
return
}
log.Debugf("[%s] [processCacheHit] cached response for key %s: %s", PLUGIN_NAME, key, response)
// Escape the response to ensure consistent formatting
escapedResponse := strings.Trim(strconv.Quote(response), "\"")
ctx.SetContext(CACHE_KEY_CONTEXT_KEY, nil)
if stream {
proxywasm.SendHttpResponseWithDetail(200, "ai-cache.hit", [][2]string{{"content-type", "text/event-stream; charset=utf-8"}}, []byte(fmt.Sprintf(c.StreamResponseTemplate, escapedResponse)), -1)
} else {
proxywasm.SendHttpResponseWithDetail(200, "ai-cache.hit", [][2]string{{"content-type", "application/json; charset=utf-8"}}, []byte(fmt.Sprintf(c.ResponseTemplate, escapedResponse)), -1)
}
}
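The `strconv.Quote` plus `strings.Trim` combination above is how the cached text is escaped before being spliced into the JSON response templates. A standalone sketch of that escaping (the values are illustrative; note the result is only quote- and newline-safe, not a full JSON encoder):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// escapeForTemplate mirrors the escaping used in processCacheHit:
// strconv.Quote escapes quotes, newlines and control characters, and
// strings.Trim removes the surrounding double quotes that Quote adds,
// so the result can be spliced into a response template via fmt.Sprintf.
func escapeForTemplate(s string) string {
	return strings.Trim(strconv.Quote(s), "\"")
}

func main() {
	cached := "line1\nline2 \"quoted\" text"
	template := `{"content":"%s"}`
	// prints: {"content":"line1\nline2 \"quoted\" text"}
	fmt.Printf(template+"\n", escapeForTemplate(cached))
}
```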
// performSimilaritySearch determines the appropriate similarity search method to use.
func performSimilaritySearch(key string, ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log, queryString string, stream bool) error {
activeVectorProvider := c.GetVectorProvider()
if activeVectorProvider == nil {
return logAndReturnError(log, "[performSimilaritySearch] no vector provider configured for similarity search")
}
// Check if the active vector provider implements the StringQuerier interface.
if _, ok := activeVectorProvider.(vector.StringQuerier); ok {
log.Debugf("[%s] [performSimilaritySearch] active vector provider implements StringQuerier interface, performing string query", PLUGIN_NAME)
return performStringQuery(key, queryString, ctx, c, log, stream)
}
// Check if the active vector provider implements the EmbeddingQuerier interface.
if _, ok := activeVectorProvider.(vector.EmbeddingQuerier); ok {
log.Debugf("[%s] [performSimilaritySearch] active vector provider implements EmbeddingQuerier interface, performing embedding query", PLUGIN_NAME)
return performEmbeddingQuery(key, ctx, c, log, stream)
}
return logAndReturnError(log, "[performSimilaritySearch] no suitable querier or embedding provider available for similarity search")
}
// performStringQuery executes the string-based similarity search.
func performStringQuery(key string, queryString string, ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log, stream bool) error {
stringQuerier, ok := c.GetVectorProvider().(vector.StringQuerier)
if !ok {
return logAndReturnError(log, "[performStringQuery] active vector provider does not implement StringQuerier interface")
}
return stringQuerier.QueryString(queryString, ctx, log, func(results []vector.QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error) {
handleQueryResults(key, results, ctx, log, stream, c, err)
})
}
// performEmbeddingQuery executes the embedding-based similarity search.
func performEmbeddingQuery(key string, ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log, stream bool) error {
embeddingQuerier, ok := c.GetVectorProvider().(vector.EmbeddingQuerier)
if !ok {
return logAndReturnError(log, "[performEmbeddingQuery] active vector provider does not implement EmbeddingQuerier interface")
}
activeEmbeddingProvider := c.GetEmbeddingProvider()
if activeEmbeddingProvider == nil {
return logAndReturnError(log, "[performEmbeddingQuery] no embedding provider configured for similarity search")
}
return activeEmbeddingProvider.GetEmbedding(key, ctx, log, func(textEmbedding []float64, err error) {
log.Debugf("[%s] [performEmbeddingQuery] GetEmbedding success, length of embedding: %d, error: %v", PLUGIN_NAME, len(textEmbedding), err)
if err != nil {
handleInternalError(err, fmt.Sprintf("[%s] [performEmbeddingQuery] error getting embedding for key: %s", PLUGIN_NAME, key), log)
return
}
ctx.SetContext(CACHE_KEY_EMBEDDING_KEY, textEmbedding)
err = embeddingQuerier.QueryEmbedding(textEmbedding, ctx, log, func(results []vector.QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error) {
handleQueryResults(key, results, ctx, log, stream, c, err)
})
if err != nil {
handleInternalError(err, fmt.Sprintf("[%s] [performEmbeddingQuery] error querying vector database for key: %s", PLUGIN_NAME, key), log)
}
})
}
// handleQueryResults processes the results of similarity search and determines next actions.
func handleQueryResults(key string, results []vector.QueryResult, ctx wrapper.HttpContext, log wrapper.Log, stream bool, c config.PluginConfig, err error) {
if err != nil {
handleInternalError(err, fmt.Sprintf("[%s] [handleQueryResults] error querying vector database for key: %s", PLUGIN_NAME, key), log)
return
}
if len(results) == 0 {
log.Warnf("[%s] [handleQueryResults] no similar keys found for key: %s", PLUGIN_NAME, key)
proxywasm.ResumeHttpRequest()
return
}
mostSimilarData := results[0]
log.Debugf("[%s] [handleQueryResults] for key: %s, the most similar key found: %s with score: %f", PLUGIN_NAME, key, mostSimilarData.Text, mostSimilarData.Score)
simThreshold := c.GetVectorProviderConfig().Threshold
simThresholdRelation := c.GetVectorProviderConfig().ThresholdRelation
if compare(simThresholdRelation, mostSimilarData.Score, simThreshold) {
log.Infof("[%s] key accepted: %s with score: %f", PLUGIN_NAME, mostSimilarData.Text, mostSimilarData.Score)
if mostSimilarData.Answer != "" {
// direct return the answer if available
cacheResponse(ctx, c, key, mostSimilarData.Answer, log)
processCacheHit(key, mostSimilarData.Answer, stream, ctx, c, log)
} else {
if c.GetCacheProvider() != nil {
CheckCacheForKey(mostSimilarData.Text, ctx, c, log, stream, false)
} else {
// No cache provider to consult, so resume the request directly
log.Infof("[%s] similar key found: %s, but no answer is stored in the vector database and no cache provider is configured", PLUGIN_NAME, mostSimilarData.Text)
proxywasm.ResumeHttpRequest()
}
}
} else {
log.Infof("[%s] score %f for key %s does not meet the threshold %f", PLUGIN_NAME, mostSimilarData.Score, mostSimilarData.Text, simThreshold)
proxywasm.ResumeHttpRequest()
}
}
// logAndReturnError logs an error and returns it.
func logAndReturnError(log wrapper.Log, message string) error {
message = fmt.Sprintf("[%s] %s", PLUGIN_NAME, message)
log.Errorf("%s", message)
return errors.New(message)
}
// handleInternalError logs an error and resumes the HTTP request.
func handleInternalError(err error, message string, log wrapper.Log) {
if err != nil {
log.Errorf("[%s] [handleInternalError] %s: %v", PLUGIN_NAME, message, err)
} else {
log.Errorf("[%s] [handleInternalError] %s", PLUGIN_NAME, message)
}
// proxywasm.SendHttpResponse(500, [][2]string{{"content-type", "text/plain"}}, []byte("Internal Server Error"), -1)
proxywasm.ResumeHttpRequest()
}
// Caches the response value
func cacheResponse(ctx wrapper.HttpContext, c config.PluginConfig, key string, value string, log wrapper.Log) {
if strings.TrimSpace(value) == "" {
log.Warnf("[%s] [cacheResponse] cached value for key %s is empty", PLUGIN_NAME, key)
return
}
activeCacheProvider := c.GetCacheProvider()
if activeCacheProvider != nil {
queryKey := activeCacheProvider.GetCacheKeyPrefix() + key
_ = activeCacheProvider.Set(queryKey, value, nil)
log.Debugf("[%s] [cacheResponse] cache set success, key: %s, length of value: %d", PLUGIN_NAME, queryKey, len(value))
}
}
// Handles embedding upload if available
func uploadEmbeddingAndAnswer(ctx wrapper.HttpContext, c config.PluginConfig, key string, value string, log wrapper.Log) {
embedding := ctx.GetContext(CACHE_KEY_EMBEDDING_KEY)
if embedding == nil {
return
}
emb, ok := embedding.([]float64)
if !ok {
log.Errorf("[%s] [uploadEmbeddingAndAnswer] embedding is not of expected type []float64", PLUGIN_NAME)
return
}
activeVectorProvider := c.GetVectorProvider()
if activeVectorProvider == nil {
log.Debugf("[%s] [uploadEmbeddingAndAnswer] no vector provider configured for uploading embedding", PLUGIN_NAME)
return
}
// Attempt to upload answer embedding first
if ansEmbUploader, ok := activeVectorProvider.(vector.AnswerAndEmbeddingUploader); ok {
log.Infof("[%s] uploading answer embedding for key: %s", PLUGIN_NAME, key)
err := ansEmbUploader.UploadAnswerAndEmbedding(key, emb, value, ctx, log, nil)
if err != nil {
log.Warnf("[%s] [uploadEmbeddingAndAnswer] failed to upload answer embedding for key: %s, error: %v", PLUGIN_NAME, key, err)
} else {
return // If successful, return early
}
}
// If answer embedding upload fails, attempt normal embedding upload
if embUploader, ok := activeVectorProvider.(vector.EmbeddingUploader); ok {
log.Infof("[%s] uploading embedding for key: %s", PLUGIN_NAME, key)
err := embUploader.UploadEmbedding(key, emb, ctx, log, nil)
if err != nil {
log.Warnf("[%s] [uploadEmbeddingAndAnswer] failed to upload embedding for key: %s, error: %v", PLUGIN_NAME, key, err)
}
}
}
// Used for similarity/distance/dot-product comparisons.
// Cosine similarity measures how close two vectors are in direction: the higher the similarity, the closer the vectors.
// Distance measures how far apart two vectors are in space: the smaller the distance, the closer the vectors.
// compare evaluates value1 against value2 according to the given operator and returns the result.
func compare(operator string, value1 float64, value2 float64) bool {
switch operator {
case "gt":
return value1 > value2
case "gte":
return value1 >= value2
case "lt":
return value1 < value2
case "lte":
return value1 <= value2
default:
return false
}
}
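The `gt`/`gte` operators suit similarity metrics (higher means closer), while `lt`/`lte` suit distance metrics (lower means closer). A standalone sketch combining the same dispatch with an illustrative cosine-similarity helper (the helper is not part of the plugin):

```go
package main

import (
	"fmt"
	"math"
)

// compare mirrors the operator dispatch above.
func compare(operator string, value1, value2 float64) bool {
	switch operator {
	case "gt":
		return value1 > value2
	case "gte":
		return value1 >= value2
	case "lt":
		return value1 < value2
	case "lte":
		return value1 <= value2
	default:
		return false
	}
}

// cosineSimilarity is an illustrative helper: dot product of a and b
// divided by the product of their Euclidean norms.
func cosineSimilarity(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	score := cosineSimilarity([]float64{1, 0}, []float64{1, 1}) // ~0.707
	fmt.Println(compare("gte", score, 0.7))                     // similarity metric: accepted
	fmt.Println(compare("lt", 0.3, 0.5))                        // distance metric: accepted
}
```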

View File

@@ -0,0 +1,187 @@
package embedding
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strconv"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
)
const (
DASHSCOPE_DOMAIN = "dashscope.aliyuncs.com"
DASHSCOPE_PORT = 443
DASHSCOPE_DEFAULT_MODEL_NAME = "text-embedding-v2"
DASHSCOPE_ENDPOINT = "/api/v1/services/embeddings/text-embedding/text-embedding"
)
type dashScopeProviderInitializer struct {
}
func (d *dashScopeProviderInitializer) ValidateConfig(config ProviderConfig) error {
if config.apiKey == "" {
return errors.New("[DashScope] apiKey is required")
}
return nil
}
func (d *dashScopeProviderInitializer) CreateProvider(c ProviderConfig) (Provider, error) {
if c.servicePort == 0 {
c.servicePort = DASHSCOPE_PORT
}
if c.serviceHost == "" {
c.serviceHost = DASHSCOPE_DOMAIN
}
return &DSProvider{
config: c,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: c.serviceName,
Host: c.serviceHost,
Port: int64(c.servicePort),
}),
}, nil
}
func (d *DSProvider) GetProviderType() string {
return PROVIDER_TYPE_DASHSCOPE
}
type Embedding struct {
Embedding []float64 `json:"embedding"`
TextIndex int `json:"text_index"`
}
type Input struct {
Texts []string `json:"texts"`
}
type Params struct {
TextType string `json:"text_type"`
}
type Response struct {
RequestID string `json:"request_id"`
Output Output `json:"output"`
Usage Usage `json:"usage"`
}
type Output struct {
Embeddings []Embedding `json:"embeddings"`
}
type Usage struct {
TotalTokens int `json:"total_tokens"`
}
type EmbeddingRequest struct {
Model string `json:"model"`
Input Input `json:"input"`
Parameters Params `json:"parameters"`
}
type Document struct {
Vector []float64 `json:"vector"`
Fields map[string]string `json:"fields"`
}
type DSProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (d *DSProvider) constructParameters(texts []string, log wrapper.Log) (string, [][2]string, []byte, error) {
model := d.config.model
if model == "" {
model = DASHSCOPE_DEFAULT_MODEL_NAME
}
data := EmbeddingRequest{
Model: model,
Input: Input{
Texts: texts,
},
Parameters: Params{
TextType: "query",
},
}
requestBody, err := json.Marshal(data)
if err != nil {
log.Errorf("failed to marshal request data: %v", err)
return "", nil, nil, err
}
if d.config.apiKey == "" {
err := errors.New("DashScope apiKey is empty")
log.Errorf("failed to construct headers: %v", err)
return "", nil, nil, err
}
headers := [][2]string{
{"Authorization", "Bearer " + d.config.apiKey},
{"Content-Type", "application/json"},
}
return DASHSCOPE_ENDPOINT, headers, requestBody, nil
}
type Result struct {
ID string `json:"id"`
Vector []float64 `json:"vector,omitempty"`
Fields map[string]interface{} `json:"fields"`
Score float64 `json:"score"`
}
func (d *DSProvider) parseTextEmbedding(responseBody []byte) (*Response, error) {
var resp Response
err := json.Unmarshal(responseBody, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (d *DSProvider) GetEmbedding(
queryString string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(emb []float64, err error)) error {
embUrl, embHeaders, embRequestBody, err := d.constructParameters([]string{queryString}, log)
if err != nil {
log.Errorf("failed to construct parameters: %v", err)
return err
}
var resp *Response
err = d.client.Post(embUrl, embHeaders, embRequestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
if statusCode != http.StatusOK {
err = errors.New("failed to get embedding due to status code: " + strconv.Itoa(statusCode))
callback(nil, err)
return
}
log.Debugf("get embedding response: %d, %s", statusCode, responseBody)
resp, err = d.parseTextEmbedding(responseBody)
if err != nil {
err = fmt.Errorf("failed to parse response: %v", err)
callback(nil, err)
return
}
if len(resp.Output.Embeddings) == 0 {
err = errors.New("no embedding found in response")
callback(nil, err)
return
}
callback(resp.Output.Embeddings[0].Embedding, nil)
}, d.config.timeout)
return err
}

View File

@@ -0,0 +1,112 @@
package embedding
import (
"errors"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
const (
PROVIDER_TYPE_DASHSCOPE = "dashscope"
PROVIDER_TYPE_TEXTIN = "textin"
)
type providerInitializer interface {
ValidateConfig(ProviderConfig) error
CreateProvider(ProviderConfig) (Provider, error)
}
var (
providerInitializers = map[string]providerInitializer{
PROVIDER_TYPE_DASHSCOPE: &dashScopeProviderInitializer{},
PROVIDER_TYPE_TEXTIN: &textInProviderInitializer{},
}
)
type ProviderConfig struct {
// @Title zh-CN 文本特征提取服务提供者类型
// @Description zh-CN 文本特征提取服务提供者类型,例如 DashScope
typ string
// @Title zh-CN DashScope 文本特征提取服务名称
// @Description zh-CN 文本特征提取服务名称
serviceName string
// @Title zh-CN 文本特征提取服务域名
// @Description zh-CN 文本特征提取服务域名
serviceHost string
// @Title zh-CN 文本特征提取服务端口
// @Description zh-CN 文本特征提取服务端口
servicePort int64
// @Title zh-CN 文本特征提取服务 API Key
// @Description zh-CN 文本特征提取服务 API Key
apiKey string
//@Title zh-CN TextIn x-ti-app-id
// @Description zh-CN 仅适用于 TextIn 服务。参考 https://www.textin.com/document/acge_text_embedding
textinAppId string
//@Title zh-CN TextIn x-ti-secret-code
// @Description zh-CN 仅适用于 TextIn 服务。参考 https://www.textin.com/document/acge_text_embedding
textinSecretCode string
//@Title zh-CN TextIn request matryoshka_dim
// @Description zh-CN 仅适用于 TextIn 服务, 指定返回的向量维度。参考 https://www.textin.com/document/acge_text_embedding
textinMatryoshkaDim int
// @Title zh-CN 文本特征提取服务超时时间
// @Description zh-CN 文本特征提取服务超时时间
timeout uint32
// @Title zh-CN 文本特征提取服务使用的模型
// @Description zh-CN 用于文本特征提取的模型名称,在 DashScope 中默认为 "text-embedding-v2"
model string
}
func (c *ProviderConfig) FromJson(json gjson.Result) {
c.typ = json.Get("type").String()
c.serviceName = json.Get("serviceName").String()
c.serviceHost = json.Get("serviceHost").String()
c.servicePort = json.Get("servicePort").Int()
c.apiKey = json.Get("apiKey").String()
c.textinAppId = json.Get("textinAppId").String()
c.textinSecretCode = json.Get("textinSecretCode").String()
c.textinMatryoshkaDim = int(json.Get("textinMatryoshkaDim").Int())
c.timeout = uint32(json.Get("timeout").Int())
c.model = json.Get("model").String()
if c.timeout == 0 {
c.timeout = 10000
}
}
func (c *ProviderConfig) Validate() error {
if c.serviceName == "" {
return errors.New("embedding service name is required")
}
if c.typ == "" {
return errors.New("embedding service type is required")
}
initializer, has := providerInitializers[c.typ]
if !has {
return errors.New("unknown embedding service provider type: " + c.typ)
}
if err := initializer.ValidateConfig(*c); err != nil {
return err
}
return nil
}
func (c *ProviderConfig) GetProviderType() string {
return c.typ
}
func CreateProvider(pc ProviderConfig) (Provider, error) {
initializer, has := providerInitializers[pc.typ]
if !has {
return nil, errors.New("unknown provider type: " + pc.typ)
}
return initializer.CreateProvider(pc)
}
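`CreateProvider` dispatches through a registry that maps each provider type to its initializer. A standalone sketch of that registry pattern (the `dummyProvider` type and the `create` helper are illustrative, not part of the plugin):

```go
package main

import (
	"errors"
	"fmt"
)

// provider is a stripped-down stand-in for embedding.Provider.
type provider interface{ Type() string }

type dummyProvider struct{ typ string }

func (d dummyProvider) Type() string { return d.typ }

// registry maps a provider type string to its factory, mirroring
// the providerInitializers map above.
var registry = map[string]func() (provider, error){
	"dashscope": func() (provider, error) { return dummyProvider{"dashscope"}, nil },
	"textin":    func() (provider, error) { return dummyProvider{"textin"}, nil },
}

// create dispatches on the configured type, as CreateProvider does.
func create(typ string) (provider, error) {
	init, ok := registry[typ]
	if !ok {
		return nil, errors.New("unknown provider type: " + typ)
	}
	return init()
}

func main() {
	p, err := create("dashscope")
	fmt.Println(p.Type(), err) // dashscope <nil>
	_, err = create("openai")
	fmt.Println(err) // unknown provider type: openai
}
```

Registering new provider types then only requires adding an entry to the map, which is how the TextIn provider was added alongside DashScope.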
type Provider interface {
GetProviderType() string
GetEmbedding(
queryString string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(emb []float64, err error)) error
}

View File

@@ -0,0 +1,161 @@
package embedding
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strconv"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
)
const (
TEXTIN_DOMAIN = "api.textin.com"
TEXTIN_PORT = 443
TEXTIN_DEFAULT_MODEL_NAME = "acge-text-embedding"
TEXTIN_ENDPOINT = "/ai/service/v1/acge_embedding"
)
type textInProviderInitializer struct {
}
func (t *textInProviderInitializer) ValidateConfig(config ProviderConfig) error {
if config.textinAppId == "" {
return errors.New("embedding service TextIn App ID is required")
}
if config.textinSecretCode == "" {
return errors.New("embedding service TextIn Secret Code is required")
}
if config.textinMatryoshkaDim == 0 {
return errors.New("embedding service TextIn Matryoshka Dim is required")
}
return nil
}
func (t *textInProviderInitializer) CreateProvider(c ProviderConfig) (Provider, error) {
if c.servicePort == 0 {
c.servicePort = TEXTIN_PORT
}
if c.serviceHost == "" {
c.serviceHost = TEXTIN_DOMAIN
}
return &TIProvider{
config: c,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: c.serviceName,
Host: c.serviceHost,
Port: int64(c.servicePort),
}),
}, nil
}
func (t *TIProvider) GetProviderType() string {
return PROVIDER_TYPE_TEXTIN
}
type TextInResponse struct {
Code int `json:"code"`
Message string `json:"message"`
Duration float64 `json:"duration"`
Result TextInResult `json:"result"`
}
type TextInResult struct {
Embeddings [][]float64 `json:"embedding"`
MatryoshkaDim int `json:"matryoshka_dim"`
}
type TextInEmbeddingRequest struct {
Input []string `json:"input"`
MatryoshkaDim int `json:"matryoshka_dim"`
}
type TIProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (t *TIProvider) constructParameters(texts []string, log wrapper.Log) (string, [][2]string, []byte, error) {
data := TextInEmbeddingRequest{
Input: texts,
MatryoshkaDim: t.config.textinMatryoshkaDim,
}
requestBody, err := json.Marshal(data)
if err != nil {
log.Errorf("failed to marshal request data: %v", err)
return "", nil, nil, err
}
if t.config.textinAppId == "" {
err := errors.New("textinAppId is empty")
log.Errorf("failed to construct headers: %v", err)
return "", nil, nil, err
}
if t.config.textinSecretCode == "" {
err := errors.New("textinSecretCode is empty")
log.Errorf("failed to construct headers: %v", err)
return "", nil, nil, err
}
headers := [][2]string{
{"x-ti-app-id", t.config.textinAppId},
{"x-ti-secret-code", t.config.textinSecretCode},
{"Content-Type", "application/json"},
}
return TEXTIN_ENDPOINT, headers, requestBody, nil
}
func (t *TIProvider) parseTextEmbedding(responseBody []byte) (*TextInResponse, error) {
var resp TextInResponse
err := json.Unmarshal(responseBody, &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
func (t *TIProvider) GetEmbedding(
queryString string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(emb []float64, err error)) error {
embUrl, embHeaders, embRequestBody, err := t.constructParameters([]string{queryString}, log)
if err != nil {
log.Errorf("failed to construct parameters: %v", err)
return err
}
var resp *TextInResponse
err = t.client.Post(embUrl, embHeaders, embRequestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
if statusCode != http.StatusOK {
err = errors.New("failed to get embedding due to status code: " + strconv.Itoa(statusCode))
callback(nil, err)
return
}
log.Debugf("get embedding response: %d, %s", statusCode, responseBody)
resp, err = t.parseTextEmbedding(responseBody)
if err != nil {
err = fmt.Errorf("failed to parse response: %v", err)
callback(nil, err)
return
}
if len(resp.Result.Embeddings) == 0 {
err = errors.New("no embedding found in response")
callback(nil, err)
return
}
callback(resp.Result.Embeddings[0], nil)
}, t.config.timeout)
return err
}

View File

@@ -7,17 +7,18 @@ go 1.19
replace github.com/alibaba/higress/plugins/wasm-go => ../..
require (
github.com/alibaba/higress/plugins/wasm-go v1.3.6-0.20240528060522-53bccf89f441
github.com/alibaba/higress/plugins/wasm-go v1.4.2
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f
github.com/tidwall/gjson v1.14.3
github.com/tidwall/gjson v1.17.3
github.com/tidwall/resp v0.1.1
github.com/tidwall/sjson v1.2.5
// github.com/weaviate/weaviate-go-client/v4 v4.15.1
)
require (
github.com/google/uuid v1.3.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520 // indirect
github.com/magefile/mage v1.14.0 // indirect
github.com/stretchr/testify v1.9.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.0 // indirect
)

View File

@@ -1,24 +1,21 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520 h1:IHDghbGQ2DTIXHBHxWfqCYQW1fKjyJ/I7W1pMyUDeEA=
github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520/go.mod h1:Nz8ORLaFiLWotg6GeKlJMhv8cci8mM43uEnLA5t8iew=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240327114451-d6b7174a84fc h1:t2AT8zb6N/59Y78lyRWedVoVWHNRSCBh0oWCC+bluTQ=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240327114451-d6b7174a84fc/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f h1:ZIiIBRvIw62gA5MJhuwp1+2wWbqL9IGElQ499rUsYYg=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/magefile/mage v1.14.0 h1:6QDX3g6z1YvJ4olPhT1wksUcSa/V0a1B+pJb73fBjyo=
github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tidwall/gjson v1.17.3 h1:bwWLZU7icoKRG+C+0PNwIKC6FCJO/Q3p2pZvuP0jN94=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/resp v0.1.1 h1:Ly20wkhqKTmDUPlyM1S7pWo5kk0tDu8OoC/vFArXmwE=
github.com/tidwall/resp v0.1.1/go.mod h1:3/FrruOBAxPTPtundW0VXgmsQ4ZBA0Aw714lVYgwFa0=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

View File

@@ -1,32 +1,33 @@
// File generated by hgctl. Modify as required.
// See: https://higress.io/zh-cn/docs/user/wasm-go#2-%E7%BC%96%E5%86%99-maingo-%E6%96%87%E4%BB%B6
// This file implements the four entry callbacks: OnHttpRequestHeaders, OnHttpRequestBody, OnHttpResponseHeaders and OnHttpResponseBody.
// The caching logic is delegated to cache.go, which in turn calls the configured text-embedding and vector-store provider implementations.
package main
import (
"errors"
"fmt"
"strings"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/config"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/higress-group/proxy-wasm-go-sdk/proxywasm"
"github.com/higress-group/proxy-wasm-go-sdk/proxywasm/types"
"github.com/tidwall/gjson"
"github.com/tidwall/resp"
)
const (
CacheKeyContextKey = "cacheKey"
CacheContentContextKey = "cacheContent"
PartialMessageContextKey = "partialMessage"
ToolCallsContextKey = "toolCalls"
StreamContextKey = "stream"
DefaultCacheKeyPrefix = "higress-ai-cache:"
PLUGIN_NAME = "ai-cache"
CACHE_KEY_CONTEXT_KEY = "cacheKey"
CACHE_KEY_EMBEDDING_KEY = "cacheKeyEmbedding"
CACHE_CONTENT_CONTEXT_KEY = "cacheContent"
PARTIAL_MESSAGE_CONTEXT_KEY = "partialMessage"
TOOL_CALLS_CONTEXT_KEY = "toolCalls"
STREAM_CONTEXT_KEY = "stream"
SKIP_CACHE_HEADER = "x-higress-skip-ai-cache"
ERROR_PARTIAL_MESSAGE_KEY = "errorPartialMessage"
)
func main() {
// CreateClient()
wrapper.SetCtx(
"ai-cache",
PLUGIN_NAME,
wrapper.ParseConfigBy(parseConfig),
wrapper.ProcessRequestHeadersBy(onHttpRequestHeaders),
wrapper.ProcessRequestBodyBy(onHttpRequestBody),
@@ -35,337 +36,152 @@ func main() {
)
}
// @Name ai-cache
// @Category protocol
// @Phase AUTHN
// @Priority 10
// @Title zh-CN AI Cache
// @Description zh-CN 大模型结果缓存
// @IconUrl
// @Version 0.1.0
//
// @Contact.name johnlanni
// @Contact.url
// @Contact.email
//
// @Example
// redis:
// serviceName: my-redis.dns
// timeout: 2000
// cacheKeyFrom:
// requestBody: "messages.@reverse.0.content"
// cacheValueFrom:
// responseBody: "choices.0.message.content"
// cacheStreamValueFrom:
// responseBody: "choices.0.delta.content"
// returnResponseTemplate: |
// {"id":"from-cache","choices":[{"index":0,"message":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
// returnStreamResponseTemplate: |
// data:{"id":"from-cache","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
//
// data:[DONE]
//
// @End
type RedisInfo struct {
// @Title zh-CN redis 服务名称
// @Description zh-CN 带服务类型的完整 FQDN 名称,例如 my-redis.dns、redis.my-ns.svc.cluster.local
ServiceName string `required:"true" yaml:"serviceName" json:"serviceName"`
// @Title zh-CN redis 服务端口
// @Description zh-CN 默认值为6379
ServicePort int `required:"false" yaml:"servicePort" json:"servicePort"`
// @Title zh-CN 用户名
// @Description zh-CN 登陆 redis 的用户名,非必填
Username string `required:"false" yaml:"username" json:"username"`
// @Title zh-CN 密码
// @Description zh-CN 登陆 redis 的密码,非必填,可以只填密码
Password string `required:"false" yaml:"password" json:"password"`
// @Title zh-CN 请求超时
// @Description zh-CN 请求 redis 的超时时间单位为毫秒。默认值是1000即1秒
Timeout int `required:"false" yaml:"timeout" json:"timeout"`
func parseConfig(json gjson.Result, c *config.PluginConfig, log wrapper.Log) error {
// config.EmbeddingProviderConfig.FromJson(json.Get("embeddingProvider"))
// config.VectorDatabaseProviderConfig.FromJson(json.Get("vectorBaseProvider"))
// config.RedisConfig.FromJson(json.Get("redis"))
c.FromJson(json, log)
if err := c.Validate(); err != nil {
return err
}
// Note that initializing the client during the parseConfig phase may cause errors, such as Redis not being usable in Docker Compose.
if err := c.Complete(log); err != nil {
log.Errorf("complete config failed: %v", err)
return err
}
return nil
}
type KVExtractor struct {
// @Title zh-CN 从请求 Body 中基于 [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) 语法提取字符串
RequestBody string `required:"false" yaml:"requestBody" json:"requestBody"`
// @Title zh-CN 从响应 Body 中基于 [GJSON PATH](https://github.com/tidwall/gjson/blob/master/SYNTAX.md) 语法提取字符串
ResponseBody string `required:"false" yaml:"responseBody" json:"responseBody"`
}
type PluginConfig struct {
// @Title zh-CN Redis 地址信息
// @Description zh-CN 用于存储缓存结果的 Redis 地址
RedisInfo RedisInfo `required:"true" yaml:"redis" json:"redis"`
// @Title zh-CN 缓存 key 的来源
// @Description zh-CN 往 redis 里存时,使用的 key 的提取方式
CacheKeyFrom KVExtractor `required:"true" yaml:"cacheKeyFrom" json:"cacheKeyFrom"`
// @Title zh-CN 缓存 value 的来源
// @Description zh-CN 往 redis 里存时,使用的 value 的提取方式
CacheValueFrom KVExtractor `required:"true" yaml:"cacheValueFrom" json:"cacheValueFrom"`
// @Title zh-CN 流式响应下,缓存 value 的来源
// @Description zh-CN 往 redis 里存时,使用的 value 的提取方式
CacheStreamValueFrom KVExtractor `required:"true" yaml:"cacheStreamValueFrom" json:"cacheStreamValueFrom"`
// @Title zh-CN 返回 HTTP 响应的模版
// @Description zh-CN 用 %s 标记需要被 cache value 替换的部分
ReturnResponseTemplate string `required:"true" yaml:"returnResponseTemplate" json:"returnResponseTemplate"`
// @Title zh-CN 返回流式 HTTP 响应的模版
// @Description zh-CN 用 %s 标记需要被 cache value 替换的部分
ReturnStreamResponseTemplate string `required:"true" yaml:"returnStreamResponseTemplate" json:"returnStreamResponseTemplate"`
// @Title zh-CN 缓存的过期时间
// @Description zh-CN 单位是秒默认值为0即永不过期
CacheTTL int `required:"false" yaml:"cacheTTL" json:"cacheTTL"`
// @Title zh-CN Redis缓存Key的前缀
// @Description zh-CN 默认值是"higress-ai-cache:"
CacheKeyPrefix string `required:"false" yaml:"cacheKeyPrefix" json:"cacheKeyPrefix"`
redisClient wrapper.RedisClient `yaml:"-" json:"-"`
}
func parseConfig(json gjson.Result, c *PluginConfig, log wrapper.Log) error {
c.RedisInfo.ServiceName = json.Get("redis.serviceName").String()
if c.RedisInfo.ServiceName == "" {
return errors.New("redis service name must not be empty")
func onHttpRequestHeaders(ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log) types.Action {
skipCache, _ := proxywasm.GetHttpRequestHeader(SKIP_CACHE_HEADER)
if skipCache == "on" {
ctx.SetContext(SKIP_CACHE_HEADER, struct{}{})
ctx.DontReadRequestBody()
return types.ActionContinue
}
c.RedisInfo.ServicePort = int(json.Get("redis.servicePort").Int())
if c.RedisInfo.ServicePort == 0 {
if strings.HasSuffix(c.RedisInfo.ServiceName, ".static") {
// use the default logical port, which is 80, for static services
c.RedisInfo.ServicePort = 80
} else {
c.RedisInfo.ServicePort = 6379
}
}
c.RedisInfo.Username = json.Get("redis.username").String()
c.RedisInfo.Password = json.Get("redis.password").String()
c.RedisInfo.Timeout = int(json.Get("redis.timeout").Int())
if c.RedisInfo.Timeout == 0 {
c.RedisInfo.Timeout = 1000
}
c.CacheKeyFrom.RequestBody = json.Get("cacheKeyFrom.requestBody").String()
if c.CacheKeyFrom.RequestBody == "" {
c.CacheKeyFrom.RequestBody = "messages.@reverse.0.content"
}
c.CacheValueFrom.ResponseBody = json.Get("cacheValueFrom.responseBody").String()
if c.CacheValueFrom.ResponseBody == "" {
c.CacheValueFrom.ResponseBody = "choices.0.message.content"
}
c.CacheStreamValueFrom.ResponseBody = json.Get("cacheStreamValueFrom.responseBody").String()
if c.CacheStreamValueFrom.ResponseBody == "" {
c.CacheStreamValueFrom.ResponseBody = "choices.0.delta.content"
}
c.ReturnResponseTemplate = json.Get("returnResponseTemplate").String()
if c.ReturnResponseTemplate == "" {
c.ReturnResponseTemplate = `{"id":"from-cache","choices":[{"index":0,"message":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}`
}
c.ReturnStreamResponseTemplate = json.Get("returnStreamResponseTemplate").String()
if c.ReturnStreamResponseTemplate == "" {
c.ReturnStreamResponseTemplate = `data:{"id":"from-cache","choices":[{"index":0,"delta":{"role":"assistant","content":"%s"},"finish_reason":"stop"}],"model":"gpt-4o","object":"chat.completion","usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}` + "\n\ndata:[DONE]\n\n"
}
c.CacheKeyPrefix = json.Get("cacheKeyPrefix").String()
if c.CacheKeyPrefix == "" {
c.CacheKeyPrefix = DefaultCacheKeyPrefix
}
c.redisClient = wrapper.NewRedisClusterClient(wrapper.FQDNCluster{
FQDN: c.RedisInfo.ServiceName,
Port: int64(c.RedisInfo.ServicePort),
})
return c.redisClient.Init(c.RedisInfo.Username, c.RedisInfo.Password, int64(c.RedisInfo.Timeout))
}
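The port-defaulting rule in parseConfig (services whose name ends in `.static` fall back to the gateway's logical port 80, everything else to Redis's 6379, and an explicitly configured port always wins) can be sketched as a small standalone helper; `choosePort` is an illustrative name, not part of the plugin:

```go
package main

import (
	"fmt"
	"strings"
)

// choosePort mirrors the defaulting logic in parseConfig: an explicitly
// configured port wins; otherwise ".static" services fall back to the
// gateway's logical port 80, and everything else to Redis's default 6379.
func choosePort(serviceName string, configured int) int {
	if configured != 0 {
		return configured
	}
	if strings.HasSuffix(serviceName, ".static") {
		return 80
	}
	return 6379
}

func main() {
	fmt.Println(choosePort("my-redis.static", 0)) // static service default
	fmt.Println(choosePort("my-redis.dns", 0))    // regular service default
	fmt.Println(choosePort("my-redis.dns", 7000)) // explicit port wins
}
```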
func onHttpRequestHeaders(ctx wrapper.HttpContext, config PluginConfig, log wrapper.Log) types.Action {
contentType, _ := proxywasm.GetHttpRequestHeader("content-type")
// The request does not have a body.
if contentType == "" {
return types.ActionContinue
}
if !strings.Contains(contentType, "application/json") {
log.Warnf("content is not json, can't process: %s", contentType)
ctx.DontReadRequestBody()
return types.ActionContinue
}
_ = proxywasm.RemoveHttpRequestHeader("Accept-Encoding")
// The request has a body and requires delaying the header transmission until a cache miss occurs,
// at which point the header should be sent.
return types.HeaderStopIteration
}
func TrimQuote(source string) string {
return strings.Trim(source, `"`)
}
func onHttpRequestBody(ctx wrapper.HttpContext, c config.PluginConfig, body []byte, log wrapper.Log) types.Action {
bodyJson := gjson.ParseBytes(body)
// TODO: It may be necessary to support stream mode determination for different LLM providers.
stream := false
if bodyJson.Get("stream").Bool() {
stream = true
ctx.SetContext(STREAM_CONTEXT_KEY, struct{}{})
} else if ctx.GetContext(STREAM_CONTEXT_KEY) != nil {
stream = true
}
var key string
if c.CacheKeyStrategy == config.CACHE_KEY_STRATEGY_LAST_QUESTION {
log.Debugf("[onHttpRequestBody] cache key strategy is last question, cache key from: %s", c.CacheKeyFrom)
key = bodyJson.Get(c.CacheKeyFrom).String()
} else if c.CacheKeyStrategy == config.CACHE_KEY_STRATEGY_ALL_QUESTIONS {
log.Debugf("[onHttpRequestBody] cache key strategy is all questions, cache key from: messages")
messages := bodyJson.Get("messages").Array()
var userMessages []string
for _, msg := range messages {
if msg.Get("role").String() == "user" {
userMessages = append(userMessages, msg.Get("content").String())
}
}
key = strings.Join(userMessages, "\n")
} else if c.CacheKeyStrategy == config.CACHE_KEY_STRATEGY_DISABLED {
log.Info("[onHttpRequestBody] cache key strategy is disabled")
ctx.DontReadRequestBody()
return types.ActionContinue
} else {
log.Warnf("[onHttpRequestBody] unknown cache key strategy: %s", c.CacheKeyStrategy)
ctx.DontReadRequestBody()
return types.ActionContinue
}
ctx.SetContext(CACHE_KEY_CONTEXT_KEY, key)
log.Debugf("[onHttpRequestBody] key: %s", key)
if key == "" {
log.Debug("[onHttpRequestBody] parse key from request body failed")
ctx.DontReadResponseBody()
return types.ActionContinue
}
ctx.SetContext(CacheKeyContextKey, key)
err := config.redisClient.Get(config.CacheKeyPrefix+key, func(response resp.Value) {
if err := response.Error(); err != nil {
log.Errorf("redis get key:%s failed, err:%v", key, err)
proxywasm.ResumeHttpRequest()
return
}
if response.IsNull() {
log.Debugf("cache miss, key:%s", key)
proxywasm.ResumeHttpRequest()
return
}
log.Debugf("cache hit, key:%s", key)
ctx.SetContext(CacheKeyContextKey, nil)
if !stream {
proxywasm.SendHttpResponseWithDetail(200, "ai-cache.hit", [][2]string{{"content-type", "application/json; charset=utf-8"}}, []byte(fmt.Sprintf(config.ReturnResponseTemplate, response.String())), -1)
} else {
proxywasm.SendHttpResponseWithDetail(200, "ai-cache.hit", [][2]string{{"content-type", "text/event-stream; charset=utf-8"}}, []byte(fmt.Sprintf(config.ReturnStreamResponseTemplate, response.String())), -1)
}
})
if err := CheckCacheForKey(key, ctx, c, log, stream, true); err != nil {
log.Errorf("[onHttpRequestBody] check cache for key: %s failed, error: %v", key, err)
return types.ActionContinue
}
return types.ActionPause
}
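The two cache-key strategies above (key on the last user question, or on all user questions joined with newlines so follow-ups get distinct entries) can be sketched with the stdlib JSON decoder standing in for gjson; the request shape follows the OpenAI chat format the plugin assumes, and `buildCacheKey` is an illustrative helper, not the plugin's API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Messages []chatMessage `json:"messages"`
}

// buildCacheKey sketches the two strategies: "lastQuestion" keys the cache
// on the most recent user message, "allQuestions" on every user message
// joined with newlines.
func buildCacheKey(body []byte, strategy string) string {
	var req chatRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return ""
	}
	var userMessages []string
	for _, m := range req.Messages {
		if m.Role == "user" {
			userMessages = append(userMessages, m.Content)
		}
	}
	if len(userMessages) == 0 {
		return ""
	}
	if strategy == "lastQuestion" {
		return userMessages[len(userMessages)-1]
	}
	return strings.Join(userMessages, "\n")
}

func main() {
	body := []byte(`{"messages":[
		{"role":"user","content":"hi"},
		{"role":"assistant","content":"hello"},
		{"role":"user","content":"what is higress?"}]}`)
	fmt.Println(buildCacheKey(body, "lastQuestion")) // last user question only
	fmt.Println(buildCacheKey(body, "allQuestions")) // all user questions joined
}
```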
func processSSEMessage(ctx wrapper.HttpContext, config PluginConfig, sseMessage string, log wrapper.Log) string {
subMessages := strings.Split(sseMessage, "\n")
var message string
for _, msg := range subMessages {
if strings.HasPrefix(msg, "data:") {
message = msg
break
}
func onHttpResponseHeaders(ctx wrapper.HttpContext, c config.PluginConfig, log wrapper.Log) types.Action {
skipCache := ctx.GetContext(SKIP_CACHE_HEADER)
if skipCache != nil {
ctx.DontReadResponseBody()
return types.ActionContinue
}
if len(message) < 6 {
log.Errorf("invalid message:%s", message)
return ""
}
// skip the prefix "data:"
bodyJson := message[5:]
if gjson.Get(bodyJson, config.CacheStreamValueFrom.ResponseBody).Exists() {
tempContentI := ctx.GetContext(CacheContentContextKey)
if tempContentI == nil {
content := TrimQuote(gjson.Get(bodyJson, config.CacheStreamValueFrom.ResponseBody).Raw)
ctx.SetContext(CacheContentContextKey, content)
return content
}
append := TrimQuote(gjson.Get(bodyJson, config.CacheStreamValueFrom.ResponseBody).Raw)
content := tempContentI.(string) + append
ctx.SetContext(CacheContentContextKey, content)
return content
} else if gjson.Get(bodyJson, "choices.0.delta.content.tool_calls").Exists() {
// TODO: compatible with other providers
ctx.SetContext(ToolCallsContextKey, struct{}{})
return ""
}
log.Debugf("unknown message:%s", bodyJson)
return ""
}
func onHttpResponseHeaders(ctx wrapper.HttpContext, config PluginConfig, log wrapper.Log) types.Action {
contentType, _ := proxywasm.GetHttpResponseHeader("content-type")
if strings.Contains(contentType, "text/event-stream") {
ctx.SetContext(STREAM_CONTEXT_KEY, struct{}{})
}
if ctx.GetContext(ERROR_PARTIAL_MESSAGE_KEY) != nil {
ctx.DontReadResponseBody()
return types.ActionContinue
}
return types.ActionContinue
}
func onHttpResponseBody(ctx wrapper.HttpContext, config PluginConfig, chunk []byte, isLastChunk bool, log wrapper.Log) []byte {
if ctx.GetContext(ToolCallsContextKey) != nil {
// we should not cache tool call result
return chunk
}
keyI := ctx.GetContext(CacheKeyContextKey)
if keyI == nil {
return chunk
}
if !isLastChunk {
stream := ctx.GetContext(StreamContextKey)
if stream == nil {
tempContentI := ctx.GetContext(CacheContentContextKey)
if tempContentI == nil {
ctx.SetContext(CacheContentContextKey, chunk)
return chunk
}
tempContent := tempContentI.([]byte)
tempContent = append(tempContent, chunk...)
ctx.SetContext(CacheContentContextKey, tempContent)
} else {
var partialMessage []byte
partialMessageI := ctx.GetContext(PartialMessageContextKey)
if partialMessageI != nil {
partialMessage = append(partialMessageI.([]byte), chunk...)
} else {
partialMessage = chunk
}
messages := strings.Split(string(partialMessage), "\n\n")
for i, msg := range messages {
if i < len(messages)-1 {
// process complete message
processSSEMessage(ctx, config, msg, log)
}
}
if !strings.HasSuffix(string(partialMessage), "\n\n") {
ctx.SetContext(PartialMessageContextKey, []byte(messages[len(messages)-1]))
} else {
ctx.SetContext(PartialMessageContextKey, nil)
}
}
return chunk
}
// last chunk
key := keyI.(string)
stream := ctx.GetContext(StreamContextKey)
var value string
if stream == nil {
var body []byte
tempContentI := ctx.GetContext(CacheContentContextKey)
if tempContentI != nil {
body = append(tempContentI.([]byte), chunk...)
} else {
body = chunk
}
bodyJson := gjson.ParseBytes(body)
func onHttpResponseBody(ctx wrapper.HttpContext, c config.PluginConfig, chunk []byte, isLastChunk bool, log wrapper.Log) []byte {
log.Debugf("[onHttpResponseBody] is last chunk: %v", isLastChunk)
log.Debugf("[onHttpResponseBody] chunk: %s", string(chunk))
value = TrimQuote(bodyJson.Get(config.CacheValueFrom.ResponseBody).Raw)
if value == "" {
log.Warnf("parse value from response body failed, body: %s", body)
return chunk
if ctx.GetContext(TOOL_CALLS_CONTEXT_KEY) != nil {
return chunk
}
key := ctx.GetContext(CACHE_KEY_CONTEXT_KEY)
if key == nil {
log.Debug("[onHttpResponseBody] key is nil, skip cache")
return chunk
}
if !isLastChunk {
if err := handleNonLastChunk(ctx, c, chunk, log); err != nil {
log.Errorf("[onHttpResponseBody] handle non last chunk failed, error: %v", err)
// Set an empty struct in the context to indicate an error in processing the partial message
ctx.SetContext(ERROR_PARTIAL_MESSAGE_KEY, struct{}{})
}
return chunk
}
stream := ctx.GetContext(STREAM_CONTEXT_KEY)
var value string
var err error
if stream == nil {
value, err = processNonStreamLastChunk(ctx, c, chunk, log)
} else {
if len(chunk) > 0 {
var lastMessage []byte
partialMessageI := ctx.GetContext(PartialMessageContextKey)
if partialMessageI != nil {
lastMessage = append(partialMessageI.([]byte), chunk...)
} else {
lastMessage = chunk
}
if !strings.HasSuffix(string(lastMessage), "\n\n") {
log.Warnf("invalid lastMessage:%s", lastMessage)
return chunk
}
// remove the last \n\n
lastMessage = lastMessage[:len(lastMessage)-2]
value = processSSEMessage(ctx, config, string(lastMessage), log)
} else {
tempContentI := ctx.GetContext(CacheContentContextKey)
if tempContentI == nil {
return chunk
}
value = tempContentI.(string)
}
value, err = processStreamLastChunk(ctx, c, chunk, log)
}
config.redisClient.Set(config.CacheKeyPrefix+key, value, nil)
if config.CacheTTL != 0 {
config.redisClient.Expire(config.CacheKeyPrefix+key, config.CacheTTL, nil)
if err != nil {
log.Errorf("[onHttpResponseBody] process last chunk failed, error: %v", err)
return chunk
}
cacheResponse(ctx, c, key.(string), value, log)
uploadEmbeddingAndAnswer(ctx, c, key.(string), value, log)
return chunk
}

View File

@@ -0,0 +1,155 @@
package main
import (
"fmt"
"strings"
"github.com/alibaba/higress/plugins/wasm-go/extensions/ai-cache/config"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
func handleNonLastChunk(ctx wrapper.HttpContext, c config.PluginConfig, chunk []byte, log wrapper.Log) error {
stream := ctx.GetContext(STREAM_CONTEXT_KEY)
var err error
if stream == nil {
err = handleNonStreamChunk(ctx, c, chunk, log)
} else {
err = handleStreamChunk(ctx, c, chunk, log)
}
return err
}
func handleNonStreamChunk(ctx wrapper.HttpContext, c config.PluginConfig, chunk []byte, log wrapper.Log) error {
tempContentI := ctx.GetContext(CACHE_CONTENT_CONTEXT_KEY)
if tempContentI == nil {
ctx.SetContext(CACHE_CONTENT_CONTEXT_KEY, chunk)
return nil
}
tempContent := tempContentI.([]byte)
tempContent = append(tempContent, chunk...)
ctx.SetContext(CACHE_CONTENT_CONTEXT_KEY, tempContent)
return nil
}
func handleStreamChunk(ctx wrapper.HttpContext, c config.PluginConfig, chunk []byte, log wrapper.Log) error {
var partialMessage []byte
partialMessageI := ctx.GetContext(PARTIAL_MESSAGE_CONTEXT_KEY)
log.Debugf("[handleStreamChunk] cache content: %v", ctx.GetContext(CACHE_CONTENT_CONTEXT_KEY))
if partialMessageI != nil {
partialMessage = append(partialMessageI.([]byte), chunk...)
} else {
partialMessage = chunk
}
messages := strings.Split(string(partialMessage), "\n\n")
for i, msg := range messages {
if i < len(messages)-1 {
_, err := processSSEMessage(ctx, c, msg, log)
if err != nil {
return fmt.Errorf("[handleStreamChunk] processSSEMessage failed, error: %v", err)
}
}
}
if !strings.HasSuffix(string(partialMessage), "\n\n") {
ctx.SetContext(PARTIAL_MESSAGE_CONTEXT_KEY, []byte(messages[len(messages)-1]))
} else {
ctx.SetContext(PARTIAL_MESSAGE_CONTEXT_KEY, nil)
}
return nil
}
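The buffering done by handleStreamChunk — prepend the leftover bytes of the previous chunk, split on the SSE event delimiter `\n\n`, process complete events, and stash the unterminated tail — can be sketched as a pure-string helper (`splitSSEChunk` is a local name for this sketch, not the plugin's API):

```go
package main

import (
	"fmt"
	"strings"
)

// splitSSEChunk mimics handleStreamChunk's buffering: prepend any leftover
// bytes from the previous chunk, split on the SSE event delimiter "\n\n",
// and return the complete events plus the new leftover tail (empty when the
// chunk ended exactly on a delimiter).
func splitSSEChunk(leftover, chunk string) (complete []string, tail string) {
	data := leftover + chunk
	parts := strings.Split(data, "\n\n")
	if strings.HasSuffix(data, "\n\n") {
		// every part is complete; the split leaves a trailing "" entry
		return parts[:len(parts)-1], ""
	}
	return parts[:len(parts)-1], parts[len(parts)-1]
}

func main() {
	events, tail := splitSSEChunk("", "data:a\n\ndata:b\n\ndata:par")
	fmt.Println(events, tail) // two complete events, partial tail buffered
	events, tail = splitSSEChunk(tail, "tial\n\n")
	fmt.Println(events, tail) // tail reassembled into a complete event
}
```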
func processNonStreamLastChunk(ctx wrapper.HttpContext, c config.PluginConfig, chunk []byte, log wrapper.Log) (string, error) {
var body []byte
tempContentI := ctx.GetContext(CACHE_CONTENT_CONTEXT_KEY)
if tempContentI != nil {
body = append(tempContentI.([]byte), chunk...)
} else {
body = chunk
}
bodyJson := gjson.ParseBytes(body)
value := bodyJson.Get(c.CacheValueFrom).String()
if strings.TrimSpace(value) == "" {
return "", fmt.Errorf("[processNonStreamLastChunk] parse value from response body failed, body:%s", body)
}
return value, nil
}
func processStreamLastChunk(ctx wrapper.HttpContext, c config.PluginConfig, chunk []byte, log wrapper.Log) (string, error) {
if len(chunk) > 0 {
var lastMessage []byte
partialMessageI := ctx.GetContext(PARTIAL_MESSAGE_CONTEXT_KEY)
if partialMessageI != nil {
lastMessage = append(partialMessageI.([]byte), chunk...)
} else {
lastMessage = chunk
}
if !strings.HasSuffix(string(lastMessage), "\n\n") {
return "", fmt.Errorf("[processStreamLastChunk] invalid lastMessage:%s", lastMessage)
}
lastMessage = lastMessage[:len(lastMessage)-2]
value, err := processSSEMessage(ctx, c, string(lastMessage), log)
if err != nil {
return "", fmt.Errorf("[processStreamLastChunk] processSSEMessage failed, error: %v", err)
}
return value, nil
}
tempContentI := ctx.GetContext(CACHE_CONTENT_CONTEXT_KEY)
if tempContentI == nil {
return "", nil
}
return tempContentI.(string), nil
}
func processSSEMessage(ctx wrapper.HttpContext, c config.PluginConfig, sseMessage string, log wrapper.Log) (string, error) {
subMessages := strings.Split(sseMessage, "\n")
var message string
for _, msg := range subMessages {
if strings.HasPrefix(msg, "data:") {
message = msg
break
}
}
if len(message) < 6 {
return "", fmt.Errorf("[processSSEMessage] invalid message: %s", message)
}
// skip the prefix "data:"
bodyJson := message[5:]
if strings.TrimSpace(bodyJson) == "[DONE]" {
return "", nil
}
// Extract values from JSON fields
responseBody := gjson.Get(bodyJson, c.CacheStreamValueFrom)
toolCalls := gjson.Get(bodyJson, c.CacheToolCallsFrom)
if toolCalls.Exists() {
// TODO: Temporarily store the tool_calls value in the context for processing
ctx.SetContext(TOOL_CALLS_CONTEXT_KEY, toolCalls.String())
}
// Check if the ResponseBody field exists
if !responseBody.Exists() {
if ctx.GetContext(CACHE_CONTENT_CONTEXT_KEY) != nil {
log.Debugf("[processSSEMessage] unable to extract content from message; cache content is not nil: %s", message)
return "", nil
}
return "", fmt.Errorf("[processSSEMessage] unable to extract content from message; cache content is nil: %s", message)
} else {
tempContentI := ctx.GetContext(CACHE_CONTENT_CONTEXT_KEY)
// If there is no content in the cache, initialize and set the content
if tempContentI == nil {
content := responseBody.String()
ctx.SetContext(CACHE_CONTENT_CONTEXT_KEY, content)
return content, nil
}
// Update the content in the cache
appendMsg := responseBody.String()
content := tempContentI.(string) + appendMsg
ctx.SetContext(CACHE_CONTENT_CONTEXT_KEY, content)
return content, nil
}
}
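The per-event parsing in processSSEMessage — find the `data:` line, strip the prefix, skip the `[DONE]` sentinel, and extract `choices.0.delta.content` — can be sketched with the stdlib decoder standing in for gjson; `extractDelta` and its types are local to this sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type sseDelta struct {
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

// extractDelta sketches processSSEMessage: find the "data:" line, strip the
// prefix, skip the [DONE] sentinel, then pull choices.0.delta.content out of
// the JSON payload.
func extractDelta(sseMessage string) string {
	for _, line := range strings.Split(sseMessage, "\n") {
		if !strings.HasPrefix(line, "data:") {
			continue
		}
		payload := strings.TrimSpace(line[len("data:"):])
		if payload == "[DONE]" {
			return ""
		}
		var msg sseDelta
		if err := json.Unmarshal([]byte(payload), &msg); err != nil || len(msg.Choices) == 0 {
			return ""
		}
		return msg.Choices[0].Delta.Content
	}
	return ""
}

func main() {
	events := []string{
		`data:{"choices":[{"delta":{"content":"Hel"}}]}`,
		`data:{"choices":[{"delta":{"content":"lo"}}]}`,
		`data:[DONE]`,
	}
	var full strings.Builder
	for _, ev := range events {
		full.WriteString(extractDelta(ev)) // accumulate the streamed answer
	}
	fmt.Println(full.String()) // Hello
}
```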

View File

@@ -0,0 +1,201 @@
package vector
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
)
type chromaProviderInitializer struct{}
func (c *chromaProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.collectionID) == 0 {
return errors.New("[Chroma] collectionID is required")
}
if len(config.serviceName) == 0 {
return errors.New("[Chroma] serviceName is required")
}
return nil
}
func (c *chromaProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &ChromaProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type ChromaProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (c *ChromaProvider) GetProviderType() string {
return PROVIDER_TYPE_CHROMA
}
func (d *ChromaProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required inputs are the collection_id (in the path) and query_embeddings.
// Example:
// {
// "where": {}, // optional metadata filter
// "where_document": {}, // optional document filter
// "query_embeddings": [
// [1.1, 2.3, 3.2]
// ],
// "limit": 5,
// "include": [
// "metadatas", // optional
// "documents", // required if the answer is needed
// "distances"
// ]
// }
requestBody, err := json.Marshal(chromaQueryRequest{
QueryEmbeddings: []chromaEmbedding{emb},
Limit: d.config.topK,
Include: []string{"distances", "documents"},
})
if err != nil {
log.Errorf("[Chroma] Failed to marshal query embedding request body: %v", err)
return err
}
return d.client.Post(
fmt.Sprintf("/api/v1/collections/%s/query", d.config.collectionID),
[][2]string{
{"Content-Type", "application/json"},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Chroma] Query embedding response: %d, %s", statusCode, responseBody)
results, err := d.parseQueryResponse(responseBody, log)
if err != nil {
err = fmt.Errorf("[Chroma] Failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout,
)
}
func (d *ChromaProvider) UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
queryAnswer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are the collection_id (in the path), embeddings and ids.
// Example:
// {
// "embeddings": [
// [1.1, 2.3, 3.2]
// ],
// "ids": [
// "Have you eaten?"
// ],
// "documents": [
// "I have."
// ]
// }
// To attach an answer, follow this example:
// {
// "embeddings": [
// [1.1, 2.3, 3.2]
// ],
// "documents": [
// "answer1"
// ],
// "ids": [
// "id1"
// ]
// }
requestBody, err := json.Marshal(chromaInsertRequest{
Embeddings: []chromaEmbedding{queryEmb},
IDs: []string{queryString}, // queryString is the user's question
Documents: []string{queryAnswer}, // queryAnswer is the answer to the user's question
})
if err != nil {
log.Errorf("[Chroma] Failed to marshal upload embedding request body: %v", err)
return err
}
err = d.client.Post(
fmt.Sprintf("/api/v1/collections/%s/add", d.config.collectionID),
[][2]string{
{"Content-Type", "application/json"},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Chroma] statusCode:%d, responseBody:%s", statusCode, string(responseBody))
callback(ctx, log, err)
},
d.config.timeout,
)
return err
}
type chromaEmbedding []float64
type chromaMetadataMap map[string]string
type chromaInsertRequest struct {
Embeddings []chromaEmbedding `json:"embeddings"`
Metadatas []chromaMetadataMap `json:"metadatas,omitempty"` // optional
Documents []string `json:"documents,omitempty"` // optional
IDs []string `json:"ids"`
}
type chromaQueryRequest struct {
Where map[string]string `json:"where,omitempty"` // optional
WhereDocument map[string]string `json:"where_document,omitempty"` // optional
QueryEmbeddings []chromaEmbedding `json:"query_embeddings"`
Limit int `json:"limit"`
Include []string `json:"include"`
}
type chromaQueryResponse struct {
Ids [][]string `json:"ids"` // first dimension: batch queries; second: the ids returned for each query
Distances [][]float64 `json:"distances,omitempty"` // aligned one-to-one with Ids
Metadatas []chromaMetadataMap `json:"metadatas,omitempty"` // optional
Embeddings []chromaEmbedding `json:"embeddings,omitempty"` // optional
Documents [][]string `json:"documents,omitempty"` // aligned one-to-one with Ids
Uris []string `json:"uris,omitempty"` // optional
Data []interface{} `json:"data,omitempty"` // optional
Included []string `json:"included"`
}
func (d *ChromaProvider) parseQueryResponse(responseBody []byte, log wrapper.Log) ([]QueryResult, error) {
var queryResp chromaQueryResponse
err := json.Unmarshal(responseBody, &queryResp)
if err != nil {
return nil, err
}
log.Debugf("[Chroma] queryResp Ids len: %d", len(queryResp.Ids))
if len(queryResp.Ids) == 0 || len(queryResp.Ids[0]) == 0 {
return nil, errors.New("no query results found in response")
}
results := make([]QueryResult, 0, len(queryResp.Ids[0]))
for i := range queryResp.Ids[0] {
result := QueryResult{
Text: queryResp.Ids[0][i],
Score: queryResp.Distances[0][i],
Answer: queryResp.Documents[0][i],
}
results = append(results, result)
}
return results, nil
}
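Chroma returns parallel batch-major arrays, so result i of a single query lives at Ids[0][i], Distances[0][i] and Documents[0][i]. A stdlib-only sketch of this parsing (the struct and function names here are local to the sketch, not the plugin's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chromaResp holds the parallel batch-major arrays Chroma returns:
// the first index is the query in the batch, the second the hit.
type chromaResp struct {
	Ids       [][]string  `json:"ids"`
	Distances [][]float64 `json:"distances"`
	Documents [][]string  `json:"documents"`
}

type hit struct {
	Text   string
	Score  float64
	Answer string
}

// parseChroma mirrors parseQueryResponse for a single-query batch.
func parseChroma(body []byte) ([]hit, error) {
	var r chromaResp
	if err := json.Unmarshal(body, &r); err != nil {
		return nil, err
	}
	if len(r.Ids) == 0 || len(r.Ids[0]) == 0 {
		return nil, fmt.Errorf("no query results found in response")
	}
	hits := make([]hit, 0, len(r.Ids[0]))
	for i := range r.Ids[0] {
		hits = append(hits, hit{
			Text:   r.Ids[0][i],
			Score:  r.Distances[0][i],
			Answer: r.Documents[0][i],
		})
	}
	return hits, nil
}

func main() {
	body := []byte(`{"ids":[["q1","q2"]],"distances":[[0.1,0.4]],"documents":[["a1","a2"]]}`)
	hits, _ := parseChroma(body)
	fmt.Println(hits[0].Text, hits[0].Score, hits[0].Answer) // q1 0.1 a1
	fmt.Println(len(hits))                                   // 2
}
```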

View File

@@ -0,0 +1,256 @@
package vector
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
)
type dashVectorProviderInitializer struct {
}
func (d *dashVectorProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.apiKey) == 0 {
return errors.New("[DashVector] apiKey is required")
}
if len(config.collectionID) == 0 {
return errors.New("[DashVector] collectionID is required")
}
if len(config.serviceName) == 0 {
return errors.New("[DashVector] serviceName is required")
}
if len(config.serviceHost) == 0 {
return errors.New("[DashVector] serviceHost is required")
}
return nil
}
func (d *dashVectorProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &DvProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type DvProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (d *DvProvider) GetProviderType() string {
return PROVIDER_TYPE_DASH_VECTOR
}
// type embeddingRequest struct {
// Model string `json:"model"`
// Input input `json:"input"`
// Parameters params `json:"parameters"`
// }
// type params struct {
// TextType string `json:"text_type"`
// }
// type input struct {
// Texts []string `json:"texts"`
// }
// queryResponse defines the structure of a query response
type queryResponse struct {
Code int `json:"code"`
RequestID string `json:"request_id"`
Message string `json:"message"`
Output []result `json:"output"`
}
// queryRequest defines the structure of a query request
type queryRequest struct {
Vector []float64 `json:"vector"`
TopK int `json:"topk"`
IncludeVector bool `json:"include_vector"`
}
// result defines the structure of a single query result
type result struct {
ID string `json:"id"`
Vector []float64 `json:"vector,omitempty"` // omitempty: skip serialization when the vector is empty
Fields map[string]interface{} `json:"fields"`
Score float64 `json:"score"`
}
func (d *DvProvider) constructEmbeddingQueryParameters(vector []float64) (string, []byte, [][2]string, error) {
url := fmt.Sprintf("/v1/collections/%s/query", d.config.collectionID)
requestData := queryRequest{
Vector: vector,
TopK: d.config.topK,
IncludeVector: false,
}
requestBody, err := json.Marshal(requestData)
if err != nil {
return "", nil, nil, err
}
header := [][2]string{
{"Content-Type", "application/json"},
{"dashvector-auth-token", d.config.apiKey},
}
return url, requestBody, header, nil
}
func (d *DvProvider) parseQueryResponse(responseBody []byte) (queryResponse, error) {
var queryResp queryResponse
err := json.Unmarshal(responseBody, &queryResp)
if err != nil {
return queryResponse{}, err
}
return queryResp, nil
}
func (d *DvProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
url, body, headers, err := d.constructEmbeddingQueryParameters(emb)
log.Debugf("url:%s, body:%s, headers:%v", url, string(body), headers)
if err != nil {
err = fmt.Errorf("failed to construct embedding query parameters: %v", err)
return err
}
err = d.client.Post(url, headers, body,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
err = nil
if statusCode != http.StatusOK {
err = fmt.Errorf("failed to query embedding: %d", statusCode)
callback(nil, ctx, log, err)
return
}
log.Debugf("query embedding response: %d, %s", statusCode, responseBody)
results, err := d.ParseQueryResponse(responseBody, ctx, log)
if err != nil {
err = fmt.Errorf("failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout)
if err != nil {
err = fmt.Errorf("failed to query embedding: %v", err)
}
return err
}
func getStringValue(fields map[string]interface{}, key string) string {
if val, ok := fields[key]; ok {
if s, ok := val.(string); ok {
return s
}
}
return ""
}
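Since DashVector fields are decoded into map[string]interface{}, a bare `val.(string)` assertion panics on a non-string value (e.g. a numeric field); the comma-ok form degrades to "" instead. A small sketch of that behavior (`safeString` is a local name for this sketch):

```go
package main

import "fmt"

// safeString mirrors getStringValue with a comma-ok assertion: fields decoded
// from JSON are map[string]interface{}, so a non-string value (e.g. a number)
// must not be asserted blindly or the caller would panic.
func safeString(fields map[string]interface{}, key string) string {
	if v, ok := fields[key]; ok {
		if s, ok := v.(string); ok {
			return s
		}
	}
	return ""
}

func main() {
	fields := map[string]interface{}{"query": "hello", "score": 0.9}
	fmt.Println(safeString(fields, "query"))   // the string value
	fmt.Println(safeString(fields, "score"))   // empty: value is not a string
	fmt.Println(safeString(fields, "missing")) // empty: key is absent
}
```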
func (d *DvProvider) ParseQueryResponse(responseBody []byte, ctx wrapper.HttpContext, log wrapper.Log) ([]QueryResult, error) {
resp, err := d.parseQueryResponse(responseBody)
if err != nil {
return nil, err
}
if len(resp.Output) == 0 {
return nil, errors.New("no query results found in response")
}
results := make([]QueryResult, 0, len(resp.Output))
for _, output := range resp.Output {
result := QueryResult{
Text: getStringValue(output.Fields, "query"),
Embedding: output.Vector,
Score: output.Score,
Answer: getStringValue(output.Fields, "answer"),
}
results = append(results, result)
}
return results, nil
}
type document struct {
Vector []float64 `json:"vector"`
Fields map[string]string `json:"fields"`
}
type insertRequest struct {
Docs []document `json:"docs"`
}
func (d *DvProvider) constructUploadParameters(emb []float64, queryString string, answer string) (string, []byte, [][2]string, error) {
url := "/v1/collections/" + d.config.collectionID + "/docs"
doc := document{
Vector: emb,
Fields: map[string]string{
"query": queryString,
"answer": answer,
},
}
requestBody, err := json.Marshal(insertRequest{Docs: []document{doc}})
if err != nil {
return "", nil, nil, err
}
header := [][2]string{
{"Content-Type", "application/json"},
{"dashvector-auth-token", d.config.apiKey},
}
return url, requestBody, header, err
}
func (d *DvProvider) UploadEmbedding(queryString string, queryEmb []float64, ctx wrapper.HttpContext, log wrapper.Log, callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
url, body, headers, err := d.constructUploadParameters(queryEmb, queryString, "")
if err != nil {
return err
}
err = d.client.Post(
url,
headers,
body,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("statusCode:%d, responseBody:%s", statusCode, string(responseBody))
if statusCode != http.StatusOK {
err = fmt.Errorf("failed to upload embedding: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout)
return err
}
func (d *DvProvider) UploadAnswerAndEmbedding(queryString string, queryEmb []float64, queryAnswer string, ctx wrapper.HttpContext, log wrapper.Log, callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
url, body, headers, err := d.constructUploadParameters(queryEmb, queryString, queryAnswer)
if err != nil {
return err
}
err = d.client.Post(
url,
headers,
body,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("statusCode:%d, responseBody:%s", statusCode, string(responseBody))
if statusCode != http.StatusOK {
err = fmt.Errorf("failed to upload embedding: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout)
return err
}
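The DashVector upload path boils down to marshalling a `docs` array whose entries carry the embedding plus `query`/`answer` string fields. A standalone sketch of that body construction (struct names mirror the provider's; the `buildInsertBody` helper is illustrative, not plugin code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the document/insertRequest shapes used by the DashVector provider.
type document struct {
	Vector []float64         `json:"vector"`
	Fields map[string]string `json:"fields"`
}

type insertRequest struct {
	Docs []document `json:"docs"`
}

// buildInsertBody mirrors constructUploadParameters: one document per call,
// carrying the embedding plus the query/answer pair as string fields.
func buildInsertBody(emb []float64, query, answer string) ([]byte, error) {
	doc := document{
		Vector: emb,
		Fields: map[string]string{"query": query, "answer": answer},
	}
	return json.Marshal(insertRequest{Docs: []document{doc}})
}

func main() {
	body, err := buildInsertBody([]float64{0.1, 0.2}, "hello", "world")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```

Note that Go marshals map keys in sorted order, so the `fields` object comes out with `answer` before `query`; DashVector does not care about key order.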


@@ -0,0 +1,200 @@
package vector
import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
)
type esProviderInitializer struct{}
func (c *esProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.collectionID) == 0 {
return errors.New("[ES] collectionID is required")
}
if len(config.serviceName) == 0 {
return errors.New("[ES] serviceName is required")
}
return nil
}
func (c *esProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &ESProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type ESProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (c *ESProvider) GetProviderType() string {
return PROVIDER_TYPE_ES
}
func (d *ESProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
requestBody, err := json.Marshal(esQueryRequest{
Source: Source{Excludes: []string{"embedding"}},
Knn: knn{
Field: "embedding",
QueryVector: emb,
K: d.config.topK,
},
Size: d.config.topK,
})
if err != nil {
log.Errorf("[ES] Failed to marshal query embedding request body: %v", err)
return err
}
return d.client.Post(
fmt.Sprintf("/%s/_search", d.config.collectionID),
[][2]string{
{"Content-Type", "application/json"},
{"Authorization", d.getCredentials()},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[ES] Query embedding response: %d, %s", statusCode, responseBody)
results, err := d.parseQueryResponse(responseBody, log)
if err != nil {
err = fmt.Errorf("[ES] Failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout,
)
}
// Build the Authorization header for ES: use the API key if configured, otherwise base64-encode username:password for Basic auth.
func (d *ESProvider) getCredentials() string {
if len(d.config.apiKey) != 0 {
return fmt.Sprintf("ApiKey %s", d.config.apiKey)
} else {
credentials := fmt.Sprintf("%s:%s", d.config.esUsername, d.config.esPassword)
encodedCredentials := base64.StdEncoding.EncodeToString([]byte(credentials))
return fmt.Sprintf("Basic %s", encodedCredentials)
}
}
func (d *ESProvider) UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
queryAnswer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are the index, embedding and question.
// Example:
// POST /<index>/_doc
// {
//   "embedding": [1.1, 2.3, 3.2],
//   "question": "Have you eaten yet?"
// }
requestBody, err := json.Marshal(esInsertRequest{
Embedding: queryEmb,
Question: queryString,
Answer: queryAnswer,
})
if err != nil {
log.Errorf("[ES] Failed to marshal upload embedding request body: %v", err)
return err
}
return d.client.Post(
fmt.Sprintf("/%s/_doc", d.config.collectionID),
[][2]string{
{"Content-Type", "application/json"},
{"Authorization", d.getCredentials()},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[ES] statusCode:%d, responseBody:%s", statusCode, string(responseBody))
// ES returns 201 Created for new documents, so accept any 2xx status.
if statusCode < 200 || statusCode >= 300 {
err = fmt.Errorf("[ES] Failed to upload embedding, statusCode: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout,
)
}
type esInsertRequest struct {
Embedding []float64 `json:"embedding"`
Question string `json:"question"`
Answer string `json:"answer"`
}
type knn struct {
Field string `json:"field"`
QueryVector []float64 `json:"query_vector"`
K int `json:"k"`
}
type Source struct {
Excludes []string `json:"excludes"`
}
type esQueryRequest struct {
Source Source `json:"_source"`
Knn knn `json:"knn"`
Size int `json:"size"`
}
type esQueryResponse struct {
Took int `json:"took"`
TimedOut bool `json:"timed_out"`
Hits struct {
Total struct {
Value int `json:"value"`
Relation string `json:"relation"`
} `json:"total"`
Hits []struct {
Index string `json:"_index"`
ID string `json:"_id"`
Score float64 `json:"_score"`
Source map[string]interface{} `json:"_source"`
} `json:"hits"`
} `json:"hits"`
}
func (d *ESProvider) parseQueryResponse(responseBody []byte, log wrapper.Log) ([]QueryResult, error) {
log.Infof("[ES] responseBody: %s", string(responseBody))
var queryResp esQueryResponse
err := json.Unmarshal(responseBody, &queryResp)
if err != nil {
return []QueryResult{}, err
}
log.Debugf("[ES] queryResp Hits len: %d", len(queryResp.Hits.Hits))
if len(queryResp.Hits.Hits) == 0 {
return nil, errors.New("no query results found in response")
}
results := make([]QueryResult, 0, len(queryResp.Hits.Hits))
for _, hit := range queryResp.Hits.Hits {
// Checked type assertions: a missing or non-string field yields "" instead of a panic.
question, _ := hit.Source["question"].(string)
answer, _ := hit.Source["answer"].(string)
result := QueryResult{
Text: question,
Score: hit.Score,
Answer: answer,
}
results = append(results, result)
}
return results, nil
}
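The credential fallback in `getCredentials` can be exercised in isolation. A minimal sketch of the same ApiKey-then-Basic logic (the `esAuthHeader` helper name is hypothetical):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// esAuthHeader sketches ESProvider.getCredentials: prefer an API key when
// configured, otherwise fall back to HTTP Basic auth with base64-encoded
// "username:password" credentials.
func esAuthHeader(apiKey, username, password string) string {
	if apiKey != "" {
		return fmt.Sprintf("ApiKey %s", apiKey)
	}
	creds := base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
	return fmt.Sprintf("Basic %s", creds)
}

func main() {
	fmt.Println(esAuthHeader("", "elastic", "changeme"))
}
```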


@@ -0,0 +1,206 @@
package vector
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
type milvusProviderInitializer struct{}
func (c *milvusProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.serviceName) == 0 {
return errors.New("[Milvus] serviceName is required")
}
if len(config.collectionID) == 0 {
return errors.New("[Milvus] collectionID is required")
}
return nil
}
func (c *milvusProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &milvusProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type milvusProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (c *milvusProvider) GetProviderType() string {
return PROVIDER_TYPE_MILVUS
}
type milvusData struct {
Vector []float64 `json:"vector"`
Question string `json:"question,omitempty"`
Answer string `json:"answer,omitempty"`
}
type milvusInsertRequest struct {
CollectionName string `json:"collectionName"`
Data []milvusData `json:"data"`
}
func (d *milvusProvider) UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
queryAnswer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are collectionName, data and the Authorization header; question and answer are optional.
// An id must be provided, otherwise v2.4.13-hotfix reports: invalid syntax: invalid parameter[expected=Int64][actual=]
// If no id is provided, set autoId to true when creating the collection.
// Example:
// {
//   "collectionName": "higress",
//   "data": [
//     {
//       "question": "the question",
//       "answer": "the answer",
//       "vector": [
//         0.9,
//         0.1,
//         0.1
//       ]
//     }
//   ]
// }
requestBody, err := json.Marshal(milvusInsertRequest{
CollectionName: d.config.collectionID,
Data: []milvusData{
{
Question: queryString,
Answer: queryAnswer,
Vector: queryEmb,
},
},
})
if err != nil {
log.Errorf("[Milvus] Failed to marshal upload embedding request body: %v", err)
return err
}
return d.client.Post(
"/v2/vectordb/entities/insert",
[][2]string{
{"Content-Type", "application/json"},
{"Authorization", fmt.Sprintf("Bearer %s", d.config.apiKey)},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Milvus] statusCode:%d, responseBody:%s", statusCode, string(responseBody))
if statusCode != http.StatusOK {
err = fmt.Errorf("[Milvus] Failed to upload embedding, statusCode: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout,
)
}
type milvusQueryRequest struct {
CollectionName string `json:"collectionName"`
Data [][]float64 `json:"data"`
AnnsField string `json:"annsField"`
Limit int `json:"limit"`
OutputFields []string `json:"outputFields"`
}
func (d *milvusProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are collectionName, data and annsField; outputFields is optional.
// Example:
// {
//   "collectionName": "quick_setup",
//   "data": [
//     [
//       0.3580376395471989,
//       -0.6023495712049978,
//       0.18414012509913835,
//       -0.26286205330961354,
//       0.9029438446296592
//     ]
//   ],
//   "annsField": "vector",
//   "limit": 3,
//   "outputFields": [
//     "color"
//   ]
// }
requestBody, err := json.Marshal(milvusQueryRequest{
CollectionName: d.config.collectionID,
Data: [][]float64{emb},
AnnsField: "vector",
Limit: d.config.topK,
OutputFields: []string{
"question",
"answer",
},
})
if err != nil {
log.Errorf("[Milvus] Failed to marshal query embedding: %v", err)
return err
}
return d.client.Post(
"/v2/vectordb/entities/search",
[][2]string{
{"Content-Type", "application/json"},
{"Authorization", fmt.Sprintf("Bearer %s", d.config.apiKey)},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Milvus] Query embedding response: %d, %s", statusCode, responseBody)
results, err := d.parseQueryResponse(responseBody, log)
if err != nil {
err = fmt.Errorf("[Milvus] Failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout,
)
}
func (d *milvusProvider) parseQueryResponse(responseBody []byte, log wrapper.Log) ([]QueryResult, error) {
if !gjson.GetBytes(responseBody, "data.0.distance").Exists() {
log.Errorf("[Milvus] No distance found in response body: %s", responseBody)
return nil, errors.New("[Milvus] No distance found in response body")
}
if !gjson.GetBytes(responseBody, "data.0.question").Exists() {
log.Errorf("[Milvus] No question found in response body: %s", responseBody)
return nil, errors.New("[Milvus] No question found in response body")
}
if !gjson.GetBytes(responseBody, "data.0.answer").Exists() {
log.Errorf("[Milvus] No answer found in response body: %s", responseBody)
return nil, errors.New("[Milvus] No answer found in response body")
}
resultNum := gjson.GetBytes(responseBody, "data.#").Int()
results := make([]QueryResult, 0, resultNum)
for i := 0; i < int(resultNum); i++ {
result := QueryResult{
Text: gjson.GetBytes(responseBody, fmt.Sprintf("data.%d.question", i)).String(),
Score: gjson.GetBytes(responseBody, fmt.Sprintf("data.%d.distance", i)).Float(),
Answer: gjson.GetBytes(responseBody, fmt.Sprintf("data.%d.answer", i)).String(),
}
results = append(results, result)
}
return results, nil
}
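The search body that `QueryEmbedding` POSTs to `/v2/vectordb/entities/search` can be built and inspected on its own. A sketch mirroring `milvusQueryRequest` (the `buildMilvusSearch` helper is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors milvusQueryRequest from the provider.
type milvusQueryRequest struct {
	CollectionName string      `json:"collectionName"`
	Data           [][]float64 `json:"data"`
	AnnsField      string      `json:"annsField"`
	Limit          int         `json:"limit"`
	OutputFields   []string    `json:"outputFields"`
}

// buildMilvusSearch assembles a single-vector search request that asks
// Milvus to return the stored question/answer fields alongside each hit.
func buildMilvusSearch(collection string, emb []float64, topK int) ([]byte, error) {
	return json.Marshal(milvusQueryRequest{
		CollectionName: collection,
		Data:           [][]float64{emb},
		AnnsField:      "vector",
		Limit:          topK,
		OutputFields:   []string{"question", "answer"},
	})
}

func main() {
	b, err := buildMilvusSearch("higress", []float64{0.1, 0.9}, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```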


@@ -0,0 +1,194 @@
package vector
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/google/uuid"
"github.com/tidwall/gjson"
)
type pineconeProviderInitializer struct{}
func (c *pineconeProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.serviceHost) == 0 {
return errors.New("[Pinecone] serviceHost is required")
}
if len(config.serviceName) == 0 {
return errors.New("[Pinecone] serviceName is required")
}
if len(config.apiKey) == 0 {
return errors.New("[Pinecone] apiKey is required")
}
return nil
}
func (c *pineconeProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &pineconeProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type pineconeProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (c *pineconeProvider) GetProviderType() string {
return PROVIDER_TYPE_PINECONE
}
type pineconeMetadata struct {
Question string `json:"question"`
Answer string `json:"answer"`
}
type pineconeVector struct {
ID string `json:"id"`
Values []float64 `json:"values"`
Properties pineconeMetadata `json:"metadata"`
}
type pineconeInsertRequest struct {
Vectors []pineconeVector `json:"vectors"`
Namespace string `json:"namespace"`
}
func (d *pineconeProvider) UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
queryAnswer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are vector and question.
// Example:
// {
//   "vectors": [
//     {
//       "id": "A",
//       "values": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
//       "metadata": {"question": "hello", "answer": "hello to you too"}
//     }
//   ]
// }
requestBody, err := json.Marshal(pineconeInsertRequest{
Vectors: []pineconeVector{
{
ID: uuid.New().String(),
Values: queryEmb,
Properties: pineconeMetadata{Question: queryString, Answer: queryAnswer},
},
},
Namespace: d.config.collectionID,
})
if err != nil {
log.Errorf("[Pinecone] Failed to marshal upload embedding request body: %v", err)
return err
}
return d.client.Post(
"/vectors/upsert",
[][2]string{
{"Content-Type", "application/json"},
{"Api-Key", d.config.apiKey},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Pinecone] statusCode:%d, responseBody:%s", statusCode, string(responseBody))
if statusCode != http.StatusOK {
err = fmt.Errorf("[Pinecone] Failed to upload embedding, statusCode: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout,
)
}
type pineconeQueryRequest struct {
Namespace string `json:"namespace"`
Vector []float64 `json:"vector"`
TopK int `json:"topK"`
IncludeMetadata bool `json:"includeMetadata"`
IncludeValues bool `json:"includeValues"`
}
func (d *pineconeProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required field is vector.
// Example:
// {
//   "namespace": "higress",
//   "vector": [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
//   "topK": 1,
//   "includeMetadata": false
// }
requestBody, err := json.Marshal(pineconeQueryRequest{
Namespace: d.config.collectionID,
Vector: emb,
TopK: d.config.topK,
IncludeMetadata: true,
IncludeValues: false,
})
if err != nil {
log.Errorf("[Pinecone] Failed to marshal query embedding: %v", err)
return err
}
return d.client.Post(
"/query",
[][2]string{
{"Content-Type", "application/json"},
{"Api-Key", d.config.apiKey},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Pinecone] Query embedding response: %d, %s", statusCode, responseBody)
results, err := d.parseQueryResponse(responseBody, log)
if err != nil {
err = fmt.Errorf("[Pinecone] Failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout,
)
}
func (d *pineconeProvider) parseQueryResponse(responseBody []byte, log wrapper.Log) ([]QueryResult, error) {
if !gjson.GetBytes(responseBody, "matches.0.score").Exists() {
log.Errorf("[Pinecone] No distance found in response body: %s", responseBody)
return nil, errors.New("[Pinecone] No distance found in response body")
}
if !gjson.GetBytes(responseBody, "matches.0.metadata.question").Exists() {
log.Errorf("[Pinecone] No question found in response body: %s", responseBody)
return nil, errors.New("[Pinecone] No question found in response body")
}
if !gjson.GetBytes(responseBody, "matches.0.metadata.answer").Exists() {
log.Errorf("[Pinecone] No answer found in response body: %s", responseBody)
return nil, errors.New("[Pinecone] No answer found in response body")
}
resultNum := gjson.GetBytes(responseBody, "matches.#").Int()
results := make([]QueryResult, 0, resultNum)
for i := 0; i < int(resultNum); i++ {
result := QueryResult{
Text: gjson.GetBytes(responseBody, fmt.Sprintf("matches.%d.metadata.question", i)).String(),
Score: gjson.GetBytes(responseBody, fmt.Sprintf("matches.%d.score", i)).Float(),
Answer: gjson.GetBytes(responseBody, fmt.Sprintf("matches.%d.metadata.answer", i)).String(),
}
results = append(results, result)
}
return results, nil
}
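The fields that `parseQueryResponse` pulls out of a Pinecone `/query` response via gjson can also be decoded with stdlib structs. A sketch of that shape (the plugin itself uses gjson; struct and helper names here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pineconeMatch captures the subset of a Pinecone /query match that the
// provider maps onto QueryResult: the score plus question/answer metadata.
type pineconeMatch struct {
	Score    float64 `json:"score"`
	Metadata struct {
		Question string `json:"question"`
		Answer   string `json:"answer"`
	} `json:"metadata"`
}

type pineconeQueryResponse struct {
	Matches []pineconeMatch `json:"matches"`
}

func parseMatches(body []byte) ([]pineconeMatch, error) {
	var resp pineconeQueryResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	return resp.Matches, nil
}

func main() {
	body := []byte(`{"matches":[{"score":0.95,"metadata":{"question":"hi","answer":"hello"}}]}`)
	matches, err := parseMatches(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(matches[0].Score, matches[0].Metadata.Answer)
}
```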


@@ -0,0 +1,196 @@
package vector
import (
"errors"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
const (
PROVIDER_TYPE_DASH_VECTOR = "dashvector"
PROVIDER_TYPE_CHROMA = "chroma"
PROVIDER_TYPE_ES = "elasticsearch"
PROVIDER_TYPE_WEAVIATE = "weaviate"
PROVIDER_TYPE_PINECONE = "pinecone"
PROVIDER_TYPE_QDRANT = "qdrant"
PROVIDER_TYPE_MILVUS = "milvus"
)
type providerInitializer interface {
ValidateConfig(ProviderConfig) error
CreateProvider(ProviderConfig) (Provider, error)
}
var (
providerInitializers = map[string]providerInitializer{
PROVIDER_TYPE_DASH_VECTOR: &dashVectorProviderInitializer{},
PROVIDER_TYPE_CHROMA: &chromaProviderInitializer{},
PROVIDER_TYPE_ES: &esProviderInitializer{},
PROVIDER_TYPE_WEAVIATE: &weaviateProviderInitializer{},
PROVIDER_TYPE_PINECONE: &pineconeProviderInitializer{},
PROVIDER_TYPE_QDRANT: &qdrantProviderInitializer{},
PROVIDER_TYPE_MILVUS: &milvusProviderInitializer{},
}
)
// QueryResult is the common structure for vector query results.
type QueryResult struct {
Text string // the matched (similar) text
Embedding []float64 // the embedding of the matched text
Score float64 // the similarity or distance metric for the match
Answer string // the LLM-generated answer associated with the matched text
}
type Provider interface {
GetProviderType() string
}
type EmbeddingQuerier interface {
QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error
}
type EmbeddingUploader interface {
UploadEmbedding(
queryString string,
queryEmb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error
}
type AnswerAndEmbeddingUploader interface {
UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
answer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error
}
type StringQuerier interface {
QueryString(
queryString string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error
}
type ProviderConfig struct {
// @Title zh-CN 向量存储服务提供者类型
// @Description zh-CN 向量存储服务提供者类型,例如 dashvector、chroma
typ string
// @Title zh-CN 向量存储服务名称
// @Description zh-CN 向量存储服务名称
serviceName string
// @Title zh-CN 向量存储服务域名
// @Description zh-CN 向量存储服务域名
serviceHost string
// @Title zh-CN 向量存储服务端口
// @Description zh-CN 向量存储服务端口
servicePort int64
// @Title zh-CN 向量存储服务 API Key
// @Description zh-CN 向量存储服务 API Key
apiKey string
// @Title zh-CN 返回TopK结果
// @Description zh-CN 返回TopK结果默认为 1
topK int
// @Title zh-CN 请求超时
// @Description zh-CN 请求向量存储服务的超时时间单位为毫秒。默认值是10000即10秒
timeout uint32
// @Title zh-CN 向量存储服务 Collection ID
// @Description zh-CN 向量存储服务的 Collection ID
collectionID string
// @Title zh-CN 相似度度量阈值
// @Description zh-CN 默认相似度度量阈值,默认为 1000。
Threshold float64
// @Title zh-CN 相似度度量比较方式
// @Description zh-CN 相似度度量比较方式,默认为小于。
// 相似度度量方式有 Cosine, DotProduct, Euclidean 等,前两者值越大相似度越高,后者值越小相似度越高。
// 所以需要允许自定义比较方式,对于 Cosine 和 DotProduct 选择 gt对于 Euclidean 则选择 lt。
// 默认为 lt所有条件包括 lt (less than小于)、lte (less than or equal to小等于)、gt (greater than大于)、gte (greater than or equal to大等于)
ThresholdRelation string
// ES 配置
// @Title zh-CN ES 用户名
// @Description zh-CN ES 用户名
esUsername string
// @Title zh-CN ES 密码
// @Description zh-CN ES 密码
esPassword string
}
func (c *ProviderConfig) GetProviderType() string {
return c.typ
}
func (c *ProviderConfig) FromJson(json gjson.Result) {
c.typ = json.Get("type").String()
c.serviceName = json.Get("serviceName").String()
c.serviceHost = json.Get("serviceHost").String()
c.servicePort = int64(json.Get("servicePort").Int())
if c.servicePort == 0 {
c.servicePort = 443
}
c.apiKey = json.Get("apiKey").String()
c.collectionID = json.Get("collectionID").String()
c.topK = int(json.Get("topK").Int())
if c.topK == 0 {
c.topK = 1
}
c.timeout = uint32(json.Get("timeout").Int())
if c.timeout == 0 {
c.timeout = 10000
}
c.Threshold = json.Get("threshold").Float()
if c.Threshold == 0 {
c.Threshold = 1000
}
c.ThresholdRelation = json.Get("thresholdRelation").String()
if c.ThresholdRelation == "" {
c.ThresholdRelation = "lt"
}
// ES
c.esUsername = json.Get("esUsername").String()
c.esPassword = json.Get("esPassword").String()
}
func (c *ProviderConfig) Validate() error {
if c.typ == "" {
return errors.New("vector database service is required")
}
initializer, has := providerInitializers[c.typ]
if !has {
return errors.New("unknown vector database service provider type: " + c.typ)
}
if !isRelationValid(c.ThresholdRelation) {
return errors.New("invalid thresholdRelation: " + c.ThresholdRelation)
}
if err := initializer.ValidateConfig(*c); err != nil {
return err
}
return nil
}
func CreateProvider(pc ProviderConfig) (Provider, error) {
initializer, has := providerInitializers[pc.typ]
if !has {
return nil, errors.New("unknown provider type: " + pc.typ)
}
return initializer.CreateProvider(pc)
}
func isRelationValid(relation string) bool {
for _, r := range []string{"lt", "lte", "gt", "gte"} {
if r == relation {
return true
}
}
return false
}
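The config's `Threshold`/`ThresholdRelation` pair exists because the meaning of `Score` flips between metrics: Cosine and DotProduct scores grow with similarity (use `gt`/`gte`), while Euclidean distance shrinks (use `lt`/`lte`). A hedged sketch of how a caller might apply the relation when deciding whether a result counts as a cache hit (the `scoreMatches` helper is hypothetical; only the relation names come from the config):

```go
package main

import "fmt"

// scoreMatches applies a ThresholdRelation ("lt", "lte", "gt", "gte") to a
// result score; an unknown relation conservatively reports no match.
func scoreMatches(score, threshold float64, relation string) bool {
	switch relation {
	case "lt":
		return score < threshold
	case "lte":
		return score <= threshold
	case "gt":
		return score > threshold
	case "gte":
		return score >= threshold
	default:
		return false
	}
}

func main() {
	// Euclidean distance: smaller means more similar, so "lt" fits.
	fmt.Println(scoreMatches(0.2, 1000, "lt"))
	// Cosine similarity: larger means more similar, so "gt" fits.
	fmt.Println(scoreMatches(0.9, 0.8, "gt"))
}
```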


@@ -0,0 +1,208 @@
package vector
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/google/uuid"
"github.com/tidwall/gjson"
)
type qdrantProviderInitializer struct{}
func (c *qdrantProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.serviceName) == 0 {
return errors.New("[Qdrant] serviceName is required")
}
if len(config.collectionID) == 0 {
return errors.New("[Qdrant] collectionID is required")
}
return nil
}
func (c *qdrantProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &qdrantProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type qdrantProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (c *qdrantProvider) GetProviderType() string {
return PROVIDER_TYPE_QDRANT
}
type qdrantPayload struct {
Question string `json:"question"`
Answer string `json:"answer"`
}
type qdrantPoint struct {
ID string `json:"id"`
Vector []float64 `json:"vector"`
Payload qdrantPayload `json:"payload"`
}
type qdrantInsertRequest struct {
Points []qdrantPoint `json:"points"`
}
func (d *qdrantProvider) UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
queryAnswer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are id and vector; payload is optional.
// Example:
// {
//   "points": [
//     {
//       "id": "76874cce-1fb9-4e16-9b0b-f085ac06ed6f",
//       "payload": {
//         "question": "the question",
//         "answer": "the answer"
//       },
//       "vector": [
//         0.9,
//         0.1,
//         0.1
//       ]
//     }
//   ]
// }
requestBody, err := json.Marshal(qdrantInsertRequest{
Points: []qdrantPoint{
{
ID: uuid.New().String(),
Vector: queryEmb,
Payload: qdrantPayload{Question: queryString, Answer: queryAnswer},
},
},
})
if err != nil {
log.Errorf("[Qdrant] Failed to marshal upload embedding request body: %v", err)
return err
}
return d.client.Put(
fmt.Sprintf("/collections/%s/points", d.config.collectionID),
[][2]string{
{"Content-Type", "application/json"},
{"api-key", d.config.apiKey},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Qdrant] statusCode:%d, responseBody:%s", statusCode, string(responseBody))
if statusCode != http.StatusOK {
err = fmt.Errorf("[Qdrant] Failed to upload embedding, statusCode: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout,
)
}
type qdrantQueryRequest struct {
Vector []float64 `json:"vector"`
Limit int `json:"limit"`
WithPayload bool `json:"with_payload"`
}
func (d *qdrantProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are vector and limit. with_payload is optional,
// but is enabled here so the answer can be read directly from the result.
// Example:
// {
//   "vector": [0.2, 0.1, 0.9, 0.7],
//   "limit": 1
// }
requestBody, err := json.Marshal(qdrantQueryRequest{
Vector: emb,
Limit: d.config.topK,
WithPayload: true,
})
if err != nil {
log.Errorf("[Qdrant] Failed to marshal query embedding: %v", err)
return err
}
return d.client.Post(
fmt.Sprintf("/collections/%s/points/search", d.config.collectionID),
[][2]string{
{"Content-Type", "application/json"},
{"api-key", d.config.apiKey},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Qdrant] Query embedding response: %d, %s", statusCode, responseBody)
results, err := d.parseQueryResponse(responseBody, log)
if err != nil {
err = fmt.Errorf("[Qdrant] Failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout,
)
}
func (d *qdrantProvider) parseQueryResponse(responseBody []byte, log wrapper.Log) ([]QueryResult, error) {
// Example response:
// {
//   "time": 0.002,
//   "status": "ok",
//   "result": [
//     {
//       "id": 42,
//       "version": 3,
//       "score": 0.75,
//       "payload": {
//         "question": "London",
//         "answer": "green"
//       },
//       "shard_key": "region_1",
//       "order_value": 42
//     }
//   ]
// }
if !gjson.GetBytes(responseBody, "result.0.score").Exists() {
log.Errorf("[Qdrant] No distance found in response body: %s", responseBody)
return nil, errors.New("[Qdrant] No distance found in response body")
}
if !gjson.GetBytes(responseBody, "result.0.payload.answer").Exists() {
log.Errorf("[Qdrant] No answer found in response body: %s", responseBody)
return nil, errors.New("[Qdrant] No answer found in response body")
}
resultNum := gjson.GetBytes(responseBody, "result.#").Int()
results := make([]QueryResult, 0, resultNum)
for i := 0; i < int(resultNum); i++ {
result := QueryResult{
Text: gjson.GetBytes(responseBody, fmt.Sprintf("result.%d.payload.question", i)).String(),
Score: gjson.GetBytes(responseBody, fmt.Sprintf("result.%d.score", i)).Float(),
Answer: gjson.GetBytes(responseBody, fmt.Sprintf("result.%d.payload.answer", i)).String(),
}
results = append(results, result)
}
return results, nil
}
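The body POSTed to `/collections/<collection>/points/search` is small enough to build and check in isolation. A sketch mirroring `qdrantQueryRequest` (the `buildQdrantSearch` helper is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors qdrantQueryRequest from the provider; with_payload is enabled so
// the stored question/answer can be read straight from each result.
type qdrantQueryRequest struct {
	Vector      []float64 `json:"vector"`
	Limit       int       `json:"limit"`
	WithPayload bool      `json:"with_payload"`
}

func buildQdrantSearch(emb []float64, topK int) ([]byte, error) {
	return json.Marshal(qdrantQueryRequest{Vector: emb, Limit: topK, WithPayload: true})
}

func main() {
	b, err := buildQdrantSearch([]float64{0.2, 0.1}, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```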


@@ -0,0 +1,188 @@
package vector
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/alibaba/higress/plugins/wasm-go/pkg/wrapper"
"github.com/tidwall/gjson"
)
type weaviateProviderInitializer struct{}
func (c *weaviateProviderInitializer) ValidateConfig(config ProviderConfig) error {
if len(config.collectionID) == 0 {
return errors.New("[Weaviate] collectionID is required")
}
if len(config.serviceName) == 0 {
return errors.New("[Weaviate] serviceName is required")
}
return nil
}
func (c *weaviateProviderInitializer) CreateProvider(config ProviderConfig) (Provider, error) {
return &WeaviateProvider{
config: config,
client: wrapper.NewClusterClient(wrapper.FQDNCluster{
FQDN: config.serviceName,
Host: config.serviceHost,
Port: int64(config.servicePort),
}),
}, nil
}
type WeaviateProvider struct {
config ProviderConfig
client wrapper.HttpClient
}
func (c *WeaviateProvider) GetProviderType() string {
return PROVIDER_TYPE_WEAVIATE
}
func (d *WeaviateProvider) QueryEmbedding(
emb []float64,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(results []QueryResult, ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are class and vector.
// Example:
// {"query": "{ Get { Higress ( limit: 2 nearVector: { vector: [0.1, 0.2, 0.3] } ) { question _additional { distance } } } }"}
embString, err := json.Marshal(emb)
if err != nil {
log.Errorf("[Weaviate] Failed to marshal query embedding: %v", err)
return err
}
// Weaviate sorts results by distance in ascending order by default, so no extra sorting is needed here.
graphql := fmt.Sprintf(`
{
Get {
%s (
limit: %d
nearVector: {
vector: %s
}
) {
question
answer
_additional {
distance
}
}
}
}
`, d.config.collectionID, d.config.topK, embString)
requestBody, err := json.Marshal(weaviateQueryRequest{
Query: graphql,
})
if err != nil {
log.Errorf("[Weaviate] Failed to marshal query embedding request body: %v", err)
return err
}
err = d.client.Post(
"/v1/graphql",
[][2]string{
{"Content-Type", "application/json"},
{"Authorization", fmt.Sprintf("Bearer %s", d.config.apiKey)},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Weaviate] Query embedding response: %d, %s", statusCode, responseBody)
results, err := d.parseQueryResponse(responseBody, log)
if err != nil {
err = fmt.Errorf("[Weaviate] Failed to parse query response: %v", err)
}
callback(results, ctx, log, err)
},
d.config.timeout,
)
return err
}
func (d *WeaviateProvider) UploadAnswerAndEmbedding(
queryString string,
queryEmb []float64,
queryAnswer string,
ctx wrapper.HttpContext,
log wrapper.Log,
callback func(ctx wrapper.HttpContext, log wrapper.Log, err error)) error {
// The minimum required fields are class, vector, question and answer.
// Example:
// {"class": "Higress", "vector": [0.1, 0.2, 0.3], "properties": {"question": "the question", "answer": "the answer"}}
requestBody, err := json.Marshal(weaviateInsertRequest{
Class: d.config.collectionID,
Vector: queryEmb,
Properties: weaviateProperties{Question: queryString, Answer: queryAnswer}, // queryString is the user's original question
})
if err != nil {
log.Errorf("[Weaviate] Failed to marshal upload embedding request body: %v", err)
return err
}
return d.client.Post(
"/v1/objects",
[][2]string{
{"Content-Type", "application/json"},
{"Authorization", fmt.Sprintf("Bearer %s", d.config.apiKey)},
},
requestBody,
func(statusCode int, responseHeaders http.Header, responseBody []byte) {
log.Debugf("[Weaviate] statusCode: %d, responseBody: %s", statusCode, string(responseBody))
if statusCode != http.StatusOK {
err = fmt.Errorf("[Weaviate] Failed to upload embedding, statusCode: %d", statusCode)
}
callback(ctx, log, err)
},
d.config.timeout,
)
}
type weaviateProperties struct {
Question string `json:"question"`
Answer string `json:"answer"`
}
type weaviateInsertRequest struct {
Class string `json:"class"`
Vector []float64 `json:"vector"`
Properties weaviateProperties `json:"properties"`
}
type weaviateQueryRequest struct {
Query string `json:"query"`
}
func (d *WeaviateProvider) parseQueryResponse(responseBody []byte, log wrapper.Log) ([]QueryResult, error) {
log.Infof("[Weaviate] queryResp: %s", string(responseBody))
if !gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.0._additional.distance", d.config.collectionID)).Exists() {
log.Errorf("[Weaviate] No distance found in response body: %s", responseBody)
return nil, errors.New("[Weaviate] No distance found in response body")
}
if !gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.0.question", d.config.collectionID)).Exists() {
log.Errorf("[Weaviate] No question found in response body: %s", responseBody)
return nil, errors.New("[Weaviate] No question found in response body")
}
if !gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.0.answer", d.config.collectionID)).Exists() {
log.Errorf("[Weaviate] No answer found in response body: %s", responseBody)
return nil, errors.New("[Weaviate] No answer found in response body")
}
resultNum := gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.#", d.config.collectionID)).Int()
results := make([]QueryResult, 0, resultNum)
for i := 0; i < int(resultNum); i++ {
result := QueryResult{
Text: gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.%d.question", d.config.collectionID, i)).String(),
Score: gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.%d._additional.distance", d.config.collectionID, i)).Float(),
Answer: gjson.GetBytes(responseBody, fmt.Sprintf("data.Get.%s.%d.answer", d.config.collectionID, i)).String(),
}
results = append(results, result)
}
return results, nil
}
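Unlike the other providers, Weaviate takes a GraphQL string rather than a JSON body, so the embedding is JSON-marshalled and interpolated into a query template. A sketch of that interpolation, compacted to one line (the `buildNearVectorQuery` helper is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildNearVectorQuery interpolates a class name, result limit, and a
// JSON-marshalled embedding into a Weaviate nearVector GraphQL query.
func buildNearVectorQuery(class string, topK int, emb []float64) (string, error) {
	embJSON, err := json.Marshal(emb)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf(
		`{ Get { %s ( limit: %d nearVector: { vector: %s } ) { question answer _additional { distance } } } }`,
		class, topK, embJSON), nil
}

func main() {
	q, err := buildNearVectorQuery("Higress", 1, []float64{0.1, 0.2, 0.3})
	if err != nil {
		panic(err)
	}
	fmt.Println(q)
}
```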


@@ -8,8 +8,8 @@ replace github.com/alibaba/higress/plugins/wasm-go => ../..
 require (
 	github.com/alibaba/higress/plugins/wasm-go v1.3.6-0.20240528060522-53bccf89f441
-	github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f
-	github.com/tidwall/gjson v1.14.3
+	github.com/higress-group/proxy-wasm-go-sdk v1.0.0
+	github.com/tidwall/gjson v1.17.3
 	github.com/tidwall/resp v0.1.1
 )


@@ -5,12 +5,14 @@ github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520 h1:IHDghbG
github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520/go.mod h1:Nz8ORLaFiLWotg6GeKlJMhv8cci8mM43uEnLA5t8iew=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f h1:ZIiIBRvIw62gA5MJhuwp1+2wWbqL9IGElQ499rUsYYg=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v1.0.0/go.mod h1:iiSyFbo+rAtbtGt/bsefv8GU57h9CCLYGJA74/tF5/0=
github.com/magefile/mage v1.14.0 h1:6QDX3g6z1YvJ4olPhT1wksUcSa/V0a1B+pJb73fBjyo=
github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=


@@ -6,7 +6,7 @@ replace github.com/alibaba/higress/plugins/wasm-go => ../..
require (
github.com/alibaba/higress/plugins/wasm-go v1.4.2
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f
github.com/higress-group/proxy-wasm-go-sdk v1.0.0
)
require (
@@ -14,7 +14,7 @@ require (
github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520 // indirect
github.com/magefile/mage v1.14.0 // indirect
github.com/santhosh-tekuri/jsonschema v1.2.4 // indirect
github.com/tidwall/gjson v1.14.3 // indirect
github.com/tidwall/gjson v1.17.3 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.0 // indirect
github.com/tidwall/resp v0.1.1 // indirect


@@ -9,6 +9,7 @@ github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240318034951-d5306e367c43/go
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240327114451-d6b7174a84fc/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f h1:ZIiIBRvIw62gA5MJhuwp1+2wWbqL9IGElQ499rUsYYg=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v1.0.0/go.mod h1:iiSyFbo+rAtbtGt/bsefv8GU57h9CCLYGJA74/tF5/0=
github.com/magefile/mage v1.14.0 h1:6QDX3g6z1YvJ4olPhT1wksUcSa/V0a1B+pJb73fBjyo=
github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -17,6 +18,7 @@ github.com/santhosh-tekuri/jsonschema v1.2.4/go.mod h1:TEAUOeZSmIxTTuHatJzrvARHi
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=


@@ -6,8 +6,8 @@ replace github.com/alibaba/higress/plugins/wasm-go => ../..
require (
github.com/alibaba/higress/plugins/wasm-go v1.3.5
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f
github.com/tidwall/gjson v1.14.3
github.com/higress-group/proxy-wasm-go-sdk v1.0.0
github.com/tidwall/gjson v1.17.3
)
require (


@@ -5,6 +5,7 @@ github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520 h1:IHDghbG
github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520/go.mod h1:Nz8ORLaFiLWotg6GeKlJMhv8cci8mM43uEnLA5t8iew=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f h1:ZIiIBRvIw62gA5MJhuwp1+2wWbqL9IGElQ499rUsYYg=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v1.0.0/go.mod h1:iiSyFbo+rAtbtGt/bsefv8GU57h9CCLYGJA74/tF5/0=
github.com/magefile/mage v1.14.0 h1:6QDX3g6z1YvJ4olPhT1wksUcSa/V0a1B+pJb73fBjyo=
github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -12,6 +13,7 @@ github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcU
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=


@@ -6,8 +6,8 @@ replace github.com/alibaba/higress/plugins/wasm-go => ../..
require (
github.com/alibaba/higress/plugins/wasm-go v1.3.5
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f
github.com/tidwall/gjson v1.14.3
github.com/higress-group/proxy-wasm-go-sdk v1.0.0
github.com/tidwall/gjson v1.17.3
)
require (


@@ -8,6 +8,7 @@ github.com/higress-group/nottinygc v0.0.0-20231101025119-e93c4c2f8520/go.mod h1:
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240226064518-b3dc4646a35a h1:luYRvxLTE1xYxrXYj7nmjd1U0HHh8pUPiKfdZ0MhCGE=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240226064518-b3dc4646a35a/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v0.0.0-20240711023527-ba358c48772f/go.mod h1:hNFjhrLUIq+kJ9bOcs8QtiplSQ61GZXtd2xHKx4BYRo=
github.com/higress-group/proxy-wasm-go-sdk v1.0.0/go.mod h1:iiSyFbo+rAtbtGt/bsefv8GU57h9CCLYGJA74/tF5/0=
github.com/magefile/mage v1.14.0 h1:6QDX3g6z1YvJ4olPhT1wksUcSa/V0a1B+pJb73fBjyo=
github.com/magefile/mage v1.14.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -15,6 +16,7 @@ github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcU
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=


@@ -0,0 +1,4 @@
.DEFAULT:
build:
tinygo build -o ai-proxy.wasm -scheduler=none -target=wasi -gc=custom -tags='custommalloc nottinygc_finalizer proxy_wasm_version_0_2_100' ./main.go
mv ai-proxy.wasm ../../../../docker-compose-test/


@@ -31,15 +31,16 @@ description: AI proxy plugin configuration reference
The configuration fields of `provider` are described below:
| Name | Data Type | Requirement | Default | Description |
| -------------- | --------------- | -------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `type` | string | Required | - | Name of the AI service provider |
| `apiTokens` | array of string | Optional | - | Tokens used to authenticate with the AI service. When multiple tokens are configured, the plugin picks one at random per request. Some providers support only a single token. |
| `timeout` | number | Optional | - | Timeout for accessing the AI service, in milliseconds. Defaults to 120000, i.e. 2 minutes. |
| `modelMapping` | map of string | Optional | - | Model mapping table, used to map model names in requests to model names supported by the provider.<br/>1. Prefix matching is supported, e.g. "gpt-3-*" matches every model whose name starts with "gpt-3-";<br/>2. "*" can be used as a key to configure a generic fallback mapping;<br/>3. If the mapped target is the empty string "", the original model name is kept. |
| `protocol` | string | Optional | - | API contract exposed by the plugin. Supported values: openai (default, uses the OpenAI contract) and original (uses the target provider's native contract). |
| `context` | object | Optional | - | Configures context for AI conversations |
| `customSettings` | array of customSetting | Optional | - | Overrides or fills in parameters of AI requests |
| Name | Data Type | Requirement | Default | Description |
|------------------| --------------- | -------- | ------ |-----------------------------------------------------------------------------------------------------------------------------------------------------------|
| `type` | string | Required | - | Name of the AI service provider |
| `apiTokens` | array of string | Optional | - | Tokens used to authenticate with the AI service. When multiple tokens are configured, the plugin picks one at random per request. Some providers support only a single token. |
| `timeout` | number | Optional | - | Timeout for accessing the AI service, in milliseconds. Defaults to 120000, i.e. 2 minutes. |
| `modelMapping` | map of string | Optional | - | Model mapping table, used to map model names in requests to model names supported by the provider.<br/>1. Prefix matching is supported, e.g. "gpt-3-*" matches every model whose name starts with "gpt-3-";<br/>2. "*" can be used as a key to configure a generic fallback mapping;<br/>3. If the mapped target is the empty string "", the original model name is kept. |
| `protocol` | string | Optional | - | API contract exposed by the plugin. Supported values: openai (default, uses the OpenAI contract) and original (uses the target provider's native contract). |
| `context` | object | Optional | - | Configures context for AI conversations |
| `customSettings` | array of customSetting | Optional | - | Overrides or fills in parameters of AI requests |
| `failover` | object | Optional | - | Failover policy for apiTokens: when an apiToken becomes unavailable, it is removed from the apiToken list and re-added once health checks pass |
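The three `modelMapping` rules above (prefix match, "*" fallback, empty target keeps the name) can be sketched as follows. This is an illustrative helper under those documented rules, not the plugin's actual lookup code, and the mapping values are invented:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveModel applies the documented modelMapping rules:
// exact match first, then "prefix-*" keys by prefix, then the "*" fallback;
// an empty target string keeps the original model name.
func resolveModel(mapping map[string]string, model string) string {
	target, ok := mapping[model]
	if !ok {
		for key, val := range mapping {
			if key == "*" || !strings.HasSuffix(key, "*") {
				continue // the generic fallback is handled last
			}
			if strings.HasPrefix(model, strings.TrimSuffix(key, "*")) {
				target, ok = val, true
				break
			}
		}
	}
	if !ok {
		target, ok = mapping["*"]
	}
	if !ok || target == "" {
		return model // no mapping found, or empty target: keep the original name
	}
	return target
}

func main() {
	m := map[string]string{"gpt-3-*": "qwen-turbo", "*": "qwen-max", "gpt-4o": ""}
	fmt.Println(resolveModel(m, "gpt-3-turbo")) // prefix rule
	fmt.Println(resolveModel(m, "claude-3"))    // "*" fallback
	fmt.Println(resolveModel(m, "gpt-4o"))      // empty target keeps the name
}
```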
The configuration fields of `context` are described below:
@@ -75,6 +76,16 @@ custom-setting replaces the corresponding fields according to the table below, based on `name` and the protocol
If raw mode is enabled, custom-setting modifies the request JSON directly with the given `name` and `value`, without restricting or rewriting the parameter name.
For most protocols, custom-setting modifies or fills in parameters at the root of the JSON body. For the `qwen` protocol, ai-proxy applies the settings under the `parameters` sub-path; for the `gemini` protocol, under the `generation_config` sub-path.
The configuration fields of `failover` are described below:
| Name | Data Type | Requirement | Default | Description |
|------------------|--------|------|-------|-----------------------------|
| enabled | bool | Optional | false | Whether to enable the apiToken failover mechanism |
| failureThreshold | int | Optional | 3 | Number of consecutive request failures that triggers failover |
| successThreshold | int | Optional | 1 | Number of consecutive health-check successes required for recovery |
| healthCheckInterval | int | Optional | 5000 | Interval between health checks, in milliseconds |
| healthCheckTimeout | int | Optional | 5000 | Timeout for a health check, in milliseconds |
| healthCheckModel | string | Required | | Model used for health checks |
### Provider-Specific Configurations
@@ -669,6 +680,18 @@ provider:
timeout: 1200000
```
### Proxying Coze Applications with the original Protocol
**Configuration**
```yaml
provider:
type: coze
apiTokens:
- YOUR_COZE_API_KEY
protocol: original
```
### Using Moonshot (月之暗面) with Its Native File Context
Upload files to Moonshot in advance and use their content as context for its AI service.
