Mirror of https://github.com/alibaba/higress.git, synced 2026-02-06 23:21:08 +08:00
fix(doc): fix some dead link (#2675)
@@ -293,7 +293,7 @@ apis:
This example configures three services, demonstrating both the GET and POST types of tools. The GET tools are Amap and XZWeather; the POST tool is DeepL translation. All three services must first be configured as DNS-type services in Higress and be healthy.

Amap provides two tools: one obtains the coordinates of a specified location, and the other searches for points of interest near given coordinates. Documentation: https://lbs.amap.com/api/webservice/guide/api-advanced/newpoisearch

XZWeather provides one tool that returns the real-time weather of a specified city, with results in Chinese, English, or Japanese and temperatures in Celsius or Fahrenheit. Documentation: https://seniverse.yuque.com/hyper_data/api_v3/nyiu3t

DeepL provides one tool that translates a given sentence, supporting multiple languages. Documentation: https://developers.deepl.com/docs/v/zh/api-reference/translate?fallback=true

DeepL provides one tool that translates a given sentence, supporting multiple languages. Documentation: https://developers.deepl.com/api-reference/translate/request-translation
Below are test cases. For stable results, it is recommended to pin the large-model version; this example uses qwen-max-0403:
@@ -283,7 +283,7 @@ apis:
This example configures three services demonstrating both GET and POST types of tools. The GET type tools include Amap and XZWeather, while the POST type tool is the DeepL translation. All three services need to be configured beforehand as DNS-type services in Higress and should be healthy.

Amap provides two tools, one for obtaining the coordinates of a specified location and the other for searching for points of interest near those coordinates. Document: https://lbs.amap.com/api/webservice/guide/api-advanced/newpoisearch

XZWeather provides one tool to get real-time weather conditions for a specified city, supporting results in Chinese, English, and Japanese, as well as temperatures in Celsius and Fahrenheit. Document: https://seniverse.yuque.com/hyper_data/api_v3/nyiu3t

DeepL provides one tool for translating given sentences, supporting multiple languages. Document: https://developers.deepl.com/docs/v/zh/api-reference/translate?fallback=true

DeepL provides one tool for translating given sentences, supporting multiple languages. Document: https://developers.deepl.com/api-reference/translate/request-translation
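To illustrate the GET/POST distinction between these tools: the weather tool passes its parameters in the query string, while the DeepL tool sends them in a request body. A minimal sketch of what such requests could look like, assuming the endpoint and parameter names from the linked Seniverse and DeepL API docs; the keys are placeholders, and this is not the exact request the plugin generates:

```python
from urllib.parse import urlencode
import json

# GET tool (XZWeather): parameters ride in the query string.
# Endpoint and parameter names follow the Seniverse v3 docs linked above.
params = {"key": "YOUR_KEY", "location": "beijing", "language": "zh-Hans", "unit": "c"}
weather_url = "https://api.seniverse.com/v3/weather/now.json?" + urlencode(params)

# POST tool (DeepL): parameters travel in a JSON body instead.
deepl_body = json.dumps({"text": ["Hello"], "target_lang": "DE"})

print(weather_url)
print(deepl_body)
```

Neither request is actually sent here; the sketch only shows how the two tool types carry their arguments.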
Below are test cases. For stability, it is recommended to maintain a stable version of the large model. The example used here is qwen-max-0403:
**Request Example**
@@ -147,5 +147,5 @@ curl -X POST \
- In streaming mode, if a masked word is split across multiple chunks, it may not be possible to restore it
- In streaming mode, if a sensitive word is split across multiple chunks, part of the sensitive word may be returned to the user
- Grok built-in pattern list: https://help.aliyun.com/zh/sls/user-guide/grok-patterns
- Built-in sensitive-word library data source: https://github.com/houbb/sensitive-word/tree/master/src/main/resources
- Built-in sensitive-word library data source: https://github.com/houbb/sensitive-word-data/tree/main/src/main/resources
- Since the sensitive-word list is matched after the text is tokenized, please set `deny_words` to single words; multi-word English phrases such as `hello word` may not be matched
@@ -128,5 +128,5 @@ Please note that you need to replace `"key":"value"` with the actual data conten
- In streaming mode, if the masked words are split across multiple chunks, restoration may not be possible
- In streaming mode, if sensitive words are split across multiple chunks, part of a sensitive word may be returned to the user
- Grok built-in rule list: https://help.aliyun.com/zh/sls/user-guide/grok-patterns
- Built-in sensitive word library data source: https://github.com/houbb/sensitive-word/tree/master/src/main/resources
- Built-in sensitive word library data source: https://github.com/houbb/sensitive-word-data/tree/main/src/main/resources
- Since the sensitive word list is matched after tokenizing the text, set `deny_words` to single words; multi-word English phrases such as `hello world` may not be matched
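The streaming and tokenization limitations above can be sketched in a few lines. This is an illustrative stand-in, not the plugin's actual code: `scan_chunk` and `match_tokens` are hypothetical helpers that mimic per-chunk substring scanning and tokenized matching.

```python
def scan_chunk(chunk: str, deny_words: list[str]) -> bool:
    """Substring scan over a single chunk, the way a streaming
    filter sees data: it can only match what falls inside one chunk."""
    return any(w in chunk for w in deny_words)

deny_words = ["secret"]

# Non-streaming: the whole body is scanned at once, so the word is caught.
assert scan_chunk("top secret plan", deny_words)

# Streaming: the same text arrives split across chunks; neither chunk
# contains the full word, so "sec" + "ret" slips through the filter.
chunks = ["top sec", "ret plan"]
assert not any(scan_chunk(c, deny_words) for c in chunks)

def match_tokens(text: str, deny_words: list[str]) -> bool:
    """Match deny words only after tokenizing the text, as the note
    above describes; a multi-word phrase never equals a single token."""
    tokens = text.split()
    return any(w in tokens for w in deny_words)

# A single deny word matches a token...
assert match_tokens("please say hello world now", ["hello"])
# ...but the two-word phrase "hello world" is never a single token.
assert not match_tokens("please say hello world now", ["hello world"])
```

This is why the notes recommend single-word `deny_words` and warn that chunk boundaries can leak fragments of sensitive words.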