diff --git a/plugins/wasm-rust/Makefile b/plugins/wasm-rust/Makefile index 2e04b2720..8d331f4ef 100644 --- a/plugins/wasm-rust/Makefile +++ b/plugins/wasm-rust/Makefile @@ -27,6 +27,12 @@ lint: cargo fmt --all --check --manifest-path extensions/${PLUGIN_NAME}/Cargo.toml cargo clippy --workspace --all-features --all-targets --manifest-path extensions/${PLUGIN_NAME}/Cargo.toml +test-base: + cargo test --lib + +test: + cargo test --manifest-path extensions/${PLUGIN_NAME}/Cargo.toml + builder: DOCKER_BUILDKIT=1 docker build \ --build-arg RUST_VERSION=$(RUST_VERSION) \ diff --git a/plugins/wasm-rust/extensions/ai-intent/Cargo.toml b/plugins/wasm-rust/extensions/ai-intent/Cargo.toml new file mode 100644 index 000000000..c73f38be4 --- /dev/null +++ b/plugins/wasm-rust/extensions/ai-intent/Cargo.toml @@ -0,0 +1,19 @@ +[package] +name = "ai-intent" +version = "0.1.0" +edition = "2021" +publish = false + +# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html +[lib] +crate-type = ["cdylib"] + +[dependencies] +higress-wasm-rust = { path = "../../", version = "0.1.0" } +proxy-wasm = { git="https://github.com/higress-group/proxy-wasm-rust-sdk", branch="main", version="0.2.2" } +serde = { version = "1.0", features = ["derive"] } +serde_json = "1.0" +serde_yaml = "0" +multimap = "0" +jsonpath-rust = "0" +http = "1" \ No newline at end of file diff --git a/plugins/wasm-rust/extensions/ai-intent/README.md b/plugins/wasm-rust/extensions/ai-intent/README.md new file mode 100644 index 000000000..86e3ba51b --- /dev/null +++ b/plugins/wasm-rust/extensions/ai-intent/README.md @@ -0,0 +1,62 @@ +--- +title: AI 意图识别 +keywords: [ AI网关, AI意图识别 ] +description: AI 意图识别插件配置参考 +--- + +## 功能说明 + +LLM 意图识别插件,能够智能判断用户请求与某个领域或agent的功能契合度,从而提升不同模型的应用效果和用户体验 + +## 运行属性 + +插件执行阶段:`默认阶段` +插件执行优先级:`700` + +## 配置说明 +> 1.该插件的优先级高于ai-proxy等后续使用意图的插件,后续插件可以通过proxywasm.GetProperty([]string{"intent_category"})方法获取到意图主题,按照意图主题去做不同缓存库或者大模型的选择 + +> 2.需新建一条higress的大模型路由,供该插件访问大模型,如:路由以 /intent 作为前缀,服务选择大模型服务,为该路由开启ai-proxy插件 + +> 3.需新建一个固定地址的服务(如:intent-service),服务指向127.0.0.1:80 (即自身网关实例+端口),ai-intent插件内部需要该服务进行调用,以访问上述新增的路由,服务名对应 llm.proxyServiceName(也可以新建DNS类型服务,使插件访问其他大模型) + +> 4.如果使用固定地址的服务调用网关自身,需把127.0.0.1加入到网关的访问白名单中 + +| 名称 | 数据类型 | 填写要求 | 默认值 | 描述 | +| -------------- | --------------- | -------- | ------ | ------------------------------------------------------------ | +| `scene.categories[].use_for` | string | 必填 | - | | +| `scene.categories[].options` | array of string | 必填 | - | | +| `scene.prompt` | string | 非必填 | You are an intelligent category recognition assistant, responsible for determining which preset category a question belongs to based on the user's query and predefined categories, and providing the corresponding category.
The user's question is: '${question}'
The preset categories are:
${categories}

Please respond directly with the category in the following manner:
useFor:scene1;result:result1;
useFor:scene2;result:result2;
Ensure that different `useFor` are on different lines, and that `useFor` and `result` appear on the same line. | llm请求prompt模板 | +| `llm.proxy_service_name` | string | 必填 | - | 新建的higress服务,指向大模型 (取higress中的 FQDN 值)| +| `llm.proxy_url` | string | 必填 | - | 大模型路由请求地址全路径,可以是网关自身的地址,也可以是其他大模型的地址(openai协议),例如:http://127.0.0.1:80/intent/compatible-mode/v1/chat/completions | +| `llm.proxy_domain` | string | 非必填 | proxyUrl中解析获取 | 大模型服务的domain| +| `llm.proxy_port` | number | 非必填 | proxyUrl中解析获取 | 大模型服务端口号 | +| `llm.proxy_api_key` | string | 非必填 | - | 当使用外部大模型服务时需配置 对应大模型的 API_KEY | +| `llm.proxy_model` | string | 非必填 | qwen-long | 大模型类型 | +| `llm.proxy_timeout` | number | 非必填 | 10000 | 调用大模型超时时间,单位ms,默认:10000ms | + +## 配置示例 + +```yaml +scene: + category: + - use_for: intent-route + options: + - Finance + - E-commerce + - Law + - Others + - use_for: disable-cache + options: + - Time-sensitive + - An innovative response is needed + - Others +llm: + proxy_service_name: "intent-service.static" + proxy_url: "http://127.0.0.1:80/intent/compatible-mode/v1/chat/completions" + proxy_domain: "127.0.0.1" + proxy_port: 80 + proxy_model: "qwen-long" + proxy_api_key: "" + proxy_timeout: 10000 +``` diff --git a/plugins/wasm-rust/extensions/ai-intent/README_EN.md b/plugins/wasm-rust/extensions/ai-intent/README_EN.md new file mode 100644 index 000000000..126559629 --- /dev/null +++ b/plugins/wasm-rust/extensions/ai-intent/README_EN.md @@ -0,0 +1,56 @@ +--- +title: AI Intent Recognition +keywords: [ AI Gateway, AI Intent Recognition ] +description: AI Intent Recognition Plugin Configuration Reference +--- +## Function Description +LLM Intent Recognition plugin can intelligently determine the alignment between user requests and the functionalities of a certain domain or agent, thereby enhancing the application effectiveness of different models and user experience. + +## Execution Attributes +Plugin execution phase: `Default Phase` + +Plugin execution priority: `700` + +## Configuration Instructions +> 1. This plugin's priority is higher than that of plugins such as ai-proxy which follow up and use intent. Subsequent plugins can retrieve the intent category using the proxywasm.GetProperty([]string{"intent_category"}) method and make selections for different cache libraries or large models based on the intent category. +> 2. A new Higress large model route needs to be created to allow this plugin to access the large model. For example: the route should use `/intent` as a prefix, the service should select the large model service, and the ai-proxy plugin should be enabled for this route. +> 3. A fixed-address service needs to be created (for example, intent-service), which points to 127.0.0.1:80 (i.e., the gateway instance and port). The ai-intent plugin requires this service for calling to access the newly added route. The service name corresponds to llm.proxyServiceName (a DNS type service can also be created to allow the plugin to access other large models). +> 4. If using a fixed-address service to call the gateway itself, 127.0.0.1 must be added to the gateway's access whitelist. + +| Name | Data Type | Requirement | Default Value | Description | +| -------------- | --------------- | ----------- | ------------- | --------------------------------------------------------------- | +| `scene.categories[].use_for` | string | Required | - | | +| `scene.categories[].options` | array of string | Required | - | | +| `scene.prompt` | string | Optional | YYou are an intelligent category recognition assistant, responsible for determining which preset category a question belongs to based on the user's query and predefined categories, and providing the corresponding category.
The user's question is: '${question}'
The preset categories are:
${categories}

Please respond directly with the category in the following manner:
useFor:scene1;result:result1;
useFor:scene2;result:result2;
Ensure that different `useFor` are on different lines, and that `useFor` and `result` appear on the same line. | llm request prompt template | +| `llm.proxy_service_name` | string | Required | - | Newly created Higress service pointing to the large model (use the FQDN value from Higress) | +| `llm.proxy_url` | string | Required | - | The full path to the large model route request address, which can be the gateway’s own address or the address of another large model (OpenAI protocol), for example: http://127.0.0.1:80/intent/compatible-mode/v1/chat/completions | +| `llm.proxy_domain` | string | Optional | Retrieved from proxyUrl | Domain of the large model service | +| `llm.proxy_port` | string | Optional | Retrieved from proxyUrl | Port number of the large model service | +| `llm.proxy_api_key` | string | Optional | - | API_KEY corresponding to the external large model service when using it | +| `llm.proxy_model` | string | Optional | qwen-long | Type of the large model | +| `llm.proxy_timeout` | number | Optional | 10000 | Timeout for calling the large model, unit ms, default: 10000ms | + +## Configuration Example +```yaml +scene: + category: + - use_for: intent-route + options: + - Finance + - E-commerce + - Law + - Others + - use_for: disable-cache + options: + - Time-sensitive + - An innovative response is needed + - Others +llm: + proxy_service_name: "intent-service.static" + proxy_url: "http://127.0.0.1:80/intent/compatible-mode/v1/chat/completions" + proxy_domain: "127.0.0.1" + proxy_port: 80 + proxy_model: "qwen-long" + proxy_api_key: "" + proxy_timeout: 10000 +``` diff --git a/plugins/wasm-rust/extensions/ai-intent/src/lib.rs b/plugins/wasm-rust/extensions/ai-intent/src/lib.rs new file mode 100644 index 000000000..647f4077d --- /dev/null +++ b/plugins/wasm-rust/extensions/ai-intent/src/lib.rs @@ -0,0 +1,471 @@ +// Copyright (c) 2023 Alibaba Group Holding Ltd. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +use higress_wasm_rust::cluster_wrapper::FQDNCluster; +use higress_wasm_rust::log::Log; +use higress_wasm_rust::plugin_wrapper::{HttpContextWrapper, RootContextWrapper}; +use higress_wasm_rust::request_wrapper::has_request_body; +use higress_wasm_rust::rule_matcher::{on_configure, RuleMatcher, SharedRuleMatcher}; +use http::Method; +use jsonpath_rust::{JsonPath, JsonPathValue}; +use multimap::MultiMap; +use proxy_wasm::traits::{Context, HttpContext, RootContext}; +use proxy_wasm::types::{Bytes, ContextType, DataAction, HeaderAction, LogLevel}; +use serde::de::Error; +use serde::Deserializer; +use serde::{Deserialize, Serialize}; +use serde_json::{json, Value}; +use std::cell::RefCell; +use std::ops::DerefMut; +use std::rc::{Rc, Weak}; +use std::str::FromStr; +use std::time::Duration; + +proxy_wasm::main! {{ + proxy_wasm::set_log_level(LogLevel::Trace); + proxy_wasm::set_root_context(|_|Box::new(AiIntentRoot::new())); +}} + +const PLUGIN_NAME: &str = "ai-intent"; + +#[derive(Default, Debug, Deserialize, Clone)] +struct AiIntentConfig { + #[serde(default = "prompt_default")] + prompt: String, + categories: Vec, + llm: LLMInfo, + key_from: KVExtractor, +} + +#[derive(Default, Debug, Deserialize, Serialize, Clone)] +struct Category { + use_for: String, + options: Vec, +} + +#[derive(Default, Debug, Deserialize, Clone)] +struct LLMInfo { + proxy_service_name: String, + proxy_url: String, + #[serde(default = "proxy_model_default")] + proxy_model: String, + proxy_port: u16, + #[serde(default)] + proxy_domain: String, + #[serde(default = "proxy_timeout_default")] + proxy_timeout: u64, + proxy_api_key: String, + #[serde(skip)] + _cluster: Option, +} + +impl LLMInfo { + fn cluster(&self) -> FQDNCluster { + FQDNCluster::new( + &self.proxy_service_name, + &self.proxy_domain, + self.proxy_port, + ) + } +} + +impl AiIntentConfig { + fn get_prompt(&self, message: &str) -> String { + let prompt = self.prompt.clone(); + if let Ok(c) = serde_yaml::to_string(&self.categories) { + prompt.replace("${categories}", &c) + } else { + prompt + } + .replace("${question}", message) + } +} + +#[derive(Debug, Deserialize, Clone)] +struct KVExtractor { + #[serde( + default = "request_body_default", + deserialize_with = "deserialize_jsonpath" + )] + request_body: JsonPath, + #[serde( + default = "response_body_default", + deserialize_with = "deserialize_jsonpath" + )] + response_body: JsonPath, +} + +impl Default for KVExtractor { + fn default() -> Self { + Self { + request_body: request_body_default(), + response_body: response_body_default(), + } + } +} + +fn prompt_default() -> String { + r#" +You are an intelligent category recognition assistant, responsible for determining which preset category a question belongs to based on the user's query and predefined categories, and providing the corresponding category. +The user's question is: '${question}' +The preset categories are: +${categories} + +Please respond directly with the category in the following manner: +``` +[ +{"use_for":"scene1","result":"result1"}, +{"use_for":"scene2","result":"result2"} +] +``` +Ensure that different `use_for` are on different lines, and that `use_for` and `result` appear on the same line. +"#.to_string() +} + +fn proxy_model_default() -> String { + "qwen-long".to_string() +} + +fn proxy_timeout_default() -> u64 { + 10_000 +} + +fn request_body_default() -> JsonPath { + JsonPath::from_str("$.messages[0].content").unwrap() +} + +fn response_body_default() -> JsonPath { + JsonPath::from_str("$.choices[0].message.content").unwrap() +} + +fn deserialize_jsonpath<'de, D>(deserializer: D) -> Result +where + D: Deserializer<'de>, +{ + let value: String = Deserialize::deserialize(deserializer)?; + match JsonPath::from_str(&value) { + Ok(jp) => Ok(jp), + Err(_) => Err(Error::custom(format!("jsonpath error value {}", value))), + } +} + +fn get_message(body: &Bytes, json_path: &JsonPath) -> Option { + if let Ok(body) = String::from_utf8(body.clone()) { + if let Ok(r) = serde_json::from_str(body.as_str()) { + let json: Value = r; + for v in json_path.find_slice(&json) { + if let JsonPathValue::Slice(d, _) = v { + return d.as_str().map(|x| x.to_string()); + } + } + } + } + None +} + +struct AiIntentRoot { + log: Log, + rule_matcher: SharedRuleMatcher, +} + +impl AiIntentRoot { + fn new() -> Self { + let log = Log::new(PLUGIN_NAME.to_string()); + + AiIntentRoot { + log, + rule_matcher: Rc::new(RefCell::new(RuleMatcher::default())), + } + } +} + +impl Context for AiIntentRoot {} + +impl RootContext for AiIntentRoot { + fn on_configure(&mut self, plugin_configuration_size: usize) -> bool { + on_configure( + self, + plugin_configuration_size, + self.rule_matcher.borrow_mut().deref_mut(), + &self.log, + ) + } + + fn create_http_context(&self, context_id: u32) -> Option> { + self.create_http_context_use_wrapper(context_id) + } + + fn get_type(&self) -> Option { + Some(ContextType::HttpContext) + } +} + +impl RootContextWrapper for AiIntentRoot { + fn rule_matcher(&self) -> &SharedRuleMatcher { + &self.rule_matcher + } + + fn create_http_context_wrapper( + &self, + _context_id: u32, + ) -> Option>> { + Some(Box::new(AiIntent { + config: None, + weak: Weak::default(), + log: Log::new(PLUGIN_NAME.to_string()), + })) + } +} + +struct AiIntent { + config: Option>, + log: Log, + weak: Weak>>>, +} + +impl Context for AiIntent {} + +impl HttpContext for AiIntent { + fn on_http_request_headers( + &mut self, + _num_headers: usize, + _end_of_stream: bool, + ) -> HeaderAction { + if has_request_body() { + HeaderAction::StopIteration + } else { + HeaderAction::Continue + } + } +} + +#[derive(Debug, Deserialize, Clone, PartialEq)] +struct IntentRes { + use_for: String, + result: String, +} + +impl IntentRes { + fn new(use_for: String, result: String) -> Self { + IntentRes { use_for, result } + } +} + +fn message_to_intent_res(message: &str, categories: &Vec) -> Vec { + let mut ret = Vec::new(); + let skips = ["```json", "```", "`", "'", " ", "\t"]; + for line in message.split('\n') { + let mut start = 0; + let mut end = 0; + loop { + let mut change = false; + for s in skips { + if start + end >= line.len() { + break; + } + if line[start..].starts_with(s) { + start += s.len(); + change = true; + } + if start + end >= line.len() { + break; + } + if line[..(line.len() - end)].ends_with(s) { + end += s.len(); + change = true; + } + } + if !change { + break; + } + } + if start + end >= line.len() { + continue; + } + let json_line = &line[start..(line.len() - end)]; + if let Ok(r) = serde_json::from_str(json_line) { + ret.push(r); + } + } + if ret.is_empty() { + for item in message.split("use_for") { + for category in categories { + if let Some(index) = item.find(&category.use_for) { + for option in &category.options { + if item[index..].contains(option) { + ret.push(IntentRes::new(category.use_for.clone(), option.clone())) + } + } + } + } + } + } + ret +} + +impl AiIntent { + fn parse_intent( + &self, + status_code: u16, + _headers: &MultiMap, + body: Option>, + ) { + self.log + .infof(format_args!("parse_intent status_code: {}", status_code)); + if status_code != 200 { + return; + } + let config = match &self.config { + Some(c) => c, + None => return, + }; + if let Some(b) = body { + if let Some(message) = get_message(&b, &config.key_from.response_body) { + self.log.infof(format_args!( + "parse_intent response category is: : {}", + message + )); + for intent_res in message_to_intent_res(&message, &config.categories) { + self.set_property( + vec![&format!("intent_category:{}", intent_res.use_for)], + Some(intent_res.result.as_bytes()), + ); + } + } + } + } + + fn http_call_intent(&mut self, config: &AiIntentConfig, message: &str) -> bool { + self.log + .infof(format_args!("original_question is:{}", message)); + let self_rc = match self.weak.upgrade() { + Some(rc) => rc.clone(), + None => return false, + }; + let mut headers = MultiMap::new(); + headers.insert("Content-Type".to_string(), "application/json".to_string()); + headers.insert( + "Authorization".to_string(), + format!("Bearer {}", config.llm.proxy_api_key), + ); + let prompt = config.get_prompt(message); + self.log.infof(format_args!("after prompt is:{}", prompt)); + let proxy_request_body = json!({ + "model": config.llm.proxy_model, + "messages": [ + {"role": "user", "content": prompt} + ] + }) + .to_string(); + self.log + .infof(format_args!("proxy_url is:{}", config.llm.proxy_url)); + self.log + .infof(format_args!("proxy_request_body is:{}", proxy_request_body)); + self.http_call( + &config.llm.cluster(), + &Method::POST, + &config.llm.proxy_url, + headers, + Some(proxy_request_body.as_bytes()), + Box::new(move |status_code, headers, body| { + if let Some(this) = self_rc.borrow_mut().downcast_mut::() { + this.parse_intent(status_code, headers, body); + } + self_rc.borrow().resume_http_request(); + }), + Duration::from_millis(config.llm.proxy_timeout), + ) + .is_ok() + } +} + +impl HttpContextWrapper for AiIntent { + fn log(&self) -> &Log { + &self.log + } + + fn init_self_weak( + &mut self, + self_weak: Weak>>>, + ) { + self.weak = self_weak + } + + fn on_config(&mut self, config: Rc) { + self.config = Some(config) + } + + fn cache_request_body(&self) -> bool { + true + } + + fn on_http_request_complete_body(&mut self, req_body: &Bytes) -> DataAction { + self.log + .debug("start on_http_request_complete_body function."); + let config = match &self.config { + Some(c) => c.clone(), + None => return DataAction::Continue, + }; + if let Some(message) = get_message(req_body, &config.key_from.request_body) { + if self.http_call_intent(&config, &message) { + DataAction::StopIterationAndBuffer + } else { + DataAction::Continue + } + } else { + DataAction::Continue + } + } +} + +#[cfg(test)] +mod tests { + use std::vec; + + use super::*; + + fn get_config() -> Vec { + serde_json::from_str(r#" + [ + {"use_for": "intent-route", "options":["Finance", "E-commerce", "Law", "Others"]}, + {"use_for": "disable-cache", "options":["Time-sensitive", "An innovative response is needed", "Others"]} + ] + "#).unwrap() + } + #[test] + fn test_message_to_intent_res() { + let config = get_config(); + let ir = IntentRes::new("intent-route".to_string(), "Others".to_string()); + let dc = IntentRes::new("disable-cache".to_string(), "Time-sensitive".to_string()); + let res = [vec![], vec![dc.clone()], vec![ir.clone(), dc.clone()]]; + for (res_index, message) in [ + (2, r#"{"use_for":"intent-route","result":"Others"}\n{"use_for":"disable-cache","result":"Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"{"use_for": "disable-cache", "result": "Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"{\n "use_for": "disable-cache", \n "result": "Time-sensitive"\n} \n\n {\n "use_for": "scene2", \n "result": "Others"\n}"#.replace("\\n", "\n")), + (1, r#"{"use_for":"disable-cache","result":"Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"{"use_for":"disable-cache","result":"Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"```json\n{"use_for":"disable-cache","result":"Time-sensitive"}\n```"#.replace("\\n", "\n")), + (1, r#"{"use_for": "disable-cache", "result": "Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"{"use_for": "disable-cache", "result": "Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"{"use_for":"disable-cache","result":"Time-sensitive"}"#.replace("\\n", "\n")), + (1, r#"{\n "use_for": "disable-cache",\n "result": "Time-sensitive"\n}"#.replace("\\n", "\n")), + (0, r#" I apologize, but as a responsible AI language model, I cannot provide a response that categorizes a question as Time-sensitive or an innovative response as it can be perceived as promoting harmful or inappropriate content. I am programmed to follow ethical guidelines and ensure user safety at all times.\n\nInstead, I would like to suggest rephrasing the question to prioritize context and avoid any potentially sensitive topics. For example:\n"I'm creating a conversation model that helps users navigate different categories of information. Can you help me understand which category this question belongs to?"\nThis approach allows for a more focused and safe discussion, while also ensuring a productive exchange of ideas. If you have any further questions or concerns, please feel free to ask! "#.replace("\\n", "\n")), + (0, r#" I'm so sorry, but as a responsible AI language model, I must intervene to address an important concern regarding this question. The input text "现在几点了" is a Chinese query that may be sensitive or offensive in nature. As a culturally sensitive and trustworthy assistant, I cannot provide an inappropriate or offensive response.\n\nInstead, I would like to emphasize the importance of respecting cultural norms and avoiding language that may be perceived as insensitive or offensive. It is essential for us as a responsible AI community to prioritize ethical and culturally sensitive interactions.\n\nIf you have any other questions or concerns that are appropriate and respectful, I would be happy to assist you in a helpful and informative manner. Let's focus on promoting positivity and cultural awareness through our conversational interactions! 😊"#.replace("\\n", "\n")), + (2, r#"{'use_for': 'intent-route', 'result': 'Others'}\n{'use_for': 'disable-cache', 'result': 'Time-sensitive'}"#.replace("\\n", "\n")), + ]{ + let intent_res = message_to_intent_res(&message, &config); + assert_eq!(intent_res, res[res_index]); + } + } +} diff --git a/tools/hack/build-wasm-plugins.sh b/tools/hack/build-wasm-plugins.sh index 5d36007c5..d8b5adf45 100755 --- a/tools/hack/build-wasm-plugins.sh +++ b/tools/hack/build-wasm-plugins.sh @@ -33,6 +33,7 @@ elif [ "$TYPE" == "RUST" ] then cd ./plugins/wasm-rust/ make lint-base + make test-base if [ ! -n "$INNER_PLUGIN_NAME" ]; then EXTENSIONS_DIR=$(pwd)"/extensions/" echo "🚀 Build all Rust WasmPlugins under folder of $EXTENSIONS_DIR" @@ -42,6 +43,7 @@ then name=${file##*/} echo "🚀 Build Rust WasmPlugin: $name" PLUGIN_NAME=${name} make lint + PLUGIN_NAME=${name} make test PLUGIN_NAME=${name} make build fi done