[feat] support qwen2-vl, gte.
wangzhaode committed Sep 12, 2024
1 parent ef0fc55 commit c4aa4f1
Showing 6 changed files with 214 additions and 162 deletions.
12 changes: 6 additions & 6 deletions CMakeLists.txt
@@ -2,12 +2,12 @@ cmake_minimum_required(VERSION 3.5)
project(mnn-llm)

option(BUILD_FOR_ANDROID "Build for Android with mini memory mode." OFF)
option(USING_VISUAL_MODEL "Using visual model will need deps: MNNOpenCV and httplib." OFF)
option(LLM_SUPPORT_VISION "Llm model support vision input." OFF)
option(DUMP_PROFILE_INFO "Dump profile info when chat." OFF)
option(BUILD_JNI "Build JNI for android app." OFF)

if (USING_VISUAL_MODEL)
add_definitions(-DUSING_VISUAL_MODEL)
if (LLM_SUPPORT_VISION)
add_definitions(-DLLM_SUPPORT_VISION)
endif()

if (DUMP_PROFILE_INFO)
@@ -24,7 +24,7 @@ set(MNN_SUPPORT_TRANSFORMER_FUSE ON CACHE BOOL "Open MNN_SUPPORT_TRANSFORMER_FUS
if (BUILD_FOR_ANDROID)
set(MNN_ARM82 ON CACHE BOOL "Open MNN_ARM82" FORCE)
endif()
if (USING_VISUAL_MODEL)
if (LLM_SUPPORT_VISION)
set(MNN_BUILD_OPENCV ON CACHE BOOL "Open MNN_BUILD_OPENCV" FORCE)
set(MNN_IMGCODECS ON CACHE BOOL "Open MNN_IMGCODECS" FORCE)
endif()
@@ -33,7 +33,7 @@ add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/MNN)
# include dir
include_directories(${CMAKE_CURRENT_LIST_DIR}/include/
${CMAKE_CURRENT_LIST_DIR}/MNN/include/
${CMAKE_CURRENT_LIST_DIR}/MNN/tools/cv/include/cv/
${CMAKE_CURRENT_LIST_DIR}/MNN/tools/cv/include/
)

# source files
@@ -58,7 +58,7 @@ else()
set_target_properties(llm PROPERTIES WINDOWS_EXPORT_ALL_SYMBOLS TRUE)

target_link_libraries(llm MNN MNN_Express)
if (USING_VISUAL_MODEL)
if (LLM_SUPPORT_VISION)
target_link_libraries(llm MNNOpenCV)
endif()
endif()
78 changes: 49 additions & 29 deletions README.md
@@ -22,7 +22,7 @@ To export llm models to `onnx` and `mnn`, use [llm-export](https://github.com/wang
Download models from `modelscope`:

<details>
<summary>qwen series</summary>
<summary>qwen</summary>

- [modelscope-qwen-1.8b-chat]
- [modelscope-qwen-7b-chat]
@@ -31,14 +31,16 @@ To export llm models to `onnx` and `mnn`, use [llm-export](https://github.com/wang
- [modelscope-qwen1.5-1.8b-chat]
- [modelscope-qwen1.5-4b-chat]
- [modelscope-qwen1.5-7b-chat]
- [modelscope-qwen2-0.5b-chat]
- [modelscope-qwen2-1.5b-chat]
- [modelscope-qwen2-7b-chat]
- [modelscope-qwen2-0.5b-instruct]
- [modelscope-qwen2-1.5b-instruct]
- [modelscope-qwen2-7b-instruct]
- [modelscope-qwen2-vl-2b-instruct]
- [modelscope-qwen2-vl-7b-instruct]

</details>

<details>
<summary>glm series</summary>
<summary>glm</summary>

- [modelscope-chatglm-6b]
- [modelscope-chatglm2-6b]
@@ -49,7 +51,7 @@ To export llm models to `onnx` and `mnn`, use [llm-export](https://github.com/wang
</details>

<details>
<summary>llama series</summary>
<summary>llama</summary>

- [modelscope-llama2-7b-chat]
- [modelscope-llama3-8b-instruct]
@@ -62,10 +64,17 @@ To export llm models to `onnx` and `mnn`, use [llm-export](https://github.com/wang
</details>

<details>
<summary>others</summary>
<summary>phi</summary>

- [modelscope-phi-2]

</details>

<details>
<summary>embedding</summary>

- [modelscope-bge-large-zh]
- [modelscope-gte_sentence-embedding_multilingual-base]

</details>

@@ -77,9 +86,11 @@ To export llm models to `onnx` and `mnn`, use [llm-export](https://github.com/wang
[modelscope-qwen1.5-1.8b-chat]: https://modelscope.cn/models/zhaode/Qwen1.5-1.8B-Chat-MNN/files
[modelscope-qwen1.5-4b-chat]: https://modelscope.cn/models/zhaode/Qwen1.5-4B-Chat-MNN/files
[modelscope-qwen1.5-7b-chat]: https://modelscope.cn/models/zhaode/Qwen1.5-7B-Chat-MNN/files
[modelscope-qwen2-0.5b-chat]: https://modelscope.cn/models/zhaode/Qwen2-0.5B-Instruct-MNN/files
[modelscope-qwen2-1.5b-chat]: https://modelscope.cn/models/zhaode/Qwen2-1.5B-Instruct-MNN/files
[modelscope-qwen2-7b-chat]: https://modelscope.cn/models/zhaode/Qwen2-7B-Instruct-MNN/files
[modelscope-qwen2-0.5b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-0.5B-Instruct-MNN/files
[modelscope-qwen2-1.5b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-1.5B-Instruct-MNN/files
[modelscope-qwen2-7b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-7B-Instruct-MNN/files
[modelscope-qwen2-vl-2b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-VL-2B-Instruct-MNN/files
[modelscope-qwen2-vl-7b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-VL-7B-Instruct-MNN/files

[modelscope-chatglm-6b]: https://modelscope.cn/models/zhaode/chatglm-6b-MNN/files
[modelscope-chatglm2-6b]: https://modelscope.cn/models/zhaode/chatglm2-6b-MNN/files
@@ -96,6 +107,7 @@ To export llm models to `onnx` and `mnn`, use [llm-export](https://github.com/wang
[modelscope-tinyllama-1.1b-chat]: https://modelscope.cn/models/zhaode/TinyLlama-1.1B-Chat-MNN/files
[modelscope-phi-2]: https://modelscope.cn/models/zhaode/phi-2-MNN/files
[modelscope-bge-large-zh]: https://modelscope.cn/models/zhaode/bge-large-zh-MNN/files
[modelscope-gte_sentence-embedding_multilingual-base]: https://modelscope.cn/models/zhaode/gte_sentence-embedding_multilingual-base-MNN/files

## Building

@@ -151,13 +163,13 @@ cd mnn-llm

Some compile macros:
- `BUILD_FOR_ANDROID`: build for Android devices;
- `USING_VISUAL_MODEL`: support models with multimodal capability; depends on `libMNNOpenCV`;
- `LLM_SUPPORT_VISION`: whether to support vision input;
- `DUMP_PROFILE_INFO`: dump profiling data to the command line after each chat;

The `CPU` backend is used by default, without multimodal capability; to use another backend or capability, add the corresponding `MNN` compile macro in the MNN build script:
The `CPU` backend is used by default; to use another backend or capability, add the corresponding `MNN` compile macro when building MNN:
- cuda: `-DMNN_CUDA=ON`
- opencl: `-DMNN_OPENCL=ON`
- opencv: `-DMNN_BUILD_OPENCV=ON -DMNN_IMGCODECS=ON`
- metal: `-DMNN_METAL=ON`

### 4. Execution

@@ -181,27 +193,35 @@ adb shell "cd /data/local/tmp && export LD_LIBRARY_PATH=. && ./cli_demo ./Qwen2-
<details>
<summary>reference</summary>

- [cpp-httplib](https://github.com/yhirose/cpp-httplib)
- [chatgpt-web](https://github.com/xqdoo00o/chatgpt-web)
- [ChatViewDemo](https://github.com/BrettFX/ChatViewDemo)
- [nlohmann/json](https://github.com/nlohmann/json)
- [Qwen-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat/summary)
- [Qwen-7B-Chat](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary)
- [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
- [Qwen1.5-0.5B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-0.5B-Chat/summary)
- [Qwen1.5-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-1.8B-Chat/summary)
- [Qwen1.5-4B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-4B-Chat/summary)
- [Qwen1.5-7B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-7B-Chat/summary)
- [Qwen2-0.5B-Instruct](https://modelscope.cn/models/qwen/Qwen2-0.5B-Instruct/summary)
- [Qwen2-1.5B-Instruct](https://modelscope.cn/models/qwen/Qwen2-1.5B-Instruct/summary)
- [Qwen2-7B-Instruct](https://modelscope.cn/models/qwen/Qwen2-7B-Instruct/summary)
- [Qwen2-VL-2B-Instruct](https://modelscope.cn/models/qwen/Qwen2-VL-2B-Instruct/summary)
- [Qwen2-VL-7B-Instruct](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct/summary)
- [chatglm-6b](https://modelscope.cn/models/ZhipuAI/chatglm-6b/summary)
- [chatglm2-6b](https://modelscope.cn/models/ZhipuAI/chatglm2-6b/summary)
- [chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
- [codegeex2-6b](https://modelscope.cn/models/ZhipuAI/codegeex2-6b/summary)
- [Baichuan2-7B-Chat](https://modelscope.cn/models/baichuan-inc/baichuan-7B/summary)
- [Qwen-7B-Chat](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary)
- [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
- [Qwen-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat/summary)
- [chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
- [glm4-9b-chat](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat/summary)
- [Llama-2-7b-chat-ms](https://modelscope.cn/models/modelscope/Llama-2-7b-chat-ms/summary)
- [Llama-3-8B-Instruct](https://modelscope.cn/models/modelscope/Meta-Llama-3-8B-Instruct/summary)
- [Baichuan2-7B-Chat](https://modelscope.cn/models/baichuan-inc/baichuan-7B/summary)
- [internlm-chat-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-chat-7b/summary)
- [Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat/summary)
- [deepseek-llm-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-llm-7b-chat/summary)
- [TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6)
- [phi-2](https://modelscope.cn/models/AI-ModelScope/phi-2/summary)
- [bge-large-zh](https://modelscope.cn/models/AI-ModelScope/bge-large-zh/summary)
- [TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6)
- [Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat/summary)
- [Qwen1.5-0.5B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-0.5B-Chat/summary)
- [Qwen1.5-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-1.8B-Chat/summary)
- [Qwen1.5-4B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-4B-Chat/summary)
- [Qwen1.5-7B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-7B-Chat/summary)
- [cpp-httplib](https://github.com/yhirose/cpp-httplib)
- [chatgpt-web](https://github.com/xqdoo00o/chatgpt-web)
- [ChatViewDemo](https://github.com/BrettFX/ChatViewDemo)
- [nlohmann/json](https://github.com/nlohmann/json)
- [gte_sentence-embedding_multilingual-base](https://modelscope.cn/models/iic/gte_sentence-embedding_multilingual-base/summary)
</details>
68 changes: 45 additions & 23 deletions README_en.md
@@ -32,9 +32,11 @@ Download models from `modelscope`:
- [modelscope-qwen1.5-1.8b-chat]
- [modelscope-qwen1.5-4b-chat]
- [modelscope-qwen1.5-7b-chat]
- [modelscope-qwen2-0.5b-chat]
- [modelscope-qwen2-1.5b-chat]
- [modelscope-qwen2-7b-chat]
- [modelscope-qwen2-0.5b-instruct]
- [modelscope-qwen2-1.5b-instruct]
- [modelscope-qwen2-7b-instruct]
- [modelscope-qwen2-vl-2b-instruct]
- [modelscope-qwen2-vl-7b-instruct]

</details>

@@ -63,10 +65,17 @@ Download models from `modelscope`:
</details>

<details>
<summary>others</summary>
<summary>phi</summary>

- [modelscope-phi-2]

</details>

<details>
<summary>embedding</summary>

- [modelscope-bge-large-zh]
- [modelscope-gte_sentence-embedding_multilingual-base]

</details>

@@ -78,9 +87,11 @@ Download models from `modelscope`:
[modelscope-qwen1.5-1.8b-chat]: https://modelscope.cn/models/zhaode/Qwen1.5-1.8B-Chat-MNN/files
[modelscope-qwen1.5-4b-chat]: https://modelscope.cn/models/zhaode/Qwen1.5-4B-Chat-MNN/files
[modelscope-qwen1.5-7b-chat]: https://modelscope.cn/models/zhaode/Qwen1.5-7B-Chat-MNN/files
[modelscope-qwen2-0.5b-chat]: https://modelscope.cn/models/zhaode/Qwen2-0.5B-Instruct-MNN/files
[modelscope-qwen2-1.5b-chat]: https://modelscope.cn/models/zhaode/Qwen2-1.5B-Instruct-MNN/files
[modelscope-qwen2-7b-chat]: https://modelscope.cn/models/zhaode/Qwen2-7B-Instruct-MNN/files
[modelscope-qwen2-0.5b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-0.5B-Instruct-MNN/files
[modelscope-qwen2-1.5b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-1.5B-Instruct-MNN/files
[modelscope-qwen2-7b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-7B-Instruct-MNN/files
[modelscope-qwen2-vl-2b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-VL-2B-Instruct-MNN/files
[modelscope-qwen2-vl-7b-instruct]: https://modelscope.cn/models/zhaode/Qwen2-VL-7B-Instruct-MNN/files

[modelscope-chatglm-6b]: https://modelscope.cn/models/zhaode/chatglm-6b-MNN/files
[modelscope-chatglm2-6b]: https://modelscope.cn/models/zhaode/chatglm2-6b-MNN/files
@@ -97,6 +108,7 @@ Download models from `modelscope`:
[modelscope-tinyllama-1.1b-chat]: https://modelscope.cn/models/zhaode/TinyLlama-1.1B-Chat-MNN/files
[modelscope-phi-2]: https://modelscope.cn/models/zhaode/phi-2-MNN/files
[modelscope-bge-large-zh]: https://modelscope.cn/models/zhaode/bge-large-zh-MNN/files
[modelscope-gte_sentence-embedding_multilingual-base]: https://modelscope.cn/models/zhaode/gte_sentence-embedding_multilingual-base-MNN/files

## Building

@@ -147,9 +159,10 @@ cd mnn-llm
./script/ios_build.sh
```

The default backend used is `CPU`. If you want to use a different backend, you can add an MNN compilation macro within the script:
The default backend used is `CPU`. If you want to use a different backend, you can add an MNN compilation macro:
- cuda: `-DMNN_CUDA=ON`
- opencl: `-DMNN_OPENCL=ON`
- metal: `-DMNN_METAL=ON`


### 4. Execution
@@ -174,27 +187,36 @@ adb shell "cd /data/local/tmp && export LD_LIBRARY_PATH=. && ./cli_demo ./Qwen2-
<details>
<summary>reference</summary>

- [cpp-httplib](https://github.com/yhirose/cpp-httplib)
- [chatgpt-web](https://github.com/xqdoo00o/chatgpt-web)
- [ChatViewDemo](https://github.com/BrettFX/ChatViewDemo)
- [nlohmann/json](https://github.com/nlohmann/json)
- [Qwen-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat/summary)
- [Qwen-7B-Chat](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary)
- [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
- [Qwen1.5-0.5B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-0.5B-Chat/summary)
- [Qwen1.5-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-1.8B-Chat/summary)
- [Qwen1.5-4B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-4B-Chat/summary)
- [Qwen1.5-7B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-7B-Chat/summary)
- [Qwen2-0.5B-Instruct](https://modelscope.cn/models/qwen/Qwen2-0.5B-Instruct/summary)
- [Qwen2-1.5B-Instruct](https://modelscope.cn/models/qwen/Qwen2-1.5B-Instruct/summary)
- [Qwen2-7B-Instruct](https://modelscope.cn/models/qwen/Qwen2-7B-Instruct/summary)
- [Qwen2-VL-2B-Instruct](https://modelscope.cn/models/qwen/Qwen2-VL-2B-Instruct/summary)
- [Qwen2-VL-7B-Instruct](https://modelscope.cn/models/qwen/Qwen2-VL-7B-Instruct/summary)
- [chatglm-6b](https://modelscope.cn/models/ZhipuAI/chatglm-6b/summary)
- [chatglm2-6b](https://modelscope.cn/models/ZhipuAI/chatglm2-6b/summary)
- [chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
- [codegeex2-6b](https://modelscope.cn/models/ZhipuAI/codegeex2-6b/summary)
- [Baichuan2-7B-Chat](https://modelscope.cn/models/baichuan-inc/baichuan-7B/summary)
- [Qwen-7B-Chat](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary)
- [Qwen-VL-Chat](https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary)
- [Qwen-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat/summary)
- [chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b/summary)
- [glm4-9b-chat](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat/summary)
- [Llama-2-7b-chat-ms](https://modelscope.cn/models/modelscope/Llama-2-7b-chat-ms/summary)
- [Llama-3-8B-Instruct](https://modelscope.cn/models/modelscope/Meta-Llama-3-8B-Instruct/summary)
- [Baichuan2-7B-Chat](https://modelscope.cn/models/baichuan-inc/baichuan-7B/summary)
- [internlm-chat-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-chat-7b/summary)
- [Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat/summary)
- [deepseek-llm-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-llm-7b-chat/summary)
- [TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6)
- [phi-2](https://modelscope.cn/models/AI-ModelScope/phi-2/summary)
- [bge-large-zh](https://modelscope.cn/models/AI-ModelScope/bge-large-zh/summary)
- [TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6)
- [Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat/summary)
- [Qwen1.5-0.5B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-0.5B-Chat/summary)
- [Qwen1.5-1.8B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-1.8B-Chat/summary)
- [Qwen1.5-4B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-4B-Chat/summary)
- [Qwen1.5-7B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-7B-Chat/summary)
- [cpp-httplib](https://github.com/yhirose/cpp-httplib)
- [chatgpt-web](https://github.com/xqdoo00o/chatgpt-web)
- [ChatViewDemo](https://github.com/BrettFX/ChatViewDemo)
- [nlohmann/json](https://github.com/nlohmann/json)
- [gte_sentence-embedding_multilingual-base](https://modelscope.cn/models/iic/gte_sentence-embedding_multilingual-base/summary)

</details>
6 changes: 3 additions & 3 deletions demo/embedding_demo.cpp
@@ -31,9 +31,9 @@ int main(int argc, const char* argv[]) {
std::string model_dir = argv[1];
std::cout << "model path is " << model_dir << std::endl;
std::unique_ptr<Embedding> embedding(Embedding::createEmbedding(model_dir));
auto vec_0 = embedding->embedding("在春暖花开的季节,走在樱花缤纷的道路上,人们纷纷拿出手机拍照留念。樱花树下,情侣手牵手享受着这绝美的春光。孩子们在树下追逐嬉戏,脸上洋溢着纯真的笑容。春天的气息在空气中弥漫,一切都显得那么生机勃勃,充满希望。");
auto vec_1 = embedding->embedding("春天到了,樱花树悄然绽放,吸引了众多游客前来观赏。小朋友们在花瓣飘落的树下玩耍,而恋人们则在这浪漫的景色中尽情享受二人世界。每个人的脸上都挂着幸福的笑容,仿佛整个世界都被春天温暖的阳光和满树的樱花渲染得更加美好。");
auto vec_2 = embedding->embedding("在炎热的夏日里,沙滩上的游客们穿着泳装享受着海水的清凉。孩子们在海边堆沙堡,大人们则在太阳伞下品尝冷饮,享受悠闲的时光。远处,冲浪者们挑战着波涛,体验着与海浪争斗的刺激。夏天的海滩,总是充满了活力和热情。");
auto vec_0 = embedding->txt_embedding("在春暖花开的季节,走在樱花缤纷的道路上,人们纷纷拿出手机拍照留念。樱花树下,情侣手牵手享受着这绝美的春光。孩子们在树下追逐嬉戏,脸上洋溢着纯真的笑容。春天的气息在空气中弥漫,一切都显得那么生机勃勃,充满希望。");
auto vec_1 = embedding->txt_embedding("春天到了,樱花树悄然绽放,吸引了众多游客前来观赏。小朋友们在花瓣飘落的树下玩耍,而恋人们则在这浪漫的景色中尽情享受二人世界。每个人的脸上都挂着幸福的笑容,仿佛整个世界都被春天温暖的阳光和满树的樱花渲染得更加美好。");
auto vec_2 = embedding->txt_embedding("在炎热的夏日里,沙滩上的游客们穿着泳装享受着海水的清凉。孩子们在海边堆沙堡,大人们则在太阳伞下品尝冷饮,享受悠闲的时光。远处,冲浪者们挑战着波涛,体验着与海浪争斗的刺激。夏天的海滩,总是充满了活力和热情。");
dumpVARP(vec_0);
dumpVARP(vec_1);
dumpVARP(vec_2);