diff --git a/README.md b/README.md
index 3b588e9..ecb0d89 100644
--- a/README.md
+++ b/README.md
@@ -68,8 +68,6 @@ You can download the [latest release](https://github.com/bytedance/UI-TARS-deskt
 
 #### VLM (Vision-Language Model)
 
-Support HuggingFace(Cloud) and Ollama(Local) deployment.
-
 We recommend using HuggingFace Inference Endpoints for fast deployment. We provide two docs for users to refer:
 
 [GUI Model Deployment Guide](https://juniper-switch-f10.notion.site/GUI-Model-Deployment-Guide-17b5350241e280058e98cea60317de71)
@@ -77,14 +75,14 @@ We recommend using HuggingFace Inference Endpoints for fast deployment. We provi
-If you use Ollama, you can use the following settings to start the server:
+
 > **Note**: VLM Base Url is OpenAI compatible API endpoints (see [OpenAI API protocol document](https://platform.openai.com/docs/guides/vision/uploading-base-64-encoded-images) for more details).
diff --git a/src/main/store/types.ts b/src/main/store/types.ts
index 7129ff7..67e436d 100644
--- a/src/main/store/types.ts
+++ b/src/main/store/types.ts
@@ -51,7 +51,7 @@ export type AppState = {
 };
 
 export enum VlmProvider {
-  Ollama = 'ollama',
+  // Ollama = 'ollama',
   Huggingface = 'huggingface',
 }
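
For context on the README note above: "VLM Base Url is OpenAI compatible API endpoints" means the app talks to the model over the standard OpenAI chat-completions protocol with base64-encoded images. The sketch below is a minimal illustration of that protocol, not code from this repository; the base URL placeholder, the `ui-tars` model name, the `VLM_API_KEY` environment variable, and the prompt text are all assumptions for the example.

```ts
// Hypothetical sketch: calling an OpenAI-compatible VLM endpoint with a
// base64-encoded screenshot, following the OpenAI vision/chat-completions
// protocol linked from the README note. Endpoint URL, model name, and env
// var are placeholders, not values used by UI-TARS-desktop itself.
import { readFile } from 'node:fs/promises';

async function describeScreenshot(baseUrl: string, apiKey: string, imagePath: string) {
  // Encode the screenshot as a data URL, as described in the OpenAI docs.
  const image = await readFile(imagePath);
  const dataUrl = `data:image/png;base64,${image.toString('base64')}`;

  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'ui-tars', // placeholder model name
      messages: [
        {
          role: 'user',
          content: [
            { type: 'text', text: 'Describe the UI in this screenshot.' },
            { type: 'image_url', image_url: { url: dataUrl } },
          ],
        },
      ],
    }),
  });

  if (!res.ok) {
    throw new Error(`VLM request failed: ${res.status} ${await res.text()}`);
  }
  const data = await res.json();
  return data.choices[0].message.content;
}

// Example usage with placeholder values:
// describeScreenshot('https://<your-endpoint>/v1', process.env.VLM_API_KEY ?? '', 'screenshot.png')
//   .then(console.log)
//   .catch(console.error);
```

Any provider exposing this request shape works behind the same VLM Base Url setting, which is why the `VlmProvider` enum change above can disable the Ollama entry without touching the request path.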