Added paragraph about the combination of BioChatter and WebLLM #3

Merged · 3 commits · Dec 29, 2023
content/40.methods.md (5 changes: 4 additions & 1 deletion)

@@ -82,7 +82,10 @@
We provide Docker images for the automatic deployment of the API and the models.
Xorbits Inference includes a large number of open-source models out of the box, and new models from Hugging Face Hub [@{https://huggingface.co/}] can be added using the intuitive graphical user interface.
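For illustration, the following sketch queries a locally deployed Xorbits Inference server; it assumes the server exposes an OpenAI-compatible REST endpoint on the default port (9997), and the model name is a placeholder for whichever model was launched:

```typescript
// Sketch: query a locally running Xorbits Inference server through its
// OpenAI-compatible REST endpoint. The port (9997) and model name are
// assumptions that depend on the local deployment.
const response = await fetch("http://localhost:9997/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama-2-chat", // placeholder: name of a launched model
    messages: [
      { role: "user", content: "Summarise the role of TP53 in cancer." },
    ],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
```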

In addition, we provide fully browser-based deployment of LLMs using WebAssembly (WASM) and the web-llm library (https://github.com/mlc-ai/web-llm).
This allows the deployment of LLMs on end-user devices without the need for a server.
We host a model locally using web-llm; this model is then accessed by the BioChatter server within ChatGSE Next.
This combination creates a secure conversational environment and minimises the risks associated with data breaches.
Hosting LLMs locally with web-llm allows them to run inside a WASM module without an internet connection.
Because no data leave the user's device, this architecture enables complete data security and is limited only by the resources of the host computer.
While the same can be achieved with a local deployment of the Xorbits Inference API, the browser-based deployment is more user-friendly and does not require any additional software.
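For illustration, the following sketch loads and queries a model entirely in the browser using web-llm's engine API; the function name and model identifier follow the library's published examples and may vary between versions:

```typescript
// Sketch: run an LLM fully client-side with web-llm (WASM/WebGPU).
// The model identifier is an example from web-llm's prebuilt model list
// and may differ between library versions.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Download and initialise the model in the browser; weights are cached,
// so subsequent page loads start without re-downloading.
const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC");

// Chat through the OpenAI-style interface; no request leaves the device.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "What does the BRCA1 gene encode?" }],
});
console.log(reply.choices[0].message.content);
```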

### Model Chaining