A simple LLM chat front-end that makes it easy to find, download, and mess around with LLM models on your local machine. This is a very early-stage project, so expect bugs and missing features. On the bright side, here's what it supports today:
- Easily download and run built-in LLM models
- Load your own models
- GPU support
- Statically compiled
- Cross-platform
- Dark and light modes
- Warm-up prompting
- Upload files (.pdf, .txt, .html) and chat about the file contents
- Chat-style context
- Prompt templates
See releases for more downloads.
All models are downloaded to and loaded from the `~/.chitchat/models` directory. You can drop your own `.bin` files in there.
Currently, this project only supports ggml models.
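Since models are just files in that directory, loading your own is a matter of placing it there. A minimal sketch (the model filename below is a placeholder, not a real download):

```shell
# Create the models directory if it doesn't exist yet
mkdir -p ~/.chitchat/models

# Drop a ggml model in; the filename here is a placeholder for your own .bin file
# cp ~/Downloads/some-model.ggml.bin ~/.chitchat/models/

# List what's available for the app to load
ls ~/.chitchat/models
```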
To download models that aren't supported natively in this project, check out the following links.
This is just a Tauri frontend on the incredible rustformers/llm project, so any bugs in model execution or performance should be reported to that project.
See troubleshooting for more information.