This is a Discord bot for DeepSeek R1 that automatically falls back to local inference once API usage is rate-limited. 🐋
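The fallback described above can be sketched roughly like this. This is a hypothetical illustration, not the bot's actual code: the function names (`withFallback`, `primary`, `local`) and the "detect a 429 in the error message" heuristic are assumptions.

```typescript
// A prompt-to-text generator (e.g. the DeepSeek R1 API, or local Ollama).
type Generate = (prompt: string) => Promise<string>;

// Try the primary (hosted API) generator first; if it fails with a
// rate-limit style error (HTTP 429), retry against the local generator.
// Any other error is rethrown unchanged.
async function withFallback(
  primary: Generate,
  local: Generate,
  prompt: string,
): Promise<string> {
  try {
    return await primary(prompt);
  } catch (err) {
    if (err instanceof Error && err.message.includes("429")) {
      return local(prompt);
    }
    throw err;
  }
}
```

In practice `local` would POST to the Ollama server started in the steps below, while `primary` would call the hosted Deepseek R1 API.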
Built with:

- Brain
- Docker
- Discord.js
- TypeScript
- Ollama (just use Docker, why are you so in love with running the binary directly?)
- Deepseek R1 API

You can also set up the environment with Nix via `flake.nix`.
```sh
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# or clone the repo and enter the Nix dev shell
git clone https://github.com/fzn0x/discord-deepseek-r1-bot
cd discord-deepseek-r1-bot
nix develop # Ensure flakes are enabled in /etc/nix/nix.conf

# if you are using WSL:
# export LD_LIBRARY_PATH="/usr/lib/wsl/lib:$LD_LIBRARY_PATH"

ollama pull deepseek-r1:1.5b
ollama serve & # Ensure CUDA is installed on your machine
```
I'm using the 1.5b model; you can choose other DeepSeek R1 sizes here: https://ollama.com/library/deepseek-r1:1.5b
```sh
docker exec -it ollama ollama run deepseek-r1:1.5b
```

Verify the server responds:

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?"
}'
```
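The same request the `curl` command sends can be issued from the bot's TypeScript side. A minimal sketch, assuming Node 18+ (built-in `fetch`); the helper names are illustrative, not the bot's actual API:

```typescript
// Local Ollama endpoint started in the steps above.
const OLLAMA_URL = "http://localhost:11434/api/generate";

interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean; // false = return one JSON object instead of a stream
}

// Build the JSON body matching the curl example above.
function buildGenerateBody(prompt: string, model = "deepseek-r1:1.5b"): string {
  const body: GenerateRequest = { model, prompt, stream: false };
  return JSON.stringify(body);
}

// POST the prompt to Ollama and return the generated text.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    body: buildGenerateBody(prompt),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

With `"stream": false`, Ollama's `/api/generate` returns a single JSON object whose `response` field holds the full completion.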
You can follow these same steps on a VPS. If you want a cheap server, try something like Contabo (I'm not promoting them).
Optional: there is a `clean.py` script in case you accidentally run out of GPU memory when running models with vLLM.
Credits:

- God
- Deepseek
- Me
- Internet
- Founder of electricity
- GitHub
- Your Mom
- etc
This project is licensed under the MIT License.
Add smart contract development and there you go, another shitcoin project.