
🤖 Discord Bot for Deepseek-R1 (Local and API run) 🐋

This is a Discord bot for Deepseek R1 that automatically falls back to local inference once the API usage limit is hit. 🐋
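
How does the fallback work? Roughly: try the hosted Deepseek API first, and if the request fails (rate limited, quota exhausted, or unreachable), send the same prompt to a locally served model via Ollama instead. The sketch below illustrates the idea in TypeScript; the endpoint URLs, model names, and environment variable names are assumptions for illustration, not necessarily what this repo's source uses.

// Hedged sketch: hosted Deepseek API first, local Ollama as the fallback.
// Assumes Node 18+ (global fetch) and the OpenAI-compatible Deepseek endpoint.

const DEEPSEEK_URL = "https://api.deepseek.com/chat/completions";
const OLLAMA_URL = "http://localhost:11434/api/generate";

async function askDeepseekAPI(prompt: string): Promise<string> {
  const res = await fetch(DEEPSEEK_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`, // hypothetical env var name
    },
    body: JSON.stringify({
      model: "deepseek-reasoner",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  // A non-OK status (e.g. 429 when the quota is exhausted) makes the caller fall back.
  if (!res.ok) throw new Error(`Deepseek API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

async function askLocalOllama(prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "deepseek-r1:1.5b", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // with stream: false, the full text is in "response"
}

export async function generate(prompt: string): Promise<string> {
  try {
    return await askDeepseekAPI(prompt);
  } catch {
    // API limited or unreachable: fall back to the locally served model
    return await askLocalOllama(prompt);
  }
}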

Requirements

  • Brain
  • Docker
  • Discord.js (see the wiring sketch after this list)
  • TypeScript
  • Ollama (just use Docker, why are you so in love with direct binary execution?)
    • You can also install it via flake.nix with Nix
  • Deepseek R1 API
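
To show how these pieces fit together, here is a minimal discord.js v14 wiring sketch. The intents, the DISCORD_TOKEN env var name, and the generate() helper (like the fallback sketch above) are assumptions for illustration; the repo's actual wiring may differ.

import { Client, Events, GatewayIntentBits } from "discord.js";
import { generate } from "./generate"; // hypothetical helper, see the fallback sketch above

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent, // needed to read message text
  ],
});

client.on(Events.MessageCreate, async (message) => {
  if (message.author.bot) return; // ignore other bots (and ourselves)
  const answer = await generate(message.content);
  await message.reply(answer.slice(0, 2000)); // Discord caps a message at 2000 characters
});

client.login(process.env.DISCORD_TOKEN);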

Local Serving for Deepseek-R1 (in case you don't know how) 🍽️

Install Ollama

# Option 1: run Ollama in Docker (GPU passthrough via --gpus=all)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Option 2: use the Nix flake
git clone https://github.com/fzn0x/discord-deepseek-r1-bot
cd discord-deepseek-r1-bot
nix develop # Ensure flakes are enabled in /etc/nix/nix.conf
# If you are using WSL:
# export LD_LIBRARY_PATH="/usr/lib/wsl/lib:$LD_LIBRARY_PATH"
ollama pull deepseek-r1:1.5b
ollama serve & # Ensure CUDA is installed on your machine

Run Deepseek R1 Model with Ollama (For Docker Installation)

I'm using 1.5b; you can choose other models here: https://ollama.com/library/deepseek-r1:1.5b

docker exec -it ollama ollama run deepseek-r1:1.5b

CURL your local API

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?"
}'
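
Note that /api/generate streams newline-delimited JSON by default, one fragment per line, with a final object where "done" is true; pass "stream": false if you want a single JSON object instead. A small TypeScript sketch (assuming Node 18+ fetch) that consumes the default streaming output:

// Sketch: read the streaming NDJSON output of Ollama's /api/generate.
async function streamGenerate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "deepseek-r1:1.5b", prompt }),
  });

  const decoder = new TextDecoder();
  let buffered = "";
  let answer = "";

  // In Node 18+, res.body is a web ReadableStream and is async-iterable.
  for await (const chunk of res.body as any) {
    buffered += decoder.decode(chunk, { stream: true });
    let newline: number;
    while ((newline = buffered.indexOf("\n")) !== -1) {
      const line = buffered.slice(0, newline).trim();
      buffered = buffered.slice(newline + 1);
      if (!line) continue;
      const part = JSON.parse(line); // each line: { "response": "...", "done": false, ... }
      answer += part.response ?? "";
      if (part.done) return answer;
    }
  }
  return answer;
}

streamGenerate("Why is the sky blue?").then(console.log);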

You can follow these same steps on a VPS. If you want a cheap server, try something like Contabo (I'm not promoting them).

Optional: there is a clean.py script for when you accidentally run out of memory while running models with vLLM.

Credits

  • God
  • Deepseek
  • Me
  • Internet
  • Founder of electricity
  • GitHub
  • Your Mom
  • etc

License

This project is licensed under the MIT License.

Pro Tips 💡

Add some smart contract development and there you go, another shitcoin project.