nebula.ai

Overview

The bot connects to an Ollama endpoint and queries it whenever it is mentioned.
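As a rough illustration of this flow, the snippet below sends a prompt to Ollama's `/api/generate` endpoint using only the standard library. The function names are illustrative, not the bot's actual code:

```python
import json
import urllib.request

def build_payload(model, prompt):
    """Build the JSON body expected by Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def query_ollama(base_url, model, prompt):
    """POST the prompt to the Ollama server (OLLAMA_URL) and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a non-streaming request (`"stream": False`) the server returns a single JSON object whose `response` field holds the generated text.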

Installation

In order to run the bot you need the following:

  • A .env file that contains the following info:
    • DISCORD_TOKEN -> the Discord token the bot runs with
    • level -> the log level; so far the only accepted level is debug
    • OLLAMA_URL -> the Ollama server URL, in the format http://ollama_host:ollama_port
    • OLLAMA_MODEL -> the Ollama model to query
  • A .log directory

You also need to install the Python dependencies with `pip3 install -r deps/requirements.txt`.

Alternatively, you can run the bot with the docker-compose file: `docker-compose up -d`.
Make sure you have a .env file in the current directory with the following structure:

```
DISCORD_TOKEN = YOUR_DISCORD_TOKEN
level = debug
OLLAMA_URL = http://ollama_host:ollama_port
OLLAMA_MODEL = llm_model
```
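For orientation, a docker-compose.yml compatible with the commands above could look like this; the image name, service name, and volume path are assumptions, not the repository's actual file:

```yaml
services:
  bot:
    image: system-nebula/daddai:latest  # assumed image name
    env_file: .env                      # DISCORD_TOKEN, level, OLLAMA_URL, OLLAMA_MODEL
    volumes:
      - ./.log:/app/.log                # the bot expects a .log directory
    restart: unless-stopped
```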

To check whether the container is running, use `docker-compose ps`.

Repo workflow

Development happens on a branch called dev.
When the work on the branch is done, commit and push it to dev.
The GitHub workflow will then automatically create a PR.

How to interact with the bot

To interact with the bot, just mention it in your message: @bot_name text
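Under the hood, Discord delivers mentions as `<@id>` (or `<@!id>`) tokens in the raw message content. The helper below shows one way a handler might pull the prompt text out of a mention; it is a sketch, not the bot's actual implementation:

```python
import re

def extract_prompt(content, bot_id):
    """Return the text after a leading bot mention, or None if the bot
    is not mentioned at the start of the message.

    Discord encodes user mentions as <@id> or <@!id> in raw content.
    Illustrative sketch only.
    """
    match = re.match(rf"<@!?{bot_id}>\s*(.*)", content, flags=re.DOTALL)
    return match.group(1) if match else None
```

A real handler would call something like this from the library's message event, pass the extracted text to the Ollama endpoint, and reply with the result.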