Ollama CLI:

```
REM Ollama reads OLLAMA_HOST (host:port); there is no separate OLLAMA_PORT variable
set OLLAMA_HOST=0.0.0.0:11434
ollama serve
```
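Once the server is running, you can confirm it is reachable before launching the GUI. The snippet below is a minimal sketch using only the Python standard library and Ollama's documented `/api/tags` endpoint; adjust the address to match your setup:

```python
import json
import urllib.request

# Ollama's /api/tags endpoint lists the models installed on the server.
# Replace 127.0.0.1:11434 with your server's address if it runs remotely.
url = "http://127.0.0.1:11434/api/tags"

with urllib.request.urlopen(url, timeout=5) as resp:
    tags = json.load(resp)

print("Installed models:", [m["name"] for m in tags.get("models", [])])
```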
A sleek and intuitive desktop application that brings the power of open-source language models to your fingertips.
Python-Ollama GUI provides a user-friendly interface to interact with various AI models through Ollama, featuring real-time response streaming, adjustable parameters, and support for models ranging from the lightweight Llama 3.2 (3B) to the more powerful Mistral (24B).
Perfect for developers, writers, and AI enthusiasts who want to harness AI capabilities without dealing with command-line interfaces.
(Add a screenshot of your application)
Features:

- 🎯 Clean and intuitive graphical user interface
- 🔄 Real-time streaming responses
- 🎛️ Adjustable parameters (temperature, max tokens)
- 📝 Support for system messages and user prompts
- 🛑 Ability to stop generation mid-stream
- 🎨 Markdown-style formatting in responses
- 📊 Multiple model support:
  - Llama 3.2 (3B)
  - DeepSeek (7B)
  - Gemma 2 (7B)
  - Phi4 (14B)
  - Mistral (24B)
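The dropdown labels above presumably correspond to Ollama model tags. The mapping below is a hypothetical illustration; the exact tag names are assumptions, so verify them against `ollama list` on your server:

```python
# Hypothetical mapping from GUI labels to Ollama model tags.
# These tag names are assumptions; check `ollama list` for what is installed.
MODELS = {
    "Llama 3.2 (3B)": "llama3.2:3b",
    "DeepSeek (7B)": "deepseek-r1:7b",
    "Gemma 2 (7B)": "gemma2",
    "Phi4 (14B)": "phi4:14b",
    "Mistral (24B)": "mistral-small:24b",
}
```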
Requirements:

- Python 3.6+
- Ollama server running locally or on a remote machine
- Required Python packages (see requirements.txt)
Installation:

- Clone this repository:

  ```
  git clone https://github.com/yourusername/Python-Ollama-Public.git
  cd Python-Ollama-Public
  ```
- Install required dependencies:

  ```
  pip install -r requirements.txt
  ```
- Create a `.env` file based on the provided `.env-example`:

  ```
  cp .env-example .env
  ```
- Edit the `.env` file with your Ollama server details:

  ```
  API_IP=127.0.0.1   # Use your Ollama server IP
  PORT=11434         # Default Ollama port
  ```
Usage:

- Start the application:

  ```
  python main.py
  ```
- Configure your generation settings:
  - Select a model from the dropdown menu
  - Adjust temperature (higher = more creative, lower = more focused)
  - Set maximum tokens for response length
  - Enter a system message to guide the AI's behavior
  - Type your prompt in the user message field
- Click "Generate" to start text generation
- Use the "Stop" button to halt generation at any time
- Generated text appears in the response area with markdown formatting (see the sketch below for how streaming and stopping map to Ollama's API)
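For reference, this is roughly how streaming generation with a stop control can be implemented against Ollama's documented `/api/generate` endpoint. It is a minimal sketch using the `requests` library, not the application's actual code; the `stop_requested` flag and the default model tag stand in for the GUI's Stop button and dropdown selection:

```python
import json
import requests

stop_requested = False  # in the GUI, the Stop button would set this to True

def generate(prompt, system="", model="llama3.2:3b",
             temperature=0.7, max_tokens=512,
             base_url="http://127.0.0.1:11434"):
    """Stream response tokens from Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "system": system,
        "stream": True,
        # num_predict is Ollama's option for limiting response length
        "options": {"temperature": temperature, "num_predict": max_tokens},
    }
    with requests.post(f"{base_url}/api/generate", json=payload,
                       stream=True, timeout=60) as resp:
        resp.raise_for_status()
        # Streaming responses arrive as newline-delimited JSON chunks
        for line in resp.iter_lines():
            if stop_requested:  # halt generation mid-stream
                break
            if line:
                chunk = json.loads(line)
                yield chunk.get("response", "")
                if chunk.get("done"):
                    break

for token in generate("Why is the sky blue?"):
    print(token, end="", flush=True)
```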
The application can be configured through the `.env` file:

- `API_IP`: IP address of your Ollama server
- `PORT`: Port number of your Ollama server (default: 11434)
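As an illustration, these two values might be read with the python-dotenv package (an assumption; the app's actual loading code may differ) and combined into the base URL used for API calls:

```python
import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the current directory

API_IP = os.getenv("API_IP", "127.0.0.1")
PORT = os.getenv("PORT", "11434")

BASE_URL = f"http://{API_IP}:{PORT}"
print(BASE_URL)  # e.g. http://127.0.0.1:11434
```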
Contributions are welcome! Please feel free to submit a Pull Request.
License: [Add your chosen license here]
Acknowledgments:

- Built with Python and Tkinter
- Powered by Ollama
- Uses various open-source language models