This repository contains the source code handling web API calls to LLM providers, such as OpenAI's GPT models.
- "/test" Route - Route used for connection testing
- "/api_call" Route - Route used for processing the textual information from the request. The route takes 2 arguments inside the POST request: "text": "The process description in string format" and "api_key": "OPENAI (not AzureOpenAI!) key". #ADD THE RETURN FORMAT ACCORDING TO THE PARSER
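As a minimal sketch of how a client could exercise the "/api_call" route from Python (standard library only; the base URL is an assumption matching the port mapping used later in this README - adjust it to your deployment):

```python
import json
import urllib.request

# Assumed base URL; change host/port to match where the app is running.
BASE_URL = "http://localhost:4000"

def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Build the POST request for /api_call with the two documented fields."""
    payload = json.dumps({"text": text, "api_key": api_key}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/api_call",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def call_api(text: str, api_key: str) -> str:
    """Send the request and return the raw response body as text."""
    with urllib.request.urlopen(build_request(text, api_key)) as resp:
        return resp.read().decode("utf-8")
```

The helper only covers the request side; the response format is still to be documented (see the TODO above).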
Install WSL as instructed here: https://learn.microsoft.com/en-us/windows/wsl/install - The recommended distribution is Ubuntu 22.04.
Install Docker as instructed here: https://docs.docker.com/desktop/wsl/
To set up the local environment without Docker, use these commands:
- Create local environment:
From project root folder use:
python -m venv venv
source venv/bin/activate
- Navigate into the app/backend folder:
cd app/backend
- Install the requirements:
pip install -r requirements.txt
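After installing, you can sanity-check that the packages from requirements.txt are importable. A small sketch (the helper name `is_installed` is hypothetical, not part of this repo):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the module can be imported, without actually importing it."""
    return importlib.util.find_spec(module_name) is not None

# Example: check a dependency you expect from requirements.txt, e.g. Flask.
# print(is_installed("flask"))
```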
To run the project as a Docker image, navigate to the backend directory and run the following commands:
Build the image (usually needed only once):
docker build -t my_flask_app .
Run the app:
docker run -p 4000:5000 my_flask_app
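The `-p 4000:5000` flag maps host port 4000 to container port 5000 (Flask's default), so the app is reached via port 4000 on the host. A small sketch for checking whether that port is accepting connections (the helper is illustrative, not part of this repo):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open("localhost", 4000) should be True once the container runs.
```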
Before you start testing the endpoint, make sure the app is running. If you are not sure how to run the app, please refer to the previous section.
Open Postman and send a GET request to the following URL:
http://localhost:4000/test_connection
Open a terminal and send a GET request to the following URL:
curl http://localhost:4000/test_connection
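Alternatively, the same check can be scripted in Python with the standard library (the URL mirrors the curl example above; `check_connection` is an illustrative helper, not part of this repo):

```python
import urllib.error
import urllib.request

def check_connection(url: str = "http://localhost:4000/test_connection") -> bool:
    """Return True if the endpoint answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```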
First install the requirements, see section "Setting Up Your Local Environment" for more information.
To run all the tests, use the following command:
coverage run -m unittest discover unittest
To see the coverage report, use the following command:
coverage report -m
If you want to see the coverage report in html format, use the following command:
coverage html
Then navigate to the htmlcov directory and open the index.html file in a browser.
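The repo's actual test modules are not shown here, but as a minimal illustration of the unittest style that `discover` picks up (both the helper `normalize_text` and the test case are hypothetical examples):

```python
import unittest

def normalize_text(text: str) -> str:
    """Hypothetical helper: trim surrounding whitespace from a process description."""
    return text.strip()

class TestNormalizeText(unittest.TestCase):
    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_text("  hello  "), "hello")

    def test_leaves_clean_input_unchanged(self):
        self.assertEqual(normalize_text("hello"), "hello")

if __name__ == "__main__":
    unittest.main()
```

Any file matching the default `test*.py` pattern in the discovery directory is collected automatically.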