Merge pull request #3 from Endlessflow/dev
Dev
Endlessflow authored Mar 10, 2024
2 parents e140309 + c895258 commit 493b55c
Showing 11 changed files with 406 additions and 0 deletions.
24 changes: 24 additions & 0 deletions .github/workflows/test-on-pr.yml
@@ -0,0 +1,24 @@
name: Python application test on PR

on:
  pull_request:
    branches: [ main, dev ]

jobs:
  test:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    - name: Set up Python 3.x
      uses: actions/setup-python@v3
      with:
        python-version: '3.x'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Run unittest discovery
      run: |
        python -m unittest discover -s . -p "test_*.py"
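
The workflow discovers any module matching `test_*.py` in the repository root. The PR's own test files are not shown in this diff, but a minimal test module that this discovery step would pick up could look like the sketch below (it assumes the Flask app defined in `app.py` and that the `templates/index.html` it renders is present; the file and class names are illustrative):

```python
# test_routes.py -- illustrative sketch, not part of this diff
import unittest

from app import app


class HomeRouteTest(unittest.TestCase):
    def setUp(self):
        # Flask's built-in test client exercises routes without starting a server.
        self.client = app.test_client()

    def test_home_returns_ok(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)


if __name__ == "__main__":
    unittest.main()
```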
86 changes: 86 additions & 0 deletions OpenAIChat.py
@@ -0,0 +1,86 @@
import os
from openai import OpenAI

class OpenAIChat:
    def __init__(self, api_key: str = os.getenv("OPENAI_API_KEY"), model: str = "gpt-3.5-turbo"):
        self.client = OpenAI(
            api_key=api_key
        )
        self.model = model
        self.templates = {}

    def set_template(self, name: str, template: str, temperature: float = 0.7):
        """
        Set a template with placeholders and an optional temperature.
        """
        self.templates[name] = {
            "template": template,
            "temperature": temperature
        }

    def load_template_from_file(self, file_path: str, temperature: float = 0.7):
        """
        Load a template from a file; the file name (without extension) becomes the template name.
        """
        if not os.path.exists(file_path):
            raise FileNotFoundError(f"No file found at: {file_path}")

        name = os.path.splitext(os.path.basename(file_path))[0]
        with open(file_path, 'r') as file:
            template = file.read()

        self.set_template(name, template, temperature)

    def get_response(self, template_name: str, verbose: bool = False, **kwargs) -> str:
        """
        Fill the named template with the provided field values and get a response from the API.
        """
        if template_name not in self.templates:
            raise ValueError(f"No template found with the name: {template_name}")

        template = self.templates[template_name]["template"]
        temperature = self.templates[template_name]["temperature"]
        filled_template = template.format(**kwargs)

        # Create a system message to initiate the conversation
        messages = [{"role": "system", "content": "You are a helpful assistant."},
                    {"role": "user", "content": filled_template}]

        completion = self.client.chat.completions.create(
            model=self.model,
            messages=messages,
            temperature=temperature
        )

        response = completion.choices[0].message.content

        if verbose:
            print("Filled Template:\n", filled_template)
            print("--------------------------------------------------------------")
            print("API Response:\n", response)
            print("===============================================================")
            print("===============================================================")

        return response


"""
# EXAMPLE USE CASE
STARTING_CONTENT = ""
if __name__ == "__main__":
    chat = OpenAIChat()
    # Load templates from files
    chat.load_template_from_file("prompts/prompt2.txt")
    chat.load_template_from_file("prompts/prompt3.txt")
    # Get a response from the API by filling the template
    extracted_info = chat.get_response("prompt2", verbose=True, reference_material=STARTING_CONTENT)
    # Get another response from the API by filling another template
    flashcards = chat.get_response("prompt3", verbose=True, text_to_transform=extracted_info)
"""
57 changes: 57 additions & 0 deletions README.md
@@ -1,3 +1,60 @@
# The vision

To develop a simple, yet powerful tool that processes user-input text to generate flashcards for study purposes, with an option to export these in a format compatible with Anki, a popular flashcard application.

# Installation Guidelines

### Step 1: Setting up the Environment Variable

Before installing the application, you need to set up an environment variable for the OpenAI API key. This key allows our tool to communicate with OpenAI's services.

- **For Windows Users:**

1. Open the Start Search, type in "env", and choose "Edit the system environment variables".
2. In the System Properties window, click on the "Environment Variables..." button.
3. Under "User variables," click on "New..." to create a new environment variable.
4. For "Variable name," enter `OPENAI_API_KEY`.
5. For "Variable value," paste your OpenAI API key.
6. Click "OK" and apply the changes.

- **For macOS/Linux Users:**
1. Open your terminal.
2. Edit your shell profile file (e.g., `~/.bash_profile`, `~/.zshrc`, etc.) by typing `nano ~/.bash_profile` (replace `.bash_profile` with your specific file).
3. Add the following line at the end of the file: `export OPENAI_API_KEY='your_api_key_here'`
4. Press CTRL + X to close, then Y to save changes, and Enter to confirm.
5. To apply the changes, type `source ~/.bash_profile` (replace `.bash_profile` with your specific file).

Please replace `'your_api_key_here'` with your actual OpenAI API key.
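
Before moving on, you can confirm the variable is visible to Python with a quick check (a throwaway snippet, not part of the repository):

```python
import os

# Prints whether the key is visible to the current environment.
print("OPENAI_API_KEY is set" if os.getenv("OPENAI_API_KEY") else "OPENAI_API_KEY is NOT set")
```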

### Step 2: Installing the Tool

With your environment variable set, the next steps involve setting up the tool itself, which is built on Flask, a lightweight WSGI web application framework in Python.

1. **Clone the Repository**
Start by cloning the repository to your local machine and moving into the project directory.

```bash
git clone https://github.com/Endlessflow/flashcard-generator.git
cd flashcard-generator
```

2. **Install Python Requirements**
Ensure you have Python installed on your system. The pinned dependencies (Flask 3.x, among others) require Python 3.8 or newer. You can check your Python version by running `python --version` in your terminal or command prompt.

Next, install the required Python packages using pip:

```bash
pip install -r requirements.txt
```

3. **Running the Flask Application**
Finally, to run the Flask application, use the following command from the terminal or command prompt:

```bash
flask run
```

This command starts the Flask web server. You should see output indicating the server is running and listening on a local port (usually `127.0.0.1:5000`). You can now open a web browser and navigate to this address to interact with the tool.
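
Once the server is running, you can also exercise the flashcard endpoint directly. Below is a minimal sketch using the `requests` package (not listed in `requirements.txt`, so install it separately if you want to try this); the URL, payload shape, and response key come from `app.py`:

```python
import requests

# POST a paragraph to the running Flask app and print the generated flashcards.
resp = requests.post(
    "http://127.0.0.1:5000/generate_flashcards",
    json={"text": "The CPU scheduler decides which ready process runs next."},
)
print(resp.json()["output"])
```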

### Step 3: Enjoy

This should be all you need. Happy studying!
30 changes: 30 additions & 0 deletions app.py
@@ -0,0 +1,30 @@
from flask import Flask, render_template
from flask import request, jsonify
from flask_cors import CORS

from OpenAIChat import OpenAIChat

app = Flask(__name__)
CORS(app)

@app.route('/')
def home():
    return render_template('index.html')

@app.route('/generate_flashcards', methods=['POST'])
def generate_flashcards():
    data = request.get_json()
    input_text = data['text']

    chat = OpenAIChat(model="gpt-4-turbo-preview")

    # Load templates from files
    chat.load_template_from_file("prompts/simple_flashcard.txt", 0.4)

    # Get a response from the API by filling the template
    response = chat.get_response("simple_flashcard", verbose=False, content=input_text)

    return jsonify({'output': response})

if __name__ == '__main__':
    app.run(debug=True)
6 changes: 6 additions & 0 deletions prompts/simple_flashcard.txt
@@ -0,0 +1,6 @@
You are to act as a professional flashcard maker. Your goal is to engineer challenging and useful flashcards from the content given to you. Your reply should take the form of a JSON array of objects, each with `question` and `answer` attributes.

Content:
"""
{content}
"""
24 changes: 24 additions & 0 deletions requirements.txt
@@ -0,0 +1,24 @@
annotated-types==0.6.0
anyio==4.3.0
blinker==1.7.0
certifi==2024.2.2
click==8.1.7
colorama==0.4.6
distro==1.9.0
exceptiongroup==1.2.0
Flask==3.0.2
Flask-Cors==4.0.0
h11==0.14.0
httpcore==1.0.4
httpx==0.27.0
idna==3.6
itsdangerous==2.1.2
Jinja2==3.1.3
MarkupSafe==2.1.5
openai==1.13.3
pydantic==2.6.3
pydantic_core==2.16.3
sniffio==1.3.1
tqdm==4.66.2
typing_extensions==4.10.0
Werkzeug==3.0.1
49 changes: 49 additions & 0 deletions scope.md
@@ -0,0 +1,49 @@
# Scope and Project Requirements

## Project Overview

The objective of this project is to create a simplified web application leveraging the capabilities of OpenAI's language models to generate flashcards from user-provided text. The application aims to facilitate learning by enabling users to extract the key information from a text and convert it into a question-and-answer format suitable for memorization and study.

## Core Functionality

### Flashcard Generation

- **User Input Processing**: Accept textual input from the user to be transformed into flashcards.
- **Text Cleaning**: Implement a preprocessing step to clean and normalize the input text.
- **Text Processing and Flashcard Creation**: Process the cleaned text to extract salient points and generate flashcards in a question-and-answer format.

### Optional Feature

- **Export to Anki-Compatible Format**: If time permits, add functionality to export generated flashcards into a CSV format compatible with Anki, focusing on simplicity with a question and answer format.
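
A minimal sketch of what that export could look like, assuming the generated flashcards have already been parsed into `question`/`answer` dicts (the function name and output path are illustrative; Anki's CSV import only needs one front/back pair per row):

```python
import csv

def export_to_anki_csv(flashcards, path="flashcards.csv"):
    """Write flashcards as a two-column CSV (question, answer) that Anki can import."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for card in flashcards:
            writer.writerow([card["question"], card["answer"]])

# export_to_anki_csv([{"question": "What is a mutex?", "answer": "A lock that enforces mutual exclusion."}])
```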

## Technology Stack

- **Backend Framework**: Flask will be used for its simplicity and suitability for beginners, offering a gentle learning curve.
- **LLM Integration**: The OpenAI API will be used directly for language model capabilities, prioritizing straightforward integration for text processing.
- **Frontend Development**: Basic HTML, CSS, and potentially minimal JavaScript will be employed to handle user interactions and display the generated flashcards.

## User Stories

- **Scenario**: A user, Mark, is studying concurrency and wishes to remember a specific paragraph from a tutorial.
- **Action**: Mark inputs the paragraph into the application and clicks the process button.
- **Outcome**: The application processes the text, and the interface displays generated flashcards in a human-readable format, allowing Mark to review and manually transfer selected flashcards into Anki.

## Project Structure

### Frontend

- The user interface will adopt a two-column layout, with the left column designated for text input and the right column for displaying generated flashcards.

### Backend

- **Controller**: A simple router will direct requests to a facade controller responsible for the "process text" operation, returning flashcard data as a JSON list.
- **Agents**: The backend will feature a series of agents, each tasked with a specific function in the text processing chain—text cleaning, information extraction for rote memorization, and flashcard generation.
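
A minimal sketch of how such an agent chain could be wired on top of `OpenAIChat` (the template file names and the `{content}` placeholder are hypothetical, not taken from the repository):

```python
from OpenAIChat import OpenAIChat

def process_text(raw_text: str) -> str:
    """Run user input through a cleaning -> extraction -> flashcard chain."""
    chat = OpenAIChat()
    # Each agent is simply a prompt template applied to the previous stage's output.
    for name in ("clean_text", "extract_facts", "make_flashcards"):
        chat.load_template_from_file(f"prompts/{name}.txt")  # hypothetical templates with a {content} placeholder

    cleaned = chat.get_response("clean_text", content=raw_text)
    facts = chat.get_response("extract_facts", content=cleaned)
    return chat.get_response("make_flashcards", content=facts)
```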

## Testing

- **Route Testing**: Initial tests will verify the functionality and robustness of application routes.
- **Flexibility for Additional Tests**: Further testing will be determined based on project progress and will adapt to development needs.



*This document was generated with the help of our good friend chatGPT for improved readability*
38 changes: 38 additions & 0 deletions static/css/style.css
@@ -0,0 +1,38 @@
/* Basic styling for simplicity */
body,
html {
    font-family: Arial, sans-serif;
    padding: 20px;
    margin: 0;
    background-color: #f0f0f0;
}

#app {
    max-width: 800px;
    margin: auto;
    background: white;
    padding: 20px;
    box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}

textarea {
    width: 100%;
    padding: 10px;
    margin-bottom: 20px;
    border: 1px solid #ddd;
    min-height: 100px;
    resize: vertical;
    box-sizing: border-box;
}

button {
    padding: 10px 20px;
    background-color: #0056b3;
    color: white;
    border: none;
    cursor: pointer;
}

button:hover {
    background-color: #003d82;
}
45 changes: 45 additions & 0 deletions static/js/script.js
@@ -0,0 +1,45 @@
// Handles the form submission for the text input
document.getElementById('textForm').addEventListener('submit', function(e) {
    e.preventDefault();

    const userInput = document.getElementById('textInput').value;

    // Make an AJAX call to the backend to process the user input
    // no idea how to do this so bless chatGPT for the help
    fetch('/generate_flashcards', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ text: userInput })
    })
    .then(response => response.json())
    .then(data => {
        if (typeof(data.output) === 'string') {
            displayFlashcards(data.output);
        }
        else {
            displayFlashcards("Error: The server did not return a valid response (backend error).");
        }
    })
    .catch((error) => {
        console.error('Error:', error);
    });
});


function displayFlashcards(flashcards) {
    const flashcardsOutput = document.getElementById('flashcardsOutput');
    flashcardsOutput.value = flashcards; // Assuming 'flashcards' is a string
}


// Handles the `copy to clipboard` button clicks
document.getElementById('copyButton').addEventListener('click', function() {
    const flashcardsOutput = document.getElementById('flashcardsOutput');
    flashcardsOutput.select();
    navigator.clipboard.writeText(flashcardsOutput.value);
});