From 50a547b07019a1fe6b11ab930d3ea2723db4a3cc Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 14:12:27 -0400
Subject: [PATCH 1/7] Add scope and project requirements document
---
scope.md | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
create mode 100644 scope.md
diff --git a/scope.md b/scope.md
new file mode 100644
index 0000000..d495173
--- /dev/null
+++ b/scope.md
@@ -0,0 +1,49 @@
+# Scope and Project Requirements
+
+## Project Overview
+
+The objective of this project is to create a simplified web application that leverages OpenAI's language models to generate flashcards from user-provided text. The application aims to facilitate learning by enabling users to extract key information from a text and convert it into a question-and-answer format suitable for memorization and study.
+
+## Core Functionality
+
+### Flashcard Generation
+
+- **User Input Processing**: Accept textual input from the user to be transformed into flashcards.
+- **Text Cleaning**: Implement a preprocessing step to clean and normalize the input text.
+- **Text Processing and Flashcard Creation**: Process the cleaned text to extract salient points and generate flashcards in a question-and-answer format.
+
+### Optional Feature
+
+- **Export to Anki-Compatible Format**: If time permits, add functionality to export generated flashcards into a CSV format compatible with Anki, focusing on simplicity with a question and answer format.
+
+## Technology Stack
+
+- **Backend Framework**: Flask will be used for its simplicity and gentle learning curve, making it well suited for beginners.
+- **LLM Integration**: The OpenAI API will be used directly for language model capabilities, prioritizing straightforward integration for text processing.
+- **Frontend Development**: Basic HTML, CSS, and potentially minimal JavaScript will be employed to handle user interactions and display the generated flashcards.
+
+## User Stories
+
+- **Scenario**: A user, Mark, is studying concurrency and wishes to remember a specific paragraph from a tutorial.
+- **Action**: Mark inputs the paragraph into the application and clicks the process button.
+- **Outcome**: The application processes the text, and the interface displays generated flashcards in a human-readable format, allowing Mark to review and manually transfer selected flashcards into Anki.
+
+## Project Structure
+
+### Frontend
+
+- The user interface will adopt a two-column layout, with the left column designated for text input and the right column for displaying generated flashcards.
+
+### Backend
+
+- **Controller**: A simple router will direct requests to a facade controller responsible for the "process text" operation, returning flashcard data as a JSON list.
+- **Agents**: The backend will feature a series of agents, each tasked with a specific function in the text processing chain—text cleaning, information extraction for rote memorization, and flashcard generation.
+
+## Testing
+
+- **Route Testing**: Initial tests will verify the functionality and robustness of application routes.
+- **Flexibility for Additional Tests**: Further testing will be determined based on project progress and will adapt to development needs.
+
+
+
+*This document was generated with the help of our good friend ChatGPT for improved readability.*
\ No newline at end of file
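[Editor's note] The controller-and-agents design described in scope.md can be sketched as plain functions, one per stage. This is an illustrative outline only: none of these names exist in the repo, and each agent's logic is a trivial stand-in for the real text-cleaning, extraction, and generation steps.

```python
import json

# Hypothetical sketch of the scope.md backend design: a facade controller
# running a chain of single-responsibility agents.

def clean_text(raw: str) -> str:
    """Text-cleaning agent: normalize whitespace."""
    return " ".join(raw.split())

def extract_points(text: str) -> list:
    """Information-extraction agent: naive split into sentences."""
    return [s.strip() for s in text.split(".") if s.strip()]

def make_flashcards(points: list) -> list:
    """Flashcard-generation agent: wrap each point in a Q/A pair."""
    return [{"question": f"Recall: {p}?", "answer": p} for p in points]

def process_text(raw: str) -> str:
    """Facade controller: run the agent chain, return flashcards as a JSON list."""
    return json.dumps(make_flashcards(extract_points(clean_text(raw))))

cards = json.loads(process_text("GIL limits threads.  Use processes."))
```

In the real system the middle agents would call the language model; the point of the sketch is only the shape of the chain and the JSON-list return contract.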
From 28372cdbb6a534c98d8d783533c9d4a81c33eb9b Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 15:45:08 -0400
Subject: [PATCH 2/7] Add a basic Flask frontend, wired up to a simple route
 for the system operation of generating flashcards. --- Note to self: Flask
 is so simple to set up. Why am I only discovering this today? AAAAHHH
---
app.py | 19 +++++++++++++++++++
static/css/style.css | 38 +++++++++++++++++++++++++++++++++++++
static/js/script.js | 45 ++++++++++++++++++++++++++++++++++++++++++++
templates/index.html | 28 +++++++++++++++++++++++++++
4 files changed, 130 insertions(+)
create mode 100644 app.py
create mode 100644 static/css/style.css
create mode 100644 static/js/script.js
create mode 100644 templates/index.html
diff --git a/app.py b/app.py
new file mode 100644
index 0000000..91a2494
--- /dev/null
+++ b/app.py
@@ -0,0 +1,19 @@
+from flask import Flask, render_template
+from flask import request, jsonify
+
+app = Flask(__name__)
+
+@app.route('/')
+def home():
+ return render_template('index.html')
+
+@app.route('/generate_flashcards', methods=['POST'])
+def generate_flashcards():
+ data = request.get_json()
+ input_text = data['text']
+
+ # for now just return the input doubled
+ return jsonify({'output': input_text * 2})
+
+if __name__ == '__main__':
+ app.run(debug=True)
diff --git a/static/css/style.css b/static/css/style.css
new file mode 100644
index 0000000..318e696
--- /dev/null
+++ b/static/css/style.css
@@ -0,0 +1,38 @@
+/* Basic styling for simplicity */
+body,
+html {
+ font-family: Arial, sans-serif;
+ padding: 20px;
+ margin: 0;
+ background-color: #f0f0f0;
+}
+
+#app {
+ max-width: 800px;
+ margin: auto;
+ background: white;
+ padding: 20px;
+ box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
+}
+
+textarea {
+ width: 100%;
+ padding: 10px;
+ margin-bottom: 20px;
+ border: 1px solid #ddd;
+ min-height: 100px;
+ resize: vertical;
+    box-sizing: border-box;
+}
+
+button {
+ padding: 10px 20px;
+ background-color: #0056b3;
+ color: white;
+ border: none;
+ cursor: pointer;
+}
+
+button:hover {
+ background-color: #003d82;
+}
diff --git a/static/js/script.js b/static/js/script.js
new file mode 100644
index 0000000..4da2f2f
--- /dev/null
+++ b/static/js/script.js
@@ -0,0 +1,45 @@
+// Handles the form submission for the text input
+document.getElementById('textForm').addEventListener('submit', function(e) {
+ e.preventDefault();
+
+ const userInput = document.getElementById('textInput').value;
+
+
+ // Make an AJAX call to the backend to process the user input
+ // no idea how to do this so bless chatGPT for the help
+ fetch('/generate_flashcards', {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({ text: userInput })
+ })
+ .then(response => response.json())
+ .then(data => {
+ if (typeof(data.output) === 'string'){
+ displayFlashcards(data.output)
+ }
+ else{
+      displayFlashcards("Error: the server did not return a valid response.")
+ }
+
+ })
+ .catch((error) => {
+ console.error('Error:', error);
+ });
+
+});
+
+
+function displayFlashcards(flashcards) {
+ const flashcardsOutput = document.getElementById('flashcardsOutput');
+ flashcardsOutput.value = flashcards; // Assuming 'flashcards' is a string
+}
+
+
+// Handles the `copy to clipboard` button clicks
+document.getElementById('copyButton').addEventListener('click', function() {
+ const flashcardsOutput = document.getElementById('flashcardsOutput');
+ flashcardsOutput.select();
+ navigator.clipboard.writeText(flashcardsOutput.value);
+});
diff --git a/templates/index.html b/templates/index.html
new file mode 100644
index 0000000..8c1a084
--- /dev/null
+++ b/templates/index.html
@@ -0,0 +1,28 @@
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>AI Flashcard Generator</title>
+    <link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
+</head>
+
+<body>
+    <div id="app">
+        <h1>AI Flashcard Generator</h1>
+
+        <form id="textForm">
+            <textarea id="textInput" placeholder="Paste your text here..."></textarea>
+            <button type="submit">Generate Flashcards</button>
+        </form>
+
+        <h2>Flashcards:</h2>
+        <textarea id="flashcardsOutput" readonly></textarea>
+        <button id="copyButton">Copy to Clipboard</button>
+    </div>
+
+    <script src="{{ url_for('static', filename='js/script.js') }}"></script>
+</body>
+</html>
\ No newline at end of file
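[Editor's note] The contract between script.js's fetch() call and the /generate_flashcards route is a small JSON envelope. A minimal sketch of that round trip using only the stdlib json module, mimicking the placeholder "double the input" behavior (the real route lives in app.py above):

```python
import json

# What the fetch() call in script.js sends to the route ...
request_body = json.dumps({"text": "The GIL serializes bytecode."})

# ... and what the placeholder route currently sends back: the input, doubled.
payload = json.loads(request_body)
response_body = json.dumps({"output": payload["text"] * 2})

# script.js then reads data.output and hands it to displayFlashcards().
result = json.loads(response_body)["output"]
```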
From dee94204122605f86cc52cee625a9113f903d210 Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 16:03:30 -0400
Subject: [PATCH 3/7] Add simple tests for now, plus a requirements.txt and
 CORS support, because apparently cross-origin requests can be an issue
---
app.py | 2 ++
requirements.txt | 24 ++++++++++++++++++++++++
test_app.py | 25 +++++++++++++++++++++++++
3 files changed, 51 insertions(+)
create mode 100644 requirements.txt
create mode 100644 test_app.py
diff --git a/app.py b/app.py
index 91a2494..12e5671 100644
--- a/app.py
+++ b/app.py
@@ -1,7 +1,9 @@
from flask import Flask, render_template
from flask import request, jsonify
+from flask_cors import CORS
app = Flask(__name__)
+CORS(app)
@app.route('/')
def home():
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..0ab6e1b
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,24 @@
+annotated-types==0.6.0
+anyio==4.3.0
+blinker==1.7.0
+certifi==2024.2.2
+click==8.1.7
+colorama==0.4.6
+distro==1.9.0
+exceptiongroup==1.2.0
+Flask==3.0.2
+Flask-Cors==4.0.0
+h11==0.14.0
+httpcore==1.0.4
+httpx==0.27.0
+idna==3.6
+itsdangerous==2.1.2
+Jinja2==3.1.3
+MarkupSafe==2.1.5
+openai==1.13.3
+pydantic==2.6.3
+pydantic_core==2.16.3
+sniffio==1.3.1
+tqdm==4.66.2
+typing_extensions==4.10.0
+Werkzeug==3.0.1
diff --git a/test_app.py b/test_app.py
new file mode 100644
index 0000000..1a50c16
--- /dev/null
+++ b/test_app.py
@@ -0,0 +1,25 @@
+import unittest
+import app
+
+class TestApp(unittest.TestCase):
+
+ # Check that the home page returns a 200 status code
+ def test_root_status_code(self):
+ tester = app.app.test_client(self)
+ response = tester.get('/', content_type='html/text')
+ self.assertEqual(response.status_code, 200)
+
+ # Check that the generate_flashcards route returns a 200 status code
+ def test_generate_flashcards_status_code(self):
+ tester = app.app.test_client(self)
+ response = tester.post('/generate_flashcards', json={'text': 'test'}, content_type='application/json')
+ self.assertEqual(response.status_code, 200)
+
+ # Check that the generate_flashcards route doubles the input
+ def test_generate_flashcards_output(self):
+ tester = app.app.test_client(self)
+ response = tester.post('/generate_flashcards', json={'text': 'test'}, content_type='application/json')
+ self.assertEqual(response.json['output'], 'testtest')
+
+if __name__ == '__main__':
+ unittest.main(verbosity=2)
\ No newline at end of file
From 0ed768f3b8a280d9fbe60dca5d3106fb77f07615 Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 16:28:21 -0400
Subject: [PATCH 4/7] Add a GitHub Actions workflow to run tests on PRs; just
 started learning about GitHub Actions yesterday, so hopefully this works
---
.github/workflows/test-on-pr.yml | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
create mode 100644 .github/workflows/test-on-pr.yml
diff --git a/.github/workflows/test-on-pr.yml b/.github/workflows/test-on-pr.yml
new file mode 100644
index 0000000..832b93a
--- /dev/null
+++ b/.github/workflows/test-on-pr.yml
@@ -0,0 +1,24 @@
+name: Python application test on PR
+
+on:
+ pull_request:
+ branches: [ main, dev ]
+
+jobs:
+ test:
+
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@v3
+ - name: Set up Python 3.x
+ uses: actions/setup-python@v3
+ with:
+ python-version: '3.x'
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install -r requirements.txt
+ - name: Run unittest discovery
+ run: |
+ python -m unittest discover -s . -p "test_*.py"
From 1728aff3d3785deb067c2852714287cb7fb75d9a Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 17:03:15 -0400
Subject: [PATCH 5/7] Implement a naive generation solution; for now it does
 not parse the answer and simply returns it as-is
---
OpenAIChat.py | 86 ++++++++++++++++++++++++++++++++++++
app.py | 13 +++++-
prompts/simple_flashcard.txt | 6 +++
3 files changed, 103 insertions(+), 2 deletions(-)
create mode 100644 OpenAIChat.py
create mode 100644 prompts/simple_flashcard.txt
diff --git a/OpenAIChat.py b/OpenAIChat.py
new file mode 100644
index 0000000..999bfe8
--- /dev/null
+++ b/OpenAIChat.py
@@ -0,0 +1,86 @@
+import os
+from openai import OpenAI
+
+class OpenAIChat:
+    def __init__(self, api_key: str = None, model: str = "gpt-3.5-turbo"):
+        # Resolve the key at call time; a default of os.getenv(...) would be
+        # evaluated once at import time and miss later environment changes.
+        self.client = OpenAI(
+            api_key=api_key or os.getenv("OPENAI_API_KEY")
+        )
+        self.model = model
+        self.templates = {}
+
+ def set_template(self, name: str, template: str, temperature: float = 0.7):
+ """
+ Set the template with placeholders and optional parameters.
+ """
+ self.templates[name] = {
+ "template": template,
+ "temperature": temperature
+ }
+
+ def load_template_from_file(self, file_path: str, temperature: float = 0.7):
+ """
+ Load the template from a file and set it with the specified name and temperature.
+ """
+ if not os.path.exists(file_path):
+ raise FileNotFoundError(f"No file found at: {file_path}")
+
+ name = os.path.splitext(os.path.basename(file_path))[0]
+ with open(file_path, 'r') as file:
+ template = file.read()
+
+ self.set_template(name, template, temperature)
+
+ def get_response(self, template_name: str, verbose: bool = False, **kwargs) -> str:
+ """
+ Fill the template with provided field values and get a response from the API.
+ """
+ if template_name not in self.templates:
+ raise ValueError(f"No template found with the name: {template_name}")
+
+ template = self.templates[template_name]["template"]
+ temperature = self.templates[template_name]["temperature"]
+ filled_template = template.format(**kwargs)
+
+ # Create a system message to initiate the conversation
+ messages = [{"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": filled_template}]
+
+ completion = self.client.chat.completions.create(
+ model=self.model,
+ messages=messages,
+ temperature=temperature
+ )
+
+ response = completion.choices[0].message.content
+
+ if verbose:
+ print("Filled Template:\n", filled_template)
+ print("--------------------------------------------------------------")
+ print("API Response:\n", response)
+ print("===============================================================")
+ print("===============================================================")
+
+
+ return response
+
+
+
+"""
+# EXAMPLE USECASE
+
+STARTING_CONTENT = ""
+
+if __name__ == "__main__":
+ chat = OpenAIChat()
+
+ # Load templates from files
+ chat.load_template_from_file("prompts/prompt2.txt")
+ chat.load_template_from_file("prompts/prompt3.txt")
+
+ # Get a response from the API by filling the template
+ extracted_info = chat.get_response("prompt2", verbose=True, reference_material=STARTING_CONTENT)
+
+ # Get another response from the API by filling another template
+ flashcards = chat.get_response("prompt3", verbose=True, text_to_transform=extracted_info)
+"""
\ No newline at end of file
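[Editor's note] get_response fills a template's placeholders with Python's str.format before sending it to the API. A standalone illustration of just that step, using a shortened stand-in for the prompt file (note that any literal braces in a template would need doubling as {{ }}, or str.format raises an error):

```python
# Stand-in prompt text; the real template is prompts/simple_flashcard.txt.
template = 'Make flashcards from this content:\n"""\n{content}\n"""'

# Mirrors the template.format(**kwargs) call inside OpenAIChat.get_response.
filled = template.format(content="Mitochondria produce ATP.")
```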
diff --git a/app.py b/app.py
index 12e5671..3a3b316 100644
--- a/app.py
+++ b/app.py
@@ -2,6 +2,8 @@
from flask import request, jsonify
from flask_cors import CORS
+from OpenAIChat import OpenAIChat
+
app = Flask(__name__)
CORS(app)
@@ -14,8 +16,15 @@ def generate_flashcards():
data = request.get_json()
input_text = data['text']
- # for now just return the input doubled
- return jsonify({'output': input_text * 2})
+ chat = OpenAIChat(model="gpt-4-turbo-preview")
+
+ # Load templates from files
+ chat.load_template_from_file("prompts/simple_flashcard.txt", 0.4)
+
+ # Get a response from the API by filling the template
+ response = chat.get_response("simple_flashcard", verbose=False, content=input_text)
+
+ return jsonify({'output': response})
if __name__ == '__main__':
app.run(debug=True)
diff --git a/prompts/simple_flashcard.txt b/prompts/simple_flashcard.txt
new file mode 100644
index 0000000..d702824
--- /dev/null
+++ b/prompts/simple_flashcard.txt
@@ -0,0 +1,6 @@
+You are to act as a professional flashcard maker. Your goal is to engineer challenging and useful flashcards from the content given to you. Your reply should take the form of a JSON array of objects, each having `question` and `answer` attributes.
+
+Content:
+"""
+{content}
+"""
\ No newline at end of file
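[Editor's note] As the next commit message notes, the model's reply is currently returned unparsed. A hedged sketch of the parsing step that would come next; parse_flashcards is hypothetical (not in the repo) and assumes the model mostly honors the JSON-array instruction, with a fallback for replies wrapped in a ``` fence:

```python
import json

def parse_flashcards(reply: str) -> list:
    """Parse a model reply into a list of {question, answer} dicts.

    Models sometimes wrap JSON in a ``` fence, so strip one if present.
    Returns an empty list when the reply is not a valid JSON array.
    """
    text = reply.strip()
    if text.startswith("```"):
        # Keep only the content between the fences, dropping an optional
        # language tag such as ```json.
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    try:
        cards = json.loads(text)
    except json.JSONDecodeError:
        return []
    return cards if isinstance(cards, list) else []

cards = parse_flashcards('```json\n[{"question": "Q1", "answer": "A1"}]\n```')
```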
From c9cb52eb1acd5d513300ce033b5935c20ba18c5a Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 17:06:40 -0400
Subject: [PATCH 6/7] Forgot I needed to delete this test since we implemented
 the real generation
---
test_app.py | 6 ------
1 file changed, 6 deletions(-)
diff --git a/test_app.py b/test_app.py
index 1a50c16..2b01b73 100644
--- a/test_app.py
+++ b/test_app.py
@@ -15,11 +15,5 @@ def test_generate_flashcards_status_code(self):
response = tester.post('/generate_flashcards', json={'text': 'test'}, content_type='application/json')
self.assertEqual(response.status_code, 200)
- # Check that the generate_flashcards route doubles the input
- def test_generate_flashcards_output(self):
- tester = app.app.test_client(self)
- response = tester.post('/generate_flashcards', json={'text': 'test'}, content_type='application/json')
- self.assertEqual(response.json['output'], 'testtest')
-
if __name__ == '__main__':
unittest.main(verbosity=2)
\ No newline at end of file
From c895258bead7aa26161771b1de80e80b6e141087 Mon Sep 17 00:00:00 2001
From: James <60270865+Endlessflow@users.noreply.github.com>
Date: Sun, 10 Mar 2024 17:17:15 -0400
Subject: [PATCH 7/7] Add install instructions
---
README.md | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 57 insertions(+)
diff --git a/README.md b/README.md
index 0cabb93..c3aa990 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,60 @@
# The vision
To develop a simple, yet powerful tool that processes user-input text to generate flashcards for study purposes, with an option to export these in a format compatible with Anki, a popular flashcard application.
+
+# Installation Guidelines
+
+### Step 1: Setting up the Environment Variable
+
+Before installing the application, you need to set up an environment variable for the OpenAI API key. This key allows our tool to communicate with OpenAI's services.
+
+- **For Windows Users:**
+
+ 1. Open the Start Search, type in "env", and choose "Edit the system environment variables".
+ 2. In the System Properties window, click on the "Environment Variables..." button.
+ 3. Under "User variables," click on "New..." to create a new environment variable.
+ 4. For "Variable name," enter `OPENAI_API_KEY`.
+ 5. For "Variable value," paste your OpenAI API key.
+ 6. Click "OK" and apply the changes.
+
+- **For macOS/Linux Users:**
+ 1. Open your terminal.
+ 2. Edit your shell profile file (e.g., `~/.bash_profile`, `~/.zshrc`, etc.) by typing `nano ~/.bash_profile` (replace `.bash_profile` with your specific file).
+ 3. Add the following line at the end of the file: `export OPENAI_API_KEY='your_api_key_here'`
+ 4. Press CTRL + X to close, then Y to save changes, and Enter to confirm.
+ 5. To apply the changes, type `source ~/.bash_profile` (replace `.bash_profile` with your specific file).
+
+Please replace `'your_api_key_here'` with your actual OpenAI API key.
+
+### Step 2: Installing the Tool
+
+With your environment variable set, the next steps involve setting up the tool itself, which is built on Flask, a lightweight WSGI web application framework in Python.
+
+1. **Clone the Repository**
+ Start by cloning the repository to your local machine.
+
+ ```bash
+ git clone https://github.com/Endlessflow/flashcard-generator.git
+ ```
+
+2. **Install Python Requirements**
+   Ensure you have Python installed on your system. This tool requires Python 3.8 or newer (Flask 3.x no longer supports older versions). You can check your Python version by running `python --version` in your terminal or command prompt.
+
+ Next, install the required Python packages using pip:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+3. **Running the Flask Application**
+ Finally, to run the Flask application, use the following command from the terminal or command prompt:
+
+ ```bash
+ flask run
+ ```
+
+ This command starts the Flask web server. You should see output indicating the server is running and listening on a local port (usually `127.0.0.1:5000`). You can now open a web browser and navigate to this address to interact with the tool.
+
+### Step 3: Enjoy
+
+This should be all you need. Happy studying!
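[Editor's note] A missing OPENAI_API_KEY only surfaces on the first generation request, deep inside the OpenAI client. A tiny illustrative preflight check (not part of the repo) that the app could run at startup to fail fast instead:

```python
import os

def check_api_key(env) -> bool:
    """Return True when OPENAI_API_KEY is present and non-empty in env."""
    return bool(env.get("OPENAI_API_KEY"))

# At startup, the app could call this with os.environ and abort with a
# pointer to Step 1 of the installation guide when it returns False.
ok = check_api_key(os.environ)
```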