A primitive NL to SQL chat feature #487

Open · wants to merge 7 commits into main
1 change: 1 addition & 0 deletions README.md
@@ -15,6 +15,7 @@ Your texting data never leaves your computer. We are proudly open-source for this reason.
- 🔍 filter by a word, friend, or time range
- 💯 sentiment analysis
- 🎁 "Your Year in Text" experience a.k.a iMessage Wrapped
- 💬 Chat with an LLM to query your data

## Download Left on Read for Mac

18 changes: 17 additions & 1 deletion app/README.md
@@ -116,9 +116,25 @@ You most likely have an old electron process running.

If you are getting a "Cannot find module" error, you likely forgot to install the packages. Be sure to run `yarn` in the `app/` directory.


## Support

<a href="https://www.buymeacoffee.com/leftonread" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

<a href="https://www.producthunt.com/posts/left-on-read?utm_source=badge-featured&utm_medium=badge&utm_souce=badge-left&#0045;on&#0045;read" target="_blank"><img src="https://api.producthunt.com/widgets/embed-image/v1/featured.svg?post_id=358899&theme=light" alt="Left&#0032;on&#0032;Read - iMessages&#0032;supercharged | Product Hunt" style="width: 250px; height: 54px;" width="250" height="54" /></a>

## LLM Chat (first go)

How it works:

1. The user enters a natural language query (e.g., "How many texts did I send to Mom?").
2. The query is passed to an LLM, which turns it into a SQL query to run against `core_main_table` (TODO: give the LLM more of the DB schema).
3. The generated SQL is executed, and the LLM synthesizes the query result and the original message into a readable response for the user (see the example below).
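
For example, given the schema described in the prompt, the question "How many texts did I send to Mom?" might plausibly come back from the model as something like this (illustrative only; actual model output varies):

```sql
SELECT COUNT(*) AS message_count
FROM core_main_table
WHERE LOWER(contact_name) = LOWER('Mom')
  AND is_from_me = 1;
```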

Most of the backend logic happens in `app/src/analysis/queries/RagEngine.ts`.

Notes:

- It does not yet distinguish messages in group chats from one-on-one chats, so counts can be misleading.
- You must use the exact spelling of your contact's name as it appears in the address book.
- It requires an OpenAI API key and internet access.
- Not all errors are handled gracefully (including, but not limited to, internet connectivity failures and malformed SQL generated by the LLM); some are, such as an invalid API key.
1 change: 1 addition & 0 deletions app/package.json
@@ -142,6 +142,7 @@
"node-machine-id": "^1.1.12",
"node-schedule": "^2.1.0",
"nodemailer": "^6.7.7",
"openai": "^4.28.4",
"path-browserify": "^1.0.1",
"react": "^18.1.0",
"react-chartjs-2": "^4.3.1",
116 changes: 116 additions & 0 deletions app/src/analysis/queries/RagEngine.ts
@@ -0,0 +1,116 @@
import OpenAI from 'openai';
import * as sqlite3 from 'sqlite3';

import * as sqlite3Wrapper from '../../utils/sqliteWrapper';

// A test function for me to understand the altered DB schema better :)
export async function printDBTableNames(
db: sqlite3.Database
): Promise<string[]> {
const q = `
SELECT
name
FROM
sqlite_master
WHERE
type='table'
ORDER BY
name
`;
  const rows = await sqlite3Wrapper.allP(db, q);
  // allP resolves with row objects like { name: '...' }; return just the names.
  return rows.map((row: any) => row.name);
}

// Put together a really hacky RAG pipeline...
export async function queryRAGEngine(
db: sqlite3.Database,
message: string,
key: string
): Promise<string> {
const openai = new OpenAI({
apiKey: key,
});

let q: string | null = null;

  // Step 1: ask the model to translate the user's question into SQL against a
  // minimal description of core_main_table's schema.
  let prompt = `
Write a query for a table called core_main_table with this schema:
contact_name,
text (which has the message's text),
date (a unix timestamp number in nanoseconds when the message was sent),
is_from_me (a boolean indicating if I was the sender of the message)

to answer the following query: ${message}
Please respond with only the raw unformatted SQL and no other text. If this is not possible, or it's hard to get a concrete result based on the schema, return 'Not Possible'
`;

try {
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo', // Change the model as per your requirement
messages: [{ role: 'system', content: prompt }],
temperature: 0.7,
max_tokens: 150,
});
q = response.choices[0].message.content;
console.log(response.choices[0]);
  } catch (error) {
    console.error(error);
    return 'An error occurred. Check your API key and try a new message.';
  }

  // The prompt invites the model to reply 'Not Possible'; bail out rather than
  // run that string as SQL.
  if (q?.trim() === 'Not Possible') {
    return "Sorry, that question can't be answered with the current schema.";
  }

  // Hacky fallback, used only if the model returned no content: treat the whole
  // message as a contact name and count their messages.
  const fallbackQuery = `
    SELECT COUNT(*) AS message_count
    FROM core_main_table
    WHERE LOWER(contact_name) = LOWER('${message}');
  `;

  const executedQuery = q ?? fallbackQuery;
  const queryResult = await sqlite3Wrapper.allP(db, executedQuery);

function isObject(value: any): value is Record<string, any> {
return (
typeof value === 'object' &&
value !== null &&
!Array.isArray(value) &&
!(value instanceof Date)
);
}
if (!isObject(queryResult[0])) {
console.log(queryResult[0]);
}
  // Guard against empty result sets: stringify null rather than undefined.
  const resultString = JSON.stringify(queryResult[0] ?? null);
  // Sanity check so you don't accidentally use too many tokens...
  if (resultString.length > 10000) {
    return 'An error occurred. Try a new message.';
  }
}

  // Step 2: ask the model to turn the raw query result into a readable answer.
  prompt = `
    Given this message from a user: ${message},
    this corresponding generated query over a database: ${executedQuery},
    and this result of the query ${resultString}:
    interpret the result of the query in plain English as a response to the initial message.
  `;

let result = '';
try {
const response = await openai.chat.completions.create({
model: 'gpt-3.5-turbo', // Change the model as per your requirement
messages: [{ role: 'system', content: prompt }],
temperature: 0.7,
max_tokens: 150,
});
result = response.choices[0].message.content ?? 'An error occurred';
console.log(response.choices[0]);
  } catch (error) {
    console.error(error);
    return 'An error occurred. Check your API key and try a new message.';
  }

  return result;
}
}
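
Note: the `rag-engine` IPC channel that `ChatInterface.tsx` invokes below needs a matching handler in the Electron main process. That wiring is not shown in this diff; a minimal sketch, assuming the main process can supply the open sqlite3 `Database` handle (the function name and `db` parameter here are assumptions), might look like:

```ts
// Hypothetical main-process wiring for the 'rag-engine' channel (not part of
// this diff). How the Database handle is obtained is an assumption.
import { ipcMain } from 'electron';
import * as sqlite3 from 'sqlite3';

import { queryRAGEngine } from './analysis/queries/RagEngine';

export function registerRagEngineHandler(db: sqlite3.Database) {
  // Renderer side calls: ipcRenderer.invoke('rag-engine', message, openAIKey)
  ipcMain.handle('rag-engine', async (_event, message: string, key: string) =>
    queryRAGEngine(db, message, key)
  );
}
```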
114 changes: 114 additions & 0 deletions app/src/components/Dashboard/ChatInterface.tsx
@@ -0,0 +1,114 @@
import { Box, Button, Flex, Input, Text } from '@chakra-ui/react';
import { ipcRenderer } from 'electron';
import React, { useEffect, useRef, useState } from 'react';

interface Message {
text: string;
sender: 'user' | 'bot';
}

const initialBotMessage: Message = {
text: 'Hi there :) You can ask me questions here about your iMessages! For example, try "Who is my best friend?"',
sender: 'bot',
};

interface ChatInterfaceProps {
openAIKey: string;
}

export function ChatInterface(props: ChatInterfaceProps) {
const { openAIKey } = props;

const [messages, setMessages] = useState<Message[]>([initialBotMessage]);
const [newMessage, setNewMessage] = useState<string>('');

const [awaitingResponse, setAwaitingResponse] = useState<boolean>(false);

const messagesContainerRef = useRef<HTMLDivElement>(null);

const handleMessageChange = (e: React.ChangeEvent<HTMLInputElement>) => {
setNewMessage(e.target.value);
};

const handleSendMessage = async () => {
if (newMessage.trim()) {
setMessages([...messages, { text: newMessage, sender: 'user' }]);
setNewMessage('');
setAwaitingResponse(true);

      try {
        const llmResponse: string = await ipcRenderer.invoke(
          'rag-engine',
          newMessage,
          openAIKey
        );

        setMessages((prevMessages) => [
          ...prevMessages,
          { text: llmResponse, sender: 'bot' },
        ]);
      } finally {
        // Re-enable the send button even if the IPC call rejects.
        setAwaitingResponse(false);
      }
}
};

useEffect(() => {
if (messagesContainerRef.current) {
messagesContainerRef.current.scrollTop =
messagesContainerRef.current.scrollHeight;
}
}, [messages]);

return (
<Box width="90%" mx="auto" mt={8}>
<Box
ref={messagesContainerRef}
bg="gray.100"
p={4}
borderRadius="md"
height="inherit"
maxH="70vh"
minH="70vh"
overflowY="scroll"
display="flex"
flexDirection="column"
>
{messages.map((message, index) => (
<Flex
key={index}
mb={2}
alignItems="flex-end"
justifyContent={
message.sender === 'user' ? 'flex-end' : 'flex-start'
}
>
<Box
bg={message.sender === 'user' ? 'blue.500' : 'gray.300'}
color={message.sender === 'user' ? 'white' : 'black'}
p={2}
borderRadius="md"
maxW="80%"
>
<Text>{message.text}</Text>
</Box>
</Flex>
))}
</Box>
<Flex mt={4}>
<Input
value={newMessage}
onChange={handleMessageChange}
placeholder="Type your message..."
mr={2}
/>
<Button
colorScheme="purple"
onClick={handleSendMessage}
isLoading={awaitingResponse}
>
Send
</Button>
</Flex>
</Box>
);
}

export default ChatInterface;
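
A minimal usage sketch, assuming the dashboard keeps the user's OpenAI key in state (the surrounding `ChatTab` component and the `openAIKey` source are hypothetical):

```tsx
// Hypothetical mount point; the real dashboard may wire the key differently.
import React from 'react';

import ChatInterface from './ChatInterface';

export function ChatTab({ openAIKey }: { openAIKey: string }) {
  return <ChatInterface openAIKey={openAIKey} />;
}
```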