Update Pinecone TS client signatures in sample code
zackproser committed Oct 3, 2023
1 parent 448d9c9 commit a1b8a87
Showing 1 changed file with 35 additions and 35 deletions: README.md
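At a high level, sample code that previously obtained a client through a `getPineconeClient()` helper now constructs one directly. Below is a minimal sketch of the new initialization, assuming the v1 `@pinecone-database/pinecone` package and that `PINECONE_API_KEY` and `PINECONE_ENVIRONMENT` are set; it is illustrative and not part of the diff that follows.

```ts
import { Pinecone } from '@pinecone-database/pinecone';

// With the v1 client there is no async factory: construct it directly.
// When no config object is passed, the client is expected to read
// PINECONE_API_KEY and PINECONE_ENVIRONMENT from the environment.
const pinecone = new Pinecone();

// Configuration can also be passed explicitly if preferred.
const pineconeExplicit = new Pinecone({
  apiKey: process.env.PINECONE_API_KEY!,
  environment: process.env.PINECONE_ENVIRONMENT!,
});
```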
`@@ -307,15 +307,10 @@ The helper methods fetchPage, parseHtml, and extractUrls respectively handle fet`
To tie things together, we'll create a seed function that will use the crawler to seed the knowledge base. In this portion of the code, we'll initialize the crawl and fetch a given URL, then split its content into chunks, and finally embed and index the chunks in Pinecone.

```diff
-async function seed(
-  url: string,
-  limit: number,
-  indexName: string,
-  options: SeedOptions
-) {
+async function seed(url: string, limit: number, indexName: string, options: SeedOptions) {
   try {
     // Initialize the Pinecone client
-    const pinecone = await getPineconeClient();
+    const pinecone = new Pinecone();

     // Destructure the options object
     const { splittingMethod, chunkSize, chunkOverlap } = options;
@@ -324,28 +319,33 @@ async function seed(
     const crawler = new Crawler(1, limit || 100);

     // Crawl the given URL and get the pages
-    const pages = (await crawler.crawl(url)) as Page[];
+    const pages = await crawler.crawl(url) as Page[];

     // Choose the appropriate document splitter based on the splitting method
-    const splitter: DocumentSplitter =
-      splittingMethod === "recursive"
-        ? new RecursiveCharacterTextSplitter({ chunkSize, chunkOverlap })
-        : new MarkdownTextSplitter({});
+    const splitter: DocumentSplitter = splittingMethod === 'recursive' ?
+      new RecursiveCharacterTextSplitter({ chunkSize, chunkOverlap }) : new MarkdownTextSplitter({});

     // Prepare documents by splitting the pages
-    const documents = await Promise.all(
-      pages.map((page) => prepareDocument(page, splitter))
-    );
+    const documents = await Promise.all(pages.map(page => prepareDocument(page, splitter)));

     // Create Pinecone index if it does not exist
-    await createIndexIfNotExists(pinecone!, indexName, 1536);
-    const index = pinecone && pinecone.Index(indexName);
+    const indexList = await pinecone.listIndexes();
+    const indexExists = indexList.some(index => index.name === indexName)
+    if (!indexExists) {
+      await pinecone.createIndex({
+        name: indexName,
+        dimension: 1536,
+        waitUntilReady: true,
+      });
+    }
+
+    const index = pinecone.Index(indexName)

     // Get the vector embeddings for the documents
     const vectors = await Promise.all(documents.flat().map(embedDocument));

     // Upsert vectors into the Pinecone index
-    await chunkedUpsert(index!, vectors, "", 10);
+    await chunkedUpsert(index!, vectors, '', 10);

     // Return the first document
     return documents[0];
```
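Not shown in this diff, but as a rough sketch of how the seed function above might tie into the app, it could be called from an API route handler. The route path, request shape, and default export below are assumptions for illustration, not code from this repository.

```ts
// Hypothetical Next.js route handler, e.g. app/api/crawl/route.ts
import { NextResponse } from 'next/server';
import seed from './seed'; // assumes seed is the default export of the module above

export async function POST(req: Request) {
  const { url, options } = await req.json();
  try {
    // Crawl the URL one level deep, then embed and upsert into the configured index
    const documents = await seed(url, 100, process.env.PINECONE_INDEX!, options);
    return NextResponse.json({ success: true, documents });
  } catch (error) {
    return NextResponse.json({ success: false, error: `Failed crawling: ${error}` });
  }
}
```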
`@@ -389,25 +389,25 @@ Now our backend is able to crawl a given URL, embed the content and index the em`

To retrieve the most relevant documents from the index, we'll use the `query` function in the Pinecone SDK. This function takes a vector and returns the most similar vectors from the index, which lets us retrieve the most relevant documents for a given set of embeddings.

```diff
-const getMatchesFromEmbeddings = async (
-  embeddings: number[],
-  topK: number,
-  namespace: string
-): Promise<ScoredPineconeRecord[]> => {
+const getMatchesFromEmbeddings = async (embeddings: number[], topK: number, namespace: string): Promise<ScoredPineconeRecord<Metadata>[]> => {
   // Obtain a client for Pinecone
-  const pinecone = await getPineconeClient();
+  const pinecone = new Pinecone();

-  // Retrieve the list of indexes
-  const indexes = await pinecone.listIndexes();
+  const indexName: string = process.env.PINECONE_INDEX || '';
+  if (indexName === '') {
+    throw new Error('PINECONE_INDEX environment variable not set')
+  }

-  // Check if the desired index is present, else throw an error
-  if (!indexes.includes(process.env.PINECONE_INDEX!)) {
-    throw new Error(`Index ${process.env.PINECONE_INDEX} does not exist`);
+  // Retrieve the list of indexes to check if expected index exists
+  const indexes = await pinecone.listIndexes()
+  if (indexes.filter(i => i.name === indexName).length !== 1) {
+    throw new Error(`Index ${indexName} does not exist`)
   }

   // Get the Pinecone index
-  const index = pinecone!.Index(process.env.PINECONE_INDEX!);
+  const index = pinecone!.Index<Metadata>(indexName);

+  // Get the namespace
+  const pineconeNamespace = index.namespace(namespace ?? '')

   try {
@@ -416,14 +416,14 @@ const getMatchesFromEmbeddings = async (
       vector: embeddings,
       topK,
       includeMetadata: true,
-    });
-    return queryResult.matches || [];
+    })
+    return queryResult.matches || []
   } catch (e) {
     // Log the error and throw it
-    console.log("Error querying embeddings: ", e);
-    throw new Error(`Error querying embeddings: ${e}`);
+    console.log("Error querying embeddings: ", e)
+    throw (new Error(`Error querying embeddings: ${e}`,))
   }
-};
+}
```

The function takes in embeddings, a topK parameter, and a namespace, and returns the topK matches from the Pinecone index. It first creates a Pinecone client, reads the index name from the PINECONE_INDEX environment variable, and throws an error if that index does not exist. It then targets the requested namespace on the index, queries it with the given vector, and returns the matches.
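As a rough usage sketch (again, not part of this commit), a caller would embed the user's question first and then look up matches. The `getEmbeddings` helper, the embedding model, and the empty namespace below are assumptions for illustration.

```ts
// Hypothetical helper: embed a piece of text with OpenAI's embeddings REST API.
async function getEmbeddings(input: string): Promise<number[]> {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-embedding-ada-002',
      input: input.replace(/\n/g, ' '),
    }),
  });
  const json = await res.json();
  return json.data[0].embedding;
}

// Embed the question, then pull the top 3 matches from the default namespace.
const embeddings = await getEmbeddings('How do I initialize the Pinecone client?');
const matches = await getMatchesFromEmbeddings(embeddings, 3, '');
```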