chore: improved OpenAI + EDOT Node.js examples #512

Merged 4 commits on Jan 8, 2025

Changes from all commits
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
.env
node_modules
npm-debug.log*
build
1 change: 1 addition & 0 deletions examples/openai/.npmrc
@@ -0,0 +1 @@
package-lock=false
64 changes: 64 additions & 0 deletions examples/openai/README.md
@@ -0,0 +1,64 @@
# OpenAI Zero-Code Instrumentation Examples

This is an example of how to instrument OpenAI calls with zero code changes,
using `@elastic/opentelemetry-node` included in the Elastic Distribution of
OpenTelemetry Node.js ([EDOT Node.js][edot-node]).

When the OpenAI examples run, they export traces, metrics and logs to an OTLP
compatible endpoint. Traces and metrics include details such as the model used
and the duration of the LLM request. For chat, logs capture the request and
the generated response. Together, these give a comprehensive view of the
performance and behavior of your OpenAI usage.
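Zero-code means the example sources contain no OpenTelemetry calls; the SDK is
preloaded at process start via Node's `--require` flag (see the Run section
below). If you can't change the command line, the same flag can also be set
through `NODE_OPTIONS`; a minimal sketch, assuming dependencies are installed:

```bash
# Same effect as passing --require on the node command line
export NODE_OPTIONS="--require @elastic/opentelemetry-node"
node --env-file .env chat.js
```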

## Install

First, set up a Node.js environment for the examples like this:

```bash
nvm use --lts # or similar, to set up Node.js v20 or later
npm install
```

## Configure

Copy [env.example](env.example) to `.env` and update its `OPENAI_API_KEY`.

An OTLP-compatible endpoint should be listening for traces, metrics and logs on
`http://localhost:4318`. If not, update `OTEL_EXPORTER_OTLP_ENDPOINT` as well.
For example, if an Elastic APM server is running locally, edit `.env` like this:
```
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
```
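If you don't have an endpoint yet, one option is to run an OpenTelemetry
Collector locally; a sketch, assuming Docker is available and that the image's
default configuration listens for OTLP on port 4318:

```bash
docker run --rm -p 4318:4318 otel/opentelemetry-collector
```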

## Run

There are two examples, and they run the same way:

### Chat

[chat.js](chat.js) asks the LLM a geography question and prints the response.

Run it like this:
```bash
node --env-file .env --require @elastic/opentelemetry-node chat.js
```

You should see something like "Atlantic Ocean" unless your LLM hallucinates!

### Embeddings

[embeddings.js](embeddings.js) creates embeddings for Elastic product
descriptions, kept in an in-memory array rather than a vector DB. It then
embeds a question and searches for the product description most similar to it.

Run it like this:
```bash
node --env-file .env --require @elastic/opentelemetry-node embeddings.js
```

You should see something like "Connectors can help you connect to a database",
unless your LLM hallucinates!

---

[edot-node]: https://github.com/elastic/elastic-otel-node/blob/main/packages/opentelemetry-node/README.md#install
28 changes: 17 additions & 11 deletions examples/openai-chat.js → examples/openai/chat.js
@@ -17,20 +17,26 @@
  * under the License.
  */

-// Usage:
-// OPENAI_API_KEY=sk-... \
-// node -r @elastic/opentelemetry-node openai-chat.js
-
 const {OpenAI} = require('openai');

+let chatModel = process.env.CHAT_MODEL ?? 'gpt-4o-mini';
+
 async function main() {
-    const openai = new OpenAI();
-    const result = await openai.chat.completions.create({
-        model: 'gpt-4o-mini',
-        messages: [
-            {role: 'user', content: 'Why is the sky blue? Answer briefly.'},
-        ],
+    const client = new OpenAI();
+
+    const messages = [
+        {
+            role: 'user',
+            content:
+                'Answer in up to 3 words: Which ocean contains Bouvet Island?',
+        },
+    ];
+
+    const chatCompletion = await client.chat.completions.create({
+        model: chatModel,
+        messages: messages,
     });
-    console.log(result.choices[0]?.message?.content);
+    console.log(chatCompletion.choices[0].message.content);
 }

 main();
73 changes: 73 additions & 0 deletions examples/openai/embeddings.js
@@ -0,0 +1,73 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright
 * ownership. Elasticsearch B.V. licenses this file to you under
 * the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

const {OpenAI} = require('openai');
const {dot, norm} = require('mathjs');

let embeddingsModel = process.env.EMBEDDINGS_MODEL ?? 'text-embedding-3-small';

async function main() {
    const client = new OpenAI();

    const products = [
        "Search: Ingest your data, and explore Elastic's machine learning and retrieval augmented generation (RAG) capabilities.",
        'Observability: Unify your logs, metrics, traces, and profiling at scale in a single platform.',
        'Security: Protect, investigate, and respond to cyber threats with AI-driven security analytics.',
        'Elasticsearch: Distributed, RESTful search and analytics.',
        'Kibana: Visualize your data. Navigate the Stack.',
        'Beats: Collect, parse, and ship in a lightweight fashion.',
        'Connectors: Connect popular databases, file systems, collaboration tools, and more.',
        'Logstash: Ingest, transform, enrich, and output.',
    ];

    // Generate embeddings for each product. Keep them in an array instead of a vector DB.
    const productEmbeddings = [];
    for (const product of products) {
        productEmbeddings.push(await createEmbedding(client, product));
    }

    const queryEmbedding = await createEmbedding(
        client,
        'What can help me connect to a database?'
    );

    // Calculate cosine similarity between the query and document embeddings
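    // cosine(a, b) = dot(a, b) / (norm(a) * norm(b)); values nearer 1 mean more similar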
    const similarities = productEmbeddings.map((productEmbedding) => {
        return (
            dot(queryEmbedding, productEmbedding) /
            (norm(queryEmbedding) * norm(productEmbedding))
        );
    });

    // Get the index of the most similar document
    const mostSimilarIndex = similarities.indexOf(Math.max(...similarities));

    console.log(products[mostSimilarIndex]);
}

async function createEmbedding(client, text) {
    const response = await client.embeddings.create({
        input: [text],
        model: embeddingsModel,
        encoding_format: 'float',
    });
    return response.data[0].embedding;
}

main();
22 changes: 22 additions & 0 deletions examples/openai/env.example
@@ -0,0 +1,22 @@
# Update this with your real OpenAI API key
OPENAI_API_KEY=sk-YOUR_API_KEY

# Uncomment to use Ollama instead of OpenAI
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=unused
# CHAT_MODEL=qwen2.5:0.5b
# EMBEDDINGS_MODEL=all-minilm:33m

# OTEL_EXPORTER_* variables are not required. If you would like to change your
# OTLP endpoint to an Elastic APM server over HTTP, uncomment the following:
# OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:8200
# OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

OTEL_SERVICE_NAME=openai-example
OTEL_LOG_LEVEL=warn

# Change to 'false' to hide prompt and completion content
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
# Change to control which resource detectors run. Note: these choices are
# specific to the runtime, in this case Node.js.
OTEL_NODE_RESOURCE_DETECTORS=container,env,host,os,serviceinstance,process,alibaba,aws,azure
24 changes: 24 additions & 0 deletions examples/openai/package.json
@@ -0,0 +1,24 @@
{
  "name": "edot-node-openai-example",
  "version": "1.0.0",
  "private": true,
  "type": "commonjs",
  "engines": {
    "node": ">=20"
  },
  "scripts": {
    "chat": "node --env-file .env --require @elastic/opentelemetry-node chat.js",
    "embeddings": "node --env-file .env --require @elastic/opentelemetry-node embeddings.js"
  },
  "dependencies": {
    "@elastic/opentelemetry-node": "*",
    "mathjs": "^14.0.1",
    "openai": "^4.77.0"
  },
  "// overrides comment": "Override to avoid punycode warnings in recent versions of Node.js",
  "overrides": {
    "[email protected]": {
      "whatwg-url": "14.x"
    }
  }
}
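For reference, the `overrides` entry above avoids DEP0040, the deprecation
warning that recent Node.js versions (21+) emit for the built-in `punycode`
module, which older `whatwg-url` releases pull in. The warning looks roughly
like this:

```
(node:12345) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
```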