feature: support webp & avif images #167

Closed
4 changes: 1 addition & 3 deletions .eslintrc.cjs
```diff
@@ -2,9 +2,7 @@ module.exports = {
   env: {
     commonjs: true,
     es2021: true,
-    node: true,
-    jest: true,
-  },
+    node: true, },
   parser: '@typescript-eslint/parser',
   parserOptions: {
     ecmaVersion: 'latest',
```
2 changes: 1 addition & 1 deletion .github/workflows/test.yaml
```diff
@@ -20,6 +20,6 @@ jobs:
       - run: npm test -- --coverage --coverageDirectory=coverage/results-${{ matrix.node-version }}
       - uses: actions/upload-artifact@v3
         with:
-          name: jest-results-${{ matrix.node-version }}
+          name: vitest-results-${{ matrix.node-version }}
          path: coverage/results-${{ matrix.node-version }}/*.xml
        if: ${{ always() }}
```
1 change: 0 additions & 1 deletion .npmignore
```diff
@@ -10,7 +10,6 @@ yarn.lock
 .eslintrc.cjs
 .eslintignore
 .prettierrc.json
-jest.config.cjs
 tsconfig.json
 test
 examples
```
10 changes: 8 additions & 2 deletions README.md
````diff
@@ -21,7 +21,9 @@ console.log(response.message.content)
 ```
 
 ### Browser Usage
+
 To use the library without node, import the browser module.
+
 ```javascript
 import ollama from 'ollama/browser'
 ```
@@ -34,7 +36,11 @@ Response streaming can be enabled by setting `stream: true`, modifying function
 import ollama from 'ollama'
 
 const message = { role: 'user', content: 'Why is the sky blue?' }
-const response = await ollama.chat({ model: 'llama3.1', messages: [message], stream: true })
+const response = await ollama.chat({
+  model: 'llama3.1',
+  messages: [message],
+  stream: true,
+})
 for await (const part of response) {
   process.stdout.write(part.message.content)
 }
@@ -207,7 +213,7 @@ ollama.abort()
 This method will abort **all** streamed generations currently running with the client instance.
 If there is a need to manage streams with timeouts, it is recommended to have one Ollama client per stream.
 
-All asynchronous threads listening to streams (typically the ```for await (const part of response)```) will throw an ```AbortError``` exception. See [examples/abort/abort-all-requests.ts](examples/abort/abort-all-requests.ts) for an example.
+All asynchronous threads listening to streams (typically the `for await (const part of response)`) will throw an `AbortError` exception. See [examples/abort/abort-all-requests.ts](examples/abort/abort-all-requests.ts) for an example.
````
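The abort behavior the README hunk describes can be sketched without a running Ollama server. Everything below — `fakeStream`, the timings — is a hypothetical stand-in for the library's streamed generation, not the ollama API itself; only the `AbortError` name matches the documented behavior:

```javascript
// Hypothetical stand-in for a streamed generation; NOT part of the ollama API.
// It yields chunks until its AbortSignal fires, then throws an error whose
// name is 'AbortError' -- the same error name the README says streams throw.
async function* fakeStream(signal) {
  for (let i = 0; ; i++) {
    if (signal.aborted) {
      throw Object.assign(new Error('The operation was aborted'), { name: 'AbortError' })
    }
    yield { response: `chunk ${i} ` }
    await new Promise((resolve) => setTimeout(resolve, 10))
  }
}

async function main() {
  const controller = new AbortController() // plays the role of one client
  setTimeout(() => controller.abort(), 35) // plays the role of ollama.abort()
  try {
    for await (const chunk of fakeStream(controller.signal)) {
      process.stdout.write(chunk.response)
    }
  } catch (error) {
    if (error.name === 'AbortError') {
      return 'aborted'
    }
    throw error
  }
  return 'completed'
}

main().then((status) => console.log('\nstream ' + status))
```

Because the loop only observes the abort between chunks, this also illustrates why one client per stream (as the README recommends) gives per-request cancellation.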

## Custom client

50 changes: 25 additions & 25 deletions examples/abort/abort-all-requests.ts
```diff
@@ -8,45 +8,45 @@ setTimeout(() => {
 
 // Start multiple concurrent streaming requests
 Promise.all([
-  ollama.generate({
-    model: 'llama3.2',
-    prompt: 'Write a long story about dragons',
-    stream: true,
-  }).then(
-    async (stream) => {
+  ollama
+    .generate({
+      model: 'llama3.2',
+      prompt: 'Write a long story about dragons',
+      stream: true,
+    })
+    .then(async (stream) => {
       console.log(' Starting stream for dragons story...')
       for await (const chunk of stream) {
         process.stdout.write(' 1> ' + chunk.response)
       }
-    }
-  ),
+    }),
 
-  ollama.generate({
-    model: 'llama3.2',
-    prompt: 'Write a long story about wizards',
-    stream: true,
-  }).then(
-    async (stream) => {
+  ollama
+    .generate({
+      model: 'llama3.2',
+      prompt: 'Write a long story about wizards',
+      stream: true,
+    })
+    .then(async (stream) => {
       console.log(' Starting stream for wizards story...')
       for await (const chunk of stream) {
         process.stdout.write(' 2> ' + chunk.response)
       }
-    }
-  ),
+    }),
 
-  ollama.generate({
-    model: 'llama3.2',
-    prompt: 'Write a long story about knights',
-    stream: true,
-  }).then(
-    async (stream) => {
+  ollama
+    .generate({
+      model: 'llama3.2',
+      prompt: 'Write a long story about knights',
+      stream: true,
+    })
+    .then(async (stream) => {
       console.log(' Starting stream for knights story...')
       for await (const chunk of stream) {
         process.stdout.write(' 3>' + chunk.response)
       }
-    }
-  )
-]).catch(error => {
+    }),
+]).catch((error) => {
   if (error.name === 'AbortError') {
     console.log('All requests have been aborted')
   } else {
```
37 changes: 17 additions & 20 deletions examples/abort/abort-single-request.ts
```diff
@@ -13,38 +13,35 @@ setTimeout(() => {
 
 // Start multiple concurrent streaming requests with different clients
 Promise.all([
-  client1.generate({
-    model: 'llama3.2',
-    prompt: 'Write a long story about dragons',
-    stream: true,
-  }).then(
-    async (stream) => {
+  client1
+    .generate({
+      model: 'llama3.2',
+      prompt: 'Write a long story about dragons',
+      stream: true,
+    })
+    .then(async (stream) => {
       console.log(' Starting stream for dragons story...')
       for await (const chunk of stream) {
         process.stdout.write(' 1> ' + chunk.response)
       }
-    }
-  ),
+    }),
 
-  client2.generate({
-    model: 'llama3.2',
-    prompt: 'Write a short story about wizards',
-    stream: true,
-  }).then(
-    async (stream) => {
+  client2
+    .generate({
+      model: 'llama3.2',
+      prompt: 'Write a short story about wizards',
+      stream: true,
+    })
+    .then(async (stream) => {
       console.log(' Starting stream for wizards story...')
       for await (const chunk of stream) {
         process.stdout.write(' 2> ' + chunk.response)
       }
-    }
-  ),
-
-]).catch(error => {
+    }),
+]).catch((error) => {
   if (error.name === 'AbortError') {
     console.log('Dragons story request has been aborted')
   } else {
     console.error('An error occurred:', error)
   }
 })
-
-
```
149 changes: 77 additions & 72 deletions examples/tools/tools.ts
```diff
@@ -1,89 +1,94 @@
-import ollama from 'ollama';
+import ollama from 'ollama'
 
 // Simulates an API call to get flight times
 // In a real application, this would fetch data from a live database or API
 function getFlightTimes(args: { [key: string]: any }) {
-    // this is where you would validate the arguments you received
-    const departure = args.departure;
-    const arrival = args.arrival;
+  // this is where you would validate the arguments you received
+  const departure = args.departure
+  const arrival = args.arrival
 
-    const flights = {
-        "NYC-LAX": { departure: "08:00 AM", arrival: "11:30 AM", duration: "5h 30m" },
-        "LAX-NYC": { departure: "02:00 PM", arrival: "10:30 PM", duration: "5h 30m" },
-        "LHR-JFK": { departure: "10:00 AM", arrival: "01:00 PM", duration: "8h 00m" },
-        "JFK-LHR": { departure: "09:00 PM", arrival: "09:00 AM", duration: "7h 00m" },
-        "CDG-DXB": { departure: "11:00 AM", arrival: "08:00 PM", duration: "6h 00m" },
-        "DXB-CDG": { departure: "03:00 AM", arrival: "07:30 AM", duration: "7h 30m" }
-    };
+  const flights = {
+    'NYC-LAX': { departure: '08:00 AM', arrival: '11:30 AM', duration: '5h 30m' },
+    'LAX-NYC': { departure: '02:00 PM', arrival: '10:30 PM', duration: '5h 30m' },
+    'LHR-JFK': { departure: '10:00 AM', arrival: '01:00 PM', duration: '8h 00m' },
+    'JFK-LHR': { departure: '09:00 PM', arrival: '09:00 AM', duration: '7h 00m' },
+    'CDG-DXB': { departure: '11:00 AM', arrival: '08:00 PM', duration: '6h 00m' },
+    'DXB-CDG': { departure: '03:00 AM', arrival: '07:30 AM', duration: '7h 30m' },
+  }
 
-    const key = `${departure}-${arrival}`.toUpperCase();
-    return JSON.stringify(flights[key] || { error: "Flight not found" });
+  const key = `${departure}-${arrival}`.toUpperCase()
+  return JSON.stringify(flights[key] || { error: 'Flight not found' })
 }
 
 async function run(model: string) {
-    // Initialize conversation with a user query
-    let messages = [{ role: 'user', content: 'What is the flight time from New York (NYC) to Los Angeles (LAX)?' }];
+  // Initialize conversation with a user query
+  let messages = [
+    {
+      role: 'user',
+      content: 'What is the flight time from New York (NYC) to Los Angeles (LAX)?',
+    },
+  ]
 
-    // First API call: Send the query and function description to the model
-    const response = await ollama.chat({
-        model: model,
-        messages: messages,
-        tools: [
-            {
-                type: 'function',
-                function: {
-                    name: 'get_flight_times',
-                    description: 'Get the flight times between two cities',
-                    parameters: {
-                        type: 'object',
-                        properties: {
-                            departure: {
-                                type: 'string',
-                                description: 'The departure city (airport code)',
-                            },
-                            arrival: {
-                                type: 'string',
-                                description: 'The arrival city (airport code)',
-                            },
-                        },
-                        required: ['departure', 'arrival'],
-                    },
-                },
-            },
-        ],
-    })
-    // Add the model's response to the conversation history
-    messages.push(response.message);
+  // First API call: Send the query and function description to the model
+  const response = await ollama.chat({
+    model: model,
+    messages: messages,
+    tools: [
+      {
+        type: 'function',
+        function: {
+          name: 'get_flight_times',
+          description: 'Get the flight times between two cities',
+          parameters: {
+            type: 'object',
+            properties: {
+              departure: {
+                type: 'string',
+                description: 'The departure city (airport code)',
+              },
+              arrival: {
+                type: 'string',
+                description: 'The arrival city (airport code)',
+              },
+            },
+            required: ['departure', 'arrival'],
+          },
+        },
+      },
+    ],
+  })
+  // Add the model's response to the conversation history
+  messages.push(response.message)
 
-    // Check if the model decided to use the provided function
-    if (!response.message.tool_calls || response.message.tool_calls.length === 0) {
-        console.log("The model didn't use the function. Its response was:");
-        console.log(response.message.content);
-        return;
-    }
+  // Check if the model decided to use the provided function
+  if (!response.message.tool_calls || response.message.tool_calls.length === 0) {
+    console.log("The model didn't use the function. Its response was:")
+    console.log(response.message.content)
+    return
+  }
 
-    // Process function calls made by the model
-    if (response.message.tool_calls) {
-        const availableFunctions = {
-            get_flight_times: getFlightTimes,
-        };
-        for (const tool of response.message.tool_calls) {
-            const functionToCall = availableFunctions[tool.function.name];
-            const functionResponse = functionToCall(tool.function.arguments);
-            // Add function response to the conversation
-            messages.push({
-                role: 'tool',
-                content: functionResponse,
-            });
-        }
-    }
+  // Process function calls made by the model
+  if (response.message.tool_calls) {
+    const availableFunctions = {
+      get_flight_times: getFlightTimes,
+    }
+    for (const tool of response.message.tool_calls) {
+      const functionToCall = availableFunctions[tool.function.name]
+      const functionResponse = functionToCall(tool.function.arguments)
+      // Add function response to the conversation
+      messages.push({
+        role: 'tool',
+        content: functionResponse,
+      })
+    }
+  }
 
-    // Second API call: Get final response from the model
-    const finalResponse = await ollama.chat({
-        model: model,
-        messages: messages,
-    });
-    console.log(finalResponse.message.content);
+  // Second API call: Get final response from the model
+  const finalResponse = await ollama.chat({
+    model: model,
+    messages: messages,
+  })
+  console.log(finalResponse.message.content)
 }
 
-run('mistral').catch(error => console.error("An error occurred:", error));
+run('mistral').catch((error) => console.error('An error occurred:', error))
```
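The tool-dispatch loop in the example above — mapping the model's `tool_calls` to local functions by name and pushing each result back as a `role: 'tool'` message — can be exercised on its own. The `toolCalls` array below is a hand-written stand-in for what `response.message.tool_calls` would contain; it is not produced by a real model call:

```javascript
// Stand-alone sketch of the dispatch pattern from examples/tools/tools.ts.
// toolCalls is a hypothetical stand-in for response.message.tool_calls.
function getFlightTimes(args) {
  const flights = {
    'NYC-LAX': { departure: '08:00 AM', arrival: '11:30 AM', duration: '5h 30m' },
  }
  const key = `${args.departure}-${args.arrival}`.toUpperCase()
  return JSON.stringify(flights[key] || { error: 'Flight not found' })
}

// Registry of callable tools, keyed by the name advertised to the model
const availableFunctions = {
  get_flight_times: getFlightTimes,
}

const toolCalls = [
  { function: { name: 'get_flight_times', arguments: { departure: 'NYC', arrival: 'LAX' } } },
]

const messages = []
for (const tool of toolCalls) {
  const functionToCall = availableFunctions[tool.function.name]
  // Each tool result re-enters the conversation as a 'tool' message
  messages.push({ role: 'tool', content: functionToCall(tool.function.arguments) })
}

console.log(messages[0].content)
```

The registry lookup is what keeps the model from invoking arbitrary code: only names present in `availableFunctions` can ever be dispatched.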
20 changes: 0 additions & 20 deletions jest.config.cjs

This file was deleted.
