Add @huggingface/ollama-utils #1111

Open · wants to merge 11 commits into base: main
1 change: 1 addition & 0 deletions README.md
@@ -62,6 +62,7 @@ This is a collection of JS libraries to interact with the Hugging Face API, with
- [@huggingface/tasks](packages/tasks/README.md): The definition files and source-of-truth for the Hub's main primitives like pipeline tasks, model libraries, etc.
- [@huggingface/jinja](packages/jinja/README.md): A minimalistic JS implementation of the Jinja templating engine, to be used for ML chat templates.
- [@huggingface/space-header](packages/space-header/README.md): Use the Space `mini_header` outside Hugging Face
- [@huggingface/ollama-utils](packages/ollama-utils/README.md): Various utilities for maintaining Ollama compatibility with models on Hugging Face hub.
Review comment (Member), suggested change:
- - [@huggingface/ollama-utils](packages/ollama-utils/README.md): Various utilities for maintaining Ollama compatibility with models on Hugging Face hub.
+ - [@huggingface/ollama-utils](packages/ollama-utils/README.md): Various utilities for maintaining Ollama compatibility with models on the Hugging Face Hub.



We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.
1 change: 1 addition & 0 deletions packages/ollama-utils/.eslintignore
@@ -0,0 +1 @@
dist
5 changes: 5 additions & 0 deletions packages/ollama-utils/.prettierignore
@@ -0,0 +1,5 @@
pnpm-lock.yaml
# Keep code samples free of tabs, as they don't display well on npm
README.md
dist
src/automap.ts
56 changes: 56 additions & 0 deletions packages/ollama-utils/README.md
@@ -0,0 +1,56 @@
# `@huggingface/ollama-utils`

Various utilities for maintaining Ollama compatibility with models on Hugging Face hub.
Review comment (Member), suggested change:
- Various utilities for maintaining Ollama compatibility with models on Hugging Face hub.
+ Various utilities for maintaining [Ollama compatibility with GGUF models on the Hugging Face Hub](https://huggingface.co/docs/hub/en/ollama).
For now, we are exposing chat template conversion to the Go format used by Ollama.


Documentation: https://huggingface.co/docs/hub/en/ollama

Review comment (Member) on lines +5 to +6, suggested change (remove the line):
- Documentation: https://huggingface.co/docs/hub/en/ollama

## Chat template converter

```ts
import { convertJinjaToGoTemplate } from "@huggingface/ollama-utils";

const MODEL_INFO_URL = "https://huggingface.co/api/models/bartowski/Llama-3.2-3B-Instruct-GGUF?expand[]=gguf";
const modelInfo = await (await fetch(MODEL_INFO_URL)).json();
console.log(modelInfo);
/**
 * {
 *   gguf: {
 *     chat_template: "here is the Jinja chat template",
 *     bos_token: "...",
 *     eos_token: "...",
 *     [...]
 *   }
 * }
 */
const convertedTemplate = convertJinjaToGoTemplate(modelInfo.gguf);
if (convertedTemplate) {
  console.log(convertedTemplate.ollama);
  /**
   * {
   *   template: "this is the converted template, compatible with Ollama",
   *   tokens: [... list of special tokens],
   *   params: {
   *     stop: [... list of stop tokens or stop words]
   *   }
   * }
   */
} else {
  console.error("Conversion failed");
}
```
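
The converted output maps naturally onto an Ollama Modelfile. As an illustration, the helper below is hypothetical and not part of this package; the `TEMPLATE`/`PARAMETER` lines follow Ollama's Modelfile syntax:

```ts
// Hypothetical helper, not exported by @huggingface/ollama-utils:
// renders an Ollama Modelfile from the conversion result shown above.
function toModelfile(from: string, converted: { template: string; params?: { stop?: string[] } }): string {
  const lines = [`FROM ${from}`, `TEMPLATE """${converted.template}"""`];
  for (const stop of converted.params?.stop ?? []) {
    lines.push(`PARAMETER stop ${JSON.stringify(stop)}`); // one line per stop sequence
  }
  return lines.join("\n");
}

if (convertedTemplate) {
  console.log(toModelfile("hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF", convertedTemplate.ollama));
}
```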

## How can I add a custom template?

Most templates will be converted automatically. You can debug the output template using:
- This space to retrieve the converted template: https://huggingface.co/spaces/ngxson/debug_ollama_manifest
Review comment (Member): maybe we can do both in the same space?

Reply (Member Author): I couldn't figure out how to embed my script into a Gradio space, but I'll have a look later. (The template debugging space is static, btw.)

- And this space to apply the Go template to a list of messages: https://huggingface.co/spaces/ngxson/ollama_template_test

Please only add a new template only when the conversion above is not successful. Cases that are acceptable to add a custom handler:
Review comment (Member), suggested change:
- Please only add a new template only when the conversion above is not successful. Cases that are acceptable to add a custom handler:
+ Please only add a new template when the conversion process above is not successful. Cases that are acceptable include:

- The converted template is wrong
- The Jinja template is not compatible with `@huggingface/jinja`
- The Jinja template is not "linear," meaning it can modify the content of other messages or append dynamic postfixes. For instance, the DeepSeek template removes `<think>...</think>` from previous messages in a conversation, making it non-linear. Another example is a template that adds the EOS token `</s>` when `add_generation_prompt=False`.
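
To make the "linear" notion above concrete, here is a rough probe sketch using `@huggingface/jinja` (assuming its `Template` API; the prefix check and the sample messages are simplifications, not part of this package):

```ts
import { Template } from "@huggingface/jinja";

// A template is "linear" if adding a message only appends text: the render of
// the shorter conversation must be a prefix of the render of the longer one.
function looksLinear(jinjaTmpl: string): boolean {
  const template = new Template(jinjaTmpl);
  const messages = [
    { role: "user", content: "hi" },
    { role: "assistant", content: "<think>reasoning</think>hello" },
    { role: "user", content: "bye" },
  ];
  const shorter = template.render({ messages: messages.slice(0, 2), add_generation_prompt: false });
  const longer = template.render({ messages, add_generation_prompt: true });
  return longer.startsWith(shorter);
}
```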

To add a new custom handler:
1. Edit the list of `CUSTOM_TEMPLATE_MAPPING` inside `chat-template.ts` (a rough sketch of an entry is shown below)
2. Add a new test case in `chat-template.spec.ts`
3. Push your change into new PR.
Review comment (Member), suggested change:
- 3. Push your change into new PR.
+ 3. Push your change to a new PR.
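
For illustration, a custom mapping entry could look roughly like the sketch below. The exact shape of `CUSTOM_TEMPLATE_MAPPING` is defined in `chat-template.ts`; the field names here are assumptions modeled on the auto-generated template map:

```ts
// Hypothetical sketch only; check chat-template.ts for the real entry shape.
const CUSTOM_TEMPLATE_MAPPING = [
  {
    // the exact GGUF Jinja chat template this handler overrides
    gguf: "{%- for message in messages -%}...{%- endfor -%}",
    ollama: {
      // hand-written Go template compatible with Ollama
      template: "{{ range .Messages }}...{{ end }}",
      tokens: ["<|im_start|>", "<|im_end|>"],
      params: { stop: ["<|im_end|>"] },
    },
  },
];
```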

58 changes: 58 additions & 0 deletions packages/ollama-utils/package.json
@@ -0,0 +1,58 @@
{
	"name": "@huggingface/ollama-utils",
	"packageManager": "[email protected]",
	"version": "0.0.1",
	"description": "Various utilities for maintaining Ollama compatibility with models on Hugging Face hub",
	"repository": "https://github.com/huggingface/huggingface.js.git",
	"publishConfig": {
		"access": "public"
	},
	"main": "./dist/index.js",
	"module": "./dist/index.mjs",
	"types": "./dist/index.d.ts",
	"exports": {
		".": {
			"types": "./dist/index.d.ts",
			"require": "./dist/index.js",
			"import": "./dist/index.mjs"
		}
	},
	"browser": {
		"./src/utils/FileBlob.ts": false,
		"./dist/index.js": "./dist/browser/index.js",
		"./dist/index.mjs": "./dist/browser/index.mjs"
	},
	"engines": {
		"node": ">=20"
	},
	"source": "index.ts",
	"scripts": {
		"lint": "eslint --quiet --fix --ext .cjs,.ts .",
		"lint:check": "eslint --ext .cjs,.ts .",
		"format": "prettier --write .",
		"format:check": "prettier --check .",
		"prepublishOnly": "pnpm run build",
		"build": "tsup src/index.ts --format cjs,esm --clean && tsc --emitDeclarationOnly --declaration",
		"build:automap": "tsx scripts/generate-automap.ts && prettier --write ./src/chat-template-automap.ts",
		"test": "vitest run",
		"check": "tsc"
	},
	"files": [
		"dist",
		"src",
		"tsconfig.json"
	],
	"keywords": [
		"huggingface",
		"hub",
		"gguf"
	],
	"author": "Hugging Face",
	"license": "MIT",
	"dependencies": {
		"@huggingface/jinja": "workspace:^"
	},
	"devDependencies": {
		"@types/node": "^20.12.8"
	}
}
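
The `exports` map above serves both module systems: ESM consumers resolve `./dist/index.mjs`, while CJS consumers get `./dist/index.js`. A minimal sketch, assuming the package is built and published as configured:

```ts
// ESM: resolved through the "import" condition of the exports map
import { convertJinjaToGoTemplate } from "@huggingface/ollama-utils";

console.log(typeof convertJinjaToGoTemplate); // "function"
```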
27 changes: 27 additions & 0 deletions packages/ollama-utils/pnpm-lock.yaml

Some generated files are not rendered by default.

207 changes: 207 additions & 0 deletions packages/ollama-utils/scripts/generate-automap.ts
@@ -0,0 +1,207 @@
/**
 * Script for generating src/chat-template-automap.ts
 * The source data is scraped from the Ollama registry (ollama.com)
 */

import type { GGUFParseOutput } from "../../gguf/src/gguf";
import { gguf } from "../../gguf/src/gguf";
import { appendFileSync, writeFileSync, existsSync } from "node:fs";
import path from "node:path";

const DEBUG = process.env.DEBUG;
const RE_SPECIAL_TOKEN = /<[|_A-Za-z0-9]+>|\[[A-Z]+\]|<\uFF5C[\u2581A-Za-z]+\uFF5C>/g;
const MAX_NUMBER_OF_TAGS_PER_MODEL = 5;
const N_WORKERS = 16;
const OUTPUT_FILE = path.join(__dirname, "../src/chat-template-automap.ts");
const BLACKLISTED_MODELS = (model: string, tag: string) => {
	// some models are known to give ServiceUnavailable
	return model === "library/deepseek-r1" && tag === "7b";
};

interface OutputItem {
	model: string;
	gguf: string;
	ollama: {
		template: string;
		tokens: string[];
		// eslint-disable-next-line
		params?: any;
	};
}

interface OllamaManifestLayer {
	digest: string;
	mediaType: string;
	size: number;
}

interface OllamaManifest {
	layers: OllamaManifestLayer[];
}
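
// For illustration (an assumption inferred from the fields used below, not an
// official schema), a registry manifest looks roughly like:
// {
//   "layers": [
//     { "digest": "sha256:...", "mediaType": "application/vnd.ollama.image.model", "size": 12345 }
//   ]
// }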

const getSpecialTokens = (tmpl: string): string[] => {
	const matched = tmpl.match(RE_SPECIAL_TOKEN);
	const tokens = Array.from(matched || []);
	return Array.from(new Set(tokens)); // deduplicate
};
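
// Illustrative example (not part of the original script): the regex matches
// ChatML-style, bracketed and fullwidth-bar special tokens, e.g.
//   getSpecialTokens("<|im_start|>user [INST] hi <|im_end|>")
//   // => ["<|im_start|>", "[INST]", "<|im_end|>"]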

(async () => {
	if (DEBUG) writeFileSync("ollama_tmp.jsonl", ""); // clear the file

	const models: string[] = [];
	const output: OutputItem[] = [];

	const html = await (await fetch("https://ollama.com/library")).text();
	const matched = html.match(/href="\/library\/[^"]+/g);
	if (!matched) {
		throw new Error("cannot find any model url");
	}
	for (let i = 0; i < matched.length; i++) {
		models.push(matched[i].replace('href="/', ""));
	}
	console.log({ models });

	//////// Get tags ////////

	let nDoing = 0;
	let nAll = models.length;
	const modelsWithTag: string[] = [];
	const workerGetTags = async () => {
		while (true) {
			const model = models.shift();
			if (!model) return;
			nDoing++;
			console.log(`Getting tags ${nDoing} / ${nAll}`);
			const html = await (await fetch(`https://ollama.com/${model}`)).text();
			const matched = html.match(/href="\/library\/[^"]+/g);
			if (!matched) {
				throw new Error("cannot find any tag url");
			}
			for (let i = 0; i < matched.length && i < MAX_NUMBER_OF_TAGS_PER_MODEL; i++) {
				const midAndTag: string = matched[i].replace('href="/', "");
				if (midAndTag.match(/:/) && !midAndTag.match(/\/blobs/)) {
					modelsWithTag.push(midAndTag);
				}
			}
		}
	};
	await Promise.all(
		Array(N_WORKERS)
			.fill(null)
			.map(() => workerGetTags())
	);
	console.log({ modelsWithTag });

	//////// merging with old file if necessary ////////

	const seenGGUFTemplate = new Set<string>();
	if (existsSync(OUTPUT_FILE)) {
		const oldOutput = await import(OUTPUT_FILE);
		oldOutput.OLLAMA_CHAT_TEMPLATE_MAPPING.forEach((item: OutputItem) => {
			seenGGUFTemplate.add(item.gguf);
			output.push(item);
		});
	}

	//////// Get template ////////

	nDoing = 0;
	nAll = modelsWithTag.length;
	const workerGetTemplate = async () => {
		while (true) {
			const modelWithTag = modelsWithTag.shift();
			if (!modelWithTag) return;

			nDoing++;
			const [model, tag] = modelWithTag.split(":");
			console.log(`Fetch template ${nDoing} / ${nAll} | model=${model} tag=${tag}`);
			const getBlobUrl = (digest: string) => `https://registry.ollama.com/v2/${model}/blobs/${digest}`;
			const manifest: OllamaManifest = await (
				await fetch(`https://registry.ollama.com/v2/${model}/manifests/${tag}`)
			).json();
			if (!manifest.layers) {
				console.log(" --> [X] No layers");
				continue;
			}
			const layerModelUrl = manifest.layers.find((l) => l.mediaType.match(/\.model/));
			if (!layerModelUrl) {
				console.log(" --> [X] No model found");
				continue;
			}
			const modelUrl = getBlobUrl(layerModelUrl.digest);
			let ggufData: GGUFParseOutput;
			if (BLACKLISTED_MODELS(model, tag)) {
				console.log(" --> [X] Blacklisted model, skip");
				continue;
			}
			try {
				ggufData = await gguf(modelUrl);
			} catch (e) {
				console.log(" --> [X] FATAL: GGUF error", { model, tag, modelUrl });
				throw e; // rethrow
			}
			const { metadata } = ggufData;
			const ggufTmpl = metadata["tokenizer.chat_template"];
			if (ggufTmpl) {
				if (seenGGUFTemplate.has(ggufTmpl)) {
					console.log(" --> Already seen this GGUF template, skip...");
					continue;
				}
				seenGGUFTemplate.add(ggufTmpl);
				console.log(" --> GGUF chat template OK");
				const tmplBlob = manifest.layers.find((l) => l.mediaType.match(/\.template/));
				if (!tmplBlob) continue;
				const ollamaTmplUrl = getBlobUrl(tmplBlob.digest);
				if (!ollamaTmplUrl) {
					console.log(" --> [X] No ollama template");
					continue;
				}
				const ollamaTmpl = await (await fetch(ollamaTmplUrl)).text();
				console.log(" --> All OK");
				const record: OutputItem = {
					model: modelWithTag,
					gguf: ggufTmpl,
					ollama: {
						template: ollamaTmpl,
						tokens: getSpecialTokens(ggufTmpl),
					},
				};
				// get params
				const ollamaParamsBlob = manifest.layers.find((l) => l.mediaType.match(/\.params/));
				const ollamaParamsUrl = ollamaParamsBlob ? getBlobUrl(ollamaParamsBlob.digest) : null;
				if (ollamaParamsUrl) {
					console.log(" --> Got params");
					record.ollama.params = await (await fetch(ollamaParamsUrl)).json();
				}
				output.push(record);
				if (DEBUG) appendFileSync("ollama_tmp.jsonl", JSON.stringify(record) + "\n");
			} else {
				console.log(" --> [X] No GGUF template");
				continue;
			}
		}
	};

	await Promise.all(
		Array(N_WORKERS)
			.fill(null)
			.map(() => workerGetTemplate())
	);

	console.log("DONE");
	output.sort((a, b) => a.model.localeCompare(b.model));

	writeFileSync(
		OUTPUT_FILE,
		`
// This file is auto generated, please do not modify manually
// To update it, run "pnpm run build:automap"

import { OllamaChatTemplateMapEntry } from "./types";

export const OLLAMA_CHAT_TEMPLATE_MAPPING: OllamaChatTemplateMapEntry[] = ${JSON.stringify(output, null, "\t")};
`.trim()
	);
})();