[Inference] Better types for HfInference #1121

Draft
SBrandeis wants to merge 2 commits into main

Conversation

@SBrandeis (Contributor) commented Jan 21, 2025

(unfinished, but I would love some input)

cc @coyotte508

Before this PR, TypeScript was unable to determine the actual input types for the methods on the HfInference client.

For example, HfInference.textToImage lacked typings for its parameters.

This PR ensures that all tasks exported from src/tasks/index are re-exported by the HfInference client, and introduces a helper function that preserves each task's exact types along the way.
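
To illustrate what that buys, a hedged sketch of a call after this change (the parameter names below are assumptions based on the general text-to-image task, not taken from this diff):

import { HfInference } from "@huggingface/inference";

const hf = new HfInference("hf_xxx");

// With the task types re-exported, `inputs` and `parameters` below are checked by the
// compiler; a typo like `negative_promt` or a wrongly typed value becomes a compile error
// instead of being silently forwarded to the API.
const image = await hf.textToImage({
	model: "stabilityai/stable-diffusion-2", // any text-to-image model
	inputs: "an astronaut riding a horse",
	parameters: { negative_prompt: "blurry" },
});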

Comment on lines +592 to +603
// it("request - openai-community/gpt2", async () => {
// expect(
// await hf.request({
// model: "openai-community/gpt2",
// inputs: "one plus two equals",
// })
// ).toMatchObject([
// {
// generated_text: expect.any(String),
// },
// ]);
// });
Contributor Author (SBrandeis)

Still need to find a way around request and streamingRequest... They use a generic type T, which messes things up, I think.
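
For context, a rough sketch of the issue with a simplified, hypothetical signature (not the real one):

// Hypothetical, simplified signature: the caller picks the response type.
declare function request<T>(args: { model: string; inputs: unknown }, options?: Options): Promise<T>;

// When a generic function is passed through a wrapper like mapInferenceFn,
// TypeScript has to instantiate T at that point (typically as `unknown`),
// so the wrapped method can no longer be called as hf.request<MyOutput>(...).
// One option might be to keep request/streamingRequest as hand-written methods
// instead of routing them through the helper.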

-export class HfInference {
+export class HfInference implements Task {
 	private readonly accessToken: string;
 	private readonly defaultOptions: Options;
Contributor Author (SBrandeis)

export class HfInference implements Task {

That bit makes sure HfInference defines a method for every task (textToImage, chatCompletion, etc.).
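
For context, one way such a Task interface can be derived (a sketch under assumed names and paths, not necessarily what this diff does):

import * as tasks from "../tasks"; // path assumed

// Hypothetical sketch: one member per exported task, with the same signature
// as the standalone task function, so `implements Task` forces the class to
// declare textToImage, chatCompletion, automaticSpeechRecognition, ... with matching types.
type Task = {
	[K in keyof typeof tasks]: (typeof tasks)[K];
};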

Comment on lines +59 to +63
static mapInferenceFn<TOut, TArgs>(instance: HfInference, func: (...args: [TArgs, Options?]) => TOut) {
return function (...[args, options]: Parameters<(...args: [TArgs, Options?]) => TOut>): TOut {
return func({ ...args, accessToken: instance.accessToken }, { ...instance.defaultOptions, ...(options ?? {}) });
};
}
Contributor Author (SBrandeis)

This helper function adds the default arguments (the instance's access token and default options) when calling HfInference.<task>(), while keeping the exact types for inputs and outputs.
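
To make that concrete, a sketch of what a wrapped call expands to (the instance values and option names are illustrative):

const hf = new HfInference("hf_xxx", { retry_on_error: false });

// A call like this...
await hf.textToImage({ inputs: "a cat" });

// ...roughly ends up invoking the standalone task as:
//   textToImage(
//     { inputs: "a cat", accessToken: "hf_xxx" },   // token injected from the instance
//     { retry_on_error: false }                     // instance defaults merged with per-call options
//   )
// and since TArgs/TOut are inferred from the task function itself,
// the arguments and the return value keep their exact types.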

Contributor

So jealous of how much more powerful TypeScript's type system seems compared to Python's... 😒

@@ -40,3 +42,5 @@ export * from "./multimodal/visualQuestionAnswering";
 // Tabular tasks
 export * from "./tabular/tabularRegression";
 export * from "./tabular/tabularClassification";
+
+export const speechToText = automaticSpeechRecognition;
Contributor Author (SBrandeis)

(unrelated) aliasing automaticSpeechRecognition as speechToText

Member

Did it exist before, or is that new?

Member

(why not, I guess, though we should try to push the "official" task names; otherwise it's hard to standardize on one way of doing things)

Contributor Author (SBrandeis)

> (why not, I guess, though we should try to push the "official" task names; otherwise it's hard to standardize on one way of doing things)

yeah, good point

@coyotte508 (Member) left a comment

Do you want to edit HfInferenceEndpoint too in the same file?

Also note the script packages/inference/scripts/generate-dts.ts, used for generating the definition files; you may need to change it now.

(and we could maybe switch from tsup to tshy, removing the need for the script entirely)

4 participants