Using openai o*-mini generates an invalid tool parameters schema #4662
Comments
Seems like structured outputs are activated by default for reasoning models: `ai/packages/openai/src/openai-chat-language-model.ts`, lines 59 to 64 in d0d13f9.

OpenAI: https://platform.openai.com/docs/guides/structured-outputs#supported-schemas
Vercel AI SDK: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#structured-outputs
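For context, the OpenAI structured-outputs docs linked above require that a strict schema list every property in `required` and set `additionalProperties: false`; optionality must instead be expressed as a union with `null`. A minimal sketch of that transformation for a flat object schema (the helper name `toStrictSchema` is illustrative, not part of either SDK, and it does not recurse into nested objects):

```typescript
// Strict mode rejects schemas with properties missing from `required`.
// This sketch shows the documented strict-schema shape: all keys required,
// no additional properties, and previously optional fields made nullable.

type JsonSchema = {
  type: string | string[];
  properties?: Record<string, JsonSchema>;
  required?: string[];
  additionalProperties?: boolean;
};

export function toStrictSchema(schema: JsonSchema): JsonSchema {
  const props = schema.properties ?? {};
  const required = new Set(schema.required ?? []);
  const strictProps: Record<string, JsonSchema> = {};
  for (const [key, prop] of Object.entries(props)) {
    strictProps[key] = required.has(key)
      ? prop
      : {
          // previously optional: widen the type to include "null"
          ...prop,
          type: Array.isArray(prop.type)
            ? [...prop.type, "null"]
            : [prop.type, "null"],
        };
  }
  return {
    ...schema,
    properties: strictProps,
    required: Object.keys(props), // strict mode: every key must be required
    additionalProperties: false,
  };
}
```

This is why a plain `.optional()` (or `.default` / `.nullable`) in a tool schema can become invalid once the provider silently adds `"strict": true`.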
You can opt out of structured outputs by changing the …
@lgrammel but as @edenstrom mentioned, …
@lgrammel can you explain why?

I'd prefer to keep the same default behaviour for all models. imho, the default behaviour should be …
@hopkins385 the goal is to move it to …
@lgrammel why?
Description

When using the `o1-mini` or `o3-mini` model with function calls, `strict: true` is added, vs `gpt-4o`, which does not have the `strict` flag. This leads to bugs when optional parameters are provided on the schema. The same bug happens with `.default` or `.nullable`. The root cause seems to be that when `"strict": true` is added, the schema needs to change its typings as described here. As a workaround you can create an OpenAI client that removes the strict flag:
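The workaround code block did not survive extraction; the linked gist below contains the original. As a rough sketch of the idea (the request-body shape and helper name are my assumptions, and `createOpenAI` accepts a custom `fetch` in `@ai-sdk/openai`), one can strip the flag from the serialized request before it reaches the OpenAI API:

```typescript
// Sketch of a workaround: delete the `strict` flag that the provider adds
// for o*-mini models from every function-tool definition in the request
// body, leaving everything else untouched. The body shape assumed here is
// the chat-completions `tools` array; see the gist for the real workaround.

type ToolEntry = {
  type: string;
  function?: { name: string; strict?: boolean; parameters?: unknown };
};

export function stripStrictFlag(body: string): string {
  const payload = JSON.parse(body) as { tools?: ToolEntry[] };
  for (const tool of payload.tools ?? []) {
    if (tool.function && "strict" in tool.function) {
      delete tool.function.strict;
    }
  }
  return JSON.stringify(payload);
}

// Usage idea (not executed here): wrap the provider's fetch.
// const openai = createOpenAI({
//   fetch: (url, init) =>
//     fetch(url, init?.body
//       ? { ...init, body: stripStrictFlag(init.body as string) }
//       : init),
// });
```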
Surprisingly, when using `gpt-4o` the strict flag is not added, vs when `o*-mini` is used. So I detected this bug when migrating to o3-mini.

Code example
Reproduction and workaround here: https://gist.github.com/double-thinker/f60bde68cd5705a33288f2000eeec53d
AI provider
@ai-sdk/openai 1.1.9
Additional context
No response