
Using openai o*-mini generates an invalid tool's parameters schema. #4662

Open
double-thinker opened this issue Feb 2, 2025 · 6 comments
Labels
bug Something isn't working

Comments

@double-thinker

double-thinker commented Feb 2, 2025

Description

When using the o1-mini or o3-mini models with function calls, strict: true is added to the tool definition, whereas gpt-4o does not get the strict flag. This leads to bugs when optional parameters are present in the schema.

await generateText({
  model: openai("o3-mini"),
  prompt: "What is the weather here?",
  tools: {
    getWeather: tool({
      description: "Get the weather in a location",
      parameters: z.object({
        location: z
          .string()
          .optional(), // Note the optional
        // ...
      }),
    }),
  },
});

The same bug happens with .default or .nullable. The root cause seems to be that when "strict": true is added, the schema's types need to change as described here:

{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Retrieves current weather for the given location.",
        "strict": true,
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country e.g. Bogotá, Colombia"
                },
                "units": {
                    "type": ["string", "null"], // If strict is true, type must change this way. Currently, ai-sdk keeps "string" as type even with strict: true
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Units the temperature will be returned in."
                }
            },
            "required": ["location", "units"],
            "additionalProperties": false
        }
    }
}
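The rewrite shown above can be expressed mechanically. Below is a sketch of a hypothetical helper, toStrictSchema, that applies the transformation OpenAI documents for strict mode: every property is listed in required, previously optional properties get "null" added to their type, and additionalProperties is set to false. The name and types are illustrative and not part of ai-sdk.

```typescript
// Hypothetical helper illustrating the schema transformation OpenAI's strict
// mode requires. Not part of ai-sdk; a sketch for illustration only.
type JsonSchema = {
  type: string | string[];
  properties: Record<string, { type: string | string[]; [k: string]: unknown }>;
  required?: string[];
  additionalProperties?: boolean;
};

function toStrictSchema(schema: JsonSchema): JsonSchema {
  const previouslyRequired = new Set(schema.required ?? []);
  const properties: JsonSchema["properties"] = {};
  for (const [name, prop] of Object.entries(schema.properties)) {
    if (previouslyRequired.has(name)) {
      // Required properties keep their type unchanged.
      properties[name] = prop;
    } else {
      // Previously optional properties must allow null in strict mode.
      const types = Array.isArray(prop.type) ? prop.type : [prop.type];
      properties[name] = {
        ...prop,
        type: types.includes("null") ? prop.type : [...types, "null"],
      };
    }
  }
  return {
    ...schema,
    properties,
    // Strict mode: every property must appear in "required".
    required: Object.keys(schema.properties),
    additionalProperties: false,
  };
}
```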

As a workaround, you can create an OpenAI client that removes the strict flag:

const openaiWithWorkaround = createOpenAI({
  fetch: async (url, options) => {
    if (!options?.body) {
      return fetch(url, options);
    }

    // Parse the request body
    const body = JSON.parse(options.body as string);
    
    // Check if there are tools with functions
    if (body.tools?.length > 0) {
      body.tools = body.tools.map((tool: any) => {
        if (tool.type === 'function' && tool.function.strict) {
          // Remove the strict flag if present
          const { strict, ...functionWithoutStrict } = tool.function;
          return {
            ...tool,
            function: functionWithoutStrict
          };
        }
        return tool;
      });
    }

    // Create new options with modified body
    const newOptions = {
      ...options,
      body: JSON.stringify(body)
    };


    // Make the actual fetch call
    return fetch(url, newOptions);
  },
});
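The body-rewriting step of the workaround can be pulled out into a pure function so it can be tested in isolation. stripStrict is a hypothetical name; the logic mirrors the fetch override above: drop the strict flag from every function tool in the request body.

```typescript
// Pure version of the workaround's body rewrite: removes the "strict" flag
// from every function tool. Hypothetical helper, not part of ai-sdk.
type FunctionTool = {
  type: string;
  function: { strict?: boolean; [k: string]: unknown };
};
type RequestBody = { tools?: FunctionTool[]; [k: string]: unknown };

function stripStrict(body: RequestBody): RequestBody {
  if (!body.tools?.length) return body;
  return {
    ...body,
    tools: body.tools.map((tool) => {
      if (tool.type === "function" && tool.function.strict) {
        // Remove the strict flag, keep everything else.
        const { strict, ...functionWithoutStrict } = tool.function;
        return { ...tool, function: functionWithoutStrict };
      }
      return tool;
    }),
  };
}
```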

Surprisingly, the strict flag is not added when using gpt-4o, but it is when an o*-mini model is used, so I only detected this bug when migrating to o3-mini.

Code example

Reproduction and workaround here: https://gist.github.com/double-thinker/f60bde68cd5705a33288f2000eeec53d

AI provider

@ai-sdk/openai 1.1.9

Additional context

No response

@double-thinker double-thinker added the bug Something isn't working label Feb 2, 2025
@edenstrom

Seems like structured outputs is activated by default for reasoning models:

get supportsStructuredOutputs(): boolean {
  // enable structured outputs for reasoning models by default:
  // TODO in the next major version, remove this and always use json mode for models
  // that support structured outputs (blacklist other models)
  return this.settings.structuredOutputs ?? isReasoningModel(this.modelId);
}

  1. .optional() is incompatible with OpenAI structured outputs and should be replaced with .nullable().
  2. But the issue remains that .nullable() doesn't work either: properties with .nullable() still throw the error "schema must have a 'type' key".

optional vs nullable documentation below:

OpenAI: https://platform.openai.com/docs/guides/structured-outputs#supported-schemas

Vercel AI SDK: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#structured-outputs

@lgrammel
Collaborator

lgrammel commented Feb 3, 2025

You can opt out of structured outputs by changing the structuredOutputs setting to false for reasoning models (default to true).
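For reference, a minimal sketch of that opt-out (assuming the structuredOutputs model setting described in the @ai-sdk/openai docs linked above; verify against your SDK version):

```typescript
// Disable structured outputs (and thus the strict flag) for a reasoning model:
const model = openai("o3-mini", { structuredOutputs: false });
```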

@lgrammel lgrammel closed this as completed Feb 3, 2025
@double-thinker
Author

@lgrammel but as @edenstrom mentioned, .nullable() does not work either, so I think the bug is still present. If this won't be fixed, a mention in the docs would be worthwhile.

@lgrammel lgrammel reopened this Feb 3, 2025
@hopkins385

@lgrammel can you explain why structuredOutputs defaults to true only for the reasoning models?

I'd prefer to keep the same default behaviour for all models.
If structuredOutputs=true is the default for all models, fine, but if we start to mix it, it becomes quite hard to maintain.

IMHO, the default behaviour should be structuredOutputs=false.

@lgrammel
Collaborator

lgrammel commented Feb 3, 2025

@hopkins385 the goal is to move it to true for all models, but that needs to wait for 5.0 for backwards compat. However, since reasoning models are new, we can already enable it for those.

@hopkins385

@lgrammel why?
