
Reasoning models from open AI. #1311

Open
avesa95 opened this issue Jan 18, 2025 · 0 comments
Labels
bug Something isn't working question Further information is requested

Comments

avesa95 commented Jan 18, 2025

How can we use o1 or o1-mini with instructor?

I tried and received:

Error code: 404 - {'error': {'message': 'tools is not supported in this model. For a list of supported models, refer to https://platform.openai.com/docs/guides/function-calling#models-supporting-function-calling.', 'type': 'invalid_request_error', 'param': None, 'code': None}}

This is my code:
```python
# for exponential backoff
import instructor
from langfuse.openai import OpenAI  # drop the duplicate `from openai import OpenAI`;
                                    # it would shadow the Langfuse-instrumented client

from pydantic import BaseModel
from tenacity import (
    retry,
    stop_after_attempt,
    wait_random_exponential,
)

from retrieval.core.interfaces import BasePromptTemplate, LLMInterface


class Gpt(LLMInterface):
    def __init__(self, model: str):
        self.model = model
        self.llm = instructor.from_openai(OpenAI())

    @retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
    def completion_with_backoff(self, llm, **kwargs):
        return llm.chat.completions.create(**kwargs)

    def get_answer(
        self,
        prompt: BasePromptTemplate,
        formatted_instruction: BaseModel,
        temperature=0,
        *args,
        **kwargs,
    ):
        formatted_prompt = prompt.create_template(*args, **kwargs)
        answer = self.completion_with_backoff(
            llm=self.llm,
            model=self.model,
            temperature=temperature,
            response_model=formatted_instruction,
            messages=[
                {
                    "role": "user",
                    "content": formatted_prompt,
                },
            ],
        )
        if formatted_instruction:
            return answer.dict()
        else:
            return answer.choices[0].message.content
```
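For context on the error: the o1-family models reject the `tools` parameter, and instructor's default mode relies on tool calling, which is why the request 404s. One option may be switching instructor to a JSON-based mode (e.g. `instructor.from_openai(OpenAI(), mode=instructor.Mode.JSON)`, if that mode is available in your instructor version); these models may also reject non-default `temperature` values. As a stdlib-only fallback sketch (no instructor), you can ask the model for JSON in the prompt and validate the reply yourself. The `Answer` dataclass and `parse_reply` helper below are hypothetical names for illustration, not part of any library:

```python
# Fallback when a model rejects `tools`: request JSON in the prompt,
# then parse and validate the raw reply yourself.
import json
from dataclasses import dataclass, fields


@dataclass
class Answer:  # hypothetical response schema
    summary: str
    confidence: float


def parse_reply(raw: str, cls):
    """Parse a JSON reply into the given dataclass, rejecting missing keys."""
    data = json.loads(raw)
    names = {f.name for f in fields(cls)}
    missing = names - data.keys()
    if missing:
        raise ValueError(f"reply missing keys: {missing}")
    # ignore any extra keys the model added
    return cls(**{k: data[k] for k in names})


# Example: what a model reply might look like
reply = '{"summary": "ok", "confidence": 0.9}'
parsed = parse_reply(reply, Answer)
```

This keeps the retry/backoff wrapper unchanged; only the response handling differs from the tool-calling path.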

Thanks,
Alex

@github-actions github-actions bot added bug Something isn't working question Further information is requested labels Jan 18, 2025