Add support for Gemini API #441
can you link docs and a more complete example?
@jxnl
Here are the docs for function calling using the Google AI dev API: https://ai.google.dev/tutorials/function_calling_python_quickstart

A quick example:

```python
import google.generativeai as genai
import google.ai.generativelanguage as glm

calculator = glm.Tool(
    function_declarations=[
        glm.FunctionDeclaration(
            name='add',
            description="Returns the sum of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        ),
        glm.FunctionDeclaration(
            name='multiply',
            description="Returns the product of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        )
    ]
)

model = genai.GenerativeModel('gemini-pro', tools=[calculator])
chat = model.start_chat()
a, b = 2, 3  # define the operands used in the prompt
response = chat.send_message(f"What's {a} X {b}?")
response.candidates
```
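For reference, when the model decides to call a tool, the reply comes back as a `function_call` part inside `response.candidates` rather than plain text. Here's a rough, hedged sketch of pulling the call out, using a plain-dict stand-in for the protobuf response so it runs offline (`extract_function_call` is a name I made up, not an SDK function; real responses expose the same fields as attributes):

```python
def extract_function_call(candidates):
    """Return (name, args) from the first function_call part, or None.

    Operates on dict-shaped candidates; a stand-in for the protobuf
    response objects, which expose the same field names.
    """
    for candidate in candidates:
        for part in candidate["content"]["parts"]:
            fc = part.get("function_call")
            if fc:
                return fc["name"], dict(fc["args"])
    return None

# A mocked candidates list shaped like the quickstart's example response.
mock_candidates = [{
    "content": {
        "parts": [{"function_call": {"name": "multiply",
                                     "args": {"a": 2.0, "b": 3.0}}}]
    }
}]

print(extract_function_call(mock_candidates))  # ('multiply', {'a': 2.0, 'b': 3.0})
```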
lol WHY WOULD THEY DO THIS
I think they want indie developers to hack with their Google AI dev API while also offering GCP to their enterprise customers, just like OpenAI and Azure.
So is there any hope for this?
Def hope! The nuance is that the API is slightly different: `.chat()` and `.send_message()`.
So I want to think about how to give a good matching experience. |
Yes, it's definitely annoying. You have to specify the tools when you create the model object, not when you actually send the prompt. Why even do this? LLMs are stateless; there's no point in defining the functions when creating the model object, unless they're doing something completely different from OpenAI.
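One way a wrapper could paper over this: cache one model object per distinct tool set and pick the right one at send time, so callers still pass tools per request. A rough sketch under the assumption that model objects are cheap to construct; `PerRequestTools` and the injected factory are my names, not from any library (the `FakeModel` below just demonstrates the caching without hitting the API):

```python
class PerRequestTools:
    """Let callers pass tools per request even though the SDK wants
    them at model-construction time, by caching one model per tool set."""

    def __init__(self, model_factory, model_name):
        self._factory = model_factory  # e.g. genai.GenerativeModel
        self._name = model_name
        self._cache = {}

    def _model_for(self, tools):
        # Key the cache on the tool set so identical requests reuse a model.
        key = tuple(sorted(repr(t) for t in (tools or [])))
        if key not in self._cache:
            self._cache[key] = self._factory(self._name, tools=tools)
        return self._cache[key]

    def generate(self, prompt, tools=None):
        return self._model_for(tools).generate_content(prompt)


# Demo with a fake model class standing in for genai.GenerativeModel.
class FakeModel:
    def __init__(self, name, tools=None):
        self.name, self.tools = name, tools

    def generate_content(self, prompt):
        return (prompt, self.tools)

wrapper = PerRequestTools(FakeModel, "gemini-pro")
print(wrapper.generate("What's 2 x 3?", tools=["calculator"]))
```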
Okay, so after taking a second deep dive into their not-so-great docs, I think there's a different way of doing this:

```python
import os
from google.ai import generativelanguage as glm

GOOGLE_API_KEY = os.environ["GOOGLE_API_KEY"]

# Create a client
client = glm.GenerativeServiceClient(
    client_options={'api_key': GOOGLE_API_KEY})

# Create function tools
my_tool = glm.Tool(
    function_declarations=[
        glm.FunctionDeclaration(
            name='add',
            description="Returns the sum of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        ),
        glm.FunctionDeclaration(
            name='multiply',
            description="Returns the product of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        )
    ]
)

request = {
    "model": 'models/gemini-1.0-pro-001',
    "contents": [{"parts": [{"text": "What is 1234 multiplied by 5678?"}],
                  "role": "user"}],
    "tools": [my_tool],
}
response = client.generate_content(request=request)
```

This is somewhat similar to the conventional way of doing function calling with OpenAI's client.
@AmgadHasan I maintain https://github.com/braintrustdata/braintrust-proxy, which lets you access Gemini models through the OpenAI format. We haven't yet translated the Gemini tool-call syntax over, but based on your code snippets, my guess is that it's just sending JSON Schema and should be easy to do. Want to collaborate on that? Then, you could just set
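Since the OpenAI tool format is plain JSON Schema and Gemini's `glm.Schema` mirrors it with upper-case type enums, the translation looks like a type-name mapping plus recursion into `properties` and `items`. A minimal sketch of that idea using plain dicts (a real implementation would construct `glm.Schema` objects; `to_gemini_schema` is a hypothetical helper, not part of either SDK):

```python
# Map JSON Schema type names to Gemini's glm.Type enum names.
_TYPE_MAP = {
    "object": "OBJECT", "array": "ARRAY", "string": "STRING",
    "number": "NUMBER", "integer": "INTEGER", "boolean": "BOOLEAN",
}

def to_gemini_schema(json_schema):
    """Recursively translate a JSON Schema dict into the dict shape
    glm.Schema expects (types as upper-case enum names)."""
    out = {"type": _TYPE_MAP[json_schema["type"]]}
    if "description" in json_schema:
        out["description"] = json_schema["description"]
    if "properties" in json_schema:
        out["properties"] = {k: to_gemini_schema(v)
                             for k, v in json_schema["properties"].items()}
    if "required" in json_schema:
        out["required"] = list(json_schema["required"])
    if "items" in json_schema:
        out["items"] = to_gemini_schema(json_schema["items"])
    return out

# The 'add' tool's parameters from the snippets above, as JSON Schema.
add_params = {
    "type": "object",
    "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
    "required": ["a", "b"],
}
print(to_gemini_schema(add_params)["type"])  # OBJECT
```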
what is the current state on this? happy to contribute!
No work done yet; would love a contrib!
Hi! What's the update on this? :) |
I won't be working on this anytime soon, so it'll likely have to come from another contributor, unless you go through litellm or the braintrust proxy.
Can we get Gemini to implement itself here? I'm only half-joking BTW lol
Managed to implement Gemini support here. Not 100% compatible with all of instructor's concepts given the mismatch in API design, but it's a start. Would love some feedback.
The new Gemini API introduced support for function calling: you define a set of functions with their expected arguments and pass them in the `tools` argument.
Can we add Gemini support to instructor so it can be used with Gemini Pro models and, later, Gemini Ultra?