Gemini Fine Tuning #104
Okay: Gemini models work serverless and are pretty slick. Fireworks is still better for Llama etc., as it offers those models serverless.
Conclusion: serverless Gemini fine-tuning is possible. However, the auth and setup experience would be awful. GCP really needs to clean this up; it manages to be much worse than AWS (which is quite bad). Putting this on ice indefinitely. Would prefer a provider with more open models over an ugly config for one model.
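For reference, a rough sketch of what the serverless path looks like through the Vertex AI Python SDK, assuming its supervised tuning interface (`vertexai.tuning.sft`) - the project, region, bucket, and model name are placeholders, and the comments call out the setup that makes it clunky:

```python
# Rough sketch of serverless Gemini tuning via the Vertex AI Python SDK
# (google-cloud-aiplatform). Project, region, bucket, and model name are
# placeholders; the exact SDK surface may have shifted since this was written.
import time

import vertexai
from vertexai.tuning import sft

# The setup surface that makes this clunky:
#   - a GCP project with the Vertex AI API enabled and billing attached
#   - `gcloud auth application-default login` for local credentials
#   - training data uploaded as JSONL to a GCS bucket
vertexai.init(project="my-gcp-project", location="us-central1")

tuning_job = sft.train(
    source_model="gemini-1.0-pro-002",           # base Gemini model to tune
    train_dataset="gs://my-bucket/train.jsonl",  # chat-format JSONL in GCS
    epochs=3,
    tuned_model_display_name="my-tuned-gemini",
)

# The job itself runs serverlessly on Vertex; poll until it finishes.
while not tuning_job.has_ended:
    time.sleep(60)
    tuning_job.refresh()

print(tuning_job.tuned_model_endpoint_name)
```

Even this minimal version needs a billing-enabled project, application-default credentials, and a GCS bucket just to submit a single job, which is most of the pain described above.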
EDIT: just noticed you had already implemented what I was suggesting below, in #106 - nice one - thanks 🥇
Related to convo here: #29 (comment)
Looking at options: Google has an incredibly weird mix of APIs here. There are at least 2 deprecated APIs and 2 active APIs, and some of the active APIs have 2 names. Everything is called Vertex, except when it isn't.
They have a lovely AI assistant which will happily hallucinate services that don't exist.
Of the active ones:
- Gemini API aka Generative AI API
- Google AI Studio
- Vertex AI
If Vertex AI is serverless, it's an amazing API with lots of control that can fine-tune Gemini, Llama, and much more. If it's not, it's expensive and hard to use for rapid prototyping.
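For comparison, the Gemini API / Google AI Studio route appears to need only an API key rather than a full GCP project - a rough sketch, assuming the `google-generativeai` SDK's tuning helper (the model name, toy dataset, and hyperparameters are illustrative):

```python
# Rough sketch of the API-key-only route via the Gemini API
# (google-generativeai SDK). Model name, dataset, and hyperparameters are
# illustrative, and the SDK surface may have changed.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # no GCP project or gcloud needed

operation = genai.create_tuned_model(
    source_model="models/gemini-1.0-pro-001",
    training_data=[
        {"text_input": "ping", "output": "pong"},
        {"text_input": "hello", "output": "world"},
    ],
    id="my-tuned-model",
    epoch_count=5,
    batch_size=4,
    learning_rate=0.001,
)

# Tuning runs on Google's side; block until the tuned model is ready.
tuned_model = operation.result()
print(tuned_model.name)
```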