Commit

feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (stackblitz-labs#1202)

Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This entry raises maxTokenAllowed to 65,536 tokens, letting it handle much larger requests than the existing Gemini entries, which are capped at 8,192 tokens. The model is labeled "Gemini 2.0 Flash-thinking-exp-01-21" so it is easy to identify in the UI dropdowns.
saif78642 authored Jan 28, 2025
1 parent 68bbbd0 commit 39a0724
Showing 1 changed file with 1 addition and 0 deletions.
app/lib/modules/llm/providers/google.ts (1 addition, 0 deletions)
@@ -14,6 +14,7 @@ export default class GoogleProvider extends BaseProvider {

staticModels: ModelInfo[] = [
{ name: 'gemini-1.5-flash-latest', label: 'Gemini 1.5 Flash', provider: 'Google', maxTokenAllowed: 8192 },
{ name: 'gemini-2.0-flash-thinking-exp-01-21', label: 'Gemini 2.0 Flash-thinking-exp-01-21', provider: 'Google', maxTokenAllowed: 65536 },
{ name: 'gemini-2.0-flash-exp', label: 'Gemini 2.0 Flash', provider: 'Google', maxTokenAllowed: 8192 },
{ name: 'gemini-1.5-flash-002', label: 'Gemini 1.5 Flash-002', provider: 'Google', maxTokenAllowed: 8192 },
{ name: 'gemini-1.5-flash-8b', label: 'Gemini 1.5 Flash-8b', provider: 'Google', maxTokenAllowed: 8192 },
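For context, here is a minimal TypeScript sketch of how a caller might consume the new entry's maxTokenAllowed ceiling. The ModelInfo field names mirror the entries shown in the diff above; the resolveMaxTokens helper and its 8k fallback are illustrative assumptions, not code from this repository.

```ts
// Hypothetical sketch: clamp a requested completion size to a model's
// configured maxTokenAllowed. Only the ModelInfo shape is taken from the
// diff above; the helper itself is an assumption for illustration.
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

const staticModels: ModelInfo[] = [
  { name: 'gemini-2.0-flash-thinking-exp-01-21', label: 'Gemini 2.0 Flash-thinking-exp-01-21', provider: 'Google', maxTokenAllowed: 65536 },
  { name: 'gemini-2.0-flash-exp', label: 'Gemini 2.0 Flash', provider: 'Google', maxTokenAllowed: 8192 },
];

function resolveMaxTokens(modelName: string, requested: number): number {
  const model = staticModels.find((m) => m.name === modelName);
  const ceiling = model?.maxTokenAllowed ?? 8192; // assume the old 8k cap as a fallback
  return Math.min(requested, ceiling);
}

// With the new entry, a 32k-token request is no longer clamped to 8k:
console.log(resolveMaxTokens('gemini-2.0-flash-thinking-exp-01-21', 32768)); // 32768
console.log(resolveMaxTokens('gemini-2.0-flash-exp', 32768)); // 8192
```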
