
[Tented release date] Copilot Mumford release (#54196)
Co-authored-by: Copilot <[email protected]>
saritai and Copilot authored Jan 31, 2025
1 parent ba4eaa3 commit 9c14490
Showing 7 changed files with 75 additions and 12 deletions.
@@ -34,7 +34,7 @@ You can configure any of the following policies for your enterprise:
* [Suggestions matching public code](#suggestions-matching-public-code)
* [Give {% data variables.product.prodname_copilot_short %} access to Bing](#give-copilot-access-to-bing)
* [{% data variables.product.prodname_copilot_short %} access to {% data variables.copilot.copilot_claude_sonnet %}](#copilot-access-to-claude-35-sonnet)
* [{% data variables.product.prodname_copilot_short %} access to the o1 family of models](#copilot-access-to-the-o1-family-of-models)
* [{% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models](#copilot-access-to-the-o1-and-o3-families-of-models)

### {% data variables.product.prodname_copilot_short %} in {% data variables.product.prodname_dotcom_the_website %}

@@ -81,16 +81,19 @@ You can chat with {% data variables.product.prodname_copilot %} in your IDE to g

By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to **Anthropic {% data variables.copilot.copilot_claude_sonnet %} in {% data variables.product.prodname_copilot_short %}**, members of your enterprise can choose to use this model rather than the default `GPT 4o` model. See [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).

### {% data variables.product.prodname_copilot_short %} access to the o1 family of models
### {% data variables.product.prodname_copilot_short %} access to the o1 and o3 families of models

{% data reusables.models.o1-models-preview-note %}

By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 family of models, members of your enterprise can select to use these models rather than the default `GPT 4o` model.
By default, {% data variables.product.prodname_copilot_chat_short %} uses the `GPT 4o` model. If you grant access to the o1 or o3 models, members of your enterprise can choose to use these models rather than the default `GPT 4o` model.

The o1 family of models includes three models:
The o1 family of models includes the following models:

* `o1`/`o1-preview`: These models are focused on advanced reasoning and solving complex problems, particularly in math and science. They respond more slowly than the `gpt-4o` model. Each member of your enterprise can make 10 requests to each of these models per day.
* `o1-mini`: This is the faster version of the `o1` model, balancing the use of complex reasoning with the need for faster responses. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model per day.

The o3 family of models includes one model:

* `o3-mini`: This is the next generation of reasoning models, following from `o1` and `o1-mini`. The `o3-mini` model outperforms `o1` on coding benchmarks with response times that are comparable to `o1-mini`, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. Each member of your enterprise can make 50 requests to this model every 12 hours.
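The per-member quotas above (10 requests per day for `o1`/`o1-preview`, 50 per day for `o1-mini`, 50 per 12 hours for `o3-mini`) amount to sliding-window rate limits. A minimal Python sketch of that logic follows; the quota values mirror the text above, but the tracker itself is purely illustrative (enforcement actually happens on GitHub's side, not in any client API):

```python
import time
from collections import defaultdict, deque

# Per-model quotas from the text above: (max requests, window in seconds).
QUOTAS = {
    "o1": (10, 24 * 3600),
    "o1-preview": (10, 24 * 3600),
    "o1-mini": (50, 24 * 3600),
    "o3-mini": (50, 12 * 3600),
}

class QuotaTracker:
    """Sliding-window counter: a request is allowed if fewer than
    `limit` requests were made within the trailing window."""

    def __init__(self, quotas=QUOTAS):
        self.quotas = quotas
        self.history = defaultdict(deque)  # model -> request timestamps

    def allow(self, model, now=None):
        now = time.time() if now is None else now
        limit, window = self.quotas[model]
        times = self.history[model]
        while times and now - times[0] >= window:
            times.popleft()  # drop requests that fell out of the window
        if len(times) < limit:
            times.append(now)
            return True
        return False

tracker = QuotaTracker()
# The 11th o1 request inside the same 24-hour window is rejected.
results = [tracker.allow("o1", now=0) for _ in range(11)]
```

Once the oldest request ages out of the window, `allow` starts returning `True` again, which matches the "per day" / "every 12 hours" phrasing above.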

### {% data variables.product.prodname_copilot_short %} Metrics API access

@@ -33,7 +33,7 @@ Organization owners can set policies to govern how {% data variables.product.pro
* Suggestions matching public code
* Access to alternative models for {% data variables.product.prodname_copilot_short %}
* Anthropic {% data variables.copilot.copilot_claude_sonnet %} in Copilot
* OpenAI o1 models in Copilot
* OpenAI o1 and o3 models in Copilot

The policy settings selected by an organization owner determine the behavior of {% data variables.product.prodname_copilot %} for all organization members that have been granted access to {% data variables.product.prodname_copilot_short %} through the organization.

@@ -67,7 +67,7 @@ The skills you can use in {% data variables.product.prodname_copilot_chat_dotcom

{% data reusables.copilot.copilot-chat-models-beta-note %}

{% data reusables.copilot.copilot-chat-models-list-o1 %}
{% data reusables.copilot.copilot-chat-models-list-o3 %}

### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}

@@ -153,7 +153,7 @@ You can tell {% data variables.product.prodname_copilot_short %} to answer a que

{% data reusables.copilot.copilot-chat-models-beta-note %}

{% data reusables.copilot.copilot-chat-models-list-o1 %}
{% data reusables.copilot.copilot-chat-models-list-o3 %}

### Changing your AI model

56 changes: 53 additions & 3 deletions content/github-models/prototyping-with-ai-models.md
@@ -133,70 +133,81 @@ Low, high, and embedding models have different rate limits. To see which type of
<tr>
<th scope="col" style="width:15%"><b>Rate limit tier</b></th>
<th scope="col" style="width:25%"><b>Rate limits</b></th>
<th scope="col" style="width:20%"><b>Free and Copilot Individual</b></th>
<th scope="col" style="width:20%"><b>Copilot Business</b></th>
<th scope="col" style="width:20%"><b>Copilot Enterprise</b></th>
<th scope="col" style="width:15%"><b>Copilot Free</b></th>
<th scope="col" style="width:15%"><b>Copilot Pro</b></th>
<th scope="col" style="width:15%"><b>Copilot Business</b></th>
<th scope="col" style="width:15%"><b>Copilot Enterprise</b></th>
</tr>
<tr>
<th rowspan="4" scope="rowgroup"><b>Low</b></th>
<th style="padding-left: 0"><b>Requests per minute</b></th>
<td>15</td>
<td>15</td>
<td>15</td>
<td>20</td>
</tr>
<tr>
<th><b>Requests per day</b></th>
<td>150</td>
<td>150</td>
<td>300</td>
<td>450</td>
</tr>
<tr>
<th><b>Tokens per request</b></th>
<td>8000 in, 4000 out</td>
<td>8000 in, 4000 out</td>
<td>8000 in, 4000 out</td>
<td>8000 in, 8000 out</td>
</tr>
<tr>
<th><b>Concurrent requests</b></th>
<td>5</td>
<td>5</td>
<td>5</td>
<td>8</td>
</tr>
<tr>
<th rowspan="4" scope="rowgroup"><b>High</b></th>
<th style="padding-left: 0"><b>Requests per minute</b></th>
<td>10</td>
<td>10</td>
<td>10</td>
<td>15</td>
</tr>
<tr>
<th><b>Requests per day</b></th>
<td>50</td>
<td>50</td>
<td>100</td>
<td>150</td>
</tr>
<tr>
<th><b>Tokens per request</b></th>
<td>8000 in, 4000 out</td>
<td>8000 in, 4000 out</td>
<td>8000 in, 4000 out</td>
<td>16000 in, 8000 out</td>
</tr>
<tr>
<th><b>Concurrent requests</b></th>
<td>2</td>
<td>2</td>
<td>2</td>
<td>4</td>
</tr>
<tr>
<th rowspan="4" scope="rowgroup"><b>Embedding</b></th>
<th style="padding-left: 0"><b>Requests per minute</b></th>
<td>15</td>
<td>15</td>
<td>15</td>
<td>20</td>
</tr>
<tr>
<th><b>Requests per day</b></th>
<td>150</td>
<td>150</td>
<td>300</td>
<td>450</td>
</tr>
@@ -205,59 +216,98 @@ Low, high, and embedding models have different rate limits. To see which type of
<td>64000</td>
<td>64000</td>
<td>64000</td>
<td>64000</td>
</tr>
<tr>
<th><b>Concurrent requests</b></th>
<td>5</td>
<td>5</td>
<td>5</td>
<td>8</td>
</tr>
<tr>
<th rowspan="4" scope="rowgroup"><b>Azure OpenAI o1-preview</b></th>
<th style="padding-left: 0"><b>Requests per minute</b></th>
<td>Not applicable</td>
<td>1</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<th><b>Requests per day</b></th>
<td>Not applicable</td>
<td>8</td>
<td>10</td>
<td>12</td>
</tr>
<tr>
<th><b>Tokens per request</b></th>
<td>Not applicable</td>
<td>4000 in, 4000 out</td>
<td>4000 in, 4000 out</td>
<td>4000 in, 8000 out</td>
</tr>
<tr>
<th><b>Concurrent requests</b></th>
<td>Not applicable</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th rowspan="4" scope="rowgroup" style="box-shadow: none"><b>Azure OpenAI o1-mini</b></th>
<th style="padding-left: 0"><b>Requests per minute</b></th>
<td>Not applicable</td>
<td>2</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th><b>Requests per day</b></th>
<td>Not applicable</td>
<td>12</td>
<td>15</td>
<td>20</td>
</tr>
<tr>
<th><b>Tokens per request</b></th>
<td>Not applicable</td>
<td>4000 in, 4000 out</td>
<td>4000 in, 4000 out</td>
<td>4000 in, 4000 out</td>
</tr>
<tr>
<th><b>Concurrent requests</b></th>
<td>Not applicable</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<th rowspan="4" scope="rowgroup" style="box-shadow: none"><b>Azure OpenAI o3-mini</b></th>
<th style="padding-left: 0"><b>Requests per minute</b></th>
<td>Not applicable</td>
<td>2</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<th><b>Requests per day</b></th>
<td>Not applicable</td>
<td>12</td>
<td>15</td>
<td>20</td>
</tr>
<tr>
<th><b>Tokens per request</b></th>
<td>Not applicable</td>
<td>4000 in, 4000 out</td>
<td>4000 in, 4000 out</td>
<td>4000 in, 4000 out</td>
</tr>
<tr>
<th><b>Concurrent requests</b></th>
<td>Not applicable</td>
<td>1</td>
<td>1</td>
<td>1</td>
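The "Tokens per request" rows in the table above (for example, "8000 in, 4000 out") imply that an oversized prompt should be trimmed before it is sent. A minimal sketch of such a client-side guard, using the common rough heuristic of about 4 characters per token (the real services count tokens with a proper tokenizer, so treat this as an illustration only, and the caps below are the low/high-tier values from the table):

```python
# Input/output token caps mirroring the table above.
# The trimming logic is a hypothetical client-side guard, not a real API.
TOKEN_CAPS = {
    "low": {"max_input": 8000, "max_output": 4000},
    "high": {"max_input": 8000, "max_output": 4000},
}

CHARS_PER_TOKEN = 4  # rough heuristic, not a real tokenizer

def estimate_tokens(text):
    """Crude token estimate: about 4 characters per token, rounded up."""
    return (len(text) + CHARS_PER_TOKEN - 1) // CHARS_PER_TOKEN

def trim_prompt(text, tier="low"):
    """Trim `text` so its estimated token count fits the tier's input cap."""
    cap = TOKEN_CAPS[tier]["max_input"]
    if estimate_tokens(text) <= cap:
        return text
    return text[: cap * CHARS_PER_TOKEN]

prompt = "x" * 50_000           # roughly 12,500 estimated tokens
trimmed = trim_prompt(prompt)   # now fits the 8000-token input cap
```

A production client would use the model's actual tokenizer rather than a character heuristic, but the shape of the check is the same: estimate, compare against the cap for the rate limit tier, then truncate or reject.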
10 changes: 10 additions & 0 deletions data/reusables/copilot/copilot-chat-models-list-o3.md
@@ -0,0 +1,10 @@
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:

* **GPT 4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT-4o is hosted on Azure.
* **{% data variables.copilot.copilot_claude_sonnet %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
* **o1:** This model is focused on advanced reasoning and solving complex problems, particularly in math and science. It responds more slowly than the `gpt-4o` model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
* **o3-mini:** This model is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. You can make 50 requests to this model every 12 hours. Learn more about the [model's capabilities](https://platform.openai.com/docs/models#o3-mini) and review the [model card](https://openai.com/index/o3-mini-system-card/). o3-mini is hosted on Azure.

For more information about the o1 and o3 models, see [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.

For more information about the {% data variables.copilot.copilot_claude_sonnet %} model from Anthropic, see [AUTOTITLE](/copilot/using-github-copilot/using-claude-sonnet-in-github-copilot).
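For the hosted models listed above, a model choice is conventionally expressed as a `model` field in an OpenAI-style chat completions request body. A minimal sketch follows; the field names reflect the widely used chat completions schema, but this payload is illustrative only (Copilot's own model routing is handled by GitHub, and model identifiers may differ from the display names above):

```python
import json

def build_chat_request(model, user_message, max_tokens=4000):
    """Build an OpenAI-style chat completions request body.

    Illustrative only: shows how a model selection such as "o3-mini"
    is typically carried in the request, not how Copilot routes models.
    """
    return {
        "model": model,  # e.g. "gpt-4o", "o1", "o3-mini"
        "messages": [
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

body = build_chat_request("o3-mini", "Refactor this loop into a comprehension.")
payload = json.dumps(body)  # what would go on the wire
```

Swapping models in such a schema is a one-field change, which is why per-model policies like the ones described here are enforced server-side rather than in the request format.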
2 changes: 1 addition & 1 deletion data/reusables/models/o1-models-preview-note.md
@@ -1 +1 @@
> [!NOTE] Access to OpenAI's `o1` models is in {% data variables.release-phases.public_preview %} and subject to change.
> [!NOTE] Access to OpenAI's `o1` and `o3` models is in {% data variables.release-phases.public_preview %} and subject to change.
