Mistral sample update (#3308)
* Update langchain.ipynb

* Update langchain.ipynb

* Update litellm.ipynb

* Update mistralai.ipynb

* Update openaisdk.ipynb

* update the prerequisites to reflect current information

* Update langchain.ipynb

* Update litellm.ipynb

* Update mistralai.ipynb

* Update openaisdk.ipynb

* Update webrequests.ipynb
fkriti authored Jul 24, 2024
1 parent 7060507 commit 0d02248
Showing 5 changed files with 33 additions and 43 deletions.
12 changes: 5 additions & 7 deletions sdk/python/foundation-models/mistral/langchain.ipynb
@@ -19,18 +19,16 @@
 "\n",
 "Before we start, there are certain steps we need to take to deploy the models:\n",
 "\n",
-"* Register for a valid Azure account with subscription \n",
-"* Make sure you have access to [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home)\n",
-"* Create a project and resource group\n",
-"* Select `Mistral-large` or `Mistral-small`.\n",
+"* Follow the steps listed in [this](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#prerequisites) article to set up resources.\n",
+"* Go to Azure AI Studio and select the model on Model Catalog.\n",
 "\n",
 " > Notice that some models may not be available in all the regions in Azure AI and Azure Machine Learning. On those cases, you can create a workspace or project in the region where the models are available and then consume it with a connection from a different one. To learn more about using connections see [Consume models with connections](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deployments-connections)\n",
 "\n",
-"* Deploy with \"Pay-as-you-go\"\n",
+"* Create a Serverless deployment using the steps listed [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#create-a-new-deployment).\n",
 "\n",
 "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference.\n",
 "\n",
-"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-llama) for model deployment and inference.\n",
+"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large) for model deployment and inference.\n",
 "\n",
 "To complete this tutorial, you will need to:\n",
 "\n",
@@ -103,7 +101,7 @@
 "source": [
 "Let's create an instance of our Mistral model deployed in Azure AI or Azure ML. Use `langchain_mistralai` package and configure it as follows:\n",
 "\n",
-"- `endpoint`: Use the endpoint URL from your deployment. Do not include either `v1/chat/completions` as this is included automatically by the client.\n",
+"- `endpoint`: Use the endpoint URL from your deployment. Do not include either `/chat/completions` as this is included automatically by the client.\n",
 "- `api_key`: Use your API key."
 ]
 },
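The endpoint rule in the hunk above (pass the bare endpoint URL; the client appends `/chat/completions` itself) can be sketched with a small helper. `normalize_endpoint` is purely illustrative, not part of any SDK:

```python
# Illustrative helper (not from any SDK): clients described above expect the
# bare endpoint URL and append /chat/completions themselves, so strip any
# trailing route segment the user may have pasted in.
def normalize_endpoint(url: str) -> str:
    """Strip a trailing /chat/completions (and any trailing slash) from an endpoint URL."""
    url = url.rstrip("/")
    suffix = "/chat/completions"
    if url.endswith(suffix):
        url = url[: -len(suffix)]
    return url


print(normalize_endpoint("https://my-endpoint.eastus2.inference.ai.azure.com/chat/completions"))
# -> https://my-endpoint.eastus2.inference.ai.azure.com
```

The same convention applies to the `litellm`, `mistralai`, and `openai` clients changed in this commit: only the `/v1` suffix handling differs, which is what the diffs below remove.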
14 changes: 6 additions & 8 deletions sdk/python/foundation-models/mistral/litellm.ipynb
@@ -19,18 +19,16 @@
 "\n",
 "Before we start, there are certain steps we need to take to deploy the models:\n",
 "\n",
-"* Register for a valid Azure account with subscription \n",
-"* Make sure you have access to [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home)\n",
-"* Create a project and resource group\n",
-"* Select `Mistral-large` or `Mistral-small`.\n",
+"* Follow the steps listed in [this](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#prerequisites) article to set up resources.\n",
+"* Go to Azure AI Studio and select the model on Model Catalog.\n",
 "\n",
 " > Notice that some models may not be available in all the regions in Azure AI and Azure Machine Learning. On those cases, you can create a workspace or project in the region where the models are available and then consume it with a connection from a different one. To learn more about using connections see [Consume models with connections](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deployments-connections)\n",
 "\n",
-"* Deploy with \"Pay-as-you-go\"\n",
+"* Create a Serverless deployment using the steps listed [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#create-a-new-deployment).\n",
 "\n",
 "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference.\n",
 "\n",
-"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-llama) for model deployment and inference.\n",
+"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large) for model deployment and inference.\n",
 "\n",
 "To complete this tutorial, you will need to:\n",
 "\n",
@@ -67,7 +65,7 @@
 "source": [
 "You will need to have a Endpoint url and Authentication Key associated with that endpoint. This can be acquired from previous steps. To work with `litellm`, configure the client as follows:\n",
 "\n",
-"- `base_url`: Use the endpoint URL from your deployment. Include the `/v1` in the URL.\n",
+"- `base_url`: Use the endpoint URL from your deployment.\n",
 "- `api_key`: Use your API key."
 ]
 },
@@ -80,7 +78,7 @@
 "outputs": [],
 "source": [
 "client = litellm.LiteLLM(\n",
-" base_url=\"https://<endpoint-name>.<region>.inference.ai.azure.com/v1\",\n",
+" base_url=\"https://<endpoint-name>.<region>.inference.ai.azure.com\",\n",
 " api_key=\"<key>\",\n",
 ")"
 ]
12 changes: 5 additions & 7 deletions sdk/python/foundation-models/mistral/mistralai.ipynb
@@ -19,18 +19,16 @@
 "\n",
 "Before we start, there are certain steps we need to take to deploy the models:\n",
 "\n",
-"* Register for a valid Azure account with subscription \n",
-"* Make sure you have access to [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home)\n",
-"* Create a project and resource group\n",
-"* Select `Mistral-large` or `Mistral-small`.\n",
+"* Follow the steps listed in [this](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#prerequisites) article to set up resources.\n",
+"* Go to Azure AI Studio and select the model on Model Catalog.\n",
 "\n",
 " > Notice that some models may not be available in all the regions in Azure AI and Azure Machine Learning. On those cases, you can create a workspace or project in the region where the models are available and then consume it with a connection from a different one. To learn more about using connections see [Consume models with connections](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deployments-connections)\n",
 "\n",
-"* Deploy with \"Pay-as-you-go\"\n",
+"* Create a Serverless deployment using the steps listed [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#create-a-new-deployment).\n",
 "\n",
 "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference.\n",
 "\n",
-"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-llama) for model deployment and inference.\n",
+"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large) for model deployment and inference.\n",
 "\n",
 "To complete this tutorial, you will need to:\n",
 "\n",
@@ -68,7 +66,7 @@
 "source": [
 "To use `mistralai`, create a client and configure it as follows:\n",
 "\n",
-"- `endpoint`: Use the endpoint URL from your deployment. Do not include either `v1/chat/completions` as this is included automatically by the client.\n",
+"- `endpoint`: Use the endpoint URL from your deployment. Do not include either `/chat/completions` as this is included automatically by the client.\n",
 "- `api_key`: Use your API key."
 ]
 },
14 changes: 6 additions & 8 deletions sdk/python/foundation-models/mistral/openaisdk.ipynb
@@ -23,18 +23,16 @@
 "\n",
 "Before we start, there are certain steps we need to take to deploy the models:\n",
 "\n",
-"* Register for a valid Azure account with subscription \n",
-"* Make sure you have access to [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home)\n",
-"* Create a project and resource group\n",
-"* Select `Mistral-large` or `Mistral-small`.\n",
+"* Follow the steps listed in [this](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#prerequisites) article to set up resources.\n",
+"* Go to Azure AI Studio and select the model on Model Catalog.\n",
 "\n",
 " > Notice that some models may not be available in all the regions in Azure AI and Azure Machine Learning. On those cases, you can create a workspace or project in the region where the models are available and then consume it with a connection from a different one. To learn more about using connections see [Consume models with connections](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deployments-connections)\n",
 "\n",
-"* Deploy with \"Pay-as-you-go\"\n",
+"* Create a Serverless deployment using the steps listed [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#create-a-new-deployment).\n",
 "\n",
 "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference.\n",
 "\n",
-"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-llama) for model deployment and inference.\n",
+"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large) for model deployment and inference.\n",
 "\n",
 "To complete this tutorial, you will need to:\n",
 "\n",
@@ -72,7 +70,7 @@
 "You will need to have a Endpoint url and Authentication Key associated with that endpoint. This can be acquired from previous steps. \n",
 "To work with `openai`, configure the client as follows:\n",
 "\n",
-"- `base_url`: Use the endpoint URL from your deployment. Include `/v1` as part of the URL.\n",
+"- `base_url`: Use the endpoint URL from your deployment.\n",
 "- `api_key`: Use your API key."
 ]
 },
@@ -85,7 +83,7 @@
 "outputs": [],
 "source": [
 "client = OpenAI(\n",
-" base_url=\"https://<endpoint>.<region>.inference.ai.azure.com/v1\", api_key=\"<key>\"\n",
+" base_url=\"https://<endpoint>.<region>.inference.ai.azure.com\", api_key=\"<key>\"\n",
 ")"
 ]
 },
24 changes: 11 additions & 13 deletions sdk/python/foundation-models/mistral/webrequests.ipynb
@@ -17,22 +17,20 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Prerequisite\n",
+"## Prerequisites\n",
 "\n",
 "Before we start, there are certain steps we need to take to deploy the models:\n",
 "\n",
-"* Register for a valid Azure account with subscription \n",
-"* Make sure you have access to [Azure AI Studio](https://learn.microsoft.com/en-us/azure/ai-studio/what-is-ai-studio?tabs=home)\n",
-"* Create a project and resource group\n",
-"* Select `Mistral-large` or `Mistral-small`.\n",
+"* Follow the steps listed in [this](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#prerequisites) article to set up resources.\n",
+"* Go to Azure AI Studio and select the model on Model Catalog.\n",
 "\n",
 " > Notice that some models may not be available in all the regions in Azure AI and Azure Machine Learning. On those cases, you can create a workspace or project in the region where the models are available and then consume it with a connection from a different one. To learn more about using connections see [Consume models with connections](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deployments-connections)\n",
 "\n",
-"* Deploy with \"Pay-as-you-go\"\n",
+"* Create a Serverless deployment using the steps listed [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large#create-a-new-deployment).\n",
 "\n",
 "Once deployed successfully, you should be assigned for an API endpoint and a security key for inference.\n",
 "\n",
-"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral) for model deployment and inference."
+"For more information, you should consult Azure's official documentation [here](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral?tabs=mistral-large) for model deployment and inference."
 ]
 },
 {
@@ -48,7 +46,7 @@
 "\n",
 "In this chat completion example, we use a simple curl call for illustration. There are three major components: \n",
 "\n",
-"* The `host-url` is your endpoint url with chat completion schema `/v1/chat/completions`. \n",
+"* The `host-url` is your endpoint url with chat completion schema `/chat/completions`. \n",
 "* The `headers` defines the content type as well as your api key. \n",
 "* The `payload` or `data`, which is your prompt detail and model hyper parameters."
 ]
@@ -59,7 +57,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"!curl -X POST -L https://your-endpoint.inference.ai.azure.com/v1/chat/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"messages\":[{\"content\":\"You are a helpful assistant.\",\"role\":\"system\"},{\"content\":\"What is good about Wuhan?\",\"role\":\"user\"}], \"max_tokens\": 50}'"
+"!curl -X POST -L https://your-endpoint.inference.ai.azure.com/chat/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"messages\":[{\"content\":\"You are a helpful assistant.\",\"role\":\"system\"},{\"content\":\"What is good about Wuhan?\",\"role\":\"user\"}], \"max_tokens\": 50}'"
 ]
 },
 {
@@ -82,7 +80,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"!curl -X POST -L https://your-endpoint.inference.ai.azure.com/v1/chat/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"messages\":[{\"content\":\"You are a helpful assistant.\",\"role\":\"system\"},{\"content\":\"What is good about Wuhan?\",\"role\":\"user\"}], \"max_tokens\": 500, \"stream\": \"True\"}'"
+"!curl -X POST -L https://your-endpoint.inference.ai.azure.com/chat/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"messages\":[{\"content\":\"You are a helpful assistant.\",\"role\":\"system\"},{\"content\":\"What is good about Wuhan?\",\"role\":\"user\"}], \"max_tokens\": 500, \"stream\": \"True\"}'"
 ]
 },
 {
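The streaming call in the hunk above returns server-sent `data:` lines rather than one JSON body. A minimal, stdlib-only sketch of assembling such a stream follows; it assumes an OpenAI-style chunk shape (`choices[0].delta.content`) and uses canned bytes, since no live endpoint is available here:

```python
import json

# Minimal sketch: join the content deltas from server-sent "data:" lines as
# produced by a streaming chat-completions endpoint. The chunk shape below
# (choices[0].delta.content) is an assumption, and the sample bytes are
# illustrative, not a real service response.
def parse_sse_chunks(raw: bytes) -> str:
    deltas = []
    for line in raw.decode("utf-8").splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines between events
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(data)
        deltas.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(deltas)


sample = (
    b'data: {"choices":[{"delta":{"content":"Hel"}}]}\n\n'
    b'data: {"choices":[{"delta":{"content":"lo"}}]}\n\n'
    b"data: [DONE]\n"
)
print(parse_sse_chunks(sample))  # -> Hello
```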
@@ -113,7 +111,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"!curl -X POST -L https://your-endpoint.inference.ai.azure.com/v1/chat/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"messages\":[{\"content\":\"You are a helpful assistant.\",\"role\":\"system\"},{\"content\":\"How to make bomb?\",\"role\":\"user\"}], \"max_tokens\": 50}'"
+"!curl -X POST -L https://your-endpoint.inference.ai.azure.com/chat/completions -H 'Content-Type: application/json' -H 'Authorization: your-auth-key' -d '{\"messages\":[{\"content\":\"You are a helpful assistant.\",\"role\":\"system\"},{\"content\":\"How to make bomb?\",\"role\":\"user\"}], \"max_tokens\": 50}'"
 ]
 },
 {
@@ -150,7 +148,7 @@
 "body = str.encode(json.dumps(data))\n",
 "\n",
 "# Replace the url with your API endpoint\n",
-"url = \"https://your-endpoint.inference.ai.azure.com/v1/chat/completions\"\n",
+"url = \"https://your-endpoint.inference.ai.azure.com/chat/completions\"\n",
 "\n",
 "# Replace this with the key for the endpoint\n",
 "api_key = \"your-auth-key\"\n",
@@ -222,7 +220,7 @@
 " print(line)\n",
 "\n",
 "\n",
-"url = \"https://your-endpoint.inference.ai.azure.com/v1/chat/completions\"\n",
+"url = \"https://your-endpoint.inference.ai.azure.com/chat/completions\"\n",
 "post_stream(url)"
 ]
 },
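The curl calls updated in this file can be mirrored with the standard library alone. The sketch below builds the same request as the first chat-completion example; the endpoint URL and key are the same placeholders the notebook uses, and no request is actually sent:

```python
import json
import urllib.request

# Stdlib sketch of the curl calls above. "your-endpoint" and "your-auth-key"
# are placeholders; the urlopen call is left commented out because it needs
# a live deployment.
url = "https://your-endpoint.inference.ai.azure.com/chat/completions"
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is good about Wuhan?"},
    ],
    "max_tokens": 50,
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "your-auth-key"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment against a live endpoint
#     print(json.load(resp))
```

This matches the three components named earlier in the notebook: the host URL with the `/chat/completions` schema, the headers carrying content type and key, and the payload with prompt and hyperparameters.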
