From 7296d61b90fcc546a601f39a6b3794448f7a286d Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Thu, 13 Jun 2024 14:56:23 -0400
Subject: [PATCH 1/7] Native function calling with converse API
---
...rock_Mistral_native_function_calling.ipynb | 763 ++++++++++++++++++
1 file changed, 763 insertions(+)
create mode 100644 notebooks/Bedrock_Mistral_native_function_calling.ipynb
diff --git a/notebooks/Bedrock_Mistral_native_function_calling.ipynb b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
new file mode 100644
index 0000000..733b520
--- /dev/null
+++ b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
@@ -0,0 +1,763 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e3eed142-6144-4ba2-a693-2bcfdeeae823",
+ "metadata": {},
+ "source": [
+ "# Amazon Bedrock Function Calling (Tool Use) with Converse API using Mistral Large"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4c7a66db-4d22-4e0f-883c-8e8daf6a4291",
+ "metadata": {},
+ "source": [
+ "In this Jupyter Notebook, we walkthrough an implementation of native function calling and agentic workflows with with the Converse API for Amazon Bedrock and Mistral Large. Function Calling is a powerful technique that allows large language models to connect to external tools, systems, or APIs to enable, which can be executed to perform actions based on user's input."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3c58ce86-3cb5-4a4b-9735-dc4684db731f",
+ "metadata": {},
+ "source": [
+ "The Converse or ConverseStream API is a unified structured text action for simplifying the invocations to Bedrock LLMs. It includes the possibility to define tools for implementing external functions that can be called or triggered from the LLMs."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9a5fcd65-01b7-418d-988e-f80d05ebb4a5",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Mistral Model Selection\n",
+ "\n",
+ "Today, Mistral Large supports native function calling with Converse API for Amazon Bedrock:\n",
+ "\n",
+ "### 3. Mistral Large\n",
+ "\n",
+ "- **Description:** A cutting-edge text generation model with top-tier reasoning capabilities. It can be used for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.\n",
+ "- **Max Tokens:** 8,196\n",
+ "- **Context Window:** 32K\n",
+ "- **Languages:** English, French, German, Spanish, Italian\n",
+ "- **Supported Use Cases:** Synthetic Text Generation, Code Generation, RAG, or Agents\n",
+ "\n",
+ "### Performance and Cost Trade-offs\n",
+ "\n",
+ "The table below compares the model performance on the Massive Multitask Language Understanding (MMLU) benchmark and their on-demand pricing on Amazon Bedrock.\n",
+ "\n",
+ "| Model | MMLU Score | Price per 1,000 Input Tokens | Price per 1,000 Output Tokens |\n",
+ "|-----------------|------------|------------------------------|-------------------------------|\n",
+ "| Mistral Large | 81.2% | \\$0.008 | \\$0.024 |\n",
+ "\n",
+ "For more information, refer to the following links:\n",
+ "\n",
+ "1. [Mistral Model Selection Guide](https://docs.mistral.ai/guides/model-selection/)\n",
+ "2. [Amazon Bedrock Pricing Page](https://aws.amazon.com/bedrock/pricing/)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f3978ae9-57d0-4dca-b94b-b6493f8eade0",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Supported papameters\n",
+ "\n",
+ "The Mistral AI models have the following inference parameters.\n",
+ "\n",
+ "\n",
+ "```\n",
+ "{\n",
+ " \"prompt\": string,\n",
+ " \"max_tokens\" : int,\n",
+ " \"stop\" : [string], \n",
+ " \"temperature\": float,\n",
+ " \"top_p\": float,\n",
+ " \"top_k\": int\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "The Mistral AI models have the following inference parameters:\n",
+ "\n",
+ "- **Temperature** - Tunes the degree of randomness in generation. Lower temperatures mean less random generations.\n",
+ "- **Top P** - If set to float less than 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.\n",
+ "- **Top K** - Can be used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.\n",
+ "- **Maximum Length** - Maximum number of tokens to generate. Responses are not guaranteed to fill up to the maximum desired length.\n",
+ "- **Stop sequences** - Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "95db24c2-384a-4d1e-b6b6-a832ac002500",
+ "metadata": {},
+ "source": [
+ "### Local Setup (Optional)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "11710041-4077-4700-a47a-2459d1be4fb9",
+ "metadata": {},
+ "source": [
+ "For a local server, follow these steps to execute this jupyter notebook:\n",
+ "\n",
+ "1. **Configure AWS CLI**: Configure [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with your AWS credentials. Run `aws configure` and enter your AWS Access Key ID, AWS Secret Access Key, AWS Region, and default output format.\n",
+ "\n",
+ "2. **Install required libraries**: Install the necessary Python libraries for working with SageMaker, such as [sagemaker](https://github.com/aws/sagemaker-python-sdk/), [boto3](https://github.com/boto/boto3), and others. You can use a Python environment manager like [conda](https://docs.conda.io/en/latest/) or [virtualenv](https://virtualenv.pypa.io/en/latest/) to manage your Python packages in your preferred IDE (e.g. [Visual Studio Code](https://code.visualstudio.com/)).\n",
+ "\n",
+ "3. **Create an IAM role for SageMaker**: Create an AWS Identity and Access Management (IAM) role that grants your user [SageMaker permissions](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). \n",
+ "\n",
+ "By following these steps, you can set up a local Jupyter Notebook environment capable of deploying machine learning models on Amazon SageMaker using the appropriate IAM role for granting the necessary permissions."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7c22cf2e-971a-4df9-9e27-1cd7a05d8307",
+ "metadata": {},
+ "source": [
+ "## Setup and Requirements"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1ba91a5b-4ceb-4e60-9f89-b31a4e446c00",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "1. Create an Amazon SageMaker Notebook Instance - [Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-setup-working-env.html)\n",
+ " - For Notebook Instance type, choose ml.t3.medium.\n",
+ "2. For Select Kernel, choose [conda_python3](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-prepare.html).\n",
+ "3. Install the required packages."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5b7ef5d8-904d-4cce-8522-e4b21279ceb1",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "\n",
+ "
NOTE:\n",
+ "\n",
+ "- For
Amazon SageMaker Studio, select Kernel \"
Python 3 (ipykernel)\".\n",
+ "\n",
+ "- For
Amazon SageMaker Studio Classic, select Image \"
Base Python 3.0\" and Kernel \"
Python 3\".\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1262e8ec-674a-44c2-be1f-0a92df6b7ca6",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "Before we start building the agentic workflow, we'll first install some libraries:\n",
+ "\n",
+ "+ AWS Python SDKs [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) to be able to submit API calls to [Amazon Bedrock](https://aws.amazon.com/bedrock/).\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "60fde7eb-a354-4934-9126-b793080328c1",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "%%writefile requirements.txt\n",
+ "boto3==1.34.125\n",
+ "pandas==2.2.2"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "daf00ca0-9a89-4a3e-ac25-5f172dac43d5",
+ "metadata": {},
+ "source": [
+ "Install required packages:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c54cfea5-782f-49c1-8885-c8fc8955e09e",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "!pip install -U -r requirements.txt --quiet"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "6d59d834-3f3c-46e5-8cb8-56def569a934",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "import boto3\n",
+ "from datetime import datetime\n",
+ "import json\n",
+ "import pandas as pd\n",
+ "import re"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a09522ec-1bb8-4427-a498-60598c1078ee",
+ "metadata": {},
+ "source": [
+ "Let's define a few variables and create a bedrock client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "42c5827e-45c8-4445-b01e-a611a7dfcd46",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Using modelId: mistral.mistral-large-2402-v1:0\n",
+ "Using region: us-east-1\n"
+ ]
+ }
+ ],
+ "source": [
+ "model_id = 'mistral.mistral-large-2402-v1:0'\n",
+ "print(f'Using modelId: {model_id}')\n",
+ "\n",
+ "region = 'us-east-1'\n",
+ "print('Using region: ', region)\n",
+ "\n",
+ "bedrock = boto3.client(\n",
+ " service_name='bedrock-runtime',\n",
+ " region_name=region\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7cc107b3-7dfa-494a-9269-6229b6dccb24",
+ "metadata": {},
+ "source": [
+ "## Load data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18477cd8-aeb2-4fec-b9ea-74b73f7386f4",
+ "metadata": {},
+ "source": [
+ "Let's say, we have a transactional dataset tracking customers, payment amounts, payment dates, and whether payments have been fully processed for each transaction identifier. For more information about the sample dataset, please visit the documentation for [Mistral Function Calling](https://docs.mistral.ai/capabilities/function_calling/)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "40c642f3-7aaf-4c37-9071-36a19188e753",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def load_data()-> pd.DataFrame:\n",
+ " \"\"\"\n",
+ " Load data from a JSON file into a Pandas DataFrame.\n",
+ "\n",
+ " Returns:\n",
+ " pd.DataFrame: A Pandas DataFrame containing the data loaded from the JSON file.\n",
+ " If an error occurs during file loading, an error message is returned instead.\n",
+ "\n",
+ " \"\"\"\n",
+ " local_path = \"sample_data/transactions.json\"\n",
+ " \n",
+ " try:\n",
+ " df = pd.read_json(local_path)\n",
+ " return df\n",
+ " except (FileNotFoundError, ValueError) as e:\n",
+ " return f\"Error: {e}\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "01ad9e5a-5525-49a8-be42-1b8e7263aa69",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " transaction_id customer_id payment_amount payment_date payment_status\n",
+ "0 T1001 C001 125.50 2021-10-05 Paid\n",
+ "1 T1002 C002 89.99 2021-10-06 Unpaid\n",
+ "2 T1003 C003 120.00 2021-10-07 Paid\n",
+ "3 T1004 C002 54.30 2021-10-05 Paid\n",
+ "4 T1005 C001 210.20 2021-10-08 Pending\n"
+ ]
+ }
+ ],
+ "source": [
+ "df = load_data()\n",
+ "print(df)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "41cb7ad0-64ea-408f-a946-8a0601202838",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Function Calling"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3bc0e89-c9fe-4acd-8a9e-bc3fd7a5c828",
+ "metadata": {},
+ "source": [
+ "Function calling is the ability to reliably connect a large language model (LLM) to external tools and enable effective tool usage and interaction with external APIs. Mistral Large has the ability for building LLM powered chatbots or agents that need to retrieve context for the model or interact with external tools by converting natural language into API calls to retrieve specific domain knowledge. From conversational agents and math problem solving to API integration and information extraction, multiple use cases can benefit from this capability provided by Mistral Large."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c18150cc-1aac-4a7f-83f2-49d3c1a91abf",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2ac8ce00-28f4-4074-9c27-14c9d2263b28",
+ "metadata": {},
+ "source": [
+ "Function calling (Tool use) for Amazon Bedrock allow agents and large language models to interact with the world. In our example, we will define a tool list for a payment status (`get_payment_status`) and payment date (`get_payment_date`) lookup tool. Note in our example we're just returning payment information from a sample dataset to illustrate the concept, but you could make it fully functional by connecting any database or service API."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "8db00433-5194-40e5-a1cc-a05a7d5c2647",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class ToolsList:\n",
+ " # Define our payment status tool function...\n",
+ " def get_payment_status(self, transaction_id) -> str:\n",
+ " \"Get payment status of a transaction\"\n",
+ " data = load_data()\n",
+ " \n",
+ " try:\n",
+ " # Attempt to retrieve the payment status for the given transaction ID\n",
+ " status = data[data.transaction_id == transaction_id].payment_status.item()\n",
+ " except ValueError:\n",
+ " # If the transaction ID is not found, return an error message\n",
+ " return f\"ERROR: Transaction ID {transaction_id} not found.\"\n",
+ " \n",
+ " # Retrieve the payment status for the corresponding index\n",
+ " result = f'Payment date for transaction ID {transaction_id} is {status}.'\n",
+ " print(f'Tool result: {result}')\n",
+ " return result\n",
+ "\n",
+ " # Define our payment date tool function...\n",
+ " def get_payment_date(self, transaction_id) -> str:\n",
+ " \"Get payment date of a transaction\"\n",
+ " data = load_data()\n",
+ "\n",
+ " try:\n",
+ " # Attempt to retrieve the payment date for the given transaction ID\n",
+ " date = data[data.transaction_id == transaction_id].payment_date.item()\n",
+ " except ValueError:\n",
+ " # If the transaction ID is not found, return an error message\n",
+ " return f\"ERROR: Transaction ID {transaction_id} not found.\"\n",
+ "\n",
+ " result = f'Payment date for transaction ID {transaction_id} is {date}.'\n",
+ " print(f'Tool result: {result}')\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b779e6db-2322-4cdf-8f4b-2b97e532c380",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "### Tool configuration for Converse API"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2ae18c68-8ddd-4162-a036-f7dc496fca62",
+ "metadata": {},
+ "source": [
+ "In this step, we structure our tools configuration for passing this information to our Converse API. We have to clearly define the schema that our tools are expecting in the corresponding functions.\n",
+ "\n",
+ "**Note:** *With Converse API allows, we can define configurations to allow us either let the LLM choose automatically (auto) a tool, or overriding a fixed tool to be called always. To see more information, check the Bedrock Converse API documentation.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "4c5e6964-e502-451c-9be5-5e7c171a133d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the configuration for our tool...\n",
+ "\n",
+ "toolConfig = {\n",
+ " 'tools': [],\n",
+ " 'toolChoice': {\n",
+ " 'auto': {},\n",
+ " #'any': {},\n",
+ " #'tool': {\n",
+ " # 'name': 'get_payment_status'\n",
+ " #}\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "toolConfig['tools'].append({\n",
+ " 'toolSpec': {\n",
+ " 'name': 'get_payment_status',\n",
+ " 'description': 'Get payment status of a transaction',\n",
+ " 'inputSchema': {\n",
+ " 'json': {\n",
+ " 'type': 'object',\n",
+ " 'properties': {\n",
+ " 'transaction_id': {\n",
+ " 'type': 'string',\n",
+ " 'description': 'Transaction ID'\n",
+ " }\n",
+ " },\n",
+ " 'required': ['transaction_id']\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " })\n",
+ "\n",
+ "toolConfig['tools'].append({\n",
+ " 'toolSpec': {\n",
+ " 'name': 'get_payment_date',\n",
+ " 'description': 'Get payment date of a transaction',\n",
+ " 'inputSchema': {\n",
+ " 'json': {\n",
+ " 'type': 'object',\n",
+ " 'properties': {\n",
+ " 'transaction_id': {\n",
+ " 'type': 'string',\n",
+ " 'description': 'Transaction ID'\n",
+ " }\n",
+ " },\n",
+ " 'required': ['transaction_id']\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " })"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "eb858e61-aa93-4c4d-bfe6-94da5dc61194",
+ "metadata": {},
+ "source": [
+ "Let's define a handy function for calling Bedrock with the Converse API."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "e61adf1b-f2a9-4e0c-a3a0-c8862e113b68",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Function for caling the Bedrock Converse API...\n",
+ "\n",
+ "def converse_with_tools(messages, system='', toolConfig=toolConfig):\n",
+ " response = bedrock.converse(\n",
+ " modelId=model_id,\n",
+ " system=system,\n",
+ " messages=messages,\n",
+ " toolConfig=toolConfig\n",
+ " )\n",
+ " return response"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "79da5ab1-0083-4702-89d8-b202199e90c2",
+ "metadata": {},
+ "source": [
+ "We're now ready for setting up our orchestration flow. In this case, we'll make a first call to the LLM with the initial prompt from the user, and depending on the answer from the LLM we'll either call a tool (function calling) or end the interaction.\n",
+ "\n",
+ "Note that in the case the LLM indicates that it wants to run a tool (function calling), it will give us the information required of tool name and arguments for us to run the relevant tool in our code; i.e. The LLMs cannot run the tools automatically."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "314786dc-3726-4b7c-a872-5a8a5c3a35f7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Function for orchestrating the conversation flow...\n",
+ "\n",
+ "def converse(prompt, system=''):\n",
+ " # Add the initial prompt:\n",
+ " messages = []\n",
+ " messages.append(\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": prompt\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ " )\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Initial prompt:\\n{json.dumps(messages, indent=2)}\")\n",
+ "\n",
+ " # Invoke the model the first time:\n",
+ " output = converse_with_tools(messages, system)\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Output so far:\\n{json.dumps(output['output'], indent=2, ensure_ascii=False)}\")\n",
+ "\n",
+ " # Add the intermediate output to the prompt:\n",
+ " messages.append(output['output']['message'])\n",
+ "\n",
+ " function_calling = next((c['toolUse'] for c in output['output']['message']['content'] if 'toolUse' in c), None)\n",
+ "\n",
+ " # Check if function calling is triggered:\n",
+ " if function_calling:\n",
+ " # Get the tool name and arguments:\n",
+ " tool_name = function_calling['name']\n",
+ " tool_args = function_calling['input'] or {}\n",
+ " \n",
+ " # Run the tool:\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Running ({tool_name}) tool...\")\n",
+ " tool_response = getattr(ToolsList(), tool_name)(**tool_args) or \"\"\n",
+ " if tool_response:\n",
+ " tool_status = 'success'\n",
+ " else:\n",
+ " tool_status = 'error'\n",
+ "\n",
+ " # Add the tool result to the prompt:\n",
+ " messages.append(\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " 'toolResult': {\n",
+ " 'toolUseId': function_calling['toolUseId'],\n",
+ " 'content': [\n",
+ " {\n",
+ " \"text\": tool_response\n",
+ " }\n",
+ " ],\n",
+ " 'status': tool_status\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ " )\n",
+ " \n",
+ " # Invoke the model one more time:\n",
+ " output = converse_with_tools(messages, system)\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Final output:\\n{json.dumps(output['output'], indent=2, ensure_ascii=False)}\\n\")\n",
+ " return"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ecc32f77-ed70-452d-898e-e2afa3c23322",
+ "metadata": {},
+ "source": [
+ "Now, we have everything setup and are ready for testing our function-calling bot.\n",
+ "\n",
+ "Let's try with a few sample prompts, each one with different needs."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "e53c78eb-f5cb-44d2-8fdf-b979b717667e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "14:35:16 - Initial prompt:\n",
+ "[\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"What's the status of my transaction ID T1001?\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "14:35:17 - Output so far:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"toolUse\": {\n",
+ " \"toolUseId\": \"tooluse_4K3SZj4ASui1wNtfpTuRpg\",\n",
+ " \"name\": \"get_payment_status\",\n",
+ " \"input\": {\n",
+ " \"transaction_id\": \"T1001\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "14:35:17 - Running (get_payment_status) tool...\n",
+ "Tool result: Payment date for transaction ID T1001 is Paid.\n",
+ "\n",
+ "14:35:18 - Final output:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"Your transaction T1001 has been paid.\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "\n",
+ "14:35:18 - Initial prompt:\n",
+ "[\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"On what date was my transaction ID T1001 made?\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "14:35:19 - Output so far:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"toolUse\": {\n",
+ " \"toolUseId\": \"tooluse_oY5yF_66Sp6S4npwzeaSRw\",\n",
+ " \"name\": \"get_payment_date\",\n",
+ " \"input\": {\n",
+ " \"transaction_id\": \"T1001\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "14:35:19 - Running (get_payment_date) tool...\n",
+ "Tool result: Payment date for transaction ID T1001 is 2021-10-05.\n",
+ "\n",
+ "14:35:20 - Final output:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"Your transaction with the ID T1001 was made on October 5th, 2021.\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "prompts = [\n",
+ " \"What's the status of my transaction ID T1001?\",\n",
+ " \"On what date was my transaction ID T1001 made?\"\n",
+ "]\n",
+ "\n",
+ "for prompt in prompts:\n",
+ " converse(\n",
+ " system=[\n",
+ " {\n",
+ " \"text\": \"\"\"You're provided with tools that can get the payment status and payment date for a given transaction ID; \\\n",
+ " only use the tool if required. Don't make reference to the tools in your final answer.\"\"\"\n",
+ " }\n",
+ " ],\n",
+ " prompt=prompt\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d347b97d-618b-40c8-9087-9b6481d714f7",
+ "metadata": {},
+ "source": [
+ "As we can see, the LLM decides whether or not to call the `get_payment_status` or `get_payment_date` tool depending on the question."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b458ce6a-216a-478d-968b-862fcfc8fe18",
+ "metadata": {},
+ "source": [
+ "## Distributors\n",
+ "- Amazon Web Services\n",
+ "- Mistral AI\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1f5a51e5-3c22-496d-9110-bbc7f07e9dd8",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.14"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From 84baba4b7238db47298604b2b6475c0765627008 Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Fri, 14 Jun 2024 09:33:09 -0400
Subject: [PATCH 2/7] fix docs: converse stream
---
notebooks/Bedrock_Mistral_native_function_calling.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/notebooks/Bedrock_Mistral_native_function_calling.ipynb b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
index 733b520..3a2ce34 100644
--- a/notebooks/Bedrock_Mistral_native_function_calling.ipynb
+++ b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
@@ -21,7 +21,7 @@
"id": "3c58ce86-3cb5-4a4b-9735-dc4684db731f",
"metadata": {},
"source": [
- "The Converse or ConverseStream API is a unified structured text action for simplifying the invocations to Bedrock LLMs. It includes the possibility to define tools for implementing external functions that can be called or triggered from the LLMs."
+ "The Converse API is a unified structured text action for simplifying the invocations to Bedrock LLMs. It includes the possibility to define tools for implementing external functions that can be called or triggered from the LLMs."
]
},
{
From b7bd85953cfe3bd9c1e7bcaac85b22379bacb6af Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Fri, 14 Jun 2024 09:37:55 -0400
Subject: [PATCH 3/7] fix docs: change title
---
notebooks/Bedrock_Mistral_native_function_calling.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/notebooks/Bedrock_Mistral_native_function_calling.ipynb b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
index 3a2ce34..6fb7b85 100644
--- a/notebooks/Bedrock_Mistral_native_function_calling.ipynb
+++ b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
@@ -5,7 +5,7 @@
"id": "e3eed142-6144-4ba2-a693-2bcfdeeae823",
"metadata": {},
"source": [
- "# Amazon Bedrock Function Calling (Tool Use) with Converse API using Mistral Large"
+ "# An agentic implementation of multiple tools for function callings with Converse API using Mistral Large"
]
},
{
From 7c8b3d7f95f9fffb21c236a0e8403fa5ba170f09 Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Fri, 14 Jun 2024 09:38:29 -0400
Subject: [PATCH 4/7] fix docs: change title
---
notebooks/Bedrock_Mistral_native_function_calling.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/notebooks/Bedrock_Mistral_native_function_calling.ipynb b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
index 6fb7b85..e981bec 100644
--- a/notebooks/Bedrock_Mistral_native_function_calling.ipynb
+++ b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
@@ -5,7 +5,7 @@
"id": "e3eed142-6144-4ba2-a693-2bcfdeeae823",
"metadata": {},
"source": [
- "# An agentic implementation of multiple tools for function callings with Converse API using Mistral Large"
+ "# An agentic implementation of multiple tools for function calling with Converse API using Mistral Large"
]
},
{
From 9ea3754fbaa003117bbe3e5eea1d4fa71d2bd4fe Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Fri, 14 Jun 2024 12:35:13 -0400
Subject: [PATCH 5/7] change region
---
...rock_Mistral_native_function_calling.ipynb | 116 ++++++++++++------
1 file changed, 76 insertions(+), 40 deletions(-)
diff --git a/notebooks/Bedrock_Mistral_native_function_calling.ipynb b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
index e981bec..4787715 100644
--- a/notebooks/Bedrock_Mistral_native_function_calling.ipynb
+++ b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
@@ -5,7 +5,7 @@
"id": "e3eed142-6144-4ba2-a693-2bcfdeeae823",
"metadata": {},
"source": [
- "# An agentic implementation of multiple tools for function calling with Converse API using Mistral Large"
+ "# An agentic implementation of multiple tools for function callings with Converse API using Mistral Large"
]
},
{
@@ -164,12 +164,20 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 2,
"id": "60fde7eb-a354-4934-9126-b793080328c1",
"metadata": {
"tags": []
},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Writing requirements.txt\n"
+ ]
+ }
+ ],
"source": [
"%%writefile requirements.txt\n",
"boto3==1.34.125\n",
@@ -186,19 +194,32 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
"id": "c54cfea5-782f-49c1-8885-c8fc8955e09e",
"metadata": {
"tags": []
},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
+ "autovizwidget 0.21.0 requires pandas<2.0.0,>=0.20.1, but you have pandas 2.2.2 which is incompatible.\n",
+ "awscli 1.32.101 requires botocore==1.34.101, but you have botocore 1.34.126 which is incompatible.\n",
+ "hdijupyterutils 0.21.0 requires pandas<2.0.0,>=0.17.1, but you have pandas 2.2.2 which is incompatible.\n",
+ "sparkmagic 0.21.0 requires pandas<2.0.0,>=0.17.1, but you have pandas 2.2.2 which is incompatible.\u001b[0m\u001b[31m\n",
+ "\u001b[0m"
+ ]
+ }
+ ],
"source": [
"!pip install -U -r requirements.txt --quiet"
]
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 1,
"id": "6d59d834-3f3c-46e5-8cb8-56def569a934",
"metadata": {
"tags": []
@@ -208,8 +229,7 @@
"import boto3\n",
"from datetime import datetime\n",
"import json\n",
- "import pandas as pd\n",
- "import re"
+ "import pandas as pd"
]
},
{
@@ -222,16 +242,18 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 2,
"id": "42c5827e-45c8-4445-b01e-a611a7dfcd46",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using modelId: mistral.mistral-large-2402-v1:0\n",
- "Using region: us-east-1\n"
+ "Using region: us-west-2\n"
]
}
],
@@ -239,7 +261,7 @@
"model_id = 'mistral.mistral-large-2402-v1:0'\n",
"print(f'Using modelId: {model_id}')\n",
"\n",
- "region = 'us-east-1'\n",
+ "region = 'us-west-2'\n",
"print('Using region: ', region)\n",
"\n",
"bedrock = boto3.client(\n",
@@ -266,9 +288,11 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 3,
"id": "40c642f3-7aaf-4c37-9071-36a19188e753",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"def load_data()-> pd.DataFrame:\n",
@@ -291,9 +315,11 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 4,
"id": "01ad9e5a-5525-49a8-be42-1b8e7263aa69",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
"name": "stdout",
@@ -349,9 +375,11 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 5,
"id": "8db00433-5194-40e5-a1cc-a05a7d5c2647",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"class ToolsList:\n",
@@ -410,9 +438,11 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 6,
"id": "4c5e6964-e502-451c-9be5-5e7c171a133d",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# Define the configuration for our tool...\n",
@@ -477,9 +507,11 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 7,
"id": "e61adf1b-f2a9-4e0c-a3a0-c8862e113b68",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# Function for caling the Bedrock Converse API...\n",
@@ -506,9 +538,11 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": 8,
"id": "314786dc-3726-4b7c-a872-5a8a5c3a35f7",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [],
"source": [
"# Function for orchestrating the conversation flow...\n",
@@ -589,16 +623,18 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 9,
"id": "e53c78eb-f5cb-44d2-8fdf-b979b717667e",
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
- "14:35:16 - Initial prompt:\n",
+ "16:33:48 - Initial prompt:\n",
"[\n",
" {\n",
" \"role\": \"user\",\n",
@@ -610,14 +646,14 @@
" }\n",
"]\n",
"\n",
- "14:35:17 - Output so far:\n",
+ "16:33:50 - Output so far:\n",
"{\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": [\n",
" {\n",
" \"toolUse\": {\n",
- " \"toolUseId\": \"tooluse_4K3SZj4ASui1wNtfpTuRpg\",\n",
+ " \"toolUseId\": \"tooluse_Ha-S_2c3TTeRv5Hb2xWwtg\",\n",
" \"name\": \"get_payment_status\",\n",
" \"input\": {\n",
" \"transaction_id\": \"T1001\"\n",
@@ -628,23 +664,23 @@
" }\n",
"}\n",
"\n",
- "14:35:17 - Running (get_payment_status) tool...\n",
+ "16:33:50 - Running (get_payment_status) tool...\n",
"Tool result: Payment date for transaction ID T1001 is Paid.\n",
"\n",
- "14:35:18 - Final output:\n",
+ "16:33:50 - Final output:\n",
"{\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": [\n",
" {\n",
- " \"text\": \"Your transaction T1001 has been paid.\"\n",
+ " \"text\": \"The status of your transaction ID T1001 is Paid.\"\n",
" }\n",
" ]\n",
" }\n",
"}\n",
"\n",
"\n",
- "14:35:18 - Initial prompt:\n",
+ "16:33:50 - Initial prompt:\n",
"[\n",
" {\n",
" \"role\": \"user\",\n",
@@ -656,14 +692,14 @@
" }\n",
"]\n",
"\n",
- "14:35:19 - Output so far:\n",
+ "16:33:51 - Output so far:\n",
"{\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": [\n",
" {\n",
" \"toolUse\": {\n",
- " \"toolUseId\": \"tooluse_oY5yF_66Sp6S4npwzeaSRw\",\n",
+ " \"toolUseId\": \"tooluse_yD1YpWKkQemhzZQBhvsc6w\",\n",
" \"name\": \"get_payment_date\",\n",
" \"input\": {\n",
" \"transaction_id\": \"T1001\"\n",
@@ -674,16 +710,16 @@
" }\n",
"}\n",
"\n",
- "14:35:19 - Running (get_payment_date) tool...\n",
+ "16:33:51 - Running (get_payment_date) tool...\n",
"Tool result: Payment date for transaction ID T1001 is 2021-10-05.\n",
"\n",
- "14:35:20 - Final output:\n",
+ "16:33:52 - Final output:\n",
"{\n",
" \"message\": {\n",
" \"role\": \"assistant\",\n",
" \"content\": [\n",
" {\n",
- " \"text\": \"Your transaction with the ID T1001 was made on October 5th, 2021.\"\n",
+ " \"text\": \"Your transaction with ID T1001 was made on 2021-10-05.\"\n",
" }\n",
" ]\n",
" }\n",
@@ -741,9 +777,9 @@
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3 (ipykernel)",
+ "display_name": "conda_python3",
"language": "python",
- "name": "python3"
+ "name": "conda_python3"
},
"language_info": {
"codemirror_mode": {
From 2c53ea1d3b43b2228fa0e557367c641be73cbbd8 Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Fri, 14 Jun 2024 13:12:16 -0400
Subject: [PATCH 6/7] Rename file
---
notebooks/Agentic_workflow_with_Mistral.ipynb | 786 ++++++++++++++++++
1 file changed, 786 insertions(+)
create mode 100644 notebooks/Agentic_workflow_with_Mistral.ipynb
diff --git a/notebooks/Agentic_workflow_with_Mistral.ipynb b/notebooks/Agentic_workflow_with_Mistral.ipynb
new file mode 100644
index 0000000..e7a264a
--- /dev/null
+++ b/notebooks/Agentic_workflow_with_Mistral.ipynb
@@ -0,0 +1,786 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "e3eed142-6144-4ba2-a693-2bcfdeeae823",
+ "metadata": {},
+ "source": [
+ "# An agentic implementation of multiple tools for function calling with Converse API using Mistral Large"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4c7a66db-4d22-4e0f-883c-8e8daf6a4291",
+ "metadata": {},
+ "source": [
+ "In this Jupyter Notebook, we walkthrough an implementation of native function calling and agentic workflows with with the Converse API for Amazon Bedrock and Mistral Large. Function Calling is a powerful technique that allows large language models to connect to external tools, systems, or APIs to enable, which can be executed to perform actions based on user's input."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3c58ce86-3cb5-4a4b-9735-dc4684db731f",
+ "metadata": {},
+ "source": [
+ "The Converse API is a unified structured text action for simplifying the invocations to Bedrock LLMs. It includes the possibility to define tools for implementing external functions that can be called or triggered from the LLMs."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9a5fcd65-01b7-418d-988e-f80d05ebb4a5",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Mistral Model Selection\n",
+ "\n",
+ "Today, Mistral Large supports native function calling with Converse API for Amazon Bedrock:\n",
+ "\n",
+ "### 3. Mistral Large\n",
+ "\n",
+ "- **Description:** A cutting-edge text generation model with top-tier reasoning capabilities. It can be used for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.\n",
+ "- **Max Tokens:** 8,196\n",
+ "- **Context Window:** 32K\n",
+ "- **Languages:** English, French, German, Spanish, Italian\n",
+ "- **Supported Use Cases:** Synthetic Text Generation, Code Generation, RAG, or Agents\n",
+ "\n",
+ "### Performance and Cost Trade-offs\n",
+ "\n",
+ "The table below compares the model performance on the Massive Multitask Language Understanding (MMLU) benchmark and their on-demand pricing on Amazon Bedrock.\n",
+ "\n",
+ "| Model | MMLU Score | Price per 1,000 Input Tokens | Price per 1,000 Output Tokens |\n",
+ "|-----------------|------------|------------------------------|-------------------------------|\n",
+ "| Mistral Large | 81.2% | \\$0.008 | \\$0.024 |\n",
+ "\n",
+ "For more information, refer to the following links:\n",
+ "\n",
+ "1. [Mistral Model Selection Guide](https://docs.mistral.ai/guides/model-selection/)\n",
+ "2. [Amazon Bedrock Pricing Page](https://aws.amazon.com/bedrock/pricing/)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f3978ae9-57d0-4dca-b94b-b6493f8eade0",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Supported papameters\n",
+ "\n",
+ "The Mistral AI models have the following inference parameters.\n",
+ "\n",
+ "\n",
+ "```\n",
+ "{\n",
+ " \"prompt\": string,\n",
+ " \"max_tokens\" : int,\n",
+ " \"stop\" : [string], \n",
+ " \"temperature\": float,\n",
+ " \"top_p\": float,\n",
+ " \"top_k\": int\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "The Mistral AI models have the following inference parameters:\n",
+ "\n",
+ "- **Temperature** - Tunes the degree of randomness in generation. Lower temperatures mean less random generations.\n",
+ "- **Top P** - If set to float less than 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.\n",
+ "- **Top K** - Can be used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.\n",
+ "- **Maximum Length** - Maximum number of tokens to generate. Responses are not guaranteed to fill up to the maximum desired length.\n",
+ "- **Stop sequences** - Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "95db24c2-384a-4d1e-b6b6-a832ac002500",
+ "metadata": {},
+ "source": [
+ "### Local Setup (Optional)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "11710041-4077-4700-a47a-2459d1be4fb9",
+ "metadata": {},
+ "source": [
+ "For a local server, follow these steps to execute this jupyter notebook:\n",
+ "\n",
+ "1. **Configure AWS CLI**: Configure [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with your AWS credentials. Run `aws configure` and enter your AWS Access Key ID, AWS Secret Access Key, AWS Region, and default output format.\n",
+ "\n",
+ "2. **Install required libraries**: Install the necessary Python libraries for working with SageMaker, such as [sagemaker](https://github.com/aws/sagemaker-python-sdk/), [boto3](https://github.com/boto/boto3), and others. You can use a Python environment manager like [conda](https://docs.conda.io/en/latest/) or [virtualenv](https://virtualenv.pypa.io/en/latest/) to manage your Python packages in your preferred IDE (e.g. [Visual Studio Code](https://code.visualstudio.com/)).\n",
+ "\n",
+ "3. **Create an IAM role for SageMaker**: Create an AWS Identity and Access Management (IAM) role that grants your user [SageMaker permissions](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). \n",
+ "\n",
+ "By following these steps, you can set up a local Jupyter Notebook environment capable of deploying machine learning models on Amazon SageMaker using the appropriate IAM role for granting the necessary permissions."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7c22cf2e-971a-4df9-9e27-1cd7a05d8307",
+ "metadata": {},
+ "source": [
+ "## Setup and Requirements"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1ba91a5b-4ceb-4e60-9f89-b31a4e446c00",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "1. Create an Amazon SageMaker Notebook Instance - [Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-setup-working-env.html)\n",
+ " - For Notebook Instance type, choose ml.t3.medium.\n",
+ "2. For Select Kernel, choose [conda_python3](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-prepare.html).\n",
+ "3. Install the required packages."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5b7ef5d8-904d-4cce-8522-e4b21279ceb1",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "
NOTE:\n",
+ "\n",
+ "- For
Amazon SageMaker Studio, select Kernel \"
Python 3 (ipykernel)\".\n",
+ "\n",
+ "- For
Amazon SageMaker Studio Classic, select Image \"
Base Python 3.0\" and Kernel \"
Python 3\".\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1262e8ec-674a-44c2-be1f-0a92df6b7ca6",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "Before we start building the agentic workflow, we'll first install some libraries:\n",
+ "\n",
+ "+ AWS Python SDKs [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) to be able to submit API calls to [Amazon Bedrock](https://aws.amazon.com/bedrock/).\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "60fde7eb-a354-4934-9126-b793080328c1",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Overwriting requirements.txt\n"
+ ]
+ }
+ ],
+ "source": [
+ "%%writefile requirements.txt\n",
+ "boto3==1.34.125\n",
+ "pandas==2.2.2"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "daf00ca0-9a89-4a3e-ac25-5f172dac43d5",
+ "metadata": {},
+ "source": [
+ "Install required packages:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "c54cfea5-782f-49c1-8885-c8fc8955e09e",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "!pip install -qU -r requirements.txt --quiet"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "6d59d834-3f3c-46e5-8cb8-56def569a934",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "import boto3\n",
+ "from datetime import datetime\n",
+ "import json\n",
+ "import pandas as pd"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a09522ec-1bb8-4427-a498-60598c1078ee",
+ "metadata": {},
+ "source": [
+ "Let's define a few variables and create a bedrock client."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "42c5827e-45c8-4445-b01e-a611a7dfcd46",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Using modelId: mistral.mistral-large-2402-v1:0\n",
+ "Using region: us-west-2\n"
+ ]
+ }
+ ],
+ "source": [
+ "model_id = 'mistral.mistral-large-2402-v1:0'\n",
+ "print(f'Using modelId: {model_id}')\n",
+ "\n",
+ "region = 'us-west-2'\n",
+ "print('Using region: ', region)\n",
+ "\n",
+ "bedrock = boto3.client(\n",
+ " service_name='bedrock-runtime',\n",
+ " region_name=region\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7cc107b3-7dfa-494a-9269-6229b6dccb24",
+ "metadata": {},
+ "source": [
+ "## Load data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18477cd8-aeb2-4fec-b9ea-74b73f7386f4",
+ "metadata": {},
+ "source": [
+ "Let's say, we have a transactional dataset tracking customers, payment amounts, payment dates, and whether payments have been fully processed for each transaction identifier. For more information about the sample dataset, please visit the documentation for [Mistral Function Calling](https://docs.mistral.ai/capabilities/function_calling/)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "40c642f3-7aaf-4c37-9071-36a19188e753",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "def load_data()-> pd.DataFrame:\n",
+ " \"\"\"\n",
+ " Load data from a JSON file into a Pandas DataFrame.\n",
+ "\n",
+ " Returns:\n",
+ " pd.DataFrame: A Pandas DataFrame containing the data loaded from the JSON file.\n",
+ " If an error occurs during file loading, an error message is returned instead.\n",
+ "\n",
+ " \"\"\"\n",
+ " local_path = \"sample_data/transactions.json\"\n",
+ " \n",
+ " try:\n",
+ " df = pd.read_json(local_path)\n",
+ " return df\n",
+ " except (FileNotFoundError, ValueError) as e:\n",
+ " return f\"Error: {e}\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "01ad9e5a-5525-49a8-be42-1b8e7263aa69",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " transaction_id customer_id payment_amount payment_date payment_status\n",
+ "0 T1001 C001 125.50 2021-10-05 Paid\n",
+ "1 T1002 C002 89.99 2021-10-06 Unpaid\n",
+ "2 T1003 C003 120.00 2021-10-07 Paid\n",
+ "3 T1004 C002 54.30 2021-10-05 Paid\n",
+ "4 T1005 C001 210.20 2021-10-08 Pending\n"
+ ]
+ }
+ ],
+ "source": [
+ "df = load_data()\n",
+ "print(df)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "41cb7ad0-64ea-408f-a946-8a0601202838",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Function Calling"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3bc0e89-c9fe-4acd-8a9e-bc3fd7a5c828",
+ "metadata": {},
+ "source": [
+ "Function calling is the ability to reliably connect a large language model (LLM) to external tools and enable effective tool usage and interaction with external APIs. Mistral Large has the ability for building LLM powered chatbots or agents that need to retrieve context for the model or interact with external tools by converting natural language into API calls to retrieve specific domain knowledge. From conversational agents and math problem solving to API integration and information extraction, multiple use cases can benefit from this capability provided by Mistral Large."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c18150cc-1aac-4a7f-83f2-49d3c1a91abf",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "## Tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2ac8ce00-28f4-4074-9c27-14c9d2263b28",
+ "metadata": {},
+ "source": [
+ "Function calling (Tool use) for Amazon Bedrock allow agents and large language models to interact with the world. In our example, we will define a tool list for a payment status (`get_payment_status`) and payment date (`get_payment_date`) lookup tool. Note in our example we're just returning payment information from a sample dataset to illustrate the concept, but you could make it fully functional by connecting any database or service API."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "8db00433-5194-40e5-a1cc-a05a7d5c2647",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "class ToolsList:\n",
+ " # Define our payment status tool function...\n",
+ " def get_payment_status(self, transaction_id) -> str:\n",
+ " \"Get payment status of a transaction\"\n",
+ " data = load_data()\n",
+ " \n",
+ " try:\n",
+ " # Attempt to retrieve the payment status for the given transaction ID\n",
+ " status = data[data.transaction_id == transaction_id].payment_status.item()\n",
+ " except ValueError:\n",
+ " # If the transaction ID is not found, return an error message\n",
+ " return f\"ERROR: Transaction ID {transaction_id} not found.\"\n",
+ " \n",
+ " # Retrieve the payment status for the corresponding index\n",
+ " result = f'Payment date for transaction ID {transaction_id} is {status}.'\n",
+ " print(f'Tool result: {result}')\n",
+ " return result\n",
+ "\n",
+ " # Define our payment date tool function...\n",
+ " def get_payment_date(self, transaction_id) -> str:\n",
+ " \"Get payment date of a transaction\"\n",
+ " data = load_data()\n",
+ "\n",
+ " try:\n",
+ " # Attempt to retrieve the payment date for the given transaction ID\n",
+ " date = data[data.transaction_id == transaction_id].payment_date.item()\n",
+ " except ValueError:\n",
+ " # If the transaction ID is not found, return an error message\n",
+ " return f\"ERROR: Transaction ID {transaction_id} not found.\"\n",
+ "\n",
+ " result = f'Payment date for transaction ID {transaction_id} is {date}.'\n",
+ " print(f'Tool result: {result}')\n",
+ " return result"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b779e6db-2322-4cdf-8f4b-2b97e532c380",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "### Tool configuration for Converse API"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2ae18c68-8ddd-4162-a036-f7dc496fca62",
+ "metadata": {},
+ "source": [
+ "In this step, we structure our tools configuration for passing this information to our Converse API. We have to clearly define the schema that our tools are expecting in the corresponding functions.\n",
+ "\n",
+ "**Note:** *With Converse API allows, we can define configurations to allow us either let the LLM choose automatically (auto) a tool, or overriding a fixed tool to be called always. To see more information, check the Bedrock Converse API documentation.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "4c5e6964-e502-451c-9be5-5e7c171a133d",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Define the configuration for our tool...\n",
+ "\n",
+ "toolConfig = {\n",
+ " 'tools': [],\n",
+ " 'toolChoice': {\n",
+ " 'auto': {},\n",
+ " #'any': {},\n",
+ " #'tool': {\n",
+ " # 'name': 'get_payment_status'\n",
+ " #}\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "toolConfig['tools'].append({\n",
+ " 'toolSpec': {\n",
+ " 'name': 'get_payment_status',\n",
+ " 'description': 'Get payment status of a transaction',\n",
+ " 'inputSchema': {\n",
+ " 'json': {\n",
+ " 'type': 'object',\n",
+ " 'properties': {\n",
+ " 'transaction_id': {\n",
+ " 'type': 'string',\n",
+ " 'description': 'Transaction ID'\n",
+ " }\n",
+ " },\n",
+ " 'required': ['transaction_id']\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " })\n",
+ "\n",
+ "toolConfig['tools'].append({\n",
+ " 'toolSpec': {\n",
+ " 'name': 'get_payment_date',\n",
+ " 'description': 'Get payment date of a transaction',\n",
+ " 'inputSchema': {\n",
+ " 'json': {\n",
+ " 'type': 'object',\n",
+ " 'properties': {\n",
+ " 'transaction_id': {\n",
+ " 'type': 'string',\n",
+ " 'description': 'Transaction ID'\n",
+ " }\n",
+ " },\n",
+ " 'required': ['transaction_id']\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " })"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "eb858e61-aa93-4c4d-bfe6-94da5dc61194",
+ "metadata": {},
+ "source": [
+ "Let's define a handy function for calling Bedrock with the Converse API."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "e61adf1b-f2a9-4e0c-a3a0-c8862e113b68",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Function for caling the Bedrock Converse API...\n",
+ "\n",
+ "def converse_with_tools(messages, system='', toolConfig=toolConfig):\n",
+ " response = bedrock.converse(\n",
+ " modelId=model_id,\n",
+ " system=system,\n",
+ " messages=messages,\n",
+ " toolConfig=toolConfig\n",
+ " )\n",
+ " return response"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "79da5ab1-0083-4702-89d8-b202199e90c2",
+ "metadata": {},
+ "source": [
+ "We're now ready for setting up our orchestration flow. In this case, we'll make a first call to the LLM with the initial prompt from the user, and depending on the answer from the LLM we'll either call a tool (function calling) or end the interaction.\n",
+ "\n",
+ "Note that in the case the LLM indicates that it wants to run a tool (function calling), it will give us the information required of tool name and arguments for us to run the relevant tool in our code; i.e. The LLMs cannot run the tools automatically."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "314786dc-3726-4b7c-a872-5a8a5c3a35f7",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Function for orchestrating the conversation flow...\n",
+ "\n",
+ "def converse(prompt, system=''):\n",
+ " # Add the initial prompt:\n",
+ " messages = []\n",
+ " messages.append(\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": prompt\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ " )\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Initial prompt:\\n{json.dumps(messages, indent=2)}\")\n",
+ "\n",
+ " # Invoke the model the first time:\n",
+ " output = converse_with_tools(messages, system)\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Output so far:\\n{json.dumps(output['output'], indent=2, ensure_ascii=False)}\")\n",
+ "\n",
+ " # Add the intermediate output to the prompt:\n",
+ " messages.append(output['output']['message'])\n",
+ "\n",
+ " function_calling = next((c['toolUse'] for c in output['output']['message']['content'] if 'toolUse' in c), None)\n",
+ "\n",
+ " # Check if function calling is triggered:\n",
+ " if function_calling:\n",
+ " # Get the tool name and arguments:\n",
+ " tool_name = function_calling['name']\n",
+ " tool_args = function_calling['input'] or {}\n",
+ " \n",
+ " # Run the tool:\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Running ({tool_name}) tool...\")\n",
+ " tool_response = getattr(ToolsList(), tool_name)(**tool_args) or \"\"\n",
+ " if tool_response:\n",
+ " tool_status = 'success'\n",
+ " else:\n",
+ " tool_status = 'error'\n",
+ "\n",
+ " # Add the tool result to the prompt:\n",
+ " messages.append(\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " 'toolResult': {\n",
+ " 'toolUseId': function_calling['toolUseId'],\n",
+ " 'content': [\n",
+ " {\n",
+ " \"text\": tool_response\n",
+ " }\n",
+ " ],\n",
+ " 'status': tool_status\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ " )\n",
+ " \n",
+ " # Invoke the model one more time:\n",
+ " output = converse_with_tools(messages, system)\n",
+ " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Final output:\\n{json.dumps(output['output'], indent=2, ensure_ascii=False)}\\n\")\n",
+ " return"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ecc32f77-ed70-452d-898e-e2afa3c23322",
+ "metadata": {},
+ "source": [
+ "Now, we have everything setup and are ready for testing our function-calling bot.\n",
+ "\n",
+ "Let's try with a few sample prompts, each one with different needs."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "e53c78eb-f5cb-44d2-8fdf-b979b717667e",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "16:33:48 - Initial prompt:\n",
+ "[\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"What's the status of my transaction ID T1001?\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "16:33:50 - Output so far:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"toolUse\": {\n",
+ " \"toolUseId\": \"tooluse_Ha-S_2c3TTeRv5Hb2xWwtg\",\n",
+ " \"name\": \"get_payment_status\",\n",
+ " \"input\": {\n",
+ " \"transaction_id\": \"T1001\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "16:33:50 - Running (get_payment_status) tool...\n",
+ "Tool result: Payment date for transaction ID T1001 is Paid.\n",
+ "\n",
+ "16:33:50 - Final output:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"The status of your transaction ID T1001 is Paid.\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "\n",
+ "16:33:50 - Initial prompt:\n",
+ "[\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"On what date was my transaction ID T1001 made?\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "]\n",
+ "\n",
+ "16:33:51 - Output so far:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"toolUse\": {\n",
+ " \"toolUseId\": \"tooluse_yD1YpWKkQemhzZQBhvsc6w\",\n",
+ " \"name\": \"get_payment_date\",\n",
+ " \"input\": {\n",
+ " \"transaction_id\": \"T1001\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "16:33:51 - Running (get_payment_date) tool...\n",
+ "Tool result: Payment date for transaction ID T1001 is 2021-10-05.\n",
+ "\n",
+ "16:33:52 - Final output:\n",
+ "{\n",
+ " \"message\": {\n",
+ " \"role\": \"assistant\",\n",
+ " \"content\": [\n",
+ " {\n",
+ " \"text\": \"Your transaction with ID T1001 was made on 2021-10-05.\"\n",
+ " }\n",
+ " ]\n",
+ " }\n",
+ "}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "prompts = [\n",
+ " \"What's the status of my transaction ID T1001?\",\n",
+ " \"On what date was my transaction ID T1001 made?\"\n",
+ "]\n",
+ "\n",
+ "for prompt in prompts:\n",
+ " converse(\n",
+ " system=[\n",
+ " {\n",
+ " \"text\": \"\"\"You're provided with tools that can get the payment status and payment date for a given transaction ID; \\\n",
+ " only use the tool if required. Don't make reference to the tools in your final answer.\"\"\"\n",
+ " }\n",
+ " ],\n",
+ " prompt=prompt\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d347b97d-618b-40c8-9087-9b6481d714f7",
+ "metadata": {},
+ "source": [
+ "As we can see, the LLM decides whether or not to call the `get_payment_status` or `get_payment_date` tool depending on the question."
+ ]
+ },
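+ {
+ "cell_type": "markdown",
+ "id": "2c4e6a8b-1d3f-4e5a-9b7c-0d2e4f6a8b1c",
+ "metadata": {},
+ "source": [
+ "The error path can be exercised the same way: when a tool cannot find the requested transaction, it returns an `ERROR:` message in the `toolResult` text, which the model can use to explain the failure. A hypothetical probe (not executed in this notebook) could look like the following."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5a7b9c1d-3e5f-4a6b-8c0d-2e4f6a8b0c1d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hypothetical probe (not executed): T9999 is not in the sample dataset, so\n",
+ "# the tool returns an 'ERROR: ... not found.' message that the model can relay.\n",
+ "converse(\n",
+ "    system=[\n",
+ "        {\n",
+ "            \"text\": \"You're provided with tools that can get the payment status and payment date for a given transaction ID; only use the tool if required.\"\n",
+ "        }\n",
+ "    ],\n",
+ "    prompt=\"What's the status of my transaction ID T9999?\"\n",
+ ")"
+ ]
+ },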
+ {
+ "cell_type": "markdown",
+ "id": "b458ce6a-216a-478d-968b-862fcfc8fe18",
+ "metadata": {},
+ "source": [
+ "## Distributors\n",
+ "- Amazon Web Services\n",
+ "- Mistral AI\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1f5a51e5-3c22-496d-9110-bbc7f07e9dd8",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "conda_python3",
+ "language": "python",
+ "name": "conda_python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.14"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From b643ee30f64378134db742138ffafa870fe09a60 Mon Sep 17 00:00:00 2001
From: Armando Daniel Diaz Gonzalez
<61255126+arm-diaz@users.noreply.github.com>
Date: Fri, 14 Jun 2024 13:12:40 -0400
Subject: [PATCH 7/7] Delete
notebooks/Bedrock_Mistral_native_function_calling.ipynb
---
...rock_Mistral_native_function_calling.ipynb | 799 ------------------
1 file changed, 799 deletions(-)
delete mode 100644 notebooks/Bedrock_Mistral_native_function_calling.ipynb
diff --git a/notebooks/Bedrock_Mistral_native_function_calling.ipynb b/notebooks/Bedrock_Mistral_native_function_calling.ipynb
deleted file mode 100644
index 4787715..0000000
--- a/notebooks/Bedrock_Mistral_native_function_calling.ipynb
+++ /dev/null
@@ -1,799 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "id": "e3eed142-6144-4ba2-a693-2bcfdeeae823",
- "metadata": {},
- "source": [
- "# An agentic implementation of multiple tools for function callings with Converse API using Mistral Large"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "4c7a66db-4d22-4e0f-883c-8e8daf6a4291",
- "metadata": {},
- "source": [
- "In this Jupyter Notebook, we walkthrough an implementation of native function calling and agentic workflows with with the Converse API for Amazon Bedrock and Mistral Large. Function Calling is a powerful technique that allows large language models to connect to external tools, systems, or APIs to enable, which can be executed to perform actions based on user's input."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3c58ce86-3cb5-4a4b-9735-dc4684db731f",
- "metadata": {},
- "source": [
- "The Converse API is a unified structured text action for simplifying the invocations to Bedrock LLMs. It includes the possibility to define tools for implementing external functions that can be called or triggered from the LLMs."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "9a5fcd65-01b7-418d-988e-f80d05ebb4a5",
- "metadata": {},
- "source": [
- "---\n",
- "## Mistral Model Selection\n",
- "\n",
- "Today, Mistral Large supports native function calling with Converse API for Amazon Bedrock:\n",
- "\n",
- "### 3. Mistral Large\n",
- "\n",
- "- **Description:** A cutting-edge text generation model with top-tier reasoning capabilities. It can be used for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.\n",
- "- **Max Tokens:** 8,196\n",
- "- **Context Window:** 32K\n",
- "- **Languages:** English, French, German, Spanish, Italian\n",
- "- **Supported Use Cases:** Synthetic Text Generation, Code Generation, RAG, or Agents\n",
- "\n",
- "### Performance and Cost Trade-offs\n",
- "\n",
- "The table below compares the model performance on the Massive Multitask Language Understanding (MMLU) benchmark and their on-demand pricing on Amazon Bedrock.\n",
- "\n",
- "| Model | MMLU Score | Price per 1,000 Input Tokens | Price per 1,000 Output Tokens |\n",
- "|-----------------|------------|------------------------------|-------------------------------|\n",
- "| Mistral Large | 81.2% | \\$0.008 | \\$0.024 |\n",
- "\n",
- "For more information, refer to the following links:\n",
- "\n",
- "1. [Mistral Model Selection Guide](https://docs.mistral.ai/guides/model-selection/)\n",
- "2. [Amazon Bedrock Pricing Page](https://aws.amazon.com/bedrock/pricing/)\n"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f3978ae9-57d0-4dca-b94b-b6493f8eade0",
- "metadata": {},
- "source": [
- "---\n",
- "## Supported papameters\n",
- "\n",
- "The Mistral AI models have the following inference parameters.\n",
- "\n",
- "\n",
- "```\n",
- "{\n",
- " \"prompt\": string,\n",
- " \"max_tokens\" : int,\n",
- " \"stop\" : [string], \n",
- " \"temperature\": float,\n",
- " \"top_p\": float,\n",
- " \"top_k\": int\n",
- "}\n",
- "```\n",
- "\n",
- "The Mistral AI models have the following inference parameters:\n",
- "\n",
- "- **Temperature** - Tunes the degree of randomness in generation. Lower temperatures mean less random generations.\n",
- "- **Top P** - If set to float less than 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation.\n",
- "- **Top K** - Can be used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.\n",
- "- **Maximum Length** - Maximum number of tokens to generate. Responses are not guaranteed to fill up to the maximum desired length.\n",
- "- **Stop sequences** - Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.\n",
- "\n",
- "---"
- ]
- },
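- {
- "cell_type": "markdown",
- "id": "7d9e1f3a-5b7c-4d8e-9f0a-1b3c5d7e9f0a",
- "metadata": {},
- "source": [
- "With the Converse API, the common parameters above map onto `inferenceConfig`, while model-specific fields such as `top_k` go in `additionalModelRequestFields`. Below is a minimal sketch (not executed), assuming the `bedrock` client and `model_id` defined later in this notebook."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "9f1a3b5c-7d9e-4f0a-8b1c-3d5e7f9a1b2c",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Sketch (not executed): passing inference parameters via the Converse API.\n",
- "# maxTokens, stopSequences, temperature and topP map onto inferenceConfig;\n",
- "# model-specific fields such as top_k go in additionalModelRequestFields.\n",
- "response = bedrock.converse(\n",
- "    modelId=model_id,\n",
- "    messages=[{'role': 'user', 'content': [{'text': 'Hello!'}]}],\n",
- "    inferenceConfig={\n",
- "        'maxTokens': 512,\n",
- "        'temperature': 0.7,\n",
- "        'topP': 0.9,\n",
- "        'stopSequences': ['###']\n",
- "    },\n",
- "    additionalModelRequestFields={'top_k': 50}\n",
- ")\n",
- "print(response['output']['message']['content'][0]['text'])"
- ]
- },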
- {
- "cell_type": "markdown",
- "id": "95db24c2-384a-4d1e-b6b6-a832ac002500",
- "metadata": {},
- "source": [
- "### Local Setup (Optional)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "11710041-4077-4700-a47a-2459d1be4fb9",
- "metadata": {},
- "source": [
- "For a local server, follow these steps to execute this jupyter notebook:\n",
- "\n",
- "1. **Configure AWS CLI**: Configure [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) with your AWS credentials. Run `aws configure` and enter your AWS Access Key ID, AWS Secret Access Key, AWS Region, and default output format.\n",
- "\n",
- "2. **Install required libraries**: Install the necessary Python libraries for working with SageMaker, such as [sagemaker](https://github.com/aws/sagemaker-python-sdk/), [boto3](https://github.com/boto/boto3), and others. You can use a Python environment manager like [conda](https://docs.conda.io/en/latest/) or [virtualenv](https://virtualenv.pypa.io/en/latest/) to manage your Python packages in your preferred IDE (e.g. [Visual Studio Code](https://code.visualstudio.com/)).\n",
- "\n",
- "3. **Create an IAM role for SageMaker**: Create an AWS Identity and Access Management (IAM) role that grants your user [SageMaker permissions](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). \n",
- "\n",
- "By following these steps, you can set up a local Jupyter Notebook environment capable of deploying machine learning models on Amazon SageMaker using the appropriate IAM role for granting the necessary permissions."
- ]
- },
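- {
- "cell_type": "markdown",
- "id": "1b3c5d7e-9f1a-4b2c-8d4e-6f8a0b2c4d5e",
- "metadata": {},
- "source": [
- "As an optional sanity check (a sketch using the standard STS API), you can verify that your AWS credentials are configured before calling Bedrock:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "3d5e7f9a-1b3c-4d4e-8f6a-8b0c2d4e6f7a",
- "metadata": {},
- "outputs": [],
- "source": [
- "# Optional sanity check (a sketch): confirm AWS credentials are configured\n",
- "# by asking STS who we are authenticated as.\n",
- "import boto3\n",
- "\n",
- "sts = boto3.client('sts')\n",
- "print(sts.get_caller_identity()['Arn'])"
- ]
- },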
- {
- "cell_type": "markdown",
- "id": "7c22cf2e-971a-4df9-9e27-1cd7a05d8307",
- "metadata": {},
- "source": [
- "## Setup and Requirements"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1ba91a5b-4ceb-4e60-9f89-b31a4e446c00",
- "metadata": {},
- "source": [
- "---\n",
- "1. Create an Amazon SageMaker Notebook Instance - [Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-setup-working-env.html)\n",
- " - For Notebook Instance type, choose ml.t3.medium.\n",
- "2. For Select Kernel, choose [conda_python3](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-prepare.html).\n",
- "3. Install the required packages."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5b7ef5d8-904d-4cce-8522-e4b21279ceb1",
- "metadata": {},
- "source": [
- " \n",
- "\n",
- "
NOTE:\n",
- "\n",
- "- For
Amazon SageMaker Studio, select Kernel \"
Python 3 (ipykernel)\".\n",
- "\n",
- "- For
Amazon SageMaker Studio Classic, select Image \"
Base Python 3.0\" and Kernel \"
Python 3\".\n",
- "\n",
- "
"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1262e8ec-674a-44c2-be1f-0a92df6b7ca6",
- "metadata": {},
- "source": [
- "---\n",
- "\n",
- "Before we start building the agentic workflow, we'll first install some libraries:\n",
- "\n",
- "+ AWS Python SDKs [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) to be able to submit API calls to [Amazon Bedrock](https://aws.amazon.com/bedrock/).\n",
- "---"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "60fde7eb-a354-4934-9126-b793080328c1",
- "metadata": {
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Writing requirements.txt\n"
- ]
- }
- ],
- "source": [
- "%%writefile requirements.txt\n",
- "boto3==1.34.125\n",
- "pandas==2.2.2"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "daf00ca0-9a89-4a3e-ac25-5f172dac43d5",
- "metadata": {},
- "source": [
- "Install required packages:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "c54cfea5-782f-49c1-8885-c8fc8955e09e",
- "metadata": {
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
- "autovizwidget 0.21.0 requires pandas<2.0.0,>=0.20.1, but you have pandas 2.2.2 which is incompatible.\n",
- "awscli 1.32.101 requires botocore==1.34.101, but you have botocore 1.34.126 which is incompatible.\n",
- "hdijupyterutils 0.21.0 requires pandas<2.0.0,>=0.17.1, but you have pandas 2.2.2 which is incompatible.\n",
- "sparkmagic 0.21.0 requires pandas<2.0.0,>=0.17.1, but you have pandas 2.2.2 which is incompatible.\u001b[0m\u001b[31m\n",
- "\u001b[0m"
- ]
- }
- ],
- "source": [
- "!pip install -U -r requirements.txt --quiet"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "6d59d834-3f3c-46e5-8cb8-56def569a934",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "import boto3\n",
- "from datetime import datetime\n",
- "import json\n",
- "import pandas as pd"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "a09522ec-1bb8-4427-a498-60598c1078ee",
- "metadata": {},
- "source": [
- "Let's define a few variables and create a bedrock client."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "42c5827e-45c8-4445-b01e-a611a7dfcd46",
- "metadata": {
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Using modelId: mistral.mistral-large-2402-v1:0\n",
- "Using region: us-west-2\n"
- ]
- }
- ],
- "source": [
- "model_id = 'mistral.mistral-large-2402-v1:0'\n",
- "print(f'Using modelId: {model_id}')\n",
- "\n",
- "region = 'us-west-2'\n",
- "print('Using region: ', region)\n",
- "\n",
- "bedrock = boto3.client(\n",
- " service_name='bedrock-runtime',\n",
- " region_name=region\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "7cc107b3-7dfa-494a-9269-6229b6dccb24",
- "metadata": {},
- "source": [
- "## Load data"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "18477cd8-aeb2-4fec-b9ea-74b73f7386f4",
- "metadata": {},
- "source": [
- "Let's say, we have a transactional dataset tracking customers, payment amounts, payment dates, and whether payments have been fully processed for each transaction identifier. For more information about the sample dataset, please visit the documentation for [Mistral Function Calling](https://docs.mistral.ai/capabilities/function_calling/)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "id": "40c642f3-7aaf-4c37-9071-36a19188e753",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "def load_data()-> pd.DataFrame:\n",
- " \"\"\"\n",
- " Load data from a JSON file into a Pandas DataFrame.\n",
- "\n",
- " Returns:\n",
- " pd.DataFrame: A Pandas DataFrame containing the data loaded from the JSON file.\n",
- " If an error occurs during file loading, an error message is returned instead.\n",
- "\n",
- " \"\"\"\n",
- " local_path = \"sample_data/transactions.json\"\n",
- " \n",
- " try:\n",
- " df = pd.read_json(local_path)\n",
- " return df\n",
- " except (FileNotFoundError, ValueError) as e:\n",
- " return f\"Error: {e}\""
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "01ad9e5a-5525-49a8-be42-1b8e7263aa69",
- "metadata": {
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- " transaction_id customer_id payment_amount payment_date payment_status\n",
- "0 T1001 C001 125.50 2021-10-05 Paid\n",
- "1 T1002 C002 89.99 2021-10-06 Unpaid\n",
- "2 T1003 C003 120.00 2021-10-07 Paid\n",
- "3 T1004 C002 54.30 2021-10-05 Paid\n",
- "4 T1005 C001 210.20 2021-10-08 Pending\n"
- ]
- }
- ],
- "source": [
- "df = load_data()\n",
- "print(df)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "41cb7ad0-64ea-408f-a946-8a0601202838",
- "metadata": {},
- "source": [
- "---\n",
- "## Function Calling"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b3bc0e89-c9fe-4acd-8a9e-bc3fd7a5c828",
- "metadata": {},
- "source": [
- "Function calling is the ability to reliably connect a large language model (LLM) to external tools and enable effective tool usage and interaction with external APIs. Mistral Large has the ability for building LLM powered chatbots or agents that need to retrieve context for the model or interact with external tools by converting natural language into API calls to retrieve specific domain knowledge. From conversational agents and math problem solving to API integration and information extraction, multiple use cases can benefit from this capability provided by Mistral Large."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c18150cc-1aac-4a7f-83f2-49d3c1a91abf",
- "metadata": {},
- "source": [
- "---\n",
- "## Tools"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "2ac8ce00-28f4-4074-9c27-14c9d2263b28",
- "metadata": {},
- "source": [
- "Function calling (Tool use) for Amazon Bedrock allow agents and large language models to interact with the world. In our example, we will define a tool list for a payment status (`get_payment_status`) and payment date (`get_payment_date`) lookup tool. Note in our example we're just returning payment information from a sample dataset to illustrate the concept, but you could make it fully functional by connecting any database or service API."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "8db00433-5194-40e5-a1cc-a05a7d5c2647",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "class ToolsList:\n",
- " # Define our payment status tool function...\n",
- " def get_payment_status(self, transaction_id) -> str:\n",
- " \"Get payment status of a transaction\"\n",
- " data = load_data()\n",
- " \n",
- " try:\n",
- " # Attempt to retrieve the payment status for the given transaction ID\n",
- " status = data[data.transaction_id == transaction_id].payment_status.item()\n",
- " except ValueError:\n",
- " # If the transaction ID is not found, return an error message\n",
- " return f\"ERROR: Transaction ID {transaction_id} not found.\"\n",
- " \n",
- " # Retrieve the payment status for the corresponding index\n",
- " result = f'Payment date for transaction ID {transaction_id} is {status}.'\n",
- " print(f'Tool result: {result}')\n",
- " return result\n",
- "\n",
- " # Define our payment date tool function...\n",
- " def get_payment_date(self, transaction_id) -> str:\n",
- " \"Get payment date of a transaction\"\n",
- " data = load_data()\n",
- "\n",
- " try:\n",
- " # Attempt to retrieve the payment date for the given transaction ID\n",
- " date = data[data.transaction_id == transaction_id].payment_date.item()\n",
- " except ValueError:\n",
- " # If the transaction ID is not found, return an error message\n",
- " return f\"ERROR: Transaction ID {transaction_id} not found.\"\n",
- "\n",
- " result = f'Payment date for transaction ID {transaction_id} is {date}.'\n",
- " print(f'Tool result: {result}')\n",
- " return result"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b779e6db-2322-4cdf-8f4b-2b97e532c380",
- "metadata": {},
- "source": [
- "---\n",
- "### Tool configuration for Converse API"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "2ae18c68-8ddd-4162-a036-f7dc496fca62",
- "metadata": {},
- "source": [
- "In this step, we structure our tools configuration for passing this information to our Converse API. We have to clearly define the schema that our tools are expecting in the corresponding functions.\n",
- "\n",
- "**Note:** *With Converse API allows, we can define configurations to allow us either let the LLM choose automatically (auto) a tool, or overriding a fixed tool to be called always. To see more information, check the Bedrock Converse API documentation.*"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "4c5e6964-e502-451c-9be5-5e7c171a133d",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "# Define the configuration for our tool...\n",
- "\n",
- "toolConfig = {\n",
- " 'tools': [],\n",
- " 'toolChoice': {\n",
- " 'auto': {},\n",
- " #'any': {},\n",
- " #'tool': {\n",
- " # 'name': 'get_payment_status'\n",
- " #}\n",
- " }\n",
- "}\n",
- "\n",
- "toolConfig['tools'].append({\n",
- " 'toolSpec': {\n",
- " 'name': 'get_payment_status',\n",
- " 'description': 'Get payment status of a transaction',\n",
- " 'inputSchema': {\n",
- " 'json': {\n",
- " 'type': 'object',\n",
- " 'properties': {\n",
- " 'transaction_id': {\n",
- " 'type': 'string',\n",
- " 'description': 'Transaction ID'\n",
- " }\n",
- " },\n",
- " 'required': ['transaction_id']\n",
- " }\n",
- " }\n",
- " }\n",
- " })\n",
- "\n",
- "toolConfig['tools'].append({\n",
- " 'toolSpec': {\n",
- " 'name': 'get_payment_date',\n",
- " 'description': 'Get payment date of a transaction',\n",
- " 'inputSchema': {\n",
- " 'json': {\n",
- " 'type': 'object',\n",
- " 'properties': {\n",
- " 'transaction_id': {\n",
- " 'type': 'string',\n",
- " 'description': 'Transaction ID'\n",
- " }\n",
- " },\n",
- " 'required': ['transaction_id']\n",
- " }\n",
- " }\n",
- " }\n",
- " })"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "eb858e61-aa93-4c4d-bfe6-94da5dc61194",
- "metadata": {},
- "source": [
- "Let's define a handy function for calling Bedrock with the Converse API."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "e61adf1b-f2a9-4e0c-a3a0-c8862e113b68",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "# Function for caling the Bedrock Converse API...\n",
- "\n",
- "def converse_with_tools(messages, system='', toolConfig=toolConfig):\n",
- " response = bedrock.converse(\n",
- " modelId=model_id,\n",
- " system=system,\n",
- " messages=messages,\n",
- " toolConfig=toolConfig\n",
- " )\n",
- " return response"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "79da5ab1-0083-4702-89d8-b202199e90c2",
- "metadata": {},
- "source": [
- "We're now ready for setting up our orchestration flow. In this case, we'll make a first call to the LLM with the initial prompt from the user, and depending on the answer from the LLM we'll either call a tool (function calling) or end the interaction.\n",
- "\n",
- "Note that in the case the LLM indicates that it wants to run a tool (function calling), it will give us the information required of tool name and arguments for us to run the relevant tool in our code; i.e. The LLMs cannot run the tools automatically."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "314786dc-3726-4b7c-a872-5a8a5c3a35f7",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "# Function for orchestrating the conversation flow...\n",
- "\n",
- "def converse(prompt, system=''):\n",
- " # Add the initial prompt:\n",
- " messages = []\n",
- " messages.append(\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": [\n",
- " {\n",
- " \"text\": prompt\n",
- " }\n",
- " ]\n",
- " }\n",
- " )\n",
- " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Initial prompt:\\n{json.dumps(messages, indent=2)}\")\n",
- "\n",
- " # Invoke the model the first time:\n",
- " output = converse_with_tools(messages, system)\n",
- " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Output so far:\\n{json.dumps(output['output'], indent=2, ensure_ascii=False)}\")\n",
- "\n",
- " # Add the intermediate output to the prompt:\n",
- " messages.append(output['output']['message'])\n",
- "\n",
- " function_calling = next((c['toolUse'] for c in output['output']['message']['content'] if 'toolUse' in c), None)\n",
- "\n",
- " # Check if function calling is triggered:\n",
- " if function_calling:\n",
- " # Get the tool name and arguments:\n",
- " tool_name = function_calling['name']\n",
- " tool_args = function_calling['input'] or {}\n",
- " \n",
- " # Run the tool:\n",
- " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Running ({tool_name}) tool...\")\n",
- " tool_response = getattr(ToolsList(), tool_name)(**tool_args) or \"\"\n",
- " if tool_response:\n",
- " tool_status = 'success'\n",
- " else:\n",
- " tool_status = 'error'\n",
- "\n",
- " # Add the tool result to the prompt:\n",
- " messages.append(\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": [\n",
- " {\n",
- " 'toolResult': {\n",
- " 'toolUseId': function_calling['toolUseId'],\n",
- " 'content': [\n",
- " {\n",
- " \"text\": tool_response\n",
- " }\n",
- " ],\n",
- " 'status': tool_status\n",
- " }\n",
- " }\n",
- " ]\n",
- " }\n",
- " )\n",
- " \n",
- " # Invoke the model one more time:\n",
- " output = converse_with_tools(messages, system)\n",
- " print(f\"\\n{datetime.now().strftime('%H:%M:%S')} - Final output:\\n{json.dumps(output['output'], indent=2, ensure_ascii=False)}\\n\")\n",
- " return"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "ecc32f77-ed70-452d-898e-e2afa3c23322",
- "metadata": {},
- "source": [
- "Now, we have everything setup and are ready for testing our function-calling bot.\n",
- "\n",
- "Let's try with a few sample prompts, each one with different needs."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "id": "e53c78eb-f5cb-44d2-8fdf-b979b717667e",
- "metadata": {
- "tags": []
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "16:33:48 - Initial prompt:\n",
- "[\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": [\n",
- " {\n",
- " \"text\": \"What's the status of my transaction ID T1001?\"\n",
- " }\n",
- " ]\n",
- " }\n",
- "]\n",
- "\n",
- "16:33:50 - Output so far:\n",
- "{\n",
- " \"message\": {\n",
- " \"role\": \"assistant\",\n",
- " \"content\": [\n",
- " {\n",
- " \"toolUse\": {\n",
- " \"toolUseId\": \"tooluse_Ha-S_2c3TTeRv5Hb2xWwtg\",\n",
- " \"name\": \"get_payment_status\",\n",
- " \"input\": {\n",
- " \"transaction_id\": \"T1001\"\n",
- " }\n",
- " }\n",
- " }\n",
- " ]\n",
- " }\n",
- "}\n",
- "\n",
- "16:33:50 - Running (get_payment_status) tool...\n",
- "Tool result: Payment date for transaction ID T1001 is Paid.\n",
- "\n",
- "16:33:50 - Final output:\n",
- "{\n",
- " \"message\": {\n",
- " \"role\": \"assistant\",\n",
- " \"content\": [\n",
- " {\n",
- " \"text\": \"The status of your transaction ID T1001 is Paid.\"\n",
- " }\n",
- " ]\n",
- " }\n",
- "}\n",
- "\n",
- "\n",
- "16:33:50 - Initial prompt:\n",
- "[\n",
- " {\n",
- " \"role\": \"user\",\n",
- " \"content\": [\n",
- " {\n",
- " \"text\": \"On what date was my transaction ID T1001 made?\"\n",
- " }\n",
- " ]\n",
- " }\n",
- "]\n",
- "\n",
- "16:33:51 - Output so far:\n",
- "{\n",
- " \"message\": {\n",
- " \"role\": \"assistant\",\n",
- " \"content\": [\n",
- " {\n",
- " \"toolUse\": {\n",
- " \"toolUseId\": \"tooluse_yD1YpWKkQemhzZQBhvsc6w\",\n",
- " \"name\": \"get_payment_date\",\n",
- " \"input\": {\n",
- " \"transaction_id\": \"T1001\"\n",
- " }\n",
- " }\n",
- " }\n",
- " ]\n",
- " }\n",
- "}\n",
- "\n",
- "16:33:51 - Running (get_payment_date) tool...\n",
- "Tool result: Payment date for transaction ID T1001 is 2021-10-05.\n",
- "\n",
- "16:33:52 - Final output:\n",
- "{\n",
- " \"message\": {\n",
- " \"role\": \"assistant\",\n",
- " \"content\": [\n",
- " {\n",
- " \"text\": \"Your transaction with ID T1001 was made on 2021-10-05.\"\n",
- " }\n",
- " ]\n",
- " }\n",
- "}\n",
- "\n"
- ]
- }
- ],
- "source": [
- "prompts = [\n",
- " \"What's the status of my transaction ID T1001?\",\n",
- " \"On what date was my transaction ID T1001 made?\"\n",
- "]\n",
- "\n",
- "for prompt in prompts:\n",
- " converse(\n",
- " system=[\n",
- " {\n",
- " \"text\": \"\"\"You're provided with tools that can get the payment status and payment date for a given transaction ID; \\\n",
- " only use the tool if required. Don't make reference to the tools in your final answer.\"\"\"\n",
- " }\n",
- " ],\n",
- " prompt=prompt\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d347b97d-618b-40c8-9087-9b6481d714f7",
- "metadata": {},
- "source": [
- "As we can see, the LLM decides whether or not to call the `get_payment_status` or `get_payment_date` tool depending on the question."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b458ce6a-216a-478d-968b-862fcfc8fe18",
- "metadata": {},
- "source": [
- "## Distributors\n",
- "- Amazon Web Services\n",
- "- Mistral AI\n",
- "\n",
- "---"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "1f5a51e5-3c22-496d-9110-bbc7f07e9dd8",
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "conda_python3",
- "language": "python",
- "name": "conda_python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.14"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}