diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-serverless.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-serverless.ipynb index 375134bee1..c01ecadaab 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-serverless.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-serverless.ipynb @@ -9,8 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- This notebook leverages **serverless compute** to run the job. There is no need for user to create and manage compute. \n", + "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section\n", "\n", @@ -18,7 +17,7 @@ "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML classification Job` with the 'classification()' factory-fuction.\n", - "- Train the model using AmlCompute by submitting/running the AutoML training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML training job\n", "- Obtaing the model and score predictions with it\n", "- Leverage the auto generated training code and use it for retraining on an updated dataset\n", "\n", diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb index 8830ce0092..6eca78503e 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb @@ -37,15 +37,14 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- Serverless compute to run the job\n", + "- An Azure ML workspace. 
[Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", "- A python environment\n", "- Installation instructions - [install instructions](../../../README.md)\n", "\n", "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML time-series forecasting Job` with the 'forecasting()' factory-fuction\n", - "- Train the model using serverless compute by submitting/running the AutoML forecasting training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML forecasting training job\n", "- Obtain the model and use it to generate forecast\n", "\n", "**Motivations** - This notebook explains how to setup and run an AutoML forecasting job. This is one of the nine ML-tasks supported by AutoML. Other ML-tasks are 'regression', 'classification', 'image classification', 'image object detection', 'nlp text classification', etc.\n", diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-orange-juice-sales/automl-forecasting-orange-juice-sales-mlflow.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-orange-juice-sales/automl-forecasting-orange-juice-sales-mlflow.ipynb index d4a9c4f510..8a5b6421fc 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-orange-juice-sales/automl-forecasting-orange-juice-sales-mlflow.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-orange-juice-sales/automl-forecasting-orange-juice-sales-mlflow.ipynb @@ -17,7 +17,7 @@ "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML time-series forecasting Job` with the 'forecasting()' factory-fuction.\n", - "- Train the model using AmlCompute by submitting/running the AutoML forecasting training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML forecasting training job\n", "- Obtaing the model and score predictions with it\n", "\n", "**Motivations** - This notebook explains how to setup and run an AutoML forecasting job. This is one of the nine ML-tasks supported by AutoML. 
Other ML-tasks are 'regression', 'classification', 'image classification', 'image object detection', 'nlp text classification', etc.\n", diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-recipes-univariate/automl-forecasting-recipe-univariate-run.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-recipes-univariate/automl-forecasting-recipe-univariate-run.ipynb index a88b3f2042..f0449f18b2 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-recipes-univariate/automl-forecasting-recipe-univariate-run.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-recipes-univariate/automl-forecasting-recipe-univariate-run.ipynb @@ -25,7 +25,7 @@ "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML time-series forecasting Job` with the 'forecasting()' factory-fuction\n", - "- Train the model using AmlCompute by submitting/running the AutoML forecasting training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML forecasting training job\n", "- Obtain the model and use it to generate forecast\n", "\n" ] @@ -95,6 +95,7 @@ "from azure.ai.ml.constants import AssetTypes, InputOutputModes\n", "from azure.ai.ml import automl\n", "from azure.ai.ml import Input\n", + "from azure.ai.ml.entities import ResourceConfiguration\n", "\n", "import json\n", "import pandas as pd\n", @@ -112,9 +113,8 @@ "\n", "As part of the setup you have already created a Workspace. To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We will use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. We use the default [default azure authentication](https://docs.microsoft.com/en-us/python/api/azure-identity/azure.identity.defaultazurecredential?view=azure-python) for this tutorial. Check the [configuration notebook](../../configuration.ipynb) for more details on how to configure credentials and connect to a workspace.\n", "\n", - " You will also need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + " You will also need to create a compute target for your AutoML run. In this tutorial, you will use serverless compute (preview) as your training compute resource.\n", + "\n" ] }, { @@ -563,52 +563,6 @@ "\"\"\"" ] }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3. Create or Attach existing AmlCompute\n", - "\n", - "[Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. 
In this tutorial, you will create and an AmlCompute cluster as your training compute resource.\n", - "\n", - "Creation of AmlCompute takes approximately 5 minutes.\n", - "\n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1684500104600 - } - }, - "outputs": [], - "source": [ - "from azure.core.exceptions import ResourceNotFoundError\n", - "from azure.ai.ml.entities import AmlCompute\n", - "\n", - "cluster_name = \"recipe-cluster\"\n", - "\n", - "try:\n", - " # Retrieve an already attached Azure Machine Learning Compute.\n", - " compute = ml_client.compute.get(cluster_name)\n", - "except ResourceNotFoundError as e:\n", - " compute = AmlCompute(\n", - " name=cluster_name,\n", - " size=\"STANDARD_DS12_V2\",\n", - " type=\"amlcompute\",\n", - " min_instances=0,\n", - " max_instances=4,\n", - " idle_time_before_scale_down=120,\n", - " )\n", - " poller = ml_client.begin_create_or_update(compute)\n", - " poller.wait()" - ] - }, { "attachments": {}, "cell_type": "markdown", @@ -629,7 +583,7 @@ "|**target_column_name**|The name of the label column.|\n", "|**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics<br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>
normalized_mean_absolute_error|\n", "|**training_data**|The training data to be used for this experiment. You can use a registered MLTable in the workspace using the format `:` OR you can use a local file or folder as a MLTable. For e.g `Input(mltable='my_mltable:1')` OR `Input(mltable=MLTable(local_path=\"./data\"))` The parameter 'training_data' must always be provided.|\n", - "|**compute**|The compute on which the AutoML job will run. In this example we are using a compute called 'cpu-cluster' present in the workspace. You can replace it with any other compute in the workspace.|\n", + "|**compute**|The compute on which the AutoML job will run. In this example we are using serverless compute, so no compute needs to be specified. You can replace it with a compute cluster in the workspace.|\n", "|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection. The default value is \"auto\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or, users could specify an integer value.|\n", "|**name**|The name of the Job/Run. This is an optional property. If not specified, a random name will be generated.\n", "|**experiment_name**|The name of the Experiment. An Experiment is like a folder with multiple runs in Azure ML Workspace that should be related to the same logical machine learning experiment. For example, if a user runs this notebook multiple times, there will be multiple runs associated with the same Experiment name.|\n", @@ -715,7 +669,6 @@ "# Create the AutoML forecasting job with the related factory-function.\n", "\n", "forecasting_job = automl.forecasting(\n", - " compute=cluster_name,\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", " target_column_name=TARGET_COLNAME,\n", @@ -753,7 +706,8 @@ ")\n", "\n", "# Training properties are optional\n", - "forecasting_job.set_training(blocked_training_algorithms=BLOCKED_MODELS)" + "forecasting_job.set_training(blocked_training_algorithms=BLOCKED_MODELS)\n", + "forecasting_job.resources = ResourceConfiguration(instance_count=4)" ] }, { @@ -1089,7 +1043,38 @@ "source": [ "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch inferencing on the test dataset which must have the same schema as training dataset.\n", "\n", - "The inference will run on a remote compute. In this example, it will re-use the training compute." + "The inference will run on a remote compute. In this example, we will create a compute cluster.\n",
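+    "\n",
+    "As an added, hedged sketch (the folder `./infer` and script `infer.py` are illustrative and not part of this notebook), the cluster created in the next cell can then be referenced by name when submitting a remote inference job:\n",
+    "\n",
+    "```python\n",
+    "from azure.ai.ml import command\n",
+    "\n",
+    "# Sketch only: run an inference script on the AmlCompute cluster created below.\n",
+    "inference_job = command(\n",
+    "    code=\"./infer\",  # assumed folder holding the inference script\n",
+    "    command=\"python infer.py\",  # illustrative entry point\n",
+    "    environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",  # assumed curated environment\n",
+    "    compute=cluster_name,  # name of the cluster created in the next cell\n",
+    ")\n",
+    "ml_client.create_or_update(inference_job)\n",
+    "```"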
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1684500104600 + } + }, + "outputs": [], + "source": [ + "from azure.core.exceptions import ResourceNotFoundError\n", + "from azure.ai.ml.entities import AmlCompute\n", + "\n", + "cluster_name = \"recipe-cluster\"\n", + "\n", + "try:\n", + " # Retrieve an already attached Azure Machine Learning Compute.\n", + " compute = ml_client.compute.get(cluster_name)\n", + "except ResourceNotFoundError as e:\n", + " compute = AmlCompute(\n", + " name=cluster_name,\n", + " size=\"STANDARD_DS12_V2\",\n", + " type=\"amlcompute\",\n", + " min_instances=0,\n", + " max_instances=4,\n", + " idle_time_before_scale_down=120,\n", + " )\n", + " poller = ml_client.begin_create_or_update(compute)\n", + " poller.wait()" ] }, { diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb index 5c678911b1..495625a6a2 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb @@ -35,7 +35,7 @@ "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML time-series forecasting Job` with the 'forecasting()' factory-fuction\n", - "- Train the model using AmlCompute by submitting/running the AutoML forecasting training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML forecasting training job\n", "- Obtain the model and use it to generate forecast\n", "\n", "**Motivations** - This notebook explains how to setup and run an AutoML forecasting job. This is one of the nine ML-tasks supported by AutoML. Other ML-tasks are 'regression', 'classification', 'image classification', 'image object detection', 'nlp text classification', etc.\n", @@ -312,46 +312,6 @@ "- https://learn.microsoft.com/en-us/azure/machine-learning/how-to-create-data-assets?tabs=Python-SDK covers how to work with them in the v2 CLI/SDK." ] }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 3 Create or Attach existing AmlCompute.\n", - "[Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you will create and an AmlCompute cluster as your training compute resource.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. \n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azure.core.exceptions import ResourceNotFoundError\n", - "from azure.ai.ml.entities import AmlCompute\n", - "\n", - "cluster_name = \"bike-share-v2\"\n", - "\n", - "try:\n", - " # Retrieve an already attached Azure Machine Learning Compute.\n", - " compute = ml_client.compute.get(cluster_name)\n", - "except ResourceNotFoundError as e:\n", - " compute = AmlCompute(\n", - " name=cluster_name,\n", - " size=\"STANDARD_DS12_V2\",\n", - " type=\"amlcompute\",\n", - " min_instances=0,\n", - " max_instances=4,\n", - " idle_time_before_scale_down=120,\n", - " )\n", - " poller = ml_client.begin_create_or_update(compute)\n", - " poller.wait()" - ] - }, { "attachments": {}, "cell_type": "markdown", @@ -371,7 +331,7 @@ "|**target_column_name**|The name of the label column.|\n", "|**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics<br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>
normalized_mean_absolute_error|\n", "|**training_data**|The training data to be used for this experiment. You can use a registered MLTable in the workspace using the format `:` OR you can use a local file or folder as a MLTable. For e.g `Input(mltable='my_mltable:1')` OR `Input(mltable=MLTable(local_path=\"./data\"))` The parameter 'training_data' must always be provided.|\n", - "|**compute**|The compute on which the AutoML job will run. In this example we are using a compute called 'cpu-cluster' present in the workspace. You can replace it with any other compute in the workspace.|\n", + "|**compute**|The compute on which the AutoML job will run. In this example we are using serverless compute. You can replace it with a compute cluster in the workspace.|\n", "|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection. This can be set to \"auto\", in which case AutoMl determines the number of cross-validations automatically, if a validation set is not provided. Or, users could specify an integer value.|\n", "|**name**|The name of the Job/Run. This is an optional property. If not specified, a random name will be generated.\n", "|**experiment_name**|The name of the Experiment. An Experiment is like a folder with multiple runs in Azure ML Workspace that should be related to the same logical machine learning experiment. For example, if a user runs this notebook multiple times, there will be multiple runs associated with the same Experiment name.|\n", @@ -462,8 +422,7 @@ "outputs": [], "source": [ "# Create the AutoML forecasting job with the related factory-function. Force the target column, to be integer type (To be added in phase 2)\n", - "forecasting_job = automl.forecasting(\n", - " compute=\"bike-share-v2\",\n", + "forecasting_job = automl.forecasting(\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", " target_column_name=target_column_name,\n", @@ -797,7 +756,34 @@ "\n", "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch inferencing on the test dataset which must have the same schema as training dataset.\n", "\n", - "The inference will run on a remote compute. In this example, it will re-use the training compute." + "The inference will run on a remote compute. In this example, we will create a compute cluster."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azure.core.exceptions import ResourceNotFoundError\n", + "from azure.ai.ml.entities import AmlCompute\n", + "\n", + "cluster_name = \"bike-share-v2\"\n", + "\n", + "try:\n", + " # Retrieve an already attached Azure Machine Learning Compute.\n", + " compute = ml_client.compute.get(cluster_name)\n", + "except ResourceNotFoundError as e:\n", + " compute = AmlCompute(\n", + " name=cluster_name,\n", + " size=\"STANDARD_DS12_V2\",\n", + " type=\"amlcompute\",\n", + " min_instances=0,\n", + " max_instances=4,\n", + " idle_time_before_scale_down=120,\n", + " )\n", + " poller = ml_client.begin_create_or_update(compute)\n", + " poller.wait()" ] }, { diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb index 74eb793a4e..d84f9822ce 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-energy-demand/automl-forecasting-task-energy-demand-advanced.ipynb @@ -10,15 +10,14 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- A Compute Cluster. [Check this notebook to create a compute cluster](../../../resources/compute/compute.ipynb)\n", + "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section\n", "\n", "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML time-series forecasting Job` with the 'forecasting()' factory-fuction.\n", - "- Train the model using AmlCompute by submitting/running the AutoML forecasting training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML forecasting training job\n", "- Obtaing the model and score predictions with it\n", "\n", "**Motivations** - This notebook explains how to setup and run an AutoML forecasting job. This is one of the nine ML-tasks supported by AutoML. Other ML-tasks are 'regression', 'classification', 'image classification', 'image object detection', 'nlp text classification', etc.\n", @@ -227,7 +226,7 @@ "- `training_data` - The data to be used for training. It should contain both training feature columns and a target column. Optionally, this data can be split for segregating a validation or test dataset. \n", "You can use a registered MLTable in the workspace using the format ':' OR you can use a local file or folder as a MLTable. 
For e.g Input(mltable='my_mltable:1') OR Input(mltable=MLTable(local_path=\"./data\"))\n", "The parameter 'training_data' must always be provided.\n", - "- `compute` - The compute on which the AutoML job will run. In this example we are using a compute called 'adv-energy-cluster-v2' present in the workspace. You can replace it any other compute in the workspace. \n", + "- `compute` - The compute on which the AutoML job will run. In this example we are using serverless compute. You can alternatively use a compute cluster in the workspace. \n", "- `name` - The name of the Job/Run. This is an optional property. If not specified, a random name will be generated.\n", "- `experiment_name` - The name of the Experiment. An Experiment is like a folder with multiple runs in Azure ML Workspace that should be related to the same logical machine learning experiment.\n", "\n", @@ -284,8 +283,7 @@ }, "outputs": [], "source": [ - "# general job parameters\n", - "compute_name = \"adv-energy-cluster-v2\"\n", + "# general job parameters\n", "max_trials = 5\n", "exp_name = \"dpv2-forecasting-experiment\"" ] @@ -312,8 +310,7 @@ "source": [ "# Create the AutoML forecasting job with the related factory-function.\n", "\n", - "forecasting_job = automl.forecasting(\n", - " compute=compute_name,\n", + "forecasting_job = automl.forecasting(\n", " # name=\"dpv2-forecasting-job-02\",\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment-mlflow.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment-mlflow.ipynb index 060b4c3a90..8dbabf2265 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment-mlflow.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment-mlflow.ipynb @@ -10,7 +10,6 @@ "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- A Compute Cluster. [Check this notebook to create a compute cluster](../../../resources/compute/compute.ipynb)\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section" ] @@ -42,7 +41,8 @@ "from azure.identity import DefaultAzureCredential\n", "from azure.ai.ml.constants import AssetTypes\n", "from azure.ai.ml import automl, Input, MLClient\n", - "from pprint import pprint" + "from pprint import pprint\n", + "from azure.ai.ml.entities import ResourceConfiguration" ] }, { @@ -190,40 +190,6 @@ "In this section we will configure and run the AutoML job, for training the model. 
" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "parameters" - ] - }, - "outputs": [], - "source": [ - "# general job parameters\n", - "from azure.ai.ml.entities import AmlCompute\n", - "from azure.core.exceptions import ResourceNotFoundError\n", - "\n", - "compute_name = \"gpu-cluster\"\n", - "\n", - "try:\n", - " _ = ml_client.compute.get(compute_name)\n", - " print(\"Found existing compute target.\")\n", - "except ResourceNotFoundError:\n", - " print(\"Creating a new compute target...\")\n", - " compute_config = AmlCompute(\n", - " name=compute_name,\n", - " type=\"amlcompute\",\n", - " size=\"Standard_NC6\",\n", - " idle_time_before_scale_down=120,\n", - " min_instances=0,\n", - " max_instances=4,\n", - " )\n", - " ml_client.begin_create_or_update(compute_config).result()\n", - "exp_name = \"dpv2-nlp-text-classification-experiment\"\n", - "dataset_language_code = \"eng\"" - ] - }, { "cell_type": "code", "execution_count": null, @@ -246,8 +212,9 @@ "source": [ "# Create the AutoML job with the related factory-function.\n", "\n", + "exp_name = \"dpv2-nlp-text-classification-experiment\"\n", + "dataset_language_code = \"eng\"\n", "text_classification_job = automl.text_classification(\n", - " compute=compute_name,\n", " # name=\"dpv2-nlp-text-classification-multiclass-job-01\",\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", @@ -257,7 +224,10 @@ " tags={\"my_custom_tag\": \"My custom value\"},\n", ")\n", "\n", - "text_classification_job.set_limits(timeout_minutes=120)" + "text_classification_job.set_limits(timeout_minutes=120)\n", + "text_classification_job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\"\n", + ")" ] }, { diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment.ipynb index 9d3963fdcc..b565cb685d 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multiclass-task-sentiment-analysis/automl-nlp-multiclass-sentiment.ipynb @@ -10,7 +10,6 @@ "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- A Compute Cluster. 
[Check this notebook to create a compute cluster](../../../resources/compute/compute.ipynb)\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section" ] @@ -44,7 +43,8 @@ "from azure.ai.ml.constants import AssetTypes\n", "from azure.ai.ml import Input\n", "\n", - "from azure.ai.ml import automl" + "from azure.ai.ml import automl\n", + "from azure.ai.ml.entities import ResourceConfiguration" ] }, { @@ -195,7 +195,6 @@ "outputs": [], "source": [ "# general job parameters\n", - "compute_name = \"gpu-cluster\"\n", "exp_name = \"dpv2-nlp-text-classification-experiment\"\n", "dataset_language_code = \"eng\"" ] @@ -223,7 +222,6 @@ "# Create the AutoML job with the related factory-function.\n", "\n", "text_classification_job = automl.text_classification(\n", - " compute=compute_name,\n", " # name=\"dpv2-nlp-text-classification-multiclass-job-01\",\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", @@ -233,7 +231,10 @@ " tags={\"my_custom_tag\": \"My custom value\"},\n", ")\n", "\n", - "text_classification_job.set_limits(timeout_minutes=120)" + "text_classification_job.set_limits(timeout_minutes=120)\n", + "text_classification_job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\"\n", + ")" ] }, { diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-multilabel-paper-cat.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-multilabel-paper-cat.ipynb index 3fde528cf4..0845fb007d 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-multilabel-paper-cat.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-classification-multilabel-task-paper-categorization/automl-nlp-multilabel-paper-cat.ipynb @@ -10,7 +10,6 @@ "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- A Compute Cluster. [Check this notebook to create a compute cluster](../../../resources/compute/compute.ipynb)\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section\n", "\n", @@ -46,7 +45,8 @@ "from azure.identity import DefaultAzureCredential\n", "from azure.ai.ml.constants import AssetTypes\n", "from azure.ai.ml import automl, Input, MLClient\n", - "from pprint import pprint" + "from pprint import pprint\n", + "from azure.ai.ml.entities import ResourceConfiguration" ] }, { @@ -141,40 +141,6 @@ "In this section we will configure and run the AutoML job, for training the model. 
" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "parameters" - ] - }, - "outputs": [], - "source": [ - "# general job parameters\n", - "from azure.ai.ml.entities import AmlCompute\n", - "from azure.core.exceptions import ResourceNotFoundError\n", - "\n", - "compute_name = \"gpu-cluster-nc6s-v3\"\n", - "\n", - "try:\n", - " _ = ml_client.compute.get(compute_name)\n", - " print(\"Found existing compute target.\")\n", - "except ResourceNotFoundError:\n", - " print(\"Creating a new compute target...\")\n", - " compute_config = AmlCompute(\n", - " name=compute_name,\n", - " type=\"amlcompute\",\n", - " size=\"Standard_NC6s_v3\",\n", - " idle_time_before_scale_down=120,\n", - " min_instances=0,\n", - " max_instances=4,\n", - " )\n", - " ml_client.begin_create_or_update(compute_config).result()\n", - "exp_name = \"dpv2-nlp-multilabel\"\n", - "exp_timeout = 120" - ] - }, { "cell_type": "code", "execution_count": null, @@ -197,8 +163,9 @@ "source": [ "# Create the AutoML job with the related factory-function.\n", "\n", + "exp_name = \"dpv2-nlp-multilabel\"\n", + "exp_timeout = 120\n", "text_classification_multilabel_job = automl.text_classification_multilabel(\n", - " compute=compute_name,\n", " # name=\"dpv2-nlp-text-classification-multilabel-job-02\",\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", @@ -208,7 +175,10 @@ " tags={\"my_custom_tag\": \"My custom value\"},\n", ")\n", "\n", - "text_classification_multilabel_job.set_limits(timeout_minutes=exp_timeout)" + "text_classification_multilabel_job.set_limits(timeout_minutes=exp_timeout)\n", + "text_classification_multilabel_job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\"\n", + ")" ] }, { diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb index ba2157126e..0935ec1da3 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-nlp-text-named-entity-recognition-task/automl-nlp-text-ner-task.ipynb @@ -10,7 +10,6 @@ "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- A Compute Cluster. [Check this notebook to create a compute cluster](../../../resources/compute/compute.ipynb)\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section\n", "\n", @@ -46,6 +45,7 @@ "from azure.identity import DefaultAzureCredential\n", "from azure.ai.ml import automl, Input, MLClient\n", "from azure.ai.ml.constants import AssetTypes\n", + "from azure.ai.ml.entities import ResourceConfiguration\n", "\n", "from pprint import pprint" ] @@ -145,40 +145,6 @@ "In this section we will configure and run the AutoML job, for training the model. 
" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "parameters" - ] - }, - "outputs": [], - "source": [ - "# general job parameters\n", - "from azure.ai.ml.entities import AmlCompute\n", - "from azure.core.exceptions import ResourceNotFoundError\n", - "\n", - "compute_name = \"gpu-cluster-nc6s-v3\"\n", - "\n", - "try:\n", - " _ = ml_client.compute.get(compute_name)\n", - " print(\"Found existing compute target.\")\n", - "except ResourceNotFoundError:\n", - " print(\"Creating a new compute target...\")\n", - " compute_config = AmlCompute(\n", - " name=compute_name,\n", - " type=\"amlcompute\",\n", - " size=\"Standard_NC6s_v3\",\n", - " idle_time_before_scale_down=120,\n", - " min_instances=0,\n", - " max_instances=4,\n", - " )\n", - " ml_client.begin_create_or_update(compute_config).result()\n", - "exp_name = \"dpv2-nlp-text-ner-experiment\"\n", - "exp_timeout = 60" - ] - }, { "cell_type": "code", "execution_count": null, @@ -202,8 +168,9 @@ "source": [ "# Create the AutoML job with the related factory-function.\n", "\n", + "exp_name = \"dpv2-nlp-text-ner-experiment\"\n", + "exp_timeout = 60\n", "text_ner_job = automl.text_ner(\n", - " compute=compute_name,\n", " # name=\"dpv2-nlp-text-ner-job-01\",\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", @@ -211,7 +178,8 @@ " tags={\"my_custom_tag\": \"My custom value\"},\n", ")\n", "\n", - "text_ner_job.set_limits(timeout_minutes=exp_timeout)" + "text_ner_job.set_limits(timeout_minutes=exp_timeout)\n", + "text_ner_job.resources = ResourceConfiguration(instance_type=\"Standard_NC6s_v3\")" ] }, { diff --git a/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb b/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb index 42eb267610..9bb054e64b 100644 --- a/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb +++ b/sdk/python/jobs/automl-standalone-jobs/automl-regression-task-hardware-performance/automl-regression-task-hardware-performance.ipynb @@ -9,15 +9,14 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace. [Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", - "- A Compute Cluster. [Check this notebook to create a compute cluster](../../../resources/compute/compute.ipynb)\n", + "- An Azure ML workspace. 
[Check this notebook for creating a workspace](../../../resources/workspace/workspace.ipynb) \n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../README.md) - check the getting started section\n", "\n", "**Learning Objectives** - By the end of this tutorial, you should be able to:\n", "- Connect to your AML workspace from the Python SDK\n", "- Create an `AutoML regression Job` with the 'regression()' factory-function.\n", - "- Train the model using AmlCompute by submitting/running the AutoML regression training job\n", + "- Train the model using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) by submitting/running the AutoML regression training job\n", "- Obtaing the model and score predictions with it\n", "\n", "**Motivations** - This notebook explains how to setup and run an AutoML regression job. This is one of the nine ML-tasks supported by AutoML. Other ML-tasks are 'forecasting', 'classification', 'image classification', 'image object detection', 'nlp text classification', etc.\n", @@ -148,49 +147,6 @@ "# my_training_data_input = Input(type=AssetTypes.MLTABLE, path=\"azureml://datastores/workspaceblobstore/paths/my-regression-mltable\")" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 3. Compute target setup\n", - "\n", - "### Create or Attach existing AmlCompute\n", - "A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. \n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azure.ai.ml.entities import AmlCompute\n", - "from azure.core.exceptions import ResourceNotFoundError\n", - "\n", - "compute_name = \"cpu-cluster\"\n", - "\n", - "try:\n", - " _ = ml_client.compute.get(compute_name)\n", - " print(\"Found existing compute target.\")\n", - "except ResourceNotFoundError:\n", - " print(\"Creating a new compute target...\")\n", - " compute_config = AmlCompute(\n", - " name=compute_name,\n", - " type=\"amlcompute\",\n", - " size=\"STANDARD_DS12_V2\",\n", - " idle_time_before_scale_down=120,\n", - " min_instances=0,\n", - " max_instances=6,\n", - " )\n", - " ml_client.begin_create_or_update(compute_config).result()" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -209,7 +165,7 @@ "- `training_data` - The data to be used for training. It should contain both training feature columns and a target column. Optionally, this data can be split for segregating a validation or test dataset. \n", "You can use a registered MLTable in the workspace using the format ':' OR you can use a local file or folder as a MLTable. 
For e.g Input(mltable='my_mltable:1') OR Input(mltable=MLTable(local_path=\"./data\"))\n", "The parameter 'training_data' must always be provided.\n", - "- `compute` - The compute on which the AutoML job will run. In this example we are using a compute called 'cpu-cluster' present in the workspace. You can replace it any other compute in the workspace. \n", + "- `compute` - The compute on which the AutoML job will run. In this example we are using serverless compute. You can alternatively use a compute cluster as well. \n", "- `name` - The name of the Job/Run. This is an optional property. If not specified, a random name will be generated.\n", "- `experiment_name` - The name of the Experiment. An Experiment is like a folder with multiple runs in Azure ML Workspace that should be related to the same logical machine learning experiment.\n", "\n", @@ -265,8 +221,7 @@ "source": [ "# Create the AutoML regression job with the related factory-function.\n", "\n", - "regression_job = automl.regression(\n", - " compute=compute_name,\n", + "regression_job = automl.regression(\n", " experiment_name=exp_name,\n", " training_data=my_training_data_input,\n", " target_column_name=\"ERP\",\n", diff --git a/sdk/python/jobs/single-step/debug-and-monitor/debug-and-monitor.ipynb b/sdk/python/jobs/single-step/debug-and-monitor/debug-and-monitor.ipynb index 7fc2b1bca9..edb69918fb 100644 --- a/sdk/python/jobs/single-step/debug-and-monitor/debug-and-monitor.ipynb +++ b/sdk/python/jobs/single-step/debug-and-monitor/debug-and-monitor.ipynb @@ -9,7 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace with computer cluster - [Configure workspace](../../../configuration.ipynb) \n", + "- An Azure ML workspace - [Configure workspace](../../../configuration.ipynb) \n", "\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../../README.md) - check the getting started section\n", @@ -20,7 +20,7 @@ "- Use a local file as an `input` to the Command\n", "- Enable live debugging & monitoring by specifying `services` to the Command job\n", "\n", - "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." + "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. Alternatively, [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) can also be used to run the training job. You can define a `command` to run on this infrastructure with `inputs`." ] }, { @@ -104,8 +104,7 @@ "- `code` - This is the path where the code to run the command is located\n", "- `command` - This is the command that needs to be run. 
If you need to reserve your cluster for debugging or monitoring purposes, you can use the `sleep` command.\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", - "\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `display_name` - The display name of the Job\n", "- `description` - The description of the experiment\n", "- `services` - Specify the applications (or SSH) that you need to interact with the live running job. You can specify `vs_code`, `tensor_board` (needs log directory), `jupyter_lab` or `SSH` (needs public key). For distributed jobs, you can specify the specific compute `nodes` index you would like to interact with. If `nodes` are not specified, interactive services are enabled only on the head node by default." @@ -133,8 +132,7 @@ "job = command(\n", " code=\"./src\", # local path where the code is stored\n", " command=\"python tfevents.py && sleep 2000\", # the sleep command allows you to reserve your compute -- recommended if you are using interactive services\n", - " environment=\"AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest\",\n", - " compute=\"cpu-cluster\",\n", + " environment=\"AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest\",\n", " display_name=\"debug-and-monitor-example\",\n", " services={\n", " \"My_jupyterlab\": JupyterLabJobService(),\n", diff --git a/sdk/python/jobs/single-step/pytorch/distributed-training-yolov5/objectdetectionAzureML.ipynb b/sdk/python/jobs/single-step/pytorch/distributed-training-yolov5/objectdetectionAzureML.ipynb index 22a6760c46..8b868e1290 100644 --- a/sdk/python/jobs/single-step/pytorch/distributed-training-yolov5/objectdetectionAzureML.ipynb +++ b/sdk/python/jobs/single-step/pytorch/distributed-training-yolov5/objectdetectionAzureML.ipynb @@ -75,6 +75,7 @@ "from azure.ai.ml import Output\n", "from azure.ai.ml.constants import AssetTypes\n", "from azure.identity import DefaultAzureCredential\n", + "from azure.ai.ml.entities import ResourceConfiguration\n", "\n", "credential = DefaultAzureCredential()\n", "\n", @@ -116,7 +117,7 @@ " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. 
Allowed values are `ro_mount`, `rw_mount` and `download`. Default is `ro_mount`\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `gpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed " ] }, @@ -164,13 +165,15 @@ " \"weights\": \"yolov5n.pt\",\n", " },\n", " environment=\"AzureML-pytorch-1.8-ubuntu18.04-py37-cuda11-gpu@latest\",\n", - " compute=\"gpu-cluster\", # name of your cluster\n", - " instance_count=2, # In this, only 2 node cluster was created.\n", + " instance_count=2,\n", " distribution={\n", " \"type\": \"PyTorch\",\n", " \"process_count_per_instance\": 1, # number of GPus per node\n", " },\n", - ")" + ")\n", + "job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\", instance_count=2\n", + ") # Serverless compute resources" ] }, { diff --git a/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb b/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb index 18abb8072b..aa9efef534 100644 --- a/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb +++ b/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb @@ -119,28 +119,7 @@ } }, "source": [ - "## 1.3 Get handle to workspace and retrieve the attached compute cluster" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "jupyter": { - "outputs_hidden": false, - "source_hidden": false - }, - "nteract": { - "transient": { - "deleting": false - } - } - }, - "outputs": [], - "source": [ - "## Provide the name of the CPU compute cluster in your Azure Machine Learning Compute.\n", - "cluster_name = \"cpu-cluster\"\n", - "##print(ml_client.compute.get(cluster_name))" + "## 1.3 Get handle to workspace" ] }, { @@ -214,7 +193,7 @@ " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. Allowed values are `ro_mount`, `rw_mount` and `download`. 
Default is `ro_mount`\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed \n" ] }, @@ -276,7 +255,6 @@ " inputs=inputs,\n", " outputs=outputs,\n", " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1\",\n", - " compute=\"cpu-cluster\",\n", ")\n", "\n", "# submit the command\n", @@ -370,20 +348,24 @@ " \"output\": \"./outputs\",\n", "}\n", "\n", + "from azure.ai.ml.entities import ResourceConfiguration\n", + "\n", "job = command(\n", " code=\"./src\", # local path where the code is stored\n", " command=\"python train.py --data-dir ${{inputs.cifar}} --epochs ${{inputs.epoch}} --batch-size ${{inputs.batchsize}} --workers ${{inputs.workers}} --learning-rate ${{inputs.lr}} --momentum ${{inputs.momen}} --print-freq ${{inputs.prtfreq}} --model-dir ${{inputs.output}}\",\n", " inputs=inputs,\n", " environment=\"azureml:AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu:6\",\n", - " compute=\"gpu-cluster\", # Change the name to the gpu cluster of your workspace.\n", " instance_count=2, # In this, only 2 node cluster was created.\n", " distribution={\n", " \"type\": \"PyTorch\",\n", " # set process count to the number of gpus per node\n", - " # NV6 has only 1 GPU\n", + " # NC6s_v3 has only 1 GPU\n", " \"process_count_per_instance\": 1,\n", " },\n", - ")" + ")\n", + "job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\", instance_count=2\n", + ") # Serverless compute resources" ] }, { diff --git a/sdk/python/jobs/single-step/pytorch/iris/pytorch-iris.ipynb b/sdk/python/jobs/single-step/pytorch/iris/pytorch-iris.ipynb index 1d2189ce81..7d242ec191 100644 --- a/sdk/python/jobs/single-step/pytorch/iris/pytorch-iris.ipynb +++ b/sdk/python/jobs/single-step/pytorch/iris/pytorch-iris.ipynb @@ -9,7 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. 
[Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace with computer cluster - [Configure workspace](../../../configuration.ipynb) \n", + "- An Azure ML workspace - [Configure workspace](../../../configuration.ipynb) \n", "\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../../README.md) - check the getting started section\n", @@ -19,7 +19,7 @@ "- Create and run a `Command` which executes a Python command\n", "- Use a local file as an `input` to the Command\n", "\n", - "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." + "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. Alternatively, [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) can also be used to run the training job. You can define a `command` to run on this infrastructure with `inputs`." ] }, { @@ -96,7 +96,7 @@ " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. Allowed values are `ro_mount`, `rw_mount` and `download`. Default is `ro_mount`\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. 
The allowed values are `PyTorch`, `TensorFlow` or `Mpi`.\n", "- `display_name` - The display name of the Job\n", "- `description` - The description of the experiment" @@ -119,8 +119,7 @@ " \"epochs\": 10,\n", " \"lr\": 0.1,\n", " },\n", - " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", - " compute=\"cpu-cluster\",\n", + " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", " display_name=\"pytorch-iris-example\",\n", " description=\"Train a neural network with PyTorch on the Iris dataset.\",\n", ")" diff --git a/sdk/python/jobs/single-step/scikit-learn/diabetes/sklearn-diabetes.ipynb b/sdk/python/jobs/single-step/scikit-learn/diabetes/sklearn-diabetes.ipynb index 979180623a..e29a143638 100644 --- a/sdk/python/jobs/single-step/scikit-learn/diabetes/sklearn-diabetes.ipynb +++ b/sdk/python/jobs/single-step/scikit-learn/diabetes/sklearn-diabetes.ipynb @@ -9,7 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace with computer cluster - [Configure workspace](../../../configuration.ipynb) \n", + "- An Azure ML workspace - [Configure workspace](../../../configuration.ipynb) \n", "\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../../README.md) - check the getting started section\n", @@ -19,7 +19,7 @@ "- Create and run a `Command` which executes a Python command\n", "- Use a local file as an `input` to the Command\n", "\n", - "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." + "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. Alternatively, [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) can also be used to run the training job. You can define a `command` to run on this infrastructure with `inputs`." ] }, { @@ -127,7 +127,7 @@ " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. Allowed values are `ro_mount`, `rw_mount` and `download`. Default is `ro_mount`\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. 
You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. The allowed values are `PyTorch`, `TensorFlow` or `Mpi`.\n", "- `display_name` - The display name of the Job\n", "- `description` - The description of the experiment\n" @@ -163,8 +163,7 @@ " path=\"https://azuremlexamples.blob.core.windows.net/datasets/diabetes.csv\",\n", " )\n", " },\n", - " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", - " compute=\"cpu-cluster\",\n", + " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", " display_name=\"sklearn-diabetes-example\",\n", " # description,\n", " # experiment_name\n", diff --git a/sdk/python/jobs/single-step/scikit-learn/iris/iris-scikit-learn.ipynb b/sdk/python/jobs/single-step/scikit-learn/iris/iris-scikit-learn.ipynb index 375c03306d..845228561f 100644 --- a/sdk/python/jobs/single-step/scikit-learn/iris/iris-scikit-learn.ipynb +++ b/sdk/python/jobs/single-step/scikit-learn/iris/iris-scikit-learn.ipynb @@ -9,7 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace with computer cluster - [Configure workspace](../../../configuration.ipynb) \n", + "- An Azure ML workspace - [Configure workspace](../../../configuration.ipynb) \n", "\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../../README.md) - check the getting started section\n", @@ -19,7 +19,7 @@ "- Create and run a `Command` which executes a Python command\n", "- Use a local file as an `input` to the Command\n", "\n", - "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." + "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). Alternatively, [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) can also be used to run the training job. 
The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." ] }, { @@ -127,8 +127,8 @@ " - `path` - The path to the file or folder. These can be local or remote files or folders. For remote files - http/https, wasb are supported. \n", " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. Allowed values are `ro_mount`, `rw_mount` and `download`. Default is `ro_mount`\n", - "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. The allowed values are `PyTorch`, `TensorFlow` or `Mpi`.\n", "- `display_name` - The display name of the Job\n", "- `description` - The description of the experiment" @@ -177,8 +177,7 @@ " # }\n", " # ) # uncomment add SSH Public Key to access job container via SSH\n", " },\n", - " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", - " compute=\"cpu-cluster\",\n", + " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", " display_name=\"sklearn-iris-example\",\n", " # experiment_name\n", " # description\n", diff --git a/sdk/python/jobs/single-step/scikit-learn/mnist/sklearn-mnist.ipynb b/sdk/python/jobs/single-step/scikit-learn/mnist/sklearn-mnist.ipynb index 6b5d611563..1a7b951b11 100644 --- a/sdk/python/jobs/single-step/scikit-learn/mnist/sklearn-mnist.ipynb +++ b/sdk/python/jobs/single-step/scikit-learn/mnist/sklearn-mnist.ipynb @@ -9,7 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. 
[Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace with computer cluster - [Configure workspace](../../../configuration.ipynb) \n", + "- An Azure ML workspace - [Configure workspace](../../../configuration.ipynb) \n", "\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../../README.md) - check the getting started section\n", @@ -19,7 +19,7 @@ "- Create and run a `Command` which executes a Python command\n", "- Use a local file as an `input` to the Command\n", "\n", - "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." + "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). Alternatively, [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) can also be used for the training job. The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." ] }, { @@ -107,7 +107,7 @@ " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. Allowed values are `ro_mount`, `rw_mount` and `download`. Default is `ro_mount`\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. 
The allowed values are `PyTorch`, `TensorFlow` or `Mpi`.\n", "- `display_name` - The display name of the Job\n", "- `description` - The description of the experiment" @@ -128,8 +128,7 @@ " code=\"./src\", # local path where the code is stored\n", " command=\"pip install -r requirements.txt && python main.py --C ${{inputs.C}} --penalty ${{inputs.penalty}}\",\n", " inputs={\"C\": 0.8, \"penalty\": \"l2\"},\n", - " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", - " compute=\"cpu-cluster\",\n", + " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", " display_name=\"sklearn-mnist-example\"\n", " # experiment_name: sklearn-mnist-example\n", " # description: Train a scikit-learn LogisticRegression model on the MNSIT dataset.\n", diff --git a/sdk/python/jobs/single-step/tensorflow/mnist/tensorflow-mnist.ipynb b/sdk/python/jobs/single-step/tensorflow/mnist/tensorflow-mnist.ipynb index 493bf4ae2b..def0ce0a3f 100644 --- a/sdk/python/jobs/single-step/tensorflow/mnist/tensorflow-mnist.ipynb +++ b/sdk/python/jobs/single-step/tensorflow/mnist/tensorflow-mnist.ipynb @@ -9,7 +9,7 @@ "**Requirements** - In order to benefit from this tutorial, you will need:\n", "- A basic understanding of Machine Learning\n", "- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)\n", - "- An Azure ML workspace with computer cluster - [Configure workspace](../../../configuration.ipynb) \n", + "- An Azure ML workspace - [Configure workspace](../../../configuration.ipynb) \n", "\n", "- A python environment\n", "- Installed Azure Machine Learning Python SDK v2 - [install instructions](../../../../README.md) - check the getting started section\n", @@ -19,7 +19,7 @@ "- Create and run a `Command` which executes a Python command\n", "- Use a local file as an `input` to the Command\n", "\n", - "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." + "**Motivations** - This notebook explains how to setup and run a Command. The Command is a fundamental construct of Azure Machine Learning. It can be used to run a task on a specified compute (either local or on the cloud). Alternatively, [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) can also be used for the training job. The Command accepts `environment` and `compute` to setup required infrastructure. You can define a `command` to run on this infrastructure with `inputs`." ] }, { @@ -46,7 +46,8 @@ "# import required libraries\n", "from azure.ai.ml import MLClient\n", "from azure.ai.ml import command\n", - "from azure.identity import DefaultAzureCredential" + "from azure.identity import DefaultAzureCredential\n", + "from azure.ai.ml.entities import ResourceConfiguration" ] }, { @@ -107,7 +108,7 @@ " - Azure ML `data`/`dataset` or `datastore` are of type `uri_folder`. To use `data`/`dataset` as input, you can use registered dataset in the workspace using the format ':'. For e.g Input(type='uri_folder', path='my_dataset:1')\n", " - `mode` - \tMode of how the data should be delivered to the compute target. 
Allowed values are `ro_mount`, `rw_mount` and `download`. Default is `ro_mount`\n", "- `environment` - This is the environment needed for the command to run. Curated or custom environments from the workspace can be used. Or a custom environment can be created and used as well. Check out the [environment](../../../../assets/environment/environment.ipynb) notebook for more examples.\n", - "- `compute` - The compute on which the command will run. In this example we are using a compute called `cpu-cluster` present in the workspace. You can replace it any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", + "- `compute` - The compute on which the command will run. In this example we are using [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) so there is no need to specify any compute. You can also replace serverless with any other compute in the workspace. You can run it on the local machine by using `local` for the compute. This will run the command on the local machine and all the run details and output of the job will be uploaded to the Azure ML workspace.\n", "- `distribution` - Distribution configuration for distributed training scenarios. Azure Machine Learning supports PyTorch, TensorFlow, and MPI-based distributed training. The allowed values are `PyTorch`, `TensorFlow` or `Mpi`.\n", "- `display_name` - The display name of the Job\n", "- `description` - The description of the experiment" ] }, @@ -128,11 +129,13 @@ " code=\"./src\", # local path where the code is stored\n", " command=\"python main.py\",\n", " environment=\"AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest\",\n", - " compute=\"gpu-cluster\",\n", " display_name=\"tensorflow-mnist-example\"\n", " # experiment_name: tensorflow-mnist-example\n", " # description: Train a basic neural network with TensorFlow on the MNIST dataset.\n", - ")" + ")\n", + "job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\", instance_count=2\n", + ") # Serverless compute resources" ] }, { diff --git a/tutorials/azureml-getting-started/azureml-getting-started-studio.ipynb b/tutorials/azureml-getting-started/azureml-getting-started-studio.ipynb index d23eb4f6ce..2523627ecc 100644 --- a/tutorials/azureml-getting-started/azureml-getting-started-studio.ipynb +++ b/tutorials/azureml-getting-started/azureml-getting-started-studio.ipynb @@ -270,7 +270,7 @@ " command=\"python main.py --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} --learning_rate ${{inputs.learning_rate}} --registered_model_name ${{inputs.registered_model_name}}\",\n", " # This is the ready-made environment you are using\n", " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", - " # This is the compute you created earlier\n", + " # This is the compute you created earlier. You can alternatively remove this line to run the job on serverless compute.\n", " compute=\"cpu-cluster\",\n", " # An experiment is a container for all the iterations one does on a certain project.
All the jobs submitted under the same experiment name would be listed next to each other in Azure ML studio.\n", " experiment_name=\"train_model_credit_default_prediction\",\n", diff --git a/tutorials/e2e-distributed-pytorch-image/e2e-object-classification-distributed-pytorch.ipynb b/tutorials/e2e-distributed-pytorch-image/e2e-object-classification-distributed-pytorch.ipynb index 07a44afae1..95edebcc9a 100644 --- a/tutorials/e2e-distributed-pytorch-image/e2e-object-classification-distributed-pytorch.ipynb +++ b/tutorials/e2e-distributed-pytorch-image/e2e-object-classification-distributed-pytorch.ipynb @@ -14,7 +14,7 @@ "\n", "**Requirements** - In order to benefit from this tutorial, you need:\n", "- to have provisioned an AzureML workspace\n", - "- to have permissions to provision a minimal cpu and gpu cluster\n", + "- to have permissions to provision a minimal cpu and gpu cluster or simply use [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python)\n", "- to have [installed Azure Machine Learning Python SDK v2](https://github.com/Azure/azureml-examples/blob/sdk-preview/sdk/README.md)\n", "\n", "**Motivations** - Let's consider the following scenario: we want to explore training different image classifiers on distinct kinds of problems, based on a large public dataset that is available at a given url. This ML pipeline will be future-looking, in particular we want:\n", @@ -72,7 +72,9 @@ " resource_group_name=\"\",\n", " workspace_name=\"\",\n", " credential=credential,\n", - ")" + ")\n", + "cpu_cluster = None\n", + "gpu_cluster = None" ] }, { @@ -80,7 +82,7 @@ "id": "41d3ce5e", "metadata": {}, "source": [ - "### Provision the required resources for this notebook\n", + "### Provision the required resources for this notebook (Optional)\n", "\n", "We'll need 2 clusters for this notebook, a CPU cluster and a GPU cluster. First, let's create a minimal cpu cluster." 
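For readers reproducing this outside the diff, a minimal sketch of what the now-optional CPU provisioning cell can look like with SDK v2 (the cluster name and VM size are illustrative assumptions; `ml_client` is the authenticated `MLClient` created earlier, and skipping the cell leaves `cpu_cluster`/`gpu_cluster` as `None` so the later cells fall back to serverless compute; a GPU cluster would be provisioned the same way with a GPU VM size):

```python
from azure.ai.ml.entities import AmlCompute

# Skip this cell entirely to keep cpu_cluster = None and run on serverless compute.
cpu_cluster = ml_client.compute.begin_create_or_update(
    AmlCompute(
        name="cpu-cluster",  # illustrative; must match the compute= fallback used later
        size="Standard_DS3_v2",  # illustrative VM size
        min_instances=0,  # scale to zero when idle
        max_instances=4,
        idle_time_before_scale_down=180,  # seconds before idle nodes are released
    )
).result()
```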
] @@ -240,7 +242,9 @@ " },\n", " # we're using a curated environment\n", " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu:1\",\n", - " compute=\"cpu-cluster\",\n", + " compute=\"cpu-cluster\"\n", + " if (cpu_cluster)\n", + " else None, # No compute needs to be passed to use serverless\n", ")" ] }, @@ -362,6 +366,7 @@ "source": [ "from azure.ai.ml import command\n", "from azure.ai.ml import Input\n", + "from azure.ai.ml.entities import ResourceConfiguration\n", "\n", "training_job = command(\n", " # local path where the code is stored\n", @@ -408,7 +413,9 @@ " \"enable_profiling\": False,\n", " },\n", " environment=\"AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu@latest\",\n", - " compute=\"gpu-cluster\",\n", + " compute=\"gpu-cluster\"\n", + " if (gpu_cluster)\n", + " else None, # No compute needs to be passed to use serverless\n", " distribution={\n", " \"type\": \"PyTorch\",\n", " # set process count to the number of gpus on the node\n", @@ -419,7 +426,11 @@ " instance_count=2,\n", " display_name=\"pytorch_training_sample\",\n", " description=\"training a torchvision model\",\n", - ")" + ")\n", + "if gpu_cluster is None:\n", + " training_job.resources = ResourceConfiguration(\n", + " instance_type=\"Standard_NC6s_v3\", instance_count=2\n", + " ) # resources for serverless job" ] }, { diff --git a/tutorials/get-started-notebooks/pipeline.ipynb b/tutorials/get-started-notebooks/pipeline.ipynb index a721afd057..5650361406 100644 --- a/tutorials/get-started-notebooks/pipeline.ipynb +++ b/tutorials/get-started-notebooks/pipeline.ipynb @@ -108,7 +108,8 @@ " subscription_id=\"\",\n", " resource_group_name=\"\",\n", " workspace_name=\"\",\n", - ")" + ")\n", + "cpu_cluster = None" ] }, { @@ -187,9 +188,9 @@ "source": [ "In the future, you can fetch the same dataset from the workspace using `credit_dataset = ml_client.data.get(\"\", version='')`.\n", "\n", - "## Create a compute resource to run your pipeline\n", + "## Create a compute resource to run your pipeline (Optional)\n", "\n", - "You can **skip this step** if you want to use **serverless compute (preview)** to run the training job. Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you. \n", + "You can **skip this step** if you want to use [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) to run the training job. Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you. \n", "\n", "Each step of an Azure Machine Learning pipeline can use a different compute resource for running the specific job of that step.
It can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.\n", "\n", @@ -846,7 +847,9 @@ "\n", "\n", "@dsl.pipeline(\n", - " compute=cpu_compute_target, # replace cpu_compute_target with \"serverless\" to run pipeline on serverless compute\n", + " compute=cpu_compute_target\n", + " if (cpu_cluster)\n", + " else \"serverless\", # the \"serverless\" value runs the pipeline on serverless compute\n", " description=\"E2E data_perp-train pipeline\",\n", ")\n", "def credit_defaults_pipeline(\n", diff --git a/tutorials/get-started-notebooks/quickstart.ipynb b/tutorials/get-started-notebooks/quickstart.ipynb index 2c7441871f..7f80595712 100644 --- a/tutorials/get-started-notebooks/quickstart.ipynb +++ b/tutorials/get-started-notebooks/quickstart.ipynb @@ -17,7 +17,7 @@ "\n", "> * Set up a handle to your Azure Machine Learning workspace\n", "> * Create your training script\n", - "> * Create a scalable compute resource, a compute cluster or use **serverless compute (preview)** instead\n", + "> * Create a scalable compute resource, a compute cluster or use [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) instead\n", "> * Create and run a command job that will run the training script on the compute cluster, configured with the appropriate job environment\n", "> * View the output of your training script\n", "> * Deploy the newly-trained model as an endpoint\n", @@ -96,7 +96,8 @@ " subscription_id=\"\",\n", " resource_group_name=\"\",\n", " workspace_name=\"\",\n", - ")" + ")\n", + "cpu_cluster = None" ] }, { @@ -271,7 +272,7 @@ "\n", "![refresh](./media/refresh.png)\n", "\n", - "## Create a compute cluster, a scalable way to run a training job\n", + "## Create a compute cluster, a scalable way to run a training job (Optional)\n", "\n", "You can **skip this step** if you want to use **serverless compute** to run the training job. Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you. \n", "\n", @@ -338,7 +339,7 @@ "Now that you have a script that can perform the desired tasks, and a compute cluster to run the script, you'll use a general purpose **command** that can run command line actions. This command line action can directly call system commands or run a script. \n", "\n", "Here, you'll create input variables to specify the input data, split ratio, learning rate and registered model name. The command script will:\n", - "* Use the compute cluster to run the command or just **remove the compute line to use serverless compute**.\n", + "* Use the compute cluster to run the command if you created one, or use serverless compute by not specifying any compute (see the sketch below).\n", "* Use an *environment* that defines software and runtime libraries needed for the training script. Azure Machine Learning provides many curated or ready-made environments, which are useful for common training and inference scenarios. You'll use one of those environments here. In the [Train a model](train-model.ipynb) tutorial, you'll learn how to create a custom environment. \n", "* Configure the command line action itself - `python main.py` in this case. The inputs/outputs are accessible in the command via the `${{ ... }}` notation.\n", "* In this sample, we access the data from a file on the internet.
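As a concrete illustration of "not specifying any compute", here is a hedged sketch of such a command job (the data URL reuses the diabetes CSV that appears earlier in this diff; `ml_client` is assumed to be the authenticated `MLClient` handle, and the display name is illustrative):

```python
from azure.ai.ml import command, Input

job = command(
    code="./src/",  # location of source code
    command="python main.py --data ${{inputs.data}}",
    inputs={
        "data": Input(
            type="uri_file",
            path="https://azuremlexamples.blob.core.windows.net/datasets/diabetes.csv",
        )
    },
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    display_name="serverless_command_example",  # illustrative name
    # no compute= argument: the job is scheduled on serverless compute (preview)
)
returned_job = ml_client.create_or_update(job)  # submit the job and get a handle to the run
```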
" @@ -373,7 +374,9 @@ " code=\"./src/\", # location of source code\n", " command=\"python main.py --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} --learning_rate ${{inputs.learning_rate}} --registered_model_name ${{inputs.registered_model_name}}\",\n", " environment=\"AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest\",\n", - " compute=\"cpu-cluster\", # you can remove this line to use serverless compute\n", + " compute=\"cpu-cluster\"\n", + " if (cpu_cluster)\n", + " else None, # No compute needs to be passed to use serverless\n", " display_name=\"credit_default_prediction\",\n", ")" ] diff --git a/tutorials/get-started-notebooks/train-model.ipynb b/tutorials/get-started-notebooks/train-model.ipynb index d4fed7732e..5316e1aa0d 100644 --- a/tutorials/get-started-notebooks/train-model.ipynb +++ b/tutorials/get-started-notebooks/train-model.ipynb @@ -14,7 +14,7 @@ "The steps are:\n", "\n", " * Get a handle to your Azure Machine Learning workspace\n", - " * Create your compute resource (or simply use serverless compute) and job environment\n", + " * Create your compute resource (or simply use [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) instead) and job environment\n", " * Create your training script\n", " * Create and run your command job to run the training script on the compute resource, configured with the appropriate job environment and the data source\n", " * View the output of your training script\n", @@ -65,7 +65,7 @@ "\n", "In this tutorial, we'll focus on using a command job to create a custom training job that we'll use to train a model. For any custom training job, the below items are required:\n", "\n", - "* compute resource (usually a compute cluster, which we recommend for scalability)\n", + "* compute resource (usually a compute cluster or [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python))\n", "* environment\n", "* data\n", "* command job \n", @@ -113,7 +113,8 @@ " subscription_id=\"\",\n", " resource_group_name=\"\",\n", " workspace_name=\"\",\n", - ")" + ")\n", + "cpu_cluster = None" ] }, { @@ -130,9 +131,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Create a compute cluster to run your job\n", + "## Create a compute cluster to run your job (optional) \n", "\n", - "You can **skip this step** if you want to use **serverless compute (preview)** to run the training job. Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you. \n", + "You can **skip this step** if you want to use [serverless compute (preview)](https://learn.microsoft.com/azure/machine-learning/how-to-use-serverless-compute?view=azureml-api-2&tabs=python) to run the training job. Through serverless compute, Azure Machine Learning takes care of creating, scaling, deleting, patching and managing compute, along with providing managed network isolation, reducing the burden on you. \n", "\n", "In Azure, a job can refer to several tasks that Azure allows its users to do: training, pipeline creation, deployment, etc. 
For this tutorial and our purpose of training a machine learning model, we'll use *job* as a reference to running training computations (*training job*).\n", "\n", @@ -509,7 +510,9 @@ " code=\"./src/\", # location of source code\n", " command=\"python main.py --data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} --learning_rate ${{inputs.learning_rate}} --registered_model_name ${{inputs.registered_model_name}}\",\n", " environment=\"aml-scikit-learn@latest\",\n", - " compute=\"cpu-cluster\", # Remove this line to use serverless compute\n", + " compute=\"cpu-cluster\"\n", + " if (cpu_cluster)\n", + " else None, # No compute needs to be passed to use serverless\n", " display_name=\"credit_default_prediction\",\n", ")" ]
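Taken together, the recurring pattern across these notebooks is: default a cluster variable to `None`, let the `compute` argument fall back to `None` so the job runs on serverless compute, and optionally size the serverless run with `ResourceConfiguration`. A minimal end-to-end sketch under those assumptions (the VM size is illustrative; both imports are the ones used in the hunks above):

```python
from azure.ai.ml import command
from azure.ai.ml.entities import ResourceConfiguration

cpu_cluster = None  # stays None when the optional provisioning cell is skipped

job = command(
    code="./src/",
    command="python main.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    # passing None for compute sends the job to serverless compute (preview)
    compute="cpu-cluster" if cpu_cluster else None,
)
if cpu_cluster is None:
    # optionally pin the VM size and node count for the serverless run
    job.resources = ResourceConfiguration(
        instance_type="Standard_DS3_v2", instance_count=1
    )
```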