Install Azure ML Python SDK
!pip install azureml-sdk
from azureml.core import Workspace
ws = Workspace.create(name='<my_workspace_name>',            # provide a name for your workspace
                      subscription_id='<azure-subscription-id>',  # provide your subscription ID
                      resource_group='<myresourcegroup>',    # provide a resource group name
                      create_resource_group=True,
                      location='<NAME_OF_REGION>')           # e.g. 'westeurope', 'eastus2', 'westus2', 'southeastasia'; see the documentation for the full list

# write out the workspace details to a configuration file: .azureml/config.json
ws.write_config(path='.azureml')
Later you can access the workspace with this code:
from azureml.core import Workspace
ws = Workspace.from_config()
The following example creates a compute target in your workspace. Modify the code below to switch to GPU machines or to change the SKU of your VMs.
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
ws = Workspace.from_config() # automatically looks for a directory .azureml/
# name for your cluster
cpu_cluster_name = "cpu-cluster"
try:
    # check if the cluster already exists
    cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    # if not, create it
    compute_config = AmlCompute.provisioning_configuration(
        vm_size='STANDARD_D2_V2',
        max_nodes=4,
        idle_seconds_before_scaledown=2400,
    )
    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
    cpu_cluster.wait_for_completion(show_output=True)
After the compute target is set up, you can access it with the code below:
from azureml.core import ComputeTarget
cpu_cluster = ComputeTarget(ws, 'cpu-cluster')
Workspaces are a foundational object used throughout Azure ML and appear in the constructors of many other classes. Throughout this documentation we frequently omit the workspace instantiation and simply refer to the code above.
The following has to be part of your run.py:
from azureml.core import Workspace
ws = Workspace(
    subscription_id="<subscription_id>",
    resource_group="<resource_group>",
    workspace_name="<workspace_name>",
)
This is the config file, .azureml/config.json:
{
    "subscription_id": "<subscription-id>",
    "resource_group": "<resource-group>",
    "workspace_name": "<workspace-name>"
}
A useful tip is to keep the config file under .azureml/, since that is the path the Workspace.from_config method searches by default.
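As a minimal sketch of what that file looks like on disk (standard library only, no Azure calls; the placeholder values are hypothetical and stand in for your real Azure details), you can write and read it back like this:

```python
import json
from pathlib import Path

# Hypothetical placeholder values -- substitute your own Azure details.
config = {
    "subscription_id": "<subscription-id>",
    "resource_group": "<resource-group>",
    "workspace_name": "<workspace-name>",
}

# Write the file where Workspace.from_config() looks by default.
Path(".azureml").mkdir(exist_ok=True)
Path(".azureml/config.json").write_text(json.dumps(config, indent=4))

# Read it back, which is effectively what from_config() does.
loaded = json.loads(Path(".azureml/config.json").read_text())
print(loaded["workspace_name"])
```

Note that `ws.write_config(path='.azureml')` produces this file for you, so you rarely need to create it by hand.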
The Workspace object provides a handle to your Azure ML assets:
1. Compute targets that are attached to the workspace
ws.compute_targets: Dict[str, ComputeTarget]
2. Datastores that are registered to the workspace
ws.datastores: Dict[str, Datastore]
To get the workspace's default datastore:
ws.get_default_datastore(): Datastore
3. Keyvault
ws.get_default_keyvault(): Keyvault
4. Environments
ws.environments: Dict[str, Environment]
5. MLflow
ws.get_mlflow_tracking_uri(): str
To get the list of compute targets:
ComputeTarget.list(ws): List[ComputeTarget]
Alternatively, you can check them in the Azure ML Studio portal. First select "Compute", then select "Compute Clusters", and then click the "New" tab.
Picture credit: Microsoft Documentation
Compute name: used to refer to the compute target later; a name is required. It must be between 2 and 16 characters long; letters, digits, and the - character are all valid.
Virtual machine type: CPU or GPU.
Virtual machine priority: "Dedicated" or "Low priority". Low-priority virtual machines are cheaper, but the compute nodes are not guaranteed; your job may be pre-empted.
Virtual machine size: choose from the drop-down menu. The whole list may be seen here.
Min / max number of nodes: compute autoscales between the minimum and maximum node count based on the number of jobs submitted. Setting min nodes = 0 lets the cluster scale down to 0 while no jobs are running, saving you money.
Idle seconds before scale down: how long to wait before scaling the cluster down to the minimum node count.
Machine Learning compute is always created in the same region as the Machine Learning workspace.
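The compute-name rule above (2 to 16 characters; letters, digits, and hyphens only) can be checked locally before you call the service. This is a sketch using only the standard library; the helper name `is_valid_cluster_name` is my own, not part of the Azure ML SDK:

```python
import re

# Encodes the portal's stated rule: 2-16 characters,
# drawn from letters, digits, and '-'.
_NAME_RE = re.compile(r'^[A-Za-z0-9-]{2,16}$')

def is_valid_cluster_name(name: str) -> bool:
    """Hypothetical pre-flight check for a compute cluster name."""
    return bool(_NAME_RE.fullmatch(name))

print(is_valid_cluster_name("cpu-cluster"))  # True: 11 valid characters
print(is_valid_cluster_name("x"))            # False: shorter than 2 characters
```

Validating early like this gives a clearer error than a failed provisioning call.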