Cheat Sheet

Basic Steps

Connect to workspace

The Workspace object is the primary handle on your Azure ML assets and is used throughout; by convention it is assigned to a variable named ws.
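A minimal sketch of connecting to an existing workspace, assuming a config.json file (downloadable from the Azure portal) sits in or above the working directory:

```python
from azureml.core import Workspace

# Reads subscription, resource group, and workspace name from config.json
ws = Workspace.from_config()
```

This call prompts for interactive authentication the first time it runs.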

Connect to the Compute Target

Sample Usage

The complete code for the compute-target setup:
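A sketch of retrieving an existing compute cluster, or provisioning one if it does not exist. The cluster name "cpu-cluster" and the VM size are assumptions for illustration:

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()
cluster_name = "cpu-cluster"  # assumed name

if cluster_name in ws.compute_targets:
    compute_target = ws.compute_targets[cluster_name]
else:
    # Provision a new autoscaling cluster (scales to 0 nodes when idle)
    config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D2_V2", max_nodes=4
    )
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output=True)
```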

Prepare Python Environment

You can use a pip requirements.txt file or a Conda env.yml file to define a Python environment on your compute.
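Both file types map to an Environment via a factory method; a sketch, assuming the files requirements.txt and env.yml exist alongside your code:

```python
from azureml.core import Environment

# From a pip requirements.txt
env = Environment.from_pip_requirements(
    name="my-env", file_path="requirements.txt"
)

# Or from a Conda env.yml
env = Environment.from_conda_specification(
    name="my-env", file_path="env.yml"
)
```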

Sample Code

Command-line arguments let you parameterize the training script.
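Inside the training script, arguments are typically read with argparse. A sketch with hypothetical learning_rate and momentum parameters:

```python
import argparse

def parse_args(argv=None):
    """Parse training hyperparameters from the command line."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning_rate", type=float, default=0.001)
    parser.add_argument("--momentum", type=float, default=0.9)
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"lr={args.learning_rate}, momentum={args.momentum}")
```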

To run $ (env) python <path/to/code>/script.py [arguments] on a remote compute cluster target: ComputeTarget with an environment env: Environment, use the ScriptRunConfig class.

Configuration Arguments

compute_target : If not provided, the script runs on your local machine.
environment : If not provided, uses a default Python environment managed by Azure ML. See Environment.
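Putting these arguments together, a sketch of a ScriptRunConfig, assuming compute_target and env were created as in the earlier steps:

```python
from azureml.core import ScriptRunConfig

config = ScriptRunConfig(
    source_directory="<path/to/code>",
    script="script.py",
    arguments=["--learning_rate", 0.001, "--momentum", 0.9],
    compute_target=compute_target,  # omit to run locally
    environment=env,                # omit for the default environment
)
```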

Experiments in Azure ML

Equivalent commands when running locally:

cmd
$ conda env create -f env.yml  # create environment called pytorch
$ conda activate pytorch
(pytorch) $ cd <path/to/code>
(pytorch) $ python train.py --learning_rate 0.001 --momentum 0.9
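The remote equivalent of the commands above is to wrap the script in a ScriptRunConfig and submit it to an Experiment; the experiment name here is an assumption:

```python
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

config = ScriptRunConfig(
    source_directory="<path/to/code>",
    script="train.py",
    arguments=["--learning_rate", 0.001, "--momentum", 0.9],
)

# Submit and stream logs until the run finishes
run = Experiment(ws, name="pytorch-experiment").submit(config)
run.wait_for_completion(show_output=True)
```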

Running on GPUs in Azure ML

Distributed Training

This is configured through the ScriptRunConfig.

Here mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04 is a Docker image with OpenMPI, which is required for distributed training on Azure ML.

MpiConfiguration : is where you specify the number of nodes and GPUs (per node) you want to train on.
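Combining the two, a sketch of a distributed GPU run; the node and GPU counts are illustrative, and compute_target is assumed to be a GPU cluster created earlier:

```python
from azureml.core import Environment, ScriptRunConfig
from azureml.core.runconfig import MpiConfiguration

env = Environment("pytorch-distributed")
env.docker.base_image = (
    "mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.1-cudnn7-ubuntu18.04"
)

# 2 nodes, 4 processes (one per GPU) on each node
distr_config = MpiConfiguration(process_count_per_node=4, node_count=2)

config = ScriptRunConfig(
    source_directory="<path/to/code>",
    script="train.py",
    compute_target=compute_target,
    environment=env,
    distributed_job_config=distr_config,
)
```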

Connecting to Data in Azure ML

To work with data in your training scripts using your workspace ws and its default datastore:

Now just pass this path to your training script as a command-line argument.
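A sketch tying this together: reference a path on the default datastore, mount it on the compute at run time, and hand it to the script. The --data_dir argument name is a hypothetical choice:

```python
from azureml.core import ScriptRunConfig, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Reference data on the datastore; mounted on the compute at run time
data_ref = datastore.path("<path/on/datastore>").as_mount()

config = ScriptRunConfig(
    source_directory="<path/to/code>",
    script="train.py",
    arguments=["--data_dir", data_ref],
)
```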