CLI Examples
This article provides examples of popular use cases illustrating how to use the Command Line Interface (CLI).
Logging in
Logging in via the run:ai sign-in page (web)
You can log in from the UI if you are using SSO or credentials.
Logging in via terminal (credentials)
runai login user -u [email protected] -p "password"
Configuration
Setting a default project
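Assuming the CLI's project subcommand, a default project can be set once so that the -p flag can be omitted from later commands (a sketch; verify against your CLI version):

```shell
# Set "project-name" as the default project for subsequent commands
runai project set "project-name"
```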
Submitting a workload
Naming a workload
Use the commands below to provide a name for a workload.
Setting the workload name (my_workload_name)
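Following the positional-name pattern used by the MPI example in this article (runai mpi submit dist1), the workload name can be passed directly; project and image are illustrative:

```shell
# Submit a workspace explicitly named my_workload_name
runai workspace submit my_workload_name -p "project-name" -i runai.jfrog.io/demo/quickstart-demo
```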
Setting a random name with a prefix (prefix = workload type)
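When no name is given, a random name is generated and prefixed with the workload type; a sketch:

```shell
# No name supplied: a random name such as workspace-<suffix> is generated
runai workspace submit -p "project-name" -i runai.jfrog.io/demo/quickstart-demo
```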
Setting a random name with a specific prefix (prefix determined by flag)
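Using the --name-prefix flag (also used in the Jupyter example in this article) to control the prefix of the generated name; the prefix value is illustrative:

```shell
# Random name generated with the prefix "my-prefix"
runai workspace submit --name-prefix "my-prefix" -p "project-name" -i runai.jfrog.io/demo/quickstart-demo
```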
Labels and annotations
Labels
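A sketch assuming a --label key=value flag (the label name and value are illustrative; check `runai workspace submit --help` for the exact flag):

```shell
# Attach the label stage=dev to the workload
runai workspace submit -i runai.jfrog.io/demo/quickstart-demo -p "project-name" --label stage=dev
```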
Annotations
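A sketch assuming an --annotation key=value flag (the annotation name and value are illustrative):

```shell
# Attach the annotation owner=team-a to the workload
runai workspace submit -i runai.jfrog.io/demo/quickstart-demo -p "project-name" --annotation owner=team-a
```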
Container environment variables
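Using the -e flag shown in the MPI example in this article (the variable names and values are illustrative):

```shell
# Set two environment variables inside the container
runai workspace submit -i runai.jfrog.io/demo/quickstart-demo -p "project-name" -e MY_VAR=my_value -e LOG_LEVEL=debug
```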
Requests and limits
runai workspace submit -p "project-name" -i runai.jfrog.io/demo/quickstart-demo --cpu-core-request 0.3 --cpu-core-limit 1 --cpu-memory-request 50M --cpu-memory-limit 1G --gpu-devices-request 1 --gpu-memory-request 1G
Submitting and attaching to a process
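A sketch assuming an --attach flag that connects your terminal to the container process after submission (an assumption; verify with `runai workspace submit --help`):

```shell
# Submit a workspace and attach to the python3 process
runai workspace submit -p "project-name" -i python --attach -- python3
```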
Submitting a Jupyter notebook
runai workspace submit --image jupyter/scipy-notebook -p "project-name" --gpu-devices-request 1 --external-url container=8888 --name-prefix jupyter --command -- start-notebook.sh --NotebookApp.base_url='/${RUNAI_PROJECT}/${RUNAI_JOB_NAME}' --NotebookApp.token=''
Submitting a distributed training workload with TensorFlow
runai distributed submit -f TF --workers=5 --no-master -g 1 -i kubeflow/tf-mnist-with-summaries:latest -p "project-name" --command -- python /var/tf_mnist/mnist_with_summaries.py --max_steps 1000000
Submitting a multi-pod workload
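A sketch assuming the training workload type accepts a pod-count flag (here called --pods, an assumption; check `runai training submit --help` for the exact syntax):

```shell
# --pods is an assumed flag name for the number of pod replicas
runai training submit -p "project-name" -i ubuntu --pods 3 --command -- sleep infinity
```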
Submit and bash
Submitting a workload with a bash command
runai training pytorch submit -p "project-name" -i nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu20.04 -g 1 --workers 3 --command -- bash -c 'trap : TERM INT; sleep infinity & wait'
Bashing into the workload
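A sketch assuming a bash subcommand for the workload type (the workload name is illustrative):

```shell
# Open an interactive bash shell inside the running PyTorch workload
runai training pytorch bash my-workload -p "project-name"
```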
Submitting a distributed training workload with MPI
runai mpi submit dist1 --workers=2 -g 1 \
-i runai.jfrog.io/demo/quickstart-distributed:v0.3.0 -e RUNAI_SLEEP_SECS=60 -p "project-name"
Submitting with a PVC
New PVC bound to the workspace
New PVCs are deleted when the workload is deleted.
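A sketch assuming a --new-pvc flag that takes comma-separated claim parameters (the claim name, size, and mount path are illustrative; verify the exact flag syntax in the CLI reference):

```shell
# Create a new PVC bound to the workspace and mount it at /data
runai workspace submit -i runai.jfrog.io/demo/quickstart-demo -p "project-name" \
  --new-pvc claimname=my-pvc,size=10G,path=/data
```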
New ephemeral PVC
New ephemeral PVCs are deleted when the workload is deleted or paused.
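A sketch assuming the same --new-pvc flag with an ephemeral modifier (an assumption; size and path are illustrative):

```shell
# Create an ephemeral PVC, removed when the workload is deleted or paused
runai workspace submit -i runai.jfrog.io/demo/quickstart-demo -p "project-name" \
  --new-pvc ephemeral,size=10G,path=/data
```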
Existing PVC
Existing PVCs are not deleted when the workload is deleted.
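A sketch assuming an --existing-pvc flag (the claim name and mount path are illustrative):

```shell
# Mount an existing PVC; it is left intact when the workload is deleted
runai workspace submit -i runai.jfrog.io/demo/quickstart-demo -p "project-name" \
  --existing-pvc claimname=my-existing-pvc,path=/data
```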
Master/Worker configuration
The --command flag together with -- sets the command/arguments for both the leader (master) and the workers.
The --master-args flag sets the leader (master) arguments.
The --master-command flag sets the leader (master) command with its arguments.
The --master-args and --master-command flags can be set together.
Overriding the arguments of both the leader (master) and worker images
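Per the Master/Worker configuration notes above, arguments passed after -- without any master-specific flag apply to both the leader and the workers (the argument values are illustrative):

```shell
# Both leader and worker containers receive the arguments '-a arg_a'
runai pytorch submit -i ubuntu -- '-a arg_a'
```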
Overriding the commands with arguments of both the leader (master) and worker images
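Similarly, --command with -- and no master-specific flags overrides the command for both the leader and the workers (the command shown is illustrative):

```shell
# Both leader and worker containers run `python -m pip install`
runai pytorch submit -i ubuntu --command -- python -m pip install
```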
Overriding the leader (master) and worker image arguments with different values
runai pytorch submit -i ubuntu --master-args "-a master_arg_a -b master_arg_b" -- '-a worker_arg_a'
Overriding the leader (master) and worker image commands with arguments
runai pytorch submit -i ubuntu --master-command "python_master -m pip install" --command -- 'python_worker -m pip install'
Listing objects
Listing all workloads in the user's scope
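A sketch assuming a list subcommand for workloads:

```shell
# List all workloads visible in the current user's scope
runai workload list
```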
Listing projects in YAML format
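A sketch assuming a --yaml output flag (verify with `runai project list --help`):

```shell
# List projects, printing the result as YAML
runai project list --yaml
```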
Listing nodes in JSON format
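A sketch assuming a --json output flag (verify with `runai node list --help`):

```shell
# List cluster nodes, printing the result as JSON
runai node list --json
```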
CLI reference
For the full guide to the CLI syntax, see the CLI reference.