Install the Run:ai Command-line Interface

The Run:ai Command-line Interface (CLI) is one of the ways a Researcher can submit deep learning workloads, acquire GPU-based containers, list jobs, and more.

The instructions below guide you through installing the CLI. The Run:ai CLI runs on Mac and Linux. You can run the CLI on Windows by using Docker for Windows; see the end of this document.


Prerequisites

  • When installing the command-line interface, it is worth considering future upgrades:

    • Install the CLI on a dedicated Jumpbox machine. Researchers will connect to the Jumpbox, from which they can submit Run:ai commands.
    • Install the CLI on a shared directory that is mounted on Researchers' machines.
  • (CLI version v2.2.76 or earlier) Kubectl (the Kubernetes command-line interface) installed and configured to access your cluster.
  • (CLI version v2.2.76 or earlier) Helm installed. Run:ai works with Helm version 3 only (not Helm 2).
  • A Kubernetes configuration file obtained from the Kubernetes cluster installation.

Researcher Authentication

When enabled, Researcher authentication requires additional setup when installing the CLI. To configure authentication see Setup Project-based Researcher Access Control. Use the modified Kubernetes configuration file described in the article.


Kubernetes Configuration

  • In the Researcher's home directory, create a directory named .kube and copy the Kubernetes configuration file into it. Each Researcher should have a separate copy of the configuration file, with write access, as the file stores user defaults.
  • If you choose to locate the file at a different location than ~/.kube/config, you must create a shell variable to point to the configuration file as follows:
export KUBECONFIG=<Kubernetes-config-file>
  • Test the connection by running:
kubectl get nodes
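The configuration steps above can be sketched end to end. In this sketch, $HOME/runai-config is an example alternative location (not a Run:ai default), and an empty file stands in for the real configuration file you received:

```shell
# Sketch of placing the Kubernetes configuration file at a custom location.
# "$HOME/runai-config" is an example path; the ': >' line creates an empty
# stand-in for the real configuration file obtained from the cluster install.
CONFIG_DIR="$HOME/runai-config"
mkdir -p "$CONFIG_DIR"
: > "$CONFIG_DIR/config"          # copy your real configuration file here instead
chmod 600 "$CONFIG_DIR/config"    # readable and writable by the Researcher only
export KUBECONFIG="$CONFIG_DIR/config"
echo "$KUBECONFIG"
```

With the variable exported, kubectl and the Run:ai CLI read this file instead of ~/.kube/config.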

Install Run:ai CLI

  • Download the latest release from the Run:ai releases page. For macOS, download the darwin-amd64 release. For Linux, download the linux-amd64 release.
  • Unarchive the downloaded file
  • Install by running:
sudo ./

The command installs the Run:ai CLI into /usr/local. Alternatively, you can provide a directory of your choosing.

You can omit sudo if you have write access to the chosen directory. The directory must be added to each user's PATH.
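As a sketch, choosing a custom install directory and putting it on PATH might look like this; $HOME/runai is a hypothetical path, not a Run:ai default:

```shell
# Example of preparing a custom install directory and adding it to PATH.
# "$HOME/runai" is a hypothetical choice made for this sketch.
INSTALL_DIR="$HOME/runai"
mkdir -p "$INSTALL_DIR"
export PATH="$PATH:$INSTALL_DIR"  # append this line to ~/.bashrc or ~/.zshrc to persist
echo "$PATH" | tr ':' '\n' | tail -n 1
```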

  • To verify the installation run:
runai list jobs

Install Command Auto-Completion

It is possible to configure your Linux/Mac shell to complete Run:ai CLI commands. This feature works on bash and zsh shells only.


Zsh

Edit the file ~/.zshrc. Add the lines:

autoload -U compinit; compinit -i
source <(runai completion zsh)


Bash

Install the bash-completion package:

  • Mac: brew install bash-completion
  • Ubuntu/Debian: sudo apt-get install bash-completion
  • Fedora/CentOS: sudo yum install bash-completion

Edit the file ~/.bashrc. Add the lines:

[[ -r "/usr/local/etc/profile.d/" ]] && . "/usr/local/etc/profile.d/"
source <(runai completion bash)

Troubleshooting the CLI Installation

See Troubleshooting a CLI installation.

Update the Run:ai CLI

To update the CLI to the latest version run:

sudo runai update

Delete the Run:ai CLI

If you have installed using the default path, run:

sudo rm -rf /usr/local/bin/runai /usr/local/runai

If you have installed using a custom path, delete all Run:ai files in this path.

Use Run:ai on Windows

Install Docker for Windows.

Get the following folder from GitHub:

In that folder, replace the file named config with your Kubernetes configuration file.

Run the Docker build command to create a Docker image named runai-cli.
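A typical build invocation, assuming the downloaded folder contains a Dockerfile (the folder name below is hypothetical), could be:

```shell
# Build a Docker image tagged runai-cli from the downloaded folder.
# "runai-cli-docker" is a hypothetical folder name used for this sketch.
cd runai-cli-docker
docker build -t runai-cli .
```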

Test the image by running:

docker run -it runai-cli bash

Try to connect to your cluster from inside the container by running a Run:ai CLI command, e.g. runai list projects.

Distribute the image to Windows users.

  • To use the port-forward feature, run the container with the port published:
docker run -it -p <PORT>:<PORT> runai-cli bash

When using the runai submit command, add the following flag:


Last update: May 14, 2022