Install the Run:ai Command-line Interface¶
The Run:ai Command-line Interface (CLI) is one of the ways for a Researcher to submit deep learning workloads, acquire GPU-based containers, list jobs, and more.
The instructions below will guide you through the process of installing the CLI. The Run:ai CLI runs on Mac and Linux. You can run the CLI on Windows by using Docker for Windows. See the end of this document.
When enabled, Researcher authentication requires additional setup when installing the CLI. To configure authentication, see Setup Project-based Researcher Access Control and use the modified Kubernetes configuration file described in that article.
When installing the command-line interface, it is worth considering future upgrades:

- Install the CLI on a dedicated Jumpbox machine. Researchers connect to the Jumpbox, from which they can submit Run:ai commands.
- Install the CLI in a shared directory that is mounted on Researchers' machines.
- A Kubernetes configuration file obtained from the Kubernetes cluster installation.

For Run:ai version 2.4 or earlier, you will also need:

- Kubectl (the Kubernetes command-line interface) installed and configured to access your cluster. See https://kubernetes.io/docs/tasks/tools/install-kubectl/.
- Helm. See https://helm.sh/docs/intro/install/ for how to install Helm. Run:ai works with Helm version 3 only (not Helm 2).
- In the Researcher's home folder, create a directory named .kube and copy the Kubernetes configuration file into it. Each Researcher should have a separate copy of the configuration file and write access to it, as the file stores user defaults.
- If you choose to locate the file at a different location than ~/.kube/config, you must create a shell variable to point to the configuration file as follows:
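For example, pointing the KUBECONFIG shell variable at a custom location (the path below is hypothetical; substitute the actual location of your configuration file):

```shell
# Hypothetical path; replace with the real location of your configuration file
export KUBECONFIG=/opt/runai/kube-config
```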
- Test the connection by running:
Install Run:ai CLI¶
- Go to the Run:ai user interface. At the top right, select Researcher Command Line Interface.
- Select Mac or Linux.
- Download directly using the button, or copy the command and run it on a remote machine.

An alternative way of downloading the CLI is provided in the CLI Troubleshooting section.
Download the latest release from the Run:ai releases page. For macOS, download the darwin-amd64 release; for Linux, download the corresponding Linux release.

Unarchive the downloaded file.
- Install by running:
The command installs the Run:ai CLI into /usr/local. Alternatively, you can provide a directory of your choosing:
You can omit sudo if you have write access to the target directory. The directory must be added to the user's PATH.
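If you installed into a custom directory (the path below is hypothetical), extend PATH in your shell profile so the runai binary is found:

```shell
# /opt/runai is a hypothetical custom install directory; adjust to yours
export PATH="$PATH:/opt/runai"
```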
To verify the installation, run:
Install Command Auto-Completion¶
You can configure your Linux/Mac shell to auto-complete Run:ai CLI commands. This feature works on bash and zsh shells only.
Edit the file ~/.zshrc and add the lines:
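The exact lines are not reproduced here. A common pattern for CLIs that follow the kubectl-style completion convention (an assumption — check the runai help output to confirm a completion subcommand exists in your CLI version) is:

```shell
# Assumed kubectl-style completion subcommand; verify it exists in your CLI version
autoload -U compinit && compinit
source <(runai completion zsh)
```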
Install the bash-completion package, using the command for your platform (macOS, Debian/Ubuntu, or RHEL/CentOS respectively):
brew install bash-completion
sudo apt-get install bash-completion
sudo yum install bash-completion
Edit the file ~/.bashrc and add the lines:
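As with zsh, the exact lines are not reproduced here; a common pattern, assuming the CLI provides a kubectl-style completion subcommand, is:

```shell
# Assumed kubectl-style completion subcommand; verify it exists in your CLI version
source <(runai completion bash)
```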
Troubleshoot the CLI Installation¶
Update the Run:ai CLI¶
To update the CLI to the latest version, repeat the installation process above.
Delete the Run:ai CLI¶
If you installed using the default path, run:
Use Run:ai on Windows¶
- Install Docker for Windows.
- Get the following folder from GitHub: https://github.com/run-ai/docs/tree/master/cli/windows.
- Replace the file config with your Kubernetes configuration file.
- Run build.sh to create the Docker image.
- Test the image by running:
- Try to connect to your cluster from inside the Docker container by running a Run:ai CLI command, e.g. runai list projects.
- Distribute the image to Windows users.
- To use the port-forward feature, use the following command:

When using the runai submit command, add the following flag: