Install the Run:ai Command-line Interface¶
The Run:ai Command-line Interface (CLI) is one of the ways for a Researcher to submit deep learning workloads, acquire GPU-based containers, list jobs, and more.
The instructions below guide you through installing the CLI. The Run:ai CLI runs on Mac and Linux. You can run the CLI on Windows by using Docker for Windows; see the end of this document.
When installing the command-line interface, it is worth considering future upgrades:
- Install the CLI on a dedicated Jumpbox machine. Researchers will connect to the Jumpbox, from which they can submit Run:ai commands.
- Install the CLI on a shared directory that is mounted on Researchers' machines.
Prerequisites¶
- (CLI version v2.2.76 or earlier) Kubectl (the Kubernetes command-line interface), installed and configured to access your cluster. See https://kubernetes.io/docs/tasks/tools/install-kubectl/.
- (CLI version v2.2.76 or earlier) Helm. See https://helm.sh/docs/intro/install/ for how to install Helm. Run:ai works with Helm version 3 only (not Helm 2).
- A Kubernetes configuration file obtained from the Kubernetes cluster installation.
If Researcher authentication is enabled, the CLI requires additional setup. To configure authentication, see Setup Project-based Researcher Access Control, and use the modified Kubernetes configuration file described in that article.
- On the Researcher's root folder, create a directory .kube. Copy the Kubernetes configuration file into the directory. Each Researcher should have a separate copy of the configuration file. The Researcher should have write access to the configuration file as it stores user defaults.
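The step above can be sketched as shell commands. This is only an illustration: `DEMO_HOME` is a throwaway directory standing in for the Researcher's home (use `$HOME` in a real setup), and the config contents are placeholders.

```shell
# Stand-in for the Researcher's home directory (hypothetical; use $HOME for real setups)
DEMO_HOME=$(mktemp -d)

# Create the .kube directory and give the Researcher a private copy of the config
mkdir -p "$DEMO_HOME/.kube"
printf 'apiVersion: v1\nkind: Config\n' > "$DEMO_HOME/.kube/config"   # placeholder content

# The file stores user defaults, so the Researcher needs write access to it
chmod 600 "$DEMO_HOME/.kube/config"
```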
- If you choose to locate the file at a different location than ~/.kube/config, you must create a shell variable to point to the configuration file as follows:
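For example, the standard kubectl `KUBECONFIG` environment variable points the tools at the file; the path below is a hypothetical location.

```shell
# Hypothetical location of the Kubernetes configuration file
export KUBECONFIG=/opt/runai/kubeconfig
```

Add the export line to your shell profile (e.g. ~/.profile) to make it persist across sessions.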
- Test the connection by running:

```
kubectl get nodes
```
Install Run:ai CLI¶
- Download the latest release from the Run:ai releases page. For MacOS, download the `darwin-amd64` release. For Linux, download the `linux-amd64` release.
- Unarchive the downloaded file.
- Install by running:

```
sudo ./install-runai.sh
```

The command installs the Run:ai CLI into /usr/local. Alternatively, you can provide a directory of your choosing:

```
sudo ./install-runai.sh <INSTALLATION-DIRECTORY>
```
You can omit sudo if you have write access to the directory. The directory must be added to the user's PATH.
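For a custom installation directory, the following sketch prepends its `bin` subdirectory to the PATH. Both the location and the `bin` layout are assumptions here (the `bin` subdirectory matches the default-path layout shown in the deletion section below).

```shell
# Hypothetical custom installation directory
RUNAI_DIR="$HOME/runai"

# Prepend its bin subdirectory to PATH for this shell session
export PATH="$RUNAI_DIR/bin:$PATH"
```

Add the export line to your shell profile to make the change permanent.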
- To verify the installation, run:

```
runai list jobs
```
Install Command Auto-Completion¶
It is possible to configure your Linux/Mac shell to complete Run:ai CLI commands. This feature works on bash and zsh shells only.
Edit the file ~/.zshrc and add the lines:

```
autoload -U compinit; compinit -i
source <(runai completion zsh)
```
Install the bash-completion package:

- Mac: `brew install bash-completion`
- Ubuntu/Debian: `sudo apt-get install bash-completion`
- CentOS/RHEL: `sudo yum install bash-completion`
Edit the file ~/.bashrc and add the lines:

```
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
source <(runai completion bash)
```
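Appending completion lines to a dotfile can be made idempotent, so re-running an install script does not duplicate them. This sketch uses a temporary file in place of the real ~/.bashrc, purely for illustration.

```shell
# Temporary stand-in for ~/.bashrc, so the demo doesn't touch real dotfiles
RC_FILE=$(mktemp)

# Append the completion line only if it is not already present
LINE='source <(runai completion bash)'
grep -qxF "$LINE" "$RC_FILE" || printf '%s\n' "$LINE" >> "$RC_FILE"
grep -qxF "$LINE" "$RC_FILE" || printf '%s\n' "$LINE" >> "$RC_FILE"   # second run is a no-op
```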
Troubleshooting the CLI Installation¶
Update the Run:ai CLI¶
To update the CLI to the latest version, run:

```
sudo runai update
```
Delete the Run:ai CLI¶
If you have installed using the default path, run:

```
sudo rm -rf /usr/local/bin/runai /usr/local/runai
```
If you have installed using a custom path, delete all Run:ai files in that path.
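For a custom path, the removal mirrors the default-path command above. This sketch creates and removes a throwaway directory that stands in for a hypothetical custom install location.

```shell
# Throwaway directory standing in for a hypothetical custom install path
CUSTOM_DIR=$(mktemp -d)
mkdir -p "$CUSTOM_DIR/bin"
touch "$CUSTOM_DIR/bin/runai"   # placeholder for the installed binary

# Delete all Run:ai files under the custom path
rm -rf "$CUSTOM_DIR"
```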
Use Run:ai on Windows¶
- Install Docker for Windows.
- Get the following folder from GitHub: https://github.com/run-ai/docs/tree/master/cli/windows.
- Replace the file `config` with your Kubernetes configuration file.
- Run `build.sh` to create a Docker image named `runai-cli`.
- Test the image by running:

```
docker run -it runai-cli bash
```

- Try to connect to your cluster from inside the container by running a Run:ai CLI command, e.g. `runai list projects`.
- Distribute the image to Windows users.
- If you want to use the port-forward feature, use the following command:

```
docker run -it -p <PORT>:<PORT> runai-cli bash
```

When using the `runai submit` command, add the following flag: