Prerequisites

Before proceeding with this document, please review the installation types documentation to understand the difference between air-gapped and connected installations.

Hardware Requirements

(Production only) Run:ai System Nodes: To reduce downtime and save CPU cycles on expensive GPU machines, we recommend that production deployments contain two or more worker machines designated for Run:ai software. The nodes do not have to be dedicated to Run:ai, but for Run:ai purposes we would need the following (an example capacity check follows this list):

  • 4 CPUs
  • 8GB of RAM
  • 120GB of Disk space
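
To confirm that candidate system nodes meet these minimums, a capacity listing such as the following can help (the node name is a placeholder; adjust to your cluster):

# List allocatable CPU and memory per node
oc get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory

# Show capacity details (including ephemeral storage) for a specific node
oc describe node <node-name> | grep -A 6 'Capacity'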

The control plane (backend) installation of Run:ai requires the configuration of Kubernetes Persistent Volumes with a total size of 110GB.
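
As a quick sanity check, you can verify that a storage class is available to provision these volumes and review any existing volumes (the exact storage class depends on your environment):

# List storage classes available for Persistent Volume provisioning
oc get storageclass

# List existing Persistent Volumes and their capacity
oc get pv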

Run:ai Software Prerequisites

You should receive the following from Run:ai Customer Support:

  • A file named runai-gcr-secret.yaml, which provides access to the Run:ai container registry.
  • A single file named runai-<version>.tar.
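
A minimal sketch of how these files are typically used, assuming the secret is applied for registry access and the tar is loaded locally for an air-gapped installation (the namespace shown is an assumption; follow the instructions provided with your installation):

# Apply the registry access secret provided by Run:ai Customer Support
# (example namespace; use the one given in your installation instructions)
oc apply -f runai-gcr-secret.yaml -n runai-backend

# Air-gapped: load the Run:ai images from the tar file into the local Docker daemon
docker load -i runai-<version>.tar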

OpenShift

Run:ai supports OpenShift versions 4.6 through 4.10.

  • OpenShift must be configured with a trusted certificate.
  • OpenShift must have a configured identity provider.
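
To verify both items, you can inspect the cluster's OAuth and default ingress certificate configuration, for example:

# List the identity providers configured on the cluster
oc get oauth cluster -o jsonpath='{.spec.identityProviders[*].name}'

# Show the custom default certificate configured for the ingress controller (empty if none is set)
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.defaultCertificate}'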

Important

  • Entitlement is the Red Hat OpenShift licensing mechanism. Without entitlement, you will not be able to install the NVIDIA drivers used by the GPU Operator. For further information, see here or the equivalent NVIDIA documentation. Entitlement is no longer required if you are using OpenShift 4.9.9 or above (see the version check after this list).
  • If you are planning to use NVIDIA A100 with CoreOS, you will need the latest GPU Operator (version 1.8).
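
To check whether your cluster is already at 4.9.9 or above (where entitlement is not required), query the cluster version:

# Show the current OpenShift version
oc get clusterversion version -o jsonpath='{.status.desired.version}'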

Download Third-Party Dependencies

An OpenShift installation of Run:ai has two third-party dependencies that must be pre-downloaded in an air-gapped environment: the NVIDIA GPU Operator and the Kubernetes Node Feature Discovery Operator.

For air-gapped installations, download the NVIDIA GPU Operator prerequisites. These instructions also include the download of the Kubernetes Node Feature Discovery Operator.

For connected installations, no additional work needs to be performed; the Red Hat Certified Operator Catalog (Operator Hub) is used during the installation.
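
On a connected cluster, you can confirm that both operators are available from the catalog before starting, for example:

# Confirm the GPU Operator and Node Feature Discovery Operator appear in the Operator Hub catalog
oc get packagemanifests -n openshift-marketplace | grep -Ei 'gpu-operator|node-feature-discovery|nfd'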

Installer Machine

The machine running the installation script (typically the Kubernetes master) must have:

  • At least 50GB of free space.
  • Docker installed.
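
To verify these on the installer machine:

# Check free disk space
df -h /

# Check that Docker is installed and the daemon is reachable
docker version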

Other

  • (Airgapped installation only) Private Docker Registry. Run:ai assumes the existence of a Docker registry for images, typically installed within the organization. The installation requires the network address and port of the registry (referenced below as <REGISTRY_URL>); an example connectivity check follows.
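
Before starting, it can be useful to confirm that the installer machine can reach and authenticate to this registry (the address is a placeholder):

# Verify connectivity and credentials for the private registry
docker login <REGISTRY_URL>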

Pre-install Script

Once you believe that the Run:ai prerequisites are met, we highly recommend installing and running the Run:ai pre-install diagnostics script. The tool:

  • Tests the requirements above as well as additional failure points related to Kubernetes, NVIDIA, storage, and networking.
  • Looks at additional installed components and analyzes their relevance to a successful Run:ai installation.

To use the script, download the latest version and run:

chmod +x preinstall-diagnostics-<platform>
./preinstall-diagnostics-<platform> 

If the script fails, or if the script succeeds but the Kubernetes system contains components other than Run:ai, locate the file runai-preinstall-diagnostics.txt in the current directory and send it to Run:ai technical support.

For more information on the script including additional command-line flags, see here.


Last update: May 18, 2022