Preparing for a Run:ai OpenShift Installation¶
The following section provides IT with the information needed to prepare for a Run:ai installation.
Create OpenShift Projects¶
The Run:ai control plane uses a namespace (or project, in OpenShift terminology) named runai-backend. You must create it before installing:
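The project can be created with the OpenShift CLI. A minimal sketch (the project name runai-backend is taken from this guide):

```bash
# Create the project (namespace) that will host the Run:ai control plane
oc new-project runai-backend
```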
Prepare Run:ai Installation Artifacts¶
Run:ai Software Files¶
SSH into a node that has oc (the OpenShift command-line tool) access to the cluster and Docker installed.
Run the following to enable image download from the Run:ai Container Registry on Google Cloud:
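The exact registry address and credentials are supplied by Run:ai. As an illustration only, logging in to a Google-hosted registry with a JSON service-account key typically looks like the sketch below; the key file name and registry host are hypothetical placeholders:

```bash
# Hypothetical key file and registry host; use the values provided by Run:ai
cat runai-gcr-key.json | docker login -u _json_key --password-stdin gcr.io
```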
To extract Run:ai files, replace <VERSION> in the command below and run:
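The archive name and download location are part of the Run:ai installation artifacts; the file name in this sketch is a hypothetical placeholder:

```bash
# Hypothetical archive name; substitute the actual file and <VERSION>
tar xvf runai-<VERSION>.tar.gz
```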
Upload images to a local Docker Registry. Set the Docker Registry address in the form of NAME:PORT (do not add https):
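One way to pass the registry address is through an environment variable; the variable name below is a hypothetical placeholder and the address is only an example:

```bash
# Hypothetical variable name; set it to your local registry address (NAME:PORT, no scheme)
export REGISTRY_URL=registry.local:5000
```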
Run the following script (at least 20 GB of free disk space is required):
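The script ships with the Run:ai installation artifacts; its name below is a hypothetical placeholder:

```bash
# Hypothetical script name; -E preserves environment variables (such as the registry address) under sudo
sudo -E ./upload_images.sh
```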
(If Docker is configured to run as non-root, sudo is not required.)
The script should create a file named custom-env.yaml, which will be used by the control-plane installation.
(Optional) Mark Run:ai System Workers¶
You can optionally set the Run:ai control plane to run on specific nodes. Kubernetes will attempt to schedule Run:ai pods to these nodes. If the nodes lack resources, the Run:ai pods will be scheduled on other, non-labeled nodes.
To set system worker nodes, run:
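A sketch of labeling a node, assuming the node-role label convention implied by the runai-system role mentioned below; confirm the exact label key against the official instructions:

```bash
# Assumed label key; replace <node-name> with the chosen worker node
oc label node <node-name> node-role.kubernetes.io/runai-system=true
```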
Warning
Do not select the Kubernetes master as a runai-system node. This may cause Kubernetes to stop working (specifically if the Kubernetes API Server is configured on port 443 instead of the default 6443).
Additional Permissions¶
As part of the installation, you will be required to install the Control Plane and Cluster Helm charts. The Helm charts require Kubernetes administrator permissions. You can review the exact permissions required by using the --dry-run flag on both Helm charts.
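For example, a Helm dry run renders the chart and shows the resources it would create without applying them; the release, repository, and chart names below are hypothetical placeholders:

```bash
# Hypothetical chart reference; inspect the rendered resources for the permissions they require
helm install runai-backend runai/runai-backend -n runai-backend --dry-run
```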
Next Steps¶
Continue with installing the Run:ai Control Plane.