Cluster Install
Below are instructions on how to install a Run:ai cluster.
Prerequisites
Before installing, please review the installation prerequisites listed in Run:ai GPU Cluster Prerequisites.
Important
We strongly recommend running the Run:ai pre-install script to verify that all prerequisites are met.
Install Run:ai
Log in to the Run:ai user interface at <company-name>.run.ai, using the credentials provided by Run:ai Customer Support:
- If no clusters are currently configured, you will see a cluster installation wizard.
- If a cluster has already been configured, use the menu on the top left and select Clusters. Then, on the top left, click New Cluster.
Using the cluster wizard:
- Choose a name for your cluster.
- Choose the Run:ai version for the cluster.
- Choose a target Kubernetes distribution (see table for supported distributions).
- (SaaS and remote self-hosted cluster only) Enter a URL for the Kubernetes cluster. The URL need only be accessible within the organization's network. For more information, see Cluster prerequisites.
- Press Continue.
On the next page:
- (SaaS and remote self-hosted cluster only) Install a trusted certificate to the domain entered above.
- Run the Helm command provided in the wizard.
Verify your Installation
- Go to <company-name>.run.ai/dashboards/now.
- Verify that the number of GPUs at the top right reflects the GPU resources in your cluster, and that the list of machines with GPU resources appears on the bottom line.
- Run the following (assumes that yq is installed):
kubectl get cm runai-public -n runai -o jsonpath='{.data}' | yq -P
Example output:
cluster-version: 2.9.0
runai-public:
version: 2.9.0
runaiConfigStatus:
conditions:
- type: DependenciesFulfilled # (1)
status: "True"
reason: dependencies_fulfilled
message: Dependencies are fulfilled
- type: Deployed
status: "True"
reason: deployed
message: Resources Deployed
- type: Available
status: "True"
reason: available
message: System Available
- type: Reconciled # (2)
status: "True"
reason: reconciled
message: Reconciliation completed successfully
optional: # (3)
knative: # (4)
components:
hpa:
available: true
knative:
available: true
kourier:
available: true
mpi: # (5)
available: true
- Verifies that all mandatory dependencies are met: NVIDIA GPU Operator, Prometheus and NGINX controller.
- Verifies that all Run:ai-managed resources have been successfully deployed.
- Checks whether optional product dependencies have been met.
- See Inference prerequisites.
- See distributed training prerequisites.
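As a quick sanity check, you can scan the conditions for any that do not report True. A minimal sketch, run here against a saved copy of the sample status above (in practice, pipe in the kubectl output from the previous step instead):

```shell
# Save the condition list shown above (illustrative; use live kubectl output in practice)
cat > /tmp/runai-status.yaml <<'EOF'
runaiConfigStatus:
  conditions:
    - type: DependenciesFulfilled
      status: "True"
    - type: Deployed
      status: "True"
    - type: Available
      status: "True"
    - type: Reconciled
      status: "True"
EOF

# Print the type of every condition whose status is not "True".
# A healthy cluster prints nothing.
awk '/type:/ {t = $3} /status:/ && $2 != "\"True\"" {print t}' /tmp/runai-status.yaml
```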
For a more extensive verification of cluster health, see Determining the health of a cluster.
Troubleshooting
Dependencies are not fulfilled
- Make sure to install the missing dependencies.
- If the dependencies are installed, make sure that the CRDs of each dependency are installed and that the installed version is supported.
- Make sure that any adjustments required for specific Kubernetes flavors, as noted in the Cluster prerequisites, have been applied.
Resources not deployed / System Unavailable / Reconciliation Failed
- Run the Preinstall diagnostic script and check for issues.
- You can also run kubectl logs <pod_name> to get logs from any failing pod.
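To see which pods need attention, filter the pod list by status. A sketch using sample output (the pod names are made up; in practice, run kubectl get pods -n runai directly):

```shell
# Sample `kubectl get pods -n runai` output with one failing pod (illustrative names)
cat > /tmp/runai-pods.txt <<'EOF'
NAME                         READY   STATUS             RESTARTS   AGE
runai-agent-6d5f9c7b-abcde   1/1     Running            0          5m
runai-operator-7f8b6-xyz12   0/1     CrashLoopBackOff   4          5m
EOF

# For every pod that is not Running, print the logs command to investigate it
awk 'NR > 1 && $3 != "Running" {print "kubectl logs " $1 " -n runai"}' /tmp/runai-pods.txt
```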
Common Issues
- Run:ai was previously installed in the cluster and was deleted unsuccessfully, resulting in leftover CRDs.
- Diagnosis: Run kubectl get crds and check for remaining Run:ai CRDs (CRDs are cluster-scoped, so no namespace is needed).
- Solution: Force delete the listed CRDs and reinstall.
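The cleanup step can be scripted. A sketch that selects the Run:ai CRDs from the kubectl get crds output and prints a delete command for each, so you can review them before running (the CRD names below are illustrative):

```shell
# Sample `kubectl get crds` output left behind by a failed uninstall (illustrative names)
cat > /tmp/crds.txt <<'EOF'
NAME                           CREATED AT
projects.run.ai                2023-01-01T00:00:00Z
runaiconfigs.run.ai            2023-01-01T00:00:00Z
certificates.cert-manager.io   2023-01-01T00:00:00Z
EOF

# Select the Run:ai CRDs and print a delete command for each -- review before executing
awk 'NR > 1 && /\.run\.ai/ {print "kubectl delete crd " $1}' /tmp/crds.txt
```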
- One or more of the pods have issues with invalid certificates.
- Diagnosis: The logs contain a message similar to failed to verify certificate: x509: certificate signed by unknown authority.
- Solution:
- This is usually due to an expired or invalid certificate in the cluster; if so, renew the certificate.
- If the certificate is valid but signed by a local CA, make sure you have followed the procedure for a local certificate authority.
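The "unknown authority" error can be reproduced locally with openssl: a self-signed (or locally signed) certificate fails verification against the default trust store, but passes once its signing CA is explicitly trusted, which is what the local-CA procedure configures. A sketch (file paths and the CN are arbitrary):

```shell
# Create a throwaway self-signed certificate (stands in for a locally signed cert)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example.local" \
  -keyout /tmp/ca-demo.key -out /tmp/ca-demo.crt 2>/dev/null

# Fails: the signing CA is not in the trust store
openssl verify /tmp/ca-demo.crt || true

# Succeeds: the signing CA is trusted explicitly
openssl verify -CAfile /tmp/ca-demo.crt /tmp/ca-demo.crt
```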
Get Installation Logs
You can use the get installation logs script to obtain any relevant installation logs in case of an error.
Researcher Authentication
If you will be using the Run:ai command-line interface or sending YAMLs directly to Kubernetes, you must now set up Researcher Access Control.
Customize your installation
To customize specific aspects of the cluster installation see customize cluster installation.
Set Node Roles (Optional)
When installing a production cluster you may want to:
- Set one or more Run:ai system nodes. These are nodes dedicated to Run:ai software.
- Machine learning workloads frequently include jobs that require CPU but no GPU. You may want to direct these jobs to dedicated CPU-only nodes, so as not to overload the GPU machines.
- Limit Run:ai to specific nodes in the cluster.
To perform these tasks, see Set Node Roles.
Next Steps
- Set up Run:ai users. See Working with Users.
- Set up Projects for Researchers. See Working with Projects.
- Set up Researchers to work with the Run:ai Command-line Interface (CLI). See Installing the Run:ai Command-line Interface for instructions on installing the CLI for users.
- Review advanced setup and maintenance scenarios.