Run:ai consists of two components:
- The Run:ai Cluster. One or more data-science GPU clusters hosted by the customer (on-prem or cloud).
- The Run:ai Control plane (or Backend). A single entity that monitors clusters and sets priorities and business policies.
There are two main installation options:
| Installation option | Description |
|---------------------|-------------|
| Classic (SaaS)      | Run:ai is installed on the customer's data science GPU clusters. The cluster connects to the Run:ai control plane on the cloud (https://app.run.ai). With this installation, the cluster requires an outbound connection to the Run:ai cloud (a quick connectivity check is sketched after the table). |
| Self-hosted         | The Run:ai control plane is also installed in the customer's data center. |
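Because the Classic (SaaS) option depends on the cluster reaching the Run:ai cloud, it can be useful to verify outbound connectivity before installing. Below is a minimal sketch, assuming Python 3 is available on a node (or in a pod) inside the cluster; any HTTP response from https://app.run.ai demonstrates that the endpoint is reachable.

```python
# Minimal outbound-connectivity check for the Classic (SaaS) installation.
# Any HTTP response from the Run:ai cloud endpoint proves reachability;
# only a network-level failure means the required outbound connection is missing.
import urllib.error
import urllib.request

URL = "https://app.run.ai"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print(f"Outbound connection OK (HTTP {resp.status})")
except urllib.error.HTTPError as exc:
    # An HTTP error status still means the endpoint was reached.
    print(f"Reached {URL}, got HTTP {exc.code}")
except (urllib.error.URLError, OSError) as exc:
    print(f"Cannot reach {URL}: {exc}")
```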
The self-hosted option is for organizations that cannot use a SaaS solution due to data leakage concerns. The self-hosted installation is priced differently. For further information, please contact Run:ai Sales.
Run:ai self-hosting comes with two variants:
| Variant    | Description |
|------------|-------------|
| Connected  | The organization can freely download from the internet (though upload is not allowed). |
| Air-gapped | The organization has no connection to the internet. |
## Self-hosting with Kubernetes vs OpenShift
Kubernetes has many Certified Kubernetes providers. Run:ai has been installed on a number of them, such as Rancher, Kubespray, OpenShift, HPE Ezmeral, and native Kubernetes. The OpenShift installation differs from the rest, so the Run:ai self-hosted installation instructions are divided into two separate sections:
- OpenShift-based installation. See Run:ai OpenShift installation.
- Kubernetes-based installation. See Run:ai Kubernetes installation.
In many organizations, Kubernetes is governed by IT compliance rules, which impose strict access controls on both the installation and the running of workloads:
- OpenShift is secured using Security Context Constraints (SCC). The Run:ai installation supports SCC.
- Kubernetes is secured using Pod Security Policy (PSP). The Run:ai installation supports PSP.
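Before installing on a PSP-governed cluster, it can help to see which policies are in effect. The following is a minimal sketch, assuming a Kubernetes cluster (pre-1.25) that still serves the policy/v1beta1 PodSecurityPolicy API and a matching release of the official `kubernetes` Python client; it is not part of Run:ai and only lists the policies the installation would have to comply with.

```python
# Sketch: list the Pod Security Policies in effect on the cluster.
# Assumes policy/v1beta1 (PSP) is still served and the installed
# `kubernetes` Python client release still exposes PolicyV1beta1Api.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

policy_api = client.PolicyV1beta1Api()
psps = policy_api.list_pod_security_policy().items

if psps:
    print("Pod Security Policies in effect:")
    for psp in psps:
        print(f"  {psp.metadata.name}")
else:
    print("No Pod Security Policies found.")
```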