
Installation Types

Run:AI consists of two components:

  • The Run:AI Cluster. One or more data-science GPU clusters hosted by the customer (on-prem or cloud).
  • The Run:AI Backend or Control Plane. A single entity that monitors clusters, sets priorities, and enforces business policies.

There are two main installation options:

  • Classic (SaaS). Run:AI is installed on the customer's data science GPU clusters. The cluster connects to the Run:AI backend on the cloud (https://app.run.ai). With this installation, the cluster requires an outbound connection to the Run:AI cloud (see the connectivity sketch below).
  • Self-hosted. The Run:AI backend is also installed in the customer's data center.
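
For the Classic (SaaS) option, the cluster needs outbound HTTPS access to the Run:AI cloud. Below is a minimal pre-installation sketch, not part of the Run:AI tooling, that checks this reachability from a machine on the cluster network; the host, port, and timeout values are assumptions for illustration.

    # Minimal pre-install check: can this machine open an outbound TLS
    # connection to the Run:AI cloud backend? (Illustrative only; not part
    # of the Run:AI installer.)
    import socket
    import ssl

    HOST = "app.run.ai"   # Run:AI cloud backend
    PORT = 443            # outbound HTTPS
    TIMEOUT = 5           # seconds; adjust for slow proxies

    def check_outbound(host: str = HOST, port: int = PORT, timeout: float = TIMEOUT) -> bool:
        """Return True if an outbound TLS connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as raw_sock:
                context = ssl.create_default_context()
                with context.wrap_socket(raw_sock, server_hostname=host):
                    return True
        except OSError as exc:
            print(f"Cannot reach {host}:{port}: {exc}")
            return False

    if __name__ == "__main__":
        print("Outbound connection OK" if check_outbound() else "Outbound connection blocked")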

The self-hosted option is for organizations that cannot use a SaaS solution due to data leakage concerns. The self-hosted installation is priced differently. For further information, please contact Run:AI sales.

Self-hosted Installation

Run:AI self-hosting comes in two variants:

  • Connected. The organization can freely download from the internet (though uploads are not allowed).
  • Air-gapped. The organization has no connection to the internet.

Self-hosting with Kubernetes vs OpenShift

There are many Certified Kubernetes providers. Run:AI has been installed on a number of them, including Rancher, Kubespray, OpenShift, HPE Ezmeral, and native Kubernetes. The OpenShift installation differs from the rest, so the Run:AI self-hosted installation instructions are divided into two separate sections: one for Kubernetes and one for OpenShift.

Secure Installation

In many organizations, Kubernetes is governed by IT compliance rules. In this scenario, strict access control rules apply both during installation and when running workloads:

  • OpenShift is secured using Security Context Constraints (SCC). The Run:AI installation supports SCC.
  • Kubernetes is secured using Pod Security Policy (PSP). The Run:AI installation supports PSP.
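
As a way to see what restrictions a cluster already enforces before installing, the sketch below lists the existing Pod Security Policies through the official kubernetes Python client. It is illustrative only and assumes a valid kubeconfig plus a client and cluster version that still serve the policy/v1beta1 API (PSP was removed in Kubernetes 1.25).

    # Illustrative sketch: list the PodSecurityPolicies a cluster enforces,
    # using the official `kubernetes` Python client. Assumes policy/v1beta1
    # is still served and a working kubeconfig context.
    from kubernetes import client, config

    def list_pod_security_policies() -> None:
        config.load_kube_config()              # use the active kubeconfig context
        policy_api = client.PolicyV1beta1Api()
        for psp in policy_api.list_pod_security_policy().items:
            spec = psp.spec
            print(f"{psp.metadata.name}: privileged={spec.privileged}, "
                  f"runAsUser={spec.run_as_user.rule}")

    if __name__ == "__main__":
        list_pod_security_policies()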
