Understanding Cluster Access Roles¶
Run:AI can operate in restricted Kubernetes environments, namely:
- Kubernetes PodSecurityPolicy

You can enable this restricted environment by setting the openshift configuration flag in the Helm values file before installing the Run:AI cluster.

Other configuration flags control specific behavioral aspects of Run:AI, specifically those that require additional permissions, such as automatic namespace/project creation, secret propagation, and more.
The purpose of this document is to provide security officers with the ability to review what cluster-wide access Run:AI requires, and verify that it is in line with organizational policy, before installing the Run:AI cluster.
Review Cluster Access Roles¶
If you have not done so before, run:

```
helm repo add runai https://run-ai-charts.storage.googleapis.com
helm repo update
```

Then pull the chart and open its templates folder:

```
helm pull runai/runai-cluster --untar
cd runai-cluster/templates
```
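Once the chart is untarred, you can take a quick inventory of which templates define cluster-scoped roles before reading each file in detail. A minimal sketch using standard shell tools, run from inside the `templates` folder produced by the commands above:

```shell
# List every template that defines a ClusterRole or ClusterRoleBinding.
grep -l "kind: ClusterRole" *.yaml

# Show the RBAC rules each template requests (resources and verbs).
grep -h -A 3 "rules:" *.yaml
```

This only surfaces cluster-scoped RBAC objects; OpenShift Security Contexts and PodSecurityPolicies live in separate templates and need their own review.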
Following is a description of some of the relevant files:
| File | Description |
|------|-------------|
| | Mandatory Kubernetes Cluster Roles and Cluster Role Bindings |
| | Automatic Project creation and maintenance. Provides Run:AI with the ability to create Kubernetes namespaces when the Run:AI administrator creates new Projects. Can be controlled via flag. |
| | Allows the propagation of Secrets. See Secrets in Jobs. Can be controlled via flag. |
| | Disables the use of the Kubernetes Limit Range feature |
| | OpenShift-specific Security Contexts |
| (4 files) | Folder containing a list of Priority Classes used by Run:AI |
| | A subset of the Kubernetes baseline PodSecurityPolicy |
| | Required for NVIDIA components |
| | Required for Run:AI GPU Fractions technology. Can be controlled via flag. |
| | Required for user workloads. Extends the Kubernetes baseline PodSecurityPolicy for Run:AI GPU Fractions technology. Can be controlled via flag. |
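The "Can be controlled via flag" entries above correspond to keys in the chart's values file. A quick way to locate them, assuming you ran the helm pull step above from the same directory (the file layout shown is the standard Helm chart layout; the specific flag names must be taken from the chart itself, not from this sketch):

```shell
# Show the chart's default values, where the feature flags
# referenced in the table above are defined.
cat runai-cluster/values.yaml

# Find the template conditionals those values gate.
grep -rn "\.Values\." runai-cluster/templates | head
```

Reviewing the values file alongside the templates lets a security officer confirm exactly which permissions are dropped when a given flag is disabled.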