This document explains how to customize the Run:ai cluster installation. Customizing the cluster installation is useful if you want to implement specific features.
Using these instructions to customize your cluster is optional.
How to customize¶
After the cluster is installed, you can edit the `runaiconfig` object to add or change configuration. Use the command:
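Assuming a default Run:ai installation (object named `runaiconfig` in the `runai` namespace, as referenced elsewhere in this document), editing the configuration typically looks like:

```shell
# Open the runaiconfig object for interactive editing.
# Object and namespace names assume a default Run:ai installation;
# adjust them if your installation differs.
kubectl edit runaiconfig runai -n runai
```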
All customizations are preserved when the cluster is upgraded to a future version.
| Key | Default | Description |
|-----|---------|-------------|
| | | Set to |
| | | Set to |
| | | Set to |
| | | Defines the container runtime of the cluster (supports |
| | | Set to `true` to allow researcher tools with a subdomain to be spawned from the Run:ai user interface. For more information, see External access to containers. |
| | | Sets request and limit configurations for CPU and memory for Run:ai containers. For more information, see Large cluster configuration. |
| | | Controls the usage of GPU fractions. |
| | | On Kubernetes distributions other than OpenShift, sets a dedicated certificate for the researcher service ingress in the cluster. When not set, the certificate provided when installing the cluster is used. The value should be a Kubernetes secret in the `runai` namespace. |
| | | On OpenShift, sets a dedicated certificate for the researcher service route. When not set, the OpenShift certificate is used. The value should be a Kubernetes secret in the `runai` namespace. |
| | | In air-gapped environments, allows cluster images to be pulled from a local Docker registry. For more information, see self-hosted cluster installation. |
| | false | Restricts scheduling of workloads to specific nodes, based on node labels. For more information, see node roles. |
| | 2h | The retention period for which Prometheus stores Run:ai metrics. Prometheus is used only as an intermediary to another metrics storage facility, and metrics are typically moved within tens of seconds, so changing this setting is mostly useful for debugging. |
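As a sketch, a single setting can also be changed non-interactively with a merge patch. The key path below is a placeholder, not a real `runaiconfig` field; substitute the relevant key from the table above:

```shell
# Placeholder key path -- replace '<section>', '<key>', and '<value>'
# with the real field from the table above before running.
kubectl patch runaiconfig runai -n runai --type merge \
  -p '{"spec": {"<section>": {"<key>": "<value>"}}}'
```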
Understanding Custom Access Roles¶
To review the access roles created by the Run:ai Cluster installation, see Understanding Access Roles.
Manual Creation of Namespaces¶
Run:ai Projects are implemented as Kubernetes namespaces. By default, the administrator creates a new Project via the Administration user interface, which triggers the creation of a Kubernetes namespace named `runai-<PROJECT-NAME>`. There are a couple of use cases in which customers may want to disable this behavior:
- Some organizations prefer their internal naming convention for Kubernetes namespaces over Run:ai's default.
- Some organizations do not allow Run:ai to create Kubernetes namespaces automatically.
Follow these steps to achieve this:
- Disable the namespace creation functionality. See the
- Create a Project using the Run:ai user interface.
- If needed, create the namespace by running `kubectl create ns <NAMESPACE>`. The suggested Run:ai default is `runai-<PROJECT-NAME>`.
- Label the namespace to connect it to the Run:ai Project by running `kubectl label ns <NAMESPACE> runai/queue=<PROJECT_NAME>`, where `<PROJECT_NAME>` is the name of the Project you created in the Run:ai user interface above and `<NAMESPACE>` is the name you chose for your namespace.
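The manual steps above can be sketched as a short script (the project and namespace names here are examples; use your own):

```shell
PROJECT_NAME=team-a    # the Project you created in the Run:ai user interface
NAMESPACE=ml-team-a    # the namespace name you chose

# Create the namespace if it does not already exist.
kubectl create ns "$NAMESPACE"

# Label the namespace to connect it to the Run:ai Project.
kubectl label ns "$NAMESPACE" runai/queue="$PROJECT_NAME"
```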