To access Run:ai resources, you must authenticate. This document explains how authentication works in Run:ai.
Generally speaking, there are two authentication endpoints:
- The Run:ai control plane.
- Run:ai GPU clusters.
Both endpoints are accessible via APIs as well as a user interface.
Run:ai includes an internal identity service. The identity service ensures users are who they claim to be and gives them the right kind of access to Run:ai.
Out of the box, the Run:ai identity service provides a way to create users and associate them with access roles.
It is also possible to configure the Run:ai identity service to connect to a company directory using the SAML protocol. For more information see single sign-on.
Both endpoints described above are protected by time-limited, OAuth2-style JWT authentication tokens.
There are two ways of getting a token:
- Using a user/password combination.
- Using client applications for API access.
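Because the tokens are JWTs, any token you obtain can be inspected locally to see its claims, including the expiry time that makes it time-limited. The sketch below builds a sample token for illustration only (real tokens are issued by the Run:ai identity service, and their exact claims are not specified here); it decodes the payload segment without verifying the signature, which the identity service does server-side.

```python
import base64
import json
import time

def decode_jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT without verifying
    its signature. Signature verification happens server-side; this is
    only for inspecting a token you already hold."""
    payload_b64 = token.split(".")[1]
    # JWTs use URL-safe base64 without padding; restore the padding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample header.payload.signature token for illustration only;
# the subject and claim names here are hypothetical.
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT"}).encode()).rstrip(b"=").decode()
claims = {"sub": "researcher@example.com", "exp": int(time.time()) + 3600}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}.signature-omitted"

decoded = decode_jwt_claims(token)
print(decoded["sub"])
print(decoded["exp"] > time.time())  # True while the token is still valid
```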
## Run:ai control plane
You can sign in to the Run:ai user interface with a user/password combination. The credentials are validated against the identity service, and Run:ai returns a token with the appropriate access rights for continued operation.
You can also use a client application to get a token and then connect directly to the administration API endpoint.
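As a sketch of that flow, the snippet below builds an OAuth2-style client-credentials token request and then attaches the returned JWT as a Bearer credential on an API call. The endpoint path, payload field names, and credentials are all assumptions for illustration; take the real values from your Run:ai control plane configuration.

```python
import json
import urllib.request

# Hypothetical values -- the real endpoint path, field names, and
# credentials come from your Run:ai control plane configuration.
CONTROL_PLANE = "https://app.run.ai"
CLIENT_ID = "my-client-app"
CLIENT_SECRET = "example-secret"  # issued when the client application is created

def build_token_request() -> urllib.request.Request:
    """Build an OAuth2-style client-credentials token request.
    Sketch only: the URL and payload shape are assumptions."""
    body = json.dumps({
        "grantType": "client_credentials",
        "clientId": CLIENT_ID,
        "clientSecret": CLIENT_SECRET,
    }).encode()
    return urllib.request.Request(
        f"{CONTROL_PLANE}/api/v1/token",  # assumed path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_api_request(token: str, path: str) -> urllib.request.Request:
    """Attach a JWT as a Bearer token on an administration API call."""
    return urllib.request.Request(
        f"{CONTROL_PLANE}{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_api_request("eyJ-example-token", "/api/v1/clusters")
print(req.get_header("Authorization"))
```

The requests are only constructed, not sent; sending them requires a reachable control plane and valid client-application credentials.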
## Run:ai GPU Clusters
The Run:ai GPU cluster is a Kubernetes cluster. All communication into Kubernetes flows through the Kubernetes API server.
To facilitate authentication via Run:ai, the Kubernetes API server must be configured to use the Run:ai identity service to validate authentication tokens. For details on configuring the Kubernetes API server, see Kubernetes configuration under researcher authentication.
- To configure authentication for researchers, see researcher authentication.
- To configure single sign-on, see single sign-on.
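Once the Kubernetes API server validates Run:ai tokens, a researcher's kubeconfig can present the token as a bearer credential. A minimal sketch, in which the cluster name, server endpoint, and token are placeholders, not values prescribed by Run:ai:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: runai-cluster                  # placeholder cluster name
  cluster:
    server: https://<api-server-endpoint>:6443
users:
- name: researcher
  user:
    token: <jwt-issued-by-runai>       # time-limited JWT from the identity service
contexts:
- name: researcher@runai-cluster
  context:
    cluster: runai-cluster
    user: researcher
current-context: researcher@runai-cluster
```

Because the token is time-limited, it must be refreshed when it expires; the researcher authentication documentation covers how this is handled.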