What's New 2021

August 30th, 2021

Run:AI now supports a self-hosted installation. With the self-hosted installation, the Run:AI control plane (or backend), which typically resides in the cloud, is deployed in the customer's data center. For further details on supported installation types, see Installation Types.

Note

The Run:AI self-hosted installation requires a dedicated license and has different pricing from the SaaS installation. For more details, contact your Run:AI account manager.

NFS volumes can now be mounted directly into containers when submitting jobs via Run:AI. See the --nfs-server flag of runai submit.
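As a minimal sketch, a submission that mounts an NFS share could look like the following. The job name, image, server address, and paths are placeholders, and the pairing of --nfs-server with a volume mount shown here is an assumption; consult the runai submit reference for the authoritative syntax.

    # Illustrative only: names, addresses, and paths are placeholders.
    # Pairing --nfs-server with --volume is an assumption; see the runai submit reference.
    runai submit nfs-example \
      --image ubuntu \
      --gpu 1 \
      --nfs-server nfs.example.local \
      --volume /exports/datasets:/data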

To ease the management of user templates, Run:AI now supports global user templates. Global user templates are managed by the Run:AI administrator and are available to all projects within a specific cluster. Their purpose is to help define and enforce cross-organization resource policies.

To simplify job submission via the Run:AI Researcher User Interface (UI), the UI now supports autocomplete based on pre-defined values configured by the Administrator using administrative templates.

The Cluster name, as defined by the Administrator when configuring clusters in Run:AI, is now shown in the Run:AI dashboards as well as in the Researcher UI.

The original command line used to run a Job is now shown in the Job details, under the General tab.

August 4th, 2021

Researcher User Interface (UI) enhancements:

  • Revised user interface and user experience
  • Researchers can create templates to ease job submission. Templates can be saved and reused at the project level
  • Researchers can easily re-submit jobs from the Submit page or directly from the job list in the Jobs page
  • Administrators can create administrative templates which set cluster-wide defaults and constraints for the submission of Jobs. For further details see Configure Command-Line Interface Templates.
  • Different teams can collaborate and share templates by exporting and importing them in the Submit screen

Researcher Command Line Interface (CLI) enhancements:

  • Jobs can be manually suspended and resumed using the new commands runai suspend and runai resume (see the sketch after this list)
  • A new command was added: runai top job
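As a rough illustration, assuming an existing job named train1 (a placeholder), the new commands could be used as follows:

    # Suspend a running job, then resume it later (job name is a placeholder)
    runai suspend train1
    runai resume train1

    # List jobs along with their resource utilization (exact output may vary)
    runai top job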

Kubeflow integration is now supported. The new integration allows building ML pipelines with Kubeflow Pipelines, working with Kubeflow Notebooks, and running the workloads via the Run:AI platform. For further details see Integrate Run:AI with Kubeflow.

MLflow integration is now supported. For further details see Integrate Run:AI with MLflow.

Run:AI Projects are implemented as Kubernetes namespaces. Run:AI now supports customizable namespace names. For further details see Manual Creation of Namespaces.
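As a minimal sketch of associating a custom namespace with a project, assuming a project named team-a and assuming the association is made via a runai/queue label (the exact label key and procedure are described in Manual Creation of Namespaces):

    # Create a custom namespace and associate it with an existing Run:AI project.
    # The label key below is an assumption; verify it in Manual Creation of Namespaces.
    kubectl create namespace ml-team-a
    kubectl label namespace ml-team-a runai/queue=team-a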

May 10th, 2021

Usability improvements to the Run:AI Command-Line Interface (CLI): the CLI now supports autocomplete for all options and parameters.

The Administration user interface navigation menu has been improved for easier navigation.

Run:AI can now be installed on Kubernetes clusters that have Pod Security Policy (PSP) enabled.

April 20th, 2021

The Job list and Node list now show the GPU type (e.g., V100).

April 18th, 2021

Inference workloads are now supported. For further details see Inference Overview.

JupyterHub integration is now supported. For further details see JupyterHub Integration.

NVIDIA MIG is now supported. NVIDIA MIG technology can be used to partition A100 GPUs. Each partition is treated as a single GPU by the Run:AI system, and all Run:AI functionality, including GPU Fractions, is supported at the partition level.
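For context, MIG partitioning itself is performed with standard NVIDIA tooling, independently of Run:AI. A rough sketch, assuming an A100 at GPU index 0 (the GPU index and instance profiles are illustrative and vary per GPU model):

    # Enable MIG mode on GPU 0 (a GPU reset or node reboot may be required)
    sudo nvidia-smi -i 0 -mig 1

    # Create two 3g.20gb GPU instances and their default compute instances
    sudo nvidia-smi mig -i 0 -cgi 9,9 -C

    # List the resulting MIG devices
    nvidia-smi -L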

April 1st, 2021

Run:AI now supports Kubernetes 1.20.

March 24th, 2021

The Job list and Node list now show CPU utilization and CPU memory utilization.

February 14th, 2021

The Job list now shows per-Job graphs for GPU utilization and GPU memory.

The Node list now shows per-Node graphs for GPU utilization and GPU memory.

January 22nd, 2021

A new Analytics dashboard, with emphasis on CPU, CPU memory, GPU, and GPU memory, allows better diagnosis of resource misuse.

January 15th, 2021

A new developer documentation area has been created.

January 9th, 2021

A new Researcher user interface is now available. See Researcher UI Setup.

January 2nd, 2021

Run:AI clusters now support Azure Kubernetes Service (AKS).


Last update: September 3, 2021