
Scheduling Virtual Machines using Run:ai

Many organizations use virtual machines (VMs) to provide operating-system abstraction to users. Containers are different from VMs but serve a similar purpose. Containers at large scale are best managed by Kubernetes, and Run:ai is based on Kubernetes.

It is possible to mix and match containers and VMs to some extent using a technology called KubeVirt. KubeVirt allows running VMs inside containers on top of Kubernetes.

This article describes how to use KubeVirt to schedule VMs with GPUs.


Limitations:

  • Each node in the cluster can run either VMs or containers, not both.
  • GPU fractions are not supported.


Making GPUs visible to VMs is not trivial. It requires either a license for NVIDIA software called NVIDIA vGPU, or creating GPU passthrough by explicitly mapping GPU devices to virtual machines. This guide covers the latter option.

Install KubeVirt

Install KubeVirt using the following guide.
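
If KubeVirt is not yet installed, a typical installation deploys the KubeVirt operator and then the KubeVirt custom resource. Below is a minimal sketch, using an example release version (check the KubeVirt releases page for the current one):

# Example version only - pick the current release from https://github.com/kubevirt/kubevirt/releases
export VERSION=v1.2.0

# Deploy the KubeVirt operator
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml

# Deploy the KubeVirt custom resource, which triggers the actual installation
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

# Wait for the installation to become available
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=10m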

Dedicate specific nodes for VMs

Dedicate specific nodes within the cluster to be used for VMs rather than containers, following the guide.

Specifically, restrict the virt-controller, virt-api and virt-handler pods to run only on the nodes you want to use for VMs.
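
The KubeVirt custom resource controls where these components run via node placement. Below is a minimal sketch, assuming the VM-only nodes carry a hypothetical label vm-node=true (choose your own label and apply it with kubectl label node <NODE-NAME> vm-node=true):

  spec:
    infra:
      nodePlacement:            # virt-api and virt-controller pods
        nodeSelector:
          vm-node: "true"
    workloads:
      nodePlacement:            # virt-handler pods and the VM pods they launch
        nodeSelector:
          vm-node: "true"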

Assign host devices to virtual machines

For each node in the cluster that we want to use for VMs, we must:

  • Identify all GPU cards we want to dedicate to be used by VMs.
  • Map GPU cards for KubeVirt to pick up (called assigning host devices to a virtual machine).

Instructions for identifying GPU cards are operating-system-specific. For Ubuntu 20.04 run:

lspci -nnk -d 10de:

Search for GPU cards that are marked with the text Kernel driver in use. Save the vendor and device ID of the card (referred to below as the PCI address), for example: 10de:1e04
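
For illustration only, a node with a single GeForce RTX 2080 Ti might print something like the following (the PCI bus address, driver and card model will differ on your hosts); the value to save is the vendor:device pair in brackets, 10de:1e04:

3b:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] [10de:1e04] (rev a1)
        Kernel driver in use: nvidia
        Kernel modules: nvidia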


Once exposed, these GPUs cannot be used by regular pods, only by VMs.

To expose the GPUs and map them to KubeVirt follow the instructions here. Specifically, run:

kubectl edit kubevirt -n kubevirt -o yaml

Add the PCI addresses of all GPUs on all nodes, concatenated by commas, under the resource name kubevirt/vmgpu:

  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
      - GPU
      - HostDevices
    permittedHostDevices:
      pciHostDevices:
      - pciVendorSelector: <PCI-ADDRESS>,<PCI-ADDRESS>,
        resourceName: kubevirt/vmgpu

Assign GPUs to VMs

For each virtual machine, you must create a vm object (the VirtualMachine custom resource provided by KubeVirt). The vm object describes the virtual machine and its capabilities.

A Run:ai Project is mapped to a Kubernetes namespace. Unless manually configured, the namespace is runai-<PROJECT-NAME>. Create a vm object in the namespace of the relevant Run:ai Project. See the KubeVirt documentation example. Specifically, the created YAML should look like this:

  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        priorityClassName: <WORKLOAD-TYPE>
        project: <PROJECT-NAME>
    spec:
      schedulerName: runai-scheduler
      domain:
        devices:
          hostDevices:
          - deviceName: kubevirt/vmgpu  # identical name to resourceName above
            name: gpu1  # name here is arbitrary and is not used

Where <WORKLOAD-TYPE> is train or build, and <PROJECT-NAME> is the name of the Run:ai Project.
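
For reference, a complete manifest built from the fragment above could look like the sketch below. It assumes a Project named test, a train workload, and the CirrOS demo container disk from the KubeVirt examples; adjust the name, memory request and disk to your needs:

  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: testvm
    namespace: runai-test            # runai-<PROJECT-NAME>
  spec:
    running: false                   # start later with virtctl
    template:
      metadata:
        labels:
          priorityClassName: train   # <WORKLOAD-TYPE>
          project: test              # <PROJECT-NAME>
      spec:
        schedulerName: runai-scheduler
        domain:
          resources:
            requests:
              memory: 4Gi
          devices:
            disks:
            - name: containerdisk
              disk:
                bus: virtio
            hostDevices:
            - deviceName: kubevirt/vmgpu
              name: gpu1
        volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo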

Turn on KubeVirt feature in Run:ai

  • If you want to upgrade the Run:ai cluster, use the upgrade instructions. During the upgrade, customize the cluster installation by adding the following to the values.yaml file:

        kubevirtCluster:
          enabled: true

  • If you don't want to upgrade the whole cluster, add the same values to your existing values.yaml file, and then run the command:

    helm upgrade runai-cluster runai/runai-cluster -n runai -f values.yaml

  • Make sure the kubevirtCluster: enabled flag is still turned on in runaiconfig:

    kubectl edit runaiconfig runai -n runai

Start a VM


Start the VM using the virtctl command-line tool. For example, for a VM named testvm in the project test:

virtctl start testvm -n runai-test
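
To confirm that the virtual machine instance actually came up, you can also query KubeVirt directly (a VirtualMachineInstance object named after the VM is created when it starts):

kubectl get vmi -n runai-test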

You can now see the VM's pod in Run:ai:

runai list -A
NAME    STATUS   AGE  NODE         IMAGE                                   TYPE  PROJECT  USER  GPUs Allocated (Requested)  PODs Running (Pending)  SERVICE URL(S)
testvm  Running  0s   master-node        test           1 (1)                       1 (0)