Integrations with Run:ai
The table below summarizes the integration capabilities with various third-party products.
Integration support
Support for integrations varies. Integrations marked as supported below work out of the box with Run:ai. For the other integrations, our customer success team has prior experience integrating with the third-party software, and the community portal often contains additional reference documentation, provided on an as-is basis.
The Run:ai community portal is password protected and access is provided to customers and partners.
Integrations
Tool | Category | Run:ai support details | Additional Information |
---|---|---|---|
Triton | Orchestration | Supported | Usage via the Triton Docker base image. Quickstart inference example |
Spark | Orchestration | It is possible to schedule Spark workflows with the Run:ai scheduler. For details, please contact Run:ai customer support. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-Run-Spark-jobs-with-Run-AI | |
Kubeflow Pipelines | Orchestration | It is possible to schedule Kubeflow Pipelines with the Run:ai scheduler. For details, please contact Run:ai customer support. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-integrate-Run-ai-with-Kubeflow | |
Apache Airflow | Orchestration | It is possible to schedule Airflow workflows with the Run:ai scheduler. For details, please contact Run:ai customer support. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-integrate-Run-ai-with-Apache-Airflow | |
Argo Workflows | Orchestration | It is possible to schedule Argo workflows with the Run:ai scheduler. For details, please contact Run:ai customer support. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-integrate-Run-ai-with-Argo-Workflows | |
Seldon Core | Orchestration | It is possible to schedule Seldon Core workloads with the Run:ai scheduler. For details, please contact Run:ai customer success. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-integrate-Run-ai-with-Seldon-Core | |
Jupyter Notebook | Development | Supported | Run:ai provides integrated support with Jupyter Notebooks. Quickstart example: https://docs.run.ai/latest/Researcher/Walkthroughs/quickstart-jupyter/ |
JupyterHub | Development | It is possible to submit Run:ai workloads via JupyterHub. For more information, please contact Run:ai customer support. | |
PyCharm | Development | Supported | Containers created by Run:ai can be accessed via PyCharm. PyCharm example |
VS Code | Development | Supported | Containers created by Run:ai can be accessed via Visual Studio Code (example). Visual Studio Code web can also be launched automatically from the Run:ai console (example). |
Kubeflow Notebooks | Development | It is possible to launch a Kubeflow Notebook with the Run:ai scheduler. For details, please contact Run:ai customer support. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-integrate-Run-ai-with-Kubeflow | |
Ray | Training, Inference, Data Processing | It is possible to schedule Ray jobs with the Run:ai scheduler. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-Integrate-Run-ai-with-Ray | |
TensorBoard | Experiment tracking | Supported | Run:ai comes with a preset TensorBoard Environment asset. TensorBoard example. Additional sample |
Weights & Biases | Experiment tracking | It is possible to schedule W&B workloads with the Run:ai scheduler. For details, please contact Run:ai customer success. | |
ClearML | Experiment tracking | It is possible to schedule ClearML workloads with the Run:ai scheduler. For details, please contact Run:ai customer success. | |
MLflow | Model Serving | It is possible to use MLflow together with the Run:ai scheduler. For details, please contact Run:ai customer support. Sample code can be found in the Run:ai customer success community portal: https://runai.my.site.com/community/s/article/How-to-integrate-Run-ai-with-MLflow | Additional MLflow sample |
Hugging Face | Repositories | Supported | Run:ai provides an out of the box integration with Hugging Face |
Docker Registry | Repositories | Supported | Run:ai allows using a Docker registry as a Credentials asset. |
S3 | Storage | Supported | Run:ai communicates with S3 by defining a data source asset. |
GitHub | Storage | Supported | Run:ai communicates with GitHub by defining it as a data source asset. |
TensorFlow | Training | Supported | Run:ai provides out of the box support for submitting TensorFlow workloads via the API or the user interface. |
PyTorch | Training | Supported | Run:ai provides out of the box support for submitting PyTorch workloads via the API or the user interface. |
Kubeflow MPI | Training | Supported | Run:ai provides out of the box support for submitting MPI workloads via the API or the user interface. |
XGBoost | Training | Supported | Run:ai provides out of the box support for submitting XGBoost workloads via the API or the user interface. |
Karpenter | Cost Optimization | Supported | Run:ai provides out of the box support for Karpenter to save cloud costs. Integration notes with Karpenter can be found here |
Kubernetes Workloads Integration
Kubernetes has several built-in resources that encapsulate running Pods. These are called Kubernetes Workloads and should not be confused with Run:ai Workloads.
Examples of such resources are a Deployment that manages a stateless application, or a Job that runs tasks to completion.
Run:ai natively runs Run:ai Workloads. A Run:ai workload encapsulates all the resources needed to run, creates them, and deletes them together. However, Run:ai, being an open platform, also allows the scheduling of any Kubernetes Workload.
For more information see Kubernetes Workloads Integration.
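As an illustration of the mechanism described above, the sketch below builds a plain Kubernetes Job manifest whose pods are handed to the Run:ai scheduler instead of the default kube-scheduler, via the pod spec's `schedulerName` field. This is a minimal sketch under stated assumptions: the scheduler name `runai-scheduler` and the `project` label are common conventions in Run:ai deployments, not values confirmed by this page — verify the actual names in your cluster.

```python
def runai_job_manifest(name: str, image: str, project: str) -> dict:
    """Build a standard Kubernetes Job manifest whose pods are placed
    by the Run:ai scheduler rather than the default kube-scheduler."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {
            "name": name,
            # Assumed label associating the job with a Run:ai project;
            # check your deployment for the exact label key.
            "labels": {"project": project},
        },
        "spec": {
            "template": {
                "spec": {
                    # Hand the pod to the Run:ai scheduler (assumed name).
                    "schedulerName": "runai-scheduler",
                    "containers": [{"name": name, "image": image}],
                    "restartPolicy": "Never",
                }
            }
        },
    }


manifest = runai_job_manifest("demo-job", "ubuntu:22.04", "team-a")
print(manifest["spec"]["template"]["spec"]["schedulerName"])  # prints: runai-scheduler
```

Serialized to YAML, the result is an ordinary Job that can be applied with `kubectl apply -f`; only the `schedulerName` line (and the project label) differ from a stock Kubernetes Job.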