Uninstall self-hosted OpenShift installation
Uninstall Run:ai
See the uninstall section here.
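
For orientation, a typical self-hosted uninstall removes the Run:ai cluster and control plane Helm releases and then deletes the OpenShift projects that hosted them. The sketch below is illustrative only: it assumes the cluster was installed as a Helm release named runai-cluster in the runai namespace and the control plane as runai-backend in the runai-backend namespace. Confirm the actual release and namespace names with helm list -A, and treat the uninstall section linked above as the authoritative procedure.

```shell
# List Helm releases to confirm the Run:ai release and namespace names
# (the names used below are assumptions, not guaranteed defaults).
helm list -A

# Remove the Run:ai cluster release.
helm uninstall runai-cluster -n runai

# Remove the Run:ai control plane (backend) release.
helm uninstall runai-backend -n runai-backend

# Delete the OpenShift projects that held the Run:ai components.
oc delete project runai
oc delete project runai-backend
```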