Submitting Workloads via HTTP/REST
You can submit Workloads via HTTP calls, using the Kubernetes REST API.
Submit Workload Example
To submit a workload via HTTP, run the following:
```bash
curl -X POST \ # (1)
  'https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/trainingworkloads' \
  --header 'Content-Type: application/yaml' \
  --header 'Authorization: Bearer <BEARER>' \ # (2)
  --data-raw 'apiVersion: run.ai/v2alpha1
kind: TrainingWorkload # (3)
metadata:
  name: job-1
spec:
  gpu:
    value: "1"
  image:
    value: gcr.io/run-ai-demo/quickstart
  name:
    value: job-1
'
```
1. Replace `<IP>` with the Kubernetes control-plane endpoint (it can be found in the kubeconfig file). Replace `<PROJECT>` with the name of the Run:ai namespace for the specific Project (typically `runai-<Project-Name>`). Replace `trainingworkloads` with `interactiveworkloads`, `distributedworkloads`, or `inferenceworkloads` according to the Workload type.
2. Add a Bearer token. To obtain a Bearer token, see API authentication.
3. See Submitting a Workload via YAML for an explanation of the YAML-based Workload.
Run `runai list jobs` to see the new Workload.
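You can also verify the submission over REST itself. The sketch below, using the same `<IP>`, `<PROJECT>`, and `<BEARER>` placeholders as above, sends a GET to the same collection endpoint, which is the standard Kubernetes way to list namespaced custom resources; a successful submission should show `job-1` among the returned items.

```bash
# List all TrainingWorkloads in the Project namespace (sketch, same placeholders as above).
# Add -k or --cacert <path> if the control-plane certificate is not trusted on your machine.
curl -X GET \
  'https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/trainingworkloads' \
  --header 'Authorization: Bearer <BEARER>'
```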
Delete Workload Example
To delete a workload, run:
```bash
curl -X DELETE \ # (1)
  'https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/trainingworkloads/<JOB-NAME>' \
  --header 'Content-Type: application/yaml' \
  --header 'Authorization: Bearer <BEARER>' # (2)
```
1. Replace `<IP>` with the Kubernetes control-plane endpoint (it can be found in the kubeconfig file). Replace `<PROJECT>` with the name of the Run:ai namespace for the specific Project (typically `runai-<Project-Name>`). Replace `trainingworkloads` with `interactiveworkloads`, `distributedworkloads`, or `inferenceworkloads` according to the Workload type. Replace `<JOB-NAME>` with the name of the Job.
2. Add a Bearer token. To obtain a Bearer token, see API authentication.
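If you are unsure of the exact Job name, you can read a single Workload back before deleting it. This is a sketch using the same placeholders; the URL follows the standard Kubernetes pattern for reading one namespaced custom resource.

```bash
# Fetch a single TrainingWorkload by name to inspect it before deletion (sketch).
curl -X GET \
  'https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/trainingworkloads/<JOB-NAME>' \
  --header 'Authorization: Bearer <BEARER>'
```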
Suspend/Stop Workload Example
To suspend or stop a workload, run:
```bash
curl -X PATCH \ # (1)
  'https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/interactiveworkloads/<JOB-NAME>' \
  --header 'Content-Type: application/merge-patch+json' \
  --header 'Authorization: Bearer <TOKEN>' \ # (2)
  --data '{"spec":{"active": {"value": "false"}}}'
```
1. Replace `<IP>` with the Kubernetes control-plane endpoint (it can be found in the kubeconfig file). Replace `<PROJECT>` with the name of the Run:ai namespace for the specific Project (typically `runai-<Project-Name>`). Replace `interactiveworkloads` with `trainingworkloads`, `distributedworkloads`, or `inferenceworkloads` according to the Workload type. Replace `<JOB-NAME>` with the name of the Job.
2. Add a Bearer token. To obtain a Bearer token, see API authentication.
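Note that the request body is a JSON merge patch, so the Kubernetes API server expects a patch-specific Content-Type such as `application/merge-patch+json`. To resume a suspended workload, the same PATCH with the value set back to `"true"` should work; the sketch below assumes that `spec.active` is a simple toggle and otherwise mirrors the request above.

```bash
# Resume a previously suspended workload (sketch; assumes spec.active toggles back to "true").
curl -X PATCH \
  'https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/interactiveworkloads/<JOB-NAME>' \
  --header 'Content-Type: application/merge-patch+json' \
  --header 'Authorization: Bearer <TOKEN>' \
  --data '{"spec":{"active": {"value": "true"}}}'
```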
Using other Programming Languages
You can use any Kubernetes client library together with the YAML documentation above to submit workloads via other programming languages. For more information see Kubernetes client libraries.
Python Example
Create the following file and run it with Python:
create-train.py
```python
import json
import requests

# (1)
url = "https://<IP>:6443/apis/run.ai/v2alpha1/namespaces/<PROJECT>/trainingworkloads"

payload = json.dumps({
    "apiVersion": "run.ai/v2alpha1",
    "kind": "TrainingWorkload",
    "metadata": {
        "name": "train1",
        "namespace": "runai-team-a"
    },
    "spec": {
        "image": {
            "value": "gcr.io/run-ai-demo/quickstart"
        },
        "name": {
            "value": "train1"
        },
        "gpu": {
            "value": "1"
        }
    }
})

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer <TOKEN>'  # (2)
}

response = requests.request("POST", url, headers=headers, data=payload)  # (3)

print(json.dumps(json.loads(response.text), indent=4))
```
1. Replace `<IP>` with the Kubernetes control-plane endpoint (it can be found in the kubeconfig file). Replace `<PROJECT>` with the name of the Run:ai namespace for the specific Project (typically `runai-<Project-Name>`). Replace `trainingworkloads` with `interactiveworkloads`, `distributedworkloads`, or `inferenceworkloads` according to the Workload type.
2. Add a Bearer token. To obtain a Bearer token, see API authentication.
3. If you do not have a valid certificate, you can add `verify=False` to the `requests.request` call.
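The script's only third-party dependency is the `requests` library. A minimal way to run it (assuming `pip` and `python` point at the interpreter you want to use):

```bash
# Install the dependency and run the example script (sketch).
pip install requests
python create-train.py
```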