Allow external access to containers
Researchers who work with containers sometimes need to expose ports in order to access the container remotely. Some examples:
- Using a Jupyter notebook that runs within the container
- Using PyCharm to run Python commands remotely
- Using TensorBoard to view machine learning visualizations
When using Docker, Researchers expose ports by declaring them when starting the container. Run:AI has similar syntax.
Run:AI is based on Kubernetes. Kubernetes offers an abstraction of the container's location. This complicates the exposure of ports. Kubernetes offers a number of alternative ways to expose ports. With Run:AI you can use all of these options (see the Alternatives section below), however, Run:AI comes built-in with ingress.
Ingress allows access to Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services. More information about ingress can be found in the Kubernetes documentation.
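To illustrate what such a rule looks like, below is a minimal Ingress manifest sketch that routes one path to a backing service on port 8888. All names here (the Ingress name, the service name, the path) are illustrative assumptions, not objects that Run:AI creates for you:

```yaml
# Minimal Ingress sketch (illustrative names): requests whose path starts
# with /test-ingress are routed to the ClusterIP service test-ingress-svc
# on port 8888.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
    - http:
        paths:
          - path: /test-ingress
            pathType: Prefix
            backend:
              service:
                name: test-ingress-svc
                port:
                  number: 8888
```

In practice Run:AI creates the equivalent objects for you when a Workload is submitted with `--service-type=ingress`, as shown later in this document.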
Before installing ingress, you must obtain an IP Address or an IP address range which is external to the cluster.
A Run:AI cluster is installed by:
- Accessing the Administrator User Interface Clusters area at https://app.run.ai/clusters
- Downloading a YAML file
- Applying it to Kubernetes.

Before applying the YAML file, you must edit it. Search for the localLoadBalancer section:

```
localLoadBalancer:
  enabled: true
  ipRangeFrom: 10.0.2.1
  ipRangeTo: 10.0.2.2
```

Set enabled to true and set the IP range appropriately.
To add or change a load balancer after the system has been installed, run:

```
kubectl edit runaiconfig runai -n runai
```

Search for localLoadBalancer and edit the fields as described above. Then, to apply the changes, run:

```
kubectl rollout restart deployment runai-metallb-controller -n runai
```
The Researcher uses the Run:AI CLI to set the method type and the ports when submitting the Workload. Example:
```
runai submit test-ingress -i jupyter/base-notebook -g 1 --interactive \
  --service-type=ingress --port 8888:8888 \
  --command -- start-notebook.sh --NotebookApp.base_url=test-ingress
```
After submitting a Job through the Run:AI CLI, run:

```
runai list jobs
```
You will see the service URL with which to access the Jupyter notebook. The URL is composed of the ingress end-point, the Job name, and the port (e.g. https://10.255.174.13/test-ingress-8888).
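How the three parts combine can be sketched in shell. The IP address, Job name, and port below are just the values from the example above, not fixed values:

```shell
# Illustrative only: compose the service URL from the ingress end-point,
# the Job name, and the port, using the example values from this document.
INGRESS_IP=10.255.174.13
JOB_NAME=test-ingress
PORT=8888
echo "https://${INGRESS_IP}/${JOB_NAME}-${PORT}"
# prints https://10.255.174.13/test-ingress-8888
```

In a real cluster, `runai list jobs` prints the finished URL for you, so this composition is only useful for understanding where each part comes from.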
As noted above, Kubernetes abstracts the container's location, which complicates the exposure of ports. Kubernetes offers a number of alternative ways to expose ports:
- NodePort - Exposes the Service on each Node's IP at a static port (the NodePort). You can reach the NodePort Service from outside the cluster by requesting <NODE-IP>:<NODE-PORT>, regardless of which node the container resides on.
- LoadBalancer - Useful for cloud environments. Exposes the Service externally using a cloud provider’s load balancer.
- Ingress (see the example above) - Allows access to Kubernetes services from outside the Kubernetes cluster. You configure access by creating a collection of rules that define which inbound connections reach which services. More information about ingress can be found in the Kubernetes documentation.
- Port Forwarding (see the Quickstart linked below) - Simple port forwarding allows access to the container via localhost:<Port>.
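To make the NodePort option above concrete, a minimal Service manifest might look like the sketch below. The service name, selector label, and port numbers are assumptions for illustration, not objects Run:AI generates:

```yaml
# Hypothetical NodePort Service: exposes container port 8888 on a static
# port of every node's IP. Kubernetes picks the node port automatically
# unless nodePort is set explicitly.
apiVersion: v1
kind: Service
metadata:
  name: test-ingress-nodeport
spec:
  type: NodePort
  selector:
    app: test-ingress   # assumed label on the Workload's pod
  ports:
    - port: 8888        # service port
      targetPort: 8888  # container port
```

With Run:AI, the equivalent is achieved by submitting the Workload with `--service-type=nodeport` instead of writing the manifest yourself.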
See https://kubernetes.io/docs/concepts/services-networking/service for further details.
To learn how to use port forwarding see Quickstart document: Launch an Interactive Build Workload with Connected Ports.