# Workflow Patterns

## Required Permissions
K8sClusterManagers.jl requires a minimal set of permissions for managing worker Pods within the cluster. These permissions are documented below, along with a ServiceAccount that makes use of them:
```yaml
# Minimal set of permissions required by K8sClusterManagers.jl
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: julia-manager-role
rules:
  - apiGroups: [""]  # "" indicates the core API group
    resources: ["pods"]
    verbs: ["create", "delete", "get", "patch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
---
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
apiVersion: v1
kind: ServiceAccount
metadata:
  name: julia-manager-serviceaccount
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: julia-manager-role-binding
roleRef:
  kind: Role
  name: julia-manager-role
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: julia-manager-serviceaccount
```
## Use K8sClusterManager only within a cluster

Since the `K8sClusterManager` can only be used when running inside of a Kubernetes Pod, you may want to use it conditionally. The `isk8s` predicate provides a convenient way of determining whether the running Julia process is executing within a K8s Pod:
```julia
using Distributed, K8sClusterManagers

# Use the K8s cluster manager when running inside a Pod; otherwise
# fall back to spawning local worker processes.
manager = if isk8s()
    K8sClusterManager(n)
else
    Distributed.LocalManager(n)
end

addprocs(manager; exeflags="--project")
```
## Executing a script
Depending on your use case you may find yourself wanting to execute a script on the K8s cluster. One basic workflow would be as follows.
1. Write a "script.jl" which uses `K8sClusterManager`.

2. Build and push a Docker image containing the "script.jl" and the required dependencies:

   ```sh
   docker build -t $IMAGE .
   docker push $IMAGE
   ```

3. Define a Kubernetes manifest ("script-example.template.yaml") which executes the Docker image on the cluster. Note that the use of `envsubst` will substitute `${...}` with the respectively named environment variable:

   ```yaml
   apiVersion: v1
   kind: Pod
   metadata:
     generateName: script-example-
   spec:
     serviceAccountName: "${PROJECT}-service-account"
     restartPolicy: Never
     containers:
       - name: manager
         image: "${IMAGE}"
         imagePullPolicy: Always
         command: ["julia", "script.jl"]
         args: ["${ARG}"]
   ```

4. Create the resource which will run our script:

   ```sh
   # Expects that `PROJECT`, `IMAGE`, and `ARG` are all predefined
   cat script-example.template.yaml | envsubst | kubectl create -f -
   ```
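Because the manifest uses `generateName`, each `kubectl create` produces a Pod with a unique suffix. One way to follow the script's output is to capture the generated name at creation time (a sketch; the `-o name` output format and the container name `manager` match the manifest above):

```sh
# Create the Pod and capture its generated name (e.g. pod/script-example-abcde)
POD=$(cat script-example.template.yaml | envsubst | kubectl create -o name -f -)

# Stream the script's output until it finishes
kubectl logs -f "$POD" -c manager
```

With `restartPolicy: Never`, the Pod remains after the script exits, so remember to delete it (e.g. `kubectl delete "$POD"`) once you have collected its logs.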