Until recently, all Kubernetes implementations supported the Docker APIs, and the initial Argo implementation depended on them. With the introduction of OpenShift 4, which does not support the Docker APIs, the situation changed. To support clusters without the Docker APIs, Argo introduced several new executors in addition to Docker: Kubelet, K8s API, and PNS (Process Namespace Sharing).
The containerRuntimeExecutor
config value in the Argo parameters file controls which executor is used.
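As a sketch, the executor is typically selected in Argo's workflow-controller ConfigMap; the ConfigMap name and namespace shown here follow common Argo conventions, but verify them against your installation:

```yaml
# Hypothetical excerpt of the Argo workflow-controller ConfigMap.
# The containerRuntimeExecutor value selects the executor for all workflows.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  # One of: docker, kubelet, k8sapi, pns (see Table A-1)
  containerRuntimeExecutor: pns
```

After changing this value, the workflow controller must pick up the new configuration before subsequently submitted workflows use the selected executor.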
The pros and cons of each executor (based on the information here) are summarized in Table A-1.
Table A-1. Comparison of Argo executors

| Executor | Pros | Cons | Argo Config |
| --- | --- | --- | --- |
| Docker | Supports all workflow examples. Most reliable, well tested, and very scalable. Communicates with the Docker daemon for heavy lifting. | Least secure: requires the host's docker.sock to be mounted (often rejected by OPA). | docker |
| Kubelet | Secure; cannot escape the privileges of the pod's service account. Medium scalability: log retrieval and container polling are done against the kubelet. | Additional kubelet configuration may be required. Can only save parameters/artifacts in volumes (e.g., emptyDir), not in the base image layer (e.g., /tmp). | kubelet |
| K8s API | Secure; cannot escape the privileges of the pod's service account. No extra configuration required. | Least scalable: log retrieval and container polling are done against the Kubernetes API server. Can only save parameters/artifacts in volumes (e.g., emptyDir), not in the base image layer (e.g., /tmp). | k8sapi |
| PNS | Secure; cannot escape the privileges of the pod's service account. Artifacts can be collected from the base image layer. Scalable: process polling is done over procfs rather than the kubelet/K8s API. | Processes no longer run with PID 1. Artifact collection from the base image layer may fail for containers that complete too quickly. Cannot capture artifact directories from the base image layer when a volume is mounted under them. Immature. | pns |
This table should help you pick the correct value for the Argo executor.