The Kubernetes Operator has been merged into the 1.10 release branch of Airflow (the executor in experimental mode), along with a fully Kubernetes-native scheduler called the Kubernetes Executor. The KubernetesPodOperator uses the Kubernetes API to launch a pod in a Kubernetes cluster: by supplying an image URL and a command with optional arguments, the operator uses the Kubernetes Python client to generate a Kubernetes API request that dynamically launches those individual pods.

When the Operator Pod is done, it is cleaned up quickly. However, the Worker Pod remains in the "Completed" status, like this:

```
Name:       testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3
Mounts:
  /opt/airflow/airflow.cfg from airflow-config (ro,path="airflow.cfg")
  /opt/airflow/dags from airflow-dags (ro,path="airflow/dags")
  /opt/airflow/logs from airflow-logs (rw,path="airflow/logs")
  /opt/airflow/pod_template_file.yaml from airflow-config (ro,path="pod_template_file.yaml")
  /var/run/secrets/kubernetes.io/serviceaccount from scheduler-serviceaccount-token-wt57g (ro)
Volumes:
  Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
  Type:       ConfigMap (a volume populated by a ConfigMap)
  Type:       Secret (a volume populated by a Secret)
  SecretName: scheduler-serviceaccount-token-wt57g
Events:
  Normal  Scheduled       Successfully assigned airflow-build/testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3 to aks-default-23404167-vmss000000
  Normal  Pulled   75s  kubelet, aks-default-23404167-vmss000000  Container image "apache/airflow:2.0.0-python3.8" already present on machine
  Normal  Created  74s  kubelet, aks-default-23404167-vmss000000  Created container base
  Normal  Started  74s  kubelet, aks-default-23404167-vmss000000  Started container base
```

The scheduler tries to adopt the completed Worker pod.
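To make the adoption step concrete, here is a minimal sketch of the kind of check the scheduler performs when adopting completed worker pods. It uses plain dicts instead of the Kubernetes client's `V1Pod` objects, and it assumes the executor labels worker pods with `airflow-worker: <scheduler job id>`; this is an illustration, not Airflow's actual implementation.

```python
# Hypothetical sketch (not Airflow's real code): find completed worker pods
# that still carry another scheduler's `airflow-worker` label, i.e. the pods
# a newly started scheduler would try to adopt.
def adoptable_pods(pods, my_job_id):
    """Completed worker pods still labelled with a different scheduler job id."""
    return [
        p for p in pods
        if p["status"]["phase"] == "Succeeded"  # kubectl shows this as "Completed"
        and p["metadata"].get("labels", {}).get("airflow-worker") not in (None, my_job_id)
    ]


pods = [
    {"metadata": {"labels": {"airflow-worker": "17"}}, "status": {"phase": "Succeeded"}},
    {"metadata": {"labels": {"airflow-worker": "42"}}, "status": {"phase": "Succeeded"}},
    {"metadata": {"labels": {"airflow-worker": "17"}}, "status": {"phase": "Running"}},
]
# Only the first pod is both finished and labelled with a foreign job id.
orphans = adoptable_pods(pods, "42")
```

If adoption keeps failing, this is the set of pods the scheduler would repeatedly try (and fail) to re-label, which matches the "tries and tries" behaviour described below.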
This question has been bothering me for days now and I can't seem to figure it out on my own. This might be a hard one to solve, but I'm depending on you!

Our Airflow instance is deployed using the Kubernetes Executor. The Executor starts Worker Pods, which in turn start Pods with our data-transformation logic. The Worker and Operator Pods all run fine, but Airflow has trouble adopting the `status.phase: 'Completed'` pods. It tries and tries, but to no avail. All the pods are running in the `airflow-build` Kubernetes namespace.

The Airflow workers are created with this template (`pod_template_file.yaml`):

```
ServiceAccountName: scheduler-serviceaccount
MountPath: /opt/airflow/pod_template_file.yaml
```

My simple DAG looks like this:

```python
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from airflow import configuration as conf
from kubernetes.client import CoreV1Api, models as k8s

namespace = conf.get('kubernetes', 'NAMESPACE')
```

The task is a `KubernetesPodOperator` created with `service_account_name='scheduler-serviceaccount'`.
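For reference, the operator's API request ultimately boils down to a pod manifest built from the values above. The sketch below assembles such a manifest with plain dicts (no Airflow or cluster needed); `build_pod_manifest` is a hypothetical helper, not an Airflow API, and the values are taken from the question.

```python
# Hypothetical helper (not part of Airflow): the shape of the pod manifest the
# KubernetesPodOperator's API request carries, filled with the question's values.
def build_pod_manifest(name, image, namespace, service_account):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "serviceAccountName": service_account,
            "restartPolicy": "Never",  # task pods run to completion, no restarts
            "containers": [{"name": "base", "image": image}],
        },
    }


manifest = build_pod_manifest(
    "testmikemiketesttask-f312cd42164b4907af4214e8ee7af8b3",
    "apache/airflow:2.0.0-python3.8",
    "airflow-build",
    "scheduler-serviceaccount",
)
```

The `serviceAccountName` here is what determines which RBAC permissions the pod (and the scheduler watching it) can use against the Kubernetes API.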