status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 23,670 | ["airflow/www/static/js/dags.js", "airflow/www/views.py", "tests/www/views/test_views_acl.py"] | Airflow 2.3.0: can't filter by owner if selected from dropdown | ### Apache Airflow version
2.3.0 (latest released)
### What happened
On a clean install of 2.3.0, whenever I try to filter by owner by selecting the name from the dropdown (which correctly detects the owner's name), it returns the following error:
`DAG "ecodina" seems to be missing from DagBag.`
Webserver's log:
```
127.0.0.1 - - [12/May/2022:12:27:47 +0000] "GET /dagmodel/autocomplete?query=ecodin&status=all HTTP/1.1" 200 17 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "GET /dags/ecodina/grid?search=ecodina HTTP/1.1" 302 217 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "GET /home HTTP/1.1" 200 35774 "http://localhost/home?search=ecodina" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /blocked HTTP/1.1" 200 2 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /last_dagruns HTTP/1.1" 200 402 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /dag_stats HTTP/1.1" 200 333 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
127.0.0.1 - - [12/May/2022:12:27:50 +0000] "POST /task_stats HTTP/1.1" 200 1194 "http://localhost/home" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
```
Instead, if I write the owner's name fully and avoid selecting it from the dropdown, it works as expected since it constructs the correct URL:
`my.airflow.com/home?search=ecodina`
### What you think should happen instead
The DAGs table should only show the selected owner's DAGs.
### How to reproduce
- Start the Airflow Webserver
- Connect to the Airflow webpage
- Type an owner name in the _Search DAGs_ textbox and select it from the dropdown
### Operating System
CentOS Linux 8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Installed in a conda environment, as if it were a virtualenv:
- `conda create -c conda-forge -n airflow python=3.9`
- `conda activate airflow`
- `pip install "apache-airflow[postgres]==2.3.0" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
Database: PostgreSQL 13
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23670 | https://github.com/apache/airflow/pull/23804 | 70b41e46b46e65c0446a40ab91624cb2291a5039 | 29afd35b9cfe141b668ce7ceccecdba60775a8ff | "2022-05-12T12:33:06Z" | python | "2022-05-24T13:43:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,669 | ["docs/README.rst"] | Fix ./breeze build-docs command options in docs/README.rst | ### What do you see as an issue?
I got an error when executing the `./breeze build-docs -- --help` command documented in docs/README.rst.
```bash
% ./breeze build-docs -- --help
Usage: breeze build-docs [OPTIONS]
Try running the '--help' flag for more information.
╭─ Error ──────────────────────────────────────────────────╮
│ Got unexpected extra argument (--help)                   │
╰──────────────────────────────────────────────────────────╯
To find out more, visit
https://github.com/apache/airflow/blob/main/BREEZE.rst
```
### Solving the problem
"--" in option should be removed.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23669 | https://github.com/apache/airflow/pull/23671 | 3138604b264878f27505223bd14c7814eacc1e57 | 3fa57168a520d8afe0c06d8a0200dd3517f43078 | "2022-05-12T12:17:00Z" | python | "2022-05-12T12:33:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,666 | ["airflow/providers/amazon/aws/transfers/s3_to_sql.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/operators/transfer/s3_to_sql.rst", "tests/providers/amazon/aws/transfers/test_s3_to_sql.py", "tests/system/providers/amazon/aws/example_s3_to_sql.py"] | Add transfers operator S3 to SQL / SQL to SQL | ### Description
Should we add an S3 to SQL operator to the AWS transfers?
### Use case/motivation
1. After processing data with Spark/Glue (or similar), we need to publish the results to a SQL database.
2. Synchronize data between two SQL databases.
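Until a dedicated transfer operator lands, the same effect can be approximated with a `PythonOperator` that combines `S3Hook` with a SQL hook's `insert_rows`. The CSV-parsing step for an object fetched from S3 might look like the following minimal sketch (the function name and the assumption that the payload is CSV are mine, not from the issue):

```python
import csv
import io


def s3_csv_to_rows(body_text: str) -> list:
    """Parse a CSV payload (as read from an S3 object) into tuples
    suitable for a DB-API ``executemany`` / ``insert_rows`` call."""
    reader = csv.reader(io.StringIO(body_text))
    return [tuple(row) for row in reader]
```

The resulting rows could then be handed to e.g. `PostgresHook.insert_rows(table, rows)` inside the callable.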
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23666 | https://github.com/apache/airflow/pull/29085 | e5730364b4eb5a3b30e815ca965db0f0e710edb6 | efaed34213ad4416e2f4834d0cd2f60c41814507 | "2022-05-12T09:41:35Z" | python | "2023-01-23T21:53:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,642 | ["airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py"] | Dynamic Task Crashes scheduler - Non Empty Return | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I have a DAG that looks like the one below.
When I uncomment `py_job` (a dynamically mapped `PythonOperator`), it works well with `pull_messages` (TaskFlow API).
When I try to do the same with `DatabricksRunNowOperator`, it crashes the scheduler with the error below.
Related issues: #23486
### Sample DAG
```
import json
import pendulum
from airflow.decorators import dag, task
from airflow.operators.python import PythonOperator
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator
@dag(
schedule_interval=None,
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
catchup=False,
tags=['example'],
)
def tutorial_taskflow_api_etl():
def random(*args, **kwargs):
print ("==== kwargs inside random ====", args, kwargs)
print ("I'm random")
return 49
@task
def pull_messages():
return [["hi"], ["hello"]]
op = DatabricksRunNowOperator.partial(
task_id = "new_job",
job_id=42,
notebook_params={"dry-run": "true"},
python_params=["douglas adams", "42"],
spark_submit_params=["--class", "org.apache.spark.examples.SparkPi"]
).expand(jar_params=pull_messages())
# py_job = PythonOperator.partial(
# task_id = 'py_job',
# python_callable=random
# ).expand(op_args= pull_messages())
tutorial_etl_dag = tutorial_taskflow_api_etl()
```
### Error
```
[2022-05-11 11:46:30 +0000] [40] [INFO] Worker exiting (pid: 40)
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.9/site-packages/astronomer/airflow/version_check/plugin.py", line 29, in run_before
fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 906, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/usr/local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1148, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 522, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 658, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/usr/local/lib/python3.9/site-packages/airflow/models/dagrun.py", line 714, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 609, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 595, in _resolve_map_lengths
raise RuntimeError(f"Failed to populate all mapping metadata; missing: {keys}")
RuntimeError: Failed to populate all mapping metadata; missing: 'jar_params'
[2022-05-11 11:46:30 +0000] [31] [INFO] Shutting down: Master
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23642 | https://github.com/apache/airflow/pull/23771 | 5e3f652397005c5fac6c6b0099de345b5c39148d | 3849ebb8d22bbc229d464c4171c9b5ff960cd089 | "2022-05-11T11:56:36Z" | python | "2022-05-18T19:43:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,639 | ["airflow/models/trigger.py"] | Triggerer process die with DB Deadlock | ### Apache Airflow version
2.2.5
### What happened
When creating many deferrable operators (e.g. `TimeDeltaSensorAsync`), the triggerer component dies because of a DB deadlock.
```
[2022-05-11 02:45:08,420] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5397) starting
[2022-05-11 02:45:08,421] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5398) starting
[2022-05-11 02:45:09,459] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5400) starting
[2022-05-11 02:45:09,461] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5399) starting
[2022-05-11 02:45:10,503] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5401) starting
[2022-05-11 02:45:10,504] {triggerer_job.py:358} INFO - Trigger <airflow.triggers.temporal.DateTimeTrigger moment=2022-05-13T11:10:00+00:00> (ID 5402) starting
[2022-05-11 02:45:11,113] {triggerer_job.py:108} ERROR - Exception when executing TriggererJob._run_trigger_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 106, in _execute
self._run_trigger_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 127, in _run_trigger_loop
Trigger.clean_unused()
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/models/trigger.py", line 91, in clean_unused
session.query(TaskInstance).filter(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 4063, in update
update_op.exec_()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1697, in exec_
self._do_exec()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1895, in _do_exec
self._execute_stmt(update_stmt)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1702, in _execute_stmt
self.result = self.query._execute_crud(stmt, self.mapper)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3568, in _execute_crud
return conn.execute(stmt, self._params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: UPDATE task_instance SET trigger_id=%s WHERE task_instance.state != %s AND task_instance.trigger_id IS NOT NULL]
[parameters: (None, <TaskInstanceState.DEFERRED: 'deferred'>)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
[2022-05-11 02:45:11,118] {triggerer_job.py:111} INFO - Waiting for triggers to clean up
[2022-05-11 02:45:11,592] {triggerer_job.py:117} INFO - Exited trigger loop
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1213, 'Deadlock found when trying to get lock; try restarting transaction')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/cli/commands/triggerer_command.py", line 56, in triggerer
job.run()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 246, in run
self._execute()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 106, in _execute
self._run_trigger_loop()
File "/usr/local/lib/python3.8/site-packages/airflow/jobs/triggerer_job.py", line 127, in _run_trigger_loop
Trigger.clean_unused()
File "/usr/local/lib/python3.8/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.8/site-packages/airflow/models/trigger.py", line 91, in clean_unused
session.query(TaskInstance).filter(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 4063, in update
update_op.exec_()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1697, in exec_
self._do_exec()
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1895, in _do_exec
self._execute_stmt(update_stmt)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1702, in _execute_stmt
self.result = self.query._execute_crud(stmt, self.mapper)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3568, in _execute_crud
return conn.execute(stmt, self._params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1011, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1124, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1316, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1510, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 608, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: UPDATE task_instance SET trigger_id=%s WHERE task_instance.state != %s AND task_instance.trigger_id IS NOT NULL]
[parameters: (None, <TaskInstanceState.DEFERRED: 'deferred'>)]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
```
### What you think should happen instead
The triggerer process should not crash with a deadlock error.
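One common mitigation for this class of failure is to retry the conflicting transaction with backoff instead of letting the process die. A rough sketch of that pattern follows — this is not the actual Airflow fix (which also adjusts row locking), and the `RuntimeError` here is a stand-in for SQLAlchemy's `OperationalError`:

```python
import time


def run_with_deadlock_retries(fn, retries=3, base_delay=0.1):
    """Retry a DB operation when the backend reports a deadlock
    (MySQL error 1213), backing off exponentially between attempts."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for sqlalchemy.exc.OperationalError
            if "Deadlock" not in str(exc) or attempt == retries - 1:
                raise  # not a deadlock, or out of attempts: propagate
            time.sleep(base_delay * 2 ** attempt)
```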
### How to reproduce
Create "test_timedelta" DAG and run it.
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.time_delta import TimeDeltaSensorAsync
default_args = {
"owner": "user",
"start_date": datetime(2021, 2, 8),
"retries": 2,
"retry_delay": timedelta(minutes=20),
"depends_on_past": False,
}
with DAG(
dag_id="test_timedelta",
default_args=default_args,
schedule_interval="10 11 * * *",
max_active_runs=1,
max_active_tasks=2,
catchup=False,
) as dag:
start = DummyOperator(task_id="start")
end = DummyOperator(task_id="end")
for idx in range(800):
tx = TimeDeltaSensorAsync(
task_id=f"sleep_{idx}",
delta=timedelta(days=3),
)
start >> tx >> end
```
### Operating System
uname_result(system='Linux', node='d2845d6331fd', release='5.10.104-linuxkit', version='#1 SMP Thu Mar 17 17:08:06 UTC 2022', machine='x86_64', processor='')
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-druid | 2.3.3
apache-airflow-providers-apache-hive | 2.3.2
apache-airflow-providers-apache-spark | 2.1.3
apache-airflow-providers-celery | 2.1.3
apache-airflow-providers-ftp | 2.1.2
apache-airflow-providers-http | 2.1.2
apache-airflow-providers-imap | 2.2.3
apache-airflow-providers-jdbc | 2.1.3
apache-airflow-providers-mysql | 2.2.3
apache-airflow-providers-postgres | 4.1.0
apache-airflow-providers-redis | 2.0.4
apache-airflow-providers-sqlite | 2.1.3
apache-airflow-providers-ssh | 2.4.3
### Deployment
Other Docker-based deployment
### Deployment details
webserver: 1 instance
scheduler: 1 instance
worker: 1 instance (Celery)
triggerer: 1 instance
redis: 1 instance
Database: 1 instance (mysql)
### Anything else
webserver: 172.19.0.9
scheduler: 172.19.0.7
triggerer: 172.19.0.5
worker: 172.19.0.8
MYSQL (`SHOW ENGINE INNODB STATUS;`)
```
------------------------
LATEST DETECTED DEADLOCK
------------------------
2022-05-11 07:47:49 139953955817216
*** (1) TRANSACTION:
TRANSACTION 544772, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 7 lock struct(s), heap size 1128, 2 row lock(s)
MySQL thread id 20, OS thread handle 139953861383936, query id 228318 172.19.0.5 airflow_user updating
UPDATE task_instance SET trigger_id=NULL WHERE task_instance.state != 'deferred' AND task_instance.trigger_id IS NOT NULL
*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 125 page no 231 n bits 264 index ti_state of table `airflow_db`.`task_instance` trx id 544772 lock_mode X locks rec but not gap
Record lock, heap no 180 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
0: len 6; hex 717565756564; asc queued;;
1: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
2: len 9; hex 736c6565705f323436; asc sleep_246;;
3: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 125 page no 47 n bits 128 index PRIMARY of table `airflow_db`.`task_instance` trx id 544772 lock_mode X locks rec but not gap waiting
Record lock, heap no 55 PHYSICAL RECORD: n_fields 28; compact format; info bits 0
0: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
1: len 9; hex 736c6565705f323436; asc sleep_246;;
2: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
3: len 6; hex 000000085001; asc P ;;
4: len 7; hex 01000001411e2f; asc A /;;
5: len 7; hex 627b6a250b612d; asc b{j% a-;;
6: SQL NULL;
7: SQL NULL;
8: len 7; hex 72756e6e696e67; asc running;;
9: len 4; hex 80000001; asc ;;
10: len 12; hex 643238343564363333316664; asc d2845d6331fd;;
11: len 4; hex 726f6f74; asc root;;
12: len 4; hex 8000245e; asc $^;;
13: len 12; hex 64656661756c745f706f6f6c; asc default_pool;;
14: len 7; hex 64656661756c74; asc default;;
15: len 4; hex 80000002; asc ;;
16: len 20; hex 54696d6544656c746153656e736f724173796e63; asc TimeDeltaSensorAsync;;
17: len 7; hex 627b6a240472e0; asc b{j$ r ;;
18: SQL NULL;
19: len 4; hex 80000002; asc ;;
20: len 5; hex 80057d942e; asc } .;;
21: len 4; hex 80000001; asc ;;
22: len 4; hex 800021c7; asc ! ;;
23: len 30; hex 36353061663737642d363762372d343166382d383439342d636637333061; asc 650af77d-67b7-41f8-8494-cf730a; (total 36 bytes);
24: SQL NULL;
25: SQL NULL;
26: SQL NULL;
27: len 2; hex 0400; asc ;;
*** (2) TRANSACTION:
TRANSACTION 544769, ACTIVE 0 sec updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 7 lock struct(s), heap size 1128, 4 row lock(s), undo log entries 2
MySQL thread id 12010, OS thread handle 139953323235072, query id 228319 172.19.0.8 airflow_user updating
UPDATE task_instance SET start_date='2022-05-11 07:47:49.745773', state='running', try_number=1, hostname='d2845d6331fd', job_id=9310 WHERE task_instance.task_id = 'sleep_246' AND task_instance.dag_id = 'test_timedelta' AND task_instance.run_id = 'scheduled__2022-05-09T11:10:00+00:00'
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 125 page no 47 n bits 120 index PRIMARY of table `airflow_db`.`task_instance` trx id 544769 lock_mode X locks rec but not gap
Record lock, heap no 55 PHYSICAL RECORD: n_fields 28; compact format; info bits 0
0: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
1: len 9; hex 736c6565705f323436; asc sleep_246;;
2: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
3: len 6; hex 000000085001; asc P ;;
4: len 7; hex 01000001411e2f; asc A /;;
5: len 7; hex 627b6a250b612d; asc b{j% a-;;
6: SQL NULL;
7: SQL NULL;
8: len 7; hex 72756e6e696e67; asc running;;
9: len 4; hex 80000001; asc ;;
10: len 12; hex 643238343564363333316664; asc d2845d6331fd;;
11: len 4; hex 726f6f74; asc root;;
12: len 4; hex 8000245e; asc $^;;
13: len 12; hex 64656661756c745f706f6f6c; asc default_pool;;
14: len 7; hex 64656661756c74; asc default;;
15: len 4; hex 80000002; asc ;;
16: len 20; hex 54696d6544656c746153656e736f724173796e63; asc TimeDeltaSensorAsync;;
17: len 7; hex 627b6a240472e0; asc b{j$ r ;;
18: SQL NULL;
19: len 4; hex 80000002; asc ;;
20: len 5; hex 80057d942e; asc } .;;
21: len 4; hex 80000001; asc ;;
22: len 4; hex 800021c7; asc ! ;;
23: len 30; hex 36353061663737642d363762372d343166382d383439342d636637333061; asc 650af77d-67b7-41f8-8494-cf730a; (total 36 bytes);
24: SQL NULL;
25: SQL NULL;
26: SQL NULL;
27: len 2; hex 0400; asc ;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 125 page no 231 n bits 264 index ti_state of table `airflow_db`.`task_instance` trx id 544769 lock_mode X locks rec but not gap waiting
Record lock, heap no 180 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
0: len 6; hex 717565756564; asc queued;;
1: len 14; hex 746573745f74696d6564656c7461; asc test_timedelta;;
2: len 9; hex 736c6565705f323436; asc sleep_246;;
3: len 30; hex 7363686564756c65645f5f323032322d30352d30395431313a31303a3030; asc scheduled__2022-05-09T11:10:00; (total 36 bytes);
*** WE ROLL BACK TRANSACTION (1)
```
Airflow env
```
AIRFLOW__CELERY__RESULT_BACKEND=db+mysql://airflow_user:airflow_pass@mysql/airflow_db
AIRFLOW__CORE__DEFAULT_TIMEZONE=KST
AIRFLOW__CELERY__BROKER_URL=redis://redis:6379/0
AIRFLOW__CORE__LOAD_EXAMPLES=False
AIRFLOW__WEBSERVER__DEFAULT_UI_TIMEZONE=KST
AIRFLOW_HOME=/home/deploy/airflow
AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL=30
AIRFLOW__CORE__EXECUTOR=CeleryExecutor
AIRFLOW__WEBSERVER__SECRET_KEY=aoiuwernholo
AIRFLOW__DATABASE__LOAD_DEFAULT_CONNECTIONS=False
AIRFLOW__CORE__SQL_ALCHEMY_CONN=mysql+mysqldb://airflow_user:airflow_pass@mysql/airflow_db
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23639 | https://github.com/apache/airflow/pull/24071 | 5087f96600f6d7cc852b91079e92d00df6a50486 | d86ae090350de97e385ca4aaf128235f4c21f158 | "2022-05-11T08:03:17Z" | python | "2022-06-01T17:54:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,623 | ["airflow/providers/snowflake/hooks/snowflake.py", "tests/providers/snowflake/hooks/test_snowflake.py"] | SnowflakeHook.run() raises UnboundLocalError exception if sql argument is empty | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==2.3.0
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux AMI
### Deployment
MWAA
### Deployment details
_No response_
### What happened
If the `sql` parameter is an empty list, the `execution_info` variable is returned without ever having been initialized.
The `execution_info` variable is [defined](https://github.com/apache/airflow/blob/2.3.0/airflow/providers/snowflake/hooks/snowflake.py#L330) only inside the loop over the SQL statements, so if the list of queries is empty, it never gets bound.
```
[...]
snowflake_hook.run(sql=queries, autocommit=True)
File "/usr/local/airflow/.local/lib/python3.7/site-packages/airflow/providers/snowflake/hooks/snowflake.py", line 304, in run
return execution_info
UnboundLocalError: local variable 'execution_info' referenced before assignment
```
### What you think should happen instead
The function could either return an empty list or None.
Perhaps the `execution_info` variable definition could just be moved further up in the function definition so that returning it at the end doesn't raise issues.
Or, there should be a check in the `run` implementation to see if the `sql` argument is empty or not, and appropriately handle what to return from there.
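The failure mode and the suggested fix can be reduced to a few lines. This is a simplified model of the hook's loop, not the hook's actual code:

```python
def run_buggy(sql):
    # Mirrors the reported bug: execution_info is only bound inside the
    # loop, so an empty ``sql`` list leaves it undefined at the return.
    for statement in sql:
        execution_info = [statement]
    return execution_info  # UnboundLocalError when sql == []


def run_fixed(sql):
    execution_info = []  # initialized up front, before the loop
    for statement in sql:
        execution_info.append(statement)
    return execution_info  # safely returns [] for empty input
```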
### How to reproduce
Pass an empty list to the sql argument when calling `SnowflakeHook.run()`.
### Anything else
My script that utilizes `SnowflakeHook.run()` is automated in a way where there aren't always SQL queries to run.
Of course, on my end I can update my code to first check whether the list of SQL queries is populated before calling the hook.
However, it would avoid unintended exceptions if the hook's `run()` function also handled the case where the `sql` argument is empty.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23623 | https://github.com/apache/airflow/pull/23767 | 4c9f7560355eefd57a29afee73bf04273e81a7e8 | 86cfd1244a641a8f17c9b33a34399d9be264f556 | "2022-05-10T14:37:36Z" | python | "2022-05-20T03:59:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,622 | ["airflow/providers/databricks/operators/databricks.py"] | DatabricksSubmitRunOperator and DatabricksRunNowOperator cannot define .json as template_ext | ### Apache Airflow version
2.2.2
### What happened
Introduced in https://github.com/apache/airflow/commit/0a2d0d1ecbb7a72677f96bc17117799ab40853e0, the Databricks operators now define the `template_ext` property as `('.json',)`. This change broke a few of our current DAGs, as they pass the path of a config JSON file that needs to be posted to Databricks as-is. Example:
```python
DatabricksRunNowOperator(
task_id=...,
job_name=...,
python_params=["app.py", "--config", "/path/to/config/inside-docker-image.json"],
databricks_conn_id=...,
email_on_failure=...,
)
```
This snippet makes Airflow load `/path/to/config/inside-docker-image.json` as a template, which is not desired.
@utkarsharma2 @potiuk can this change be reverted, please? It's causing headaches when a json file is provided as part of the dag parameters.
### What you think should happen instead
Use a more specific extension for databricks operators, like ```.json-tpl```
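A minimal sketch of the extension check behind this behaviour (illustrative names, not the exact Airflow implementation): any string argument ending in an extension from `template_ext` is treated as a template file to read, which is why a distinct extension such as `.json-tpl` would leave plain `.json` paths alone.

```python
# Illustrative only: Airflow treats string template-field values that end in a
# template_ext extension as file paths whose contents should be rendered.
def is_template_path(value, template_ext):
    return isinstance(value, str) and value.endswith(tuple(template_ext))

path = "/path/to/config/inside-docker-image.json"
print(is_template_path(path, (".json",)))      # True  -> file gets loaded
print(is_template_path(path, (".json-tpl",)))  # False -> path passed through
```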
### How to reproduce
_No response_
### Operating System
Any
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==2.6.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23622 | https://github.com/apache/airflow/pull/23641 | 84c9f4bf70cbc2f4ba19fdc5aa88791500d4daaa | acf89510cd5a18d15c1a45e674ba0bcae9293097 | "2022-05-10T13:54:23Z" | python | "2022-06-04T21:51:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,613 | ["airflow/providers/google/cloud/example_dags/example_cloud_sql.py", "airflow/providers/google/cloud/operators/cloud_sql.py", "tests/providers/google/cloud/operators/test_cloud_sql.py"] | Add an offload option to CloudSQLExportInstanceOperator validation specification | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
I want to use serverless export to offload the export operation from the primary instance.
https://cloud.google.com/sql/docs/mysql/import-export#serverless
Used CloudSQLExportInstanceOperator with the exportContext.offload flag to perform a serverless export operation.
I got the following warning:
```
{field_validator.py:266} WARNING - The field 'exportContext.offload' is in the body, but is not specified in the validation specification '[{'name': 'fileType', 'allow_empty': False}, {'name': 'uri', 'allow_empty': False}, {'name': 'databases', 'optional': True, 'type': 'list'}, {'name': 'sqlExportOptions', 'type': 'dict', 'optional': True, 'fields': [{'name': 'tables', 'optional': True, 'type': 'list'}, {'name': 'schemaOnly', 'optional': True}]}, {'name': 'csvExportOptions', 'type': 'dict', 'optional': True, 'fields': [{'name': 'selectQuery'}]}]'. This might be because you are using newer API version and new field names defined for that version. Then the warning can be safely ignored, or you might want to upgrade the operatorto the version that supports the new API version.
```
### What you think should happen instead
I think a validation specification for `exportContext.offload` should be added.
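A hedged sketch of what the missing entry could look like, mirroring the dict format of the validation specification quoted in the warning above (treating `offload` as an optional flag is an assumption):

```python
# Sketch only: an 'offload' entry added to the exportContext validation spec,
# written in the same dict format as the existing fields.
export_context_spec = [
    {"name": "fileType", "allow_empty": False},
    {"name": "uri", "allow_empty": False},
    {"name": "offload", "optional": True},  # assumed optional boolean flag
]
print(any(field["name"] == "offload" for field in export_context_spec))  # True
```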
### How to reproduce
Try to use `exportContext.offload`, as in the example below.
```python
CloudSQLExportInstanceOperator(
task_id='export_task',
project_id='some_project',
instance='cloud_sql_instance',
body={
"exportContext": {
"fileType": "csv",
"uri": "gs://my-bucket/export.csv",
"databases": ["some_db"],
"csvExportOptions": {"selectQuery": "select * from some_table limit 10"},
"offload": True
}
},
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23613 | https://github.com/apache/airflow/pull/23614 | 1bd75ddbe3b1e590e38d735757d99b43db1725d6 | 74557e41e3dcedec241ea583123d53176994cccc | "2022-05-10T07:23:07Z" | python | "2022-05-10T09:49:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,610 | ["airflow/executors/celery_kubernetes_executor.py", "airflow/executors/local_kubernetes_executor.py", "tests/executors/test_celery_kubernetes_executor.py", "tests/executors/test_local_kubernetes_executor.py"] | AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback' | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The issue started to occur after upgrading Airflow from v2.2.5 to v2.3.0. The schedulers crash when a DAG's SLA is configured. It only occurs with `CeleryKubernetesExecutor`; tested with `CeleryExecutor`, it works as expected.
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 919, in _do_scheduling
self._send_dag_callbacks_to_processor(dag, callback_to_run)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1179, in _send_dag_callbacks_to_processor
self._send_sla_callbacks_to_processor(dag)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 1195, in _send_sla_callbacks_to_processor
self.executor.send_callback(request)
AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback'
```
### What you think should happen instead
Work like the previous version: SLA callbacks should be sent to the processor without crashing the scheduler.
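A minimal sketch of the kind of fix one would expect (illustrative classes, not the actual Airflow code): the composite executor forwards `send_callback` to a wrapped executor instead of raising `AttributeError`.

```python
# Illustrative only: a composite executor delegating a method it does not
# implement itself to one of its wrapped executors.
class CallbackSinkExecutor:
    def __init__(self):
        self.sent = []

    def send_callback(self, request):
        self.sent.append(request)

class CompositeExecutor:
    def __init__(self, primary):
        self.primary = primary

    def send_callback(self, request):
        # Forward to the wrapped executor instead of raising AttributeError.
        self.primary.send_callback(request)

primary = CallbackSinkExecutor()
CompositeExecutor(primary).send_callback("sla_callback")
print(primary.sent)  # ['sla_callback']
```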
### How to reproduce
1. Use `CeleryKubernetesExecutor`
2. Configure DAG's SLA
DAG to reproduce:
```python
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Example DAG demonstrating the usage of the BashOperator."""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
DEFAULT_ARGS = {
"sla": timedelta(hours=1),
}
with DAG(
dag_id="example_bash_operator",
default_args=DEFAULT_ARGS,
schedule_interval="0 0 * * *",
start_date=datetime(2021, 1, 1),
catchup=False,
dagrun_timeout=timedelta(minutes=60),
tags=["example", "example2"],
params={"example_key": "example_value"},
) as dag:
run_this_last = DummyOperator(
task_id="run_this_last",
)
# [START howto_operator_bash]
run_this = BashOperator(
task_id="run_after_loop",
bash_command="echo 1",
)
# [END howto_operator_bash]
run_this >> run_this_last
for i in range(3):
task = BashOperator(
task_id="runme_" + str(i),
bash_command='echo "{{ task_instance_key_str }}" && sleep 1',
)
task >> run_this
# [START howto_operator_bash_template]
also_run_this = BashOperator(
task_id="also_run_this",
bash_command='echo "run_id={{ run_id }} | dag_run={{ dag_run }}"',
)
# [END howto_operator_bash_template]
also_run_this >> run_this_last
# [START howto_operator_bash_skip]
this_will_skip = BashOperator(
task_id="this_will_skip",
bash_command='echo "hello world"; exit 99;',
dag=dag,
)
# [END howto_operator_bash_skip]
this_will_skip >> run_this_last
if __name__ == "__main__":
dag.cli()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23610 | https://github.com/apache/airflow/pull/23617 | 60a1d9d191fb8fc01893024c897df9632ad5fbf4 | c5b72bf30c8b80b6c022055834fc7272a1a44526 | "2022-05-10T03:29:05Z" | python | "2022-05-10T17:13:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,588 | ["airflow/www/static/js/dag/details/taskInstance/taskActions/ClearInstance.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx"] | After upgrade from Airflow 2.2.4, grid disappears for some DAGs | ### Apache Airflow version
2.3.0 (latest released)
### What happened
After the upgrade from 2.2.4 to 2.3.0, some DAGs grid data seems missing and it renders the UI blank
### What you think should happen instead
When I click the grid for a specific execution date, I expect to be able to click the tasks and view the log, render jinja templating, and clear status
### How to reproduce
Run an upgrade from 2.2.4 to 2.3.0 with a huge database (we have ~750 DAGs with a minimum of 10 tasks each).
In addition, we heavily rely on XCom.
### Operating System
Ubuntu 20.04.3 LTS
### Versions of Apache Airflow Providers
apache-airflow apache_airflow-2.3.0-py3-none-any.whl
apache-airflow-providers-amazon apache_airflow_providers_amazon-3.3.0-py3-none-any.whl
apache-airflow-providers-ftp apache_airflow_providers_ftp-2.1.2-py3-none-any.whl
apache-airflow-providers-http apache_airflow_providers_http-2.1.2-py3-none-any.whl
apache-airflow-providers-imap apache_airflow_providers_imap-2.2.3-py3-none-any.whl
apache-airflow-providers-mongo apache_airflow_providers_mongo-2.3.3-py3-none-any.whl
apache-airflow-providers-mysql apache_airflow_providers_mysql-2.2.3-py3-none-any.whl
apache-airflow-providers-pagerduty apache_airflow_providers_pagerduty-2.1.3-py3-none-any.whl
apache-airflow-providers-postgres apache_airflow_providers_postgres-4.1.0-py3-none-any.whl
apache-airflow-providers-sendgrid apache_airflow_providers_sendgrid-2.0.4-py3-none-any.whl
apache-airflow-providers-slack apache_airflow_providers_slack-4.2.3-py3-none-any.whl
apache-airflow-providers-sqlite apache_airflow_providers_sqlite-2.1.3-py3-none-any.whl
apache-airflow-providers-ssh apache_airflow_providers_ssh-2.4.3-py3-none-any.whl
apache-airflow-providers-vertica apache_airflow_providers_vertica-2.1.3-py3-none-any.whl
### Deployment
Virtualenv installation
### Deployment details
Python 3.8.10
### Anything else
For the affected DAGs, all the time
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23588 | https://github.com/apache/airflow/pull/32992 | 8bfad056d8ef481cc44288c5749fa5c54efadeaa | 943b97850a1e82e4da22e8489c4ede958a42213d | "2022-05-09T13:37:42Z" | python | "2023-08-03T08:29:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,580 | ["airflow/www/static/js/grid/AutoRefresh.jsx", "airflow/www/static/js/grid/Grid.jsx", "airflow/www/static/js/grid/Grid.test.jsx", "airflow/www/static/js/grid/Main.jsx", "airflow/www/static/js/grid/ToggleGroups.jsx", "airflow/www/static/js/grid/api/useGridData.test.jsx", "airflow/www/static/js/grid/details/index.jsx", "airflow/www/static/js/grid/index.jsx", "airflow/www/static/js/grid/renderTaskRows.jsx", "airflow/www/static/js/grid/renderTaskRows.test.jsx"] | `task_id` with `.` e.g. `hello.world` is not rendered in grid view | ### Apache Airflow version
2.3.0 (latest released)
### What happened
`task_id` with `.` e.g. `hello.world` is not rendered in grid view.
### What you think should happen instead
The task should be rendered just fine in Grid view.
### How to reproduce
```python
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Example DAG demonstrating the usage of the BashOperator."""
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
with DAG(
dag_id="example_bash_operator",
schedule_interval="0 0 * * *",
start_date=datetime(2021, 1, 1),
catchup=False,
dagrun_timeout=timedelta(minutes=60),
tags=["example", "example2"],
params={"example_key": "example_value"},
) as dag:
run_this_last = DummyOperator(
task_id="run.this.last",
)
# [START howto_operator_bash]
run_this = BashOperator(
task_id="run.after.loop",
bash_command="echo 1",
)
# [END howto_operator_bash]
run_this >> run_this_last
for i in range(3):
task = BashOperator(
task_id="runme." + str(i),
bash_command='echo "{{ task_instance_key_str }}" && sleep 1',
)
task >> run_this
# [START howto_operator_bash_template]
also_run_this = BashOperator(
task_id="also.run.this",
bash_command='echo "run_id={{ run_id }} | dag_run={{ dag_run }}"',
)
# [END howto_operator_bash_template]
also_run_this >> run_this_last
# [START howto_operator_bash_skip]
this_will_skip = BashOperator(
task_id="this.will.skip",
bash_command='echo "hello world"; exit 99;',
dag=dag,
)
# [END howto_operator_bash_skip]
this_will_skip >> run_this_last
if __name__ == "__main__":
dag.cli()
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23580 | https://github.com/apache/airflow/pull/23590 | 028087b5a6e94fd98542d0e681d947979eb1011f | afdfece9372fed83602d50e2eaa365597b7d0101 | "2022-05-09T07:04:00Z" | python | "2022-05-12T19:48:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,576 | ["setup.py"] | The xmltodict 0.13.0 breaks some emr tests | ### Apache Airflow version
main (development)
### What happened
The xmltodict 0.13.0 breaks some EMR tests (this is happening in `main` currently:
Example: https://github.com/apache/airflow/runs/6343826225?check_suite_focus=true#step:9:13417
```
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_create_job_flow_extra_args: ValueError: Malformatted input
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_create_job_flow_uses_the_emr_config_to_create_a_cluster: ValueError: Malformatted input
tests/providers/amazon/aws/hooks/test_emr.py::TestEmrHook::test_get_cluster_id_by_name: ValueError: Malformatted input
```
Downgrading to 0.12.0 fixes the problem.
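A small sketch of the mitigation expressed as a version guard (stdlib only; the actual fix is a dependency pin in `setup.py`): anything at or above 0.13.0 is treated as incompatible.

```python
# Illustrative version guard mirroring a "<0.13.0" pin for xmltodict.
def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

def is_compatible(installed, upper_exclusive="0.13.0"):
    return version_tuple(installed) < version_tuple(upper_exclusive)

print(is_compatible("0.12.0"))  # True  -> known-good version
print(is_compatible("0.13.0"))  # False -> the release that broke the tests
```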
### What you think should happen instead
The tests should work
### How to reproduce
* Run Breeze
* Run `pytest tests/providers/amazon/aws/hooks/test_emr.py` -> observe it to succeed
* Run `pip install xmltodict==0.13.0` -> observe it being upgraded from 0.12.0
* Run `pytest tests/providers/amazon/aws/hooks/test_emr.py` -> observe it to fail with `Malformed input` error
### Operating System
Any
### Versions of Apache Airflow Providers
Latest from main
### Deployment
Other
### Deployment details
CI
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23576 | https://github.com/apache/airflow/pull/23992 | 614b2329c1603ef1e2199044e2cc9e4b7332c2e0 | eec85d397ef0ecbbe5fd679cf5790adae2ad9c9f | "2022-05-09T01:07:36Z" | python | "2022-05-28T21:58:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,572 | ["airflow/cli/commands/dag_processor_command.py", "tests/cli/commands/test_dag_processor_command.py"] | cli command `dag-processor` uses `[core] sql_alchemy_conn` | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The DAG processor fails to start if `[core] sql_alchemy_conn` is not defined:
```
airflow-local-airflow-dag-processor-1 | [2022-05-08 16:42:35,835] {configuration.py:494} WARNING - section/key [core/sql_alchemy_conn] not found in config
airflow-local-airflow-dag-processor-1 | Traceback (most recent call last):
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-local-airflow-dag-processor-1 | sys.exit(main())
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
airflow-local-airflow-dag-processor-1 | args.func(args)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
airflow-local-airflow-dag-processor-1 | return func(*args, **kwargs)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
airflow-local-airflow-dag-processor-1 | return f(*args, **kwargs)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/dag_processor_command.py", line 53, in dag_processor
airflow-local-airflow-dag-processor-1 | sql_conn: str = conf.get('core', 'sql_alchemy_conn').lower()
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/configuration.py", line 486, in get
airflow-local-airflow-dag-processor-1 | return self._get_option_from_default_config(section, key, **kwargs)
airflow-local-airflow-dag-processor-1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/configuration.py", line 496, in _get_option_from_default_config
airflow-local-airflow-dag-processor-1 | raise AirflowConfigException(f"section/key [{section}/{key}] not found in config")
airflow-local-airflow-dag-processor-1 | airflow.exceptions.AirflowConfigException: section/key [core/sql_alchemy_conn] not found in config
```
### What you think should happen instead
Since https://github.com/apache/airflow/pull/22284 moved `sql_alchemy_conn` to the `[database]` section, `dag-processor` should read the configuration from there.
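A standalone sketch of the expected lookup order using stdlib `configparser` (the real code uses Airflow's `conf` object, so the names here are illustrative): try `[database]` first, then fall back to `[core]` for backwards compatibility.

```python
import configparser

def get_sql_alchemy_conn(cfg):
    # Prefer the new [database] section, falling back to the legacy [core]
    # location for backwards compatibility.
    for section in ("database", "core"):
        if cfg.has_section(section) and cfg.has_option(section, "sql_alchemy_conn"):
            return cfg.get(section, "sql_alchemy_conn").lower()
    raise KeyError("sql_alchemy_conn not found in [database] or [core]")

cfg = configparser.ConfigParser()
cfg.read_string("[database]\nsql_alchemy_conn = sqlite:///Airflow.db\n")
print(get_sql_alchemy_conn(cfg))  # sqlite:///airflow.db
```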
### How to reproduce
Run `airflow dag-processor` without defined `[core] sql_alchemy_conn`
https://github.com/apache/airflow/blob/6e5955831672c71bfc0424dd50c8e72f6fd5b2a7/airflow/cli/commands/dag_processor_command.py#L52-L53
### Operating System
Arch Linux
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23572 | https://github.com/apache/airflow/pull/23575 | 827bfda59b7a0db6ada697ccd01c739d37430b9a | 9837e6d813744e3c5861c32e87b3aeb496d0f88d | "2022-05-08T16:48:55Z" | python | "2022-05-09T08:50:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,566 | ["chart/values.yaml"] | Description of defaultAirflowRepository in values.yaml is misleading | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.3.0 (latest released)
### Kubernetes Version
minikube v1.25.2
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
defaultAirflowRepository is described in ```values.yaml``` as
```yaml
# Default airflow repository -- overrides all the specific images below
defaultAirflowRepository: apache/airflow
```
### What you think should happen instead
```defaultAirflowRepository``` doesn't override the specific images, it is _overridden by them_. For example, in ```_helpers.yaml```
```
{{ define "pod_template_image" -}}
{{ printf "%s:%s" (.Values.images.pod_template.repository | default .Values.defaultAirflowRepository) (.Values.images.pod_template.tag | default .Values.defaultAirflowTag) }}
{{- end }}
```
Suggest updating the comment line to:
```yaml
# Default airflow repository -- overridden by all the specific images below
```
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23566 | https://github.com/apache/airflow/pull/26428 | a2b186a152ade5b2932c5d01b437f5549f250a89 | 02d22f6ce2dbb4a1c5c5eb01dfa3070327e377bb | "2022-05-08T12:49:55Z" | python | "2022-09-19T14:03:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,557 | ["airflow/operators/python.py", "tests/operators/test_python.py"] | templates_dict, op_args, op_kwargs no longer rendered in PythonVirtualenvOperator | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Templated strings in templates_dict, op_args, op_kwargs of PythonVirtualenvOperator are no longer rendered.
### What you think should happen instead
All templated strings in templates_dict, op_args and op_kwargs must be rendered, i.e. these 3 arguments must be template_fields of PythonVirtualenvOperator, as it was in Airflow 2.2.3
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
This is due to the `template_fields` class variable being set in `PythonVirtualenvOperator`:
`template_fields: Sequence[str] = ('requirements',)`
which overrode the class variable of `PythonOperator`:
`template_fields = ('templates_dict', 'op_args', 'op_kwargs')`.
I read in some discussion that there was a desire to make `requirements` a template field for `PythonVirtualenvOperator`, but we must keep all template fields of the parent class as well.
`template_fields: Sequence[str] = ('templates_dict', 'op_args', 'op_kwargs', 'requirements',)`
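The override behaviour can be demonstrated in plain Python (illustrative class names): a class-attribute assignment in a subclass replaces, rather than extends, the parent's tuple, so the fix is to splice the parent fields back in.

```python
class Parent:
    template_fields = ("templates_dict", "op_args", "op_kwargs")

class ChildBroken(Parent):
    template_fields = ("requirements",)  # replaces the parent tuple entirely

class ChildFixed(Parent):
    # Splice the parent fields back in alongside the new one.
    template_fields = (*Parent.template_fields, "requirements")

print(ChildBroken.template_fields)  # ('requirements',)
print(ChildFixed.template_fields)
# ('templates_dict', 'op_args', 'op_kwargs', 'requirements')
```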
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23557 | https://github.com/apache/airflow/pull/23559 | 7132be2f11db24161940f57613874b4af86369c7 | 1657bd2827a3299a91ae0abbbfe4f6b80bd4cdc0 | "2022-05-07T11:49:44Z" | python | "2022-05-09T15:17:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,550 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Dynamic Task Mapping is Immutable within a Run | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Looks like mapped tasks are immutable, even when the source XCOM that created them changes.
This is a problem for things like Late Arriving Data and Data Reprocessing
### What you think should happen instead
Mapped tasks should change in response to a change of input
### How to reproduce
Here is a writeup and MVP DAG demonstrating the issue
https://gist.github.com/fritz-astronomer/d159d0e29d57458af5b95c0f253a3361
### Operating System
docker/debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
Can look into a fix - but may not be able to submit a full PR
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23550 | https://github.com/apache/airflow/pull/23667 | ad297c91777277e2b76dd7b7f0e3e3fc5c32e07c | b692517ce3aafb276e9d23570e9734c30a5f3d1f | "2022-05-06T21:42:12Z" | python | "2022-06-18T07:32:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,546 | ["airflow/www/views.py", "tests/www/views/test_views_graph_gantt.py"] | Gantt Chart Broken After Deleting a Task | ### Apache Airflow version
2.2.5
### What happened
After a task was deleted from a DAG we received the following message when visiting the gantt view for the DAG in the webserver.
```
{
"detail": null,
"status": 404,
"title": "Task delete-me not found",
"type": "https://airflow.apache.org/docs/apache-airflow/2.2.5/stable-rest-api-ref.html#section/Errors/NotFound"
}
```
This was only corrected by manually deleting the offending task instances from the `task_instance` and `task_fail` tables.
### What you think should happen instead
I would expect the gantt chart to load either excluding the non-existent task or flagging that the task associated with task instance no longer exists.
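A hedged sketch of the "excluding" option (illustrative names, not the actual view code): drop task instances whose task id no longer exists in the parsed DAG before building the chart.

```python
def filter_orphaned(task_instances, dag_task_ids):
    """Keep only task instances whose task still exists in the DAG."""
    return [ti for ti in task_instances if ti["task_id"] in dag_task_ids]

tis = [{"task_id": "keep-me"}, {"task_id": "delete-me"}]
print(filter_orphaned(tis, {"keep-me"}))  # [{'task_id': 'keep-me'}]
```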
### How to reproduce
* Create a DAG with multiple tasks.
* Run the DAG.
* Delete one of the tasks.
* Attempt to open the gantt view for the DAG.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Custom docker container hosted on Amazon ECS.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23546 | https://github.com/apache/airflow/pull/23627 | e09e4635b0dc50cbd3a18f8be02ce9b2e2f3d742 | 4b731f440734b7a0da1bbc8595702aaa1110ad8d | "2022-05-06T20:07:01Z" | python | "2022-05-20T19:24:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,532 | ["airflow/utils/file.py", "tests/utils/test_file.py"] | Airflow .airflowignore not handling soft link properly. | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A soft link and the folder it points to under the same root folder are handled as the same relative path. Say I have a dags folder that looks like this:
```
-dags:
-- .airflowignore
-- folder
-- soft-links-to-folder -> folder
```
and .airflowignore:
```
folder/
```
both folder and soft-links-to-folder will be ignored.
### What you think should happen instead
Only the folder should be ignored. This is the expected behavior in Airflow 2.2.4, before I upgraded. ~~The root cause is that both _RegexpIgnoreRule and _GlobIgnoreRule is calling `relative_to` method to get search path.~~
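The underlying path behaviour can be shown with the stdlib (a sketch of the mechanism, not Airflow's actual ignore-rule code): once paths are resolved, the symlink and its target become indistinguishable, so a rule matching one matches both.

```python
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as dags:
    real = Path(dags, "folder")
    real.mkdir()
    link = Path(dags, "soft-links-to-folder")
    os.symlink(real, link)  # requires a POSIX filesystem
    # resolve() follows the symlink, collapsing both paths onto one target
    resolved_equal = link.resolve() == real.resolve()
    # comparing the unresolved paths keeps them distinct
    raw_equal = link == real

print(resolved_equal, raw_equal)  # True False
```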
### How to reproduce
Check @tirkarthi's comment for the test case.
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23532 | https://github.com/apache/airflow/pull/23535 | 7ab5ea7853df9d99f6da3ab804ffe085378fbd8a | 8494fc7036c33683af06a0e57474b8a6157fda05 | "2022-05-06T13:57:32Z" | python | "2022-05-20T06:35:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,529 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | Provide resources attribute in KubernetesPodOperator to be templated | ### Description
Make `resources` in `KubernetesPodOperator` templated. We need to modify it across several runs, and currently each change requires a code change.
### Use case/motivation
For running CPU- and memory-intensive workloads, we want to continuously optimise the "limit_cpu" and "limit_memory" parameters. Hence, we want to provide these parameters as part of the pipeline definition.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23529 | https://github.com/apache/airflow/pull/27457 | aefadb8c5b9272613d5806b054a1b46edf29d82e | 47a2b9ee7f1ff2cc1cc1aa1c3d1b523c88ba29fb | "2022-05-06T13:35:16Z" | python | "2022-11-09T08:47:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,523 | ["scripts/ci/docker-compose/integration-cassandra.yml"] | Cassandra container 3.0.26 fails to start on CI | ### Apache Airflow version
main (development)
### What happened
Cassandra released a new image (3.0.26) on 05.05.2022 and it broke our builds, for example:
* https://github.com/apache/airflow/runs/6320170343?check_suite_focus=true#step:10:6651
* https://github.com/apache/airflow/runs/6319805534?check_suite_focus=true#step:10:12629
* https://github.com/apache/airflow/runs/6319710486?check_suite_focus=true#step:10:6759
The problem was that container for cassandra did not cleanly start:
```
ERROR: for airflow Container "3bd115315ba7" is unhealthy.
Encountered errors while bringing up the project.
3bd115315ba7 cassandra:3.0 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes (unhealthy) 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp airflow-integration-postgres_cassandra_1
```
The logs of the cassandra container do not show anything suspicious; cassandra seems to start OK, but the health-checks for the container fail:
```
INFO 08:45:22 Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO 08:45:22 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO 08:45:23 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO 08:45:23 Startup complete
INFO 08:45:24 Created default superuser role 'cassandra'
```
We mitigated it in #23522 by pinning cassandra to version 3.0.25, but more investigation/outreach is needed.
### What you think should happen instead
Cassandra should start properly.
### How to reproduce
Revert #23522 and make a PR. The builds will start failing with "cassandra unhealthy".
### Operating System
Github Actions
### Versions of Apache Airflow Providers
not relevant
### Deployment
Other
### Deployment details
CI
### Anything else
Always.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23523 | https://github.com/apache/airflow/pull/23537 | 953b85d8a911301c040a3467ab2a1ba2b6d37cd7 | 22a564296be1aee62d738105859bd94003ad9afc | "2022-05-06T10:40:06Z" | python | "2022-05-07T13:36:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,514 | ["airflow/providers/amazon/aws/hooks/s3.py", "tests/providers/amazon/aws/hooks/test_s3.py"] | Json files from S3 downloading as text files | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.0 (latest released)
### Operating System
Mac OS Mojave 10.14.6
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When I download a json file from S3 using the S3Hook:
`filename = s3_hook.download_file(bucket_name=self.source_s3_bucket, key=key, local_path="./data")`
The file is being downloaded as a text file starting with `airflow_temp_`.
### What you think should happen instead
It would be nice to have them download as a JSON file, or to keep the same filename as in S3, since it currently requires additional code to go back and read the file as a dictionary (`ast.literal_eval`) and there is no guarantee that the JSON structure is maintained.
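A hedged workaround sketch (the helper name is mine, not hook API): rename the `airflow_temp_*` file the hook returns back to the S3 key's basename after the download.

```python
import os
import shutil


def download_preserving_name(s3_hook, bucket, key, local_dir):
    """Download via S3Hook, then rename the temp file to the key's basename."""
    tmp_path = s3_hook.download_file(bucket_name=bucket, key=key, local_path=local_dir)
    final_path = os.path.join(local_dir, os.path.basename(key))
    shutil.move(tmp_path, final_path)
    return final_path
```

The caller then gets `./data/filing.json` instead of an opaque `airflow_temp_*` name, so the file can be read with `json.load` directly.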
### How to reproduce
Where s3_conn_id is the Airflow connection and s3_bucket is a bucket on AWS S3.
This is the custom operator class:
```
from airflow.models.baseoperator import BaseOperator
from airflow.utils.decorators import apply_defaults
from airflow.hooks.S3_hook import S3Hook
import logging
import pickle  # used below by pickle.loads; missing from the original snippet
class S3SearchFilingsOperator(BaseOperator):
"""
Queries the Datastore API and uploads the processed info as a csv to the S3 bucket.
:param source_s3_bucket: Choose source s3 bucket
:param source_s3_directory: Source s3 directory
:param s3_conn_id: S3 Connection ID
:param destination_s3_bucket: S3 Bucket Destination
"""
@apply_defaults
def __init__(
self,
source_s3_bucket=None,
source_s3_directory=True,
s3_conn_id=True,
destination_s3_bucket=None,
destination_s3_directory=None,
search_terms=[],
*args,
**kwargs) -> None:
super().__init__(*args, **kwargs)
self.source_s3_bucket = source_s3_bucket
self.source_s3_directory = source_s3_directory
self.s3_conn_id = s3_conn_id
self.destination_s3_bucket = destination_s3_bucket
self.destination_s3_directory = destination_s3_directory
def execute(self, context):
"""
Executes the operator.
"""
s3_hook = S3Hook(self.s3_conn_id)
keys = s3_hook.list_keys(bucket_name=self.source_s3_bucket)
for key in keys:
# download file
filename=s3_hook.download_file(bucket_name=self.source_s3_bucket, key=key, local_path="./data")
logging.info(filename)
with open(filename, 'rb') as handle:
filing = handle.read()
filing = pickle.loads(filing)
logging.info(filing.keys())
```
And this is the dag file:
```
from keywordSearch.operators.s3_search_filings_operator import S3SearchFilingsOperator
from airflow import DAG
from airflow.utils.dates import days_ago
from datetime import timedelta
# from aws_pull import aws_pull
default_args = {
"owner" : "airflow",
"depends_on_past" : False,
"start_date": days_ago(2),
"email" : ["airflow@example.com"],
"email_on_failure" : False,
"email_on_retry" : False,
"retries" : 1,
"retry_delay": timedelta(seconds=30)
}
with DAG("keyword-search-full-load",
default_args=default_args,
description="Syntax Keyword Search",
max_active_runs=1,
schedule_interval=None) as dag:
op3 = S3SearchFilingsOperator(
task_id="s3_search_filings",
source_s3_bucket="processed-filings",
source_s3_directory="citations",
s3_conn_id="Syntax_S3",
destination_s3_bucket="keywordsearch",
destination_s3_directory="results",
dag=dag
)
op3
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23514 | https://github.com/apache/airflow/pull/26886 | d544e8fbeb362e76e14d7615d354a299445e5b5a | 777b57f0c6a8ca16df2b96fd17c26eab56b3f268 | "2022-05-05T21:59:08Z" | python | "2022-10-26T11:01:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,512 | ["airflow/cli/commands/webserver_command.py", "tests/cli/commands/test_webserver_command.py"] | Random "duplicate key value violates unique constraint" errors when initializing the postgres database | ### Apache Airflow version
2.3.0 (latest released)
### What happened
while testing airflow 2.3.0 locally (using postgresql 12.4), the webserver container shows random errors:
```
webserver_1 | + airflow db init
...
webserver_1 | + exec airflow webserver
...
webserver_1 | [2022-05-04 18:58:46,011] {{manager.py:568}} INFO - Added Permission menu access on Permissions to role Admin
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] ERROR: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] DETAIL: Key (permission_view_id, role_id)=(204, 1) already exists.
postgres_1 | 2022-05-04 18:58:46.013 UTC [41] STATEMENT: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), 204, 1) RETURNING ab_permission_view_role.id
webserver_1 | [2022-05-04 18:58:46,015] {{manager.py:570}} ERROR - Add Permission to Role Error: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(204, 1) already exists.
webserver_1 |
webserver_1 | [SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
webserver_1 | [parameters: {'permission_view_id': 204, 'role_id': 1}]
```
notes:
1. when the DB is first initialized, I have ~40 errors like this (with ~40 different `permission_view_id` but always the same `'role_id': 1`)
2. when it's not the first time initializing the DB, I always get 1 error like this, but it shows a different `permission_view_id` each time
3. all these errors don't seem to have any real negative effect; the webserver is still running and airflow is still running and scheduling tasks
4. "occasionally" I do get real exceptions which render the webserver workers all dead:
```
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] ERROR: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
postgres_1 | 2022-05-05 20:03:30.580 UTC [44] STATEMENT: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), 214, 1) RETURNING ab_permission_view_role.id
webserver_1 | [2022-05-05 20:03:30 +0000] [121] [ERROR] Exception in worker process
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
webserver_1 | self.dialect.do_execute(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
webserver_1 | cursor.execute(statement, parameters)
webserver_1 | psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
webserver_1 |
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
webserver_1 | worker.init_process()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 134, in init_process
webserver_1 | self.load_wsgi()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
webserver_1 | self.wsgi = self.app.wsgi()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
webserver_1 | self.callable = self.load()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
webserver_1 | return self.load_wsgiapp()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
webserver_1 | return util.import_app(self.app_uri)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 412, in import_app
webserver_1 | app = app(*args, **kwargs)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 158, in cached_app
webserver_1 | app = create_app(config=config, testing=testing)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 146, in create_app
webserver_1 | sync_appbuilder_roles(flask_app)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/app.py", line 68, in sync_appbuilder_roles
webserver_1 | flask_app.appbuilder.sm.sync_roles()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/security.py", line 580, in sync_roles
webserver_1 | self.update_admin_permission()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/airflow/www/security.py", line 562, in update_admin_permission
webserver_1 | self.get_session.commit()
webserver_1 | File "<string>", line 2, in commit
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1423, in commit
webserver_1 | self._transaction.commit(_to_root=self.future)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
webserver_1 | self._prepare_impl()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
webserver_1 | self.session.flush()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3255, in flush
webserver_1 | self._flush(objects)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3395, in _flush
webserver_1 | transaction.rollback(_capture_exception=True)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
webserver_1 | compat.raise_(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
webserver_1 | raise exception
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3355, in _flush
webserver_1 | flush_context.execute()
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 453, in execute
webserver_1 | rec.execute(self)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 576, in execute
webserver_1 | self.dependency_processor.process_saves(uow, states)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1182, in process_saves
webserver_1 | self._run_crud(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/orm/dependency.py", line 1245, in _run_crud
webserver_1 | connection.execute(statement, secondary_insert)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
webserver_1 | return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
webserver_1 | return connection._execute_clauseelement(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
webserver_1 | ret = self._execute_context(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
webserver_1 | self._handle_dbapi_exception(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
webserver_1 | util.raise_(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
webserver_1 | raise exception
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
webserver_1 | self.dialect.do_execute(
webserver_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
webserver_1 | cursor.execute(statement, parameters)
webserver_1 | sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "ab_permission_view_role_permission_view_id_role_id_key"
webserver_1 | DETAIL: Key (permission_view_id, role_id)=(214, 1) already exists.
webserver_1 |
webserver_1 | [SQL: INSERT INTO ab_permission_view_role (id, permission_view_id, role_id) VALUES (nextval('ab_permission_view_role_id_seq'), %(permission_view_id)s, %(role_id)s) RETURNING ab_permission_view_role.id]
webserver_1 | [parameters: {'permission_view_id': 214, 'role_id': 1}]
webserver_1 | (Background on this error at: http://sqlalche.me/e/14/gkpj)
webserver_1 | [2022-05-05 20:03:30 +0000] [121] [INFO] Worker exiting (pid: 121)
flower_1 | + exec airflow celery flower
scheduler_1 | + exec airflow scheduler
webserver_1 | [2022-05-05 20:03:31 +0000] [118] [INFO] Worker exiting (pid: 118)
webserver_1 | [2022-05-05 20:03:31 +0000] [119] [INFO] Worker exiting (pid: 119)
webserver_1 | [2022-05-05 20:03:31 +0000] [120] [INFO] Worker exiting (pid: 120)
worker_1 | + exec airflow celery worker
```
However, such exceptions are rare and purely random; I can't find a way to reproduce them consistently.
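For illustration only (using stdlib sqlite3 rather than Airflow's SQLAlchemy/Postgres stack), one common way such check-then-insert races are defused is an upsert-style insert that turns a concurrent duplicate into a no-op:

```python
import sqlite3


def add_permission_idempotent(conn, permission_view_id, role_id):
    # ON CONFLICT ... DO NOTHING: a second writer racing on the same
    # (permission_view_id, role_id) pair no longer raises a unique violation.
    conn.execute(
        "INSERT INTO ab_permission_view_role (permission_view_id, role_id) "
        "VALUES (?, ?) ON CONFLICT (permission_view_id, role_id) DO NOTHING",
        (permission_view_id, role_id),
    )
```

PostgreSQL supports the same `ON CONFLICT DO NOTHING` clause, so the equivalent statement would make concurrent `sync_roles()` runs harmless instead of fatal.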
### What you think should happen instead
prior to 2.3.0 there were no such errors
### How to reproduce
_No response_
### Operating System
Linux Mint 20.3
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23512 | https://github.com/apache/airflow/pull/27297 | 9ab1a6a3e70b32a3cddddf0adede5d2f3f7e29ea | 8f99c793ec4289f7fc28d890b6c2887f0951e09b | "2022-05-05T20:00:11Z" | python | "2022-10-27T04:25:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,497 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | Tasks stuck indefinitely when following container logs | ### Apache Airflow version
2.2.4
### What happened
I observed that some workers hung randomly while running. Also, logs were not being reported. After some time, the pod status was "Completed" when inspecting from the k8s API, but Airflow still showed "status:running" for the pod.
After some investigation, the issue is in the new kubernetes pod operator and depends on a current issue in the kubernetes API.
When a log rotate event occurs in kubernetes, the stream we consume in fetch_container_logs(follow=True,...) is no longer being fed.
Therefore, the k8s pod operator hangs indefinitely in the middle of the log. Only a SIGTERM can terminate it, as log consumption blocks execute() from finishing.
Ref to the issue in kubernetes: https://github.com/kubernetes/kubernetes/issues/59902
Linking to https://github.com/apache/airflow/issues/12103 for reference, as the result is more or less the same for end user (although the root cause is different)
### What you think should happen instead
Pod operator should not hang.
The pod operator could follow the new logs from the container - though this is out of scope for airflow, as ideally the k8s API does it automatically.
### Solution proposal
I think there are several ways to work around this on the Airflow side so it does not hang indefinitely (like making `fetch_container_logs` non-blocking for `execute` and instead always blocking until the pod phase is completed, as is currently done when get_logs is not true).
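A minimal sketch of that idea (names are mine, not the provider's API): consume logs in a daemon thread and gate completion on the pod phase, so a stalled stream can no longer block `execute()`.

```python
import threading
import time


def follow_logs_until_done(fetch_logs, pod_is_done, poll_interval=1.0):
    """Run fetch_logs in a daemon thread; return once the pod completes or logs end."""
    reader = threading.Thread(target=fetch_logs, daemon=True)
    reader.start()
    # Even if the stream stalls after a log rotation, we still observe the
    # pod phase and return, instead of blocking on the dead stream forever.
    while reader.is_alive() and not pod_is_done():
        time.sleep(poll_interval)
```

A real implementation would pass the existing log-following callable as `fetch_logs` and a pod-phase check against the k8s API as `pod_is_done`.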
### How to reproduce
Running multiple tasks will sooner or later trigger this. Also, one can configure more aggressive log rotation in k8s so this race is triggered more often.
#### Operating System
Debian GNU/Linux 11 (bullseye)
#### Versions of Apache Airflow Providers
```
apache-airflow==2.2.4
apache-airflow-providers-google==6.4.0
apache-airflow-providers-cncf-kubernetes==3.0.2
```
However, this should be reproducible with master.
#### Deployment
Official Apache Airflow Helm Chart
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23497 | https://github.com/apache/airflow/pull/28336 | 97006910a384579c9f0601a72410223f9b6a0830 | 6d2face107f24b7e7dce4b98ae3def1178e1fc4c | "2022-05-05T09:06:19Z" | python | "2023-03-04T18:08:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,476 | ["airflow/www/static/js/grid/TaskName.jsx"] | Grid View - Multilevel taskgroup shows white text on the UI | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Blank text if there are nested Task Groups.
Nested TaskGroup - Graph view:
![image](https://user-images.githubusercontent.com/6821208/166685216-8a13e691-4e33-400e-9ee2-f489b7113853.png)
Nested TaskGroup - Grid view:
![image](https://user-images.githubusercontent.com/6821208/166685452-a3b59ee5-95da-43b2-a352-97d52a0acbbd.png)
### What you think should happen instead
We should see the text, just as at the upper task-group levels.
### How to reproduce
### deploy below DAG:
```
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import datetime
from airflow.utils.task_group import TaskGroup
with DAG(dag_id="grid_view_dag", start_date=datetime(2022, 5, 3, 0, 00), schedule_interval=None, concurrency=2,
max_active_runs=2) as dag:
parent_task_group = None
for i in range(0, 10):
with TaskGroup(group_id=f"tg_level_{i}", parent_group=parent_task_group) as tg:
t = DummyOperator(task_id=f"task_level_{i}")
parent_task_group = tg
```
### go to grid view and expand the nodes:
![image](https://user-images.githubusercontent.com/6821208/166683975-0ed583a4-fa24-43e7-8caa-1cd610c07187.png)
#### you can see the text after text selection:
![image](https://user-images.githubusercontent.com/6821208/166684102-03482eb3-1207-4f79-abc3-8c1a0116d135.png)
### Operating System
N/A
### Versions of Apache Airflow Providers
N/A
### Deployment
Docker-Compose
### Deployment details
reproducible using the following docker-compose file: https://airflow.apache.org/docs/apache-airflow/2.3.0/docker-compose.yaml
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23476 | https://github.com/apache/airflow/pull/23482 | d9902958448b9d6e013f90f14d2d066f3121dcd5 | 14befe3ad6a03f27e20357e9d4e69f99d19a06d1 | "2022-05-04T13:01:20Z" | python | "2022-05-04T15:30:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,473 | ["airflow/models/dagbag.py", "airflow/security/permissions.py", "airflow/www/security.py", "tests/www/test_security.py"] | Could not get DAG access permission after upgrade to 2.3.0 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I upgraded my airflow instance from version 2.1.3 to 2.3.0 but hit an issue where there are no permissions for new DAGs.
**The issue only happens for DAGs whose dag_id contains a dot symbol.**
### What you think should happen instead
There should be 3 new permissions for a DAG.
### How to reproduce
+ Create a new DAG with an id of, let's say: `dag.id_1`
+ Go to the UI -> Security -> List Role
+ Edit any Role
+ Try to add the permissions of the new DAG above to the chosen role.
-> No permissions can be found for the DAG created above.
There are 3 DAG permissions named `can_read_DAG:dag`, `can_edit_DAG:dag`, `can_delete_DAG:dag`
There should be 3 new permissions: `can_read_DAG:dag.id_1`, `can_edit_DAG:dag.id_1`, `can_delete_DAG:dag.id_1`
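A purely illustrative sketch of why the names collapse (hypothetical helper, not the real implementation; Airflow's security manager resolves a "root" dag id and treats `.` as a sub-DAG separator):

```python
def dag_resource_name(dag_id: str) -> str:
    """Illustrate permission resource naming when '.' is taken as a subdag separator."""
    root_id = dag_id.split(".", 1)[0]  # 'dag.id_1' collapses to 'dag'
    return f"DAG:{root_id}"
```

Under this logic, `dag.id_1` yields the resource `DAG:dag`, which matches the truncated `can_read_DAG:dag` permissions observed above.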
### Operating System
Kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23473 | https://github.com/apache/airflow/pull/23510 | ae3e68af3c42a53214e8264ecc5121049c3beaf3 | cc35fcaf89eeff3d89e18088c2e68f01f8baad56 | "2022-05-04T09:37:57Z" | python | "2022-06-08T07:47:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,460 | ["README.md", "breeze-complete", "dev/breeze/src/airflow_breeze/global_constants.py", "images/breeze/output-commands-hash.txt", "images/breeze/output-commands.svg", "images/breeze/output-config.svg", "images/breeze/output-shell.svg", "images/breeze/output-start-airflow.svg", "scripts/ci/libraries/_initialization.sh"] | Add Postgres 14 support | ### Description
_No response_
### Use case/motivation
Using Postgres 14 as backend
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23460 | https://github.com/apache/airflow/pull/23506 | 9ab9cd47cff5292c3ad602762ae3e371c992ea92 | 6169e0a69875fb5080e8d70cfd9d5e650a9d13ba | "2022-05-03T18:15:31Z" | python | "2022-05-11T16:26:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,447 | ["airflow/cli/commands/dag_processor_command.py", "tests/cli/commands/test_dag_processor_command.py"] | External DAG processor not working | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Running a standalone Dag Processor instance with `airflow dag-processor` throws the following exception:
```
Standalone DagProcessor is not supported when using sqlite.
```
### What you think should happen instead
The `airflow dag-processor` command should start without an exception when a Postgres database is configured.
### How to reproduce
The error is in the following line: https://github.com/apache/airflow/blob/6f146e721c81e9304bf7c0af66fc3d203d902dab/airflow/cli/commands/dag_processor_command.py#L53
It should be
```python
sql_conn: str = conf.get('database', 'sql_alchemy_conn').lower()
```
due to the change in the configuration file done in https://github.com/apache/airflow/pull/22284
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23447 | https://github.com/apache/airflow/pull/23575 | 827bfda59b7a0db6ada697ccd01c739d37430b9a | 9837e6d813744e3c5861c32e87b3aeb496d0f88d | "2022-05-03T13:36:02Z" | python | "2022-05-09T08:50:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,439 | ["airflow/providers/google/cloud/hooks/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | DataprocJobBaseOperator not compatible with TaskGroups | ### Body
Following Stackoverflow question: https://stackoverflow.com/questions/72091119/airflow-issues-with-calling-taskgroup
The issue is that when defining a task in a TaskGroup, the identifier of the task becomes `group_id.task_id`
[DataprocJobBaseOperator](https://github.com/apache/airflow/blob/05ccfd42f28db7d0a8fe3ed023b0e7a8ec188609/airflow/providers/google/cloud/operators/dataproc.py#L836-L838) defaults to using `task_id` for the job name, but Google doesn't allow the `.` char:
`google.api_core.exceptions.InvalidArgument: 400 Job id 'weekday_analytics.avg_speed_20220502_22c11bdf' must conform to '[a-zA-Z0-9]([a-zA-Z0-9\-\_]{0,98}[a-zA-Z0-9])?' pattern`
We should probably fix `DataprocJobBaseOperator` to handle cases where the task is defined in a task group by replacing the `.` with another char.
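A hedged sketch of that fix (the function name is mine): sanitize the task id before building the job id so it conforms to Google's pattern from the error message.

```python
import re
import uuid

# Pattern quoted in the InvalidArgument error above.
JOB_ID_PATTERN = re.compile(r"[a-zA-Z0-9]([a-zA-Z0-9\-\_]{0,98}[a-zA-Z0-9])?$")


def build_dataproc_job_id(task_id: str) -> str:
    """Replace characters Google rejects (e.g. the '.' from TaskGroup ids) with '_'."""
    safe = re.sub(r"[^a-zA-Z0-9\-_]", "_", task_id)
    return f"{safe}_{uuid.uuid4().hex[:8]}"
```

Applied to the failing example, `weekday_analytics.avg_speed` becomes `weekday_analytics_avg_speed_<suffix>`, which matches the required pattern (assuming the task id itself starts and ends with alphanumerics).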
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23439 | https://github.com/apache/airflow/pull/23791 | 509b277dce50fb1fbc25aea565182933bb506ee2 | a43e98d05047d9c4d5a7778bcb10efc4bdef7a01 | "2022-05-03T05:49:10Z" | python | "2022-05-22T11:43:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,437 | ["airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/links/emr.py", "airflow/providers/amazon/aws/sensors/emr.py", "airflow/providers/amazon/provider.yaml", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/sensors/test_emr_base.py", "tests/providers/amazon/aws/sensors/test_emr_job_flow.py"] | Logs for EmrStepSensor | ### Description
Add a feature to EmrStepSensor to bring back the Spark task URL & logs after task execution.
### Use case/motivation
After starting an EMR step task using EmrAddStepsOperator, we generally have an EmrStepSensor to track the status of the step. The job ID is available to the sensor and is poked at a regular interval.
```
[2022-04-26, 22:07:43 UTC] {base_aws.py:100} INFO - Retrieving region_name from Connection.extra_config['region_name']
[2022-04-26, 22:07:44 UTC] {emr.py:316} INFO - Poking step s-123ABC123ABC on cluster j-123ABC123ABC
[2022-04-26, 22:07:44 UTC] {emr.py:74} INFO - Job flow currently PENDING
[2022-04-26, 22:08:44 UTC] {emr.py:316} INFO - Poking step s-123ABC123ABC on cluster j-123ABC123ABC
[2022-04-26, 22:08:44 UTC] {emr.py:74} INFO - Job flow currently PENDING
[2022-04-26, 22:09:44 UTC] {emr.py:316} INFO - Poking step s-123ABC123ABC on cluster j-123ABC123ABC
[2022-04-26, 22:09:44 UTC] {emr.py:74} INFO - Job flow currently COMPLETED
[2022-04-26, 22:09:44 UTC] {base.py:251} INFO - Success criteria met. Exiting.
[2022-04-26, 22:09:44 UTC] {taskinstance.py:1288} INFO - Marking task as SUCCESS. dag_id=datapipeline_sample, task_id=calculate_pi_watch_step, execution_date=20220426T220739, start_date=20220426T220743, end_date=20220426T220944
```
After the task is completed, the status is displayed. If the user wants to review the logs of the task, it is a multistep process to get hold of the job logs from the EMR cluster.
It will be a great addition to surface the log URL and possibly relay the logs to the Airflow EmrStepSensor after completion of the task. This will be very handy when many tasks fail, and will make for a great user experience.
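A hedged sketch of how the sensor could derive a log location (the helper is mine; it assumes the standard EMR layout of `<LogUri>/<cluster-id>/steps/<step-id>/`, with `LogUri` obtainable from a `describe_cluster` call):

```python
def emr_step_log_prefix(log_uri: str, cluster_id: str, step_id: str) -> str:
    """Build the S3 prefix where EMR writes a step's stdout/stderr logs."""
    return f"{log_uri.rstrip('/')}/{cluster_id}/steps/{step_id}/"
```

The sensor already knows the cluster and step ids it is poking, so on completion it could log this prefix (or list/stream the objects under it) alongside the final state.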
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23437 | https://github.com/apache/airflow/pull/28180 | 9eacf607be109eb6ab80f7e27d234a17fb128ae0 | fefcb1d567d8d605f7ec9b7d408831d656736541 | "2022-05-03T04:35:44Z" | python | "2022-12-20T08:05:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,435 | ["airflow/decorators/base.py", "airflow/models/mappedoperator.py", "airflow/serialization/serialized_objects.py", "tests/api_connexion/endpoints/test_task_endpoint.py", "tests/models/test_taskinstance.py"] | Empty `expand()` crashes the scheduler | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I've found a DAG that will crash the scheduler:
```
@task
def hello():
return "hello"
hello.expand()
```
```
[2022-05-03 03:41:23,779] {scheduler_job.py:753} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 824, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 906, in _do_scheduling
callback_to_run = self._schedule_dag_run(dag_run, session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1148, in _schedule_dag_run
schedulable_tis, callback_to_run = dag_run.update_state(session=session, execute_callbacks=False)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 522, in update_state
info = self.task_instance_scheduling_decisions(session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 661, in task_instance_scheduling_decisions
session=session,
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/dagrun.py", line 714, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/mappedoperator.py", line 609, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
TypeError: reduce() of empty sequence with no initial value
```
### What you think should happen instead
A user DAG shouldn't crash the scheduler. This specific case could likely be an ImportError at parse time, but it makes me think we might be missing some exception handling?
### How to reproduce
_No response_
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23435 | https://github.com/apache/airflow/pull/23463 | c9b21b8026c595878ee4cc934209fc1fc2ca2396 | 9214018153dd193be6b1147629f73b23d8195cce | "2022-05-03T03:46:12Z" | python | "2022-05-27T04:25:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,425 | ["airflow/models/mappedoperator.py", "tests/models/test_taskinstance.py"] | Mapping over multiple parameters results in 1 task fewer than expected | ### Apache Airflow version
2.3.0 (latest released)
### What happened
While testing the [example](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-multiple-parameters) given for `Mapping over multiple parameters` I noticed only 5 tasks are being mapped rather than the expected 6.
task example from the doc:
```
@task
def add(x: int, y: int):
    return x + y


added_values = add.expand(x=[2, 4, 8], y=[5, 10])
```
The doc mentions:
```
# This results in the add function being called with
# add(x=2, y=5)
# add(x=2, y=10)
# add(x=4, y=5)
# add(x=4, y=10)
# add(x=8, y=5)
# add(x=8, y=10)
```
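The six expected task instances are just the Cartesian product of the two expanded lists, which can be sketched without Airflow at all (plain Python, not the actual mapping implementation):

```python
from itertools import product


def add(x: int, y: int) -> int:
    return x + y


xs = [2, 4, 8]
ys = [5, 10]

# Each mapped task instance corresponds to one (x, y) pair.
combos = list(product(xs, ys))
results = [add(x, y) for x, y in combos]

print(len(combos))  # 6 expected mapped task instances
print(results)      # [7, 12, 9, 14, 13, 18]
```

Whatever the fix is, the number of mapped instances should equal `len(xs) * len(ys)`.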
But when I create a DAG with the example, only 5 tasks are mapped instead of 6:
![image](https://user-images.githubusercontent.com/15913202/166302366-64c23767-2e5f-418d-a58f-fd997a75937e.png)
### What you think should happen instead
A task should be mapped for all 6 possible outcomes, rather than only 5
### How to reproduce
Create a DAG using the *Mapping over multiple parameters* example shown above and check the number of mapped instances:
![image](https://user-images.githubusercontent.com/15913202/166302419-b10d5c87-9b95-4b30-be27-030929ab1fcd.png)
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-databricks==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Astronomer
### Deployment details
Localhost instance of Astronomer Runtime 5.0.0
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23425 | https://github.com/apache/airflow/pull/23434 | 0fde90d92ae306f37041831f5514e9421eee676b | 3fb8e0b0b4e8810bedece873949871a94dd7387a | "2022-05-02T18:17:23Z" | python | "2022-05-04T19:02:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,420 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Add a queue DAG run endpoint to REST API | ### Description
Add a POST endpoint to queue a dag run like we currently do [here](https://github.com/apache/airflow/issues/23419).
Url format: `api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/queue`
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23420 | https://github.com/apache/airflow/pull/23481 | 1220c1a7a9698cdb15289d7066b29c209aaba6aa | 4485393562ea4151a42f1be47bea11638b236001 | "2022-05-02T17:42:15Z" | python | "2022-05-09T12:25:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,419 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Add a DAG Run clear endpoint to REST API | ### Description
Add a POST endpoint to clear a dag run like we currently do [here](https://github.com/apache/airflow/blob/main/airflow/www/views.py#L2087).
Url format: `api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/clear`
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23419 | https://github.com/apache/airflow/pull/23451 | f352ee63a5d09546a7997ba8f2f8702a1ddb4af7 | b83cc9b5e2c7e2516b0881861bbc0f8589cb531d | "2022-05-02T17:40:44Z" | python | "2022-05-24T03:30:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,415 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py", "tests/api_connexion/schemas/test_dag_run_schema.py"] | Add more fields to DAG Run API endpoints | ### Description
There are a few fields that would be useful to include in the REST API for getting a DAG run or list of DAG runs:
`data_interval_start`
`data_interval_end`
`last_scheduling_decision`
`run_type` as (backfill, manual and scheduled)
### Use case/motivation
We use this information in the Grid view as part of `tree_data`. If we added these extra fields to the REST APi we could remove all dag run info from tree_data.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23415 | https://github.com/apache/airflow/pull/23440 | 22b49d334ef0008be7bd3d8481b55b8ab5d71c80 | 6178491a117924155963586b246d2bf54be5320f | "2022-05-02T17:26:24Z" | python | "2022-05-03T12:27:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,414 | ["airflow/migrations/utils.py", "airflow/migrations/versions/0110_2_3_2_add_cascade_to_dag_tag_foreignkey.py", "airflow/models/dag.py", "docs/apache-airflow/migrations-ref.rst"] | airflow db clean - Dag cleanup won't run if dag is tagged | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When running `airflow db clean`, if a to-be-cleaned dag is also tagged, a foreign key constraint in dag_tag is violated. Full error:
```
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag"
DETAIL: Key (dag_id)=(some-dag-id-here) is still referenced from table "dag_tag".
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-mssql==2.1.3
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-samba==3.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23414 | https://github.com/apache/airflow/pull/23444 | e2401329345dcc5effa933b92ca969b8779755e4 | 8ccff9244a6d1a936d8732721373b967e95ec404 | "2022-05-02T17:23:19Z" | python | "2022-05-27T14:28:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,411 | ["airflow/sensors/base.py", "tests/serialization/test_dag_serialization.py", "tests/ti_deps/deps/test_ready_to_reschedule_dep.py"] | PythonSensor is not considering mode='reschedule', instead marking task UP_FOR_RETRY | ### Apache Airflow version
2.3.0 (latest released)
### What happened
A PythonSensor that works on versions <2.3.0 in mode reschedule is now marking the task as `UP_FOR_RETRY` instead.
Log says:
```
[2022-05-02, 15:48:23 UTC] {python.py:66} INFO - Poking callable: <function test at 0x7fd56286bc10>
[2022-05-02, 15:48:23 UTC] {taskinstance.py:1853} INFO - Rescheduling task, marking task as UP_FOR_RESCHEDULE
[2022-05-02, 15:48:23 UTC] {local_task_job.py:156} INFO - Task exited with return code 0
[2022-05-02, 15:48:23 UTC] {local_task_job.py:273} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
However, it directly marks the task as `UP_FOR_RETRY` and then follows `retry_delay` and `retries`.
### What you think should happen instead
It should mark the task as `UP_FOR_RESCHEDULE` and reschedule it according to the `poke_interval`
### How to reproduce
```
from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.python import PythonSensor


def test():
    return False


default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "start_date": datetime(2022, 5, 2),
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=1),
}

dag = DAG("dag_csdepkrr_development_v001",
          default_args=default_args,
          catchup=False,
          max_active_runs=1,
          schedule_interval=None)

t1 = PythonSensor(task_id="PythonSensor",
                  python_callable=test,
                  poke_interval=30,
                  mode='reschedule',
                  dag=dag)
```
### Operating System
Latest Docker image
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.5.2
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Docker-Compose
### Deployment details
Latest Docker compose from the documentation
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23411 | https://github.com/apache/airflow/pull/23674 | d3b08802861b006fc902f895802f460a72d504b0 | f9e2a3051cd3a5b6fcf33bca4c929d220cf5661e | "2022-05-02T16:07:22Z" | python | "2022-05-17T12:18:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,408 | ["airflow/configuration.py"] | Airflow 2.3.0 does not keep promised backward compatibility regarding database configuration using _CMD Env | ### Apache Airflow version
2.3.0 (latest released)
### What happened
We used to configure the Database using the AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD Environment variable.
The config option has now moved from CORE to DATABASE; however, backward compatibility was supposed to be kept, as stated in the release notes.
Upon 2.3.0 update however, the _CMD suffixed variables are no longer recognized for database configuration in Core - I think due to a missing entry here:
https://github.com/apache/airflow/blob/8622808aa79531bcaa5099d26fbaf54b4afe931a/airflow/configuration.py#L135
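The fallback the reporter expects can be sketched as a lookup that tries the new `DATABASE` section first and then the deprecated `CORE` one, for both the plain and `_CMD` variants (the resolution order and helper below are illustrative assumptions, not Airflow's actual configuration code):

```python
import os
import subprocess
from typing import Optional


def resolve_sql_alchemy_conn() -> Optional[str]:
    """Resolve the DB connection string, honouring plain and _CMD env vars."""
    for section in ("DATABASE", "CORE"):  # new section first, deprecated fallback second
        plain = os.environ.get(f"AIRFLOW__{section}__SQL_ALCHEMY_CONN")
        if plain:
            return plain
        cmd = os.environ.get(f"AIRFLOW__{section}__SQL_ALCHEMY_CONN_CMD")
        if cmd:
            # _CMD variants run a shell command whose stdout is the secret value.
            return subprocess.check_output(cmd, shell=True, text=True).strip()
    return None


os.environ["AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD"] = "echo postgresql://u:p@db/airflow"
print(resolve_sql_alchemy_conn())
```

With only the deprecated `CORE` `_CMD` variable set, the lookup still resolves the connection instead of silently falling through.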
### What you think should happen instead
We should only get a deprecation warning but the Database should be configured correctly.
### How to reproduce
Configure Airflow to use an external database via the AIRFLOW__CORE__SQL_ALCHEMY_CONN_CMD environment variable. Notice that Airflow falls back to SQLite.
### Operating System
kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23408 | https://github.com/apache/airflow/pull/23441 | 6178491a117924155963586b246d2bf54be5320f | 0cdd401cda61006a42afba243f1ad813315934d4 | "2022-05-02T14:49:36Z" | python | "2022-05-03T12:48:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,396 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py"] | Airflow kubernetes pod operator fetch xcom fails | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Airflow kubernetes pod operator load xcom fails
```
def _exec_pod_command(self, resp, command: str) -> Optional[str]:
    if resp.is_open():
        self.log.info('Running command... %s\n', command)
        resp.write_stdin(command + '\n')
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                return resp.read_stdout()
            if resp.peek_stderr():
                self.log.info("stderr from command: %s", resp.read_stderr())
                break
    return None
```
`_exec_pod_command` returns after the first successful `read_stdout()`, so for a large XCom payload it returns only a partial response. That partial content is then passed to `json.loads`, which fails with an "unterminated string" error.
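The failure mode can be reproduced without the kubernetes client at all; `FakeStream` below is a made-up stand-in for the exec websocket response, used only to show why parsing a single chunk fails while accumulating all chunks succeeds:

```python
import json


class FakeStream:
    """Hypothetical stand-in for the exec websocket: serves stdout in chunks."""

    def __init__(self, chunks):
        self.chunks = list(chunks)

    def is_open(self):
        return bool(self.chunks)

    def peek_stdout(self):
        return bool(self.chunks)

    def read_stdout(self):
        return self.chunks.pop(0)


payload = json.dumps({"key": "x" * 50})
chunks = [payload[:20], payload[20:40], payload[40:]]

# Buggy behaviour: return the first available chunk only.
partial = FakeStream(chunks).read_stdout()
try:
    json.loads(partial)
except json.JSONDecodeError as err:
    print("partial read fails:", err)

# Fix sketch: keep reading until the stream closes, then parse once.
stream = FakeStream(chunks)
buf = ""
while stream.is_open():
    if stream.peek_stdout():
        buf += stream.read_stdout()
print(json.loads(buf) == {"key": "x" * 50})  # True
```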
### What you think should happen instead
It should not read partial content
### How to reproduce
Occurs when the XCom JSON payload is large enough that it does not arrive in a single stdout read.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23396 | https://github.com/apache/airflow/pull/23490 | b0406f58f0c51db46d2da7c7c84a0b5c3d4f09ae | faae9faae396610086d5ea18d61c356a78a3d365 | "2022-05-02T00:42:02Z" | python | "2022-05-10T15:46:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,361 | ["airflow/models/taskinstance.py", "tests/jobs/test_scheduler_job.py"] | Scheduler crashes with psycopg2.errors.DeadlockDetected exception | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Customer has a dag that generates around 2500 tasks dynamically using a task group. While running the dag, a subset of the tasks (~1000) run successfully with no issue, while the rest (~1500) get "skipped", and the dag fails. The same DAG runs successfully in Airflow v2.1.3 with the same Airflow configuration.
While investigating the Airflow processes, we found that both schedulers got restarted with the below error during the DAG execution.
```
[2022-04-27 20:42:44,347] {scheduler_job.py:742} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1256, in _execute_context
self.dialect.do_executemany(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/dialects/postgresql/psycopg2.py", line 912, in do_executemany
cursor.executemany(statement, parameters)
psycopg2.errors.DeadlockDetected: deadlock detected
DETAIL: Process 1646244 waits for ShareLock on transaction 3915993452; blocked by process 1640692.
Process 1640692 waits for ShareLock on transaction 3915992745; blocked by process 1646244.
HINT: See server log for query details.
CONTEXT: while updating tuple (189873,4) in relation "task_instance"
```
This issue seems to be related to #19957
### What you think should happen instead
This issue was observed while running a huge number of concurrent tasks created dynamically by a DAG. Some of the tasks get skipped because the scheduler restarts with the deadlock exception.
### How to reproduce
DAG file:
```
from propmix_listings_details import BUCKET, ZIPS_FOLDER, CITIES_ZIP_COL_NAME, DETAILS_DEV_LIMIT, DETAILS_RETRY, DETAILS_CONCURRENCY, get_api_token, get_values, process_listing_ids_based_zip
from airflow.utils.task_group import TaskGroup
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 0,
}

date = '{{ execution_date }}'
email_to = ['example@airflow.com']

# Using a DAG context manager, you don't have to specify the dag property of each task
state = 'Maha'
with DAG('listings_details_generator_{0}'.format(state),
         start_date=datetime(2021, 11, 18),
         schedule_interval=None,
         max_active_runs=1,
         concurrency=DETAILS_CONCURRENCY,
         dagrun_timeout=timedelta(minutes=10),
         catchup=False  # enable if you don't want historical dag runs to run
         ) as dag:

    t0 = DummyOperator(task_id='start')

    with TaskGroup(group_id='group_1') as tg1:
        token = get_api_token()
        zip_list = get_values(BUCKET, ZIPS_FOLDER + state, CITIES_ZIP_COL_NAME)
        for zip in zip_list[0:DETAILS_DEV_LIMIT]:
            details_operator = PythonOperator(
                task_id='details_{0}_{1}'.format(state, zip),  # task id is generated dynamically
                pool='pm_details_pool',
                python_callable=process_listing_ids_based_zip,
                task_concurrency=40,
                retries=3,
                retry_delay=timedelta(seconds=10),
                op_kwargs={'zip': zip, 'date': date, 'token': token, 'state': state}
            )

    t0 >> tg1
### Operating System
kubernetes cluster running on GCP linux (amd64)
### Versions of Apache Airflow Providers
pip freeze | grep apache-airflow-providers
apache-airflow-providers-amazon==1!3.2.0
apache-airflow-providers-cncf-kubernetes==1!3.0.0
apache-airflow-providers-elasticsearch==1!2.2.0
apache-airflow-providers-ftp==1!2.1.2
apache-airflow-providers-google==1!6.7.0
apache-airflow-providers-http==1!2.1.2
apache-airflow-providers-imap==1!2.2.3
apache-airflow-providers-microsoft-azure==1!3.7.2
apache-airflow-providers-mysql==1!2.2.3
apache-airflow-providers-postgres==1!4.1.0
apache-airflow-providers-redis==1!2.0.4
apache-airflow-providers-slack==1!4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==1!2.1.3
apache-airflow-providers-ssh==1!2.4.3
### Deployment
Astronomer
### Deployment details
Airflow v2.2.5-2
Scheduler count: 2
Scheduler resources: 20AU (2CPU and 7.5GB)
Executor used: Celery
Worker count : 2
Worker resources: 24AU (2.4 CPU and 9GB)
Termination grace period : 2mins
### Anything else
This issue happens in all the DAG runs: some of the tasks get skipped, some succeed, and the scheduler fails with the deadlock exception.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23361 | https://github.com/apache/airflow/pull/25312 | 741c20770230c83a95f74fe7ad7cc9f95329f2cc | be2b53eaaf6fc136db8f3fa3edd797a6c529409a | "2022-04-29T13:05:15Z" | python | "2022-08-09T14:17:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,356 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Tasks set to queued by a backfill get cleared and rescheduled by the kubernetes executor, breaking the backfill | ### Apache Airflow version
2.2.5 (latest released)
### What happened
A backfill launched from the scheduler pod queues tasks as it should, but while they are in the process of starting, the kubernetes executor loop running in the scheduler clears these tasks and reschedules them via this function https://github.com/apache/airflow/blob/9449a107f092f2f6cfa9c8bbcf5fd62fadfa01be/airflow/executors/kubernetes_executor.py#L444
This causes the backfill to stop queuing any more tasks and to enter an endless loop of waiting for the tasks it has queued to complete.
The way I have mitigated this is to set the `AIRFLOW__KUBERNETES__WORKER_PODS_QUEUED_CHECK_INTERVAL` to 3600, which is not ideal
### What you think should happen instead
The function `clear_not_launched_queued_tasks` should respect tasks launched by a backfill process and not clear them.
### How to reproduce
Start a backfill with a large number of tasks and watch as they get queued and then subsequently rescheduled by the kubernetes executor running in the scheduler pod.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
```
apache-airflow 2.2.5 py38h578d9bd_0
apache-airflow-providers-cncf-kubernetes 3.0.2 pyhd8ed1ab_0
apache-airflow-providers-docker 2.4.1 pyhd8ed1ab_0
apache-airflow-providers-ftp 2.1.2 pyhd8ed1ab_0
apache-airflow-providers-http 2.1.2 pyhd8ed1ab_0
apache-airflow-providers-imap 2.2.3 pyhd8ed1ab_0
apache-airflow-providers-postgres 3.0.0 pyhd8ed1ab_0
apache-airflow-providers-sqlite 2.1.3 pyhd8ed1ab_0
```
### Deployment
Other 3rd-party Helm chart
### Deployment details
Deployment is running the latest helm chart of Airflow Community Edition
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23356 | https://github.com/apache/airflow/pull/23720 | 49cfb6498eed0acfc336a24fd827b69156d5e5bb | 640d4f9636d3867d66af2478bca15272811329da | "2022-04-29T08:57:18Z" | python | "2022-11-18T01:09:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,343 | ["tests/cluster_policies/__init__.py", "tests/dags_corrupted/test_nonstring_owner.py", "tests/models/test_dagbag.py"] | Silent DAG import error by making owner a list | ### Apache Airflow version
2.2.5 (latest released)
### What happened
If the argument `owner` is unhashable, such as a list, the DAG will fail to be imported, but will also not report as an import error. If the DAG is new, it will simply be missing. If this is an update to the existing DAG, the webserver will continue to show the old version.
### What you think should happen instead
A DAG import error should be raised.
### How to reproduce
Set the `owner` argument for a task to be a list. See this minimal reproduction DAG.
```
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule_interval="@daily",
    start_date=datetime(2021, 1, 1),
    catchup=False,
    default_args={"owner": ["person"]},
    tags=['example'])
def demo_bad_owner():

    @task()
    def say_hello():
        print("hello")

    say_hello()


demo_bad_owner()
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
None needed.
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
The worker appears to still be able to execute the tasks when updating an existing DAG. Not sure how that's possible.
Also reproduced on 2.3.0rc2.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23343 | https://github.com/apache/airflow/pull/23359 | 9a0080c20bb2c4a9c0f6ccf1ece79bde895688ac | c4887bcb162aab9f381e49cecc2f212600c493de | "2022-04-28T22:09:14Z" | python | "2022-05-02T10:58:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,327 | ["airflow/providers/google/cloud/operators/gcs.py"] | GCSTransformOperator: provide Jinja templating in source and destination object names | ### Description
Provide an option to receive the source_object and destination_object via Jinja params.
### Use case/motivation
Usecase: Need to execute a DAG to fetch a video from GCS bucket based on paramater and then transform it and store it back.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23327 | https://github.com/apache/airflow/pull/23328 | 505af06303d8160c71f6a7abe4792746f640083d | c82b3b94660a38360f61d47676ed180a0d32c189 | "2022-04-28T12:27:11Z" | python | "2022-04-28T17:07:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,315 | ["airflow/utils/dot_renderer.py", "tests/utils/test_dot_renderer.py"] | `airflow dags show` Exception: "The node ... should be TaskGroup and is not" | ### Apache Airflow version
main (development)
### What happened
This happens for any dag with a task expansion. For instance:
```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
    dag_id="simple_mapped",
    start_date=datetime(1970, 1, 1),
    schedule_interval=None,
) as dag:
    BashOperator.partial(task_id="hello_world").expand(
        bash_command=["echo hello", "echo world"]
    )
```
I ran `airflow dags show simple_mapped` and instead of graphviz DOT notation, I saw this:
```
{dagbag.py:507} INFO - Filling up the DagBag from /Users/matt/2022/04/27/dags
Traceback (most recent call last):
File .../bin/airflow", line 8, in <module>
sys.exit(main())
File ... lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File ... lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File ... lib/python3.9/site-packages/airflow/cli/commands/dag_command.py", line 205, in dag_show
dot = render_dag(dag)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 188, in render_dag
_draw_nodes(dag.task_group, dot, states_by_task_id)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 125, in _draw_nodes
_draw_task_group(node, parent_graph, states_by_task_id)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 110, in _draw_task_group
_draw_nodes(child, parent_graph, states_by_task_id)
File ... lib/python3.9/site-packages/airflow/utils/dot_renderer.py", line 121, in _draw_nodes
raise AirflowException(f"The node {node} should be TaskGroup and is not")
airflow.exceptions.AirflowException: The node <Mapped(BashOperator): hello_world> should be TaskGroup and is not
```
### What you think should happen instead
I should see something about the dag structure.
### How to reproduce
run `airflow dags show` for any dag with a task expansion
### Operating System
MacOS, venv
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
```
โฏ airflow version
2.3.0.dev0
```
cloned at 4f6fe727a
### Anything else
There's a related card on this board https://github.com/apache/airflow/projects/12
> Support Mapped task groups in the DAG "dot renderer" (i.e. backfill job with --show-dagrun)
But I don't think that functionality is making it into 2.3.0, so maybe we need to add a fix here in the meantime?
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23315 | https://github.com/apache/airflow/pull/23339 | d3028e1e9036a3c67ec4477eee6cd203c12f7f5c | 59e93106d55881163a93dac4a5289df1ba6e1db5 | "2022-04-28T01:49:46Z" | python | "2022-04-30T17:46:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,306 | ["docs/helm-chart/production-guide.rst"] | Helm chart production guide fails to inform resultBackendSecretName parameter should be used | ### What do you see as an issue?
The [production guide](https://airflow.apache.org/docs/helm-chart/stable/production-guide.html) indicates that the code below is what is necessary for deploying with secrets. But `resultBackendSecretName` should also be filled, or Airflow won't start.
```
data:
  metadataSecretName: mydatabase
```
In addition to that, the expected URL is different in both variables.
`resultBackendSecretName` expects a URL that starts with `db+postgresql://`, while `metadataSecretName` expects `postgresql://` or `postgres://` and won't work with `db+postgresql://`. To solve this, it might be necessary to create multiple secrets.
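A tiny helper showing the `db+` prefixing at play — Celery's SQLAlchemy-backed result backend expects the `db+` scheme prefix that the metadata connection must not have (the helper name is made up for illustration):

```python
def to_result_backend_url(metadata_url: str) -> str:
    """Derive the Celery result-backend URL from the metadata URL."""
    if metadata_url.startswith("db+"):
        return metadata_url
    return "db+" + metadata_url


print(to_result_backend_url("postgresql://user:pass@host:5432/airflow"))
# db+postgresql://user:pass@host:5432/airflow
```

In practice this means keeping two differently-prefixed URLs, typically in two separate Kubernetes secrets.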
Just in case this is relevant, I'm using CeleryKubernetesExecutor.
### Solving the problem
Docs should warn about the issue above.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23306 | https://github.com/apache/airflow/pull/23307 | 3977e1798d8294ba628b5f330f43702c1a5c79fc | 48915bd149bd8b58853880d63b8c6415688479ec | "2022-04-27T20:34:07Z" | python | "2022-05-04T21:28:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,293 | [".github/ISSUE_TEMPLATE/airflow_doc_issue_report.yml", "README.md"] | Fix typos in README.md and airflow_doc_issue_report.yml | ### What do you see as an issue?
Just found small typos as below:
1) Missing a period symbol right after the sentence
- File Location: README.md
- Simply added a period at the end of the sentence: "...it is effectively removed when we release the first new MINOR (Or MAJOR if there is no new MINOR version) of Airflow."
2) Typo in Airflow Doc issue report
- File Location: .github/ISSUE_TEMPLATE/airflow_doc_issue_report.yml
- Changed "eequest" to "request"
### Solving the problem
Simply fix them as explained above and will make a PR for this!
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23293 | https://github.com/apache/airflow/pull/23294 | 97ad3dbab59407fde97367fe7c0c4602c1d3452f | c26796e31a9543cd8b45b50264128ac17455002c | "2022-04-27T17:43:01Z" | python | "2022-04-27T21:30:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,292 | ["airflow/providers/google/cloud/hooks/cloud_sql.py"] | GCP Composer v1.18.6 and 2.0.10 incompatible with CloudSqlProxyRunner | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
6.6.0 or above
### Apache Airflow version
2.2.3
### Operating System
n/a
### Deployment
Composer
### Deployment details
_No response_
### What happened
Hi! A [user on StackOverflow](https://stackoverflow.com/questions/71975635/gcp-composer-v1-18-6-and-2-0-10-incompatible-with-cloudsqlproxyrunner) and some Cloud SQL engineers at Google noticed that the CloudSQLProxyRunner was broken by [this commit](https://github.com/apache/airflow/pull/22127/files#diff-5992ce7fff93c23c57833df9ef892e11a023494341b80a9fefa8401f91988942L454)
### What you think should happen instead
Ideally DAGs should continue to work as they did before
### How to reproduce
Make a DAG that connects to Cloud SQL using the CloudSqlProxyRunner in Composer 1.18.6 or above with Google providers 6.6.0 or above, and observe a 404
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23292 | https://github.com/apache/airflow/pull/23299 | 0c9c1cf94acc6fb315a9bc6f5bf1fbd4e4b4c923 | 1f3260354988b304cf31d5e1d945ce91798bed48 | "2022-04-27T17:34:37Z" | python | "2022-04-28T13:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,285 | ["airflow/models/taskmixin.py", "airflow/utils/edgemodifier.py", "airflow/utils/task_group.py", "tests/utils/test_edgemodifier.py"] | Cycle incorrectly detected in DAGs when using Labels within Task Groups | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
When attempting to create a DAG containing Task Groups and in those Task Groups there are Labels between nodes, the DAG fails to import due to cycle detection.
Consider this DAG:
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.edgemodifier import Label
@task
def begin():
...
@task
def end():
...
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_groups_with_edge_labels():
@task_group
def group():
begin() >> Label("label") >> end()
group()
_ = task_groups_with_edge_labels()
```
When attempting to import the DAG, this error message is displayed:
<img width="1395" alt="image" src="https://user-images.githubusercontent.com/48934154/165566299-3dd65cff-5e36-47d3-a243-7bc33d4344d6.png">
This occurs on the `main` branch as well.
### What you think should happen instead
Users should be able to specify Labels between tasks within a Task Group.
### How to reproduce
- Use the DAG mentioned above and try to import into an Airflow environment
- Or, create a simple unit test of the following and execute said test.
```python
def test_cycle_task_group_with_edge_labels(self):
from airflow.models.baseoperator import chain
from airflow.utils.task_group import TaskGroup
from airflow.utils.edgemodifier import Label
dag = DAG('dag', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
with dag:
with TaskGroup(group_id="task_group") as task_group:
op1 = EmptyOperator(task_id='A')
op2 = EmptyOperator(task_id='B')
op1 >> Label("label") >> op2
assert not check_cycle(dag)
```
A `AirflowDagCycleException` should be thrown:
```
tests/utils/test_dag_cycle.py::TestCycleTester::test_cycle_task_group_with_edge_labels FAILED [100%]
=============================================================================================== FAILURES ===============================================================================================
________________________________________________________________________ TestCycleTester.test_cycle_task_group_with_edge_labels ________________________________________________________________________
self = <tests.utils.test_dag_cycle.TestCycleTester testMethod=test_cycle_task_group_with_edge_labels>
def test_cycle_task_group_with_edge_labels(self):
from airflow.models.baseoperator import chain
from airflow.utils.task_group import TaskGroup
from airflow.utils.edgemodifier import Label
dag = DAG('dag', start_date=DEFAULT_DATE, default_args={'owner': 'owner1'})
with dag:
with TaskGroup(group_id="task_group") as task_group:
op1 = EmptyOperator(task_id='A')
op2 = EmptyOperator(task_id='B')
op1 >> Label("label") >> op2
> assert not check_cycle(dag)
tests/utils/test_dag_cycle.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
airflow/utils/dag_cycle_tester.py:76: in check_cycle
child_to_check = _check_adjacent_tasks(current_task_id, task)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
task_id = 'task_group.B', current_task = <Task(EmptyOperator): task_group.B>
def _check_adjacent_tasks(task_id, current_task):
"""Returns first untraversed child task, else None if all tasks traversed."""
for adjacent_task in current_task.get_direct_relative_ids():
if visited[adjacent_task] == CYCLE_IN_PROGRESS:
msg = f"Cycle detected in DAG. Faulty task: {task_id}"
> raise AirflowDagCycleException(msg)
E airflow.exceptions.AirflowDagCycleException: Cycle detected in DAG. Faulty task: task_group.B
airflow/utils/dag_cycle_tester.py:62: AirflowDagCycleException
---------------------------------------------------------------------------------------- Captured stdout setup -----------------------------------------------------------------------------------------
========================= AIRFLOW ==========================
Home of the user: /root
Airflow home /root/airflow
Skipping initializing of the DB as it was initialized already.
You can re-initialize the database by adding --with-db-init flag when running tests.
======================================================================================= short test summary info ========================================================================================
FAILED tests/utils/test_dag_cycle.py::TestCycleTester::test_cycle_task_group_with_edge_labels - airflow.exceptions.AirflowDagCycleException: Cycle detected in DAG. Faulty task: task_group.B
==================================================================================== 1 failed, 2 warnings in 1.08s =====================================================================================
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
This issue also occurs on the `main` branch using Breeze.
### Anything else
Possibly related to #21404
When the Label is removed, no cycle is detected.
```python
from pendulum import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.edgemodifier import Label
@task
def begin():
...
@task
def end():
...
@dag(start_date=datetime(2022, 1, 1), schedule_interval=None)
def task_groups_with_edge_labels():
@task_group
def group():
begin() >> end()
group()
_ = task_groups_with_edge_labels()
```
<img width="1437" alt="image" src="https://user-images.githubusercontent.com/48934154/165566908-a521d685-a032-482e-9e6b-ef85f0743e64.png">
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23285 | https://github.com/apache/airflow/pull/23291 | 726b27f86cf964924e5ee7b29a30aefe24dac45a | 3182303ce50bda6d5d27a6ef4e19450fb4e47eea | "2022-04-27T16:28:04Z" | python | "2022-04-27T18:12:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,284 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_schema.py", "tests/api_connexion/endpoints/test_task_endpoint.py", "tests/api_connexion/schemas/test_task_schema.py"] | Get DAG tasks in REST API does not include is_mapped | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
The rest API endpoint for get [/dags/{dag_id}/tasks](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_tasks) does not include `is_mapped`.
Example: `consumer` is mapped but I have no way to tell that from the API response:
<img width="306" alt="Screen Shot 2022-04-27 at 11 35 54 AM" src="https://user-images.githubusercontent.com/4600967/165556420-f8ade6e6-e904-4be0-a759-5281ddc04cba.png">
<img width="672" alt="Screen Shot 2022-04-27 at 11 35 25 AM" src="https://user-images.githubusercontent.com/4600967/165556310-742ec23d-f5a8-4cae-bea1-d00fd6c6916f.png">
### What you think should happen instead
Someone should be able to tell whether a task returned by GET /tasks is mapped.
### How to reproduce
Call GET /tasks on a DAG with mapped tasks and observe that there is no way to determine from the response body whether a task is mapped.
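To make the gap concrete, here is a simulated entry shaped like one element of the GET `/dags/{dag_id}/tasks` response (field subset only; the values are placeholders taken from the screenshot above) — nothing in it identifies a mapped task:

```python
# Simulated task entry; only a subset of the real schema's fields is shown.
task_entry = {
    "task_id": "consumer",
    "owner": "airflow",
    "retries": 0.0,
}

# No flag distinguishes a mapped task:
print("is_mapped" in task_entry)  # prints False
```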
### Operating System
Mac OSX
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23284 | https://github.com/apache/airflow/pull/23319 | 98ec8c6990347fda60cbad33db915dc21497b1f0 | f3d80c2a0dce93b908d7c9de30c9cba673eb20d5 | "2022-04-27T15:37:09Z" | python | "2022-04-28T12:54:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,272 | ["breeze-legacy"] | Breeze-legacy missing flag_build_docker_images | ### Apache Airflow version
main (development)
### What happened
Running `./breeze-legacy` warns about a potential issue:
```shell
โฏ ./breeze-legacy --help
Good version of docker 20.10.13.
./breeze-legacy: line 1434: breeze::flag_build_docker_images: command not found
...
```
And sure enough, `flag_build_docker_images` is referenced but not defined anywhere:
```shell
โฏ ag flag_build_docker_images
breeze-legacy
1433:$(breeze::flag_build_docker_images)
```
And I believe that completely breaks `breeze-legacy`:
```shell
โฏ ./breeze-legacy
Good version of docker 20.10.13.
ERROR: Allowed platform: [ ]. Passed: 'linux/amd64'
Switch to supported value with --platform flag.
ERROR: The previous step completed with error. Please take a look at output above
```
### What you think should happen instead
Breeze-legacy should still work. Bash functions should be defined if they are still in use.
### How to reproduce
Pull `main` branch.
Run `./breeze-legacy`.
### Operating System
macOS 11.6.4 Big Sur (Intel)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23272 | https://github.com/apache/airflow/pull/23276 | 1e87f51d163a8db7821d3a146c358879aff7ec0e | aee40f82ccec7651abe388d6a2cbac35f5f4c895 | "2022-04-26T19:20:12Z" | python | "2022-04-26T22:43:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,266 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | wasb hook not using AZURE_CLIENT_ID environment variable as client_id for ManagedIdentityCredential | ### Apache Airflow Provider(s)
microsoft-azure
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-azure==3.8.0
### Apache Airflow version
2.2.4
### Operating System
Ubuntu 20.04.2 LTS
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Have deployed airflow using the official helm chart on aks cluster.
### What happened
I have deployed Apache Airflow using the official Helm chart on an AKS cluster.
The pod has multiple user-assigned identities assigned to it.
I have set the AZURE_CLIENT_ID environment variable to the client ID that I want to use for authentication.
_Airflow connection:_
wasb_default = '{"login":"storageaccountname"}'
**Env**
AZURE_CLIENT_ID="user-managed-identity-client-id"
_**code**_
```
# suppress azure.core logs
import logging
logger = logging.getLogger("azure.core")
logger.setLevel(logging.ERROR)
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook
conn_id = 'wasb-default'
hook = WasbHook(conn_id)
for blob_name in hook.get_blobs_list("testcontainer"):
print(blob_name)
```
**error**
```
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: failed to get service principal token, error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com
```
**trace**
```
[2022-04-26 16:37:23,446] {environment.py:103} WARNING - Incomplete environment configuration. These variables are set: AZURE_CLIENT_ID
[2022-04-26 16:37:23,446] {managed_identity.py:89} INFO - ManagedIdentityCredential will use IMDS
[2022-04-26 16:37:23,605] {chained.py:84} INFO - DefaultAzureCredential acquired a token from ManagedIdentityCredential
#Note: azure key vault azure.secrets.key_vault.AzureKeyVaultBackend uses DefaultAzureCredential to get the connection
[2022-04-26 16:37:23,687] {base.py:68} INFO - Using connection ID 'wasb-default' for task execution.
[2022-04-26 16:37:23,687] {managed_identity.py:89} INFO - ManagedIdentityCredential will use IMDS
[2022-04-26 16:37:23,688] {wasb.py:155} INFO - Using managed identity as credential
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_universal.py", line 561, in deserialize_from_text
return json.loads(data_as_str)
File "/usr/local/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/managed_identity_client.py", line 51, in _process_response
content = ContentDecodePolicy.deserialize_from_text(
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_universal.py", line 563, in deserialize_from_text
raise DecodeError(message="JSON is invalid: {}".format(err), response=response, error=err)
azure.core.exceptions.DecodeError: JSON is invalid: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_credentials/imds.py", line 97, in _request_token
token = self._client.request_token(*scopes, headers={"Metadata": "true"})
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/managed_identity_client.py", line 126, in request_token
token = self._process_response(response, request_time)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/managed_identity_client.py", line 59, in _process_response
six.raise_from(ClientAuthenticationError(message=message, response=response.http_response), ex)
File "<string>", line 3, in raise_from
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: failed to get service principal token, error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/test.py", line 7, in <module>
for blob_name in hook.get_blobs_list("test_container"):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/microsoft/azure/hooks/wasb.py", line 231, in get_blobs_list
for blob in blobs:
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/paging.py", line 129, in __next__
return next(self._page_iterator)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/paging.py", line 76, in __next__
self._response = self._get_next(self.continuation_token)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_list_blobs_helper.py", line 79, in _get_next_cb
process_storage_error(error)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_shared/response_handlers.py", line 89, in process_storage_error
raise storage_error
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_list_blobs_helper.py", line 72, in _get_next_cb
return self._command(
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_generated/operations/_container_operations.py", line 1572, in list_blob_hierarchy_segment
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 211, in run
return first_node.send(pipeline_request) # type: ignore
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
[Previous line repeated 2 more times]
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_redirect.py", line 158, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_shared/policies.py", line 515, in send
raise err
File "/home/airflow/.local/lib/python3.10/site-packages/azure/storage/blob/_shared/policies.py", line 489, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/_base.py", line 71, in send
response = self.next.send(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_authentication.py", line 117, in send
self.on_request(request)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/core/pipeline/policies/_authentication.py", line 94, in on_request
self._token = self._credential.get_token(*self._scopes)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/decorators.py", line 32, in wrapper
token = fn(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_credentials/managed_identity.py", line 123, in get_token
return self._credential.get_token(*scopes, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_internal/get_token_mixin.py", line 76, in get_token
token = self._request_token(*scopes, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/azure/identity/_credentials/imds.py", line 111, in _request_token
six.raise_from(ClientAuthenticationError(message=ex.message, response=ex.response), ex)
File "<string>", line 3, in raise_from
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: failed to get service principal token, error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Multiple user assigned identities exist, please specify the clientId / resourceId of the identity in the token request"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com
```
### What you think should happen instead
The wasb hook should be able to authenticate using the user-assigned identity specified in AZURE_CLIENT_ID and list the blobs.
### How to reproduce
In an environment with multiple user-assigned identities:
```
import logging
logger = logging.getLogger("azure.core")
logger.setLevel(logging.ERROR)
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook
conn_id = 'wasb-default'
hook = WasbHook(conn_id)
for blob_name in hook.get_blobs_list("testcontainer"):
print(blob_name)
```
### Anything else
The issue is caused by `client_id` not being passed to `ManagedIdentityCredential` in
[azure.hooks.wasb.WasbHook](https://github.com/apache/airflow/blob/1d875a45994540adef23ad6f638d78c9945ef873/airflow/providers/microsoft/azure/hooks/wasb.py#L153-L160)
```
if not credential:
credential = ManagedIdentityCredential()
self.log.info("Using managed identity as credential")
return BlobServiceClient(
account_url=f"https://{conn.login}.blob.core.windows.net/",
credential=credential,
**extra,
)
```
Solution 1:
use [azure.identity.DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-python/blob/aa35d07aebf062393f14d147da54f0342e6b94a8/sdk/identity/azure-identity/azure/identity/_credentials/default.py#L32) instead of `ManagedIdentityCredential`
Solution 2:
pass the client ID from the environment, [as done in DefaultAzureCredential](https://github.com/Azure/azure-sdk-for-python/blob/aa35d07aebf062393f14d147da54f0342e6b94a8/sdk/identity/azure-identity/azure/identity/_credentials/default.py#L104-L106):
`ManagedIdentityCredential(client_id=os.environ.get("AZURE_CLIENT_ID"))`
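A minimal, testable sketch of solution 2 (the helper function name below is ours, not part of the provider; wiring it into the hook is left implicit, and building the credential itself requires the azure-identity package):

```python
import os

# Derive ManagedIdentityCredential kwargs from the AZURE_CLIENT_ID environment
# variable, mirroring what DefaultAzureCredential does internally.
def managed_identity_kwargs(env=None):
    env = os.environ if env is None else env
    client_id = env.get("AZURE_CLIENT_ID")
    return {"client_id": client_id} if client_id else {}

# The hook could then build the credential as:
#   credential = ManagedIdentityCredential(**managed_identity_kwargs())
print(managed_identity_kwargs({"AZURE_CLIENT_ID": "my-uami-client-id"}))
# prints {'client_id': 'my-uami-client-id'}
```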
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23266 | https://github.com/apache/airflow/pull/23394 | fcfaa8307ac410283f1270a0df9e557570e5ffd3 | 8f181c10344bd319ac5f6aeb102ee3c06e1f1637 | "2022-04-26T17:23:24Z" | python | "2022-05-08T21:12:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,249 | ["airflow/cli/commands/task_command.py", "tests/cli/commands/test_task_command.py"] | Pool option does not work in backfill command | ### Apache Airflow version
2.2.4
### What happened
Discussion Ref: https://github.com/apache/airflow/discussions/22201
I added the pool option to the backfill command, but only default_pool is used.
The log appears as below, but if you check the Task Instance Details / List Pool UI, default_pool is used.
```
--------------------------------------------------------------------------------
[2022-03-12, 20:03:44 KST] {taskinstance.py:1244} INFO - Starting attempt 1 of 1
[2022-03-12, 20:03:44 KST] {taskinstance.py:1245} INFO -
--------------------------------------------------------------------------------
[2022-03-12, 20:03:44 KST] {taskinstance.py:1264} INFO - Executing <Task(BashOperator): runme_0> on 2022-03-05 00:00:00+00:00
[2022-03-12, 20:03:44 KST] {standard_task_runner.py:52} INFO - Started process 555 to run task
[2022-03-12, 20:03:45 KST] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'example_bash_operator', 'runme_0', 'backfill__2022-03-05T00:00:00+00:00', '--job-id', '127', '--pool', 'backfill_pool', '--raw', '--subdir', '/home/***/.local/lib/python3.8/site-packages/***/example_dags/example_bash_operator.py', '--cfg-path', '/tmp/tmprhjr0bc_', '--error-file', '/tmp/tmpkew9ufim']
[2022-03-12, 20:03:45 KST] {standard_task_runner.py:77} INFO - Job 127: Subtask runme_0
[2022-03-12, 20:03:45 KST] {logging_mixin.py:109} INFO - Running <TaskInstance: example_bash_operator.runme_0 backfill__2022-03-05T00:00:00+00:00 [running]> on host 56d55382c860
[2022-03-12, 20:03:45 KST] {taskinstance.py:1429} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=example_bash_operator
AIRFLOW_CTX_TASK_ID=runme_0
AIRFLOW_CTX_EXECUTION_DATE=2022-03-05T00:00:00+00:00
AIRFLOW_CTX_DAG_RUN_ID=backfill__2022-03-05T00:00:00+00:00
[2022-03-12, 20:03:45 KST] {subprocess.py:62} INFO - Tmp dir root location:
/tmp
[2022-03-12, 20:03:45 KST] {subprocess.py:74} INFO - Running command: ['bash', '-c', 'echo "example_bash_operator__runme_0__20220305" && sleep 1']
[2022-03-12, 20:03:45 KST] {subprocess.py:85} INFO - Output:
[2022-03-12, 20:03:46 KST] {subprocess.py:89} INFO - example_bash_operator__runme_0__20220305
[2022-03-12, 20:03:47 KST] {subprocess.py:93} INFO - Command exited with return code 0
[2022-03-12, 20:03:47 KST] {taskinstance.py:1272} INFO - Marking task as SUCCESS. dag_id=example_bash_operator, task_id=runme_0, execution_date=20220305T000000, start_date=20220312T110344, end_date=20220312T110347
[2022-03-12, 20:03:47 KST] {local_task_job.py:154} INFO - Task exited with return code 0
[2022-03-12, 20:03:47 KST] {local_task_job.py:264} INFO - 0 downstream tasks scheduled from follow-on schedule check
```
### What you think should happen instead
The backfill task instance should use a slot in the backfill_pool.
### How to reproduce
1. Create a backfill_pool in UI.
2. Run the backfill command on the example dag.
```
$ docker exec -it airflow_airflow-scheduler_1 /bin/bash
$ airflow dags backfill example_bash_operator -s 2022-03-05 -e 2022-03-06 \
--pool backfill_pool --reset-dagruns -y
[2022-03-12 11:03:52,720] {backfill_job.py:386} INFO - [backfill progress] | finished run 0 of 2 | tasks waiting: 2 | succeeded: 8 | running: 2 | failed: 0 | skipped: 2 | deadlocked: 0 | not ready: 2
[2022-03-12 11:03:57,574] {dagrun.py:545} INFO - Marking run <DagRun example_bash_operator @ 2022-03-05T00:00:00+00:00: backfill__2022-03-05T00:00:00+00:00, externally triggered: False> successful
[2022-03-12 11:03:57,575] {dagrun.py:590} INFO - DagRun Finished: dag_id=example_bash_operator, execution_date=2022-03-05T00:00:00+00:00, run_id=backfill__2022-03-05T00:00:00+00:00, run_start_date=2022-03-12 11:03:37.530158+00:00, run_end_date=2022-03-12 11:03:57.575869+00:00, run_duration=20.045711, state=success, external_trigger=False, run_type=backfill, data_interval_start=2022-03-05T00:00:00+00:00, data_interval_end=2022-03-06 00:00:00+00:00, dag_hash=None
[2022-03-12 11:03:57,582] {dagrun.py:545} INFO - Marking run <DagRun example_bash_operator @ 2022-03-06T00:00:00+00:00: backfill__2022-03-06T00:00:00+00:00, externally triggered: False> successful
[2022-03-12 11:03:57,583] {dagrun.py:590} INFO - DagRun Finished: dag_id=example_bash_operator, execution_date=2022-03-06T00:00:00+00:00, run_id=backfill__2022-03-06T00:00:00+00:00, run_start_date=2022-03-12 11:03:37.598927+00:00, run_end_date=2022-03-12 11:03:57.583295+00:00, run_duration=19.984368, state=success, external_trigger=False, run_type=backfill, data_interval_start=2022-03-06 00:00:00+00:00, data_interval_end=2022-03-07 00:00:00+00:00, dag_hash=None
[2022-03-12 11:03:57,584] {backfill_job.py:386} INFO - [backfill progress] | finished run 2 of 2 | tasks waiting: 0 | succeeded: 10 | running: 0 | failed: 0 | skipped: 4 | deadlocked: 0 | not ready: 0
[2022-03-12 11:03:57,589] {backfill_job.py:851} INFO - Backfill done. Exiting.
```
### Operating System
MacOS BigSur, docker-compose
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Follow the guide - [Running Airflow in Docker]. Use CeleryExecutor.
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23249 | https://github.com/apache/airflow/pull/23258 | 511d0ee256b819690ccf0f6b30d12340b1dd7f0a | 3970ea386d5e0a371143ad1e69b897fd1262842d | "2022-04-26T10:48:39Z" | python | "2022-04-30T19:11:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,246 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Add api call for changing task instance status | ### Description
In the UI you can change the status of a task instance, but there is no API call available for the same feature.
It would be nice to have an api call for this as well.
### Use case/motivation
I found a solution on Stack Overflow under [How to add manual tasks in an Apache Airflow Dag]. There is a suggestion to set a task to failed and manually change it to succeeded when the task is done.
Our project has many manual tasks. This suggestion seems like a good option, but there is no API call yet to use instead of changing all the statuses manually. I would like to use an API call for this instead.
You can change the state of a DAG run via the API, so it seems natural to have something similar for task instances.
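For reference, the existing DAG-run endpoint takes a one-field JSON patch, and a task-instance counterpart could accept the same shape (the task-instance endpoint path and payload below are hypothetical — they illustrate the proposal, not an existing API):

```python
import json

# Existing endpoint: PATCH /api/v1/dags/{dag_id}/dagRuns/{dag_run_id}
dag_run_patch = json.dumps({"state": "success"})

# Hypothetical counterpart, e.g.:
# PATCH /api/v1/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}
task_instance_patch = json.dumps({"new_state": "success"})

print(dag_run_patch)        # prints {"state": "success"}
print(task_instance_patch)  # prints {"new_state": "success"}
```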
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23246 | https://github.com/apache/airflow/pull/26165 | 5c37b503f118b8ad2585dff9949dd8fdb96689ed | 1e6f1d54c54e5dc50078216e23ba01560ebb133c | "2022-04-26T09:17:52Z" | python | "2022-10-31T05:31:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,227 | ["airflow/api_connexion/endpoints/task_instance_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/schemas/test_task_instance_schema.py"] | Ability to clear a specific DAG Run's task instances via REST APIs | ### Discussed in https://github.com/apache/airflow/discussions/23220
<div type='discussions-op-text'>
<sup>Originally posted by **yashk97** April 25, 2022</sup>
Hi,
My use case: when multiple DAG Runs fail on some task (not the same one in all of them), I want to re-trigger each of these DAG Runs individually.
I checked the REST API Documentation and came across the clear Task Instances API with the following URL: /api/v1/dags/{dag_id}/clearTaskInstances
However, it filters task instances of the specified DAG in a given date range.
I was wondering if, for a specified DAG Run, we can clear a task along with its downstream tasks irrespective of the states of the tasks or the DAG run through REST API.
This will give us more granular control over re-running DAGs from the point of failure.
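With the current date-range-based endpoint, one workaround is to target a single run by setting both dates to that run's logical date. A hedged sketch of building such a payload (field names follow the documented `clearTaskInstances` schema; verify against your Airflow version):

```python
import json

def build_clear_payload(logical_date, task_ids):
    """Payload for POST /api/v1/dags/{dag_id}/clearTaskInstances that narrows
    the date range to a single dag run's logical date."""
    return {
        "start_date": logical_date,
        "end_date": logical_date,   # same date on both ends => one run
        "task_ids": task_ids,
        "only_failed": False,
        "dry_run": False,
    }

payload = build_clear_payload("2022-04-25T00:00:00+00:00", ["load_data"])
print(json.dumps(payload, indent=2))
```

This does not give per-run granularity by `dag_run_id`, which is what the request above asks for; it only narrows the existing filter.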
![image](https://user-images.githubusercontent.com/25115516/165099593-46ce449a-d303-49ee-9edb-fc5d524f4517.png)
![image](https://user-images.githubusercontent.com/25115516/165099683-4ba7f438-3660-4a16-a66c-2017aee5042f.png)
</div> | https://github.com/apache/airflow/issues/23227 | https://github.com/apache/airflow/pull/23516 | 3221ed5968423ea7a0dc7e1a4b51084351c2d56b | eceb4cc5888a7cf86a9250fff001fede2d6aba0f | "2022-04-25T18:40:24Z" | python | "2022-08-05T17:27:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,206 | ["airflow/migrations/utils.py", "airflow/migrations/versions/0110_2_3_2_add_cascade_to_dag_tag_foreignkey.py", "airflow/models/dag.py", "docs/apache-airflow/migrations-ref.rst"] | UI shows Foreign Key Error when deleting a dag | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
I tried to delete a dag from the grid view, and I saw this instead
```
Ooops!
Something bad has happened.
...
Python version: 3.7.13
Airflow version: 2.3.0b1
Node: airflow-webserver-7c4f49f5dd-h74w2
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1706, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.ForeignKeyViolation: update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag"
DETAIL: Key (dag_id)=(core_todo) is still referenced from table "dag_tag".
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/airflow/.local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/auth.py", line 40, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/decorators.py", line 80, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/www/views.py", line 1812, in delete
delete_dag.delete_dag(dag_id)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/api/common/delete_dag.py", line 80, in delete_dag
.delete(synchronize_session='fetch')
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3111, in delete
execution_options={"synchronize_session": synchronize_session},
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/orm/session.py", line 1670, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1520, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 314, in _execute_on_connection
self, multiparams, params, execution_options
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1399, in _execute_clauseelement
cache_hit=cache_hit,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1749, in _execute_context
e, statement, parameters, cursor, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1930, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1706, in _execute_context
cursor, statement, parameters, context
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag"
DETAIL: Key (dag_id)=(core_todo) is still referenced from table "dag_tag".
[SQL: DELETE FROM dag WHERE dag.dag_id IN (%(dag_id_1_1)s) RETURNING dag.dag_id]
[parameters: {'dag_id_1_1': 'core_todo'}]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
```
Also, here are the database pod logs:
```
โ 2022-04-25 01:42:14.185 GMT [155] STATEMENT: INSERT INTO log (dttm, dag_id, task_id, map_index, event, execution_date, owner, extra) VALUES ('2022-04-25T01:42:14.178085+00:00'::timestamptz, NULL, NULL, NULL, 'cli_upgradedb', NULL, ' โ
โ 2022-04-25 01:42:14.371 GMT [155] ERROR: relation "connection" does not exist at character 55 โ
โ 2022-04-25 01:42:14.371 GMT [155] STATEMENT: SELECT connection.conn_id AS connection_conn_id โ
โ FROM connection GROUP BY connection.conn_id โ
โ HAVING count(*) > 1 โ
โ 2022-04-25 01:42:14.372 GMT [155] ERROR: relation "connection" does not exist at character 55 โ
โ 2022-04-25 01:42:14.372 GMT [155] STATEMENT: SELECT connection.conn_id AS connection_conn_id โ
โ FROM connection โ
โ WHERE connection.conn_type IS NULL โ
โ 2022-04-25 01:42:16.489 GMT [158] ERROR: relation "log" does not exist at character 13 โ
โ 2022-04-25 01:42:16.489 GMT [158] STATEMENT: INSERT INTO log (dttm, dag_id, task_id, map_index, event, execution_date, owner, extra) VALUES ('2022-04-25T01:42:16.482543+00:00'::timestamptz, NULL, NULL, NULL, 'cli_check', NULL, 'airf โ
โ 2022-04-25 01:42:17.917 GMT [160] ERROR: column "map_index" of relation "log" does not exist at character 41 โ
โ 2022-04-25 01:42:17.917 GMT [160] STATEMENT: INSERT INTO log (dttm, dag_id, task_id, map_index, event, execution_date, owner, extra) VALUES ('2022-04-25T01:42:17.910396+00:00'::timestamptz, NULL, NULL, NULL, 'cli_flower', NULL, 'air โ
โ 2022-04-25 03:18:33.631 GMT [24494] ERROR: update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag" โ
โ 2022-04-25 03:18:33.631 GMT [24494] DETAIL: Key (dag_id)=(core_todo) is still referenced from table "dag_tag". โ
โ 2022-04-25 03:18:33.631 GMT [24494] STATEMENT: DELETE FROM dag WHERE dag.dag_id IN ('core_todo') RETURNING dag.dag_id โ
โ 2022-04-25 03:31:18.858 GMT [24760] ERROR: update or delete on table "dag" violates foreign key constraint "dag_tag_dag_id_fkey" on table "dag_tag" โ
โ 2022-04-25 03:31:18.858 GMT [24760] DETAIL: Key (dag_id)=(core_todo) is still referenced from table "dag_tag". โ
โ 2022-04-25 03:31:18.858 GMT [24760] STATEMENT: DELETE FROM dag WHERE dag.dag_id IN ('core_todo') RETURNING dag.dag_id
```
### What you think should happen instead
The dag gets deleted, no error
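The constraint violation in the traceback suggests the `dag_tag` rows must be removed before (or together with) the `dag` row. A minimal sketch of the `ON DELETE CASCADE` approach in plain SQLite — illustrative only, not Airflow's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE dag (dag_id TEXT PRIMARY KEY)")
# With ON DELETE CASCADE, deleting the dag row removes its tag rows too,
# so the delete no longer hits a foreign key violation.
conn.execute(
    "CREATE TABLE dag_tag ("
    " name TEXT, dag_id TEXT,"
    " FOREIGN KEY (dag_id) REFERENCES dag (dag_id) ON DELETE CASCADE)"
)
conn.execute("INSERT INTO dag VALUES ('core_todo')")
conn.execute("INSERT INTO dag_tag VALUES ('core', 'core_todo')")
conn.execute("DELETE FROM dag WHERE dag_id = 'core_todo'")  # no FK violation
print(conn.execute("SELECT COUNT(*) FROM dag_tag").fetchone()[0])  # → 0
```

Without the cascade, an explicit `DELETE FROM dag_tag WHERE dag_id = ...` before deleting the dag row is the manual workaround.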
### How to reproduce
I'm not sure if I can replicate it, but I'll report back here if I can. So far as I remember the steps were:
1. run a (large) dag
2. the dag failed for unrelated reasons
3. delete the dag from the grid view
4. see error page
### Operating System
kubernetes/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed via helm into a microk8s cluster, which was running in a VM, which was deployed by CircleCI.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23206 | https://github.com/apache/airflow/pull/23444 | e2401329345dcc5effa933b92ca969b8779755e4 | 8ccff9244a6d1a936d8732721373b967e95ec404 | "2022-04-25T03:42:45Z" | python | "2022-05-27T14:28:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,174 | ["CONTRIBUTORS_QUICK_START.rst", "CONTRIBUTORS_QUICK_START_CODESPACES.rst", "CONTRIBUTORS_QUICK_START_GITPOD.rst", "CONTRIBUTORS_QUICK_START_PYCHARM.rst", "CONTRIBUTORS_QUICK_START_VSCODE.rst"] | Some links in contributor's quickstart table of contents are broken | ### What do you see as an issue?
In `CONTRIBUTORS_QUICK_START.rst`, the links in the table of contents that direct users to parts of the guide that are hidden by the drop down don't work if the drop down isn't expanded. For example, clicking "[Setup Airflow with Breeze](https://github.com/apache/airflow/blob/main/CONTRIBUTORS_QUICK_START.rst#setup-airflow-with-breeze)" does nothing until you open the appropriate drop down `Setup and develop using <PyCharm, Visual Studio Code, Gitpod>`
### Solving the problem
Instead of having the entire documentation blocks under `Setup and develop using {method}` dropdowns, there could be drop downs under each section so that the guide remains concise without sacrificing the functionality of the table of contents.
### Anything else
I'm happy to submit a PR eventually, but I might not be able to get around to it for a bit if anyone else wants to handle it real quick.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23174 | https://github.com/apache/airflow/pull/23762 | e08b59da48743ff0d0ce145d1bc06bb7b5f86e68 | 1bf6dded9a5dcc22238b8943028b08741e36dfe5 | "2022-04-22T17:29:05Z" | python | "2022-05-24T17:03:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,171 | ["airflow/api/common/mark_tasks.py", "airflow/models/dag.py", "tests/models/test_dag.py", "tests/test_utils/mapping.py"] | Mark Success on a mapped task, reruns other failing mapped tasks | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Have a DAG with mapped tasks. Mark at least two mapped tasks as failed. Mark one of the failures as success. See the other task(s) switch to `no_status` and rerun.
![Apr-22-2022 10-21-41](https://user-images.githubusercontent.com/4600967/164734320-bafe267d-6ef0-46fb-b13f-6d85f9ef86ba.gif)
### What you think should happen instead
Marking a single mapped task as a success probably shouldn't affect other failed mapped tasks.
### How to reproduce
_No response_
### Operating System
OSX
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23171 | https://github.com/apache/airflow/pull/23177 | d262a72ca7ab75df336b93cefa338e7ba3f90ebb | 26a9ec65816e3ec7542d63ab4a2a494931a06c9b | "2022-04-22T14:25:54Z" | python | "2022-04-25T09:03:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,168 | ["airflow/api_connexion/schemas/connection_schema.py", "tests/api_connexion/endpoints/test_connection_endpoint.py"] | Getting error "Extra Field may not be null" while hitting create connection api with extra=null | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Getting error "Extra Field may not be null" while hitting create connection api with extra=null
```
{
"detail": "{'extra': ['Field may not be null.']}",
"status": 400,
"title": "Bad Request",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
### What you think should happen instead
I should be able to create connection through API
### How to reproduce
Steps to reproduce:
1. Hit connection end point with json body
Api Endpoint - api/v1/connections
HTTP Method - Post
Json Body -
```
{
"connection_id": "string6",
"conn_type": "string",
"host": "string",
"login": null,
"schema": null,
"port": null,
"password": "pa$$word",
"extra":null
}
```
### Operating System
debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro dev start
### Anything else
As per the code, I am assuming it may be null.
```
Connection:
description: Full representation of the connection.
allOf:
- $ref: '#/components/schemas/ConnectionCollectionItem'
- type: object
properties:
password:
type: string
format: password
writeOnly: true
description: Password of the connection.
extra:
type: string
nullable: true
description: Other values that cannot be put into another field, e.g. RSA keys.
```
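The spec above says `nullable: true`, yet the server evidently rejects `None`. The mismatch can be shown with a minimal validator sketch — hypothetical, not Airflow's actual schema code:

```python
class Field:
    """Tiny stand-in for a schema field that may or may not allow null."""

    def __init__(self, allow_none=False):
        self.allow_none = allow_none

    def validate(self, value):
        if value is None and not self.allow_none:
            raise ValueError("Field may not be null.")
        return value

strict = Field()                    # behaves like the failing 'extra' field
nullable = Field(allow_none=True)   # what the OpenAPI spec promises

print(nullable.validate(None))      # accepted
try:
    strict.validate(None)
except ValueError as e:
    print(e)                        # → Field may not be null.
```

The fix would presumably be to make the server-side field allow null, matching the published spec.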
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23168 | https://github.com/apache/airflow/pull/23183 | b33cd10941dd10d461023df5c2d3014f5dcbb7ac | b45240ad21ca750106931ba2b882b3238ef2b37d | "2022-04-22T10:48:23Z" | python | "2022-04-25T14:55:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,162 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | GCSToGCSOperator ignores replace parameter when there is no wildcard | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
Latest
### Apache Airflow version
2.2.5 (latest released)
### Operating System
MacOS 12.2.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
Ran the same DAG twice with 'replace = False'; in the second run, files are overwritten anyway.
source_object does not include wildcard.
Not sure whether this incorrect behavior also happens in the "with wildcard" scenario, but from the source code
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/gcs_to_gcs.py
in line 346 (inside _copy_source_with_wildcard) we have
if not self.replace:
but in _copy_source_without_wildcard we don't check self.replace at all.
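A sketch of the missing guard — when `replace` is false, skip any source object already present at the destination, as the wildcard path does. The function and argument names here are illustrative stand-ins, not the provider's real helpers:

```python
def objects_to_copy(source_objects, existing_destination_objects, replace):
    """Mimic the wildcard path's behaviour: with replace=False, skip
    any object already present at the destination."""
    if replace:
        return list(source_objects)
    existing = set(existing_destination_objects)
    return [obj for obj in source_objects if obj not in existing]

print(objects_to_copy(["a.csv", "b.csv"], ["a.csv"], replace=False))  # → ['b.csv']
print(objects_to_copy(["a.csv", "b.csv"], ["a.csv"], replace=True))   # → ['a.csv', 'b.csv']
```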
### What you think should happen instead
When 'replace = False', the second run should skip copying files since they are already there.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23162 | https://github.com/apache/airflow/pull/23340 | 03718194f4fa509f16fcaf3d41ff186dbae5d427 | 82c244f9c7f24735ee952951bcb5add45422d186 | "2022-04-22T06:45:06Z" | python | "2022-05-08T19:46:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,159 | ["airflow/providers/docker/operators/docker.py", "airflow/providers/docker/operators/docker_swarm.py"] | docker container still running while dag run failed | ### Apache Airflow version
2.1.4
### What happened
I have operator run with docker .
When dag run failed , docker.py try to remove container but remove failed and got the following error:
[2022-04-20 00:03:50,381] {taskinstance.py:1463} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 301, in _run_image_with_mounts
for line in lines:
File "/home/airflow/.local/lib/python3.8/site-packages/docker/types/daemon.py", line 32, in __next__
return next(self._stream)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 412, in <genexpr>
gen = (data for (_, data) in gen)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 92, in frames_iter_no_tty
(stream, n) = next_frame_header(socket)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 64, in next_frame_header
data = read_exactly(socket, 8)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 49, in read_exactly
next_data = read(socket, n - len(data))
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/socket.py", line 29, in read
select.select([socket], [], [])
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1238, in signal_handler
raise AirflowException("Task received SIGTERM signal")
airflow.exceptions.AirflowException: Task received SIGTERM signal
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.35/containers/de4cd812f8b0dcc448d591d1bd28fa736b1712237c8c8848919be512938bd515?v=False&link=False&force=False
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1165, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1283, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1313, in _execute_task
result = task_copy.execute(context=context)
File "/usr/local/airflow/dags/operators/byx_base_operator.py", line 611, in execute
raise e
File "/usr/local/airflow/dags/operators/byx_base_operator.py", line 591, in execute
self.execute_job(context)
File "/usr/local/airflow/dags/operators/byx_datax_operator.py", line 93, in execute_job
result = call_datax.execute(context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 343, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 265, in _run_image
return self._run_image_with_mounts(self.mounts, add_tmp_variable=False)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/docker/operators/docker.py", line 317, in _run_image_with_mounts
self.cli.remove_container(self.container['Id'])
File "/home/airflow/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/container.py", line 1010, in remove_container
self._raise_for_status(res)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.8/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.35/containers/de4cd812f8b0dcc448d591d1bd28fa736b1712237c8c8848919be512938bd515?v=False&link=False&force=False: Conflict ("You cannot remove a running container de4cd812f8b0dcc448d591d1bd28fa736b1712237c8c8848919be512938bd515. Stop the container before attempting removal or force remove")
### What you think should happen instead
the container should be removed successfully when the dag run fails
### How to reproduce
step 1: create a dag that runs a DockerOperator task
step 2: trigger the dag
step 3: mark the dag run as failed to simulate a failure; the remove-container error will appear and the docker container will still be running.
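Per the 409 message ("Stop the container before attempting removal or force remove"), a hedged sketch of a safer cleanup: stop first, then force-remove. The client here is any object with docker-py-like `stop`/`remove_container` methods, shown with a stub so the sketch is self-contained:

```python
def safe_remove_container(cli, container_id):
    """Stop the container (ignoring 'already stopped' style errors),
    then remove it with force=True as a last resort."""
    try:
        cli.stop(container_id)
    except Exception:
        pass  # already stopped/gone; removal below will surface real problems
    cli.remove_container(container_id, force=True)

class StubClient:
    """Records calls instead of talking to a real docker daemon."""
    def __init__(self):
        self.calls = []
    def stop(self, cid):
        self.calls.append(("stop", cid))
    def remove_container(self, cid, force=False):
        self.calls.append(("remove", cid, force))

cli = StubClient()
safe_remove_container(cli, "de4cd812f8b0")
print(cli.calls)  # → [('stop', 'de4cd812f8b0'), ('remove', 'de4cd812f8b0', True)]
```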
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23159 | https://github.com/apache/airflow/pull/23160 | 5d5d62e41e93fe9845c96ab894047422761023d8 | 237d2225d6b92a5012a025ece93cd062382470ed | "2022-04-22T00:15:38Z" | python | "2022-07-02T15:44:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,146 | ["airflow/providers/google/cloud/sensors/bigquery_dts.py", "tests/providers/google/cloud/sensors/test_bigquery_dts.py"] | location is missing in BigQueryDataTransferServiceTransferRunSensor | ### Apache Airflow version
2.2.3
### What happened
Location is missing in [BigQueryDataTransferServiceTransferRunSensor](airflow/providers/google/cloud/sensors/bigquery_dts.py).
This forces us to execute data transfers only in the US. When starting a transfer, the location can be provided.
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Google Cloud Composer
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23146 | https://github.com/apache/airflow/pull/23166 | 692a0899430f86d160577c3dd0f52644c4ffad37 | 967140e6c3bd0f359393e018bf27b7f2310a2fd9 | "2022-04-21T12:32:26Z" | python | "2022-04-25T21:05:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,145 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Task stuck in "scheduled" when running in backfill job | ### Apache Airflow version
2.2.4
### What happened
We are running airflow 2.2.4 with KubernetesExecutor. I have created a dag to run the airflow backfill command with SubprocessHook. What I observed is that when I backfilled a few days' worth of dagruns, the backfill would get stuck, with some dag runs having tasks staying in the "scheduled" state and never getting to run.
We are using the default pool and the pool is totally free when the tasks got stuck.
I could find some logs saying:
`TaskInstance: <TaskInstance: test_dag_2.task_1 backfill__2022-03-29T00:00:00+00:00 [queued]> found in queued state but was not launched, rescheduling` and nothing else in the log.
### What you think should happen instead
The tasks stuck in "scheduled" should start running when there is free slot in the pool.
### How to reproduce
Airflow 2.2.4 with python 3.8.13, KubernetesExecutor running in AWS EKS.
One backfill command example is: `airflow dags backfill test_dag_2 -s 2022-03-01 -e 2022-03-10 --rerun-failed-tasks`
The test_dag_2 dag is like:
```
import time
from datetime import timedelta
import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.models.dag import dag
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['airflow@example.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
}
def get_execution_date(**kwargs):
ds = kwargs['ds']
print(ds)
with DAG(
'test_dag_2',
default_args=default_args,
description='Testing dag',
start_date=pendulum.datetime(2022, 4, 2, tz='UTC'),
schedule_interval="@daily", catchup=True, max_active_runs=1,
) as dag:
t1 = BashOperator(
task_id='task_1',
depends_on_past=False,
bash_command='sleep 30'
)
t2 = PythonOperator(
task_id='get_execution_date',
python_callable=get_execution_date
)
t1 >> t2
```
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==3.0.2
apache-airflow-providers-docker==2.4.1
apache-airflow-providers-elasticsearch==2.2.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.4.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-microsoft-azure==3.6.0
apache-airflow-providers-microsoft-mssql==2.1.0
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.4.1
apache-airflow-providers-slack==4.2.0
apache-airflow-providers-snowflake==2.5.0
apache-airflow-providers-sqlite==2.1.0
apache-airflow-providers-ssh==2.4.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23145 | https://github.com/apache/airflow/pull/23720 | 49cfb6498eed0acfc336a24fd827b69156d5e5bb | 640d4f9636d3867d66af2478bca15272811329da | "2022-04-21T12:29:32Z" | python | "2022-11-18T01:09:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,131 | ["airflow/models/dagrun.py", "airflow/models/mappedoperator.py", "tests/models/test_dagrun.py"] | Scheduler deadlock error when mapping over empty list | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
manually triggered this dag:
```python
from datetime import datetime
from airflow import DAG
with DAG(
dag_id="null_mapped_2",
start_date=datetime(1970, 1, 1),
schedule_interval=None,
) as dag:
@dag.task
def empty():
return []
@dag.task
def print_it(thing):
print(thing)
print_it.expand(thing=empty())
```
scheduler logs (whitespace added for emphasis):
```
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2022-04-20 20:46:24,760] {scheduler_job.py:697} INFO - Starting the scheduler
[2022-04-20 20:46:24,761] {scheduler_job.py:702} INFO - Processing each file at most -1 times
[2022-04-20 20:46:24 +0000] [28] [INFO] Starting gunicorn 20.1.0
[2022-04-20 20:46:24 +0000] [28] [INFO] Listening at: http://0.0.0.0:8793 (28)
[2022-04-20 20:46:24 +0000] [28] [INFO] Using worker: sync
[2022-04-20 20:46:24 +0000] [29] [INFO] Booting worker with pid: 29
[2022-04-20 20:46:24,782] {executor_loader.py:106} INFO - Loaded executor: LocalExecutor
[2022-04-20 20:46:24 +0000] [78] [INFO] Booting worker with pid: 78
[2022-04-20 20:46:24,953] {manager.py:156} INFO - Launched DagFileProcessorManager with pid: 166
[2022-04-20 20:46:24,962] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:466 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
[2022-04-20 20:46:24,988] {scheduler_job.py:1231} INFO - Resetting orphaned tasks for active dag runs
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:466 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
[2022-04-20 20:46:32,124] {update_checks.py:128} INFO - Checking for new version of Astronomer Certified Airflow, previous check was performed at None
[2022-04-20 20:46:32,441] {update_checks.py:84} INFO - Check finished, next check in 86400.0 seconds
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:466 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
[2022-04-20 20:47:45,296] {scheduler_job.py:354} INFO - 1 tasks up for execution:
<TaskInstance: null_mapped_2.empty manual__2022-04-20T20:47:45.075321+00:00 [scheduled]>
[2022-04-20 20:47:45,297] {scheduler_job.py:419} INFO - DAG null_mapped_2 has 0/16 running and queued tasks
[2022-04-20 20:47:45,297] {scheduler_job.py:505} INFO - Setting the following tasks to queued state:
<TaskInstance: null_mapped_2.empty manual__2022-04-20T20:47:45.075321+00:00 [scheduled]>
[2022-04-20 20:47:45,300] {scheduler_job.py:547} INFO - Sending TaskInstanceKey(dag_id='null_mapped_2', task_id='empty', run_id='manual__2022-04-20T20:47:45.075321+00:00', try_number=1, map_index=-1) to executor with priority 2 and queue default
[2022-04-20 20:47:45,300] {base_executor.py:88} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'null_mapped_2', 'empty', 'manual__2022-04-20T20:47:45.075321+00:00', '--local', '--subdir', 'DAGS_FOLDER/null_mapped_2.py']
[2022-04-20 20:47:45,303] {local_executor.py:79} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'null_mapped_2', 'empty', 'manual__2022-04-20T20:47:45.075321+00:00', '--local', '--subdir', 'DAGS_FOLDER/null_mapped_2.py']
[2022-04-20 20:47:45,340] {dagbag.py:507} INFO - Filling up the DagBag from /usr/local/airflow/dags/null_mapped_2.py
/usr/local/lib/python3.9/site-packages/airflow/configuration.py:466 DeprecationWarning: The sql_alchemy_conn option in [core] has been moved to the sql_alchemy_conn option in [database] - the old setting has been used, but please update your config.
[2022-04-20 20:47:45,432] {task_command.py:369} INFO - Running <TaskInstance: null_mapped_2.empty manual__2022-04-20T20:47:45.075321+00:00 [queued]> on host 7ec8e95b149d
[2022-04-20 20:47:46,796] {dagrun.py:583} ERROR - Deadlock; marking run <DagRun null_mapped_2 @ 2022-04-20 20:47:45.075321+00:00: manual__2022-04-20T20:47:45.075321+00:00, externally triggered: True> failed
[2022-04-20 20:47:46,797] {dagrun.py:607} INFO - DagRun Finished: dag_id=null_mapped_2, execution_date=2022-04-20 20:47:45.075321+00:00, run_id=manual__2022-04-20T20:47:45.075321+00:00, run_start_date=2022-04-20 20:47:45.254176+00:00, run_end_date=2022-04-20 20:47:46.797390+00:00, run_duration=1.543214, state=failed, external_trigger=True, run_type=manual, data_interval_start=2022-04-20 20:47:45.075321+00:00, data_interval_end=2022-04-20 20:47:45.075321+00:00, dag_hash=8476a887126cf6d52573ee41fa81c637
[2022-04-20 20:47:46,801] {dag.py:2894} INFO - Setting next_dagrun for null_mapped_2 to None, run_after=None
[2022-04-20 20:47:46,818] {scheduler_job.py:600} INFO - Executor reports execution of null_mapped_2.empty run_id=manual__2022-04-20T20:47:45.075321+00:00 exited with status success for try_number 1
[2022-04-20 20:47:46,831] {scheduler_job.py:644} INFO - TaskInstance Finished: dag_id=null_mapped_2, task_id=empty, run_id=manual__2022-04-20T20:47:45.075321+00:00, map_index=-1, run_start_date=2022-04-20 20:47:45.491953+00:00, run_end_date=2022-04-20 20:47:45.760654+00:00, run_duration=0.268701, state=success, executor_state=success, try_number=1, max_tries=0, job_id=2, pool=default_pool, queue=default, priority_weight=2, operator=_PythonDecoratedOperator, queued_dttm=2022-04-20 20:47:45.297857+00:00, queued_by_job_id=1, pid=250
```
The dagrun fails, even though none of its tasks get set to failed:
<img width="319" alt="Screen Shot 2022-04-20 at 2 43 15 PM" src="https://user-images.githubusercontent.com/5834582/164320377-663fcc0a-0bc4-4edc-8cc3-91884b84748d.png">
### What you think should happen instead
Mapping over a zero-length list should create no mapped task instances, but the parent task should succeed because none of the zero tasks failed.
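For illustration only, here is the expected semantics sketched in plain Python rather than the Airflow API (the `expand` helper below is a stand-in, not Airflow's implementation):

```python
# Plain-Python analogy, not the Airflow API: "expand" runs the task once
# per element, so an empty input list should yield zero task instances.
def expand(task_fn, items):
    return [task_fn(item) for item in items]

results = expand(lambda x: x * 2, [])
# results == []        : zero mapped instances were created
# all(results) is True : "all zero of them succeeded" is vacuously true
```

The parent task's success here is the vacuous case: with zero instances, there is nothing that can fail.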
### How to reproduce
Run the dag above
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astrocloud dev start`, image contains version 2.3.0.dev20220414
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23131 | https://github.com/apache/airflow/pull/23134 | af45483b95896033ba1937a2037a8e0a6db1bff0 | 03f7d857e940b9c719975e72ded4a89f183b0100 | "2022-04-20T20:54:23Z" | python | "2022-04-21T12:57:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,114 | ["airflow/providers/cncf/kubernetes/sensors/spark_kubernetes.py", "tests/providers/cncf/kubernetes/sensors/test_spark_kubernetes.py"] | SparkKubernetesSensor Cannot Attach Log When There Are Sidecars in the Driver Pod | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==3.0.0
### Apache Airflow version
2.2.5 (latest released)
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When using `SparkKubernetesSensor` with `attach_log=True`, it cannot get the log correctly with the below error:
```
[2022-04-20, 08:42:04 UTC] {spark_kubernetes.py:95} WARNING - Could not read logs for pod spark-pi-0.4753748373914717-1-driver. It may have been disposed.
Make sure timeToLiveSeconds is set on your SparkApplication spec.
underlying exception: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '29ac5abb-452d-4411-a420-8d74155e187d', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Wed, 20 Apr 2022 08:42:04 GMT', 'Content-Length': '259'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"a container name must be specified for pod spark-pi-0.4753748373914717-1-driver, choose one of: [istio-init istio-proxy spark-kubernetes-driver]","reason":"BadRequest","code":400}\n'
```
This is because no container is specified when calling the Kubernetes hook's `get_pod_logs`:
https://github.com/apache/airflow/blob/501a3c3fbefbcc0d6071a00eb101110fc4733e08/airflow/providers/cncf/kubernetes/sensors/spark_kubernetes.py#L85
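A minimal, hypothetical sketch of the fix idea: when the driver pod carries sidecars, the log request must name one container, so the sensor could select the Spark driver container before reading logs. The helper below is illustrative only, not the provider's actual API:

```python
def pick_driver_container(container_names, marker="driver"):
    """Pick the Spark driver container from a pod's container list."""
    matches = [name for name in container_names if marker in name]
    if not matches:
        raise ValueError(f"no container matching {marker!r} in {container_names}")
    return matches[0]

# Container list taken from the error message above:
containers = ["istio-init", "istio-proxy", "spark-kubernetes-driver"]
print(pick_driver_container(containers))  # spark-kubernetes-driver
```

The selected name would then be passed as the container argument when reading the pod's logs, instead of letting the API server reject the ambiguous request with a 400.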
### What you think should happen instead
It should get the log of container `spark-kubernetes-driver`
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23114 | https://github.com/apache/airflow/pull/26560 | 923f1ef30e8f4c0df2845817b8f96373991ad3ce | 5c97e5be484ff572070b0ad320c5936bc028be93 | "2022-04-20T09:58:18Z" | python | "2022-10-10T05:36:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,111 | ["airflow/providers/amazon/aws/hooks/s3.py", "airflow/providers/amazon/aws/operators/s3.py", "airflow/providers/amazon/aws/sensors/s3.py", "airflow/providers/amazon/aws/transfers/local_to_s3.py", "tests/providers/amazon/aws/hooks/test_s3.py", "tests/providers/amazon/aws/operators/test_s3_object.py", "tests/providers/amazon/aws/sensors/test_s3_key.py", "tests/providers/amazon/aws/transfers/test_local_to_s3.py"] | LocalFilesystemToS3Operator dest_key can not be a full s3:// style url | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-mongo==2.3.3
apache-airflow-providers-sqlite==2.1.3
### Apache Airflow version
2.3.0b1 (pre-release)
### Operating System
Arch Linux
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
`LocalFilesystemToS3Operator` does not accept a full s3:// style url as `dest_key`, although it states that it should:
```
:param dest_key: The key of the object to copy to. (templated)
It can be either full s3:// style url or relative path from root level.
When it's specified as a full s3:// url, including dest_bucket results in a TypeError.
```
### What you think should happen instead
`LocalFilesystemToS3Operator` should behave as documented.
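For reference, the Amazon provider's `S3Hook.parse_s3_url` performs this kind of split; the standalone sketch below shows the same idea with only the standard library (the `split_s3_url` name is illustrative, not provider code):

```python
from urllib.parse import urlsplit

def split_s3_url(url):
    """Split 's3://bucket/some/key' into ('bucket', 'some/key')."""
    parts = urlsplit(url)
    if parts.scheme != "s3":
        raise ValueError(f"not an s3:// url: {url}")
    return parts.netloc, parts.path.lstrip("/")

print(split_s3_url("s3://dummy/path/to/object"))  # ('dummy', 'path/to/object')
```

With such a split applied up front, a full `s3://` `dest_key` would resolve to a bucket and key instead of reaching the boto layer as an unparsed string.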
### How to reproduce
A modification of an existing UT:
```
@mock_s3
def test_execute_with_only_key(self):
    conn = boto3.client('s3')
    conn.create_bucket(Bucket=self.dest_bucket)
    operator = LocalFilesystemToS3Operator(
        task_id='s3_to_file_sensor',
        dag=self.dag,
        filename=self.testfile1,
        dest_key=f's3://dummy/{self.dest_key}',
        **self._config,
    )
    operator.execute(None)
    objects_in_dest_bucket = conn.list_objects(Bucket=self.dest_bucket, Prefix=self.dest_key)
    # there should be object found, and there should only be one object found
    assert len(objects_in_dest_bucket['Contents']) == 1
    # the object found should be consistent with dest_key specified earlier
    assert objects_in_dest_bucket['Contents'][0]['Key'] == self.dest_key
```
`FAILED tests/providers/amazon/aws/transfers/test_local_to_s3.py::TestFileToS3Operator::test_execute_with_only_key - TypeError: expected string or bytes-like object`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23111 | https://github.com/apache/airflow/pull/23180 | e2c7847c6bf73685f0576364787fab906397a6fe | 27a80511ec3ffcf036354741bd0bfe18d4b4a471 | "2022-04-20T08:45:07Z" | python | "2022-05-07T09:19:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,107 | ["airflow/dag_processing/processor.py", "airflow/models/taskfail.py", "airflow/models/taskinstance.py", "tests/api/common/test_delete_dag.py", "tests/callbacks/test_callback_requests.py", "tests/jobs/test_scheduler_job.py"] | Mapped KubernetesPodOperator "fails" but UI shows it is as still running | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
This dag has a problem. The `name` kwarg is missing from one of the mapped instances.
```python3
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)
from airflow.configuration import conf

namespace = conf.get("kubernetes", "NAMESPACE")

with DAG(
    dag_id="kpo_mapped",
    start_date=datetime(1970, 1, 1),
    schedule_interval=None,
) as dag:
    KubernetesPodOperator(
        task_id="cowsay_static_named",
        name="cowsay_statc",
        namespace=namespace,
        image="docker.io/rancher/cowsay",
        cmds=["cowsay"],
        arguments=["moo"],
    )

    KubernetesPodOperator.partial(
        task_id="cowsay_mapped",
        # name="cowsay_mapped",  # required field missing
        image="docker.io/rancher/cowsay",
        namespace=namespace,
        cmds=["cowsay"],
    ).expand(arguments=[["mooooove"], ["cow"], ["get out the way"]])

    KubernetesPodOperator.partial(
        task_id="cowsay_mapped_named",
        name="cowsay_mapped",
        namespace=namespace,
        image="docker.io/rancher/cowsay",
        cmds=["cowsay"],
    ).expand(arguments=[["mooooove"], ["cow"], ["get out the way"]])
```
If you omit that field in an unmapped task, you get a dag parse error, which is appropriate. But omitting it from the mapped task gives you this runtime error in the task logs:
```
[2022-04-20, 05:11:02 UTC] {standard_task_runner.py:52} INFO - Started process 60 to run task
[2022-04-20, 05:11:02 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'kpo_mapped', 'cowsay_mapped', 'manual__2022-04-20T05:11:01+00:00', '--job-id', '12', '--raw', '--subdir', 'DAGS_FOLDER/dags/taskmap/kpo_mapped.py', '--cfg-path', '/tmp/tmp_g3sj496', '--map-index', '0', '--error-file', '/tmp/tmp2_313wxj']
[2022-04-20, 05:11:02 UTC] {standard_task_runner.py:80} INFO - Job 12: Subtask cowsay_mapped
[2022-04-20, 05:11:02 UTC] {task_command.py:369} INFO - Running <TaskInstance: kpo_mapped.cowsay_mapped manual__2022-04-20T05:11:01+00:00 map_index=0 [running]> on host airflow-worker-65f9fd9d5b-vpgnk
[2022-04-20, 05:11:02 UTC] {taskinstance.py:1863} WARNING - We expected to get frame set in local storage but it was not. Please report this as an issue with full logs at https://github.com/apache/airflow/issues/new
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1440, in _run_raw_task
    self._execute_task_with_callbacks(context, test_mode)
  File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1544, in _execute_task_with_callbacks
    task_orig = self.render_templates(context=context)
  File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2210, in render_templates
    rendered_task = self.task.render_template_fields(context)
  File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 722, in render_template_fields
    unmapped_task = self.unmap(unmap_kwargs=kwargs)
  File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 508, in unmap
    op = self.operator_class(**unmap_kwargs, _airflow_from_mapped=True)
  File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 390, in apply_defaults
    result = func(self, **kwargs, default_args=default_args)
  File "/usr/local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 259, in __init__
    self.name = self._set_name(name)
  File "/usr/local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 442, in _set_name
    raise AirflowException("`name` is required unless `pod_template_file` or `full_pod_spec` is set")
airflow.exceptions.AirflowException: `name` is required unless `pod_template_file` or `full_pod_spec` is set
```
But rather than failing the task, Airflow just thinks that the task is still running:
<img width="833" alt="Screen Shot 2022-04-19 at 11 13 47 PM" src="https://user-images.githubusercontent.com/5834582/164156155-41986d3a-d171-4943-8443-a0fc3c542988.png">
### What you think should happen instead
Ideally this error would be surfaced when the dag is first parsed. If that's not possible, then it should fail the task completely (i.e. a red square should show up in the grid view).
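As a hypothetical sketch of what a parse-time check could look like, `partial()` would validate required fields eagerly instead of deferring the error to unmap time (all names below are illustrative, not Airflow's API):

```python
def partial_checked(required_fields, **partial_kwargs):
    """Raise at DAG-parse time if a required field is missing from partial()."""
    missing = set(required_fields) - set(partial_kwargs)
    if missing:
        raise TypeError(f"missing required field(s): {sorted(missing)}")
    return partial_kwargs

try:
    partial_checked({"name"}, task_id="cowsay_mapped", image="docker.io/rancher/cowsay")
except TypeError as err:
    print(err)  # missing required field(s): ['name']
```

An eager check like this would surface the missing `name` as an import error in the UI rather than a half-reported runtime failure.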
### How to reproduce
Run the dag above
### Operating System
ubuntu (microk8s)
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes | 4.0.0
### Deployment
Astronomer
### Deployment details
Deployed via the astronomer airflow helm chart, values:
```
airflow:
airflowHome: /usr/local/airflow
defaultAirflowRepository: 172.28.11.191:30500/airflow
defaultAirflowTag: tb11c-inner-operator-expansion
env:
- name: AIRFLOW__CORE__DAGBAG_IMPORT_ERROR_TRACEBACK_DEPTH
value: '99'
executor: CeleryExecutor
gid: 50000
images:
airflow:
pullPolicy: Always
repository: 172.28.11.191:30500/airflow
flower:
pullPolicy: Always
pod_template:
pullPolicy: Always
logs:
persistence:
enabled: true
size: 2Gi
scheduler:
livenessProbe:
timeoutSeconds: 45
triggerer:
livenessProbe:
timeoutSeconds: 45
```
Image base: `quay.io/astronomer/ap-airflow-dev:main`
Airflow version: `2.3.0.dev20220414`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23107 | https://github.com/apache/airflow/pull/23119 | 1e8ac47589967f2a7284faeab0f65b01bfd8202d | 91b82763c5c17e8ab021f2d4f2a5681ea90adf6b | "2022-04-20T05:29:38Z" | python | "2022-04-21T15:08:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,092 | ["airflow/www/static/css/bootstrap-theme.css"] | UI: Transparent border causes dropshadow to render 1px away from Action dropdown menu in Task Instance list | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Airflow:
> Astronomer Certified: v2.2.5.post1 based on Apache Airflow v2.2.5
> Git Version: .release:2.2.5+astro.1+90fc013e6e4139e2d4bfe438ad46c3af1d523668
Due to this CSS in `airflowDefaultTheme.ce329611a683ab0c05fd.css`:
```css
.dropdown-menu {
  background-clip: padding-box;
  background-color: #fff;
  border: 1px solid transparent; /* <-- transparent border */
}
```
the dropdown border and drop shadow render weirdly:
![Screen Shot 2022-04-19 at 9 50 45 AM](https://user-images.githubusercontent.com/597113/164063925-10aaec58-ce6b-417e-a90f-4fa93eee4f9e.png)
Zoomed in - take a close look at the border and how the contents underneath the dropdown bleed through the border, making the dropshadow render 1px away from the dropdown menu:
![Screen Shot 2022-04-19 at 9 51 24 AM](https://user-images.githubusercontent.com/597113/164063995-e2d266ae-2cbf-43fc-9d97-7f90080c5507.png)
### What you think should happen instead
When I remove the aberrant line of CSS above, it cascades to this in `bootstrap.min.css`:
```css
.dropdown-menu {
  ...
  border: 1px solid rgba(0,0,0,.15);
  ...
}
```
which renders the border as gray:
![Screen Shot 2022-04-19 at 9 59 23 AM](https://user-images.githubusercontent.com/597113/164064014-d575d039-aeb1-4a99-ab80-36c8cd6ca39e.png)
So I think we should not use a transparent border, or we should remove the explicit border from the dropdown and let Bootstrap control it.
### How to reproduce
Spin up an instance of Airflow with `astro dev start`, trigger a DAG, inspect the DAG details, and list all task instances of a DAG run. Then click the Actions dropdown menu.
### Operating System
macOS 11.6.4 Big Sur (Intel)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
Astro installed via Homebrew:
> Astro CLI Version: 0.28.1, Git Commit: 980c0d7bd06b818a2cb0e948bb101d0b27e3a90a
> Astro Server Version: 0.28.4-rc9
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23092 | https://github.com/apache/airflow/pull/27789 | 8b1ebdacd8ddbe841a74830f750ed8f5e6f38f0a | d233c12c30f9a7f3da63348f3f028104cb14c76b | "2022-04-19T17:56:36Z" | python | "2022-11-19T23:57:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,083 | ["BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing.py", "dev/breeze/src/airflow_breeze/shell/enter_shell.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output-commands.svg", "images/breeze/output-tests.svg"] | Breeze: Running integration tests in Breeze | We should be able to run integration tests with Breeze - this is extension of `test` unit tests command that should allow to enable --integrations (same as in Shell) and run the tests with only the integration tests selected. | https://github.com/apache/airflow/issues/23083 | https://github.com/apache/airflow/pull/23445 | 83784d9e7b79d2400307454ccafdacddaee16769 | 7ba4e35a9d1b65b4c1a318ba4abdf521f98421a2 | "2022-04-19T14:17:28Z" | python | "2022-05-06T09:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,082 | ["BREEZE.rst", "TESTING.rst", "dev/breeze/src/airflow_breeze/commands/testing.py", "dev/breeze/src/airflow_breeze/shell/enter_shell.py", "dev/breeze/src/airflow_breeze/utils/docker_command_utils.py", "images/breeze/output-commands.svg", "images/breeze/output-tests.svg"] | Breeze: Add running unit tests with Breeze | We should be able to run unit tests automatically from breeze (`test` command in legacy-breeze) | https://github.com/apache/airflow/issues/23082 | https://github.com/apache/airflow/pull/23445 | 83784d9e7b79d2400307454ccafdacddaee16769 | 7ba4e35a9d1b65b4c1a318ba4abdf521f98421a2 | "2022-04-19T14:15:49Z" | python | "2022-05-06T09:03:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,074 | ["airflow/providers/google/cloud/hooks/vertex_ai/endpoint_service.py", "airflow/providers/google/cloud/operators/vertex_ai/endpoint_service.py", "tests/providers/google/cloud/hooks/vertex_ai/test_endpoint_service.py", "tests/providers/google/cloud/operators/test_vertex_ai.py"] | Add `endpoint_id` arg to `vertex_ai.endpoint_service.CreateEndpointOperator` | ### Description
Add the optional argument `endpoint_id` to `google.cloud.operators.vertex_ai.endpoint_service.CreateEndpointOperator` class and `google.cloud.hooks.vertex_ai.endpoint_service.EndpointServiceHook.create_endpoint` method.
### Use case/motivation
`google.cloud.operators.vertex_ai.endpoint_service.CreateEndpointOperator` class and `google.cloud.hooks.vertex_ai.endpoint_service.EndpointServiceHook.create_endpoint` method do not have `endpoint_id` argument. They internally use [`CreateEndpointRequest`](https://github.com/googleapis/python-aiplatform/blob/v1.11.0/google/cloud/aiplatform_v1/types/endpoint_service.py#L43), which accepts `endpoint_id`. Hence, I'd like them to accept `endpoint_id` argument and pass it to [`CreateEndpointRequest`](https://github.com/googleapis/python-aiplatform/blob/v1.11.0/google/cloud/aiplatform_v1/types/endpoint_service.py#L43).
If this is satisfied, we can create Vertex Endpoints with a specific Endpoint ID; without one, an Endpoint will be created with a randomly generated ID.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23074 | https://github.com/apache/airflow/pull/23070 | 6b459995b260cc7023e4720974ef4f59893cd283 | d4a33480550db841657b998c0b4464feffec0ef9 | "2022-04-19T07:09:56Z" | python | "2022-04-25T15:09:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,068 | ["airflow/www/static/js/tree/InstanceTooltip.jsx", "airflow/www/static/js/tree/details/content/dagRun/index.jsx", "airflow/www/static/js/tree/details/content/taskInstance/Details.jsx", "airflow/www/static/js/tree/details/content/taskInstance/MappedInstances.jsx", "airflow/www/utils.py"] | Grid view: "duration" shows 00:00:00 | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
Run [a dag with an expanded TimedeltaSensor and a normal TimedeltaSensor](https://gist.github.com/MatrixManAtYrService/051fdc7164d187ab215ff8087e4db043), and navigate to the corresponding entries in the grid view.
While the dag runs:
- The unmapped task shows its "duration" to be increasing
- The mapped task shows a blank entry for the duration
Once the dag has finished:
- both show `00:00:00` for the duration
### What you think should happen instead
I'm not sure what it should show, probably time spent running? Or maybe queued + running? Whatever it should be, 00:00:00 doesn't seem right if it spent 90 seconds waiting around (e.g. in the "running" state)
Also, if we're going to update duration continuously while the normal task is running, we should do the same for the expanded task.
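For illustration, "time spent running" is just the difference between the instance's start and end timestamps; a standard-library sketch, not Airflow's actual implementation:

```python
from datetime import datetime, timezone

# Hypothetical start/end timestamps for a task instance that ran 90 seconds.
start = datetime(2022, 4, 19, 3, 10, 0, tzinfo=timezone.utc)
end = datetime(2022, 4, 19, 3, 11, 30, tzinfo=timezone.utc)

duration = end - start
print(str(duration))  # 0:01:30, not 00:00:00
```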
### How to reproduce
run a dag with expanded sensors, notice 00:00:00 duration
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astrocloud dev start`
Dockerfile:
```
FROM quay.io/astronomer/ap-airflow-dev:main
```
image at airflow version 6d6ac2b2bcbb0547a488a1a13fea3cb1a69d24e8
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23068 | https://github.com/apache/airflow/pull/23259 | 511ea702d5f732582d018dad79754b54d5e53f9d | 9e2531fa4d9890f002d184121e018e3face5586b | "2022-04-19T03:11:17Z" | python | "2022-04-26T15:42:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,059 | ["airflow/providers/presto/hooks/presto.py", "airflow/providers/trino/hooks/trino.py"] | Presto hook is broken in the latest provider release (2.2.0) | ### Apache Airflow version
2.2.5 (latest released)
### What happened
The latest presto provider release https://pypi.org/project/apache-airflow-providers-presto/ is broken due to:
```
File "/usr/local/lib/python3.8/site-packages/airflow/providers/presto/hooks/presto.py", line 117, in get_conn
http_headers = {"X-Presto-Client-Info": generate_presto_client_info()}
File "/usr/local/lib/python3.8/site-packages/airflow/providers/presto/hooks/presto.py", line 56, in generate_presto_client_info
'try_number': context_var['try_number'],
KeyError: 'try_number'
```
### What you think should happen instead
This is because the latest Airflow release, 2.2.5, does not include this PR:
https://github.com/apache/airflow/pull/22297/
The Presto hook changes were introduced in this PR: https://github.com/apache/airflow/pull/22416
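For illustration, the `KeyError` goes away if the client-info builder reads context variables defensively with `.get`; this is a minimal sketch, not the provider's actual code:

```python
def generate_client_info(context_var):
    # context_var may lack keys on Airflow versions that do not populate
    # them yet; .get with a default avoids the KeyError seen above.
    return {
        "dag_id": context_var.get("dag_id", ""),
        "task_id": context_var.get("task_id", ""),
        "try_number": context_var.get("try_number", ""),
    }

print(generate_client_info({"dag_id": "d", "task_id": "t"}))
# {'dag_id': 'd', 'task_id': 't', 'try_number': ''}
```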
### How to reproduce
_No response_
### Operating System
Mac
### Versions of Apache Airflow Providers
https://pypi.org/project/apache-airflow-providers-presto/
version: 2.2.0
### Deployment
Other
### Deployment details
local
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
cc @levyitay | https://github.com/apache/airflow/issues/23059 | https://github.com/apache/airflow/pull/23061 | b24650c0cc156ceb5ef5791f1647d4d37a529920 | 5164cdbe98ad63754d969b4b300a7a0167565e33 | "2022-04-18T17:23:45Z" | python | "2022-04-19T05:29:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,042 | ["airflow/www/static/css/graph.css", "airflow/www/static/js/graph.js"] | Graph view: Nodes arrows are cut | ### Body
<img width="709" alt="Screen Shot 2022-04-15 at 17 37 37" src="https://user-images.githubusercontent.com/45845474/163584251-f1ea5bc7-e132-41c4-a20c-cc247b81b899.png">
Reproduction example using [example_emr_job_flow_manual_steps](https://github.com/apache/airflow/blob/b3cae77218788671a72411a344aab42a3c58e89c/airflow/providers/amazon/aws/example_dags/example_emr_job_flow_manual_steps.py) in the AWS provider.
As already discussed with @bbovenzi, this issue will be fixed after 2.3.0, as it requires quite a few changes... also, this is not a regression and it's just a "cosmetic" issue in very specific DAGs.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23042 | https://github.com/apache/airflow/pull/23044 | 749e53def43055225a2e5d09596af7821d91b4ac | 028087b5a6e94fd98542d0e681d947979eb1011f | "2022-04-15T14:45:05Z" | python | "2022-05-12T19:47:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,040 | ["airflow/providers/google/cloud/transfers/mssql_to_gcs.py", "airflow/providers/google/cloud/transfers/mysql_to_gcs.py", "airflow/providers/google/cloud/transfers/oracle_to_gcs.py", "airflow/providers/google/cloud/transfers/postgres_to_gcs.py", "airflow/providers/google/cloud/transfers/presto_to_gcs.py", "airflow/providers/google/cloud/transfers/sql_to_gcs.py", "airflow/providers/google/cloud/transfers/trino_to_gcs.py", "tests/providers/google/cloud/transfers/test_postgres_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | PostgresToGCSOperator does not allow nested JSON | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.3.0
### Apache Airflow version
2.1.4
### Operating System
macOS Big Sur version 11.6.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
Postgres JSON column output contains extra `\`:
`{"info": "{\"phones\": [{\"type\": \"mobile\", \"phone\": \"001001\"}, {\"type\": \"fix\", \"phone\": \"002002\"}]}", "name": null}`
While in the previous version the output looks like
`{"info": {"phones": [{"phone": "001001", "type": "mobile"}, {"phone": "002002", "type": "fix"}]}, "name": null}`
The introduced extra `\` will cause a JSON parsing error in the downstream `GCSToBigQueryOperator`.
### What you think should happen instead
The output should NOT contain extra `\`:
`{"info": {"phones": [{"phone": "001001", "type": "mobile"}, {"phone": "002002", "type": "fix"}]}, "name": null}`
It is caused by this new code change in https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/transfers/postgres_to_gcs.py
Commenting out this block fixes it:
> if isinstance(value, dict):
>     return json.dumps(value)
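The extra backslashes are the signature of double JSON encoding: the column value is serialized to a string once, and then again when the whole row is dumped. A standalone demonstration:

```python
import json

row = {"info": {"phones": [{"type": "mobile", "phone": "001001"}]}, "name": None}

# Dict left as-is: nested JSON stays nested.
print(json.dumps(row))
# {"info": {"phones": [{"type": "mobile", "phone": "001001"}]}, "name": null}

# Dict pre-serialized to a string, then dumped again: quotes get escaped.
row["info"] = json.dumps(row["info"])
print(json.dumps(row))
# {"info": "{\"phones\": [{\"type\": \"mobile\", \"phone\": \"001001\"}]}", "name": null}
```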
### How to reproduce
Try to output a Postgres table with a JSON column --- you may use the `info` above as an example.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23040 | https://github.com/apache/airflow/pull/23063 | ca3fbbbe14203774a16ddd23e82cfe652b22eb4a | 766726f2e3a282fcd2662f5dc6e9926dc38a6540 | "2022-04-15T14:19:53Z" | python | "2022-05-08T22:06:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,033 | ["airflow/providers_manager.py", "tests/core/test_providers_manager.py"] | providers_manager | Exception when importing 'apache-airflow-providers-google' package ModuleNotFoundError: No module named 'airflow.providers.mysql' | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
```shell
airflow users create -r Admin -u admin -e admin@example.com -f admin -l user -p admin
```
give
```log
[2022-04-15 07:08:30,801] {manager.py:807} WARNING - No user yet created, use flask fab command to do it.
[2022-04-15 07:08:31,024] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-04-15 07:08:31,049] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-04-15 07:08:31,149] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-04-15 07:08:31,160] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-04-15 07:08:32,250] {providers_manager.py:237} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.cloud_sql.CloudSQLHook' from 'apache-airflow-providers-google' package
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/airflow/providers_manager.py", line 215, in _sanity_check
imported_class = import_string(class_name)
File "/usr/local/lib/python3.8/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/cloud_sql.py", line 52, in <module>
from airflow.providers.mysql.hooks.mysql import MySqlHook
ModuleNotFoundError: No module named 'airflow.providers.mysql'
[2022-04-15 07:29:12,007] {manager.py:213} INFO - Added user admin
User "admin" created with role "Admin"
```
### What you think should happen instead
It does not log this warning with
```
apache-airflow==2.2.5
apache-airflow-providers-google==6.7.0
```
```log
[2022-04-15 07:44:45,962] {manager.py:779} WARNING - No user yet created, use flask fab command to do it.
[2022-04-15 07:44:46,304] {manager.py:512} WARNING - Refused to delete permission view, assoc with role exists DAG Runs.can_create Admin
[2022-04-15 07:44:48,310] {manager.py:214} INFO - Added user admin
User "admin" created with role "Admin"
```
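For illustration, one way to keep a missing optional dependency from surfacing as a WARNING is to treat the `ImportError` as an expected case and log it at debug level; this is a generic sketch, not Airflow's actual `ProvidersManager` code:

```python
import importlib
import logging

log = logging.getLogger(__name__)

def import_optional(module_path):
    """Import a module backing an optional feature; return None if unavailable."""
    try:
        return importlib.import_module(module_path)
    except ImportError as err:
        log.debug("Optional module %s not installed: %s", module_path, err)
        return None

print(import_optional("json") is not None)                  # True
print(import_optional("airflow_no_such_provider") is None)  # True
```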
### How to reproduce
_No response_
### Operating System
ubuntu
### Versions of Apache Airflow Providers
requirements.txt :
```
apache-airflow-providers-google==6.8.0
```
pip install -r requirements.txt --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0b1/constraints-3.8.txt"
### Deployment
Other Docker-based deployment
### Deployment details
pip install apache-airflow[postgres]==2.3.0b1 --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0b1/constraints-3.8.txt"
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23033 | https://github.com/apache/airflow/pull/23037 | 4fa718e4db2daeb89085ea20e8b3ce0c895e415c | 8dedd2ac13a6cdc0c363446985f492e0f702f639 | "2022-04-15T07:31:53Z" | python | "2022-04-20T21:52:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,028 | ["airflow/cli/commands/task_command.py"] | `airflow tasks states-for-dag-run` has no `map_index` column | ### Apache Airflow version
2.3.0b1 (pre-release)
### What happened
I ran:
```
$ airflow tasks states-for-dag-run taskmap_xcom_pull 'manual__2022-04-14T13:27:04.958420+00:00'
dag_id | execution_date | task_id | state | start_date | end_date
==================+==================================+===========+=========+==================================+=================================
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | foo | success | 2022-04-14T13:27:05.343134+00:00 | 2022-04-14T13:27:05.598641+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | bar | success | 2022-04-14T13:27:06.256684+00:00 | 2022-04-14T13:27:06.462664+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.480364+00:00 | 2022-04-14T13:27:07.713226+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.512084+00:00 | 2022-04-14T13:27:07.768716+00:00
taskmap_xcom_pull | 2022-04-14T13:27:04.958420+00:00 | identity | success | 2022-04-14T13:27:07.546097+00:00 | 2022-04-14T13:27:07.782719+00:00
```
...targeting a dagrun for which `identity` had three expanded tasks. All three showed up, but the output didn't show me enough to know which one was which.
### What you think should happen instead
There should be a `map_index` column so that I know which one is which.
### How to reproduce
Run a dag with expanded tasks, then try to view their states via the cli
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23028 | https://github.com/apache/airflow/pull/23030 | 10c9cb5318fd8a9e41a7b4338e5052c8feece7ae | b24650c0cc156ceb5ef5791f1647d4d37a529920 | "2022-04-14T23:35:08Z" | python | "2022-04-19T02:23:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,018 | ["airflow/jobs/backfill_job.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "airflow/models/taskmixin.py", "airflow/models/xcom_arg.py", "tests/models/test_taskinstance.py"] | A task's returned object should not be checked for mappability if the dag doesn't use it in an expansion. | ### Apache Airflow version
main (development)
### What happened
Here's a dag:
```python3
with DAG(...) as dag:
    @dag.task
    def foo():
        return "foo"

    @dag.task
    def identity(thing):
        return thing

    foo() >> identity.expand(thing=[1, 2, 3])
```
`foo` fails with these task logs:
```
[2022-04-14, 14:15:26 UTC] {python.py:173} INFO - Done. Returned value was: foo
[2022-04-14, 14:15:26 UTC] {taskinstance.py:1837} WARNING - We expected to get frame set in local storage but it was not. Please report this as an issue with full logs at https://github.com/apache/airflow/issues/new
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1417, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1564, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1634, in _execute_task
self._record_task_map_for_downstreams(task_orig, result, session=session)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2314, in _record_task_map_for_downstreams
raise UnmappableXComTypePushed(value)
airflow.exceptions.UnmappableXComTypePushed: unmappable return type 'str'
```
### What you think should happen instead
Airflow shouldn't bother checking `foo`'s return type for mappability because its return value is never used in an expansion.
### How to reproduce
Run the dag, notice the failure
### Operating System
debian (docker)
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
using image with ref: e5dd6fdcfd2f53ed90e29070711c121de447b404
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23018 | https://github.com/apache/airflow/pull/23053 | b8bbfd4b318108b4fdadc78cd46fd1735da243ae | 197cff3194e855b9207c3c0da8ae093a0d5dda55 | "2022-04-14T14:28:26Z" | python | "2022-04-19T18:02:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 23,005 | ["BREEZE.rst"] | Breeze: Add uninstallation instructions for Breeze | We should have information how to uninstall Breeze:
* in the cheatsheet
* in BREEZE.rst | https://github.com/apache/airflow/issues/23005 | https://github.com/apache/airflow/pull/23045 | 2597ea47944488f3756a84bd917fa780ff5594da | 2722c42659100474b21aae3504ee4cbe24f72ab4 | "2022-04-14T09:02:52Z" | python | "2022-04-25T12:33:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,969 | ["airflow/www/views.py", "tests/www/views/test_views.py"] | Invalid execution_date crashes pages accepting the query parameter | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Invalid execution_date in query parameter will crash durations page since pendulum parsing exception is not handled in several views
### What you think should happen instead
On `ParseError` the page should resort to some default value like in grid page or show an error flash message instead of crash.
### How to reproduce
1. Visit a dag duration page with invalid date in URL : http://localhost:8080/dags/raise_exception/duration?days=30&root=&num_runs=25&base_date=2022-04-12+16%3A29%3A21%2B05%3A30er
2. Stacktrace
```python
Python version: 3.10.4
Airflow version: 2.3.0.dev0
Node: laptop
-------------------------------------------------------------------------------
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parsing/__init__.py", line 131, in _parse
dt = parser.parse(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/dateutil/parser/_parser.py", line 1368, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/dateutil/parser/_parser.py", line 643, in parse
raise ParserError("Unknown string format: %s", timestr)
dateutil.parser._parser.ParserError: Unknown string format: 2022-04-12 16:29:21+05:30er
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/karthikeyan/stuff/python/airflow/airflow/www/auth.py", line 40, in decorated
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/airflow/www/decorators.py", line 80, in wrapper
return f(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/airflow/www/views.py", line 2870, in duration
base_date = timezone.parse(base_date)
File "/home/karthikeyan/stuff/python/airflow/airflow/utils/timezone.py", line 205, in parse
return pendulum.parse(string, tz=timezone or TIMEZONE, strict=False) # type: ignore
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parser.py", line 29, in parse
return _parse(text, **options)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parser.py", line 45, in _parse
parsed = base_parse(text, **options)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parsing/__init__.py", line 74, in parse
return _normalize(_parse(text, **_options), **_options)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/pendulum/parsing/__init__.py", line 135, in _parse
raise ParserError("Invalid date string: {}".format(text))
pendulum.parsing.exceptions.ParserError: Invalid date string: 2022-04-12 16:29:21+05:30er
```
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22969 | https://github.com/apache/airflow/pull/23161 | 6f82fc70ec91b493924249f062306330ee929728 | 9e25bc211f6f7bba1aff133d21fe3865dabda53d | "2022-04-13T07:20:19Z" | python | "2022-05-16T19:15:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,947 | ["airflow/hooks/dbapi.py"] | closing connection chunks in DbApiHook.get_pandas_df | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Hi all,
Please be patient with me, it's my first Bugreport in git at all :)
**Affected function:** DbApiHook.get_pandas_df
**Short description**: If I use DbApiHook.get_pandas_df with parameter "chunksize" the connection is lost
**Error description**
I tried using the DbApiHook.get_pandas_df function instead of pandas.read_sql. Without the parameter "chunksize" both functions work the same. But as soon as I add the parameter chunksize to get_pandas_df, I lose the connection in the first iteration. This happens both when querying Oracle and Mysql (Mariadb) databases.
During my research I found a comment on a closed issue that describes the same -> [#8468](https://github.com/apache/airflow/issues/8468)
My Airflow version: 2.2.5
I think it has something to do with the "with closing" context manager, because when I remove it, the chunksize argument works.
```
def get_pandas_df(self, sql, parameters=None, **kwargs):
    """
    Executes the sql and returns a pandas dataframe

    :param sql: the sql statement to be executed (str) or a list of
        sql statements to execute
    :param parameters: The parameters to render the SQL query with.
    :param kwargs: (optional) passed into pandas.io.sql.read_sql method
    """
    try:
        from pandas.io import sql as psql
    except ImportError:
        raise Exception("pandas library not installed, run: pip install 'apache-airflow[pandas]'.")

    # Not working
    with closing(self.get_conn()) as conn:
        return psql.read_sql(sql, con=conn, params=parameters, **kwargs)
    # would work
    # return psql.read_sql(sql, con=conn, params=parameters, **kwargs)
```
### What you think should happen instead
It should give me a chunk of DataFrame
### How to reproduce
**not working**
```
src_hook = OracleHook(oracle_conn_id='oracle_source_conn_id')
query = "select * from example_table"
for chunk in src_hook.get_pandas_df(query,chunksize=2):
print(chunk.head())
```
**works**
```
for chunk in src_hook.get_pandas_df(query):
print(chunk.head())
```
**works**
```
for chunk in pandas.read_sql(query,src_hook.get_conn(),chunksize=2):
print(chunk.head())
```
### Operating System
MacOS Monterey
### Versions of Apache Airflow Providers
apache-airflow 2.2.5
apache-airflow-providers-ftp 2.1.2
apache-airflow-providers-http 2.1.2
apache-airflow-providers-imap 2.2.3
apache-airflow-providers-microsoft-mssql 2.1.3
apache-airflow-providers-mongo 2.3.3
apache-airflow-providers-mysql 2.2.3
apache-airflow-providers-oracle 2.2.3
apache-airflow-providers-salesforce 3.4.3
apache-airflow-providers-sftp 2.5.2
apache-airflow-providers-sqlite 2.1.3
apache-airflow-providers-ssh 2.4.3
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22947 | https://github.com/apache/airflow/pull/23452 | 41e94b475e06f63db39b0943c9d9a7476367083c | ab1f637e463011a34d950c306583400b7a2fceb3 | "2022-04-12T11:41:24Z" | python | "2022-05-31T10:39:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,942 | ["airflow/models/taskinstance.py", "tests/models/test_trigger.py"] | Deferrable operator trigger event payload is not persisted in db and not passed to completion method | ### Apache Airflow version
2.2.5 (latest released)
### What happened
When the trigger is fired, the event payload is added to next_kwargs under the 'event' key.
This gets persisted in the db when next_kwargs are not provided by the operator, but when they are present, the in-place modification of the existing dict means the payload is not persisted in the db.
### What you think should happen instead
It should persist the trigger event payload in the db even when next_kwargs are provided.
### How to reproduce
_No response_
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22942 | https://github.com/apache/airflow/pull/22944 | a801ea3927b8bf3ca154fea3774ebf2d90e74e50 | bab740c0a49b828401a8baf04eb297d083605ae8 | "2022-04-12T10:00:48Z" | python | "2022-04-13T18:26:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,931 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | XCom is cleared when a task resumes from deferral. | ### Apache Airflow version
2.2.5 (latest released)
### What happened
A task's XCom value is cleared when a task is rescheduled after being deferred.
### What you think should happen instead
XCom should not be cleared in this case, as it is still the same task run.
### How to reproduce
```
from datetime import datetime, timedelta
from airflow import DAG
from airflow.models import BaseOperator
from airflow.triggers.temporal import TimeDeltaTrigger
class XComPushDeferOperator(BaseOperator):
    def execute(self, context):
        context["ti"].xcom_push("test", "test_value")
        self.defer(
            trigger=TimeDeltaTrigger(delta=timedelta(seconds=10)),
            method_name="next",
        )

    def next(self, context, event=None):
        pass


with DAG(
    "xcom_clear", schedule_interval=None, start_date=datetime(2022, 4, 11),
) as dag:
    XComPushDeferOperator(task_id="xcom_push")
```
### Operating System
macOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22931 | https://github.com/apache/airflow/pull/22932 | 4291de218e0738f32f516afe0f9d6adce7f3220d | 8b687ec82a7047fc35410f5c5bb0726de434e749 | "2022-04-12T00:34:38Z" | python | "2022-04-12T06:12:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,912 | ["airflow/www/static/css/main.css"] | Text wrap for task group tooltips | ### Description
Improve the readability of task group tooltips by wrapping the text after a certain number of characters.
### Use case/motivation
When tooltips have a lot of words in them, and your computer monitor is fairly large, Airflow will display the task group tooltip on one very long line. This can be difficult to read. It would be nice if after, say, 60 characters, additional tooltip text would be displayed on a new line.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22912 | https://github.com/apache/airflow/pull/22978 | 0cd8833df74f4b0498026c4103bab130e1fc1068 | 2f051e303fd433e64619f931eab2180db44bba23 | "2022-04-11T15:46:34Z" | python | "2022-04-13T13:57:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,897 | ["airflow/www/views.py", "tests/www/views/test_views_log.py"] | Invalid JSON metadata in get_logs_with_metadata causes server error. | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Invalid JSON metadata in get_logs_with_metadata causes server error. The `json.loads` exception is not handled like validation in other endpoints.
http://127.0.0.1:8080/get_logs_with_metadata?execution_date=2015-11-16T14:34:15+00:00&metadata=invalid
### What you think should happen instead
A proper error message can be returned
### How to reproduce
Accessing below endpoint with invalid metadata payload
http://127.0.0.1:8080/get_logs_with_metadata?execution_date=2015-11-16T14:34:15+00:00&metadata=invalid
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22897 | https://github.com/apache/airflow/pull/22898 | 8af77127f1aa332c6e976c14c8b98b28c8a4cd26 | a3dd8473e4c5bbea214ebc8d5545b75281166428 | "2022-04-11T08:03:51Z" | python | "2022-04-11T10:48:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,878 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | ECS operator throws an error on attempting to reattach to ECS tasks | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 3.2.0
### Apache Airflow version
2.2.5 (latest released)
### Operating System
Linux / ECS
### Deployment
Other Docker-based deployment
### Deployment details
We are running Docker on Open Shift 4
### What happened
There seems to be a bug in the code for ECS operator, during the "reattach" flow. We are running into some instability issues that cause our Airflow scheduler to restart. When the scheduler restarts while a task is running using ECS, the ECS operator will try to reattach to the ECS task once the Airflow scheduler restarts. The code works fine finding the ECS task and attaching to it, but then when it tries to fetch the logs, it throws the following error:
```
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1334, in _run_raw_task
    self._execute_task_with_callbacks(context)
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1460, in _execute_task_with_callbacks
    result = self._execute_task(context, self.task)
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1516, in _execute_task
    result = execute_callable(context=context)
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 70, in wrapper
    return func(*args, session=session, **kwargs)
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 295, in execute
    self.task_log_fetcher = self._get_task_log_fetcher()
  File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/ecs.py", line 417, in _get_task_log_fetcher
    log_stream_name = f"{self.awslogs_stream_prefix}/{self.ecs_task_id}"
AttributeError: 'EcsOperator' object has no attribute 'ecs_task_id'
```
At this point, the operator will fail and the task will be marked for retries and eventually gets marked as failed, while on the ECS side, the ECS task is running fine. The manual way to fix this would be to wait for the ECS task to complete, then mark the task as successful and trigger downstream tasks. This is not very practical, since the task can take a long time (in our case the task can take hours)
### What you think should happen instead
I expect that the ECS operator should be able to reattach and pull the logs as normal.
### How to reproduce
Configure a task that would run using the ECS operator, and make sure it takes a very long time. Start the task, and once the logs starts flowing to Airflow, restart the Airflow scheduler. Wait for the scheduler to restart and check that upon retry, the task would be able to attach and fetch the logs.
### Anything else
When restarting Airflow, it tries to kill the task at hand. In our case, we didn't give the permission to the AWS role to kill the running ECS tasks, and therefore the ECS tasks keep running during the restart of Airflow. Others might not have this setup, and therefore they won't run into the "reattach" flow, and they won't encounter the issue reported here. This is not a good option for us, since our tasks can take hours to complete, and we don't want to interfere with their execution.
We also need to improve the stability of the Open Shift infrastructure where Airflow is running, so that the scheduler doesn't restart so often, but that is a different story.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22878 | https://github.com/apache/airflow/pull/23370 | 3f6d5eef427f3ea33d0cd342143983f54226bf05 | d6141c6594da86653b15d67eaa99511e8fe37a26 | "2022-04-09T17:25:06Z" | python | "2022-05-01T10:58:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,868 | ["Dockerfile", "scripts/docker/entrypoint_prod.sh"] | There is no handler for BACKEND=sqs in entrypoint_prod.sh function wait_for_connection | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Actually this is looking more like a bug; see the error below. I think I am configuring it correctly.
Might be a configuration issue see https://github.com/apache/airflow/issues/22863
From a docker container running on an EC2 I'm trying to use AWS sqs as my celery broker.
I'm using ec2 IAM credentials so I set
broker_url = sqs://
According to https://docs.celeryq.dev/en/latest/getting-started/backends-and-brokers/sqs.html
If you are using IAM roles on instances, you can set the BROKER_URL to: sqs:// and kombu will attempt to retrieve access tokens from the instance metadata.
The error I get is:
airflow-worker-1_1 |
airflow-worker-1_1 | ### BACKEND=sqs
airflow-worker-1_1 | DB_HOST=None
airflow-worker-1_1 | DB_PORT=
airflow-worker-1_1 | ....................
airflow-worker-1_1 | ERROR! Maximum number of retries (20) reached.
airflow-worker-1_1 |
airflow-worker-1_1 | Last check result:
airflow-worker-1_1 | $ run_nc 'None' ''
airflow-worker-1_1 | Traceback (most recent call last):
airflow-worker-1_1 | File "<string>", line 1, in <module>
airflow-worker-1_1 | socket.gaierror: [Errno -5] No address associated with hostname
airflow-worker-1_1 | Can't parse as an IP address
I traced the source of the error to entrypoint_prod.sh.
From function wait_for_connection:

```bash
echo BACKEND="${BACKEND:=${detected_backend}}"
readonly BACKEND

if [[ -z "${detected_port=}" ]]; then
    if [[ ${BACKEND} == "postgres"* ]]; then
        detected_port=5432
    elif [[ ${BACKEND} == "mysql"* ]]; then
        detected_port=3306
    elif [[ ${BACKEND} == "mssql"* ]]; then
        detected_port=1433
    elif [[ ${BACKEND} == "redis"* ]]; then
        detected_port=6379
    elif [[ ${BACKEND} == "amqp"* ]]; then
        detected_port=5672
    fi
fi
```
There is no handler for ### BACKEND=sqs
Verified the ### BACKEND=sqs is coming from the broker_url = sqs://
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu container
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22868 | https://github.com/apache/airflow/pull/22883 | ee449fec6ca855aff3c4830c6758a9d5e5db1a2d | 0ae0f7e2448e05917e51e29b854ad60463378fbe | "2022-04-08T21:01:12Z" | python | "2022-04-10T07:50:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,843 | ["airflow/models/dag.py", "airflow/models/param.py", "tests/models/test_dag.py"] | When passing the 'False' value to the parameters of a decorated dag function I get this traceback | ### Apache Airflow version
2.2.3
### What happened
When passing the `False` value to a decorated dag function I get this traceback below. Also the default value is not shown when clicking 'trigger dag w/ config'.
```
[2022-04-07, 20:08:57 UTC] {taskinstance.py:1259} INFO - Executing <Task(_PythonDecoratedOperator): value_consumer> on 2022-04-07 20:08:56.914410+00:00
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:52} INFO - Started process 2170 to run task
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'check_ui_config', 'value_consumer', 'manual__2022-04-07T20:08:56.914410+00:00', '--job-id', '24', '--raw', '--subdir', 'DAGS_FOLDER/check_ui_config.py', '--cfg-path', '/tmp/tmpww9euksv', '--error-file', '/tmp/tmp7kjdfks5']
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:77} INFO - Job 24: Subtask value_consumer
[2022-04-07, 20:08:57 UTC] {logging_mixin.py:109} INFO - Running <TaskInstance: check_ui_config.value_consumer manual__2022-04-07T20:08:56.914410+00:00 [running]> on host a643f8828615
[2022-04-07, 20:08:57 UTC] {taskinstance.py:1700} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1418, in _execute_task_with_callbacks
self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1992, in render_templates
self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1061, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1074, in _do_render_template_fields
rendered_content = self.render_template(content, context, jinja_env, seen_oids)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in render_template
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in <genexpr>
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1116, in render_template
return content.resolve(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/param.py", line 226, in resolve
raise AirflowException(f'No value could be resolved for parameter {self._name}')
airflow.exceptions.AirflowException: No value could be resolved for parameter test
[2022-04-07, 20:08:57 UTC] {taskinstance.py:1267} INFO - Marking task as FAILED. dag_id=check_ui_config, task_id=value_consumer, execution_date=20220407T200856, start_date=20220407T200857, end_date=20220407T200857
[2022-04-07, 20:08:57 UTC] {standard_task_runner.py:89} ERROR - Failed to execute job 24 for task value_consumer
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/task/task_runner/standard_task_runner.py", line 85, in _start_by_fork
args.func(args, dag=self.dag)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 298, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 107, in _run_task_by_selected_method
_run_raw_task(args, ti)
File "/usr/local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 180, in _run_raw_task
ti._run_raw_task(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 70, in wrapper
return func(*args, session=session, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1329, in _run_raw_task
self._execute_task_with_callbacks(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1418, in _execute_task_with_callbacks
self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1992, in render_templates
self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1061, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1074, in _do_render_template_fields
rendered_content = self.render_template(content, context, jinja_env, seen_oids)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in render_template
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1125, in <genexpr>
return tuple(self.render_template(element, context, jinja_env) for element in content)
File "/usr/local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1116, in render_template
return content.resolve(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/param.py", line 226, in resolve
raise AirflowException(f'No value could be resolved for parameter {self._name}')
airflow.exceptions.AirflowException: No value could be resolved for parameter test
```
### What you think should happen instead
I think airflow should be able to handle the False value when passing it as a dag param.
### How to reproduce
```
from airflow.decorators import dag, task
from airflow.models.param import Param
from datetime import datetime, timedelta
@task
def value_consumer(val):
    print(val)


@dag(
    start_date=datetime(2021, 1, 1),
    schedule_interval=timedelta(days=365, hours=6)
)
def check_ui_config(test):
    value_consumer(test)


the_dag = check_ui_config(False)
```
### Operating System
Docker (debian:buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
Astro cli with this image:
quay.io/astronomer/ap-airflow-dev:2.2.3-2
### Anything else
![Screenshot from 2022-04-07 14-13-43](https://user-images.githubusercontent.com/102494105/162288264-bb6c6ca6-977f-4ff7-a0cc-9616c0ce8ac8.png)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22843 | https://github.com/apache/airflow/pull/22964 | e09b4f144d1edefad50a58ebef56bd40df4eb39c | a0f7e61497d547b82edc1154d39535d79aaedff3 | "2022-04-07T20:14:46Z" | python | "2022-04-13T07:48:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,833 | ["airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Allow mapped Task as input to another mapped task | This dag
```python
with DAG(dag_id="simple_mapping", start_date=pendulum.DateTime(2022, 4, 6), catchup=True) as d3:

    @task(email='a@b.com')
    def add_one(x: int):
        return x + 1

    two_three_four = add_one.expand(x=[1, 2, 3])
    three_four_five = add_one.expand(x=two_three_four)
```
Fails with this error:
```
File "/home/ash/code/airflow/airflow/airflow/models/taskinstance.py", line 2239, in _record_task_map_for_downstreams
raise UnmappableXComTypePushed(value)
airflow.exceptions.UnmappableXComTypePushed: unmappable return type 'int'
``` | https://github.com/apache/airflow/issues/22833 | https://github.com/apache/airflow/pull/22849 | 1a8b8f521c887716d7e0c987a58e8e5c3b62bdaa | 8af77127f1aa332c6e976c14c8b98b28c8a4cd26 | "2022-04-07T14:21:14Z" | python | "2022-04-11T09:29:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,810 | ["airflow/providers/jira/sensors/jira.py"] | JiraTicketSensor duplicates TaskId | ### Apache Airflow Provider(s)
jira
### Versions of Apache Airflow Providers
apache-airflow-providers-jira==2.0.1
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
I've been trying to use the Jira Operator to create a Ticket from Airflow and use the JiraTicketSensor to check if the ticket was resolved. Creating the task works fine, but I can't get the Sensor to work.
If I don't provide the `method_name` I get an error that it is required; if I provide it as `None`, I get an error saying the task id has already been added to the DAG.
```text
Broken DAG: [/usr/local/airflow/dags/jira_ticket_sensor.py] Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 553, in __init__
    task_group.add(self)
  File "/usr/local/lib/python3.7/site-packages/airflow/utils/task_group.py", line 175, in add
    raise DuplicateTaskIdFound(f"Task id '{key}' has already been added to the DAG")
airflow.exceptions.DuplicateTaskIdFound: Task id 'jira_sensor' has already been added to the DAG
```
### What you think should happen instead
_No response_
### How to reproduce
use this dag
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.jira.sensors.jira import JiraTicketSensor

with DAG(
    dag_id='jira_ticket_sensor',
    schedule_interval=None,
    start_date=datetime(2021, 1, 1),
    catchup=False
) as dag:
    jira_sensor = JiraTicketSensor(
        task_id='jira_sensor',
        jira_conn_id='jira_default',
        ticket_id='TEST-1',
        field='status',
        expected_value='Completed',
        method_name='issue',
        poke_interval=600
    )
```
### Anything else
This error occurs every time
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22810 | https://github.com/apache/airflow/pull/23046 | e82a2fdf841dd571f3b8f456c4d054cf3a94fc03 | bf10545d8358bcdb9ca5dacba101482296251cab | "2022-04-07T10:43:06Z" | python | "2022-04-25T11:16:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,790 | ["chart/templates/secrets/metadata-connection-secret.yaml", "tests/charts/test_basic_helm_chart.py"] | Helm deployment fails when postgresql.nameOverride is used | ### Apache Airflow version
2.2.5 (latest released)
### What happened
Helm installation fails with the following config:
```
postgresql:
  enabled: true
  nameOverride: overridename
```
The problem is manifested in the `-airflow-metadata` secret where the connection string will be generated without respect to the `nameOverride`
With the example config the generated string should be:
`postgresql://postgres:postgres@myrelease-overridename:5432/postgres?sslmode=disable`
but the actual string generated is:
`postgresql://postgres:postgres@myrelease-overridename.namespace:5432/postgres?sslmode=disable`
### What you think should happen instead
Installation should succeed with correctly generated metadata connection string
### How to reproduce
To reproduce just set the following in values.yaml and attempt `helm install`
```
postgresql:
  enabled: true
  nameOverride: overridename
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
using helm with kind cluster
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22790 | https://github.com/apache/airflow/pull/29214 | 338a633fc9faab54e72c408e8a47eeadb3ad55f5 | 56175e4afae00bf7ccea4116ecc09d987a6213c3 | "2022-04-06T16:28:38Z" | python | "2023-02-02T17:00:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,782 | ["airflow/sensors/external_task.py", "tests/sensors/test_external_task_sensor.py"] | ExternalTaskSensor does not properly expand templates in external_task_id(s) | ### Apache Airflow version
2.2.4
### What happened
When using `ExternalTaskSensor`, if a Jinja template is used in `external_task_id` or `external_task_ids`, that template will not be expanded, causing the sensor to always fail.
### What you think should happen instead
Ideally the template should be expanded. If we can't make that work for whatever reason, we should remove `external_task_id` from the list of valid template fields.
### How to reproduce
```python
#!/usr/bin/env python3
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.sensors.external_task import ExternalTaskSensor

with DAG('dag1', start_date=datetime(2022, 4, 1), schedule_interval='@daily', is_paused_upon_creation=False) as dag1:
    DummyOperator(task_id='task_123')

with DAG('dag2', start_date=datetime(2022, 4, 1), schedule_interval='@daily', is_paused_upon_creation=False) as dag2:
    ExternalTaskSensor(
        task_id='not_using_params',
        external_dag_id='dag1',
        external_task_id='task_123',
        check_existence=True,
    )
    ExternalTaskSensor(
        task_id='using_external_task_id',
        external_dag_id='{{ params.dag_name }}',
        external_task_id='{{ params.task_name }}',
        check_existence=True,
        params={
            'dag_name': 'dag1',
            'task_name': 'task_123',
        },
    )
    ExternalTaskSensor(
        task_id='using_external_task_ids',
        external_dag_id='{{ params.dag_name }}',
        external_task_ids=['{{ params.task_name }}'],
        check_existence=True,
        params={
            'dag_name': 'dag1',
            'task_name': 'task_123',
        },
    )
```
Here are some relevant snippets from the task logs:
'not_using_params':
```
[2022-04-06, 04:25:40 CDT] {external_task.py:169} INFO - Poking for tasks ['task_123'] in dag dag1 on 2022-04-01T00:00:00+00:00 ...
```
'using_external_task_id':
```
[2022-04-06, 04:25:41 CDT] {external_task.py:169} INFO - Poking for tasks ['{{ params.task_name }}'] in dag dag1 on 2022-04-01T00:00:00+00:00 ...
```
'using_external_task_ids':
```
[2022-04-06, 04:25:43 CDT] {external_task.py:169} INFO - Poking for tasks ['{{ params.task_name }}'] in dag dag1 on 2022-04-01T00:00:00+00:00 ...
```
As we can see, the templated versions correctly expand the `dag_name` parameter, but not `task_name`.
### Operating System
CentOS 7.4
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
Maybe a separate issue, but worth noting: `ExternalTaskSensor` does not even list `external_task_ids` as a valid template field, though it seems like it should. In the above example, the "Rendered Template" works correctly for `'using_external_task_id'`, but not for `'using_external_task_ids'`.
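As a sketch of the fix suggested above — declaring `external_task_ids` as a template field — using stand-in classes rather than Airflow's real sensor (the tuples here are illustrative only, not the sensor's actual declaration):

```python
class BaseSensor:
    # Stand-in for ExternalTaskSensor's current declaration.
    template_fields = ("external_dag_id", "external_task_id")


class PatchedSensor(BaseSensor):
    # Proposed: also declare external_task_ids so it gets rendered.
    template_fields = BaseSensor.template_fields + ("external_task_ids",)
```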
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22782 | https://github.com/apache/airflow/pull/22809 | aa317d92ea4dd38fbc27501048ee78b1c0c0aeb5 | 7331eefc393b8f1fae6f3cf061cf17eb5eaa3fc8 | "2022-04-06T14:29:05Z" | python | "2022-04-13T09:44:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,738 | ["airflow/models/taskinstance.py", "airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | Webserver doesn't mask rendered fields for pending tasks | ### Apache Airflow version
2.2.5 (latest released)
### What happened
When triggering a new dagrun, the webserver will not mask secrets in the rendered fields for that dagrun's tasks which haven't started yet.
Tasks which have completed or are currently running are not affected by this.
### What you think should happen instead
The webserver should mask all secrets for tasks, whether they have started or not.
<img width="628" alt="Screenshot 2022-04-04 at 15 36 29" src="https://user-images.githubusercontent.com/7921017/161628806-c2c579e2-faea-40cc-835c-ac6802d15dc1.png">
.
### How to reproduce
Create a variable `my_secret` and run this DAG
```python
from datetime import timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.sensors.time_delta import TimeDeltaSensor
from airflow.utils.dates import days_ago

with DAG(
    "secrets",
    start_date=days_ago(1),
    schedule_interval=None,
) as dag:
    wait = TimeDeltaSensor(
        task_id="wait",
        delta=timedelta(minutes=1),
    )
    task = wait >> BashOperator(
        task_id="secret_task",
        bash_command="echo '{{ var.value.my_secret }}'",
    )
```
While the first task `wait` is running, displaying rendered fields for the second task `secret_task` will show the unmasked secret variable.
<img width="1221" alt="Screenshot 2022-04-04 at 15 33 43" src="https://user-images.githubusercontent.com/7921017/161628734-b7b13190-a3fe-4898-8fa9-ff7537245c1c.png">
### Operating System
Debian (Astronomer Airflow Docker image)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!3.2.0
apache-airflow-providers-cncf-kubernetes==1!3.0.0
apache-airflow-providers-elasticsearch==1!3.0.2
apache-airflow-providers-ftp==1!2.1.2
apache-airflow-providers-google==1!6.7.0
apache-airflow-providers-http==1!2.1.2
apache-airflow-providers-imap==1!2.2.3
apache-airflow-providers-microsoft-azure==1!3.7.2
apache-airflow-providers-mysql==1!2.2.3
apache-airflow-providers-postgres==1!4.1.0
apache-airflow-providers-redis==1!2.0.4
apache-airflow-providers-slack==1!4.2.3
apache-airflow-providers-sqlite==1!2.1.3
apache-airflow-providers-ssh==1!2.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
We have seen this issue also in Airflow 2.2.3.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22738 | https://github.com/apache/airflow/pull/23807 | 10a0d8e7085f018b7328533030de76b48de747e2 | 2dc806367c3dc27df5db4b955d151e789fbc78b0 | "2022-04-04T20:47:44Z" | python | "2022-05-21T15:36:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,731 | ["airflow/models/dag.py", "airflow/models/taskmixin.py", "airflow/serialization/serialized_objects.py", "airflow/utils/task_group.py", "airflow/www/views.py", "tests/models/test_dag.py", "tests/serialization/test_dag_serialization.py", "tests/utils/test_task_group.py"] | Fix the order that tasks are displayed in Grid view | The order that tasks are displayed in Grid view do not correlate with the order that the tasks would be expected to execute in the DAG. See `example_bash_operator` below:
<img width="335" alt="Screen Shot 2022-04-04 at 11 47 31 AM" src="https://user-images.githubusercontent.com/4600967/161582603-dffea697-68d9-4145-909d-3240f3a65ad2.png">
<img width="426" alt="Screen Shot 2022-04-04 at 11 47 36 AM" src="https://user-images.githubusercontent.com/4600967/161582604-d59885cc-2c71-4a7d-b332-e439115d8c4c.png">
We should update the [task_group_to_tree](https://github.com/apache/airflow/blob/main/airflow/www/views.py#L232) function in views.py to better approximate the order that tasks would be run.
| https://github.com/apache/airflow/issues/22731 | https://github.com/apache/airflow/pull/22741 | e9df0f2de95bb69490d9530d5a27d7b05b71c32e | 34154803ac73d62d3e969e480405df3073032622 | "2022-04-04T15:49:06Z" | python | "2022-04-05T12:59:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,730 | ["airflow/providers/dbt/cloud/hooks/dbt.py", "tests/providers/dbt/cloud/hooks/test_dbt_cloud.py", "tests/providers/dbt/cloud/sensors/test_dbt_cloud.py"] | dbt Cloud Provider only works for Multi-tenant instances | ### Apache Airflow Provider(s)
dbt-cloud
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.5 (latest released)
### Operating System
any
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
Some dbt Cloud deployments require setting a different base URL (could be X.getdbt.com or cloud.X.getdbt.com).
Relevant line: https://github.com/apache/airflow/blame/436c17c655494eff5724df98d1a231ffa2142253/airflow/providers/dbt/cloud/hooks/dbt.py#L154
`self.base_url = "https://cloud.getdbt.com/api/v2/accounts/"`
A runtime parameter that defaults to `cloud.getdbt.com`.
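A sketch of what such a parameter could look like — the `tenant` name and its default are assumptions here, not the provider's actual API:

```python
DEFAULT_TENANT = "cloud.getdbt.com"  # assumed default tenant domain


def build_base_url(tenant: str = DEFAULT_TENANT) -> str:
    """Build the accounts API base URL for a given dbt Cloud tenant."""
    return f"https://{tenant}/api/v2/accounts/"
```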
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22730 | https://github.com/apache/airflow/pull/24264 | 98b4e48fbc1262f1381e7a4ca6cce31d96e6f5e9 | 7498fba826ec477b02a40a2e23e1c685f148e20f | "2022-04-04T15:43:54Z" | python | "2022-06-06T23:32:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,705 | ["airflow/providers/google/cloud/transfers/local_to_gcs.py", "tests/providers/google/cloud/transfers/test_local_to_gcs.py"] | LocalFileSystemToGCSOperator give false positive while copying file from src to dest, even when src has no file | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==6.4.0
### Apache Airflow version
2.1.4
### Operating System
Debian GNU/Linux 10 (buster)
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When you run `LocalFilesystemToGCSOperator` with the params for src and dest, the operator reports a false positive when there are no files present under the specified src directory. I expected it to fail, stating that the specified directory doesn't contain any files.
```
[2022-03-15 14:26:15,475] {taskinstance.py:1107} INFO - Executing <Task(LocalFilesystemToGCSOperator): upload_files_to_GCS> on 2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:15,484] {standard_task_runner.py:52} INFO - Started process 709 to run task
[2022-03-15 14:26:15,492] {standard_task_runner.py:76} INFO - Running: ['***', 'tasks', 'run', 'dag', 'upload_files_to_GCS', '2022-03-15T14:25:59.554459+00:00', '--job-id', '1562', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/dag.py', '--cfg-path', '/tmp/tmp_e9t7pl9', '--error-file', '/tmp/tmpyij6m4er']
[2022-03-15 14:26:15,493] {standard_task_runner.py:77} INFO - Job 1562: Subtask upload_files_to_GCS
[2022-03-15 14:26:15,590] {logging_mixin.py:104} INFO - Running <TaskInstance: dag.upload_files_to_GCS 2022-03-15T14:25:59.554459+00:00 [running]> on host 653e566fd372
[2022-03-15 14:26:15,752] {taskinstance.py:1300} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=jet2
AIRFLOW_CTX_DAG_ID=dag
AIRFLOW_CTX_TASK_ID=upload_files_to_GCS
AIRFLOW_CTX_EXECUTION_DATE=2022-03-15T14:25:59.554459+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2022-03-15T14:25:59.554459+00:00
[2022-03-15 14:26:19,357] {taskinstance.py:1204} INFO - Marking task as SUCCESS. gag, task_id=upload_files_to_GCS, execution_date=20220315T142559, start_date=20220315T142615, end_date=20220315T142619
[2022-03-15 14:26:19,422] {taskinstance.py:1265} INFO - 1 downstream tasks scheduled from follow-on schedule check
[2022-03-15 14:26:19,458] {local_task_job.py:149} INFO - Task exited with return code 0
```
### What you think should happen instead
The operator should at least log that no files were copied, rather than just marking the task successful.
### How to reproduce
- create a DAG with `LocalFilesystemToGCSOperator`
- specify an empty directory as src and a GCP bucket as bucket_name; the dest param can be blank
- run the DAG
### Anything else
No
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22705 | https://github.com/apache/airflow/pull/22772 | 921ccedf7f90f15e8d18c27a77b29d232be3c8cb | 838cf401b9a424ad0fbccd5fb8d3040a8f4a7f44 | "2022-04-02T11:30:11Z" | python | "2022-04-06T19:22:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,693 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator failure email alert with actual error log from command executed | ### Description
When a command executed using KubernetesPodOperator fails, the alert email only says:
`Exception: Pod Launching failed: Pod pod_name_xyz returned a failure`
along with the other parameters supplied to the operator, but doesn't contain the actual error message thrown by the command.
~~I am thinking similar to how xcom works with KubernetesPodOperator, if the command could write the error log in sidecar container in /airflow/log/error.log and airflow picks that up, then it could be included in the alert email (probably at the top). It can use same sidecar as for xcom (if that is easier to maintain) but write in different folder.~~
Kubernetes has a way to send a termination message:
https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/
We just need to pull that from the container status and include it at the top of the failure message.
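A sketch of pulling that message out of a pod status, written against the dict shape of the V1Pod schema (plain dicts so it is library-free; this is not the operator's actual code):

```python
def get_termination_message(pod_status):
    """Return the first container termination message found, if any."""
    for cs in pod_status.get("container_statuses") or []:
        terminated = (cs.get("state") or {}).get("terminated") or {}
        message = terminated.get("message")
        if message:
            return message
    return None
```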
### Use case/motivation
Similar to how the email alert for most other operators includes the key error message right there, without having to log in to Airflow to see the logs, I am expecting similar functionality from KubernetesPodOperator too.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22693 | https://github.com/apache/airflow/pull/22871 | ddb5d9b4a2b4e6605f66f82a6bec30393f096c05 | d81703c5778e13470fcd267578697158776b8318 | "2022-04-01T17:07:52Z" | python | "2022-04-14T00:16:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,689 | ["docs/apache-airflow-providers-apache-hdfs/index.rst"] | HDFS provider causes TypeError: __init__() got an unexpected keyword argument 'encoding' | ### Discussed in https://github.com/apache/airflow/discussions/22301
<div type='discussions-op-text'>
<sup>Originally posted by **frankie1211** March 16, 2022</sup>
I build the custom container image, below is my Dockerfile.
```dockerfile
FROM apache/airflow:2.2.4-python3.9
USER root
RUN apt-get update \
&& apt-get install -y gcc g++ vim libkrb5-dev build-essential libsasl2-dev \
&& apt-get autoremove -yqq --purge \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
USER airflow
RUN pip install --upgrade pip
RUN pip install apache-airflow-providers-apache-spark --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.9.txt
RUN pip install apache-airflow-providers-apache-hdfs --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.4/constraints-3.9.txt"
```
But i got the error when i run the container
```
airflow-init_1 | The container is run as root user. For security, consider using a regular user account.
airflow-init_1 | ....................
airflow-init_1 | ERROR! Maximum number of retries (20) reached.
airflow-init_1 |
airflow-init_1 | Last check result:
airflow-init_1 | $ airflow db check
airflow-init_1 | Traceback (most recent call last):
airflow-init_1 | File "/home/airflow/.local/bin/airflow", line 5, in <module>
airflow-init_1 | from airflow.__main__ import main
airflow-init_1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 28, in <module>
airflow-init_1 | from airflow.cli import cli_parser
airflow-init_1 | File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 621, in <module>
airflow-init_1 | type=argparse.FileType('w', encoding='UTF-8'),
airflow-init_1 | TypeError: __init__() got an unexpected keyword argument 'encoding'
airflow-init_1 |
airflow_airflow-init_1 exited with code 1
```
</div> | https://github.com/apache/airflow/issues/22689 | https://github.com/apache/airflow/pull/29614 | 79c07e3fc5d580aea271ff3f0887291ae9e4473f | 0a4184e34c1d83ad25c61adc23b838e994fc43f1 | "2022-04-01T14:05:22Z" | python | "2023-02-19T20:37:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,675 | ["airflow/providers/google/cloud/transfers/gcs_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_gcs.py"] | GCSToGCSOperator cannot copy a single file/folder without copying other files/folders with that prefix | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.2.4 (latest released)
### Operating System
MacOS 12.2.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
I have a file "hourse.jpeg", a file "hourse.jpeg.copy", and a folder "hourse.jpeg.folder" in the source bucket.
I use the following code to try to copy only "hourse.jpeg" to another bucket.
```python
gcs_to_gcs_op = GCSToGCSOperator(
    task_id="gcs_to_gcs",
    source_bucket=my_source_bucket,
    source_object="hourse.jpeg",
    destination_bucket=my_destination_bucket
)
```
The result is that the two files and the folder mentioned above are all copied.
From the source code it seems there is no way to do what I want.
### What you think should happen instead
Only the file specified should be copied; that means we should treat `source_object` as an exact match instead of a prefix.
To get the current prefix behavior, the user can/should use a wildcard:
`source_object="hourse.jpeg*"`
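The proposed exact-match-unless-wildcard semantics can be illustrated with `fnmatch` — this only sketches the distinction, it is not the provider's implementation:

```python
from fnmatch import fnmatch


def matches(source_object, blob_name):
    """Exact match unless the user opts into wildcards with '*'."""
    if "*" in source_object:
        return fnmatch(blob_name, source_object)
    return blob_name == source_object


blobs = ["hourse.jpeg", "hourse.jpeg.copy", "hourse.jpeg.folder/"]
```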
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22675 | https://github.com/apache/airflow/pull/24039 | 5e6997ed45be0972bf5ea7dc06e4e1cef73b735a | ec84ffe71cfa8246155b9b4cb10bf2167e75adcf | "2022-04-01T06:25:57Z" | python | "2022-06-06T12:17:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,665 | ["airflow/models/mappedoperator.py"] | Superfluous TypeError when passing not-iterables to `expand()` | ### Apache Airflow version
main (development)
### What happened
Here's a problematic dag. `False` is invalid here.
```python
from airflow.models import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta

with DAG(
    dag_id="singleton_expanded",
    schedule_interval=timedelta(days=365),
    start_date=datetime(2001, 1, 1),
) as dag:
    # has problem
    PythonOperator.partial(
        task_id="foo",
        python_callable=lambda x: "hi" if x else "bye",
    ).expand(op_args=False)
When I check for errors like `python dags/the_dag.py` I get the following error:
```
Traceback (most recent call last):
  File "/Users/matt/2022/03/30/dags/the_dag.py", line 13, in <module>
    PythonOperator.partial(
  File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 187, in expand
    validate_mapping_kwargs(self.operator_class, "expand", mapped_kwargs)
  File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 116, in validate_mapping_kwargs
    raise ValueError(error)
ValueError: PythonOperator.expand() got an unexpected type 'bool' for keyword argument op_args
Exception ignored in: <function OperatorPartial.__del__ at 0x10c63b1f0>
Traceback (most recent call last):
  File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 182, in __del__
    warnings.warn(f"{self!r} was never mapped!")
  File "/usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/warnings.py", line 109, in _showwarnmsg
    sw(msg.message, msg.category, msg.filename, msg.lineno,
  File "/Users/matt/src/airflow/airflow/settings.py", line 115, in custom_show_warning
    from rich.markup import escape
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 982, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 925, in _find_spec
  File "<frozen importlib._bootstrap_external>", line 1414, in find_spec
  File "<frozen importlib._bootstrap_external>", line 1380, in _get_spec
TypeError: 'NoneType' object is not iterable
```
### What you think should happen instead
I'm not sure what's up with that type error, the ValueError is what I needed to see. So I expected this:
```
Traceback (most recent call last):
  File "/Users/matt/2022/03/30/dags/the_dag.py", line 13, in <module>
    PythonOperator.partial(
  File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 187, in expand
    validate_mapping_kwargs(self.operator_class, "expand", mapped_kwargs)
  File "/Users/matt/src/airflow/airflow/models/mappedoperator.py", line 116, in validate_mapping_kwargs
    raise ValueError(error)
ValueError: PythonOperator.expand() got an unexpected type 'bool' for keyword argument op_args
```
### How to reproduce
_No response_
### Operating System
Mac OS
### Versions of Apache Airflow Providers
n/a
### Deployment
Virtualenv installation
### Deployment details
- cloned main at 327eab3e2
- created fresh venv and used pip to install
- `airflow info`
- `airflow db init`
- add the dag
- `python dags/the_dag.py`
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22665 | https://github.com/apache/airflow/pull/22678 | 9583c1cab65d28146e73aab0993304886c724bf3 | 17cf6367469c059c82bb7fa4289645682ef22dda | "2022-03-31T19:16:46Z" | python | "2022-04-01T10:14:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,657 | ["chart/templates/flower/flower-ingress.yaml", "chart/templates/webserver/webserver-ingress.yaml"] | Wrong apiVersion Detected During Ingress Creation | ### Official Helm Chart version
1.5.0 (latest released)
### Apache Airflow version
2.2.4 (latest released)
### Kubernetes Version
microk8s 1.23/stable
### Helm Chart configuration
```
executor: KubernetesExecutor

ingress:
  enabled: true
  ## airflow webserver ingress configs
  web:
    annotations:
      kubernetes.io/ingress.class: public
    hosts:
      - name: "example.com"
        path: "/airflow"

## Disabled due to using KubernetesExecutor as recommended in the documentation
flower:
  enabled: false

## Disabled due to using KubernetesExecutor as recommended in the documentation
redis:
  enabled: false
```
### Docker Image customisations
No customization required to recreate, the default image has the same behavior.
### What happened
Installation output is below. As shown, the install fails because the webserver Ingress template uses a `semverCompare` to check whether the Kubernetes version is at least 1.19 and, if it's not, falls back to the `networking.k8s.io/v1beta1` API version. The microk8s install exceeds this version, so I would expect the webserver Ingress to use `networking.k8s.io/v1` instead of the beta version.
Airflow installation
```
$: helm install airflow apache-airflow/airflow --namespace airflow --values ./custom-values.yaml
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
```
microk8s installation
```
$: kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.5-2+c812603a312d2b", GitCommit:"c812603a312d2b0c59687a1be1ae17c0878104cc", GitTreeState:"clean", BuildDate:"2022-03-17T16:14:08Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.5-2+c812603a312d2b", GitCommit:"c812603a312d2b0c59687a1be1ae17c0878104cc", GitTreeState:"clean", BuildDate:"2022-03-17T16:11:06Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
```
### What you think should happen instead
The webserver Ingress chart should detect that the Kubernetes version is greater than 1.19 and use the API version `networking.k8s.io/v1`.
### How to reproduce
On Ubuntu 18.04, run:
1. ```sudo snap install microk8s --classic```
2. ```microk8s status --wait-ready```
3. ```microk8s enable dns ha-cluster helm3 ingress metrics-server storage```
4. ```microk8s helm3 repo add apache-airflow https://airflow.apache.org```
5. ```microk8s kubectl create namespace airflow```
6. ```touch ./custom-values.yaml```
7. ```vi ./custom-values.yaml``` and insert the values.yaml contents from above
8. ```microk8s helm3 install airflow apache-airflow/airflow --namespace airflow --values ./custom-values.yaml```
### Anything else
This problem can be reproduced consistently.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22657 | https://github.com/apache/airflow/pull/28461 | e377e869da9f0e42ac1e0a615347cf7cd6565d54 | 5c94ef0a77358dbee8ad8735a132b42d78843df7 | "2022-03-31T16:19:33Z" | python | "2022-12-19T15:03:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 22,647 | ["airflow/utils/sqlalchemy.py"] | SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True | ### Apache Airflow version
2.2.4 (latest released)
### What happened
Error
```
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/xcom.py:437: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
return query.delete()
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py:2214: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
for result in query.with_entities(XCom.task_id, XCom.value)
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:126: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
session.merge(self)
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:162: SAWarning: Coercing Subquery object into a select() for use in IN(); please pass a select() construct explicitly
tuple_(cls.dag_id, cls.task_id, cls.execution_date).notin_(subq1),
[2022-03-31, 11:47:06 UTC] {warnings.py:110} WARNING - /home/ec2-user/.local/lib/python3.7/site-packages/airflow/models/renderedtifields.py:163: SAWarning: TypeDecorator UtcDateTime(timezone=True) will not produce a cache key because the ``cache_ok`` attribute is not set to True. This can have significant performance implications including some performance degradations in comparison to prior SQLAlchemy versions. Set this attribute to True if this type object's state is safe to use in a cache key, or False to disable this warning. (Background on this error at: https://sqlalche.me/e/14/cprf)
).delete(synchronize_session=False)
```
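For context, SQLAlchemy 1.4 caches compiled statements, and a custom `TypeDecorator` must explicitly opt in by setting the `cache_ok` class attribute. The sketch below is a minimal illustrative version of a timezone-aware decorator with caching enabled — it is not Airflow's actual `UtcDateTime` implementation, just a demonstration of the attribute the warning asks for:

```python
from sqlalchemy import DateTime
from sqlalchemy.types import TypeDecorator


class UtcDateTime(TypeDecorator):
    """Minimal timezone-aware DateTime decorator (illustrative only)."""

    impl = DateTime(timezone=True)

    # SQLAlchemy 1.4+ will not include a TypeDecorator in its statement
    # cache key unless the type declares that its state is cache-safe.
    # Setting cache_ok = True silences the SAWarning and restores caching.
    cache_ok = True
```

The separate `Coercing Subquery object into a select()` warning in the same logs is the companion 1.4 deprecation: it is typically resolved by passing an explicit `select()` construct to `notin_()` rather than a bare `Subquery`.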
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora" VERSION_ID="2" PRETTY_NAME="Amazon Linux 2" ANSI_COLOR="0;33" CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2" HOME_URL="https://amazonlinux.com/"
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.0.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sqlite==2.1.0
### Deployment
Other
### Deployment details
Pip package
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/22647 | https://github.com/apache/airflow/pull/24499 | cc6a44bdc396a305fd53c7236427c578e9d4d0b7 | d9694733cafd9a3d637eb37d5154f0e1e92aadd4 | "2022-03-31T12:23:17Z" | python | "2022-07-05T12:50:20Z" |