[
{
"msg_contents": "Hi,\n\nWhile investigating a bug report [1] I wanted to find all the pieces\nof code that form PqMsg_DataRow messages and couldn't easily do it.\nThis is because one authors prefer writing:\n\npq_beginmessage(buf, 'D');\n\n.. while others:\n\npq_beginmessage(buf, PqMsg_DataRow);\n\nThe proposed patchset fixes this.\n\n- Patch 1 replaces all the char's with PqMsg's\n- Patch 2 makes PqMsg an enum. This ensures that the problem will not\nappear again in the future and also gives us a bit more type-safety.\n- Patch 3 rearranges the order of the functions in pqformat.{c,h} a\nbit to make the code easier to read.\n\n[1]: https://www.postgresql.org/message-id/flat/1df84daa-7d0d-e8cc-4762-85523e45e5e7%40mailbox.org\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 16 Jul 2024 16:09:44 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "Hi,\n\n> The proposed patchset fixes this.\n\nIn v1 I mistakenly named the enum PgMsg while it should have been\nPqMsg. Here is the corrected patchset.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 16 Jul 2024 17:08:52 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 04:09:44PM +0300, Aleksander Alekseev wrote:\n> - Patch 1 replaces all the char's with PqMsg's\n\nThanks. I'll look into committing this one in the near future.\n\n> - Patch 2 makes PqMsg an enum. This ensures that the problem will not\n> appear again in the future and also gives us a bit more type-safety.\n\nThis was briefly brought up in the discussion that ultimately led to\nprotocol.h [0]. I don't have a particularly strong opinion on the matter,\nbut I will note that protocol.h was intended to be easily usable in\nthird-party code, and changing it from characters to an enum from v17 to\nv18 might cause some extra code churn.\n\n[0] https://postgr.es/m/20230807091044.jjgsl2rgheazaung%40alvherre.pgsql\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 16 Jul 2024 10:29:32 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> This was briefly brought up in the discussion that ultimately led to\n> protocol.h [0]. I don't have a particularly strong opinion on the matter,\n> but I will note that protocol.h was intended to be easily usable in\n> third-party code, and changing it from characters to an enum from v17 to\n> v18 might cause some extra code churn.\n\nWe could avoid that issue by back-patching into 17; I don't think\nit's quite too late for that, and the header is new as of 17.\n\nHowever, I'm generally -1 on the proposal independently of that,\nbecause I think it is the wrong thing from an ABI standpoint.\nThe message type codes in the protocol are chars, full stop.\nIf we declare the data type as an enum, we have basically zero\ncontrol over how wide the compiler will choose to make that ---\nalthough it's pretty likely that it *won't* choose char width.\nSo you could not represent the message layout accurately with\nan enum. Perhaps nobody is doing that, but it seems to me like\na foot-gun of roughly the same caliber as not using an enum.\nAlso, going in this direction would require adding casts between\nchar and enum in assorted places, which might be error-prone or\nwarning-inducing.\n\nSo on the whole, \"leave it alone\" seems like the right answer.\n\n(This applies only to the s/char/enum/ proposal; I've not read\nthe patchset further than that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2024 12:48:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "Hi,\n\n> > This was briefly brought up in the discussion that ultimately led to\n> > protocol.h [0]. I don't have a particularly strong opinion on the matter,\n> > but I will note that protocol.h was intended to be easily usable in\n> > third-party code, and changing it from characters to an enum from v17 to\n> > v18 might cause some extra code churn.\n>\n> We could avoid that issue by back-patching into 17; I don't think\n> it's quite too late for that, and the header is new as of 17.\n>\n> However, I'm generally -1 on the proposal independently of that,\n>\n> [...]\n>\n> (This applies only to the s/char/enum/ proposal; I've not read\n> the patchset further than that.)\n\nThat's fair enough. Also it's not clear how much type safety enums\ngive us exactly. E.g. after applying 0002 pq_putmessage() a.k.a.\nPQcommMethods->putmessage() silently casts its first argument from\n`enum PqMsg` to `char` (shrug).\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 16 Jul 2024 20:21:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "I took a closer look at 0001.\n\ndiff --git a/src/include/libpq/protocol.h b/src/include/libpq/protocol.h\nindex 4b8d440365..8c0f095edf 100644\n--- a/src/include/libpq/protocol.h\n+++ b/src/include/libpq/protocol.h\n@@ -47,6 +47,7 @@\n #define PqMsg_EmptyQueryResponse\t'I'\n #define PqMsg_BackendKeyData\t\t'K'\n #define PqMsg_NoticeResponse\t\t'N'\n+#define PqMsg_Progress 'P'\n #define PqMsg_AuthenticationRequest 'R'\n #define PqMsg_ParameterStatus\t\t'S'\n #define PqMsg_RowDescription\t\t'T'\n\nAs discussed elsewhere [0], we can add the leader/worker protocol\ncharacters to protocol.h, but they should probably go in a separate\nsection. I'd recommend breaking that part out to a separate patch, too.\n\n[0] https://postgr.es/m/20230829161555.GB2136737%40nathanxps13\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 16 Jul 2024 12:34:12 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "Hi,\n\n> As discussed elsewhere [0], we can add the leader/worker protocol\n> characters to protocol.h, but they should probably go in a separate\n> section. I'd recommend breaking that part out to a separate patch, too.\n\nOK, here is the updated patchset. This time I chose not to include this patch:\n\n> - Patch 3 rearranges the order of the functions in pqformat.{c,h} a\n> bit to make the code easier to read.\n\n... since arguably there is not much value in it. Please let me know\nif you think it's actually needed.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 16 Jul 2024 21:14:35 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 09:14:35PM +0300, Aleksander Alekseev wrote:\n>> As discussed elsewhere [0], we can add the leader/worker protocol\n>> characters to protocol.h, but they should probably go in a separate\n>> section. I'd recommend breaking that part out to a separate patch, too.\n> \n> OK, here is the updated patchset. This time I chose not to include this patch:\n\nThanks. The only thing that stands out to me is the name of the parallel\nleader/worker protocol message. In the original thread for protocol\ncharacters, some early versions of the patch called it a \"parallel\nprogress\" message, but this new one just calls it PqMsg_Progress. I guess\nPqMsg_ParallelProgress might be a tad more descriptive and less likely to\ncause naming collisions with new frontend/backend messages, but I'm not\ntremendously worried about either of those things. Thoughts?\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 16 Jul 2024 14:48:34 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "Hi,\n\n> Thanks. The only thing that stands out to me is the name of the parallel\n> leader/worker protocol message. In the original thread for protocol\n> characters, some early versions of the patch called it a \"parallel\n> progress\" message, but this new one just calls it PqMsg_Progress. I guess\n> PqMsg_ParallelProgress might be a tad more descriptive and less likely to\n> cause naming collisions with new frontend/backend messages, but I'm not\n> tremendously worried about either of those things. Thoughts?\n\nPersonally I'm fine with either option.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 16 Jul 2024 22:58:37 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 10:58:37PM +0300, Aleksander Alekseev wrote:\n>> Thanks. The only thing that stands out to me is the name of the parallel\n>> leader/worker protocol message. In the original thread for protocol\n>> characters, some early versions of the patch called it a \"parallel\n>> progress\" message, but this new one just calls it PqMsg_Progress. I guess\n>> PqMsg_ParallelProgress might be a tad more descriptive and less likely to\n>> cause naming collisions with new frontend/backend messages, but I'm not\n>> tremendously worried about either of those things. Thoughts?\n> \n> Personally I'm fine with either option.\n\nAlright. Well, I guess I'll flip a coin tomorrow unless someone else\nchimes in with an opinion.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 16 Jul 2024 16:38:06 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 04:38:06PM -0500, Nathan Bossart wrote:\n> Alright. Well, I guess I'll flip a coin tomorrow unless someone else\n> chimes in with an opinion.\n\nCommitted and back-patched to v17. I left it as PqMsg_Progress.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 17 Jul 2024 10:59:39 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor pqformat.{c,h} and protocol.h"
}
]
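A self-contained sketch of the ABI point Tom Lane raises in the thread above: the protocol message type codes are single chars on the wire, but a C enum holding the same values is normally wider, so an enum member cannot describe the message layout exactly. The enum name PqMsgEnum below is a hypothetical stand-in for the enum proposed in patch 0002, not its actual definition; the macro and message code do match protocol.h.

/*
 * Standalone demonstration (not PostgreSQL source).  On most compilers
 * this prints "sizeof(char) = 1, sizeof(PqMsgEnum) = 4".
 */
#include <stdio.h>

#define PqMsg_DataRow 'D'          /* macro style used by protocol.h */

typedef enum PqMsgEnum             /* hypothetical enum variant */
{
    PqMsgEnum_DataRow = 'D',
    PqMsgEnum_ErrorResponse = 'E'
} PqMsgEnum;

int
main(void)
{
    char        c = PqMsg_DataRow;
    PqMsgEnum   e = PqMsgEnum_DataRow;

    printf("sizeof(char) = %zu, sizeof(PqMsgEnum) = %zu\n",
           sizeof(c), sizeof(e));
    return 0;
}

Whether the compiler picks four bytes (typical) or something narrower is implementation-defined, which is exactly why the enum cannot be relied on to represent the wire format.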
[
{
"msg_contents": "Hello,\n\nOne of our customer is facing an issue regarding a prepared statement \nand a gin index.\nAn important gap between execution times can be shown when using execute \nor plain literals.\n\nHere is the test that shown this issue :\n\nInit table :\n\ncreate table tmp_tk_test_index\n(\n sync_id bigint,\n line_id varchar(50),\n chk_upgrade_index smallint default 0\n);\n\ncreate index idx_tmp_tk_test_index_1\n on tmp_tk_test_index using gin (sync_id, line_id);\n\ninsert into tmp_tk_test_index SELECT tb_seq as sync_id, case when tb_seq = 950000 then 'the-test-value-fa529a621a15' else gen_random_uuid()::text end as line_id FROM generate_series(1,1000000) as tb_seq;\n\nPrepare query :\n\nprepare stmt(bigint, text, int) as delete from tmp_tk_test_index where sync_id <= $1 and line_id = $2 and chk_upgrade_index = $3;\n\nAnd then execute it :\n\npostgres=# begin;\nBEGIN\npostgres=*# explain (analyse) execute stmt(950000, 'the-test-value-fa529a621a15', 0);\n QUERY PLAN\n\n \n\n-----------------------------------------------------------------------------------------------------------------------\n\n Delete on tmp_tk_test_index (cost=212.52..2662.59 rows=0 width=0) (actual time=60.766..60.767 rows=0 loops=1)\n\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=212.52..2662.59 rows=4 width=6) (actual time=60.756..60.758 rows=1\n\nloops=1)\n\n Recheck Cond: ((sync_id <= '950000'::bigint) AND ((line_id)::text = 'the-test-value-fa529a621a15'::text))\n\n Filter: (chk_upgrade_index = 0)\n\n Heap Blocks: exact=1\n\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..212.52 rows=810 width=0) (actual time=60.745..60\n\n.745 rows=1 loops=1)\n\n Index Cond: ((sync_id <= '950000'::bigint) AND ((line_id)::text = 'the-test-value-fa529a621a15'::text))\n\n Planning Time: 6.765 ms\n\n Execution Time: 61.160 ms\n\n(9 rows)\npostgres=*# rollback ;\nROLLBACK\n\nIt takes 61.160ms to be executed. However, the \"same\" query without a \nprepared statement is far faster :\n\npostgres=# begin;\nBEGIN\npostgres=*# explain analyze delete from tmp_tk_test_index where sync_id <= 950000 and line_id = 'the-test-value-fa529a621a15' and chk_upgrade_index = 0;\n\n QUERY PLAN\n\n \n\n-----------------------------------------------------------------------------------------------------------------------\n\n----------------\n\n Delete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.084..0.085 rows=0 loops=1)\n\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.042..0.043 rows=1 loops\n\n=1)\n\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n\n Filter: ((sync_id <= 950000) AND (chk_upgrade_index = 0))\n\n Heap Blocks: exact=1\n\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..21.36 rows=1 width=0) (actual time=0.027..0.028\n\nrows=1 loops=1)\n\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n\n Planning Time: 0.325 ms\n\n Execution Time: 0.148 ms\n\n(9 rows)\n\npostgres=*# rollback ;\n\nROLLBACK\n\nOnly 0.148ms to be executed...\n\nWhat can cause this quite important gap ? (0.148ms vs 61.160ms).\nIn the explain plan, I can see that the/Index Cond /is different : in \nthe slowest case (sync_id <= '950000'::bigint) is present.\nWhy isn't it present in the second case ?\n\nTwo more tests :\n\nThe prepared statement is tweaked with sync_id+0. 
The query is fast :\n\nprepare stmt3(bigint, text, int) as delete from tmp_tk_test_index where sync_id+0 <= $1 and line_id = $2 and chk_upgrade_index = $3;\nexplain (analyse) execute stmt3(950000, 'the-test-value-fa529a621a15', 0);\n\nDelete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.057..0.058 rows=0 loops=1)\n\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.040..0.042 rows=1 loops\n\n=1)\n\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n\n Filter: ((chk_upgrade_index = 0) AND ((sync_id + 0) <= '950000'::bigint))\n\n Heap Blocks: exact=1\n\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..21.36 rows=1 width=0) (actual time=0.020..0.020\n\nrows=1 loops=1)\n\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n\n Planning Time: 0.355 ms\n\n Execution Time: 0.101 ms\n\n(9 rows)\n\nAnd one with an index but without sync_id inside it. Prepared statement \nisn't changed.\n\ncreate index idx_tmp_tk_test_index_2 on tmp_tk_test_index using gin (line_id);\n\n Delete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.052..0.053 rows=0 loops=1)\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.036..0.038 rows=1 loops\n=1)\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Filter: ((sync_id <= '950000'::bigint) AND (chk_upgrade_index = 0))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_tmp_tk_test_index_2 (cost=0.00..21.36 rows=1 width=0) (actual time=0.020..0.021\nrows=1 loops=1)\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Planning Time: 0.439 ms\n Execution Time: 0.098 ms\n(9 rows\n\nQuite fast as well...\n\nHave you got an idea on the initial issue ? Why when using a prepared \nstatement and a gin index the execution time \"explode\" ?\nSomething to do with the planner ? 
optimizer ?\n\n(We executed the same test with a btree index and execution times are \nthe same in both cases).\n\nRegards\n\n\n\n\n\n\n\n\n\n\n\n\nHello,\nOne of our customer is facing an issue regarding a prepared\n statement and a gin index.\n An important gap between execution times can be shown when using\n execute or plain literals.\n\nHere is the test that shown this issue :\nInit table :\n\ncreate table tmp_tk_test_index\n(\n sync_id bigint,\n line_id varchar(50),\n chk_upgrade_index smallint default 0\n);\n\ncreate index idx_tmp_tk_test_index_1\n on tmp_tk_test_index using gin (sync_id, line_id);\n\ninsert into tmp_tk_test_index SELECT tb_seq as sync_id, case when tb_seq = 950000 then 'the-test-value-fa529a621a15' else gen_random_uuid()::text end as line_id FROM generate_series(1,1000000) as tb_seq;\nPrepare query :\nprepare stmt(bigint, text, int) as delete from tmp_tk_test_index where sync_id <= $1 and line_id = $2 and chk_upgrade_index = $3;\nAnd then execute it :\npostgres=# begin;\nBEGIN\npostgres=*# explain (analyse) execute stmt(950000, 'the-test-value-fa529a621a15', 0);\n QUERY PLAN \n\n \n\n-----------------------------------------------------------------------------------------------------------------------\n\n Delete on tmp_tk_test_index (cost=212.52..2662.59 rows=0 width=0) (actual time=60.766..60.767 rows=0 loops=1)\n\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=212.52..2662.59 rows=4 width=6) (actual time=60.756..60.758 rows=1 \n\nloops=1)\n\n Recheck Cond: ((sync_id <= '950000'::bigint) AND ((line_id)::text = 'the-test-value-fa529a621a15'::text))\n\n Filter: (chk_upgrade_index = 0)\n\n Heap Blocks: exact=1\n\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..212.52 rows=810 width=0) (actual time=60.745..60\n\n.745 rows=1 loops=1)\n\n Index Cond: ((sync_id <= '950000'::bigint) AND ((line_id)::text = 'the-test-value-fa529a621a15'::text))\n\n Planning Time: 6.765 ms\n\n Execution Time: 61.160 ms\n\n(9 rows)\npostgres=*# rollback ;\nROLLBACK\n\n\nIt takes 61.160ms to be executed. However, the \"same\" query\n without a prepared statement is far faster :\npostgres=# begin;\nBEGIN\npostgres=*# explain analyze delete from tmp_tk_test_index where sync_id <= 950000 and line_id = 'the-test-value-fa529a621a15' and chk_upgrade_index = 0;\n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------------\n----------------\n Delete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.084..0.085 rows=0 loops=1)\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.042..0.043 rows=1 loops\n=1)\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Filter: ((sync_id <= 950000) AND (chk_upgrade_index = 0))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..21.36 rows=1 width=0) (actual time=0.027..0.028 \nrows=1 loops=1)\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Planning Time: 0.325 ms\n Execution Time: 0.148 ms\n(9 rows)\n\npostgres=*# rollback ;\nROLLBACK\nOnly 0.148ms to be executed...\nWhat can cause this quite important gap ? (0.148ms vs 61.160ms).\n In the explain plan, I can see that the Index Cond is\n different : in the slowest case (sync_id <= '950000'::bigint)\n is present.\n Why isn't it present in the second case ?\n\nTwo more tests :\n\nThe prepared statement is tweaked with sync_id+0. 
The query is\n fast :\n\nprepare stmt3(bigint, text, int) as delete from tmp_tk_test_index where sync_id+0 <= $1 and line_id = $2 and chk_upgrade_index = $3;\nexplain (analyse) execute stmt3(950000, 'the-test-value-fa529a621a15', 0);\n\nDelete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.057..0.058 rows=0 loops=1)\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.040..0.042 rows=1 loops\n=1)\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Filter: ((chk_upgrade_index = 0) AND ((sync_id + 0) <= '950000'::bigint))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..21.36 rows=1 width=0) (actual time=0.020..0.020 \nrows=1 loops=1)\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Planning Time: 0.355 ms\n Execution Time: 0.101 ms\n(9 rows)\nAnd one with an index but without sync_id inside it. Prepared\n statement isn't changed.\n\ncreate index idx_tmp_tk_test_index_2 on tmp_tk_test_index using gin (line_id);\n\n\n Delete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.052..0.053 rows=0 loops=1)\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.036..0.038 rows=1 loops\n=1)\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Filter: ((sync_id <= '950000'::bigint) AND (chk_upgrade_index = 0))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_tmp_tk_test_index_2 (cost=0.00..21.36 rows=1 width=0) (actual time=0.020..0.021 \nrows=1 loops=1)\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Planning Time: 0.439 ms\n Execution Time: 0.098 ms\n(9 rows\n\nQuite fast as well...\nHave you got an idea on the initial issue ? Why when using a\n prepared statement and a gin index the execution time \"explode\" ?\n Something to do with the planner ? optimizer ?\n\n(We executed the same test with a btree index and execution times\n are the same in both cases).\nRegards",
"msg_date": "Tue, 16 Jul 2024 17:43:49 +0200",
"msg_from": "Pierrick Chovelon <pierrick.chovelon@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Differents execution times with gin index, prepared statement and\n literals."
},
{
"msg_contents": "On 7/16/24 17:43, Pierrick Chovelon wrote:\n> ...\n>\n> Quite fast as well...\n> \n> Have you got an idea on the initial issue ? Why when using a prepared\n> statement and a gin index the execution time \"explode\" ?\n> Something to do with the planner ? optimizer ?\n> \n> (We executed the same test with a btree index and execution times are\n> the same in both cases).\n> \n\nThe reason why the two queries end up with different plans is pretty\nsimple - the condition ends up matching different operators, because of\ndata type difference. In case of the prepared query, the (x <= 950000)\nmatches <=(bigint,bitint) operator, and thus it matches the index. But\nthat happens because the query is prepared with bigint parameter. For\nthe regular query, the 950000 literal gets treated as int, the condition\nmatches to <=(bigint,int) and that does not match the index - hence it's\ntreated as a filter, not an index condition.\n\nIf you cast the literal to bigint (by doing ::bigint) in the regular\nquery, we end it'll use the same same plan as the prepared query - but\nthat's the slower one, unfortunately :-(\n\nWhich gets us to why that plan is slower, compared to the plan using\nfewer conditions. I think the problem is that <= 950000 matches most of\nthe table, which means the GIN index will have to load and process a\npretty long TID list, which is clearly not cheap.\n\nI don't think there's much you can do do - we don't consider this when\nmatching conditions to the index, we simply match as many conditions as\npossible. And the GIN code is not smart enough to make judgements about\nwhich columns to process first - it just goes column by column and\nbuilds the bitmap, and building a bitmap on 95% of the table is costly.\n\nIf this is a systemic problem for most/all queries (i.e. it's enough to\nhave a condition on line_id), I believe the +0 trick is a good way to\nmake sure the condition is treated as a filter.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 Jul 2024 19:54:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Differents execution times with gin index, prepared statement and\n literals."
},
{
"msg_contents": "Hello,\n\nThanks a lot for your clear answer.\n\nOn 16/07/2024 19:54, Tomas Vondra wrote:\n> On 7/16/24 17:43, Pierrick Chovelon wrote:\n>> ...\n>>\n>> Quite fast as well...\n>>\n>> Have you got an idea on the initial issue ? Why when using a prepared\n>> statement and a gin index the execution time \"explode\" ?\n>> Something to do with the planner ? optimizer ?\n>>\n>> (We executed the same test with a btree index and execution times are\n>> the same in both cases).\n>>\n> The reason why the two queries end up with different plans is pretty\n> simple - the condition ends up matching different operators, because of\n> data type difference. In case of the prepared query, the (x <= 950000)\n> matches <=(bigint,bitint) operator, and thus it matches the index. But\n> that happens because the query is prepared with bigint parameter. For\n> the regular query, the 950000 literal gets treated as int, the condition\n> matches to <=(bigint,int) and that does not match the index - hence it's\n> treated as a filter, not an index condition.\n>\n> If you cast the literal to bigint (by doing ::bigint) in the regular\n> query, we end it'll use the same same plan as the prepared query - but\n> that's the slower one, unfortunately :-(\nI try the following thing :\n\npostgres=# prepare stmt(int, text, int) as delete from tmp_tk_test_index where sync_id <= $1 and line_id = $2 and chk_upgrade_index = $3;\nPREPARE\npostgres=# begin ;\nBEGIN\npostgres=*# explain (analyse) execute stmt(950000, 'the-test-value-fa529a621a15', 0);\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------\n\n Delete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.148..0.149 rows=0 loops=1)\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.146..0.147 rows=0 loops=1)\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Filter: ((sync_id <= 950000) AND (chk_upgrade_index = 0))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..21.36 rows=1 width=0) (actual time=0.099..0.099 rows=1 loops=1)\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Planning Time: 9.412 ms\n Execution Time: 1.570 ms\n(9 rows)\npostgres=*# rollback ;\nROLLBACK\n\nSo preparing a query with a data type different from the column (int \n(prepared statement) vs bigint (table)) is faster in our case :/\nIt doesn't sound obvious to me :)\n\nThanks again for your answer Tomas.\n\n> Which gets us to why that plan is slower, compared to the plan using\n> fewer conditions. I think the problem is that <= 950000 matches most of\n> the table, which means the GIN index will have to load and process a\n> pretty long TID list, which is clearly not cheap.\n>\n> I don't think there's much you can do do - we don't consider this when\n> matching conditions to the index, we simply match as many conditions as\n> possible. And the GIN code is not smart enough to make judgements about\n> which columns to process first - it just goes column by column and\n> builds the bitmap, and building a bitmap on 95% of the table is costly.\n>\n> If this is a systemic problem for most/all queries (i.e. 
it's enough to\n> have a condition on line_id), I believe the +0 trick is a good way to\n> make sure the condition is treated as a filter.\n>\n>\n> regards\n>\n-- \nPierrick Chovelon\nConsultant DBA PostgreSQL - Dalibo\n\n\n\n\n\n\nHello,\nThanks a lot for your clear answer.\nOn 16/07/2024 19:54, Tomas Vondra\n wrote:\n\n\nOn 7/16/24 17:43, Pierrick Chovelon wrote:\n\n\n...\n\nQuite fast as well...\n\nHave you got an idea on the initial issue ? Why when using a prepared\nstatement and a gin index the execution time \"explode\" ?\nSomething to do with the planner ? optimizer ?\n\n(We executed the same test with a btree index and execution times are\nthe same in both cases).\n\n\n\n\nThe reason why the two queries end up with different plans is pretty\nsimple - the condition ends up matching different operators, because of\ndata type difference. In case of the prepared query, the (x <= 950000)\nmatches <=(bigint,bitint) operator, and thus it matches the index. But\nthat happens because the query is prepared with bigint parameter. For\nthe regular query, the 950000 literal gets treated as int, the condition\nmatches to <=(bigint,int) and that does not match the index - hence it's\ntreated as a filter, not an index condition.\n\nIf you cast the literal to bigint (by doing ::bigint) in the regular\nquery, we end it'll use the same same plan as the prepared query - but\nthat's the slower one, unfortunately :-(\n\n\n I try the following thing :\n\npostgres=# prepare stmt(int, text, int) as delete from tmp_tk_test_index where sync_id <= $1 and line_id = $2 and chk_upgrade_index = $3;\nPREPARE\npostgres=# begin ;\nBEGIN\npostgres=*# explain (analyse) execute stmt(950000, 'the-test-value-fa529a621a15', 0);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------\n Delete on tmp_tk_test_index (cost=21.36..25.38 rows=0 width=0) (actual time=0.148..0.149 rows=0 loops=1)\n -> Bitmap Heap Scan on tmp_tk_test_index (cost=21.36..25.38 rows=1 width=6) (actual time=0.146..0.147 rows=0 loops=1)\n Recheck Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Filter: ((sync_id <= 950000) AND (chk_upgrade_index = 0))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on idx_tmp_tk_test_index_1 (cost=0.00..21.36 rows=1 width=0) (actual time=0.099..0.099 rows=1 loops=1)\n Index Cond: ((line_id)::text = 'the-test-value-fa529a621a15'::text)\n Planning Time: 9.412 ms\n Execution Time: 1.570 ms\n(9 rows)\npostgres=*# rollback ;\nROLLBACK\nSo preparing a query with a data type different from the column\n (int (prepared statement) vs bigint (table)) is faster in our case\n :/ \n It doesn't sound obvious to me :)\nThanks again for your answer Tomas.\n\nWhich gets us to why that plan is slower, compared to the plan using\nfewer conditions. I think the problem is that <= 950000 matches most of\nthe table, which means the GIN index will have to load and process a\npretty long TID list, which is clearly not cheap.\n\nI don't think there's much you can do do - we don't consider this when\nmatching conditions to the index, we simply match as many conditions as\npossible. And the GIN code is not smart enough to make judgements about\nwhich columns to process first - it just goes column by column and\nbuilds the bitmap, and building a bitmap on 95% of the table is costly.\n\nIf this is a systemic problem for most/all queries (i.e. 
it's enough to\nhave a condition on line_id), I believe the +0 trick is a good way to\nmake sure the condition is treated as a filter.\n\n\nregards\n\n\n\n-- \nPierrick Chovelon\nConsultant DBA PostgreSQL - Dalibo",
"msg_date": "Wed, 17 Jul 2024 10:15:26 +0200",
"msg_from": "Pierrick Chovelon <pierrick.chovelon@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Differents execution times with gin index, prepared statement and\n literals."
}
]
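To make Tomas Vondra's explanation above easy to reproduce: the plan difference comes down to which <= operator the parser resolves, which depends on the type of the parameter or literal. The following sketch assumes the tmp_tk_test_index table and GIN index created earlier in the thread; exact plans and timings will of course vary.

BEGIN;
-- Casting the literal to bigint makes the condition match <=(bigint,bigint),
-- so (per Tomas's explanation) it is pushed into the GIN index condition and
-- should reproduce the slower prepared-statement plan; the bare int literal
-- matches <=(bigint,int) instead and stays a plain filter.
EXPLAIN (ANALYZE)
DELETE FROM tmp_tk_test_index
WHERE sync_id <= 950000::bigint
  AND line_id = 'the-test-value-fa529a621a15'
  AND chk_upgrade_index = 0;
ROLLBACK;

-- Both operators exist in the catalog; per the explanation above, only the
-- same-type one is usable by the GIN opclass on sync_id.
SELECT oprname, oprleft::regtype, oprright::regtype
FROM pg_operator
WHERE oprname = '<='
  AND oprleft = 'bigint'::regtype
  AND oprright IN ('bigint'::regtype, 'integer'::regtype);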
[
{
"msg_contents": "The IMMUTABLE marker for functions is quite simple on the surface, but\ncould be interpreted a few different ways, and there's some historical\nbaggage that makes it complicated.\n\nThere are a number of ways in which IMMUTABLE functions can change\nbehavior:\n\n1. Updating or moving to a different OS affects all collations that use\nthe libc provider (other than \"C\" and \"POSIX\", which don't actually use\nlibc). LOWER(), INITCAP(), UPPER() and pattern matching are also\naffected.\n\n2. Updating ICU affects the collations that use the ICU provider.\nICU_UNICODE_VERSION(), LOWER(), INITCAP(), UPPER() and pattern matching\nare also affected.\n\n3. Moving to a different database encoding may affect collations that\nuse the \"C\" or \"POSIX\" locales in the libc provider (NB: those locales\ndon't actually use libc).\n\n4. A PG Unicode update may change the results of functions that depend\non Unicode. For instance, NORMALIZE(), UNICODE_ASSIGNED(), and\nUNICODE_VERSION(). Or, if using the new builtin provider's \"C.UTF-8\"\nlocale in version 17, LOWER(), INITCAP(), UPPER(), and pattern matching\n(NB: collation itself is not affected -- always code point order).\n\n5. If a well-defined IMMUTABLE function produces the wrong results, we\nmay fix the bug in the next major release.\n\n6. The GUC extra_float_digits can change the results of floating point\ntext output.\n\n7. A UDF may be improperly marked IMMUTABLE. A particularly common\nvariant is a UDF without search_path specified, which is probably not\ntruly IMMUTABLE.\n\n(more I'm sure, please add to list...)\n\n\n#1 and #2 have been discussed much more than the rest, but I think it's\nworthwhile to enumerate the other problems even if the impact is a lot\nlower.\n\n\nNoah seemed particularly concerned[1] about #4, so I'll start off by\ndiscussing that. Here's a brief history (slightly confusing because the\nPG and Unicode versions are similar numbers):\n\n PG13: Unicode 13.0 and NORMALIZE() is first exposed as a SQL function\n PG15: Unicode updated to 14.0\n PG16: Unicode updated to 15.0\n PG17: Unicode updated to 15.1, UNICODE_ASSIGNED(), UNICODE_VERSION()\nand builtin \"C.UTF-8\" locale are introduced\n\nTo repeat, these Unicode updates do not affect collation itself, they\naffect affect NORMALIZE(), UNICODE_VERSION(), and UNICODE_ASSIGNED().\nIf using the builtin \"C.UTF-8\" locale, they also affect LOWER(),\nINITCAP(), UPPER(), and pattern matching. (NB: the builtin collation\nprovider hasn't yet gone through any Unicode update.)\n\nThere are two alternative philosophies:\n\nA. By choosing to use a Unicode-based function, the user has opted in\nto the Unicode stability guarantees[2], and it's fine to update Unicode\noccasionally in new major versions as long as we are transparent with\nthe user.\n\nB. IMMUTABLE implies some very strict definition of stability, and we\nshould never again update Unicode because it changes the results of\nIMMUTABLE functions.\n\nWe've been following (A), and that's the defacto policy today[3][4].\nNoah and Laurenz argued[5] that the policy starting in version 18\nshould be (B). 
Given that it's a policy decision that affects more than\njust the builtin collation provider, I'd like to discuss it more\nbroadly outside of that subthread.\n\nRegards,\n\tJeff Davis\n\n\n[1] \nhttps://www.postgresql.org/message-id/20240629220857.fb.nmisch@google.com\n\n[2]\nhttps://www.unicode.org/policies/stability_policy.html\n\n[3] \nhttps://www.postgresql.org/message-id/1d178eb1bbd61da1bcfe4a11d6545e9cdcede1d1.camel%40j-davis.com\n\n[4]\nhttps://www.postgresql.org/message-id/564325.1720297161%40sss.pgh.pa.us\n\n[5]\nhttps://www.postgresql.org/message-id/af82b292f13dd234790bc701933e9992ee07d4fa.camel%40cybertec.at\n\n\n",
"msg_date": "Tue, 16 Jul 2024 10:42:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "[18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On 7/16/24 13:42, Jeff Davis wrote:\n> The IMMUTABLE marker for functions is quite simple on the surface, but\n> could be interpreted a few different ways, and there's some historical\n> baggage that makes it complicated.\n> \n> There are a number of ways in which IMMUTABLE functions can change\n> behavior:\n> \n> 1. Updating or moving to a different OS affects all collations that use\n> the libc provider (other than \"C\" and \"POSIX\", which don't actually use\n> libc). LOWER(), INITCAP(), UPPER() and pattern matching are also\n> affected.\n> \n> 2. Updating ICU affects the collations that use the ICU provider.\n> ICU_UNICODE_VERSION(), LOWER(), INITCAP(), UPPER() and pattern matching\n> are also affected.\n> \n> 3. Moving to a different database encoding may affect collations that\n> use the \"C\" or \"POSIX\" locales in the libc provider (NB: those locales\n> don't actually use libc).\n> \n> 4. A PG Unicode update may change the results of functions that depend\n> on Unicode. For instance, NORMALIZE(), UNICODE_ASSIGNED(), and\n> UNICODE_VERSION(). Or, if using the new builtin provider's \"C.UTF-8\"\n> locale in version 17, LOWER(), INITCAP(), UPPER(), and pattern matching\n> (NB: collation itself is not affected -- always code point order).\n> \n> 5. If a well-defined IMMUTABLE function produces the wrong results, we\n> may fix the bug in the next major release.\n> \n> 6. The GUC extra_float_digits can change the results of floating point\n> text output.\n> \n> 7. A UDF may be improperly marked IMMUTABLE. A particularly common\n> variant is a UDF without search_path specified, which is probably not\n> truly IMMUTABLE.\n> \n> (more I'm sure, please add to list...)\n> \n> \n> #1 and #2 have been discussed much more than the rest, but I think it's\n> worthwhile to enumerate the other problems even if the impact is a lot\n> lower.\n> \n> \n> Noah seemed particularly concerned[1] about #4, so I'll start off by\n> discussing that. Here's a brief history (slightly confusing because the\n> PG and Unicode versions are similar numbers):\n> \n> PG13: Unicode 13.0 and NORMALIZE() is first exposed as a SQL function\n> PG15: Unicode updated to 14.0\n> PG16: Unicode updated to 15.0\n> PG17: Unicode updated to 15.1, UNICODE_ASSIGNED(), UNICODE_VERSION()\n> and builtin \"C.UTF-8\" locale are introduced\n> \n> To repeat, these Unicode updates do not affect collation itself, they\n> affect affect NORMALIZE(), UNICODE_VERSION(), and UNICODE_ASSIGNED().\n> If using the builtin \"C.UTF-8\" locale, they also affect LOWER(),\n> INITCAP(), UPPER(), and pattern matching. (NB: the builtin collation\n> provider hasn't yet gone through any Unicode update.)\n> \n> There are two alternative philosophies:\n> \n> A. By choosing to use a Unicode-based function, the user has opted in\n> to the Unicode stability guarantees[2], and it's fine to update Unicode\n> occasionally in new major versions as long as we are transparent with\n> the user.\n> \n> B. IMMUTABLE implies some very strict definition of stability, and we\n> should never again update Unicode because it changes the results of\n> IMMUTABLE functions.\n> \n> We've been following (A), and that's the defacto policy today[3][4].\n> Noah and Laurenz argued[5] that the policy starting in version 18\n> should be (B). 
Given that it's a policy decision that affects more than\n> just the builtin collation provider, I'd like to discuss it more\n> broadly outside of that subthread.\n\nOn the general topic, we have these definitions in the fine manual:\n\n8<-----------------\nA VOLATILE function can do anything, ... A query using a volatile \nfunction will re-evaluate the function at every row where its value is \nneeded.\n\nA STABLE function cannot modify the database and is guaranteed to return \nthe same results given the same arguments for all rows within a single \nstatement...\n\nAn IMMUTABLE function cannot modify the database and is guaranteed to \nreturn the same results given the same arguments forever.\n8<-----------------\n\nAs Jeff points out, the IMMUTABLE definition has never really been true. \nEven the STABLE is not quite right, as there are at least some STABLE \nfunctions that will return the same value for multiple statements if \nthey are within a transaction block (e.g. \"now()\" -- TBH I don't \nremember offhand if that is true for all stable functions).\n\nIn any case, there is quite a gap between \"forever\" and \"single \nstatement\". Perhaps we need to have more volatility categories, with \nguarantees that lie somewhere between the two, and allow those to be \nused like we do IMMUTABLE except with appropriate warning labels. E.g. \nsomething (\"STABLE_VERSION\"?) to mean \"forever within a major version \nlifetime\" and something (\"STABLE_SYSTEM?\") to mean \"as long as you don't \nupgrade your OS\".\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 16 Jul 2024 14:57:10 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 11:57 AM Joe Conway <mail@joeconway.com> wrote:\n\n>\n> > There are two alternative philosophies:\n> >\n> > A. By choosing to use a Unicode-based function, the user has opted in\n> > to the Unicode stability guarantees[2], and it's fine to update Unicode\n> > occasionally in new major versions as long as we are transparent with\n> > the user.\n> >\n> > B. IMMUTABLE implies some very strict definition of stability, and we\n> > should never again update Unicode because it changes the results of\n> > IMMUTABLE functions.\n> >\n> > We've been following (A), and that's the defacto policy today[3][4].\n> > Noah and Laurenz argued[5] that the policy starting in version 18\n> > should be (B). Given that it's a policy decision that affects more than\n> > just the builtin collation provider, I'd like to discuss it more\n> > broadly outside of that subthread.\n>\n> On the general topic, we have these definitions in the fine manual:\n>\n> 8<-----------------\n> A VOLATILE function can do anything, ... A query using a volatile\n> function will re-evaluate the function at every row where its value is\n> needed.\n>\n> A STABLE function cannot modify the database and is guaranteed to return\n> the same results given the same arguments for all rows within a single\n> statement...\n>\n> An IMMUTABLE function cannot modify the database and is guaranteed to\n> return the same results given the same arguments forever.\n> 8<-----------------\n>\n> As Jeff points out, the IMMUTABLE definition has never really been true.\n>\n\n\n> Even the STABLE is not quite right, as there are at least some STABLE\n> functions that will return the same value for multiple statements if\n> they are within a transaction block (e.g. \"now()\" -- TBH I don't\n> remember offhand if that is true for all stable functions).\n>\n\nUnder-specification here doesn't make the meaning of stable incorrect. We\ndon't have anything that guarantees stability at the transaction scope\nbecause I don't think it can be guaranteed there without considering\nwhether said transaction is read-committed, repeatable read, or\nserializable. The function itself can promise more but the marker seems\ncorrectly scoped for how the system uses it in statement optimization.\n\n and allow those to be\n> used like we do IMMUTABLE except with appropriate warning labels. E.g.\n> something (\"STABLE_VERSION\"?) to mean \"forever within a major version\n> lifetime\" and something (\"STABLE_SYSTEM?\") to mean \"as long as you don't\n> upgrade your OS\".\n>\n>\nI'd be content cutting \"forever\" down to \"within a given server\nconfiguration\". Then just note that immutable functions can depend\nimplicitly on external server characteristics and so when moving data\nbetween servers re-evaluation of immutable functions may be necessary. Not\nso bad for indexes. A bit more problematic for generated values.\n\nI'm not against adding metadata options here but for internal functions\ncomments and documentation can work. 
For user-defined functions I have my\ndoubts on how trustworthy they would end up being.\n\nFor the original question, I suggest continuing behaving per \"A\" and work\non making it more clear to users what that means in terms of server\nupgrades.\n\nIf we do add metadata to reflect our reality I'd settle on a generic\n\"STATIC\" marker that can be used on those functions the rely on real world\nstate, whether we are directly calling into the system (e.g., hashing) or\nhave chosen to provide the state access management ourselves (e.g.,\nunicode).\n\nWhen we do take control we should have a goal of allowing for a given\nexternal dependency version to exist in many PostgreSQL versions and give\nthe DBA the choice of when to move individual databases from one version to\nthe next. Possibly dropping the dependency version support alongside the\ndropping of support of the major version it first appeared in. Not keeping\nup with external dependency versions just punishes new users by forbidding\nthem a tool permanently, as well as puts us out-of-step with those\ndependency development groups, to save existing users some short-term\npain. Being able to deal with that pain at a time different than the\nmiddle of a major version upgrade, one database at a time, gives those\nexisting users reasonable options.\n\nDavid J.\n\nOn Tue, Jul 16, 2024 at 11:57 AM Joe Conway <mail@joeconway.com> wrote:\n> There are two alternative philosophies:\n> \n> A. By choosing to use a Unicode-based function, the user has opted in\n> to the Unicode stability guarantees[2], and it's fine to update Unicode\n> occasionally in new major versions as long as we are transparent with\n> the user.\n> \n> B. IMMUTABLE implies some very strict definition of stability, and we\n> should never again update Unicode because it changes the results of\n> IMMUTABLE functions.\n> \n> We've been following (A), and that's the defacto policy today[3][4].\n> Noah and Laurenz argued[5] that the policy starting in version 18\n> should be (B). Given that it's a policy decision that affects more than\n> just the builtin collation provider, I'd like to discuss it more\n> broadly outside of that subthread.\n\nOn the general topic, we have these definitions in the fine manual:\n\n8<-----------------\nA VOLATILE function can do anything, ... A query using a volatile \nfunction will re-evaluate the function at every row where its value is \nneeded.\n\nA STABLE function cannot modify the database and is guaranteed to return \nthe same results given the same arguments for all rows within a single \nstatement...\n\nAn IMMUTABLE function cannot modify the database and is guaranteed to \nreturn the same results given the same arguments forever.\n8<-----------------\n\nAs Jeff points out, the IMMUTABLE definition has never really been true. \n Even the STABLE is not quite right, as there are at least some STABLE \nfunctions that will return the same value for multiple statements if \nthey are within a transaction block (e.g. \"now()\" -- TBH I don't \nremember offhand if that is true for all stable functions).Under-specification here doesn't make the meaning of stable incorrect. We don't have anything that guarantees stability at the transaction scope because I don't think it can be guaranteed there without considering whether said transaction is read-committed, repeatable read, or serializable. The function itself can promise more but the marker seems correctly scoped for how the system uses it in statement optimization. 
and allow those to be \nused like we do IMMUTABLE except with appropriate warning labels. E.g. \nsomething (\"STABLE_VERSION\"?) to mean \"forever within a major version \nlifetime\" and something (\"STABLE_SYSTEM?\") to mean \"as long as you don't \nupgrade your OS\".I'd be content cutting \"forever\" down to \"within a given server configuration\". Then just note that immutable functions can depend implicitly on external server characteristics and so when moving data between servers re-evaluation of immutable functions may be necessary. Not so bad for indexes. A bit more problematic for generated values.I'm not against adding metadata options here but for internal functions comments and documentation can work. For user-defined functions I have my doubts on how trustworthy they would end up being.For the original question, I suggest continuing behaving per \"A\" and work on making it more clear to users what that means in terms of server upgrades.If we do add metadata to reflect our reality I'd settle on a generic \"STATIC\" marker that can be used on those functions the rely on real world state, whether we are directly calling into the system (e.g., hashing) or have chosen to provide the state access management ourselves (e.g., unicode).When we do take control we should have a goal of allowing for a given external dependency version to exist in many PostgreSQL versions and give the DBA the choice of when to move individual databases from one version to the next. Possibly dropping the dependency version support alongside the dropping of support of the major version it first appeared in. Not keeping up with external dependency versions just punishes new users by forbidding them a tool permanently, as well as puts us out-of-step with those dependency development groups, to save existing users some short-term pain. Being able to deal with that pain at a time different than the middle of a major version upgrade, one database at a time, gives those existing users reasonable options.David J.",
"msg_date": "Tue, 16 Jul 2024 12:33:55 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On 7/16/24 15:33, David G. Johnston wrote:\n> On Tue, Jul 16, 2024 at 11:57 AM Joe Conway <mail@joeconway.com \n> <mailto:mail@joeconway.com>> wrote:\n> \n> \n> > There are two alternative philosophies:\n> >\n> > A. By choosing to use a Unicode-based function, the user has opted in\n> > to the Unicode stability guarantees[2], and it's fine to update\n> Unicode\n> > occasionally in new major versions as long as we are transparent with\n> > the user.\n> >\n> > B. IMMUTABLE implies some very strict definition of stability, and we\n> > should never again update Unicode because it changes the results of\n> > IMMUTABLE functions.\n> >\n> > We've been following (A), and that's the defacto policy today[3][4].\n> > Noah and Laurenz argued[5] that the policy starting in version 18\n> > should be (B). Given that it's a policy decision that affects\n> more than\n> > just the builtin collation provider, I'd like to discuss it more\n> > broadly outside of that subthread.\n> \n> On the general topic, we have these definitions in the fine manual:\n> \n> 8<-----------------\n> A VOLATILE function can do anything, ... A query using a volatile\n> function will re-evaluate the function at every row where its value is\n> needed.\n> \n> A STABLE function cannot modify the database and is guaranteed to\n> return\n> the same results given the same arguments for all rows within a single\n> statement...\n> \n> An IMMUTABLE function cannot modify the database and is guaranteed to\n> return the same results given the same arguments forever.\n> 8<-----------------\n> \n> As Jeff points out, the IMMUTABLE definition has never really been\n> true.\n> \n> Even the STABLE is not quite right, as there are at least some STABLE\n> functions that will return the same value for multiple statements if\n> they are within a transaction block (e.g. \"now()\" -- TBH I don't\n> remember offhand if that is true for all stable functions).\n> \n> \n> Under-specification here doesn't make the meaning of stable incorrect. \n> We don't have anything that guarantees stability at the transaction \n> scope because I don't think it can be guaranteed there without \n> considering whether said transaction is read-committed, repeatable read, \n> or serializable. The function itself can promise more but the marker \n> seems correctly scoped for how the system uses it in statement optimization.\n\n\nThe way it is described is still surprising and can bite you if you are \nnot familiar with the nuances. In particular I have seen now() used in \ntransaction blocks surprise more than one person over the years.\n\n\n> and allow those to be\n> used like we do IMMUTABLE except with appropriate warning labels. E.g.\n> something (\"STABLE_VERSION\"?) to mean \"forever within a major version\n> lifetime\" and something (\"STABLE_SYSTEM?\") to mean \"as long as you\n> don't\n> upgrade your OS\".\n> \n> I'd be content cutting \"forever\" down to \"within a given server \n> configuration\". Then just note that immutable functions can depend \n> implicitly on external server characteristics and so when moving data \n> between servers re-evaluation of immutable functions may be necessary. \n> Not so bad for indexes. A bit more problematic for generated values.\n\nYeah I forgot about the configuration controlled ones.\n\n> I'm not against adding metadata options here but for internal functions \n> comments and documentation can work. 
For user-defined functions I have \n> my doubts on how trustworthy they would end up being.\n\nPeople lie all the time for user-defined functions, usually specifically \nwhen they need IMMUTABLE semantics and are willing to live with the risk \nand/or apply their own controls to ensure no changes in output.\n\n> For the original question, I suggest continuing behaving per \"A\" and \n> work on making it more clear to users what that means in terms of server \n> upgrades.\n> \n> If we do add metadata to reflect our reality I'd settle on a generic \n> \"STATIC\" marker that can be used on those functions the rely on real \n> world state, whether we are directly calling into the system (e.g., \n> hashing) or have chosen to provide the state access management ourselves \n> (e.g., unicode).\n\nSo you are proposing we add STATIC to VOLATILE/STABLE/IMMUTABLE (in the \nthird position before IMMUTABLE), give it IMMUTABLE semantics, mark \nbuiltin functions that deserve it, and document with suitable caution \nstatements?\n\nI guess can live with just one additional level of granularity.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 16 Jul 2024 16:00:47 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> So you are proposing we add STATIC to VOLATILE/STABLE/IMMUTABLE (in the \n> third position before IMMUTABLE), give it IMMUTABLE semantics, mark \n> builtin functions that deserve it, and document with suitable caution \n> statements?\n\nWhat is the point of that, exactly?\n\nI'll agree that the user documentation could use some improvement\nin how it describes the volatility levels, but I do not see how\nit will reduce anybody's confusion to invent multiple aliases for\nwhat's effectively the same volatility level. Nor do I see a\nuse-case for actually having multiple versions of \"immutable\".\nOnce you've decided you can put something into an index, quibbling\nover just how immutable it is doesn't really change anything.\n\nTo put this another way: the existing volatility levels were\nbasically reverse-engineered from the ways that the planner could\nmeaningfully treat a function: it's dangerous, it is safe enough\nto use in an index condition (which changes the number of times\nthe query will evaluate it), or it's safe to constant-fold in\nadvance of execution. Unless there's a fourth planner behavior that's\nworth having, we don't need a fourth level. Possibly you could\nargue that \"safe to put in an index\" is a different level from\n\"safe to constant-fold\", but I don't really agree with that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2024 16:16:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On 7/16/24 16:16, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> So you are proposing we add STATIC to VOLATILE/STABLE/IMMUTABLE (in the \n>> third position before IMMUTABLE), give it IMMUTABLE semantics, mark \n>> builtin functions that deserve it, and document with suitable caution \n>> statements?\n> \n> What is the point of that, exactly?\n> \n> I'll agree that the user documentation could use some improvement\n> in how it describes the volatility levels, but I do not see how\n> it will reduce anybody's confusion to invent multiple aliases for\n> what's effectively the same volatility level. Nor do I see a\n> use-case for actually having multiple versions of \"immutable\".\n> Once you've decided you can put something into an index, quibbling\n> over just how immutable it is doesn't really change anything.\n> \n> To put this another way: the existing volatility levels were\n> basically reverse-engineered from the ways that the planner could\n> meaningfully treat a function: it's dangerous, it is safe enough\n> to use in an index condition (which changes the number of times\n> the query will evaluate it), or it's safe to constant-fold in\n> advance of execution. Unless there's a fourth planner behavior that's\n> worth having, we don't need a fourth level. Possibly you could\n> argue that \"safe to put in an index\" is a different level from\n> \"safe to constant-fold\", but I don't really agree with that.\n\nFair enough, but then I think we should change the documentation to not \nsay \"forever\".\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 16 Jul 2024 16:18:50 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Fair enough, but then I think we should change the documentation to not \n> say \"forever\".\n\nNo objection to that, it's clearly a misleading definition.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Jul 2024 16:26:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Joe Conway <mail@joeconway.com> writes:\n> > So you are proposing we add STATIC to VOLATILE/STABLE/IMMUTABLE (in the\n> > third position before IMMUTABLE), give it IMMUTABLE semantics, mark\n> > builtin functions that deserve it, and document with suitable caution\n> > statements?\n>\n> What is the point of that, exactly?\n>\n> I'll agree that the user documentation could use some improvement\n> in how it describes the volatility levels, but I do not see how\n> it will reduce anybody's confusion to invent multiple aliases for\n> what's effectively the same volatility level. Nor do I see a\n> use-case for actually having multiple versions of \"immutable\".\n> Once you've decided you can put something into an index, quibbling\n> over just how immutable it is doesn't really change anything.\n>\n>\nI'd teach pg_upgrade to inspect the post-upgraded catalog of is-use\ndependencies and report on any of these it finds and remind the DBA that\nthis latent issue may exist in their system.\n\nI agree the core behaviors of the system would remain unchanged and both\nmodes would be handled identically. Though requiring superuser or a\npredefined role membership to actually use a \"static\" mode function in an\nindex or generated expression would be an interesting option to consider.\n\nDavid J.\n\nOn Tue, Jul 16, 2024 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Joe Conway <mail@joeconway.com> writes:\n> So you are proposing we add STATIC to VOLATILE/STABLE/IMMUTABLE (in the \n> third position before IMMUTABLE), give it IMMUTABLE semantics, mark \n> builtin functions that deserve it, and document with suitable caution \n> statements?\n\nWhat is the point of that, exactly?\n\nI'll agree that the user documentation could use some improvement\nin how it describes the volatility levels, but I do not see how\nit will reduce anybody's confusion to invent multiple aliases for\nwhat's effectively the same volatility level. Nor do I see a\nuse-case for actually having multiple versions of \"immutable\".\nOnce you've decided you can put something into an index, quibbling\nover just how immutable it is doesn't really change anything.I'd teach pg_upgrade to inspect the post-upgraded catalog of is-use dependencies and report on any of these it finds and remind the DBA that this latent issue may exist in their system.I agree the core behaviors of the system would remain unchanged and both modes would be handled identically. Though requiring superuser or a predefined role membership to actually use a \"static\" mode function in an index or generated expression would be an interesting option to consider.David J.",
"msg_date": "Tue, 16 Jul 2024 13:27:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 16, 2024 at 3:28 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n>\n> I'd teach pg_upgrade to inspect the post-upgraded catalog of in-use\n> dependencies and report on any of these it finds and remind the DBA that\n> this latent issue may exist in their system.\n>\n\nWould this help? Collation-related dependency changes are a different thing\nfrom major version DB upgrades\n\nTom’s point about how the levels are directly tied to concrete differences\nin behavior (planner/executor) makes a lot of sense to me\n\n-Jeremy\n\nOn Tue, Jul 16, 2024 at 3:28 PM David G. Johnston <david.g.johnston@gmail.com> wrote:I'd teach pg_upgrade to inspect the post-upgraded catalog of in-use dependencies and report on any of these it finds and remind the DBA that this latent issue may exist in their system.Would this help? Collation-related dependency changes are a different thing from major version DB upgradesTom’s point about how the levels are directly tied to concrete differences in behavior (planner/executor) makes a lot of sense to me-Jeremy",
"msg_date": "Tue, 16 Jul 2024 16:41:08 -0400",
"msg_from": "Jeremy Schneider <schneider@ardentperf.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-16 at 13:27 -0700, David G. Johnston wrote:\n> I'd teach pg_upgrade to inspect the post-upgraded catalog of is-use\n> dependencies and report on any of these it finds and remind the DBA\n> that this latent issue may exist in their system.\n\nThat's impossible to do in a complete way, and hard to do with much\naccuracy. I don't oppose it though -- if someone finds a way to provide\nenough information to be useful, then that's fine with me.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 16 Jul 2024 13:47:08 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-16 at 10:42 -0700, Jeff Davis wrote:\n> The IMMUTABLE marker for functions is quite simple on the surface, but\n> could be interpreted a few different ways, and there's some historical\n> baggage that makes it complicated.\n> \n> There are a number of ways in which IMMUTABLE functions can change\n> behavior:\n> \n> 1. Updating or moving to a different OS affects all collations that use\n> the libc provider (other than \"C\" and \"POSIX\", which don't actually use\n> libc). LOWER(), INITCAP(), UPPER() and pattern matching are also\n> affected.\n> \n> 2. Updating ICU affects the collations that use the ICU provider.\n> ICU_UNICODE_VERSION(), LOWER(), INITCAP(), UPPER() and pattern matching\n> are also affected.\n> \n> 3. Moving to a different database encoding may affect collations that\n> use the \"C\" or \"POSIX\" locales in the libc provider (NB: those locales\n> don't actually use libc).\n> \n> 4. A PG Unicode update may change the results of functions that depend\n> on Unicode. For instance, NORMALIZE(), UNICODE_ASSIGNED(), and\n> UNICODE_VERSION(). Or, if using the new builtin provider's \"C.UTF-8\"\n> locale in version 17, LOWER(), INITCAP(), UPPER(), and pattern matching\n> (NB: collation itself is not affected -- always code point order).\n> \n> 5. If a well-defined IMMUTABLE function produces the wrong results, we\n> may fix the bug in the next major release.\n> \n> 6. The GUC extra_float_digits can change the results of floating point\n> text output.\n> \n> 7. A UDF may be improperly marked IMMUTABLE. A particularly common\n> variant is a UDF without search_path specified, which is probably not\n> truly IMMUTABLE.\n> \n> Noah seemed particularly concerned[1] about #4, so I'll start off by\n> discussing that.\n>\n> Unicode updates do not affect collation itself, they\n> affect affect NORMALIZE(), UNICODE_VERSION(), and UNICODE_ASSIGNED().\n> If using the builtin \"C.UTF-8\" locale, they also affect LOWER(),\n> INITCAP(), UPPER(), and pattern matching. (NB: the builtin collation\n> provider hasn't yet gone through any Unicode update.)\n> \n> There are two alternative philosophies:\n> \n> A. By choosing to use a Unicode-based function, the user has opted in\n> to the Unicode stability guarantees[2], and it's fine to update Unicode\n> occasionally in new major versions as long as we are transparent with\n> the user.\n> \n> B. IMMUTABLE implies some very strict definition of stability, and we\n> should never again update Unicode because it changes the results of\n> IMMUTABLE functions.\n> \n> We've been following (A), and that's the defacto policy today[3][4].\n> Noah and Laurenz argued[5] that the policy starting in version 18\n> should be (B). 
Given that it's a policy decision that affects more than\n> just the builtin collation provider, I'd like to discuss it more\n> broadly outside of that subthread.\n> \n> [1] \n> https://www.postgresql.org/message-id/20240629220857.fb.nmisch@google.com\n> \n> [2]\n> https://www.unicode.org/policies/stability_policy.html\n> \n> [3] \n> https://www.postgresql.org/message-id/1d178eb1bbd61da1bcfe4a11d6545e9cdcede1d1.camel%40j-davis.com\n> \n> [4]\n> https://www.postgresql.org/message-id/564325.1720297161%40sss.pgh.pa.us\n> \n> [5]\n> https://www.postgresql.org/message-id/af82b292f13dd234790bc701933e9992ee07d4fa.camel%40cybertec.at\n\nConcerning #4, the new built-in locale, my hope (and, in my opinion, its only\nvalue) is to get out of the problems #1 and #2 that are not under our control.\n\nIf changes in major PostgreSQL versions force users of the built-in\nlocale provider to rebuild indexes, that would invalidate it. I think that\nusers care more about data corruption than about exact Unicode-compliant\nbehavior. Anybody who does can use ICU.\n\nPeople routinely create indexes that involve upper() or lower(), so I'd\nsay changing their behavior would be a problem.\n\nPerhaps I should moderate my statement: if a change affects only a newly\nintroduced code point (which is unlikely to be used in a database), and we\nthink that the change is very important, we could consider applying it.\nBut that should be carefully considered; I am against blindly following the\nchanges in Unicode.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 19 Jul 2024 21:06:57 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Fri, 2024-07-19 at 21:06 +0200, Laurenz Albe wrote:\n> Perhaps I should moderate my statement: if a change affects only a\n> newly\n> introduced code point (which is unlikely to be used in a database),\n> and we\n> think that the change is very important, we could consider applying\n> it.\n> But that should be carefully considered; I am against blindly\n> following the\n> changes in Unicode.\n\nThat sounds reasonable.\n\nI propose that, going forward, we take more care with Unicode updates:\nassess the impact, provide time for comments, and consider possible\nmitigations. In other words, it would be reviewed like any other\nchange.\n\nIdeally, some new developments would make it less worrisome, and\nUnicode updates could become more routine. I have some ideas, which I\ncan propose in separate threads. But for now, I don't see a reason to\nrush Unicode updates.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 19 Jul 2024 12:41:49 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On 19.07.24 21:41, Jeff Davis wrote:\n> On Fri, 2024-07-19 at 21:06 +0200, Laurenz Albe wrote:\n>> Perhaps I should moderate my statement: if a change affects only a\n>> newly\n>> introduced code point (which is unlikely to be used in a database),\n>> and we\n>> think that the change is very important, we could consider applying\n>> it.\n>> But that should be carefully considered; I am against blindly\n>> following the\n>> changes in Unicode.\n> \n> That sounds reasonable.\n> \n> I propose that, going forward, we take more care with Unicode updates:\n> assess the impact, provide time for comments, and consider possible\n> mitigations. In other words, it would be reviewed like any other\n> change.\n\nI disagree with that. We should put ourselves into the position to \nadopt new Unicode versions without fear. Similar to updates to time \nzones, snowball, etc.\n\nWe can't be discussing the merits of the Unicode update every year. \nThat would be madness. How would we weigh each change against the \nothers? Some new character is introduced because it's the new currency \nof some country; seems important. Some mobile phone platforms jumped \nthe gun and already use the character for the same purpose before it was \nassigned; now the character is in databases but some function results \nwill change with the upgrade. How do we proceed?\n\nMoreover, if we were to decide to not take a particular Unicode update, \nthat would then stop that process forever, because whatever the issue \nwas wouldn't go away with the next Unicode version.\n\n\nUnless I missed something here, all the problem examples involve \nunassigned code points that were later assigned. (Assigned code points \nalready have compatibility mechanisms, such as collation versions.) So \nI would focus on that issue. We already have a mechanism to disallow \nunassigned code points. So there is a tradeoff that users can make: \nDisallow unassigned code points and avoid upgrade issues resulting from \nthem. Maybe that just needs to be documented more prominently.\n\n\n\n",
"msg_date": "Mon, 22 Jul 2024 16:26:37 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 10:26 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I disagree with that. We should put ourselves into the position to\n> adopt new Unicode versions without fear. Similar to updates to time\n> zones, snowball, etc.\n>\n> We can't be discussing the merits of the Unicode update every year.\n> That would be madness.\n\nYeah, I agree with that 100%. I can't imagine that we want to, in\neffect, develop our own version of Unicode that is not quite the same\nas upstream.\n\nWe've got to figure out a way to fix this problem from the other end -\ncoping with updates when they happen. I feel like we've already\ndiscussed the obvious approach at some length: have a way to mark\nindexes invalid when \"immutable\" things change. That doesn't fix\neverything because you could, for example, manufacture constraint\nviolations, even if there are no relevant indexes, so maybe index\ninvalidation wouldn't be the only thing we'd ever need to do, but it\nwould help a lot. In view of Jeff's list at the start of the thread,\nmaybe that mechanism needs to be more general than just\ncollation-related stuff: maybe there should be a general way to say\n\"oopsie, this index can't be relied upon until it's rebuit\" and a user\ncould manually do that if they change the definition of an immutable\nfunction. Or there could even be some flag to CREATE FUNCTION that\ntriggers it for all dependent indexes. I'm not really sure.\n\nIf I remember correctly, Thomas Munro put a good deal of work into\ndeveloping specifically for collation definition changes a few\nreleases ago and it was judged not good enough, but that means we just\nstill have nothing, which is unfortunate considering how often things\ngo wrong in this area.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jul 2024 11:14:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, 2024-07-22 at 16:26 +0200, Peter Eisentraut wrote:\n> Unless I missed something here, all the problem examples involve \n> unassigned code points that were later assigned.\n\nFor normalization and case mapping that's right.\n\nFor regexes, a character property could change. But that's mostly a\ntheoretical problem because, at least in my experience, I can't recall\never seeing an index that would be affected.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 22 Jul 2024 08:38:26 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, 2024-07-22 at 11:14 -0400, Robert Haas wrote:\n> On Mon, Jul 22, 2024 at 10:26 AM Peter Eisentraut\n> <peter@eisentraut.org> wrote:\n> > I disagree with that. We should put ourselves into the position to\n> > adopt new Unicode versions without fear. Similar to updates to\n> > time\n> > zones, snowball, etc.\n> > \n> > We can't be discussing the merits of the Unicode update every year.\n> > That would be madness.\n> \n> Yeah, I agree with that 100%.\n\nIt's hard for me to argue; that was my reasoning during development.\n\nBut Noah seems to have a very strong opinion on this matter:\n\nhttps://www.postgresql.org/message-id/20240629220857.fb.nmisch%40google.com\n\nand I thought this thread would be a better opportunity for him to\nexpress it. Noah?\n\n> In view of Jeff's list at the start of the thread,\n> maybe that mechanism needs to be more general than just\n> collation-related stuff: maybe there should be a general way to say\n> \"oopsie, this index can't be relied upon until it's rebuit\" \n\n...\n\n> If I remember correctly, Thomas Munro put a good deal of work into\n> developing specifically for collation definition changes a few\n> releases ago and it was judged not good enough, \n\nYeah, see ec48314708. The revert appears to be for a number of\ntechnical reasons, but even if we solve all of those, it's hard to have\na perfect solution that accounts for plpgsql functions that create\narbitrary query strings and EXECUTE them.\n\nThough perhaps not impossible if we use some kind of runtime detection.\nWe could have some kind of global context that tracks, at runtime, when\nan expression is executing for the purposes of an index. If a function\ndepends on a versioned collation, then mark the index or add a version\nsomewhere.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 22 Jul 2024 09:34:42 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, 2024-07-22 at 16:26 +0200, Peter Eisentraut wrote:\n> I propose that, going forward, we take more care with Unicode updates:\n> > assess the impact, provide time for comments, and consider possible\n> > mitigations. In other words, it would be reviewed like any other\n> > change.\n> \n> I disagree with that. We should put ourselves into the position to \n> adopt new Unicode versions without fear. Similar to updates to time \n> zones, snowball, etc.\n> \n> We can't be discussing the merits of the Unicode update every year. \n> That would be madness. How would we weigh each change against the \n> others? Some new character is introduced because it's the new currency \n> of some country; seems important. Some mobile phone platforms jumped \n> the gun and already use the character for the same purpose before it was \n> assigned; now the character is in databases but some function results \n> will change with the upgrade. How do we proceed?\n> \n> Moreover, if we were to decide to not take a particular Unicode update, \n> that would then stop that process forever, because whatever the issue \n> was wouldn't go away with the next Unicode version.\n\nI understand the difficulty (madness) of discussing every Unicode\nchange. If that's unworkable, my preference would be to stick with some\nUnicode version and never modify it, ever.\n\nThe choice that users could make in that case is\n\na) use the built-in provider, don't get proper support for new code\n points, but never again worry about corrupted indexes after an\n upgrade\n\nb) use ICU collations, be up to date with Unicode, but reindex whenever\n you upgrade to a new ICU version\n\n> Unless I missed something here, all the problem examples involve \n> unassigned code points that were later assigned. (Assigned code points \n> already have compatibility mechanisms, such as collation versions.) So \n> I would focus on that issue. We already have a mechanism to disallow \n> unassigned code points. So there is a tradeoff that users can make: \n> Disallow unassigned code points and avoid upgrade issues resulting from \n> them. Maybe that just needs to be documented more prominently.\n\nAre you proposing a switch that would make PostgreSQL error out if\nsomebody wants to use an unassigned code point? That would be an option.\nIf what you mean is just add some documentation that tells people not\nto use unassigned code points if they want to avoid a reindex, I'd say\nthat is not enough.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 22 Jul 2024 19:18:08 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, 2024-07-22 at 19:18 +0200, Laurenz Albe wrote:\n> I understand the difficulty (madness) of discussing every Unicode\n> change. If that's unworkable, my preference would be to stick with\n> some\n> Unicode version and never modify it, ever.\n\nAmong all the ways that IMMUTABLE and indexes can go wrong, is there a\nreason why you think we should draw such a bright line in this one\ncase?\n\n\n> \n> Are you proposing a switch that would make PostgreSQL error out if\n> somebody wants to use an unassigned code point? That would be an\n> option.\n\nYou can use a CHECK(UNICODE_ASSIGNED(t)) in version 17, and in version\n18 I have a proposal here to make it a database-level option:\n\nhttps://www.postgresql.org/message-id/a0e85aca6e03042881924c4b31a840a915a9d349.camel@j-davis.com\n\n(Note: the proposal might have a few holes in it, I didn't look at it\nlately and nobody has commented yet.)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 22 Jul 2024 10:51:17 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, 22 Jul 2024 at 13:51, Jeff Davis <pgsql@j-davis.com> wrote:\n\n\n> > Are you proposing a switch that would make PostgreSQL error out if\n> > somebody wants to use an unassigned code point? That would be an\n> > option.\n>\n> You can use a CHECK(UNICODE_ASSIGNED(t)) in version 17, and in version\n> 18 I have a proposal here to make it a database-level option:\n>\n\nAnd if you define a domain over text with this check, you would effectively\nhave a type that works exactly like text except you can only store assigned\ncode points in it. Then use that instead of text everywhere (easy to audit\nwith a query over the system tables).\n\nOn Mon, 22 Jul 2024 at 13:51, Jeff Davis <pgsql@j-davis.com> wrote: \n> Are you proposing a switch that would make PostgreSQL error out if\n> somebody wants to use an unassigned code point? That would be an\n> option.\n\nYou can use a CHECK(UNICODE_ASSIGNED(t)) in version 17, and in version\n18 I have a proposal here to make it a database-level option:And if you define a domain over text with this check, you would effectively have a type that works exactly like text except you can only store assigned code points in it. Then use that instead of text everywhere (easy to audit with a query over the system tables).",
"msg_date": "Mon, 22 Jul 2024 13:54:21 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
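A minimal sketch of that idea (PostgreSQL 17 or later, since it relies on unicode_assigned(); the domain name and the audit query are illustrative only):

    CREATE DOMAIN assigned_text AS text
        CHECK (unicode_assigned(VALUE));

    -- Audit: user-table columns still declared as plain text rather than the domain.
    SELECT c.relname, a.attname
    FROM pg_attribute a
    JOIN pg_class c ON c.oid = a.attrelid
    WHERE a.atttypid = 'text'::regtype
      AND c.relkind IN ('r', 'p')
      AND a.attnum > 0
      AND NOT a.attisdropped
      AND c.relnamespace NOT IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace);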
{
"msg_contents": "On Mon, Jul 22, 2024 at 1:18 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> I understand the difficulty (madness) of discussing every Unicode\n> change. If that's unworkable, my preference would be to stick with some\n> Unicode version and never modify it, ever.\n\nI think that's a completely non-viable way forward. Even if everyone\nhere voted in favor of that, five years from now there will be someone\nwho shows up to say \"I can't use your crappy software because the\nUnicode tables haven't been updated in five years, here's a patch!\".\nAnd, like, what are we going to do? Still keeping shipping the 2024\nversion of Unicode four hundred years from now, assuming humanity and\ncivilization and PostgreSQL are still around then? Holding something\nstill \"forever\" is just never going to work.\n\nEvery other piece of software in the world has to deal with changes as\na result of the addition of new code points, and probably less\ncommonly, revisions to existing code points. Presumably, their stuff\nbreaks too, from time to time. I mean, I find it a bit difficult to\nbelieve that web browsers or messaging applications on phones only\never display emoji, and never try to do any sort of string sorting.\nThe idea that PostgreSQL is the only thing that ever sorts strings\ncannot be taken seriously. So other people are presumably hacking\naround this in some way appropriate to what their software does, and\nwe're going to have to figure out how to do the same thing. We could\nof course sit here and talk about whether it's really a good of the\nUnicode folks to add a lime emoji and a bunch of new emojis of people\nproceeding in a rightward direction to complement the existing emojis\nof people proceeding in a leftward direction, but they are going to do\nthat whether we like it or not, and people -- including me, I'm afraid\n-- are going to use those emojis once they show up, so software that\nwants to remain relevant is going to have to support them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jul 2024 13:55:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, 2024-07-22 at 13:55 -0400, Robert Haas wrote:\n> On Mon, Jul 22, 2024 at 1:18 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > I understand the difficulty (madness) of discussing every Unicode\n> > change. If that's unworkable, my preference would be to stick with some\n> > Unicode version and never modify it, ever.\n> \n> I think that's a completely non-viable way forward. Even if everyone\n> here voted in favor of that, five years from now there will be someone\n> who shows up to say \"I can't use your crappy software because the\n> Unicode tables haven't been updated in five years, here's a patch!\".\n> And, like, what are we going to do? Still keeping shipping the 2024\n> version of Unicode four hundred years from now, assuming humanity and\n> civilization and PostgreSQL are still around then? Holding something\n> still \"forever\" is just never going to work.\n\nI hear you. It would be interesting to know what other RDBMS do here.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 23 Jul 2024 09:11:42 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 3:11 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> I hear you. It would be interesting to know what other RDBMS do here.\n\nYeah, I agree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:11:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 1:11 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Mon, 2024-07-22 at 13:55 -0400, Robert Haas wrote:\n> > On Mon, Jul 22, 2024 at 1:18 PM Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> > > I understand the difficulty (madness) of discussing every Unicode\n> > > change. If that's unworkable, my preference would be to stick with\n> some\n> > > Unicode version and never modify it, ever.\n> >\n> > I think that's a completely non-viable way forward. Even if everyone\n> > here voted in favor of that, five years from now there will be someone\n> > who shows up to say \"I can't use your crappy software because the\n> > Unicode tables haven't been updated in five years, here's a patch!\".\n> > And, like, what are we going to do? Still keeping shipping the 2024\n> > version of Unicode four hundred years from now, assuming humanity and\n> > civilization and PostgreSQL are still around then? Holding something\n> > still \"forever\" is just never going to work.\n>\n> I hear you. It would be interesting to know what other RDBMS do here.\n\n\nOther RDBMS are very careful not to corrupt databases, afaik including\nfunction based indexes, by changing Unicode. I’m not aware of any other\nRDBMS that updates Unicode versions in place; instead they support multiple\nUnicode versions and do not drop the old ones.\n\nSee also:\nhttps://www.postgresql.org/message-id/E8754F74-C65F-4A1A-826F-FD9F37599A2E%40ardentperf.com\n\nI know Jeff mentioned that Unicode tables copied into Postgres for\nnormalization have been updated a few times. Did anyone ever actually\ndiscuss the fact that things like function based indexes can be corrupted\nby this, and weigh the reasoning? Are there past mailing list threads\ntouching on the corruption problem and making the argument why updating\nanyway is the right thing to do? I always assumed that nobody had really\ndug deeply into this before the last few years.\n\nI do agree it isn’t as broad of a problem as linguistic collation itself,\nwhich causes a lot more widespread corruption when it changes (as we’ve\nseen from glibc 2.28 and also other older hacker mailing list threads about\nsmaller changes in older glibc versions corrupting databases). For now,\nPostgres only has code-point collation and the other Unicode functions\nmentioned in this thread.\n\n-Jeremy\n\nOn Tue, Jul 23, 2024 at 1:11 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Mon, 2024-07-22 at 13:55 -0400, Robert Haas wrote:\n> On Mon, Jul 22, 2024 at 1:18 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > I understand the difficulty (madness) of discussing every Unicode\n> > change. If that's unworkable, my preference would be to stick with some\n> > Unicode version and never modify it, ever.\n> \n> I think that's a completely non-viable way forward. Even if everyone\n> here voted in favor of that, five years from now there will be someone\n> who shows up to say \"I can't use your crappy software because the\n> Unicode tables haven't been updated in five years, here's a patch!\".\n> And, like, what are we going to do? Still keeping shipping the 2024\n> version of Unicode four hundred years from now, assuming humanity and\n> civilization and PostgreSQL are still around then? Holding something\n> still \"forever\" is just never going to work.\n\nI hear you. It would be interesting to know what other RDBMS do here.Other RDBMS are very careful not to corrupt databases, afaik including function based indexes, by changing Unicode. 
I’m not aware of any other RDBMS that updates Unicode versions in place; instead they support multiple Unicode versions and do not drop the old ones.See also:https://www.postgresql.org/message-id/E8754F74-C65F-4A1A-826F-FD9F37599A2E%40ardentperf.comI know Jeff mentioned that Unicode tables copied into Postgres for normalization have been updated a few times. Did anyone ever actually discuss the fact that things like function based indexes can be corrupted by this, and weigh the reasoning? Are there past mailing list threads touching on the corruption problem and making the argument why updating anyway is the right thing to do? I always assumed that nobody had really dug deeply into this before the last few years.I do agree it isn’t as broad of a problem as linguistic collation itself, which causes a lot more widespread corruption when it changes (as we’ve seen from glibc 2.28 and also other older hacker mailing list threads about smaller changes in older glibc versions corrupting databases). For now, Postgres only has code-point collation and the other Unicode functions mentioned in this thread.-Jeremy",
"msg_date": "Tue, 23 Jul 2024 06:31:56 -0600",
"msg_from": "Jeremy Schneider <schneider@ardentperf.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 8:32 AM Jeremy Schneider\n<schneider@ardentperf.com> wrote:\n> Other RDBMS are very careful not to corrupt databases, afaik including function based indexes, by changing Unicode. I’m not aware of any other RDBMS that updates Unicode versions in place; instead they support multiple Unicode versions and do not drop the old ones.\n>\n> See also:\n> https://www.postgresql.org/message-id/E8754F74-C65F-4A1A-826F-FD9F37599A2E%40ardentperf.com\n\nHmm. I think we might have some unique problems due to the fact that\nwe rely partly on the operating system behavior, partly on libicu, and\npartly on our own internal tables.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:49:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 09:34:42AM -0700, Jeff Davis wrote:\n> On Mon, 2024-07-22 at 11:14 -0400, Robert Haas wrote:\n> > On Mon, Jul 22, 2024 at 10:26 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > > I disagree with that. We should put ourselves into the position to\n> > > adopt new Unicode versions without fear. Similar to updates to\n> > > time\n> > > zones, snowball, etc.\n> > > \n> > > We can't be discussing the merits of the Unicode update every year.\n> > > That would be madness.\n> > \n> > Yeah, I agree with that 100%.\n> \n> It's hard for me to argue; that was my reasoning during development.\n> \n> But Noah seems to have a very strong opinion on this matter:\n> \n> https://www.postgresql.org/message-id/20240629220857.fb.nmisch%40google.com\n> \n> and I thought this thread would be a better opportunity for him to\n> express it. Noah?\n\nLong-term, we should handle this like Oracle, SQL Server, and DB2 do:\nhttps://postgr.es/m/CA+fnDAbmn2d5tzZsj-4wmD0jApHTsg_zGWUpteb=OMSsX5rdAg@mail.gmail.com\n\nShort-term, we should remedy the step backward that pg_c_utf8 has taken:\nhttps://postgr.es/m/20240718233908.52.nmisch@google.com\nhttps://postgr.es/m/486d71991a3f80ec1c47e1bd7931e2ef3627b6b3.camel@cybertec.at\n\n\n$SUBJECT has proposed remedy \"take more care with Unicode updates\". If one\nwanted to pursue that, it should get more specific, by giving one or both of:\n\n(a) principles for deciding whether a Unicode update is okay\n(b) examples of past Unicode release changes and whether PostgreSQL should\n adopt a future Unicode version making a similar change\n\nThat said, I'm not aware of an (a) or (b) likely to create an attractive\ncompromise between the \"index scan agrees with seqscan after pg_upgrade\" goal\n(https://postgr.es/m/20240706195129.fd@rfd.leadboat.com) and the \"don't freeze\nUnicode data\" goal\n(https://postgr.es/m/CA+TgmoZRpOFVmQWKEXHdcKj9AFLbXT5ouwtXa58J=3ydLP00ZQ@mail.gmail.com).\nThe \"long-term\" above would satisfy both goals. If it were me, I would\nabandon the \"more care\" proposal.\n\n\n",
"msg_date": "Tue, 23 Jul 2024 07:39:49 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-23 at 08:49 -0400, Robert Haas wrote:\n> Hmm. I think we might have some unique problems due to the fact that\n> we rely partly on the operating system behavior, partly on libicu,\n> and\n> partly on our own internal tables.\n\nThe reliance on the OS is especially problematic for reasons that have\nalready been discussed extensively.\n\nOne of my strongest motivations for PG_C_UTF8 was that there was still\na use case for libc in PG16: the \"C.UTF-8\" locale, which is not\nsupported at all in ICU. Daniel Vérité made me aware of the importance\nof this locale, which offers code point order collation combined with\nUnicode ctype semantics.\n\nWith PG17, between ICU and the builtin provider, there's little\nremaining reason to use libc (aside from legacy).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 10:03:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 1:03 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> One of my strongest motivations for PG_C_UTF8 was that there was still\n> a use case for libc in PG16: the \"C.UTF-8\" locale, which is not\n> supported at all in ICU. Daniel Vérité made me aware of the importance\n> of this locale, which offers code point order collation combined with\n> Unicode ctype semantics.\n>\n> With PG17, between ICU and the builtin provider, there's little\n> remaining reason to use libc (aside from legacy).\n\nI was really interested to read Jeremy Schneider's slide deck, to\nwhich he linked earlier, wherein he explained that other major\ndatabases default to something more like C.UTF-8. Maybe we need to\nrelitigate the debate about what our default should be in light of\nthose findings (but, if so, on another thread with a clear subject\nline). But even if we were to decide to change the default, there are\nlots and lots of existing databases out there that are using libc\ncollations. I'm not in a good position to guess how many of those\npeople actually truly care about language-specific collations. I'm\npositive it's not zero, but I can't really guess how much more than\nzero it is. Even if it were zero, though, the fact that so many\nupgrades are done using pg_upgrade means that this problem will still\nbe around in a decade even if we changed the default tomorrow.\n\n(I do understand that you wrote \"aside from legacy\" so I'm not\naccusing you of ignoring the upgrade issues, just taking the\nopportunity to be more explicit about my own view.)\n\nAlso, Noah has pointed out that C.UTF-8 introduces some\nforward-compatibility hazards of its own, at least with respect to\nctype semantics. I don't have a clear view of what ought to be done\nabout that, but if we just replace a dependency on an unstable set of\nlibc definitions with a dependency on an equally unstable set of\nPostgreSQL definitions, we're not really winning. Do we need to\nversion the new ctype provider?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 14:40:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Also, Noah has pointed out that C.UTF-8 introduces some\n> forward-compatibility hazards of its own, at least with respect to\n> ctype semantics. I don't have a clear view of what ought to be done\n> about that, but if we just replace a dependency on an unstable set of\n> libc definitions with a dependency on an equally unstable set of\n> PostgreSQL definitions, we're not really winning.\n\nNo, I think we *are* winning, because the updates are not \"equally\nunstable\": with pg_c_utf8, we control when changes happen. We can\nalign them with major releases and release-note the differences.\nWith libc-based collations, we have zero control and not much\nnotification.\n\n> Do we need to version the new ctype provider?\n\nIt would be a version for the underlying Unicode definitions,\nnot the provider as such, but perhaps yes. I don't know to what\nextent doing so would satisfy Noah's concern; but if it would do\nso I'd be happy with that answer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 15:26:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On 7/23/24 15:26, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Also, Noah has pointed out that C.UTF-8 introduces some\n>> forward-compatibility hazards of its own, at least with respect to\n>> ctype semantics. I don't have a clear view of what ought to be done\n>> about that, but if we just replace a dependency on an unstable set of\n>> libc definitions with a dependency on an equally unstable set of\n>> PostgreSQL definitions, we're not really winning.\n> \n> No, I think we *are* winning, because the updates are not \"equally\n> unstable\": with pg_c_utf8, we control when changes happen. We can\n> align them with major releases and release-note the differences.\n> With libc-based collations, we have zero control and not much\n> notification.\n\n+1\n\n>> Do we need to version the new ctype provider?\n> \n> It would be a version for the underlying Unicode definitions,\n> not the provider as such, but perhaps yes. I don't know to what\n> extent doing so would satisfy Noah's concern; but if it would do\n> so I'd be happy with that answer.\n\nI came to the same conclusion. I think someone mentioned somewhere on \nthis thread that other databases support multiple Unicode versions. I \nthink we need to figure out how to do that too.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 15:41:00 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-23 at 15:26 -0400, Tom Lane wrote:\n> No, I think we *are* winning, because the updates are not \"equally\n> unstable\": with pg_c_utf8, we control when changes happen. We can\n> align them with major releases and release-note the differences.\n> With libc-based collations, we have zero control and not much\n> notification.\n\nAlso, changes to libc collations are much more impactful, at least two\norders of magnitude. All indexes on text are at risk, even primary\nkeys.\n\nPG_C_UTF8 has stable code point ordering (memcmp()) that is unaffected\nby Unicode updates, so primary keys will never be affected. The risks\nwe are talking about are for expression indexes, e.g. on LOWER(). Even\nif you do have such expression indexes, the types of changes Unicode\nmakes to casing and character properties are typically much more mild.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 12:56:28 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
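For concreteness, a small sketch of that distinction (assumes a version 17 build; "en-x-icu" is only present when PostgreSQL is built with ICU):

    SELECT 'B' < 'a' COLLATE pg_c_utf8;    -- true: code point order (0x42 < 0x61), unaffected by Unicode updates
    SELECT 'B' < 'a' COLLATE "en-x-icu";   -- false: linguistic order, tied to the ICU library version
    SELECT lower('Σ' COLLATE pg_c_utf8);   -- case mapping is the part that comes from the bundled Unicode tables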
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Tue, 2024-07-23 at 15:26 -0400, Tom Lane wrote:\n>> No, I think we *are* winning, because the updates are not \"equally\n>> unstable\": with pg_c_utf8, we control when changes happen. We can\n>> align them with major releases and release-note the differences.\n>> With libc-based collations, we have zero control and not much\n>> notification.\n\n> Also, changes to libc collations are much more impactful, at least two\n> orders of magnitude. All indexes on text are at risk, even primary\n> keys.\n\nWell, it depends on which libc collation you have in mind. I was\nthinking of a libc-supplied C.UTF-8 collation, which I would expect\nto behave the same as pg_c_utf8, modulo which Unicode version it's\nbased on. But even when comparing to that, pg_c_utf8 can win on\nstability for the reasons I stated. If you don't have a C.UTF-8\ncollation available, and are forced to use en_US.UTF-8 or\n$locale-of-choice, then the stability picture is far more dire,\nas Jeff says.\n\nNoah seems to be comparing the stability of pg_c_utf8 to the stability\nof a pure C/POSIX collation, but I do not think that is the relevant\ncomparison to make. Besides, if someone is using C/POSIX, this\nfeature doesn't stop them from continuing to do so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 16:07:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-23 at 07:39 -0700, Noah Misch wrote:\n> we should remedy the step backward that pg_c_utf8 has taken:\n\nObviously I disagree that we've taken a step backwards.\n\nCan you articulate the principle by which all of the other problems\nwith IMMUTABLE are just fine, but updates to Unicode are intolerable,\nand only for PG_C_UTF8?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 13:07:49 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No, I think we *are* winning, because the updates are not \"equally\n> unstable\": with pg_c_utf8, we control when changes happen. We can\n> align them with major releases and release-note the differences.\n> With libc-based collations, we have zero control and not much\n> notification.\n\nOK, that's pretty fair.\n\n> > Do we need to version the new ctype provider?\n>\n> It would be a version for the underlying Unicode definitions,\n> not the provider as such, but perhaps yes. I don't know to what\n> extent doing so would satisfy Noah's concern; but if it would do\n> so I'd be happy with that answer.\n\nI don't see how we can get by without some kind of versioning here.\nIt's probably too late to do that for v17, but if we bet either that\n(1) we'll never need to change anything for pg_c_utf8 or that (2)\nthose changes will be so minor that nobody will have a problem, I\nthink we will lose our bet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 16:18:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jul 23, 2024 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Do we need to version the new ctype provider?\n\n>> It would be a version for the underlying Unicode definitions,\n>> not the provider as such, but perhaps yes. I don't know to what\n>> extent doing so would satisfy Noah's concern; but if it would do\n>> so I'd be happy with that answer.\n\n> I don't see how we can get by without some kind of versioning here.\n> It's probably too late to do that for v17,\n\nWhy? If we agree that that's the way forward, we could certainly\nstick some collversion other than \"1\" into pg_c_utf8's pg_collation\nentry. There's already been one v17 catversion bump since beta2\n(716bd12d2), so another one is basically free.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 16:28:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "\tTom Lane wrote:\n\n> > I don't see how we can get by without some kind of versioning here.\n> > It's probably too late to do that for v17,\n> \n> Why? If we agree that that's the way forward, we could certainly\n> stick some collversion other than \"1\" into pg_c_utf8's pg_collation\n> entry. There's already been one v17 catversion bump since beta2\n> (716bd12d2), so another one is basically free.\n\npg_collation.collversion has been used so far for the sort part\nof the collations.\n\nFor the ctype part:\n\npostgres=# select unicode_version();\n unicode_version \n-----------------\n 15.1\n(1 row)\n\n\npostgres=# select icu_unicode_version ();\n icu_unicode_version \n---------------------\n 14.0\n(1 row)\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 23 Jul 2024 22:34:00 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
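For reference, the versions recorded for individual collations can be inspected with a query along these lines (a sketch; which of these collation names exist depends on how the cluster was built and initialized):

    SELECT collname, collprovider, collversion
    FROM pg_collation
    WHERE collname IN ('pg_c_utf8', 'en-x-icu', 'en_US');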
{
"msg_contents": "On 22.07.24 19:55, Robert Haas wrote:\n> Every other piece of software in the world has to deal with changes as\n> a result of the addition of new code points, and probably less\n> commonly, revisions to existing code points. Presumably, their stuff\n> breaks too, from time to time. I mean, I find it a bit difficult to\n> believe that web browsers or messaging applications on phones only\n> ever display emoji, and never try to do any sort of string sorting.\n\nThe sorting isn't the problem. We have a versioning mechanism for \ncollations. What we do with the version information is clearly not \nperfect yet, but the mechanism exists and you can hack together queries \nthat answer the question, did anything change here that would affect my \nindexes. And you could build more tooling around that and so on.\n\nThe problem being considered here are updates to Unicode itself, as \ndistinct from the collation tables. A Unicode update can impact at \nleast two things:\n\n- Code points that were previously unassigned are now assigned. That's \nobviously a very common thing with every Unicode update. The new \ncharacter will have new properties attached to it, so the result of \nvarious functions that use such properties (upper(), lower(), \nnormalize(), etc.) could change, because previously the code point had \nno properties, and so those functions would not do anything interesting \nwith the character.\n\n- Certain properties of an existing character can change. Like, a \ncharacter used to be a letter and now it's a digit. (This is an \nexample; I'm not sure if that particular change would be allowed.) In \nthe extreme case, this could have the same impact as the above, but in \npractice the kinds of changes that are allowed wouldn't affect typical \nindexes.\n\nI don't think this has anything in particular to do with the new builtin \ncollation provider. That is just one new consumer of this.\n\n\n",
"msg_date": "Tue, 23 Jul 2024 22:36:12 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> \tTom Lane wrote:\n>> Why? If we agree that that's the way forward, we could certainly\n>> stick some collversion other than \"1\" into pg_c_utf8's pg_collation\n>> entry. There's already been one v17 catversion bump since beta2\n>> (716bd12d2), so another one is basically free.\n\n> pg_collation.collversion has been used so far for the sort part\n> of the collations.\n\nHmm, we haven't particularly drawn a distinction between sort-related\nand not-sort-related aspects of collation versions AFAIK. Perhaps\nit'd be appropriate to do so, and I agree that there's not time to\ndesign such a thing for v17. But pg_c_utf8 might be the only case\nwhere we could do anything other than advance those versions in\nlockstep. I doubt we have enough insight into the behaviors of\nother providers to say confidently that an update affects only\none side of their behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 16:39:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-23 at 16:07 -0400, Tom Lane wrote:\n> Well, it depends on which libc collation you have in mind. I was\n> thinking of a libc-supplied C.UTF-8 collation, which I would expect\n> to behave the same as pg_c_utf8, modulo which Unicode version it's\n> based on.\n\nDaniel Vérité documented[1] cases where the libc C.UTF-8 locale changed\nthe *sort* behavior, thereby affecting primary keys.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/8a3dc06f-9b9d-4ed7-9a12-2070d8b0165f%40manitou-mail.org\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 13:43:18 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Tue, 2024-07-23 at 16:07 -0400, Tom Lane wrote:\n>> Well, it depends on which libc collation you have in mind. I was\n>> thinking of a libc-supplied C.UTF-8 collation, which I would expect\n>> to behave the same as pg_c_utf8, modulo which Unicode version it's\n>> based on.\n\n> Daniel Vérité documented[1] cases where the libc C.UTF-8 locale changed\n> the *sort* behavior, thereby affecting primary keys.\n\nOuch. But we didn't establish whether that was an ancient bug,\nor something likely to happen again. (In any case, that surely\nreinforces the point that we can expect pg_c_utf8 to be more\nstable than any previously-available alternative.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 17:11:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 4:36 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> The sorting isn't the problem. We have a versioning mechanism for\n> collations. What we do with the version information is clearly not\n> perfect yet, but the mechanism exists and you can hack together queries\n> that answer the question, did anything change here that would affect my\n> indexes. And you could build more tooling around that and so on.\n\nIn my experience, sorting is, overwhelmingly, the problem. What people\ncomplain about is that they do an upgrade - of PG or some OS package -\nand then their indexes are broken. Or their partition bounds are\nbroken.\n\nThat we have versioning information that someone could hypothetically\nknow how to do something useful with is not really useful, because\nnobody actually knows how to do it, and there's nothing to trigger\nthem to do it in the first place. People don't think \"oh, I'm running\ndnf update, I better run undocumented queries against the PostgreSQL\nsystem catalogs to see whether my system is going to melt afterwards.\"\n\nWhat needs to happen is that when you do something that breaks\nsomething, something notices automatically and tells you and gives you\na way to get it fixed again. Or better yet, when you do something that\nwould break something as things stand today, some kind of versioning\nlogic kicks in and you keep the old behavior and nothing actually\nbreaks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 21:37:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-23 at 21:37 -0400, Robert Haas wrote:\n> In my experience, sorting is, overwhelmingly, the problem.\n\nI strongly agree.\n\n> That we have versioning information that someone could hypothetically\n> know how to do something useful with is not really useful, because\n> nobody actually knows how to do it\n\nIncluding me. I put significant effort into creating some views that\ncould help users identify potentially-affected indexes based on\ncollation changes, and I gave up. In theory it's just about impossible\n(consider some UDF that constructs queries and EXECUTEs them -- what\ncollations does that depend on?). In practice, it's not much easier,\nand you might as well just reindex everything having to do with text.\n\nIn contrast, if the problem is CTYPE-related, users are in a much\nbetter position. It won't affect their primary keys or most indexes.\nIt's much more tractable to review your expression indexes and look for\nproblems (not ideal, but better). Also, as Peter points out, CTYPE\nchanges are typically more narrow, so there's a good chance that\nthere's no problem at all.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 19:26:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
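As a rough starting point for that kind of review, a catalog query along these lines can list expression or partial indexes whose definitions mention the case-mapping functions directly (a sketch only; as noted above, it cannot see dependencies hidden inside user-defined functions that build and EXECUTE queries):

    SELECT indexrelid::regclass AS index_name,
           pg_get_indexdef(indexrelid) AS definition
    FROM pg_index
    WHERE (indexprs IS NOT NULL OR indpred IS NOT NULL)
      AND pg_get_indexdef(indexrelid) ~* '\m(lower|upper|initcap|normalize)\(';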
{
"msg_contents": "On 24.07.24 03:37, Robert Haas wrote:\n> On Tue, Jul 23, 2024 at 4:36 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> The sorting isn't the problem. We have a versioning mechanism for\n>> collations. What we do with the version information is clearly not\n>> perfect yet, but the mechanism exists and you can hack together queries\n>> that answer the question, did anything change here that would affect my\n>> indexes. And you could build more tooling around that and so on.\n> \n> In my experience, sorting is, overwhelmingly, the problem. What people\n> complain about is that they do an upgrade - of PG or some OS package -\n> and then their indexes are broken. Or their partition bounds are\n> broken.\n\nFair enough. My argument was, that topic is distinct from the topic of \nthis thread.\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 06:42:26 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 01:07:49PM -0700, Jeff Davis wrote:\n> On Tue, 2024-07-23 at 07:39 -0700, Noah Misch wrote:\n> > Short-term, we should remedy the step backward that pg_c_utf8 has taken:\n> > https://postgr.es/m/20240718233908.52.nmisch@google.com\n> > https://postgr.es/m/486d71991a3f80ec1c47e1bd7931e2ef3627b6b3.camel@cybertec.at\n> \n> Obviously I disagree that we've taken a step backwards.\n\nYes.\n\n> Can you articulate the principle by which all of the other problems\n> with IMMUTABLE are just fine, but updates to Unicode are intolerable,\n> and only for PG_C_UTF8?\n\nNo, because I don't think all the other problems with IMMUTABLE are just fine.\nThe two messages linked cover the comparisons I do consider important,\nespecially the comparison between pg_c_utf8 and packager-frozen ICU.\n\n\n",
"msg_date": "Wed, 24 Jul 2024 05:07:42 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 12:42 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> Fair enough. My argument was, that topic is distinct from the topic of\n> this thread.\n\nOK, that's fair. But I think the solutions are the same: we complain\nall the time about glibc and ICU shipping collations and not\nversioning them. We shouldn't make the same kinds of mistakes. Even if\nctype is less likely to break things than collations, it still can,\nand we should move in the direction of letting people keep the v17\nbehavior for the foreseeable future while at the same time having a\nway that they can also get the new behavior if they want it (and the\nnew behavior should be the default).\n\nI note in passing that the last time I saw a customer query with\nUPPER() in the join clause was... yesterday. The problems there had\nnothing to do with CTYPE, but there's no reason to suppose that it\ncouldn't have had such a problem. I suspect the reason we don't hear\nabout ctype problems now is that the collation problems are worse and\nhappen in similar situations. But if all the collation problems went\naway, a subset of the same users would then be unhappy about ctype.\n\nSo I don't want to see us sit on our hands and assert that we don't\nneed to worry about ctype because it's minor in comparison with\ncollation. It *is* minor in comparison with collation. But one problem\ncan be small in comparison with another and still bad. If an aircraft\nis on fire whilst experiencing a dual engine failure, it's still in a\nlot of trouble even if the fire can be put out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2024 08:20:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 6:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I note in passing that the last time I saw a customer query with\n> UPPER() in the join clause was... yesterday. The problems there had\n> nothing to do with CTYPE, but there's no reason to suppose that it\n> couldn't have had such a problem. I suspect the reason we don't hear\n> about ctype problems now is that the collation problems are worse and\n> happen in similar situations. But if all the collation problems went\n> away, a subset of the same users would then be unhappy about ctype.\n\n\nI have seen and created indexes on upper() functions a number of times too,\nand I think this is not an uncommon pattern for case insensitive searching\n\nBefore glibc 2.28, there was at least one mailing list thread where an\nunhappy person complained about collation problems; but for a number of\nyears before 2.28 I guess the collation changes were uncommon so it didn’t\nget enough momentum to be considered a real problem until the problem\nbecame widespread a few years ago?\n\nhttps://www.postgresql.org/message-id/flat/BA6132ED-1F6B-4A0B-AC22-81278F5AB81E%40tripadvisor.com\n\nI myself would prefer an approach here that sets a higher bar for\npg_upgrade not corrupting indexes, rather than saying it’s ok as long as\nit’s rare\n\n-Jeremy\n\nOn Wed, Jul 24, 2024 at 6:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\nI note in passing that the last time I saw a customer query with\nUPPER() in the join clause was... yesterday. The problems there had\nnothing to do with CTYPE, but there's no reason to suppose that it\ncouldn't have had such a problem. I suspect the reason we don't hear\nabout ctype problems now is that the collation problems are worse and\nhappen in similar situations. But if all the collation problems went\naway, a subset of the same users would then be unhappy about ctype.I have seen and created indexes on upper() functions a number of times too, and I think this is not an uncommon pattern for case insensitive searchingBefore glibc 2.28, there was at least one mailing list thread where an unhappy person complained about collation problems; but for a number of years before 2.28 I guess the collation changes were uncommon so it didn’t get enough momentum to be considered a real problem until the problem became widespread a few years ago?https://www.postgresql.org/message-id/flat/BA6132ED-1F6B-4A0B-AC22-81278F5AB81E%40tripadvisor.comI myself would prefer an approach here that sets a higher bar for pg_upgrade not corrupting indexes, rather than saying it’s ok as long as it’s rare-Jeremy",
"msg_date": "Wed, 24 Jul 2024 07:29:31 -0600",
"msg_from": "Jeremy Schneider <schneider@ardentperf.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Tue, 2024-07-23 at 06:31 -0600, Jeremy Schneider wrote:\n> Other RDBMS are very careful not to corrupt databases, afaik\n> including function based indexes, by changing Unicode. I’m not aware\n> of any other RDBMS that updates Unicode versions in place; instead\n> they support multiple Unicode versions and do not drop the old ones.\n\nI'm curious about the details of what other RDBMSs do.\n\nLet's simplify and say that there's one database-wide collation at\nversion 1, and the application doesn't use any COLLATE clause or other\nspecifications for queries or DDL.\n\nThen, version 2 of that collation becomes available. When a query comes\ninto the database, which version of the collation does it use, 1 or 2?\nIf it uses the latest available (version 2), then all the old indexes\nare effectively useless.\n\nSo I suppose there's some kind of migration process where you\nrebuild/fix objects to use the new collation, and when that's done then\nyou change the default so that queries use version 2. How does all that\nwork?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 10:35:38 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, 2024-07-24 at 08:20 -0400, Robert Haas wrote:\n> I note in passing that the last time I saw a customer query with\n> UPPER() in the join clause was... yesterday.\n\nCan you expand on that? This thread is mostly about durable state so I\ndon't immediately see the connection.\n\n> So I don't want to see us sit on our hands and assert that we don't\n> need to worry about ctype because it's minor in comparison with\n> collation. It *is* minor in comparison with collation. \n\n...\n\n> But one problem\n> can be small in comparison with another and still bad. If an aircraft\n> is on fire whilst experiencing a dual engine failure, it's still in a\n> lot of trouble even if the fire can be put out.\n\nThere's a qualitative difference between a collation update which can\nbreak your PKs and FKs, and a ctype update which definitely will not.\nYour analogy doesn't quite capture this distinction. I don't mean to\nover-emphasize this point, but I do think we need to keep some\nperspective here.\n\nBut I agree with your general point that we shouldn't dismiss the\nproblem just because it's minor. We should expect the problem to\nsurface at some point and be reasonably prepared.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 10:45:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On 24.07.24 14:20, Robert Haas wrote:\n> On Wed, Jul 24, 2024 at 12:42 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> Fair enough. My argument was, that topic is distinct from the topic of\n>> this thread.\n> \n> OK, that's fair. But I think the solutions are the same: we complain\n> all the time about glibc and ICU shipping collations and not\n> versioning them. We shouldn't make the same kinds of mistakes. Even if\n> ctype is less likely to break things than collations, it still can,\n> and we should move in the direction of letting people keep the v17\n> behavior for the foreseeable future while at the same time having a\n> way that they can also get the new behavior if they want it (and the\n> new behavior should be the default).\n\nVersioning is possibly part of the answer, but I think it would be \ndifferent versioning from the collation version.\n\nThe collation versions are in principle designed to change rarely. Some \nlanguages' rules might change once in twenty years, some never. Maybe \nyou have a database mostly in English and a few tables in, I don't know, \nSwedish (unverified examples). Most of the time nothing happens during \nupgrades, but one time in many years you need to reindex the Swedish \ntables, and the system starts warning you about that as soon as you \naccess the Swedish tables. (Conversely, if you never actually access \nthe Swedish tables, then you don't get warned about.)\n\nIf we wanted a similar versioning system for the Unicode updates, it \nwould be separate. We'd write the Unicode version that was current when \nthe system catalogs were initialized into, say, a pg_database column. \nAnd then at run-time, when someone runs say the normalize() function or \nsome regular expression character classification, then we check what the \nversion of the current compiled-in Unicode tables are, and then we'd \nissue a warning when they are different.\n\nA possible problem is that the Unicode version changes in practice with \nevery major PostgreSQL release, so this approach would end up warning \nusers after every upgrade. To avoid that, we'd probably need to keep \nsupport for multiple Unicode versions around, as has been suggested in \nthis thread already.\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 20:10:45 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 1:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> There's a qualitative difference between a collation update which can\n> break your PKs and FKs, and a ctype update which definitely will not.\n\nI don't think that's true. All you need is a unique index on UPPER(somecol).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2024 14:47:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
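A minimal sketch of the case Robert points at, using made-up object names: both the uniqueness check and the stored index entries depend on what upper() returns, so a Unicode/ctype update that changes a case mapping can leave the index out of sync with newly computed values until it is rebuilt.

    CREATE TABLE accounts (login text);
    CREATE UNIQUE INDEX accounts_login_upper_key ON accounts (upper(login));

    -- If an update to the ctype data changes upper() for some code point,
    -- previously distinct keys may now collide and existing index entries may
    -- no longer match upper(login) computed at query time; the usual remedy is:
    REINDEX INDEX accounts_login_upper_key;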
{
"msg_contents": "On Wed, 2024-07-24 at 14:47 -0400, Robert Haas wrote:\n> On Wed, Jul 24, 2024 at 1:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > There's a qualitative difference between a collation update which\n> > can\n> > break your PKs and FKs, and a ctype update which definitely will\n> > not.\n> \n> I don't think that's true. All you need is a unique index on\n> UPPER(somecol).\n\nPrimary keys are on plain column references, not expressions; and don't\nsupport WHERE clauses, so I don't see how a ctype update would affect a\nPK.\n\nIn any case, you are correct that Unicode updates could put some\nconstraints at risk, including unique indexes, CHECK, and partition\nconstraints. But someone has to actually use one of the affected\nfunctions somewhere, and that's the main distinction that I'm trying to\ndraw.\n\nThe reason why collation is qualitatively a much bigger problem is\nbecause there's no obvious indication that you are doing anything\nrelated to collation at all. A very plain \"CREATE TABLE x(t text\nPRIMARY KEY)\" is at risk.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 12:12:34 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 3:12 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> In any case, you are correct that Unicode updates could put some\n> constraints at risk, including unique indexes, CHECK, and partition\n> constraints. But someone has to actually use one of the affected\n> functions somewhere, and that's the main distinction that I'm trying to\n> draw.\n>\n> The reason why collation is qualitatively a much bigger problem is\n> because there's no obvious indication that you are doing anything\n> related to collation at all. A very plain \"CREATE TABLE x(t text\n> PRIMARY KEY)\" is at risk.\n\nWell, I don't know. I agree that collation is a much bigger problem,\nbut not for that reason. I think a user who is familiar with the\nproblems in this area will see the danger either way, and one who\nisn't, won't. For me, the only real difference is that a unique index\non a text column is a lot more common than one that involves UPPER.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2024 15:19:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 12:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nOn Wed, Jul 24, 2024 at 1:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> There's a qualitative difference between a collation update which can\n> break your PKs and FKs, and a ctype update which definitely will not.\n\nI don't think that's true. All you need is a unique index on UPPER(somecol).\n\n\nI doubt it’s common to have unique on upper()\n\nBut non-unique indexes for case insensitive searches will be more common.\nHistorically this is the most common way people did case insensitive on\noracle.\n\nChanging ctype would mean these queries can return wrong results\n\nThe impact would be similar to the critical problem TripAdvisor hit in 2014\nwith their read replicas, in the Postgres email thread I linked above\n\n-Jeremy\n\nOn Wed, Jul 24, 2024 at 12:47 PM Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Jul 24, 2024 at 1:45 PM Jeff Davis <pgsql@j-davis.com> wrote:> There's a qualitative difference between a collation update which can> break your PKs and FKs, and a ctype update which definitely will not.I don't think that's true. All you need is a unique index on UPPER(somecol).I doubt it’s common to have unique on upper()But non-unique indexes for case insensitive searches will be more common. Historically this is the most common way people did case insensitive on oracle.Changing ctype would mean these queries can return wrong resultsThe impact would be similar to the critical problem TripAdvisor hit in 2014 with their read replicas, in the Postgres email thread I linked above-Jeremy",
"msg_date": "Wed, 24 Jul 2024 13:43:44 -0600",
"msg_from": "Jeremy Schneider <schneider@ardentperf.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 3:43 PM Jeremy Schneider\n<schneider@ardentperf.com> wrote:\n> But non-unique indexes for case insensitive searches will be more common. Historically this is the most common way people did case insensitive on oracle.\n>\n> Changing ctype would mean these queries can return wrong results\n\nYeah. I mentioned earlier that I very recently saw a customer query\nwith UPPER() in the join condition. If someone is doing foo JOIN bar\nON upper(foo.x) = upper(bar.x), it is not unlikely that one or both of\nthose expressions are indexed. Not guaranteed, of course, but very\nplausible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2024 15:47:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [18] Policy on IMMUTABLE functions and Unicode updates"
}
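The non-unique, case-insensitive-search variant discussed here might look like the sketch below (table and column names are purely illustrative). The planner can use such expression indexes for the equality join, and if the stored upper() values were built under an older Unicode version, index searches can quietly miss or mismatch rows rather than raise any error.

    CREATE INDEX foo_upper_x_idx ON foo (upper(x));
    CREATE INDEX bar_upper_x_idx ON bar (upper(x));

    SELECT *
    FROM foo
    JOIN bar ON upper(foo.x) = upper(bar.x);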
]
[
{
"msg_contents": "\nAs I was trying out the libpq perl wrapper on Windows, I encountered a \nfailure in recovery test 002_archiving.pl from this query:\n\nSELECT size IS NOT NULL FROM \npg_stat_file('c:/prog/postgresql/build/testrun/recovery/002_archiving/data/t_002_archiving_primary_data/archives/00000002.history')\n\nThe test errored out because the file didn't exist.\n\nThis was called by poll_query_until(), which is changed by the patch to \nuse a libpq session rather than constantly forking psql. ISTM we should \nbe passing true as a second parameter so we keep going if the file \ndoesn't exist.\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 16 Jul 2024 15:04:13 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "recovery test error"
},
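For context: pg_stat_file() takes an optional boolean missing_ok argument; when it is true the function returns NULL for a nonexistent file rather than raising an error, so a polled query of roughly this shape (path shortened here) simply yields false until the history file shows up.

    SELECT size IS NOT NULL
    FROM pg_stat_file('.../archives/00000002.history', true);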
{
"msg_contents": "On Tue, Jul 16, 2024 at 03:04:13PM -0400, Andrew Dunstan wrote:\n> This was called by poll_query_until(), which is changed by the patch to use\n> a libpq session rather than constantly forking psql. ISTM we should be\n> passing true as a second parameter so we keep going if the file doesn't\n> exist.\n> \n> Thoughts?\n\nSounds like a good idea to me as this call could return ENOENT\ndepending on the timing of the archiver pushing the new history file,\nas writeTimeLineHistory() at the end of recovery notifies the archiver\nbut does not wait for the fact to happen (history files are\nprioritized, still there is a delay).\n--\nMichael",
"msg_date": "Wed, 17 Jul 2024 08:45:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery test error"
},
{
"msg_contents": "\nOn 2024-07-16 Tu 7:45 PM, Michael Paquier wrote:\n> On Tue, Jul 16, 2024 at 03:04:13PM -0400, Andrew Dunstan wrote:\n>> This was called by poll_query_until(), which is changed by the patch to use\n>> a libpq session rather than constantly forking psql. ISTM we should be\n>> passing true as a second parameter so we keep going if the file doesn't\n>> exist.\n>>\n>> Thoughts?\n> Sounds like a good idea to me as this call could return ENOENT\n> depending on the timing of the archiver pushing the new history file,\n> as writeTimeLineHistory() at the end of recovery notifies the archiver\n> but does not wait for the fact to happen (history files are\n> prioritized, still there is a delay).\n\n\nThanks. Done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 17 Jul 2024 10:47:20 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: recovery test error"
}
]
[
{
"msg_contents": "Hello.\n\nI'd like to make MergeAppend node Async-capable like Append node. \nNowadays when planner chooses MergeAppend plan, asynchronous execution \nis not possible. With attached patches you can see plans like\n\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT * FROM async_pt WHERE b % 100 = 0 ORDER BY b, a;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Merge Append\n Sort Key: async_pt.b, async_pt.a\n -> Async Foreign Scan on public.async_p1 async_pt_1\n Output: async_pt_1.a, async_pt_1.b, async_pt_1.c\n Remote SQL: SELECT a, b, c FROM public.base_tbl1 WHERE (((b % \n100) = 0)) ORDER BY b ASC NULLS LAST, a ASC NULLS LAST\n -> Async Foreign Scan on public.async_p2 async_pt_2\n Output: async_pt_2.a, async_pt_2.b, async_pt_2.c\n Remote SQL: SELECT a, b, c FROM public.base_tbl2 WHERE (((b % \n100) = 0)) ORDER BY b ASC NULLS LAST, a ASC NULLS LAST\n\nThis can be quite profitable (in our test cases you can gain up to two \ntimes better speed with MergeAppend async execution on remote servers).\n\nCode for asynchronous execution in Merge Append was mostly borrowed from \nAppend node.\n\nWhat significantly differs - in ExecMergeAppendAsyncGetNext() you must \nreturn tuple from the specified slot.\nSubplan number determines tuple slot where data should be retrieved to. \nWhen subplan is ready to provide some data,\nit's cached in ms_asyncresults. When we get tuple for subplan, specified \nin ExecMergeAppendAsyncGetNext(),\nExecMergeAppendAsyncRequest() returns true and loop in \nExecMergeAppendAsyncGetNext() ends. We can fetch data for\nsubplans which either don't have cached result ready or have already \nreturned them to the upper node. This\nflag is stored in ms_has_asyncresults. As we can get data for some \nsubplan either earlier or after loop in ExecMergeAppendAsyncRequest(),\nwe check this flag twice in this function.\nUnlike ExecAppendAsyncEventWait(), it seems \nExecMergeAppendAsyncEventWait() doesn't need a timeout - as there's no \nneed to get result\nfrom synchronous subplan if a tuple form async one was explicitly \nrequested.\n\nAlso we had to fix postgres_fdw to avoid directly looking at Append \nfields. Perhaps, accesors to Append fields look strange, but allows\nto avoid some code duplication. I suppose, duplication could be even \nless if we reworked async Append implementation, but so far I haven't\ntried to do this to avoid big diff from master.\n\nAlso mark_async_capable() believes that path corresponds to plan. This \ncan be not true when create_[merge_]append_plan() inserts sort node.\nIn this case mark_async_capable() can treat Sort plan node as some other \nand crash, so there's a small fix for this.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Wed, 17 Jul 2024 16:24:28 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Asynchronous MergeAppend"
},
{
"msg_contents": "Hi! Thank you for your work on this subject! I think this is a very \nuseful optimization)\n\nWhile looking through your code, I noticed some points that I think \nshould be taken into account. Firstly, I noticed only two tests to \nverify the functionality of this function and I think that this is not \nenough.\nAre you thinking about adding some tests with queries involving, for \nexample, join connections with different tables and unusual operators?\n\nIn addition, I have a question about testing your feature on a \nbenchmark. Are you going to do this?\n\nOn 17.07.2024 16:24, Alexander Pyhalov wrote:\n> Hello.\n>\n> I'd like to make MergeAppend node Async-capable like Append node. \n> Nowadays when planner chooses MergeAppend plan, asynchronous execution \n> is not possible. With attached patches you can see plans like\n>\n> EXPLAIN (VERBOSE, COSTS OFF)\n> SELECT * FROM async_pt WHERE b % 100 = 0 ORDER BY b, a;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------ \n>\n> Merge Append\n> Sort Key: async_pt.b, async_pt.a\n> -> Async Foreign Scan on public.async_p1 async_pt_1\n> Output: async_pt_1.a, async_pt_1.b, async_pt_1.c\n> Remote SQL: SELECT a, b, c FROM public.base_tbl1 WHERE (((b % \n> 100) = 0)) ORDER BY b ASC NULLS LAST, a ASC NULLS LAST\n> -> Async Foreign Scan on public.async_p2 async_pt_2\n> Output: async_pt_2.a, async_pt_2.b, async_pt_2.c\n> Remote SQL: SELECT a, b, c FROM public.base_tbl2 WHERE (((b % \n> 100) = 0)) ORDER BY b ASC NULLS LAST, a ASC NULLS LAST\n>\n> This can be quite profitable (in our test cases you can gain up to two \n> times better speed with MergeAppend async execution on remote servers).\n>\n> Code for asynchronous execution in Merge Append was mostly borrowed \n> from Append node.\n>\n> What significantly differs - in ExecMergeAppendAsyncGetNext() you must \n> return tuple from the specified slot.\n> Subplan number determines tuple slot where data should be retrieved \n> to. When subplan is ready to provide some data,\n> it's cached in ms_asyncresults. When we get tuple for subplan, \n> specified in ExecMergeAppendAsyncGetNext(),\n> ExecMergeAppendAsyncRequest() returns true and loop in \n> ExecMergeAppendAsyncGetNext() ends. We can fetch data for\n> subplans which either don't have cached result ready or have already \n> returned them to the upper node. This\n> flag is stored in ms_has_asyncresults. As we can get data for some \n> subplan either earlier or after loop in ExecMergeAppendAsyncRequest(),\n> we check this flag twice in this function.\n> Unlike ExecAppendAsyncEventWait(), it seems \n> ExecMergeAppendAsyncEventWait() doesn't need a timeout - as there's no \n> need to get result\n> from synchronous subplan if a tuple form async one was explicitly \n> requested.\n>\n> Also we had to fix postgres_fdw to avoid directly looking at Append \n> fields. Perhaps, accesors to Append fields look strange, but allows\n> to avoid some code duplication. I suppose, duplication could be even \n> less if we reworked async Append implementation, but so far I haven't\n> tried to do this to avoid big diff from master.\n>\n> Also mark_async_capable() believes that path corresponds to plan. 
This \n> can be not true when create_[merge_]append_plan() inserts sort node.\n> In this case mark_async_capable() can treat Sort plan node as some \n> other and crash, so there's a small fix for this.\n\nI think you should add this explanation to the commit message because \nwithout it it's hard to understand the full picture of how your code works.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Sat, 10 Aug 2024 23:24:43 +0300",
"msg_from": "Alena Rybakina <a.rybakina@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous MergeAppend"
},
{
"msg_contents": "Hi.\n\nAlena Rybakina писал(а) 2024-08-10 23:24:\n> Hi! Thank you for your work on this subject! I think this is a very \n> useful optimization)\n> \n> While looking through your code, I noticed some points that I think \n> should be taken into account. Firstly, I noticed only two tests to \n> verify the functionality of this function and I think that this is not \n> enough.\n> Are you thinking about adding some tests with queries involving, for \n> example, join connections with different tables and unusual operators?\n\nI've added some more tests - tests for joins and pruning.\n\n> \n> In addition, I have a question about testing your feature on a \n> benchmark. Are you going to do this?\n> \n\nThe main reason for this work is a dramatic performance degradation when \nAppend plans with async foreign scan nodes are switched to MergeAppend \nplans with synchronous foreign scans.\n\nI've performed some synthetic tests to prove the benefits of async Merge \nAppend. So far tests are performed on one physical host.\n\nFor tests I've deployed 3 PostgreSQL instances on ports 5432-5434.\n\nThe first instance:\ncreate server s2 foreign data wrapper postgres_fdw OPTIONS ( port \n'5433', dbname 'postgres', async_capable 'on');\ncreate server s3 foreign data wrapper postgres_fdw OPTIONS ( port \n'5434', dbname 'postgres', async_capable 'on');\n\ncreate foreign table players_p1 partition of players for values with \n(modulus 4, remainder 0) server s2;\ncreate foreign table players_p2 partition of players for values with \n(modulus 4, remainder 1) server s2;\ncreate foreign table players_p3 partition of players for values with \n(modulus 4, remainder 2) server s3;\ncreate foreign table players_p4 partition of players for values with \n(modulus 4, remainder 3) server s3;\n\ns2 instance:\ncreate table players_p1 (id int, name text, score int);\ncreate table players_p2 (id int, name text, score int);\ncreate index on players_p1(score);\ncreate index on players_p2(score);\n\ns3 instance:\ncreate table players_p3 (id int, name text, score int);\ncreate table players_p4 (id int, name text, score int);\ncreate index on players_p3(score);\ncreate index on players_p4(score);\n\ns1 instance:\ninsert into players select i, 'player_' ||i, random()* 100 from \ngenerate_series(1,100000) i;\n\npgbench script:\n\\set rnd_offset random(0,200)\n\\set rnd_limit random(10,20)\n\nselect * from players order by score desc offset :rnd_offset limit \n:rnd_limit;\n\npgbench was run as:\npgbench -n -f 1.sql postgres -T 100 -c 16 -j 16\n\nCPU idle was about 5-10%.\n\npgbench results:\n\nWithout patch, async_capable on:\n\npgbench (14.13, server 18devel)\ntransaction type: 1.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 16\nduration: 100 s\nnumber of transactions actually processed: 130523\nlatency average = 12.257 ms\ninitial connection time = 29.824 ms\ntps = 1305.363500 (without initial connection time)\n\nWithout patch, async_capable off:\n\npgbench (14.13, server 18devel)\ntransaction type: 1.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 16\nduration: 100 s\nnumber of transactions actually processed: 130075\nlatency average = 12.299 ms\ninitial connection time = 26.931 ms\ntps = 1300.877993 (without initial connection time)\n\nas expected - we see no difference.\n\nPatched, async_capable on:\n\npgbench (14.13, server 18devel)\ntransaction type: 1.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 16\nnumber of 
threads: 16\nduration: 100 s\nnumber of transactions actually processed: 135616\nlatency average = 11.796 ms\ninitial connection time = 28.619 ms\ntps = 1356.341587 (without initial connection time)\n\nPatched, async_capable off:\n\npgbench (14.13, server 18devel)\ntransaction type: 1.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 16\nduration: 100 s\nnumber of transactions actually processed: 131300\nlatency average = 12.185 ms\ninitial connection time = 29.573 ms\ntps = 1313.138405 (without initial connection time)\n\nHere we can see that async MergeAppend behaves a bit better. You can \nargue that benefit is not so big and perhaps is related to some random \nfactors.\nHowever, if we set number of threads to 1, so that CPU has idle cores, \nwe'll see more evident improvements:\n\nPatched, async_capable on:\npgbench (14.13, server 18devel)\ntransaction type: 1.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 100 s\nnumber of transactions actually processed: 20221\nlatency average = 4.945 ms\ninitial connection time = 7.035 ms\ntps = 202.221816 (without initial connection time)\n\n\nPatched, async_capable off\ntransaction type: 1.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 100 s\nnumber of transactions actually processed: 14941\nlatency average = 6.693 ms\ninitial connection time = 7.037 ms\ntps = 149.415688 (without initial connection time)\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Tue, 20 Aug 2024 12:14:44 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous MergeAppend"
}
]
[
{
"msg_contents": "Over on [1], there's a complaint about a query OOMing because the use\nof enable_partitionwise_aggregate caused a plan with 1000 Hash\nAggregate nodes.\n\nThe only mention in the docs is the additional memory requirements and\nCPU for query planning when that GUC is enabled. There's no mention\nthat execution could use work_mem * nparts more memory to be used. I\nthink that's bad and we should fix it.\n\nI've attached my proposal to fix that.\n\nDavid\n\n[1] https://postgr.es/m/3603c380-d094-136e-e333-610914fb3e80%40gmx.net",
"msg_date": "Thu, 18 Jul 2024 10:33:35 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add mention of execution time memory for enable_partitionwise_* GUCs"
},
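A rough illustration of the execution-time concern, with a hypothetical table and plan (not taken from the linked report): when the GUC is on and grouping can be pushed down to the partitions, the plan carries one aggregate node per partition, and each hash-based aggregate may use up to work_mem (times hash_mem_multiplier for hash nodes), so a 1000-partition table can demand on the order of 1000 x work_mem at execution time.

    SET enable_partitionwise_aggregate = on;

    EXPLAIN (COSTS OFF)
    SELECT key, count(*)
    FROM parted_tab          -- hypothetical table partitioned by key into 1000 partitions
    GROUP BY key;

    -- Possible plan shape (abridged):
    --   Append
    --     ->  HashAggregate              -- one per partition
    --           ->  Seq Scan on parted_tab_p0
    --     ->  HashAggregate
    --           ->  Seq Scan on parted_tab_p1
    --     ... and so on for every partition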
{
"msg_contents": "On Thu, Jul 18, 2024 at 4:03 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Over on [1], there's a complaint about a query OOMing because the use\n> of enable_partitionwise_aggregate caused a plan with 1000 Hash\n> Aggregate nodes.\n>\n> The only mention in the docs is the additional memory requirements and\n> CPU for query planning when that GUC is enabled. There's no mention\n> that execution could use work_mem * nparts more memory to be used. I\n> think that's bad and we should fix it.\n>\n> I've attached my proposal to fix that.\n\nIf those GUCs are enabled, the planner consumes large amount of memory\nand also takes longer irrespective of whether partitionwise plan is\nused or not. That's why the default is false. If majority of those\njoins use nested loop memory, or use index scans instead sorting,\nmemory consumption won't be as large. Saying that it \"can\" result in\nlarge increase in execution memory is not accurate. But I agree that\nwe need to mention the effect of work_mem on partitionwise\njoin/aggregation.\n\nI had an offlist email exchange with Dimitrios where I suggested that\nwe should mention this in the work_mem description. I.e. in work_mem\ndescription change \"Note that a complex query might perform several\nsort and hash operations\"\nto \"Note that a complex query or a query using partitionwise\naggregates or joins might perform several sort and hash operations' '.\nAnd in the description of enable_partitionwise_* GUCs mention that\n\"Each of the partitionwise join or aggregation which performs\nsorting/hashing may consume work_mem worth of memory increasing the\ntotal memory consumed during query execution.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 18 Jul 2024 14:53:58 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Thu, 18 Jul 2024 at 21:24, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> If those GUCs are enabled, the planner consumes large amount of memory\n> and also takes longer irrespective of whether partitionwise plan is\n> used or not. That's why the default is false. If majority of those\n> joins use nested loop memory, or use index scans instead sorting,\n> memory consumption won't be as large. Saying that it \"can\" result in\n> large increase in execution memory is not accurate. But I agree that\n> we need to mention the effect of work_mem on partitionwise\n> join/aggregation.\n\nhmm? please tell me what word other than \"can\" best describes\nsomething that is possible to happen but does not always happen under\nall circumstances.\n\nDavid\n\n\n",
"msg_date": "Thu, 18 Jul 2024 22:03:40 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 3:33 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 18 Jul 2024 at 21:24, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > If those GUCs are enabled, the planner consumes large amount of memory\n> > and also takes longer irrespective of whether partitionwise plan is\n> > used or not. That's why the default is false. If majority of those\n> > joins use nested loop memory, or use index scans instead sorting,\n> > memory consumption won't be as large. Saying that it \"can\" result in\n> > large increase in execution memory is not accurate. But I agree that\n> > we need to mention the effect of work_mem on partitionwise\n> > join/aggregation.\n>\n> hmm? please tell me what word other than \"can\" best describes\n> something that is possible to happen but does not always happen under\n> all circumstances.\n\nMay I suggest \"may\"? :) [1], [2], [3].\n\nMy point is, we need to highlight the role of work_mem. So modify both\nthe descriptions.\n\n[1] https://www.thesaurus.com/e/grammar/can-vs-may/\n[2] https://www.britannica.com/dictionary/eb/qa/modal-verbs-may-might-can-could-and-ought\n[3] https://www.merriam-webster.com/grammar/when-to-use-can-and-may\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 18 Jul 2024 15:58:30 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Thu, 18 Jul 2024 at 22:28, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Jul 18, 2024 at 3:33 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > hmm? please tell me what word other than \"can\" best describes\n> > something that is possible to happen but does not always happen under\n> > all circumstances.\n>\n> May I suggest \"may\"? :) [1], [2], [3].\n\nIs this a wind-up?\n\nIf it's not, I disagree that \"may\" is a better choice. The\npossibility example in your first link says \"It may rain tomorrow.\n(possibility)\", but that's something someone would only say if there\nwas some evidence to support that, e.g. ominous clouds on the horizon\nat dusk, or >0% chance of precipitation on the weather forecast.\nNobody is going to say that unless there's some supporting evidence.\nFor the executor using work_mem * nparts, we've no evidence either.\nIt's just a >0% possibility with no supporting evidence.\n\n> My point is, we need to highlight the role of work_mem. So modify both\n> the descriptions.\n\nI considered writing about work_mem, but felt I wanted to keep it as\nbrief as possible and just have some words that might make someone\nthink twice. The details in the work_mem documentation should inform\nthe reader that work_mem is per executor node. It likely wouldn't\nhurt to have more documentation around which executor node types can\nuse a work_mem, which use work_mem * hash_mem_multiplier and which use\nneither. We tend to not write too much about executor nodes in the\ndocuments, so I'm not proposing that for this patch.\n\nDavid\n\n> [1] https://www.thesaurus.com/e/grammar/can-vs-may/\n\n\n",
"msg_date": "Thu, 18 Jul 2024 22:54:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 4:24 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 18 Jul 2024 at 22:28, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Jul 18, 2024 at 3:33 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > hmm? please tell me what word other than \"can\" best describes\n> > > something that is possible to happen but does not always happen under\n> > > all circumstances.\n> >\n> > May I suggest \"may\"? :) [1], [2], [3].\n>\n> Is this a wind-up?\n>\n> If it's not, I disagree that \"may\" is a better choice. The\n> possibility example in your first link says \"It may rain tomorrow.\n> (possibility)\", but that's something someone would only say if there\n> was some evidence to support that, e.g. ominous clouds on the horizon\n> at dusk, or >0% chance of precipitation on the weather forecast.\n> Nobody is going to say that unless there's some supporting evidence.\n> For the executor using work_mem * nparts, we've no evidence either.\n> It's just a >0% possibility with no supporting evidence.\n\nI am not a native English speaker and might have made a mistake when\ninterpreting the definitions. Will leave that aside.\n\n>\n> > My point is, we need to highlight the role of work_mem. So modify both\n> > the descriptions.\n>\n> I considered writing about work_mem, but felt I wanted to keep it as\n> brief as possible and just have some words that might make someone\n> think twice. The details in the work_mem documentation should inform\n> the reader that work_mem is per executor node. It likely wouldn't\n> hurt to have more documentation around which executor node types can\n> use a work_mem, which use work_mem * hash_mem_multiplier and which use\n> neither. We tend to not write too much about executor nodes in the\n> documents, so I'm not proposing that for this patch.\n\nSomething I didn't write in my first reply but wanted to discuss was\nthe intention of adding those GUCs. Sorry for missing it in my first\nemail. According to [1] these GUCs were added because of increased\nmemory consumption during planning and time taken to plan the query.\nThe execution time memory consumption issue was known even back then\nbut the GUC was not set to default because of that. But your patch\nproposes to change the narrative. In the same thread [1], you will\nfind the discussion about turning the default to ON once we have fixed\nplanner's memory and time consumption. We have patches addressing\nthose issues [2] [3]. With your proposed changes we will almost never\nhave a chance to turn those GUCs ON by default. That seems a rather\nsad prospect.\n\nI am fine if we want to mention that the executor may consume a large\namount of memory when these GUCs are turned ON. Users may decide to\nturn those OFF if they can not afford to spend that much memory during\nexecution. But I don't like tying execution time consumption with\ndefault OFF.\n\n[1] https://www.postgresql.org/message-id/CA+TgmoYggDp6k-HXNAgrykZh79w6nv2FevpYR_jeMbrORDaQrA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAExHW5stmOUobE55pMt83r8UxvfCph%2BPvo5dNpdrVCsBgXEzDQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/CAJ2pMkZNCgoUKSE+_5LthD+KbXKvq6h2hQN8Esxpxd+cxmgomg@mail.gmail.com\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 19 Jul 2024 10:54:44 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Fri, 19 Jul 2024 at 17:24, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> According to [1] these GUCs were added because of increased\n> memory consumption during planning and time taken to plan the query.\n> The execution time memory consumption issue was known even back then\n> but the GUC was not set to default because of that. But your patch\n> proposes to change the narrative. In the same thread [1], you will\n> find the discussion about turning the default to ON once we have fixed\n> planner's memory and time consumption. We have patches addressing\n> those issues [2] [3]. With your proposed changes we will almost never\n> have a chance to turn those GUCs ON by default. That seems a rather\n> sad prospect.\n\nSad prospect for who? If the day comes and we're considering enabling\nthese GUCs by default, I think we'll likely also be considering if\nit'll be sad for users who get an increased likelihood of OOM kills\nbecause the chosen plan uses $nparts times more memory to execute than\nthe old plan.\n\nCan you honestly say that you have no concern about increased executor\nmemory consumption if we switched on these GUCs by default? I was\ncertainly concerned about it in [5], so I dropped that patch after\nrealising what could happen.\n\n> I am fine if we want to mention that the executor may consume a large\n> amount of memory when these GUCs are turned ON. Users may decide to\n> turn those OFF if they can not afford to spend that much memory during\n> execution. But I don't like tying execution time consumption with\n> default OFF.\n\nIf you were to fix the planner issues then this text would need to be\nrevisited anyway. However, I seriously question your judgment on\nfixing those alone being enough to allow us to switch these on by\ndefault. It seems unlikely that the planner would use anything near\nany realistic work_mem setting extra memory per partition, but that's\nwhat the executor would do, given enough data per partition.\n\nI'd say any analysis that only found planner memory and time to be the\nonly reason to turn these GUCs off by default was incomplete analysis.\nMaybe it was a case of stopping looking for all the reasons once\nenough had been found to make the choice. If not, then they found the\nmolehill and failed to notice the mountain.\n\nDavid\n\n[4] https://postgr.es/m/3603c380-d094-136e-e333-610914fb3e80%40gmx.net\n[5] https://postgr.es/m/CAApHDvojKdBR3MR59JXmaCYbyHB6Q_5qPRU+dy93En8wm+XiDA@mail.gmail.com\n\n\n",
"msg_date": "Sun, 21 Jul 2024 13:43:35 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "Thank you for the patch improving the docs, I think it's a clear\nimprovement from before.\n\nOn Thu, 18 Jul 2024, David Rowley wrote:\n\n> I considered writing about work_mem, but felt I wanted to keep it as\n> brief as possible and just have some words that might make someone\n> think twice. The details in the work_mem documentation should inform\n> the reader that work_mem is per executor node. It likely wouldn't\n> hurt to have more documentation around which executor node types can\n> use a work_mem, which use work_mem * hash_mem_multiplier and which use\n> neither. We tend to not write too much about executor nodes in the\n> documents, so I'm not proposing that for this patch.\n\nThis is the only part I think is missing, since we now know (measurements\nin [1], reproducible scenario in [2]) that the number of partitions plays\nan important role in sizing the RAM of the server. It's just too big to\nnot mention that worst case will be n_partitions * work_mem.\n\n\n[1] https://www.postgresql.org/message-id/d26e67d3-74bc-60aa-bf24-2a8fb83efe9c%40gmx.net\n\n[2] https://www.postgresql.org/message-id/af6ed790-a5fe-19aa-1141-927595604c01%40gmx.net\n\n\nI would also like to add an entry about this issue with links to the above\npages, to the TODO page at [3], as this is the only bugtracker I'm aware\nof. Am I doing it right bringing it up for approval on this list?\n\n[3] https://wiki.postgresql.org/wiki/Todo\n\n\n\nThanks,\nDimitris\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 19:02:02 +0200 (CEST)",
"msg_from": "Dimitrios Apostolou <jimis@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
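As a back-of-the-envelope version of that worst case, with made-up numbers rather than anything measured:

    SHOW work_mem;   -- suppose this reports 4MB
    -- 1000 partitions, one hash aggregate per partition:
    --   1000 nodes x 4MB = roughly 4GB of executor memory in the worst case,
    -- and hash-based nodes may additionally be allowed up to
    -- work_mem * hash_mem_multiplier each, so the ceiling can be even higher.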
{
"msg_contents": "On Fri, 19 Jul 2024 at 17:24, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> I am fine if we want to mention that the executor may consume a large\n> amount of memory when these GUCs are turned ON. Users may decide to\n> turn those OFF if they can not afford to spend that much memory during\n> execution. But I don't like tying execution time consumption with\n> default OFF.\n\nWould the attached address your concern about the reasons for defaulting to off?\n\nDavid",
"msg_date": "Mon, 29 Jul 2024 17:07:06 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Mon, Jul 29, 2024 at 10:37 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 19 Jul 2024 at 17:24, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > I am fine if we want to mention that the executor may consume a large\n> > amount of memory when these GUCs are turned ON. Users may decide to\n> > turn those OFF if they can not afford to spend that much memory during\n> > execution. But I don't like tying execution time consumption with\n> > default OFF.\n>\n> Would the attached address your concern about the reasons for defaulting to off?\n\nThanks. This looks better. Nitpick\n\n+ child partitions. With this setting enabled, the number of executor\n+ nodes whose memory usage is restricted by <varname>work_mem</varname>\n\nThis sentence appears to say that the memory usage of \"all\" nodes is\nrestricted by work_mem. I think what you want to convey is - nodes,\nwhose memory usage is subjected to <varname>work_mem</varname>\nsetting, ....\n\nOr break it into two sentences\n\nWith this setting enabled, the number of executor nodes appearing in\nthe final plan can increase linearly proportional to the number of\npartitions being scanned. Each of those nodes may use upto\n<varname>work_mem</varname> memory. This can ...\n\nI note that the work_mem documentation does not talk about executor\nnodes, instead it uses the term \"query operations\".\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 30 Jul 2024 14:42:11 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 21:12, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Thanks. This looks better. Nitpick\n>\n> + child partitions. With this setting enabled, the number of executor\n> + nodes whose memory usage is restricted by <varname>work_mem</varname>\n>\n> This sentence appears to say that the memory usage of \"all\" nodes is\n> restricted by work_mem. I think what you want to convey is - nodes,\n> whose memory usage is subjected to <varname>work_mem</varname>\n> setting, ....\n\nI'm open to improving the sentence but I don't quite follow why\n\"subjected to\" is better than \"restricted by\". It seems to remove\nmeaning without saving any words. With \"restricted by\" we can deduce\nthat the memory cannot go over work_mem, whereas \"subjected to\" only\ngives us some indication that the memory usage somehow relates to the\nwork_mem setting and doesn't not mean that the memory used could be >=\nwork_mem.\n\n> Or break it into two sentences\n>\n> With this setting enabled, the number of executor nodes appearing in\n> the final plan can increase linearly proportional to the number of\n> partitions being scanned. Each of those nodes may use upto\n> <varname>work_mem</varname> memory. This can ...\n\n... but that's incorrect. This means that all additional nodes that\nappear in the plan as a result of enabling the setting can use up to\nwork_mem. That's not the case as some of the new nodes might be\nNested Loops or Merge Joins and they're not going to use up to\nwork_mem.\n\nAs a compromise, I've dropped the word \"executor\" and it now just says\n\"the number of nodes whose memory usage is restricted by work_mem\nappearing in the final plan\". I'd have used \"plan nodes\" but it seems\nstrange to use \"plan nodes\" then later \"in the final plan\".\n\n> I note that the work_mem documentation does not talk about executor\n> nodes, instead it uses the term \"query operations\".\n\nI much prefer \"nodes\" to \"operations\". If someone started asking me\nabout \"query operations\", I'd have to confirm what they meant. An I/O\nis an operation that can occur during the execution of a query. Is\nthat a \"query operation\"? IMO, the wording you're proposing is less\nconcise. I'm not a fan of the terminology when talking about plan\nnodes. I'd rather see the work_mem docs modified than copy what it\nsays.\n\nPlease have a look at: git grep -B 1 -i -F \"plan node\" -- doc/*\n\nthat seems like a popular term.\n\nDavid",
"msg_date": "Wed, 31 Jul 2024 00:08:40 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 5:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 30 Jul 2024 at 21:12, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Thanks. This looks better. Nitpick\n> >\n> > + child partitions. With this setting enabled, the number of executor\n> > + nodes whose memory usage is restricted by <varname>work_mem</varname>\n> >\n> > This sentence appears to say that the memory usage of \"all\" nodes is\n> > restricted by work_mem. I think what you want to convey is - nodes,\n> > whose memory usage is subjected to <varname>work_mem</varname>\n> > setting, ....\n>\n> I'm open to improving the sentence but I don't quite follow why\n> \"subjected to\" is better than \"restricted by\". It seems to remove\n> meaning without saving any words. With \"restricted by\" we can deduce\n> that the memory cannot go over work_mem, whereas \"subjected to\" only\n> gives us some indication that the memory usage somehow relates to the\n> work_mem setting and doesn't not mean that the memory used could be >=\n> work_mem.\n>\n> > Or break it into two sentences\n> >\n> > With this setting enabled, the number of executor nodes appearing in\n> > the final plan can increase linearly proportional to the number of\n> > partitions being scanned. Each of those nodes may use upto\n> > <varname>work_mem</varname> memory. This can ...\n>\n> ... but that's incorrect. This means that all additional nodes that\n> appear in the plan as a result of enabling the setting can use up to\n> work_mem. That's not the case as some of the new nodes might be\n> Nested Loops or Merge Joins and they're not going to use up to\n> work_mem.\n\nAny wording, which indicates that \"some\" of those nodes \"may\" use upto\n\"work_mem\" memory \"each\", is fine with me. If you think that your\ncurrent wording conveys that meaning, I am ok.\n\n>\n> I much prefer \"nodes\" to \"operations\". If someone started asking me\n> about \"query operations\", I'd have to confirm what they meant. An I/O\n> is an operation that can occur during the execution of a query. Is\n> that a \"query operation\"? IMO, the wording you're proposing is less\n> concise. I'm not a fan of the terminology when talking about plan\n> nodes. I'd rather see the work_mem docs modified than copy what it\n> says.\n\nWe need to use consistent terms at both places. Somebody reading the\nnew text and then referring to work_mem description will wonder\nwhether \"query operations\" are the same thing as \"executor nodes\" or\nnot. But I get your point that query operation doesn't necessarily\nmean operations performed by each executor node, especially when those\noperations are not specified directly in the query. We can commit your\nversion and see if users find it confusing. May be they already know\nthat operations and nodes are interchangeable in this context.\n\nI noticed a previous discussion about work_mem documentation [1]. But\nthat didn't change the first sentence in the description.\n\n[1] https://www.postgresql.org/message-id/66590882-F48C-4A25-83E3-73792CF8C51F%40amazon.com\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 31 Jul 2024 09:45:41 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 16:15, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> We can commit your\n> version and see if users find it confusing.\n\nOk. I've now pushed the patch. Thanks for reviewing it.\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Aug 2024 09:58:44 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add mention of execution time memory for enable_partitionwise_*\n GUCs"
}
]
[
{
"msg_contents": "In create_gather_merge_path, we should always guarantee that the\nsubpath is adequately ordered, and we do not add a Sort node in\ncreateplan.c for a Gather Merge node. Therefore, the 'else' branch in\nthe snippet from create_gather_merge_path is redundant.\n\n if (pathkeys_contained_in(pathkeys, subpath->pathkeys))\n {\n /* Subpath is adequately ordered, we won't need to sort it */\n input_startup_cost += subpath->startup_cost;\n input_total_cost += subpath->total_cost;\n }\n else\n {\n /* We'll need to insert a Sort node, so include cost for that */\n Path sort_path; /* dummy for result of cost_sort */\n\n cost_sort(&sort_path,\n root,\n pathkeys,\n subpath->total_cost,\n subpath->rows,\n subpath->pathtarget->width,\n 0.0,\n work_mem,\n -1);\n input_startup_cost += sort_path.startup_cost;\n input_total_cost += sort_path.total_cost;\n }\n\nWe should be able to assert that pathkeys_contained_in(pathkeys,\nsubpath->pathkeys) is always true, otherwise we'll be in trouble.\n\nI noticed this while reviewing patch [1], thinking that it might be\nworth fixing. Any thoughts?\n\n[1] https://postgr.es/m/CAO6_Xqr9+51NxgO=XospEkUeAg-p=EjAWmtpdcZwjRgGKJ53iA@mail.gmail.com\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 18 Jul 2024 10:02:50 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Redundant code in create_gather_merge_path"
},
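A sketch of where this simplification seems to be heading (the actual patch is the one attached downthread; this is only an illustration, not necessarily the committed change): with the dead 'else' branch removed, create_gather_merge_path() can simply assert the ordering invariant and charge the subpath's own costs, along the lines of:

```c
/* Illustrative sketch only. */
Assert(pathkeys_contained_in(pathkeys, subpath->pathkeys));

/* Subpath is adequately ordered, we won't need to sort it */
input_startup_cost += subpath->startup_cost;
input_total_cost += subpath->total_cost;
```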
{
"msg_contents": "On Thu, Jul 18, 2024 at 10:02 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> I noticed this while reviewing patch [1], thinking that it might be\n> worth fixing. Any thoughts?\n\nHere is the patch.\n\nThanks\nRichard",
"msg_date": "Thu, 18 Jul 2024 11:08:48 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Redundant code in create_gather_merge_path"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 11:08 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Thu, Jul 18, 2024 at 10:02 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> > I noticed this while reviewing patch [1], thinking that it might be\n> > worth fixing. Any thoughts?\n>\n> Here is the patch.\n\nThis patch is quite straightforward to remove the redundant code. So\nI've gone ahead and pushed it.\n\nThanks\nRichard\n\n\n",
"msg_date": "Tue, 23 Jul 2024 10:45:41 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Redundant code in create_gather_merge_path"
}
] |
[
{
"msg_contents": "*The Current Situation*\nAs a long-time PostgreSQL user, I've increasingly run into issues with the\n63-byte limit for identifiers, particularly table names. This limit, while\nhistorically sufficient, is becoming a significant pain point in modern\ndatabase design and usage.\n\n*Real-World Examples*\nThe problem is particularly evident in English table names, which make up a\nlarge portion of PostgreSQL's user base:\n\n - \"Gross Domestic Product, Billions of Dollars, Not Seasonally Adjusted\n (GDPA)~Percent Change from Year Ago\"\n - \"Inflation, consumer prices for the United States, Percent, Not\n Seasonally Adjusted (FPCPITOTLZGUSA)\"\n - \"Annual Average Consumer Price Index for All Urban Consumers (CPI-U):\n U.S. City Average, All Items\"\n\nThese names, while descriptive and useful, exceed our current limit. The\nissue extends to multi-byte character sets as well, such as this Chinese\ntable name:\n\n - \"能源消耗统计_全球主要国家石油与天然气使用量_年度碳排放与可再生能源比例表\" (Energy consumption statistics\n table)\n\n*Why This Matters*\nIn my experience, the complexity of data and the need for self-documenting\nschemas have grown significantly. We're dealing with:\n\n - Increasingly descriptive table names for clarity and self-documentation\n - Automated systems generating tables with detailed, often lengthy names\n - A growing international user base requiring support for multi-byte\n characters\n\nWhile workarounds exist, such as using abbreviations or moving descriptions\nto comments, these solutions often lead to less intuitive database designs\nand reduced readability.\n\n*Potential Solutions*\nBased on community discussions, I believe we should consider:\n\n 1. Increasing NAMEDATALEN to 256 bytes or more\n 2. Making NAMEDATALEN configurable at compile-time\n 3. Changing NAMEDATALEN to represent character count instead of byte\n count\n 4. Implementing a variable-length \"name\" data type\n\n*Challenges and Considerations*\nI'm aware that this change presents significant technical challenges:\n\n 1. Maintaining backward compatibility\n 2. Potential impacts on storage and performance\n 3. Complexities in implementation, especially if moving to a\n variable-length system\n\nDespite these challenges, I believe addressing this limitation is crucial\nfor maintaining PostgreSQL's position as a versatile, user-friendly\ndatabase system.\n\n*Call to Action*\nI respectfully request the PostgreSQL development team to consider this\nfeature request. While I understand the technical complexities involved, I\nbelieve the benefits to the user experience would be substantial.\nI'm eager to contribute to discussions and potentially assist in testing\nany proposed solutions. Let's work together to enhance PostgreSQL's\ncapability to handle the evolving needs of modern database users.\nThank you for your consideration and your ongoing efforts in improving\nPostgreSQL.\n\nThe Current SituationAs a long-time PostgreSQL user, I've increasingly run into issues with the 63-byte limit for identifiers, particularly table names. 
This limit, while historically sufficient, is becoming a significant pain point in modern database design and usage.Real-World ExamplesThe problem is particularly evident in English table names, which make up a large portion of PostgreSQL's user base:\"Gross Domestic Product, Billions of Dollars, Not Seasonally Adjusted (GDPA)~Percent Change from Year Ago\"\"Inflation, consumer prices for the United States, Percent, Not Seasonally Adjusted (FPCPITOTLZGUSA)\"\"Annual Average Consumer Price Index for All Urban Consumers (CPI-U): U.S. City Average, All Items\"These names, while descriptive and useful, exceed our current limit. The issue extends to multi-byte character sets as well, such as this Chinese table name:\"能源消耗统计_全球主要国家石油与天然气使用量_年度碳排放与可再生能源比例表\" (Energy consumption statistics table)Why This MattersIn my experience, the complexity of data and the need for self-documenting schemas have grown significantly. We're dealing with:Increasingly descriptive table names for clarity and self-documentationAutomated systems generating tables with detailed, often lengthy namesA growing international user base requiring support for multi-byte charactersWhile workarounds exist, such as using abbreviations or moving descriptions to comments, these solutions often lead to less intuitive database designs and reduced readability.Potential SolutionsBased on community discussions, I believe we should consider:Increasing NAMEDATALEN to 256 bytes or moreMaking NAMEDATALEN configurable at compile-timeChanging NAMEDATALEN to represent character count instead of byte countImplementing a variable-length \"name\" data typeChallenges and ConsiderationsI'm aware that this change presents significant technical challenges:Maintaining backward compatibilityPotential impacts on storage and performanceComplexities in implementation, especially if moving to a variable-length systemDespite these challenges, I believe addressing this limitation is crucial for maintaining PostgreSQL's position as a versatile, user-friendly database system.Call to ActionI respectfully request the PostgreSQL development team to consider this feature request. While I understand the technical complexities involved, I believe the benefits to the user experience would be substantial.I'm eager to contribute to discussions and potentially assist in testing any proposed solutions. Let's work together to enhance PostgreSQL's capability to handle the evolving needs of modern database users.Thank you for your consideration and your ongoing efforts in improving PostgreSQL.",
"msg_date": "Thu, 18 Jul 2024 16:20:01 +0800",
"msg_from": "David HJ <chuxiongzhong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature Request: Extending PostgreSQL's Identifier Length Limit"
},
{
"msg_contents": "Hi David,\n\n> As a long-time PostgreSQL user, I've increasingly run into issues with the 63-byte limit for identifiers, particularly table names. This limit, while historically sufficient, is becoming a significant pain point in modern database design and usage.\n\nI can understand your pain. Unfortunately there are a number of\ncomplications involved.\n\nTake pg_class catalog table [1] as an example and its column `relname`\nof type `name` [2]. On disk it is stored as an fixed-sized array of\nchars:\n\n```\ntypedef struct nameData\n{\n char data[NAMEDATALEN];\n} NameData;\ntypedef NameData *Name;\n```\n\nWhy not use TEXT? Mostly for performance reasons. In general case TEXT\ndata can be TOASTed. When using TEXT one should do an additional call\nof heap_deform_tuple().\n\nUsing NAME allows the code to interpret tuple data as is, e.g.:\n```\ntypedef struct FormData_phonebook\n{\n int32 id;\n NameData name;\n int32 phone;\n} FormData_phonebook;\n\ntypedef FormData_phonebook* Form_phonebook;\n\n/* ... */\n\n while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)\n {\n Form_phonebook record = (Form_phonebook) GETSTRUCT(tup);\n\n if(strcmp(record->name.data, name->data) == 0)\n {\n found_phone = record->phone;\n break;\n }\n }\n```\n\nSo if you change NAME definition several things will happen:\n\n1. You have to rebuild catalog tables. This can't be done in-place\nbecause larger tuples may not fit into pages. Note that page size may\nalso vary depending on how PostgreSQL was compiled. Note that the\nindexes will also be affected.\n\n2. You will break all the extensions that use NAME and the\ncorresponding heap_* and index_* APIs.\n\n3. The performance will generally decrease - many existing\napplications will just waste memory or do unnecessary work due to\nextra calls to heap_deform_tuple().\n\nIf (1) is doable in theory, I don't think (2) and (3) are something we\ndo in this project.\n\nOn top of that there is are relatively simple workarounds for the situation:\n\n1. An application may have a function like shorten_name(x) = substr(x,\n1, 50) || '_' || substr(md5(x), 1, 8). So instead of `SELECT * FROM x`\nyou just do `SELECT * FROM shorten_name(x)`.\n\n2. You may fork the code and enlarge NAMEDATALEN. This is not\nrecommended and not guaranteed to work but worth a try.\n\nThis makes me think that solving the named limitation isn't worth the effort.\n\nPersonally I'm not opposed to the idea in general but you will need to\ncome up with a specific RFC that explains how exactly you propose to\nsolve named problems.\n\n[1]: https://www.postgresql.org/docs/current/catalog-pg-class.html\n[2]: https://www.postgresql.org/docs/current/datatype-character.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 18 Jul 2024 12:25:01 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request: Extending PostgreSQL's Identifier Length Limit"
},
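For readers less familiar with the heap APIs mentioned above, the extra work implied by a variable-length column looks roughly like the sketch below. The Natts_phonebook and Anum_phonebook_name constants are invented for this illustration and are not part of PostgreSQL:

```c
/*
 * Illustrative sketch: reading a hypothetical variable-length "name"
 * column requires deforming the tuple instead of casting its data to a
 * fixed-layout struct as in the NameData example above.
 */
Datum		values[Natts_phonebook];
bool		nulls[Natts_phonebook];

heap_deform_tuple(tup, tupdesc, values, nulls);

if (!nulls[Anum_phonebook_name - 1])
{
	char	   *namestr = TextDatumGetCString(values[Anum_phonebook_name - 1]);

	/* ... compare namestr here, then pfree(namestr) ... */
}
```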
{
"msg_contents": "On 2024-Jul-18, David HJ wrote:\n\n> As a long-time PostgreSQL user, I've increasingly run into issues with the\n> 63-byte limit for identifiers, particularly table names. This limit, while\n> historically sufficient, is becoming a significant pain point in modern\n> database design and usage.\n\nThis has been discussed before. I think the latest discussion, and some\npreliminary proof-of-concept patches, were around here:\n\nhttps://postgr.es/m/CAFBsxsF2V8n9w0SGK56bre3Mk9fzZS=9aaA8Gfs_n+woa3Dr-Q@mail.gmail.com\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Thu, 18 Jul 2024 11:45:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request: Extending PostgreSQL's Identifier Length Limit"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 11:45:45AM +0200, Álvaro Herrera wrote:\n> On 2024-Jul-18, David HJ wrote:\n> \n> > As a long-time PostgreSQL user, I've increasingly run into issues with the\n> > 63-byte limit for identifiers, particularly table names. This limit, while\n> > historically sufficient, is becoming a significant pain point in modern\n> > database design and usage.\n> \n> This has been discussed before. I think the latest discussion, and some\n> preliminary proof-of-concept patches, were around here:\n> \n> https://postgr.es/m/CAFBsxsF2V8n9w0SGK56bre3Mk9fzZS=9aaA8Gfs_n+woa3Dr-Q@mail.gmail.com\n\nFYI, the COMMENT ON command can help to document identifiers.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sun, 18 Aug 2024 21:17:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request: Extending PostgreSQL's Identifier Length Limit"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile answering one of the recent questions [1] I wanted to use\ncrc32(text) and discovered that it's missing out-of-the box. Of\ncourse, one can use `substr(md5(x), 1, 8)` with almost the same effect\nbut it's less convenient and could be slower (I didn't do actual\nbenchmarks though). Also it's incompatible with third-party software\nthat may calculate crc32's and store the results in PostgreSQL.\n\nI vaguely recall that I faced this problem before. Supporting crc32\nwas requested on the mailing list [2] and a number of workarounds\nexist in PL/pgSQL [3][4]. Since there seems to be a demand and it\ncosts us nothing to maintain crc32() I suggest adding it.\n\nThe proposed patch exposes our internal crc32 implementation to the\nuser. I chose to return a hex string similarly to md5(). In my humble\nexperience this is most convenient in practical use. However if the\nmajority believes that the function should return a bigint (in order\nto fit an unsigned int32) or a bytea (as SHA* functions do), I'm fine\nwith whatever consensus the community reaches.\n\n[1]: https://www.postgresql.org/message-id/CAJ7c6TOurV4uA5Yz%3DaJ-ae4czL_zdFNqxbu47eyVrYFefrWoog%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/flat/auto-000557707157%40umail.ru\n[3]: https://stackoverflow.com/questions/28179335/crc32-function-with-pl-pgsql\n[4]: https://gist.github.com/cuber/bcf0a3a96fc9a790d96d\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 18 Jul 2024 14:24:23 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 02:24:23PM +0300, Aleksander Alekseev wrote:\n> I vaguely recall that I faced this problem before. Supporting crc32\n> was requested on the mailing list [2] and a number of workarounds\n> exist in PL/pgSQL [3][4]. Since there seems to be a demand and it\n> costs us nothing to maintain crc32() I suggest adding it.\n\nThis sounds generally reasonable to me, especially given the apparent\ndemand. Should we also introduce crc32c() while we're at it?\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 25 Jul 2024 10:16:09 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "Hi,\n\n> This sounds generally reasonable to me, especially given the apparent\n> demand. Should we also introduce crc32c() while we're at it?\n\nMight be a good idea. However I didn't see a demand for crc32c() SQL\nfunction yet. Also I'm not sure whether the best interface for it\nwould be crc32c() or crc32(x, version='c') or perhaps crc32(x,\npolinomial=...). I propose keeping the scope small this time.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 26 Jul 2024 12:01:40 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 12:01:40PM +0300, Aleksander Alekseev wrote:\n>> This sounds generally reasonable to me, especially given the apparent\n>> demand. Should we also introduce crc32c() while we're at it?\n> \n> Might be a good idea. However I didn't see a demand for crc32c() SQL\n> function yet. Also I'm not sure whether the best interface for it\n> would be crc32c() or crc32(x, version='c') or perhaps crc32(x,\n> polinomial=...). I propose keeping the scope small this time.\n\nI don't think adding crc32c() would sufficiently increase the scope. We'd\nuse the existing implementations for both crc32() and crc32c(). And\nbesides, this could be useful for adding tests for that code.\n\n+ <function>crc32</function> ( <type>text</type> )\n\nDo we need a version of the function that takes a text input? It's easy\nenough to cast to a bytea.\n\n+ <returnvalue>text</returnvalue>\n\nMy first reaction is that we should just have this return bytea like the\nSHA ones do, if for no other reason than commit 10cfce3 seems intended to\nmove us away from returning text for these kinds of functions. Upthread,\nyou mentioned the possibility of returning a bigint, too. I think I'd\nstill prefer bytea in case we want to add, say, crc64() or crc16() in the\nfuture. That would allow us to keep all of these functions consistent\ninstead of returning different types for each. However, I understand that\nreturning the numeric types might be more convenient. I'm curious what\nothers think about this.\n\n+ Computes the CRC32 <link linkend=\"functions-hash-note\">hash</link> of\n+ the binary string, with the result written in hexadecimal.\n\nI'm not sure we should call the check values \"hashes.\" Wikipedia does\ninclude them in the \"List of hash functions\" page [0], but it seems to\ndeliberately avoid calling them hashes in the CRC page [1]. I'd suggest\ncalling them \"CRC32 values\" instead.\n\n[0] https://en.wikipedia.org/wiki/List_of_hash_functions\n[1] https://en.wikipedia.org/wiki/Cyclic_redundancy_check\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 26 Jul 2024 10:42:32 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "Hi Nathan,\n\n> I don't think adding crc32c() would sufficiently increase the scope. We'd\n> use the existing implementations for both crc32() and crc32c(). And\n> besides, this could be useful for adding tests for that code.\n>\n> + <function>crc32</function> ( <type>text</type> )\n>\n> Do we need a version of the function that takes a text input? It's easy\n> enough to cast to a bytea.\n>\n> + <returnvalue>text</returnvalue>\n>\n> My first reaction is that we should just have this return bytea like the\n> SHA ones do, if for no other reason than commit 10cfce3 seems intended to\n> move us away from returning text for these kinds of functions. Upthread,\n> you mentioned the possibility of returning a bigint, too. I think I'd\n> still prefer bytea in case we want to add, say, crc64() or crc16() in the\n> future. That would allow us to keep all of these functions consistent\n> instead of returning different types for each. However, I understand that\n> returning the numeric types might be more convenient. I'm curious what\n> others think about this.\n>\n> + Computes the CRC32 <link linkend=\"functions-hash-note\">hash</link> of\n> + the binary string, with the result written in hexadecimal.\n>\n> I'm not sure we should call the check values \"hashes.\" Wikipedia does\n> include them in the \"List of hash functions\" page [0], but it seems to\n> deliberately avoid calling them hashes in the CRC page [1]. I'd suggest\n> calling them \"CRC32 values\" instead.\n\nThanks for the code review. Here is the updated patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 29 Jul 2024 13:55:37 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "+/*\n+ * Calculate CRC32 of the given data.\n+ */\n+static inline pg_crc32\n+crc32_sz(const char *buf, int size)\n+{\n+\tpg_crc32\tcrc;\n+\tconst char *p = buf;\n+\n+\tINIT_TRADITIONAL_CRC32(crc);\n+\twhile (size > 0)\n+\t{\n+\t\tchar\t\tc = (char) (*p);\n+\n+\t\tCOMP_TRADITIONAL_CRC32(crc, &c, 1);\n+\t\tsize--;\n+\t\tp++;\n+\t}\n+\tFIN_TRADITIONAL_CRC32(crc);\n+\treturn crc;\n+}\n\nI'm curious why we need to do this instead of only using the macros:\n\n INIT_TRADITIONAL_CRC32(crc);\n COMP_TRADITIONAL_CRC32(crc, VARDATA_ANY(in), len);\n FIN_TRADITIONAL_CRC32(crc);\n\n+ * IDENTIFICATION\n+ * src/backend/utils/adt/hashfuncs.c\n\nPerhaps these would fit into src/backend/utils/hash/pg_crc.c?\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 1 Aug 2024 11:21:56 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "Hi,\n\n> I'm curious why we need to do this instead of only using the macros:\n>\n> INIT_TRADITIONAL_CRC32(crc);\n> COMP_TRADITIONAL_CRC32(crc, VARDATA_ANY(in), len);\n> FIN_TRADITIONAL_CRC32(crc);\n>\n> + * IDENTIFICATION\n> + * src/backend/utils/adt/hashfuncs.c\n>\n> Perhaps these would fit into src/backend/utils/hash/pg_crc.c?\n\nThanks, PFA patch v3.\n\n--\nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 5 Aug 2024 16:19:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Mon, Aug 05, 2024 at 04:19:45PM +0300, Aleksander Alekseev wrote:\n> Thanks, PFA patch v3.\n\nThis looks pretty good to me. The only point that I think deserves more\ndiscussion is the return type. Does bytea make the most sense here? Or\nshould we consider int/bigint?\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 5 Aug 2024 10:28:48 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "Hi,\n\n> This looks pretty good to me. The only point that I think deserves more\n> discussion is the return type. Does bytea make the most sense here? Or\n> should we consider int/bigint?\n\nPersonally I would choose BYTEA in order to be consistent with sha*() functions.\n\nIt can be casted to TEXT if user wants a result similar to the one\nmd5() returns:\n\n```\nSELECT encode(crc32('PostgreSQL'), 'hex');\n```\n\n... and somewhat less convenient to BIGINT:\n\n```\nSELECT ((get_byte(crc, 0) :: bigint << 24) | (get_byte(crc, 1) << 16)\n| (get_byte(crc, 2) << 8) | get_byte(crc, 3))\nFROM (SELECT crc32('PostgreSQL') AS crc);\n```\n\nI don't like the `integer` option because crc32 value is typically\nconsidered as an unsigned one and `integer` is not large enough to\nrepresent uint32.\n\nPerhaps we need get_int4() / get_int8() / get_numeric() as there seems\nto be a demand [1][2] and it will allow us to easily cast a `bytea`\nvalue to `integer` or `bigint`. This is probably another topic though.\n\n[1]: https://stackoverflow.com/questions/32944267/postgresql-converting-bytea-to-bigint\n[2]: https://postgr.es/m/AANLkTikip9xs8iXc8e%2BMgz1T1701i8Xk6QtbVB3KJQzX%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 6 Aug 2024 11:04:41 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Tue, Aug 06, 2024 at 11:04:41AM +0300, Aleksander Alekseev wrote:\n> Perhaps we need get_int4() / get_int8() / get_numeric() as there seems\n> to be a demand [1][2] and it will allow us to easily cast a `bytea`\n> value to `integer` or `bigint`. This is probably another topic though.\n\nYeah, I was surprised to learn there wasn't yet an easy way to do this.\nI'm not sure how much of a factor this should play in deciding the return\nvalue for the CRC functions, but IMHO it's a reason to reconsider returning\ntext as you originally proposed.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 6 Aug 2024 12:01:51 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "Hi,\n\n> Yeah, I was surprised to learn there wasn't yet an easy way to do this.\n> I'm not sure how much of a factor this should play in deciding the return\n> value for the CRC functions, but IMHO it's a reason to reconsider returning\n> text as you originally proposed.\n\nOK, here is the corrected patch v4.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 7 Aug 2024 13:19:19 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On 05.08.24 17:28, Nathan Bossart wrote:\n> This looks pretty good to me. The only point that I think deserves more\n> discussion is the return type. Does bytea make the most sense here? Or\n> should we consider int/bigint?\n\nThe correct return type of a CRC operation in general is some kind of \nexact numerical type. Just pick the best one that fits the result. I \ndon't think bytea is appropriate.\n\n\n\n",
"msg_date": "Thu, 8 Aug 2024 16:27:20 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Thu, Aug 08, 2024 at 04:27:20PM +0200, Peter Eisentraut wrote:\n> On 05.08.24 17:28, Nathan Bossart wrote:\n>> This looks pretty good to me. The only point that I think deserves more\n>> discussion is the return type. Does bytea make the most sense here? Or\n>> should we consider int/bigint?\n> \n> The correct return type of a CRC operation in general is some kind of exact\n> numerical type. Just pick the best one that fits the result. I don't think\n> bytea is appropriate.\n\nThat would leave us either \"integer\" or \"bigint\". \"integer\" is more\ncorrect from a size perspective, but will result in negative values because\nit is signed. \"bigint\" uses twice as many bytes but won't display any CRC\nvalues as negative.\n\nI guess we could also choose \"numeric\", which would set a more sustainable\nprecedent if we added functions for CRC-64...\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 8 Aug 2024 09:35:33 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Aug 08, 2024 at 04:27:20PM +0200, Peter Eisentraut wrote:\n>> The correct return type of a CRC operation in general is some kind of exact\n>> numerical type. Just pick the best one that fits the result. I don't think\n>> bytea is appropriate.\n\n> That would leave us either \"integer\" or \"bigint\". \"integer\" is more\n> correct from a size perspective, but will result in negative values because\n> it is signed. \"bigint\" uses twice as many bytes but won't display any CRC\n> values as negative.\n\nbigint seems fine to me; we have used that in other places as a\nsubstitute for uint32, eg block numbers in contrib/pageinspect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Aug 2024 10:49:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Thu, Aug 08, 2024 at 10:49:42AM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Thu, Aug 08, 2024 at 04:27:20PM +0200, Peter Eisentraut wrote:\n>>> The correct return type of a CRC operation in general is some kind of exact\n>>> numerical type. Just pick the best one that fits the result. I don't think\n>>> bytea is appropriate.\n> \n>> That would leave us either \"integer\" or \"bigint\". \"integer\" is more\n>> correct from a size perspective, but will result in negative values because\n>> it is signed. \"bigint\" uses twice as many bytes but won't display any CRC\n>> values as negative.\n> \n> bigint seems fine to me; we have used that in other places as a\n> substitute for uint32, eg block numbers in contrib/pageinspect.\n\nWFM. Here is what I have staged for commit.\n\n-- \nnathan",
"msg_date": "Thu, 8 Aug 2024 10:59:52 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
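For readers following along, the SQL-callable wrapper being discussed presumably boils down to something like the sketch below. The function name and error handling are illustrative only; the TRADITIONAL_CRC32 macros and the bigint return type follow the discussion above:

```c
#include "postgres.h"
#include "fmgr.h"
#include "varatt.h"
#include "utils/pg_crc.h"

/* Sketch of a SQL-callable crc32(bytea) returning bigint. */
Datum
crc32_bytea(PG_FUNCTION_ARGS)
{
	bytea	   *in = PG_GETARG_BYTEA_PP(0);
	size_t		len = VARSIZE_ANY_EXHDR(in);
	pg_crc32	crc;

	INIT_TRADITIONAL_CRC32(crc);
	COMP_TRADITIONAL_CRC32(crc, VARDATA_ANY(in), len);
	FIN_TRADITIONAL_CRC32(crc);

	/* zero-extend the unsigned 32-bit value into a non-negative bigint */
	PG_RETURN_INT64((int64) crc);
}
```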
{
"msg_contents": "Hi,\n\n> WFM. Here is what I have staged for commit.\n\nPatch v5 LGTM.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 12 Aug 2024 16:13:02 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
},
{
"msg_contents": "On Mon, Aug 12, 2024 at 04:13:02PM +0300, Aleksander Alekseev wrote:\n> Patch v5 LGTM.\n\nCommitted.\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 12 Aug 2024 10:36:33 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add crc32(text) & crc32(bytea)"
}
] |
[
{
"msg_contents": "Hi,\n\nFor utility statements defined within a function, the queryTree is\ncopied to a plannedStmt as utility commands don't require planning.\nHowever, the queryId is not propagated to the plannedStmt. This leads\nto plugins relying on queryId like pg_stat_statements to not be able\nto track utility statements within function calls.\n\nThis patch fixes the issue by correctly propagating queryId from the\ncached queryTree to the plannedStmt.\n\nRegards,\nAnthonin",
"msg_date": "Thu, 18 Jul 2024 13:37:40 +0200",
"msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>",
"msg_from_op": true,
"msg_subject": "Correctly propagate queryId for utility stmt in function"
},
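For context, the gist of such a fix is small: when the cached utility Query is wrapped into a PlannedStmt (planning is skipped for utility commands), the queryId simply has to be carried across. A hypothetical sketch, with the surrounding code elided and not necessarily matching the attached patch:

```c
/*
 * Sketch only: wrap a cached utility Query in a PlannedStmt and keep
 * its queryId so that hooks such as pg_stat_statements can see it.
 */
PlannedStmt *stmt = makeNode(PlannedStmt);

stmt->commandType = CMD_UTILITY;
stmt->canSetTag = queryTree->canSetTag;
stmt->utilityStmt = queryTree->utilityStmt;
stmt->stmt_location = queryTree->stmt_location;
stmt->stmt_len = queryTree->stmt_len;
stmt->queryId = queryTree->queryId;		/* previously left at zero */
```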
{
"msg_contents": "On Thu, Jul 18, 2024 at 01:37:40PM +0200, Anthonin Bonnefoy wrote:\n> For utility statements defined within a function, the queryTree is\n> copied to a plannedStmt as utility commands don't require planning.\n> However, the queryId is not propagated to the plannedStmt. This leads\n> to plugins relying on queryId like pg_stat_statements to not be able\n> to track utility statements within function calls.\n\nYou are right, good catch. This leads to only partial information\nbeing reported depending on the setting of pg_stat_statements.track.\nIt is a point of detail, but I'd rather expand a bit more the tests on\ntop of what you are proposing:\n- Upper and down-casing for non-top utility commands, to check that\nthey are counted consistently.\n- Check with pg_stat_statements.track = 'top'\n- Not cross-checking pg_stat_statements.track_utility = false is OK.\n\nWhile this qualifies as something that could go down to all the stable\nbranches, it is much easier to think about utility statements in 16~\nnow that we compile the query IDs depending on their parsed tree, so\nwill apply down to that.\n\npg_stat_statements tests have also been refactored in 16~, but that's\na nit here..\n--\nMichael",
"msg_date": "Fri, 19 Jul 2024 09:13:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Correctly propagate queryId for utility stmt in function"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've been optimizing queries for a long time, and I don't think I've\never seen something more surprising to me than this -- sufficiently so\nthat I wanted to ask if others thought it implied a bug. It's possible\nmy mental model for the planner is broken in some significant way, or\nthat I'm overlooking something obvious, so please let me know if\nthat's the case too.\n\nWe have a query like this (all tables renamed to be generic):\n\nSELECT DISTINCT \"objects\".\"pk\"\nFROM \"objects\"\nLEFT OUTER JOIN \"objects\" \"allowed_objects_objects\" ON\n\"allowed_objects_objects\".\"pk\" = \"objects\".\"allowed_object_fk\" AND\n\"objects\".\"allowed_object_fk\" IS NOT NULL\nLEFT OUTER JOIN \"facilitated_object_metadata\" ON\n\"facilitated_object_metadata\".\"object_fk\" = \"objects\".\"pk\"\nLEFT OUTER JOIN \"object_audits\" ON \"object_audits\".\"object_fk\" = \"objects\".\"pk\"\nLEFT OUTER JOIN \"objects\" \"action_objects_objects\" ON\n\"action_objects_objects\".\"allowed_object_fk\" IS NOT NULL AND\n\"action_objects_objects\".\"allowed_object_fk\" = \"objects\".\"pk\"\nLEFT OUTER JOIN \"objects\" \"return_objects_objects\" ON\n\"return_objects_objects\".\"returned_object_fk\" IS NOT NULL AND\n\"return_objects_objects\".\"returned_object_fk\" = \"objects\".\"pk\"\nLEFT OUTER JOIN \"object_taxs\" ON \"object_taxs\".\"object_fk\" = \"objects\".\"pk\"\nLEFT OUTER JOIN \"object_configs\" ON \"object_configs\".\"object_fk\" =\n\"objects\".\"pk\"\nLEFT OUTER JOIN \"object_statuses\" ON \"object_statuses\".\"object_fk\" =\n\"objects\".\"pk\"\nLEFT OUTER JOIN \"object_edits\" ON \"object_edits\".\"object_fk\" = \"objects\".\"pk\"\nLEFT OUTER JOIN \"audit_answers\" ON \"audit_answers\".\"requester_type\" =\n'Transaction' AND \"audit_answers\".\"requester_fk\" = \"objects\".\"pk\"\nWHERE \"objects\".\"pk\" = 131690144\nLIMIT 1;\n\nThis is ORM generated, so granted there's some slight oddity with the\n\"DISTINCT pk\" and then the \"LIMIT 1\".\n\nThe plan generated by the planner changed suddenly one morning this\nweek, and in a very surprising way: the innermost scan (of \"objects\")\nstarted choosing a seq scan, despite the cost from that node being\nvery high and an index scan being possible -- it's the primary key and\nwe're restricting on a single value, so intuitively we know, and\nindeed the planner knows, that there will only be a single row\nreturned.\n\nHere's the surprising plan:\n\nLimit (cost=3.42..4.00 rows=1 width=8)\n -> Limit (cost=3.42..4.00 rows=1 width=8)\n -> Nested Loop Left Join (cost=3.42..7939067.28 rows=13777920 width=8)\n -> Nested Loop Left Join (cost=2.86..7766839.95\nrows=1059840 width=8)\n -> Nested Loop Left Join (cost=2.44..7753589.13\nrows=211968 width=8)\n -> Nested Loop Left Join\n(cost=1.86..7750902.24 rows=138 width=8)\n -> Nested Loop Left Join\n(cost=1.29..7750895.86 rows=1 width=8)\n -> Nested Loop Left Join\n(cost=1.29..7750895.85 rows=1 width=8)\n -> Nested Loop Left Join\n(cost=0.86..7750893.39 rows=1 width=8)\n -> Nested Loop Left\nJoin (cost=0.43..7750890.93 rows=1 width=8)\n -> Seq Scan\non objects (cost=0.00..7750888.47 rows=1 width=16)\n Filter:\n(pk = 131690144)\n -> Index Only\nScan using index_facilitated_object_metadata_on_object_fk on\nfacilitated_object_metadata (cost=0.43..2.45 rows=1 width=8)\n Index\nCond: (object_fk = 131690144)\n -> Index Only Scan\nusing index_objects_on_allowed_object_fk_not_null on objects\naction_objects_objects (cost=0.43..2.45 rows=1 width=8)\n Index Cond:\n(allowed_object_fk = 131690144)\n -> 
Index Only Scan using\nindex_objects_on_returned_object_fk_not_null on objects\nreturn_objects_objects (cost=0.43..2.45 rows=1 width=8)\n Index Cond:\n(returned_object_fk = 131690144)\n -> Seq Scan on object_taxs\n(cost=0.00..0.00 rows=1 width=8)\n Filter: (object_fk = 131690144)\n -> Index Only Scan using\nindex_object_audits_on_object_fk on object_audits (cost=0.57..5.00\nrows=138 width=8)\n Index Cond: (object_fk = 131690144)\n -> Materialize (cost=0.57..41.13 rows=1536 width=8)\n -> Index Only Scan using\nindex_object_configs_on_object_id on object_configs (cost=0.57..33.45\nrows=1536 width=8)\n Index Cond: (object_fk = 131690144)\n -> Materialize (cost=0.42..2.84 rows=5 width=8)\n -> Index Only Scan using\nindex_adjustment_responses_on_response_fk on object_edits\n(cost=0.42..2.81 rows=5 width=8)\n Index Cond: (object_fk = 131690144)\n -> Materialize (cost=0.56..3.36 rows=13 width=8)\n -> Index Only Scan using\nindex_audit_answers_on_requester_type_and_fk on audit_answers\n(cost=0.56..3.30 rows=13 width=8)\n Index Cond: ((requester_type =\n'Object'::bt_object_or_alternate_object) AND (requester_fk =\n131690144))\n\n\nNote the innermost table scan:\nSeq Scan on objects (cost=0.00..7750888.47 rows=1 width=16)\n Filter: (pk = 131690144)\n\nIf I set enable_seqscan = off, then I get the old plan:\n\nLimit (cost=4.12..4.13 rows=1 width=8)\n -> Limit (cost=4.12..4.13 rows=1 width=8)\n -> Nested Loop Left Join (cost=4.12..188183.54 rows=13777920 width=8)\n -> Nested Loop Left Join (cost=3.55..15956.20\nrows=1059840 width=8)\n -> Nested Loop Left Join (cost=3.13..2705.38\nrows=211968 width=8)\n -> Nested Loop Left Join\n(cost=2.55..18.49 rows=138 width=8)\n -> Nested Loop Left Join\n(cost=1.98..12.11 rows=1 width=8)\n -> Nested Loop Left Join\n(cost=1.86..9.96 rows=1 width=8)\n -> Nested Loop Left Join\n (cost=1.43..7.50 rows=1 width=8)\n -> Nested Loop\nLeft Join (cost=0.99..5.04 rows=1 width=8)\n -> Index\nScan using objects_pkey on objects (cost=0.57..2.58 rows=1 width=16)\n Index\nCond: (pk = 131690144)\n -> Index\nOnly Scan using index_facilitated_object_metadata_on_object_fk on\nfacilitated_object_metadata (cost=0.43..2.45 rows=1 width=8)\n Index\nCond: (object_fk = 131690144)\n -> Index Only Scan\nusing index_objects_on_allowed_object_fk_not_null on objects\naction_objects_objects (cost=0.43..2.45 rows=1 width=8)\n Index Cond:\n(allowed_object_fk = 131690144)\n -> Index Only Scan using\nindex_objects_on_returned_object_fk_not_null on objects\nreturn_objects_objects (cost=0.43..2.45 rows=1 width=8)\n Index Cond:\n(returned_object_fk = 131690144)\n -> Index Only Scan using\nindex_object_taxs_on_object_fk on object_taxs (cost=0.12..2.14 rows=1\nwidth=8)\n Index Cond: (object_fk = 131690144)\n -> Index Only Scan using\nindex_object_audits_on_object_fk on object_audits (cost=0.57..5.00\nrows=138 width=8)\n Index Cond: (object_fk = 131690144)\n -> Materialize (cost=0.57..41.13 rows=1536 width=8)\n -> Index Only Scan using\nindex_object_configs_on_object_id on object_configs (cost=0.57..33.45\nrows=1536 width=8)\n Index Cond: (object_fk = 131690144)\n -> Materialize (cost=0.42..2.84 rows=5 width=8)\n -> Index Only Scan using\nindex_adjustment_responses_on_response_fk on object_edits\n(cost=0.42..2.81 rows=5 width=8)\n Index Cond: (object_fk = 131690144)\n -> Materialize (cost=0.56..3.36 rows=13 width=8)\n -> Index Only Scan using\nindex_audit_answers_on_requester_type_and_fk on audit_answers\n(cost=0.56..3.30 rows=13 width=8)\n Index Cond: ((requester_type 
=\n'Object'::bt_object_or_alternate_object) AND (requester_fk =\n131690144))\n\n\nNotice the innermost table scan is the expect index scan:\n -> Index Scan using objects_pkey on objects (cost=0.57..2.58 rows=1 width=16)\n Index Cond: (pk = 131690144)\n\nThe join order stays the same between the two plans. The scan on\nobject_taxs changes from a seq scan to an index scan (the table is\nempty so neither would be a problem) obviously because of disabled\nsequence scans.\n\nAnalyzing object_audits during the incident did *not* change the plan,\nbut a later auto-analyze of that same table seemed to change the plan\nback. That being said, we're now back at the bad plan (without setting\nenable_seqscan = off). I don't understand why that analyze should\nreally affect anything at all given that join order stays the same\nbetween good/bad plans and given that I'd expect the planner to\nstrongly prefer the index scan given its far lower cost for the number\nof estimated tuples. Analyzing objects doesn't help. Removing the\nDISTINCT does not fix the plan.\n\nRemoving the LIMIT 1 does fix the plan and removes the double LIMIT\nnode in the plan. The second LIMIT node is implicit from the DISTINCT,\nand it doesn't seem to me that a LIMIT node should change the\ncost/path chosen of an identical limit node inside it, and indeed the\ncost seems to be the same at both.\n\nThe only explanation I can come up with is that the startup cost for\nthe seq scan is 0 while for the index scan is 0.57, and the LIMIT at\nthe top level is causing the planner to care about startup cost.\n\nAssuming that's true I think the early return cost multiplication of\nthe LIMIT is being applied very naively on the seq scan node. Or\nperhaps the issue is that the startup cost for a single tuple on a seq\nscan like this shouldn't really have a startup cost of 0 -- that cost\nis presumably for tuples being returned _without_ having applied the\nfilter. That seems slightly odd to me, because the cost of getting the\nfirst row out of that node -- in my naive view thinking about it for\nall of 5 seconds -- should be calculated based on applying the filter\n(and thus the likelihood that that filter matches right away). If we\ndid that then this path would never win. But that 0.00 startup cost\nfor the seq scan with a filter shows up in PG14 and PG11 also, not\njust PG16, so that's not something that's changed.\n\nTo recap: the estimation of rows is correct, the estimated high\n(total) cost of the seq scan is correct, but the seq scan is chosen\nover the index scan anyway for a plan that returns a single \"random\"\nrow based on the primary key.\n\nAm I right to be surprised here?\n\nJames Coleman\n\n\n",
"msg_date": "Thu, 18 Jul 2024 14:10:12 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Seq scan instead of index scan querying single row from primary key\n on large table"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> The plan generated by the planner changed suddenly one morning this\n> week, and in a very surprising way: the innermost scan (of \"objects\")\n> started choosing a seq scan, despite the cost from that node being\n> very high and an index scan being possible\n\nThat looks weird to me too, but given the number of tables involved\nI wonder what you have join_collapse_limit/from_collapse_limit set\nto. If those are smaller than the query's relation count it could\nbe that this is an artifact of optimization of a join subproblem.\nHowever, if it's the very same query you've been generating all along,\nthis theory doesn't really explain the sudden change of plan choice.\n\nAlso, even granting the bad-peephole-optimization theory, it seems\nlike the best path for the objects table alone would still have been\nthe index scan, so I'm confused too. What nondefault planner settings\nhave you got? (\"EXPLAIN (SETTINGS)\" would help with answering that\naccurately.)\n\nAre you really sure nothing changed about what the ORM is emitting?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2024 14:38:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan instead of index scan querying single row from primary\n key on large table"
},
{
"msg_contents": "On Thu, Jul 18, 2024 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Coleman <jtc331@gmail.com> writes:\n> > The plan generated by the planner changed suddenly one morning this\n> > week, and in a very surprising way: the innermost scan (of \"objects\")\n> > started choosing a seq scan, despite the cost from that node being\n> > very high and an index scan being possible\n>\n> That looks weird to me too, but given the number of tables involved\n> I wonder what you have join_collapse_limit/from_collapse_limit set\n> to. If those are smaller than the query's relation count it could\n> be that this is an artifact of optimization of a join subproblem.\n> However, if it's the very same query you've been generating all along,\n> this theory doesn't really explain the sudden change of plan choice.\n\nThose gucs are both set to 8, so...that could be a factor, except that\nas you noted if that's not a variable that's changing during the\nflip-flop of plans, then it's hard to make that the explanation. See\nbelow for more on this.\n\n> Also, even granting the bad-peephole-optimization theory, it seems\n> like the best path for the objects table alone would still have been\n> the index scan, so I'm confused too.\n\nI'm glad I'm not the only one.\n\n> What nondefault planner settings\n> have you got? (\"EXPLAIN (SETTINGS)\" would help with answering that\n> accurately.)\n\nSettings: max_parallel_workers = '24', maintenance_io_concurrency =\n'1', effective_cache_size = '264741832kB', work_mem = '16MB',\nrandom_page_cost = '1', search_path = 'object_shard,public'\n\nOf those I'd expect random_page_cost to steer it towards the index\nscan rather than away from it. The others don't seem to me like they\nshould effect which path would be the best one for the objects table\nalone.\n\n> Are you really sure nothing changed about what the ORM is emitting?\n\nYes, aside from the fact that application code didn't change, we\nreproduced the problem by restoring a physical snapshot of the\ndatabase and were able to get the bad plan and then get it to change\nto the good plan by analyzing object_audits. Additionally on the live\nproduction database (well, a read replica anyway) today it'd switched\nback to the bad plan, and then an hour layer it'd swapped back again\nfor the same exact query.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Thu, 18 Jul 2024 15:18:31 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Seq scan instead of index scan querying single row from primary\n key on large table"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> On Thu, Jul 18, 2024 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Are you really sure nothing changed about what the ORM is emitting?\n\n> Yes, aside from the fact that application code didn't change, we\n> reproduced the problem by restoring a physical snapshot of the\n> database and were able to get the bad plan and then get it to change\n> to the good plan by analyzing object_audits.\n\nOK. After staring at this for awhile, I think I see what's happening,\nand you're right that it's basically a problem with how we do LIMIT.\n\nConsider this simple test case:\n\nregression=# create table t1 (id int primary key, link int);\nCREATE TABLE\nregression=# insert into t1 select id, id/1000 from generate_series(1,1000000) id;\nINSERT 0 1000000\nregression=# vacuum analyze t1;\nVACUUM\nregression=# set enable_indexscan TO 0;\nSET\nregression=# set enable_bitmapscan TO 0;\nSET\nregression=# set max_parallel_workers_per_gather TO 0;\nSET\nregression=# explain select * from t1 where id = 42 limit 1;\n QUERY PLAN \n------------------------------------------------------------\n Limit (cost=0.00..16925.00 rows=1 width=8)\n -> Seq Scan on t1 (cost=0.00..16925.00 rows=1 width=8)\n Filter: (id = 42)\n(3 rows)\n\nBecause we'll pick the winning Limit plan based just on its\ntotal_cost, this set of cost assignments amounts to assuming that the\none matching tuple will appear at the end of the seqscan. That's\noverly conservative perhaps, but it's fine.\n\nBut now look what happens when we join to another table that has many\nmatching rows:\n\nregression=# explain select * from t1 t1 left join t1 t2 on (t1.id=t2.link) where t1.id = 42 limit 1;\n QUERY PLAN \n-----------------------------------------------------------------------\n Limit (cost=0.00..33.96 rows=1 width=16)\n -> Nested Loop Left Join (cost=0.00..33859.97 rows=997 width=16)\n -> Seq Scan on t1 (cost=0.00..16925.00 rows=1 width=8)\n Filter: (id = 42)\n -> Seq Scan on t1 t2 (cost=0.00..16925.00 rows=997 width=8)\n Filter: (link = 42)\n(6 rows)\n\nWe have, basically, completely forgotten about that conservative\npositioning estimate: this set of costs amounts to assuming that the\nfirst matching tuple will be found 1/1000th of the way through the\nentire join, which is a lot less than what we'd just decided it would\ncost to retrieve the first t1 tuple. In your example, with over 13\nmillion rows to be joined, that allows us to make a ridiculously small\nestimate of the time to find the first match, and so it's able to\nwin a cost comparison against the far-more-sanely-estimated indexscan\nplan.\n\nI'm not sure what to do about this. I think that things might work\nout better if we redefined the startup cost as \"estimated cost to\nretrieve the first tuple\", rather than its current very-squishy\ndefinition as \"cost to initialize the scan\". That would end up\nwith the LIMIT node having a cost that's at least the sum of the\nstartup costs of the input scans, which would fix this problem.\nBut changing that everywhere would be a lotta work, and I'm far\nfrom sure that it would not have any negative side-effects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Jul 2024 16:22:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan instead of index scan querying single row from primary\n key on large table"
},
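To connect the \"1/1000th of the way through\" observation to the code: the LIMIT path's total cost is interpolated between the subpath's startup and total cost in proportion to the fraction of rows fetched, roughly as adjust_limit_rows_costs() in pathnode.c does (variable names simplified here, not the actual source):

```c
/*
 * Rough paraphrase: with LIMIT 1 over a join estimated to yield 997
 * rows, total_cost becomes startup_cost plus about 1/997th of the
 * join's run cost, which can easily undercut a sanely-costed index
 * scan underneath it.
 */
if (limit_rows < rows && rows > 0)
{
	total_cost = startup_cost +
		(total_cost - startup_cost) * limit_rows / rows;
	rows = limit_rows;
}
```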
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I'm not sure what to do about this. I think that things might work\n> out better if we redefined the startup cost as \"estimated cost to\n> retrieve the first tuple\", rather than its current very-squishy\n> definition as \"cost to initialize the scan\".\n\nActually I wanted to raise this question very long time ago when I\nread the code, but I don't have a example to prove it can cause any real\nimpact, then I didn't ask it. \n\nstartup_cost is defined by the cost to retrieve the *first tuple*, so\nfor the query like \"SELECT * FROM t WHERE foo\", the IO cost to retrieve\nthe first tpule is obviously not 0. (I think it can be total_cost /\nrows?) at the same time, the startup_cost of IndexScan is more\nrestricted, it counts the IO blocks from root -> leaf nodes. I think\nthere is a inconsistent issue as well. \n\n> That would end up\n> with the LIMIT node having a cost that's at least the sum of the\n> startup costs of the input scans, which would fix this problem.\n\ngreat to know this.\n\n> But changing that everywhere would be a lotta work.\n\nIn my understanding, the only place we need to change is the\nstartup_cost in cost_seqscan, I must be wrong now, but I want to know\nwhere is it. \n\n> and I'm far from sure that it would not have any negative\n> side-effects. \n\nYes, I think it is a semantics correct than before however.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 09:05:05 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan instead of index scan querying single row from primary\n key on large table"
}
] |
[
{
"msg_contents": "There are currently 124 commitfest entries needing a reviewer.\nOf those, 38 have activity this month, and the other 86 are a bit more\nstale, some going back to last year.\n\nWe're already past the halfway point of this commitfest, so we need to get\nreviewers on these.\n\nIf you know your patch isn't going to get reviewed in this commitfest,\nplease consider moving it to the next commitfest or withdrawing it.\n\nIf you've ever wanted to review a patch, there are lots to choose from.\n\nI'll be updating this thread every few days with our progress and\nincreasingly hyperbolic calls to action.\n\nsource:\nhttps://commitfest.postgresql.org/48/?text=&status=1&targetversion=-1&author=-1&reviewer=-2&sortkey=2\n.\n\nThere are currently 124 commitfest entries needing a reviewer.Of those, 38 have activity this month, and the other 86 are a bit more stale, some going back to last year.We're already past the halfway point of this commitfest, so we need to get reviewers on these.If you know your patch isn't going to get reviewed in this commitfest, please consider moving it to the next commitfest or withdrawing it.If you've ever wanted to review a patch, there are lots to choose from.I'll be updating this thread every few days with our progress and increasingly hyperbolic calls to action.source: https://commitfest.postgresql.org/48/?text=&status=1&targetversion=-1&author=-1&reviewer=-2&sortkey=2.",
"msg_date": "Thu, 18 Jul 2024 14:17:38 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "July Commitfest: Entries Needing Review"
},
{
"msg_contents": "Hi,\n\nIn <CADkLM=fOjTpfcwwxhuARPPZCQAFdrhMvVQtPC3XA4UvuFs1y1g@mail.gmail.com>\n \"July Commitfest: Entries Needing Review\" on Thu, 18 Jul 2024 14:17:38 -0400,\n Corey Huinker <corey.huinker@gmail.com> wrote:\n\n> If you know your patch isn't going to get reviewed in this commitfest,\n> please consider moving it to the next commitfest or withdrawing it.\n\nI hope my patch https://commitfest.postgresql.org/48/4681/\ngets reviewed in this commitfest but it's not done yet.\n\nI'm reviewing other patches in this commitfest because I\nheard that my patch will be got reviewed if I review other\npatches in this commitfest. But it seems that it's not\nrelated. Should I move my patch to the next commitfest or\nwithdraw my patch?\n\n\nThanks,\n-- \nkou\n\n\n",
"msg_date": "Fri, 19 Jul 2024 11:53:12 +0900 (JST)",
"msg_from": "Sutou Kouhei <kou@clear-code.com>",
"msg_from_op": false,
"msg_subject": "Re: July Commitfest: Entries Needing Review"
},
{
"msg_contents": "On Fri, 2024-07-19 at 11:53 +0900, Sutou Kouhei wrote:\n> In <CADkLM=fOjTpfcwwxhuARPPZCQAFdrhMvVQtPC3XA4UvuFs1y1g@mail.gmail.com>\n> \"July Commitfest: Entries Needing Review\" on Thu, 18 Jul 2024 14:17:38 -0400,\n> Corey Huinker <corey.huinker@gmail.com> wrote:\n> \n> > If you know your patch isn't going to get reviewed in this commitfest,\n> > please consider moving it to the next commitfest or withdrawing it.\n> \n> I hope my patch https://commitfest.postgresql.org/48/4681/\n> gets reviewed in this commitfest but it's not done yet.\n> \n> I'm reviewing other patches in this commitfest because I\n> heard that my patch will be got reviewed if I review other\n> patches in this commitfest. But it seems that it's not\n> related. Should I move my patch to the next commitfest or\n> withdraw my patch?\n\nDon't. It is not your fault. You don't know - it might still get\na review. I'm not sure if Corey's advice makes much sense: how\nshould you be able to divine that your patch won't receive any attention?\n\nThe rule you quote isn't enforced in any way, and it would be difficult.\n\nSee\nhttps://postgr.es/m/c0474165-2a33-4f50-9a77-7e2c67ab4f21%40enterprisedb.com\nand the lengthy discussion around it.\n\nOur development process isn't perfect, and it can be quite frustrating\nfor contributors. Sorry about that.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Fri, 19 Jul 2024 10:02:15 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: July Commitfest: Entries Needing Review"
},
{
"msg_contents": "Hi,\n\n> There are currently 124 commitfest entries needing a reviewer.\n> Of those, 38 have activity this month, and the other 86 are a bit more stale, some going back to last year.\n\nIt's worth noting that some patches marked as \"Needs review\" in fact\ngot some review and now require actions from the author.\n\nIf you are an author and you know that you are going to update the\npatch, consider changing its status to \"Waiting on Author\" for the\ntime being. This will allow the reviewers to focus on patches that\nactually didn't get any attention so far.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 19 Jul 2024 12:41:38 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: July Commitfest: Entries Needing Review"
}
] |
[
{
"msg_contents": "Hi hackers,\r\nI am trying to design a new \"pg_get_functiondef\" function extension, like this:\r\n\r\n\r\nCREATE FUNCTION pg_get_functiondef(OID, VARIADIC OID[]) RETURNS TABLE (OID oid, pg_get_functiondef text)\r\nAS 'pg_get_functiondef', 'pg_get_functiondef_mul'\r\nLANGUAGE C;\r\n\r\n\r\n\r\nAnd I have read the <C-Language Functions>, learn the way to build the tuple (use BlessTupleDesc or BuildTupleFromCStrings) to return the Composite Types, but when I finish my work, its performance does not meet my expectations, like this:\r\n\r\n\r\n pg_get_functiondef \r\n----------------------------------------------------------------------\r\n (1400,\"CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+\r\n RETURNS name +\r\n LANGUAGE internal +\r\n IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF +\r\n AS $function$text_name$function$ +\r\n \")\r\n (1400,\"CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+\r\n RETURNS name +\r\n LANGUAGE internal +\r\n IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF +\r\n AS $function$text_name$function$ +\r\n \")\r\n\r\n\r\n\r\nIn my expectations, it should be:\r\n\r\n\r\npostgres=# SELECT 1400 AS oid, 'CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+\r\n RETURNS name +\r\n LANGUAGE internal +\r\n IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF +\r\n AS $function$text_name$function' AS pg_get_functiondef;\r\n oid | pg_get_functiondef \r\n------+------------------------------------------------------------------------\r\n 1400 | CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+ +\r\n | RETURNS name ++\r\n | LANGUAGE internal ++\r\n | IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF ++\r\n | AS $function$text_name$function\r\n(1 row)\r\n\r\n\r\nBecause I write:\r\n\r\n\r\nTupleDescInitEntry(info->result_desc, 1, \"OID\", OIDOID, -1, 0);\r\nTupleDescInitEntry(info->result_desc, 2, \"pg_get_functiondef\", CSTRINGOID, -1, 0);\r\n\r\n\r\nCan someone give my some advice?\r\nThanks in advance!\r\n\r\n\r\nYours,\r\nWen Yi\nHi hackers,I am trying to design a new \"pg_get_functiondef\" function extension, like this:CREATE FUNCTION pg_get_functiondef(OID, VARIADIC OID[]) RETURNS TABLE (OID oid, pg_get_functiondef text)AS 'pg_get_functiondef', 'pg_get_functiondef_mul'LANGUAGE C;And I have read the <C-Language Functions>, learn the way to build the tuple (use BlessTupleDesc or BuildTupleFromCStrings) to return the Composite Types, but when I finish my work, its performance does not meet my expectations, like this: pg_get_functiondef ---------------------------------------------------------------------- (1400,\"CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+ RETURNS name + LANGUAGE internal + IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF + AS $function$text_name$function$ + \") (1400,\"CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+ RETURNS name + LANGUAGE internal + IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF + AS $function$text_name$function$ + \")In my expectations, it should be:postgres=# SELECT 1400 AS oid, 'CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+ RETURNS name + LANGUAGE internal + IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF + AS $function$text_name$function' AS pg_get_functiondef; oid | pg_get_functiondef ------+------------------------------------------------------------------------ 1400 | CREATE OR REPLACE FUNCTION pg_catalog.name(character varying)+ + | RETURNS name ++ | LANGUAGE internal ++ | IMMUTABLE PARALLEL SAFE STRICT LEAKPROOF ++ | AS $function$text_name$function(1 row)Because I 
write:TupleDescInitEntry(info->result_desc, 1, \"OID\", OIDOID, -1, 0);TupleDescInitEntry(info->result_desc, 2, \"pg_get_functiondef\", CSTRINGOID, -1, 0);Can someone give my some advice?Thanks in advance!Yours,Wen Yi",
"msg_date": "Fri, 19 Jul 2024 14:57:51 +0800",
"msg_from": "\"=?ISO-8859-1?B?V2VuIFlp?=\" <wen-yi@qq.com>",
"msg_from_op": true,
"msg_subject": "How can udf c function return table, not the rows?"
},
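For context, a minimal sketch of what the question above seems to be after: a set-returning C function declared as RETURNS TABLE (oid oid, def text) that emits one row per input OID. The names here (pg_get_functiondef_mul, the "def" column) are placeholders rather than the poster's actual code, and the sketch assumes PostgreSQL 15+ for InitMaterializedSRF(). The two changes relative to the original attempt are declaring the text column as TEXTOID rather than CSTRINGOID (the SQL signature says text), and calling the function as SELECT * FROM ... so the result is expanded into separate columns rather than one composite value.
```
/* Hypothetical sketch, not the poster's code.  Assumed SQL declaration:
 *   CREATE FUNCTION pg_get_functiondef_mul(VARIADIC oid[])
 *       RETURNS TABLE (oid oid, def text)
 *       AS 'MODULE_PATHNAME' LANGUAGE C STRICT;
 */
#include "postgres.h"

#include "catalog/pg_type.h"
#include "fmgr.h"
#include "funcapi.h"
#include "nodes/execnodes.h"
#include "utils/array.h"
#include "utils/builtins.h"
#include "utils/fmgrprotos.h"	/* prototype of the built-in pg_get_functiondef() */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(pg_get_functiondef_mul);

Datum
pg_get_functiondef_mul(PG_FUNCTION_ARGS)
{
	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
	ArrayType  *oids = PG_GETARG_ARRAYTYPE_P(0);
	Datum	   *elems;
	int			nelems;

	/* Set up a tuplestore and a tuple descriptor matching RETURNS TABLE. */
	InitMaterializedSRF(fcinfo, 0);

	/* Pull the individual OIDs out of the variadic array. */
	deconstruct_array(oids, OIDOID, sizeof(Oid), true, TYPALIGN_INT,
					  &elems, NULL, &nelems);

	for (int i = 0; i < nelems; i++)
	{
		Datum		values[2];
		bool		nulls[2] = {false, false};

		values[0] = elems[i];	/* the oid column */
		/* Reuse the built-in pg_get_functiondef() for the text column. */
		values[1] = DirectFunctionCall1(pg_get_functiondef, elems[i]);

		tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc,
							 values, nulls);
	}

	return (Datum) 0;
}
```
Called as SELECT * FROM pg_get_functiondef_mul(1400, 1401), this returns separate oid and def columns, one row per input OID, instead of a single composite value per row.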
{
"msg_contents": "On Thursday, July 18, 2024, Wen Yi <wen-yi@qq.com> wrote:\n>\n>\n> pg_get_functiondef\n> ----------------------------------------------------------------------\n>\n> In my expectations, it should be:\n>\n> oid | pg_get_functiondef\n>\n> ------+-----------------------------------------------------\n> -------------------\n>\n> Can someone give my some advice?\n>\n>\nWrite:\n\nSelect * from function_call()\n\nInstead of\n\nSelect function_call()\n\nGuessing a bit since you never did show the first query.\n\nDavid J.\n\nOn Thursday, July 18, 2024, Wen Yi <wen-yi@qq.com> wrote: pg_get_functiondef ----------------------------------------------------------------------In my expectations, it should be: oid | pg_get_functiondef ------+------------------------------------------------------------------------Can someone give my some advice?Write:Select * from function_call()Instead ofSelect function_call() Guessing a bit since you never did show the first query.David J.",
"msg_date": "Fri, 19 Jul 2024 06:04:40 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "How can udf c function return table, not the rows?"
}
] |
[
{
"msg_contents": "Hi hackers,\nI'm doing work related to creating an index with parallel workers. I found\nthat SnapshotAny\nis used in table_beginscan_parallel() when indexInfo->ii_Concurrent Is set\nto false. So can we\nnot pass the snapshot from the parallel worker creator to the parallel\nworker? like this:\n```\nInitializeParallelDSM()\n{\n ...\n\n if (is_concurrent == false)\n {\n /* Serialize the active snapshot. */\n asnapspace = shm_toc_allocate(pcxt->toc, asnaplen);\n SerializeSnapshot(active_snapshot, asnapspace);\n shm_toc_insert(pcxt->toc, PARALLEL_KEY_ACTIVE_SNAPSHOT,\nasnapspace);\n }\n\n ...\n}\n\nParallelWorkerMain()\n{\n ...\n\n if(is_concurrent == false)\n {\n asnapspace = shm_toc_lookup(toc, PARALLEL_KEY_ACTIVE_SNAPSHOT,\nfalse);\n tsnapspace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_SNAPSHOT,\ntrue);\n asnapshot = RestoreSnapshot(asnapspace);\n tsnapshot = tsnapspace ? RestoreSnapshot(tsnapspace) : asnapshot;\n RestoreTransactionSnapshot(tsnapshot,\n fps->parallel_leader_pgproc);\n PushActiveSnapshot(asnapshot);\n }\n\n ...\n}\n```\n\nI would appreciate your help.\n\nWith Regards\nHao Zhang\n\nHi hackers,I'm doing work related to creating an index with parallel workers. I found that SnapshotAnyis used in table_beginscan_parallel() when indexInfo->ii_Concurrent Is set to false. So can wenot pass the snapshot from the parallel worker creator to the parallel worker? like this:```InitializeParallelDSM(){ ... if (is_concurrent == false) { /* Serialize the active snapshot. */ asnapspace = shm_toc_allocate(pcxt->toc, asnaplen); SerializeSnapshot(active_snapshot, asnapspace); shm_toc_insert(pcxt->toc, PARALLEL_KEY_ACTIVE_SNAPSHOT, asnapspace); } ...}ParallelWorkerMain(){ ... if(is_concurrent == false) { asnapspace = shm_toc_lookup(toc, PARALLEL_KEY_ACTIVE_SNAPSHOT, false); tsnapspace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_SNAPSHOT, true); asnapshot = RestoreSnapshot(asnapspace); tsnapshot = tsnapspace ? RestoreSnapshot(tsnapspace) : asnapshot; RestoreTransactionSnapshot(tsnapshot, fps->parallel_leader_pgproc); PushActiveSnapshot(asnapshot); } ...}```I would appreciate your help.With RegardsHao Zhang",
"msg_date": "Fri, 19 Jul 2024 15:11:25 +0800",
"msg_from": "Hao Zhang <zhrt1446384557@gmail.com>",
"msg_from_op": true,
"msg_subject": "Can we use parallel workers to create index without\n active/transaction snapshot?"
},
{
"msg_contents": "On 7/19/24 09:11, Hao Zhang wrote:\n> Hi hackers,\n> I'm doing work related to creating an index with parallel workers. I found\n> that SnapshotAny\n> is used in table_beginscan_parallel() when indexInfo->ii_Concurrent Is set\n> to false. So can we\n> not pass the snapshot from the parallel worker creator to the parallel\n> worker? like this:\n\nMaybe, but I wonder why are you thinking about doing this. I'm guessing\nyou're trying to skip \"unnecessary\" stuff to make parallel workers\nfaster, or is the goal different? FWIW I doubt this will make measurable\ndifference, I'd expect the mere fork() to be way more expensive than\ncopying the SnapshotAny (which I think is pretty small).\n\nUp to you, but I'd suggest doing some measurements first, to show how\nmuch overhead this actually is.\n\n> ```> InitializeParallelDSM()\n> {\n> ...\n> \n> if (is_concurrent == false)\n> {\n> /* Serialize the active snapshot. */\n> asnapspace = shm_toc_allocate(pcxt->toc, asnaplen);\n> SerializeSnapshot(active_snapshot, asnapspace);\n> shm_toc_insert(pcxt->toc, PARALLEL_KEY_ACTIVE_SNAPSHOT,\n> asnapspace);\n> }\n> \n> ...\n> }\n> \n> ParallelWorkerMain()\n> {\n> ...\n> \n> if(is_concurrent == false)\n> {\n> asnapspace = shm_toc_lookup(toc, PARALLEL_KEY_ACTIVE_SNAPSHOT,\n> false);\n> tsnapspace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_SNAPSHOT,\n> true);\n> asnapshot = RestoreSnapshot(asnapspace);\n> tsnapshot = tsnapspace ? RestoreSnapshot(tsnapspace) : asnapshot;\n> RestoreTransactionSnapshot(tsnapshot,\n> fps->parallel_leader_pgproc);\n> PushActiveSnapshot(asnapshot);\n> }\n> \n> ...\n> }\n> ```\n> \n\nIt's not clear to me where you get the is_concurrent flag in those\nplaces. Also, in ParallelWorkerMain() you probably should not skip\nrestoring the transaction snapshot.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Jul 2024 15:17:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Can we use parallel workers to create index without\n active/transaction snapshot?"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18545\nLogged by: Andrey Rachitskiy\nEmail address: therealgofman@mail.ru\nPostgreSQL version: 16.3\nOperating system: Debian 12\nDescription: \n\n\\dt breaks transaction, calling error when executed in SET SESSION\nAUTHORIZATION. This happens in two cases, with different SET.\r\n\r\nCase one:\r\npostgres@debian-test:~$ psql -U postgres\r\npsql (16.3)\r\nType \"help\" for help.\r\n\r\npostgres=# \\set VERBOSITY verbose\r\npostgres=# BEGIN;\r\nBEGIN\r\npostgres=*# CREATE USER regress_priv_user8;\r\nCREATE ROLE\r\npostgres=*# SET SESSION AUTHORIZATION regress_priv_user8;\r\nSET\r\npostgres=*> \\dt+;\r\nDid not find any relations.\r\npostgres=*> SET LOCAL debug_parallel_query = 1;\r\nSET\r\npostgres=*> \\dt+;\r\nERROR: 22023: role \"regress_priv_user8\" does not exist\r\nCONTEXT: while setting parameter \"session_authorization\" to\n\"regress_priv_user8\"\r\nparallel worker\r\nLOCATION: call_string_check_hook, guc.c:6734\r\npostgres=!# \r\n\\q\r\n\r\nCase two:\r\npostgres@debian-test:~$ psql -U postgres\r\npsql (16.3)\r\nType \"help\" for help.\r\n\r\npostgres=# \\set VERBOSITY verbose\r\npostgres=# BEGIN;\r\nBEGIN\r\npostgres=*# CREATE USER regress_priv_user8;\r\nCREATE ROLE\r\npostgres=*# SET SESSION AUTHORIZATION regress_priv_user8;\r\nSET\r\npostgres=*> \\dt+\r\nDid not find any relations.\r\npostgres=*> set local parallel_setup_cost = 0;\r\nSET\r\npostgres=*> set local min_parallel_table_scan_size = 0;\r\nSET\r\npostgres=*> \\dt+\r\nERROR: 22023: role \"regress_priv_user8\" does not exist\r\nCONTEXT: while setting parameter \"session_authorization\" to\n\"regress_priv_user8\"\r\nparallel worker\r\nLOCATION: call_string_check_hook, guc.c:6734\r\npostgres=!# \r\n\\q",
"msg_date": "Fri, 19 Jul 2024 09:25:11 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> postgres=# BEGIN;\n> BEGIN\n> postgres=*# CREATE USER regress_priv_user8;\n> CREATE ROLE\n> postgres=*# SET SESSION AUTHORIZATION regress_priv_user8;\n> SET\n> postgres=*> SET LOCAL debug_parallel_query = 1;\n> SET\n> postgres=*> \\dt+;\n> ERROR: 22023: role \"regress_priv_user8\" does not exist\n> CONTEXT: while setting parameter \"session_authorization\" to \"regress_priv_user8\"\n> parallel worker\n\nSo this has exactly nothing to do with \\dt+; any parallel query\nwill hit it. The problem is that parallel workers do\nRestoreGUCState() before they've restored the leader's snapshot.\nThus, in this example where session_authorization refers to an\nuncommitted pg_authid entry, the workers don't see that entry.\nIt seems likely that similar failures are possible with other\nGUCs that perform catalog lookups.\n\nI experimented with two different ways to fix this:\n\n1. Run RestoreGUCState() outside a transaction, thus preventing\ncatalog lookups. Assume that individual GUC check hooks that\nwould wish to do a catalog lookup will cope. Unfortunately,\nsome of them don't and would need fixed; check_role and\ncheck_session_authorization for two.\n\n2. Delay RestoreGUCState() into the parallel worker's main\ntransaction, after we've restored the leader's snapshot.\nThis turns out to break a different set of check hooks, notably\ncheck_transaction_deferrable.\n\nI think that the blast radius of option 2 is probably smaller than\noption 1's, because it should only matter to check hooks that think\nthey should run before the transaction has set a snapshot, and there\nare few of those. check_transaction_read_only already had a guard,\nbut I added similar ones to check_transaction_isolation and\ncheck_transaction_deferrable.\n\nThe attached draft patch also contains changes to prevent\ncheck_session_authorization from doing anything during parallel\nworker startup. That's left over from experimenting with option 1,\nand is not strictly necessary with option 2. I left it in anyway\nbecause it's saving some unnecessary work. (For some reason,\ncheck_role seems not to fail if you modify the test case to use\nSET ROLE. I did not figure out why not. I kind of want to modify\ncheck_role to be a no-op too when InitializingParallelWorker,\nbut did not touch that here pending more investigation.)\n\nAnother thing I'm wondering about is whether to postpone\nRestoreLibraryState similarly. Its current placement is said\nto be \"before restoring GUC values\", so it looks a little out\nof place now. Moving it into the main transaction would save\none StartTransactionCommand/CommitTransactionCommand pair\nduring parallel worker start, which is worth something.\nBut I think the real argument for it is that if any loaded\nlibraries try to do catalog lookups during load, we'd rather\nthat they see the same catalog state the leader does.\nAs against that, it feels like there's a nonzero risk of\nbreaking some third-party code if we move that call.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 19 Jul 2024 15:03:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
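To illustrate the kind of guard described above for option 2, a sketch of a transaction-property check hook with an early exit follows. This is not the committed patch; it only shows the shape of the change, namely that replaying the leader's GUC state inside a parallel worker's already-started main transaction should not trip the "must be set before any query" checks. InitializingParallelWorker and FirstSnapshotSet are existing backend globals; the function name and error texts are made up for the sketch.
```
/* Illustrative sketch only -- not the committed fix. */
#include "postgres.h"

#include "access/parallel.h"	/* InitializingParallelWorker */
#include "access/xact.h"		/* IsSubTransaction() */
#include "utils/guc.h"
#include "utils/snapmgr.h"		/* FirstSnapshotSet */

static bool
check_transaction_deferrable_sketch(bool *newval, void **extra, GucSource source)
{
	/*
	 * A parallel worker merely adopts the leader's value while restoring GUC
	 * state inside its main transaction, so skip the checks that assume the
	 * user issued SET TRANSACTION by hand.
	 */
	if (InitializingParallelWorker)
		return true;

	if (IsSubTransaction())
	{
		GUC_check_errcode(ERRCODE_ACTIVE_SQL_TRANSACTION);
		GUC_check_errmsg("cannot be set within a subtransaction");
		return false;
	}
	if (FirstSnapshotSet)
	{
		GUC_check_errcode(ERRCODE_ACTIVE_SQL_TRANSACTION);
		GUC_check_errmsg("must be set before any query");
		return false;
	}
	return true;
}
```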
{
"msg_contents": "Hi, Tom! Thank you for work on the subject. After applying patch, problem is no longer reproducible.\n \n---\nBest regards,\nAndrey Rachitskiy\nPostgres Professional: http://postgrespro.com\n \n>Суббота, 20 июля 2024, 0:04 +05:00 от Tom Lane <tgl@sss.pgh.pa.us>:\n> \n>PG Bug reporting form < noreply@postgresql.org > writes:\n>> postgres=# BEGIN;\n>> BEGIN\n>> postgres=*# CREATE USER regress_priv_user8;\n>> CREATE ROLE\n>> postgres=*# SET SESSION AUTHORIZATION regress_priv_user8;\n>> SET\n>> postgres=*> SET LOCAL debug_parallel_query = 1;\n>> SET\n>> postgres=*> \\dt+;\n>> ERROR: 22023: role \"regress_priv_user8\" does not exist\n>> CONTEXT: while setting parameter \"session_authorization\" to \"regress_priv_user8\"\n>> parallel worker\n>So this has exactly nothing to do with \\dt+; any parallel query\n>will hit it. The problem is that parallel workers do\n>RestoreGUCState() before they've restored the leader's snapshot.\n>Thus, in this example where session_authorization refers to an\n>uncommitted pg_authid entry, the workers don't see that entry.\n>It seems likely that similar failures are possible with other\n>GUCs that perform catalog lookups.\n>\n>I experimented with two different ways to fix this:\n>\n>1. Run RestoreGUCState() outside a transaction, thus preventing\n>catalog lookups. Assume that individual GUC check hooks that\n>would wish to do a catalog lookup will cope. Unfortunately,\n>some of them don't and would need fixed; check_role and\n>check_session_authorization for two.\n>\n>2. Delay RestoreGUCState() into the parallel worker's main\n>transaction, after we've restored the leader's snapshot.\n>This turns out to break a different set of check hooks, notably\n>check_transaction_deferrable.\n>\n>I think that the blast radius of option 2 is probably smaller than\n>option 1's, because it should only matter to check hooks that think\n>they should run before the transaction has set a snapshot, and there\n>are few of those. check_transaction_read_only already had a guard,\n>but I added similar ones to check_transaction_isolation and\n>check_transaction_deferrable.\n>\n>The attached draft patch also contains changes to prevent\n>check_session_authorization from doing anything during parallel\n>worker startup. That's left over from experimenting with option 1,\n>and is not strictly necessary with option 2. I left it in anyway\n>because it's saving some unnecessary work. (For some reason,\n>check_role seems not to fail if you modify the test case to use\n>SET ROLE. I did not figure out why not. I kind of want to modify\n>check_role to be a no-op too when InitializingParallelWorker,\n>but did not touch that here pending more investigation.)\n>\n>Another thing I'm wondering about is whether to postpone\n>RestoreLibraryState similarly. Its current placement is said\n>to be \"before restoring GUC values\", so it looks a little out\n>of place now. Moving it into the main transaction would save\n>one StartTransactionCommand/CommitTransactionCommand pair\n>during parallel worker start, which is worth something.\n>But I think the real argument for it is that if any loaded\n>libraries try to do catalog lookups during load, we'd rather\n>that they see the same catalog state the leader does.\n>As against that, it feels like there's a nonzero risk of\n>breaking some third-party code if we move that call.\n>\n>Thoughts?\n>\n>regards, tom lane\n> \n \nHi, Tom! Thank you for work on the subject. After applying patch, problem is no longer reproducible. 
---Best regards,Andrey RachitskiyPostgres Professional: http://postgrespro.com Суббота, 20 июля 2024, 0:04 +05:00 от Tom Lane <tgl@sss.pgh.pa.us>: PG Bug reporting form <noreply@postgresql.org> writes:> postgres=# BEGIN;> BEGIN> postgres=*# CREATE USER regress_priv_user8;> CREATE ROLE> postgres=*# SET SESSION AUTHORIZATION regress_priv_user8;> SET> postgres=*> SET LOCAL debug_parallel_query = 1;> SET> postgres=*> \\dt+;> ERROR: 22023: role \"regress_priv_user8\" does not exist> CONTEXT: while setting parameter \"session_authorization\" to \"regress_priv_user8\"> parallel workerSo this has exactly nothing to do with \\dt+; any parallel querywill hit it. The problem is that parallel workers doRestoreGUCState() before they've restored the leader's snapshot.Thus, in this example where session_authorization refers to anuncommitted pg_authid entry, the workers don't see that entry.It seems likely that similar failures are possible with otherGUCs that perform catalog lookups.I experimented with two different ways to fix this:1. Run RestoreGUCState() outside a transaction, thus preventingcatalog lookups. Assume that individual GUC check hooks thatwould wish to do a catalog lookup will cope. Unfortunately,some of them don't and would need fixed; check_role andcheck_session_authorization for two.2. Delay RestoreGUCState() into the parallel worker's maintransaction, after we've restored the leader's snapshot.This turns out to break a different set of check hooks, notablycheck_transaction_deferrable.I think that the blast radius of option 2 is probably smaller thanoption 1's, because it should only matter to check hooks that thinkthey should run before the transaction has set a snapshot, and thereare few of those. check_transaction_read_only already had a guard,but I added similar ones to check_transaction_isolation andcheck_transaction_deferrable.The attached draft patch also contains changes to preventcheck_session_authorization from doing anything during parallelworker startup. That's left over from experimenting with option 1,and is not strictly necessary with option 2. I left it in anywaybecause it's saving some unnecessary work. (For some reason,check_role seems not to fail if you modify the test case to useSET ROLE. I did not figure out why not. I kind of want to modifycheck_role to be a no-op too when InitializingParallelWorker,but did not touch that here pending more investigation.)Another thing I'm wondering about is whether to postponeRestoreLibraryState similarly. Its current placement is saidto be \"before restoring GUC values\", so it looks a little outof place now. Moving it into the main transaction would saveone StartTransactionCommand/CommitTransactionCommand pairduring parallel worker start, which is worth something.But I think the real argument for it is that if any loadedlibraries try to do catalog lookups during load, we'd ratherthat they see the same catalog state the leader does.As against that, it feels like there's a nonzero risk ofbreaking some third-party code if we move that call.Thoughts?regards, tom lane",
"msg_date": "Mon, 29 Jul 2024 11:24:48 +0300",
"msg_from": "=?UTF-8?B?0JDQvdC00YDQtdC5INCg0LDRh9C40YbQutC40Lk=?=\n <therealgofman@mail.ru>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmU6IEJVRyAjMTg1NDU6IFxkdCBicmVha3MgdHJhbnNhY3Rpb24sIGNhbGxp?=\n =?UTF-8?B?bmcgZXJyb3Igd2hlbiBleGVjdXRlZCBpbiBTRVQgU0VTU0lPTiBBVVRIT1JJ?=\n =?UTF-8?B?WkFUSU9O?="
},
{
"msg_contents": "=?UTF-8?B?0JDQvdC00YDQtdC5INCg0LDRh9C40YbQutC40Lk=?= <therealgofman@mail.ru> writes:\n> Hi, Tom! Thank you for work on the subject. After applying patch, problem is no longer reproducible.\n\nThanks for checking. I realized that the idea of making\ncheck_session_authorization a no-op was wrong as presented:\nif we don't set up an \"extra\" struct then guc.c would be\nunable to restore the setting later, in case say a function\nthat's run inside the parallel query has a SET\nsession_authorization clause. It's probably possible to\nrevive that idea with more work, but it's not essential to the\nbug fix and we're getting close to the August minor releases.\nSo I pushed the core bug fix, and I'll take another look at\nthat part later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2024 18:59:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "I wrote:\n> So I pushed the core bug fix, and I'll take another look at\n> that part later.\n\n... or not; the buildfarm didn't like that much. It'll\nhave to wait till after these releases, because I'm\noverdue to get to work on the release notes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2024 20:59:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "I wrote:\n>> So I pushed the core bug fix, and I'll take another look at\n>> that part later.\n\n> ... or not; the buildfarm didn't like that much. It'll\n> have to wait till after these releases, because I'm\n> overdue to get to work on the release notes.\n\nThe reason the buildfarm found something I'd missed in testing\nis that I didn't run check-world with debug_parallel_query set,\nwhich was a bad idea for a patch messing with parallel query\nmechanics :-(. Mea culpa.\n\nHowever, what the farm found is that assign_client_encoding\nis flat-out broken. It's ignoring the first commandment\nfor GUC hooks, which is \"Thy assign hooks shalt not fail\".\n(If we didn't need that, there wouldn't be a separation\nbetween check hooks and assign hooks in the first place.)\n\nBecause it's throwing an error at the wrong time, it spits\nup if it sees an (irrelevant) rollback of client_encoding\nduring cleanup of a failed parallel worker. We didn't see\nthis before because GUCRestoreState was being run in a separate\nmini-transaction that (usually at least) doesn't fail.\n\nThe attached correction basically just moves that test into\ncheck_client_encoding where it should have been to begin with.\nAfter applying this, I can un-revert f5f30c22e and everything\npasses.\n\nHowever, this episode definitely gives me pause about back-patching\nf5f30c22e as I did before. It seems not impossible that there\nare extensions with similarly mis-coded assign hooks, and if\nso those are going to need to be fixed. (I did check that none\nof the other core GUCs have this problem. assign_recovery_target\nand friends would, except that they are for PGC_POSTMASTER variables\nthat won't be getting changed by GUCRestoreState. Anyway they're a\nknown kluge, and this patch isn't making it worse.) So we'd better\ntreat this as a minor API change, which means we probably shouldn't\nput it in stable branches.\n\nWhat I'm currently thinking is to apply in HEAD and perhaps v17,\nbut not further back. Given that this bug has existed since\nthe beginning of parallel query yet wasn't reported till now,\nit's not sufficiently problematic to take any risk for in\nstable branches.\n\nAny opinions about whether it's too late to do this in v17?\nPost-beta3 is pretty late, for sure, but maybe we could get\naway with it. And we are fixing a bug here.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 04 Aug 2024 18:08:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
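The "first commandment" above is easiest to see as a division of labor between the two hooks. The generic sketch below (a made-up "my.encoding" setting, not the actual client_encoding code) shows the intended contract: the check hook does any validation or lookup that may fail and stashes the derived value in *extra, while the assign hook merely installs that pre-validated value, because assign hooks also run during GUC rollback and error cleanup where throwing an error is not acceptable.
```
/* Generic sketch of the check/assign-hook contract; hypothetical GUC. */
#include "postgres.h"

#include "mb/pg_wchar.h"		/* pg_char_to_encoding(), PG_UTF8 */
#include "utils/guc.h"

static int	my_encoding = PG_UTF8;

static bool
check_my_encoding(char **newval, void **extra, GucSource source)
{
	int			encoding = pg_char_to_encoding(*newval);

	if (encoding < 0)
	{
		/* Failing here is fine: check hooks are allowed to reject. */
		GUC_check_errdetail("\"%s\" is not a valid encoding name.", *newval);
		return false;
	}

	/* Hand the derived value to the assign hook via *extra. */
	*extra = guc_malloc(ERROR, sizeof(int));
	*((int *) *extra) = encoding;
	return true;
}

static void
assign_my_encoding(const char *newval, void *extra)
{
	/* Must not fail or throw: just install the already-validated value. */
	my_encoding = *((int *) extra);
}
```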
{
"msg_contents": "On Sun, Aug 4, 2024 at 3:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Any opinions about whether it's too late to do this in v17?\n> Post-beta3 is pretty late, for sure, but maybe we could get\n> away with it. And we are fixing a bug here.\n>\n>\nIf this isn't going to appear in the beta3 build I'd say it's probably too\nlate given the target audience for waiting on this is extension authors.\nIf it is going into beta3 then I'd vote to allow it.\n\nFeels like this dynamic should be covered as part of our recent attempt to\nbetter communicate our policies to our extension authoring community.\n\nhttps://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-GUIDANCE-ABI-MNINOR-VERSIONS\n\nSomething like:\n\nNamely, beta releases do not constitute a minor release under our policies\nand updates to our API/ABIs can happen during beta at any point - whether\nfor features newly added in the under- development major version or not.\nExtension authors are thus encouraged to test their extensions against the\nRC build at minimum should they wish for their extension to be ready when\nthe initial release comes out. Tests against beta versions are very\nhelpful to all interested parties but there is no guarantee that tests that\npass any given beta release will pass when performed against the release\ncandidate. For the release candidate we will use the same patching policy\nas for a normal minor release. Any exceptions will necessitate a second\nrelease candidate.\n\nThe above wording allows us to put this patch into beta3, which I'd be fine\nwith. But I'd also be fine with adding wording like: \"Changes introduced\nafter the final beta is released for testing will [generally?] be limited\nto fixing items conforming to the Open Item policy.\" Probably favor the\nlatter too by a small margin.\n\nDavid J.\n\nOn Sun, Aug 4, 2024 at 3:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Any opinions about whether it's too late to do this in v17?\nPost-beta3 is pretty late, for sure, but maybe we could get\naway with it. And we are fixing a bug here.If this isn't going to appear in the beta3 build I'd say it's probably too late given the target audience for waiting on this is extension authors. If it is going into beta3 then I'd vote to allow it.Feels like this dynamic should be covered as part of our recent attempt to better communicate our policies to our extension authoring community.https://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-GUIDANCE-ABI-MNINOR-VERSIONSSomething like:Namely, beta releases do not constitute a minor release under our policies and updates to our API/ABIs can happen during beta at any point - whether for features newly added in the under- development major version or not. Extension authors are thus encouraged to test their extensions against the RC build at minimum should they wish for their extension to be ready when the initial release comes out. Tests against beta versions are very helpful to all interested parties but there is no guarantee that tests that pass any given beta release will pass when performed against the release candidate. For the release candidate we will use the same patching policy as for a normal minor release. Any exceptions will necessitate a second release candidate.The above wording allows us to put this patch into beta3, which I'd be fine with. But I'd also be fine with adding wording like: \"Changes introduced after the final beta is released for testing will [generally?] 
be limited to fixing items conforming to the Open Item policy.\" Probably favor the latter too by a small margin.David J.",
"msg_date": "Sun, 4 Aug 2024 17:46:53 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction, calling error when executed\n in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sun, Aug 4, 2024 at 3:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Any opinions about whether it's too late to do this in v17?\n>> Post-beta3 is pretty late, for sure, but maybe we could get\n>> away with it. And we are fixing a bug here.\n\n> If this isn't going to appear in the beta3 build I'd say it's probably too\n> late given the target audience for waiting on this is extension authors.\n> If it is going into beta3 then I'd vote to allow it.\n\nNope, it's definitely not going into beta3; it's about two days\ntoo late for that.\n\nI agree fixing it in HEAD only is the more conservative course.\nTo do otherwise, we'd have to rank the #18545 bug as fairly\nimportant, and I'm not sure I buy that given how long it took to\nnotice it. But I was curious to see if anyone felt differently.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2024 20:59:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "On Sun, Aug 4, 2024 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> However, what the farm found is that assign_client_encoding\n> is flat-out broken. It's ignoring the first commandment\n> for GUC hooks, which is \"Thy assign hooks shalt not fail\".\n> (If we didn't need that, there wouldn't be a separation\n> between check hooks and assign hooks in the first place.)\n\nInteresting. Looks like my mistake, dating to\n10c0558ffefcd12bf1d3dc35587eba41d1ce4571. I'm honestly kind of\nsurprised that nobody discovered this problem for 8 years. I would\nhave expected it to cause more problems.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Aug 2024 14:48:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction, calling error when executed\n in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, Aug 4, 2024 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, what the farm found is that assign_client_encoding\n>> is flat-out broken. It's ignoring the first commandment\n>> for GUC hooks, which is \"Thy assign hooks shalt not fail\".\n\n> Interesting. Looks like my mistake, dating to\n> 10c0558ffefcd12bf1d3dc35587eba41d1ce4571. I'm honestly kind of\n> surprised that nobody discovered this problem for 8 years. I would\n> have expected it to cause more problems.\n\nYeah, it's a bit accidental that that's not reachable up to now.\nOr I think it's not reachable, anyway. If we find out differently\nwe can back-patch 0ae5b763e, but for now I refrained.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2024 14:54:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "This commit seems to trigger elog(), not reproducible in the\nparent commit.\n\n6e086fa2e77 Allow parallel workers to cope with a newly-created session user ID.\n\npostgres=# SET min_parallel_table_scan_size=0; CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\nERROR: pg_attribute catalog is missing 26 attribute(s) for relation OID 70321\npostgres=# \\errverbose\nERROR: XX000: pg_attribute catalog is missing 26 attribute(s) for relation OID 70321\nLOCATION: RelationBuildTupleDesc, relcache.c:658\n\nThis is not completely deterministic:\n\npostgres=# CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\nCLUSTER\npostgres=# CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\nCLUSTER\npostgres=# CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\nCLUSTER\npostgres=# CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\nCLUSTER\npostgres=# CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\nERROR: pg_attribute catalog is missing 26 attribute(s) for relation OID 70391\n\nBut I think this will be reproducible in any database with a nontrivial\nnumber of attributes.\n\n\n",
"msg_date": "Tue, 17 Sep 2024 18:47:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction, calling error when executed\n in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> This commit seems to trigger elog(), not reproducible in the\n> parent commit.\n\nYeah, I can reproduce that. Will take a look tomorrow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2024 20:16:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> This commit seems to trigger elog(), not reproducible in the\n> parent commit.\n\n> 6e086fa2e77 Allow parallel workers to cope with a newly-created session user ID.\n\n> postgres=# SET min_parallel_table_scan_size=0; CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\n> ERROR: pg_attribute catalog is missing 26 attribute(s) for relation OID 70321\n\nI've been poking at this all day, and I still have little idea what's\ngoing on. I've added a bunch of throwaway instrumentation, and have\nmanaged to convince myself that the problem is that parallel heap\nscan is broken. The scans done to rebuild pg_attribute's indexes\nseem to sometimes miss heap pages or visit pages twice (in different\nworkers). I have no idea why this is, and even less idea how\n6e086fa2e is provoking it. As you say, the behavior isn't entirely\nreproducible, but I couldn't make it happen at all after reverting\n6e086fa2e's changes in transam/parallel.c, so apparently there is\nsome connection.\n\nAnother possibly useful data point is that for me it reproduces\nfairly well (more than one time in two) on x86_64 Linux, but\nI could not make it happen on macOS ARM64. If it's a race\ncondition, which smells plausible, that's perhaps not hugely\nsurprising.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2024 23:30:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
},
{
"msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> This commit seems to trigger elog(), not reproducible in the\n>> parent commit.\n>> 6e086fa2e77 Allow parallel workers to cope with a newly-created session user ID.\n\n>> postgres=# SET min_parallel_table_scan_size=0; CLUSTER pg_attribute USING pg_attribute_relid_attnum_index;\n>> ERROR: pg_attribute catalog is missing 26 attribute(s) for relation OID 70321\n\n> I've been poking at this all day, and I still have little idea what's\n> going on.\n\nGot it, after a good deal more head-scratching. Here's the relevant\nparts of ParallelWorkerMain:\n\n /*\n * We've changed which tuples we can see, and must therefore invalidate\n * system caches.\n */\n InvalidateSystemCaches();\n\n /*\n * Restore GUC values from launching backend. We can't do this earlier,\n * because GUC check hooks that do catalog lookups need to see the same\n * database state as the leader.\n */\n gucspace = shm_toc_lookup(toc, PARALLEL_KEY_GUC, false);\n RestoreGUCState(gucspace);\n\n ...\n\n /* Restore relmapper state. */\n relmapperspace = shm_toc_lookup(toc, PARALLEL_KEY_RELMAPPER_STATE, false);\n RestoreRelationMap(relmapperspace);\n\nInvalidateSystemCaches blows away the worker's relcache. Then \nRestoreGUCState causes some catalog lookups (tracing shows that\nrestoring default_text_search_config is what triggers this on my\nsetup), and in particular pg_attribute's relcache entry will get\nconstructed to support that. Then we wheel in a new set of\nrelation map entries *without doing anything about what that\nmight invalidate*.\n\nIn the given test case, the globally-visible relmap says that\npg_attribute's relfilenode is, say, XXXX. But we are busy rewriting\nit, so the parent process has an \"active\" relmap entry that says\npg_attribute's relfilenode is YYYY. Given the above, the worker\nprocess will have built a pg_attribute relcache entry that contains\nXXXX, and even though it now knows YYYY is the value it should be\nusing, that information never makes it to the worker's relcache.\n\nThe upshot of this is that when the parallel heap scan machinery\ndoles out some block numbers for the parent process to read, and\nsome other block numbers for the worker to read, the worker is\nreading those block numbers from the pre-clustering copy of\npg_attribute, which most likely doesn't match the post-clustering\nimage. This accounts for the missing and duplicate tuples I was\nseeing in the scan output.\n\nOf course, the reason 6e086fa2e made this visible is that before\nthat, any catalog reads triggered by RestoreGUCState were done\nin an earlier transaction, and then we would blow away the ensuing\nrelcache entries in InvalidateSystemCaches. So there was no bug\nas long as you assume that the \"...\" code doesn't cause any\ncatalog reads. I'm not too sure of that though --- it's certainly\nnot very comfortable to assume that functions like SetCurrentRoleId\nand SetTempNamespaceState will never attempt a catalog lookup.\n\nThe code has another hazard too, which is that this all implies\nthat the GUC-related catalog lookups will be done against the\nglobally-visible relmap state not whatever is active in the parent\nprocess. I have not tried to construct a POC showing that that\ncan give incorrect answers (that is, different from what the\nparent thinks), but it seems plausible that it could.\n\nSo the fix seems clear to me: RestoreRelationMap needs to happen\nbefore anything that could result in catalog lookups. 
I'm kind\nof inclined to move up the adjacent restores of non-transactional\nlow-level stuff too, particularly RestoreReindexState which has\ndirect impact on how catalog lookups are done.\n\nIndependently of that, it's annoying that the parallel heap scan\nmachinery failed to notice that it was handing out block numbers\nfor two different relfilenodes. I'm inclined to see if we can\nput some Asserts in there that would detect that. This particular\nbug would have been far easier to diagnose that way, and it hardly\nseems unlikely that \"worker is reading the wrong relation\" could\nhappen with other mistakes in future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2024 17:35:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18545: \\dt breaks transaction,\n calling error when executed in SET SESSION AUTHORIZATION"
}
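To make the ordering problem concrete, a compressed sketch of the portion of ParallelWorkerMain() under discussion, rearranged the way the message suggests, might look like the following. This is illustrative only and not the committed patch; the PARALLEL_KEY_* constants are private to parallel.c, and error handling and unrelated restore steps are omitted.
```
/* Illustrative only: the restore order described above, as it might look
 * inside parallel.c.  Not the committed fix. */
static void
restore_leader_state_sketch(shm_toc *toc)
{
	char	   *relmapperspace;
	char	   *reindexspace;
	char	   *gucspace;

	/* Relcache entries built under the old snapshot must be flushed first. */
	InvalidateSystemCaches();

	/*
	 * Install the leader's active relation map before anything that can do a
	 * catalog lookup, so mapped catalogs such as pg_attribute resolve to the
	 * relfilenode the leader is actually using (e.g. mid-CLUSTER).
	 */
	relmapperspace = shm_toc_lookup(toc, PARALLEL_KEY_RELMAPPER_STATE, false);
	RestoreRelationMap(relmapperspace);

	/* Reindex state likewise affects how catalog lookups behave. */
	reindexspace = shm_toc_lookup(toc, PARALLEL_KEY_REINDEX_STATE, false);
	RestoreReindexState(reindexspace);

	/*
	 * Only now replay the leader's GUC state: its check hooks (for example,
	 * default_text_search_config) may do catalog lookups, and any relcache
	 * entries they build must already reflect the leader's relmapper state.
	 */
	gucspace = shm_toc_lookup(toc, PARALLEL_KEY_GUC, false);
	RestoreGUCState(gucspace);
}
```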
] |
[
{
"msg_contents": "Hackers,\n\nI wanted to surface a discussion in [1] regarding the expected behavior of\nGROUP BY with VOLATILE expressions. There seems to be a discrepancy between\nhow volatile functions (RANDOM(), also confirmed with TIMEOFDAY()) and\nsubqueries are evaluated in groups. In the examples below, volatile\nfunctions do not always appear to be evaluated per-call (evidenced by\nlooking at EXPLAIN or results) whereas scalar subqueries always appear to\nbe independently evaluated.\n\nBased on the docs, \"A query using a volatile function will re-evaluate the\nfunction at every row where its value is needed,\" it seems that the\nhandling of subqueries is correct and that each call to RANDOM() should be\nevaluated (not the current behavior). But, what is correct/anticipated?\n\n[version: PostgreSQL 16.3]\n\n-- ### grouping volatile functions\n-- single evaluation of RANDOM()\nselect random(), random(), random() group by 1;\n random | random | random\n--------------------+--------------------+--------------------\n 0.5156775158087117 | 0.5156775158087117 | 0.5156775158087117\n(1 row)\n\n-- two evaluations of RANDOM()\nselect random(), random(), random() group by 1, 2;\n random | random | random\n---------------------+---------------------+---------------------\n 0.36612763448670793 | 0.23423805164449374 | 0.36612763448670793\n(1 row)\n\n-- three evaluations of RANDOM()\nselect random(), random(), random() group by 1, 2, 3;\n random | random | random\n--------------------+--------------------+--------------------\n 0.2292929455776751 | 0.6613628224046473 | 0.5367692073422399\n(1 row)\n\n-- single evaluation of RANDOM()\nselect random(), random(), random() group by random();\n random | random | random\n--------------------+--------------------+--------------------\n 0.3069805404158834 | 0.3069805404158834 | 0.3069805404158834\n(1 row)\n\n-- single evaluation of RANDOM()\nselect random(), random(), random() group by random(), random();\n random | random | random\n--------------------+--------------------+--------------------\n 0.2860459945718521 | 0.2860459945718521 | 0.2860459945718521\n(1 row)\n\n-- single evaluation of RANDOM()\nselect random(), random(), random() group by random(), random(), random();\n random | random | random\n--------------------+--------------------+--------------------\n 0.3249129391658361 | 0.3249129391658361 | 0.3249129391658361\n(1 row)\n\n\n-- ### grouping scalar subqueries\n-- each subquery evaluated\nselect (select random()), (select random()), (select random()) group by 1;\n random | random | random\n---------------------+--------------------+--------------------\n 0.30149979064538757 | 0.7911979526441186 | 0.5251471322291046\n(1 row)\n\n-- each subquery evaluated\nselect (select random()), (select random()), (select random()) group by\n(select random());\n random | random | random\n--------------------+--------------------+----------------------\n 0.3411533489925591 | 0.4359004781684166 | 0.018305770511828356\n(1 row)\n\n\n-- ### sample EXPLAINs\n-- two evaluations of RANDOM()\nexplain (verbose, costs off) select random(), random(), random() group by\n1, 2;\n QUERY PLAN\n----------------------------------------------\n HashAggregate\n Output: (random()), (random()), (random())\n Group Key: random(), random()\n -> Result\n Output: random(), random()\n(5 rows)\n\n-- singe evaluation of RANDOM()\nexplain (verbose, costs off) select random(), random(), random() group by\nrandom(), random();\n QUERY PLAN\n----------------------------------------------\n 
HashAggregate\n Output: (random()), (random()), (random())\n Group Key: random()\n -> Result\n Output: random()\n(5 rows)\n\n\n-Paul-\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4_xhRAfy-i%3D%3DnFMZGukw4M%3DOnkfwpfEfiGmAx6a3SYBKw%40mail.gmail.com\n\nHackers,I wanted to surface a discussion in [1] regarding the expected behavior of GROUP BY with VOLATILE expressions. There seems to be a discrepancy between how volatile functions (RANDOM(), also confirmed with TIMEOFDAY()) and subqueries are evaluated in groups. In the examples below, volatile functions do not always appear to be evaluated per-call (evidenced by looking at EXPLAIN or results) whereas scalar subqueries always appear to be independently evaluated.Based on the docs, \"A query using a volatile function will re-evaluate the function at every row where its value is needed,\" it seems that the handling of subqueries is correct and that each call to RANDOM() should be evaluated (not the current behavior). But, what is correct/anticipated?[version: PostgreSQL 16.3]-- ### grouping volatile functions-- single evaluation of RANDOM()select random(), random(), random() group by 1; random | random | random --------------------+--------------------+-------------------- 0.5156775158087117 | 0.5156775158087117 | 0.5156775158087117(1 row)-- two evaluations of RANDOM()select random(), random(), random() group by 1, 2; random | random | random ---------------------+---------------------+--------------------- 0.36612763448670793 | 0.23423805164449374 | 0.36612763448670793(1 row)-- three evaluations of RANDOM()select random(), random(), random() group by 1, 2, 3; random | random | random --------------------+--------------------+-------------------- 0.2292929455776751 | 0.6613628224046473 | 0.5367692073422399(1 row)-- single evaluation of RANDOM()select random(), random(), random() group by random(); random | random | random --------------------+--------------------+-------------------- 0.3069805404158834 | 0.3069805404158834 | 0.3069805404158834(1 row)-- single evaluation of RANDOM()select random(), random(), random() group by random(), random(); random | random | random --------------------+--------------------+-------------------- 0.2860459945718521 | 0.2860459945718521 | 0.2860459945718521(1 row)-- single evaluation of RANDOM()select random(), random(), random() group by random(), random(), random(); random | random | random --------------------+--------------------+-------------------- 0.3249129391658361 | 0.3249129391658361 | 0.3249129391658361(1 row)-- ### grouping scalar subqueries-- each subquery evaluatedselect (select random()), (select random()), (select random()) group by 1; random | random | random ---------------------+--------------------+-------------------- 0.30149979064538757 | 0.7911979526441186 | 0.5251471322291046(1 row)-- each subquery evaluatedselect (select random()), (select random()), (select random()) group by (select random()); random | random | random --------------------+--------------------+---------------------- 0.3411533489925591 | 0.4359004781684166 | 0.018305770511828356(1 row)-- ### sample EXPLAINs-- two evaluations of RANDOM()explain (verbose, costs off) select random(), random(), random() group by 1, 2; QUERY PLAN ---------------------------------------------- HashAggregate Output: (random()), (random()), (random()) Group Key: random(), random() -> Result Output: random(), random()(5 rows)-- singe evaluation of RANDOM()explain (verbose, costs off) select random(), random(), random() group by random(), 
random(); QUERY PLAN ---------------------------------------------- HashAggregate Output: (random()), (random()), (random()) Group Key: random() -> Result Output: random()(5 rows)-Paul-[1] https://www.postgresql.org/message-id/CAMbWs4_xhRAfy-i%3D%3DnFMZGukw4M%3DOnkfwpfEfiGmAx6a3SYBKw%40mail.gmail.com",
"msg_date": "Fri, 19 Jul 2024 07:20:33 -0700",
"msg_from": "Paul George <p.a.george19@gmail.com>",
"msg_from_op": true,
"msg_subject": "behavior of GROUP BY with VOLATILE expressions"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 7:20 AM Paul George <p.a.george19@gmail.com> wrote:\n\n>\n> I wanted to surface a discussion in [1] regarding the expected behavior of\n> GROUP BY with VOLATILE expressions. There seems to be a discrepancy between\n> how volatile functions (RANDOM(), also confirmed with TIMEOFDAY()) and\n> subqueries are evaluated in groups. In the examples below, volatile\n> functions do not always appear to be evaluated per-call (evidenced by\n> looking at EXPLAIN or results) whereas scalar subqueries always appear to\n> be independently evaluated.\n>\n> Based on the docs, \"A query using a volatile function will re-evaluate\n> the function at every row where its value is needed,\" it seems that the\n> handling of subqueries is correct and that each call to RANDOM() should be\n> evaluated (not the current behavior). But, what is correct/anticipated?\n>\n>\nThe observed behaviors are unlikely to change. Prior discussions can be\nfound regarding this:\n\nhttps://www.postgresql.org/message-id/flat/CZHAF947QQQO.27MAUK2SVMBXW%40nmfay.com\n\nDavid J.\n\nOn Fri, Jul 19, 2024 at 7:20 AM Paul George <p.a.george19@gmail.com> wrote:I wanted to surface a discussion in [1] regarding the expected behavior of GROUP BY with VOLATILE expressions. There seems to be a discrepancy between how volatile functions (RANDOM(), also confirmed with TIMEOFDAY()) and subqueries are evaluated in groups. In the examples below, volatile functions do not always appear to be evaluated per-call (evidenced by looking at EXPLAIN or results) whereas scalar subqueries always appear to be independently evaluated.Based on the docs, \"A query using a volatile function will re-evaluate the function at every row where its value is needed,\" it seems that the handling of subqueries is correct and that each call to RANDOM() should be evaluated (not the current behavior). But, what is correct/anticipated?The observed behaviors are unlikely to change. Prior discussions can be found regarding this:https://www.postgresql.org/message-id/flat/CZHAF947QQQO.27MAUK2SVMBXW%40nmfay.comDavid J.",
"msg_date": "Fri, 19 Jul 2024 07:47:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: behavior of GROUP BY with VOLATILE expressions"
},
{
"msg_contents": "Great, thanks for the links and useful past discussions! I figured I wasn't\nthe first to stumble across this, and it's interesting to see the issue\narise with ORDER BY [VOLATILE FUNC] as well.\n\nMy question was not so much about changing behavior as it was about\nunderstanding what is desired, especially in light of the fact that\nsubqueries behave differently. From my reading of the links you provided,\nit seems that even the notion of \"desired\" here is itself dubious and that\nthere is a case for reevaluating RANDOM() everywhere and a case for not\ndoing that. Given this murkiness, is it fair then to say that drawing\nparallels between how GROUP BY subquery is handled is moot?\n\n-Paul-\n\nOn Fri, Jul 19, 2024 at 7:48 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Fri, Jul 19, 2024 at 7:20 AM Paul George <p.a.george19@gmail.com>\n> wrote:\n>\n>>\n>> I wanted to surface a discussion in [1] regarding the expected behavior\n>> of GROUP BY with VOLATILE expressions. There seems to be a discrepancy\n>> between how volatile functions (RANDOM(), also confirmed with TIMEOFDAY())\n>> and subqueries are evaluated in groups. In the examples below, volatile\n>> functions do not always appear to be evaluated per-call (evidenced by\n>> looking at EXPLAIN or results) whereas scalar subqueries always appear to\n>> be independently evaluated.\n>>\n>> Based on the docs, \"A query using a volatile function will re-evaluate\n>> the function at every row where its value is needed,\" it seems that the\n>> handling of subqueries is correct and that each call to RANDOM() should be\n>> evaluated (not the current behavior). But, what is correct/anticipated?\n>>\n>>\n> The observed behaviors are unlikely to change. Prior discussions can be\n> found regarding this:\n>\n>\n> https://www.postgresql.org/message-id/flat/CZHAF947QQQO.27MAUK2SVMBXW%40nmfay.com\n>\n> David J.\n>\n>\n\nGreat, thanks for the links and useful past discussions! I figured I wasn't the first to stumble across this, and it's interesting to see the issue arise with ORDER BY [VOLATILE FUNC] as well.My question was not so much about changing behavior as it was about understanding what is desired, especially in light of the fact that subqueries behave differently. From my reading of the links you provided, it seems that even the notion of \"desired\" here is itself dubious and that there is a case for reevaluating RANDOM() everywhere and a case for not doing that. Given this murkiness, is it fair then to say that drawing parallels between how GROUP BY subquery is handled is moot?-Paul-On Fri, Jul 19, 2024 at 7:48 AM David G. Johnston <david.g.johnston@gmail.com> wrote:On Fri, Jul 19, 2024 at 7:20 AM Paul George <p.a.george19@gmail.com> wrote:I wanted to surface a discussion in [1] regarding the expected behavior of GROUP BY with VOLATILE expressions. There seems to be a discrepancy between how volatile functions (RANDOM(), also confirmed with TIMEOFDAY()) and subqueries are evaluated in groups. In the examples below, volatile functions do not always appear to be evaluated per-call (evidenced by looking at EXPLAIN or results) whereas scalar subqueries always appear to be independently evaluated.Based on the docs, \"A query using a volatile function will re-evaluate the function at every row where its value is needed,\" it seems that the handling of subqueries is correct and that each call to RANDOM() should be evaluated (not the current behavior). 
But, what is correct/anticipated?The observed behaviors are unlikely to change. Prior discussions can be found regarding this:https://www.postgresql.org/message-id/flat/CZHAF947QQQO.27MAUK2SVMBXW%40nmfay.comDavid J.",
"msg_date": "Fri, 19 Jul 2024 14:21:05 -0700",
"msg_from": "Paul George <p.a.george19@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: behavior of GROUP BY with VOLATILE expressions"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 2:21 PM Paul George <p.a.george19@gmail.com> wrote:\n\n> Great, thanks for the links and useful past discussions! I figured I\n> wasn't the first to stumble across this, and it's interesting to see the\n> issue arise with ORDER BY [VOLATILE FUNC] as well.\n>\n> My question was not so much about changing behavior as it was about\n> understanding what is desired, especially in light of the fact that\n> subqueries behave differently. From my reading of the links you provided,\n> it seems that even the notion of \"desired\" here is itself dubious and that\n> there is a case for reevaluating RANDOM() everywhere and a case for not\n> doing that. Given this murkiness, is it fair then to say that drawing\n> parallels between how GROUP BY subquery is handled is moot?\n>\n\nOnly now just grasping that you are trying to group something that is\ndefinitionally random. That just doesn't make sense to me. Grouping is\nfor categorical data (loosely defined, something like Invoice# arguably\ncounts as a category if you are looking at invoice details.)\n\nI'll stick with: this whole area, implementation-wise, is going to remain\nstatus-quo. If you've got ideas for documenting it better hopefully a\npatch goes in at some point. Mostly that can be done black-box style -\ninputs and outputs, not code reading.\n\nDavid J.\n\nOn Fri, Jul 19, 2024 at 2:21 PM Paul George <p.a.george19@gmail.com> wrote:Great, thanks for the links and useful past discussions! I figured I wasn't the first to stumble across this, and it's interesting to see the issue arise with ORDER BY [VOLATILE FUNC] as well.My question was not so much about changing behavior as it was about understanding what is desired, especially in light of the fact that subqueries behave differently. From my reading of the links you provided, it seems that even the notion of \"desired\" here is itself dubious and that there is a case for reevaluating RANDOM() everywhere and a case for not doing that. Given this murkiness, is it fair then to say that drawing parallels between how GROUP BY subquery is handled is moot?Only now just grasping that you are trying to group something that is definitionally random. That just doesn't make sense to me. Grouping is for categorical data (loosely defined, something like Invoice# arguably counts as a category if you are looking at invoice details.)I'll stick with: this whole area, implementation-wise, is going to remain status-quo. If you've got ideas for documenting it better hopefully a patch goes in at some point. Mostly that can be done black-box style - inputs and outputs, not code reading.David J.",
"msg_date": "Fri, 19 Jul 2024 14:27:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: behavior of GROUP BY with VOLATILE expressions"
},
{
"msg_contents": "David:\n\n>Only now just grasping that you are trying to group something that is\ndefinitionally random. That just doesn't make sense to me.\n\nOh, sorry for the confusion. Yeah, totally. I didn't mean to draw specific\nattention to GROUP BY -- as you've pointed out elsewhere this issue also\nexists with ORDER BY.\n\nTo clean this up a bit, it's specifically the comparison of how volatile\nfunctions and expressions are evaluated differently here (covered in prior\nlinks you've provided),\n\npostgres=# select random(), random() order by random();\n random | random\n-------------------+-------------------\n 0.956989895473876 | 0.956989895473876\n(1 row)\n\nand, here,\n\npostgres=# select (select random()), (select random()) order by (select\nrandom());\n random | random\n--------------------+--------------------\n 0.2872914386383745 | 0.8976525075618966\n(1 row)\n\nRegarding documentation, I think those changes would be useful. There's\nthis suggestion\n\n\"An expression or subexpression in\nthe SELECT list that matches an ORDER BY or GROUP BY item is taken to represent\nthe same value that was sorted or grouped by, even when the\n(sub)expression is volatile\".\n\nand this one,\n\n\"A side-effect of this feature is that ORDER BY expressions containing\nvolatile functions will execute the volatile function only once for the\nentire row; thus any column expressions using the same function will reuse\nthe same function result.\"\n\nBut I don't think either cover the additional, albeit nuanced, case of\nvolatile scalar subqueries.\n\n-Paul-\n\nOn Fri, Jul 19, 2024 at 2:28 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Fri, Jul 19, 2024 at 2:21 PM Paul George <p.a.george19@gmail.com>\n> wrote:\n>\n>> Great, thanks for the links and useful past discussions! I figured I\n>> wasn't the first to stumble across this, and it's interesting to see the\n>> issue arise with ORDER BY [VOLATILE FUNC] as well.\n>>\n>> My question was not so much about changing behavior as it was about\n>> understanding what is desired, especially in light of the fact that\n>> subqueries behave differently. From my reading of the links you provided,\n>> it seems that even the notion of \"desired\" here is itself dubious and that\n>> there is a case for reevaluating RANDOM() everywhere and a case for not\n>> doing that. Given this murkiness, is it fair then to say that drawing\n>> parallels between how GROUP BY subquery is handled is moot?\n>>\n>\n> Only now just grasping that you are trying to group something that is\n> definitionally random. That just doesn't make sense to me. Grouping is\n> for categorical data (loosely defined, something like Invoice# arguably\n> counts as a category if you are looking at invoice details.)\n>\n> I'll stick with: this whole area, implementation-wise, is going to remain\n> status-quo. If you've got ideas for documenting it better hopefully a\n> patch goes in at some point. Mostly that can be done black-box style -\n> inputs and outputs, not code reading.\n>\n> David J.\n>\n>\n\nDavid:>Only now just grasping that you are trying to group something that is definitionally random. That just doesn't make sense to me.Oh, sorry for the confusion. Yeah, totally. 
I didn't mean to draw specific attention to GROUP BY -- as you've pointed out elsewhere this issue also exists with ORDER BY.To clean this up a bit, it's specifically the comparison of how volatile functions and expressions are evaluated differently here (covered in prior links you've provided),postgres=# select random(), random() order by random(); random | random -------------------+------------------- 0.956989895473876 | 0.956989895473876(1 row)and, here,postgres=# select (select random()), (select random()) order by (select random()); random | random --------------------+-------------------- 0.2872914386383745 | 0.8976525075618966(1 row)Regarding documentation, I think those changes would be useful. There's this suggestion\"An expression or subexpression inthe SELECT list that matches an ORDER BY or GROUP BY item is taken to represent the same value that was sorted or grouped by, even when the(sub)expression is volatile\".and this one,\"A side-effect of this feature is that ORDER BY expressions containingvolatile functions will execute the volatile function only once for theentire row; thus any column expressions using the same function will reusethe same function result.\"But I don't think either cover the additional, albeit nuanced, case of volatile scalar subqueries.-Paul-On Fri, Jul 19, 2024 at 2:28 PM David G. Johnston <david.g.johnston@gmail.com> wrote:On Fri, Jul 19, 2024 at 2:21 PM Paul George <p.a.george19@gmail.com> wrote:Great, thanks for the links and useful past discussions! I figured I wasn't the first to stumble across this, and it's interesting to see the issue arise with ORDER BY [VOLATILE FUNC] as well.My question was not so much about changing behavior as it was about understanding what is desired, especially in light of the fact that subqueries behave differently. From my reading of the links you provided, it seems that even the notion of \"desired\" here is itself dubious and that there is a case for reevaluating RANDOM() everywhere and a case for not doing that. Given this murkiness, is it fair then to say that drawing parallels between how GROUP BY subquery is handled is moot?Only now just grasping that you are trying to group something that is definitionally random. That just doesn't make sense to me. Grouping is for categorical data (loosely defined, something like Invoice# arguably counts as a category if you are looking at invoice details.)I'll stick with: this whole area, implementation-wise, is going to remain status-quo. If you've got ideas for documenting it better hopefully a patch goes in at some point. Mostly that can be done black-box style - inputs and outputs, not code reading.David J.",
"msg_date": "Fri, 19 Jul 2024 16:32:22 -0700",
"msg_from": "Paul George <p.a.george19@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: behavior of GROUP BY with VOLATILE expressions"
}
] |
[
{
"msg_contents": "Hackers,\n\nI’m trying to understand the standard terms for extension libraries. There seem too a bewildering number of terms used to refer to a shared object library, for example:\n\n* LOAD[1]:\n * “shared library”\n * “shared library file”\n* dynamic_library_path[2]:\n * “dynamically loadable module”\n* xfunc-c[3]:\n * “dynamically loadable object”\n * “shared library”\n * “loadable object”\n * “loadable object file”\n * “object file”\n * “dynamically loaded object file”\n* pg_config[5]:\n * “object code libraries” (static?)\n * “dynamically loadable modules”\n* PGXS[4]:\n * “MODULES”\n * “shared-library objects”\n * “shared library”\n\nBonus confusion points to PGXS for MODULEDIR having nothing to do with MODULES.\n\nWhat is the standard term for these things? Or perhaps, what *should* it be? “Module”? “Library”? “Object”? “Shared ____”? “Dynamic ____”?\n\nWould it be useful to decide on one term (perhaps with “file” appended where it refers to a file that contains one of these things) and standardize the docs?\n\nConfusedly yours,\n\nDavid\n\n[1]: https://www.postgresql.org/docs/current/sql-load.html\n[2]: https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DYNAMIC-LIBRARY-PATH\n[3]: https://www.postgresql.org/docs/current/xfunc-c.html\n[4]: https://www.postgresql.org/docs/current/extend-pgxs.html\n[5]: https://www.postgresql.org/docs/current/app-pgconfig.html\n\n\n\n",
"msg_date": "Fri, 19 Jul 2024 15:27:49 -0400",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": true,
"msg_subject": "DSO Terms Galore"
},
{
"msg_contents": "On Fri, Jul 19, 2024 at 03:27:49PM -0400, David E. Wheeler wrote:\n> I�m trying to understand the standard terms for extension libraries.\n> There seem too a bewildering number of terms used to refer to a shared\n> object library, for example:\n>\n> [...] \n> \n> What is the standard term for these things? Or perhaps, what *should* it\n> be? \"Module\"? \"Library\"? \"Object\"? \"Shared ____\"? \"Dynamic ____\"?\n> \n> Would it be useful to decide on one term (perhaps with \"file\" appended\n> where it refers to a file that contains one of these things) and\n> standardize the docs?\n\nThe lack of consistent terminology seems at least potentially confusing for\nreaders. My first reaction is that \"shared library\" is probably fine.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 19 Jul 2024 14:46:28 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DSO Terms Galore"
},
{
"msg_contents": "On Jul 19, 2024, at 15:46, Nathan Bossart <nathandbossart@gmail.com> wrote:\n\n> The lack of consistent terminology seems at least potentially confusing for\n> readers. My first reaction is that \"shared library\" is probably fine.\n\nThat’s the direction I was leaning, as well, but I thought I heard somewhere that the project used the term “module” for this feature specifically. That would be a bit nicer for the new PGXN Meta Spec revision I’m working on[1], where these three different types of things could be usefully separated:\n\n* extensions: CREATE EXTENSION extensions\n* modules: loadable modules for extensions, hooks, and workers (anything else?)\n* apps: Programs and scripts like pg_top, pgAdmin, or pg_partman scripts[2]\n\nHere the term “libraries” would be a little over-generic, and “share_libraries” longer than I'd like (these are JSON object keys).\n\nBest,\n\nDavid\n\n[1]: https://github.com/pgxn/rfcs/pull/3\n[2]: https://github.com/pgpartman/pg_partman/tree/master/bin/common\n\n\n\n",
"msg_date": "Fri, 19 Jul 2024 16:15:28 -0400",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": true,
"msg_subject": "Re: DSO Terms Galore"
},
{
"msg_contents": "On 19.07.24 21:27, David E. Wheeler wrote:\n> I’m trying to understand the standard terms for extension libraries. There seem too a bewildering number of terms used to refer to a shared object library, for example:\n> \n> * LOAD[1]:\n> * “shared library”\n> * “shared library file”\n> * dynamic_library_path[2]:\n> * “dynamically loadable module”\n> * xfunc-c[3]:\n> * “dynamically loadable object”\n> * “shared library”\n> * “loadable object”\n> * “loadable object file”\n> * “object file”\n> * “dynamically loaded object file”\n> * pg_config[5]:\n> * “object code libraries” (static?)\n> * “dynamically loadable modules”\n> * PGXS[4]:\n> * “MODULES”\n> * “shared-library objects”\n> * “shared library”\n\nI think in the POSIX-ish realm, the best term is \"dynamically loadable \nlibrary\". It's a library, because it contains functions you can, uh, \nborrow, just like a static library. And it's dynamically loadable, as \nopposed to being loaded in a fixed manner at startup time.\n\nAlso, the \"dl\" in dlopen() etc. presumably stands for dynamic-something \nload-something.\n\nLibtool uses the term \"dlopened module\" for this, and the command-line \noption is -module. \n(https://www.gnu.org/software/libtool/manual/libtool.html#Dlopened-modules)\n\nMeson uses shared_module() for this. (It has shared_library() and \nstatic_library() for things like libpq.)\n\nThings like \"object\" or \"object file\" or probably wrong-ish. I \nunderstand an object file to be a .o file, which you can't dlopen directly.\n\nShared library is semi-ok because on many platforms, link-time shared \nlibraries (like libpq) and dynamically loadable libraries (like plpgsql) \nare the same file format. But on some they're not, so it leads to \nconfusion.\n\nI think we can unify this around terms like \"dynamically loadable \nlibrary\" and \"dynamically loadable module\" (or \"loaded\" in cases where \nit's talking about a file that has already been loaded).\n\n\n",
"msg_date": "Tue, 23 Jul 2024 13:26:31 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: DSO Terms Galore"
},
{
"msg_contents": "On Jul 23, 2024, at 07:26, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> Things like \"object\" or \"object file\" or probably wrong-ish. I understand an object file to be a .o file, which you can't dlopen directly.\n\nAgreed.\n\nAnother option, however, is “dynamically shared object” (DSO), which corresponds to the usual *nix extension, .so. I think I know the term most from Apache. It’s curious that I didn’t run across it while perusing the Postgres docs.\n\n> I think we can unify this around terms like \"dynamically loadable library\" and \"dynamically loadable module\" (or \"loaded\" in cases where it's talking about a file that has already been loaded).\n\n+1 for “dynamically loadable module” and, in common usage, “module”, since I don’t think it would be confused for anything else. “dynamically loadable library” would either have to always be used in full --- because “library” can be static, too --- or to “DLL”, which has strong Windows associations.\n\nBest,\n\nDavid\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 10:26:10 -0400",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": true,
"msg_subject": "Re: DSO Terms Galore"
}
] |
[
{
"msg_contents": "You're right, thanks!\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nWen Yi\r\nwen-yi@qq.com\r\n\r\n\r\n\r\n\r\n\r\n\r\n---Original---\r\nFrom: \"David G. Johnston\"<david.g.johnston@gmail.com>\r\nDate: Fri, Jul 19, 2024 21:04 PM\r\nTo: \"Wen Yi\"<wen-yi@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: How can udf c function return table, not the rows?\r\n\r\n\r\nOn Thursday, July 18, 2024, Wen Yi <wen-yi@qq.com> wrote:\r\n\r\n pg_get_functiondef \r\n----------------------------------------------------------------------\r\n\r\n\r\n\r\nIn my expectations, it should be:\r\n\r\n\r\n oid | pg_get_functiondef \r\n------+------------------------------------------------------------------------\r\n\r\n\r\nCan someone give my some advice?\r\n\r\n\r\n\r\n\r\nWrite:\r\n\r\n\r\nSelect * from function_call()\r\n\r\n\r\nInstead of\r\n\r\n\r\nSelect function_call() \r\n\r\n\r\nGuessing a bit since you never did show the first query.\r\n\r\n\r\nDavid J.\nYou're right, thanks!Wen Yiwen-yi@qq.com---Original---From: \"David G. Johnston\"<david.g.johnston@gmail.com>Date: Fri, Jul 19, 2024 21:04 PMTo: \"Wen Yi\"<wen-yi@qq.com>;Cc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;Subject: How can udf c function return table, not the rows?On Thursday, July 18, 2024, Wen Yi <wen-yi@qq.com> wrote: pg_get_functiondef ----------------------------------------------------------------------In my expectations, it should be: oid | pg_get_functiondef ------+------------------------------------------------------------------------Can someone give my some advice?Write:Select * from function_call()Instead ofSelect function_call() Guessing a bit since you never did show the first query.David J.",
"msg_date": "Sat, 20 Jul 2024 07:26:46 +0800",
"msg_from": "\"=?utf-8?B?V2VuIFlp?=\" <wen-yi@qq.com>",
"msg_from_op": true,
"msg_subject": "Re:How can udf c function return table, not the rows?"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nRecently I came to an issue about logical replicating very big\ntransactions. Since we already have logical_decoding_work_mem to keep\nthe memory usage, there is no risk of OOM during decoding. However, the\nmemory usage still goes out of control in 'Tuples' memory context of\nreorder buffer. It seems that when restoring the spilled transactions\nfrom disk, the memory usage is still limited by max_changes_in_memory\nwhich is hard coded to 4096 like what decoding does before v13.\n\nFor big transactions, we have already supported streaming mode since\nv14, which should solve this issue, but using streaming mode relies on\nthe subscriptor's support. There are still a lot of PostgreSQL running\nv12/13 in production, or maybe v11 or older even though EOLed. Also,\nthere are a lot of CDCs which logical-replicates PostgreSQL seem not\nsupport streaming either.\n\nWould it be possible to make max_changes_in_memory a GUC so it can be\nadjusted dynamically? Make the default value 4096 as what current is.\nWhen coming with big transactions on memory-constrained machine, at\nleast we can adjust max_changes_in_memory to a lower value to make\nlogical WAL sender passing through this transaction. Or WAL sender may\nget kill -9 and recovery is needed. After recovery, WAL sender needs to\nrestart from a point before this transaction starts, and keep this loop\nwithout anything useful. It would never have a chance to pass through\nthis transaction except adding more memory to the machine, which is\nusually not practical in reality.\n\nSincerely, Jingtang\n\nHi hackers.Recently I came to an issue about logical replicating very bigtransactions. Since we already have logical_decoding_work_mem to keepthe memory usage, there is no risk of OOM during decoding. However, thememory usage still goes out of control in 'Tuples' memory context ofreorder buffer. It seems that when restoring the spilled transactionsfrom disk, the memory usage is still limited by max_changes_in_memorywhich is hard coded to 4096 like what decoding does before v13.For big transactions, we have already supported streaming mode sincev14, which should solve this issue, but using streaming mode relies onthe subscriptor's support. There are still a lot of PostgreSQL runningv12/13 in production, or maybe v11 or older even though EOLed. Also,there are a lot of CDCs which logical-replicates PostgreSQL seem notsupport streaming either.Would it be possible to make max_changes_in_memory a GUC so it can beadjusted dynamically? Make the default value 4096 as what current is.When coming with big transactions on memory-constrained machine, atleast we can adjust max_changes_in_memory to a lower value to makelogical WAL sender passing through this transaction. Or WAL sender mayget kill -9 and recovery is needed. After recovery, WAL sender needs torestart from a point before this transaction starts, and keep this loopwithout anything useful. It would never have a chance to pass throughthis transaction except adding more memory to the machine, which isusually not practical in reality.Sincerely, Jingtang",
"msg_date": "Sun, 21 Jul 2024 13:19:22 +0800",
"msg_from": "Jingtang Zhang <mrdrivingduck@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make reorder buffer max_changes_in_memory adjustable?"
},
{
"msg_contents": "On 7/21/24 07:19, Jingtang Zhang wrote:\n> Hi hackers.\n> \n> Recently I came to an issue about logical replicating very big\n> transactions. Since we already have logical_decoding_work_mem to keep\n> the memory usage, there is no risk of OOM during decoding. However, the\n> memory usage still goes out of control in 'Tuples' memory context of\n> reorder buffer. It seems that when restoring the spilled transactions\n> from disk, the memory usage is still limited by max_changes_in_memory\n> which is hard coded to 4096 like what decoding does before v13.\n> \n> For big transactions, we have already supported streaming mode since\n> v14, which should solve this issue, but using streaming mode relies on\n> the subscriptor's support. There are still a lot of PostgreSQL running\n> v12/13 in production, or maybe v11 or older even though EOLed. Also,\n> there are a lot of CDCs which logical-replicates PostgreSQL seem not\n> support streaming either.\n> \n> Would it be possible to make max_changes_in_memory a GUC so it can be\n> adjusted dynamically? Make the default value 4096 as what current is.\n> When coming with big transactions on memory-constrained machine, at\n> least we can adjust max_changes_in_memory to a lower value to make\n> logical WAL sender passing through this transaction. Or WAL sender may\n> get kill -9 and recovery is needed. After recovery, WAL sender needs to\n> restart from a point before this transaction starts, and keep this loop\n> without anything useful. It would never have a chance to pass through\n> this transaction except adding more memory to the machine, which is\n> usually not practical in reality.\n> \n\nTheoretically, yes, we could make max_changes_in_memory a GUC, but it's\nnot clear to me how would that help 12/13, because there's ~0% chance\nwe'd backpatch that ...\n\nBut even for master, is GUC really the appropriate solution? It's still\na manual action, so if things go wrong some human has to connect, try a\nsetting a lower value, which might or might not work, etc.\n\nWouldn't it be better to have adjusts the value automatically, somehow?\nFor example, before restoring the changes, we could count the number of\ntransactions, and set it to 4096/ntransactions or something like that.\nOr do something smarter by estimating tuple size, to count it in the\nlogical__decoding_work_mem budget.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 21 Jul 2024 11:51:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make reorder buffer max_changes_in_memory adjustable?"
},
{
"msg_contents": "Thanks, Tomas.\n\n> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's\n> not clear to me how would that help 12/13, because there's ~0% chance\n> we'd backpatch that ...\n\nWhat I mean is not about back-patch work. Things should happen on publisher\nside?\n\nConsider when the publisher is a PostgreSQL v14+~master (with streaming\nsupport) and subscriber is a 12/13 where streaming is not supported, the\npublisher\nwould still have the risk of OOM. The same thing should happen when we use a\nv14+~master as publisher and a whatever open source CDC as subscriber.\n\n> Wouldn't it be better to have adjusts the value automatically, somehow?\n> For example, before restoring the changes, we could count the number of\n> transactions, and set it to 4096/ntransactions or something like that.\n> Or do something smarter by estimating tuple size, to count it in the\n> logical__decoding_work_mem budget.\n\nYes, I think this issue should have been solved when\nlogical_decoding_work_mem\nwas initially been introduced, but it didn't. There could be some reasons\nlike\nsub-transaction stuff which has been commented in the header of\nreorderbuffer.c.\n\nregards, Jingtang\n\nThanks, Tomas.> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's> not clear to me how would that help 12/13, because there's ~0% chance> we'd backpatch that ...What I mean is not about back-patch work. Things should happen on publisherside?Consider when the publisher is a PostgreSQL v14+~master (with streamingsupport) and subscriber is a 12/13 where streaming is not supported, the publisherwould still have the risk of OOM. The same thing should happen when we use av14+~master as publisher and a whatever open source CDC as subscriber.> Wouldn't it be better to have adjusts the value automatically, somehow?> For example, before restoring the changes, we could count the number of> transactions, and set it to 4096/ntransactions or something like that.> Or do something smarter by estimating tuple size, to count it in the> logical__decoding_work_mem budget.Yes, I think this issue should have been solved when logical_decoding_work_memwas initially been introduced, but it didn't. There could be some reasons likesub-transaction stuff which has been commented in the header of reorderbuffer.c.regards, Jingtang",
"msg_date": "Mon, 22 Jul 2024 11:28:56 +0800",
"msg_from": "Jingtang Zhang <mrdrivingduck@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make reorder buffer max_changes_in_memory adjustable?"
},
{
"msg_contents": "\nOn 7/22/24 05:28, Jingtang Zhang wrote:\n> Thanks, Tomas.\n> \n>> Theoretically, yes, we could make max_changes_in_memory a GUC, but it's\n>> not clear to me how would that help 12/13, because there's ~0% chance\n>> we'd backpatch that ...\n> \n> What I mean is not about back-patch work. Things should happen on publisher\n> side?\n> \n> Consider when the publisher is a PostgreSQL v14+~master (with streaming\n> support) and subscriber is a 12/13 where streaming is not supported, the\n> publisher\n> would still have the risk of OOM. The same thing should happen when we use a\n> v14+~master as publisher and a whatever open source CDC as subscriber.\n> \n\nYes, you're right - if we're talking about mixed setups with just the\nsubscriber running the old version, then it would benefit from this\nimprovement even without backpatching.\n\n>> Wouldn't it be better to have adjusts the value automatically, somehow?\n>> For example, before restoring the changes, we could count the number of\n>> transactions, and set it to 4096/ntransactions or something like that.\n>> Or do something smarter by estimating tuple size, to count it in the\n>> logical__decoding_work_mem budget.\n> \n> Yes, I think this issue should have been solved when\n> logical_decoding_work_mem\n> was initially been introduced, but it didn't. There could be some reasons\n> like\n> sub-transaction stuff which has been commented in the header of\n> reorderbuffer.c.\n> \n\nTrue, but few patches are perfect/complete from V1. There's often stuff\nthat's unlikely to happen, left for future improvements. And this is one\nof those cases, I believe. The fact that the comment even mentions this\nis a sign the developers considered this, and chose to ignore it.\n\nThat being said, I think it'd be nice to improve this, and I'm willing\nto take a look if someone prepares a patch. But I don't think making\nthis a GUC is the right approach - it's the simplest patch, but every\nnew GUC just makes the database harder to manage.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 22 Jul 2024 11:46:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make reorder buffer max_changes_in_memory adjustable?"
}
] |
[
{
"msg_contents": "\nI noticed this when working on the PostgreSQL::Test::Session project I \nhave in hand. All the tests pass except occasionally the xid_wraparound \ntests fail. It's not always the same test script that fails either. I \ntried everything but couldn't make the failure stop. So then I switched \nout my patch so it's running on plain master and set things running in a \nloop. Lo and behold it can be relied on to fail after only a few \niterations.\n\nIn the latest iteration the failure looks like this\n\n\nstderr:\n# poll_query_until timed out executing this query:\n#\n# SELECT NOT EXISTS (\n# SELECT *\n# FROM pg_database\n# WHERE age(datfrozenxid) > \ncurrent_setting('autovacuum_freeze_max_age')::int)\n#\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 1.\n\n(test program exited with status code 29)\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n\n\nSummary of Failures:\n\n295/295 postgresql:xid_wraparound / xid_wraparound/001_emergency_vacuum \nERROR 211.76s exit status 29\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Jul 2024 12:20:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "xid_wraparound tests intermittent failure."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I noticed this when working on the PostgreSQL::Test::Session project I \n> have in hand. All the tests pass except occasionally the xid_wraparound \n> tests fail. It's not always the same test script that fails either. I \n> tried everything but couldn't make the failure stop. So then I switched \n> out my patch so it's running on plain master and set things running in a \n> loop. Lo and behold it can be relied on to fail after only a few \n> iterations.\n\nI have been noticing xid_wraparound failures in the buildfarm too.\nThey seemed quite infrequent, but it wasn't till just now that\nI realized that xid_wraparound is not run by default. (You have to\nput \"xid_wraparound\" in PG_TEST_EXTRA to enable it.) AFAICS the\nonly buildfarm animals that have enabled it are dodo and perentie.\ndodo is failing this test fairly often:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=dodo&br=HEAD\n\nperentie doesn't seem to be having a problem, but I will bet that\npart of the reason is it's running with cranked-up timeouts:\n\n 'build_env' => {\n 'PG_TEST_EXTRA' => 'xid_wraparound',\n 'PG_TEST_TIMEOUT_DEFAULT' => '360'\n },\n\nOne thing that seems quite interesting is that the test seems to\ntake about 10 minutes when successful on dodo, but when it fails\nit's twice that. Why the instability? (Perhaps dodo has highly\nvariable background load, and the thing simply times out in some\nruns but not others?)\n\nLocally, I've not managed to reproduce the failure yet; so perhaps\nthere is some platform dependency. What are you testing on?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jul 2024 13:34:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "\nOn 2024-07-21 Su 1:34 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I noticed this when working on the PostgreSQL::Test::Session project I\n>> have in hand. All the tests pass except occasionally the xid_wraparound\n>> tests fail. It's not always the same test script that fails either. I\n>> tried everything but couldn't make the failure stop. So then I switched\n>> out my patch so it's running on plain master and set things running in a\n>> loop. Lo and behold it can be relied on to fail after only a few\n>> iterations.\n> I have been noticing xid_wraparound failures in the buildfarm too.\n> They seemed quite infrequent, but it wasn't till just now that\n> I realized that xid_wraparound is not run by default. (You have to\n> put \"xid_wraparound\" in PG_TEST_EXTRA to enable it.) AFAICS the\n> only buildfarm animals that have enabled it are dodo and perentie.\n> dodo is failing this test fairly often:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=dodo&br=HEAD\n>\n> perentie doesn't seem to be having a problem, but I will bet that\n> part of the reason is it's running with cranked-up timeouts:\n>\n> 'build_env' => {\n> 'PG_TEST_EXTRA' => 'xid_wraparound',\n> 'PG_TEST_TIMEOUT_DEFAULT' => '360'\n> },\n>\n> One thing that seems quite interesting is that the test seems to\n> take about 10 minutes when successful on dodo, but when it fails\n> it's twice that. Why the instability? (Perhaps dodo has highly\n> variable background load, and the thing simply times out in some\n> runs but not others?)\n>\n> Locally, I've not managed to reproduce the failure yet; so perhaps\n> there is some platform dependency. What are you testing on?\n\n\nLinux ub22arm 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 10:08:40 UTC \n2024 aarch64 aarch64 aarch64 GNU/Linux\n\nIt's a VM running on UTM/Apple Silicon\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Jul 2024 14:46:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "Hello,\n\n21.07.2024 20:34, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I noticed this when working on the PostgreSQL::Test::Session project I\n>> have in hand. All the tests pass except occasionally the xid_wraparound\n>> tests fail. It's not always the same test script that fails either. I\n>> tried everything but couldn't make the failure stop. So then I switched\n>> out my patch so it's running on plain master and set things running in a\n>> loop. Lo and behold it can be relied on to fail after only a few\n>> iterations.\n> I have been noticing xid_wraparound failures in the buildfarm too.\n> They seemed quite infrequent, but it wasn't till just now that\n> I realized that xid_wraparound is not run by default. (You have to\n> put \"xid_wraparound\" in PG_TEST_EXTRA to enable it.) AFAICS the\n> only buildfarm animals that have enabled it are dodo and perentie.\n> dodo is failing this test fairly often:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=dodo&br=HEAD\n\nI think this failure is counted at [1]. Please look at the linked message\n[2], where I described what makes the test fail.\n\n[1] \nhttps://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#001_emergency_vacuum.pl_fails_to_wait_for_datfrozenxid_advancing\n[2] https://www.postgresql.org/message-id/5811175c-1a31-4869-032f-7af5e3e4506a@gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 21 Jul 2024 23:08:08 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2024-07-21 Su 1:34 PM, Tom Lane wrote:\n>> Locally, I've not managed to reproduce the failure yet; so perhaps\n>> there is some platform dependency. What are you testing on?\n\n> Linux ub22arm 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 10:08:40 UTC \n> 2024 aarch64 aarch64 aarch64 GNU/Linux\n> It's a VM running on UTM/Apple Silicon\n\nHmm, doesn't sound like that ought to be slow.\n\nI did manage to reproduce dodo's failures by running xid_wraparound\nmanually on mamba's very slow host:\n\n$ time make -s installcheck PROVE_FLAGS=--timer\n# +++ tap install-check in src/test/modules/xid_wraparound +++\n[13:37:49] t/001_emergency_vacuum.pl .. 1/? # poll_query_until timed out executing this query:\n# \n# SELECT NOT EXISTS (\n# SELECT *\n# FROM pg_database\n# WHERE age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::int)\n# \n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 4 just after 1.\n[13:37:49] t/001_emergency_vacuum.pl .. Dubious, test returned 4 (wstat 1024, 0x400)\nAll 1 subtests passed \n[14:06:51] t/002_limits.pl ............ 2/? # Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 2.\n[14:06:51] t/002_limits.pl ............ Dubious, test returned 29 (wstat 7424, 0x1d00)\nAll 2 subtests passed \n[14:31:16] t/003_wraparounds.pl ....... ok 7564763 ms ( 0.00 usr 0.01 sys + 13.82 cusr 9.26 csys = 23.09 CPU)\n[16:37:21]\n\nTest Summary Report\n-------------------\nt/001_emergency_vacuum.pl (Wstat: 1024 (exited 4) Tests: 1 Failed: 0)\n Non-zero exit status: 4\n Parse errors: No plan found in TAP output\nt/002_limits.pl (Wstat: 7424 (exited 29) Tests: 2 Failed: 0)\n Non-zero exit status: 29\n Parse errors: No plan found in TAP output\nFiles=3, Tests=4, 10772 wallclock secs ( 0.15 usr 0.06 sys + 58.50 cusr 59.88 csys = 118.59 CPU)\nResult: FAIL\nmake: *** [../../../../src/makefiles/pgxs.mk:442: installcheck] Error 1\n 10772.99 real 59.34 user 60.14 sys\n\nEach of those two failures looks just like something that dodo has\nshown at one time or another. So it's at least plausible that\n\"slow machine\" is the whole explanation. I'm still wondering\nthough if there's some effect that causes the test's runtime to\nbe unstable in itself, sometimes leading to timeouts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Jul 2024 17:36:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 8:08 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures\n\nThis is great. Thanks for collating all this info here! And of\ncourse all the research and reports behind it.\n\n\n",
"msg_date": "Mon, 22 Jul 2024 11:27:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "\nOn 2024-07-21 Su 4:08 PM, Alexander Lakhin wrote:\n> Hello,\n>\n> 21.07.2024 20:34, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I noticed this when working on the PostgreSQL::Test::Session project I\n>>> have in hand. All the tests pass except occasionally the xid_wraparound\n>>> tests fail. It's not always the same test script that fails either. I\n>>> tried everything but couldn't make the failure stop. So then I switched\n>>> out my patch so it's running on plain master and set things running \n>>> in a\n>>> loop. Lo and behold it can be relied on to fail after only a few\n>>> iterations.\n>> I have been noticing xid_wraparound failures in the buildfarm too.\n>> They seemed quite infrequent, but it wasn't till just now that\n>> I realized that xid_wraparound is not run by default. (You have to\n>> put \"xid_wraparound\" in PG_TEST_EXTRA to enable it.) AFAICS the\n>> only buildfarm animals that have enabled it are dodo and perentie.\n>> dodo is failing this test fairly often:\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=dodo&br=HEAD\n>\n> I think this failure is counted at [1]. Please look at the linked message\n> [2], where I described what makes the test fail.\n>\n> [1] \n> https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#001_emergency_vacuum.pl_fails_to_wait_for_datfrozenxid_advancing\n> [2] \n> https://www.postgresql.org/message-id/5811175c-1a31-4869-032f-7af5e3e4506a@gmail.com\n\n\nIt's sad nothing has happened abut this for 2 months.\n\nThere's no point in having unreliable tests. What's not 100% clear to me \nis whether this failure indicates a badly formulated test or the test is \ncorrect and has identified an underlying bug.\n\nRegarding the point in [2] about the test being run twice in buildfarm \nclients, I think we should mark the module as NO_INSTALLCHECK in the \nMakefile.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 22 Jul 2024 08:54:02 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Sun, Jul 21, 2024 at 7:28 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Jul 22, 2024 at 8:08 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> > https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures\n>\n> This is great. Thanks for collating all this info here! And of\n> course all the research and reports behind it.\n\nWow, that's an incredible wiki page.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jul 2024 09:13:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Sun, Jul 21, 2024 at 2:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2024-07-21 Su 1:34 PM, Tom Lane wrote:\n> >> Locally, I've not managed to reproduce the failure yet; so perhaps\n> >> there is some platform dependency. What are you testing on?\n>\n> > Linux ub22arm 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 10:08:40 UTC\n> > 2024 aarch64 aarch64 aarch64 GNU/Linux\n> > It's a VM running on UTM/Apple Silicon\n>\n> Hmm, doesn't sound like that ought to be slow.\n>\n> I did manage to reproduce dodo's failures by running xid_wraparound\n> manually on mamba's very slow host:\n>\n> $ time make -s installcheck PROVE_FLAGS=--timer\n> # +++ tap install-check in src/test/modules/xid_wraparound +++\n> [13:37:49] t/001_emergency_vacuum.pl .. 1/? # poll_query_until timed out executing this query:\n> #\n> # SELECT NOT EXISTS (\n> # SELECT *\n> # FROM pg_database\n> # WHERE age(datfrozenxid) > current_setting('autovacuum_freeze_max_age')::int)\n> #\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n> # with stderr:\n> # Tests were run but no plan was declared and done_testing() was not seen.\n> # Looks like your test exited with 4 just after 1.\n> [13:37:49] t/001_emergency_vacuum.pl .. Dubious, test returned 4 (wstat 1024, 0x400)\n> All 1 subtests passed\n> [14:06:51] t/002_limits.pl ............ 2/? # Tests were run but no plan was declared and done_testing() was not seen.\n> # Looks like your test exited with 29 just after 2.\n> [14:06:51] t/002_limits.pl ............ Dubious, test returned 29 (wstat 7424, 0x1d00)\n> All 2 subtests passed\n> [14:31:16] t/003_wraparounds.pl ....... ok 7564763 ms ( 0.00 usr 0.01 sys + 13.82 cusr 9.26 csys = 23.09 CPU)\n> [16:37:21]\n>\n> Test Summary Report\n> -------------------\n> t/001_emergency_vacuum.pl (Wstat: 1024 (exited 4) Tests: 1 Failed: 0)\n> Non-zero exit status: 4\n> Parse errors: No plan found in TAP output\n> t/002_limits.pl (Wstat: 7424 (exited 29) Tests: 2 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: No plan found in TAP output\n> Files=3, Tests=4, 10772 wallclock secs ( 0.15 usr 0.06 sys + 58.50 cusr 59.88 csys = 118.59 CPU)\n> Result: FAIL\n> make: *** [../../../../src/makefiles/pgxs.mk:442: installcheck] Error 1\n> 10772.99 real 59.34 user 60.14 sys\n>\n> Each of those two failures looks just like something that dodo has\n> shown at one time or another. So it's at least plausible that\n> \"slow machine\" is the whole explanation. I'm still wondering\n> though if there's some effect that causes the test's runtime to\n> be unstable in itself, sometimes leading to timeouts.\n>\n\nSince the server writes a lot of logs during the xid_wraparound test,\n\"slow disk\" could also be a reason.\n\nLooking at dodo's failures, it seems that while it passes\nmodule-xid_wraparound-check, all failures happened only during\ntestmodules-install-check-C. Can we check the server logs written\nduring xid_wraparound test in testmodules-install-check-C? I thought\nthe following link is the server logs but since it seems there were no\nautovacuum logs I suspected there is another log file:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=dodo&dt=2024-07-20%2020%3A35%3A39&stg=testmodules-install-check-C&raw=1\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 Jul 2024 09:07:32 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> Looking at dodo's failures, it seems that while it passes\n> module-xid_wraparound-check, all failures happened only during\n> testmodules-install-check-C. Can we check the server logs written\n> during xid_wraparound test in testmodules-install-check-C?\n\nOooh, that is indeed an interesting observation. There are enough\nexamples now that it's hard to dismiss it as chance, but why would\nthe two runs be different?\n\n(I agree with the comment that we shouldn't be running this test\ntwice, but that's a separate matter.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2024 12:46:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 9:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > Looking at dodo's failures, it seems that while it passes\n> > module-xid_wraparound-check, all failures happened only during\n> > testmodules-install-check-C. Can we check the server logs written\n> > during xid_wraparound test in testmodules-install-check-C?\n>\n> Oooh, that is indeed an interesting observation. There are enough\n> examples now that it's hard to dismiss it as chance, but why would\n> the two runs be different?\n\nDuring the xid_wraparound test in testmodules-install-check-C two\nclusters are running at the same time. This fact could make the\nxid_wraparound test slower by any chance.\n\n>\n> (I agree with the comment that we shouldn't be running this test\n> twice, but that's a separate matter.)\n\n+1 not running it twice.\n\nThere are test modules that have only TAP tests and are not marked as\nNO_INSTALLCHECK, for example test_custom_rmgrs. Probably we don't want\nto run these tests twice too?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 Jul 2024 11:50:34 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n> Masahiko Sawada<sawada.mshk@gmail.com> writes:\n>> Looking at dodo's failures, it seems that while it passes\n>> module-xid_wraparound-check, all failures happened only during\n>> testmodules-install-check-C. Can we check the server logs written\n>> during xid_wraparound test in testmodules-install-check-C?\n> Oooh, that is indeed an interesting observation. There are enough\n> examples now that it's hard to dismiss it as chance, but why would\n> the two runs be different?\n\n\nIt's not deterministic.\n\nI tested the theory that it was some other concurrent tests causing the \nissue, but that didn't wash. Here's what I did:\n\n for f in `seq 1 100`\n do echo iteration = $f\n meson test --suite xid_wraparound || break\n done\n\nIt took until iteration 6 to get an error. I don't think my Ubuntu \ninstance is especially slow. e.g. \"meson compile\" normally takes a \nhandful of seconds. Maybe concurrent tests make it more likely, but they \ncan't be the only cause.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-22 Mo 12:46 PM, Tom Lane\n wrote:\n\n\nMasahiko Sawada <sawada.mshk@gmail.com> writes:\n\n\nLooking at dodo's failures, it seems that while it passes\nmodule-xid_wraparound-check, all failures happened only during\ntestmodules-install-check-C. Can we check the server logs written\nduring xid_wraparound test in testmodules-install-check-C?\n\n\n\nOooh, that is indeed an interesting observation. There are enough\nexamples now that it's hard to dismiss it as chance, but why would\nthe two runs be different?\n\n\n\nIt's not deterministic.\nI tested the theory that it was some other concurrent tests\n causing the issue, but that didn't wash. Here's what I did:\n for f in `seq 1 100`\n do echo iteration = $f\n meson test --suite xid_wraparound || break\n done\n\nIt took until iteration 6 to get an error. I don't think my\n Ubuntu instance is especially slow. e.g. \"meson compile\" normally\n takes a handful of seconds. Maybe concurrent tests make it more\n likely, but they can't be the only cause.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 22 Jul 2024 15:53:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 12:53 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n>\n> Looking at dodo's failures, it seems that while it passes\n> module-xid_wraparound-check, all failures happened only during\n> testmodules-install-check-C. Can we check the server logs written\n> during xid_wraparound test in testmodules-install-check-C?\n>\n> Oooh, that is indeed an interesting observation. There are enough\n> examples now that it's hard to dismiss it as chance, but why would\n> the two runs be different?\n>\n>\n> It's not deterministic.\n>\n> I tested the theory that it was some other concurrent tests causing the issue, but that didn't wash. Here's what I did:\n>\n> for f in `seq 1 100`\n> do echo iteration = $f\n> meson test --suite xid_wraparound || break\n> done\n>\n> It took until iteration 6 to get an error. I don't think my Ubuntu instance is especially slow. e.g. \"meson compile\" normally takes a handful of seconds. Maybe concurrent tests make it more likely, but they can't be the only cause.\n\nCould you provide server logs in both OK and NG tests? I want to see\nif there's a difference in the rate at which tables are vacuumed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 Jul 2024 18:29:51 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n>> Masahiko Sawada<sawada.mshk@gmail.com> writes:\n>>> Looking at dodo's failures, it seems that while it passes\n>>> module-xid_wraparound-check, all failures happened only during\n>>> testmodules-install-check-C. Can we check the server logs written\n>>> during xid_wraparound test in testmodules-install-check-C?\n\n>> Oooh, that is indeed an interesting observation. There are enough\n>> examples now that it's hard to dismiss it as chance, but why would\n>> the two runs be different?\n\n> It's not deterministic.\n\nPerhaps. I tried \"make check\" on mamba's host and got exactly the\nsame failures as with \"make installcheck\", which counts in favor of\ndodo's results being just luck. Still, dodo has now shown 11 failures\nin \"make installcheck\" and zero in \"make check\", so it's getting hard\nto credit that there's no difference.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2024 22:11:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On 2024-07-22 Mo 9:29 PM, Masahiko Sawada wrote:\n> On Mon, Jul 22, 2024 at 12:53 PM Andrew Dunstan<andrew@dunslane.net> wrote:\n>>\n>> On 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n>>\n>> Masahiko Sawada<sawada.mshk@gmail.com> writes:\n>>\n>> Looking at dodo's failures, it seems that while it passes\n>> module-xid_wraparound-check, all failures happened only during\n>> testmodules-install-check-C. Can we check the server logs written\n>> during xid_wraparound test in testmodules-install-check-C?\n>>\n>> Oooh, that is indeed an interesting observation. There are enough\n>> examples now that it's hard to dismiss it as chance, but why would\n>> the two runs be different?\n>>\n>>\n>> It's not deterministic.\n>>\n>> I tested the theory that it was some other concurrent tests causing the issue, but that didn't wash. Here's what I did:\n>>\n>> for f in `seq 1 100`\n>> do echo iteration = $f\n>> meson test --suite xid_wraparound || break\n>> done\n>>\n>> It took until iteration 6 to get an error. I don't think my Ubuntu instance is especially slow. e.g. \"meson compile\" normally takes a handful of seconds. Maybe concurrent tests make it more likely, but they can't be the only cause.\n> Could you provide server logs in both OK and NG tests? I want to see\n> if there's a difference in the rate at which tables are vacuumed.\n\n\nSee \n<https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n\n\nThe failure logs are from a run where both tests 1 and 2 failed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-22 Mo 9:29 PM, Masahiko\n Sawada wrote:\n\n\nOn Mon, Jul 22, 2024 at 12:53 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\n\n\nOn 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n\nMasahiko Sawada <sawada.mshk@gmail.com> writes:\n\nLooking at dodo's failures, it seems that while it passes\nmodule-xid_wraparound-check, all failures happened only during\ntestmodules-install-check-C. Can we check the server logs written\nduring xid_wraparound test in testmodules-install-check-C?\n\nOooh, that is indeed an interesting observation. There are enough\nexamples now that it's hard to dismiss it as chance, but why would\nthe two runs be different?\n\n\nIt's not deterministic.\n\nI tested the theory that it was some other concurrent tests causing the issue, but that didn't wash. Here's what I did:\n\n for f in `seq 1 100`\n do echo iteration = $f\n meson test --suite xid_wraparound || break\n done\n\nIt took until iteration 6 to get an error. I don't think my Ubuntu instance is especially slow. e.g. \"meson compile\" normally takes a handful of seconds. Maybe concurrent tests make it more likely, but they can't be the only cause.\n\n\n\nCould you provide server logs in both OK and NG tests? I want to see\nif there's a difference in the rate at which tables are vacuumed.\n\n\n\nSee\n<https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n\n\nThe failure logs are from a run where both tests 1 and 2 failed.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 23 Jul 2024 06:49:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "\nOn 2024-07-22 Mo 10:11 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n>>> Masahiko Sawada<sawada.mshk@gmail.com> writes:\n>>>> Looking at dodo's failures, it seems that while it passes\n>>>> module-xid_wraparound-check, all failures happened only during\n>>>> testmodules-install-check-C. Can we check the server logs written\n>>>> during xid_wraparound test in testmodules-install-check-C?\n>>> Oooh, that is indeed an interesting observation. There are enough\n>>> examples now that it's hard to dismiss it as chance, but why would\n>>> the two runs be different?\n>> It's not deterministic.\n> Perhaps. I tried \"make check\" on mamba's host and got exactly the\n> same failures as with \"make installcheck\", which counts in favor of\n> dodo's results being just luck. Still, dodo has now shown 11 failures\n> in \"make installcheck\" and zero in \"make check\", so it's getting hard\n> to credit that there's no difference.\n>\n> \t\t\t\n\n\nYeah, I agree that's perplexing. That step doesn't run with \"make -j \nnn\", so it's a bit hard to see why it should get different results from \none run rather than the other. The only thing that's different is that \nthere's another postgres instance running. Maybe that's just enough to \nslow the test down? After all, this is an RPi.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 09:07:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 3:49 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-07-22 Mo 9:29 PM, Masahiko Sawada wrote:\n>\n> On Mon, Jul 22, 2024 at 12:53 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2024-07-22 Mo 12:46 PM, Tom Lane wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n>\n> Looking at dodo's failures, it seems that while it passes\n> module-xid_wraparound-check, all failures happened only during\n> testmodules-install-check-C. Can we check the server logs written\n> during xid_wraparound test in testmodules-install-check-C?\n>\n> Oooh, that is indeed an interesting observation. There are enough\n> examples now that it's hard to dismiss it as chance, but why would\n> the two runs be different?\n>\n>\n> It's not deterministic.\n>\n> I tested the theory that it was some other concurrent tests causing the issue, but that didn't wash. Here's what I did:\n>\n> for f in `seq 1 100`\n> do echo iteration = $f\n> meson test --suite xid_wraparound || break\n> done\n>\n> It took until iteration 6 to get an error. I don't think my Ubuntu instance is especially slow. e.g. \"meson compile\" normally takes a handful of seconds. Maybe concurrent tests make it more likely, but they can't be the only cause.\n>\n> Could you provide server logs in both OK and NG tests? I want to see\n> if there's a difference in the rate at which tables are vacuumed.\n>\n>\n> See <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n>\n>\n> The failure logs are from a run where both tests 1 and 2 failed.\n>\n\nThank you for sharing the logs.\n\nI think that the problem seems to match what Alexander Lakhin\nmentioned[1]. Probably we can fix such a race condition somehow but\nI'm not sure it's worth it as setting autovacuum = off and\nautovacuum_max_workers = 1 (or a low number) is an extremely rare\ncase. I think it would be better to stabilize these tests. One idea is\nto turn the autovacuum GUC parameter on while setting\nautovacuum_enabled = off for each table. That way, we can ensure that\nautovacuum workers are launched. And I think it seems to align real\nuse cases.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 15:59:28 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n>> See<https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n>>\n>>\n>> The failure logs are from a run where both tests 1 and 2 failed.\n>>\n> Thank you for sharing the logs.\n>\n> I think that the problem seems to match what Alexander Lakhin\n> mentioned[1]. Probably we can fix such a race condition somehow but\n> I'm not sure it's worth it as setting autovacuum = off and\n> autovacuum_max_workers = 1 (or a low number) is an extremely rare\n> case. I think it would be better to stabilize these tests. One idea is\n> to turn the autovacuum GUC parameter on while setting\n> autovacuum_enabled = off for each table. That way, we can ensure that\n> autovacuum workers are launched. And I think it seems to align real\n> use cases.\n>\n> Regards,\n>\n> [1]https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n\n\nOK, do you want to propose a patch?\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-23 Tu 6:59 PM, Masahiko\n Sawada wrote:\n\n\n\n\n\nSee <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n\n\nThe failure logs are from a run where both tests 1 and 2 failed.\n\n\n\n\nThank you for sharing the logs.\n\nI think that the problem seems to match what Alexander Lakhin\nmentioned[1]. Probably we can fix such a race condition somehow but\nI'm not sure it's worth it as setting autovacuum = off and\nautovacuum_max_workers = 1 (or a low number) is an extremely rare\ncase. I think it would be better to stabilize these tests. One idea is\nto turn the autovacuum GUC parameter on while setting\nautovacuum_enabled = off for each table. That way, we can ensure that\nautovacuum workers are launched. And I think it seems to align real\nuse cases.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n\n\n\n\n\nOK, do you want to propose a patch?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 25 Jul 2024 13:56:18 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n>\n> See <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n>\n>\n> The failure logs are from a run where both tests 1 and 2 failed.\n>\n> Thank you for sharing the logs.\n>\n> I think that the problem seems to match what Alexander Lakhin\n> mentioned[1]. Probably we can fix such a race condition somehow but\n> I'm not sure it's worth it as setting autovacuum = off and\n> autovacuum_max_workers = 1 (or a low number) is an extremely rare\n> case. I think it would be better to stabilize these tests. One idea is\n> to turn the autovacuum GUC parameter on while setting\n> autovacuum_enabled = off for each table. That way, we can ensure that\n> autovacuum workers are launched. And I think it seems to align real\n> use cases.\n>\n> Regards,\n>\n> [1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n>\n>\n> OK, do you want to propose a patch?\n>\n\nYes, I'll prepare and share it soon.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 25 Jul 2024 11:06:45 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 11:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> >\n> > On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n> >\n> > See <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n> >\n> >\n> > The failure logs are from a run where both tests 1 and 2 failed.\n> >\n> > Thank you for sharing the logs.\n> >\n> > I think that the problem seems to match what Alexander Lakhin\n> > mentioned[1]. Probably we can fix such a race condition somehow but\n> > I'm not sure it's worth it as setting autovacuum = off and\n> > autovacuum_max_workers = 1 (or a low number) is an extremely rare\n> > case. I think it would be better to stabilize these tests. One idea is\n> > to turn the autovacuum GUC parameter on while setting\n> > autovacuum_enabled = off for each table. That way, we can ensure that\n> > autovacuum workers are launched. And I think it seems to align real\n> > use cases.\n> >\n> > Regards,\n> >\n> > [1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n> >\n> >\n> > OK, do you want to propose a patch?\n> >\n>\n> Yes, I'll prepare and share it soon.\n>\n\nI've attached the patch. Could you please test if the patch fixes the\ninstability you observed?\n\nSince we turn off autovacuum on all three tests and we wait for\nautovacuum to complete processing databases, these tests potentially\nhave a similar (but lower) risk. So I modified these tests to turn it\non so we can ensure the autovacuum runs periodically.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 25 Jul 2024 12:40:02 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On 2024-07-25 Th 3:40 PM, Masahiko Sawada wrote:\n> On Thu, Jul 25, 2024 at 11:06 AM Masahiko Sawada<sawada.mshk@gmail.com> wrote:\n>> On Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan<andrew@dunslane.net> wrote:\n>>>\n>>> On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n>>>\n>>> See<https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n>>>\n>>>\n>>> The failure logs are from a run where both tests 1 and 2 failed.\n>>>\n>>> Thank you for sharing the logs.\n>>>\n>>> I think that the problem seems to match what Alexander Lakhin\n>>> mentioned[1]. Probably we can fix such a race condition somehow but\n>>> I'm not sure it's worth it as setting autovacuum = off and\n>>> autovacuum_max_workers = 1 (or a low number) is an extremely rare\n>>> case. I think it would be better to stabilize these tests. One idea is\n>>> to turn the autovacuum GUC parameter on while setting\n>>> autovacuum_enabled = off for each table. That way, we can ensure that\n>>> autovacuum workers are launched. And I think it seems to align real\n>>> use cases.\n>>>\n>>> Regards,\n>>>\n>>> [1]https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n>>>\n>>>\n>>> OK, do you want to propose a patch?\n>>>\n>> Yes, I'll prepare and share it soon.\n>>\n> I've attached the patch. Could you please test if the patch fixes the\n> instability you observed?\n>\n> Since we turn off autovacuum on all three tests and we wait for\n> autovacuum to complete processing databases, these tests potentially\n> have a similar (but lower) risk. So I modified these tests to turn it\n> on so we can ensure the autovacuum runs periodically.\n>\n\nI assume you actually meant to remove the \"autovacuum = off\" in \n003_wraparound.pl. With that change in your patch I retried my test, but \non iteration 100 out of 100 it failed on test 002_limits.pl.\n\nYou can see the logs at \n<https://f001.backblazeb2.com/file/net-dunslane-public/002_limits-failure-log.tar.bz2>\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-25 Th 3:40 PM, Masahiko\n Sawada wrote:\n\n\nOn Thu, Jul 25, 2024 at 11:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n\n\nOn Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\n\n\nOn 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n\nSee <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n\n\nThe failure logs are from a run where both tests 1 and 2 failed.\n\nThank you for sharing the logs.\n\nI think that the problem seems to match what Alexander Lakhin\nmentioned[1]. Probably we can fix such a race condition somehow but\nI'm not sure it's worth it as setting autovacuum = off and\nautovacuum_max_workers = 1 (or a low number) is an extremely rare\ncase. I think it would be better to stabilize these tests. One idea is\nto turn the autovacuum GUC parameter on while setting\nautovacuum_enabled = off for each table. That way, we can ensure that\nautovacuum workers are launched. And I think it seems to align real\nuse cases.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n\n\nOK, do you want to propose a patch?\n\n\n\n\nYes, I'll prepare and share it soon.\n\n\n\n\nI've attached the patch. 
Could you please test if the patch fixes the\ninstability you observed?\n\nSince we turn off autovacuum on all three tests and we wait for\nautovacuum to complete processing databases, these tests potentially\nhave a similar (but lower) risk. So I modified these tests to turn it\non so we can ensure the autovacuum runs periodically.\n\n\n\n\n\nI assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n\nYou can see the logs at <https://f001.backblazeb2.com/file/net-dunslane-public/002_limits-failure-log.tar.bz2>\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 25 Jul 2024 21:52:13 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 6:52 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-07-25 Th 3:40 PM, Masahiko Sawada wrote:\n>\n> On Thu, Jul 25, 2024 at 11:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n>\n> See <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n>\n>\n> The failure logs are from a run where both tests 1 and 2 failed.\n>\n> Thank you for sharing the logs.\n>\n> I think that the problem seems to match what Alexander Lakhin\n> mentioned[1]. Probably we can fix such a race condition somehow but\n> I'm not sure it's worth it as setting autovacuum = off and\n> autovacuum_max_workers = 1 (or a low number) is an extremely rare\n> case. I think it would be better to stabilize these tests. One idea is\n> to turn the autovacuum GUC parameter on while setting\n> autovacuum_enabled = off for each table. That way, we can ensure that\n> autovacuum workers are launched. And I think it seems to align real\n> use cases.\n>\n> Regards,\n>\n> [1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n>\n>\n> OK, do you want to propose a patch?\n>\n> Yes, I'll prepare and share it soon.\n>\n> I've attached the patch. Could you please test if the patch fixes the\n> instability you observed?\n>\n> Since we turn off autovacuum on all three tests and we wait for\n> autovacuum to complete processing databases, these tests potentially\n> have a similar (but lower) risk. So I modified these tests to turn it\n> on so we can ensure the autovacuum runs periodically.\n>\n>\n> I assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n>\n\nI think we need to remove the \"autovacuum = off' also in 002_limits.pl\nas it waits for autovacuum to process both template0 and template1\ndatabases. Just to be clear, the failure happened even without\n\"autovacuum = off\"?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 26 Jul 2024 10:46:58 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On 2024-07-26 Fr 1:46 PM, Masahiko Sawada wrote:\n> On Thu, Jul 25, 2024 at 6:52 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 2024-07-25 Th 3:40 PM, Masahiko Sawada wrote:\n>>\n>> On Thu, Jul 25, 2024 at 11:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n>>\n>> See <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n>>\n>>\n>> The failure logs are from a run where both tests 1 and 2 failed.\n>>\n>> Thank you for sharing the logs.\n>>\n>> I think that the problem seems to match what Alexander Lakhin\n>> mentioned[1]. Probably we can fix such a race condition somehow but\n>> I'm not sure it's worth it as setting autovacuum = off and\n>> autovacuum_max_workers = 1 (or a low number) is an extremely rare\n>> case. I think it would be better to stabilize these tests. One idea is\n>> to turn the autovacuum GUC parameter on while setting\n>> autovacuum_enabled = off for each table. That way, we can ensure that\n>> autovacuum workers are launched. And I think it seems to align real\n>> use cases.\n>>\n>> Regards,\n>>\n>> [1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n>>\n>>\n>> OK, do you want to propose a patch?\n>>\n>> Yes, I'll prepare and share it soon.\n>>\n>> I've attached the patch. Could you please test if the patch fixes the\n>> instability you observed?\n>>\n>> Since we turn off autovacuum on all three tests and we wait for\n>> autovacuum to complete processing databases, these tests potentially\n>> have a similar (but lower) risk. So I modified these tests to turn it\n>> on so we can ensure the autovacuum runs periodically.\n>>\n>>\n>> I assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n>>\n> I think we need to remove the \"autovacuum = off' also in 002_limits.pl\n> as it waits for autovacuum to process both template0 and template1\n> databases. Just to be clear, the failure happened even without\n> \"autovacuum = off\"?\n>\n\nThe attached patch, a slight modification of yours, removes \"autovacuum \n= off\" for all three tests, and given that a set of 200 runs was clean \nfor me.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 27 Jul 2024 16:05:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 1:06 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-07-26 Fr 1:46 PM, Masahiko Sawada wrote:\n> > On Thu, Jul 25, 2024 at 6:52 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> On 2024-07-25 Th 3:40 PM, Masahiko Sawada wrote:\n> >>\n> >> On Thu, Jul 25, 2024 at 11:06 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> On Thu, Jul 25, 2024 at 10:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> On 2024-07-23 Tu 6:59 PM, Masahiko Sawada wrote:\n> >>\n> >> See <https://bitbucket.org/adunstan/rotfang-fdw/downloads/xid-wraparound-result.tar.bz2>\n> >>\n> >>\n> >> The failure logs are from a run where both tests 1 and 2 failed.\n> >>\n> >> Thank you for sharing the logs.\n> >>\n> >> I think that the problem seems to match what Alexander Lakhin\n> >> mentioned[1]. Probably we can fix such a race condition somehow but\n> >> I'm not sure it's worth it as setting autovacuum = off and\n> >> autovacuum_max_workers = 1 (or a low number) is an extremely rare\n> >> case. I think it would be better to stabilize these tests. One idea is\n> >> to turn the autovacuum GUC parameter on while setting\n> >> autovacuum_enabled = off for each table. That way, we can ensure that\n> >> autovacuum workers are launched. And I think it seems to align real\n> >> use cases.\n> >>\n> >> Regards,\n> >>\n> >> [1] https://www.postgresql.org/message-id/02373ec3-50c6-df5a-0d65-5b9b1c0c86d6%40gmail.com\n> >>\n> >>\n> >> OK, do you want to propose a patch?\n> >>\n> >> Yes, I'll prepare and share it soon.\n> >>\n> >> I've attached the patch. Could you please test if the patch fixes the\n> >> instability you observed?\n> >>\n> >> Since we turn off autovacuum on all three tests and we wait for\n> >> autovacuum to complete processing databases, these tests potentially\n> >> have a similar (but lower) risk. So I modified these tests to turn it\n> >> on so we can ensure the autovacuum runs periodically.\n> >>\n> >>\n> >> I assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n> >>\n> > I think we need to remove the \"autovacuum = off' also in 002_limits.pl\n> > as it waits for autovacuum to process both template0 and template1\n> > databases. Just to be clear, the failure happened even without\n> > \"autovacuum = off\"?\n> >\n>\n> The attached patch, a slight modification of yours, removes \"autovacuum\n> = off\" for all three tests, and given that a set of 200 runs was clean\n> for me.\n\nOh I missed that I left \"autovacuum = off' for some reason in 002\ntest. Thank you for attaching the patch, it looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 Jul 2024 14:25:11 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On 2024-07-29 Mo 5:25 PM, Masahiko Sawada wrote:\n>>>> I've attached the patch. Could you please test if the patch fixes the\n>>>> instability you observed?\n>>>>\n>>>> Since we turn off autovacuum on all three tests and we wait for\n>>>> autovacuum to complete processing databases, these tests potentially\n>>>> have a similar (but lower) risk. So I modified these tests to turn it\n>>>> on so we can ensure the autovacuum runs periodically.\n>>>>\n>>>>\n>>>> I assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n>>>>\n>>> I think we need to remove the \"autovacuum = off' also in 002_limits.pl\n>>> as it waits for autovacuum to process both template0 and template1\n>>> databases. Just to be clear, the failure happened even without\n>>> \"autovacuum = off\"?\n>>>\n>> The attached patch, a slight modification of yours, removes \"autovacuum\n>> = off\" for all three tests, and given that a set of 200 runs was clean\n>> for me.\n> Oh I missed that I left \"autovacuum = off' for some reason in 002\n> test. Thank you for attaching the patch, it looks good to me.\n>\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-29 Mo 5:25 PM, Masahiko\n Sawada wrote:\n\n\n\n\n\n\n\nI've attached the patch. Could you please test if the patch fixes the\ninstability you observed?\n\nSince we turn off autovacuum on all three tests and we wait for\nautovacuum to complete processing databases, these tests potentially\nhave a similar (but lower) risk. So I modified these tests to turn it\non so we can ensure the autovacuum runs periodically.\n\n\nI assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n\n\n\nI think we need to remove the \"autovacuum = off' also in 002_limits.pl\nas it waits for autovacuum to process both template0 and template1\ndatabases. Just to be clear, the failure happened even without\n\"autovacuum = off\"?\n\n\n\n\nThe attached patch, a slight modification of yours, removes \"autovacuum\n= off\" for all three tests, and given that a set of 200 runs was clean\nfor me.\n\n\n\nOh I missed that I left \"autovacuum = off' for some reason in 002\ntest. Thank you for attaching the patch, it looks good to me.\n\n\n\n\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 30 Jul 2024 06:29:45 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 3:29 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-07-29 Mo 5:25 PM, Masahiko Sawada wrote:\n>\n> I've attached the patch. Could you please test if the patch fixes the\n> instability you observed?\n>\n> Since we turn off autovacuum on all three tests and we wait for\n> autovacuum to complete processing databases, these tests potentially\n> have a similar (but lower) risk. So I modified these tests to turn it\n> on so we can ensure the autovacuum runs periodically.\n>\n>\n> I assume you actually meant to remove the \"autovacuum = off\" in 003_wraparound.pl. With that change in your patch I retried my test, but on iteration 100 out of 100 it failed on test 002_limits.pl.\n>\n> I think we need to remove the \"autovacuum = off' also in 002_limits.pl\n> as it waits for autovacuum to process both template0 and template1\n> databases. Just to be clear, the failure happened even without\n> \"autovacuum = off\"?\n>\n> The attached patch, a slight modification of yours, removes \"autovacuum\n> = off\" for all three tests, and given that a set of 200 runs was clean\n> for me.\n>\n> Oh I missed that I left \"autovacuum = off' for some reason in 002\n> test. Thank you for attaching the patch, it looks good to me.\n>\n>\n> Thanks, pushed.\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 30 Jul 2024 14:57:02 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xid_wraparound tests intermittent failure."
}
] |
[
{
"msg_contents": "I saw a database recently where some app was inserting the source port into\nthe application_name field, which meant that pg_stat_statements.max was\nquickly reached and queries were simply pouring in and out of\npg_stat_statements, dominated by some \"SET application_name = 'myapp\n10.0.0.1:1234'\" calls. Which got me thinking, is there really any value to\nhaving non-normalized 'SET application_name' queries inside of\npg_stat_statements? Or any SET stuff, for that matter?\n\nAttached please find a small proof-of-concept for normalizing/de-jumbling\ncertain SET queries. Because we only want to cover the VAR_SET_VALUE parts\nof VariableSetStmt, a custom jumble func was needed. There are a lot of\nfunky SET things inside of gram.y as well that don't do the standard SET X\n= Y formula (e.g. SET TIME ZONE, SET SCHEMA). I tried to handle those as\nbest I could, and carved a couple of exceptions for time zones and xml.\n\nI'm not sure where else to possibly draw lines. Obviously calls to time\nzone have a small and finite pool of possible values, so easy enough to\nexclude them, while things like application_name and work_mem are fairly\ninfinite, so great candidates for normalizing. One could argue for simply\nnormalizing everything, as SET is trivially fast for purposes of\nperformance tracking via pg_stat_statements, so who cares if we don't have\nthe exact string? That's what regular logging is for, after all. Most\nimportantly, less unique queryids means less chance that errant SETs will\ncrowd out the more important stuff.\n\nIn summary, we want to change this:\n\nSELECT calls, query from pg_stat_statements where query ~ 'set' order by 1;\n 1 | set application_name = 'alice'\n 1 | set application_name = 'bob'\n 1 | set application_name = 'eve'\n 1 | set application_name = 'mallory'\n\nto this:\n\nSELECT calls, query from pg_stat_statements where query ~ 'set' order by 1;\n 4 | set application_name = $1\n\nI haven't updated the regression tests yet, until we reach a consensus on\nhow thorough the normalizing should be. But there is a new test to exercise\nthe changes in gram.y.\n\nCheers,\nGreg",
"msg_date": "Mon, 22 Jul 2024 15:23:50 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Normalize queries starting with SET for pg_stat_statements"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 03:23:50PM -0400, Greg Sabino Mullane wrote:\n> I saw a database recently where some app was inserting the source port into\n> the application_name field, which meant that pg_stat_statements.max was\n> quickly reached and queries were simply pouring in and out of\n> pg_stat_statements, dominated by some \"SET application_name = 'myapp\n> 10.0.0.1:1234'\" calls. Which got me thinking, is there really any value to\n> having non-normalized 'SET application_name' queries inside of\n> pg_stat_statements? Or any SET stuff, for that matter?\n\nThanks for beginning this discussion. This has been mentioned in the\npast, like here, but I never came back to it:\nhttps://www.postgresql.org/message-id/B44FA29D-EBD0-4DD9-ABC2-16F1CB087074@amazon.com\n\nI've seen complaints about that in the field myself, and like any\nother specific workloads, the bloat this stuff creates can be really\nannoying when utilities are tracked. So yes, I really want more stuff\nto happen here.\n\n> Attached please find a small proof-of-concept for normalizing/de-jumbling\n> certain SET queries. Because we only want to cover the VAR_SET_VALUE parts\n> of VariableSetStmt, a custom jumble func was needed. There are a lot of\n> funky SET things inside of gram.y as well that don't do the standard SET X\n> = Y formula (e.g. SET TIME ZONE, SET SCHEMA). I tried to handle those as\n> best I could, and carved a couple of exceptions for time zones and xml.\n\nAgreed about the use of a custom jumble function. A huge issue that I\nhave with this parsing node is how much we want to control back in the\nmonitoring reports while not making the changes too invasive in the\nstructure of VariableSetStmt. The balance between invasiveness and\nlevel of normalization was the tricky part for me.\n\n> I'm not sure where else to possibly draw lines. Obviously calls to time\n> zone have a small and finite pool of possible values, so easy enough to\n> exclude them, while things like application_name and work_mem are fairly\n> infinite, so great candidates for normalizing. One could argue for simply\n> normalizing everything, as SET is trivially fast for purposes of\n> performance tracking via pg_stat_statements, so who cares if we don't have\n> the exact string? That's what regular logging is for, after all. Most\n> importantly, less unique queryids means less chance that errant SETs will\n> crowd out the more important stuff.\n\n> In summary, we want to change this:\n> \n> SELECT calls, query from pg_stat_statements where query ~ 'set' order by 1;\n> 1 | set application_name = 'alice'\n> 1 | set application_name = 'bob'\n> 1 | set application_name = 'eve'\n> 1 | set application_name = 'mallory'\n> \n> to this:\n> \n> SELECT calls, query from pg_stat_statements where query ~ 'set' order by 1;\n> 4 | set application_name = $1\n\nYep. That makes sense to me. We should keep the parameter name, hide\nthe value. CallStmt does that.\n\n> I haven't updated the regression tests yet, until we reach a consensus on\n> how thorough the normalizing should be. 
But there is a new test to exercise\n> the changes in gram.y.\n\nIt would be nice to maximize the contents of the tests at SQL level.\nThe number of patterns to track makes the debuggability much harder to\ntrack correctly in TAP as we may rely on more complex quals in\npg_stat_statements or even other catalogs.\n\n+ if (expr->kind != VAR_SET_VALUE ||\n+ strcmp(expr->name, \"timezone\") == 0\n+ || strcmp(expr->name, \"xmloption\") == 0)\n+ JUMBLE_NODE(args);\n\nWe should do this kind of tracking with a new flag in the structure\nitself, not in the custom function. The point would be to have\nfolks introducing new hardcoded names in VariableSetStmt in the parser \nthink about what they want to do, not second-guess it by tweaking the\ncustom jumbling function. Relying on the location would not be enough\nas we need to cover document_or_content for xmloption or the default\nkeyword for TIME ZONE. Let's just use the same trick as\nDeallocateStmt.isall, with a new field to differentiate all these\ncases :)\n\nThere are some funky cases you are not mentioning, though, like SET in\na CREATE FUNCTION:\nCREATE OR REPLACE FUNCTION foo_function(data1 text) RETURNS text AS $$\nDECLARE\n res text;\nBEGIN\n SELECT data1::text INTO res;\n RETURN res;\nEND;\n$$ LANGUAGE plpgsql IMMUTABLE SET search_path = 'pg_class,public';\n\nYour patch silences the SET value, but perhaps we should not do that\nfor this case. I am not against normalizing that, actually, I am in\nfavor of it, because it makes the implementation much easier and\nthe FunctionParameter List of parameters is jumbled with\nCreateFunctionStmt. All that requires test coverage.\n\nIt's nice to see that you are able to keep SET TRANSACTION at their\ncurrent level with the location trick.\n\nYou have issues with RESET SESSION AUTHORIZATION. This one is easy:\nupdate the location field to -1 in reset_rest for all the subcommands.\n\nThe coverage of the regression tests in pg_stat_statements is mostly\nright. I may be missing something, but all the SQL queries you have\nin your 020_jumbles.pl would work fine with SQL tests, and some like\n`SET param = value` are already covered.\n--\nMichael",
"msg_date": "Tue, 23 Jul 2024 11:36:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Normalize queries starting with SET for pg_stat_statements"
},
{
"msg_contents": "Now that I've spent some time away from this, I'm reconsidering why we are\ngoing through all the trouble of semi-jumbling SET statements. Maybe we\njust keep it simple and everything becomes \"SET myvar = $1\" or even \"SET\nmyvar\" full stop? I'm having a hard time finding a real-world situation in\nwhich we need to distinguish different SET/RESET items within\npg_stat_statements.\n\nCheers,\nGreg\n\nNow that I've spent some time away from this, I'm reconsidering why we are going through all the trouble of semi-jumbling SET statements. Maybe we just keep it simple and everything becomes \"SET myvar = $1\" or even \"SET myvar\" full stop? I'm having a hard time finding a real-world situation in which we need to distinguish different SET/RESET items within pg_stat_statements.Cheers,Greg",
"msg_date": "Tue, 13 Aug 2024 10:54:34 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Normalize queries starting with SET for pg_stat_statements"
},
{
"msg_contents": "On Tue, Aug 13, 2024 at 10:54:34AM -0400, Greg Sabino Mullane wrote:\n> Now that I've spent some time away from this, I'm reconsidering why we are\n> going through all the trouble of semi-jumbling SET statements. Maybe we\n> just keep it simple and everything becomes \"SET myvar = $1\" or even \"SET\n> myvar\" full stop?\n\nShowing a dollar-character to show the fact that we have a value\nbehind makes the post sense to me.\n\n> I'm having a hard time finding a real-world situation in\n> which we need to distinguish different SET/RESET items within\n> pg_stat_statements.\n\nI'm -1 on keeping the distinction, and AFAIK it's not really different\nwith the underlying problems that we need to solve for SET TRANSACTION\nand the kind, no? \n\nFWIW, I'm OK with hiding the value when it comes to a SET clause in a\nCREATE FUNCTION. We already hide the contents of SQL queries inside\nthe SQL functions when these are queries that can be normalized, so\nthere is a kind of thin argument for consistency, or something close\nto that.\n--\nMichael",
"msg_date": "Mon, 19 Aug 2024 15:28:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Normalize queries starting with SET for pg_stat_statements"
},
{
"msg_contents": "On Mon, Aug 19, 2024 at 03:28:52PM +0900, Michael Paquier wrote:\n> FWIW, I'm OK with hiding the value when it comes to a SET clause in a\n> CREATE FUNCTION. We already hide the contents of SQL queries inside\n> the SQL functions when these are queries that can be normalized, so\n> there is a kind of thin argument for consistency, or something close\n> to that.\n\nThis thread was one of the things I wanted to look at for this commit\nfest because that's an issue I have on my stack for some time. And\nhere you go with a revised patch set.\n\nFirst, the TAP test proposed upthread is not required, so let's remove\nit and rely on pg_stat_statements. It is true that there are a lot of\ncoverage holes in pg_stat_statements with the various flavors of SET\nqueries. The TAP test was able to cover a good part of them, still\nmissed a few spots.\n\nA second thing is that like you I have settled down to a custom\nimplementation for VariableSetStmt because we have too many grammar\npatterns, some of them with values nested in clauses (SET TIME ZONE is\none example), and we should report the grammar keywords without\nhardcoding a location. Like for the TIME ZONE part, this comes to a\nlimitation because it is not possible to normalize the case of SET\nTIME ZONE 'value' without some refactoring of gram.y. Perhaps we\ncould do that in the long-term, but I come down to the fact that I'm\nalso OK with letting things as they are in the patch, because the\nprimary case I want to tackle at the SET name = value patterns, and\nSET TIME ZONE is just a different flavor of that that can be\ntranslated as well to the most common \"name = value\" pattern. A\nsecond case is SET SESSION CHARACTERISTICS AS TRANSACTION with its\nlist of options. Contrary to the patch proposed, I don't think that\nit is a good idea to hide that arguments may be included in the\njumbling in the custom function, so I have added one field in\nVariableSetStmt to do that, and documented why we need the field in\nparsenodes.h. That strikes me as the best balance, and that's going\nto be hard to miss each time somebody adds a new grammar for\nVariableSetStmt. That's very unlikely at this stage of the project,\nbut who knows, people like fancy new features.\n\nAttached are two patches:\n- 0001 adds a bunch of tests in pg_stat_statements, to cover the SET\npatterns. (typo in commit message of this patch, will fix later)\n- 0002 is the addition of the normalization. It is possible to see\nhow the normalization changes things in pg_stat_statements.\n\n0001 is straight-forward and that was I think a mistake to not include\nthat from the start when I've expanded these tests in the v16 cycle\n(well, time..). 0002 also is quite conservative at the end, and this\ndesign can be used to tune easily the jumbling patterns from gram.y\ndepending on the feedback we'd get.\n--\nMichael",
"msg_date": "Tue, 24 Sep 2024 16:57:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Normalize queries starting with SET for pg_stat_statements"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 04:57:28PM +0900, Michael Paquier wrote:\n> 0001 is straight-forward and that was I think a mistake to not include\n> that from the start when I've expanded these tests in the v16 cycle\n> (well, time..). 0002 also is quite conservative at the end, and this\n> design can be used to tune easily the jumbling patterns from gram.y\n> depending on the feedback we'd get.\n\nApplied 0001 for now to expand the tests, with one tweak: the removal\nof SET NAMES. It was proving tricky to use something else than UTF-8,\nthe CI complaining on Windows. True that this could use like unaccent\nand an alternate output in a different file, but I'm not inclined to\ntake the cost just for this specific query pattern.\n\nThe remaining 0002 is attached for now. I am planning to wrap that\nnext week after a second lookup, except if there are any comments, of\ncourse.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2024 12:10:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Normalize queries starting with SET for pg_stat_statements"
},
{
"msg_contents": "On Wed, Sep 25, 2024 at 12:10:02PM +0900, Michael Paquier wrote:\n> The remaining 0002 is attached for now. I am planning to wrap that\n> next week after a second lookup, except if there are any comments, of\n> course.\n\nAnd done with that, after a second round, tweaking some comments.\n\nThanks Greg for sending the patch and pushing this feature forward.\nThis finishes what I had on my TODO bucket in terms of normalization\nwhen this stuff has begun in v16 with 3db72ebcbe20.\n--\nMichael",
"msg_date": "Mon, 30 Sep 2024 15:24:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Normalize queries starting with SET for pg_stat_statements"
}
] |
[
{
"msg_contents": "I see that there'd been some chatter but not a lot of discussion about\na GROUP BY ALL feature/functionality. There certainly is utility in\nsuch a construct IMHO.\n\nThe grammar is unambiguous, so can support this construct in lieu of\nthe traditional GROUP BY clause. Enclosed is a patch which adds this\nvia just scanning the TargetEntry list and adding anything that is not\nan aggregate function call to the groupList.\n\nStill need some docs; just throwing this out there and getting some feedback.\n\nThanks,\n\nDavid",
"msg_date": "Mon, 22 Jul 2024 15:55:20 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": true,
"msg_subject": "[PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:\n\n> I see that there'd been some chatter but not a lot of discussion about\n> a GROUP BY ALL feature/functionality. There certainly is utility in\n> such a construct IMHO.\n>\n> Still need some docs; just throwing this out there and getting some\n> feedback.\n>\n>\nI strongly dislike adding this feature. I'd only consider supporting it if\nit was part of the SQL standard.\n\nCode is written once and read many times. This feature caters to\nthe writer, not the reader. And furthermore usage of this is prone to be\nto the writer's detriment as well.\n\nDavid J.\n\nOn Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:I see that there'd been some chatter but not a lot of discussion about\na GROUP BY ALL feature/functionality. There certainly is utility in\nsuch a construct IMHO.\nStill need some docs; just throwing this out there and getting some feedback.I strongly dislike adding this feature. I'd only consider supporting it if it was part of the SQL standard.Code is written once and read many times. This feature caters to the writer, not the reader. And furthermore usage of this is prone to be to the writer's detriment as well.David J.",
"msg_date": "Mon, 22 Jul 2024 14:33:57 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Mon, 22 Jul 2024 at 17:34, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net>\n> wrote:\n>\n>> I see that there'd been some chatter but not a lot of discussion about\n>> a GROUP BY ALL feature/functionality. There certainly is utility in\n>> such a construct IMHO.\n>>\n>> Still need some docs; just throwing this out there and getting some\n>> feedback.\n>>\n>>\n> I strongly dislike adding this feature. I'd only consider supporting it\n> if it was part of the SQL standard.\n>\n> Code is written once and read many times. This feature caters to\n> the writer, not the reader. And furthermore usage of this is prone to be\n> to the writer's detriment as well.\n>\n\nAnd for when this might be useful, the syntax for it already exists,\nalthough a spurious error message is generated:\n\nodyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;\nERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be\nused in an aggregate function\nLINE 1: select (uw_term).*, count(*) from uw_term group by uw_term;\n ^\n\nI'm not sure exactly what's going on here — it's like it's still seeing the\ntable name in the field list as only a table name and not the value\ncorresponding to the whole table as a row value (But in general I'm not\nhappy with the system's ability to figure out that a column's value has\nonly one possibility given the grouping columns). You can work around:\n\nodyssey=> with t as (select uw_term, count(*) from uw_term group by\nuw_term) select (uw_term).*, count from t;\n\nThis query works.\n\nOn Mon, 22 Jul 2024 at 17:34, David G. Johnston <david.g.johnston@gmail.com> wrote:On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:I see that there'd been some chatter but not a lot of discussion about\na GROUP BY ALL feature/functionality. There certainly is utility in\nsuch a construct IMHO.\nStill need some docs; just throwing this out there and getting some feedback.I strongly dislike adding this feature. I'd only consider supporting it if it was part of the SQL standard.Code is written once and read many times. This feature caters to the writer, not the reader. And furthermore usage of this is prone to be to the writer's detriment as well.And for when this might be useful, the syntax for it already exists, although a spurious error message is generated:odyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;ERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be used in an aggregate functionLINE 1: select (uw_term).*, count(*) from uw_term group by uw_term; ^I'm not sure exactly what's going on here — it's like it's still seeing the table name in the field list as only a table name and not the value corresponding to the whole table as a row value (But in general I'm not happy with the system's ability to figure out that a column's value has only one possibility given the grouping columns). You can work around:odyssey=> with t as (select uw_term, count(*) from uw_term group by uw_term) select (uw_term).*, count from t;This query works.",
"msg_date": "Mon, 22 Jul 2024 17:40:55 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:\n>> I see that there'd been some chatter but not a lot of discussion about\n>> a GROUP BY ALL feature/functionality. There certainly is utility in\n>> such a construct IMHO.\n\n> I strongly dislike adding this feature. I'd only consider supporting it if\n> it was part of the SQL standard.\n\nYeah ... my recollection is that we already rejected this idea.\nIf you want to re-litigate that, \"throwing this out there\" is\nnot a sufficient argument.\n\n(Personally, I'd wonder exactly what ALL is quantified over: the\nwhole output of the FROM clause, or only columns mentioned in the\nSELECT tlist, or what? And why that choice rather than another?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2024 18:29:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> And for when this might be useful, the syntax for it already exists,\n> although a spurious error message is generated:\n\n> odyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;\n> ERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be\n> used in an aggregate function\n> LINE 1: select (uw_term).*, count(*) from uw_term group by uw_term;\n> ^\n\n> I'm not sure exactly what's going on here\n\nThe SELECT entry is expanded into \"uw_term.col1, uw_term.col2,\nuw_term.col3, ...\", and those single-column Vars don't match the\nwhole-row Var appearing in the GROUP BY list. I guess if we\nthink this is important, we could add a proof rule saying that\na per-column Var is functionally dependent on a whole-row Var\nof the same relation. Odd that the point hasn't come up before\n(though I guess that suggests that few people try this).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Jul 2024 18:43:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 4:34 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:\n>>\n>> I see that there'd been some chatter but not a lot of discussion about\n>> a GROUP BY ALL feature/functionality. There certainly is utility in\n>> such a construct IMHO.\n>>\n>> Still need some docs; just throwing this out there and getting some feedback.\n>>\n>\n> I strongly dislike adding this feature. I'd only consider supporting it if it was part of the SQL standard.\n>\n> Code is written once and read many times. This feature caters to the writer, not the reader. And furthermore usage of this is prone to be to the writer's detriment as well.\n\nI'd say this feature (at least for me) caters to the investigator;\nsomeone who is interactively looking at data hence why it would cater\nto the writer. Consider acquainting yourself with a large table that\nhas a large number of annoying-named fields where you want to look at\nhow different data is correlated or broken-down. Something along the\nlines of:\n\nSELECT last_name, substring(first_name,1,1) as first_initial,\nincome_range, count(*) FROM census_data GROUP BY ALL;\n\nIf you are iteratively looking at things, adding or removing fields\nfrom your breakdown, you only need to change it in a single place, the\ntlist. Additionally, expressions can be used transparently without\nneeding to repeat them. (Yes, in practice, I'd often use GROUP BY 1,\n2, say, but if you add more fields to this you need to edit in\nmultiple places.)\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:22:55 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:\n> >> I see that there'd been some chatter but not a lot of discussion about\n> >> a GROUP BY ALL feature/functionality. There certainly is utility in\n> >> such a construct IMHO.\n>\n> > I strongly dislike adding this feature. I'd only consider supporting it if\n> > it was part of the SQL standard.\n>\n> Yeah ... my recollection is that we already rejected this idea.\n> If you want to re-litigate that, \"throwing this out there\" is\n> not a sufficient argument.\n\nHeh, fair enough. I actually wrote the patch after encountering the\nsyntax in DuckDB and it seemed easy enough to add to Postgres while\nproviding some utility, then ended up seeing a thread about it later.\nI did not get the sense that this had been globally rejected; though\nthere were definitely voices against it, it seemed to trail off rather\nthan coming to a conclusion.\n\n> (Personally, I'd wonder exactly what ALL is quantified over: the\n> whole output of the FROM clause, or only columns mentioned in the\n> SELECT tlist, or what? And why that choice rather than another?)\n\nMy intention here was to basically be a shorthand for \"group by\nspecified non-aggregate fields in the select list\". Perhaps I'm not\nbeing creative enough, but what is the interpretation/use case for\nanything else? :-)\n\nWhile there are other ways to accomplish these things, making an easy\nway to GROUP BY with aggregate queries would be useful in the field,\nparticularly when doing iterative discovery work would save a lot of\ntime with a situation that is both detectable and hits users with\nerrors all the time.\n\nI'm not married to the exact syntax of this feature; anything else\nshort and consistent could work if `ALL` is considered to potentially\ngain a different interpretation in the future.\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:37:02 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Mon, Jul 22, 2024 at 4:41 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n\n> And for when this might be useful, the syntax for it already exists, although a spurious error message is generated:\n>\n> odyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;\n> ERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be used in an aggregate function\n> LINE 1: select (uw_term).*, count(*) from uw_term group by uw_term;\n> ^\n\nThis is with my patch, or existing postgres? Grouping by record is\nnot actually what this patch is trying to do, so perhaps there is some\nambiguity; this is intended to GROUP BY any select item that is not an\naggregate function.\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:42:07 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, 2024-07-23 at 08:37 -0500, David Christensen wrote:\n> My intention here was to basically be a shorthand for \"group by\n> specified non-aggregate fields in the select list\". Perhaps I'm not\n> being creative enough, but what is the interpretation/use case for\n> anything else? :-)\n\nI am somewhat against this feature.\nIt is too much magic for my taste.\n\nIt might be handy for interactive use, but I would frown at an application\nthat uses code like that, much like I'd frown at \"SELECT *\" in application code.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 23 Jul 2024 17:57:51 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 10:57 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2024-07-23 at 08:37 -0500, David Christensen wrote:\n> > My intention here was to basically be a shorthand for \"group by\n> > specified non-aggregate fields in the select list\". Perhaps I'm not\n> > being creative enough, but what is the interpretation/use case for\n> > anything else? :-)\n>\n> I am somewhat against this feature.\n> It is too much magic for my taste.\n>\n> It might be handy for interactive use, but I would frown at an application\n> that uses code like that, much like I'd frown at \"SELECT *\" in application code.\n\nSure, not everything that makes things easier is strictly necessary;\nwe could require `CAST(field AS text)` instead of `::text`, make\nsubqueries required for transforming oids into specific system tables\ninstead of `::regfoo` casts, any number of other choices, remove\n`SELECT *` as a parse option, but making it easier to do common things\ninteractively as a DBA has value as well.\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Jul 2024 11:48:35 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 9:48 AM David Christensen <david@pgguru.net> wrote:\n\n>\n> Sure, not everything that makes things easier is strictly necessary;\n> we could require `CAST(field AS text)` instead of `::text`,\n\n\nProbably should have...being standard and all. Though syntactic sugar is\nquite different from new behavior - transforming :: to CAST is\nstraight-forward.\n\nmake\n> subqueries required for transforming oids into specific system tables\n> instead of `::regfoo` casts,\n\n\nSince OID is non-standard this falls within our purview.\n\n any number of other choices, remove\n> `SELECT *` as a parse option,\n\n\nAgain, standard dictated.\n\nbut making it easier to do common things\n> interactively as a DBA has value as well.\n>\n>\nAgreed, but this isn't a clear-cut win, and doesn't have standard\nconformance to tip the scale over fully.\n\nAlso, there are so many better tools for data exploration. Removing this\nquirk only marginally closes that gap.\n\nDavid J.\n\nOn Tue, Jul 23, 2024 at 9:48 AM David Christensen <david@pgguru.net> wrote:\nSure, not everything that makes things easier is strictly necessary;\nwe could require `CAST(field AS text)` instead of `::text`,Probably should have...being standard and all. Though syntactic sugar is quite different from new behavior - transforming :: to CAST is straight-forward. make\nsubqueries required for transforming oids into specific system tables\ninstead of `::regfoo` casts,Since OID is non-standard this falls within our purview. any number of other choices, remove\n`SELECT *` as a parse option,Again, standard dictated. but making it easier to do common things\ninteractively as a DBA has value as well.Agreed, but this isn't a clear-cut win, and doesn't have standard conformance to tip the scale over fully.Also, there are so many better tools for data exploration. Removing this quirk only marginally closes that gap.David J.",
"msg_date": "Tue, 23 Jul 2024 09:59:29 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On 7/22/24 15:43, Tom Lane wrote:\n> Isaac Morland <isaac.morland@gmail.com> writes:\n>> And for when this might be useful, the syntax for it already exists,\n>> although a spurious error message is generated:\n> \n>> odyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;\n>> ERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be\n>> used in an aggregate function\n>> LINE 1: select (uw_term).*, count(*) from uw_term group by uw_term;\n>> ^\n> \n>> I'm not sure exactly what's going on here\n> \n> The SELECT entry is expanded into \"uw_term.col1, uw_term.col2,\n> uw_term.col3, ...\", and those single-column Vars don't match the\n> whole-row Var appearing in the GROUP BY list. I guess if we\n> think this is important, we could add a proof rule saying that\n> a per-column Var is functionally dependent on a whole-row Var\n> of the same relation. Odd that the point hasn't come up before\n> (though I guess that suggests that few people try this).\n\nI was just using this group-by-row feature last week to implement a temporal outer join in a way \nthat would work for arbitrary tables. Here is some example SQL:\n\nhttps://github.com/pjungwir/temporal_ops/blob/b10d65323749faa6c47956db2e8f95441e508fce/sql/outer_join.sql#L48-L66\n\nThat does `GROUP BY a` then `SELECT (x.a).*`.[1]\n\nIt is very useful for writing queries that don't want to know about the structure of the row.\n\nI noticed the same error as Isaac. I worked around the problem by wrapping it in a subquery and \ndecomposing the row outside. It's already an obscure feature, and an easy workaround might be why \nyou haven't heard complaints before. I wouldn't mind writing a patch for that rule when I get a \nchance (if no one else gets to it first.)\n\n[1] Actually I see it does `GROUP BY a, a.valid_at`, but that is surely more than I need. I think \nthat `a.valid_at` is leftover from a previous version of the query.\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Tue, 23 Jul 2024 10:02:36 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On 23.07.24 00:29, Tom Lane wrote:\n> (Personally, I'd wonder exactly what ALL is quantified over: the\n> whole output of the FROM clause, or only columns mentioned in the\n> SELECT tlist, or what? And why that choice rather than another?)\n\nLooks like the main existing implementations take it to mean all entries \nin the SELECT list that are not aggregate functions.\n\nhttps://duckdb.org/docs/sql/query_syntax/groupby.html#group-by-all\nhttps://docs.databricks.com/en/sql/language-manual/sql-ref-syntax-qry-select-groupby.html#parameters\nhttps://docs.snowflake.com/en/sql-reference/constructs/group-by#parameters\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 22:02:15 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Mon, 22 Jul 2024 at 22:55, David Christensen <david@pgguru.net> wrote:\n> I see that there'd been some chatter but not a lot of discussion about\n> a GROUP BY ALL feature/functionality. There certainly is utility in\n> such a construct IMHO.\n\n+1 from me. When exploring data, this is extremely useful because you\ndon't have to update the GROUP BY clause every time\n\nRegarding the arguments against this:\n1. I don't think this is any more unreadable than being able to GROUP\nBY 1, 2, 3. Or being able to use column aliases from the SELECT in the\nGROUP BY clause. Again this is already allowed. Personally I actually\nthink it's more readable than specifying e.g. 5 columns in the group\nby, because then I have to cross-reference with columns in the SELECT\nclause to find out if they are the same. With ALL I instantly know\nit's grouped by all\n2. This is indeed not part of the standard. But we have many things\nthat are not part of the standard. I think as long as we use the same\nsyntax as snowflake, databricks and duckdb I personally don't see a\nbig problem. Then we could try and make this be part of the standard\nin the next version of the standard.\n\n\n",
"msg_date": "Wed, 24 Jul 2024 10:56:43 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "Hi\n\nst 24. 7. 2024 v 10:57 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl>\nnapsal:\n\n> On Mon, 22 Jul 2024 at 22:55, David Christensen <david@pgguru.net> wrote:\n> > I see that there'd been some chatter but not a lot of discussion about\n> > a GROUP BY ALL feature/functionality. There certainly is utility in\n> > such a construct IMHO.\n>\n> +1 from me. When exploring data, this is extremely useful because you\n> don't have to update the GROUP BY clause every time\n>\n> Regarding the arguments against this:\n> 1. I don't think this is any more unreadable than being able to GROUP\n> BY 1, 2, 3. Or being able to use column aliases from the SELECT in the\n> GROUP BY clause. Again this is already allowed. Personally I actually\n> think it's more readable than specifying e.g. 5 columns in the group\n> by, because then I have to cross-reference with columns in the SELECT\n> clause to find out if they are the same. With ALL I instantly know\n> it's grouped by all\n> 2. This is indeed not part of the standard. But we have many things\n> that are not part of the standard. I think as long as we use the same\n> syntax as snowflake, databricks and duckdb I personally don't see a\n> big problem. Then we could try and make this be part of the standard\n> in the next version of the standard.\n>\n\n Aggregation against more columns is pretty slow and memory expensive in\nPostgres.\n\nDuckDB is an analytic database with different storage, different executors.\nI like it very much, but I am not sure if I want to see these features in\nPostgres.\n\nLot of developers are not very smart, and with proposed feature, then\ninstead to write correct and effective query, then they use just GROUP BY\nALL. Slow query should look like a slow query :-)\n\nRegards\n\nPavel\n\nHist 24. 7. 2024 v 10:57 odesílatel Jelte Fennema-Nio <postgres@jeltef.nl> napsal:On Mon, 22 Jul 2024 at 22:55, David Christensen <david@pgguru.net> wrote:\n> I see that there'd been some chatter but not a lot of discussion about\n> a GROUP BY ALL feature/functionality. There certainly is utility in\n> such a construct IMHO.\n\n+1 from me. When exploring data, this is extremely useful because you\ndon't have to update the GROUP BY clause every time\n\nRegarding the arguments against this:\n1. I don't think this is any more unreadable than being able to GROUP\nBY 1, 2, 3. Or being able to use column aliases from the SELECT in the\nGROUP BY clause. Again this is already allowed. Personally I actually\nthink it's more readable than specifying e.g. 5 columns in the group\nby, because then I have to cross-reference with columns in the SELECT\nclause to find out if they are the same. With ALL I instantly know\nit's grouped by all\n2. This is indeed not part of the standard. But we have many things\nthat are not part of the standard. I think as long as we use the same\nsyntax as snowflake, databricks and duckdb I personally don't see a\nbig problem. Then we could try and make this be part of the standard\nin the next version of the standard. Aggregation against more columns is pretty slow and memory expensive in Postgres.DuckDB is an analytic database with different storage, different executors. I like it very much, but I am not sure if I want to see these features in Postgres. Lot of developers are not very smart, and with proposed feature, then instead to write correct and effective query, then they use just GROUP BY ALL. Slow query should look like a slow query :-)RegardsPavel",
"msg_date": "Wed, 24 Jul 2024 11:07:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 6:53 PM David Christensen <david@pgguru.net> wrote:\n>\n> On Mon, Jul 22, 2024 at 4:34 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Mon, Jul 22, 2024 at 1:55 PM David Christensen <david@pgguru.net> wrote:\n> >>\n> >> I see that there'd been some chatter but not a lot of discussion about\n> >> a GROUP BY ALL feature/functionality. There certainly is utility in\n> >> such a construct IMHO.\n> >>\n> >> Still need some docs; just throwing this out there and getting some feedback.\n> >>\n> >\n> > I strongly dislike adding this feature. I'd only consider supporting it if it was part of the SQL standard.\n> >\n> > Code is written once and read many times. This feature caters to the writer, not the reader. And furthermore usage of this is prone to be to the writer's detriment as well.\n>\n> I'd say this feature (at least for me) caters to the investigator;\n> someone who is interactively looking at data hence why it would cater\n> to the writer. Consider acquainting yourself with a large table that\n> has a large number of annoying-named fields where you want to look at\n> how different data is correlated or broken-down. Something along the\n> lines of:\n\nTo me this looks like a feature that a data exploration tool may\nimplement instead of being part of the server. It would then provide\nmore statistics about each correlation/column set etc.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 24 Jul 2024 19:42:01 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 9:37 AM David Christensen <david@pgguru.net> wrote:\n\n> I'm not married to the exact syntax of this feature; anything else short\n> and consistent could work if `ALL` is considered to potentially\n> gain a different interpretation in the future.\n>\n\nGROUP BY *\n\nJust kidding. But a big +1 to the whole concept. It would have been\nextraordinarily useful over the years.\n\nCheers,\nGreg\n\nOn Tue, Jul 23, 2024 at 9:37 AM David Christensen <david@pgguru.net> wrote:I'm not married to the exact syntax of this feature; anything else short and consistent could work if `ALL` is considered to potentially\ngain a different interpretation in the future.GROUP BY *Just kidding. But a big +1 to the whole concept. It would have been extraordinarily useful over the years.Cheers,Greg",
"msg_date": "Tue, 13 Aug 2024 10:57:31 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
}
] |
[
{
"msg_contents": "Currently, the new fields are only supported at the end, Cancannot move the location of the field when editing the table, This does not seem to be an elegant approach\nCurrently, the new fields are only supported at the end, Cancannot move the location of the field when editing the table, This does not seem to be an elegant approach",
"msg_date": "Tue, 23 Jul 2024 10:54:38 +0800",
"msg_from": "\"=?ISO-8859-1?B?Lg==?=\" <962440713@qq.com>",
"msg_from_op": true,
"msg_subject": "Add new fielids only at last"
},
{
"msg_contents": "Hi,\n\n> Currently, the new fields are only supported at the end, Cancannot move the location of the field when editing the table, This does not seem to be an elegant approach\n\nPretty confident it was discussed many times before. Please use the search.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 23 Jul 2024 14:08:42 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Add new fielids only at last"
}
] |
[
{
"msg_contents": "Hi!\n\nThe permissions given by GRANT and POLICY statements seem to always be\ncombined \"permissively\". In other words, if role `foo` inherits from roles\n`can_access_all_columns_but_no_rows` and\n`can_access_all_rows_but_no_columns`, then `foo` would be able to access\nall rows and all columns of the table in question. I wonder, what it would\ntake to extend Postgres to allow to combine them \"restrictively\".\n\nOne hacky way to do so would be to apply the following logic when\nevaluating a query:\n1) Use the RLS policies to filter out the rows that should be visible to\nthe given user. On each row, record the set roles that allow the operation.\n2) For each row and for each column, iterate through the intersection of\n(recorded roles, roles the current roles inherits from) to see if the\ncolumn should be given access to. If not, return a null in that position.\n(For updates/inserts error out).\n\nObviously, this would be a departure from SQL standard. But other than\nthat, is this a valid feature idea? I am not a fan of shoehorning nulls for\nthis, but given that the database can omit rows when using RLS, nulls don't\nseem to be too far from that.\n\nThe reason I'm bringing it up is that it seems to solve the following\nproblem nicely: imagine you have a table `people`, and an association table\nbetween two people called `friends`. Each person should see their own data\nin `people` and a subset of columns of `people` if they are friends.\n(Please refer to the attached file for definitions).\n\nIf there's an easier solution that's possible today I'd be curious to learn\nabout it. The best I could come up with (for queries only) is defining\nviews that do this \"null-masking\".Something like this:\n\nCREATE VIEW people_for_person AS\nSELECT\n id,\n CASE WHEN roles.is_self OR roles.is_friend THEN email END AS email,\n CASE WHEN roles.is_self THEN password END AS password\nFROM people p\nJOIN LATERAL (\n SELECT p.id = current_setting('app.user_id')::INTEGER AS is_self,\n EXISTS (\n SELECT true\n FROM friends f\n WHERE f.person_id = p.id\n AND f.friend_id = current_setting('app.user_id')::INTEGER\n ) AS is_friend\n) roles ON true;\n\nCheers,\nBakhtiyar",
"msg_date": "Mon, 22 Jul 2024 22:03:30 -0700",
"msg_from": "Bakhtiyar Neyman <bakhtiyarneyman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Restrictive combination of GRANT and POLICY"
}
] |
[
{
"msg_contents": "On 23 Jul 2024, at 00:40, Isaac Morland <isaac.morland@gmail.com> wrote:odyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;ERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be used in an aggregate functionLINE 1: select (uw_term).*, count(*) from uw_term group by uw_term;AFAIR this problem was solved in my implementation [0]On 23 Jul 2024, at 01:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:(Personally, I'd wonder exactly what ALL is quantified over: thewhole output of the FROM clause, or only columns mentioned in theSELECT tlist, or what? And why that choice rather than another?)I'd like to have GROUP BY AUTO (I also proposed version GROUP BY SURPRISE ME). But I wouldn't like to open pandora box of syntax sugar extensions which may will be incompatible with future standards.If we could have extensible grammar - I'd be happy to have a lot of such enhancements. My top 2 are FROM table SELECT column and better GROUP BY.Best regards, Andrey Borodin.[0] https://www.postgresql.org/message-id/flat/CAAhFRxjyTO5BHn9y1oOSEp0TtpTDTTTb7HJBNhTG%2Bi3-hXC0XQ%40mail.gmail.com",
"msg_date": "Tue, 23 Jul 2024 18:21:55 +0500",
"msg_from": "Andrei Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, Jul 23, 2024 at 8:21 AM Andrei Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> On 23 Jul 2024, at 00:40, Isaac Morland <isaac.morland@gmail.com> wrote:\n>\n> odyssey=> select (uw_term).*, count(*) from uw_term group by uw_term;\n> ERROR: column \"uw_term.term_id\" must appear in the GROUP BY clause or be used in an aggregate function\n> LINE 1: select (uw_term).*, count(*) from uw_term group by uw_term;\n>\n>\n> AFAIR this problem was solved in my implementation [0]\n>\n> On 23 Jul 2024, at 01:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> (Personally, I'd wonder exactly what ALL is quantified over: the\n> whole output of the FROM clause, or only columns mentioned in the\n> SELECT tlist, or what? And why that choice rather than another?)\n>\n>\n> I'd like to have GROUP BY AUTO (I also proposed version GROUP BY SURPRISE ME). But I wouldn't like to open pandora box of syntax sugar extensions which may will be incompatible with future standards.\n> If we could have extensible grammar - I'd be happy to have a lot of such enhancements. My top 2 are FROM table SELECT column and better GROUP BY.\n\nGROUP BY AUTO also seems fine here to me; I understand the desire to\navoid major incompatible syntax changes; GROUP BY ALL does exist in\nmultiple products so it's not unprecedented.\n\nI wrote my patch before seeing your thread, sorry I missed that. :-)\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Jul 2024 08:39:20 -0500",
"msg_from": "David Christensen <david@pgguru.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "On Tue, 23 Jul 2024 at 15:22, Andrei Borodin <x4mmm@yandex-team.ru> wrote:\n> I'd like to have GROUP BY AUTO (I also proposed version GROUP BY SURPRISE ME). But I wouldn't like to open pandora box of syntax sugar extensions which may will be incompatible with future standards.\n> If we could have extensible grammar - I'd be happy to have a lot of such enhancements. My top 2 are FROM table SELECT column and better GROUP BY.\n\nPersonally my number one enhancement would be allowing a trailing\ncomma after the last column in the SELECT clause.\n\n\n",
"msg_date": "Wed, 24 Jul 2024 10:58:12 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
},
{
"msg_contents": "\n\n> On 24 Jul 2024, at 13:58, Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> \n> On Tue, 23 Jul 2024 at 15:22, Andrei Borodin <x4mmm@yandex-team.ru> wrote:\n>> I'd like to have GROUP BY AUTO (I also proposed version GROUP BY SURPRISE ME). But I wouldn't like to open pandora box of syntax sugar extensions which may will be incompatible with future standards.\n>> If we could have extensible grammar - I'd be happy to have a lot of such enhancements. My top 2 are FROM table SELECT column and better GROUP BY.\n> \n> Personally my number one enhancement would be allowing a trailing\n> comma after the last column in the SELECT clause.\n\nYes, trailing comma sounds great too.\n\nOne more similar syntax sugar I can think of. I see lots of queries like\nSELECT somtheing\nFROM table1\nWHERE 1=1\nand id = x\n--and col1 = val1\nand col2 = val2\n\nI was wondering where does that \"1=1\" comes from. It's because developer comment condition one by one like \"--, col1 = val1\". And they do not want to cope with and\\or continuation.\n\n\nBest regards, Andrey Borodin.\n\nPS. Seems like Mail.App mangled my previous message despite using plain text. It's totally broken in archives...\n\n",
"msg_date": "Wed, 24 Jul 2024 16:08:01 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] GROUP BY ALL"
}
] |
[
{
"msg_contents": "Using version 16, seems strange when toast needs to be created. Tested with\ndomain being numeric or varchar(10) with the same results.\n\nAnd If that domain is integer then no toast is created.\n\nI think none of these tables should have a toast, right ?\n\npostgres=# create domain mynum as numeric(15,2);\nCREATE DOMAIN\npostgres=# create table tab1(id integer, num numeric(15,2));\nCREATE TABLE\npostgres=# create table tab2(id integer, num mynum);\nCREATE TABLE\npostgres=# create table tab3(id integer, num mynum storage main);\nCREATE TABLE\npostgres=# create table tab4(id integer, num mynum storage plain);\nCREATE TABLE\npostgres=# select relname, reltoastrelid from pg_class where relname ~\n'tab\\d$' order by 1;\n relname | reltoastrelid\n---------+---------------\n tab1 | 0\n tab2 | 25511\n tab3 | 25516\n tab4 | 0\n(4 rows)\n\nregards\nMarcos\n\nUsing version 16, seems strange when toast needs to be created. Tested with domain being numeric or varchar(10) with the same results.And If that domain is integer then no toast is created.I think none of these tables should have a toast, right ?postgres=# create domain mynum as numeric(15,2);CREATE DOMAINpostgres=# create table tab1(id integer, num numeric(15,2));CREATE TABLEpostgres=# create table tab2(id integer, num mynum);CREATE TABLEpostgres=# create table tab3(id integer, num mynum storage main);CREATE TABLEpostgres=# create table tab4(id integer, num mynum storage plain);CREATE TABLEpostgres=# select relname, reltoastrelid from pg_class where relname ~ 'tab\\d$' order by 1; relname | reltoastrelid---------+--------------- tab1 | 0 tab2 | 25511 tab3 | 25516 tab4 | 0(4 rows)regardsMarcos",
"msg_date": "Tue, 23 Jul 2024 15:35:13 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Useless toast"
},
{
"msg_contents": "On 23.07.24 20:35, Marcos Pegoraro wrote:\n> Using version 16, seems strange when toast needs to be created. \n> Tested with domain being numeric or varchar(10) with the same results.\n> \n> And If that domain is integer then no toast is created.\n> \n> I think none of these tables should have a toast, right ?\n\nThe mechanism that determines whether a toast table is needed only \nconsiders the data type, not the \"typmod\" (arguments of the data type). \nSo this is perhaps suboptimal, but this logic just doesn't exist.\n\nAlso, note that varchar(10) means 10 characters, not 10 bytes, so you \ncan't necessarily draw conclusions about storage size from that. There \naren't any supported character encodings that would encode 10 characters \ninto more bytes than the toast threshold, so this is just theoretical, \nbut it would be hard to decide what the actual threshold would be in \npractice.\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 21:39:40 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Useless toast"
},
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> Using version 16, seems strange when toast needs to be created. Tested with\n> domain being numeric or varchar(10) with the same results.\n\nDomains are fairly opaque when it comes to maximum length.\nI cannot get excited about adding code to make them less so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 15:40:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Useless toast"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 23.07.24 20:35, Marcos Pegoraro wrote:\n>> I think none of these tables should have a toast, right ?\n\n> The mechanism that determines whether a toast table is needed only \n> considers the data type, not the \"typmod\" (arguments of the data type). \n> So this is perhaps suboptimal, but this logic just doesn't exist.\n\nNot true, see type_maximum_size() in format_type.c. But I'm\nuninterested in making that drill down into domains, or at least\nthat would not be my first concern if we were trying to improve it.\n(The first concern would be to let extension types in on the fun.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2024 15:45:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Useless toast"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nA recent buildfarm failure [1] shows that the test 031_recovery_conflict.pl\ncan fail yet another way:\n 23/296 postgresql:recovery / recovery/031_recovery_conflict ERROR 11.55s exit status 1\n\n[07:58:53.979](0.255s) ok 11 - tablespace conflict: logfile contains terminated connection due to recovery conflict\n[07:58:54.058](0.080s) not ok 12 - tablespace conflict: stats show conflict on standby\n[07:58:54.059](0.000s) # Failed test 'tablespace conflict: stats show conflict on standby'\n# at /home/bf/bf-build/rorqual/REL_17_STABLE/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 332.\n[07:58:54.059](0.000s) # got: '0'\n# expected: '1'\n\nI managed to reproduce a similar failure by running multiple test instances\nin parallel on a slow VM. With extra logging added, I see in a failed run\nlog:\n10\n10 # Failed test 'startup deadlock: stats show conflict on standby'\n10 # at t/031_recovery_conflict.pl line 368.\n10 # got: '0'\n10 # expected: '1'\n10 [19:48:19] t/031_recovery_conflict.pl ..\n10 Dubious, test returned 1 (wstat 256, 0x100)\n\n2024-07-23 19:48:13.966 UTC [1668402:12][client backend][1/2:0] LOG: !!!pgstat_report_recovery_conflict| reason: 13, \npgstat_track_counts: 1 at character 15\n2024-07-23 19:48:13.966 UTC [1668402:13][client backend][1/2:0] STATEMENT: SELECT * FROM test_recovery_conflict_table2;\n2024-07-23 19:48:13.966 UTC [1668402:14][client backend][1/2:0] ERROR: canceling statement due to conflict with \nrecovery at character 15\n2024-07-23 19:48:13.966 UTC [1668402:15][client backend][1/2:0] DETAIL: User transaction caused buffer deadlock with \nrecovery.\n...\n2024-07-23 19:48:14.129 UTC [1668805:8][client backend][5/2:0] LOG: statement: SELECT confl_deadlock FROM \npg_stat_database_conflicts WHERE datname='test_db';\n...\n2024-07-23 19:48:14.148 UTC [1668402:16][client backend][1/0:0] LOG: !!!pgstat_database_flush_cb| nowait: 0\n\nThis failure can be reproduced easily with a sleep added as below:\n@@ -514,6 +514,7 @@ pgstat_shutdown_hook(int code, Datum arg)\n if (OidIsValid(MyDatabaseId))\n pgstat_report_disconnect(MyDatabaseId);\n\n+if (rand() % 5 == 0) pg_usleep(100000);\n pgstat_report_stat(true);\n\nBy running the test in a loop, I get miscellaneous\n\"stats show conflict on standby\" failures, including:\niteration 19\n# +++ tap check in src/test/recovery +++\nt/031_recovery_conflict.pl .. 1/?\n# Failed test 'buffer pin conflict: stats show conflict on standby'\n# at t/031_recovery_conflict.pl line 332.\n# got: '0'\n# expected: '1'\nt/031_recovery_conflict.pl .. 17/? # Looks like you failed 1 test of 18.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-07-23%2007%3A56%3A35\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 24 Jul 2024 06:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "The 031_recovery_conflict.pl test might fail due to late pgstat\n entries flushing"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile creating a patch which allows ALTER SUBSCRIPTION SET (two_phase) [1],\nwe found some issues related with logical replication and two_phase. I think this\ncan happen not only HEAD but PG14+, but for now I shared patches for HEAD.\n\nIssue #1\n\nWhen handling a PREPARE message, the subscriber mistook the wrong lsn position\n(the end position of the last commit) as the end position of the current prepare.\nThis can be fixed by adding a new global variable to record the end position of\nthe last prepare. 0001 patch fixes the issue.\n\nIssue #2\n\nWhen the subscriber enables two-phase commit but doesn't set max_prepared_transaction >0\nand a transaction is prepared on the publisher, the apply worker reports an ERROR\non the subscriber. After that, the prepared transaction is not replayed, which\nmeans it's lost forever. Attached script can emulate the situation.\n\n--\nERROR: prepared transactions are disabled\nHINT: Set \"max_prepared_transactions\" to a nonzero value.\n--\n\nThe reason is that we advanced the origin progress when aborting the\ntransaction as well (RecordTransactionAbort->replorigin_session_advance). So,\nafter setting replorigin_session_origin_lsn, if any ERROR happens when preparing\nthe transaction, the transaction aborts which incorrectly advances the origin lsn.\n\nAn easiest fix is to reset session replication origin before calling the\nRecordTransactionAbort(). I think this can happen when 1) LogicalRepApplyLoop()\nraises an ERROR or 2) apply worker exits. 0002 patch fixes the issue.\n\n\nHow do you think?\n\n[1]: https://www.postgresql.org/message-id/flat/8fab8-65d74c80-1-2f28e880@39088166\n\nBest regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 24 Jul 2024 06:55:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 12:25 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> While creating a patch which allows ALTER SUBSCRIPTION SET (two_phase) [1],\n> we found some issues related with logical replication and two_phase. I think this\n> can happen not only HEAD but PG14+, but for now I shared patches for HEAD.\n>\n> Issue #1\n>\n> When handling a PREPARE message, the subscriber mistook the wrong lsn position\n> (the end position of the last commit) as the end position of the current prepare.\n> This can be fixed by adding a new global variable to record the end position of\n> the last prepare. 0001 patch fixes the issue.\n\nThanks for the patches. I have started reviewing this. I reviewed and\ntested patch001 alone.\n\nI have a query, shouldn't the local-lsn stored in\napply_handle_commit_prepared() be the end position of\n'COMMIT_PREPARED' instead of 'PREPARE'? I put additional logging on\nsub and got this:\n\nLOG: apply_handle_prepare - prepare_data.end_lsn: 0/15892E0 ,\nXactLastPrepareEnd: 0/1537FD8.\nLOG: apply_handle_commit_prepared - prepare_data.end_lsn: 0/1589318\n, XactLastPrepareEnd: 0/1537FD8.\n\nIn apply_handle_prepare(), remote-lsn ('0/15892E0') is end position of\n'PREPARE' and in apply_handle_commit_prepared(), remote-lsn\n('0/1589318') is end position of 'COMMIT_PREPARED', while local-lsn in\nboth cases is end-lsn of 'PREPARE'. Details at [1].\n\nShouldn't we use 'XactLastCommitEnd' in apply_handle_commit_prepared()\nwhich is the end position of last COMMIT_PREPARED? It is assigned in\nthe below flow:\napply_handle_commit_prepared-->CommitTransactionCommand...->RecordTransactionCommit?\n\nPlease let me know if I have misunderstood.\n\n[1]:\n\nPub:\n------\nSELECT * FROM pg_get_wal_record_info('0/15892E0');\nstart_lsn | end_lsn | prev_lsn | record_type\n---------+-----------+-----------+-----------------\n0/15892E0 | 0/1589318 | 0/15891E8 | COMMIT_PREPARED\n\n--see prev_lsn\nSELECT * FROM pg_get_wal_record_info('0/15891E8');\nstart_lsn | end_lsn | prev_lsn | record_type\n---------+-----------+-----------+-------------\n0/15891E8 | 0/15892E0 | 0/15891A8 | PREPARE\n\nSELECT * FROM pg_get_wal_record_info('0/1589318');\nstart_lsn | end_lsn | prev_lsn | record_type\n---------+-----------+-----------+---------------\n0/1589318 | 0/1589350 | 0/15892E0 | RUNNING_XACTS\n\n--see prev_lsn\nSELECT * FROM pg_get_wal_record_info('0/15892E0');\nstart_lsn | end_lsn | prev_lsn | record_type\n---------+-----------+-----------+-----------------\n0/15892E0 | 0/1589318 | 0/15891E8 | COMMIT_PREPARED\n\n\nSub:\n------\nSELECT * FROM pg_get_wal_record_info('0/1537FD8');\nstart_lsn | end_lsn | prev_lsn | record_type\n---------+-----------+-----------+------------------\n0/1537FD8 | 0/1538030 | 0/1537ED0 | COMMIT_PREPARED\n\n--see prev_lsn:\nSELECT * FROM pg_get_wal_record_info('0/1537ED0');\nstart_lsn | end_lsn | prev_lsn |record_type\n---------+-----------+-----------+------------\n0/1537ED0 | 0/1537FD8 | 0/1537E90 |PREPARE\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 7 Aug 2024 12:37:41 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Wed, Aug 7, 2024 at 12:38 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Wed, Jul 24, 2024 at 12:25 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Issue #1\n> >\n> > When handling a PREPARE message, the subscriber mistook the wrong lsn position\n> > (the end position of the last commit) as the end position of the current prepare.\n> > This can be fixed by adding a new global variable to record the end position of\n> > the last prepare. 0001 patch fixes the issue.\n>\n> Thanks for the patches. I have started reviewing this. I reviewed and\n> tested patch001 alone.\n>\n\nIt makes sense. As both are different bugs we should discuss them separately.\n\n> I have a query, shouldn't the local-lsn stored in\n> apply_handle_commit_prepared() be the end position of\n> 'COMMIT_PREPARED' instead of 'PREPARE'? I put additional logging on\n> sub and got this:\n>\n> LOG: apply_handle_prepare - prepare_data.end_lsn: 0/15892E0 ,\n> XactLastPrepareEnd: 0/1537FD8.\n> LOG: apply_handle_commit_prepared - prepare_data.end_lsn: 0/1589318\n> , XactLastPrepareEnd: 0/1537FD8.\n>\n> In apply_handle_prepare(), remote-lsn ('0/15892E0') is end position of\n> 'PREPARE' and in apply_handle_commit_prepared(), remote-lsn\n> ('0/1589318') is end position of 'COMMIT_PREPARED', while local-lsn in\n> both cases is end-lsn of 'PREPARE'. Details at [1].\n>\n> Shouldn't we use 'XactLastCommitEnd' in apply_handle_commit_prepared()\n> which is the end position of last COMMIT_PREPARED? It is assigned in\n> the below flow:\n> apply_handle_commit_prepared-->CommitTransactionCommand...->RecordTransactionCommit?\n>\n\nI also think so. Additionally, I feel a test case (or some description\nof the bug that can arise) should be provided for issue-1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Aug 2024 15:32:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Wed, Aug 7, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I also think so. Additionally, I feel a test case (or some description\n> of the bug that can arise) should be provided for issue-1.\n>\n\nIIUC, the problem could be that we would end up updating the wrong\nlocal_end LSN in lsn_mappings via store_flush_position(). Then\nget_flush_position() could end up computing the wrong flush position\nand send the confirmation of flush to the publisher even when it is\nnot flushed. This ideally could lead to a situation where the prepared\ntransaction is not flushed to disk on the subscriber and because\npublisher would have gotten the confirmation earlier than required, it\nwon't send the prepared transaction again. I think this theory is not\ntrue for prepare transactions because we always flush WAL of prepare\neven for asynchronous commit mode. See EndPrepare(). So, if my\nanalysis is correct, this shouldn't be a bug and ideally, we should\nupdate local_end LSN as InvalidXLogRecPtr and add appropriate\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Aug 2024 17:43:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Wed, Aug 7, 2024 at 5:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I also think so. Additionally, I feel a test case (or some description\n> > of the bug that can arise) should be provided for issue-1.\n> >\n>\n> IIUC, the problem could be that we would end up updating the wrong\n> local_end LSN in lsn_mappings via store_flush_position(). Then\n> get_flush_position() could end up computing the wrong flush position\n> and send the confirmation of flush to the publisher even when it is\n> not flushed. This ideally could lead to a situation where the prepared\n> transaction is not flushed to disk on the subscriber and because\n> publisher would have gotten the confirmation earlier than required, it\n> won't send the prepared transaction again.\n\nYes, that is what my understanding was.\n\n> I think this theory is not\n> true for prepare transactions because we always flush WAL of prepare\n> even for asynchronous commit mode. See EndPrepare().\n\nOkay, I was not aware of this. Thanks for explaining.\n\n> So, if my\n> analysis is correct, this shouldn't be a bug and ideally, we should\n> update local_end LSN as InvalidXLogRecPtr and add appropriate\n> comments.\n\nOkay, we can do that. Then get_flush_position() can also be changed to\n*explicitly* deal with the case where local_end is InvalidXLogRecPtr.\nHaving said that, even though it is not a bug, shouldn't we still have\nthe correct mapping updated in lsn_mapping? When remote_end is PREPARE\nOr COMMIT_PREPARED, local_end should also point to the same?\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 8 Aug 2024 08:53:49 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 8:54 AM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2024 at 5:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > So, if my\n> > analysis is correct, this shouldn't be a bug and ideally, we should\n> > update local_end LSN as InvalidXLogRecPtr and add appropriate\n> > comments.\n>\n> Okay, we can do that. Then get_flush_position() can also be changed to\n> *explicitly* deal with the case where local_end is InvalidXLogRecPtr.\n>\n\nAFAICS, it should be handled without any change as the value of\nInvalidXLogRecPtr is 0. So, it should be less than equal to the\nlocal_flush position.\n\n> Having said that, even though it is not a bug, shouldn't we still have\n> the correct mapping updated in lsn_mapping? When remote_end is PREPARE\n> Or COMMIT_PREPARED, local_end should also point to the same?\n>\n\nIdeally yes, but introducing a new global variable just for this\npurpose doesn't sound advisable. We can add in comments that in the\nfuture, if adding such a variable serves some purpose then we can\nsurely extend the functionality.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Aug 2024 09:53:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 12:25 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> While creating a patch which allows ALTER SUBSCRIPTION SET (two_phase) [1],\n> we found some issues related with logical replication and two_phase. I think this\n> can happen not only HEAD but PG14+, but for now I shared patches for HEAD.\n>\n> Issue #1\n>\n> When handling a PREPARE message, the subscriber mistook the wrong lsn position\n> (the end position of the last commit) as the end position of the current prepare.\n> This can be fixed by adding a new global variable to record the end position of\n> the last prepare. 0001 patch fixes the issue.\n>\n> Issue #2\n>\n> When the subscriber enables two-phase commit but doesn't set max_prepared_transaction >0\n> and a transaction is prepared on the publisher, the apply worker reports an ERROR\n> on the subscriber. After that, the prepared transaction is not replayed, which\n> means it's lost forever. Attached script can emulate the situation.\n>\n> --\n> ERROR: prepared transactions are disabled\n> HINT: Set \"max_prepared_transactions\" to a nonzero value.\n> --\n>\n> The reason is that we advanced the origin progress when aborting the\n> transaction as well (RecordTransactionAbort->replorigin_session_advance). So,\n> after setting replorigin_session_origin_lsn, if any ERROR happens when preparing\n> the transaction, the transaction aborts which incorrectly advances the origin lsn.\n>\n> An easiest fix is to reset session replication origin before calling the\n> RecordTransactionAbort(). I think this can happen when 1) LogicalRepApplyLoop()\n> raises an ERROR or 2) apply worker exits. 0002 patch fixes the issue.\n>\n\nCan we start a separate thread to issue 2? I understand that this one\nis also related to two_phase but since both are different issues it is\nbetter to discuss in separate threads. This will also help us refer to\nthe discussion in future if required.\n\nBTW, why did the 0002 patch change the below code:\n--- a/src/include/replication/worker_internal.h\n+++ b/src/include/replication/worker_internal.h\n@@ -164,7 +164,8 @@ typedef struct ParallelApplyWorkerShared\n\n /*\n * XactLastCommitEnd or XactLastPrepareEnd from the parallel apply worker.\n- * This is required by the leader worker so it can update the lsn_mappings.\n+ * This is required by the leader worker so it can update the\n+ * lsn_mappings.\n */\n XLogRecPtr last_commit_end;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Aug 2024 10:22:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Can we start a separate thread to issue 2? I understand that this one\r\n> is also related to two_phase but since both are different issues it is\r\n> better to discuss in separate threads. This will also help us refer to\r\n> the discussion in future if required.\r\n\r\nYou are right, we should discuss one topic per thread. Forked: [1].\r\n\r\n> BTW, why did the 0002 patch change the below code:\r\n> --- a/src/include/replication/worker_internal.h\r\n> +++ b/src/include/replication/worker_internal.h\r\n> @@ -164,7 +164,8 @@ typedef struct ParallelApplyWorkerShared\r\n> \r\n> /*\r\n> * XactLastCommitEnd or XactLastPrepareEnd from the parallel apply worker.\r\n> - * This is required by the leader worker so it can update the lsn_mappings.\r\n> + * This is required by the leader worker so it can update the\r\n> + * lsn_mappings.\r\n> */\r\n> XLogRecPtr last_commit_end;\r\n>\r\n\r\nOpps. Fixed version is posted in [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5692FAC23BE40C69DA8ED4AFF5B92%40TYAPR01MB5692.jpnprd01.prod.outlook.com\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 8 Aug 2024 05:09:25 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 9:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 8, 2024 at 8:54 AM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Wed, Aug 7, 2024 at 5:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > So, if my\n> > > analysis is correct, this shouldn't be a bug and ideally, we should\n> > > update local_end LSN as InvalidXLogRecPtr and add appropriate\n> > > comments.\n> >\n> > Okay, we can do that. Then get_flush_position() can also be changed to\n> > *explicitly* deal with the case where local_end is InvalidXLogRecPtr.\n> >\n>\n> AFAICS, it should be handled without any change as the value of\n> InvalidXLogRecPtr is 0. So, it should be less than equal to the\n> local_flush position.\n\nYes, existing code will work, no doubt about that. But generally we\nexplictly use XLogRecPtrIsInvalid if we need to include or exclude\nlsn=0 in some logic. We do not consider 0 lsn for comparisons like\nthis which we currently have in get_flush_position. Thus stated for an\nexplicit check. But, yes, the current code will work.\n\n> > Having said that, even though it is not a bug, shouldn't we still have\n> > the correct mapping updated in lsn_mapping? When remote_end is PREPARE\n> > Or COMMIT_PREPARED, local_end should also point to the same?\n> >\n>\n> Ideally yes, but introducing a new global variable just for this\n> purpose doesn't sound advisable. We can add in comments that in the\n> future, if adding such a variable serves some purpose then we can\n> surely extend the functionality.\n\nOkay. Sounds reasonable.\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 8 Aug 2024 14:24:46 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "Dear Amit, Shveta,\r\n\r\nThanks for discussing!\r\n\r\nI reported the issue because 1) I feared the risk of data loss and 2) simply\r\nbecause the coding looked incorrect. However, per discussion, I understood that\r\nit wouldn't lead to loss, and adding a global variable was unacceptable in this\r\ncase. I modified the patch completely.\r\n\r\nThe attached patch avoids using the LastCommitLSN as the local_lsn while applying\r\nPREPARE. get_flush_position() was not changed. Also, it contains changes that\r\nhave not been discussed yet:\r\n\r\n- Set last_commit_end to InvaldXLogPtr in the PREPARE case.\r\n This causes the same result as when the stream option is not \"parallel.\"\r\n- XactLastCommitEnd was replaced even ROLLBACK PREPARED case.\r\n Since the COMMIT PREPARED record is flushed in RecordTransactionAbortPrepared(),\r\n there is no need to ensure the WAL must be sent.\r\n\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 8 Aug 2024 09:07:00 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 2:37 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thanks for discussing!\n>\n> I reported the issue because 1) I feared the risk of data loss and 2) simply\n> because the coding looked incorrect. However, per discussion, I understood that\n> it wouldn't lead to loss, and adding a global variable was unacceptable in this\n> case. I modified the patch completely.\n>\n> The attached patch avoids using the LastCommitLSN as the local_lsn while applying\n> PREPARE. get_flush_position() was not changed. Also, it contains changes that\n> have not been discussed yet:\n>\n> - Set last_commit_end to InvaldXLogPtr in the PREPARE case.\n> This causes the same result as when the stream option is not \"parallel.\"\n> - XactLastCommitEnd was replaced even ROLLBACK PREPARED case.\n> Since the COMMIT PREPARED record is flushed in RecordTransactionAbortPrepared(),\n> there is no need to ensure the WAL must be sent.\n>\n\nThe code changes look mostly good to me. I have changed/added a few\ncomments in the attached modified version.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 8 Aug 2024 17:53:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 5:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 8, 2024 at 2:37 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thanks for discussing!\n> >\n> > I reported the issue because 1) I feared the risk of data loss and 2) simply\n> > because the coding looked incorrect. However, per discussion, I understood that\n> > it wouldn't lead to loss, and adding a global variable was unacceptable in this\n> > case. I modified the patch completely.\n> >\n> > The attached patch avoids using the LastCommitLSN as the local_lsn while applying\n> > PREPARE. get_flush_position() was not changed. Also, it contains changes that\n> > have not been discussed yet:\n> >\n> > - Set last_commit_end to InvaldXLogPtr in the PREPARE case.\n> > This causes the same result as when the stream option is not \"parallel.\"\n> > - XactLastCommitEnd was replaced even ROLLBACK PREPARED case.\n> > Since the COMMIT PREPARED record is flushed in RecordTransactionAbortPrepared(),\n> > there is no need to ensure the WAL must be sent.\n> >\n>\n> The code changes look mostly good to me. I have changed/added a few\n> comments in the attached modified version.\n>\n\nCode changes with Amit's correction patch look good to me.\n\nthanks\nShveta\n\n\n",
"msg_date": "Fri, 9 Aug 2024 09:03:10 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "Dear Amit,\r\n\r\n>\r\n> The code changes look mostly good to me. I have changed/added a few\r\n> comments in the attached modified version.\r\n>\r\n\r\nThanks for updating the patch! It LGTM. I've tested your patch and confirmed\r\nit did not cause the data loss. I used the source which was applied v3 and additional\r\nfix to visualize the replication command [1].\r\n\r\nMethod\r\n======\r\n\r\n1. Construct a logical replication system with two_phase = true and\r\n synchronous_commit = false\r\n2. attach a walwriter of the subscriber to stop the process\r\n3. Start a transaction and prepare it for the publisher.\r\n4. Wait until the worker replies to the publisher.\r\n5. Stop the subscriber\r\n6. Restart subscriber.\r\n7. Do COMMIT PREPARED\r\n\r\nAttached script can construct the same situation.\r\n\r\nResult\r\n======\r\n\r\nAfter the step 5, I ran pg_waldump and confirmed PREPARE record existed on\r\nthe subscriber.\r\n\r\n```\r\n$ pg_waldump data_sub/pg_wal/000000010000000000000001\r\n...\r\nrmgr: Transaction len..., desc: PREPARE gid pg_gid_16389_741: ...\r\nrmgr: XLOG len..., desc: CHECKPOINT_SHUTDOWN ...\r\n```\r\n\r\nAlso, after the step 7, I confirmed that only the COMMIT PREPARED record\r\nwas sent because log output the below line. \"75\" means the ASCII character 'K';\r\nthis indicated that the replication message corresponded to COMMIT PREPARED.\r\n```\r\nLOG: XXX got message 75\r\n```\r\n\r\n\r\n\r\nAdditionally, I did another test, which is basically same as above but 1) XLogFlush()\r\nin EndPrepare() was commented out and 2) kill -9 was used at step 5 to emulate a\r\ncrash. Since the PREPAREd transaction cannot survive on the subscriber in this case,\r\nso COMMIT PREPARED command on publisher causes an ERROR on the subscriber.\r\n```\r\nERROR: prepared transaction with identifier \"pg_gid_16389_741\" does not exist\r\nCONTEXT: processing remote data for replication origin \"pg_16389\" during message\r\n type \"COMMIT PREPARED\" in transaction 741, finished at 0/15463C0\r\n```\r\nI think this shows that the backend process can ensure the WAL is persisted so data loss\r\nwon't occur.\r\n\r\n\r\n[1]:\r\n```\r\n@@ -3297,6 +3297,8 @@ apply_dispatch(StringInfo s)\r\n saved_command = apply_error_callback_arg.command;\r\n apply_error_callback_arg.command = action;\r\n \r\n+ elog(LOG, \"XXX got message %d\", action);\r\n```\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 9 Aug 2024 05:03:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Found issues related with logical replication and 2PC"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 10:34 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> >\n> > The code changes look mostly good to me. I have changed/added a few\n> > comments in the attached modified version.\n> >\n>\n> Thanks for updating the patch! It LGTM. I've tested your patch and confirmed\n> it did not cause the data loss.\n>\n\nThanks for the additional testing. I have pushed this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 9 Aug 2024 10:42:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Found issues related with logical replication and 2PC"
}
] |
[
{
"msg_contents": "Today I compiled PostgreSQL master branch with -fno-strict-aliasing\r\ncompile option removed (previous discussions on the $subject [1]). gcc\r\nversion is 9.4.0.\r\n\r\nThere are a few places where $subject warning printed.\r\n\r\nIn file included from ../../../src/include/nodes/pg_list.h:42,\r\n from ../../../src/include/access/tupdesc.h:19,\r\n from ../../../src/include/access/htup_details.h:19,\r\n from ../../../src/include/access/heaptoast.h:16,\r\n from execExprInterp.c:59:\r\nexecExprInterp.c: In function ‘ExecEvalJsonExprPath’:\r\n../../../src/include/nodes/nodes.h:133:29: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\r\n 133 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\r\n | ~^~~~~~~~~~~~~~~~~~~~~~~\r\n../../../src/include/nodes/nodes.h:158:31: note: in expansion of macro ‘nodeTag’\r\n 158 | #define IsA(nodeptr,_type_) (nodeTag(nodeptr) == T_##_type_)\r\n | ^~~~~~~\r\n../../../src/include/nodes/miscnodes.h:53:26: note: in expansion of macro ‘IsA’\r\n 53 | ((escontext) != NULL && IsA(escontext, ErrorSaveContext) && \\\r\n | ^~~\r\nexecExprInterp.c:4399:7: note: in expansion of macro ‘SOFT_ERROR_OCCURRED’\r\n 4399 | if (SOFT_ERROR_OCCURRED(&jsestate->escontext))\r\n | ^~~~~~~~~~~~~~~~~~~\r\nexecExprInterp.c: In function ‘ExecEvalJsonCoercionFinish’:\r\n../../../src/include/nodes/nodes.h:133:29: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\r\n 133 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\r\n | ~^~~~~~~~~~~~~~~~~~~~~~~\r\n../../../src/include/nodes/nodes.h:158:31: note: in expansion of macro ‘nodeTag’\r\n 158 | #define IsA(nodeptr,_type_) (nodeTag(nodeptr) == T_##_type_)\r\n | ^~~~~~~\r\n../../../src/include/nodes/miscnodes.h:53:26: note: in expansion of macro ‘IsA’\r\n 53 | ((escontext) != NULL && IsA(escontext, ErrorSaveContext) && \\\r\n | ^~~\r\nexecExprInterp.c:4556:6: note: in expansion of macro ‘SOFT_ERROR_OCCURRED’\r\n 4556 | if (SOFT_ERROR_OCCURRED(&jsestate->escontext))\r\n | ^~~~~~~~~~~~~~~~~~~\r\norigin.c: In function ‘StartupReplicationOrigin’:\r\norigin.c:773:16: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\r\n 773 | file_crc = *(pg_crc32c *) &disk_state;\r\n | ^~~~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nIn my understanding from the discussion [1], it would be better to fix\r\nour code to avoid the warning because it *might* point out that there\r\nis something wrong with our code. However the consensus at the time\r\nwas, we will not remove -fno-strict-aliasing option for now. It will\r\ntake long time before it would happen...\r\n\r\nSo I think the warnings in ExecEvalJsonExprPath are better fixed\r\nbecause these are the only places where IsA (nodeTag) macro are used\r\nand the warning is printed. Patch attached.\r\n\r\nI am not so sure about StartupReplicationOrigin. Should we fix it now?\r\nFor me the code looks sane as long as we keep -fno-strict-aliasing\r\noption. Or maybe better to fix so that someday we could remove the\r\ncompiler option?\r\n\r\n[1] https://www.postgresql.org/message-id/flat/366.1535731324%40sss.pgh.pa.us#bd93089182d13c79b74593ec70bac435\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Wed, 24 Jul 2024 15:55:25 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@postgresql.org>",
"msg_from_op": true,
"msg_subject": "warning: dereferencing type-punned pointer"
},
{
"msg_contents": "Tatsuo Ishii <ishii@postgresql.org> writes:\n> So I think the warnings in ExecEvalJsonExprPath are better fixed\n> because these are the only places where IsA (nodeTag) macro are used\n> and the warning is printed. Patch attached.\n\nI'm not very thrilled with these changes. It's not apparent why\nyour compiler is warning about these usages of IsA and not any other\nones, nor is it apparent why these changes suppress the warnings.\n(The code's not fundamentally different, so I'd think the underlying\nproblem is still there, if there really is one at all.)\nI'm afraid we'd be playing whack-a-mole to suppress similar warnings\non various compiler versions, with no end result other than making\nthe code uglier and less consistent.\n\nIf we can figure out why the warning is appearing, maybe it'd be\npossible to adjust the definition of IsA() to prevent it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 10:05:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "On 24.07.24 08:55, Tatsuo Ishii wrote:\n> origin.c: In function ‘StartupReplicationOrigin’:\n> origin.c:773:16: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\n> 773 | file_crc = *(pg_crc32c *) &disk_state;\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~\n\n> I am not so sure about StartupReplicationOrigin. Should we fix it now?\n> For me the code looks sane as long as we keep -fno-strict-aliasing\n> option. Or maybe better to fix so that someday we could remove the\n> compiler option?\n\nThis is basically the textbook example of aliasing violation, isn't it? \nWouldn't it be just as simple to do\n\nmemcpy(&file_crc, &disk_state, sizeof(file_crc));\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 19:41:13 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> This is basically the textbook example of aliasing violation, isn't it? \n> Wouldn't it be just as simple to do\n\n> memcpy(&file_crc, &disk_state, sizeof(file_crc));\n\n+1. Also, it seems thoroughly bizarre to me that this case is handled\nbefore checking for read failure. I'd move the stanza to after the\n\"if (readBytes < 0)\" one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 13:48:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "On 24.07.24 16:05, Tom Lane wrote:\n> I'm not very thrilled with these changes. It's not apparent why\n> your compiler is warning about these usages of IsA and not any other\n> ones,\n\nI think one difference is that normally IsA is called on a Node * (since \nyou call IsA to decide what to cast it to), but in this case it's called \non a pointer that is already of type ErrorSaveContext *. You wouldn't \nnormally need to call IsA on that, but it comes with the \nSOFT_ERROR_OCCURRED macro. Another difference is that most nodes are \ndynamically allocated but in this case the ErrorSaveContext object (not \na pointer to it) is part of another struct, and so I think some \ndifferent aliasing rules might apply, but I'm not sure.\n\nI think here you could just bypass the SOFT_ERROR_OCCURRED macro:\n\n- if (SOFT_ERROR_OCCURRED(&jsestate->escontext))\n+ if (jsestate->escontext.error_occurred)\n\n\n",
"msg_date": "Wed, 24 Jul 2024 19:53:47 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 24.07.24 16:05, Tom Lane wrote:\n>> I'm not very thrilled with these changes. It's not apparent why\n>> your compiler is warning about these usages of IsA and not any other\n>> ones,\n\n> I think one difference is that normally IsA is called on a Node * (since \n> you call IsA to decide what to cast it to), but in this case it's called \n> on a pointer that is already of type ErrorSaveContext *.\n\nHmm. But there are boatloads of places where we call IsA on a\npointer of type Expr *, or sometimes other things. Why aren't\nthose triggering the same warning?\n\n> I think here you could just bypass the SOFT_ERROR_OCCURRED macro:\n> - if (SOFT_ERROR_OCCURRED(&jsestate->escontext))\n> + if (jsestate->escontext.error_occurred)\n\nPerhaps. That's a bit sad because it's piercing a layer of\nabstraction. I do not like compiler warnings that can't be\ngotten rid of without making the code objectively worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 14:09:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "On 24.07.24 20:09, Tom Lane wrote:\n> Peter Eisentraut<peter@eisentraut.org> writes:\n>> On 24.07.24 16:05, Tom Lane wrote:\n>>> I'm not very thrilled with these changes. It's not apparent why\n>>> your compiler is warning about these usages of IsA and not any other\n>>> ones,\n>> I think one difference is that normally IsA is called on a Node * (since\n>> you call IsA to decide what to cast it to), but in this case it's called\n>> on a pointer that is already of type ErrorSaveContext *.\n> Hmm. But there are boatloads of places where we call IsA on a\n> pointer of type Expr *, or sometimes other things. Why aren't\n> those triggering the same warning?\n\nIt must have to do with the fact that the escontext field in \nJsonExprState has the object inline, not as a pointer. AIUI, with \ndynamically allocated objects you have more liberties about what type to \ninterpret them as than with actually declared objects.\n\nIf you change the member to a pointer\n\n- ErrorSaveContext escontext;\n+ ErrorSaveContext *escontext;\n } JsonExprState;\n\nand make the required adjustments elsewhere in the code, the warning \ngoes away.\n\nThis arrangement would also appear to be more consistent with other \nexecutor nodes (e.g., ExprState, ExprEvalStep), so it might be worth it \nfor consistency in any case.\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 20:26:55 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "BTW, I tried the same experiment of building without\n-fno-strict-aliasing using gcc 11.4.1 (from RHEL9).\nI see one more warning than Tatsuo-san did:\n\nIn file included from verify_heapam.c:18:\nverify_heapam.c: In function ‘check_tuple_attribute’:\n../../src/include/access/toast_internals.h:37:11: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\n 37 | (((toast_compress_header *) (ptr))->tcinfo >> VARLENA_EXTSIZE_BITS)\n | ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nverify_heapam.c:1693:24: note: in expansion of macro ‘TOAST_COMPRESS_METHOD’\n 1693 | cmid = TOAST_COMPRESS_METHOD(&toast_pointer);\n | ^~~~~~~~~~~~~~~~~~~~~\n\nThis looks a bit messy to fix: we surely don't want to pierce\nthe abstraction TOAST_COMPRESS_METHOD provides. Perhaps\nthe toast_pointer local variable could be turned into a union\nof struct varatt_external and toast_compress_header, but that\nwould impose a good deal of notational overhead on the rest\nof this function.\n\nThe good news is that we get through check-world (although\nI didn't try very many build options).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 14:29:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> If you change the member to a pointer\n\n> - ErrorSaveContext escontext;\n> + ErrorSaveContext *escontext;\n> } JsonExprState;\n\n> and make the required adjustments elsewhere in the code, the warning \n> goes away.\n\n> This arrangement would also appear to be more consistent with other \n> executor nodes (e.g., ExprState, ExprEvalStep), so it might be worth it \n> for consistency in any case.\n\n+1, makes sense to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Jul 2024 14:31:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "> On 24.07.24 20:09, Tom Lane wrote:\r\n>> Peter Eisentraut<peter@eisentraut.org> writes:\r\n>>> On 24.07.24 16:05, Tom Lane wrote:\r\n>>>> I'm not very thrilled with these changes. It's not apparent why\r\n>>>> your compiler is warning about these usages of IsA and not any other\r\n>>>> ones,\r\n>>> I think one difference is that normally IsA is called on a Node *\r\n>>> (since\r\n>>> you call IsA to decide what to cast it to), but in this case it's\r\n>>> called\r\n>>> on a pointer that is already of type ErrorSaveContext *.\r\n>> Hmm. But there are boatloads of places where we call IsA on a\r\n>> pointer of type Expr *, or sometimes other things. Why aren't\r\n>> those triggering the same warning?\r\n> \r\n> It must have to do with the fact that the escontext field in\r\n> JsonExprState has the object inline, not as a pointer. AIUI, with\r\n> dynamically allocated objects you have more liberties about what type\r\n> to interpret them as than with actually declared objects.\r\n\r\nI don't agree. I think the compiler just dislike that nodeTag macro's\r\nargument is a pointer created by '&' operator in this case:\r\n\r\n#define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\r\n\r\nIf we just give a pointer variable either it's type is Node * or\r\nErrorSaveContext * to nodeTag macro, the compiler becomes happy.\r\n\r\nMoreover I think whether the object is inline or not is\r\nirrelevant. Attached is a self contained test case. In the program:\r\n\r\n\tif (IsA(&f, List))\r\n\r\nproduces the strict aliasing rule violation but\r\n\r\n\tif (IsA(fp, List))\r\n\r\ndoes not. Here \"f\" is an object defined as:\r\n\r\ntypedef struct Foo\r\n{\r\n\tNodeTag\t\ttype;\r\n\tint\t\td;\r\n} Foo;\r\n\r\nFoo f;\r\n\r\nand fp is defined as:\r\n\r\n\tFoo\t*fp = &f;\r\n\r\n$ gcc -Wall -O2 -c strict2.c\r\nstrict2.c: In function ‘sub’:\r\nstrict2.c:1:29: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\r\n 1 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\r\n | ~^~~~~~~~~~~~~~~~~~~~~~~\r\nstrict2.c:2:31: note: in expansion of macro ‘nodeTag’\r\n 2 | #define IsA(nodeptr,_type_) (nodeTag(nodeptr) == T_##_type_)\r\n | ^~~~~~~\r\nstrict2.c:26:6: note: in expansion of macro ‘IsA’\r\n 26 | if (IsA(&f, List))\r\n | ^~~\r\nAt top level:\r\nstrict2.c:21:12: warning: ‘sub’ defined but not used [-Wunused-function]\r\n 21 | static int sub(void)\r\n | ^~~\r\n\r\n> If you change the member to a pointer\r\n> \r\n> - ErrorSaveContext escontext;\r\n> + ErrorSaveContext *escontext;\r\n> } JsonExprState;\r\n> \r\n> and make the required adjustments elsewhere in the code, the warning\r\n> goes away.\r\n\r\nI think this is not necessary. Just my patch in the upthread is enough.\r\n\r\nBest reagards,\r\n--\r\nTatsuo Ishii\r\nSRA OSS LLC\r\nEnglish: http://www.sraoss.co.jp/index_en/\r\nJapanese:http://www.sraoss.co.jp\r\n",
"msg_date": "Fri, 26 Jul 2024 16:11:32 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
},
{
"msg_contents": "Sorry, I forgot to attach the file...\r\n\r\n> I don't agree. I think the compiler just dislike that nodeTag macro's\r\n> argument is a pointer created by '&' operator in this case:\r\n> \r\n> #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\r\n> \r\n> If we just give a pointer variable either it's type is Node * or\r\n> ErrorSaveContext * to nodeTag macro, the compiler becomes happy.\r\n> \r\n> Moreover I think whether the object is inline or not is\r\n> irrelevant. Attached is a self contained test case. In the program:\r\n> \r\n> \tif (IsA(&f, List))\r\n> \r\n> produces the strict aliasing rule violation but\r\n> \r\n> \tif (IsA(fp, List))\r\n> \r\n> does not. Here \"f\" is an object defined as:\r\n> \r\n> typedef struct Foo\r\n> {\r\n> \tNodeTag\t\ttype;\r\n> \tint\t\td;\r\n> } Foo;\r\n> \r\n> Foo f;\r\n> \r\n> and fp is defined as:\r\n> \r\n> \tFoo\t*fp = &f;\r\n> \r\n> $ gcc -Wall -O2 -c strict2.c\r\n> strict2.c: In function ‘sub’:\r\n> strict2.c:1:29: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]\r\n> 1 | #define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\r\n> | ~^~~~~~~~~~~~~~~~~~~~~~~\r\n> strict2.c:2:31: note: in expansion of macro ‘nodeTag’\r\n> 2 | #define IsA(nodeptr,_type_) (nodeTag(nodeptr) == T_##_type_)\r\n> | ^~~~~~~\r\n> strict2.c:26:6: note: in expansion of macro ‘IsA’\r\n> 26 | if (IsA(&f, List))\r\n> | ^~~\r\n> At top level:\r\n> strict2.c:21:12: warning: ‘sub’ defined but not used [-Wunused-function]\r\n> 21 | static int sub(void)\r\n> | ^~~\r\n> \r\n>> If you change the member to a pointer\r\n>> \r\n>> - ErrorSaveContext escontext;\r\n>> + ErrorSaveContext *escontext;\r\n>> } JsonExprState;\r\n>> \r\n>> and make the required adjustments elsewhere in the code, the warning\r\n>> goes away.\r\n> \r\n> I think this is not necessary. Just my patch in the upthread is enough.\r\n> \r\n> Best reagards,\r\n> --\r\n> Tatsuo Ishii\r\n> SRA OSS LLC\r\n> English: http://www.sraoss.co.jp/index_en/\r\n> Japanese:http://www.sraoss.co.jp\r\n\n/*\n * Minimum definitions copied from PostgreSQL to make the\n * test self-contained.\n */\n#define nodeTag(nodeptr) (((const Node*)(nodeptr))->type)\n#define IsA(nodeptr,_type_) (nodeTag(nodeptr) == T_##_type_)\n\ntypedef enum NodeTag\n{\n\tT_Invalid = 0,\n\tT_List = 1\n} NodeTag;\n\ntypedef struct Node\n{\n\tNodeTag\t\ttype;\n} Node;\n\n/* Home brew node */\ntypedef struct Foo\n{\n\tNodeTag\t\ttype;\n\tint\t\td;\n} Foo;\n\nstatic int\tsub(void)\n{\n\tFoo\tf;\n\tFoo\t*fp = &f;\n\tf.type = T_List;\n\n\t/* strict aliasing rule error */\n\tif (IsA(&f, List))\n\t\treturn 1;\n\n\t/* This is ok */\n\tif (IsA(fp, List))\n\t\treturn 1;\n\treturn 0;\n}",
"msg_date": "Fri, 26 Jul 2024 16:49:10 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: warning: dereferencing type-punned pointer"
}
] |
[
{
"msg_contents": "Hello everyone,\n\nIn src/backend/utils/adt/formatting.c:1516, there is a get_th() function utilized to return ST/ND/RD/TH suffixes for simple numbers.\nUpon reviewing its behavior, it appears capable of receiving non-numeric inputs (this is verified by a check at formatting.c:1527).\n\nGiven that the function can accept non-numeric inputs,\nit is plausible that it could also receive an empty input,\nalthough a brief examination of its calls did not reveal any such instances.\n\nNevertheless, if the function were to receive an empty input of zero length,\na buffer underflow would occur when attempting to compute *(num + (len - 1)), as (len - 1) would result in a negative shift.\nTo mitigate this issue, I propose a patch incorporating the zero_length_character_string error code, as detailed in the attachment.\n\n-- \nBest regards,\nAlexander Kuznetsov",
"msg_date": "Wed, 24 Jul 2024 12:43:19 +0300",
"msg_from": "Alexander Kuznetsov <kuznetsovam@altlinux.org>",
"msg_from_op": true,
"msg_subject": "Detect buffer underflow in get_th()"
},
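The proposed patch is attached to the email above and is not reproduced in this archive. As a rough standalone illustration of the problem (not PostgreSQL code; every name below is invented for the example), the following program shows why get_th()-style logic has to reject empty input before reading *(num + (len - 1)). The actual proposal reportedly raises an error using the zero_length_character_string error code rather than returning NULL as done here.

```
/* Standalone sketch, not formatting.c: for an empty string,
 * num + (len - 1) would point one byte before the buffer. */
#include <stdio.h>
#include <string.h>

static const char *
ordinal_suffix(const char *num)
{
	size_t		len = strlen(num);

	/* the guard being proposed in this thread */
	if (len == 0)
		return NULL;

	/* numbers ending in 11, 12, 13 always take "th" */
	if (len >= 2 && num[len - 2] == '1')
		return "th";

	switch (num[len - 1])
	{
		case '1':
			return "st";
		case '2':
			return "nd";
		case '3':
			return "rd";
		default:
			return "th";
	}
}

int
main(void)
{
	printf("4%s 21%s 12%s\n",
		   ordinal_suffix("4"), ordinal_suffix("21"), ordinal_suffix("12"));
	printf("empty input: %s\n", ordinal_suffix("") ? "accepted" : "rejected");
	return 0;
}
```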
{
"msg_contents": "On 24.07.24 11:43, Alexander Kuznetsov wrote:\n> Hello everyone,\n> \n> In src/backend/utils/adt/formatting.c:1516, there is a get_th() function \n> utilized to return ST/ND/RD/TH suffixes for simple numbers.\n> Upon reviewing its behavior, it appears capable of receiving non-numeric \n> inputs (this is verified by a check at formatting.c:1527).\n> \n> Given that the function can accept non-numeric inputs,\n> it is plausible that it could also receive an empty input,\n> although a brief examination of its calls did not reveal any such \n> instances.\n> \n> Nevertheless, if the function were to receive an empty input of zero \n> length,\n> a buffer underflow would occur when attempting to compute *(num + (len - \n> 1)), as (len - 1) would result in a negative shift.\n> To mitigate this issue, I propose a patch incorporating the \n> zero_length_character_string error code, as detailed in the attachment.\n\nIf it can't happen in practice, maybe an assertion would be enough?\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 17:39:00 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Detect buffer underflow in get_th()"
},
{
"msg_contents": "\n24.07.2024 18:39, Peter Eisentraut wrote:\n> If it can't happen in practice, maybe an assertion would be enough?\n>\n\nIn practice, the function should not receive non-numeric strings either; however, since there is an exception in place for such cases, I thought it would be good to add a check for zero-length input in a similar manner.\n\nBut of course it's open for discussion and team decision whether this should be addressed as an assertion or handled differently.\n\n-- \nBest regards,\nAlexander Kuznetsov\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 18:53:53 +0300",
"msg_from": "Alexander Kuznetsov <kuznetsovam@altlinux.org>",
"msg_from_op": true,
"msg_subject": "Re: Detect buffer underflow in get_th()"
},
{
"msg_contents": "Hello,\n\nis there anything else we can help with or discuss in order to apply this fix?\n\n24.07.2024 18:53, Alexander Kuznetsov пишет:\n> \n> 24.07.2024 18:39, Peter Eisentraut wrote:\n>> If it can't happen in practice, maybe an assertion would be enough?\n>>\n> \n> In practice, the function should not receive non-numeric strings either; however, since there is an exception in place for such cases, I thought it would be good to add a check for zero-length input in a similar manner.\n> \n> But of course it's open for discussion and team decision whether this should be addressed as an assertion or handled differently.\n> \n\n-- \nBest regards,\nAlexander Kuznetsov\n\n\n",
"msg_date": "Tue, 24 Sep 2024 17:52:32 +0300",
"msg_from": "Alexander Kuznetsov <kuznetsovam@altlinux.org>",
"msg_from_op": true,
"msg_subject": "Re: Detect buffer underflow in get_th()"
}
] |
[
{
"msg_contents": "Hello\n\nA couple of days ago, PG buildfarm member cisticola started failing:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=cisticola&br=HEAD\n\nThe failures[1] are pretty mysterious:\n\nmake[3]: \\347\\246\\273\\345\\274\\200\\347\\233\\256\\345\\275\\225\\342\\200\\234/home/postgres/buildfarm/HEAD/pgsql.build/src/backend/utils\\342\\200\\235\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wshadow=compatible-local -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 access/brin/brin.o [...] libpq/be-secure-openssl.o [...] ../../src/timezone/localtime.o ../../src/timezone/pgtz.o ../../src/timezone/strftime.o jit/jit.o ../../src/common/libpgcommon_srv.a ../../src/port/libpgport_srv.a -L../../src/port -L../../src/common -Wl,--as-needed -Wl,-rpath,'/home/postgres/buildfarm/HEAD/inst/lib',--enable-new-dtags -Wl,--export-dynamic -lxslt -lxml2 -lssl -lcrypto -lgssapi_krb5 -lz -lpthread -lrt -ldl -lm -lldap -licui18n -licuuc -licudata -o postgres\n/usr/bin/ld: libpq/be-secure-openssl.o: in function `.L388':\nbe-secure-openssl.c:(.text+0x1f8c): undefined reference to `ERR_new'\n/usr/bin/ld: libpq/be-secure-openssl.o: in function `.L530':\nbe-secure-openssl.c:(.text+0x1fa4): undefined reference to `ERR_set_debug'\n/usr/bin/ld: libpq/be-secure-openssl.o: in function `.LVL614':\nbe-secure-openssl.c:(.text+0x1fb8): undefined reference to `ERR_set_error'\n/usr/bin/ld: libpq/be-secure-openssl.o: in function `.L539':\nbe-secure-openssl.c:(.text+0x2010): undefined reference to `ERR_new'\n/usr/bin/ld: libpq/be-secure-openssl.o: in function `.L417':\nbe-secure-openssl.c:(.text+0x20b0): undefined reference to `SSL_get1_peer_certificate'\ncollect2: error: ld returned 1 exit status\n\nPreviously, this was working fine.\n\nIn this run, openssl is \n checking for openssl... /usr/bin/openssl\n configure: using openssl: OpenSSL 1.1.1g FIPS 21 Apr 2020\n\nbut that's the same that was used in the last successful run[2], too.\n\nNot sure what else could be relevant.\n\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=cisticola&dt=2024-07-24%2010%3A20%3A37\n[2] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=cisticola&dt=2024-07-20%2022%3A20%3A38&stg=configure\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"\n\n\n",
"msg_date": "Wed, 24 Jul 2024 12:57:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "PG buildfarm member cisticola"
},
{
"msg_contents": "On 2024-Jul-24, Alvaro Herrera wrote:\n\n> be-secure-openssl.c:(.text+0x1f8c): undefined reference to `ERR_new'\n> be-secure-openssl.c:(.text+0x1fa4): undefined reference to `ERR_set_debug'\n> be-secure-openssl.c:(.text+0x1fb8): undefined reference to `ERR_set_error'\n> be-secure-openssl.c:(.text+0x2010): undefined reference to `ERR_new'\n> be-secure-openssl.c:(.text+0x20b0): undefined reference to `SSL_get1_peer_certificate'\n\n> In this run, openssl is \n> checking for openssl... /usr/bin/openssl\n> configure: using openssl: OpenSSL 1.1.1g FIPS 21 Apr 2020\n\nHmm, it appears that these symbols did not exist in 1.1.1g, and since\nneither of them is invoked directly by the Postgres code, maybe the\nreason for this is that the OpenSSL headers being used do not correspond\nto the library being linked.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 24 Jul 2024 13:09:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: PG buildfarm member cisticola"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 01:09:22PM +0200, Alvaro Herrera wrote:\n> Hmm, it appears that these symbols did not exist in 1.1.1g, and since\n> neither of them is invoked directly by the Postgres code, maybe the\n> reason for this is that the OpenSSL headers being used do not correspond\n> to the library being linked.\n\nI am as puzzled as you are. The version of OpenSSL detected by\n./configure is the same before and after the first failure. Well,\nthat's the output of `openssl` so perhaps we are just being lied here\nand that we are trying to link to a different version in the\nbackground. Impossible to say without more input from the machine\nowner.\n\nThe two commits between the last success and the first failure are:\n7e187a7386 Mon Jul 22 02:29:21 2024 UTC Fix unstable test in select_parallel.sql \n2d8ef5e24f Mon Jul 22 00:28:01 2024 UTC Add new error code for \"file name too long\"\n\nDiscarding the first one points to the second, still I don't see a\nrelation, which would come down to a conflict with the new\nERRCODE_FILE_NAME_TOO_LONG (?).\n--\nMichael",
"msg_date": "Thu, 25 Jul 2024 17:38:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG buildfarm member cisticola"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 8:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jul 24, 2024 at 01:09:22PM +0200, Alvaro Herrera wrote:\n> > Hmm, it appears that these symbols did not exist in 1.1.1g, and since\n> > neither of them is invoked directly by the Postgres code, maybe the\n> > reason for this is that the OpenSSL headers being used do not correspond\n> > to the library being linked.\n\nYeah, the OpenSSL 3.x header ssl.h contains:\n\n/* Deprecated in 3.0.0 */\n# ifndef OPENSSL_NO_DEPRECATED_3_0\n# define SSL_get_peer_certificate SSL_get1_peer_certificate\n# endif\n\nPostgreSQL uses the name on the left, but OpenSSL 1.x library does not\ncontain the symbol on the right with the 1 in it. Part of a strategy\nfor deprecating that API, but perhaps on that OS it is possible to\ninstall both OpenSSL versions, and something extra is needed for the\nheader search path and library search path to agree? I don't know\nwhat Loongnix is exactly, but it has the aura of a RHEL-derivative.\n\n\n",
"msg_date": "Fri, 2 Aug 2024 13:27:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG buildfarm member cisticola"
},
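A compile-only sketch of the suspected mismatch (the file and function names are invented for the example; the macro is the one quoted from the OpenSSL 3.x header above). Code compiled against 3.x headers but linked against a 1.1.1 library ends up referencing a symbol that the older library does not export, which matches the undefined references in the buildfarm log:

```
/* peer_cert_example.c -- build with: gcc -c peer_cert_example.c
 *
 * With OpenSSL 3.x headers, ssl.h rewrites the old name:
 *     #define SSL_get_peer_certificate SSL_get1_peer_certificate
 * so this object file references SSL_get1_peer_certificate(). Linking it
 * against a 1.1.1 libssl, which only exports the old symbol, then fails
 * with "undefined reference to `SSL_get1_peer_certificate'". */
#include <openssl/ssl.h>

X509 *
peer_cert_example(SSL *ssl)
{
	return SSL_get_peer_certificate(ssl);
}
```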
{
"msg_contents": "Sorry for the late reply. \nYes , it's the openssl , i accidentally install two version openssl the 3.x and 1.x on the same machine.\nI remove the 3.x openssl and it's ok now.\n----------\nBest regards,\nhuchangqi\n\n> -----原始邮件-----\n> 发件人: \"Thomas Munro\" <thomas.munro@gmail.com>\n> 发送时间:2024-08-02 09:27:53 (星期五)\n> 收件人: \"Michael Paquier\" <michael@paquier.xyz>\n> 抄送: \"Alvaro Herrera\" <alvherre@alvh.no-ip.org>, huchangqi@loongson.cn, pgsql-hackers@lists.postgresql.org\n> 主题: Re: PG buildfarm member cisticola\n> \n> On Thu, Jul 25, 2024 at 8:38 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Wed, Jul 24, 2024 at 01:09:22PM +0200, Alvaro Herrera wrote:\n> > > Hmm, it appears that these symbols did not exist in 1.1.1g, and since\n> > > neither of them is invoked directly by the Postgres code, maybe the\n> > > reason for this is that the OpenSSL headers being used do not correspond\n> > > to the library being linked.\n> \n> Yeah, the OpenSSL 3.x header ssl.h contains:\n> \n> /* Deprecated in 3.0.0 */\n> # ifndef OPENSSL_NO_DEPRECATED_3_0\n> # define SSL_get_peer_certificate SSL_get1_peer_certificate\n> # endif\n> \n> PostgreSQL uses the name on the left, but OpenSSL 1.x library does not\n> contain the symbol on the right with the 1 in it. Part of a strategy\n> for deprecating that API, but perhaps on that OS it is possible to\n> install both OpenSSL versions, and something extra is needed for the\n> header search path and library search path to agree? I don't know\n> what Loongnix is exactly, but it has the aura of a RHEL-derivative.\n> \n\r\n\r\n本邮件及其附件含有龙芯中科的商业秘密信息,仅限于发送给上面地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制或散发)本邮件及其附件中的信息。如果您错收本邮件,请您立即电话或邮件通知发件人并删除本邮件。 \r\nThis email and its attachments contain confidential information from Loongson Technology , which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this email in error, please notify the sender by phone or email immediately and delete it. \r\n\r\n\r\n",
"msg_date": "Tue, 13 Aug 2024 15:26:41 +0800 (GMT+08:00)",
"msg_from": "=?UTF-8?B?6IOh5bi46b2Q?= <huchangqi@loongson.cn>",
"msg_from_op": false,
"msg_subject": "Re: Re: PG buildfarm member cisticola"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nA recent lorikeet (a Cygwin animal) failure [1] revealed one more\nlong-standing (see also [2], [3], [4]) issue related to Cygwin:\n SELECT dblink_connect('dtest1', connection_parameters());\n- dblink_connect\n-----------------\n- OK\n-(1 row)\n-\n+ERROR: could not establish connection\n+DETAIL: could not connect to server: Connection refused\n\nwhere inst/logfile contains:\n2024-07-16 05:38:21.492 EDT [66963f67.7823:4] LOG: could not accept new connection: Software caused connection abort\n2024-07-16 05:38:21.492 EDT [66963f8c.79e5:170] pg_regress/dblink ERROR: could not establish connection\n2024-07-16 05:38:21.492 EDT [66963f8c.79e5:171] pg_regress/dblink DETAIL: could not connect to server: Connection refused\n Is the server running locally and accepting\n connections on Unix domain socket \"/home/andrew/bf/root/tmp/buildfarm-DK1yh4/.s.PGSQL.5838\"?\n\nI made a standalone reproducing script (assuming the dblink extension\ninstalled):\nnumclients=50\nfor ((i=1;i<=1000;i++)); do\necho \"iteration $i\"\n\nfor ((c=1;c<=numclients;c++)); do\ncat << 'EOF' | /usr/local/pgsql/bin/psql >/dev/null 2>&1 &\n\nSELECT 'dbname='|| current_database()||' port='||current_setting('port')\n AS connstr\n\\gset\n\nSELECT * FROM dblink('service=no_service', 'SELECT 1') AS t(i int);\n\nSELECT * FROM\ndblink(:'connstr', 'SELECT 1') AS t1(i int),\ndblink(:'connstr', 'SELECT 2') AS t2(i int),\ndblink(:'connstr', 'SELECT 3') AS t3(i int),\ndblink(:'connstr', 'SELECT 4') AS t4(i int),\ndblink(:'connstr', 'SELECT 5') AS t5(i int);\nEOF\ndone\nwait\n\ngrep -A1 \"Software caused connection abort\" server.log && break;\ndone\n\nwhich fails for me as below:\niteration 318\n2024-07-24 04:19:46.511 PDT [29062:6][postmaster][:0] LOG: could not accept new connection: Software caused connection \nabort\n2024-07-24 04:19:46.512 PDT [25312:8][client backend][36/1996:0] ERROR: could not establish connection\n\nThe important fact here is that this failure is not reproduced after\n7389aad63 (in v16), so it seems that it's somehow related to signal\nprocessing. Given that, I'm inclined to stop here, without digging deeper,\nat least until there are plans to backport that fix or something...\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-07-16%2009%3A18%3A31 (REL_13_STABLE)\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2022-07-21%2000%3A36%3A44 (REL_14_STABLE)\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2023-07-06%2009%3A19%3A36 (REL_12_STABLE)\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2022-02-12%2001%3A40%3A56 (REL_13_STABLE, \npostgres_fdw)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 24 Jul 2024 16:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Sporadic connection-setup-related test failures on Cygwin in v15-"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 1:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> The important fact here is that this failure is not reproduced after\n> 7389aad63 (in v16), so it seems that it's somehow related to signal\n> processing. Given that, I'm inclined to stop here, without digging deeper,\n> at least until there are plans to backport that fix or something...\n\n+1. I'm not planning to back-patch that work. Perhaps lorikeet\ncould stop testing releases < 16? They don't work and it's not our\nbug[1]. We decided not to drop Cygwin support[2], but I don't think\nwe're learning anything from investigating that noise in the\nknown-broken branches.\n\n[1] https://sourceware.org/legacy-ml/cygwin/2017-08/msg00048.html\n[2] https://www.postgresql.org/message-id/5e6797e9-bc26-ced7-6c9c-59bca415598b%40dunslane.net\n\n\n",
"msg_date": "Thu, 25 Jul 2024 08:58:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sporadic connection-setup-related test failures on Cygwin in v15-"
},
{
"msg_contents": "24.07.2024 23:58, Thomas Munro wrote:\n> +1. I'm not planning to back-patch that work. Perhaps lorikeet\n> could stop testing releases < 16? They don't work and it's not our\n> bug[1]. We decided not to drop Cygwin support[2], but I don't think\n> we're learning anything from investigating that noise in the\n> known-broken branches.\n\nYeah, it looks like lorikeet votes +[1] for your proposal.\n(I suppose it failed due to the same signal processing issue, just another\nway.)\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lorikeet&dt=2024-07-24%2008%3A54%3A07\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 25 Jul 2024 06:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sporadic connection-setup-related test failures on Cygwin in v15-"
},
{
"msg_contents": "On 2024-07-24 We 4:58 PM, Thomas Munro wrote:\n> On Thu, Jul 25, 2024 at 1:00 AM Alexander Lakhin<exclusion@gmail.com> wrote:\n>> The important fact here is that this failure is not reproduced after\n>> 7389aad63 (in v16), so it seems that it's somehow related to signal\n>> processing. Given that, I'm inclined to stop here, without digging deeper,\n>> at least until there are plans to backport that fix or something...\n> +1. I'm not planning to back-patch that work. Perhaps lorikeet\n> could stop testing releases < 16? They don't work and it's not our\n> bug[1]. We decided not to drop Cygwin support[2], but I don't think\n> we're learning anything from investigating that noise in the\n> known-broken branches.\n\n\nSure, it can. I've made that change.\n\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-24 We 4:58 PM, Thomas Munro\n wrote:\n\n\nOn Thu, Jul 25, 2024 at 1:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n\n\nThe important fact here is that this failure is not reproduced after\n7389aad63 (in v16), so it seems that it's somehow related to signal\nprocessing. Given that, I'm inclined to stop here, without digging deeper,\nat least until there are plans to backport that fix or something...\n\n\n\n+1. I'm not planning to back-patch that work. Perhaps lorikeet\ncould stop testing releases < 16? They don't work and it's not our\nbug[1]. We decided not to drop Cygwin support[2], but I don't think\nwe're learning anything from investigating that noise in the\nknown-broken branches.\n\n\n\nSure, it can. I've made that change. \n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 25 Jul 2024 12:25:49 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Sporadic connection-setup-related test failures on Cygwin in v15-"
},
{
"msg_contents": "25.07.2024 19:25, Andrew Dunstan wrote:\n>> +1. I'm not planning to back-patch that work. Perhaps lorikeet\n>> could stop testing releases < 16? They don't work and it's not our\n>> bug[1]. We decided not to drop Cygwin support[2], but I don't think\n>> we're learning anything from investigating that noise in the\n>> known-broken branches.\n>\n>\n> Sure, it can. I've made that change.\n>\n\nThank you, Andrew!\n\nI've moved those issues to the \"Fixed\" category.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 25 Jul 2024 20:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sporadic connection-setup-related test failures on Cygwin in v15-"
}
] |
[
{
"msg_contents": "I've noticed that after running `pg_upgrade` that my `pg_depend` table\ncontains unexpected dependencies for sequences. Before the upgrade\nfrom PostgreSQL 15.7:\n\n```\n% psql -d gitlabhq_production\npsql (16.3, server 15.7)\nType \"help\" for help.\n\ngitlabhq_production=# SELECT seq_pg_class.relname AS seq_name,\n dep_pg_class.relname AS table_name,\n pg_attribute.attname AS col_name,\n pg_depend.classid,\n classid_class.relname AS classid_relname,\n pg_depend.refclassid,\n refclassid_class.relname AS refclassid_relname\nFROM pg_class seq_pg_class\nINNER JOIN pg_depend ON seq_pg_class.oid = pg_depend.objid\nINNER JOIN pg_class dep_pg_class ON pg_depend.refobjid = dep_pg_class.oid\nINNER JOIN pg_attribute ON dep_pg_class.oid = pg_attribute.attrelid\n AND pg_depend.refobjsubid = pg_attribute.attnum\nINNER JOIN pg_class classid_class ON pg_depend.classid = classid_class.oid\nINNER JOIN pg_class refclassid_class ON pg_depend.refclassid =\nrefclassid_class.oid\nWHERE seq_pg_class.relkind = 'S'\n AND (dep_pg_class.relname = 'p_ci_builds' OR dep_pg_class.relname =\n'ci_builds');\n seq_name | table_name | col_name | classid | classid_relname\n| refclassid | refclassid_relname\n------------------+-------------+----------+---------+-----------------+------------+--------------------\n ci_builds_id_seq | p_ci_builds | id | 1259 | pg_class\n| 1259 | pg_class\n(1 row)\n```\n\nAfter the upgrade to PostgreSQL 16.3, I see these dependencies:\n\n```\n seq_name | table_name | col_name\n| classid | classid_relname | refclassid | refclassid_relname\n-----------------------------------------+-------------+--------------+---------+-----------------+------------+--------------------\n ci_builds_id_seq | p_ci_builds | id\n| 1259 | pg_class | 1259 | pg_class\n note_metadata_note_id_seq | ci_builds | stage_id\n| 2606 | pg_constraint | 1259 | pg_class\n note_metadata_note_id_seq | ci_builds | partition_id\n| 2606 | pg_constraint | 1259 | pg_class\n project_repository_storage_moves_id_seq | ci_builds | id\n| 2606 | pg_constraint | 1259 | pg_class\n project_repository_storage_moves_id_seq | ci_builds | partition_id\n| 2606 | pg_constraint | 1259 | pg_class\n x509_commit_signatures_id_seq | ci_builds | id\n| 2606 | pg_constraint | 1259 | pg_class\n x509_commit_signatures_id_seq | ci_builds | partition_id\n| 2606 | pg_constraint | 1259 | pg_class\n(7 rows)\n```\n\nWhat's odd is that the `pg_constraint` entries don't seem to be\ndeterministic: I often see different entries every time I run\n`pg_upgrade`.\n\nAre these entries expected to be there, or is this a bug?\n\nHere's what I did to reproduce. I use `asdf` to manage multiple\nversions, so I used the ASDF_POSTGRES_VERSION environment variable to\noverride which version to use:\n\n\n1. First, install both PostgreSQL 15.7 and 16.3 via `asdf` (e.g. `asdf\ninstall postgres 15.7 && asdf install postgres 16.3`). You may use any\ntwo major versions.\n\n2. Then run:\n\n```shell\nexport ASDF_POSTGRES_VERSION=15.7\ninitdb /tmp/data.15\ncurl -O https://gitlab.com/gitlab-org/gitlab/-/raw/16-11-stable-ee/db/structure.sql\npostgres -D /tmp/data.15\n```\n\n3. In another terminal, load this schema:\n\n```shell\npsql -d template1 -c 'create database gitlabhq_production'\npsql -d gitlabhq_production < structure.sql\n```\n\n4. 
Check the constraints that `ci_builds_id_seq` is the only entry:\n\n```sql\npsql -d gitlabhq_production\n<snip>\ngitlabhq_production=# SELECT seq_pg_class.relname AS seq_name,\ndep_pg_class.relname AS table_name, pg_attribute.attname AS col_name,\ndeptype\nFROM pg_class seq_pg_class\nINNER JOIN pg_depend ON seq_pg_class.oid = pg_depend.objid\nINNER JOIN pg_class dep_pg_class ON pg_depend.refobjid = dep_pg_class.oid\nINNER JOIN pg_attribute ON dep_pg_class.oid = pg_attribute.attrelid\nAND pg_depend.refobjsubid = pg_attribute.attnum\nWHERE seq_pg_class.relkind = 'S'\ndep_pg_class.relname = 'p_ci_builds';\n seq_name | table_name | col_name | deptype\n------------------+-------------+----------+---------\n ci_builds_id_seq | p_ci_builds | id | a\n(1 row)\n```\n\n5. Terminate `postgres` in the other window.\n6. Now let's upgrade to PostgreSQL 16 and run the database:\n\n```shell\nexport ASDF_POSTGRES_VERSION=16.3\ninitdb /tmp/data.16\npg_upgrade -b ~/.asdf/installs/postgres/15.7/bin -B\n~/.asdf/installs/postgres/16.3/bin -d /tmp/data.15 -D /tmp/data.16\npostgres -D /tmp/data.16\n```\n\n7. Now try the query and see the new entries:\n\n```sql\ngitlabhq_production=# SELECT seq_pg_class.relname AS seq_name,\ndep_pg_class.relname AS table_name, pg_attribute.attname AS col_name,\ndeptype\nFROM pg_class seq_pg_class\nINNER JOIN pg_depend ON seq_pg_class.oid = pg_depend.objid\nINNER JOIN pg_class dep_pg_class ON pg_depend.refobjid = dep_pg_class.oid\nINNER JOIN pg_attribute ON dep_pg_class.oid = pg_attribute.attrelid\nAND pg_depend.refobjsubid = pg_attribute.attnum\nWHERE seq_pg_class.relkind = 'S'\nAND (dep_pg_class.relname = 'ci_builds' OR dep_pg_class.relname =\n'p_ci_builds');\n seq_name | table_name |\ncol_name | deptype\n------------------------------------------------+-------------+-------------------+---------\n ci_builds_id_seq | p_ci_builds | id\n | a\n dast_profiles_tags_id_seq | p_ci_builds | id\n | a\n dast_profiles_tags_id_seq | p_ci_builds |\npartition_id | a\n merge_request_diff_commit_users_id_seq | p_ci_builds |\nresource_group_id | a\n ml_models_id_seq | ci_builds | id\n | n\n ml_models_id_seq | ci_builds |\npartition_id | n\n packages_debian_group_distribution_keys_id_seq | ci_builds | id\n | n\n packages_debian_group_distribution_keys_id_seq | ci_builds |\npartition_id | n\n pages_deployments_id_seq | ci_builds | id\n | n\n pages_deployments_id_seq | ci_builds |\npartition_id | n\n project_repositories_id_seq | p_ci_builds | id\n | n\n project_repositories_id_seq | p_ci_builds |\npartition_id | n\n user_custom_attributes_id_seq | ci_builds | id\n | n\n user_custom_attributes_id_seq | ci_builds |\npartition_id | n\n(14 rows)\n```\n\n\n",
"msg_date": "Wed, 24 Jul 2024 07:30:30 -0700",
"msg_from": "Stan Hu <stanhu@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade adds unexpected pg_constraint entries to pg_depend"
}
] |
[
{
"msg_contents": "Neon added a t_cid field to heap WAL records https://github.com/yibit/neon-postgresql/blob/main/docs/core_changes.md#add-t_cid-to-heap-wal-records.\n\nHowever, when replaying the delete log record, it is discarding the combo flag and storing the raw cmax on the old tuple https://github.com/neondatabase/neon/blob/main/pgxn/neon_rmgr/neon_rmgr.c#L376. This will make the tuple header different from what is in the buffer cache if the deleted tuple was using a combocid. Similarly, there was no t_cid added for the old tuple in xl_neon_heap_update, and it is using the t_cid of the new tuple to set cmax on the old tuple during redo_neon_heap_update.\n\nWhy is this not a problem when a visibility check is performed on the tuple after reading from storage, since it won't get the correct cmin value on the old tuple?\nAlso, what is the need of adding the t_cid of the new tuple in xl_neon_heap_update when it is already present in the xl_neon_heap_header? Seems like it is sending the same t_cid twice with the update WAL record.\nThanks,\nMuhammad\n\n\n\n\n\n\n\n\nNeon added a t_cid field to heap WAL records \nhttps://github.com/yibit/neon-postgresql/blob/main/docs/core_changes.md#add-t_cid-to-heap-wal-records.\n\n\n\n\nHowever, when replaying the delete log record, it is discarding the combo flag and storing the raw cmax on the old tuple\n\nhttps://github.com/neondatabase/neon/blob/main/pgxn/neon_rmgr/neon_rmgr.c#L376. This will make the tuple header different from what is in the buffer cache if the deleted tuple was using a combocid. Similarly, there was no t_cid added for the old tuple in\n xl_neon_heap_update, and it is using the t_cid of the new tuple to set cmax on the old tuple during redo_neon_heap_update.\n\n\n\n\nWhy is this not a problem when a visibility check is performed on the tuple after reading from storage, since it won't get the correct cmin value on the old tuple?\n\nAlso, what is the need of adding the t_cid of the new tuple in xl_neon_heap_update when it is already present in the xl_neon_heap_header? Seems like it is sending the same t_cid twice with the update WAL record.\n\nThanks,\n\nMuhammad",
"msg_date": "Wed, 24 Jul 2024 18:44:04 +0000",
"msg_from": "Muhammad Malik <muhammad.malik1@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Regarding t_cid in Neon heap WAL records"
},
{
"msg_contents": "On 24/07/2024 21:44, Muhammad Malik wrote:\n> Neon added a t_cid field to heap WAL records \n> https://github.com/yibit/neon-postgresql/blob/main/docs/core_changes.md#add-t_cid-to-heap-wal-records <https://github.com/yibit/neon-postgresql/blob/main/docs/core_changes.md#add-t_cid-to-heap-wal-records>.\n\nThis isn't really a pgsql-hackers topic, so I've opened an issue on the \nneon github issue tracker for this: \nhttps://github.com/neondatabase/neon/issues/8499. (Thanks for the report \nthough!)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 24 Jul 2024 22:27:14 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Regarding t_cid in Neon heap WAL records"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile preparing for my presentation on PostgreSQL Wait Events at\nPgConf India, I was trying to understand *IPC:XactGroupUpdate* in more\ndetail. PostgreSQL documentation [1] mentions:\n\n> A process is waiting for the group leader to update the transaction status at the end of a _parallel operation_.\n\nI looked at `TransactionGroupUpdateXidStatus` in PostgreSQL code (`clog.c`)\nLine `481` [2] sets this wait event.\n\nAnd after reading the code, my understanding is - It does not\nnecessarily need to be a \"_parallel operation_\". Or maybe I am just\nmisinterpreting \"parallel operation\" in this context. But it is\npossible for other users to confuse it with the parallel query (and\nparallel workers) feature.\n\n**My understanding is -**\n\nIn order to avoid `XactSLRULock` being passed between backends,\nbackends waiting for it will add themselves to the queue [3]. The\nfirst backend in the queue (also the leader) will be the only one to\nacquire `XactSLRULock` and update the XID status for all those pids\nwhich are in the queue. This IPC wait event (`XactGroupUpdate`) is\nobserved in other backened processes who are in the queue, waiting for\nthe group leader to update the XID status.\n\nWe can add more clarity on what this wait event means. A similar\nchange should be done for `ProcArrayGroupUpdate` to indicate that the\nwait event is a result of concurrent backend processes trying to clear\nthe transaction id (instead of saying \"parallel operation”).\n\nI am attaching a patch for consideration. This should also be\nbackpatched-through version 13.\n\n\n[1] https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-IPC-TABLE\n[2] https://github.com/postgres/postgres/blob/master/src/backend/access/transam/clog.c#L481\n[3] https://github.com/postgres/postgres/blob/master/src/backend/access/transam/clog.c#L399\n\n\nThanks,\nSameer\nDB Specialist,\nAmazon Web Services",
"msg_date": "Thu, 25 Jul 2024 11:13:39 +0800",
"msg_from": "SAMEER KUMAR <sameer.kasi200x@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
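For readers who have not looked at the group-update machinery, the waiting side of the pattern described above looks roughly like the fragment below. This is a simplified adaptation rather than a copy of clog.c (the real TransactionGroupUpdateXidStatus() also counts and re-absorbs extra semaphore wakeups, among other details), but it shows where a queued backend sits while pg_stat_activity reports the IPC / XactGroupUpdate wait event:

```
/* Simplified fragment: a non-leader backend has already queued its XID
 * and status in shared memory and now sleeps until the group leader has
 * written the CLOG entry on its behalf. */
pgstat_report_wait_start(WAIT_EVENT_XACT_GROUP_UPDATE);

for (;;)
{
	PGSemaphoreLock(MyProc->sem);	/* woken by the leader */
	if (!MyProc->clogGroupMember)
		break;						/* our status has been written */
}

pgstat_report_wait_end();
```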
{
"msg_contents": "On Thu, Jul 25, 2024 at 11:13:39AM +0800, SAMEER KUMAR wrote:\n> While preparing for my presentation on PostgreSQL Wait Events at\n> PgConf India, I was trying to understand *IPC:XactGroupUpdate* in more\n> detail. PostgreSQL documentation [1] mentions:\n> \n>> A process is waiting for the group leader to update the transaction status at the end of a _parallel operation_.\n> \n> I looked at `TransactionGroupUpdateXidStatus` in PostgreSQL code (`clog.c`)\n> Line `481` [2] sets this wait event.\n> \n> And after reading the code, my understanding is - It does not\n> necessarily need to be a \"_parallel operation_\". Or maybe I am just\n> misinterpreting \"parallel operation\" in this context. But it is\n> possible for other users to confuse it with the parallel query (and\n> parallel workers) feature.\n>\n> [...]\n> \n> We can add more clarity on what this wait event means. A similar\n> change should be done for `ProcArrayGroupUpdate` to indicate that the\n> wait event is a result of concurrent backend processes trying to clear\n> the transaction id (instead of saying \"parallel operation\").\n\nBoth of these wait events had descriptions similar to what you are\nproposing when they were first introduced (commits d4116a7 and baaf272),\nbut they were changed to the current wording by commit 3048898. I skimmed\nthrough the thread for the latter commit [0] but didn't see anything that\nexplained why it was changed.\n\n[0] https://postgr.es/m/21247.1589296570%40sss.pgh.pa.us\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 14 Aug 2024 09:29:35 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
{
"msg_contents": "Thanks for responding.\n\nOn Wed, Aug 14, 2024 at 10:29 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Thu, Jul 25, 2024 at 11:13:39AM +0800, SAMEER KUMAR wrote:\n> > While preparing for my presentation on PostgreSQL Wait Events at\n> > PgConf India, I was trying to understand *IPC:XactGroupUpdate* in more\n> > detail. PostgreSQL documentation [1] mentions:\n> >\n> >> A process is waiting for the group leader to update the transaction\n> status at the end of a _parallel operation_.\n> >\n> > I looked at `TransactionGroupUpdateXidStatus` in PostgreSQL code\n> (`clog.c`)\n> > Line `481` [2] sets this wait event.\n> >\n> > And after reading the code, my understanding is - It does not\n> > necessarily need to be a \"_parallel operation_\". Or maybe I am just\n> > misinterpreting \"parallel operation\" in this context. But it is\n> > possible for other users to confuse it with the parallel query (and\n> > parallel workers) feature.\n> >\n> > [...]\n> >\n> > We can add more clarity on what this wait event means. A similar\n> > change should be done for `ProcArrayGroupUpdate` to indicate that the\n> > wait event is a result of concurrent backend processes trying to clear\n> > the transaction id (instead of saying \"parallel operation\").\n>\n> Both of these wait events had descriptions similar to what you are\n> proposing when they were first introduced (commits d4116a7 and baaf272),\n> but they were changed to the current wording by commit 3048898. I skimmed\n> through the thread for the latter commit [0] but didn't see anything that\n> explained why it was changed.\n>\n\nYes, while reviewing the history of changes, I too noticed the same. The\ndocumentation of older versions (v12 [1]) still has old descriptions.\n\n\n\n>\n> [0] https://postgr.es/m/21247.1589296570%40sss.pgh.pa.us\n>\n> --\n> nathan\n>\n\n[1]\nhttps://www.postgresql.org/docs/12/monitoring-stats.html#WAIT-EVENT-TABLE\n\nThanks for responding.On Wed, Aug 14, 2024 at 10:29 PM Nathan Bossart <nathandbossart@gmail.com> wrote:On Thu, Jul 25, 2024 at 11:13:39AM +0800, SAMEER KUMAR wrote:\n> While preparing for my presentation on PostgreSQL Wait Events at\n> PgConf India, I was trying to understand *IPC:XactGroupUpdate* in more\n> detail. PostgreSQL documentation [1] mentions:\n> \n>> A process is waiting for the group leader to update the transaction status at the end of a _parallel operation_.\n> \n> I looked at `TransactionGroupUpdateXidStatus` in PostgreSQL code (`clog.c`)\n> Line `481` [2] sets this wait event.\n> \n> And after reading the code, my understanding is - It does not\n> necessarily need to be a \"_parallel operation_\". Or maybe I am just\n> misinterpreting \"parallel operation\" in this context. But it is\n> possible for other users to confuse it with the parallel query (and\n> parallel workers) feature.\n>\n> [...]\n> \n> We can add more clarity on what this wait event means. A similar\n> change should be done for `ProcArrayGroupUpdate` to indicate that the\n> wait event is a result of concurrent backend processes trying to clear\n> the transaction id (instead of saying \"parallel operation\").\n\nBoth of these wait events had descriptions similar to what you are\nproposing when they were first introduced (commits d4116a7 and baaf272),\nbut they were changed to the current wording by commit 3048898. I skimmed\nthrough the thread for the latter commit [0] but didn't see anything that\nexplained why it was changed.Yes, while reviewing the history of changes, I too noticed the same. 
The documentation of older versions (v12 [1]) still has old descriptions. \n\n[0] https://postgr.es/m/21247.1589296570%40sss.pgh.pa.us\n\n-- \nnathan[1] https://www.postgresql.org/docs/12/monitoring-stats.html#WAIT-EVENT-TABLE",
"msg_date": "Wed, 14 Aug 2024 22:38:49 +0800",
"msg_from": "SAMEER KUMAR <sameer.kasi200x@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
{
"msg_contents": "On Wed, Aug 14, 2024 at 10:38:49PM +0800, SAMEER KUMAR wrote:\n> Yes, while reviewing the history of changes, I too noticed the same. The\n> documentation of older versions (v12 [1]) still has old descriptions.\n\nAfter reading the related threads and code, I'm inclined to agree that this\nis a mistake, or at least that the current wording is likely to mislead\nfolks into thinking it has something to do with parallel query. I noticed\nthat your patch changed a few things in the description, but IMHO we should\nkeep the fix focused, i.e., just replace \"end of a parallel operation\" with\n\"transaction end.\" I've attached a modified version of the patch with this\nchange.\n\n-- \nnathan",
"msg_date": "Wed, 14 Aug 2024 15:00:15 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 4:00 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Wed, Aug 14, 2024 at 10:38:49PM +0800, SAMEER KUMAR wrote:\n> > Yes, while reviewing the history of changes, I too noticed the same. The\n> > documentation of older versions (v12 [1]) still has old descriptions.\n>\n> After reading the related threads and code, I'm inclined to agree that this\n> is a mistake, or at least that the current wording is likely to mislead\n> folks into thinking it has something to do with parallel query. I noticed\n> that your patch changed a few things in the description, but IMHO we should\n> keep the fix focused, i.e., just replace \"end of a parallel operation\" with\n> \"transaction end.\" I've attached a modified version of the patch with this\n> change.\n>\n\nThanks for the feedback Nathan.\n\nI think it is important to indicate that the group leader is responsible\nfor clearing the transaction ID/transaction status of other backends\n(including this one).\n\nIf you suggest that we keep it simple, I don't see any other issues with\nyour patch.\n\n\n>\n> --\n> nathan\n>\n\nOn Thu, Aug 15, 2024 at 4:00 AM Nathan Bossart <nathandbossart@gmail.com> wrote:On Wed, Aug 14, 2024 at 10:38:49PM +0800, SAMEER KUMAR wrote:\n> Yes, while reviewing the history of changes, I too noticed the same. The\n> documentation of older versions (v12 [1]) still has old descriptions.\n\nAfter reading the related threads and code, I'm inclined to agree that this\nis a mistake, or at least that the current wording is likely to mislead\nfolks into thinking it has something to do with parallel query. I noticed\nthat your patch changed a few things in the description, but IMHO we should\nkeep the fix focused, i.e., just replace \"end of a parallel operation\" with\n\"transaction end.\" I've attached a modified version of the patch with this\nchange.Thanks for the feedback Nathan.I think it is important to indicate that the group leader is responsible for clearing the transaction ID/transaction status of other backends (including this one). If you suggest that we keep it simple, I don't see any other issues with your patch. \n\n-- \nnathan",
"msg_date": "Thu, 15 Aug 2024 11:25:25 +0800",
"msg_from": "SAMEER KUMAR <sameer.kasi200x@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 11:25:25AM +0800, SAMEER KUMAR wrote:\n> I think it is important to indicate that the group leader is responsible\n> for clearing the transaction ID/transaction status of other backends\n> (including this one).\n\nYour proposal is\n\n\tWaiting for the group leader process to clear the transaction ID of\n\tthis backend at the end of a transaction.\n\n\tWaiting for the group leader process to update the transaction status\n\tfor this backend.\n\nMine is\n\n\tWaiting for the group leader to clear the transaction ID at transaction\n\tend.\n\n\tWaiting for the group leader to update transaction status at\n\ttransaction end.\n\nIMHO the latter doesn't convey substantially less information, and it fits\na little better with the terse style of the other wait events nearby. But\nI'll yield to majority opinion here.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 15 Aug 2024 09:41:05 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
{
"msg_contents": "On Thu, Aug 15, 2024 at 8:11 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Thu, Aug 15, 2024 at 11:25:25AM +0800, SAMEER KUMAR wrote:\n> > I think it is important to indicate that the group leader is responsible\n> > for clearing the transaction ID/transaction status of other backends\n> > (including this one).\n>\n> Your proposal is\n>\n> Waiting for the group leader process to clear the transaction ID of\n> this backend at the end of a transaction.\n>\n> Waiting for the group leader process to update the transaction status\n> for this backend.\n>\n> Mine is\n>\n> Waiting for the group leader to clear the transaction ID at transaction\n> end.\n>\n> Waiting for the group leader to update transaction status at\n> transaction end.\n>\n> IMHO the latter doesn't convey substantially less information, and it fits\n> a little better with the terse style of the other wait events nearby.\n>\n\n+1 for Nathan's version. It is quite close to the previous version,\nfor which we haven't heard any complaints since they were introduced.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 14:12:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 02:12:25PM +0530, Amit Kapila wrote:\n> +1 for Nathan's version. It is quite close to the previous version,\n> for which we haven't heard any complaints since they were introduced.\n\nCommitted, thanks.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 20 Aug 2024 13:58:13 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding clarification to description of IPC wait events\n XactGroupUpdate and ProcArrayGroupUpdate"
}
] |
[
{
"msg_contents": "Hello all,\n\nMessage Id:\nhttps://www.postgresql.org/message-id/flat/3808dc548e144c860fc3fe57de013809%40xs4all.nl#72629a188e99e14387230fccc8eef518\n\nActually I'm also facing this issue. Any solution regarding this? .Kindly\ngive me the explanation of why this error occurs.\n\nThanks and regards\nPradeep\n\nHello all,Message Id: https://www.postgresql.org/message-id/flat/3808dc548e144c860fc3fe57de013809%40xs4all.nl#72629a188e99e14387230fccc8eef518Actually I'm also facing this issue. Any solution regarding this? .Kindly give me the explanation of why this error occurs.Thanks and regardsPradeep",
"msg_date": "Thu, 25 Jul 2024 13:09:50 +0530",
"msg_from": "Pradeep Kumar <spradeepkumar29@gmail.com>",
"msg_from_op": true,
"msg_subject": "rare crash - FailedAssertion snapbuild.c Line: 580"
},
{
"msg_contents": "Hello all,\n\nAny update on this?\n\nThanks and regards\nPradeep\n\nOn Thu, Jul 25, 2024 at 1:09 PM Pradeep Kumar <spradeepkumar29@gmail.com>\nwrote:\n\n> Hello all,\n>\n> Message Id:\n> https://www.postgresql.org/message-id/flat/3808dc548e144c860fc3fe57de013809%40xs4all.nl#72629a188e99e14387230fccc8eef518\n>\n> Actually I'm also facing this issue. Any solution regarding this? .Kindly\n> give me the explanation of why this error occurs.\n>\n> Thanks and regards\n> Pradeep\n>\n\nHello all,Any update on this?Thanks and regardsPradeepOn Thu, Jul 25, 2024 at 1:09 PM Pradeep Kumar <spradeepkumar29@gmail.com> wrote:Hello all,Message Id: https://www.postgresql.org/message-id/flat/3808dc548e144c860fc3fe57de013809%40xs4all.nl#72629a188e99e14387230fccc8eef518Actually I'm also facing this issue. Any solution regarding this? .Kindly give me the explanation of why this error occurs.Thanks and regardsPradeep",
"msg_date": "Thu, 1 Aug 2024 15:29:20 +0530",
"msg_from": "Pradeep Kumar <spradeepkumar29@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: rare crash - FailedAssertion snapbuild.c Line: 580"
},
{
"msg_contents": "On Thu, Aug 1, 2024, at 6:59 AM, Pradeep Kumar wrote:\n> Any update on this?\n\nAre you using version 11? This version is obsolete and is not supported\nanymore. Consider a supported version [1].\n\nPer the following commit (in version 16), this Assert was replaced by\nelog.\n\ncommit: 240e0dbacd390a8465552e27c5af11f67d747adb\nauthor: Amit Kapila <akapila@postgresql.org>\ndate: Mon, 21 Nov 2022 08:54:43 +0530\ncommitter: Amit Kapila <akapila@postgresql.org>\ndate: Mon, 21 Nov 2022 08:54:43 +0530\nAdd additional checks while creating the initial decoding snapshot.\n\nAs per one of the CI reports, there is an assertion failure which\nindicates that we were trying to use an unenforced xmin horizon for\ndecoding snapshots. Though, we couldn't figure out the reason for\nassertion failure these checks would help us in finding the reason if the\nproblem happens again in the future.\n\nAuthor: Amit Kapila based on suggestions by Andres Freund\nReviewd by: Andres Freund\nDiscussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.com\n\nAccording to this discussion, there isn't a clue about the root cause.\nIf you have a test case, share it (mainly if you are observing it in\nversion 16+ that exposes some data which may be useful for analysis).\n\n\n[1] https://www.postgresql.org/support/versioning/\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Aug 1, 2024, at 6:59 AM, Pradeep Kumar wrote:Any update on this?Are you using version 11? This version is obsolete and is not supportedanymore. Consider a supported version [1].Per the following commit (in version 16), this Assert was replaced byelog.commit: 240e0dbacd390a8465552e27c5af11f67d747adbauthor: Amit Kapila <akapila@postgresql.org>date: Mon, 21 Nov 2022 08:54:43 +0530committer: Amit Kapila <akapila@postgresql.org>date: Mon, 21 Nov 2022 08:54:43 +0530Add additional checks while creating the initial decoding snapshot.As per one of the CI reports, there is an assertion failure whichindicates that we were trying to use an unenforced xmin horizon fordecoding snapshots. Though, we couldn't figure out the reason forassertion failure these checks would help us in finding the reason if theproblem happens again in the future.Author: Amit Kapila based on suggestions by Andres FreundReviewd by: Andres FreundDiscussion: https://postgr.es/m/CAA4eK1L8wYcyTPxNzPGkhuO52WBGoOZbT0A73Le=ZUWYAYmdfw@mail.gmail.comAccording to this discussion, there isn't a clue about the root cause.If you have a test case, share it (mainly if you are observing it inversion 16+ that exposes some data which may be useful for analysis).[1] https://www.postgresql.org/support/versioning/--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 01 Aug 2024 15:09:15 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: rare crash - FailedAssertion snapbuild.c Line: 580"
},
{
"msg_contents": "Hello Euler,\n\n01.08.2024 21:09, Euler Taveira wrote:\n> According to this discussion, there isn't a clue about the root cause.\n> If you have a test case, share it (mainly if you are observing it in\n> version 16+ that exposes some data which may be useful for analysis).\n>\n\nPlease take a look at [1], where I presented a reproducer for apparently\nthe same issue.\n\n[1] https://www.postgresql.org/message-id/b91cf8ef-b5af-5def-ff05-bd67336ef907%40gmail.com\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Euler,\n\n 01.08.2024 21:09, Euler Taveira wrote:\n\n\n\n\nAccording\n to this discussion, there isn't a clue about the root cause.\nIf you have a test case, share it (mainly if you are\n observing it in\n\nversion 16+ that exposes some data which may be useful for\n analysis).\n\n\n\n\n\n Please take a look at [1], where I presented a reproducer for\n apparently\n the same issue.\n\n [1]\nhttps://www.postgresql.org/message-id/b91cf8ef-b5af-5def-ff05-bd67336ef907%40gmail.com\n\n Best regards,\n Alexander",
"msg_date": "Fri, 2 Aug 2024 07:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rare crash - FailedAssertion snapbuild.c Line: 580"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen reviewing the code in logical/worker.c, I noticed that when applying a\ncross-partition update action, it scans the old partition twice.\nI am attaching the patch 0001 to remove this duplicate table scan.\n\nThe test shows that it brings noticeable improvement:\n\nSteps\n-----\nPub:\ncreate table tab (a int not null, b int);\nalter table tab replica identity full;\ninsert into tab select 1,generate_series(1, 1000000, 1);\n\nSub:\ncreate table tab (a int not null, b int) partition by range (b);\ncreate table tab_1 partition of tab for values from (minvalue) to (5000000);\ncreate table tab_2 partition of tab for values from (5000000) to (maxvalue);\nalter table tab replica identity full;\n\n\nTest query:\nupdate tab set b = 6000000 where b > 999900; -- UPDATE 100\n\nResults (The time spent by apply worker to apply the all the UPDATEs):\nBefore\t14s\nAfter\t7s\n-----\n\nApart from above, I found there are quite a few duplicate codes related to partition\nhandling(e.g. apply_handle_tuple_routing), so I tried to extract some\ncommon logic to simplify the codes. Please see 0002 for this refactoring.\n\nBest Regards,\nHou Zhijie",
"msg_date": "Thu, 25 Jul 2024 10:30:21 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "Hi!\n\n> When reviewing the code in logical/worker.c, I noticed that when applying a\n> cross-partition update action, it scans the old partition twice.\n\nNice catch!\n\n\n> -/*\n> - * Workhorse for apply_handle_update()\n> - * relinfo is for the relation we're actually updating in\n> - * (could be a child partition of edata->targetRelInfo)\n> - */\n> -static void\n> -apply_handle_update_internal(ApplyExecutionData *edata,\n> - ResultRelInfo *relinfo,\n> - TupleTableSlot *remoteslot,\n> - LogicalRepTupleData *newtup,\n> - Oid localindexoid)\n> -{\n\nWhat's the necessity of this change? Can we modify a function in-place\ninstead of removing and rewriting it in the same file?\nThis will reduce diff, making patch a bit more clear.\n\n> +/*\n> + * If the tuple to be modified could not be found, a log message is emitted.\n> + */\n> +static void\n> +report_tuple_not_found(CmdType cmd, Relation targetrel, bool is_partition)\n> +{\n> + Assert(cmd == CMD_UPDATE || cmd == CMD_DELETE);\n> +\n> + /* XXX should this be promoted to ereport(LOG) perhaps? */\n> + elog(DEBUG1,\n> + \"logical replication did not find row to be %s in replication target relation%s \\\"%s\\\"\",\n> + cmd == CMD_UPDATE ? \"updated\" : \"deleted\",\n> + is_partition ? \"'s partition\" : \"\",\n> + RelationGetRelationName(targetrel));\n> +}\n\nEncapsulating report logic inside function is ok, but double ternary\nexpression is a bit out of line. I do not see similar places within\nPostgreSQL,\nso it probably violates code style.\n\n\n",
"msg_date": "Thu, 25 Jul 2024 17:25:53 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 5:56 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n\n> > +/*\n> > + * If the tuple to be modified could not be found, a log message is emitted.\n> > + */\n> > +static void\n> > +report_tuple_not_found(CmdType cmd, Relation targetrel, bool is_partition)\n> > +{\n> > + Assert(cmd == CMD_UPDATE || cmd == CMD_DELETE);\n> > +\n> > + /* XXX should this be promoted to ereport(LOG) perhaps? */\n> > + elog(DEBUG1,\n> > + \"logical replication did not find row to be %s in replication target relation%s \\\"%s\\\"\",\n> > + cmd == CMD_UPDATE ? \"updated\" : \"deleted\",\n> > + is_partition ? \"'s partition\" : \"\",\n> > + RelationGetRelationName(targetrel));\n> > +}\n>\n> Encapsulating report logic inside function is ok, but double ternary\n> expression is a bit out of line. I do not see similar places within\n> PostgreSQL,\n> so it probably violates code style.\n>\n\nThey it's written, it would make it hard for the translations. See [1]\nwhich redirects to [2].\n\n[1] https://www.postgresql.org/docs/current/error-style-guide.html\n[2] https://www.postgresql.org/docs/current/nls-programmer.html#NLS-GUIDELINES\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 26 Jul 2024 14:24:16 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 4:00 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> When reviewing the code in logical/worker.c, I noticed that when applying a\n> cross-partition update action, it scans the old partition twice.\n> I am attaching the patch 0001 to remove this duplicate table scan.\n>\n> The test shows that it brings noticeable improvement:\n>\n> Steps\n> -----\n> Pub:\n> create table tab (a int not null, b int);\n> alter table tab replica identity full;\n> insert into tab select 1,generate_series(1, 1000000, 1);\n>\n> Sub:\n> create table tab (a int not null, b int) partition by range (b);\n> create table tab_1 partition of tab for values from (minvalue) to (5000000);\n> create table tab_2 partition of tab for values from (5000000) to (maxvalue);\n> alter table tab replica identity full;\n>\n>\n> Test query:\n> update tab set b = 6000000 where b > 999900; -- UPDATE 100\n>\n> Results (The time spent by apply worker to apply the all the UPDATEs):\n> Before 14s\n> After 7s\n> -----\n>\n\nThe idea sounds good to me. BTW, we don't need the following comment\nin the 0001 patch:\nWe\n+ * don't call apply_handle_delete_internal() here to avoid\n+ * repeating some work already done above to find the\n+ * local tuple in the partition.\n\nIt is implied by the change and we already follow the same for the update.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Jul 2024 16:16:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 5:56 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n>\n> > +/*\n> > + * If the tuple to be modified could not be found, a log message is emitted.\n> > + */\n> > +static void\n> > +report_tuple_not_found(CmdType cmd, Relation targetrel, bool is_partition)\n> > +{\n> > + Assert(cmd == CMD_UPDATE || cmd == CMD_DELETE);\n> > +\n> > + /* XXX should this be promoted to ereport(LOG) perhaps? */\n> > + elog(DEBUG1,\n> > + \"logical replication did not find row to be %s in replication target relation%s \\\"%s\\\"\",\n> > + cmd == CMD_UPDATE ? \"updated\" : \"deleted\",\n> > + is_partition ? \"'s partition\" : \"\",\n> > + RelationGetRelationName(targetrel));\n> > +}\n>\n> Encapsulating report logic inside function is ok,\n>\n\nThis could even be a separate patch as it is not directly to other\nparts of the 0002 patch. BTW, the problem statement for 0002 is not\nexplicitly stated like which part of the code we want to optimize by\nremoving duplication. Also, as proposed the name\napply_handle_tuple_routing() for the function doesn't seem suitable as\nit no longer remains similar to other apply_handle_* functions where\nwe perform the required operation like insert or update the tuple. How\nabout naming it as apply_tuple_routing()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Jul 2024 17:12:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "Dear Hou,\n\nThanks for creating a patch!\n\n> When reviewing the code in logical/worker.c, I noticed that when applying a\n> cross-partition update action, it scans the old partition twice.\n> I am attaching the patch 0001 to remove this duplicate table scan.\n\nJust to clarify, you meant that FindReplTupleInLocalRel() are called in\napply_handle_tuple_routing() and apply_handle_tuple_routing()->apply_handle_delete_internal(),\nwhich requires the index or sequential scan, right? LGTM.\n\n> Apart from above, I found there are quite a few duplicate codes related to partition\n> handling(e.g. apply_handle_tuple_routing), so I tried to extract some\n> common logic to simplify the codes. Please see 0002 for this refactoring.\n\nIIUC, you wanted to remove the application code from apply_handle_tuple_routing()\nand put only a part partition detection. Is it right? Anyway, here are comments.\n\n01. apply_handle_insert()\n\n```\n+ targetRelInfo = edata->targetRelInfo;\n+\n /* For a partitioned table, insert the tuple into a partition. */\n if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n- apply_handle_tuple_routing(edata,\n- remoteslot, NULL, CMD_INSERT);\n- else\n- apply_handle_insert_internal(edata, edata->targetRelInfo,\n- remoteslot);\n+ remoteslot = apply_handle_tuple_routing(edata, CMD_INSERT, remoteslot,\n+ &targetRelInfo);\n+\n+ /* For a partitioned table, insert the tuple into a partition. */\n+ apply_handle_insert_internal(edata, targetRelInfo, remoteslot);\n```\n\nThis part contains same comments, and no need to subsctitute in case of normal tables.\nHow about:\n\n```\n- /* For a partitioned table, insert the tuple into a partition. */\n+ /*\n+ * Find the actual target table if the table is partitioned. Otherwise, use\n+ * the same table as the remote one.\n+ */\n if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n- apply_handle_tuple_routing(edata,\n- remoteslot, NULL, CMD_INSERT);\n+ remoteslot = apply_handle_tuple_routing(edata, CMD_INSERT, remoteslot,\n+ &targetRelInfo);\n else\n- apply_handle_insert_internal(edata, edata->targetRelInfo,\n- remoteslot);\n+ targetRelInfo = edata->targetRelInfo;\n+\n+ /* Insert a tuple to the target table */\n+ apply_handle_insert_internal(edata, targetRelInfo, remoteslot);\n```\n\n02. apply_handle_tuple_routing()\n\n```\n /*\n- * This handles insert, update, delete on a partitioned table.\n+ * Determine the partition in which the tuple in slot is to be inserted, and\n...\n```\n\nBut this function is called from delete_internal(). How about \"Determine the\npartition to which the tuple in the slot belongs.\"?\n\n03. apply_handle_tuple_routing()\n\nDo you have a reason why this does not return `ResultRelInfo *` but `TupleTableSlot *`?\nNot sure, but it is more proper for me to return the table info because this is a\nrouting function. \n\n04. apply_handle_update()\n\n```\n+ targetRelInfo = edata->targetRelInfo;\n+ targetrel = rel;\n+ remoteslot_root = remoteslot;\n```\n\nHere I can say the same thing as 1.\n\n05. apply_handle_update_internal()\n\nIt looks the ordering of function's implementations are changed. Is it intentaional?\n\nbefore\n\napply_handle_update\napply_handle_update_internal\napply_handle_delete\napply_handle_delete_internal\nFindReplTupleInLocalRel\napply_handle_tuple_routing\n\nafter\n\napply_handle_update\napply_handle_delete\napply_handle_delete_internal\nFindReplTupleInLocalRel\napply_handle_tuple_routing\napply_handle_update_internal\n\n06. 
apply_handle_delete_internal()\n\n```\n+ targetRelInfo = edata->targetRelInfo;\n+ targetrel = rel;\n+\n```\n\nHere I can say the same thing as 1.\n\nBest regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 09:07:07 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "On Wednesday, July 31, 2024 5:07 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n> \n> Dear Hou,\n> \n> > When reviewing the code in logical/worker.c, I noticed that when\n> > applying a cross-partition update action, it scans the old partition twice.\n> > I am attaching the patch 0001 to remove this duplicate table scan.\n> \n> Just to clarify, you meant that FindReplTupleInLocalRel() are called in\n> apply_handle_tuple_routing() and\n> apply_handle_tuple_routing()->apply_handle_delete_internal(),\n> which requires the index or sequential scan, right? LGTM.\n\nThanks for reviewing the patch, and your understanding is correct.\n\nHere is the updated patch 0001. I removed the comments as suggested by Amit.\n\nSince 0002 patch is only refactoring the code and I need some time to review\nthe comments for it, I will hold it until the 0001 is committed.\n\nBest Regards,\nHou zj",
"msg_date": "Wed, 31 Jul 2024 11:21:23 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "Dear Hou,\n\n> Thanks for reviewing the patch, and your understanding is correct.\n> \n> Here is the updated patch 0001. I removed the comments as suggested by Amit.\n> \n> Since 0002 patch is only refactoring the code and I need some time to review\n> the comments for it, I will hold it until the 0001 is committed.\n\nThanks for updating the patch. I did a performance testing with v2-0001.\n\nBefore:\t15.553 [s]\nAfter:\t7.593 [s]\n\nI used the attached script for setting up. I used almost the same setting and synchronous\nreplication is used.\n\n[machine]\nCPU(s): 120\nModel name: Intel(R) Xeon(R) CPU E7-4890 v2 @ 2.80GHz\nCore(s) per socket: 15\nSocket(s): 4\n\nBest regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 1 Aug 2024 01:59:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Remove duplicate table scan in logical apply worker and code\n refactoring"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 7:29 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > Thanks for reviewing the patch, and your understanding is correct.\n> >\n> > Here is the updated patch 0001. I removed the comments as suggested by Amit.\n> >\n> > Since 0002 patch is only refactoring the code and I need some time to review\n> > the comments for it, I will hold it until the 0001 is committed.\n>\n> Thanks for updating the patch. I did a performance testing with v2-0001.\n>\n> Before: 15.553 [s]\n> After: 7.593 [s]\n>\n\nThanks for the testing. I have pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 1 Aug 2024 14:25:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove duplicate table scan in logical apply worker and code\n refactoring"
}
] |
[
{
"msg_contents": "Hi, I see that materialized view cannot be unlogged now, but when I use\npsql and type CREATE UNLOGGED, pressing the Tab key for auto-completion\nsuggests `TABLE` and MATERIALIZED VIEW.\nShouldn't `MATERIALIZED VIEW ` be suggested?\n\nHi, I see that materialized view cannot be unlogged now, but when I use psql and type CREATE UNLOGGED, pressing the Tab key for auto-completion suggests `TABLE` and MATERIALIZED VIEW. Shouldn't `MATERIALIZED VIEW ` be suggested?",
"msg_date": "Thu, 25 Jul 2024 19:48:01 +0800",
"msg_from": "px shi <spxlyy123@gmail.com>",
"msg_from_op": true,
"msg_subject": "CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "px shi <spxlyy123@gmail.com> writes:\n\n> Hi, I see that materialized view cannot be unlogged now, but when I use\n> psql and type CREATE UNLOGGED, pressing the Tab key for auto-completion\n> suggests `TABLE` and MATERIALIZED VIEW.\n> Shouldn't `MATERIALIZED VIEW ` be suggested?\n\nThat's my fault, I added it in commit c951e9042dd1, presumably because\nthe grammar allows it, but it turns transformCreateTableAsStmt() rejects\nit.\n\nAttached is a patch to fix it, which sholud be backpatched to v17.\n\n- ilmari",
"msg_date": "Thu, 25 Jul 2024 14:56:07 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> px shi <spxlyy123@gmail.com> writes:\n>\n>> Hi, I see that materialized view cannot be unlogged now, but when I use\n>> psql and type CREATE UNLOGGED, pressing the Tab key for auto-completion\n>> suggests `TABLE` and MATERIALIZED VIEW.\n>> Shouldn't `MATERIALIZED VIEW ` be suggested?\n>\n> That's my fault, I added it in commit c951e9042dd1, presumably because\n> the grammar allows it, but it turns transformCreateTableAsStmt() rejects\n> it.\n\nScratch that, I misread the diff. The tab completion has been there\nsince matviews were added in commit 3bf3ab8c5636, but the restriction on\nunlogged matviews was added later in commit 3223b25ff73, which failed to\nupdate the tab completion code to match.\n\nHere's a updated patch with a corrected commit message.\n\n- ilmari",
"msg_date": "Thu, 25 Jul 2024 15:09:09 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 03:09:09PM +0100, Dagfinn Ilmari Manns�ker wrote:\n> Scratch that, I misread the diff. The tab completion has been there\n> since matviews were added in commit 3bf3ab8c5636, but the restriction on\n> unlogged matviews was added later in commit 3223b25ff73, which failed to\n> update the tab completion code to match.\n\nAs noted a few years ago [0], the commit message for 3223b25ff73 indicates\nthis was intentional:\n\n\tI left the grammar and tab-completion support for CREATE UNLOGGED\n\tMATERIALIZED VIEW in place, since it's harmless and allows delivering a\n\tmore specific error message about the unsupported feature.\n\nHowever, since it looks like the feature was never actually supported in a\nrelease, and the revert has been in place for over a decade, I think it'd\nbe reasonable to remove the tab completion now. It looks like the folks on\nthe 2021 thread felt similarly.\n\n[0] https://postgr.es/m/flat/ZR0P278MB092093E92263DE16734208A5D2C59%40ZR0P278MB0920.CHEP278.PROD.OUTLOOK.COM\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 25 Jul 2024 09:38:30 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n\n> On Thu, Jul 25, 2024 at 03:09:09PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Scratch that, I misread the diff. The tab completion has been there\n>> since matviews were added in commit 3bf3ab8c5636, but the restriction on\n>> unlogged matviews was added later in commit 3223b25ff73, which failed to\n>> update the tab completion code to match.\n>\n> As noted a few years ago [0], the commit message for 3223b25ff73 indicates\n> this was intentional:\n>\n> \tI left the grammar and tab-completion support for CREATE UNLOGGED\n> \tMATERIALIZED VIEW in place, since it's harmless and allows delivering a\n> \tmore specific error message about the unsupported feature.\n\nD'oh, I'm clearly struggling with reading things properly today.\n\n> However, since it looks like the feature was never actually supported in a\n> release, and the revert has been in place for over a decade, I think it'd\n> be reasonable to remove the tab completion now. It looks like the folks on\n> the 2021 thread felt similarly.\n\nKeeping it in the grammar makes sense for the more specific error\nmessage, but I don't think the tab completion should be suggesting bogus\nthings, regardless of whether it's the grammar or the parse analysis\nthat rejects it.\n\n> [0] https://postgr.es/m/flat/ZR0P278MB092093E92263DE16734208A5D2C59%40ZR0P278MB0920.CHEP278.PROD.OUTLOOK.COM\n\n- ilmari\n\n\n",
"msg_date": "Thu, 25 Jul 2024 15:49:02 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 03:49:02PM +0100, Dagfinn Ilmari Manns�ker wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> However, since it looks like the feature was never actually supported in a\n>> release, and the revert has been in place for over a decade, I think it'd\n>> be reasonable to remove the tab completion now. It looks like the folks on\n>> the 2021 thread felt similarly.\n> \n> Keeping it in the grammar makes sense for the more specific error\n> message, but I don't think the tab completion should be suggesting bogus\n> things, regardless of whether it's the grammar or the parse analysis\n> that rejects it.\n\nWould you mind creating a commitfest entry for this one? I'll plan on\ncommitting this early next week unless any objections materialize.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 25 Jul 2024 09:54:01 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n\n> On Thu, Jul 25, 2024 at 03:49:02PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Nathan Bossart <nathandbossart@gmail.com> writes:\n>>> However, since it looks like the feature was never actually supported in a\n>>> release, and the revert has been in place for over a decade, I think it'd\n>>> be reasonable to remove the tab completion now. It looks like the folks on\n>>> the 2021 thread felt similarly.\n>> \n>> Keeping it in the grammar makes sense for the more specific error\n>> message, but I don't think the tab completion should be suggesting bogus\n>> things, regardless of whether it's the grammar or the parse analysis\n>> that rejects it.\n>\n> Would you mind creating a commitfest entry for this one? I'll plan on\n> committing this early next week unless any objections materialize.\n\nDone: https://commitfest.postgresql.org/49/5139/\n\nI've taken the liberty of setting you as the committer, and the target\nversion to 17 even though it turns out to be an older bug, since it's\narguably a follow-on fix to the incomplete fix in c951e9042dd1.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 25 Jul 2024 16:10:37 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, Jul 25, 2024 at 04:10:37PM +0100, Dagfinn Ilmari Manns�ker wrote:\n> Done: https://commitfest.postgresql.org/49/5139/\n> \n> I've taken the liberty of setting you as the committer, and the target\n> version to 17 even though it turns out to be an older bug, since it's\n> arguably a follow-on fix to the incomplete fix in c951e9042dd1.\n\nThanks. I'm -1 on back-patching since it was intentionally left around and\nis basically harmless.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 25 Jul 2024 10:20:54 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "Committed.\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 29 Jul 2024 11:38:42 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n\n> Committed.\n\nThanks!\n\n- ilmari\n\n\n",
"msg_date": "Mon, 29 Jul 2024 17:53:47 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: CREATE MATERIALIZED VIEW"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease take a look at today's failure [1], that raises several questions\nat once:\n117/244 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade TIMEOUT 3001.48s exit status 1\n180/244 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage TIMEOUT 3001.43s exit status 1\n\nOk: 227\nExpected Fail: 0\nFail: 0\nUnexpected Pass: 0\nSkipped: 15\nTimeout: 2\n\nBut looking at the previous test run [2], marked 'OK', I can see almost\nthe same:\n115/244 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade TIMEOUT 3001.54s exit status 1\n176/244 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage TIMEOUT 3001.26s exit status 1\n\nOk: 227\nExpected Fail: 0\nFail: 0\nUnexpected Pass: 0\nSkipped: 15\nTimeout: 2\n\nSo it's not clear to me, why is the latter test run considered failed\nunlike the former?\n\nAs to the 005_opclass_damage failure itself, we can find successful test\nruns with duration close to 3000s:\n[3] 212/242 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage OK 2894.75s 10 subtests passed\n[4] 212/242 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage OK 2941.73s 10 subtests passed\n\n(The average duration across 35 successful runs in June was around 1300s;\nbut in July only 5 test runs were successful.)\n\nThe other question is: why is 005_opclass_damage taking so much time there?\nLooking at the last three runs in REL_17_STABLE, I see:\n 87/243 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage OK 22.80s 10 subtests passed\n 87/243 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage OK 19.60s 10 subtests passed\n 87/243 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage OK 6.09s 10 subtests passed\n\nI suppose, the most significant factor here is\n'HEAD' => [\n 'debug_parallel_query = regress'\n]\nin the buildfarm client's config.\n\nIndeed, it affects timing of the test very much, judging by what I'm\nseeing locally, on Linux (I guess, process-related overhead might be\nhigher on Windows):\n$ make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n[11:11:53] t/005_opclass_damage.pl .. ok 1370 ms ( 0.00 usr 0.00 sys + 0.10 cusr 0.07 csys = 0.17 CPU)\n\n$ echo \"debug_parallel_query = regress\" >/tmp/extra.config\n$ TEMP_CONFIG=/tmp/extra.config make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n[11:12:46] t/005_opclass_damage.pl .. ok 40854 ms ( 0.00 usr 0.00 sys + 0.10 cusr 0.10 csys = 0.20 CPU)\n\n(Thus we can see 30x duration increase.)\n\nIt's worth to note that such increase is rather new (introduced by\n5ae208720); in REL_16_STABLE I'm seeing:\n$ make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n[11:18:52] t/005_opclass_damage.pl .. ok 1453 ms ( 0.00 usr 0.00 sys + 0.82 cusr 0.11 csys = 0.93 CPU)\n\n$ TEMP_CONFIG=/tmp/extra.config make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n[11:19:18] t/005_opclass_damage.pl .. 
ok 8032 ms ( 0.00 usr 0.00 sys + 0.82 cusr 0.11 csys = 0.93 CPU)\n\nSo maybe at least this test should be improved for testing with\ndebug_parallel_query enabled, if such active use of parallel workers by\npg_amcheck can't be an issue to end users?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-25%2003%3A23%3A19\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-07-23%2023%3A01%3A55\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-05-02%2019%3A03%3A08\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-04-30%2014%3A03%3A06\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 25 Jul 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "fairywren timeout failures on the pg_amcheck/005_opclass_damage test"
},
{
"msg_contents": "\nOn 2024-07-25 Th 8:00 AM, Alexander Lakhin wrote:\n> Hello hackers,\n>\n> Please take a look at today's failure [1], that raises several questions\n> at once:\n> 117/244 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade \n> TIMEOUT 3001.48s exit status 1\n> 180/244 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage \n> TIMEOUT 3001.43s exit status 1\n>\n> Ok: 227\n> Expected Fail: 0\n> Fail: 0\n> Unexpected Pass: 0\n> Skipped: 15\n> Timeout: 2\n>\n> But looking at the previous test run [2], marked 'OK', I can see almost\n> the same:\n> 115/244 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade \n> TIMEOUT 3001.54s exit status 1\n> 176/244 postgresql:pg_amcheck / pg_amcheck/005_opclass_damage \n> TIMEOUT 3001.26s exit status 1\n>\n> Ok: 227\n> Expected Fail: 0\n> Fail: 0\n> Unexpected Pass: 0\n> Skipped: 15\n> Timeout: 2\n>\n> So it's not clear to me, why is the latter test run considered failed\n> unlike the former?\n\n\nYesterday I changed the way we detect failure in the buildfarm client, \nand pushed that to fairywren (and a few more of my animals). Previously \nit did not consider a timeout to be a failure, but now it does. I'm \ngoing to release that soon.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Jul 2024 08:12:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren timeout failures on the pg_amcheck/005_opclass_damage\n test"
},
{
"msg_contents": "25.07.2024 15:00, Alexander Lakhin wrote:\n\n>\n> The other question is: why is 005_opclass_damage taking so much time there?\n> ...\n> $ make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n> [11:11:53] t/005_opclass_damage.pl .. ok 1370 ms ( 0.00 usr 0.00 sys + 0.10 cusr 0.07 csys = 0.17 CPU)\n>\n> $ echo \"debug_parallel_query = regress\" >/tmp/extra.config\n> $ TEMP_CONFIG=/tmp/extra.config make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n> [11:12:46] t/005_opclass_damage.pl .. ok 40854 ms ( 0.00 usr 0.00 sys + 0.10 cusr 0.10 csys = 0.20 CPU)\n>\n> ...\n> So maybe at least this test should be improved for testing with\n> debug_parallel_query enabled, if such active use of parallel workers by\n> pg_amcheck can't be an issue to end users?\n>\n\nWhen running this test with \"log_min_messages = DEBUG2\" in my extra.config,\nI see thousands of the following messages in the test log:\n2024-07-26 09:32:54.544 UTC [2572189:46] DEBUG: postmaster received pmsignal signal\n2024-07-26 09:32:54.544 UTC [2572189:47] DEBUG: registering background worker \"parallel worker for PID 2572197\"\n2024-07-26 09:32:54.544 UTC [2572189:48] DEBUG: starting background worker process \"parallel worker for PID 2572197\"\n2024-07-26 09:32:54.547 UTC [2572189:49] DEBUG: unregistering background worker \"parallel worker for PID 2572197\"\n2024-07-26 09:32:54.547 UTC [2572189:50] DEBUG: background worker \"parallel worker\" (PID 2572205) exited with exit code 0\n2024-07-26 09:32:54.547 UTC [2572189:51] DEBUG: postmaster received pmsignal signal\n2024-07-26 09:32:54.547 UTC [2572189:52] DEBUG: registering background worker \"parallel worker for PID 2572197\"\n2024-07-26 09:32:54.547 UTC [2572189:53] DEBUG: starting background worker process \"parallel worker for PID 2572197\"\n2024-07-26 09:32:54.549 UTC [2572189:54] DEBUG: unregistering background worker \"parallel worker for PID 2572197\"\n2024-07-26 09:32:54.549 UTC [2572189:55] DEBUG: background worker \"parallel worker\" (PID 2572206) exited with exit code 0\n...\n\ngrep ' registering background worker' \\\n src/bin/pg_amcheck/tmp_check/log/005_opclass_damage_test.log | wc -l\n15669\n\nSo this test launches more than 15000 processes when debug_parallel_query\nis enabled.\n\nAs far as I can see, this is happening because of the \"PARALLEL UNSAFE\"\nmarking is ignored when the function is called by CREATE INDEX/amcheck.\n\nNamely, with a function defined as:\n CREATE FUNCTION int4_asc_cmp (a int4, b int4) RETURNS int LANGUAGE sql AS $$\n SELECT CASE WHEN $1 = $2 THEN 0 WHEN $1 > $2 THEN 1 ELSE -1 END; $$;\n\nSELECT int4_asc_cmp(1, 2);\nexecuted without parallel workers. Whilst when it's used by an index:\nCREATE OPERATOR CLASS int4_fickle_ops FOR TYPE int4 USING btree AS\n...\nOPERATOR 5 > (int4, int4), FUNCTION 1 int4_asc_cmp(int4, int4);\n\nINSERT INTO int4tbl (SELECT * FROM generate_series(1,1000) gs);\n\nCREATE INDEX fickleidx ON int4tbl USING btree (i int4_fickle_ops);\nlaunches 1000 parallel workers.\n\n(This is reminiscent of bug #18314.)\n\nOne way to workaround this is to disable debug_parallel_query in the test\nand another I find possible is to set max_parallel_workers = 0.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 26 Jul 2024 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren timeout failures on the pg_amcheck/005_opclass_damage\n test"
},
{
"msg_contents": "\nOn 2024-07-26 Fr 7:00 AM, Alexander Lakhin wrote:\n> 25.07.2024 15:00, Alexander Lakhin wrote:\n>\n>>\n>> The other question is: why is 005_opclass_damage taking so much time \n>> there?\n>> ...\n>> $ make -s check -C src/bin/pg_amcheck/ PROVE_TESTS=\"t/005*\" \n>> PROVE_FLAGS=\"--timer\"\n>> [11:11:53] t/005_opclass_damage.pl .. ok 1370 ms ( 0.00 usr 0.00 \n>> sys + 0.10 cusr 0.07 csys = 0.17 CPU)\n>>\n>> $ echo \"debug_parallel_query = regress\" >/tmp/extra.config\n>> $ TEMP_CONFIG=/tmp/extra.config make -s check -C src/bin/pg_amcheck/ \n>> PROVE_TESTS=\"t/005*\" PROVE_FLAGS=\"--timer\"\n>> [11:12:46] t/005_opclass_damage.pl .. ok 40854 ms ( 0.00 usr 0.00 \n>> sys + 0.10 cusr 0.10 csys = 0.20 CPU)\n>>\n>> ...\n>> So maybe at least this test should be improved for testing with\n>> debug_parallel_query enabled, if such active use of parallel workers by\n>> pg_amcheck can't be an issue to end users?\n>>\n>\n> When running this test with \"log_min_messages = DEBUG2\" in my \n> extra.config,\n> I see thousands of the following messages in the test log:\n> 2024-07-26 09:32:54.544 UTC [2572189:46] DEBUG: postmaster received \n> pmsignal signal\n> 2024-07-26 09:32:54.544 UTC [2572189:47] DEBUG: registering \n> background worker \"parallel worker for PID 2572197\"\n> 2024-07-26 09:32:54.544 UTC [2572189:48] DEBUG: starting background \n> worker process \"parallel worker for PID 2572197\"\n> 2024-07-26 09:32:54.547 UTC [2572189:49] DEBUG: unregistering \n> background worker \"parallel worker for PID 2572197\"\n> 2024-07-26 09:32:54.547 UTC [2572189:50] DEBUG: background worker \n> \"parallel worker\" (PID 2572205) exited with exit code 0\n> 2024-07-26 09:32:54.547 UTC [2572189:51] DEBUG: postmaster received \n> pmsignal signal\n> 2024-07-26 09:32:54.547 UTC [2572189:52] DEBUG: registering \n> background worker \"parallel worker for PID 2572197\"\n> 2024-07-26 09:32:54.547 UTC [2572189:53] DEBUG: starting background \n> worker process \"parallel worker for PID 2572197\"\n> 2024-07-26 09:32:54.549 UTC [2572189:54] DEBUG: unregistering \n> background worker \"parallel worker for PID 2572197\"\n> 2024-07-26 09:32:54.549 UTC [2572189:55] DEBUG: background worker \n> \"parallel worker\" (PID 2572206) exited with exit code 0\n> ...\n>\n> grep ' registering background worker' \\\n> src/bin/pg_amcheck/tmp_check/log/005_opclass_damage_test.log | wc -l\n> 15669\n>\n> So this test launches more than 15000 processes when debug_parallel_query\n> is enabled.\n>\n> As far as I can see, this is happening because of the \"PARALLEL UNSAFE\"\n> marking is ignored when the function is called by CREATE INDEX/amcheck.\n>\n> Namely, with a function defined as:\n> CREATE FUNCTION int4_asc_cmp (a int4, b int4) RETURNS int LANGUAGE \n> sql AS $$\n> SELECT CASE WHEN $1 = $2 THEN 0 WHEN $1 > $2 THEN 1 ELSE -1 \n> END; $$;\n>\n> SELECT int4_asc_cmp(1, 2);\n> executed without parallel workers. 
Whilst when it's used by an index:\n> CREATE OPERATOR CLASS int4_fickle_ops FOR TYPE int4 USING btree AS\n> ...\n> OPERATOR 5 > (int4, int4), FUNCTION 1 int4_asc_cmp(int4, int4);\n>\n> INSERT INTO int4tbl (SELECT * FROM generate_series(1,1000) gs);\n>\n> CREATE INDEX fickleidx ON int4tbl USING btree (i int4_fickle_ops);\n> launches 1000 parallel workers.\n>\n> (This is reminiscent of bug #18314.)\n>\n> One way to workaround this is to disable debug_parallel_query in the test\n> and another I find possible is to set max_parallel_workers = 0.\n>\n>\n\nBut wouldn't either of those just be masking the problem?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Jul 2024 08:41:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fairywren timeout failures on the pg_amcheck/005_opclass_damage\n test"
},
{
"msg_contents": "26.07.2024 15:41, Andrew Dunstan wrote:\n>\n>>\n>> One way to workaround this is to disable debug_parallel_query in the test\n>> and another I find possible is to set max_parallel_workers = 0.\n>>\n>>\n>\n> But wouldn't either of those just be masking the problem?\n>\n\nYes, I'm inclined to consider this behavior a problem (what if the table\ncontained 1M rows?), that's why I called those solutions workarounds.\n\nOf course, there are parallel_setup_cost and parallel_tuple_cost\nparameters, which can prevent this from happening in the wild, but still...\n\nBest regards.\nAlexander\n\n\n",
"msg_date": "Fri, 26 Jul 2024 16:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fairywren timeout failures on the pg_amcheck/005_opclass_damage\n test"
}
] |
[
{
"msg_contents": "Hello everyone,\n\nIn src/backend/commands/trigger.c:4031, there is an afterTriggerAddEvent() function. The variable chunk is assigned the value of events->tail at line 4050. Subsequently, chunk is compared to NULL at lines 4051 and 4079, indicating that events->tail could potentially be NULL.\n\nHowever, at line 4102, we dereference events->tail by accessing events->tail->next without first checking if it is NULL.\n\nTo address this issue, I propose at least adding an assertion to ensure that events->tail != NULL before the dereference. The suggested patch is included in the attachment.\n\n-- \nBest regards,\nAlexander Kuznetsov",
"msg_date": "Thu, 25 Jul 2024 16:32:59 +0300",
"msg_from": "Alexander Kuznetsov <kuznetsovam@altlinux.org>",
"msg_from_op": true,
"msg_subject": "Possible null pointer dereference in afterTriggerAddEvent()"
},
{
"msg_contents": "On 2024-Jul-25, Alexander Kuznetsov wrote:\n\nHello Alexander,\n\n> In src/backend/commands/trigger.c:4031, there is an\n> afterTriggerAddEvent() function. The variable chunk is assigned the\n> value of events->tail at line 4050. Subsequently, chunk is compared to\n> NULL at lines 4051 and 4079, indicating that events->tail could\n> potentially be NULL.\n> \n> However, at line 4102, we dereference events->tail by accessing\n> events->tail->next without first checking if it is NULL.\n\nThanks for reporting this. I think the unwritten assumption is that\n->tail and ->head are NULL or not simultaneously, so testing for one of\nthem is equivalent to testing both. Failing to comply with this would\nbe data structure corruption.\n\n> To address this issue, I propose at least adding an assertion to\n> ensure that events->tail != NULL before the dereference. The suggested\n> patch is included in the attachment.\n\nHmm, this doesn't actually change any behavior AFAICS. If events->tail\nis NULL and we reach that code, then the dereference to get ->next is\ngoing to crash, whether the assertion is there or not.\n\nMaybe for sanity (and perhaps for Svace compliance) we could do it the\nother way around, i.e. by testing events->tail for nullness instead of\nevents->head, then add the assertion:\n\n\t\tif (events->tail == NULL)\n\t\t{\n\t\t\tAssert(events->head == NULL);\n\t\t\tevents->head = chunk;\n\t\t}\n\t\telse\n\t\t\tevents->tail->next = chunk;\n\nThis way, it's not wholly redundant.\n\nThat said, I'm not sure we actually *need* to change this.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n",
"msg_date": "Thu, 25 Jul 2024 19:07:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Possible null pointer dereference in afterTriggerAddEvent()"
},
{
"msg_contents": "25.07.2024 20:07, Alvaro Herrera wrote:\n> Maybe for sanity (and perhaps for Svace compliance) we could do it the\n> other way around, i.e. by testing events->tail for nullness instead of\n> events->head, then add the assertion:\n>\n> \t\tif (events->tail == NULL)\n> \t\t{\n> \t\t\tAssert(events->head == NULL);\n> \t\t\tevents->head = chunk;\n> \t\t}\n> \t\telse\n> \t\t\tevents->tail->next = chunk;\n>\n> This way, it's not wholly redundant.\nThanks for your response!\nI agree with the proposed changes and have updated the patch accordingly. Version 2 is attached.\n> That said, I'm not sure we actually *need* to change this.\nI understand and partly agree. But it appears that with these changes, the dereference of a null pointer is impossible even in builds where assertions are disabled. Previously, this issue could theoretically occur. Consequently, these changes slightly enhance overall security.\n\n-- \nBest regards,\nAlexander Kuznetsov",
"msg_date": "Fri, 26 Jul 2024 12:16:00 +0300",
"msg_from": "Alexander Kuznetsov <kuznetsovam@altlinux.org>",
"msg_from_op": true,
"msg_subject": "Re: Possible null pointer dereference in afterTriggerAddEvent()"
},
{
"msg_contents": "Hello,\n\nis there anything else we can help with or discuss in order to apply this fix?\n\n26.07.2024 12:16, Alexander Kuznetsov пишет:\n> 25.07.2024 20:07, Alvaro Herrera wrote:\n>> Maybe for sanity (and perhaps for Svace compliance) we could do it the\n>> other way around, i.e. by testing events->tail for nullness instead of\n>> events->head, then add the assertion:\n>>\n>> if (events->tail == NULL)\n>> {\n>> Assert(events->head == NULL);\n>> events->head = chunk;\n>> }\n>> else\n>> events->tail->next = chunk;\n>>\n>> This way, it's not wholly redundant.\n> Thanks for your response!\n> I agree with the proposed changes and have updated the patch accordingly. Version 2 is attached.\n>> That said, I'm not sure we actually *need* to change this.\n> I understand and partly agree. But it appears that with these changes, the dereference of a null pointer is impossible even in builds where assertions are disabled. Previously, this issue could theoretically occur. Consequently, these changes slightly enhance overall security.\n> \n\n-- \nBest regards,\nAlexander Kuznetsov\n\n\n",
"msg_date": "Tue, 24 Sep 2024 17:50:59 +0300",
"msg_from": "Alexander Kuznetsov <kuznetsovam@altlinux.org>",
"msg_from_op": true,
"msg_subject": "Re: Possible null pointer dereference in afterTriggerAddEvent()"
},
{
"msg_contents": "On 2024-Sep-24, Alexander Kuznetsov wrote:\n\n> Hello,\n> \n> is there anything else we can help with or discuss in order to apply this fix?\n\nI don't think so, it seems a no-brainer to me and there are no\nobjections. I'll get it pushed tomorrow.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n",
"msg_date": "Tue, 24 Sep 2024 21:58:21 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Possible null pointer dereference in afterTriggerAddEvent()"
},
{
"msg_contents": "On 2024-Sep-24, Alvaro Herrera wrote:\n\n> On 2024-Sep-24, Alexander Kuznetsov wrote:\n> \n> > is there anything else we can help with or discuss in order to apply this fix?\n> \n> I don't think so, it seems a no-brainer to me and there are no\n> objections. I'll get it pushed tomorrow.\n\nOK, done.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n",
"msg_date": "Wed, 25 Sep 2024 18:05:27 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Possible null pointer dereference in afterTriggerAddEvent()"
}
] |
[
{
"msg_contents": "Definitions:\n\n - collation is text ordering and comparison\n - ctype affects case mapping (e.g. LOWER()) and pattern\n matching/regexes\n\nCurrently, there is only one version field, and it represents the\nversion of the collation. So, if your provider is libc and datcollate\nis \"C\" and datctype is \"en_US.utf8\", then the datcollversion will\nalways be NULL. Other providers use datcolllocale, which is only one\nfield, so it doesn't matter.\n\nGiven the discussion here:\n\nhttps://www.postgresql.org/message-id/1078884.1721762815@sss.pgh.pa.us\n\nit seems like it may be a good idea to version collation and ctype\nseparately. The ctype version is, more or less, the Unicode version,\nand we know what that is for the builtin provider as well as ICU.\n\n(Aside: ICU could theoretically report the same Unicode version and\nstill make some change that would affect us, but I have not observed\nthat to be the case. I use exhaustive code point coverage to test that\nour Unicode functions return the same results as the corresponding ICU\nfunctions when the Unicode version matches.)\n\nAdding more collation fields is getting to be messy, though, because\nthey all have to be present in pg_database, as well. It's hard to move\nthose fields into pg_collation, because that's not a shared catalog, so\nthat could cause problems with CREATE/ALTER DATABASE. Is it worth\nthinking about how we can clean this up, or should we just put up with\nthe idea that almost half the fields in pg_database will be locale-\nrelated?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 25 Jul 2024 13:29:08 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "[18] separate collation and ctype versions, and cleanup of\n pg_database locale fields"
},
{
"msg_contents": "On Thu, 2024-07-25 at 13:29 -0700, Jeff Davis wrote:\n> it may be a good idea to version collation and ctype\n> separately. The ctype version is, more or less, the Unicode version,\n> and we know what that is for the builtin provider as well as ICU.\n\nAttached a rough patch for the purposes of discussion. It tracks the\nctype version separately, but doesn't do anything with it yet.\n\nThe main problem is that it's one more slightly confusing thing to\nunderstand, especially in pg_database because it's the ctype version of\nthe database default collation, not necessarily datctype.\n\nMaybe we can do something with the naming or catalog representation to\nmake this more clear?\n\nRegards,\n\tJeff Davis",
"msg_date": "Sat, 27 Jul 2024 08:34:10 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [18] separate collation and ctype versions, and cleanup of\n pg_database locale fields"
}
] |
[
{
"msg_contents": "I wrote a simple script to get all issues from the pgsql-bugs list that are\nreported by the form (basically getting just the issues without any replies\nto them) and now while searching through these issues I can't know directly\nwhether it is solved or not, just a workaround is checking any discussions\nin its thread whether there is a patch file attached or not but it is not\naccurate as submitting a patch is not related to the issue itself neither\nby mentioning the issue id or the issue itself is marked as solved or at\nleast someone is working on it, so if anyone can help it will be very\nappreciated.\n\nI wrote a simple script to get all issues from the pgsql-bugs list that are reported by the form (basically getting just the issues without any replies to them) and now while searching through these issues I can't know directly whether it is solved or not, just a workaround is checking any discussions in its thread whether there is a patch file attached or not but it is not accurate as submitting a patch is not related to the issue itself neither by mentioning the issue id or the issue itself is marked as solved or at least someone is working on it, so if anyone can help it will be very appreciated.",
"msg_date": "Fri, 26 Jul 2024 04:35:08 +0300",
"msg_from": "Mohab Yaser <mohabyaserofficial2003@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to check if issue is solved?"
},
{
"msg_contents": "On Thursday, July 25, 2024, Mohab Yaser <mohabyaserofficial2003@gmail.com>\nwrote:\n\n> I wrote a simple script to get all issues from the pgsql-bugs list that\n> are reported by the form (basically getting just the issues without any\n> replies to them) and now while searching through these issues I can't know\n> directly whether it is solved or not, just a workaround is checking any\n> discussions in its thread whether there is a patch file attached or not but\n> it is not accurate as submitting a patch is not related to the issue itself\n> neither by mentioning the issue id or the issue itself is marked as solved\n> or at least someone is working on it, so if anyone can help it will be very\n> appreciated.\n>\n\nIf there are zero replies the odds of it being resolved are reasonably\nclose to zero. But you are correct that seeing a reply ultimately means\nthat whether or not the issue is resolved is unknown. Historical data\nconforms to this and I don’t foresee any meaningful change in reality in\nthe near future either. Mostly because the vast majority of bugs get\nconfirmed and resolved in a timely manner by those responsible for them.\nBut the bug report only generally gets acknowledgement, patches get worked\non in -hackers. Stronger/actual enforcement of bug tracking in it commit\nmessages is possible but IMO the political capital needed is better spent\nelsewhere.\n\nWhile it is presently not the easiest to use, the commitfest is where\neffort should be expended. There is at least problem recognition [1] and\nsome concrete changes [2] suggested for improvements there which could use\npeople needing those changes in order to effectively contribute to the\nproject to use the current system, look over the commentary and ideas, and\nchime in with their own perspective.\n\n[1]\nhttps://www.postgresql.org/message-id/CA+TgmobnTcNq1xQE_+jxBEtj+AjKg0r_p5YNFHDE+EDnpcpFxA@mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/CAKFQuwZYAigt%2BsRJrwXp5E1p3haAkTto8LU4EwFR5C8gowfKuA%40mail.gmail.com\n\nDavid J.\n\nOn Thursday, July 25, 2024, Mohab Yaser <mohabyaserofficial2003@gmail.com> wrote:I wrote a simple script to get all issues from the pgsql-bugs list that are reported by the form (basically getting just the issues without any replies to them) and now while searching through these issues I can't know directly whether it is solved or not, just a workaround is checking any discussions in its thread whether there is a patch file attached or not but it is not accurate as submitting a patch is not related to the issue itself neither by mentioning the issue id or the issue itself is marked as solved or at least someone is working on it, so if anyone can help it will be very appreciated.If there are zero replies the odds of it being resolved are reasonably close to zero. But you are correct that seeing a reply ultimately means that whether or not the issue is resolved is unknown. Historical data conforms to this and I don’t foresee any meaningful change in reality in the near future either. Mostly because the vast majority of bugs get confirmed and resolved in a timely manner by those responsible for them. But the bug report only generally gets acknowledgement, patches get worked on in -hackers. Stronger/actual enforcement of bug tracking in it commit messages is possible but IMO the political capital needed is better spent elsewhere.While it is presently not the easiest to use, the commitfest is where effort should be expended. 
There is at least problem recognition [1] and some concrete changes [2] suggested for improvements there which could use people needing those changes in order to effectively contribute to the project to use the current system, look over the commentary and ideas, and chime in with their own perspective.[1] https://www.postgresql.org/message-id/CA+TgmobnTcNq1xQE_+jxBEtj+AjKg0r_p5YNFHDE+EDnpcpFxA@mail.gmail.com[2] https://www.postgresql.org/message-id/CAKFQuwZYAigt%2BsRJrwXp5E1p3haAkTto8LU4EwFR5C8gowfKuA%40mail.gmail.comDavid J.",
"msg_date": "Thu, 25 Jul 2024 19:06:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to check if issue is solved?"
},
{
"msg_contents": "Mohab Yaser <mohabyaserofficial2003@gmail.com> writes:\n> I wrote a simple script to get all issues from the pgsql-bugs list that are\n> reported by the form (basically getting just the issues without any replies\n> to them) and now while searching through these issues I can't know directly\n> whether it is solved or not, just a workaround is checking any discussions\n> in its thread whether there is a patch file attached or not but it is not\n> accurate as submitting a patch is not related to the issue itself neither\n> by mentioning the issue id or the issue itself is marked as solved or at\n> least someone is working on it, so if anyone can help it will be very\n> appreciated.\n\nOne way is to trawl the commit log and see if there are commits\nlinking to the bug thread with a Discussion: marker.\n\nThis isn't 100% accurate, because a commit could possibly mention a\nbug thread without claiming to resolve the bug fully. But it's\npretty reliable for issues raised in the last half dozen years or so.\nA bigger hole in what you're doing is that bug reports don't\nnecessarily come in via the web form.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 00:35:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to check if issue is solved?"
}
] |
[
{
"msg_contents": "Hi,\n\nWe got some trouble when using bulk_insert of COPY FROM around CopyMultiInsertInfoFlush(), the target table is our own access method storage.\n\nThe reason is we mistakenly set am routine finish_bulk_insert() to clean some resources or context. We thought it’s used as normal insert_finish and etc,\nbut COPY will call that multiple times if copy buffer is full.\n\nAnd I check this am routine of heap am, found that heapam_finish_bulk_insert() has been removed at commit c6b9204.\n\nCurrent in codes, there is no place to call finish_bulk_insert(), it’s may be used for other am like ours, but also make it easy to cause\nunnecessary troubles.\n\nShall we remove it before we provide an example for developers to avoid that?\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\n\n\n\n\nHi, \n\nWe got some trouble when using bulk_insert of COPY FROM around CopyMultiInsertInfoFlush(), the target table is our own access method storage.\n\nThe reason is we mistakenly set am routine finish_bulk_insert() to clean some resources or context. We thought it’s used as normal insert_finish and etc,\nbut COPY will call that multiple times if copy buffer is full.\n\nAnd I check this am routine of heap am, found that heapam_finish_bulk_insert() has been removed at commit c6b9204.\n\nCurrent in codes, there is no place to call finish_bulk_insert(), it’s may be used for other am like ours, but also make it easy to cause\nunnecessary troubles.\n\nShall we remove it before we provide an example for developers to avoid that?\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz",
"msg_date": "Fri, 26 Jul 2024 14:02:05 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is it necessary to keep am routine finish_bulk_insert?"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking into the commit b4da732fd64e936970f38c792f8b32c4bdf2bcd5,\nI noticed that we can create a materialized view using Ephemeral Named\nRelation in PostgreSQL 16 or earler. \n\n\npostgres=# create table tbl (i int);\nCREATE TABLE\n ^\npostgres=# create or replace function f() returns trigger as $$ begin \n create materialized view mv as select * from enr; return new; end; $$ language plpgsql;\nCREATE FUNCTION\n\npostgres=# create trigger trig after insert on tbl referencing new table as enr execute function f();\nCREATE TRIGGER\n\npostgres=# insert into tbl values (10);\n\npostgres=# \\d\n List of relations\n Schema | Name | Type | Owner \n--------+------+-------------------+--------\n public | mv | materialized view | yugo-n\n public | tbl | table | yugo-n\n(2 rows)\n\n\nWe cannot refresh or get the deinition of it, though.\n\npostgres=# refresh materialized view mv;\nERROR: executor could not find named tuplestore \"enr\"\n\npostgres=# \\d+ mv\nERROR: unrecognized RTE kind: 7\n\nIn PostgreSQL 17, materialized view using ENR cannot be created \nbecause queryEnv is not pass to RefreshMatViewByOid introduced by b4da732fd64.\nWhen we try to create it, the error is raised.\n\n ERROR: executor could not find named tuplestore \"enr\"\n\nAlthough it is hard to imagine users actually try to create materialized view\nusing ENR, how about prohibiting it even in PG16 or earlier by passing NULL\nas queryEnv arg in CreateQueryDesc to avoid to create useless matviews accidentally,\nas the attached patch?\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Fri, 26 Jul 2024 16:07:14 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "EphemeralNamedRelation and materialized view"
},
{
"msg_contents": "On Fri, 26 Jul 2024 at 12:07, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>\n> Hi,\n>\n> While looking into the commit b4da732fd64e936970f38c792f8b32c4bdf2bcd5,\n> I noticed that we can create a materialized view using Ephemeral Named\n> Relation in PostgreSQL 16 or earler.\n>\n>\n> postgres=# create table tbl (i int);\n> CREATE TABLE\n> ^\n> postgres=# create or replace function f() returns trigger as $$ begin\n> create materialized view mv as select * from enr; return new; end; $$ language plpgsql;\n> CREATE FUNCTION\n>\n> postgres=# create trigger trig after insert on tbl referencing new table as enr execute function f();\n> CREATE TRIGGER\n>\n> postgres=# insert into tbl values (10);\n>\n> postgres=# \\d\n> List of relations\n> Schema | Name | Type | Owner\n> --------+------+-------------------+--------\n> public | mv | materialized view | yugo-n\n> public | tbl | table | yugo-n\n> (2 rows)\n>\n>\n> We cannot refresh or get the deinition of it, though.\n>\n> postgres=# refresh materialized view mv;\n> ERROR: executor could not find named tuplestore \"enr\"\n>\n> postgres=# \\d+ mv\n> ERROR: unrecognized RTE kind: 7\n>\n> In PostgreSQL 17, materialized view using ENR cannot be created\n> because queryEnv is not pass to RefreshMatViewByOid introduced by b4da732fd64.\n> When we try to create it, the error is raised.\n>\n> ERROR: executor could not find named tuplestore \"enr\"\n>\n> Although it is hard to imagine users actually try to create materialized view\n> using ENR, how about prohibiting it even in PG16 or earlier by passing NULL\n> as queryEnv arg in CreateQueryDesc to avoid to create useless matviews accidentally,\n> as the attached patch?\n>\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n\nHi\nI think this is a clear bug fix, and should be backported in pg v12-v16.\nLTGM\n\nP.S should be set https://commitfest.postgresql.org/49/5153/ entry as RFC?\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Sat, 10 Aug 2024 12:22:00 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: EphemeralNamedRelation and materialized view"
}
] |
[
{
"msg_contents": "I'm now using version 14 and planning to update to 17 as soon as it comes\navailable. Then looking carefully to release notes to see exactly what I'll\nget when updated I see lots of unexplained features. Just because release\nnotes does not explain exactly what that change does. And I don't have a\nway to get what code or messages generated that feature.\n\n\n -\n\n Allow query nodes to be run in parallel in more cases (Tom Lane)\n Cool this feature, but when and what kind of query will use this ?\n -\n\n Improve EXPLAIN's display of SubPlan nodes and output parameters (Tom\n Lane, Dean Rasheed)\n hmm, interesting, but what exactly ?\n\nEverything that is done in Postgres is public, all messages and code are\navailable to anyone, but when I want to know what that feature is exactly\nusing release notes, I don't know how to find it.\n\nI think it would be very interesting if we have on release notes what was\ndiscussed for that change.\n\n -\n\n Allow query nodes to be run in parallel in more cases (Tom Lane) CF1\n <https://commitfest.postgresql.org/47/4798/> and CF2\n <https://commitfest.postgresql.org/48/4810/>\n -\n\n Improve EXPLAIN's display of SubPlan nodes and output parameters (Tom\n Lane, Dean Rasheed) CF1 <https://commitfest.postgresql.org/47/4782/>\n\nAnd these CF links could point to commitfest or email messages or even a\ndetailed tutorial of that feature.\n\nregards\nMarcos\n\nI'm now using version 14 and planning to update to 17 as soon as it comes available. Then looking carefully to release notes to see exactly what I'll get when updated I see lots of unexplained features. Just because release notes does not explain exactly what that change does. And I don't have a way to get what code or messages generated that feature.Allow query nodes to be run in parallel in more cases (Tom Lane)Cool this feature, but when and what kind of query will use this ?Improve EXPLAIN's display of SubPlan nodes and output parameters (Tom Lane, Dean Rasheed)hmm, interesting, but what exactly ?Everything that is done in Postgres is public, all messages and code are available to anyone, but when I want to know what that feature is exactly using release notes, I don't know how to find it.I think it would be very interesting if we have on release notes what was discussed for that change. Allow query nodes to be run in parallel in more cases (Tom Lane) CF1 and CF2Improve EXPLAIN's display of SubPlan nodes and output parameters (Tom Lane, Dean Rasheed) CF1And these CF links could point to commitfest or email messages or even a detailed tutorial of that feature. regardsMarcos",
"msg_date": "Fri, 26 Jul 2024 09:30:55 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Detailed release notes"
},
{
"msg_contents": "> On 26 Jul 2024, at 14:30, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> \n> I'm now using version 14 and planning to update to 17 as soon as it comes available. Then looking carefully to release notes to see exactly what I'll get when updated I see lots of unexplained features. Just because release notes does not explain exactly what that change does. And I don't have a way to get what code or messages generated that feature.\n\nThere is a way, but it's not exactly visible from reading the release notes.\n\n> • Allow query nodes to be run in parallel in more cases (Tom Lane)\n> Cool this feature, but when and what kind of query will use this ?\n\nReading the source of the release notes will show a comment which links to the\ncommit. The source can be seen here:\n\n https://github.com/postgres/postgres/blob/REL_17_STABLE/doc/src/sgml/release-17.sgml\n\n..and the comment for this item is:\n\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2023-07-14 [e08d74ca1] Allow plan nodes with initPlans to be considered paralle\n-->\n\n <listitem>\n <para>\n Allow query nodes to be run in parallel in more cases (Tom Lane)\n </para>\n </listitem>\n\nThis comment tells us the relevant commit is e08d74ca1, which can be found here:\n\n https://github.com/postgres/postgres/commit/e08d74ca1\n\nThis in turn leads to the mailinglist discussion for this specific feature:\n\n https://www.postgresql.org/message-id/1129530.1681317832@sss.pgh.pa.us\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 26 Jul 2024 14:45:20 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em sex., 26 de jul. de 2024 às 09:45, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n>\n> There is a way, but it's not exactly visible from reading the release\n> notes.\n>\n\nCool, didn't know that.\nBut why is that just a hidden comment and not a visible link for us ?\n\nregards\nMarcos\n\nEm sex., 26 de jul. de 2024 às 09:45, Daniel Gustafsson <daniel@yesql.se> escreveu:\nThere is a way, but it's not exactly visible from reading the release notes.\nCool, didn't know that. But why is that just a hidden comment and not a visible link for us ?regardsMarcos",
"msg_date": "Fri, 26 Jul 2024 10:00:33 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "> On 26 Jul 2024, at 15:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> But why is that just a hidden comment and not a visible link for us ?\n\nThat's likely the wrong level of detail for the overwhelming majority of\nrelease notes readers. I have a feeling this was discussed not too long ago\nbut (if so) I fail to find that discussion now.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 26 Jul 2024 15:11:20 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em sex., 26 de jul. de 2024 às 10:11, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n>\n> That's likely the wrong level of detail for the overwhelming majority of\n> release notes readers. I have a feeling this was discussed not too long\n> ago\n> but (if so) I fail to find that discussion now.\n\n\nWrong level ? Where is the appropriate place on DOCs to see exactly what\nI'll get when updated ?\nA separate page \"Detailed Release Notes\" ? I don't think so.\nI think release notes are sometimes the only place we read to decide if an\nupgrade is doable or not.\n\nWell, that opened my eyes, now I can see detailed info about every feature\nwhen it's committed.\nAnd I'm really convinced that a small link to that commit wouldn't\nget dirty release notes.\n\nEm sex., 26 de jul. de 2024 às 10:11, Daniel Gustafsson <daniel@yesql.se> escreveu:\nThat's likely the wrong level of detail for the overwhelming majority of\nrelease notes readers. I have a feeling this was discussed not too long ago\nbut (if so) I fail to find that discussion now.Wrong level ? Where is the appropriate place on DOCs to see exactly what I'll get when updated ? A separate page \"Detailed Release Notes\" ? I don't think so.I think release notes are sometimes the only place we read to decide if an upgrade is doable or not.Well, that opened my eyes, now I can see detailed info about every feature when it's committed.And I'm really convinced that a small link to that commit wouldn't get dirty release notes.",
"msg_date": "Fri, 26 Jul 2024 10:25:16 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Jul 26, 2024, at 9:26 AM, Marcos Pegoraro <marcos@f10.com.br> wrote:Em sex., 26 de jul. de 2024 às 10:11, Daniel Gustafsson <daniel@yesql.se> escreveu:\nThat's likely the wrong level of detail for the overwhelming majority of\nrelease notes readers. I have a feeling this was discussed not too long ago\nbut (if so) I fail to find that discussion now.Wrong level ? Where is the appropriate place on DOCs to see exactly what I'll get when updated ? A separate page \"Detailed Release Notes\" ? I don't think so.I think release notes are sometimes the only place we read to decide if an upgrade is doable or not.Well, that opened my eyes, now I can see detailed info about every feature when it's committed.And I'm really convinced that a small link to that commit wouldn't get dirty release notes.FWIW, pgPedia has a version of the release notes that does get that granular:https://pgpedia.info/postgresql-versions/postgresql-17.htmlJonathan",
"msg_date": "Fri, 26 Jul 2024 09:34:47 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 9:26 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> Well, that opened my eyes, now I can see detailed info about every feature when it's committed.\n> And I'm really convinced that a small link to that commit wouldn't get dirty release notes.\n\n+1. I think those links would be useful to a lot of people.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2024 09:56:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 26 Jul 2024, at 15:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n>> But why is that just a hidden comment and not a visible link for us ?\n\n> That's likely the wrong level of detail for the overwhelming majority of\n> release notes readers. I have a feeling this was discussed not too long ago\n> but (if so) I fail to find that discussion now.\n\nYeah, I too recall some discussion of surfacing the commit links\nsomehow, perhaps as a popup tooltip. Nobody's got round to it yet.\nIt's not real clear how to handle multiple links per <para>, which\nhappens from time to time in major release notes and just about\neveryplace in minor release notes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 10:11:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 6:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 26, 2024 at 9:26 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > Well, that opened my eyes, now I can see detailed info about every feature when it's committed.\n> > And I'm really convinced that a small link to that commit wouldn't get dirty release notes.\n>\n> +1. I think those links would be useful to a lot of people.\n\n+1. I've been asked a lot of times how to find the associated commit\nIDs from release note items. These links would help users know the\ndetails of the changes, and I believe many users would like to do\nthat.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 26 Jul 2024 09:01:19 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em sex., 26 de jul. de 2024 às 13:01, Masahiko Sawada <sawada.mshk@gmail.com>\nescreveu:\n\n>\n> +1. I've been asked a lot of times how to find the associated commit\n> IDs from release note items. These links would help users know the\n> details of the changes, and I believe many users would like to do\n> that.\n\n\nYes, this way release notes would explain itself.\n\nFor now my release notes will be this\nhttps://github.com/postgres/postgres/blob/REL_17_STABLE/doc/src/sgml/release-17.sgml\nand not this\nhttps://www.postgresql.org/docs/17/release-17.html\n\nregards\nMarcos\n\nEm sex., 26 de jul. de 2024 às 13:01, Masahiko Sawada <sawada.mshk@gmail.com> escreveu:\n+1. I've been asked a lot of times how to find the associated commit\nIDs from release note items. These links would help users know the\ndetails of the changes, and I believe many users would like to do\nthat.Yes, this way release notes would explain itself.For now my release notes will be thishttps://github.com/postgres/postgres/blob/REL_17_STABLE/doc/src/sgml/release-17.sgmland not thishttps://www.postgresql.org/docs/17/release-17.htmlregardsMarcos",
"msg_date": "Fri, 26 Jul 2024 13:26:15 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 10:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 26 Jul 2024, at 15:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >> But why is that just a hidden comment and not a visible link for us ?\n>\n> > That's likely the wrong level of detail for the overwhelming majority of\n> > release notes readers. I have a feeling this was discussed not too long ago\n> > but (if so) I fail to find that discussion now.\n>\n> Yeah, I too recall some discussion of surfacing the commit links\n> somehow, perhaps as a popup tooltip. Nobody's got round to it yet.\n> It's not real clear how to handle multiple links per <para>, which\n> happens from time to time in major release notes and just about\n> everyplace in minor release notes.\n>\n\nsimilar to https://docs.python.org/3/whatsnew/changelog.html\n\nChange functions to use a safe search_path during maintenance\noperations (Jeff Davis)\n\nchange to\n\n[commitId_link1, commitId_link2]: Change functions to use a safe\nsearch_path during maintenance operations (Jeff Davis)\n\nDoes this make sense?\nIf so, then we can hardcode and some automation can change to that way.\n\n\n",
"msg_date": "Mon, 5 Aug 2024 18:54:10 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em seg., 5 de ago. de 2024 às 07:54, jian he <jian.universality@gmail.com>\nescreveu:\n\n>\n> [commitId_link1, commitId_link2]: Change functions to use a safe\n> search_path during maintenance operations (Jeff Davis)\n>\n\nI don't like that prefix dirtying each item.\nI think having just a link after every item would be better.\nFirstly because in English we read left to right and\nalso because we don't need the commit code. So it would be\n\n - Change functions to use a safe search_path during maintenance\n operations (Jeff Davis) [link1, link2]\n\nor just a number to it\n\n - Change functions to use a safe search_path during maintenance\n operations (Jeff Davis) [1, 2]\n\n\nregards\nMarcos\n\nEm seg., 5 de ago. de 2024 às 07:54, jian he <jian.universality@gmail.com> escreveu:\n[commitId_link1, commitId_link2]: Change functions to use a safe\nsearch_path during maintenance operations (Jeff Davis)I don't like that prefix dirtying each item. I think having just a link after every item would be better. Firstly because in English we read left to right and also because we don't need the commit code. So it would be Change functions to use a safe search_path during maintenance operations (Jeff Davis) [link1, link2]or just a number to itChange functions to use a safe search_path during maintenance operations (Jeff Davis) [1, 2]regardsMarcos",
"msg_date": "Mon, 5 Aug 2024 08:16:23 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "> On 5 Aug 2024, at 13:16, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> \n> Em seg., 5 de ago. de 2024 às 07:54, jian he <jian.universality@gmail.com <mailto:jian.universality@gmail.com>> escreveu:\n>> \n>> [commitId_link1, commitId_link2]: Change functions to use a safe\n>> search_path during maintenance operations (Jeff Davis)\n> \n> I don't like that prefix dirtying each item. \n\nI too would prefer links at the end, not least since we might have 2 or 3 (or\nmore) links for an item.\n\nPython also links to the Github issue and not the commit, in our project the\nanalog to that would be linking to the thread in the mailinglist archive as\nwell as the commit. For us linking to the commit is probably preferrable since\nit will link to the thread but the thread often wont link to the commit (like a\nGithub issue will). Maybe icons for code/emailthread can be used to make it\nclear what the link is, and cut down to horizontal space required?\n\n--\nDaniel Gustafsson\n\n\nOn 5 Aug 2024, at 13:16, Marcos Pegoraro <marcos@f10.com.br> wrote:Em seg., 5 de ago. de 2024 às 07:54, jian he <jian.universality@gmail.com> escreveu:\n[commitId_link1, commitId_link2]: Change functions to use a safe\nsearch_path during maintenance operations (Jeff Davis)I don't like that prefix dirtying each item. I too would prefer links at the end, not least since we might have 2 or 3 (ormore) links for an item.Python also links to the Github issue and not the commit, in our project theanalog to that would be linking to the thread in the mailinglist archive aswell as the commit. For us linking to the commit is probably preferrable sinceit will link to the thread but the thread often wont link to the commit (like aGithub issue will). Maybe icons for code/emailthread can be used to make itclear what the link is, and cut down to horizontal space required?\n--Daniel Gustafsson",
"msg_date": "Mon, 5 Aug 2024 15:33:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Mon, Aug 5, 2024, at 10:33 AM, Daniel Gustafsson wrote:\n>> On 5 Aug 2024, at 13:16, Marcos Pegoraro <marcos@f10.com.br> wrote:\n>> \n>> Em seg., 5 de ago. de 2024 às 07:54, jian he <jian.universality@gmail.com> escreveu:\n>>> \n>>> [commitId_link1, commitId_link2]: Change functions to use a safe\n>>> search_path during maintenance operations (Jeff Davis)\n>> \n>> I don't like that prefix dirtying each item. \n> \n> I too would prefer links at the end, not least since we might have 2 or 3 (or\n> more) links for an item.\n\n+1.\n\n> Python also links to the Github issue and not the commit, in our project the\n> analog to that would be linking to the thread in the mailinglist archive as\n> well as the commit. For us linking to the commit is probably preferrable since\n> it will link to the thread but the thread often wont link to the commit (like a\n> Github issue will). Maybe icons for code/emailthread can be used to make it\n> clear what the link is, and cut down to horizontal space required?\n\nPgBouncer adds the PR number at the end [1] too. I generally prefer this style\nbecause you read the message and after that if you want additional detail you\ncan click on the link at the end.\n\nI don't know if linking to the thread is a good idea. We have long long threads\nthat cannot provide quick information for the reader. Since we don't have a\nconcept of every commit has a CF entry, IMO we should use only commits here. The\ncommit message points to the discussion so it is a good start point if you want\nto research about that specific feature.\n\nIf one commit is not sufficient for that item, we can always add multiple\ncommits as we usually do in the release notes comments.\n\n\n[1] https://www.pgbouncer.org/changelog.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Aug 5, 2024, at 10:33 AM, Daniel Gustafsson wrote:On 5 Aug 2024, at 13:16, Marcos Pegoraro <marcos@f10.com.br> wrote:Em seg., 5 de ago. de 2024 às 07:54, jian he <jian.universality@gmail.com> escreveu:[commitId_link1, commitId_link2]: Change functions to use a safe search_path during maintenance operations (Jeff Davis)I don't like that prefix dirtying each item. I too would prefer links at the end, not least since we might have 2 or 3 (ormore) links for an item.+1.Python also links to the Github issue and not the commit, in our project theanalog to that would be linking to the thread in the mailinglist archive aswell as the commit. For us linking to the commit is probably preferrable sinceit will link to the thread but the thread often wont link to the commit (like aGithub issue will). Maybe icons for code/emailthread can be used to make itclear what the link is, and cut down to horizontal space required?PgBouncer adds the PR number at the end [1] too. I generally prefer this stylebecause you read the message and after that if you want additional detail youcan click on the link at the end.I don't know if linking to the thread is a good idea. We have long long threadsthat cannot provide quick information for the reader. Since we don't have aconcept of every commit has a CF entry, IMO we should use only commits here. Thecommit message points to the discussion so it is a good start point if you wantto research about that specific feature.If one commit is not sufficient for that item, we can always add multiplecommits as we usually do in the release notes comments.[1] https://www.pgbouncer.org/changelog.html--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 05 Aug 2024 12:38:10 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "hi.\n\nplease check the attached patch file and apply against\nb18b3a8150dbb150124bd345e000d6dc92f3d6dd.\n\nyou can also preview the attached screenshot to see the rendered effect.",
"msg_date": "Tue, 6 Aug 2024 15:30:25 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em ter., 6 de ago. de 2024 às 04:30, jian he <jian.universality@gmail.com>\nescreveu:\n\n>\n> you can also preview the attached screenshot to see the rendered effect.\n>\n\nLoved, except that the commit id does not help too much, so I don't think\nwe need it.\nI think a numbered link would be better.\n\n - Change functions to use a safe search_path during maintenance\n operations (Jeff Davis) [1, 2]\n\nAnd your patch has an additional space before comma before second, third\nlinks, [1 , 2] instead of [1, 2]\n\nregards\nMarcos\n\nEm ter., 6 de ago. de 2024 às 04:30, jian he <jian.universality@gmail.com> escreveu:\nyou can also preview the attached screenshot to see the rendered effect.Loved, except that the commit id does not help too much, so I don't think we need it.I think a numbered link would be better.Change functions to use a safe search_path during maintenance operations (Jeff Davis) [1, 2]And your patch has an additional space before comma before second, third links, [1 , 2] instead of [1, 2]regardsMarcos",
"msg_date": "Tue, 6 Aug 2024 10:57:00 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 9:57 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> Loved, except that the commit id does not help too much, so I don't think we need it.\n> I think a numbered link would be better.\n\nI think the commit ID is quite useful. If you're using git, you can do\n\"git show $COMMITID\". If you're using the web, you can go to\nhttps://git.postgresql.org/pg/commitdiff/$COMMITID\n\nBig -1 for removing the commit ID.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Aug 2024 10:02:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Aug 6, 2024, at 11:02 AM, Robert Haas wrote:\n> On Tue, Aug 6, 2024 at 9:57 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > Loved, except that the commit id does not help too much, so I don't think we need it.\n> > I think a numbered link would be better.\n> \n> I think the commit ID is quite useful. If you're using git, you can do\n> \"git show $COMMITID\". If you're using the web, you can go to\n> https://git.postgresql.org/pg/commitdiff/$COMMITID\n> \n> Big -1 for removing the commit ID.\n\nAgree. Numbers mean nothing in this context. You are searching for detailed\ninformation about the referred feature. A visual information (commit hash)\nprovides a context that you will find the source code modifications for that\nfeature.\n\nTalking about the patch, do we want to rely on an external resource? I suggest\nthat we use a postgresql.org subdomain. It can point to\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=$COMMIT\n\nor even better use a rewrite rule to define a user-friendly URL (and probably a\nmechanism to hide the gitweb URL) like\n\nhttps://www.postgresql.org/commit/da4017a694d\n\nand the short version that we usually use for the mailing list.\n\nhttps://postgr.es/c/da4017a694d\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Aug 6, 2024, at 11:02 AM, Robert Haas wrote:On Tue, Aug 6, 2024 at 9:57 AM Marcos Pegoraro <marcos@f10.com.br> wrote:> Loved, except that the commit id does not help too much, so I don't think we need it.> I think a numbered link would be better.I think the commit ID is quite useful. If you're using git, you can do\"git show $COMMITID\". If you're using the web, you can go tohttps://git.postgresql.org/pg/commitdiff/$COMMITIDBig -1 for removing the commit ID.Agree. Numbers mean nothing in this context. You are searching for detailedinformation about the referred feature. A visual information (commit hash)provides a context that you will find the source code modifications for thatfeature.Talking about the patch, do we want to rely on an external resource? I suggestthat we use a postgresql.org subdomain. It can point tohttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=$COMMITor even better use a rewrite rule to define a user-friendly URL (and probably amechanism to hide the gitweb URL) likehttps://www.postgresql.org/commit/da4017a694dand the short version that we usually use for the mailing list.https://postgr.es/c/da4017a694d--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 06 Aug 2024 12:02:59 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Hi,\n\nOn 2024-08-06 12:02:59 -0300, Euler Taveira wrote:\n> Talking about the patch, do we want to rely on an external resource? I suggest\n> that we use a postgresql.org subdomain. It can point to\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=$COMMIT\n\nI wonder if we should make that a configurable base domain? We have a few\nother variables in the sgml that can optionally be set.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Aug 2024 08:12:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 11:12 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2024-08-06 12:02:59 -0300, Euler Taveira wrote:\n> > Talking about the patch, do we want to rely on an external resource? I suggest\n> > that we use a postgresql.org subdomain. It can point to\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=$COMMIT\n>\n> I wonder if we should make that a configurable base domain? We have a few\n> other variables in the sgml that can optionally be set.\n>\n\nThanks for the tip.\n\nadding the following line to postgres.sgml saved me.\n+<!ENTITY commit_baseurl \"https://postgr.es/c/\">\n\n\nif people think https://postgr.es/c/da4017a694d no good\nwe have\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=da4017a694d\nand\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=da4017a694d\n\n\nnow we don't need to repeat the url prefix in release-17.sgml.\nit is not configurable though.",
"msg_date": "Thu, 8 Aug 2024 11:10:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em qui., 8 de ago. de 2024 às 00:11, jian he <jian.universality@gmail.com>\nescreveu:\n\n>\n> adding the following line to postgres.sgml saved me.\n> +<!ENTITY commit_baseurl \"https://postgr.es/c/\">\n>\n\nComma at end of line of these should be removed\n<ulink url=\"&commit_baseurl;165d581f1\"> [165d581f1] </ulink>,\n<ulink url=\"&commit_baseurl;6dbb49026\"> [6dbb49026] </ulink>,\n\nMaybe those items which have lots of commits would be strange\nA space before commas on next links I think should be removed because an\nitem like this\n<ulink url=\"&commit_baseurl;c951e9042\"> [c951e9042] </ulink>,\n<ulink url=\"&commit_baseurl;d16eb83ab\"> [d16eb83ab] </ulink>,\n<ulink url=\"&commit_baseurl;cd3424748\"> [cd3424748] </ulink>,\n<ulink url=\"&commit_baseurl;816f10564\"> [816f10564] </ulink>,\n<ulink url=\"&commit_baseurl;927332b95\"> [927332b95] </ulink>,\n<ulink url=\"&commit_baseurl;f1bb9284f\"> [f1bb9284f] </ulink>,\n<ulink url=\"&commit_baseurl;304b6b1a6\"> [304b6b1a6] </ulink>,\n<ulink url=\"&commit_baseurl;60ae37a8b\"> [60ae37a8b] </ulink>,\n<ulink url=\"&commit_baseurl;2800fbb2b\"> [2800fbb2b] </ulink>\nwould be like\n[c951e9042] <https://postgr.es/c/c951e9042> , [d16eb83ab]\n<https://postgr.es/c/d16eb83ab> , [cd3424748]\n<https://postgr.es/c/cd3424748> , [816f10564]\n<https://postgr.es/c/816f10564> , [927332b95]\n<https://postgr.es/c/927332b95> , [f1bb9284f]\n<https://postgr.es/c/f1bb9284f> , [304b6b1a6]\n<https://postgr.es/c/304b6b1a6> , [60ae37a8b]\n<https://postgr.es/c/60ae37a8b> , [2800fbb2b]\n<https://postgr.es/c/2800fbb2b>\nAnd if removed that space would be\n[c951e9042 <https://postgr.es/c/c951e9042>], [d16eb83ab\n<https://postgr.es/c/d16eb83ab>], [cd3424748 <https://postgr.es/c/cd3424748>],\n[816f10564], [927332b95], [f1bb9284f], [304b6b1a6], [60ae37a8b], [2800fbb2b]\nBut if we use just one bracket, then would be\n[c951e9042 <https://postgr.es/c/c951e9042>, d16eb83ab\n<https://postgr.es/c/d16eb83ab>, cd3424748 <https://postgr.es/c/cd3424748>,\n816f10564, 927332b95, f1bb9284f, 304b6b1a6, 60ae37a8b, 2800fbb2b]\n\nregards\nMarcos\n\nEm qui., 8 de ago. 
de 2024 às 00:11, jian he <jian.universality@gmail.com> escreveu:\nadding the following line to postgres.sgml saved me.\n+<!ENTITY commit_baseurl \"https://postgr.es/c/\">Comma at end of line of these should be removed<ulink url=\"&commit_baseurl;165d581f1\"> [165d581f1] </ulink>,<ulink url=\"&commit_baseurl;6dbb49026\"> [6dbb49026] </ulink>,Maybe those items which have lots of commits would be strangeA space before commas on next links I think should be removed because an item like this<ulink url=\"&commit_baseurl;c951e9042\"> [c951e9042] </ulink>,<ulink url=\"&commit_baseurl;d16eb83ab\"> [d16eb83ab] </ulink>,<ulink url=\"&commit_baseurl;cd3424748\"> [cd3424748] </ulink>,<ulink url=\"&commit_baseurl;816f10564\"> [816f10564] </ulink>,<ulink url=\"&commit_baseurl;927332b95\"> [927332b95] </ulink>,<ulink url=\"&commit_baseurl;f1bb9284f\"> [f1bb9284f] </ulink>,<ulink url=\"&commit_baseurl;304b6b1a6\"> [304b6b1a6] </ulink>,<ulink url=\"&commit_baseurl;60ae37a8b\"> [60ae37a8b] </ulink>,<ulink url=\"&commit_baseurl;2800fbb2b\"> [2800fbb2b] </ulink>would be like[c951e9042] , [d16eb83ab] , [cd3424748] , [816f10564] , [927332b95] , [f1bb9284f] , [304b6b1a6] , [60ae37a8b] , [2800fbb2b]And if removed that space would be[c951e9042], [d16eb83ab], [cd3424748], [816f10564], [927332b95], [f1bb9284f], [304b6b1a6], [60ae37a8b], [2800fbb2b]But if we use just one bracket, then would be[c951e9042, d16eb83ab, cd3424748, 816f10564, 927332b95, f1bb9284f, 304b6b1a6, 60ae37a8b, 2800fbb2b]regardsMarcos",
"msg_date": "Thu, 8 Aug 2024 09:50:25 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 8:51 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> Em qui., 8 de ago. de 2024 às 00:11, jian he <jian.universality@gmail.com> escreveu:\n>>\n>>\n>> adding the following line to postgres.sgml saved me.\n>> +<!ENTITY commit_baseurl \"https://postgr.es/c/\">\n>\n>\n> Comma at end of line of these should be removed\n> <ulink url=\"&commit_baseurl;165d581f1\"> [165d581f1] </ulink>,\n> <ulink url=\"&commit_baseurl;6dbb49026\"> [6dbb49026] </ulink>,\n>\n> Maybe those items which have lots of commits would be strange\n> A space before commas on next links I think should be removed because an item like this\n> <ulink url=\"&commit_baseurl;c951e9042\"> [c951e9042] </ulink>,\n\nplease check attached.",
"msg_date": "Fri, 9 Aug 2024 08:57:39 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em qui., 8 de ago. de 2024 às 21:57, jian he <jian.universality@gmail.com>\nescreveu:\n\n>\n> please check attached.\n>\nI still think this way would be better\n+ Sawada, John Naylor). [\n+ <ulink url=\"&commit_baseurl;ee1b30f12\">ee1b30f12</ulink>,\n+ <ulink url=\"&commit_baseurl;30e144287\">30e144287</ulink>,\n+ <ulink url=\"&commit_baseurl;667e65aac\">667e65aac</ulink>,\n+ <ulink url=\"&commit_baseurl;6dbb49026\">6dbb49026</ulink>]\ninstead of this\n+ Sawada, John Naylor).\n+ <ulink url=\"&commit_baseurl;ee1b30f12\"> [ee1b30f12]</ulink>,\n+ <ulink url=\"&commit_baseurl;30e144287\"> [30e144287]</ulink>,\n+ <ulink url=\"&commit_baseurl;667e65aac\"> [667e65aac]</ulink>,\n+ <ulink url=\"&commit_baseurl;6dbb49026\"> [6dbb49026]</ulink>\n\nregards\nMarcos\n\nEm qui., 8 de ago. de 2024 às 21:57, jian he <jian.universality@gmail.com> escreveu:\nplease check attached.I still think this way would be better + Sawada, John Naylor). [+ <ulink url=\"&commit_baseurl;ee1b30f12\">ee1b30f12</ulink>,+ <ulink url=\"&commit_baseurl;30e144287\">30e144287</ulink>,+ <ulink url=\"&commit_baseurl;667e65aac\">667e65aac</ulink>,+ <ulink url=\"&commit_baseurl;6dbb49026\">6dbb49026</ulink>]instead of this+ Sawada, John Naylor). + <ulink url=\"&commit_baseurl;ee1b30f12\"> [ee1b30f12]</ulink>,+ <ulink url=\"&commit_baseurl;30e144287\"> [30e144287]</ulink>,+ <ulink url=\"&commit_baseurl;667e65aac\"> [667e65aac]</ulink>,+ <ulink url=\"&commit_baseurl;6dbb49026\"> [6dbb49026]</ulink>regardsMarcos",
"msg_date": "Fri, 9 Aug 2024 09:42:53 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 8:43 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> I still think this way would be better\n\n> + Sawada, John Naylor). [\n> + <ulink url=\"&commit_baseurl;ee1b30f12\">ee1b30f12</ulink>,\n> + <ulink url=\"&commit_baseurl;30e144287\">30e144287</ulink>,\n> + <ulink url=\"&commit_baseurl;667e65aac\">667e65aac</ulink>,\n> + <ulink url=\"&commit_baseurl;6dbb49026\">6dbb49026</ulink>]\n> instead of this\n> + Sawada, John Naylor).\n> + <ulink url=\"&commit_baseurl;ee1b30f12\"> [ee1b30f12]</ulink>,\n> + <ulink url=\"&commit_baseurl;30e144287\"> [30e144287]</ulink>,\n> + <ulink url=\"&commit_baseurl;667e65aac\"> [667e65aac]</ulink>,\n> + <ulink url=\"&commit_baseurl;6dbb49026\"> [6dbb49026]</ulink>\n>\n\nas your wish.",
"msg_date": "Mon, 12 Aug 2024 17:20:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em seg., 12 de ago. de 2024 às 06:21, jian he <jian.universality@gmail.com>\nescreveu:\n\n>\n> as your wish.\n>\n\nLooks good.\n\nregards\nMarcos\n\nEm seg., 12 de ago. de 2024 às 06:21, jian he <jian.universality@gmail.com> escreveu:\nas your wish.Looks good.regardsMarcos",
"msg_date": "Mon, 12 Aug 2024 11:24:42 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 10:11:22AM -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 26 Jul 2024, at 15:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >> But why is that just a hidden comment and not a visible link for us ?\n> \n> > That's likely the wrong level of detail for the overwhelming majority of\n> > release notes readers. I have a feeling this was discussed not too long ago\n> > but (if so) I fail to find that discussion now.\n> \n> Yeah, I too recall some discussion of surfacing the commit links\n> somehow, perhaps as a popup tooltip. Nobody's got round to it yet.\n> It's not real clear how to handle multiple links per <para>, which\n> happens from time to time in major release notes and just about\n> everyplace in minor release notes.\n\nYes, some kind of popup is what I remember. Looking at the HTML output\nfor the docs, the SGML comments don't appear the HTML, so I think we\nneed to write a Perl or shell script to match all the SGML <listitem>\ncomment text with the HTML <li> text, and insert tooltip text with links\nfor every one of them. Should I work on this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 19 Aug 2024 18:10:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em seg., 19 de ago. de 2024 às 19:10, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> Should I work on this?\n>\n> Well, a process which does this automatically would be cool, but a\nmodified version of release notes for version 17 was done manually and\nseems fine.\nSo, why not commit this version and later for version 18 then create this\nprocess ?\n\nregards\nMarcos\n\nEm seg., 19 de ago. de 2024 às 19:10, Bruce Momjian <bruce@momjian.us> escreveu:Should I work on this?\nWell, a process which does this automatically would be cool, but a modified version of release notes for version 17 was done manually and seems fine. So, why not commit this version and later for version 18 then create this process ?regardsMarcos",
"msg_date": "Thu, 22 Aug 2024 14:18:10 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> Well, a process which does this automatically would be cool, but a\n> modified version of release notes for version 17 was done manually and\n> seems fine.\n> So, why not commit this version and later for version 18 then create this\n> process ?\n\nI'd prefer to see this implemented in the website based on our\nexisting markup practices. That way it would work for quite a\nfew years' worth of existing release notes, not only future ones.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Aug 2024 13:27:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em qui., 22 de ago. de 2024 às 14:27, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n>\n> I'd prefer to see this implemented in the website based on our\n> existing markup practices. That way it would work for quite a\n> few years' worth of existing release notes, not only future ones.\n>\n> I understand your point, and agree with that for previous releases, but\nsince we have a month only for version 17, will this process work properly\nuntil that date ?\nI think a release notes is more read as soon as it is available than other\nmonths, isn't it ?\nAnd this feature is just a HTML page, so if it's done manually or\nautomatically, from the reader point of view it'll be exactly the same.\n\nregards\nMarcos\n\nEm qui., 22 de ago. de 2024 às 14:27, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\nI'd prefer to see this implemented in the website based on our\nexisting markup practices. That way it would work for quite a\nfew years' worth of existing release notes, not only future ones.\nI understand your point, and agree with that for previous releases, but since we have a month only for version 17, will this process work properly until that date ?I think a release notes is more read as soon as it is available than other months, isn't it ? And this feature is just a HTML page, so if it's done manually or automatically, from the reader point of view it'll be exactly the same.regardsMarcos",
"msg_date": "Thu, 22 Aug 2024 16:33:18 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 6:11 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, Jul 26, 2024 at 10:11:22AM -0400, Tom Lane wrote:\n> > Daniel Gustafsson <daniel@yesql.se> writes:\n> > >> On 26 Jul 2024, at 15:00, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > >> But why is that just a hidden comment and not a visible link for us ?\n> >\n> > > That's likely the wrong level of detail for the overwhelming majority of\n> > > release notes readers. I have a feeling this was discussed not too long ago\n> > > but (if so) I fail to find that discussion now.\n> >\n> > Yeah, I too recall some discussion of surfacing the commit links\n> > somehow, perhaps as a popup tooltip. Nobody's got round to it yet.\n> > It's not real clear how to handle multiple links per <para>, which\n> > happens from time to time in major release notes and just about\n> > everyplace in minor release notes.\n>\n> Yes, some kind of popup is what I remember. Looking at the HTML output\n> for the docs, the SGML comments don't appear the HTML, so I think we\n> need to write a Perl or shell script to match all the SGML <listitem>\n> comment text with the HTML <li> text, and insert tooltip text with links\n> for every one of them. Should I work on this?\n>\n\ndo you mean this thread [1]? but the output is the attached image.png,\nwhich looks more invasive.\nalso that does not link to git commit url.\n\n\nor do you mean automate the process, like add\n<a href=\"commit_url\">commit</a>\nautomatically? using perl script, and the output is the same as [2]\n\n\nor a popup like this kind of\nhttps://stackoverflow.com/a/47821773/15603477\nbut this thread, people seem to want to display the git commit as is,\nno need for extra action?\n\n\n[1] https://www.postgresql.org/message-id/CAAM3qnLMh19gXBCa6QME_%3Dp6UTNXajpvyoJm3DLZEcfKiC3rzA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CACJufxF5q5iuENPOHh2pvoWkRUrw0A9wY0ubbUPVnGMUnOt7MQ%40mail.gmail.com",
"msg_date": "Fri, 23 Aug 2024 11:26:30 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Aug 23, 2024 at 11:26:30AM +0800, jian he wrote:\n> do you mean this thread [1]? but the output is the attached image.png,\n> which looks more invasive.\n> also that does not link to git commit url.\n> \n> or do you mean automate the process, like add\n> <a href=\"commit_url\">commit</a>\n> automatically? using perl script, and the output is the same as [2]\n> \n> \n> or a popup like this kind of\n> https://stackoverflow.com/a/47821773/15603477\n> but this thread, people seem to want to display the git commit as is,\n> no need for extra action?\n\nSure, we can display it however people want.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 22 Aug 2024 23:33:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On 22.08.24 19:27, Tom Lane wrote:\n> Marcos Pegoraro <marcos@f10.com.br> writes:\n>> Well, a process which does this automatically would be cool, but a\n>> modified version of release notes for version 17 was done manually and\n>> seems fine.\n>> So, why not commit this version and later for version 18 then create this\n>> process ?\n> \n> I'd prefer to see this implemented in the website based on our\n> existing markup practices. That way it would work for quite a\n> few years' worth of existing release notes, not only future ones.\n\nSeems difficult, because the annotations right now are in XML comments. \nYou'd need to put them in some kind of markup around it, like from\n\n<!--\nAuthor: Bruce Momjian <bruce@momjian.us>\n2023-09-26 [15d5d7405] pgrowlocks: change lock mode output labels for \nconsiste\n-->\n\n <listitem>\n <para>\n Change <application><xref linkend=\"pgrowlocks\"/></application>\n lock mode output labels (Bruce Momjian)\n </para>\n </listitem>\n\nto\n\n <listitem>\n <para>\n Change <application><xref linkend=\"pgrowlocks\"/></application>\n lock mode output labels (Bruce Momjian)\n </para>\n<literallayout role=\"something\">\nAuthor: Bruce Momjian <bruce@momjian.us>\n2023-09-26 [15d5d7405] pgrowlocks: change lock mode output labels for \nconsiste\n</literallayout>\n </listitem>\n\nThe fact that the comment is before the main item might also be a bit \ntricky to sort out.\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 10:47:32 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 10:47:32AM +0200, Peter Eisentraut wrote:\n> <!--\n> Author: Bruce Momjian <bruce@momjian.us>\n> 2023-09-26 [15d5d7405] pgrowlocks: change lock mode output labels for\n> consiste\n> -->\n> \n> <listitem>\n> <para>\n> Change <application><xref linkend=\"pgrowlocks\"/></application>\n> lock mode output labels (Bruce Momjian)\n> </para>\n> </listitem>\n> \n> to\n> \n> <listitem>\n> <para>\n> Change <application><xref linkend=\"pgrowlocks\"/></application>\n> lock mode output labels (Bruce Momjian)\n> </para>\n> <literallayout role=\"something\">\n> Author: Bruce Momjian <bruce@momjian.us>\n> 2023-09-26 [15d5d7405] pgrowlocks: change lock mode output labels for\n> consiste\n> </literallayout>\n> </listitem>\n> \n> The fact that the comment is before the main item might also be a bit tricky\n> to sort out.\n\nI think we can just assume any XML comment in that file that has Author:\non line line and an ISO date on the next is a commit comment.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 28 Aug 2024 13:45:09 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Thu, 22 Aug 2024 at 21:34, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> I understand your point, and agree with that for previous releases, but since we have a month only for version 17, will this process work properly until that date ?\n> I think a release notes is more read as soon as it is available than other months, isn't it ?\n> And this feature is just a HTML page, so if it's done manually or automatically, from the reader point of view it'll be exactly the same.\n\nBig +1 to this. It would definitely be great if we would have these\ncommit links for previous release notes. But the PG17 GA release is\nprobably happening in 19 days on September 26th. I feel like we should\nfocus on getting the manual improvements from Jian He merged,\notherwise we'll end up with release notes for PG17 on release day that\nare significantly less useful than they could have been. Let's not\nmake perfect the enemy of good here.\n\n\n",
"msg_date": "Sat, 7 Sep 2024 00:30:31 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Sat, Sep 7, 2024 at 6:30 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>\n> On Thu, 22 Aug 2024 at 21:34, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > I understand your point, and agree with that for previous releases, but since we have a month only for version 17, will this process work properly until that date ?\n> > I think a release notes is more read as soon as it is available than other months, isn't it ?\n> > And this feature is just a HTML page, so if it's done manually or automatically, from the reader point of view it'll be exactly the same.\n>\n> Big +1 to this. It would definitely be great if we would have these\n> commit links for previous release notes. But the PG17 GA release is\n> probably happening in 19 days on September 26th. I feel like we should\n> focus on getting the manual improvements from Jian He merged,\n> otherwise we'll end up with release notes for PG17 on release day that\n> are significantly less useful than they could have been. Let's not\n> make perfect the enemy of good here.\n>\n\nhi. Thanks for your interest.\npatch updated.\n\n\nI have proof-read release-17.sgml again,\nmaking sure the commit url's commit is the same as\nrelease-17.sgml comment's git commit.\n\nlike:\n\n<!--\nAuthor: Jeff Davis <jdavis@postgresql.org>\n2024-03-04 [2af07e2f7] Fix search_path to a safe value during maintenance opera\n-->\n\n <listitem>\n <para>\n Change functions to use a safe <xref linkend=\"guc-search-path\"/>\n during maintenance operations (Jeff Davis).\n [<ulink url=\"&commit_baseurl;2af07e2f7\">2af07e2f7</ulink>]\n </para>\n\nmake sure there are three occurence of \"2af07e2f7\".\n\n\nI didn't manually click each git commit url to test it though.\n\n\n----------------------------\nthere are at least several typos in sgml comment.\nlike\n\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2023-11-17 [f7816aec2] Extract column statistics from CTE references, if possib\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2024-03-26 [a65724dfa] Propagate pathkeys from CTEs up to the outer query.\n-->\n\nhere should be \"possible\".\n\nit seems using sgml comments to automate some things, we still need\nextra effort to proof-read.",
"msg_date": "Sat, 7 Sep 2024 10:12:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em sex., 6 de set. de 2024 às 23:13, jian he <jian.universality@gmail.com>\nescreveu:\n\n> I didn't manually click each git commit url to test it though.\n\n\nChecked, all commit links are working and matching with their <listitem>\n\nregards\nMarcos\n\nEm sex., 6 de set. de 2024 às 23:13, jian he <jian.universality@gmail.com> escreveu:I didn't manually click each git commit url to test it though. Checked, all commit links are working and matching with their <listitem>regardsMarcos",
"msg_date": "Sat, 7 Sep 2024 11:16:20 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Sat, Sep 7, 2024 at 10:12:00AM +0800, jian he wrote:\n> On Sat, Sep 7, 2024 at 6:30 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> >\n> > On Thu, 22 Aug 2024 at 21:34, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > > I understand your point, and agree with that for previous releases, but since we have a month only for version 17, will this process work properly until that date ?\n> > > I think a release notes is more read as soon as it is available than other months, isn't it ?\n> > > And this feature is just a HTML page, so if it's done manually or automatically, from the reader point of view it'll be exactly the same.\n> >\n> > Big +1 to this. It would definitely be great if we would have these\n> > commit links for previous release notes. But the PG17 GA release is\n> > probably happening in 19 days on September 26th. I feel like we should\n> > focus on getting the manual improvements from Jian He merged,\n> > otherwise we'll end up with release notes for PG17 on release day that\n> > are significantly less useful than they could have been. Let's not\n> > make perfect the enemy of good here.\n> >\n> \n> hi. Thanks for your interest.\n> patch updated.\n> \n> \n> I have proof-read release-17.sgml again,\n> making sure the commit url's commit is the same as\n> release-17.sgml comment's git commit.\n\nI will write a script to do this, but I am not sure I will have it done\nby the time we release Postgres 17.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Wed, 11 Sep 2024 10:16:04 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 10:16:04AM -0400, Bruce Momjian wrote:\n> On Sat, Sep 7, 2024 at 10:12:00AM +0800, jian he wrote:\n> > On Sat, Sep 7, 2024 at 6:30 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> > >\n> > > On Thu, 22 Aug 2024 at 21:34, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > > > I understand your point, and agree with that for previous releases, but since we have a month only for version 17, will this process work properly until that date ?\n> > > > I think a release notes is more read as soon as it is available than other months, isn't it ?\n> > > > And this feature is just a HTML page, so if it's done manually or automatically, from the reader point of view it'll be exactly the same.\n> > >\n> > > Big +1 to this. It would definitely be great if we would have these\n> > > commit links for previous release notes. But the PG17 GA release is\n> > > probably happening in 19 days on September 26th. I feel like we should\n> > > focus on getting the manual improvements from Jian He merged,\n> > > otherwise we'll end up with release notes for PG17 on release day that\n> > > are significantly less useful than they could have been. Let's not\n> > > make perfect the enemy of good here.\n> > >\n> > \n> > hi. Thanks for your interest.\n> > patch updated.\n> > \n> > \n> > I have proof-read release-17.sgml again,\n> > making sure the commit url's commit is the same as\n> > release-17.sgml comment's git commit.\n> \n> I will write a script to do this, but I am not sure I will have it done\n> by the time we release Postgres 17.\n\nI now see you have done the entire PG 17 release notes, so I can just\nuse your patch and work on a script later. I will apply your patch\nsoon.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Wed, 11 Sep 2024 11:19:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Sat, Sep 7, 2024 at 10:12:00AM +0800, jian he wrote:\n> On Sat, Sep 7, 2024 at 6:30 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n> >\n> > On Thu, 22 Aug 2024 at 21:34, Marcos Pegoraro <marcos@f10.com.br> wrote:\n> > > I understand your point, and agree with that for previous releases, but since we have a month only for version 17, will this process work properly until that date ?\n> > > I think a release notes is more read as soon as it is available than other months, isn't it ?\n> > > And this feature is just a HTML page, so if it's done manually or automatically, from the reader point of view it'll be exactly the same.\n> >\n> > Big +1 to this. It would definitely be great if we would have these\n> > commit links for previous release notes. But the PG17 GA release is\n> > probably happening in 19 days on September 26th. I feel like we should\n> > focus on getting the manual improvements from Jian He merged,\n> > otherwise we'll end up with release notes for PG17 on release day that\n> > are significantly less useful than they could have been. Let's not\n> > make perfect the enemy of good here.\n> >\n> \n> hi. Thanks for your interest.\n> patch updated.\n\nI applied this patch to PG 17. You can see the results at:\n\n\thttps://momjian.us/pgsql_docs/release-17.html\n\nThe community doc build only shows the master branch, which is PG 18,\nand the PG 17 docs are only built before the release.\n\nI changed the patch to use the section symbol \"§\" instead of showing\nthe hashes. The hashes seemed too detailed. Does anyone see a better\nsymbol to use from here?\n\n\thttp://www.zipcon.net/~swhite/docs/computers/browsers/entities_page.html\n\nI think we are limited to the HTML entities listed on that page. I also\nremoved the brackets and the period you added at the end of the text. \n\n> there are at least several typos in sgml comment.\n> like\n> \n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2023-11-17 [f7816aec2] Extract column statistics from CTE references, if possib\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2024-03-26 [a65724dfa] Propagate pathkeys from CTEs up to the outer query.\n> -->\n> \n> here should be \"possible\".\n\nYes, the git output is cut off at a certain length --- I don't think we\nshould change it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Fri, 13 Sep 2024 12:39:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em sex., 13 de set. de 2024 às 13:39, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> I changed the patch to use the section symbol \"§\" instead of showing\n> the hashes. The hashes seemed too detailed. Does anyone see a better\n> symbol to use from here?\n>\n\nRobert and others liked commit hashes because you can do \"git show\n$COMMITID\"\n\nOn my first email I used numbered links, because when you have several\nlinks you know which one was clicked lastly\n\nAnd brackets were useful to separate description, authors and commit links\nof that item. <https://postgr.es/c/9acae56ce>\n\nAnyway, one way or another, I'm ok because it's committed.\n\nregards\nMarcos\n\nEm sex., 13 de set. de 2024 às 13:39, Bruce Momjian <bruce@momjian.us> escreveu:\nI changed the patch to use the section symbol \"§\" instead of showing\nthe hashes. The hashes seemed too detailed. Does anyone see a better\nsymbol to use from here?Robert and others liked commit hashes because you can do \"git show $COMMITID\"On my first email I used numbered links, because when you have several links you know which one was clicked lastlyAnd brackets were useful to separate description, authors and commit links of that item.Anyway, one way or another, I'm ok because it's committed.regardsMarcos",
"msg_date": "Fri, 13 Sep 2024 18:41:30 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 06:41:30PM -0300, Marcos Pegoraro wrote:\n> Em sex., 13 de set. de 2024 às 13:39, Bruce Momjian <bruce@momjian.us>\n> escreveu:\n> \n> I changed the patch to use the section symbol \"§\" instead of showing\n> the hashes. The hashes seemed too detailed. Does anyone see a better\n> symbol to use from here?\n> \n> \n> Robert and others liked commit hashes because you can do \"git show $COMMITID\"\n\nWe have to consider the small percentage of people who will want to do\nthat vs the number of people reading the release notes. I think it is\nreasonable to require someone to click on the link to see the hash.\n\n> On my first email I used numbered links, because when you have several links\n> you know which one was clicked lastly\n\nTrue, and the section symbol makes it even less clear.\n\n> And brackets were useful to separate description, authors and commit links of\n> that item. \n\nRight, but without hashes, the section mark is clear.\n\n> Anyway, one way or another, I'm ok because it's committed.\n\nWe can easily change it if we decide we want something else.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Fri, 13 Sep 2024 17:54:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 05:54:31PM -0400, Bruce Momjian wrote:\n> On Fri, Sep 13, 2024 at 06:41:30PM -0300, Marcos Pegoraro wrote:\n> > Em sex., 13 de set. de 2024 às 13:39, Bruce Momjian <bruce@momjian.us>\n> > escreveu:\n> > \n> > I changed the patch to use the section symbol \"§\" instead of showing\n> > the hashes. The hashes seemed too detailed. Does anyone see a better\n> > symbol to use from here?\n> > \n> > \n> > Robert and others liked commit hashes because you can do \"git show $COMMITID\"\n> \n> We have to consider the small percentage of people who will want to do\n> that vs the number of people reading the release notes. I think it is\n> reasonable to require someone to click on the link to see the hash.\n\nThinking some more, I think that displaying the hashes would give the\naverage reader the impression that the release notes are too complex for\nthem to understand.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Sat, 14 Sep 2024 09:44:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 12:39:28PM -0400, Bruce Momjian wrote:\n> I applied this patch to PG 17. You can see the results at:\n> \n> \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> The community doc build only shows the master branch, which is PG 18,\n> and the PG 17 docs are only built before the release.\n> \n> I changed the patch to use the section symbol \"§\" instead of showing\n> the hashes. The hashes seemed too detailed. Does anyone see a better\n> symbol to use from here?\n> \n> \thttp://www.zipcon.net/~swhite/docs/computers/browsers/entities_page.html\n> \n> I think we are limited to the HTML entities listed on that page. I also\n> removed the brackets and the period you added at the end of the text. \n\nI wrote the attached Perl script that automatically adds commit links. \nI tested it against PG 12-17 and the results were good. I plan to add\nthis script to all branches and run it on all branch release notes in a\nfew days.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"",
"msg_date": "Sat, 14 Sep 2024 20:37:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Sat, Sep 14, 2024 at 08:37:31PM -0400, Bruce Momjian wrote:\n> On Fri, Sep 13, 2024 at 12:39:28PM -0400, Bruce Momjian wrote:\n> > I applied this patch to PG 17. You can see the results at:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-17.html\n> > \n> > The community doc build only shows the master branch, which is PG 18,\n> > and the PG 17 docs are only built before the release.\n> > \n> > I changed the patch to use the section symbol \"§\" instead of showing\n> > the hashes. The hashes seemed too detailed. Does anyone see a better\n> > symbol to use from here?\n> > \n> > \thttp://www.zipcon.net/~swhite/docs/computers/browsers/entities_page.html\n> > \n> > I think we are limited to the HTML entities listed on that page. I also\n> > removed the brackets and the period you added at the end of the text. \n> \n> I wrote the attached Perl script that automatically adds commit links. \n> I tested it against PG 12-17 and the results were good. I plan to add\n> this script to all branches and run it on all branch release notes in a\n> few days.\n\nI have added the Perl script, added instructions on when to run the\nscript, and ran the script on all release notes back to PG 12. I think\nwe can call this item closed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Mon, 16 Sep 2024 14:15:34 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em seg., 16 de set. de 2024 às 15:15, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> > > I changed the patch to use the section symbol \"§\" instead of showing\n> > > the hashes. The hashes seemed too detailed. Does anyone see a better\n> > > symbol to use from here\n\n\nWell, I think section symbol is not a good choice for all commit links\n\nAdd backend support for injection points (Michael Paquier) § § § §\nAdd backend support for injection points (Michael Paquier) 1 2 3 4\nAdd backend support for injection points (Michael Paquier) Commit 1 2 3 4\n\nI don't know which one is better, but I don't think the section symbol is\nthe best.\n\nEm seg., 16 de set. de 2024 às 15:15, Bruce Momjian <bruce@momjian.us> escreveu:> > I changed the patch to use the section symbol \"§\" instead of showing\n> > the hashes. The hashes seemed too detailed. Does anyone see a better\n> > symbol to use from hereWell, I think section symbol is not a good choice for all commit linksAdd backend support for injection points (Michael Paquier) § § § §Add backend support for injection points (Michael Paquier) 1 2 3 4Add backend support for injection points (Michael Paquier) Commit 1 2 3 4I don't know which one is better, but I don't think the section symbol is the best.",
"msg_date": "Mon, 16 Sep 2024 15:42:22 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Mon, Sep 16, 2024 at 03:42:22PM -0300, Marcos Pegoraro wrote:\n> Em seg., 16 de set. de 2024 às 15:15, Bruce Momjian <bruce@momjian.us>\n> escreveu:\n> \n> > > I changed the patch to use the section symbol \"§\" instead of showing\n> > > the hashes. The hashes seemed too detailed. Does anyone see a better\n> > > symbol to use from here\n> \n> \n> Well, I think section symbol is not a good choice for all commit links\n> \n> Add backend support for injection points (Michael Paquier) § § § §\n> Add backend support for injection points (Michael Paquier) 1 2 3 4\n> Add backend support for injection points (Michael Paquier) Commit 1 2 3 4\n> \n> I don't know which one is better, but I don't think the section symbol is the\n> best. \n\nWe could try:\n\n Add backend support for injection points (Michael Paquier) §1 §2 §3 §4\n\nand maybe only add numbers if there is more than one commit.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Mon, 16 Sep 2024 15:18:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On 2024-Sep-16, Bruce Momjian wrote:\n\n> We could try:\n> \n> Add backend support for injection points (Michael Paquier) §1 §2 §3 §4\n> \n> and maybe only add numbers if there is more than one commit.\n\nWell, this reads like references to sections 1, 2, 3 and 4, but they\naren't that, and probably such sections don't even exist. I think the\nuse of the section symbol is misleading and typographically wrong.\nhttps://www.monotype.com/resources/punctuation-series-section-sign\n\n\nMaybe it would work to use numbers in square brackets instead:\n\n Add backend support for injection points (Michael Paquier) [1] [2] [3] [4]\n\nMaybe use the word \"commit\" for the first one, which would make it\nclearer what the links are about:\n\n Add backend support for injection points (Michael Paquier) [commit 1] [2] [3] [4]\n\nand if there's only one, say just \"[commit]\".\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 17 Sep 2024 10:12:44 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, 17 Sept 2024 at 10:12, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Maybe it would work to use numbers in square brackets instead:\n>\n> Add backend support for injection points (Michael Paquier) [1] [2] [3] [4]\n\nAnother option would be to add them in superscript using the <sup>\nhtml tag (or even using some unicode stuff):\n\n Add backend support for injection points (Michael Paquier)<sup>1,\n2, 3, 4</sup>\n\nSo similar to footnotes in a sense, i.e. if you want details click on\nthe small numbers\n\n\n",
"msg_date": "Tue, 17 Sep 2024 10:36:22 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> Add backend support for injection points (Michael Paquier) [commit 1] [2]\n> [3] [4]\n>\nI think this way would be fine.\n\nAnd it would be good to have a\ntarget=\"_blank\"\non commit links so a new window or tab would be opened instead of going\nback and forth.\nThis way you can open these 4 links and then navigate on them to see\nexactly what was done on every commit.\n\nregards\nMarcos\n\nEm ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <alvherre@alvh.no-ip.org> escreveu:Add backend support for injection points (Michael Paquier) [commit 1] [2] [3] [4]I think this way would be fine.And it would be good to have a target=\"_blank\"on commit links so a new window or tab would be opened instead of going back and forth. This way you can open these 4 links and then navigate on them to see exactly what was done on every commit.regardsMarcos",
"msg_date": "Tue, 17 Sep 2024 15:29:54 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 03:29:54PM -0300, Marcos Pegoraro wrote:\n> Em ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> escreveu:\n> \n> Add backend support for injection points (Michael Paquier) [commit 1] [2]\n> [3] [4]\n> \n> I think this way would be fine.\n> \n> And it would be good to have a \n> target=\"_blank\"\n> on commit links so a new window or tab would be opened instead of going back\n> and forth. \n> This way you can open these 4 links and then navigate on them to see exactly\n> what was done on every commit.\n\nI think trying to add text to each item is both redundant and confusing,\nbecause I am guessing that many people will not even know what a commit\nis, and will be confused by clicking on the link.\n\nWhat I have done is to add text to the top of \"Appendix E. Release\nNotes\" explaining the symbol and what it links to. Patch attached.\n\nWe can still consider a different symbol or number labeling, but I think\nthis patch is in the right direction.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"",
"msg_date": "Tue, 17 Sep 2024 20:13:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, Sep 17, 2024 at 03:29:54PM -0300, Marcos Pegoraro wrote:\n>> Em ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <alvherre@alvh.no-ip.org>\n>>> Add backend support for injection points (Michael Paquier) [commit 1] [2]\n\n> I think trying to add text to each item is both redundant and confusing,\n\nAlso very clutter-y. I'm not convinced that any of this is a good\nidea that will stand the test of time: I estimate that approximately\n0.01% of people who read the release notes will want these links.\nBut if we keep it I think the annotations have to be very unobtrusive.\n\n(Has anyone looked at the PDF rendering of this? It seems rather\nunfortunate to me.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Sep 2024 20:22:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 08:22:41PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, Sep 17, 2024 at 03:29:54PM -0300, Marcos Pegoraro wrote:\n> >> Em ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> >>> Add backend support for injection points (Michael Paquier) [commit 1] [2]\n> \n> > I think trying to add text to each item is both redundant and confusing,\n> \n> Also very clutter-y. I'm not convinced that any of this is a good\n> idea that will stand the test of time: I estimate that approximately\n> 0.01% of people who read the release notes will want these links.\n\nYes, I think 0.01% is accurate.\n\n> But if we keep it I think the annotations have to be very unobtrusive.\n\nI very much agree.\n\n> (Has anyone looked at the PDF rendering of this? It seems rather\n> unfortunate to me.)\n\nOh, not good. See my build:\n\n\thttps://momjian.us/expire/postgres-US.pdf#page=2890\n\nThose numbers are there because all external links get numbers that\nstart a 1 at the top of the section and increment for every link --- I\nhave no idea why. You can see that acronyms that have external links\nhave the same numbering:\n\n\thttps://momjian.us/expire/postgres-US.pdf#page=3150\n\nI wonder if we should remove the numbers in both cases.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n",
"msg_date": "Tue, 17 Sep 2024 20:54:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On 18.09.24 02:54, Bruce Momjian wrote:\n> On Tue, Sep 17, 2024 at 08:22:41PM -0400, Tom Lane wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>>> On Tue, Sep 17, 2024 at 03:29:54PM -0300, Marcos Pegoraro wrote:\n>>>> Em ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <alvherre@alvh.no-ip.org>\n>>>>> Add backend support for injection points (Michael Paquier) [commit 1] [2]\n>>\n>>> I think trying to add text to each item is both redundant and confusing,\n>>\n>> Also very clutter-y. I'm not convinced that any of this is a good\n>> idea that will stand the test of time: I estimate that approximately\n>> 0.01% of people who read the release notes will want these links.\n> \n> Yes, I think 0.01% is accurate.\n> \n>> But if we keep it I think the annotations have to be very unobtrusive.\n> \n> I very much agree.\n> \n>> (Has anyone looked at the PDF rendering of this? It seems rather\n>> unfortunate to me.)\n> \n> Oh, not good. See my build:\n> \n> \thttps://momjian.us/expire/postgres-US.pdf#page=2890\n> \n> Those numbers are there because all external links get numbers that\n> start a 1 at the top of the section and increment for every link --- I\n> have no idea why. You can see that acronyms that have external links\n> have the same numbering:\n> \n> \thttps://momjian.us/expire/postgres-US.pdf#page=3150\n> \n> I wonder if we should remove the numbers in both cases.\n\nMaybe this shouldn't be done between RC1 and GA. This is clearly a more \ncomplex topic that should go through a proper review and testing cycle.\n\n\n\n",
"msg_date": "Wed, 18 Sep 2024 11:02:36 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On 2024-Sep-17, Bruce Momjian wrote:\n\n> I think trying to add text to each item is both redundant and confusing,\n> because I am guessing that many people will not even know what a commit\n> is, and will be confused by clicking on the link.\n\nUh, I 100% disagree that Postgres users reading the release notes would\nnot know what a commit is. I think 99.9%(*) of them would know that(**).\nDoes a small fraction _care_ about the commit that each release note\nitem is related to? Sure, it's a small audience, and I think the\ncurrent implementation far too intrusive for the PDF form to be\nacceptable, given that the audience is so small. But the audience is\nnot an empty set, so it's acceptable to have something nicer-looking.\n\nIMO we should be looking at a more surgical approach to implement this,\nperhaps using a custom SGML tag and some custom XSLT code to process\nsuch tags that adds links the way we want, rather than generic <ulink>\ntags. Or maybe <ulink> is OK if we add some class to it that XSLT knows\nto process differently than generic ulinks, like func_table_entry and\ncatalog_table_entry.\n\nI tend to agree with Peter that this came in way too late, and looking\nat the PDF I think it should be reverted for now.\n\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n(*) Not a scientifically-determined number\n\n(**) You said \"many people will not even know\", and I said 99.9% will\nknow. Maybe we're both right, and if we accept both things to be true,\nthen we conclude that 0.01% of Postgres users is many people. Whatever\nelse results from this thread, I think that's a fantastic conclusion.\n\n\n",
"msg_date": "Wed, 18 Sep 2024 11:26:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, 18 Sept 2024 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n> > Also very clutter-y. I'm not convinced that any of this is a good\n> > idea that will stand the test of time: I estimate that approximately\n> > 0.01% of people who read the release notes will want these links.\n>\n> Yes, I think 0.01% is accurate.\n\nI think that is a severe underestimation. People that read the release\nnotes obviously read it because they want to know what changed. But\nthe release notes are very terse (on purpose, and with good reason),\nand people likely want a bit more details on some of the items that\nthey are particularly interested in. These links allow people to\neasily find details on those items. They might not care about the\nactual code from the commit, but the commit message is very likely\nuseful to them.\n\n\n",
"msg_date": "Wed, 18 Sep 2024 12:12:19 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, 18 Sept 2024 at 11:26, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Sep-17, Bruce Momjian wrote:\n>\n> > I think trying to add text to each item is both redundant and confusing,\n> > because I am guessing that many people will not even know what a commit\n> > is, and will be confused by clicking on the link.\n>\n> Uh, I 100% disagree that Postgres users reading the release notes would\n> not know what a commit is.\n\n+1, IMO it seems completely reasonable to assume that anyone\ninterested enough in postgres to read the release notes also knows\nwhat a commit is.\n\n> IMO we should be looking at a more surgical approach to implement this,\n> perhaps using a custom SGML tag and some custom XSLT code to process\n> such tags that adds links the way we want, rather than generic <ulink>\n> tags. Or maybe <ulink> is OK if we add some class to it that XSLT knows\n> to process differently than generic ulinks, like func_table_entry and\n> catalog_table_entry.\n\nIs it easy to just remove/hide the links completely from the PDF? I\ndoubt many people read the release notes by going to Appendix E in the\nPDF anyway.\n\nIt seems a shame to remove those links from the HTML view where they\nlook acceptable and which most people will use, just because they look\nbad in the pdf. And honestly, they don't even look that terrible in\nthe PDF imo.\n\n\n",
"msg_date": "Wed, 18 Sep 2024 12:15:34 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On 2024-Sep-18, Jelte Fennema-Nio wrote:\n\n> It seems a shame to remove those links from the HTML view where they\n> look acceptable and which most people will use, just because they look\n> bad in the pdf. And honestly, they don't even look that terrible in\n> the PDF imo.\n\nEh, someday maybe we should just get rid of the PDF build of the docs.\nIt's given a ton of trouble over the years, it is not terribly\nconvenient, and we don't even know whether it has any users or not.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n",
"msg_date": "Wed, 18 Sep 2024 12:27:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 8:55 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, Sep 17, 2024 at 08:22:41PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Tue, Sep 17, 2024 at 03:29:54PM -0300, Marcos Pegoraro wrote:\n> > >> Em ter., 17 de set. de 2024 às 05:12, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > >>> Add backend support for injection points (Michael Paquier) [commit 1] [2]\n> >\n> > > I think trying to add text to each item is both redundant and confusing,\n> >\n> > Also very clutter-y. I'm not convinced that any of this is a good\n> > idea that will stand the test of time: I estimate that approximately\n> > 0.01% of people who read the release notes will want these links.\n>\n> Yes, I think 0.01% is accurate.\n>\n> > But if we keep it I think the annotations have to be very unobtrusive.\n>\n> I very much agree.\n>\n\ngoogle \"PostgreSQL 16: What's New?\"\nIn private browsing mode\nshows we have at least 15 English blogs explaining new features.\n\nI think including the git commit info, can help people quickly write\nthese kinds of blogs.\nWithout it, writing blog and sql examples would be harder or require\nyou to know more about postgres internal.\n\n\n",
"msg_date": "Wed, 18 Sep 2024 19:04:40 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Em qua., 18 de set. de 2024 às 06:02, Peter Eisentraut <peter@eisentraut.org>\nescreveu:\n\n> Maybe this shouldn't be done between RC1 and GA. This is clearly a more\n> complex topic that should go through a proper review and testing cycle.\n>\n\nI think this is just a question of decision, not reviewing or testing.\nAnd I'm sure there will be more people reading Release Notes now, in\nSeptember and October, than in January or April.\n\nregards\nMarcos\n\nEm qua., 18 de set. de 2024 às 06:02, Peter Eisentraut <peter@eisentraut.org> escreveu:Maybe this shouldn't be done between RC1 and GA. This is clearly a more \ncomplex topic that should go through a proper review and testing cycle.I think this is just a question of decision, not reviewing or testing.And I'm sure there will be more people reading Release Notes now, in September and October, than in January or April.regardsMarcos",
"msg_date": "Wed, 18 Sep 2024 08:37:38 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, 18 Sept 2024 at 13:38, Marcos Pegoraro <marcos@f10.com.br> wrote:\n>> Maybe this shouldn't be done between RC1 and GA. This is clearly a more\n>> complex topic that should go through a proper review and testing cycle.\n>\n> I think this is just a question of decision, not reviewing or testing.\n> And I'm sure there will be more people reading Release Notes now, in September and October, than in January or April.\n\n+1. Also, IMHO the links even in their current simple form of \"§ § §\n§\" are a huge net-positive on the release notes. So yes, I'm sure we\ncan format them more clearly with some bikeshedding, but removing them\ncompletely seems like an overreaction to me.\n\n\n",
"msg_date": "Wed, 18 Sep 2024 13:46:39 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> On Wed, 18 Sept 2024 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n>>> Also very clutter-y. I'm not convinced that any of this is a good\n>>> idea that will stand the test of time: I estimate that approximately\n>>> 0.01% of people who read the release notes will want these links.\n\n>> Yes, I think 0.01% is accurate.\n\n> I think that is a severe underestimation.\n\nI'm sure a very large fraction of the people commenting on this thread\nwould like to have these links. But we are by no means representative\nof the average Postgres user.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2024 10:16:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> Em qua., 18 de set. de 2024 às 06:02, Peter Eisentraut <peter@eisentraut.org>\n> escreveu:\n>> Maybe this shouldn't be done between RC1 and GA. This is clearly a more\n>> complex topic that should go through a proper review and testing cycle.\n\n> I think this is just a question of decision, not reviewing or testing.\n\nI'd say the opposite: the thing we lack is exactly testing, in the\nsense of how non-hackers will react to this. Nonetheless, I'm not\nupset about trying to do it now. We will get more feedback about\nmajor-release notes than minor-release notes. And the key point\nis that it's okay to consider this experimental. Unlike say a SQL\nfeature, there are no compatibility concerns that put a premium on\ngetting it right the first time. We can adjust the annotations or\ngive up on them without much cost.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2024 10:46:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 10:16:48AM -0400, Tom Lane wrote:\n> Jelte Fennema-Nio <postgres@jeltef.nl> writes:\n> > On Wed, 18 Sept 2024 at 02:55, Bruce Momjian <bruce@momjian.us> wrote:\n> >>> Also very clutter-y. I'm not convinced that any of this is a good\n> >>> idea that will stand the test of time: I estimate that approximately\n> >>> 0.01% of people who read the release notes will want these links.\n> \n> >> Yes, I think 0.01% is accurate.\n> \n> > I think that is a severe underestimation.\n> \n> I'm sure a very large fraction of the people commenting on this thread\n> would like to have these links. But we are by no means representative\n> of the average Postgres user.\n\nI am confused how to proceed. I think Jian He did the work for PG 17\nbecause it will help many users once PG 17 is released soon, and I wrote\na script to automate this so we can easily continue this going forward.\n\nI agree with Tom that the PDF superscripts look odd. After researching\nXML and XPath, I have applied the attached patch to all branches, which\nsupresses the section symbol and footnotes for PDF output.\n\nWe only get a few requests a year for more details on release note\nitems, and we tell them to look at the SGML source file to see the\ncommit hashes, so this just isn't a frequently-requested need.\n\nOn the flip side, this has been discussed for several years among active\ncommunity members, usually around major release time. As I remember,\nthe last idea was to do something with Javascript on mouse-over of an\nitem. However, that would involve parsing the SGML in Javascript, which\nseems much more error-prone and harder to test than the Perl script I\nwrote.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"",
"msg_date": "Wed, 18 Sep 2024 16:45:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 08:13:20PM -0400, Bruce Momjian wrote:\n> I think trying to add text to each item is both redundant and confusing,\n> because I am guessing that many people will not even know what a commit\n> is, and will be confused by clicking on the link.\n> \n> What I have done is to add text to the top of \"Appendix E. Release\n> Notes\" explaining the symbol and what it links to. Patch attached.\n\nI have applied an updated version of this patch, attached, which\nincludes the paragraph, but only in non-print output.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"",
"msg_date": "Wed, 18 Sep 2024 17:29:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Detailed release notes"
}
] |
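Bruce's Perl script for adding the commit links is only attached to the thread above, not reproduced in it. The sketch below is a rough Python illustration of the same idea, not the committed script: it scans release-note SGML for the commit comments quoted earlier (lines such as "2024-03-26 [a65724dfa] ...") and appends one section-symbol link per hash, pointing at https://postgr.es/c/<hash>. The assumption that each annotated item ends with a </para> tag is mine, not taken from the thread.

import re
import sys

# Date-plus-hash lines inside the SGML comments, e.g.
#   2024-03-26 [a65724dfa] Propagate pathkeys from CTEs up to the outer query.
HASH_RE = re.compile(r'^\d{4}-\d{2}-\d{2} \[([0-9a-f]+)\]', re.MULTILINE)

def add_commit_links(sgml):
    out = []
    pending = []                     # hashes from the comment block just seen
    for chunk in re.split(r'(<!--.*?-->)', sgml, flags=re.DOTALL):
        if chunk.startswith('<!--'):
            pending = HASH_RE.findall(chunk)
        elif pending and '</para>' in chunk:
            links = ' '.join('<ulink url="https://postgr.es/c/%s">&sect;</ulink>' % h
                             for h in pending)
            chunk = chunk.replace('</para>', ' ' + links + '\n</para>', 1)
            pending = []
        out.append(chunk)
    return ''.join(out)

if __name__ == '__main__':
    sys.stdout.write(add_commit_links(sys.stdin.read()))

Run as "python3 add_links.py < release-17.sgml > release-17-linked.sgml"; the committed Perl script differs in detail.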
[
{
"msg_contents": "Hi,\n\nI have a prototype for an ALL_CANDIDATES option for EXPLAIN. The goal\nof this option is to print all plan candidates instead of only the\ncheapest plan. It will output the plans from the most expensive at the\ntop to the cheapest. Here's an example:\n\nexplain (all_candidates) select * from pgbench_accounts where aid=1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Plan 1\n -> Gather (cost=1000.00..3375.39 rows=1 width=97)\n Workers Planned: 1\n -> Parallel Seq Scan on pgbench_accounts\n(cost=0.00..2375.29 rows=1 width=97)\n Filter: (aid = 1)\n Plan 2\n -> Seq Scan on pgbench_accounts (cost=0.00..2890.00 rows=1 width=97)\n Filter: (aid = 1)\n Plan 3\n -> Bitmap Heap Scan on pgbench_accounts (cost=4.30..8.31 rows=1 width=97)\n Recheck Cond: (aid = 1)\n -> Bitmap Index Scan on pgbench_accounts_pkey\n(cost=0.00..4.30 rows=1 width=0)\n Index Cond: (aid = 1)\n Plan 4\n -> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n(cost=0.29..8.31 rows=1 width=97)\n Index Cond: (aid = 1)\n\nThis can provide very useful insight on the planner's decisions like\nwhether it tried to use a specific index and how much cost difference\nthere is with the top plan. Additionally, it makes it possible to spot\ndiscrepancies in generated plans like incorrect row estimation [1].\n\nThe plan list is generated from the upper_rel's pathlist. However, due\nto how planning mutates the PlannerGlobal state, we can't directly\niterate the path list generated by the subquery_planner and create a\nplanned statement for them. Instead, we need to plan from scratch for\nevery path in the pathlist to generate the list of PlannedStmt.\n\nThe patch is split in two mostly to ease the review:\n001: Propagate PlannerInfo root to add_path. This is needed to prevent\nadd_path from discarding path if all_candidates is enabled which will\nbe stored in PlannerGlobal.\n002: Add the planner_all_candidates function and print of candidate\nlist in explain\n\n[1] https://www.postgresql.org/message-id/flat/CAO6_Xqr9+51NxgO=XospEkUeAg-p=EjAWmtpdcZwjRgGKJ53iA@mail.gmail.com\n\nRegards,\nAnthonin",
"msg_date": "Fri, 26 Jul 2024 18:59:07 +0200",
"msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>",
"msg_from_op": true,
"msg_subject": "Add ALL_CANDIDATES option to EXPLAIN"
},
{
"msg_contents": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> writes:\n> I have a prototype for an ALL_CANDIDATES option for EXPLAIN. The goal\n> of this option is to print all plan candidates instead of only the\n> cheapest plan. It will output the plans from the most expensive at the\n> top to the cheapest.\n\nThis doesn't seem feasible at all to me. If we don't prune plans\nfairly aggressively at lower plan levels, we'll have a combinatorial\nexplosion in the amount of work the planner has to do. Have you\ntried this on even slightly complex plans --- say, a dozen-way join?\nI do not think you'll like the runtime, the amount of memory consumed,\nor the verboseness of the output. (I wonder how it interacts with\nGEQO, too.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 13:13:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ALL_CANDIDATES option to EXPLAIN"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 12:59 PM Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n> I have a prototype for an ALL_CANDIDATES option for EXPLAIN. The goal\n> of this option is to print all plan candidates instead of only the\n> cheapest plan. It will output the plans from the most expensive at the\n> top to the cheapest. Here's an example:\n\nIt's difficult for me to understand how this can work. Either it's not\nreally going to print out all candidates, or it's going to print out\ngigabytes of output for moderately complex queries.\n\nI've thought about trying to figure out some way of identifying and\nprinting out plans that are \"interestingly different\" from the chosen\nplan, with the costs they would have had, but I haven't been able to\ncome up with a good algorithm. Printing out absolutely everything\ndoesn't seem viable, because planning would be slow and use amazing\namounts of memory and the output would be so large as to be useless.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2024 13:16:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ALL_CANDIDATES option to EXPLAIN"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I've thought about trying to figure out some way of identifying and\n> printing out plans that are \"interestingly different\" from the chosen\n> plan, with the costs they would have had, but I haven't been able to\n> come up with a good algorithm.\n\nI wonder how far you'd get by just printing the surviving paths\n(that is, something like Anthonin's patch, but without lobotomizing\nadd_path). The survivors would have to dominate the cheapest-total\npath along one of the other metrics add_path considers, which\nseems like a rough approximation of \"interestingly different\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 13:40:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ALL_CANDIDATES option to EXPLAIN"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 1:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wonder how far you'd get by just printing the surviving paths\n> (that is, something like Anthonin's patch, but without lobotomizing\n> add_path). The survivors would have to dominate the cheapest-total\n> path along one of the other metrics add_path considers, which\n> seems like a rough approximation of \"interestingly different\".\n\nMy guess is it wouldn't be that great. It seems easy to imagine that\nthe key decision for a particular plan might be whether to use table A\nor B as the driving table, or whether to sequential scan or index scan\nsome table involved in the query. It could well be that you end up\nwith the same output ordering either way (or no output ordering) for\none reason or another. I'm actually not sure \"interestingly different\"\ncan be defined in a useful, general way, because how is the computer\nto know what the human being cares about in a particular case? In\npractice, it feels like what you'd often like to do is say \"show me\nthe plan if you { started with table | used scan method Y on table X |\ndid not use index X | whatever }\". I haven't given up on the idea that\nthere could be some way of defining interesting-ness that avoids the\nneed for the user to say what they think is interesting, but it\ncertainly feels like one needs to be a lot more clever to make it work\nwithout user input.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jul 2024 13:53:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ALL_CANDIDATES option to EXPLAIN"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 10:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 26, 2024 at 12:59 PM Anthonin Bonnefoy\n> <anthonin.bonnefoy@datadoghq.com> wrote:\n> > I have a prototype for an ALL_CANDIDATES option for EXPLAIN. The goal\n> > of this option is to print all plan candidates instead of only the\n> > cheapest plan. It will output the plans from the most expensive at the\n> > top to the cheapest. Here's an example:\n>\n> It's difficult for me to understand how this can work. Either it's not\n> really going to print out all candidates, or it's going to print out\n> gigabytes of output for moderately complex queries.\n>\n> I've thought about trying to figure out some way of identifying and\n> printing out plans that are \"interestingly different\" from the chosen\n> plan, with the costs they would have had, but I haven't been able to\n> come up with a good algorithm. Printing out absolutely everything\n> doesn't seem viable, because planning would be slow and use amazing\n> amounts of memory and the output would be so large as to be useless.\n\nIf we print the path forest as a forest as against individual path\ntrees, we will be able to cut down on the size but it will still be\nhuge. Irrespective of that even with slightly non-trivial queries it's\ngoing to be difficult to analyze these paths. The way I think of it is\ndumping this information in the form of tables. Roughly something like\na table containing RelOptInfo id and RelOptInfo itself and another\ncontaining all the paths identified by id and RelOptInfo id. The path\nlinkages are stored as path ids. That's a minimum. We will need more\ntables to store query, and other metadata. If we do so we can use SQL\nto carry out investigations.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 29 Jul 2024 15:28:34 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ALL_CANDIDATES option to EXPLAIN"
}
] |
[
{
"msg_contents": "Hi,\n\nI work on a driver for Postgres that utilizes the extended query protocol\nand have a question about the row description message for interval types.\n\nI am trying to use the type modifier value from the message to determine\nthe precision of the interval. This is something I can do for timestamp\ntypes because the type modifier in the message is equal to the precision\n(or -1 if not specified).\n\nFor interval types it seems like the type modifier is related to the\nprecision but the values are not equal. Using Postgres 13.10 I see the\nfollowing values for pg_attribute.attttypmod when creating columns of type\ninterval(1), interval(2), ..., interval(6)\n\n 2147418113\n\n 2147418114\n\n 2147418115\n\n 2147418116\n\n 2147418117\n\n 2147418118\n\n\nI can see the value goes up by 1 each time the precision is increased, but\nI'm not sure how to interpret the fact that it starts at 2147418113\ninstead of 1.\n\nMy question is: how are the values meant to be interpreted for interval\ntypes? Thanks very much for your help.\n\nHi,I work on a driver for Postgres that utilizes the extended query protocol and have a question about the row description message for interval types.I am trying to use the type modifier value from the message to determine the precision of the interval. This is something I can do for timestamp types because the type modifier in the message is equal to the precision (or -1 if not specified).For interval types it seems like the type modifier is related to the precision but the values are not equal. Using Postgres 13.10 I see the following values for pg_attribute.attttypmod when creating columns of type interval(1), interval(2), ..., interval(6)\n 2147418113\n 2147418114\n 2147418115\n 2147418116\n 2147418117\n 2147418118I can see the value goes up by 1 each time the precision is increased, but I'm not sure how to interpret the fact that it starts at 2147418113 instead of 1.My question is: how are the values meant to be interpreted for interval types? Thanks very much for your help.",
"msg_date": "Fri, 26 Jul 2024 23:57:24 -0400",
"msg_from": "Greg Rychlewski <greg.rychlewski@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_attribute.atttypmod for interval type"
},
{
"msg_contents": "Greg Rychlewski <greg.rychlewski@gmail.com> writes:\n> My question is: how are the values meant to be interpreted for interval\n> types? Thanks very much for your help.\n\nInterval typmods include a fractional-seconds-precision field as well\nas a bitmask indicating the allowed interval fields (per the SQL\nstandard's weird syntax such as INTERVAL DAY TO SECOND). Looking at\nthe source code for intervaltypmodout() might be helpful:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/utils/adt/timestamp.c;h=69fe7860ede062fc8be42e7545b35e69c3e068c4;hb=HEAD#l1136\n\nThe referenced macros are mostly in utils/timestamp.h and\nutils/datetime.h.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2024 00:32:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_attribute.atttypmod for interval type"
},
{
"msg_contents": "On 07/27/24 00:32, Tom Lane wrote:\n> Interval typmods include a fractional-seconds-precision field as well\n> as a bitmask indicating the allowed interval fields (per the SQL\n> standard's weird syntax such as INTERVAL DAY TO SECOND). Looking at\n> the source code for intervaltypmodout() might be helpful:\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/utils/adt/timestamp.c;h=69fe7860ede062fc8be42e7545b35e69c3e068c4;hb=HEAD#l1136\n\nAlso, for this kind of question, an overview of a type modifier's\ncontents can be found in the javadoc for the WIP PL/Java 1.7, which is\nintended to model such things accurately.[0]\n\nThe model is aimed at the logical level, that is, to represent\nwhat information is present in the typmod, the precise semantics, what\ncombinations are allowed/disallowed, and so on, but not the way PostgreSQL\nphysically packs the bits. So, for this case, what you would find there\nis essentially what Tom already said, about what's logically present; it\ndoesn't save you the effort of looking in the PostgreSQL source if you want\nto independently implement unpacking the bits.\n\nFor possible future typmod questions, it may serve as a quick way to\nget that kind of logical-level description at moments when Tom is away\nfrom the keyboard.\n\nRegards,\n-Chap\n\n\n[0]\nhttps://tada.github.io/pljava/preview1.7/pljava-api/apidocs/org.postgresql.pljava/org/postgresql/pljava/adt/Timespan.Interval.Modifier.html\n\nI just noticed a nit in that javadoc: it says the field combination\nmust be \"one of the named constants in this interface\" but you don't\nfind them in the Interval.Modifier interface; they're in the containing\ninterface Interval itself.\n\n\n",
"msg_date": "Sun, 28 Jul 2024 10:34:53 -0400",
"msg_from": "Chapman Flack <jcflack@acm.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_attribute.atttypmod for interval type"
}
] |
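To make Tom's description above concrete for driver authors: per the macros he points to in utils/timestamp.h and utils/datetime.h, the low 16 bits of an interval typmod hold the fractional-seconds precision and bits 16-30 hold the allowed-fields bitmask, so 2147418113 is 0x7FFF0001, i.e. precision 1 with no field restriction. The following is a small illustrative sketch (not PostgreSQL code) that unpacks the values from the question.

# Mirrors INTERVAL_PRECISION()/INTERVAL_RANGE(): precision in the low 16 bits,
# allowed-fields mask in bits 16..30.
INTERVAL_RANGE_MASK = 0x7FFF
INTERVAL_PRECISION_MASK = 0xFFFF
INTERVAL_FULL_RANGE = 0x7FFF          # no DAY TO SECOND-style restriction
INTERVAL_FULL_PRECISION = 0xFFFF      # no (p) specified

def decode_interval_typmod(typmod):
    if typmod < 0:
        return None                   # -1 means no typmod at all
    precision = typmod & INTERVAL_PRECISION_MASK
    fields = (typmod >> 16) & INTERVAL_RANGE_MASK
    return {
        'precision': None if precision == INTERVAL_FULL_PRECISION else precision,
        'fields_mask': None if fields == INTERVAL_FULL_RANGE else fields,
    }

# The values from the question, interval(1) .. interval(6):
for t in range(2147418113, 2147418119):
    print(t, decode_interval_typmod(t))   # precision 1..6, no field restriction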
[
{
"msg_contents": "Hi PostgreSQL community,\n\nI wanted to modify the SQL script of an extension by creating multiple\nobjects within it.\nMy aim is to make minimal changes to the existing script. To achieve this,\nI have created an\nexternal script and am attempting to run it within the extension script\nusing the \\i command.\n\nHowever, when I try to create the extension, I encounter the following\nerror:\nERROR: syntax error at or near \"\\\"\n\nCould anyone advise on how I might resolve this issue? Alternatively, is\nthere another way to create SQL objects outside of the main extension\nscript, such that these objects are created when the extension script is\nexecuted?\n\nRegards\nAyush Vatsa\n\nHi PostgreSQL community,I wanted to modify the SQL script of an extension by creating multiple objects within it. My aim is to make minimal changes to the existing script. To achieve this, I have created anexternal script and am attempting to run it within the extension script using the \\i command.However, when I try to create the extension, I encounter the following error:ERROR: syntax error at or near \"\\\"Could anyone advise on how I might resolve this issue? Alternatively, is there another way to create SQL objects outside of the main extension script, such that these objects are created when the extension script is executed?RegardsAyush Vatsa",
"msg_date": "Sat, 27 Jul 2024 11:54:54 +0530",
"msg_from": "Ayush Vatsa <ayushvatsa1810@gmail.com>",
"msg_from_op": true,
"msg_subject": "Help Needed with Including External SQL Script in Extension Script"
},
{
"msg_contents": "On Friday, July 26, 2024, Ayush Vatsa <ayushvatsa1810@gmail.com> wrote:\n>\n> I wanted to modify the SQL script of an extension by creating multiple\n> objects within it.\n> My aim is to make minimal changes to the existing script. To achieve this,\n> I have created an\n> external script and am attempting to run it within the extension script\n> using the \\i command.\n>\n> However, when I try to create the extension, I encounter the following\n> error:\n> ERROR: syntax error at or near \"\\\"\n>\n> Could anyone advise on how I might resolve this issue? Alternatively, is\n> there another way to create SQL objects outside of the main extension\n> script, such that these objects are created when the extension script is\n> executed?\n>\n\nExtension scripts are executed directly by the server as SQL, not via psql,\nso using psql features will not work. In short, your extension script for\na given version must be singular as the concept of “include files” is not\nunderstood by the execution environment.\n\nDavid J.\n\nOn Friday, July 26, 2024, Ayush Vatsa <ayushvatsa1810@gmail.com> wrote:I wanted to modify the SQL script of an extension by creating multiple objects within it. My aim is to make minimal changes to the existing script. To achieve this, I have created anexternal script and am attempting to run it within the extension script using the \\i command.However, when I try to create the extension, I encounter the following error:ERROR: syntax error at or near \"\\\"Could anyone advise on how I might resolve this issue? Alternatively, is there another way to create SQL objects outside of the main extension script, such that these objects are created when the extension script is executed?Extension scripts are executed directly by the server as SQL, not via psql, so using psql features will not work. In short, your extension script for a given version must be singular as the concept of “include files” is not understood by the execution environment.David J.",
"msg_date": "Fri, 26 Jul 2024 23:39:23 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Help Needed with Including External SQL Script in Extension\n Script"
}
] |
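One workaround not spelled out in the thread above (so treat it as a suggestion rather than advice from the discussion): since CREATE EXTENSION reads exactly one script per extension version, the separate pieces can be merged into that script at build time instead of being included at run time with \i. A hypothetical sketch, with made-up file names:

from pathlib import Path

PARTS = ['myext_core.sql', 'myext_extra_objects.sql']   # hypothetical inputs
OUTPUT = 'myext--1.0.sql'            # the one script CREATE EXTENSION will read

def build_extension_script():
    chunks = []
    # Conventional guard line (as used by contrib scripts) so that sourcing the
    # generated file directly with psql stops immediately.
    chunks.append('\\echo Use "CREATE EXTENSION myext" to load this file. \\quit\n\n')
    for part in PARTS:
        chunks.append('-- included from %s\n' % part)
        chunks.append(Path(part).read_text())
        chunks.append('\n')
    Path(OUTPUT).write_text(''.join(chunks))

if __name__ == '__main__':
    build_extension_script()

The same concatenation step is typically wired into the extension's build so the generated script stays in sync with its pieces.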
[
{
"msg_contents": "\nAs part of its 002_pgupgrade.pl, the pg_upgrade tests do a rerun of the \nnormal regression tests. That in itself is sad, but what concerns me \nhere is why it's so much slower than the regular run? This is apparent \neverywhere (e.g. on crake the standard run takes about 30 to 90 s, but \npg_upgrade's run takes 5 minutes or more). On Windows, it's \ncatastrophic, and only hasn't been noticed because the buildfarm client \nwasn't counting a timeout as a failure. That was an error on my part and \nI have switched a few of my machines to code that checks more robustly \nfor failure of meson tests - specifically by looking for the absence of \ntest.success rather than the presence of test.fail. That means that \ndrongo and fairywren are getting timeout errors. e.g. on the latest run \non fairywren, the regular regression run took 226s, but pg_upgrade's run \nof what should be the same set of tests took 2418s. What the heck is \ngoing on here? Is it because there are the concurrent tests running? \nThat doesn't seem enough to make the tests run more than 10 times as long.\n\nI have a strong suspicion this is exacerbated by \"debug_parallel_query = \nregress\", especially since the tests run much faster on REL_17_STABLE \nwhere I am not setting that, but that can't be the whole explanation, \nsince that setting should apply to both sets of tests.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 27 Jul 2024 09:08:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> As part of its 002_pgupgrade.pl, the pg_upgrade tests do a rerun of the \n> normal regression tests. That in itself is sad, but what concerns me \n> here is why it's so much slower than the regular run? This is apparent \n> everywhere (e.g. on crake the standard run takes about 30 to 90 s, but \n> pg_upgrade's run takes 5 minutes or more).\n\nJust to add some more fuel to the fire: I do *not* observe such an\neffect on my own animals. For instance, sifaka's last run shows\n00:09 for install-check-C and the same (within rounding error)\nfor the regression test step in 002_pgupgrade; on the much slower\nmamba, the last run took 07:33 for install-check-C and 07:40 for the\n002_pgupgrade regression test step.\n\nI'm still using the makefile-based build, and I'm not using\ndebug_parallel_query, but it doesn't make a lot of sense to me\nthat either of those things should affect this.\n\nLooking at crake's last run, there are other oddities: why does\nthe \"check\" step take 00:24 while \"install-check-en_US.utf8\" (which\nshould be doing strictly less work) takes 01:00? Again, there are\nnot similar discrepancies on my animals. Are you sure there's not\nbackground load on the machine?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2024 10:20:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "\nOn 2024-07-27 Sa 10:20 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> As part of its 002_pgupgrade.pl, the pg_upgrade tests do a rerun of the\n>> normal regression tests. That in itself is sad, but what concerns me\n>> here is why it's so much slower than the regular run? This is apparent\n>> everywhere (e.g. on crake the standard run takes about 30 to 90 s, but\n>> pg_upgrade's run takes 5 minutes or more).\n> Just to add some more fuel to the fire: I do *not* observe such an\n> effect on my own animals. For instance, sifaka's last run shows\n> 00:09 for install-check-C and the same (within rounding error)\n> for the regression test step in 002_pgupgrade; on the much slower\n> mamba, the last run took 07:33 for install-check-C and 07:40 for the\n> 002_pgupgrade regression test step.\n>\n> I'm still using the makefile-based build, and I'm not using\n> debug_parallel_query, but it doesn't make a lot of sense to me\n> that either of those things should affect this.\n>\n> Looking at crake's last run, there are other oddities: why does\n> the \"check\" step take 00:24 while \"install-check-en_US.utf8\" (which\n> should be doing strictly less work) takes 01:00? Again, there are\n> not similar discrepancies on my animals. Are you sure there's not\n> background load on the machine?\n>\n> \t\t\t\n\n\nQuite sure. Running crake and koel all it does. It's Fedora 40 running \non bare metal, a Lenovo Y70 with an Intel Core i7-4720HQ CPU @ 2.60GHz \nand a Samsung SSD.\n\nThe culprit appears to be meson. When I tested running crake with \n\"using_meson => 0\" I got results in line with yours. The regression test \ntimes were consistent, including the installcheck tests. That's \nespecially ugly on Windows as we don't have any alternative way of \nrunning, and the results are so much more catastrophic. \n\"debug_parallel_query\" didn't seem to matter.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 27 Jul 2024 18:43:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2024-07-27 Sa 10:20 AM, Tom Lane wrote:\n>> Just to add some more fuel to the fire: I do *not* observe such an\n>> effect on my own animals.\n\n> The culprit appears to be meson. When I tested running crake with \n> \"using_meson => 0\" I got results in line with yours.\n\nInteresting. Maybe meson is over-aggressively trying to run these\ntest suites in parallel?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Jul 2024 18:48:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "On Sun, Jul 28, 2024 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Interesting. Maybe meson is over-aggressively trying to run these\n> test suites in parallel?\n\nHypothesis: NTFS might not be as good at linking/unlinking lots of\nfiles concurrently due to forced synchronous I/O, causing queuing?\n\nThat's what [1] was about for FreeBSD CI. I couldn't immediately see\nhow to make a RAM disks on Windows but at the time I had a hunch that\nthat OS was struggling in the same way.\n\nCould some tuning help? Disable 8dot3name (a thing that creates a\nye-olde-MSDOS-compatible second directory entry for every file),\nadjust disablelastaccess (something like noatime), disable USN journal\n(a kind of secondary journal of all file system operations that is\nused to drive the change notification API that we don't care about),\ndisable write cache flush so that any synchronous I/O operations don't\nwait for that (at the risk of corruption on power loss, but maybe it's\nOK on a device dedicated to temporary workspace)? This is just from\nsome quick googling, but perhaps someone who actually knows how to\ndrive Windows locally and use the performance monitoring tools could\ntell us what it's actually waiting on...\n\nI noticed there is a new thing called Dev Drive[2] on Windows 11,\nwhich claims to be faster for developer workloads and there are\ngraphs[3] showing various projects' test suites going faster. It's\nReFS, a COW file system. From some quick googling, the CopyFile()\nsystem does a fast clone, and that should affect the robocopy command\nin Cluster.pm (note: Unixoid cp in there also uses COW cloning on at\nleast xfs, zfs, probably apfs too). So I would be interested to know\nif that goes faster ... or slower. I'm also interested in how it\nreacts to the POSIX-semantics mode[4]; that might affect whether we\ncan ever pull the trigger on that idea.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BFXLcEg1dyTqJjDiNQ8pGom4KrJj4wF38C90thti9dVA%40mail.gmail.com\n[2] https://learn.microsoft.com/en-us/windows/dev-drive/\n[3] https://devblogs.microsoft.com/visualstudio/devdrive/\n[4] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BajSQ_8eu2AogTncOnZ5me2D-Cn66iN_-wZnRjLN%2Bicg%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 28 Jul 2024 13:19:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "\nOn 2024-07-27 Sa 6:48 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2024-07-27 Sa 10:20 AM, Tom Lane wrote:\n>>> Just to add some more fuel to the fire: I do *not* observe such an\n>>> effect on my own animals.\n>> The culprit appears to be meson. When I tested running crake with\n>> \"using_meson => 0\" I got results in line with yours.\n> Interesting. Maybe meson is over-aggressively trying to run these\n> test suites in parallel?\n\n\n\nMaybe, IDK. Meanwhile, I disabled \"debug_parallel_query = regress\" on \nHEAD for fairywren and drongo - fairwren has just gone green, and I \nexpect drongo will when it reports in a few hours.\n\nI'm at a loss for an explanation.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 28 Jul 2024 15:43:23 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "Hello Andrew,\n\n28.07.2024 22:43, Andrew Dunstan wrote:\n>\n> Maybe, IDK. Meanwhile, I disabled \"debug_parallel_query = regress\" on HEAD for fairywren and drongo - fairwren has \n> just gone green, and I expect drongo will when it reports in a few hours.\n>\n> I'm at a loss for an explanation.\n>\n\nI'm observing the same here (using a Windows 10 VM).\n\nWith no TEMP_CONFIG set, `meson test` gives me these numbers:\ntest: postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\nduration: 72.34s\n\ntest: postgresql:regress / regress/regress\nduration: 41.98s\n\nBut with debug_parallel_query=regress in TEMP_CONFIG, I see:\ntest: postgresql:regress / regress/regress\nduration: 50.08s\n\ntest: postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\nduration: 398.45s\n\nWith log_min_messages=DEBUG2 added to TEMP_CONFIG, I can see how many\nparallel workers were started during the test:\n...\\postgresql\\build>grep \"starting background worker process\" -r testrun/pg_upgrade | wc -l\n17532\n\nAnd another interesting fact is that TEMP_CONFIG is apparently ignored by\n`meson test regress/regress`. For example, with temp.config containing\ninvalid settings, `meson test pg_upgrade/002_pg_upgrade` fails, but\n`meson test regress/regress` passes just fine.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 29 Jul 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "\nOn 2024-07-29 Mo 4:00 AM, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 28.07.2024 22:43, Andrew Dunstan wrote:\n>>\n>> Maybe, IDK. Meanwhile, I disabled \"debug_parallel_query = regress\" on \n>> HEAD for fairywren and drongo - fairwren has just gone green, and I \n>> expect drongo will when it reports in a few hours.\n>>\n>> I'm at a loss for an explanation.\n>>\n>\n> I'm observing the same here (using a Windows 10 VM).\n>\n> With no TEMP_CONFIG set, `meson test` gives me these numbers:\n> test: postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n> duration: 72.34s\n>\n> test: postgresql:regress / regress/regress\n> duration: 41.98s\n>\n> But with debug_parallel_query=regress in TEMP_CONFIG, I see:\n> test: postgresql:regress / regress/regress\n> duration: 50.08s\n>\n> test: postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n> duration: 398.45s\n>\n> With log_min_messages=DEBUG2 added to TEMP_CONFIG, I can see how many\n> parallel workers were started during the test:\n> ...\\postgresql\\build>grep \"starting background worker process\" -r \n> testrun/pg_upgrade | wc -l\n> 17532\n>\n> And another interesting fact is that TEMP_CONFIG is apparently ignored by\n> `meson test regress/regress`. For example, with temp.config containing\n> invalid settings, `meson test pg_upgrade/002_pg_upgrade` fails, but\n> `meson test regress/regress` passes just fine.\n>\n>\n\nWell, that last fact explains the discrepancy I originally complained \nabout. How the heck did that happen? It looks like we just ignored its \nuse in Makefile.global.in :-(\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 06:54:15 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "Hello Andrew,\n\n29.07.2024 13:54, Andrew Dunstan wrote:\n>\n> On 2024-07-29 Mo 4:00 AM, Alexander Lakhin wrote:\n>>\n>> And another interesting fact is that TEMP_CONFIG is apparently ignored by\n>> `meson test regress/regress`. For example, with temp.config containing\n>> invalid settings, `meson test pg_upgrade/002_pg_upgrade` fails, but\n>> `meson test regress/regress` passes just fine.\n>>\n>>\n>\n> Well, that last fact explains the discrepancy I originally complained about. How the heck did that happen? It looks \n> like we just ignored its use in Makefile.global.in :-(\n\nPlease look at the attached patch (partially based on ideas from [1]) for\nmeson.build, that aligns it with `make` in regard to use of TEMP_CONFIG.\n\nMaybe it could be implemented via a separate meson option, but that would\nalso require modifying at least the buildfarm client...\n\n[1] https://www.postgresql.org/message-id/CAN55FZ304Kp%2B510-iU5-Nx6hh32ny9jgGn%2BOB5uqPExEMK1gQQ%40mail.gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Mon, 19 Aug 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
},
{
"msg_contents": "\nOn 2024-08-19 Mo 8:00 AM, Alexander Lakhin wrote:\n> Hello Andrew,\n>\n> 29.07.2024 13:54, Andrew Dunstan wrote:\n>>\n>> On 2024-07-29 Mo 4:00 AM, Alexander Lakhin wrote:\n>>>\n>>> And another interesting fact is that TEMP_CONFIG is apparently \n>>> ignored by\n>>> `meson test regress/regress`. For example, with temp.config containing\n>>> invalid settings, `meson test pg_upgrade/002_pg_upgrade` fails, but\n>>> `meson test regress/regress` passes just fine.\n>>>\n>>>\n>>\n>> Well, that last fact explains the discrepancy I originally complained \n>> about. How the heck did that happen? It looks like we just ignored \n>> its use in Makefile.global.in :-(\n>\n> Please look at the attached patch (partially based on ideas from [1]) for\n> meson.build, that aligns it with `make` in regard to use of TEMP_CONFIG.\n>\n> Maybe it could be implemented via a separate meson option, but that would\n> also require modifying at least the buildfarm client...\n>\n> [1] \n> https://www.postgresql.org/message-id/CAN55FZ304Kp%2B510-iU5-Nx6hh32ny9jgGn%2BOB5uqPExEMK1gQQ%40mail.gmail.com\n>\n>\n\nI think this is the way to go. The patch LGTM.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 20 Aug 2024 09:31:14 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: why is pg_upgrade's regression run so slow?"
}
] |
[
{
"msg_contents": "Hello,\n\nI hope this message finds you well. I am currently working on a PostgreSQL extension and have encountered an issue where a static pointer becomes null between different AM routines. My problem is as follows:\n\nI am working with a few AM routines, specifically “ambuild” and “amrescan”. During “ambuild”, I use “ShmemInitStruct” to initialize a segment of shared memory and save the pointer to this location in my static, global pointer. I then set some values of the structure that the pointer points to, which I believe works correctly. I have ensured to acquire, and later release, the “AddinShmemInitLock” as well as check if we have found a segment of the same name in shared memory. I can access the pointer and any data I save in the struct perfectly fine during this AM routine.\n\nWhen the extension later performs “amrescan”, the static pointer I had set is null. I am not quite sure why this is happening. I would greatly appreciate any guidance or suggestions! Perhaps I need to use the startup hooks when calling the “ShmemInitStruct” function (although I would like to avoid this as the size of the segment I am initializing varies at run time) or use dynamic shared memory?\n\nPlease let me know if there are any more details I can provide or if anything is unclear. Thanks for your time and assistance!\n\nBest,\nAditya Gupta\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI hope this message finds you well. I am currently working on a PostgreSQL extension and have encountered an issue where a static pointer becomes null between different AM routines. My problem\n is as follows:\n \nI am working with a few AM routines, specifically “ambuild” and “amrescan”. During “ambuild”, I use “ShmemInitStruct” to initialize a segment of shared memory and save the pointer to this location\n in my static, global pointer. I then set some values of the structure that the pointer points to, which I believe works correctly. I have ensured to acquire, and later release, the “AddinShmemInitLock” as well as check if we have found a segment of the same\n name in shared memory. I can access the pointer and any data I save in the struct perfectly fine during this AM routine. \n \nWhen the extension later performs “amrescan”, the static pointer I had set is null. I am not quite sure why this is happening. I would greatly appreciate any guidance or suggestions! Perhaps\n I need to use the startup hooks when calling the “ShmemInitStruct” function (although I would like to avoid this as the size of the segment I am initializing varies at run time) or use dynamic shared memory?\n \nPlease let me know if there are any more details I can provide or if anything is unclear. Thanks for your time and assistance!\n \nBest,\nAditya Gupta",
"msg_date": "Sat, 27 Jul 2024 16:58:14 +0000",
"msg_from": "Aditya Gupta <adgupta1003@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unexpected Null Pointer For Static Shared Memory Segment"
}
] |
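A note on the question in the thread above: a static pointer only exists in the address space of the process that set it, so if ambuild and amrescan run in different backends (or the setting backend exits), the second callback starts with the pointer at NULL and has to look the segment up again. A minimal sketch of the usual attach-on-demand pattern with ShmemInitStruct and AddinShmemInitLock follows; the struct, field, and function names are illustrative, and requesting the space at load time (shmem_request_hook) as well as the variable-size concern raised in the message are left out.

#include "postgres.h"

#include "storage/lwlock.h"
#include "storage/shmem.h"

/* Illustrative names, not part of any existing extension. */
typedef struct MyAmSharedState
{
    int64       nentries;
} MyAmSharedState;

static MyAmSharedState *myam_state = NULL;  /* per-process pointer */

/*
 * Attach to (or create) the named shared memory segment.  Any backend can
 * call this: ShmemInitStruct() returns the already-existing segment when
 * found is set, so the pointer can be re-established lazily in each AM
 * callback instead of relying on a value set in a different process.
 */
static void
myam_attach_shmem(void)
{
    bool        found;

    LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
    myam_state = (MyAmSharedState *)
        ShmemInitStruct("myam shared state", sizeof(MyAmSharedState), &found);
    if (!found)
        myam_state->nentries = 0;   /* first creator initializes the contents */
    LWLockRelease(AddinShmemInitLock);
}

With that helper, both ambuild and amrescan would begin with "if (myam_state == NULL) myam_attach_shmem();" rather than assuming the pointer survives from another backend; a segment whose size varies at run time would instead point towards dynamic shared memory (DSM/DSA), as the message already suspects.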
[
{
"msg_contents": "Hi all,\n\nAttached is a patch that resolves an overflow in pg_size_pretty() that\nresulted in unexpected behavior when PG_INT64_MIN was passed in as an\nargument.\n\nThe pg_abs_s64() helper function is extracted and simplified from patch\n0001 from [0]. I didn't add similar functions for other sized integers\nsince they'd be unused, but I'd be happy to add them if others\ndisagree.\n\n`SELECT -9223372036854775808::bigint` results in an out of range error,\neven though `-9223372036854775808` can fit in a `bigint` and\n`SELECT pg_typeof(-9223372036854775808)` returns `bigint`. That's why\nthe `::bigint` cast is omitted from my test.\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CAAvxfHdBPOyEGS7s+xf4iaW0-cgiq25jpYdWBqQqvLtLe_t6tw@mail.gmail.com\n\nThanks,\nJoseph Koshakow",
"msg_date": "Sat, 27 Jul 2024 15:18:00 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 3:18 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> `SELECT -9223372036854775808::bigint` results in an out of range error,\n> even though `-9223372036854775808` can fit in a `bigint` and\n> `SELECT pg_typeof(-9223372036854775808)` returns `bigint`. That's why\n> the `::bigint` cast is omitted from my test.\n\nTurns out it was just an order of operations issue. Fix is attached.\n\nThanks,\nJoseph Koshakow",
"msg_date": "Sat, 27 Jul 2024 17:16:38 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sun, 28 Jul 2024 at 07:18, Joseph Koshakow <koshy44@gmail.com> wrote:\n> Attached is a patch that resolves an overflow in pg_size_pretty() that\n> resulted in unexpected behavior when PG_INT64_MIN was passed in as an\n> argument.\n\nCould we just fix this more simply by assigning the absolute value of\nthe signed variable into an unsigned type? It's a bit less code and\ngets rid of the explicit test for PG_INT64_MIN.\n\nDavid",
"msg_date": "Sun, 28 Jul 2024 10:28:23 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 6:28 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 28 Jul 2024 at 07:18, Joseph Koshakow <koshy44@gmail.com> wrote:\n>> Attached is a patch that resolves an overflow in pg_size_pretty() that\n>> resulted in unexpected behavior when PG_INT64_MIN was passed in as an\n>> argument.\n>\n> Could we just fix this more simply by assigning the absolute value of\n> the signed variable into an unsigned type?\n\nI might be misunderstanding, but my previous patch does assign the\nabsolute value of the signed variable into an unsigned type.\n\n> It's a bit less code and\n> gets rid of the explicit test for PG_INT64_MIN.\n\n> + uint64 usize = size < 0 ? (uint64) (-size) : (uint64) size;\n\nI think that the explicit test for PG_INT64_MIN is still required. If\n`size` is equal to PG_INT64_MIN then `-size` will overflow. You end up\nwith the correct behavior if `size` wraps around, but that's only\nguaranteed on platforms that support the `-fwrapv` flag.\n\nThanks,\nJoseph Koshakow\n\nOn Sat, Jul 27, 2024 at 6:28 PM David Rowley <dgrowleyml@gmail.com> wrote:>> On Sun, 28 Jul 2024 at 07:18, Joseph Koshakow <koshy44@gmail.com> wrote:>> Attached is a patch that resolves an overflow in pg_size_pretty() that>> resulted in unexpected behavior when PG_INT64_MIN was passed in as an>> argument.>> Could we just fix this more simply by assigning the absolute value of> the signed variable into an unsigned type? I might be misunderstanding, but my previous patch does assign theabsolute value of the signed variable into an unsigned type.> It's a bit less code and> gets rid of the explicit test for PG_INT64_MIN.> +\t\tuint64\t\tusize = size < 0 ? (uint64) (-size) : (uint64) size;I think that the explicit test for PG_INT64_MIN is still required. If`size` is equal to PG_INT64_MIN then `-size` will overflow. You end upwith the correct behavior if `size` wraps around, but that's onlyguaranteed on platforms that support the `-fwrapv` flag.Thanks,Joseph Koshakow",
"msg_date": "Sat, 27 Jul 2024 19:06:30 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sun, 28 Jul 2024 at 11:06, Joseph Koshakow <koshy44@gmail.com> wrote:\n> > + uint64 usize = size < 0 ? (uint64) (-size) : (uint64) size;\n>\n> I think that the explicit test for PG_INT64_MIN is still required. If\n> `size` is equal to PG_INT64_MIN then `-size` will overflow. You end up\n> with the correct behavior if `size` wraps around, but that's only\n> guaranteed on platforms that support the `-fwrapv` flag.\n\nWhat if we spelt it out the same way as pg_lltoa() does?\n\ni.e: uint64 usize = size < 0 ? 0 - (uint64) size : (uint64) size;\n\nDavid\n\n\n",
"msg_date": "Sun, 28 Jul 2024 12:00:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 8:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 28 Jul 2024 at 11:06, Joseph Koshakow <koshy44@gmail.com> wrote:\n>> > + uint64 usize = size < 0 ? (uint64) (-size) : (uint64) size;\n>>\n>> I think that the explicit test for PG_INT64_MIN is still required. If\n>> `size` is equal to PG_INT64_MIN then `-size` will overflow. You end up\n>> with the correct behavior if `size` wraps around, but that's only\n>> guaranteed on platforms that support the `-fwrapv` flag.\n>\n> What if we spelt it out the same way as pg_lltoa() does?\n>\n> i.e: uint64 usize = size < 0 ? 0 - (uint64) size : (uint64) size;\n\nMy understanding of pg_lltoa() is that it produces an underflow and\nrelies wrapping around from 0 to PG_UINT64_MAX. In fact the following\nSQL, which relies on pg_lltoa() under the hood, panics with `-ftrapv`\nenabled (which panics on underflows and overflows):\n\n SELECT int8out(-9223372036854775808);\n\nSo we should actually probably modify pg_lltoa() to use pg_abs_s64()\ntoo.\n\nThanks,\nJoe Koshakow\n\nOn Sat, Jul 27, 2024 at 8:00 PM David Rowley <dgrowleyml@gmail.com> wrote:>> On Sun, 28 Jul 2024 at 11:06, Joseph Koshakow <koshy44@gmail.com> wrote:>> > + uint64 usize = size < 0 ? (uint64) (-size) : (uint64) size;>>>> I think that the explicit test for PG_INT64_MIN is still required. If>> `size` is equal to PG_INT64_MIN then `-size` will overflow. You end up>> with the correct behavior if `size` wraps around, but that's only>> guaranteed on platforms that support the `-fwrapv` flag.>> What if we spelt it out the same way as pg_lltoa() does?>> i.e: uint64 usize = size < 0 ? 0 - (uint64) size : (uint64) size;My understanding of pg_lltoa() is that it produces an underflow andrelies wrapping around from 0 to PG_UINT64_MAX. In fact the followingSQL, which relies on pg_lltoa() under the hood, panics with `-ftrapv`enabled (which panics on underflows and overflows): SELECT int8out(-9223372036854775808);So we should actually probably modify pg_lltoa() to use pg_abs_s64()too.Thanks,Joe Koshakow",
"msg_date": "Sat, 27 Jul 2024 21:10:23 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sun, 28 Jul 2024 at 13:10, Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sat, Jul 27, 2024 at 8:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > What if we spelt it out the same way as pg_lltoa() does?\n> >\n> > i.e: uint64 usize = size < 0 ? 0 - (uint64) size : (uint64) size;\n>\n> My understanding of pg_lltoa() is that it produces an underflow and\n> relies wrapping around from 0 to PG_UINT64_MAX. In fact the following\n> SQL, which relies on pg_lltoa() under the hood, panics with `-ftrapv`\n> enabled (which panics on underflows and overflows):\n>\n> SELECT int8out(-9223372036854775808);\n\nI didn't test to see where that's coming from, but I did test the two\nattached .c files. int.c uses the 0 - (unsigned int) var method and\nint2.c uses (unsigned int) (-var). Using clang and -ftrapv, I get:\n\n$ clang int.c -o int -O2 -ftrapv\n$ ./int\n2147483648\n$ clang int2.c -o int2 -O2 -ftrapv\n$ ./int2\nIllegal instruction\n\nSimilar with gcc:\n$ gcc int.c -o int -O2 -ftrapv\n$ ./int\n2147483648\n$ gcc int2.c -o int2 -O2 -ftrapv\n$ ./int2\nAborted\n\nI suspect your trap must be coming from somewhere else. It looks to me\nlike the \"uint64 usize = size < 0 ? 0 - (uint64) size : (uint64)\nsize;\" will be fine.\n\nDavid",
"msg_date": "Sun, 28 Jul 2024 15:42:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 11:42 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I didn't test to see where that's coming from, but I did test the two\n> attached .c files. int.c uses the 0 - (unsigned int) var method and\n> int2.c uses (unsigned int) (-var). Using clang and -ftrapv, I get:\n>\n> $ clang int.c -o int -O2 -ftrapv\n> $ ./int\n> 2147483648\n> $ clang int2.c -o int2 -O2 -ftrapv\n> $ ./int2\n> Illegal instruction\n>\n> Similar with gcc:\n> $ gcc int.c -o int -O2 -ftrapv\n> $ ./int\n> 2147483648\n> $ gcc int2.c -o int2 -O2 -ftrapv\n> $ ./int2\n> Aborted\n>\n> I suspect your trap must be coming from somewhere else. It looks to me\n> like the \"uint64 usize = size < 0 ? 0 - (uint64) size : (uint64)\n> size;\" will be fine.\n\nMy mistake, you're absolutely right. The trap is coming from\n`pg_strtoint64_safe()`.\n\n return -((int64) tmp);\n\nWhich I had already addressed in the other thread and completely forgot\nabout.\n\nI did some more research and it looks like unsigned integer arithmetic\nis guaranteed to wrap around, unlike signed integer arithmetic [0].\n\nAttached is an updated patch with your approach. I removed the 0 from\nthe negative case because I think it was unnecessary, but happy to add\nit back in if I missed something.\n\nThanks for the review!\n\nThanks,\nJoseph Koshakow\n\n[0]\nhttps://www.gnu.org/software/autoconf/manual/autoconf-2.63/html_node/Integer-Overflow-Basics.html",
"msg_date": "Sun, 28 Jul 2024 00:30:10 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
},
{
"msg_contents": "On Sun, 28 Jul 2024 at 16:30, Joseph Koshakow <koshy44@gmail.com> wrote:\n> Attached is an updated patch with your approach. I removed the 0 from\n> the negative case because I think it was unnecessary, but happy to add\n> it back in if I missed something.\n\nI made a few adjustments and pushed this. I did keep the 0 - part as\nsome compilers don't seem to like not having the 0. e.g MSVC gives:\n\n../src/backend/utils/adt/dbsize.c(578): warning C4146: unary minus\noperator applied to unsigned type, result still unsigned\n\nI thought a bit about if it was really worth keeping the regression\ntest or not and in the end decided it was likely worthwhile keeping\nit, so I expanded it slightly to cover both PG_INT64_MIN and\nPG_INT64_MAX values. It looks slightly less like we're earmarking the\nfact that there was a bug that way, and also seems to be of some\nadditional value.\n\nPG15 did see quite a significant rewrite of the pg_size_pretty code.\nThe bug does still exist in PG14 and earlier, but on looking at what\nit would take to fix it there I got a bit unexcited at the risk to\nreward ratio of adjusting that code and just left it alone. I've\nbackpatched only as far as PG15. I'm sure someone else will feel I\nshould have done something else there, but that's the judgement call I\nmade.\n\nThanks for the patch.\n\nDavid\n\n\n",
"msg_date": "Sun, 28 Jul 2024 22:34:49 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix overflow in pg_size_pretty"
}
] |
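For readers following the overflow details in the thread above: the safe spelling works because the value is converted to an unsigned type before it is negated, and unsigned arithmetic is defined to wrap, whereas negating PG_INT64_MIN in the signed domain is undefined behaviour. A small self-contained illustration (plain C with stdint types, not PostgreSQL code; the function name is made up for the example):

#include <stdint.h>
#include <stdio.h>

/*
 * Defined for every int64_t input, including INT64_MIN: the cast to
 * uint64_t happens first, and only then is the value negated in the
 * unsigned domain, where wraparound is well defined.
 */
static uint64_t
abs_u64(int64_t v)
{
    return v < 0 ? 0 - (uint64_t) v : (uint64_t) v;
}

int
main(void)
{
    /*
     * Writing (uint64_t) (-v) instead would negate in the signed domain
     * first, which overflows for INT64_MIN; that is what -ftrapv aborts on.
     */
    printf("%llu\n", (unsigned long long) abs_u64(INT64_MIN));  /* 9223372036854775808 */
    return 0;
}

As the final message notes, the committed fix kept the "0 -" spelling, which also avoids MSVC's warning about unary minus applied to an unsigned type.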
[
{
"msg_contents": "Hello Tomas,\n\nPlease take a look at a recent dikkop's failure [1]. The\nregress_log_006_db_file_copy file from that run shows:\n[02:08:57.929](0.014s) # initializing database system by copying initdb template\n...\n[02:09:22.511](24.583s) ok 1 - full backup\n...\n[02:10:35.758](73.247s) not ok 2 - incremental backup\n\n006_db_file_copy_primary.log contains:\n2024-07-28 02:09:29.441 UTC [67785:12] 006_db_file_copy.pl LOG: received replication command: START_REPLICATION SLOT \n\"pg_basebackup_67785\" 0/4000000 TIMELINE 1\n2024-07-28 02:09:29.441 UTC [67785:13] 006_db_file_copy.pl STATEMENT: START_REPLICATION SLOT \"pg_basebackup_67785\" \n0/4000000 TIMELINE 1\n2024-07-28 02:09:29.441 UTC [67785:14] 006_db_file_copy.pl LOG: acquired physical replication slot \"pg_basebackup_67785\"\n2024-07-28 02:09:29.441 UTC [67785:15] 006_db_file_copy.pl STATEMENT: START_REPLICATION SLOT \"pg_basebackup_67785\" \n0/4000000 TIMELINE 1\n2024-07-28 02:10:29.487 UTC [67785:16] 006_db_file_copy.pl LOG: terminating walsender process due to replication timeout\n2024-07-28 02:10:29.487 UTC [67785:17] 006_db_file_copy.pl STATEMENT: START_REPLICATION SLOT \"pg_basebackup_67785\" \n0/4000000 TIMELINE 1\n\nIt looks like this incremental backup operation was performed slower than\nusual (it took more than 60 seconds and apparently was interrupted due to\nwal_sender_timeout). But looking at regress_log_006_db_file_copy from the\n6 previous (successful) test runs, we can see:\n[14:22:16.841](43.215s) ok 2 - incremental backup\n[02:14:42.888](34.595s) ok 2 - incremental backup\n[17:51:16.152](43.708s) ok 2 - incremental backup\n[04:07:16.757](31.087s) ok 2 - incremental backup\n[12:15:01.256](49.432s) ok 2 - incremental backup\n[01:06:02.482](52.364s) ok 2 - incremental backup\n\nThus reaching 60s (e.g., due to some background activity) on this animal\nseems pretty possible. So maybe it would make sense to increase\nwal_sender_timeout for it, say, to 120s?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-07-27%2023%3A22%3A57\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 29 Jul 2024 07:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "dikkop failed the pg_combinebackupCheck/006_db_file_copy.pl test"
}
] |
[
{
"msg_contents": "Prior to PG16, postmaster children would manually detach from shared memory\nif it was not needed. However, this behavior was removed in fork mode in\ncommit aafc05d.\nDetaching shared memory when it is no longer needed is beneficial, as \npostmaster children (like syslogger) don't wish to take any risk of\naccidentally corrupting shared memory. Additionally, any panic in these\nprocesses will not reset shared memory.\nThe attached patch addresses this issue by detaching shared memory after\nfork_process().\nBest regard,\nRui Zhao",
"msg_date": "Mon, 29 Jul 2024 17:57:06 +0800",
"msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?RGV0YWNoIHNoYXJlZCBtZW1vcnkgaW4gUG9zdG1hc3RlciBjaGlsZCBpZiBub3QgbmVlZGVk?="
},
{
"msg_contents": "Hi Rui,\n\n> Prior to PG16, postmaster children would manually detach from shared memory\n> if it was not needed. However, this behavior was removed in fork mode in\n> commit aafc05d.\n>\n> Detaching shared memory when it is no longer needed is beneficial, as\n> postmaster children (like syslogger) don't wish to take any risk of\n> accidentally corrupting shared memory. Additionally, any panic in these\n> processes will not reset shared memory.\n>\n> The attached patch addresses this issue by detaching shared memory after\n> fork_process().\n\nThanks for the patch. How do you estimate its performance impact?\n\nNote the comments for postmaster_child_launch(). This function is\nexposed to the third-party code and guarantees to attach shared\nmemory. I doubt that there is much third-party code in existence to\nbreak but you should change to comment.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 29 Jul 2024 14:44:20 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Detach shared memory in Postmaster child if not needed"
},
{
"msg_contents": "Thanks for your reply.\n> Thanks for the patch. How do you estimate its performance impact?\nIn my patch, ony child processes that set\n(child_process_kinds[child_type].shmem_attach == false)\nwill detach from shared memory.\nChild processes with B_STANDALONE_BACKEND and B_INVALID don't call\npostmaster_child_launch().\nTherefore, currently, only the syslogger will be affected,\nwhich should be harmless.\n> Note the comments for postmaster_child_launch(). This function is\n> exposed to the third-party code and guarantees to attach shared\n> memory. I doubt that there is much third-party code in existence to\n> break but you should change to comment.\nThank you for your reminder. My v2 patch will include the comments for\npostmaster_child_launch().\n--\nBest regards,\nRui Zhao",
"msg_date": "Tue, 30 Jul 2024 01:25:02 +0800",
"msg_from": "\"Rui Zhao\" <xiyuan.zr@alibaba-inc.com>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?B?UmXvvJpEZXRhY2ggc2hhcmVkIG1lbW9yeSBpbiBQb3N0bWFzdGVyIGNoaWxkIGlmIG5vdCBu?=\n =?UTF-8?B?ZWVkZWQ=?="
},
{
"msg_contents": "On Mon, Jul 29, 2024 at 5:57 AM Rui Zhao <xiyuan.zr@alibaba-inc.com> wrote:\n> Prior to PG16, postmaster children would manually detach from shared memory\n> if it was not needed. However, this behavior was removed in fork mode in\n> commit aafc05d.\n\nOh. The commit message makes no mention of that. I wonder whether it\nwas inadvertent.\n\n> Detaching shared memory when it is no longer needed is beneficial, as\n> postmaster children (like syslogger) don't wish to take any risk of\n> accidentally corrupting shared memory. Additionally, any panic in these\n> processes will not reset shared memory.\n\n+1.\n\n> The attached patch addresses this issue by detaching shared memory after\n> fork_process().\n\nI don't know whether this is the correct approach or not, but\nhopefully Heikki can comment.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jul 2024 14:10:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Detach shared memory in Postmaster child if not needed"
},
{
"msg_contents": "On 29/07/2024 21:10, Robert Haas wrote:\n> On Mon, Jul 29, 2024 at 5:57 AM Rui Zhao <xiyuan.zr@alibaba-inc.com> wrote:\n>> Prior to PG16, postmaster children would manually detach from shared memory\n>> if it was not needed. However, this behavior was removed in fork mode in\n>> commit aafc05d.\n> \n> Oh. The commit message makes no mention of that. I wonder whether it\n> was inadvertent.\n> \n>> Detaching shared memory when it is no longer needed is beneficial, as\n>> postmaster children (like syslogger) don't wish to take any risk of\n>> accidentally corrupting shared memory. Additionally, any panic in these\n>> processes will not reset shared memory.\n> \n> +1.\n> \n>> The attached patch addresses this issue by detaching shared memory after\n>> fork_process().\n> \n> I don't know whether this is the correct approach or not, but\n> hopefully Heikki can comment.\n\nGood catch, it was not intentional. The patch looks good to me, so \ncommitted. Thanks Rui!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 22:41:44 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Detach shared memory in Postmaster child if not needed"
}
] |
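The benefit described in the thread above (a child that never touches shared memory cannot corrupt it, and a crash there does not implicate the shared state) is the same one any forked architecture gets from unmapping early. The following is only a generic POSIX illustration of that principle (MAP_ANONYMOUS as available on Linux and the BSDs), not the PostgreSQL code path, which goes through fork_process() and PGSharedMemoryDetach():

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    size_t      size = 4096;
    int        *shared;

    /* anonymous shared mapping, inherited across fork() */
    shared = mmap(NULL, size, PROT_READ | PROT_WRITE,
                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;
    *shared = 42;

    if (fork() == 0)
    {
        /*
         * A child with no business in shared memory unmaps it right away;
         * after this point no bug in the child can scribble on it.
         */
        munmap(shared, size);
        _exit(0);
    }
    wait(NULL);
    printf("parent still sees %d\n", *shared);  /* prints 42 */
    return 0;
}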
[
{
"msg_contents": "Today I was busy rebasing my Greenplum-related Extendable SMGR\npatches on Cloudberry, and I faced some conflicts.\nWhile resolving them I noticed code & comments inconsistency in smgr.c\nin smgrtruncate function, which tracks down to\nc5315f4f44843c20ada876fdb0d0828795dfbdf5. In this commit,\nsmgr_fsm_nblocks & smgr_vm_nblocks fields were removed, however\ncomments were not fixed accordingly.\n\nSo i suggest to fix this, PFA",
"msg_date": "Mon, 29 Jul 2024 15:50:27 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix smgrtruncate code comment."
},
{
"msg_contents": "On 29/07/2024 13:50, Kirill Reshke wrote:\n> Today I was busy rebasing my Greenplum-related Extendable SMGR\n> patches on Cloudberry, and I faced some conflicts.\n> While resolving them I noticed code & comments inconsistency in smgr.c\n> in smgrtruncate function, which tracks down to\n> c5315f4f44843c20ada876fdb0d0828795dfbdf5. In this commit,\n> smgr_fsm_nblocks & smgr_vm_nblocks fields were removed, however\n> comments were not fixed accordingly.\n> \n> So i suggest to fix this, PFA\n\nApplied, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 14:27:03 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix smgrtruncate code comment."
}
] |
[
{
"msg_contents": "These programs in src/backend/utils/mb/ are unused, and have been unused \nand unusable since 2003:\n\niso.c\nwin1251.c\nwin866.c\n\nAttached patch removes them. See commit message for a little more \ndetailed analysis.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 29 Jul 2024 14:18:40 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Remove dead code generation tools in src/backend/utils/mb/"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> These programs in src/backend/utils/mb/ are unused, and have been unused \n> and unusable since 2003:\n> iso.c\n> win1251.c\n> win866.c\n> Attached patch removes them. See commit message for a little more \n> detailed analysis.\n\n+1. Seems to have been my oversight in 4c3c8c048d.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 10:15:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead code generation tools in src/backend/utils/mb/"
},
{
"msg_contents": "Hello Tom and Heikki,\n\n29.07.2024 17:15, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> These programs in src/backend/utils/mb/ are unused, and have been unused\n>> and unusable since 2003:\n>> iso.c\n>> win1251.c\n>> win866.c\n>> Attached patch removes them. See commit message for a little more\n>> detailed analysis.\n> +1. Seems to have been my oversight in 4c3c8c048d.\n\nI also wonder whether src/test/locale/ still makes sense; does anyone\nrun those tests (I could not run a single one on a quick attempt)?\n\n(As far as I can tell, KOI8-R fallen out of mainstream usage in Russia\ntwenty years ago...)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 29 Jul 2024 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead code generation tools in src/backend/utils/mb/"
},
{
"msg_contents": "On 7/29/24 5:00 PM, Alexander Lakhin wrote:\n> I also wonder whether src/test/locale/ still makes sense; does anyone\n> run those tests (I could not run a single one on a quick attempt)?\n\nI was actually wondering about those yesterday and they should probably \nbe removed (or fixed if anyone can see a use for them). As they are \nright now they do not seem very useful, especially with the current \nselection of locales: de_DE.ISO8859-1, gr_GR.ISO8859-7 and koi8-r.\n\nAndreas\n\n\n",
"msg_date": "Mon, 29 Jul 2024 17:06:40 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove dead code generation tools in src/backend/utils/mb/"
},
{
"msg_contents": "On 29/07/2024 17:15, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> These programs in src/backend/utils/mb/ are unused, and have been unused\n>> and unusable since 2003:\n>> iso.c\n>> win1251.c\n>> win866.c\n>> Attached patch removes them. See commit message for a little more\n>> detailed analysis.\n> \n> +1. Seems to have been my oversight in 4c3c8c048d.\n\nRemoved.\n\n(Aleksander, you forgot to CC the mailing list, but thanks for your \nreview too.)\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 20:41:06 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Remove dead code generation tools in src/backend/utils/mb/"
}
] |
[
{
"msg_contents": "Fix double-release of spinlock\n\nCommit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal\nflags, but in EmitProcSignalBarrier(), the spinlock was released\ntwice. With most spinlock implementations, releasing a lock that's not\nheld is not easy to notice, because most of the time it does nothing,\nbut if the spinlock was concurrently acquired by another process, it\ncould lead to more serious issues. Fortunately, with the\n--disable-spinlocks emulation implementation, it caused more visible\nfailures.\n\nIn the passing, fix a type in comment and add an assertion that the\nprocNumber passed to SendProcSignal looks valid.\n\nDiscussion: https://www.postgresql.org/message-id/b8ce284c-18a2-4a79-afd3-1991a2e7d246@iki.fi\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0393f542d72c6182271c392d9a83d0fc775113c7\n\nModified Files\n--------------\nsrc/backend/storage/ipc/procsignal.c | 6 ++++--\n1 file changed, 4 insertions(+), 2 deletions(-)",
"msg_date": "Mon, 29 Jul 2024 15:24:52 +0000",
"msg_from": "Heikki Linnakangas <heikki.linnakangas@iki.fi>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Heikki Linnakangas <heikki.linnakangas@iki.fi> writes:\n> Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal\n> flags, but in EmitProcSignalBarrier(), the spinlock was released\n> twice. With most spinlock implementations, releasing a lock that's not\n> held is not easy to notice, because most of the time it does nothing,\n> but if the spinlock was concurrently acquired by another process, it\n> could lead to more serious issues. Fortunately, with the\n> --disable-spinlocks emulation implementation, it caused more visible\n> failures.\n\nThere was some recent discussion about getting rid of\n--disable-spinlocks on the grounds that nobody would use\nhardware that lacked native spinlocks. But now I wonder\nif there is a testing/debugging reason to keep it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 11:31:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-29 11:31:56 -0400, Tom Lane wrote:\n> Heikki Linnakangas <heikki.linnakangas@iki.fi> writes:\n> > Commit 9d9b9d46f3 added spinlocks to protect the fields in ProcSignal\n> > flags, but in EmitProcSignalBarrier(), the spinlock was released\n> > twice. With most spinlock implementations, releasing a lock that's not\n> > held is not easy to notice, because most of the time it does nothing,\n> > but if the spinlock was concurrently acquired by another process, it\n> > could lead to more serious issues. Fortunately, with the\n> > --disable-spinlocks emulation implementation, it caused more visible\n> > failures.\n> \n> There was some recent discussion about getting rid of\n> --disable-spinlocks on the grounds that nobody would use\n> hardware that lacked native spinlocks. But now I wonder\n> if there is a testing/debugging reason to keep it.\n\nSeems it'd be a lot more straightforward to just add an assertion to the\nx86-64 spinlock implementation verifying that the spinlock isn't already free?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jul 2024 09:18:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2024-07-29 11:31:56 -0400, Tom Lane wrote:\n>> There was some recent discussion about getting rid of\n>> --disable-spinlocks on the grounds that nobody would use\n>> hardware that lacked native spinlocks. But now I wonder\n>> if there is a testing/debugging reason to keep it.\n\n> Seems it'd be a lot more straightforward to just add an assertion to the\n> x86-64 spinlock implementation verifying that the spinlock isn't already free?\n\nI dunno, is that the only extra check that the --disable-spinlocks\nimplementation is providing?\n\nI'm kind of allergic to putting Asserts into spinlocked code segments,\nmostly on the grounds that it violates the straight-line-code precept.\nI suppose it's not really that bad for tests that you don't expect\nto fail, but still ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 12:33:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-29 12:33:13 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2024-07-29 11:31:56 -0400, Tom Lane wrote:\n> >> There was some recent discussion about getting rid of\n> >> --disable-spinlocks on the grounds that nobody would use\n> >> hardware that lacked native spinlocks. But now I wonder\n> >> if there is a testing/debugging reason to keep it.\n> \n> > Seems it'd be a lot more straightforward to just add an assertion to the\n> > x86-64 spinlock implementation verifying that the spinlock isn't already free?\n\nFWIW, I quickly hacked that up, and it indeed quickly fails with 0393f542d72^\nand passes with 0393f542d72.\n\n\n> I dunno, is that the only extra check that the --disable-spinlocks\n> implementation is providing?\n\nI think it also provides the (valuable!) check that spinlocks were actually\ninitialized. But that again seems like something we'd be better off adding\nmore general infrastructure for - nobody runs --disable-spinlocks locally, we\nshouldn't need to run this on the buildfarm to find problems like this.\n\n\n> I'm kind of allergic to putting Asserts into spinlocked code segments,\n> mostly on the grounds that it violates the straight-line-code precept.\n> I suppose it's not really that bad for tests that you don't expect\n> to fail, but still ...\n\nI don't think the spinlock implementation itself is really affected by that\nrule - after all, the --disable-spinlocks implementation actually consists out\nof several layers of external function calls (including syscalls in some\ncases!).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jul 2024 09:40:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2024-07-29 12:33:13 -0400, Tom Lane wrote:\n>> I dunno, is that the only extra check that the --disable-spinlocks\n>> implementation is providing?\n\n> I think it also provides the (valuable!) check that spinlocks were actually\n> initialized. But that again seems like something we'd be better off adding\n> more general infrastructure for - nobody runs --disable-spinlocks locally, we\n> shouldn't need to run this on the buildfarm to find problems like this.\n\nHmm, but how? One of the things we gave up by nuking HPPA support\nwas that that platform's representation of an initialized, free\nspinlock was not all-zeroes, so that it'd catch this type of problem.\nI think all the remaining platforms do use zeroes, so it's hard to\nsee how anything short of valgrind would be likely to catch it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 12:45:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-29 09:40:26 -0700, Andres Freund wrote:\n> On 2024-07-29 12:33:13 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2024-07-29 11:31:56 -0400, Tom Lane wrote:\n> > >> There was some recent discussion about getting rid of\n> > >> --disable-spinlocks on the grounds that nobody would use\n> > >> hardware that lacked native spinlocks. But now I wonder\n> > >> if there is a testing/debugging reason to keep it.\n> > \n> > > Seems it'd be a lot more straightforward to just add an assertion to the\n> > > x86-64 spinlock implementation verifying that the spinlock isn't already free?\n> \n> FWIW, I quickly hacked that up, and it indeed quickly fails with 0393f542d72^\n> and passes with 0393f542d72.\n\nThought it'd be valuable to post a patch to go along with this, to\n-hackers. The thread started at [1]\n\nOther context from this discussion:\n> > I dunno, is that the only extra check that the --disable-spinlocks\n> > implementation is providing?\n> \n> I think it also provides the (valuable!) check that spinlocks were actually\n> initialized. But that again seems like something we'd be better off adding\n> more general infrastructure for - nobody runs --disable-spinlocks locally, we\n> shouldn't need to run this on the buildfarm to find problems like this.\n> \n> \n> > I'm kind of allergic to putting Asserts into spinlocked code segments,\n> > mostly on the grounds that it violates the straight-line-code precept.\n> > I suppose it's not really that bad for tests that you don't expect\n> > to fail, but still ...\n> \n> I don't think the spinlock implementation itself is really affected by that\n> rule - after all, the --disable-spinlocks implementation actually consists out\n> of several layers of external function calls (including syscalls in some\n> cases!).\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/E1sYSF2-001lEB-D1%40gemulon.postgresql.org",
"msg_date": "Mon, 29 Jul 2024 09:51:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Detect double-release of spinlock"
},
{
"msg_contents": "On 29/07/2024 19:51, Andres Freund wrote:\n> On 2024-07-29 09:40:26 -0700, Andres Freund wrote:\n>> On 2024-07-29 12:33:13 -0400, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> On 2024-07-29 11:31:56 -0400, Tom Lane wrote:\n>>>>> There was some recent discussion about getting rid of\n>>>>> --disable-spinlocks on the grounds that nobody would use\n>>>>> hardware that lacked native spinlocks. But now I wonder\n>>>>> if there is a testing/debugging reason to keep it.\n>>>\n>>>> Seems it'd be a lot more straightforward to just add an assertion to the\n>>>> x86-64 spinlock implementation verifying that the spinlock isn't already free?\n>>\n>> FWIW, I quickly hacked that up, and it indeed quickly fails with 0393f542d72^\n>> and passes with 0393f542d72.\n\n+1. Thanks!\n\n>>> Other context from this discussion:\n>>> I dunno, is that the only extra check that the --disable-spinlocks\n>>> implementation is providing?\n>>\n>> I think it also provides the (valuable!) check that spinlocks were actually\n>> initialized. But that again seems like something we'd be better off adding\n>> more general infrastructure for - nobody runs --disable-spinlocks locally, we\n>> shouldn't need to run this on the buildfarm to find problems like this.\n\nNote that the \"check\" for double-release with the fallback \nimplementation wasn't an explicit check either. It just incremented the \nunderlying semaphore, which caused very weird failures later in \ncompletely unrelated code. An explicit assert would be much nicer.\n\n+1 for removing --disable-spinlocks, but let's add this assertion first.\n\n>>> I'm kind of allergic to putting Asserts into spinlocked code segments,\n>>> mostly on the grounds that it violates the straight-line-code precept.\n>>> I suppose it's not really that bad for tests that you don't expect\n>>> to fail, but still ...\n>>\n>> I don't think the spinlock implementation itself is really affected by that\n>> rule - after all, the --disable-spinlocks implementation actually consists out\n>> of several layers of external function calls (including syscalls in some\n>> cases!).\n\nYeah I'm not worried about that at all. Also, the assert is made when \nyou have already released the spinlock; you are already out of the \ncritical section.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 20:07:56 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Yeah I'm not worried about that at all. Also, the assert is made when \n> you have already released the spinlock; you are already out of the \n> critical section.\n\nNot in the patch Andres posted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 13:25:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
},
{
"msg_contents": "On Mon, Jul 29, 2024 at 12:40 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it also provides the (valuable!) check that spinlocks were actually\n> initialized. But that again seems like something we'd be better off adding\n> more general infrastructure for - nobody runs --disable-spinlocks locally, we\n> shouldn't need to run this on the buildfarm to find problems like this.\n\n+1. It sucks to have to do special builds to catch a certain kind of\nproblem. I know I've been guilty of that (ahem, debug_parallel_query\nf/k/a force_parallel_mode) but I'm not going to put it on my CV as one\nof my great accomplishments. It's much better if we can find a way for\na standard 'make check-world' to tell us about as many things as\npossible, so that we don't commit and then find out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jul 2024 13:37:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "On 2024-07-29 12:45:19 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2024-07-29 12:33:13 -0400, Tom Lane wrote:\n> >> I dunno, is that the only extra check that the --disable-spinlocks\n> >> implementation is providing?\n>\n> > I think it also provides the (valuable!) check that spinlocks were actually\n> > initialized. But that again seems like something we'd be better off adding\n> > more general infrastructure for - nobody runs --disable-spinlocks locally, we\n> > shouldn't need to run this on the buildfarm to find problems like this.\n>\n> Hmm, but how?\n\nI think there's a few ways:\n\n> One of the things we gave up by nuking HPPA support\n> was that that platform's representation of an initialized, free\n> spinlock was not all-zeroes, so that it'd catch this type of problem.\n> I think all the remaining platforms do use zeroes, so it's hard to\n> see how anything short of valgrind would be likely to catch it.\n\n1) There's nothing forcing us to use 0/1 for most of the spinlock\nimplementations. E.g. for x86-64 we could use 0 for uninitialized, 1 for free\nand 2 for locked.\n\n2) We could also change the layout of slock_t in assert enabled builds, adding\na dedicated 'initialized' field when assertions are enabled. But that might be\nannoying from an ABI POV?\n\n\n1) seems preferrable, so I gave it a quick try. Seems to work. There's a\n*slight* difference in the instruction sequence:\n\nold:\n 41f6:\tf0 86 10 \tlock xchg %dl,(%rax)\n 41f9:\t84 d2 \ttest %dl,%dl\n 41fb:\t75 1b \tjne 4218 <GetRecoveryState+0x38>\n\nnew:\n 4216:\tf0 86 10 \tlock xchg %dl,(%rax)\n 4219:\t80 fa 02 \tcmp $0x2,%dl\n 421c:\t74 22 \tje 4240 <GetRecoveryState+0x40>\n\nI.e. the version using 2 as the locked state uses a three byte instruction vs\na two byte instruction before.\n\n\n*If* we are worried about this, we could\n\na) Change the representation only for assert enabled builds, but that'd have\n ABI issues again.\n\nb) Instead define the spinlock to have 1 as the unlocked state and 0 as the\n locked state. That makes it a bit harder to understand that initialization\n is missing, compared to a dedicated state, as the first use of the spinlock\n just blocks.\n\n\nTo make 1) b) easier to understand it might be worth changing the slock_t\ntypedef to be something like\n\ntypedef struct slock_t\n{\n char is_free;\n} slock_t;\n\nwhich also might help catch some cases of type confusion - the char typedef is\ntoo forgiving imo.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jul 2024 10:46:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-29 13:25:22 -0400, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > Yeah I'm not worried about that at all. Also, the assert is made when \n> > you have already released the spinlock; you are already out of the \n> > critical section.\n> \n> Not in the patch Andres posted.\n\nWhich seems fairly fundamental - once outside of the critical section, we\ncan't actually assert that the lock isn't acquired, somebody else *validly*\nmight have acquired it by then.\n\nHowever, I still don't think it's a problem to assert that the lock is held in\nin the unlock \"routine\". As mentioned before, the spinlock implementation\nitself has never followed the \"just straight line code\" rule that users of\nspinlocks are supposed to follow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jul 2024 10:48:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2024-07-29 12:45:19 -0400, Tom Lane wrote:\n>> Hmm, but how?\n\n> ...\n> I.e. the version using 2 as the locked state uses a three byte instruction vs\n> a two byte instruction before.\n> *If* we are worried about this, we could\n> a) Change the representation only for assert enabled builds, but that'd have\n> ABI issues again.\n\nAgreed, that would be a very bad idea. It would for example break the\ncase of a non-assert-enabled extension used with an assert-enabled\ncore or vice versa, which is something we've gone out of our way to\nallow.\n\n> b) Instead define the spinlock to have 1 as the unlocked state and 0 as the\n> locked state. That makes it a bit harder to understand that initialization\n> is missing, compared to a dedicated state, as the first use of the spinlock\n> just blocks.\n\nThis option works for me.\n\n> To make 1) b) easier to understand it might be worth changing the slock_t\n> typedef to be something like\n\n> typedef struct slock_t\n> {\n> char is_free;\n> } slock_t;\n\n+1\n\nHow much of this would we change across platforms, and how much\nwould be x86-only? I think there are enough people developing on\nARM (e.g. Mac) now to make it worth covering that, but maybe we\ndon't care so much about anything else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 13:56:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix double-release of spinlock"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> However, I still don't think it's a problem to assert that the lock is held in\n> in the unlock \"routine\". As mentioned before, the spinlock implementation\n> itself has never followed the \"just straight line code\" rule that users of\n> spinlocks are supposed to follow.\n\nYeah, that's fair.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Jul 2024 13:57:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
},
{
"msg_contents": "On 29/07/2024 20:48, Andres Freund wrote:\n> On 2024-07-29 13:25:22 -0400, Tom Lane wrote:\n>> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>>> Yeah I'm not worried about that at all. Also, the assert is made when\n>>> you have already released the spinlock; you are already out of the\n>>> critical section.\n>>\n>> Not in the patch Andres posted.\n> \n> Which seems fairly fundamental - once outside of the critical section, we\n> can't actually assert that the lock isn't acquired, somebody else *validly*\n> might have acquired it by then.\n\nYou could do:\n\nbool was_free = S_LOCK_FREE(lock);\n\nS_UNLOCK(lock);\nAssert(!was_free);\n\nDepending on the underlying implementation, you could also use \ncompare-and-exchange. That makes the assertion-enabled instructions a \nlittle different than without assertions though.\n\n> However, I still don't think it's a problem to assert that the lock is held in\n> in the unlock \"routine\". As mentioned before, the spinlock implementation\n> itself has never followed the \"just straight line code\" rule that users of\n> spinlocks are supposed to follow.\n\nAgreed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 29 Jul 2024 21:00:35 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-29 21:00:35 +0300, Heikki Linnakangas wrote:\n> On 29/07/2024 20:48, Andres Freund wrote:\n> > On 2024-07-29 13:25:22 -0400, Tom Lane wrote:\n> > > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > > Yeah I'm not worried about that at all. Also, the assert is made when\n> > > > you have already released the spinlock; you are already out of the\n> > > > critical section.\n> > > \n> > > Not in the patch Andres posted.\n> > \n> > Which seems fairly fundamental - once outside of the critical section, we\n> > can't actually assert that the lock isn't acquired, somebody else *validly*\n> > might have acquired it by then.\n> \n> You could do:\n> \n> bool was_free = S_LOCK_FREE(lock);\n> \n> S_UNLOCK(lock);\n> Assert(!was_free);\n\nI don't really see the point - we're about to crash with an assertion failure,\nwhy would we want to do that outside of the critical section? If anything that\nwill make it harder to debug the issue in a core dump, because other backends\nmight \"destroy evidence\" due to being able to acquire the spinlock.\n\n\n> Depending on the underlying implementation, you could also use\n> compare-and-exchange.\n\nThat'd scale a lot worse, at least on x86-64, as it requires the unlock to be\nan atomic op, whereas today it's a simple store (+ compiler barrier).\n\nI've experimented with replacing all spinlocks with lwlocks, and the fact that\nyou need an atomic op for an rwlock release is one of the two major reasons\nthey have a higher overhead (the remainder is boring stuff like the overhead\nof external function calls and ownership management).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 29 Jul 2024 11:12:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
},
{
"msg_contents": "Hi,\n\nPartially replying here to an email on -committers [1].\n\nOn 2024-07-29 13:57:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > However, I still don't think it's a problem to assert that the lock is held in\n> > in the unlock \"routine\". As mentioned before, the spinlock implementation\n> > itself has never followed the \"just straight line code\" rule that users of\n> > spinlocks are supposed to follow.\n>\n> Yeah, that's fair.\n\nCool.\n\n\nOn 2024-07-29 13:56:05 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > b) Instead define the spinlock to have 1 as the unlocked state and 0 as the\n> > locked state. That makes it a bit harder to understand that initialization\n> > is missing, compared to a dedicated state, as the first use of the spinlock\n> > just blocks.\n>\n> This option works for me.\n\n> > To make 1) b) easier to understand it might be worth changing the slock_t\n> > typedef to be something like\n>\n> > typedef struct slock_t\n> > {\n> > char is_free;\n> > } slock_t;\n>\n> +1\n\nCool. I've attached a prototype.\n\n\nI just realized there's a nice little advantage to the \"inverted\"\nrepresentation - it detects missing initialization even in optimized builds.\n\n\n> How much of this would we change across platforms, and how much\n> would be x86-only? I think there are enough people developing on\n> ARM (e.g. Mac) now to make it worth covering that, but maybe we\n> don't care so much about anything else.\n\nNot sure. Right now I've only hacked up x86-64 (not even touching i386), but\nit shouldn't be hard to change at least some additional platforms.\n\nMy current prototype requires S_UNLOCK, S_LOCK_FREE, S_INIT_LOCK to be\nimplemented for x86-64 instead of using the \"generic\" implementation. That'd\nbe mildly annoying duplication if we did so for a few more platforms.\n\n\nIt'd be more palatable to just change all platforms if we made more of them\nuse __sync_lock_test_and_set (or some other intrinsic(s))...\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/2812376.1722275765%40sss.pgh.pa.us",
"msg_date": "Mon, 29 Jul 2024 11:29:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Detect double-release of spinlock"
}
] |
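A concrete picture of the "inverted representation" idea sketched in the thread above may help. The following is only an illustrative sketch for x86-64, not Andres's actual prototype: the struct layout and macro names merely follow s_lock.h conventions, and any committed version may differ. With 1 meaning "free" and 0 meaning "locked", a never-initialized (zeroed) spinlock simply blocks instead of silently acting unlocked, and S_UNLOCK() can assert that the lock was really held while the release itself stays a plain store plus compiler barrier rather than an atomic operation.

typedef struct slock_t
{
	char		is_free;		/* 1 = unlocked, 0 = locked (or never initialized) */
} slock_t;

#define S_INIT_LOCK(lock)	((lock)->is_free = 1)
#define S_LOCK_FREE(lock)	((lock)->is_free != 0)

/* Returns 0 if the lock was acquired, nonzero if it was already held. */
static __inline__ int
tas(volatile slock_t *lock)
{
	char		old = 0;		/* we swap in 0, i.e. "locked" */

	__asm__ __volatile__(
		"	xchgb	%0,%1	\n"
:		"+q"(old), "+m"(lock->is_free)
:		/* no inputs */
:		"memory", "cc");
	return old == 0;			/* old was 0: someone else already holds it */
}

#define S_UNLOCK(lock) \
	do { \
		Assert(!S_LOCK_FREE(lock));	/* catches a double release */ \
		__asm__ __volatile__("" ::: "memory");	/* compiler barrier only */ \
		(lock)->is_free = 1; \
	} while (0)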
[
{
"msg_contents": "Hi,\n\nAs part of [1] I was staring at the assembly code generated for\nSpinLockAcquire(), fairly randomly using GetRecoveryState() as the example.\n\nOn master, in an optimized build this generates the following code (gcc 12 in\nthis case, but it doesn't really matter):\n\n0000000000004220 <GetRecoveryState>:\n 4220:\t55 \tpush %rbp\n 4221:\t48 8b 05 00 00 00 00 \tmov 0x0(%rip),%rax # 4228 <GetRecoveryState+0x8>\n 4228:\tba 01 00 00 00 \tmov $0x1,%edx\n 422d:\t48 89 e5 \tmov %rsp,%rbp\n 4230:\t48 05 c0 01 00 00 \tadd $0x1c0,%rax\n 4236:\tf0 86 10 \tlock xchg %dl,(%rax)\n 4239:\t84 d2 \ttest %dl,%dl\n 423b:\t75 23 \tjne 4260 <GetRecoveryState+0x40>\n 423d:\t48 8b 05 00 00 00 00 \tmov 0x0(%rip),%rax # 4244 <GetRecoveryState+0x24>\n 4244:\t8b 80 44 01 00 00 \tmov 0x144(%rax),%eax\n 424a:\t48 8b 15 00 00 00 00 \tmov 0x0(%rip),%rdx # 4251 <GetRecoveryState+0x31>\n 4251:\tc6 82 c0 01 00 00 00 \tmovb $0x0,0x1c0(%rdx)\n 4258:\t5d \tpop %rbp\n 4259:\tc3 \tret\n 425a:\t66 0f 1f 44 00 00 \tnopw 0x0(%rax,%rax,1)\n 4260:\t48 8b 05 00 00 00 00 \tmov 0x0(%rip),%rax # 4267 <GetRecoveryState+0x47>\n 4267:\t48 8d 0d 00 00 00 00 \tlea 0x0(%rip),%rcx # 426e <GetRecoveryState+0x4e>\n 426e:\t48 8d b8 c0 01 00 00 \tlea 0x1c0(%rax),%rdi\n 4275:\tba c8 18 00 00 \tmov $0x18c8,%edx\n 427a:\t48 8d 35 00 00 00 00 \tlea 0x0(%rip),%rsi # 4281 <GetRecoveryState+0x61>\n 4281:\tff 15 00 00 00 00 \tcall *0x0(%rip) # 4287 <GetRecoveryState+0x67>\n 4287:\teb b4 \tjmp 423d <GetRecoveryState+0x1d>\n\nThe main thing I want to raise attention about is the following bit:\n add $0x1c0,%rax\n lock xchg %dl,(%rax)\n\n0x1c0 is the offset of info_lck in XLogCtlData. So the code first computes the\naddress of the lock in %rax and then does the xchg on that.\n\nThat's pretty odd, because on x86 this could just be encoded as an offset to\nthe address - as shown in the code for the unlock a bit later:\n 4251: c6 82 c0 01 00 00 00 movb $0x0,0x1c0(%rdx)\n\n\nAfter being confused for a while, the explanation is fairly simple: We use\nvolatile and dereference the address:\n\nstatic __inline__ int\ntas(volatile slock_t *lock)\n{\n\tslock_t\t\t_res = 1;\n\n\t__asm__ __volatile__(\n\t\t\"\tlock\t\t\t\\n\"\n\t\t\"\txchgb\t%0,%1\t\\n\"\n:\t\t\"+q\"(_res), \"+m\"(*lock)\n:\t\t/* no inputs */\n:\t\t\"memory\", \"cc\");\n\treturn (int) _res;\n}\n\n(note the (*lock) and the volatile in the signature).\n\nI think it'd be just as defensible to not emit a separate load here, despite\nthe volatile, and indeed clang doesn't emit a separate load. But it also does\nseem defensible to take translate the code very literally, as gcc does.\n\n\nIf I remove the volatile from the signature or cast it away, gcc indeed\ngenerates the offset version:\n 4230:\tf0 86 82 c0 01 00 00 \tlock xchg %al,0x1c0(%rdx)\n\n\nA second, even smaller, issue with the code is that we use \"lock xchgb\"\ndespite xchg having implied lock approximately forever ([2]). That makes the code\nslightly wider than necessary (the lock prefix is one byte).\n\n\nI doubt there's a lot of situations where these end up having a meaningful\nperformance impact, but it still seems suboptimal. 
I may be seeing a *small*\ngain in a workload inserting lots of tiny records, but it's hard to be sure if\nit's above the noise floor.\n\n\nI'm wondering in how many places our fairly broad use of volatiles causes\nmore substantially worse code being generated.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20240729165154.56zqyg34x2ywkpsh%40awork3.anarazel.de\n[2] https://www.felixcloutier.com/x86/xchg#description\n\n\n",
"msg_date": "Mon, 29 Jul 2024 12:59:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Suboptimal spinlock code due to volatile"
},
{
"msg_contents": "On 29/07/2024 22:59, Andres Freund wrote:\n> After being confused for a while, the explanation is fairly simple: We use\n> volatile and dereference the address:\n> \n> static __inline__ int\n> tas(volatile slock_t *lock)\n> {\n> \tslock_t\t\t_res = 1;\n> \n> \t__asm__ __volatile__(\n> \t\t\"\tlock\t\t\t\\n\"\n> \t\t\"\txchgb\t%0,%1\t\\n\"\n> :\t\t\"+q\"(_res), \"+m\"(*lock)\n> :\t\t/* no inputs */\n> :\t\t\"memory\", \"cc\");\n> \treturn (int) _res;\n> }\n> \n> (note the (*lock) and the volatile in the signature).\n> \n> I think it'd be just as defensible to not emit a separate load here, despite\n> the volatile, and indeed clang doesn't emit a separate load. But it also does\n> seem defensible to take translate the code very literally, as gcc does.\n> \n> \n> If I remove the volatile from the signature or cast it away, gcc indeed\n> generates the offset version:\n> 4230:\tf0 86 82 c0 01 00 00 \tlock xchg %al,0x1c0(%rdx)\n\nGood catch. Seems safe to just remove the volatile.\n\n> A second, even smaller, issue with the code is that we use \"lock xchgb\"\n> despite xchg having implied lock approximately forever ([2]). That makes the code\n> slightly wider than necessary (the lock prefix is one byte).\n> \n> \n> I doubt there's a lot of situations where these end up having a meaningful\n> performance impact, but it still seems suboptimal. I may be seeing a *small*\n> gain in a workload inserting lots of tiny records, but it's hard to be sure if\n> it's above the noise floor.\n> \n> \n> I'm wondering in how many places our fairly broad use of volatiles causes\n> more substantially worse code being generated.\n\nAside from performance, I find \"volatile\" difficult to reason about. I \nfeel more comfortable with atomics and memory barriers.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 30 Jul 2024 10:45:54 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Suboptimal spinlock code due to volatile"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 3:46 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Aside from performance, I find \"volatile\" difficult to reason about. I\n> feel more comfortable with atomics and memory barriers.\n\nI think nearly everyone feels more comfortable with atomics and memory\nbarriers. The semantics of volatile are terrible. It almost does more\nor less than what you actually wanted, sometimes both.\n\nReading Andres's original message, I couldn't help wondering if this\nis an argument against rolling our own spinlock implementations.\nPresumably a compiler intrinsic wouldn't cause this kind of\nunfortunate artifact. Our position in the past has essentially been\n\"we know better,\" but this seems like a counterexample.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jul 2024 12:38:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Suboptimal spinlock code due to volatile"
}
] |
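To make the two points in the thread above concrete, here is a rough sketch of tas() with the volatile qualifier cast away inside the asm operand, so GCC is free to fold the struct offset into the xchg addressing mode, and with the redundant lock prefix dropped (xchg with a memory operand is implicitly locked). This is only an illustration, not the committed fix, and whether dropping volatile here is acceptable is exactly the judgment call being debated; slock_t is typedef'd locally just to keep the fragment self-contained.

typedef unsigned char slock_t;	/* matches the x86 / x86-64 definition */

static __inline__ int
tas(volatile slock_t *lock)
{
	slock_t		_res = 1;
	slock_t    *nv_lock = (slock_t *) lock;		/* cast the volatile away */

	/* no "lock" prefix: xchg with a memory operand is implicitly locked */
	__asm__ __volatile__(
		"	xchgb	%0,%1	\n"
:		"+q"(_res), "+m"(*nv_lock)
:		/* no inputs */
:		"memory", "cc");
	return (int) _res;
}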
[
{
"msg_contents": "Hello hackers,\n\nI would like your help to collectively try to answer these three questions:\n\n1. Who are the current users of 32-bit PostgreSQL?\n\n2. Among these users, how many are upgrading to new major versions?\n\n3. For how many of these users is performance critical?\n\nThis came up during ongoing work on optimizing numeric_mul [1].\n\nTo me, it's non-obvious whether introducing `#if SIZEOF_DATUM < 8` with\nseparate 32-bit and 64-bit code paths is worthwhile to maintain performance\nfor both.\n\nKnowing more about $subject can hopefully help us reason about how much\nadditional code complication is justifiable for *fast* 32-bit support.\n\nI checked the archives but only found a discussion on *dropping* 32-bit support\n[2], which is a different topic.\n\nThanks for input!\n\n/Joel\n\n[1] https://postgr.es/m/9d8a4a42-c354-41f3-bbf3-199e1957db97@app.fastmail.com\n[2] https://postgr.es/m/0a71b43129fb447988f152941e1dbcb3@nidsa.net\n\n\n",
"msg_date": "Mon, 29 Jul 2024 22:40:16 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Is *fast* 32-bit support still important?"
},
{
"msg_contents": "On 29/07/2024 23:40, Joel Jacobson wrote:\n> To me, it's non-obvious whether introducing `#if SIZEOF_DATUM < 8` with\n> separate 32-bit and 64-bit code paths is worthwhile to maintain performance\n> for both.\n> \n> Knowing more about $subject can hopefully help us reason about how much\n> additional code complication is justifiable for *fast* 32-bit support.\n\nIMO I don't think it's worth adding extra code for fast 32-bit support \nanymore. However, I'd still be wary of *regressing* performance on \n32-bit systems.\n\nSo if you're adding a new fast path to a function, it's OK to make it \n64-bit only, and fall back to the old slower code on 32-bit systems. But \n-1 on *removing* existing 32-bit fast path code, or rewriting things in \na way that makes an existing function significantly slower than before \non 32-bit systems.\n\nThis isn't black or white though. It depends on how big a gain or \nregression we're talking about, and how complex the extra code would be.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 30 Jul 2024 10:25:06 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
},
{
"msg_contents": "Hi Joel,\n\nHere are my two cents.\n\n> 1. Who are the current users of 32-bit PostgreSQL?\n\nPretty much any embedded system that uses just a few GB of memory may\nwin from using a 32-bit processor (not necessarily in terms of\nperformance, maybe in terms of price). Think of WiFi-routers, smart\nTVs, 3D printers, etc.\n\nGenerally speaking it's hard to give an exact answer due to lack of\n\"telemetry\" in PostgreSQL.\n\n> 2. Among these users, how many are upgrading to new major versions?\n\nI would guess it very much depends on the product and manufacturer. I\nwouldn't assume though that users of 32-bit systems don't do major\nupgrades. (Not to mention that it's beneficial for us to test the code\non 32-bit systems.)\n\n> 3. For how many of these users is performance critical?\n\nDepends on how you define performance. Performance of a single OLTP\nquery is important. The performance of the upgrade procedure is\nprobably not that important. The ability of processing a lot of data\nis probably also not extremely important, at least I wouldn't expect a\nlot of data and/or fast storage devices on 32-bit systems.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 30 Jul 2024 12:06:29 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
},
{
"msg_contents": "On Tue, Jul 30, 2024, at 09:25, Heikki Linnakangas wrote:\n> On 29/07/2024 23:40, Joel Jacobson wrote:\n>> To me, it's non-obvious whether introducing `#if SIZEOF_DATUM < 8` with\n>> separate 32-bit and 64-bit code paths is worthwhile to maintain performance\n>> for both.\n>> \n>> Knowing more about $subject can hopefully help us reason about how much\n>> additional code complication is justifiable for *fast* 32-bit support.\n>\n> IMO I don't think it's worth adding extra code for fast 32-bit support \n> anymore. However, I'd still be wary of *regressing* performance on \n> 32-bit systems.\n>\n> So if you're adding a new fast path to a function, it's OK to make it \n> 64-bit only, and fall back to the old slower code on 32-bit systems. But \n> -1 on *removing* existing 32-bit fast path code, or rewriting things in \n> a way that makes an existing function significantly slower than before \n> on 32-bit systems.\n>\n> This isn't black or white though. It depends on how big a gain or \n> regression we're talking about, and how complex the extra code would be.\n\nThanks for input.\n\nI still haven't got any reports from real users of 32-bit PostgreSQL,\nso my comments below are based on the assumption that such users exist\nand have high performance needs.\n\nI agree that it's not a black or white decision since quantifying complexity\nis inherently challenging.\n\nHowever, perhaps it would be possible to say something about\nlower and upper bounds on 32-bit slowdown?\n\n- Below a certain percentage slowdown,\n extra 32-bit code optimization is definitively unnecessary.\n\n- Above a certain percentage slowdown,\n extra 32-bit code optimization is definitively necessary.\n\nIn the range between these bounds, I guess the decision should depend on\nthe specific added code complexity required?\n\nIt's also the question what percentage we're reasoning about here.\nIs it the time spent in the function, or is it the total execution time?\n\n/Joel\n\n\n",
"msg_date": "Mon, 05 Aug 2024 13:46:51 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
},
{
"msg_contents": "On Tue, Jul 30, 2024, at 11:06, Aleksander Alekseev wrote:\n> Hi Joel,\n>\n> Here are my two cents.\n>\n>> 1. Who are the current users of 32-bit PostgreSQL?\n>\n> Pretty much any embedded system that uses just a few GB of memory may\n> win from using a 32-bit processor (not necessarily in terms of\n> performance, maybe in terms of price). Think of WiFi-routers, smart\n> TVs, 3D printers, etc.\n\nThanks for feedback!\n\nDo we know of any such products or users?\n\nI found an OS that runs on many 32-bit chips, FreeRTOS, that seems quite popular.\nCouldn't find anything about PostgreSQL and FreeRTOS though.\nI've posted a question on their forum. [1] Let's wait and see if we hear from any real user.\n\nI see one i386 and i686 build farm animals runs Debian.\nPerhaps it makes sense to try to reach out to the Debian community,\nand see if they know of any PostgreSQL users on 32-bit?\n\n> Generally speaking it's hard to give an exact answer due to lack of\n> \"telemetry\" in PostgreSQL.\n\nCould we add a text message that is displayed to a user,\nwhen compiling PostgreSQL on a 32-bit platform?\n\n*****\nNOTICE: You are compiling PostgreSQL on a 32-bit platform.\n\nWe are interested in learning about your use case.\nPlease contact us with details about how you are using\nPostgreSQL on this platform.\n\nThank you!\n\nContact: pgsql-hackers@postgresql.org\nSubject: Report from 32-bit user\n*****\n\n/Joel\n\n\n",
"msg_date": "Mon, 05 Aug 2024 14:04:08 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
},
{
"msg_contents": "Hi,\n\n> > Pretty much any embedded system that uses just a few GB of memory may\n> > win from using a 32-bit processor (not necessarily in terms of\n> > performance, maybe in terms of price). Think of WiFi-routers, smart\n> > TVs, 3D printers, etc.\n>\n> Thanks for feedback!\n>\n> Do we know of any such products or users?\n>\n> I found an OS that runs on many 32-bit chips, FreeRTOS, that seems quite popular.\n> Couldn't find anything about PostgreSQL and FreeRTOS though.\n> I've posted a question on their forum. [1] Let's wait and see if we hear from any real user.\n\nI'm not extremely familiar with FreeRTOS but my humble understanding\nis that it's very different from what we would typically call an\n\"operating system\". If I'm not wrong, basically this is a framework\nthat adds multitasking to STM32 microcontrollers. This is not exactly\na target hardware for PostgreSQL (although running PostgreSQL on STM32\nMCUs could be a fun project for someone looking for a challenging\ntask.)\n\n> I see one i386 and i686 build farm animals runs Debian.\n> Perhaps it makes sense to try to reach out to the Debian community,\n> and see if they know of any PostgreSQL users on 32-bit?\n>\n> > Generally speaking it's hard to give an exact answer due to lack of\n> > \"telemetry\" in PostgreSQL.\n>\n> Could we add a text message that is displayed to a user,\n> when compiling PostgreSQL on a 32-bit platform?\n\nWhat would be actionable items depending on the results? Option A:\nsomeone claims to use PostgreSQL on 32-bit hardware. Option B: no one\nadmits that they use PostgreSQL on 32-bit hardware (but maybe someone\nactually does and/or will in the future). Regardless of the results\nyou can't drop the support of 32-bit software (until it gets as\ndifficult and pointless as with AIX that was dropped recently) and it\nwill not tell you how slow the 32-bit version of PostgreSQL can be.\n\nIf there are no actionable items why create a poll?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 5 Aug 2024 15:24:38 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
},
{
"msg_contents": "On Mon, Aug 5, 2024, at 14:24, Aleksander Alekseev wrote:\n>> Could we add a text message that is displayed to a user,\n>> when compiling PostgreSQL on a 32-bit platform?\n>\n> What would be actionable items depending on the results? Option A:\n> someone claims to use PostgreSQL on 32-bit hardware. Option B: no one\n> admits that they use PostgreSQL on 32-bit hardware (but maybe someone\n> actually does and/or will in the future). Regardless of the results\n> you can't drop the support of 32-bit software (until it gets as\n> difficult and pointless as with AIX that was dropped recently) and it\n> will not tell you how slow the 32-bit version of PostgreSQL can be.\n>\n> If there are no actionable items why create a poll?\n\nNever suggested *dropping* 32-bit support; that's a different question.\n\nHowever, if we were to receive input from 32-bit PostgreSQL users explaining\nwhy they need *high performance*, then I imagine that could affect how we\nfeel about 32-bit optimizations.\n\nRight now, there doesn't seem to be a consensus on whether we should optimize\nfor 32-bit or not.\n\nOn the one hand, we have commit 5e1f3b9 that says:\n\"While it adds some space on 32-bit machines, we aren't optimizing\nfor that case anymore.\"\n\nOn the other hand, we have the ongoing work on optimizing numeric_mul,\nwhere a patch is suggested with extra code to optimize for 32-bit.\n\nI agree, however, that the absence of any comments from such a poll would not\ngive us any more information.\n\n/Joel\n\n\n",
"msg_date": "Mon, 05 Aug 2024 14:54:39 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
},
{
"msg_contents": "On Mon, Jul 29, 2024 at 4:41 PM Joel Jacobson <joel@compiler.org> wrote:\n> To me, it's non-obvious whether introducing `#if SIZEOF_DATUM < 8` with\n> separate 32-bit and 64-bit code paths is worthwhile to maintain performance\n> for both.\n\nI feel like it's probably mostly up to the patch author. Reviewers\nand/or the eventual committer might propose to reverse whatever\ndecision the patch author made, but personally, I wouldn't bounce a\npatch for either having or not having separate code paths, unless\nthere was some particular reason to believe that whatever decision the\npatch author made was likely to cause a problem.\n\nOne generally difficult thing about working in the PostgreSQL\ncommunity is that we have pretty limited formal decision-making\nstructures. A consensus on the list is valid only until the issue is\nrelitigated, which can happen at any time and for any reason. With\ninfrequent exceptions such as RMT or core pronouncements, a previous\nconsensus doesn't bind people who weren't participants in the previous\ndiscussion, or even people who were. Everybody is free to change their\nmind at any time. For that reason, I often find it more helpful to\napproach questions like this from a pragmatic standpoint: rather than\nasking \"what is the policy about X?\" I find it better to ask \"If I do\nX, what is probably going to happen?\"\n\nWhat I think is: if you write a patch that caters mostly to 64-bit\nsystems, probably nobody will notice or care that the 32-bit\nperformance is not great, because there just aren't that many 32-bit\nusers left out there. I think it's been a very long time since we got\na complaint about 32-bit performance, or a patch to improve 32-bit\nperformance, but we get other kinds of performance-optimizing patches\nall the time, for all sorts of things. Perhaps that's because we\nhaven't yet done anything too terrible to 32-bit performance, but it's\nprobably also that if you're running a 32-bit system in 2024, you're\nprobably not expecting it to come under serious load. You likely care\nmore about getting the job done with limited machine resources than\nanything else. And you likely expect it to be slow. I don't actually\nknow how much we'd have to regress 32-bit performance before users\nstarted to show up and complain about it, and I'm certainly not\nrecommending that we do so gratuitously. At the same time, evidence\nthat people are displeased with our current 32-bit performance or that\nthey care about improving our 32-bit performance is very thin on the\nground, AFAIK. If at some point people do start showing up to\ncomplain, then we'll know we've gone too far and we can back off --\nwith the benefit of knowing what people are actually unhappy about at\nthat point, vs. what we think they might be unhappy about.\n\nAnd on the other hand, if you take the opposite approach and write a\npatch that includes a separate code path for 32-bit systems, as long\nas it works and doesn't look terribly ugly, I bet nobody's going to\nwaste time demanding that you rip it out.\n\nSo I would just do whatever you feel like doing and then defend your\nchoice in the comments and/or commit message.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Aug 2024 13:40:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is *fast* 32-bit support still important?"
}
] |
[
{
"msg_contents": "When deparsing queries or expressions, ruleutils.c has to generate\nunique names for RTEs and columns of RTEs. (Often, they're unique\nalready, but this isn't reliably true.) The original logic for that\ninvolved just strcmp'ing a proposed name against all the ones already\nassigned, which obviously is O(N^2) in the number of names being\nconsidered. Back in commit 8004953b5, we fixed that problem for\ngeneration of unique RTE names, by using a hash table to remember the\nalready-assigned names. However, that commit did not touch the logic\nfor de-duplicating the column names within each RTE, explaining\n \n In principle the same problem applies to the column-name-de-duplication\n code; but in practice that seems to be less of a problem, first because\n N is limited since we don't support extremely wide tables, and second\n because duplicate column names within an RTE are fairly rare, so that in\n practice the cost is more like O(N^2) not O(N^3). It would be very much\n messier to fix the column-name code, so for now I've left that alone.\n\nBut I think the time has come to do something about it. In [1]\nI presented this Perl script to generate a database that gives\npg_upgrade indigestion:\n\n-----\nfor (my $i = 0; $i < 100; $i++)\n{\n\tprint \"CREATE TABLE test_inh_check$i (\\n\";\n\tfor (my $j = 0; $j < 1000; $j++)\n\t{\n\t\tprint \"a$j float check (a$j > 10.2),\\n\";\n\t}\n\tprint \"b float);\\n\";\n\tprint \"CREATE TABLE test_inh_check_child$i() INHERITS(test_inh_check$i);\\n\";\n}\n-----\n\nOn my development machine, it takes over 14 minutes to pg_upgrade\nthis, and it turns out that that time is largely spent in column\nname de-duplication while deparsing the CHECK constraints. The\nattached patch reduces that to about 3m45s.\n\n(I think that we ought to reconsider MergeConstraintsIntoExisting's\nuse of deparsing to compare check constraints: it'd be faster and\nprobably more reliable to apply attnum translation to one parsetree\nand then use equal(). But that's a matter for a different patch, and\nthis patch would still be useful for the pg_dump side of the problem.)\n\nI was able to avoid a lot of the complexity I'd feared before by not\nattempting to use hashing during set_using_names(), which only has to\nconsider columns merged by USING clauses, so it shouldn't have enough\nof a performance problem to be worth touching. The hashing code needs\nto be optional anyway because it's unlikely to be a win for narrow\ntables, so we can simply ignore it until we reach the potentially\nexpensive steps. Also, things are already factored in such a way that\nwe only need to have one hashtable at a time, so this shouldn't cause\nany large memory bloat.\n\nI'll park this in the next CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2422717.1722201869%40sss.pgh.pa.us",
"msg_date": "Mon, 29 Jul 2024 18:14:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Speeding up ruleutils' name de-duplication code, redux"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 10:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> -----\n> for (my $i = 0; $i < 100; $i++)\n> {\n> print \"CREATE TABLE test_inh_check$i (\\n\";\n> for (my $j = 0; $j < 1000; $j++)\n> {\n> print \"a$j float check (a$j > 10.2),\\n\";\n> }\n> print \"b float);\\n\";\n> print \"CREATE TABLE test_inh_check_child$i() INHERITS(test_inh_check$i);\\n\";\n> }\n> -----\n>\n> On my development machine, it takes over 14 minutes to pg_upgrade\n> this, and it turns out that that time is largely spent in column\n> name de-duplication while deparsing the CHECK constraints. The\n> attached patch reduces that to about 3m45s.\n\nI think this is worth doing. Reducing the --schema-only time in\npg_dump is a worthy goal to reduce downtime during upgrades.\n\nI looked at the patch and tried it out. I wondered about the choice of\n32 as the cut-off point so decided to benchmark using the attached\nscript. Here's an extract from the attached results:\n\nPatched with 10 child tables\npg_dump for 16 columns real 0m0.068s\npg_dump for 31 columns real 0m0.080s\npg_dump for 32 columns real 0m0.083s\n\nThis gives me what I'd expect to see. I wanted to ensure the point\nwhere you're switching to the hashing method was about the right\nplace. It seems to be, at least for my test.\n\nThe performance looks good too:\n\n10 tables:\nmaster: pg_dump for 1024 columns real 0m23.053s\npatched: pg_dump for 1024 columns real 0m1.573s\n\n100 tables:\nmaster: pg_dump for 1024 columns real 3m29.857s\npatched: pg_dump for 1024 columns real 0m23.053s\n\nPerhaps you don't think it's worth the additional complexity, but I\nsee that in both locations you're calling build_colinfo_names_hash(),\nit's done just after a call to expand_colnames_array_to(). I wondered\nif it was worthwhile unifying both of those functions maybe with a new\nname so that you don't need to loop over the always NULL element of\nthe colnames[] array when building the hash table. This is likely\nquite a small overhead compared to the quadratic search you've\nremoved, so it might not move the needle any. I just wanted to point\nit out as I've little else I can find to comment on.\n\nDavid",
"msg_date": "Tue, 10 Sep 2024 21:57:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up ruleutils' name de-duplication code, redux"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 30 Jul 2024 at 10:14, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> On my development machine, it takes over 14 minutes to pg_upgrade\n>> this, and it turns out that that time is largely spent in column\n>> name de-duplication while deparsing the CHECK constraints. The\n>> attached patch reduces that to about 3m45s.\n\n> I looked at the patch and tried it out.\n\nThanks for looking!\n\n> This gives me what I'd expect to see. I wanted to ensure the point\n> where you're switching to the hashing method was about the right\n> place. It seems to be, at least for my test.\n\nYeah, I was just going by gut feel there. It's good to have some\nnumbers showing it's not a totally silly choice.\n\n> Perhaps you don't think it's worth the additional complexity, but I\n> see that in both locations you're calling build_colinfo_names_hash(),\n> it's done just after a call to expand_colnames_array_to(). I wondered\n> if it was worthwhile unifying both of those functions maybe with a new\n> name so that you don't need to loop over the always NULL element of\n> the colnames[] array when building the hash table. This is likely\n> quite a small overhead compared to the quadratic search you've\n> removed, so it might not move the needle any. I just wanted to point\n> it out as I've little else I can find to comment on.\n\nHmm, but there are quite a few expand_colnames_array_to calls that\nare not associated with build_colinfo_names_hash. On the whole it\nfeels like those are separate concerns that are better kept separate.\n\nWe could accomplish what you suggest by re-ordering the calls so that\nwe build the hash table before enlarging the array. 0001 attached\nis the same as before (modulo line number changes from being rebased\nup to HEAD) and then 0002 implements this idea on top. On the whole\nthough I find 0002 fairly ugly and would prefer to stick to 0001.\nI really doubt that scanning any newly-created column positions is\ngoing to take long enough to justify intertwining things like this.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 10 Sep 2024 11:06:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up ruleutils' name de-duplication code, redux"
},
{
"msg_contents": "On Wed, 11 Sept 2024 at 03:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We could accomplish what you suggest by re-ordering the calls so that\n> we build the hash table before enlarging the array. 0001 attached\n> is the same as before (modulo line number changes from being rebased\n> up to HEAD) and then 0002 implements this idea on top. On the whole\n> though I find 0002 fairly ugly and would prefer to stick to 0001.\n> I really doubt that scanning any newly-created column positions is\n> going to take long enough to justify intertwining things like this.\n\nI'm fine with that. I did test the performance with and without\nv2-0002 and the performance is just a little too noisy to tell. Both\nruns I did with v2-0002, it was slower, so I agree it's not worth\nmaking the code uglier for.\n\nI've no more comments. Looks good.\n\nDavid\n\n\n",
"msg_date": "Wed, 11 Sep 2024 08:33:59 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up ruleutils' name de-duplication code, redux"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 11 Sept 2024 at 03:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> We could accomplish what you suggest by re-ordering the calls so that\n>> we build the hash table before enlarging the array. 0001 attached\n>> is the same as before (modulo line number changes from being rebased\n>> up to HEAD) and then 0002 implements this idea on top. On the whole\n>> though I find 0002 fairly ugly and would prefer to stick to 0001.\n>> I really doubt that scanning any newly-created column positions is\n>> going to take long enough to justify intertwining things like this.\n\n> I'm fine with that. I did test the performance with and without\n> v2-0002 and the performance is just a little too noisy to tell. Both\n> runs I did with v2-0002, it was slower, so I agree it's not worth\n> making the code uglier for.\n> I've no more comments. Looks good.\n\nThanks for the review! I'll go push just 0001.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:36:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up ruleutils' name de-duplication code, redux"
}
] |
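The essence of the speedup discussed in this thread, remembering already-assigned names in a hash table so each uniqueness check is a single lookup rather than a strcmp() against every prior name, can be sketched with PostgreSQL's dynahash roughly as follows. This illustrates the technique only and is not the committed patch: the function names, the initial table size, and the simple "%s_%d" suffixing (which ignores NAMEDATALEN truncation of the base name) are placeholders.

#include "postgres.h"
#include "utils/hsearch.h"

/* one hash table per RTE whose column names are being de-duplicated */
static HTAB *
create_names_hash(void)
{
	HASHCTL		hash_ctl;

	/* the key (a NUL-terminated name) is the whole entry */
	hash_ctl.keysize = NAMEDATALEN;
	hash_ctl.entrysize = NAMEDATALEN;
	return hash_create("deparse column names", 64, &hash_ctl,
					   HASH_ELEM | HASH_STRINGS);
}

/*
 * Return a variant of colname not handed out before, reserving it in
 * names_hash as a side effect.  Each probe is a single hash lookup.
 */
static char *
make_unique_name(HTAB *names_hash, const char *colname)
{
	char		candidate[NAMEDATALEN];
	bool		found;
	int			i;

	strlcpy(candidate, colname, sizeof(candidate));
	for (i = 1;; i++)
	{
		(void) hash_search(names_hash, candidate, HASH_ENTER, &found);
		if (!found)
			return pstrdup(candidate);	/* name was free and is now reserved */
		snprintf(candidate, sizeof(candidate), "%s_%d", colname, i);
	}
}

As the benchmarking in the thread suggests, building the table only pays off for wide relations, which is why the patch switches to hashing past a column-count threshold rather than unconditionally.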
[
{
"msg_contents": "Hi, all\n\nI got a crash when copy partition tables with mass data in Cloudberry DB[0](based on Postgres14.4, Greenplum 7).\n\nI have a test on Postgres and it has the similar issue(different places but same function).\n\nHowever it’s a little hard to reproduce because it happened when inserting next tuple after a previous copy multi insert buffer is flushed.\n\nTo reproduce easily, change the Macros to:\n\n#define MAX_BUFFERED_TUPLES\t1\n#define MAX_PARTITION_BUFFERS\t0\n\nConfig and make install, when initdb, a core dump will be as:\n\n#0 0x000055de617211b9 in CopyMultiInsertInfoNextFreeSlot (miinfo=0x7ffce496d360, rri=0x55de6368ba88)\n at copyfrom.c:592\n#1 0x000055de61721ff1 in CopyFrom (cstate=0x55de63592ce8) at copyfrom.c:985\n#2 0x000055de6171dd86 in DoCopy (pstate=0x55de63589e00, stmt=0x55de635347d8, stmt_location=0, stmt_len=195,\n processed=0x7ffce496d590) at copy.c:306\n#3 0x000055de61ad7ce8 in standard_ProcessUtility (pstmt=0x55de635348a8,\n queryString=0x55de63533960 \"COPY information_schema.sql_features (feature_id, feature_name, sub_feature_id, sub\n_feature_name, is_supported, comments) FROM E'/home/gpadmin/install/pg17/share/postgresql/sql_features.txt';\\n\",\n readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x55de620b0ce0 <debugtupDR>,\n qc=0x7ffce496d910) at utility.c:735\n#4 0x000055de61ad7614 in ProcessUtility (pstmt=0x55de635348a8,\n queryString=0x55de63533960 \"COPY information_schema.sql_features (feature_id, feature_name, sub_feature_id, sub\n_feature_name, is_supported, comments) FROM E'/home/gpadmin/install/pg17/share/postgresql/sql_features.txt';\\n\",\n readOnlyTree=false, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x55de620b0ce0 <debugtupDR>,\n qc=0x7ffce496d910) at utility.c:523\n#5 0x000055de61ad5e8f in PortalRunUtility (portal=0x55de633dd7a0, pstmt=0x55de635348a8, isTopLevel=true,\n setHoldSnapshot=false, dest=0x55de620b0ce0 <debugtupDR>, qc=0x7ffce496d910) at pquery.c:1158\n#6 0x000055de61ad6106 in PortalRunMulti (portal=0x55de633dd7a0, isTopLevel=true, setHoldSnapshot=false,\n dest=0x55de620b0ce0 <debugtupDR>, altdest=0x55de620b0ce0 <debugtupDR>, qc=0x7ffce496d910) at pquery.c:1315\n#7 0x000055de61ad5550 in PortalRun (portal=0x55de633dd7a0, count=9223372036854775807, isTopLevel=true,\n run_once=true, dest=0x55de620b0ce0 <debugtupDR>, altdest=0x55de620b0ce0 <debugtupDR>, qc=0x7ffce496d910)\n at pquery.c:791```\n\n\nThe root cause is: we may call CopyMultiInsertInfoFlush() to flush buffer during COPY tuples, ex: insert from next tuple,\nCopyMultiInsertInfoNextFreeSlot() will get a crash due to null pointer of buffer.\n\nTo fix it: instead of call CopyMultiInsertInfoSetupBuffer() outside, I put it into CopyMultiInsertInfoNextFreeSlot() to avoid such issues.\n\n[0] https://github.com/cloudberrydb/cloudberrydb\n\n\nZhang Mingli\nwww.hashdata.xyz",
"msg_date": "Tue, 30 Jul 2024 11:50:54 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "COPY FROM crash"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 15:52, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> I have a test on Postgres and it has the similar issue(different places but same function).\n>\n> However it’s a little hard to reproduce because it happened when inserting next tuple after a previous copy multi insert buffer is flushed.\n>\n> To reproduce easily, change the Macros to:\n>\n> #define MAX_BUFFERED_TUPLES 1\n> #define MAX_PARTITION_BUFFERS 0\n\nI think you're going to need to demonstrate to us there's an actual\nPostgreSQL bug here with a test case that causes a crash without\nchanging the above definitions.\n\nIt seems to me that it's not valid to set MAX_PARTITION_BUFFERS to\nanything less than 2 due to the code inside\nCopyMultiInsertInfoFlush(). If we find the CopyMultiInsertBuffer for\n'curr_rri' then that code would misbehave if the list only contained a\nsingle CopyMultiInsertBuffer due to the expectation there's another\nitem in the list after the list_delete_first(). If you're only able\nto get it to misbehave by setting MAX_PARTITION_BUFFERS to less than\n2, then my suggested fix would be to add a comment to say that values\nless than to are not supported.\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Jul 2024 17:35:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FROM crash"
},
{
"msg_contents": "Hi!\n\nOn Tue, 30 Jul 2024 at 08:52, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>\n> Hi, all\n>\n> I got a crash when copy partition tables with mass data in Cloudberry DB[0](based on Postgres14.4, Greenplum 7).\n>\n> I have a test on Postgres and it has the similar issue(different places but same function).\n\nJust to be clear, you are facing this on HEAD, on on REL_14_STABLE?\n\n\n> However it’s a little hard to reproduce because it happened when inserting next tuple after a previous copy multi insert buffer is flushed.\n>\n> To reproduce easily, change the Macros to:\n>\n> #define MAX_BUFFERED_TUPLES 1\n> #define MAX_PARTITION_BUFFERS 0\n\nThis way it's harder to believe that the problem persists with the\noriginal settings. Are these values valid?\n\n\n",
"msg_date": "Tue, 30 Jul 2024 10:37:11 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FROM crash"
},
{
"msg_contents": "Hi,\n\n\nZhang Mingli\nwww.hashdata.xyz\n\nOn Jul 30, 2024 at 13:35 +0800, David Rowley <dgrowleyml@gmail.com>, wrote:\n>\n> If you're only able\n> to get it to misbehave by setting MAX_PARTITION_BUFFERS to less than\n> 2, then my suggested fix would be to add a comment to say that values\n> less than to are not supported.\nRight.\nOn Postgres this crash could only happen if it is set to zero.\nI’ve updated the comments in patch v1 with an additional Assert.",
"msg_date": "Tue, 30 Jul 2024 15:06:01 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FROM crash"
},
{
"msg_contents": "Hi,\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Jul 30, 2024 at 13:37 +0800, Kirill Reshke <reshkekirill@gmail.com>, wrote:\n>\n> Just to be clear, you are facing this on HEAD, on on REL_14_STABLE?\nPostgres HEAD.\n\n\n\n\n\n\n\nHi, \n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\nOn Jul 30, 2024 at 13:37 +0800, Kirill Reshke <reshkekirill@gmail.com>, wrote:\n\nJust to be clear, you are facing this on HEAD, on on REL_14_STABLE?\nPostgres HEAD.",
"msg_date": "Tue, 30 Jul 2024 15:07:10 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: COPY FROM crash"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 19:06, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> I’ve updated the comments in patch v1 with an additional Assert.\n\nThanks. I adjusted a little and used a StaticAssert instead then pushed.\n\nStaticAssert seems better as invalid values will result in compilation failure.\n\nDavid\n\n\n",
"msg_date": "Tue, 30 Jul 2024 20:22:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: COPY FROM crash"
}
] |
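The compile-time guard David mentions committing can be pictured roughly as below. The macro value and the assertion text are illustrative (the value matches what copyfrom.c uses at the time of writing) and not necessarily the committed wording; the point is that an unsupported setting now fails the build instead of crashing at run time.

/* sketch, in the spirit of src/backend/commands/copyfrom.c */
#define MAX_PARTITION_BUFFERS	32

/*
 * CopyMultiInsertInfoFlush() assumes it can keep the current partition's
 * buffer while evicting another, so fewer than two buffers can never
 * work; reject such values at compile time.
 */
StaticAssertDecl(MAX_PARTITION_BUFFERS >= 2,
				 "MAX_PARTITION_BUFFERS must be at least 2");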
[
{
"msg_contents": "*Description:*The connection fails with a non-blocking socket error when\nusing psql on\nWindows to connect to a PostgreSQL server with GSSAPI enabled. The error is\nbecause the socket error code is obtained by WSAGetLastError() instead of\nerrno. This causes the value of errno to be incorrect when handling a\nnon-blocking socket error.\n\n\n*Steps to Reproduce:*1. Compile PostgreSQL client (psql) on Windows.\na. Make sure MIT Kerberos is installed. I use the latest version MIT\nKerberos\nVersion 4.1.\nb. Make sure GSSAPI is enabled\n2. Attempt to connect to a PostgreSQL server using psql.\na. Set up the Kerberos server and configure the PostgreSQL server by\nreferring\nto https://github.com/50wu/kerberos-docker/blob/main/POSTGRES.README.md\nb. change the entry to hostgssenc on PostgreSQL server pg_hba.conf and\nrestart\nhostgssenc all all 0.0.0.0/0 gss include_realm=0\nkrb_realm=GPDB.KRB\nc. Use the following command to connect to the database server\npsql -h <hostname> -U \"postgres/krb5-service-example-com.example.com\" -d\npostgres\n3. The connection fails with a non-blocking socket error. The error is\nsomething like:\npsql: error: connection to server at \"xxx\", port 5432 failed:\n\n*Environment*:\nPostgreSQL version: 16.3\nOperating System: Windows 11\n\n\n*Fix Steps:*In the gss_read function of\nsrc/interfaces/libpq/fe-secure-gssapi.c, change the\ncheck of the error code to use the SOCK_ERRNO to make sure that EAGAIN,\nEWOULDBLOCK and EINTR can be properly handled on Windows and other\nplatforms.\n\nThe patch file is attached to this email, please review and consider\nmerging it to\nthe main code library.\n\nThanks,\nNing Wu",
"msg_date": "Tue, 30 Jul 2024 16:47:35 +0800",
"msg_from": "Ning <ning94803@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql client does not handle WSAEWOULDBLOCK on Windows"
},
{
"msg_contents": "I have not reproduce your test scenario, looking at code please see following comments:\r\n\r\nIf you check the function definition of pqsecure_raw_read() it actually do set errno like bellow\r\n\r\nSOCK_ERRNO_SET(result_errno);\r\nwhere result_errno = SOCK_ERRNO\r\n\r\nMeans anybody using those function pqsecure_raw_read/write, does not need to take care of portable ERRNO.\r\n\r\nRegards\r\nUmar Hayat\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 05 Aug 2024 15:17:38 +0000",
"msg_from": "Umar Hayat <postgresql.wizard@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql client does not handle WSAEWOULDBLOCK on Windows"
},
{
"msg_contents": "Hi Umar,\n\nIn the function of gss_read() if print the value of errno and SOCK_ERRNO\nseparately, I found the values are different:\n *ret = pqsecure_raw_read(conn, recv_buffer, length);\nif (*ret < 0)\n{\nprintf(\"errno: %d\\n\", errno);\nprintf(\"result_errno: %d\\n\", SOCK_ERRNO);\n...\n\nerrno: 0\nresult_errno: 10035\n\nAlso refer to the\nhttps://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-wsagetlasterror\n,\nIt shows that the Windows Sockets function does not set errno, but uses\nWSAGetLastError to report errors. And refer to the\nhttps://learn.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-recv\n,\nIf the function fails, it should call the WSAGetLastError to get the\nexpansion\nerror message. This further shows that the socket operation error will not\nbe\nreported through the errno.\n\nSo changing the error code check to use the SOCK_ERRNO instead of errno can\nbe\nproperly handled on both Windows and other platforms.\n\nTo reproduce the issue, I used the following version of Postgres and MIT\nKerberos:\nPostgreSQL version: 16.3\nMIT Kerberos Version 4.1\nOperating System: Windows 11\nVisual Studio 2022\n\n\nOn Tue, Jul 30, 2024 at 4:47 PM Ning <ning94803@gmail.com> wrote:\n\n>\n> *Description:*The connection fails with a non-blocking socket error when\n> using psql on\n> Windows to connect to a PostgreSQL server with GSSAPI enabled. The error is\n> because the socket error code is obtained by WSAGetLastError() instead of\n> errno. This causes the value of errno to be incorrect when handling a\n> non-blocking socket error.\n>\n>\n> *Steps to Reproduce:*1. Compile PostgreSQL client (psql) on Windows.\n> a. Make sure MIT Kerberos is installed. I use the latest version MIT\n> Kerberos\n> Version 4.1.\n> b. Make sure GSSAPI is enabled\n> 2. Attempt to connect to a PostgreSQL server using psql.\n> a. Set up the Kerberos server and configure the PostgreSQL server by\n> referring\n> to https://github.com/50wu/kerberos-docker/blob/main/POSTGRES.README.md\n> b. change the entry to hostgssenc on PostgreSQL server pg_hba.conf and\n> restart\n> hostgssenc all all 0.0.0.0/0 gss include_realm=0\n> krb_realm=GPDB.KRB\n> c. Use the following command to connect to the database server\n> psql -h <hostname> -U \"postgres/krb5-service-example-com.example.com\" -d\n> postgres\n> 3. The connection fails with a non-blocking socket error. The error is\n> something like:\n> psql: error: connection to server at \"xxx\", port 5432 failed:\n>\n> *Environment*:\n> PostgreSQL version: 16.3\n> Operating System: Windows 11\n>\n>\n> *Fix Steps:*In the gss_read function of\n> src/interfaces/libpq/fe-secure-gssapi.c, change the\n> check of the error code to use the SOCK_ERRNO to make sure that EAGAIN,\n> EWOULDBLOCK and EINTR can be properly handled on Windows and other\n> platforms.\n>\n> The patch file is attached to this email, please review and consider\n> merging it to\n> the main code library.\n>\n> Thanks,\n> Ning Wu\n>\n\nHi Umar,In the function of gss_read() if print the value of errno and SOCK_ERRNOseparately, I found the values are different: *ret = pqsecure_raw_read(conn, recv_buffer, length);if (*ret < 0){printf(\"errno: %d\\n\", errno);printf(\"result_errno: %d\\n\", SOCK_ERRNO);...errno: 0result_errno: 10035Also refer to thehttps://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-wsagetlasterror,It shows that the Windows Sockets function does not set errno, but usesWSAGetLastError to report errors. 
And refer to thehttps://learn.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-recv,If the function fails, it should call the WSAGetLastError to get the expansionerror message. This further shows that the socket operation error will not bereported through the errno.So changing the error code check to use the SOCK_ERRNO instead of errno can beproperly handled on both Windows and other platforms.To reproduce the issue, I used the following version of Postgres and MIT Kerberos:PostgreSQL version: 16.3MIT Kerberos Version 4.1Operating System: Windows 11Visual Studio 2022On Tue, Jul 30, 2024 at 4:47 PM Ning <ning94803@gmail.com> wrote:Description:The connection fails with a non-blocking socket error when using psql onWindows to connect to a PostgreSQL server with GSSAPI enabled. The error isbecause the socket error code is obtained by WSAGetLastError() instead oferrno. This causes the value of errno to be incorrect when handling anon-blocking socket error.Steps to Reproduce:1. Compile PostgreSQL client (psql) on Windows.a. Make sure MIT Kerberos is installed. I use the latest version MIT KerberosVersion 4.1.b. Make sure GSSAPI is enabled2. Attempt to connect to a PostgreSQL server using psql.a. Set up the Kerberos server and configure the PostgreSQL server by referringto https://github.com/50wu/kerberos-docker/blob/main/POSTGRES.README.mdb. change the entry to hostgssenc on PostgreSQL server pg_hba.conf and restarthostgssenc all all 0.0.0.0/0 gss include_realm=0 krb_realm=GPDB.KRBc. Use the following command to connect to the database serverpsql -h <hostname> -U \"postgres/krb5-service-example-com.example.com\" -d postgres3. The connection fails with a non-blocking socket error. The error is something like:psql: error: connection to server at \"xxx\", port 5432 failed:Environment:PostgreSQL version: 16.3Operating System: Windows 11Fix Steps:In the gss_read function of src/interfaces/libpq/fe-secure-gssapi.c, change thecheck of the error code to use the SOCK_ERRNO to make sure that EAGAIN,EWOULDBLOCK and EINTR can be properly handled on Windows and other platforms.The patch file is attached to this email, please review and consider merging it tothe main code library.Thanks,Ning Wu",
"msg_date": "Tue, 6 Aug 2024 16:01:42 +0800",
"msg_from": "Ning <ning94803@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql client does not handle WSAEWOULDBLOCK on Windows"
}
] |
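The fix described in this thread can be sketched as below. This is a reconstruction of the relevant spot in libpq's fe-secure-gssapi.c from the thread's description rather than the literal patch: the point is simply that the retry check has to read the portable SOCK_ERRNO, which on Windows maps to WSAGetLastError(), instead of plain errno, which the Windows socket layer does not set (WSAEWOULDBLOCK, 10035, is what comes back for a non-blocking socket).

	/* inside gss_read(): the raw read, then the non-blocking/interrupt check */
	*ret = pqsecure_raw_read(conn, recv_buffer, length);
	if (*ret < 0)
	{
		/*
		 * Use SOCK_ERRNO rather than errno so that WSAEWOULDBLOCK (and
		 * friends) on Windows are treated as "try again later", not as a
		 * hard connection failure.
		 */
		if (SOCK_ERRNO == EAGAIN || SOCK_ERRNO == EWOULDBLOCK ||
			SOCK_ERRNO == EINTR)
			return PGRES_POLLING_READING;
		else
			return PGRES_POLLING_FAILED;
	}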
[
{
"msg_contents": "Hi,\n\nI noticed $SUBJECT while working on something else:\n\n /*\n * Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING or\n * PORTAL_UTIL_SELECT query. (A cursor held past the end of its\n * transaction no longer has any active executor state.)\n */\n Tuplestorestate *holdStore; /* store for holdable cursors */\n MemoryContext holdContext; /* memory containing holdStore */\n\nWe do that for PORTAL_ONE_MOD_WITH as well, so the comment should be\n\"Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING,\nPORTAL_ONE_MOD_WITH, or PORTAL_UTIL_SELECT query.\". Attached is a\npatch for that.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 30 Jul 2024 19:39:27 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Comment in portal.h"
},
{
"msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n\n> Hi,\n>\n> I noticed $SUBJECT while working on something else:\n>\n> /*\n> * Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING or\n> * PORTAL_UTIL_SELECT query. (A cursor held past the end of its\n> * transaction no longer has any active executor state.)\n> */\n> Tuplestorestate *holdStore; /* store for holdable cursors */\n> MemoryContext holdContext; /* memory containing holdStore */\n>\n> We do that for PORTAL_ONE_MOD_WITH as well, so the comment should be\n> \"Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING,\n> PORTAL_ONE_MOD_WITH, or PORTAL_UTIL_SELECT query.\". Attached is a\n> patch for that.\n\nPatch looks good to me.\n\nAll the codes of PortalRun & FillPortalStore & PortalRunSelect are\nconsistent with this idea. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 07:55:39 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment in portal.h"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jul 31, 2024 at 8:55 AM Andy Fan <zhihuifan1213@163.com> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > I noticed $SUBJECT while working on something else:\n> >\n> > /*\n> > * Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING or\n> > * PORTAL_UTIL_SELECT query. (A cursor held past the end of its\n> > * transaction no longer has any active executor state.)\n> > */\n> > Tuplestorestate *holdStore; /* store for holdable cursors */\n> > MemoryContext holdContext; /* memory containing holdStore */\n> >\n> > We do that for PORTAL_ONE_MOD_WITH as well, so the comment should be\n> > \"Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING,\n> > PORTAL_ONE_MOD_WITH, or PORTAL_UTIL_SELECT query.\". Attached is a\n> > patch for that.\n>\n> Patch looks good to me.\n>\n> All the codes of PortalRun & FillPortalStore & PortalRunSelect are\n> consistent with this idea.\n\nPushed. Thanks for looking!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 1 Aug 2024 18:05:47 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Comment in portal.h"
}
] |
[
{
"msg_contents": "As part of the multithreading work, it'd be nice to get rid of as many \nglobal or static variables as possible. Remaining ones can be converted \nto thread locals as appropriate, but where possible, it's better to just \nget rid of them.\n\nHere are patches to get rid of a few static variables, by e.g. \nconverting them to regular local variables or palloc'd return values, as \nappropriate.\n\nThis doesn't move the needle much, but every little helps, and these \nseem like nice little changes in any case.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 30 Jul 2024 14:22:19 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Minor refactorings to eliminate some static buffers"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 7:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> As part of the multithreading work, it'd be nice to get rid of as many\n> global or static variables as possible. Remaining ones can be converted\n> to thread locals as appropriate, but where possible, it's better to just\n> get rid of them.\n>\n> Here are patches to get rid of a few static variables, by e.g.\n> converting them to regular local variables or palloc'd return values, as\n> appropriate.\n>\n> This doesn't move the needle much, but every little helps, and these\n> seem like nice little changes in any case.\n\nI spent a few minutes looking through these patches and they seem like\ngood cleanups. I couldn't think of a plausible reason why someone\nwould object to any of these.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Jul 2024 11:44:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
},
{
"msg_contents": "On 30/07/2024 18:44, Robert Haas wrote:\n> On Tue, Jul 30, 2024 at 7:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> As part of the multithreading work, it'd be nice to get rid of as many\n>> global or static variables as possible. Remaining ones can be converted\n>> to thread locals as appropriate, but where possible, it's better to just\n>> get rid of them.\n>>\n>> Here are patches to get rid of a few static variables, by e.g.\n>> converting them to regular local variables or palloc'd return values, as\n>> appropriate.\n>>\n>> This doesn't move the needle much, but every little helps, and these\n>> seem like nice little changes in any case.\n> \n> I spent a few minutes looking through these patches and they seem like\n> good cleanups. I couldn't think of a plausible reason why someone\n> would object to any of these.\n\nCommitted, thanks for having a look.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 30 Jul 2024 22:24:57 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
},
{
"msg_contents": "Here's another batch of little changes in the same vein. Mostly \nconverting static bufs that are never modified to consts.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 6 Aug 2024 18:13:56 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
},
{
"msg_contents": "Hi,\n\nOn 2024-08-06 18:13:56 +0300, Heikki Linnakangas wrote:\n> From 6dd5a4a413212a61d9a4f5b9db73e812c8b5dcbd Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Tue, 6 Aug 2024 17:58:29 +0300\n> Subject: [PATCH 1/5] Turn a few 'validnsps' static variables into locals\n> \n> There was no need for these to be static buffers, a local variable\n> works just as well. I think they were marked as 'static' to imply that\n> they are read-only, but 'const' is more appropriate for that, so\n> change them to const.\n\nI looked at these at some point in the past. Unfortunately compilers don't\nalways generate better code with const than static :( (the static\ninitialization can be done once in a global var, the const one not\nnecessarily). Arguably what you'd want here is both.\n\nI doubt that matters here though.\n\n\n> diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c\n> index 702a6c3a0b..2732d6bfc9 100644\n> --- a/src/backend/tcop/utility.c\n> +++ b/src/backend/tcop/utility.c\n> @@ -1155,7 +1155,7 @@ ProcessUtilitySlow(ParseState *pstate,\n> \t\t\t\t\t\t{\n> \t\t\t\t\t\t\tCreateStmt *cstmt = (CreateStmt *) stmt;\n> \t\t\t\t\t\t\tDatum\t\ttoast_options;\n> -\t\t\t\t\t\t\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> +\t\t\t\t\t\t\tconst char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n> \n> \t\t\t\t\t\t\t/* Remember transformed RangeVar for LIKE */\n> \t\t\t\t\t\t\ttable_rv = cstmt->relation;\n\nIn the other places you used \"const char * const\", here just \"const char *\" - it\ndoesn't look like that's a required difference?\n\n\n\n> From f108ae4c2ddfa6ca77e9736dc3fb20e6bda7b67c Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Tue, 6 Aug 2024 17:59:33 +0300\n> Subject: [PATCH 2/5] Make nullSemAction const, add 'const' decorators to\n> related functions\n\n> To make it more clear that these should never be modified.\n\nYep - and it reduces the size of writable mappings to boot.\n\nLGTM.\n\n\n> From da6f101b0ecc2d4e4b33bbcae316dbaf72e67d14 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Tue, 6 Aug 2024 17:59:45 +0300\n> Subject: [PATCH 3/5] Mark misc static global variables as const\n\nLGTM\n\n\n> From 5d562f15aaba0bb082e714e844995705f0ca1368 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Tue, 6 Aug 2024 17:59:52 +0300\n> Subject: [PATCH 4/5] Constify fields and parameters in spell.c\n> \n> I started by marking VoidString as const, and fixing the fallout by\n> marking more fields and function arguments as const. It proliferated\n> quite a lot, but all within spell.c and spell.h.\n> \n> A more narrow patch to get rid of the static VoidString buffer would\n> be to replace it with '#define VoidString \"\"', as C99 allows assigning\n> \"\" to a non-const pointer, even though you're not allowed to modify\n> it. But it seems like good hygiene to mark all these as const. 
In the\n> structs, the pointers can point to the constant VoidString, or a\n> buffer allocated with palloc(), or with compact_palloc(), so you\n> should not modify them.\n\nLooks reasonable to me.\n\n\n> From bb66efccf4f97d0001b730a1376845c0a19c7f27 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n> Date: Tue, 6 Aug 2024 18:00:01 +0300\n> Subject: [PATCH 5/5] Use psprintf to simplify gtsvectorout()\n> \n> The buffer allocation was correct, but looked archaic and scary:\n> \n> - It was weird to calculate the buffer size before determining which\n> format string was used. With the same effort, we could've used the\n> right-sized buffer for each branch.\n> \n> - Commit aa0d3504560 added one more possible return string (\"all true\n> bits\"), but didn't adjust the code at the top of the function to\n> calculate the returned string's max size. It was not a live bug,\n> because the new string was smaller than the existing ones, but\n> seemed wrong in principle.\n> \n> - Use of sprintf() is generally eyebrow-raising these days\n> \n> Switch to psprintf(). psprintf() allocates a larger buffer than what\n> was allocated before, 128 bytes vs 80 bytes, which is acceptable as\n> this code is not performance or space critical.\n\nI find the undocumented EXTRALEN the most confusing :)\n\nLGTM.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Aug 2024 08:52:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
},
{
"msg_contents": "On 06/08/2024 18:52, Andres Freund wrote:\n> On 2024-08-06 18:13:56 +0300, Heikki Linnakangas wrote:\n>> diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c\n>> index 702a6c3a0b..2732d6bfc9 100644\n>> --- a/src/backend/tcop/utility.c\n>> +++ b/src/backend/tcop/utility.c\n>> @@ -1155,7 +1155,7 @@ ProcessUtilitySlow(ParseState *pstate,\n>> \t\t\t\t\t\t{\n>> \t\t\t\t\t\t\tCreateStmt *cstmt = (CreateStmt *) stmt;\n>> \t\t\t\t\t\t\tDatum\t\ttoast_options;\n>> -\t\t\t\t\t\t\tstatic char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n>> +\t\t\t\t\t\t\tconst char *validnsps[] = HEAP_RELOPT_NAMESPACES;\n>> \n>> \t\t\t\t\t\t\t/* Remember transformed RangeVar for LIKE */\n>> \t\t\t\t\t\t\ttable_rv = cstmt->relation;\n> \n> In the other places you used \"const char * const\", here just \"const char *\" - it\n> doesn't look like that's a required difference?\n\nJust an oversight.\n\nI went back and forth on whether to use plain \"char *validnps[]\", \"const \nchar *validnsps[]\" or \"const char *const validnsps[]\". The first form \ndoesn't convey the fact it's read-only, like the \"static\" used to. The \nsecond form hints that, but only for the strings, not for the pointers \nin the array. The last form is what we want, but it's a bit verbose and \nugly. I chose the last form in the end, but missed this one.\n\nFixed that and pushed. Thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 6 Aug 2024 23:12:12 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
},
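A minimal C sketch of the three declaration forms weighed above; the initializer and function names here are made-up stand-ins for illustration, not the real HEAP_RELOPT_NAMESPACES contents:

#include <stdio.h>
#include <stddef.h>

#define DEMO_NAMESPACES { "toast", NULL }   /* hypothetical stand-in */

static void
demo(void)
{
    char       *plain[] = DEMO_NAMESPACES;              /* conveys nothing about read-only */
    const char *strings_ro[] = DEMO_NAMESPACES;         /* pointed-to strings are const */
    const char *const both_ro[] = DEMO_NAMESPACES;      /* strings and array slots are const */

    plain[0] = "other";         /* compiles */
    strings_ro[0] = "other";    /* compiles: only the characters are const */
    /* both_ro[0] = "other"; */ /* error: assignment of read-only location */

    printf("%s %s %s\n", plain[0], strings_ro[0], both_ro[0]);
}

int
main(void)
{
    demo();
    return 0;
}

Declaring the array "static const" as well, the "both" Andres alludes to, would additionally let the compiler materialize it once in read-only storage instead of rebuilding it on every call.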
{
"msg_contents": "On 06.08.24 17:13, Heikki Linnakangas wrote:\n > --- a/src/backend/access/transam/xlogprefetcher.c\n > +++ b/src/backend/access/transam/xlogprefetcher.c\n > @@ -362,7 +362,7 @@ XLogPrefetcher *\n > XLogPrefetcherAllocate(XLogReaderState *reader)\n > {\n > XLogPrefetcher *prefetcher;\n > - static HASHCTL hash_table_ctl = {\n > + const HASHCTL hash_table_ctl = {\n\nIs there a reason this is not changed to\n\nstatic const HASHCTL ...\n\n? Most other places where changed in that way.\n\n\n\n",
"msg_date": "Tue, 6 Aug 2024 22:41:45 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
},
{
"msg_contents": "On 06/08/2024 23:41, Peter Eisentraut wrote:\n> On 06.08.24 17:13, Heikki Linnakangas wrote:\n> > --- a/src/backend/access/transam/xlogprefetcher.c\n> > +++ b/src/backend/access/transam/xlogprefetcher.c\n> > @@ -362,7 +362,7 @@ XLogPrefetcher *\n> > XLogPrefetcherAllocate(XLogReaderState *reader)\n> > {\n> > XLogPrefetcher *prefetcher;\n> > - static HASHCTL hash_table_ctl = {\n> > + const HASHCTL hash_table_ctl = {\n> \n> Is there a reason this is not changed to\n> \n> static const HASHCTL ...\n> \n> ? Most other places where changed in that way.\n\nNo particular reason. Grepping for HASHCTL's, this is actually different \nfrom all other uses of HASHCTL and hash_create. All others use a plain \nlocal variable, and fill the fields like this:\n\n HASHCTL hash_ctl;\n\n hash_ctl.keysize = sizeof(missing_cache_key);\n hash_ctl.entrysize = sizeof(missing_cache_key);\n hash_ctl.hcxt = TopMemoryContext;\n hash_ctl.hash = missing_hash;\n hash_ctl.match = missing_match;\n\nI think that's just because we haven't allowed C99 designated \ninitializers for very long.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 7 Aug 2024 01:15:57 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Minor refactorings to eliminate some static buffers"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on rebasing the patches of Neon's fork onto the\nREL_17_STABLE branch, I noticed that the nblocks arguments of various\nsmgr functions have inconsistent types: smgrzeroextend accepts\n`nblocks` as signed integer, as does the new signature for\nsmgrprefetch, but the new vectorized operations of *readv and *writev,\nand the older *writeback all use an unsigned BlockNumber as indicator\nfor number of blocks.\n\nCan we update the definition to be consistent across this (new, or\nalso older) API? As far as I can see, in none of these cases are\nnegative numbers allowed or expected, so updating this all to be\nconsistently BlockNumber across the API seems like a straigthforward\npatch.\n\ncc-ed Thomas as committer of the PG17 smgr API changes.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 30 Jul 2024 13:24:04 +0200",
"msg_from": "Matthias van de Meent <boekewurm@gmail.com>",
"msg_from_op": true,
"msg_subject": "PG17beta2: SMGR: inconsistent type for nblocks"
},
{
"msg_contents": "On Tue, Jul 30, 2024 at 11:24 PM Matthias van de Meent\n<boekewurm@gmail.com> wrote:\n> While working on rebasing the patches of Neon's fork onto the\n> REL_17_STABLE branch, I noticed that the nblocks arguments of various\n> smgr functions have inconsistent types: smgrzeroextend accepts\n> `nblocks` as signed integer, as does the new signature for\n> smgrprefetch, but the new vectorized operations of *readv and *writev,\n> and the older *writeback all use an unsigned BlockNumber as indicator\n> for number of blocks.\n>\n> Can we update the definition to be consistent across this (new, or\n> also older) API? As far as I can see, in none of these cases are\n> negative numbers allowed or expected, so updating this all to be\n> consistently BlockNumber across the API seems like a straigthforward\n> patch.\n>\n> cc-ed Thomas as committer of the PG17 smgr API changes.\n\nHi Matthias,\n\nYeah, right, I noticed that once myself[1]. For the cases from my\nkeyboard, I guess I was trying to be consistent with nearby existing\nstuff in each case, which was already inconsistent... Do you have a\npatch?\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLx5bLwezZKAYB2O_qHj%3Dov10RpgRVY7e8TSJVE74oVjg%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 00:33:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG17beta2: SMGR: inconsistent type for nblocks"
},
{
"msg_contents": "On Tue, 30 Jul 2024 at 14:32, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Jul 30, 2024 at 11:24 PM Matthias van de Meent\n> <boekewurm@gmail.com> wrote:\n> > While working on rebasing the patches of Neon's fork onto the\n> > REL_17_STABLE branch, I noticed that the nblocks arguments of various\n> > smgr functions have inconsistent types: smgrzeroextend accepts\n> > `nblocks` as signed integer, as does the new signature for\n> > smgrprefetch, but the new vectorized operations of *readv and *writev,\n> > and the older *writeback all use an unsigned BlockNumber as indicator\n> > for number of blocks.\n> >\n> > Can we update the definition to be consistent across this (new, or\n> > also older) API? As far as I can see, in none of these cases are\n> > negative numbers allowed or expected, so updating this all to be\n> > consistently BlockNumber across the API seems like a straigthforward\n> > patch.\n> >\n> > cc-ed Thomas as committer of the PG17 smgr API changes.\n>\n> Yeah, right, I noticed that once myself[1]. For the cases from my\n> keyboard, I guess I was trying to be consistent with nearby existing\n> stuff in each case, which was already inconsistent... Do you have a\n> patch?\n\nHere's one that covers both master and the v17 backbranch.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Thu, 1 Aug 2024 12:45:16 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG17beta2: SMGR: inconsistent type for nblocks"
},
{
"msg_contents": "Hi,\n\nOn 2024-08-01 12:45:16 +0200, Matthias van de Meent wrote:\n> On Tue, 30 Jul 2024 at 14:32, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Tue, Jul 30, 2024 at 11:24 PM Matthias van de Meent\n> > <boekewurm@gmail.com> wrote:\n> > > While working on rebasing the patches of Neon's fork onto the\n> > > REL_17_STABLE branch, I noticed that the nblocks arguments of various\n> > > smgr functions have inconsistent types: smgrzeroextend accepts\n> > > `nblocks` as signed integer, as does the new signature for\n> > > smgrprefetch, but the new vectorized operations of *readv and *writev,\n> > > and the older *writeback all use an unsigned BlockNumber as indicator\n> > > for number of blocks.\n> > >\n> > > Can we update the definition to be consistent across this (new, or\n> > > also older) API? As far as I can see, in none of these cases are\n> > > negative numbers allowed or expected, so updating this all to be\n> > > consistently BlockNumber across the API seems like a straigthforward\n> > > patch.\n> > >\n> > > cc-ed Thomas as committer of the PG17 smgr API changes.\n> >\n> > Yeah, right, I noticed that once myself[1]. For the cases from my\n> > keyboard, I guess I was trying to be consistent with nearby existing\n> > stuff in each case, which was already inconsistent... Do you have a\n> > patch?\n> \n> Here's one that covers both master and the v17 backbranch.\n\nFWIW, I find it quite ugly to use BlockNumber to indicate the number of blocks\nto be written. It's just further increasing the type confusion by conflating\n\"the first block to be targeted\" and \"number of blocks\".\n\n\n> diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> index 6796756358..1d02766978 100644\n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -523,11 +523,11 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n> */\n> void\n> mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n> -\t\t\t BlockNumber blocknum, int nblocks, bool skipFsync)\n> +\t\t\t BlockNumber blocknum, BlockNumber nblocks, bool skipFsync)\n> {\n> \tMdfdVec *v;\n> \tBlockNumber curblocknum = blocknum;\n> -\tint\t\t\tremblocks = nblocks;\n> +\tint64\t\tremblocks = nblocks;\n> \n> \tAssert(nblocks > 0);\n\nIsn't this particularly bogus? What's the point of using a 64bit remblocks\nhere?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2024 09:44:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG17beta2: SMGR: inconsistent type for nblocks"
},
{
"msg_contents": "Hi,\n\nOn Thu, 1 Aug 2024 at 18:44, Andres Freund <andres@anarazel.de> wrote:\n> On 2024-08-01 12:45:16 +0200, Matthias van de Meent wrote:\n> > Here's one that covers both master and the v17 backbranch.\n>\n> FWIW, I find it quite ugly to use BlockNumber to indicate the number of blocks\n> to be written. It's just further increasing the type confusion by conflating\n> \"the first block to be targeted\" and \"number of blocks\".\n\nIIf BlockNumber doesn't do it for you, then between plain uint32 and\nint64, which would you prefer? int itself doesn't allow syncing of all\nblocks of a relation's fork, so that's out for me.\n\n> > diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> > index 6796756358..1d02766978 100644\n> > --- a/src/backend/storage/smgr/md.c\n> > +++ b/src/backend/storage/smgr/md.c\n> > @@ -523,11 +523,11 @@ mdextend(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n> > */\n> > void\n> > mdzeroextend(SMgrRelation reln, ForkNumber forknum,\n> > - BlockNumber blocknum, int nblocks, bool skipFsync)\n> > + BlockNumber blocknum, BlockNumber nblocks, bool skipFsync)\n> > {\n> > MdfdVec *v;\n> > BlockNumber curblocknum = blocknum;\n> > - int remblocks = nblocks;\n> > + int64 remblocks = nblocks;\n> >\n> > Assert(nblocks > 0);\n>\n> Isn't this particularly bogus? What's the point of using a 64bit remblocks\n> here?\n\nTo prevent underflows in the loop below, if any would happen to exist.\nCould've been BlockNumber too, but I went with a slightly more\ndefensive approach.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 1 Aug 2024 19:29:18 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG17beta2: SMGR: inconsistent type for nblocks"
}
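A small standalone sketch (invented names, not the md.c code) of the defensiveness being described: an unsigned countdown wraps around on over-subtraction and keeps looping, while a 64-bit signed counter merely goes negative and terminates.

#include <stdint.h>
#include <stdio.h>

/* Unsigned countdown: must clamp the step, or "remblocks -= step" wraps. */
static uint32_t
countdown_unsigned(uint32_t nblocks, uint32_t step)
{
    uint32_t remblocks = nblocks;

    while (remblocks > 0)
    {
        uint32_t chunk = (step > remblocks) ? remblocks : step;

        remblocks -= chunk;
    }
    return remblocks;
}

/* int64 countdown: an overshoot just goes negative, so the loop still ends. */
static int64_t
countdown_signed(uint32_t nblocks, uint32_t step)
{
    int64_t remblocks = (int64_t) nblocks;

    while (remblocks > 0)
        remblocks -= step;
    return remblocks;
}

int
main(void)
{
    printf("%u %lld\n",
           (unsigned) countdown_unsigned(10, 3),
           (long long) countdown_signed(10, 3));
    return 0;
}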
] |
[
{
"msg_contents": "Sometime in the last month or so, flaviventris's bleeding-edge\nversion of gcc has started whining[1] about truncation of a\nstring literal's implicit trailing '\\0' in contexts like this:\n\n../pgsql/src/backend/commands/copyto.c:106:41: warning: initializer-string for array of 'char' is too long [-Wunterminated-string-initialization]\n 106 | static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n | ^~~~~~~~~~~~~~~~~~~~\n\n../pgsql/src/backend/utils/adt/numutils.c:29:1: warning: initializer-string for array of 'char' is too long [-Wunterminated-string-initialization]\n 29 | \"00\" \"01\" \"02\" \"03\" \"04\" \"05\" \"06\" \"07\" \"08\" \"09\"\n | ^~~~\n\nPresumably this'll appear in less-bleeding-edge releases in a\nfew months' time.\n\nIn the BinarySignature case, we could silence it in at least two ways.\nWe could remove the explicit trailing \\0 and rely on the implicit one:\n\n-static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n+static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\";\n\nOr just drop the unnecessary array length specification:\n\n-static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n+static const char BinarySignature[] = \"PGCOPY\\n\\377\\r\\n\\0\";\n\nOr we could do both things, but that feels perhaps too magic.\n\nIn the numutils.c case, I think that dropping the array length\nspec is the only reasonable fix, since the last desired character\nisn't a \\0:\n\n-static const char DIGIT_TABLE[200] =\n+static const char DIGIT_TABLE[] =\n \"00\" \"01\" \"02\" \"03\" \"04\" \"05\" \"06\" \"07\" \"08\" \"09\"\n ...\n \"90\" \"91\" \"92\" \"93\" \"94\" \"95\" \"96\" \"97\" \"98\" \"99\";\n\nThere is one more similar complaint:\n\n../pgsql/contrib/fuzzystrmatch/daitch_mokotoff.c:92:20: warning: initializer-string for array of 'char' is too long [-Wunterminated-string-initialization]\n 92 | .soundex = \"000000\", /* Six digits */\n | ^~~~~~~~\n\nHere, the struct field is declared\n\n\tchar\t\tsoundex[DM_CODE_DIGITS];\t/* Soundex code */\n\nand we probably don't want to mess with that. However, elsewhere in\nthe same struct initialization I see\n\n\tchar\t\tprev_code_digits[2];\n\tchar\t\tnext_code_digits[2];\n\n...\n\n\t.prev_code_digits = {'\\0', '\\0'},\n\t.next_code_digits = {'\\0', '\\0'},\n\nand that's *not* drawing a warning. So the most plausible fix\nseems to be\n\n- .soundex = \"000000\", /* Six digits */\n+ .soundex = {'0', '0', '0', '0', '0', '0'}, /* Six digits */\n\n(In principle, we could fix the COPY and numutils cases the same\nway, but I don't care for the readability loss that'd entail.)\n\nPreferences, other suggestions?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=flaviventris&dt=2024-07-30%2012%3A29%3A41&stg=build\n\n\n",
"msg_date": "Tue, 30 Jul 2024 12:19:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "New compiler warnings in buildfarm"
},
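A compact sketch of the initializer variants discussed above; the variable names are arbitrary, only the shapes matter:

#include <stdio.h>

/* 11 explicit chars; the implicit terminator does not fit, which is what
 * the new -Wunterminated-string-initialization warning complains about,
 * even though the stored bytes do end in the explicit '\0'. */
static const char sig_fixed[11] = "PGCOPY\n\377\r\n\0";

/* 10 chars plus the implicit '\0' fit exactly in 11 bytes: no warning. */
static const char sig_implicit[11] = "PGCOPY\n\377\r\n";

/* No length given: the compiler sizes the array to 12, including the
 * implicit terminator, so nothing is truncated. */
static const char sig_unsized[] = "PGCOPY\n\377\r\n\0";

/* For a char field that is intentionally not NUL-terminated, a brace
 * initializer avoids the string-literal form entirely. */
static const char soundex_demo[6] = {'0', '0', '0', '0', '0', '0'};

int
main(void)
{
    printf("%zu %zu %zu %zu\n",
           sizeof(sig_fixed), sizeof(sig_implicit),
           sizeof(sig_unsized), sizeof(soundex_demo));
    return 0;
}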
{
"msg_contents": "Em ter., 30 de jul. de 2024 às 13:19, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Sometime in the last month or so, flaviventris's bleeding-edge\n> version of gcc has started whining[1] about truncation of a\n> string literal's implicit trailing '\\0' in contexts like this:\n>\n> ../pgsql/src/backend/commands/copyto.c:106:41: warning: initializer-string\n> for array of 'char' is too long [-Wunterminated-string-initialization]\n> 106 | static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n> | ^~~~~~~~~~~~~~~~~~~~\n>\n> ../pgsql/src/backend/utils/adt/numutils.c:29:1: warning:\n> initializer-string for array of 'char' is too long\n> [-Wunterminated-string-initialization]\n> 29 | \"00\" \"01\" \"02\" \"03\" \"04\" \"05\" \"06\" \"07\" \"08\" \"09\"\n> | ^~~~\n>\n> Presumably this'll appear in less-bleeding-edge releases in a\n> few months' time.\n>\n> In the BinarySignature case, we could silence it in at least two ways.\n> We could remove the explicit trailing \\0 and rely on the implicit one:\n>\n> -static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n> +static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\";\n>\n> Or just drop the unnecessary array length specification:\n>\n+1 for dropping the length specification.\nThe trailing \\0 the compiler will automatically fill in.\nNote this came from copyfromparse.c, who also have the same problem.\n\nbest regards,\nRanier Vilela\n\nEm ter., 30 de jul. de 2024 às 13:19, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Sometime in the last month or so, flaviventris's bleeding-edge\nversion of gcc has started whining[1] about truncation of a\nstring literal's implicit trailing '\\0' in contexts like this:\n\n../pgsql/src/backend/commands/copyto.c:106:41: warning: initializer-string for array of 'char' is too long [-Wunterminated-string-initialization]\n 106 | static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n | ^~~~~~~~~~~~~~~~~~~~\n\n../pgsql/src/backend/utils/adt/numutils.c:29:1: warning: initializer-string for array of 'char' is too long [-Wunterminated-string-initialization]\n 29 | \"00\" \"01\" \"02\" \"03\" \"04\" \"05\" \"06\" \"07\" \"08\" \"09\"\n | ^~~~\n\nPresumably this'll appear in less-bleeding-edge releases in a\nfew months' time.\n\nIn the BinarySignature case, we could silence it in at least two ways.\nWe could remove the explicit trailing \\0 and rely on the implicit one:\n\n-static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n+static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\";\n\nOr just drop the unnecessary array length specification:+1 for dropping the length specification.The trailing \\0 the compiler will automatically fill in. Note this came from copyfromparse.c, who also have the same problem.best regards,Ranier Vilela",
"msg_date": "Tue, 30 Jul 2024 13:56:16 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warnings in buildfarm"
},
{
"msg_contents": "On 30.07.24 18:19, Tom Lane wrote:\n> Sometime in the last month or so, flaviventris's bleeding-edge\n> version of gcc has started whining[1] about truncation of a\n> string literal's implicit trailing '\\0' in contexts like this:\n> \n> ../pgsql/src/backend/commands/copyto.c:106:41: warning:\n> initializer-string for array of 'char' is too long\n> [-Wunterminated-string-initialization]\n> 106 | static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n> | ^~~~~~~~~~~~~~~~~~~~\n> \n> ../pgsql/src/backend/utils/adt/numutils.c:29:1: warning:\n> initializer-string for array of 'char' is too long\n> [-Wunterminated-string-initialization]\n> 29 | \"00\" \"01\" \"02\" \"03\" \"04\" \"05\" \"06\" \"07\" \"08\" \"09\"\n> | ^~~~\n> \n> Presumably this'll appear in less-bleeding-edge releases in a\n> few months' time.\n\nAccording to the gcc documentation, this warning is part of -Wextra. \nAnd indeed flaviventris runs with -Wextra:\n\n'CFLAGS' => '-O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra \n-Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers \n-O0',\n\nSo I think the appropriate fix here for now is to add \n-Wno-unterminated-string-initialization to this buildfarm configuration.\n\nMaybe we find this warning useful, in which case we should add \n-Wunterminated-string-initialization to the standard set of warning \noptions before undertaking code changes. But gcc-15 is still about a \nyear away from being released, so it seems too early for that.\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 07:37:53 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warnings in buildfarm"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 30.07.24 18:19, Tom Lane wrote:\n>> Sometime in the last month or so, flaviventris's bleeding-edge\n>> version of gcc has started whining[1] about truncation of a\n>> string literal's implicit trailing '\\0' in contexts like this:\n>> ../pgsql/src/backend/commands/copyto.c:106:41: warning:\n>> initializer-string for array of 'char' is too long\n>> [-Wunterminated-string-initialization]\n>> 106 | static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n>> | ^~~~~~~~~~~~~~~~~~~~\n\n> According to the gcc documentation, this warning is part of -Wextra. \n> And indeed flaviventris runs with -Wextra:\n\n> 'CFLAGS' => '-O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra \n> -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers \n> -O0',\n\nAh --- and it was not doing that a month ago. So maybe the compiler\nhas had this warning for longer. I don't see it with gcc 13.3 though,\nwhich is the newest I have handy.\n\n> So I think the appropriate fix here for now is to add \n> -Wno-unterminated-string-initialization to this buildfarm configuration.\n\nAgreed; our policy so far has been to treat -Wextra warnings with\nsuspicion, and there is not anything really wrong with these bits\nof code.\n\nIt looks like serinus needs this fix too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2024 10:11:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: New compiler warnings in buildfarm"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-31 10:11:07 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter@eisentraut.org> writes:\n> > On 30.07.24 18:19, Tom Lane wrote:\n> >> Sometime in the last month or so, flaviventris's bleeding-edge\n> >> version of gcc has started whining[1] about truncation of a\n> >> string literal's implicit trailing '\\0' in contexts like this:\n> >> ../pgsql/src/backend/commands/copyto.c:106:41: warning:\n> >> initializer-string for array of 'char' is too long\n> >> [-Wunterminated-string-initialization]\n> >> 106 | static const char BinarySignature[11] = \"PGCOPY\\n\\377\\r\\n\\0\";\n> >> | ^~~~~~~~~~~~~~~~~~~~\n>\n> > According to the gcc documentation, this warning is part of -Wextra.\n> > And indeed flaviventris runs with -Wextra:\n>\n> > 'CFLAGS' => '-O1 -ggdb -g3 -fno-omit-frame-pointer -Wall -Wextra\n> > -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers\n> > -O0',\n>\n> Ah --- and it was not doing that a month ago.\n\nHm? I've not touched flaviventris config since at least the 26th of March. And\na buildfarm run from before then also shows -Wextra:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2024-03-17%2011%3A17%3A01\n\n\n> So maybe the compiler has had this warning for longer.\n\nIt's very new:\n\ncommit 44c9403ed1833ae71a59e84f9e37af3182be0df5\nAuthor: Alejandro Colomar <alx@kernel.org>\nAuthorDate: 2024-06-29 15:10:43 +0200\nCommit: Martin Uecker <uecker@gcc.gnu.org>\nCommitDate: 2024-07-14 11:41:00 +0200\n\n c, objc: Add -Wunterminated-string-initialization\n\n\nIt might be worth piping up in the gcc bugtracker and suggesting that the\nwarning isn't issued when there's an explicit \\0?\n\n\n> > So I think the appropriate fix here for now is to add\n> > -Wno-unterminated-string-initialization to this buildfarm configuration.\n>\n> Agreed; our policy so far has been to treat -Wextra warnings with\n> suspicion, and there is not anything really wrong with these bits\n> of code.\n>\n> It looks like serinus needs this fix too.\n\nAdded to both. I've forced runs for both animals, so the bf should show\nresults of that soon.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2024 11:32:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warnings in buildfarm"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-31 11:32:56 -0700, Andres Freund wrote:\n> On 2024-07-31 10:11:07 -0400, Tom Lane wrote:\n> > It looks like serinus needs this fix too.\n>\n> Added to both. I've forced runs for both animals, so the bf should show\n> results of that soon.\n\nI Wonder if I should also should add -Wno-clobbered to serinus' config. Afaict\n-Wclobbered is pretty useless once optimizations are used. I've long added\nthat to my local dev environment flags because it's so noisy (which is too\nbad, in theory a good warning for this would be quite helpful).\n\nOr whether we should just do that on a project level?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2024 11:39:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warnings in buildfarm"
},
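For context, a tiny self-contained example (hypothetical code, not from the tree) of the situation -Wclobbered tries to catch: a local modified between setjmp() and longjmp() can come back as a stale register copy unless it is declared volatile.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void
do_error(void)
{
    longjmp(env, 1);
}

static int
demo(void)
{
    /* Without "volatile", gcc may warn: variable 'state' might be
     * clobbered by 'longjmp' or 'vfork'; with optimization the value
     * observed after the longjmp is unspecified. */
    volatile int state = 0;

    if (setjmp(env) == 0)
    {
        state = 1;              /* modified after setjmp() ... */
        do_error();             /* ... then we jump back */
    }

    return state;               /* reliably 1 only because state is volatile */
}

int
main(void)
{
    printf("state = %d\n", demo());
    return 0;
}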
{
"msg_contents": "On 31.07.24 20:39, Andres Freund wrote:\n> On 2024-07-31 11:32:56 -0700, Andres Freund wrote:\n>> On 2024-07-31 10:11:07 -0400, Tom Lane wrote:\n>>> It looks like serinus needs this fix too.\n>>\n>> Added to both. I've forced runs for both animals, so the bf should show\n>> results of that soon.\n> \n> I Wonder if I should also should add -Wno-clobbered to serinus' config. Afaict\n> -Wclobbered is pretty useless once optimizations are used. I've long added\n> that to my local dev environment flags because it's so noisy (which is too\n> bad, in theory a good warning for this would be quite helpful).\n\nIt's unclear to me what to make of this. We have in the past fixed a \nnumber of these, IIRC, and clearly in theory the risk that the warning \npoints out does exist. But these warnings appear erratic and \ninconsistent. I'm using the same compiler versions but I don't see any \nof these warnings. So I don't understand exactly what triggers these.\n\n\n\n",
"msg_date": "Fri, 2 Aug 2024 19:22:02 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: New compiler warnings in buildfarm"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 31.07.24 20:39, Andres Freund wrote:\n>> I Wonder if I should also should add -Wno-clobbered to serinus' config. Afaict\n>> -Wclobbered is pretty useless once optimizations are used. I've long added\n>> that to my local dev environment flags because it's so noisy (which is too\n>> bad, in theory a good warning for this would be quite helpful).\n\n> It's unclear to me what to make of this. We have in the past fixed a \n> number of these, IIRC, and clearly in theory the risk that the warning \n> points out does exist. But these warnings appear erratic and \n> inconsistent. I'm using the same compiler versions but I don't see any \n> of these warnings. So I don't understand exactly what triggers these.\n\nYeah, -Wclobbered's results seem to vary quite a lot across different\ncompiler versions and perhaps different compiler options. I'd be more\nexcited about trying to silence it if there were some consistency to\nthe reports, but there's not that much; plus, we've never seen any\nevidence that the reports from the noisier compilers correspond to\nreal bugs.\n\nJust for context, here's a quick count of -Wclobbered warnings in\nthe buildfarm:\n\n 71 calliphoridae\n 66 canebrake\n 71 culicidae\n 67 grassquit\n 65 serinus\n 89 skink\n 66 taipan\n 68 tamandua\n\nThe other hundred-plus animals report zero such warnings.\n\nI also tried slicing the data by the variable being complained of:\n\n$ grep 'Wclobbered' currentwarnings | sed -e 's/.*: argument //' -e 's/.*: variable //' | awk '{print $1}' | sort | uniq -c\n\n 118 '_do_rethrow'\n 24 '_do_rethrow2'\n 8 'arrayname'\n 6 'bump_level'\n 1 'cell__state'\n 7 'commandCollected'\n 8 'commands'\n 3 'cstr'\n 6 'cur_datname'\n 6 'cur_nspname'\n 6 'cur_relname'\n 7 'data'\n 2 'dboid'\n 8 'dbstrategy'\n 8 'elevel'\n 14 'error'\n 1 'fd'\n 8 'found_concurrent_worker'\n 4 'ft_htab'\n 8 'has_pending_wal'\n 8 'ib'\n 8 'import_collate'\n 8 'import_default'\n 8 'import_generated'\n 8 'import_not_null'\n 1 'is_program'\n 1 'iter'\n 8 'loop_body'\n 8 'method'\n 6 'named_arg_strings'\n 7 'nulls'\n 5 'objname'\n 1 'options'\n 7 'params'\n 8 'primary'\n 8 'processed'\n 8 'rel'\n 8 'relations'\n 8 'relids_logged'\n 8 'reltuples'\n 44 'result'\n 8 'retry'\n 17 'retval'\n 16 'rv'\n 6 'seq_relids'\n 8 'sqlstate'\n 8 'stats'\n 3 'success'\n 8 'switch_lsn'\n 8 'switch_to_superuser'\n 8 'sync_slotname'\n 5 'tab'\n 7 'table_oids'\n 8 'tb'\n 6 'update_failover'\n 2 'update_tuple'\n 8 'update_two_phase'\n 8 'vob'\n\nThat shows that the apparent similarity of the total number of reports\nper animal is illusory: there are some that all eight animals agree\non, but a lot where they don't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 15:00:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: New compiler warnings in buildfarm"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAfter the patch [1] that adds a path column to pg_backend_memory_context,\nthe parent context can also be found in the path array. Since there are\ncurrently two ways to retrieve information related to the parent of a\ncontext, I wonder whether we still want to keep the parent column.\n\nThe path column represents the path from TopMemoryContext to the current\nmemory context. There is always \"level\" number of elements in a path array\nfor any memory context. The first element in the array is TopMemoryContext,\nand the last element (path[level]) is the current memory context. The\npath[level-1] element will simply show us the parent context ID.\n\nI understand that having the parent name instead of the transient parent\ncontext ID can be easier to use in some cases. While I suspect that the\nmemory contexts most users are interested in are close to\nTopMemoryContext—which means their context IDs are much less likely to\nchange with each execution—it's still not guaranteed.\n\nI'm also unsure how common it is to use or rely on the parent column. I\nquickly searched here [2] to see how pg_backend_memory_context is used.\nThere are a few places where the parent column is used in extensions. I\nbelieve these places should be easy to update if we decide to remove the\nparent column.\n\nAttached is a patch to remove parent from the view.\n\n[1]\nhttps://www.postgresql.org/message-id/CAGPVpCThLyOsj3e_gYEvLoHkr5w%3DtadDiN_%3Dz2OwsK3VJppeBA%40mail.gmail.com\n[2]\nhttps://codesearch.debian.net/search?q=pg_backend_memory_context&literal=1&page=3\n\n\nRegards,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Tue, 30 Jul 2024 20:19:30 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 05:19, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> After the patch [1] that adds a path column to pg_backend_memory_context, the parent context can also be found in the path array. Since there are currently two ways to retrieve information related to the parent of a context, I wonder whether we still want to keep the parent column.\n\nMy vote is to remove it.\n\nI think the parent column is only maybe useful as a rough visual\nindication of what the parent is. It's dangerous to assume using it\nis a reliable way to write a recursive query:\n\nwith recursive contexts as (\n select name, ident, level, path, parent from pg_backend_memory_contexts\n),\nc as (\n select path[level] as context_id, NULL::int as parent_id,* from\ncontexts where parent is null\n union all\n select c1.path[c1.level], c.context_id,c1.* from contexts c1 inner\njoin c on c.name = c1.parent\n)\nselect count(*) as all_including_false_dups, count(distinct\ncontext_id) as unique from c;\n\n all_including_false_dups | unique\n--------------------------+--------\n 159 | 150\n\nSo, with the backend in the state I had it in during this query, the\nrecursive query shows 9 additional contexts because the recursive\nquery joining parent to name found a false parent with a name matching\nthe actual parent because the names are not unique. Given that I\ndidn't do anything special to create contexts with duplicate names, it\nseems duplicates are not rare.\n\nselect name,count(*) from pg_backend_memory_contexts group by 1 order\nby 2 desc limit 3;\n name | count\n-------------+-------\n index info | 94\n dynahash | 15\n ExprContext | 7\n(3 rows)\n\nI think the first two of the above won't have any children, but the\nExprContext ones can.\n\nDavid\n\n\n",
"msg_date": "Wed, 31 Jul 2024 12:21:22 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 31 Jul 2024 at 05:19, Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>> After the patch [1] that adds a path column to pg_backend_memory_context, the parent context can also be found in the path array. Since there are currently two ways to retrieve information related to the parent of a context, I wonder whether we still want to keep the parent column.\n\n> My vote is to remove it.\n\nWhile it's certainly somewhat redundant now, removing it would break\nany application queries that are using the column. Simply adding\na column in a system view is a much easier sell than replacing or\nremoving one.\n\nPerhaps you can make an argument that nobody would be depending\non that column, but I fear that's wishful thinking. Or maybe you\ncan argue that any query using it is already broken --- but I\nthink that's only true if someone tries to do the specific sort\nof recursive traversal that you illustrated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2024 20:35:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 12:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps you can make an argument that nobody would be depending\n> on that column, but I fear that's wishful thinking. Or maybe you\n> can argue that any query using it is already broken --- but I\n> think that's only true if someone tries to do the specific sort\n> of recursive traversal that you illustrated.\n\nIt's true that people could be using it, I certainly don't dispute\nthat. It's just we don't have any rule that we can't do this sort of\nthing. Take f66e8bf87, for example. It removed relhaspkey from\npg_class. It's true that doing that did upset at least one person\n[1], but our response to that complaint was to reiterate that the\nexample query was broken.\n\nI feel the bar is a bit lower for pg_backend_memory_contexts as it was\nonly added in v14, so it's not been around as long as pg_class had\nbeen around in 2018 when we removed relhaspkey. My concern here is\nthat the longer we leave the parent column in, the higher the bar gets\nto remove it. That's why I feel like it is worth considering this\nnow.\n\nOne thing we could do is remove it and see if anyone complains. If we\ndid that today, there's about a year-long window for people to\ncomplain where we could still reverse the decision. Now is probably\nthe best time where we can consider this so I'd be sad if this\ndiscussion ended on \"someone might be using it.\".\n\nDavid\n\n[1] https://postgr.es/m/CANu8Fiy2RZL+uVnnrzaCTJxMgcKBDOnAR7bDx3n0P=KycbSNhA@mail.gmail.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 13:21:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I feel the bar is a bit lower for pg_backend_memory_contexts as it was\n> only added in v14, so it's not been around as long as pg_class had\n> been around in 2018 when we removed relhaspkey.\n\nYeah, and also it's very much a developer-focused view with a limited\naudience. It's certainly possible that we could remove the column\nand nobody would complain. I just wanted to point out that there is\na compatibility worry here.\n\n> One thing we could do is remove it and see if anyone complains. If we\n> did that today, there's about a year-long window for people to\n> complain where we could still reverse the decision.\n\nSeems like a plausible compromise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2024 21:27:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 13:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > One thing we could do is remove it and see if anyone complains. If we\n> > did that today, there's about a year-long window for people to\n> > complain where we could still reverse the decision.\n>\n> Seems like a plausible compromise.\n\nDoes anyone object to making this happen? i.e. remove\npg_backend_memory_contexts.parent column and see if anyone complains?\n\nIf nobody comes up with any reasons against it, then I propose making\nthis happen.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Aug 2024 17:48:11 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "On Tue, 6 Aug 2024 at 17:48, David Rowley <dgrowleyml@gmail.com> wrote:\n> Does anyone object to making this happen? i.e. remove\n> pg_backend_memory_contexts.parent column and see if anyone complains?\n>\n> If nobody comes up with any reasons against it, then I propose making\n> this happen.\n\nI made a few adjustments and pushed the patch. Let's see if anyone complains.\n\nDavid\n\n\n",
"msg_date": "Mon, 12 Aug 2024 15:43:55 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com>, 12 Ağu 2024 Pzt, 06:44 tarihinde şunu\nyazdı:\n\n> I made a few adjustments and pushed the patch. Let's see if anyone\n> complains.\n>\n\nThanks David.\n\nDavid Rowley <dgrowleyml@gmail.com>, 12 Ağu 2024 Pzt, 06:44 tarihinde şunu yazdı:\nI made a few adjustments and pushed the patch. Let's see if anyone complains.Thanks David.",
"msg_date": "Mon, 12 Aug 2024 12:19:30 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Do we still need parent column in pg_backend_memory_context?"
}
] |
[
{
"msg_contents": "While looking into converting pgssEntry->mutex to an LWLock (per a\nsuggestion elsewhere [0]), I noticed that pg_stat_statements uses\n\"volatile\" quite liberally. IIUC we can remove these as of commit 0709b7e\n(like commits 8f6bb85, df4077c, and 6ba4ecb did in other areas). All of\nthe uses in pg_stat_statements except those added by commit 9fbc3f3 predate\nthat commit (0709b7e), and I assume commit 9fbc3f3 was just following the\nexamples in surrounding code.\n\nAm I missing something? Or can we remove these qualifiers now?\n\n[0] https://postgr.es/m/20200911223254.isq7veutwxat4n2w%40alap3.anarazel.de\n\n-- \nnathan",
"msg_date": "Tue, 30 Jul 2024 13:24:54 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "remove volatile qualifiers from pg_stat_statements"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jul 30, 2024 at 01:24:54PM -0500, Nathan Bossart wrote:\n> While looking into converting pgssEntry->mutex to an LWLock (per a\n> suggestion elsewhere [0]), I noticed that pg_stat_statements uses\n> \"volatile\" quite liberally. IIUC we can remove these as of commit 0709b7e\n> (like commits 8f6bb85, df4077c, and 6ba4ecb did in other areas). All of\n> the uses in pg_stat_statements except those added by commit 9fbc3f3 predate\n> that commit (0709b7e), and I assume commit 9fbc3f3 was just following the\n> examples in surrounding code.\n> \n> Am I missing something? Or can we remove these qualifiers now?\n\nI share the same understanding and I think those can be removed.\n\nThe patch LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 07:01:38 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove volatile qualifiers from pg_stat_statements"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 07:01:38AM +0000, Bertrand Drouvot wrote:\n> I share the same understanding and I think those can be removed.\n> \n> The patch LGTM.\n\nThat sounds about right. All the volatile references we have here\nhave been kept under the assumption that a memory barrier is required.\nAs we hold spin locks in these areas, that should not be necessary\nanyway. So LGTM as well.\n\nA quick lookup at the rest of contrib/ is showing me that the\nremaining volatile references point at uses with TRY/CATCH blocks,\nwhere we require them.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 16:04:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove volatile qualifiers from pg_stat_statements"
},
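A sketch of the coding pattern being discussed, with an invented struct standing in for pgssEntry; SpinLockAcquire()/SpinLockRelease() are the real macros from storage/spin.h, so this only compiles against the PostgreSQL headers.

#include "postgres.h"
#include "storage/spin.h"

/* Invented example type; pgssEntry is the real analogue. */
typedef struct DemoCounter
{
    slock_t     mutex;
    int64       calls;
} DemoCounter;

/*
 * Old style: detour through a volatile-qualified pointer so the compiler
 * could not cache fields across the spinlock calls.
 */
static void
bump_old(DemoCounter *counter)
{
    volatile DemoCounter *c = counter;

    SpinLockAcquire(&c->mutex);
    c->calls++;
    SpinLockRelease(&c->mutex);
}

/*
 * Current style: since spinlock acquire/release act as compiler barriers
 * (commit 0709b7e), plain accesses inside the critical section are safe.
 */
static void
bump_new(DemoCounter *counter)
{
    SpinLockAcquire(&counter->mutex);
    counter->calls++;
    SpinLockRelease(&counter->mutex);
}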
{
"msg_contents": "On Tue, Aug 06, 2024 at 04:04:01PM +0900, Michael Paquier wrote:\n> On Wed, Jul 31, 2024 at 07:01:38AM +0000, Bertrand Drouvot wrote:\n>> I share the same understanding and I think those can be removed.\n>> \n>> The patch LGTM.\n> \n> That sounds about right. All the volatile references we have here\n> have been kept under the assumption that a memory barrier is required.\n> As we hold spin locks in these areas, that should not be necessary\n> anyway. So LGTM as well.\n\nCommitted, thanks.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 6 Aug 2024 10:59:49 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: remove volatile qualifiers from pg_stat_statements"
}
] |
[
{
"msg_contents": "I noticed this comment in selfuncs.c regarding strxfrm():\n\n /* \n * Some systems (e.g., glibc) can return a smaller value from the \n * second call than the first; thus the Assert must be <= not ==. \n */\n\nSome callers of pg_strnxfrm() are not allowing for that possibility.\n\nPatch attached, which should be backported to 17.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 30 Jul 2024 12:31:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "[17+] check after second call to pg_strnxfrm is too strict, relax it"
},
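The underlying libc contract, illustrated here with plain strxfrm() rather than the pg_strnxfrm() wrapper (a standalone sketch): the first call sizes the buffer, and the second call's return value should be checked with <=, not ==.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *
make_sort_key(const char *src)
{
    /* First call: with a zero-sized buffer, strxfrm() just reports the
     * space needed (excluding the terminating NUL). */
    size_t  needed = strxfrm(NULL, src, 0);
    char   *buf = malloc(needed + 1);

    if (buf == NULL)
        return NULL;

    /* Second call: on some systems (e.g. glibc) this can report a
     * smaller length than the first call did, so use <= rather than ==. */
    size_t  actual = strxfrm(buf, src, needed + 1);

    assert(actual <= needed);
    return buf;
}

int
main(void)
{
    char   *key = make_sort_key("hello");

    if (key)
    {
        printf("%zu\n", strlen(key));
        free(key);
    }
    return 0;
}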
{
"msg_contents": "On 30/07/2024 22:31, Jeff Davis wrote:\n> I noticed this comment in selfuncs.c regarding strxfrm():\n> \n> /*\n> * Some systems (e.g., glibc) can return a smaller value from the\n> * second call than the first; thus the Assert must be <= not ==.\n> */\n> \n> Some callers of pg_strnxfrm() are not allowing for that possibility.\n> \n> Patch attached, which should be backported to 17.\n\n+1. A comment in the pg_strnxfrm() function itself would be good too.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 30 Jul 2024 23:01:00 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [17+] check after second call to pg_strnxfrm is too strict, relax\n it"
},
{
"msg_contents": "On Tue, 2024-07-30 at 23:01 +0300, Heikki Linnakangas wrote:\n> +1. A comment in the pg_strnxfrm() function itself would be good too.\n\nCommitted, thank you. Backported to 16 (original email said 17, but\nthat was a mistake).\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 12:44:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: [17+] check after second call to pg_strnxfrm is too strict,\n relax it"
}
] |
[
{
"msg_contents": "\nSeveral years ago, an EDB customer complained that if they used a \nfunctional index involving upper(), lower(), or textlike(), where RLS \nwas involved, the indexes were not used because these functions are not \nmarked leakproof. We presented the customer with several options for \naddressing the problem, the simplest of which was simply to mark the \nfunctions as leakproof, and this was the solution they adopted.\n\nThe consensus of discussion at the time among our senior developers was \nthat there was probably no reason why at least upper() and lower() \nshould not be marked leakproof, and quite possibly initcap() and \ntextlike() also. It was suggested that we had not been terribly rigorous \nin assessing whether or not functions can be considered leakproof.\n\nAt the time we should have asked the community about it, but we didn't.\n\nFast forward to now. The customer has found no observable ill effects of \nmarking these functions leakproof. The would like to know if there is \nany reason why we can't mark them leakproof, so that they don't have to \ndo this in every database of every cluster they use.\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 30 Jul 2024 17:35:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 09:35, Andrew Dunstan <andrew@dunslane.net> wrote:\n> Fast forward to now. The customer has found no observable ill effects of\n> marking these functions leakproof. The would like to know if there is\n> any reason why we can't mark them leakproof, so that they don't have to\n> do this in every database of every cluster they use.\n>\n> Thoughts?\n\nAccording to [1], it's just not been done yet due to concerns about\nrisk to reward ratios. Nobody mentioned any reason why it couldn't\nbe, but there were some fears that future code changes could yield new\nfailure paths.\n\nDavid\n\n[1] https://postgr.es/m/02BDFCCF-BDBB-4658-9717-4D95F9A91561%40thebuild.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 10:51:25 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 2024-07-30 Tu 6:51 PM, David Rowley wrote:\n> On Wed, 31 Jul 2024 at 09:35, Andrew Dunstan<andrew@dunslane.net> wrote:\n>> Fast forward to now. The customer has found no observable ill effects of\n>> marking these functions leakproof. The would like to know if there is\n>> any reason why we can't mark them leakproof, so that they don't have to\n>> do this in every database of every cluster they use.\n>>\n>> Thoughts?\n> According to [1], it's just not been done yet due to concerns about\n> risk to reward ratios. Nobody mentioned any reason why it couldn't\n> be, but there were some fears that future code changes could yield new\n> failure paths.\n>\n> David\n>\n> [1]https://postgr.es/m/02BDFCCF-BDBB-4658-9717-4D95F9A91561%40thebuild.com\n\n\nHmm, somehow I missed that thread in searching, and clearly I'd \nforgotten it.\n\nStill, I'm not terribly convinced by arguments along the lines you're \nsuggesting. \"Sufficient unto the day is the evil thereof.\" Maybe we need \na test to make sure we don't make changes along those lines, although I \nhave no idea what such a test would look like.\n\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-30 Tu 6:51 PM, David Rowley\n wrote:\n\n\nOn Wed, 31 Jul 2024 at 09:35, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nFast forward to now. The customer has found no observable ill effects of\nmarking these functions leakproof. The would like to know if there is\nany reason why we can't mark them leakproof, so that they don't have to\ndo this in every database of every cluster they use.\n\nThoughts?\n\n\n\nAccording to [1], it's just not been done yet due to concerns about\nrisk to reward ratios. Nobody mentioned any reason why it couldn't\nbe, but there were some fears that future code changes could yield new\nfailure paths.\n\nDavid\n\n[1] https://postgr.es/m/02BDFCCF-BDBB-4658-9717-4D95F9A91561%40thebuild.com\n\n\n\nHmm, somehow I missed that thread in searching, and clearly I'd\n forgotten it.\nStill, I'm not terribly convinced by arguments along the lines\n you're suggesting. \"Sufficient unto the day is the evil thereof.\"\n Maybe we need a test to make sure we don't make changes along\n those lines, although I have no idea what such a test would look\n like.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 31 Jul 2024 05:47:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 7/31/24 05:47, Andrew Dunstan wrote:\n> On 2024-07-30 Tu 6:51 PM, David Rowley wrote:\n>> On Wed, 31 Jul 2024 at 09:35, Andrew Dunstan<andrew@dunslane.net> wrote:\n>>> Fast forward to now. The customer has found no observable ill effects of\n>>> marking these functions leakproof. The would like to know if there is\n>>> any reason why we can't mark them leakproof, so that they don't have to\n>>> do this in every database of every cluster they use.\n>>>\n>>> Thoughts?\n>> According to [1], it's just not been done yet due to concerns about\n>> risk to reward ratios. Nobody mentioned any reason why it couldn't\n>> be, but there were some fears that future code changes could yield new\n>> failure paths.\n>>\n>> David\n>>\n>> [1]https://postgr.es/m/02BDFCCF-BDBB-4658-9717-4D95F9A91561%40thebuild.com\n> \n> Hmm, somehow I missed that thread in searching, and clearly I'd \n> forgotten it.\n> \n> Still, I'm not terribly convinced by arguments along the lines you're \n> suggesting. \"Sufficient unto the day is the evil thereof.\" Maybe we need \n> a test to make sure we don't make changes along those lines, although I \n> have no idea what such a test would look like.\n\n\nI think I have expressed this opinion before (which was shot down), and \nI will grant that it is hand-wavy, but I will give it another try.\n\nIn my opinion, for this use case and others, it should be possible to \nredact the values substituted into log messages based on some criteria. \nOne of those criteria could be \"I am in a leakproof call right now\". In \nfact in a similar fashion, an extension ought to be able to mutate the \nlog message based on the entire string, e.g. when \"ALTER \nROLE...PASSWORD...\" is spotted I would like to be able to redact \neverything after \"PASSWORD\".\n\nYes it might render the error message unhelpful, but I know of users \nthat would accept that tradeoff in order to get better performance and \nsecurity on their production workloads. Or in some cases (e.g. PASSWORD) \njust better security.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 09:14:37 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "\nOn 2024-07-31 We 9:14 AM, Joe Conway wrote:\n> On 7/31/24 05:47, Andrew Dunstan wrote:\n>> On 2024-07-30 Tu 6:51 PM, David Rowley wrote:\n>>> On Wed, 31 Jul 2024 at 09:35, Andrew Dunstan<andrew@dunslane.net> \n>>> wrote:\n>>>> Fast forward to now. The customer has found no observable ill \n>>>> effects of\n>>>> marking these functions leakproof. The would like to know if there is\n>>>> any reason why we can't mark them leakproof, so that they don't \n>>>> have to\n>>>> do this in every database of every cluster they use.\n>>>>\n>>>> Thoughts?\n>>> According to [1], it's just not been done yet due to concerns about\n>>> risk to reward ratios. Nobody mentioned any reason why it couldn't\n>>> be, but there were some fears that future code changes could yield new\n>>> failure paths.\n>>>\n>>> David\n>>>\n>>> [1]https://postgr.es/m/02BDFCCF-BDBB-4658-9717-4D95F9A91561%40thebuild.com \n>>>\n>>\n>> Hmm, somehow I missed that thread in searching, and clearly I'd \n>> forgotten it.\n>>\n>> Still, I'm not terribly convinced by arguments along the lines you're \n>> suggesting. \"Sufficient unto the day is the evil thereof.\" Maybe we \n>> need a test to make sure we don't make changes along those lines, \n>> although I have no idea what such a test would look like.\n>\n>\n> I think I have expressed this opinion before (which was shot down), \n> and I will grant that it is hand-wavy, but I will give it another try.\n>\n> In my opinion, for this use case and others, it should be possible to \n> redact the values substituted into log messages based on some \n> criteria. One of those criteria could be \"I am in a leakproof call \n> right now\". In fact in a similar fashion, an extension ought to be \n> able to mutate the log message based on the entire string, e.g. when \n> \"ALTER ROLE...PASSWORD...\" is spotted I would like to be able to \n> redact everything after \"PASSWORD\".\n>\n> Yes it might render the error message unhelpful, but I know of users \n> that would accept that tradeoff in order to get better performance and \n> security on their production workloads. Or in some cases (e.g. \n> PASSWORD) just better security.\n>\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 09:38:16 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 9:14 AM Joe Conway <mail@joeconway.com> wrote:\n> In my opinion, for this use case and others, it should be possible to\n> redact the values substituted into log messages based on some criteria.\n> One of those criteria could be \"I am in a leakproof call right now\". In\n> fact in a similar fashion, an extension ought to be able to mutate the\n> log message based on the entire string, e.g. when \"ALTER\n> ROLE...PASSWORD...\" is spotted I would like to be able to redact\n> everything after \"PASSWORD\".\n\nThis might be helpful, and unfortunately I'm all too familiar with the\nALTER ROLE...PASSWORD example, but I think it's not really related to\nthe question of whether we can mark upper() and lower() leakproof.\n\nIf there are some inputs that cause upper() and lower() to fail and\nothers that do not, the functions aren't leakproof, because an\nattacker can extract information about values that they can't see by\nfeeding those values into these functions and seeing whether they get\na failure or not. It doesn't matter what error message is produced;\nthe fact that the function throws an error of any kind for some input\nvalues but enough for others is enough to make it unsafe, and it seems\nto me that we've repeatedly found that there's often a way to turn\neven what seems like a small leak into a very large one, so we need to\nbe quite careful here.\n\nAnnoyingly, we have a WHOLE BUNCH of different code paths that need to\nbe assessed individually here. I think we can ignore the fact that\nupper() can throw errors when it's unclear which collation to use; I\nthink that's a property of the query string rather than the input\nvalue. It's a bit less clear whether we can ignore out of memory\nconditions, but my judgement is that a possible OOM from a small\nallocation is not generally going to be useful as an attack vector. If\na long string produces an error and a short one doesn't, that might\nqualify as a real leak. And other errors that depend on the input\nvalue are also leaks. So let's go through the code paths:\n\n- When lc_ctype_is_c(), we call asc_toupper(), which calls\npg_ascii_toupper(), which just replaces a-z with A-Z. No errors.\n\n- When mylocale->provider == COLLPROVIDER_ICU, we call icu_to_uchar()\nand icu_convert_case(). icu_to_uchar() calls uchar_length(), which has\nthis:\n\n if (U_FAILURE(status) && status != U_BUFFER_OVERFLOW_ERROR)\n ereport(ERROR,\n (errmsg(\"%s failed: %s\",\n\"ucnv_toUChars\", u_errorName(status))));\n\nI don't know what errors can be reported here, or if this ever fails\nin practice, so I can't prove it never fails consistently for some\nparticular input string. uchar_convert() and icu_convert_case() have\nsimilar error reporting.\n\n- When mylocale->provider == COLLPROVIDER_BUILTIN, we call\nunicode_strupper() which calls unicode_to_utf8() which clearly throws\nno errors. Likewise unicode_utf8len() does not error. I don't see how\nwe can get an error out of this path.\n\n- In cases not covered by the above, we take different paths depending\non whether a multi-byte encoding is in use. In single-byte encodings,\nwe rely on either pg_toupper() or toupper_l(). 
The latter is an\nOS-provided function so can't ereport(), and the former calls either\ndoes the work itself or calls toupper() which again is an OS-provided\nfunction and can't report().\n\n- Finally, when mylocale == NULL or the provider is COLLPROVIDER_LIBC\nand the encoding is multi-byte, we use char2wchar(), then towupper_l()\nor towupper(), then wchar2char(). The whole thing can fall apart if\nthe string is too long, which might be enough to declare this\nleakproof but it depends on whether the error guarded by /* Overflow\nparanoia */ is really just paranoia or whether it's actually\nreachable. Otherwise, towupper*() won't ereport because it's not part\nof PG, so we need to assess char2wchar() and wchar2char(). Here I note\nthat char2wchar() claims that it does an ereport() if the input is\ninvalidly encoded. This is kind of surprising when you think about it,\nbecause our usual policy is not to allow invalidly encoded data into\nthe database in the first place, but whoever wrote this thought it\nwould be possible to hit this case \"if LC_CTYPE locale is different\nfrom the database encoding.\" But it seems that the logic here dates to\ncommit 2ab0796d7a3a7116a79b65531fd33f1548514b52 back in 2011, so it\nseems at least possible to me that we've tightened things up enough\nsince then that you can't actually hit this any more. But then again,\nmaybe not.\n\nSo in summary, I think upper() is ... pretty close to leakproof. But\nif ICU sometimes fails on certain strings, then it isn't. And if the\nmulti-byte libc path can be made to fail reliably either with really\nlong strings or with certain choices of the LC_CTYPE locale, then it\nisn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 13:39:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If there are some inputs that cause upper() and lower() to fail and\n> others that do not, the functions aren't leakproof, because an\n> attacker can extract information about values that they can't see by\n> feeding those values into these functions and seeing whether they get\n> a failure or not.\n\n> [ rather exhaustive analysis redacted ]\n\n> So in summary, I think upper() is ... pretty close to leakproof. But\n> if ICU sometimes fails on certain strings, then it isn't. And if the\n> multi-byte libc path can be made to fail reliably either with really\n> long strings or with certain choices of the LC_CTYPE locale, then it\n> isn't.\n\nThe problem here is that marking these functions leakproof is a\npromise about a *whole bunch* of code, much of it not under our\ncontrol; worse, there's no reason to think all that code is stable.\nA large fraction of it didn't even exist a few versions ago.\n\nEven if we could convince ourselves that the possible issues Robert\nmentions aren't real at the moment, I think marking these leakproof\nis mighty risky. It's unlikely we'd remember to revisit the marking\nthe next time someone drops a bunch of new code in here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2024 14:03:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 7/31/24 14:03, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> If there are some inputs that cause upper() and lower() to fail and\n>> others that do not, the functions aren't leakproof, because an\n>> attacker can extract information about values that they can't see by\n>> feeding those values into these functions and seeing whether they get\n>> a failure or not.\n> \n>> [ rather exhaustive analysis redacted ]\n> \n>> So in summary, I think upper() is ... pretty close to leakproof. But\n>> if ICU sometimes fails on certain strings, then it isn't. And if the\n>> multi-byte libc path can be made to fail reliably either with really\n>> long strings or with certain choices of the LC_CTYPE locale, then it\n>> isn't.\n> \n> The problem here is that marking these functions leakproof is a\n> promise about a *whole bunch* of code, much of it not under our\n> control; worse, there's no reason to think all that code is stable.\n> A large fraction of it didn't even exist a few versions ago.\n> \n> Even if we could convince ourselves that the possible issues Robert\n> mentions aren't real at the moment, I think marking these leakproof\n> is mighty risky. It's unlikely we'd remember to revisit the marking\n> the next time someone drops a bunch of new code in here.\n\n\nI still maintain that there is a whole host of users that would accept \nthe risk of side channel attacks via existence of an error or not, if \nthey could only be sure nothing sensitive leaks directly into the logs \nor to the clients. We should give them that choice.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 14:43:40 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "\nOn 2024-07-31 We 2:43 PM, Joe Conway wrote:\n> On 7/31/24 14:03, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> If there are some inputs that cause upper() and lower() to fail and\n>>> others that do not, the functions aren't leakproof, because an\n>>> attacker can extract information about values that they can't see by\n>>> feeding those values into these functions and seeing whether they get\n>>> a failure or not.\n>>\n>>> [ rather exhaustive analysis redacted ]\n\n\nFirst, thanks you very much, Robert for the analysis.\n\n\n>>\n>>> So in summary, I think upper() is ... pretty close to leakproof. But\n>>> if ICU sometimes fails on certain strings, then it isn't. And if the\n>>> multi-byte libc path can be made to fail reliably either with really\n>>> long strings or with certain choices of the LC_CTYPE locale, then it\n>>> isn't.\n>>\n>> The problem here is that marking these functions leakproof is a\n>> promise about a *whole bunch* of code, much of it not under our\n>> control; worse, there's no reason to think all that code is stable.\n>> A large fraction of it didn't even exist a few versions ago.\n>>\n>> Even if we could convince ourselves that the possible issues Robert\n>> mentions aren't real at the moment, I think marking these leakproof\n>> is mighty risky. It's unlikely we'd remember to revisit the marking\n>> the next time someone drops a bunch of new code in here.\n>\n>\n> I still maintain that there is a whole host of users that would accept \n> the risk of side channel attacks via existence of an error or not, if \n> they could only be sure nothing sensitive leaks directly into the logs \n> or to the clients. We should give them that choice.\n>\n\nAs I meant to say in my previous empty reply, I think your suggestions \nmake lots of sense.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 31 Jul 2024 15:54:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 2:43 PM Joe Conway <mail@joeconway.com> wrote:\n> I still maintain that there is a whole host of users that would accept\n> the risk of side channel attacks via existence of an error or not, if\n> they could only be sure nothing sensitive leaks directly into the logs\n> or to the clients. We should give them that choice.\n\nI'm not sure what design you have in mind. A lot of possible designs\nseem to end up like this:\n\n1. You can't directly select the invisible value.\n\n2. But you can write a plpgsql procedure that tries a bunch of things\nin a loop and catches errors and uses which things error and which\nthings don't to figure out and return the invisible value.\n\nAnd I would argue that's not really that useful. Especially if that\nplpgsql procedure can extract the hidden values in like 1ms/row.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 16:10:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 2:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The problem here is that marking these functions leakproof is a\n> promise about a *whole bunch* of code, much of it not under our\n> control; worse, there's no reason to think all that code is stable.\n> A large fraction of it didn't even exist a few versions ago.\n\nI think it's a fair critique. It could be reasonable to say, well, a\nparticular user could reasonably judge that the risk of marking\nupper() as leak-proof is acceptable, but it's a little too risky for\nus to want to do it as a project. After all, they can know things like\n\"we don't even use ICU,\" which we as a project cannot know.\n\nHowever, the risk is that an end-user is going to be much less able to\nevaluate what is and isn't safe than we are. I think some people are\ngoing to be like -- well the core project doesn't mark enough stuff\nleakproof, so I'll just go add markings to a bunch of stuff myself.\nAnd they probably won't stop at stuff like UPPER which is almost\nleakproof. They might add it to stuff such as LIKE which results in\nimmediately giving away the farm. By not giving people any guidance,\nwe invite them to make up their own rules.\n\nPerhaps it's still right to be conservative; after all, no matter what\nan end-user does, it's not a CVE for us, and CVEs suck. On the other\nhand, shipping a system that is not useful in practice until you\nmodify a bunch of stuff also sucks, especially when it's not at all\nclear which things are safe to modify.\n\nI'm not sure what the right thing to do here is, but I think that it's\nwrong to imagine that being unwilling to endorse probably-leakproof\nthings as leakproof -- or unwilling to put in the work to MAKE them\nleakproof if they currently aren't -- has no security costs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 16:26:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 7/31/24 16:10, Robert Haas wrote:\n> On Wed, Jul 31, 2024 at 2:43 PM Joe Conway <mail@joeconway.com> wrote:\n>> I still maintain that there is a whole host of users that would accept\n>> the risk of side channel attacks via existence of an error or not, if\n>> they could only be sure nothing sensitive leaks directly into the logs\n>> or to the clients. We should give them that choice.\n> \n> I'm not sure what design you have in mind. A lot of possible designs\n> seem to end up like this:\n> \n> 1. You can't directly select the invisible value.\n> \n> 2. But you can write a plpgsql procedure that tries a bunch of things\n> in a loop and catches errors and uses which things error and which\n> things don't to figure out and return the invisible value.\n> \n> And I would argue that's not really that useful. Especially if that\n> plpgsql procedure can extract the hidden values in like 1ms/row.\n\n\nYou are assuming that everyone allows direct logins with the ability to \ncreate procedures. Plenty don't.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 31 Jul 2024 16:42:51 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm not sure what the right thing to do here is, but I think that it's\n> wrong to imagine that being unwilling to endorse probably-leakproof\n> things as leakproof -- or unwilling to put in the work to MAKE them\n> leakproof if they currently aren't -- has no security costs.\n\nWell, we *have* been a little bit spongy about that --- notably,\nthat texteq and friends are marked leakproof. But IMV, marking\nupper/lower as leakproof is substantially riskier and offers\nsubstantially less benefit than those did.\n\nIn general, I'm worried about a slippery slope here. If we\nstart marking things as leakproof because we cannot prove\nthey leak, rather than because we can prove they don't,\nwe are eventually going to find ourselves in a very bad place.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2024 17:28:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, 2024-07-31 at 14:43 -0400, Joe Conway wrote:\n> I still maintain that there is a whole host of users that would accept \n> the risk of side channel attacks via existence of an error or not, if \n> they could only be sure nothing sensitive leaks directly into the logs \n> or to the clients. We should give them that choice.\n\nI think that you are right.\n\nBut what do you tell the users who would not accept that risk?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 01 Aug 2024 13:17:22 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 4:42 PM Joe Conway <mail@joeconway.com> wrote:\n> You are assuming that everyone allows direct logins with the ability to\n> create procedures. Plenty don't.\n\nWell, if you can send queries, then you can do the same thing, driven\nby client-side logic.\n\nIf you can't directly send queries in any form, then I guess things\nare different. But I don't really understand what kind of system you\nhave in mind.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2024 07:57:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 8/1/24 07:17, Laurenz Albe wrote:\n> On Wed, 2024-07-31 at 14:43 -0400, Joe Conway wrote:\n>> I still maintain that there is a whole host of users that would accept \n>> the risk of side channel attacks via existence of an error or not, if \n>> they could only be sure nothing sensitive leaks directly into the logs \n>> or to the clients. We should give them that choice.\n> \n> I think that you are right.\n\nthanks\n\n> But what do you tell the users who would not accept that risk?\n\nDocument that the option should not be used if that is the case\n\n¯\\_(ツ)_/¯\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 1 Aug 2024 09:54:52 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 8/1/24 07:57, Robert Haas wrote:\n> On Wed, Jul 31, 2024 at 4:42 PM Joe Conway <mail@joeconway.com> wrote:\n>> You are assuming that everyone allows direct logins with the ability to\n>> create procedures. Plenty don't.\n> \n> Well, if you can send queries, then you can do the same thing, driven\n> by client-side logic.\n\nSure. Of course you should be monitoring your production servers for \nanomalous workloads, no? \"Gee, why is Joe running the same query \nmillions of times that keeps throwing errors? Maybe we should go see \nwhat Joe is up to\"\n\n> If you can't directly send queries in any form, then I guess things\n> are different.\n\nRight, and there are plenty of those. I have even worked with at least \none rather large one on behalf of your employer some years ago.\n\n> But I don't really understand what kind of system you have in mind.\n\nWell I did say I was being hand wavy ;-)\n\nIt has been a long time since I thought deeply about this. I will try to \ncome back with something more concrete if no one beats me to it.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 1 Aug 2024 10:05:44 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 8/1/24 07:17, Laurenz Albe wrote:\n>> But what do you tell the users who would not accept that risk?\n\n> Document that the option should not be used if that is the case\n\nAre you proposing that we invent two levels of leakproofness\nwith different guarantees? That seems like a mess, not least\nbecause there are going to be varying opinions about where we\nshould set the bar for the lower level.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2024 10:26:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 10:05 AM Joe Conway <mail@joeconway.com> wrote:\n> Sure. Of course you should be monitoring your production servers for\n> anomalous workloads, no? \"Gee, why is Joe running the same query\n> millions of times that keeps throwing errors? Maybe we should go see\n> what Joe is up to\"\n\nI think it's possible that something like this could be part of some\nuseful approach, but I think it's difficult. If it would take Joe a\nmonth of pounding on the server to steal enough data to matter, then I\nthink monitoring could be one required part of an information\nprotection strategy. However, if Joe, or someone with Joe's\ncredentials, can steal all the secret data in under an hour, the\nmonitoring system probably doesn't help much. A human being probably\nwon't react quickly enough to stop something bad from happening,\nespecially if the person with Joe's credentials begins the attack at\n2am on Christmas.\n\nMore generally, I think it's difficult for us to build infrastructure\ninto PostgreSQL that relies on complex assumptions about what the\ncustomer environment is. To some extent, we are already relying on\nusers to prevent certain types of attacks. For example, RLS supposes\nthat timing attacks or plan-shape based attacks won't be feasible, but\nwe don't do anything to prevent them; we just hope the user takes care\nof it. That's already a shaky assumption, because timing attacks could\nwell be feasible across a fairly deep application stack e.g. the user\nissues an HTTP query for a web page and can detect variations in the\nunderlying database query latency.\n\nWhen you start proposing assumptions that the user can't execute DDL\nor can't execute SQL queries or that there's monitoring of the error\nlog in place, I feel the whole thing gets very hard to reason about.\nFirst, you have to decide on exactly what the assumptions are - no\nDDL, no direct SQL at all, something else? Different situations could\nexist for different users, so whatever assumption we make will not\napply to everyone. Second, for some of this stuff, there's a sliding\nscale. If we stipulate that a user is going to need a monitoring\nsystem, how good does that monitoring system have to be? What does it\nhave to catch, and how quickly are the humans required to respond? If\nwe stipulate that the attacker can't execute SQL directly, how much\ncontrol over the generated SQL are they allowed to have?\n\nI don't want to make it sound like I think it's hopeless to come up\nwith something clever here. The current situation kind of sucks, and I\ndo have hopes that there are better ideas out there. At the same time,\nwe need to be able to articulate clearly what we are and are not\nguaranteeing and under what set of assumptions, and it doesn't seem\neasy to me to come up with something satisfying.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2024 10:43:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 1:26 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> However, the risk is that an end-user is going to be much less able to\n> evaluate what is and isn't safe than we are. I think some people are\n> going to be like -- well the core project doesn't mark enough stuff\n> leakproof, so I'll just go add markings to a bunch of stuff myself.\n> And they probably won't stop at stuff like UPPER which is almost\n> leakproof. They might add it to stuff such as LIKE which results in\n> immediately giving away the farm. By not giving people any guidance,\n> we invite them to make up their own rules.\n\n+1.\n\nWould it provide enough value for effort to explicitly mark leaky\nprocedures as such? Maybe that could shrink the grey area enough to be\nprotective?\n\n--Jacob\n\n\n",
"msg_date": "Thu, 1 Aug 2024 13:45:00 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 7:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Are you proposing that we invent two levels of leakproofness\n> with different guarantees? That seems like a mess, not least\n> because there are going to be varying opinions about where we\n> should set the bar for the lower level.\n\nIt kind of reminds me of the kernel's \"paranoia level\" tunables, which\nseem to proliferate in weird ways [1] and not make for a particularly\ngreat UX.\n\n--Jacob\n\n[1] https://askubuntu.com/questions/1400874/what-does-perf-paranoia-level-four-do\n\n\n",
"msg_date": "Thu, 1 Aug 2024 13:51:41 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 4:45 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> Would it provide enough value for effort to explicitly mark leaky\n> procedures as such? Maybe that could shrink the grey area enough to be\n> protective?\n\nYou mean like proleakproof = true/false/maybe?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2024 21:02:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 6:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 1, 2024 at 4:45 PM Jacob Champion\n> <jacob.champion@enterprisedb.com> wrote:\n> > Would it provide enough value for effort to explicitly mark leaky\n> > procedures as such? Maybe that could shrink the grey area enough to be\n> > protective?\n>\n> You mean like proleakproof = true/false/maybe?\n\nYeah, exactly.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 2 Aug 2024 06:48:00 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 8/2/24 09:48, Jacob Champion wrote:\n> On Thu, Aug 1, 2024 at 6:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Thu, Aug 1, 2024 at 4:45 PM Jacob Champion\n>> <jacob.champion@enterprisedb.com> wrote:\n>> > Would it provide enough value for effort to explicitly mark leaky\n>> > procedures as such? Maybe that could shrink the grey area enough to be\n>> > protective?\n>>\n>> You mean like proleakproof = true/false/maybe?\n> \n> Yeah, exactly.\n\n<dons flameproof suit>\nHmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is \ncurrent behavior, 'lax' allows the 'maybe's to get pushed down, and \n'off' ignores the leakproof attribute entirely and pushes down anything \nthat merits being pushed?\n</dons flameproof suit>\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Aug 2024 09:58:37 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 6:58 AM Joe Conway <mail@joeconway.com> wrote:\n\n> On 8/2/24 09:48, Jacob Champion wrote:\n> > On Thu, Aug 1, 2024 at 6:03 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >>\n> >> On Thu, Aug 1, 2024 at 4:45 PM Jacob Champion\n> >> <jacob.champion@enterprisedb.com> wrote:\n> >> > Would it provide enough value for effort to explicitly mark leaky\n> >> > procedures as such? Maybe that could shrink the grey area enough to be\n> >> > protective?\n> >>\n> >> You mean like proleakproof = true/false/maybe?\n> >\n> > Yeah, exactly.\n>\n> <dons flameproof suit>\n> Hmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is\n> current behavior, 'lax' allows the 'maybe's to get pushed down, and\n> 'off' ignores the leakproof attribute entirely and pushes down anything\n> that merits being pushed?\n> </dons flameproof suit>\n>\n>\nAnother approach would be to do something like:\n\nselect \"upper\"!(column)\n\nto demark within the query usage itself that this function with a maybe\nleakproof attribute gets interpreted as yes.\n\nor even something like:\n\nselect \"upper\"[leakproof](column)\n\nThis is both more readable and provides a way, not that we seem to need it,\nto place more than one label (or just alternative labels using the same\nsyntax construct, like with explain (...)) inside the \"array\".\n\nDiscoverability of when to add such a marker would be left up to the query\nauthor, with the safe default mode being a not leakproof interpretation.\n\nDavid J.\n\nOn Fri, Aug 2, 2024 at 6:58 AM Joe Conway <mail@joeconway.com> wrote:On 8/2/24 09:48, Jacob Champion wrote:\n> On Thu, Aug 1, 2024 at 6:03 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Thu, Aug 1, 2024 at 4:45 PM Jacob Champion\n>> <jacob.champion@enterprisedb.com> wrote:\n>> > Would it provide enough value for effort to explicitly mark leaky\n>> > procedures as such? Maybe that could shrink the grey area enough to be\n>> > protective?\n>>\n>> You mean like proleakproof = true/false/maybe?\n> \n> Yeah, exactly.\n\n<dons flameproof suit>\nHmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is \ncurrent behavior, 'lax' allows the 'maybe's to get pushed down, and \n'off' ignores the leakproof attribute entirely and pushes down anything \nthat merits being pushed?\n</dons flameproof suit>Another approach would be to do something like:select \"upper\"!(column)to demark within the query usage itself that this function with a maybe leakproof attribute gets interpreted as yes.or even something like:select \"upper\"[leakproof](column)This is both more readable and provides a way, not that we seem to need it, to place more than one label (or just alternative labels using the same syntax construct, like with explain (...)) inside the \"array\".Discoverability of when to add such a marker would be left up to the query author, with the safe default mode being a not leakproof interpretation.David J.",
"msg_date": "Fri, 2 Aug 2024 07:14:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> <dons flameproof suit>\n> Hmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is \n> current behavior, 'lax' allows the 'maybe's to get pushed down, and \n> 'off' ignores the leakproof attribute entirely and pushes down anything \n> that merits being pushed?\n> </dons flameproof suit>\n\nSo in other words, we might as well just remove RLS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 11:07:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On 8/2/24 11:07, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> <dons flameproof suit>\n>> Hmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is \n>> current behavior, 'lax' allows the 'maybe's to get pushed down, and \n>> 'off' ignores the leakproof attribute entirely and pushes down anything \n>> that merits being pushed?\n>> </dons flameproof suit>\n> \n> So in other words, we might as well just remove RLS.\n\nPerhaps deciding where to draw the line for 'maybe' is too hard, but I \ndon't understand how you can say that in a general sense.\n\n'strict' mode would provide the same guarantees as today. And even 'off' \nhas utility for cases where (1) no logins are allowed except for the app \n(which is quite common in production environments) and no database \nerrors are propagated to the end client (again pretty standard best \npractice); or (2) non-production environments, e.g. for testing or \ndebugging; or (3) use cases that utilize RLS as a notationally \nconvenient filtering mechanism and are not bothered by some leakage in \nthe worst case.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Aug 2024 12:13:31 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 9:13 AM Joe Conway <mail@joeconway.com> wrote:\n>\n> On 8/2/24 11:07, Tom Lane wrote:\n> > Joe Conway <mail@joeconway.com> writes:\n> >> <dons flameproof suit>\n> >> Hmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is\n> >> current behavior, 'lax' allows the 'maybe's to get pushed down, and\n> >> 'off' ignores the leakproof attribute entirely and pushes down anything\n> >> that merits being pushed?\n> >> </dons flameproof suit>\n> >\n> > So in other words, we might as well just remove RLS.\n>\n> Perhaps deciding where to draw the line for 'maybe' is too hard, but I\n> don't understand how you can say that in a general sense.\n\nI'm more sympathetic to the \"maybe\" case, but for the \"off\" proposal I\nfind myself agreeing with Tom. If you want \"off\", turn off RLS.\n\n> And even 'off'\n> has utility for cases where (1) no logins are allowed except for the app\n> (which is quite common in production environments) and no database\n> errors are propagated to the end client (again pretty standard best\n> practice);\n\nI'm extremely skeptical of this statement, but I've been out of the\nfull-stack world for a bit. How does a modern best-practice\napplication hide the fact that it ran into an error and couldn't\ncomplete a query?\n\n> or (2) non-production environments, e.g. for testing or\n> debugging; or\n\nChanging the execution behavior between dev and prod seems like an\nanti-goal. When would turning this off help you debug something?\n\n> (3) use cases that utilize RLS as a notationally\n> convenient filtering mechanism and are not bothered by some leakage in\n> the worst case.\n\nCatering to this use case doesn't seem like it should be a priority.\nIf a security feature is useful for you in a non-security setting,\nawesome, but we shouldn't poke holes in it for that situation, nor\nshould it be surprising if the security gets stronger and becomes\nharder to use for non-security.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 2 Aug 2024 09:22:36 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > <dons flameproof suit>\n> > Hmmm, and then have \"leakproof_mode\" = strict/lax/off where 'strict' is\n> > current behavior, 'lax' allows the 'maybe's to get pushed down, and\n> > 'off' ignores the leakproof attribute entirely and pushes down anything\n> > that merits being pushed?\n> > </dons flameproof suit>\n>\n> So in other words, we might as well just remove RLS.\n\n<stage-whisper>Hey, everybody, I don't think Tom likes the\nproposal.</stage-whisper>\n\nI'll be honest: I don't like it, either. I don't even like\nproleakproof=true/false/maybe; I asked about that to understand if\nthat was what Jacob was proposing, not because I actually think we\nshould do it. The problem is that there's likely to be a fairly wide\nrange contained inside of \"maybe\", with cases like \"upper\" at the\nsafer end of the spectrum. That's too fuzzy to use as a basis for any\nsort of real security, IMHO; we won't be able to find two hackers who\nagree on how anything should be marked.\n\nI think part of our problem here is that we have very few examples of\nhow to actually analyze a function for leakproof-ness, or how to\nexploit one that is erroneously so marked. The conversations then tend\nto degenerate into some people saying things are scary and some people\nsaying the scariness is overrated and then the whole thing just\nbecomes untethered from reality. Maybe we need to create some really\nrobust documentation in this area so that we can move toward a common\nconceptual framework, instead of everybody just having a lot of\nopinions.\n\nI can't shake the feeling that if PostgreSQL got the same level of\nattention from security researchers that Linux or OpenSSL do, this\nwould be a very different conversation. The fact that we have more\npeople complaining about RLS causing poor query performance than we do\nabout RLS leaking information is probably a sign that it's being used\nto provide more security theatre than actual security. Even the leaks\nwe intended to have are pretty significant, and I'm sure that we have\nsome we didn't intend.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2024 12:22:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 9:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'll be honest: I don't like it, either. I don't even like\n> proleakproof=true/false/maybe; I asked about that to understand if\n> that was what Jacob was proposing, not because I actually think we\n> should do it. The problem is that there's likely to be a fairly wide\n> range contained inside of \"maybe\", with cases like \"upper\" at the\n> safer end of the spectrum. That's too fuzzy to use as a basis for any\n> sort of real security, IMHO; we won't be able to find two hackers who\n> agree on how anything should be marked.\n\nI guess I wasn't trying to propose that the grey area be used as the\nbasis for security, but that we establish a lower bound for the grey.\nMake things strictly better than today, and cut down on the fear that\nsomeone's going to accidentally mark something that we all agree\nshouldn't be. And then shrink the grey area over time as we debate.\n\n(Now, if there aren't that many cases where we can all agree on\n\"unsafe\", then the proposal loses pretty much all value, because we'll\nnever shrink the uncertainty.)\n\n> I think part of our problem here is that we have very few examples of\n> how to actually analyze a function for leakproof-ness, or how to\n> exploit one that is erroneously so marked. The conversations then tend\n> to degenerate into some people saying things are scary and some people\n> saying the scariness is overrated and then the whole thing just\n> becomes untethered from reality. Maybe we need to create some really\n> robust documentation in this area so that we can move toward a common\n> conceptual framework, instead of everybody just having a lot of\n> opinions.\n\n+1\n\n--Jacob\n\n\n",
"msg_date": "Fri, 2 Aug 2024 09:33:31 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 12:33 PM Jacob Champion\n<jacob.champion@enterprisedb.com> wrote:\n> (Now, if there aren't that many cases where we can all agree on\n> \"unsafe\", then the proposal loses pretty much all value, because we'll\n> never shrink the uncertainty.)\n\nMy belief is that nearly everything in unsafe. We ship with very\nlittle marked leakproof right now, and that might be too conservative,\nbut probably not by much. For example:\n\n-- +(int4, int4) is not leakproof\nrobert.haas=# select 2147483647::int4+1;\nERROR: integer out of range\n\n-- textcat is not leakproof\nrobert.haas=# create temp table x as select repeat('a',(2^29)::int4-2)::text a;\nSELECT 1\nrobert.haas=# select length(a||a) from x;\nERROR: invalid memory alloc request size 1073741824\n\n-- absolute value is not leakproof\nrobert.haas=# select @(-2147483648);\nERROR: integer out of range\n\n-- textlike is not leakproof\nrobert.haas=# select 'a' ~ 'b\\';\nERROR: invalid regular expression: invalid escape \\ sequence\n\n-- division is not leakproof\nrobert.haas=# select 2/0;\nERROR: division by zero\n\n-- to_date is not leakproof\nrobert.haas=# select to_date('abc', 'def');\nERROR: invalid value \"a\" for \"d\"\nDETAIL: Value must be an integer.\n\nI think it would be safe to mark the bitwise xor operator for integers\nas leakproof. But that isn't likely to be used in a query. Most of the\nstuff that people actually use in queries isn't even arguably\nleakproof.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2024 13:39:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 10:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> My belief is that nearly everything in unsafe. We ship with very\n> little marked leakproof right now, and that might be too conservative,\n> but probably not by much.\n\nThen to me, that seems like the best-case scenario for a \"maybe\"\nclassification. I'm still not sold on the idea of automatically\ntreating all \"maybes\" as leakproof (personally I'd prefer that users\nsurgically opt in), but if the pool is small...\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 2 Aug 2024 11:18:16 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: can we mark upper/lower/textlike functions leakproof?"
}
] |
[
{
"msg_contents": "Hi,\n\nI think the forward declaration for heapam_methods variable in heapam_handler.c\nis unnecessary, right?\n\n-- \nRegrads,\nJapin Li",
"msg_date": "Wed, 31 Jul 2024 10:36:11 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Remove unnecessary forward declaration for heapam_methods variable"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 10:36:11AM +0800, Japin Li wrote:\n> I think the forward declaration for heapam_methods variable in heapam_handler.c\n> is unnecessary, right?\n\nTrue. This can be removed because all the code paths using\nheapam_methods are after its declaration, so duplicating it makes\nlittle sense. Thanks, applied.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 16:35:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary forward declaration for heapam_methods\n variable"
}
] |
[
{
"msg_contents": "Hello, hackers,\n\nI think there may be some duplicated codes.\nFunction smgrDoPendingDeletes() calls both smgrdounlinkall() and\nsmgrclose().\nBut both functions would close SMgrRelation object, it's dupliacted\nbehavior?\n\nSo I make this patch. Could someone take a look at it?\n\nThanks for your help,\nSteven\n\n From Highgo.com",
"msg_date": "Wed, 31 Jul 2024 11:15:47 +0800",
"msg_from": "Steven Niu <niushiji@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Hi Steven,\n\nOn Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n>\n> Hello, hackers,\n>\n> I think there may be some duplicated codes.\n> Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n> But both functions would close SMgrRelation object, it's dupliacted behavior?\n>\n> So I make this patch. Could someone take a look at it?\n>\n> Thanks for your help,\n> Steven\n>\n> From Highgo.com\n>\n>\nYou change LGTM, but the patch seems not to be applied to HEAD,\nI generate the attached v2 using `git format` with some commit message.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 1 Aug 2024 20:32:11 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Hi, Junwang,\n\nThank you for the review and excellent summary in commit message!\n\nThis is my first contribution to community, and not so familiar with the\noverall process.\nAfter reading the process again, it looks like that I'm not qualified to\nsubmit the patch to commitfest as I never had reviewed others' work. :(\nIf so, could you please help to submit it to commitfest?\n\nBest Regards,\nSteven\n\nJunwang Zhao <zhjwpku@gmail.com> 于2024年8月1日周四 20:32写道:\n\n> Hi Steven,\n>\n> On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n> >\n> > Hello, hackers,\n> >\n> > I think there may be some duplicated codes.\n> > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and\n> smgrclose().\n> > But both functions would close SMgrRelation object, it's dupliacted\n> behavior?\n> >\n> > So I make this patch. Could someone take a look at it?\n> >\n> > Thanks for your help,\n> > Steven\n> >\n> > From Highgo.com\n> >\n> >\n> You change LGTM, but the patch seems not to be applied to HEAD,\n> I generate the attached v2 using `git format` with some commit message.\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nHi, Junwang, Thank you for the review and excellent summary in commit message! This is my first contribution to community, and not so familiar with the overall process. After reading the process again, it looks like that I'm not qualified to submit the patch to commitfest as I never had reviewed others' work. :(If so, could you please help to submit it to commitfest?Best Regards,StevenJunwang Zhao <zhjwpku@gmail.com> 于2024年8月1日周四 20:32写道:Hi Steven,\n\nOn Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n>\n> Hello, hackers,\n>\n> I think there may be some duplicated codes.\n> Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n> But both functions would close SMgrRelation object, it's dupliacted behavior?\n>\n> So I make this patch. Could someone take a look at it?\n>\n> Thanks for your help,\n> Steven\n>\n> From Highgo.com\n>\n>\nYou change LGTM, but the patch seems not to be applied to HEAD,\nI generate the attached v2 using `git format` with some commit message.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 2 Aug 2024 12:12:39 +0800",
"msg_from": "Steven Niu <niushiji@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Hi Steven,\n\nOn Fri, Aug 2, 2024 at 12:12 PM Steven Niu <niushiji@gmail.com> wrote:\n>\n> Hi, Junwang,\n>\n> Thank you for the review and excellent summary in commit message!\n>\n> This is my first contribution to community, and not so familiar with the overall process.\n> After reading the process again, it looks like that I'm not qualified to submit the patch to commitfest as I never had reviewed others' work. :(\n> If so, could you please help to submit it to commitfest?\n>\n\nhttps://commitfest.postgresql.org/49/5149/\n\nI can not find your profile on commitfest so I left the author as empty,\nhave you ever registered? If you have a account, you can put your\nname in the Authors list.\n\n> Best Regards,\n> Steven\n>\n> Junwang Zhao <zhjwpku@gmail.com> 于2024年8月1日周四 20:32写道:\n>>\n>> Hi Steven,\n>>\n>> On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n>> >\n>> > Hello, hackers,\n>> >\n>> > I think there may be some duplicated codes.\n>> > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n>> > But both functions would close SMgrRelation object, it's dupliacted behavior?\n>> >\n>> > So I make this patch. Could someone take a look at it?\n>> >\n>> > Thanks for your help,\n>> > Steven\n>> >\n>> > From Highgo.com\n>> >\n>> >\n>> You change LGTM, but the patch seems not to be applied to HEAD,\n>> I generate the attached v2 using `git format` with some commit message.\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 2 Aug 2024 13:22:38 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Thanks, I have set my name in the Authors column of CF.\n\nSteven\n\nJunwang Zhao <zhjwpku@gmail.com> 于2024年8月2日周五 13:22写道:\n\n> Hi Steven,\n>\n> On Fri, Aug 2, 2024 at 12:12 PM Steven Niu <niushiji@gmail.com> wrote:\n> >\n> > Hi, Junwang,\n> >\n> > Thank you for the review and excellent summary in commit message!\n> >\n> > This is my first contribution to community, and not so familiar with the\n> overall process.\n> > After reading the process again, it looks like that I'm not qualified to\n> submit the patch to commitfest as I never had reviewed others' work. :(\n> > If so, could you please help to submit it to commitfest?\n> >\n>\n> https://commitfest.postgresql.org/49/5149/\n>\n> I can not find your profile on commitfest so I left the author as empty,\n> have you ever registered? If you have a account, you can put your\n> name in the Authors list.\n>\n> > Best Regards,\n> > Steven\n> >\n> > Junwang Zhao <zhjwpku@gmail.com> 于2024年8月1日周四 20:32写道:\n> >>\n> >> Hi Steven,\n> >>\n> >> On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n> >> >\n> >> > Hello, hackers,\n> >> >\n> >> > I think there may be some duplicated codes.\n> >> > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and\n> smgrclose().\n> >> > But both functions would close SMgrRelation object, it's dupliacted\n> behavior?\n> >> >\n> >> > So I make this patch. Could someone take a look at it?\n> >> >\n> >> > Thanks for your help,\n> >> > Steven\n> >> >\n> >> > From Highgo.com\n> >> >\n> >> >\n> >> You change LGTM, but the patch seems not to be applied to HEAD,\n> >> I generate the attached v2 using `git format` with some commit message.\n> >>\n> >> --\n> >> Regards\n> >> Junwang Zhao\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nThanks, I have set my name in the Authors column of CF. StevenJunwang Zhao <zhjwpku@gmail.com> 于2024年8月2日周五 13:22写道:Hi Steven,\n\nOn Fri, Aug 2, 2024 at 12:12 PM Steven Niu <niushiji@gmail.com> wrote:\n>\n> Hi, Junwang,\n>\n> Thank you for the review and excellent summary in commit message!\n>\n> This is my first contribution to community, and not so familiar with the overall process.\n> After reading the process again, it looks like that I'm not qualified to submit the patch to commitfest as I never had reviewed others' work. :(\n> If so, could you please help to submit it to commitfest?\n>\n\nhttps://commitfest.postgresql.org/49/5149/\n\nI can not find your profile on commitfest so I left the author as empty,\nhave you ever registered? If you have a account, you can put your\nname in the Authors list.\n\n> Best Regards,\n> Steven\n>\n> Junwang Zhao <zhjwpku@gmail.com> 于2024年8月1日周四 20:32写道:\n>>\n>> Hi Steven,\n>>\n>> On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n>> >\n>> > Hello, hackers,\n>> >\n>> > I think there may be some duplicated codes.\n>> > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n>> > But both functions would close SMgrRelation object, it's dupliacted behavior?\n>> >\n>> > So I make this patch. Could someone take a look at it?\n>> >\n>> > Thanks for your help,\n>> > Steven\n>> >\n>> > From Highgo.com\n>> >\n>> >\n>> You change LGTM, but the patch seems not to be applied to HEAD,\n>> I generate the attached v2 using `git format` with some commit message.\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 2 Aug 2024 15:36:33 +0800",
"msg_from": "Steven Niu <niushiji@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> Hi Steven,\n>\n> On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n> >\n> > Hello, hackers,\n> >\n> > I think there may be some duplicated codes.\n> > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n> > But both functions would close SMgrRelation object, it's dupliacted behavior?\n> >\n> > So I make this patch. Could someone take a look at it?\n> >\n> > Thanks for your help,\n> > Steven\n> >\n> > From Highgo.com\n> >\n> >\n> You change LGTM, but the patch seems not to be applied to HEAD,\n> I generate the attached v2 using `git format` with some commit message.\n>\n> --\n> Regards\n> Junwang Zhao\n\nHi all!\nThis change looks good to me. However, i have an objection to these\nlines from v2:\n\n> /* Close the forks at smgr level */\n> - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> - smgrsw[which].smgr_close(rels[i], forknum);\n> + smgrclose(rels[i]);\n\nWhy do we do this? This seems to be an unrelated change given thread\n$subj. This is just a pure refactoring job, which deserves a separate\npatch. There is similar coding in\nsmgrdestroy function:\n\n```\nfor (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\nsmgrsw[reln->smgr_which].smgr_close(reln, forknum);\n```\n\nSo, I feel like these two places should be either changed together or\nnot be altered at all. And is it definitely a separate change.\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Fri, 9 Aug 2024 14:20:15 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n>\n> On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > Hi Steven,\n> >\n> > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n> > >\n> > > Hello, hackers,\n> > >\n> > > I think there may be some duplicated codes.\n> > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n> > > But both functions would close SMgrRelation object, it's dupliacted behavior?\n> > >\n> > > So I make this patch. Could someone take a look at it?\n> > >\n> > > Thanks for your help,\n> > > Steven\n> > >\n> > > From Highgo.com\n> > >\n> > >\n> > You change LGTM, but the patch seems not to be applied to HEAD,\n> > I generate the attached v2 using `git format` with some commit message.\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n>\n> Hi all!\n> This change looks good to me. However, i have an objection to these\n> lines from v2:\n>\n> > /* Close the forks at smgr level */\n> > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> > - smgrsw[which].smgr_close(rels[i], forknum);\n> > + smgrclose(rels[i]);\n>\n> Why do we do this? This seems to be an unrelated change given thread\n> $subj. This is just a pure refactoring job, which deserves a separate\n> patch. There is similar coding in\n> smgrdestroy function:\n>\n> ```\n> for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n> ```\n>\n> So, I feel like these two places should be either changed together or\n> not be altered at all. And is it definitely a separate change.\n\nYeah, I tend to agree with you, maybe we should split the patch\ninto two.\n\nSteven, could you do this?\n\n>\n> --\n> Best regards,\n> Kirill Reshke\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 9 Aug 2024 18:19:11 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Kirill,\n\nGood catch!\nI will split the patch into two to cover both cases.\n\nThanks,\nSteven\n\n\nJunwang Zhao <zhjwpku@gmail.com> 于2024年8月9日周五 18:19写道:\n\n> On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com>\n> wrote:\n> >\n> > On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > >\n> > > Hi Steven,\n> > >\n> > > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com>\n> wrote:\n> > > >\n> > > > Hello, hackers,\n> > > >\n> > > > I think there may be some duplicated codes.\n> > > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and\n> smgrclose().\n> > > > But both functions would close SMgrRelation object, it's dupliacted\n> behavior?\n> > > >\n> > > > So I make this patch. Could someone take a look at it?\n> > > >\n> > > > Thanks for your help,\n> > > > Steven\n> > > >\n> > > > From Highgo.com\n> > > >\n> > > >\n> > > You change LGTM, but the patch seems not to be applied to HEAD,\n> > > I generate the attached v2 using `git format` with some commit message.\n> > >\n> > > --\n> > > Regards\n> > > Junwang Zhao\n> >\n> > Hi all!\n> > This change looks good to me. However, i have an objection to these\n> > lines from v2:\n> >\n> > > /* Close the forks at smgr level */\n> > > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> > > - smgrsw[which].smgr_close(rels[i], forknum);\n> > > + smgrclose(rels[i]);\n> >\n> > Why do we do this? This seems to be an unrelated change given thread\n> > $subj. This is just a pure refactoring job, which deserves a separate\n> > patch. There is similar coding in\n> > smgrdestroy function:\n> >\n> > ```\n> > for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> > smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n> > ```\n> >\n> > So, I feel like these two places should be either changed together or\n> > not be altered at all. And is it definitely a separate change.\n>\n> Yeah, I tend to agree with you, maybe we should split the patch\n> into two.\n>\n> Steven, could you do this?\n>\n> >\n> > --\n> > Best regards,\n> > Kirill Reshke\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\nKirill, Good catch! I will split the patch into two to cover both cases. Thanks,StevenJunwang Zhao <zhjwpku@gmail.com> 于2024年8月9日周五 18:19写道:On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n>\n> On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > Hi Steven,\n> >\n> > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n> > >\n> > > Hello, hackers,\n> > >\n> > > I think there may be some duplicated codes.\n> > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n> > > But both functions would close SMgrRelation object, it's dupliacted behavior?\n> > >\n> > > So I make this patch. Could someone take a look at it?\n> > >\n> > > Thanks for your help,\n> > > Steven\n> > >\n> > > From Highgo.com\n> > >\n> > >\n> > You change LGTM, but the patch seems not to be applied to HEAD,\n> > I generate the attached v2 using `git format` with some commit message.\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n>\n> Hi all!\n> This change looks good to me. However, i have an objection to these\n> lines from v2:\n>\n> > /* Close the forks at smgr level */\n> > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> > - smgrsw[which].smgr_close(rels[i], forknum);\n> > + smgrclose(rels[i]);\n>\n> Why do we do this? This seems to be an unrelated change given thread\n> $subj. 
This is just a pure refactoring job, which deserves a separate\n> patch. There is similar coding in\n> smgrdestroy function:\n>\n> ```\n> for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n> ```\n>\n> So, I feel like these two places should be either changed together or\n> not be altered at all. And is it definitely a separate change.\n\nYeah, I tend to agree with you, maybe we should split the patch\ninto two.\n\nSteven, could you do this?\n\n>\n> --\n> Best regards,\n> Kirill Reshke\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Mon, 12 Aug 2024 18:11:42 +0800",
"msg_from": "Steven Niu <niushiji@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Junwang, Kirill,\n\nThe split work has been done. I created a new patch for removing\nredundant smgrclose() function as attached.\nPlease help review it.\n\nThanks,\nSteven\n\nSteven Niu <niushiji@gmail.com> 于2024年8月12日周一 18:11写道:\n\n> Kirill,\n>\n> Good catch!\n> I will split the patch into two to cover both cases.\n>\n> Thanks,\n> Steven\n>\n>\n> Junwang Zhao <zhjwpku@gmail.com> 于2024年8月9日周五 18:19写道:\n>\n>> On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com>\n>> wrote:\n>> >\n>> > On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>> > >\n>> > > Hi Steven,\n>> > >\n>> > > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com>\n>> wrote:\n>> > > >\n>> > > > Hello, hackers,\n>> > > >\n>> > > > I think there may be some duplicated codes.\n>> > > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and\n>> smgrclose().\n>> > > > But both functions would close SMgrRelation object, it's dupliacted\n>> behavior?\n>> > > >\n>> > > > So I make this patch. Could someone take a look at it?\n>> > > >\n>> > > > Thanks for your help,\n>> > > > Steven\n>> > > >\n>> > > > From Highgo.com\n>> > > >\n>> > > >\n>> > > You change LGTM, but the patch seems not to be applied to HEAD,\n>> > > I generate the attached v2 using `git format` with some commit\n>> message.\n>> > >\n>> > > --\n>> > > Regards\n>> > > Junwang Zhao\n>> >\n>> > Hi all!\n>> > This change looks good to me. However, i have an objection to these\n>> > lines from v2:\n>> >\n>> > > /* Close the forks at smgr level */\n>> > > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n>> > > - smgrsw[which].smgr_close(rels[i], forknum);\n>> > > + smgrclose(rels[i]);\n>> >\n>> > Why do we do this? This seems to be an unrelated change given thread\n>> > $subj. This is just a pure refactoring job, which deserves a separate\n>> > patch. There is similar coding in\n>> > smgrdestroy function:\n>> >\n>> > ```\n>> > for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n>> > smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n>> > ```\n>> >\n>> > So, I feel like these two places should be either changed together or\n>> > not be altered at all. And is it definitely a separate change.\n>>\n>> Yeah, I tend to agree with you, maybe we should split the patch\n>> into two.\n>>\n>> Steven, could you do this?\n>>\n>> >\n>> > --\n>> > Best regards,\n>> > Kirill Reshke\n>>\n>>\n>>\n>> --\n>> Regards\n>> Junwang Zhao\n>>\n>",
"msg_date": "Wed, 14 Aug 2024 14:35:39 +0800",
"msg_from": "Steven Niu <niushiji@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "On Wed, Aug 14, 2024 at 2:35 PM Steven Niu <niushiji@gmail.com> wrote:\n>\n> Junwang, Kirill,\n>\n> The split work has been done. I created a new patch for removing redundant smgrclose() function as attached.\n> Please help review it.\n\nPatch looks good, actually you can make the refactoring code as v3-0002-xxx\nby using:\n\ngit format-patch -2 -v 3\n\nAnother kind reminder: Do not top-post when you reply ;)\n\n>\n> Thanks,\n> Steven\n>\n> Steven Niu <niushiji@gmail.com> 于2024年8月12日周一 18:11写道:\n>>\n>> Kirill,\n>>\n>> Good catch!\n>> I will split the patch into two to cover both cases.\n>>\n>> Thanks,\n>> Steven\n>>\n>>\n>> Junwang Zhao <zhjwpku@gmail.com> 于2024年8月9日周五 18:19写道:\n>>>\n>>> On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n>>> >\n>>> > On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>> > >\n>>> > > Hi Steven,\n>>> > >\n>>> > > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n>>> > > >\n>>> > > > Hello, hackers,\n>>> > > >\n>>> > > > I think there may be some duplicated codes.\n>>> > > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n>>> > > > But both functions would close SMgrRelation object, it's dupliacted behavior?\n>>> > > >\n>>> > > > So I make this patch. Could someone take a look at it?\n>>> > > >\n>>> > > > Thanks for your help,\n>>> > > > Steven\n>>> > > >\n>>> > > > From Highgo.com\n>>> > > >\n>>> > > >\n>>> > > You change LGTM, but the patch seems not to be applied to HEAD,\n>>> > > I generate the attached v2 using `git format` with some commit message.\n>>> > >\n>>> > > --\n>>> > > Regards\n>>> > > Junwang Zhao\n>>> >\n>>> > Hi all!\n>>> > This change looks good to me. However, i have an objection to these\n>>> > lines from v2:\n>>> >\n>>> > > /* Close the forks at smgr level */\n>>> > > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n>>> > > - smgrsw[which].smgr_close(rels[i], forknum);\n>>> > > + smgrclose(rels[i]);\n>>> >\n>>> > Why do we do this? This seems to be an unrelated change given thread\n>>> > $subj. This is just a pure refactoring job, which deserves a separate\n>>> > patch. There is similar coding in\n>>> > smgrdestroy function:\n>>> >\n>>> > ```\n>>> > for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n>>> > smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n>>> > ```\n>>> >\n>>> > So, I feel like these two places should be either changed together or\n>>> > not be altered at all. And is it definitely a separate change.\n>>>\n>>> Yeah, I tend to agree with you, maybe we should split the patch\n>>> into two.\n>>>\n>>> Steven, could you do this?\n>>>\n>>> >\n>>> > --\n>>> > Best regards,\n>>> > Kirill Reshke\n>>>\n>>>\n>>>\n>>> --\n>>> Regards\n>>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Thu, 15 Aug 2024 18:03:45 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> 于2024年8月15日周四 18:03写道:\n\n> On Wed, Aug 14, 2024 at 2:35 PM Steven Niu <niushiji@gmail.com> wrote:\n> >\n> > Junwang, Kirill,\n> >\n> > The split work has been done. I created a new patch for removing\n> redundant smgrclose() function as attached.\n> > Please help review it.\n>\n> Patch looks good, actually you can make the refactoring code as v3-0002-xxx\n> by using:\n>\n> git format-patch -2 -v 3\n>\n> Another kind reminder: Do not top-post when you reply ;)\n>\n> >\n> > Thanks,\n> > Steven\n> >\n> > Steven Niu <niushiji@gmail.com> 于2024年8月12日周一 18:11写道:\n> >>\n> >> Kirill,\n> >>\n> >> Good catch!\n> >> I will split the patch into two to cover both cases.\n> >>\n> >> Thanks,\n> >> Steven\n> >>\n> >>\n> >> Junwang Zhao <zhjwpku@gmail.com> 于2024年8月9日周五 18:19写道:\n> >>>\n> >>> On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com>\n> wrote:\n> >>> >\n> >>> > On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >>> > >\n> >>> > > Hi Steven,\n> >>> > >\n> >>> > > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com>\n> wrote:\n> >>> > > >\n> >>> > > > Hello, hackers,\n> >>> > > >\n> >>> > > > I think there may be some duplicated codes.\n> >>> > > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and\n> smgrclose().\n> >>> > > > But both functions would close SMgrRelation object, it's\n> dupliacted behavior?\n> >>> > > >\n> >>> > > > So I make this patch. Could someone take a look at it?\n> >>> > > >\n> >>> > > > Thanks for your help,\n> >>> > > > Steven\n> >>> > > >\n> >>> > > > From Highgo.com\n> >>> > > >\n> >>> > > >\n> >>> > > You change LGTM, but the patch seems not to be applied to HEAD,\n> >>> > > I generate the attached v2 using `git format` with some commit\n> message.\n> >>> > >\n> >>> > > --\n> >>> > > Regards\n> >>> > > Junwang Zhao\n> >>> >\n> >>> > Hi all!\n> >>> > This change looks good to me. However, i have an objection to these\n> >>> > lines from v2:\n> >>> >\n> >>> > > /* Close the forks at smgr level */\n> >>> > > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> >>> > > - smgrsw[which].smgr_close(rels[i], forknum);\n> >>> > > + smgrclose(rels[i]);\n> >>> >\n> >>> > Why do we do this? This seems to be an unrelated change given thread\n> >>> > $subj. This is just a pure refactoring job, which deserves a separate\n> >>> > patch. There is similar coding in\n> >>> > smgrdestroy function:\n> >>> >\n> >>> > ```\n> >>> > for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n> >>> > smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n> >>> > ```\n> >>> >\n> >>> > So, I feel like these two places should be either changed together or\n> >>> > not be altered at all. And is it definitely a separate change.\n> >>>\n> >>> Yeah, I tend to agree with you, maybe we should split the patch\n> >>> into two.\n> >>>\n> >>> Steven, could you do this?\n> >>>\n> >>> >\n> >>> > --\n> >>> > Best regards,\n> >>> > Kirill Reshke\n> >>>\n> >>>\n> >>>\n> >>> --\n> >>> Regards\n> >>> Junwang Zhao\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n\n\nOK, thanks for your kind help.\n\nSteven\n\nJunwang Zhao <zhjwpku@gmail.com> 于2024年8月15日周四 18:03写道:On Wed, Aug 14, 2024 at 2:35 PM Steven Niu <niushiji@gmail.com> wrote:\n>\n> Junwang, Kirill,\n>\n> The split work has been done. 
I created a new patch for removing redundant smgrclose() function as attached.\n> Please help review it.\n\nPatch looks good, actually you can make the refactoring code as v3-0002-xxx\nby using:\n\ngit format-patch -2 -v 3\n\nAnother kind reminder: Do not top-post when you reply ;)\n\n>\n> Thanks,\n> Steven\n>\n> Steven Niu <niushiji@gmail.com> 于2024年8月12日周一 18:11写道:\n>>\n>> Kirill,\n>>\n>> Good catch!\n>> I will split the patch into two to cover both cases.\n>>\n>> Thanks,\n>> Steven\n>>\n>>\n>> Junwang Zhao <zhjwpku@gmail.com> 于2024年8月9日周五 18:19写道:\n>>>\n>>> On Fri, Aug 9, 2024 at 5:20 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n>>> >\n>>> > On Thu, 1 Aug 2024 at 17:32, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>> > >\n>>> > > Hi Steven,\n>>> > >\n>>> > > On Wed, Jul 31, 2024 at 11:16 AM Steven Niu <niushiji@gmail.com> wrote:\n>>> > > >\n>>> > > > Hello, hackers,\n>>> > > >\n>>> > > > I think there may be some duplicated codes.\n>>> > > > Function smgrDoPendingDeletes() calls both smgrdounlinkall() and smgrclose().\n>>> > > > But both functions would close SMgrRelation object, it's dupliacted behavior?\n>>> > > >\n>>> > > > So I make this patch. Could someone take a look at it?\n>>> > > >\n>>> > > > Thanks for your help,\n>>> > > > Steven\n>>> > > >\n>>> > > > From Highgo.com\n>>> > > >\n>>> > > >\n>>> > > You change LGTM, but the patch seems not to be applied to HEAD,\n>>> > > I generate the attached v2 using `git format` with some commit message.\n>>> > >\n>>> > > --\n>>> > > Regards\n>>> > > Junwang Zhao\n>>> >\n>>> > Hi all!\n>>> > This change looks good to me. However, i have an objection to these\n>>> > lines from v2:\n>>> >\n>>> > > /* Close the forks at smgr level */\n>>> > > - for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n>>> > > - smgrsw[which].smgr_close(rels[i], forknum);\n>>> > > + smgrclose(rels[i]);\n>>> >\n>>> > Why do we do this? This seems to be an unrelated change given thread\n>>> > $subj. This is just a pure refactoring job, which deserves a separate\n>>> > patch. There is similar coding in\n>>> > smgrdestroy function:\n>>> >\n>>> > ```\n>>> > for (forknum = 0; forknum <= MAX_FORKNUM; forknum++)\n>>> > smgrsw[reln->smgr_which].smgr_close(reln, forknum);\n>>> > ```\n>>> >\n>>> > So, I feel like these two places should be either changed together or\n>>> > not be altered at all. And is it definitely a separate change.\n>>>\n>>> Yeah, I tend to agree with you, maybe we should split the patch\n>>> into two.\n>>>\n>>> Steven, could you do this?\n>>>\n>>> >\n>>> > --\n>>> > Best regards,\n>>> > Kirill Reshke\n>>>\n>>>\n>>>\n>>> --\n>>> Regards\n>>> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang ZhaoOK, thanks for your kind help. Steven",
"msg_date": "Fri, 16 Aug 2024 13:16:33 +0800",
"msg_from": "Steven Niu <niushiji@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi\r\n\r\nthe patch looks good to me as well. Calling smgrclose() right after calling smgrdounlinkall() does seem\r\nunnecessary as it is already done inside smgrdounlinkall() as you mentioned. I checked the commit logs\r\nand seems like the code has been like this for over 10 years. One difference is that smgrdounlinkall() does\r\nnot reset smgr_cached_nblocks and smgr_targblock but that does not seem to matter as it is about to\r\nremove the physical files.\r\n\r\nWhile leaving them like this does no harm because smgrclose() simply does nothing if the relation has already\r\nbeen closed, it does look weird that the code tries to close the relation after smgrdounlinkall(), because the\r\nphysical files have just been deleted when smgrdounlinkall() completes, and we try to close something that\r\nhas been deleted ?!\r\n\r\n---------------------------\r\nCary Huang\r\nHighgo software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 23 Aug 2024 21:15:29 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [Patch] remove duplicated smgrclose"
}
] |
[
{
"msg_contents": "I propose to rename the pg_createsubscriber option --socket-directory to \n--socketdir. This would make it match the equivalent option in \npg_upgrade. (It even has the same short option '-s'.) \npg_createsubscriber and pg_upgrade have a lot of common terminology and \na similar operating mode, so it would make sense to keep this consistent.",
"msg_date": "Wed, 31 Jul 2024 09:02:16 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "make pg_createsubscriber option names more consistent"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> I propose to rename the pg_createsubscriber option --socket-directory to\r\n> --socketdir. This would make it match the equivalent option in\r\n> pg_upgrade. (It even has the same short option '-s'.)\r\n> pg_createsubscriber and pg_upgrade have a lot of common terminology and\r\n> a similar operating mode, so it would make sense to keep this consistent.\r\n\r\n+1. If so, should we say \"default current dir.\" instead of \"default current directory\" in usage()\r\nbecause pg_upgrade says like that?\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n \r\n",
"msg_date": "Wed, 31 Jul 2024 09:15:28 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: make pg_createsubscriber option names more consistent"
},
{
"msg_contents": "On Wed, Jul 31, 2024, at 4:02 AM, Peter Eisentraut wrote:\n> I propose to rename the pg_createsubscriber option --socket-directory to \n> --socketdir. This would make it match the equivalent option in \n> pg_upgrade. (It even has the same short option '-s'.) \n> pg_createsubscriber and pg_upgrade have a lot of common terminology and \n> a similar operating mode, so it would make sense to keep this consistent.\n\nWFM. \n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Jul 31, 2024, at 4:02 AM, Peter Eisentraut wrote:I propose to rename the pg_createsubscriber option --socket-directory to --socketdir. This would make it match the equivalent option in pg_upgrade. (It even has the same short option '-s'.) pg_createsubscriber and pg_upgrade have a lot of common terminology and a similar operating mode, so it would make sense to keep this consistent.WFM. --Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 31 Jul 2024 10:50:35 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: make pg_createsubscriber option names more consistent"
},
{
"msg_contents": "On 31.07.24 11:15, Hayato Kuroda (Fujitsu) wrote:\n> Dear Peter,\n> \n>> I propose to rename the pg_createsubscriber option --socket-directory to\n>> --socketdir. This would make it match the equivalent option in\n>> pg_upgrade. (It even has the same short option '-s'.)\n>> pg_createsubscriber and pg_upgrade have a lot of common terminology and\n>> a similar operating mode, so it would make sense to keep this consistent.\n> \n> +1. If so, should we say \"default current dir.\" instead of \"default current directory\" in usage()\n> because pg_upgrade says like that?\n\nCommitted with that change. Thanks.\n\n\n\n",
"msg_date": "Thu, 1 Aug 2024 12:32:14 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: make pg_createsubscriber option names more consistent"
}
] |
[
{
"msg_contents": "I have noticed $subj while working with other unrelated patches.\nThe question is, why there is no CREATE TABLE AS .... USING\n(some_access_method)?\nThis feature looks straightforward, and lack of it is a bit of\ninconsistency from my point of view.\nMaybe there are some unobvious caveats with implementing it?\nI have done a little research reading related threads [1][2], but\nthese do not address $subj, if i'm not missing anything.\nNeither can I find an open CF entry/thread implementing this (Im\nlooking here http://cfbot.cputube.org/) .\n\nThe same storage specification feature can actually be supported for\nCTAE (like CTAS but execute) and CREATE MATERIALIZED VIEW.\n\nI can try to propose a POC patch implementing $subj if there are no objections\nto having this functionality in the core.\n\n\n[1] https://www.postgresql.org/message-id/flat/20180703070645.wchpu5muyto5n647%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/20160812231527.GA690404%40alvherre.pgsql\n\n\n",
"msg_date": "Wed, 31 Jul 2024 12:03:23 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Lack of possibility to specify CTAS TAM"
},
{
"msg_contents": "\n\n> On 31 Jul 2024, at 12:03, Kirill Reshke <reshkekirill@gmail.com> wrote:\n> \n> CREATE TABLE AS .... USING\n> (some_access_method)\n\nThis looks in a line with usual CREATE TABLE.\n+1 for the feature.\nCurrently we do not have so many TAMs, but I hope eventually we will have some.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 31 Jul 2024 12:12:35 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Lack of possibility to specify CTAS TAM"
},
{
"msg_contents": "On Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:\n\n> I have noticed $subj while working with other unrelated patches.\n> The question is, why there is no CREATE TABLE AS .... USING\n> (some_access_method)?\n\n\nThe syntax is documented…\n\nCREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [\nIF NOT EXISTS ] *table_name*\n [ (*column_name* [, ...] ) ]\n [ USING *method* ]\n\n… AS query\n\nhttps://www.postgresql.org/docs/current/sql-createtableas.html\n\nDavid J.\n\nOn Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:I have noticed $subj while working with other unrelated patches.\nThe question is, why there is no CREATE TABLE AS .... USING\n(some_access_method)?The syntax is documented…CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name\n [ (column_name [, ...] ) ]\n [ USING method ]… AS queryhttps://www.postgresql.org/docs/current/sql-createtableas.htmlDavid J.",
"msg_date": "Wed, 31 Jul 2024 00:15:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lack of possibility to specify CTAS TAM"
},
{
"msg_contents": "On Wed, 31 Jul 2024, 12:12 Andrey M. Borodin, <x4mmm@yandex-team.ru> wrote:\n\n>\n> Currently we do not have so many TAMs\n>\nCurrently we do not have so many TAM in core. Outside core there is\nactually a quite a number of projects doing TAMs. Orioledb is one example.\n\n>\n\nOn Wed, 31 Jul 2024, 12:12 Andrey M. Borodin, <x4mmm@yandex-team.ru> wrote:\nCurrently we do not have so many TAMsCurrently we do not have so many TAM in core. Outside core there is actually a quite a number of projects doing TAMs. Orioledb is one example.",
"msg_date": "Wed, 31 Jul 2024 12:20:10 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lack of possibility to specify CTAS TAM"
},
{
"msg_contents": "On Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:\n>\n>\n> The same storage specification feature can actually be supported for\n> CTAE (like CTAS but execute) and CREATE MATERIALIZED VIEW.\n>\n>\nOn a related note, the description here seems outdated.\n\n\nhttps://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TABLE-ACCESS-METHOD\n\nCMV also has this syntax already; we don’t actually have CTAE presently,\ncorrect?\n\nDavid J.\n\nOn Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:\n\nThe same storage specification feature can actually be supported for\nCTAE (like CTAS but execute) and CREATE MATERIALIZED VIEW.\nOn a related note, the description here seems outdated. https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TABLE-ACCESS-METHODCMV also has this syntax already; we don’t actually have CTAE presently, correct?David J.",
"msg_date": "Wed, 31 Jul 2024 00:22:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lack of possibility to specify CTAS TAM"
},
{
"msg_contents": "On Wednesday, July 31, 2024, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:\n>>\n>>\n>> The same storage specification feature can actually be supported for\n>> CTAE (like CTAS but execute) and CREATE MATERIALIZED VIEW.\n>>\n>>\n> On a related note, the description here seems outdated.\n>\n> https://www.postgresql.org/docs/current/runtime-config-\n> client.html#GUC-DEFAULT-TABLE-ACCESS-METHOD\n>\n\nNevermind, re-reading it I see it is correct. The others are all covered\nby “create” while “select into” is called out because of its reliance on\nthe default.\n\nDavid J.\n\nOn Wednesday, July 31, 2024, David G. Johnston <david.g.johnston@gmail.com> wrote:On Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:\n\nThe same storage specification feature can actually be supported for\nCTAE (like CTAS but execute) and CREATE MATERIALIZED VIEW.\nOn a related note, the description here seems outdated. https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TABLE-ACCESS-METHODNevermind, re-reading it I see it is correct. The others are all covered by “create” while “select into” is called out because of its reliance on the default.David J.",
"msg_date": "Wed, 31 Jul 2024 00:27:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lack of possibility to specify CTAS TAM"
},
{
"msg_contents": "On Wed, 31 Jul 2024 at 12:15, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wednesday, July 31, 2024, Kirill Reshke <reshkekirill@gmail.com> wrote:\n>>\n>> I have noticed $subj while working with other unrelated patches.\n>> The question is, why there is no CREATE TABLE AS .... USING\n>> (some_access_method)?\n>\n>\n> The syntax is documented…\nMy bad.\nEverything is supported in core actually..\n\n\n",
"msg_date": "Wed, 31 Jul 2024 12:31:09 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lack of possibility to specify CTAS TAM"
}
] |
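For reference, a minimal SQL sketch of the storage-specification syntax discussed in the thread above, following the documented forms of CREATE TABLE AS and CREATE MATERIALIZED VIEW quoted there. The table, column, and source-relation names are hypothetical, and `heap` is simply the built-in default table access method:

```sql
-- CREATE TABLE AS with an explicit table access method.
CREATE TABLE sales_summary
    USING heap
    AS SELECT region, sum(amount) AS total
       FROM sales
       GROUP BY region;

-- CREATE MATERIALIZED VIEW accepts the same USING clause.
CREATE MATERIALIZED VIEW sales_summary_mv
    USING heap
    AS SELECT region, sum(amount) AS total
       FROM sales
       GROUP BY region;

-- Alternatively, the session default can be changed via the GUC
-- discussed in the thread.
SET default_table_access_method = heap;
```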
[
{
"msg_contents": "Hello hackers,\n\nYesterday, the buildfarm animal sungazer was benevolent enough to\ndemonstrate a rare anomaly, related to old stats collector:\ntest stats ... FAILED 469155 ms\n\n========================\n 1 of 212 tests failed.\n========================\n\n--- /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/expected/stats.out 2022-03-30 01:18:17.000000000 +0000\n+++ /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/results/stats.out 2024-07-30 09:49:39.000000000 +0000\n@@ -165,11 +165,11 @@\n WHERE relname like 'trunc_stats_test%' order by relname;\n relname | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup\n-------------------+-----------+-----------+-----------+------------+------------\n- trunc_stats_test | 3 | 0 | 0 | 0 | 0\n- trunc_stats_test1 | 4 | 2 | 1 | 1 | 0\n- trunc_stats_test2 | 1 | 0 | 0 | 1 | 0\n- trunc_stats_test3 | 4 | 0 | 0 | 2 | 2\n- trunc_stats_test4 | 2 | 0 | 0 | 0 | 2\n+ trunc_stats_test | 0 | 0 | 0 | 0 | 0\n+ trunc_stats_test1 | 0 | 0 | 0 | 0 | 0\n+ trunc_stats_test2 | 0 | 0 | 0 | 0 | 0\n+ trunc_stats_test3 | 0 | 0 | 0 | 0 | 0\n+ trunc_stats_test4 | 0 | 0 | 0 | 0 | 0\n...\n\ninst/logfile contains:\n2024-07-30 09:25:11.225 UTC [63307946:1] LOG: using stale statistics instead of current ones because stats collector is \nnot responding\n2024-07-30 09:25:11.345 UTC [11206724:559] pg_regress/create_index LOG: using stale statistics instead of current ones \nbecause stats collector is not responding\n...\n\nThat's not the only failure of that kind occurred on sungazer, there were\nalso [2] (REL_13_STABLE), [3] (REL_13_STABLE), [4] (REL_12_STABLE).\nMoreover, such failures were produced by all the other POWER7/AIX 7.1\nanimals: hornet ([5], [6]), tern ([7], [8]), mandrill ([9], [10], ...).\nBut I could not find such failures coming from POWER8 animals: hoverfly\n(running AIX 7200-04-03-2038), ayu, boa, chub, and I did not encounter such\nanomalies on x86 nor ARM platforms.\n\nThus, it looks like this stats collector issue is only happening on this\nconcrete platform, and given [11], I think such failures perhaps should\nbe just ignored for the next two years (until v14 EOL) unless AIX 7.1\nwill be upgraded and we see them on a vendor-supported OS version.\n\nSo I'm parking this information here just for reference.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-07-30%2003%3A49%3A35\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-02-09%2009%3A29%3A10\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2022-06-16%2009%3A52%3A47\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2023-12-13%2003%3A40%3A42\n[5] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-03-29%2005%3A27%3A09\n[6] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2024-03-19%2002%3A09%3A07\n[7] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2022-12-16%2009%3A17%3A38\n[8] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-04-01%2003%3A09%3A38\n[9] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-04-05%2004%3A22%3A17\n[10] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-07-12%2004%3A31%3A37\n[11] https://www.postgresql.org/message-id/3154146.1697661946%40sss.pgh.pa.us\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 31 Jul 2024 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "The stats.sql test is failing sporadically in v14- on POWER7/AIX 7.1\n buildfarm animals"
},
{
"msg_contents": "On Wed, Jul 31, 2024 at 02:00:00PM +0300, Alexander Lakhin wrote:\n> --- /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/expected/stats.out 2022-03-30 01:18:17.000000000 +0000\n> +++ /home/nm/farm/gcc64/REL_14_STABLE/pgsql.build/src/test/regress/results/stats.out 2024-07-30 09:49:39.000000000 +0000\n> @@ -165,11 +165,11 @@\n> �� WHERE relname like 'trunc_stats_test%' order by relname;\n> ������� relname����� | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup\n> -------------------+-----------+-----------+-----------+------------+------------\n> -� trunc_stats_test� |�������� 3 |�������� 0 |�������� 0 | 0 |��������� 0\n> -� trunc_stats_test1 |�������� 4 |�������� 2 |�������� 1 | 1 |��������� 0\n> -� trunc_stats_test2 |�������� 1 |�������� 0 |�������� 0 | 1 |��������� 0\n> -� trunc_stats_test3 |�������� 4 |�������� 0 |�������� 0 | 2 |��������� 2\n> -� trunc_stats_test4 |�������� 2 |�������� 0 |�������� 0 | 0 |��������� 2\n> +� trunc_stats_test� |�������� 0 |�������� 0 |�������� 0 | 0 |��������� 0\n> +� trunc_stats_test1 |�������� 0 |�������� 0 |�������� 0 | 0 |��������� 0\n> +� trunc_stats_test2 |�������� 0 |�������� 0 |�������� 0 | 0 |��������� 0\n> +� trunc_stats_test3 |�������� 0 |�������� 0 |�������� 0 | 0 |��������� 0\n> +� trunc_stats_test4 |�������� 0 |�������� 0 |�������� 0 | 0 |��������� 0\n> ...\n> \n> inst/logfile contains:\n> 2024-07-30 09:25:11.225 UTC [63307946:1] LOG:� using stale statistics\n> instead of current ones because stats collector is not responding\n> 2024-07-30 09:25:11.345 UTC [11206724:559] pg_regress/create_index LOG:�\n> using stale statistics instead of current ones because stats collector is\n> not responding\n> ...\n\n> I could not find such failures coming from POWER8 animals: hoverfly\n> (running AIX 7200-04-03-2038), ayu, boa, chub, and I did not encounter such\n> anomalies on x86 nor ARM platforms.\n\nThe animals you list as affected share a filesystem. The failure arises from\nthe slow filesystem metadata operations of that filesystem.\n\n> Thus, it looks like this stats collector issue is only happening on this\n> concrete platform, and given [11], I think such failures perhaps should\n> be just ignored for the next two years (until v14 EOL) unless AIX 7.1\n> will be upgraded and we see them on a vendor-supported OS version.\n\nThis has happened on non-POWER, I/O-constrained machines. Still, I have been\nignoring these failures. The stats subsystem was designed to drop stats\nupdates at times, which was always at odds with the need for stable tests. So\nthe failures witness a defect of the test, not a defect of the backend.\nStabilizing this test was a known benefit of the new stats implementation.\n\n\n",
"msg_date": "Wed, 31 Jul 2024 16:27:29 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: The stats.sql test is failing sporadically in v14- on POWER7/AIX\n 7.1 buildfarm animals"
}
] |
[
{
"msg_contents": "I've reached the limit of my understanding and attempts at correcting my\ncode/use of temporary tables in the face of multixact members and have come\nto ask for your help! Here's a brief description of my software;\n\nPool of N connection sessions, persistent for the duration of the program\nlifetime.\nUpon each session initialisation, a set of CREATE TEMPORARY TABLE ON COMMIT\nDELETE ROWS statements are made for bulk ingest.\nEach session is acquired by a thread for use when ingesting data and\ntherefore each temporary table remains until the session is terminated\nThe thread performs a COPY <temp table> FROM STDIN in binary format\nThen an INSERT INTO <main table> SELECT FROM <temp table> WHERE...\n\nThis has been working great for a while and with excellent throughput.\nHowever, upon scaling up I eventually hit this error;\n\nERROR: multixact \"members\" limit exceeded\nDETAIL: This command would create a multixact with 2 members, but the\nremaining space is only enough for 0 members.\nHINT: Execute a database-wide VACUUM in database with OID 16467 with\nreduced vacuum_multixact_freeze_min_age and\nvacuum_multixact_freeze_table_age settings.\n\nAnd it took me quite a while to identify that it appears to be coming from\nthe temporary table (the other 'main' tables were being autovacuumed OK) -\nwhich makes sense because they have a long lifetime, aren't auto vacuumed\nand shared by transactions (in turn).\n\nI first attempted to overcome this by introducing an initial step of always\ncreating the temporary table before the copy (and using on commit drop) but\nthis lead to a terrible performance degradation.\nNext, I reverted the above and instead I introduced a VACUUM step every\n1000000 (configurable) ingest operations\nFinally, I introduced a TRUNCATE step in addition to the occasional VACUUM\nsince the TRUNCATE allowed the COPY option of FREEZE.\n\nThe new overhead appears minimal until after several hours and again I've\nhit a performance degradation seemingly dominated by the TRUNCATE.\n\nMy questions are;\n\n1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the\ntemporary table)?\n2) Is there really any benefit to using FREEZE here or is it best to just\nVACUUM the temporary tables occasionally?\n3) Is there a better way of managing all this!? Perhaps re-CREATING the TT\nevery day or something?\n\nI understand that I can create a Linux tmpfs partition for a tablespace for\nthe temporary tables and that may speed up the TRUNCATE but that seems like\na hack and I'd rather not do it at all if it's avoidable.\n\nThanks for your help,\n\nJim\n\nPS. PG version in use is 15.4 if that matters here\n\n-- \nJim Vanns\nPrincipal Production Engineer\nIndustrial Light & Magic, London\n\nI've reached the limit of my understanding and attempts at correcting my code/use of temporary tables in the face of multixact members and have come to ask for your help! Here's a brief description of my software;Pool of N connection sessions, persistent for the duration of the program lifetime.Upon each session initialisation, a set of CREATE TEMPORARY TABLE ON COMMIT DELETE ROWS statements are made for bulk ingest.Each session is acquired by a thread for use when ingesting data and therefore each temporary table remains until the session is terminatedThe thread performs a COPY <temp table> FROM STDIN in binary formatThen an INSERT INTO <main table> SELECT FROM <temp table> WHERE...This has been working great for a while and with excellent throughput. 
However, upon scaling up I eventually hit this error;ERROR: multixact \"members\" limit exceededDETAIL: This command would create a multixact with 2 members, but the remaining space is only enough for 0 members.HINT: Execute a database-wide VACUUM in database with OID 16467 with reduced vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.And it took me quite a while to identify that it appears to be coming from the temporary table (the other 'main' tables were being autovacuumed OK) - which makes sense because they have a long lifetime, aren't auto vacuumed and shared by transactions (in turn).I first attempted to overcome this by introducing an initial step of always creating the temporary table before the copy (and using on commit drop) but this lead to a terrible performance degradation.Next, I reverted the above and instead I introduced a VACUUM step every 1000000 (configurable) ingest operationsFinally, I introduced a TRUNCATE step in addition to the occasional VACUUM since the TRUNCATE allowed the COPY option of FREEZE.The new overhead appears minimal until after several hours and again I've hit a performance degradation seemingly dominated by the TRUNCATE.My questions are;1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the temporary table)?2) Is there really any benefit to using FREEZE here or is it best to just VACUUM the temporary tables occasionally?3) Is there a better way of managing all this!? Perhaps re-CREATING the TT every day or something?I understand that I can create a Linux tmpfs partition for a tablespace for the temporary tables and that may speed up the TRUNCATE but that seems like a hack and I'd rather not do it at all if it's avoidable.Thanks for your help,JimPS. PG version in use is 15.4 if that matters here-- Jim VannsPrincipal Production EngineerIndustrial Light & Magic, London",
"msg_date": "Wed, 31 Jul 2024 15:16:45 +0100",
"msg_from": "Jim Vanns <jvanns@ilm.com>",
"msg_from_op": true,
"msg_subject": "Suggestions to overcome 'multixact \"members\" limit exceeded' in\n temporary tables"
},
{
"msg_contents": "I've been able to observe that the performance degradation with TRUNCATE\nappears to happen when other ancillary processes are running that are also\nheavy users of temporary tables. If I used an exclusive tablespace, would\nthat improve things?\n\nCheers\n\nJim\n\n\nOn Wed, 31 Jul 2024 at 15:16, Jim Vanns <jvanns@ilm.com> wrote:\n\n> I've reached the limit of my understanding and attempts at correcting my\n> code/use of temporary tables in the face of multixact members and have come\n> to ask for your help! Here's a brief description of my software;\n>\n> Pool of N connection sessions, persistent for the duration of the program\n> lifetime.\n> Upon each session initialisation, a set of CREATE TEMPORARY TABLE ON\n> COMMIT DELETE ROWS statements are made for bulk ingest.\n> Each session is acquired by a thread for use when ingesting data and\n> therefore each temporary table remains until the session is terminated\n> The thread performs a COPY <temp table> FROM STDIN in binary format\n> Then an INSERT INTO <main table> SELECT FROM <temp table> WHERE...\n>\n> This has been working great for a while and with excellent throughput.\n> However, upon scaling up I eventually hit this error;\n>\n> ERROR: multixact \"members\" limit exceeded\n> DETAIL: This command would create a multixact with 2 members, but the\n> remaining space is only enough for 0 members.\n> HINT: Execute a database-wide VACUUM in database with OID 16467 with\n> reduced vacuum_multixact_freeze_min_age and\n> vacuum_multixact_freeze_table_age settings.\n>\n> And it took me quite a while to identify that it appears to be coming from\n> the temporary table (the other 'main' tables were being autovacuumed OK) -\n> which makes sense because they have a long lifetime, aren't auto vacuumed\n> and shared by transactions (in turn).\n>\n> I first attempted to overcome this by introducing an initial step of\n> always creating the temporary table before the copy (and using on commit\n> drop) but this lead to a terrible performance degradation.\n> Next, I reverted the above and instead I introduced a VACUUM step every\n> 1000000 (configurable) ingest operations\n> Finally, I introduced a TRUNCATE step in addition to the occasional VACUUM\n> since the TRUNCATE allowed the COPY option of FREEZE.\n>\n> The new overhead appears minimal until after several hours and again I've\n> hit a performance degradation seemingly dominated by the TRUNCATE.\n>\n> My questions are;\n>\n> 1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the\n> temporary table)?\n> 2) Is there really any benefit to using FREEZE here or is it best to just\n> VACUUM the temporary tables occasionally?\n> 3) Is there a better way of managing all this!? Perhaps re-CREATING the TT\n> every day or something?\n>\n> I understand that I can create a Linux tmpfs partition for a tablespace\n> for the temporary tables and that may speed up the TRUNCATE but that seems\n> like a hack and I'd rather not do it at all if it's avoidable.\n>\n> Thanks for your help,\n>\n> Jim\n>\n> PS. PG version in use is 15.4 if that matters here\n>\n> --\n> Jim Vanns\n> Principal Production Engineer\n> Industrial Light & Magic, London\n>\n\n\n-- \nJim Vanns\nPrincipal Production Engineer\nIndustrial Light & Magic, London\n\nI've been able to observe that the performance degradation with TRUNCATE appears to happen when other ancillary processes are running that are also heavy users of temporary tables. 
If I used an exclusive tablespace, would that improve things?CheersJimOn Wed, 31 Jul 2024 at 15:16, Jim Vanns <jvanns@ilm.com> wrote:I've reached the limit of my understanding and attempts at correcting my code/use of temporary tables in the face of multixact members and have come to ask for your help! Here's a brief description of my software;Pool of N connection sessions, persistent for the duration of the program lifetime.Upon each session initialisation, a set of CREATE TEMPORARY TABLE ON COMMIT DELETE ROWS statements are made for bulk ingest.Each session is acquired by a thread for use when ingesting data and therefore each temporary table remains until the session is terminatedThe thread performs a COPY <temp table> FROM STDIN in binary formatThen an INSERT INTO <main table> SELECT FROM <temp table> WHERE...This has been working great for a while and with excellent throughput. However, upon scaling up I eventually hit this error;ERROR: multixact \"members\" limit exceededDETAIL: This command would create a multixact with 2 members, but the remaining space is only enough for 0 members.HINT: Execute a database-wide VACUUM in database with OID 16467 with reduced vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.And it took me quite a while to identify that it appears to be coming from the temporary table (the other 'main' tables were being autovacuumed OK) - which makes sense because they have a long lifetime, aren't auto vacuumed and shared by transactions (in turn).I first attempted to overcome this by introducing an initial step of always creating the temporary table before the copy (and using on commit drop) but this lead to a terrible performance degradation.Next, I reverted the above and instead I introduced a VACUUM step every 1000000 (configurable) ingest operationsFinally, I introduced a TRUNCATE step in addition to the occasional VACUUM since the TRUNCATE allowed the COPY option of FREEZE.The new overhead appears minimal until after several hours and again I've hit a performance degradation seemingly dominated by the TRUNCATE.My questions are;1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the temporary table)?2) Is there really any benefit to using FREEZE here or is it best to just VACUUM the temporary tables occasionally?3) Is there a better way of managing all this!? Perhaps re-CREATING the TT every day or something?I understand that I can create a Linux tmpfs partition for a tablespace for the temporary tables and that may speed up the TRUNCATE but that seems like a hack and I'd rather not do it at all if it's avoidable.Thanks for your help,JimPS. PG version in use is 15.4 if that matters here-- Jim VannsPrincipal Production EngineerIndustrial Light & Magic, London\n-- Jim VannsPrincipal Production EngineerIndustrial Light & Magic, London",
"msg_date": "Wed, 31 Jul 2024 16:41:51 +0100",
"msg_from": "Jim Vanns <jvanns@ilm.com>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions to overcome 'multixact \"members\" limit exceeded' in\n temporary tables"
},
{
"msg_contents": "(resending to general since I believe I originally sent it to hackers by\nmistake)\n\nI've reached the limit of my understanding and attempts at correcting my\ncode/use of temporary tables in the face of multixact members and have come\nto ask for your help! Here's a brief description of my software;\n\nPool of N connection sessions, persistent for the duration of the program\nlifetime.\nUpon each session initialisation, a set of CREATE TEMPORARY TABLE ON COMMIT\nDELETE ROWS statements are made for bulk ingest.\nEach session is acquired by a thread for use when ingesting data and\ntherefore each temporary table remains until the session is terminated\nThe thread performs a COPY <temp table> FROM STDIN in binary format\nThen an INSERT INTO <main table> SELECT FROM <temp table> WHERE...\n\nThis has been working great for a while and with excellent throughput.\nHowever, upon scaling up I eventually hit this error;\n\nERROR: multixact \"members\" limit exceeded\nDETAIL: This command would create a multixact with 2 members, but the\nremaining space is only enough for 0 members.\nHINT: Execute a database-wide VACUUM in database with OID 16467 with\nreduced vacuum_multixact_freeze_min_age and\nvacuum_multixact_freeze_table_age settings.\n\nAnd it took me quite a while to identify that it appears to be coming from\nthe temporary table (the other 'main' tables were being autovacuumed OK) -\nwhich makes sense because they have a long lifetime, aren't auto vacuumed\nand shared by transactions (in turn).\n\nI first attempted to overcome this by introducing an initial step of always\ncreating the temporary table before the copy (and using on commit drop) but\nthis lead to a terrible performance degradation.\nNext, I reverted the above and instead I introduced a VACUUM step every\n1000000 (configurable) ingest operations\nFinally, I introduced a TRUNCATE step in addition to the occasional VACUUM\nsince the TRUNCATE allowed the COPY option of FREEZE.\n\nThe new overhead appears minimal until after several hours and again I've\nhit a performance degradation seemingly dominated by the TRUNCATE.\n\nMy questions are;\n\n1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the\ntemporary table)?\n2) Is there really any benefit to using FREEZE here or is it best to just\nVACUUM the temporary tables occasionally?\n3) Is there a better way of managing all this!? Perhaps re-CREATING the TT\nevery day or something?\n\nI understand that I can create a Linux tmpfs partition for a tablespace for\nthe temporary tables and that may speed up the TRUNCATE but that seems like\na hack and I'd rather not do it at all if it's avoidable.\n\nThanks for your help,\n\nJim\n\nPS. PG version in use is 15.4 if that matters here\n\n-- \nJim Vanns\nPrincipal Production Engineer\nIndustrial Light & Magic, London\n\n(resending to general since I believe I originally sent it to hackers by mistake)I've reached the limit of my understanding and attempts at correcting my code/use of temporary tables in the face of multixact members and have come to ask for your help! 
Here's a brief description of my software;Pool of N connection sessions, persistent for the duration of the program lifetime.Upon each session initialisation, a set of CREATE TEMPORARY TABLE ON COMMIT DELETE ROWS statements are made for bulk ingest.Each session is acquired by a thread for use when ingesting data and therefore each temporary table remains until the session is terminatedThe thread performs a COPY <temp table> FROM STDIN in binary formatThen an INSERT INTO <main table> SELECT FROM <temp table> WHERE...This has been working great for a while and with excellent throughput. However, upon scaling up I eventually hit this error;ERROR: multixact \"members\" limit exceededDETAIL: This command would create a multixact with 2 members, but the remaining space is only enough for 0 members.HINT: Execute a database-wide VACUUM in database with OID 16467 with reduced vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.And it took me quite a while to identify that it appears to be coming from the temporary table (the other 'main' tables were being autovacuumed OK) - which makes sense because they have a long lifetime, aren't auto vacuumed and shared by transactions (in turn).I first attempted to overcome this by introducing an initial step of always creating the temporary table before the copy (and using on commit drop) but this lead to a terrible performance degradation.Next, I reverted the above and instead I introduced a VACUUM step every 1000000 (configurable) ingest operationsFinally, I introduced a TRUNCATE step in addition to the occasional VACUUM since the TRUNCATE allowed the COPY option of FREEZE.The new overhead appears minimal until after several hours and again I've hit a performance degradation seemingly dominated by the TRUNCATE.My questions are;1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the temporary table)?2) Is there really any benefit to using FREEZE here or is it best to just VACUUM the temporary tables occasionally?3) Is there a better way of managing all this!? Perhaps re-CREATING the TT every day or something?I understand that I can create a Linux tmpfs partition for a tablespace for the temporary tables and that may speed up the TRUNCATE but that seems like a hack and I'd rather not do it at all if it's avoidable.Thanks for your help,JimPS. PG version in use is 15.4 if that matters here-- Jim VannsPrincipal Production EngineerIndustrial Light & Magic, London",
"msg_date": "Wed, 31 Jul 2024 19:27:13 +0100",
"msg_from": "Jim Vanns <jvanns@ilm.com>",
"msg_from_op": true,
"msg_subject": "Fwd: Suggestions to overcome 'multixact \"members\" limit exceeded' in\n temporary tables"
},
{
"msg_contents": "I've been able to observe that the performance degradation with TRUNCATE\nappears to happen when other ancillary processes are running that are also\nheavy users of temporary tables. If I used an exclusive tablespace, would\nthat improve things?\n\nCheers\n\nJim\n\nOn Wed, 31 Jul 2024 at 19:27, Jim Vanns <jvanns@ilm.com> wrote:\n\n> (resending to general since I believe I originally sent it to hackers by\n> mistake)\n>\n> I've reached the limit of my understanding and attempts at correcting my\n> code/use of temporary tables in the face of multixact members and have come\n> to ask for your help! Here's a brief description of my software;\n>\n> Pool of N connection sessions, persistent for the duration of the program\n> lifetime.\n> Upon each session initialisation, a set of CREATE TEMPORARY TABLE ON\n> COMMIT DELETE ROWS statements are made for bulk ingest.\n> Each session is acquired by a thread for use when ingesting data and\n> therefore each temporary table remains until the session is terminated\n> The thread performs a COPY <temp table> FROM STDIN in binary format\n> Then an INSERT INTO <main table> SELECT FROM <temp table> WHERE...\n>\n> This has been working great for a while and with excellent throughput.\n> However, upon scaling up I eventually hit this error;\n>\n> ERROR: multixact \"members\" limit exceeded\n> DETAIL: This command would create a multixact with 2 members, but the\n> remaining space is only enough for 0 members.\n> HINT: Execute a database-wide VACUUM in database with OID 16467 with\n> reduced vacuum_multixact_freeze_min_age and\n> vacuum_multixact_freeze_table_age settings.\n>\n> And it took me quite a while to identify that it appears to be coming from\n> the temporary table (the other 'main' tables were being autovacuumed OK) -\n> which makes sense because they have a long lifetime, aren't auto vacuumed\n> and shared by transactions (in turn).\n>\n> I first attempted to overcome this by introducing an initial step of\n> always creating the temporary table before the copy (and using on commit\n> drop) but this lead to a terrible performance degradation.\n> Next, I reverted the above and instead I introduced a VACUUM step every\n> 1000000 (configurable) ingest operations\n> Finally, I introduced a TRUNCATE step in addition to the occasional VACUUM\n> since the TRUNCATE allowed the COPY option of FREEZE.\n>\n> The new overhead appears minimal until after several hours and again I've\n> hit a performance degradation seemingly dominated by the TRUNCATE.\n>\n> My questions are;\n>\n> 1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the\n> temporary table)?\n> 2) Is there really any benefit to using FREEZE here or is it best to just\n> VACUUM the temporary tables occasionally?\n> 3) Is there a better way of managing all this!? Perhaps re-CREATING the TT\n> every day or something?\n>\n> I understand that I can create a Linux tmpfs partition for a tablespace\n> for the temporary tables and that may speed up the TRUNCATE but that seems\n> like a hack and I'd rather not do it at all if it's avoidable.\n>\n> Thanks for your help,\n>\n> Jim\n>\n> PS. PG version in use is 15.4 if that matters here\n>\n> --\n> Jim Vanns\n> Principal Production Engineer\n> Industrial Light & Magic, London\n>\n\nI've been able to observe that the performance degradation with TRUNCATE appears to happen when other ancillary processes are running that are also heavy users of temporary tables. 
If I used an exclusive tablespace, would that improve things?CheersJimOn Wed, 31 Jul 2024 at 19:27, Jim Vanns <jvanns@ilm.com> wrote:(resending to general since I believe I originally sent it to hackers by mistake)I've reached the limit of my understanding and attempts at correcting my code/use of temporary tables in the face of multixact members and have come to ask for your help! Here's a brief description of my software;Pool of N connection sessions, persistent for the duration of the program lifetime.Upon each session initialisation, a set of CREATE TEMPORARY TABLE ON COMMIT DELETE ROWS statements are made for bulk ingest.Each session is acquired by a thread for use when ingesting data and therefore each temporary table remains until the session is terminatedThe thread performs a COPY <temp table> FROM STDIN in binary formatThen an INSERT INTO <main table> SELECT FROM <temp table> WHERE...This has been working great for a while and with excellent throughput. However, upon scaling up I eventually hit this error;ERROR: multixact \"members\" limit exceededDETAIL: This command would create a multixact with 2 members, but the remaining space is only enough for 0 members.HINT: Execute a database-wide VACUUM in database with OID 16467 with reduced vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.And it took me quite a while to identify that it appears to be coming from the temporary table (the other 'main' tables were being autovacuumed OK) - which makes sense because they have a long lifetime, aren't auto vacuumed and shared by transactions (in turn).I first attempted to overcome this by introducing an initial step of always creating the temporary table before the copy (and using on commit drop) but this lead to a terrible performance degradation.Next, I reverted the above and instead I introduced a VACUUM step every 1000000 (configurable) ingest operationsFinally, I introduced a TRUNCATE step in addition to the occasional VACUUM since the TRUNCATE allowed the COPY option of FREEZE.The new overhead appears minimal until after several hours and again I've hit a performance degradation seemingly dominated by the TRUNCATE.My questions are;1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the temporary table)?2) Is there really any benefit to using FREEZE here or is it best to just VACUUM the temporary tables occasionally?3) Is there a better way of managing all this!? Perhaps re-CREATING the TT every day or something?I understand that I can create a Linux tmpfs partition for a tablespace for the temporary tables and that may speed up the TRUNCATE but that seems like a hack and I'd rather not do it at all if it's avoidable.Thanks for your help,JimPS. PG version in use is 15.4 if that matters here-- Jim VannsPrincipal Production EngineerIndustrial Light & Magic, London",
"msg_date": "Wed, 31 Jul 2024 19:27:50 +0100",
"msg_from": "Jim Vanns <jvanns@ilm.com>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions to overcome 'multixact \"members\" limit exceeded' in\n temporary tables"
}
] |
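As an illustration only, here is a minimal SQL sketch of the ingest pattern described in the thread above (a long-lived temporary staging table per session, a TRUNCATE at the start of each batch so that COPY ... FREEZE is permitted, then an insert into the main table). The table and column names are hypothetical, and the occasional VACUUM of the staging table is the workaround the poster describes, not a general recommendation:

```sql
-- Once per pooled session: a long-lived staging table for bulk ingest.
CREATE TEMPORARY TABLE staging (
    id      bigint,
    payload text
) ON COMMIT DELETE ROWS;

-- Per ingest batch, all in one transaction:
BEGIN;
-- Truncating in the same transaction makes the table eligible for COPY FREEZE.
TRUNCATE staging;
COPY staging FROM STDIN WITH (FORMAT binary, FREEZE);
INSERT INTO main_table (id, payload)
SELECT id, payload
FROM staging
WHERE id IS NOT NULL;  -- whatever filtering the real workload applies
COMMIT;

-- Occasionally (every N batches), as described in the thread:
VACUUM staging;
```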
[
{
"msg_contents": "Hello hackers,\n\nI've repeated the performance measurement for REL_17_STABLE (1e020258e)\nand REL_16_STABLE (6f6b0f193) and found several benchmarks where v16 is\nsignificantly better than v17. Please find attached an html table with\nall the benchmarking results.\n\nI had payed attention to:\nBest pg-src-17--.* worse than pg-src-16--.* by 57.9 percents (225.11 > 142.52): pg_tpcds.query15\nAverage pg-src-17--.* worse than pg-src-16--.* by 55.5 percents (230.57 > 148.29): pg_tpcds.query15\nin May, performed `git bisect` for this degradation, that led me to commit\nb7b0f3f27 [1].\n\nThis time I bisected the following anomaly:\nBest pg-src-17--.* worse than pg-src-16--.* by 23.6 percents (192.25 > 155.58): pg_tpcds.query21\nAverage pg-src-17--.* worse than pg-src-16--.* by 25.1 percents (196.19 > 156.85): pg_tpcds.query21\nand to my surprise I got \"b7b0f3f27 is the first bad commit\".\n\nMoreover, bisecting of another anomaly:\nBest pg-src-17--.* worse than pg-src-16--.* by 24.2 percents (24269.21 > 19539.89): pg_tpcds.query72\nAverage pg-src-17--.* worse than pg-src-16--.* by 24.2 percents (24517.66 > 19740.12): pg_tpcds.query72\npointed at the same commit again.\n\nSo it looks like q15 from TPC-DS is not the only query suffering from that\nchange.\n\nBut beside that, I've found a separate regression. Bisecting for this degradation:\nBest pg-src-17--.* worse than pg-src-16--.* by 105.0 percents (356.63 > 173.96): s64da_tpcds.query95\nAverage pg-src-17--.* worse than pg-src-16--.* by 105.2 percents (357.79 > 174.38): s64da_tpcds.query95\npointed at f7816aec2.\n\nDoes this deserve more analysis and maybe fixing?\n\n[1] https://www.postgresql.org/message-id/63a63690-dd92-c809-0b47-af05459e95d1%40gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Thu, 1 Aug 2024 06:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "v17 vs v16 performance comparison"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> I've repeated the performance measurement for REL_17_STABLE (1e020258e)\n> and REL_16_STABLE (6f6b0f193) and found several benchmarks where v16 is\n> significantly better than v17. Please find attached an html table with\n> all the benchmarking results.\n\nThanks for doing that!\n\nI have no opinion about b7b0f3f27, but as far as this goes:\n\n> But beside that, I've found a separate regression. Bisecting for this degradation:\n> Best pg-src-17--.* worse than pg-src-16--.* by 105.0 percents (356.63 > 173.96): s64da_tpcds.query95\n> Average pg-src-17--.* worse than pg-src-16--.* by 105.2 percents (357.79 > 174.38): s64da_tpcds.query95\n> pointed at f7816aec2.\n\nI'm not terribly concerned about that. The nature of planner changes\nlike that is that some queries will get worse and some better, because\nthe statistics and cost estimates we're dealing with are not perfect.\nIt is probably worth drilling down into that test case to understand\nwhere the planner is going wrong, with an eye to future improvements;\nbut I doubt it's something we need to address for v17.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 31 Jul 2024 23:41:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: v17 vs v16 performance comparison"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 3:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> So it looks like q15 from TPC-DS is not the only query suffering from that\n> change.\n\nI'm going to try to set up a local repro to study these new cases. If\nyou have a write-up somewhere of how exactly you run that, that'd be\nuseful.\n\n\n",
"msg_date": "Thu, 1 Aug 2024 17:57:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: v17 vs v16 performance comparison"
},
{
"msg_contents": "Hello Thomas.\n\n01.08.2024 08:57, Thomas Munro wrote:\n> On Thu, Aug 1, 2024 at 3:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> So it looks like q15 from TPC-DS is not the only query suffering from that\n>> change.\n> I'm going to try to set up a local repro to study these new cases. If\n> you have a write-up somewhere of how exactly you run that, that'd be\n> useful.\n\nI'm using this instrumentation (on my Ubuntu 22.04 workstation):\nhttps://github.com/alexanderlaw/pg-mark.git\nREADME.md can probably serve as a such write-up.\n\nIf you install all the prerequisites (some tests, including pg_tpcds,\nrequire downloading additional resources; run-benchmarks.py will ask to\ndo that), there should be no problems with running benchmarks.\n\nI just added two instances to config.xml:\n <instance id=\"pg-src-16\" type=\"src\" pg_version=\"16devel\" git_branch=\"REL_16_STABLE\" />\n <instance id=\"pg-src-17\" type=\"src\" pg_version=\"17devel\" git_branch=\"REL_17_STABLE\" />\nand ran\n1)\n./prepare-instances.py -i pg-src-16 pg-src-17\n\n2)\ntime ./run-benchmarks.py -i pg-src-16 pg-src-17 pg-src-16 pg-src-17 pg-src-17 pg-src-16\n(it took 1045m55,215s on my machine so you may prefer to choose the single\nbenchmark (-b pg_tpcds or maybe s64da_tpcds))\n\n3)\n./analyze-benchmarks.py -i 'pg-src-17--.*' 'pg-src-16--.*'\n\nAll the upper-level commands to run benchmarks are contained in config.xml,\nso you can just execute them separately, but my instrumentation eases\nprocessing of the results by creating one unified benchmark-results.xml.\n\nPlease feel free to ask any questions or give your feedback.\n\nThank you for paying attention to this!\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 1 Aug 2024 10:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: v17 vs v16 performance comparison"
},
{
"msg_contents": "01.08.2024 06:41, Tom Lane wrote:\n>\n>> But beside that, I've found a separate regression. Bisecting for this degradation:\n>> Best pg-src-17--.* worse than pg-src-16--.* by 105.0 percents (356.63 > 173.96): s64da_tpcds.query95\n>> Average pg-src-17--.* worse than pg-src-16--.* by 105.2 percents (357.79 > 174.38): s64da_tpcds.query95\n>> pointed at f7816aec2.\n> I'm not terribly concerned about that. The nature of planner changes\n> like that is that some queries will get worse and some better, because\n> the statistics and cost estimates we're dealing with are not perfect.\n> It is probably worth drilling down into that test case to understand\n> where the planner is going wrong, with an eye to future improvements;\n> but I doubt it's something we need to address for v17.\n\nPlease find attached two plans for that query [1].\n(I repeated the benchmark for f7816aec2 and f7816aec2~1 five times and\nmade sure that both plans are stable.)\n\nMeanwhile I've bisected another degradation:\nBest pg-src-17--.* worse than pg-src-16--.* by 11.3 percents (7.17 > 6.44): job.query6f\nand came to the commit b7b0f3f27 again.\n\n[1] https://github.com/swarm64/s64da-benchmark-toolkit/blob/master/benchmarks/tpcds/queries/queries_10/95.sql\n\nBest regards,\nAlexander",
"msg_date": "Fri, 2 Aug 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: v17 vs v16 performance comparison"
},
{
"msg_contents": "Hello Thomas,\n\n02.08.2024 12:00, Alexander Lakhin wrote:\n\n>\n>>\n>>>\n>>> I had payed attention to:\n>>> Best pg-src-17--.* worse than pg-src-16--.* by 57.9 percents (225.11 > 142.52): pg_tpcds.query15\n>>> Average pg-src-17--.* worse than pg-src-16--.* by 55.5 percents (230.57 > 148.29): pg_tpcds.query15\n>>> in May, performed `git bisect` for this degradation, that led me to commit\n>>> b7b0f3f27 [1].\n>>>\n>>> This time I bisected the following anomaly:\n>>> Best pg-src-17--.* worse than pg-src-16--.* by 23.6 percents (192.25 > 155.58): pg_tpcds.query21\n>>> Average pg-src-17--.* worse than pg-src-16--.* by 25.1 percents (196.19 > 156.85): pg_tpcds.query21\n>>> and to my surprise I got \"b7b0f3f27 is the first bad commit\".\n>>>\n>>> Moreover, bisecting of another anomaly:\n>>> Best pg-src-17--.* worse than pg-src-16--.* by 24.2 percents (24269.21 > 19539.89): pg_tpcds.query72\n>>> Average pg-src-17--.* worse than pg-src-16--.* by 24.2 percents (24517.66 > 19740.12): pg_tpcds.query72\n>>> pointed at the same commit again.\n>>>\n>>> ...\n>>>\n>>> But beside that, I've found a separate regression. Bisecting for this degradation:\n>>> Best pg-src-17--.* worse than pg-src-16--.* by 105.0 percents (356.63 > 173.96): s64da_tpcds.query95\n>>> Average pg-src-17--.* worse than pg-src-16--.* by 105.2 percents (357.79 > 174.38): s64da_tpcds.query95\n>>> pointed at f7816aec2.\n>>\n>\n> Meanwhile I've bisected another degradation:\n> Best pg-src-17--.* worse than pg-src-16--.* by 11.3 percents (7.17 > 6.44): job.query6f\n> and came to the commit b7b0f3f27 again.\n\nNow that the unfairness in all-cached parallel seq scan eliminated (with\n3ed3683618), I've re-run the same performance tests and got new results\n(please see attached). As we can see, the aforementioned pg_tpcds.query72\ngot better:\n 2024-05-15 2024-07-30 2024-09-03\npg-src-16--1 20492.58 19669.34 19913.32\npg-src-17--1 25286.10 24269.21 20654.95\npg-src-16--2 20769.88 19539.89 20429.72\npg-src-17--2 25771.90 24530.51 21244.92\npg-src-17--3 25978.55 24753.25 20904.09\npg-src-16--3 20943.10 20011.13 20086.61\n\nWe can also see the improvement of pg_tpcds.query16, but not on all runs:\n 2024-05-15 2024-07-30 2024-09-03\npg-src-16--1 105.36 94.31 97.74\npg-src-17--1 145.74 145.53 145.51\npg-src-16--2 101.82 98.36 96.63\npg-src-17--2 140.07 146.90 96.93\npg-src-17--3 154.89 148.11 106.18\npg-src-16--3 101.03 100.94 93.44\n\nSo it looks like now we see the same instability, that we observed before\n([1]).\n\nUnfortunately, the troublesome tpcds.query15 hasn't produced good numbers\nthis time too:\n 2024-05-15 2024-07-30 2024-09-03\npg-src-16--1 153.41 142.52 142.54\npg-src-17--1 229.84 225.11 212.51\npg-src-16--2 153.47 150.13 149.37\npg-src-17--2 236.34 227.15 232.73\npg-src-17--3 236.43 239.46 233.77\npg-src-16--3 151.03 152.23 144.90\n\n From a bird's eye view, new v17-vs-v16 comparison has only 87 \"worse\",\nwhile the previous one had 115 (it requires deeper analysis, of course, but\nstill...).\n\n[1] https://www.postgresql.org/message-id/d1fb5c09-dd03-2540-9ec2-86dbfdfa2c65%40gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Tue, 3 Sep 2024 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: v17 vs v16 performance comparison"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> From a bird's eye view, new v17-vs-v16 comparison has only 87 \"worse\",\n> while the previous one had 115 (it requires deeper analysis, of course, but\n> still...).\n\nAny chance you could share that whole pgdata dir with me, assuming it\ncompresses to a manageable size? Perhaps we could discuss that\noff-list?\n\n\n",
"msg_date": "Tue, 3 Sep 2024 18:21:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: v17 vs v16 performance comparison"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are seeing a gradual growth in the memory consumption of our process on\nWindows. Ours is a C++ application that directly loads libpq.dll and\nhandles the queries and functions. We use setSingleRowMethod to limit the\nnumber of rows returned simultaneously to the application. We do not\nobserve any memory increase when the application is run on Linux. There is\nno code difference between Windows and Linux from the\napplication standpoint. We ran valgrind against our application on Linux\nand found no memory leaks. Since the same code is being used on Windows as\nwell, we do not suspect any memory leak there. The question is if there\nare any known memory leaks with the version of the library we are using on\nWindows. Kindly let us know.\n\nThe version of the library on Linux is libpq.so.5.16\n\nThe windows version of the library is 16.0.3.0\n\n\n[image: image.png]\n\nThanks,\nRajesh",
"msg_date": "Thu, 1 Aug 2024 12:36:36 +0530",
"msg_from": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Memory growth observed with C++ application consuming libpq.dll on\n Windows"
},
{
"msg_contents": "Hi Rajesh,\n\nCan you please attach a sample code snippet showing libpq's functions being\ncalled? It will help to identify the libpq's functions to investigate\nfurther for a potential mem leak.\n\nRegards...\n\nYasir Hussain\n\nOn Thu, Aug 1, 2024 at 4:30 PM Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>\nwrote:\n\n> Hi,\n>\n> We are seeing a gradual growth in the memory consumption of our process on\n> Windows. Ours is a C++ application that directly loads libpq.dll and\n> handles the queries and functions. We use setSingleRowMethod to limit the\n> number of rows returned simultaneously to the application. We do not\n> observe any memory increase when the application is run on Linux. There is\n> no code difference between Windows and Linux from the\n> application standpoint. We ran valgrind against our application on Linux\n> and found no memory leaks. Since the same code is being used on Windows as\n> well, we do not suspect any memory leak there. The question is if there\n> are any known memory leaks with the version of the library we are using on\n> Windows. Kindly let us know.\n>\n> The version of the library on Linux is libpq.so.5.16\n>\n> The windows version of the library is 16.0.3.0\n>\n>\n> [image: image.png]\n>\n> Thanks,\n> Rajesh\n>",
"msg_date": "Thu, 1 Aug 2024 16:57:04 +0500",
"msg_from": "Yasir <yasir.hussain.shah@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "Hi Yasir,\n\nAre you looking for a fully functional sample program or only the APIs from\nlibpq library that our product uses? I am asking this because if the\nrequirement is to have a sample code, then I will have to work on creating\none on the same lines as our product.\n\nRajesh\n\nOn Thu, Aug 1, 2024 at 5:27 PM Yasir <yasir.hussain.shah@gmail.com> wrote:\n\n> Hi Rajesh,\n>\n> Can you please attach a sample code snippet showing libpq's functions\n> being called? It will help to identify the libpq's functions to investigate\n> further for a potential mem leak.\n>\n> Regards...\n>\n> Yasir Hussain\n>\n> On Thu, Aug 1, 2024 at 4:30 PM Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>\n> wrote:\n>\n>> Hi,\n>>\n>> We are seeing a gradual growth in the memory consumption of our process\n>> on Windows. Ours is a C++ application that directly loads libpq.dll and\n>> handles the queries and functions. We use setSingleRowMethod to limit the\n>> number of rows returned simultaneously to the application. We do not\n>> observe any memory increase when the application is run on Linux. There is\n>> no code difference between Windows and Linux from the\n>> application standpoint. We ran valgrind against our application on Linux\n>> and found no memory leaks. Since the same code is being used on Windows as\n>> well, we do not suspect any memory leak there. The question is if there\n>> are any known memory leaks with the version of the library we are using on\n>> Windows. Kindly let us know.\n>>\n>> The version of the library on Linux is libpq.so.5.16\n>>\n>> The windows version of the library is 16.0.3.0\n>>\n>>\n>> [image: image.png]\n>>\n>> Thanks,\n>> Rajesh\n>>\n>",
"msg_date": "Fri, 2 Aug 2024 14:23:00 +0530",
"msg_from": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 1:53 PM Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>\nwrote:\n\n> Hi Yasir,\n>\n> Are you looking for a fully functional sample program or only the APIs\n> from libpq library that our product uses? I am asking this because if the\n> requirement is to have a sample code, then I will have to work on creating\n> one on the same lines as our product.\n>\n>\nA functional sample is always best and preferred, however, only APIs used\nby your product would also be sufficient.\n\nRajesh\n>\n> On Thu, Aug 1, 2024 at 5:27 PM Yasir <yasir.hussain.shah@gmail.com> wrote:\n>\n>> Hi Rajesh,\n>>\n>> Can you please attach a sample code snippet showing libpq's functions\n>> being called? It will help to identify the libpq's functions to investigate\n>> further for a potential mem leak.\n>>\n>> Regards...\n>>\n>> Yasir Hussain\n>>\n>> On Thu, Aug 1, 2024 at 4:30 PM Rajesh Kokkonda <\n>> rajeshk.kokkonda@gmail.com> wrote:\n>>\n>>> Hi,\n>>>\n>>> We are seeing a gradual growth in the memory consumption of our process\n>>> on Windows. Ours is a C++ application that directly loads libpq.dll and\n>>> handles the queries and functions. We use setSingleRowMethod to limit the\n>>> number of rows returned simultaneously to the application. We do not\n>>> observe any memory increase when the application is run on Linux. There is\n>>> no code difference between Windows and Linux from the\n>>> application standpoint. We ran valgrind against our application on Linux\n>>> and found no memory leaks. Since the same code is being used on Windows as\n>>> well, we do not suspect any memory leak there. The question is if there\n>>> are any known memory leaks with the version of the library we are using on\n>>> Windows. Kindly let us know.\n>>>\n>>> The version of the library on Linux is libpq.so.5.16\n>>>\n>>> The windows version of the library is 16.0.3.0\n>>>\n>>>\n>>> [image: image.png]\n>>>\n>>> Thanks,\n>>> Rajesh\n>>>\n>>",
"msg_date": "Fri, 2 Aug 2024 16:36:35 +0500",
"msg_from": "Yasir <yasir.hussain.shah@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "Okay. I will try to create one sample program and send it to you sometime\nnext week. In the meantime, I am listing down all the methods we are\nconsuming from libpq.\n\nPQconnectdbParams\nPQstatus\nPQerrorMessage\nPQpingParams\nPQfinish\nPQresultStatus\nPQclear\nPQsetSingleRowMode\nPQntuples\nPQnfields\nPQftype\nPQgetvalue\nPQgetlength\nPQgetisnull\nPQgetCancel\nPQfreeCancel\nPQcancel\nPQsetErrorVerbosity\nPQsendPrepare\nPQsendQueryPrepared\nPQgetResult\nPQconsumeInput\nPQisBusy\nPQsetnonblocking\nPQflush\nPQsocket\nPQtransactionStatus\nPQresultErrorField\n\nRegards,\nRajesh\n\nOn Fri, Aug 2, 2024 at 5:06 PM Yasir <yasir.hussain.shah@gmail.com> wrote:\n\n>\n> On Fri, Aug 2, 2024 at 1:53 PM Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>\n> wrote:\n>\n>> Hi Yasir,\n>>\n>> Are you looking for a fully functional sample program or only the APIs\n>> from libpq library that our product uses? I am asking this because if the\n>> requirement is to have a sample code, then I will have to work on creating\n>> one on the same lines as our product.\n>>\n>>\n> A functional sample is always best and preferred, however, only APIs used\n> by your product would also be sufficient.\n>\n> Rajesh\n>>\n>> On Thu, Aug 1, 2024 at 5:27 PM Yasir <yasir.hussain.shah@gmail.com>\n>> wrote:\n>>\n>>> Hi Rajesh,\n>>>\n>>> Can you please attach a sample code snippet showing libpq's functions\n>>> being called? It will help to identify the libpq's functions to investigate\n>>> further for a potential mem leak.\n>>>\n>>> Regards...\n>>>\n>>> Yasir Hussain\n>>>\n>>> On Thu, Aug 1, 2024 at 4:30 PM Rajesh Kokkonda <\n>>> rajeshk.kokkonda@gmail.com> wrote:\n>>>\n>>>> Hi,\n>>>>\n>>>> We are seeing a gradual growth in the memory consumption of our process\n>>>> on Windows. Ours is a C++ application that directly loads libpq.dll and\n>>>> handles the queries and functions. We use setSingleRowMethod to limit the\n>>>> number of rows returned simultaneously to the application. We do not\n>>>> observe any memory increase when the application is run on Linux. There is\n>>>> no code difference between Windows and Linux from the\n>>>> application standpoint. We ran valgrind against our application on Linux\n>>>> and found no memory leaks. Since the same code is being used on Windows as\n>>>> well, we do not suspect any memory leak there. The question is if there\n>>>> are any known memory leaks with the version of the library we are using on\n>>>> Windows. Kindly let us know.\n>>>>\n>>>> The version of the library on Linux is libpq.so.5.16\n>>>>\n>>>> The windows version of the library is 16.0.3.0\n>>>>\n>>>>\n>>>> [image: image.png]\n>>>>\n>>>> Thanks,\n>>>> Rajesh\n>>>>\n>>>",
"msg_date": "Fri, 2 Aug 2024 17:15:39 +0530",
"msg_from": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com> writes:\n> Are you looking for a fully functional sample program or only the APIs from\n> libpq library that our product uses? I am asking this because if the\n> requirement is to have a sample code, then I will have to work on creating\n> one on the same lines as our product.\n\nJust for the record, the last field-reported memory leak in libpq\nwas found/fixed in 2020, and it occurred only when using GSSAPI\nencryption. Previous reports weren't frequent either. So while\nit may be that you've found one, it seems far more likely that the\nfault is in your application. In any case, nobody is likely to\nspend time looking for a bug that may not be there unless you can\nproduce a self-contained test case demonstrating a leak.\n\nIf we had a test case, the first thing we'd likely do would be\nto run it under Valgrind, to see if automated analysis is enough\nto locate the logic fault. So an alternative you could consider\nbefore trying to extract a test case is to run your app under\nValgrind for yourself. As a bonus, that has a decent shot at\nlocating the fault whether it's ours or yours. I'm not sure\nif Valgrind is available for Windows though --- can you easily\nput the app on a different platform?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 09:53:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "Em sex., 2 de ago. de 2024 às 11:54, Rajesh Kokkonda <\nrajeshk.kokkonda@gmail.com> escreveu:\n\n> Okay. I will try to create one sample program and send it to you sometime\n> next week. In the meantime, I am listing down all the methods we are\n> consuming from libpq.\n>\n> PQconnectdbParams\n> PQstatus\n> PQerrorMessage\n> PQpingParams\n> PQfinish\n> PQresultStatus\n> PQclear\n> PQsetSingleRowMode\n> PQntuples\n> PQnfields\n> PQftype\n> PQgetvalue\n> PQgetlength\n> PQgetisnull\n> PQgetCancel\n> PQfreeCancel\n> PQcancel\n> PQsetErrorVerbosity\n> PQsendPrepare\n> PQsendQueryPrepared\n> PQgetResult\n> PQconsumeInput\n> PQisBusy\n> PQsetnonblocking\n> PQflush\n> PQsocket\n> PQtransactionStatus\n> PQresultErrorField\n>\n> It is highly likely that the memory consumption is caused by your\napplication.\nPerhaps due to the lack of freeing up the resources used by the library.\nYou can try using this tool, to find out the root cause.\n\nhttps://drmemory.org/\n\nbest regards,\nRanier Vilela\n\nEm sex., 2 de ago. de 2024 às 11:54, Rajesh Kokkonda <rajeshk.kokkonda@gmail.com> escreveu:Okay. I will try to create one sample program and send it to you sometime next week. In the meantime, I am listing down all the methods we are consuming from libpq.PQconnectdbParamsPQstatusPQerrorMessagePQpingParamsPQfinishPQresultStatusPQclearPQsetSingleRowModePQntuplesPQnfieldsPQftypePQgetvaluePQgetlengthPQgetisnullPQgetCancelPQfreeCancelPQcancelPQsetErrorVerbosityPQsendPreparePQsendQueryPreparedPQgetResultPQconsumeInputPQisBusyPQsetnonblockingPQflushPQsocketPQtransactionStatusPQresultErrorFieldIt is highly likely that the memory consumption is caused by your application.Perhaps due to the lack of freeing up the resources used by the library.You can try using this tool, to find out the root cause.https://drmemory.org/best regards,Ranier Vilela",
"msg_date": "Fri, 2 Aug 2024 13:15:07 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "We did run our application under valgrind on Linux. We did not see any\nleaks. There is no platform dependent code in our application. We are\nseeing gradual memory growth only on windows.\n\nThat is what lead me to believe the leak may be present in postgresql. I\nwill run under available memory tools on windows and get back to you.\n\nI will also try to create a sample and see if I can reproduce the problem.\n\nThanks,\nRajesh\n\nOn Fri, 2 Aug 2024, 21:45 Ranier Vilela, <ranier.vf@gmail.com> wrote:\n\n> Em sex., 2 de ago. de 2024 às 11:54, Rajesh Kokkonda <\n> rajeshk.kokkonda@gmail.com> escreveu:\n>\n>> Okay. I will try to create one sample program and send it to you sometime\n>> next week. In the meantime, I am listing down all the methods we are\n>> consuming from libpq.\n>>\n>> PQconnectdbParams\n>> PQstatus\n>> PQerrorMessage\n>> PQpingParams\n>> PQfinish\n>> PQresultStatus\n>> PQclear\n>> PQsetSingleRowMode\n>> PQntuples\n>> PQnfields\n>> PQftype\n>> PQgetvalue\n>> PQgetlength\n>> PQgetisnull\n>> PQgetCancel\n>> PQfreeCancel\n>> PQcancel\n>> PQsetErrorVerbosity\n>> PQsendPrepare\n>> PQsendQueryPrepared\n>> PQgetResult\n>> PQconsumeInput\n>> PQisBusy\n>> PQsetnonblocking\n>> PQflush\n>> PQsocket\n>> PQtransactionStatus\n>> PQresultErrorField\n>>\n>> It is highly likely that the memory consumption is caused by your\n> application.\n> Perhaps due to the lack of freeing up the resources used by the library.\n> You can try using this tool, to find out the root cause.\n>\n> https://drmemory.org/\n>\n> best regards,\n> Ranier Vilela\n>\n\nWe did run our application under valgrind on Linux. We did not see any leaks. There is no platform dependent code in our application. We are seeing gradual memory growth only on windows.That is what lead me to believe the leak may be present in postgresql. I will run under available memory tools on windows and get back to you.I will also try to create a sample and see if I can reproduce the problem.Thanks,Rajesh On Fri, 2 Aug 2024, 21:45 Ranier Vilela, <ranier.vf@gmail.com> wrote:Em sex., 2 de ago. de 2024 às 11:54, Rajesh Kokkonda <rajeshk.kokkonda@gmail.com> escreveu:Okay. I will try to create one sample program and send it to you sometime next week. In the meantime, I am listing down all the methods we are consuming from libpq.PQconnectdbParamsPQstatusPQerrorMessagePQpingParamsPQfinishPQresultStatusPQclearPQsetSingleRowModePQntuplesPQnfieldsPQftypePQgetvaluePQgetlengthPQgetisnullPQgetCancelPQfreeCancelPQcancelPQsetErrorVerbosityPQsendPreparePQsendQueryPreparedPQgetResultPQconsumeInputPQisBusyPQsetnonblockingPQflushPQsocketPQtransactionStatusPQresultErrorFieldIt is highly likely that the memory consumption is caused by your application.Perhaps due to the lack of freeing up the resources used by the library.You can try using this tool, to find out the root cause.https://drmemory.org/best regards,Ranier Vilela",
"msg_date": "Fri, 2 Aug 2024 22:19:55 +0530",
"msg_from": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "I ran a trial version of a memory leak detector called Deleaker on windows\nand found some modules that are listed as having leaks. I ran the program\non Linux under valgrind and I do not see any leaks reported there. I have\nattached the reported leaks on windows as windows_leaks.txt and valgrind\nsummary report as valgrind.txt.\n\nI am working on generating a trimmed down version of the sample program to\nshare with you. Let me know if you have any questions.\n\nThanks,\nRajesh\n\nOn Fri, Aug 2, 2024 at 10:19 PM Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>\nwrote:\n\n> We did run our application under valgrind on Linux. We did not see any\n> leaks. There is no platform dependent code in our application. We are\n> seeing gradual memory growth only on windows.\n>\n> That is what lead me to believe the leak may be present in postgresql. I\n> will run under available memory tools on windows and get back to you.\n>\n> I will also try to create a sample and see if I can reproduce the problem.\n>\n> Thanks,\n> Rajesh\n>\n> On Fri, 2 Aug 2024, 21:45 Ranier Vilela, <ranier.vf@gmail.com> wrote:\n>\n>> Em sex., 2 de ago. de 2024 às 11:54, Rajesh Kokkonda <\n>> rajeshk.kokkonda@gmail.com> escreveu:\n>>\n>>> Okay. I will try to create one sample program and send it to you\n>>> sometime next week. In the meantime, I am listing down all the methods we\n>>> are consuming from libpq.\n>>>\n>>> PQconnectdbParams\n>>> PQstatus\n>>> PQerrorMessage\n>>> PQpingParams\n>>> PQfinish\n>>> PQresultStatus\n>>> PQclear\n>>> PQsetSingleRowMode\n>>> PQntuples\n>>> PQnfields\n>>> PQftype\n>>> PQgetvalue\n>>> PQgetlength\n>>> PQgetisnull\n>>> PQgetCancel\n>>> PQfreeCancel\n>>> PQcancel\n>>> PQsetErrorVerbosity\n>>> PQsendPrepare\n>>> PQsendQueryPrepared\n>>> PQgetResult\n>>> PQconsumeInput\n>>> PQisBusy\n>>> PQsetnonblocking\n>>> PQflush\n>>> PQsocket\n>>> PQtransactionStatus\n>>> PQresultErrorField\n>>>\n>>> It is highly likely that the memory consumption is caused by your\n>> application.\n>> Perhaps due to the lack of freeing up the resources used by the library.\n>> You can try using this tool, to find out the root cause.\n>>\n>> https://drmemory.org/\n>>\n>> best regards,\n>> Ranier Vilela\n>>\n>",
"msg_date": "Tue, 6 Aug 2024 14:03:32 +0530",
"msg_from": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "I attached the image from the utility showing the leaks.\n\nOn Tue, Aug 6, 2024 at 2:03 PM Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>\nwrote:\n\n> I ran a trial version of a memory leak detector called Deleaker on windows\n> and found some modules that are listed as having leaks. I ran the program\n> on Linux under valgrind and I do not see any leaks reported there. I have\n> attached the reported leaks on windows as windows_leaks.txt and valgrind\n> summary report as valgrind.txt.\n>\n> I am working on generating a trimmed down version of the sample program to\n> share with you. Let me know if you have any questions.\n>\n> Thanks,\n> Rajesh\n>\n> On Fri, Aug 2, 2024 at 10:19 PM Rajesh Kokkonda <\n> rajeshk.kokkonda@gmail.com> wrote:\n>\n>> We did run our application under valgrind on Linux. We did not see any\n>> leaks. There is no platform dependent code in our application. We are\n>> seeing gradual memory growth only on windows.\n>>\n>> That is what lead me to believe the leak may be present in postgresql. I\n>> will run under available memory tools on windows and get back to you.\n>>\n>> I will also try to create a sample and see if I can reproduce the problem.\n>>\n>> Thanks,\n>> Rajesh\n>>\n>> On Fri, 2 Aug 2024, 21:45 Ranier Vilela, <ranier.vf@gmail.com> wrote:\n>>\n>>> Em sex., 2 de ago. de 2024 às 11:54, Rajesh Kokkonda <\n>>> rajeshk.kokkonda@gmail.com> escreveu:\n>>>\n>>>> Okay. I will try to create one sample program and send it to you\n>>>> sometime next week. In the meantime, I am listing down all the methods we\n>>>> are consuming from libpq.\n>>>>\n>>>> PQconnectdbParams\n>>>> PQstatus\n>>>> PQerrorMessage\n>>>> PQpingParams\n>>>> PQfinish\n>>>> PQresultStatus\n>>>> PQclear\n>>>> PQsetSingleRowMode\n>>>> PQntuples\n>>>> PQnfields\n>>>> PQftype\n>>>> PQgetvalue\n>>>> PQgetlength\n>>>> PQgetisnull\n>>>> PQgetCancel\n>>>> PQfreeCancel\n>>>> PQcancel\n>>>> PQsetErrorVerbosity\n>>>> PQsendPrepare\n>>>> PQsendQueryPrepared\n>>>> PQgetResult\n>>>> PQconsumeInput\n>>>> PQisBusy\n>>>> PQsetnonblocking\n>>>> PQflush\n>>>> PQsocket\n>>>> PQtransactionStatus\n>>>> PQresultErrorField\n>>>>\n>>>> It is highly likely that the memory consumption is caused by your\n>>> application.\n>>> Perhaps due to the lack of freeing up the resources used by the library.\n>>> You can try using this tool, to find out the root cause.\n>>>\n>>> https://drmemory.org/\n>>>\n>>> best regards,\n>>> Ranier Vilela\n>>>\n>>",
"msg_date": "Tue, 6 Aug 2024 14:06:18 +0530",
"msg_from": "Rajesh Kokkonda <rajeshk.kokkonda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
},
{
"msg_contents": "Em ter., 6 de ago. de 2024 às 05:33, Rajesh Kokkonda <\nrajeshk.kokkonda@gmail.com> escreveu:\n\n> I ran a trial version of a memory leak detector called Deleaker on windows\n> and found some modules that are listed as having leaks. I ran the program\n> on Linux under valgrind and I do not see any leaks reported there. I have\n> attached the reported leaks on windows as windows_leaks.txt and valgrind\n> summary report as valgrind.txt.\n>\nNone of these sources are Postgres.\n\nhttps://www.gnu.org/software/libiconv/\nhttps://gnuwin32.sourceforge.net/packages/libintl.htm\n\nbest regards,\nRanier Vilela\n\nEm ter., 6 de ago. de 2024 às 05:33, Rajesh Kokkonda <rajeshk.kokkonda@gmail.com> escreveu:I ran a trial version of a memory leak detector called Deleaker on windows and found some modules that are listed as having leaks. I ran the program on Linux under valgrind and I do not see any leaks reported there. I have attached the reported leaks on windows as windows_leaks.txt and valgrind summary report as valgrind.txt.None of these sources are Postgres.https://www.gnu.org/software/libiconv/https://gnuwin32.sourceforge.net/packages/libintl.htmbest regards,Ranier Vilela",
"msg_date": "Tue, 6 Aug 2024 08:23:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory growth observed with C++ application consuming libpq.dll\n on Windows"
}
] |
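Editor's note on the thread above: a minimal, self-contained sketch of the kind of sample program the thread asks for, built from the libpq calls Rajesh lists. It is not the poster's application; the connection parameters and query are hypothetical placeholders, and it uses PQsendQueryParams for brevity where the real application uses PQsendPrepare/PQsendQueryPrepared -- the result-clearing discipline is the same either way. The point it illustrates is that in single-row mode PQgetResult returns one PGresult per row plus a final status result, and every one of them must be passed to PQclear, which is the usual source of client-side memory growth.

/*
 * Repeatedly runs a query in single-row mode and clears every result.
 * If the PQclear inside the inner loop is removed, the process grows on
 * every platform; libpq itself only frees its own buffers at PQfinish.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    const char *const keys[] = {"dbname", NULL};
    const char *const vals[] = {"postgres", NULL};   /* placeholder */
    PGconn *conn = PQconnectdbParams(keys, vals, 0);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    for (int i = 0; i < 1000; i++)
    {
        if (!PQsendQueryParams(conn, "SELECT generate_series(1, 10000)",
                               0, NULL, NULL, NULL, NULL, 0))
        {
            fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
            break;
        }
        if (!PQsetSingleRowMode(conn))
            fprintf(stderr, "could not enter single-row mode\n");

        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL)
        {
            ExecStatusType st = PQresultStatus(res);
            if (st == PGRES_SINGLE_TUPLE)
            {
                /* PQgetvalue points into res, so use or copy the value
                 * before clearing the result */
                (void) PQgetvalue(res, 0, 0);
            }
            else if (st != PGRES_TUPLES_OK)
                fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res);       /* required for every result, every row */
        }
    }

    PQfinish(conn);             /* releases the connection's own memory */
    return 0;
}

Running a loop like this under Valgrind (or Dr. Memory on Windows, as suggested in the thread) with and without the inner PQclear is a quick way to tell a client-side leak from one inside libpq.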
[
{
"msg_contents": "While working on the grouping sets patches for queries with GROUP BY\nitems that are constants, I noticed $subject on master. As an\nexample, consider\n\nprepare q1(int) as\nselect $1 as c1, $1 as c2 from generate_series(1,2) t group by rollup(c1);\n\nset plan_cache_mode to force_custom_plan;\nexecute q1(3);\n c1 | c2\n----+----\n 3 | 3\n | 3\n(2 rows)\n\nset plan_cache_mode to force_generic_plan;\nexecute q1(3);\n c1 | c2\n----+----\n 3 | 3\n |\n(2 rows)\n\nThe reason can be seen in the plans under different modes.\n\n-- force_custom_plan\nexplain (verbose, costs off) execute q1(3);\n QUERY PLAN\n-----------------------------------------------------\n GroupAggregate\n Output: (3), 3\n Group Key: 3\n Group Key: ()\n -> Function Scan on pg_catalog.generate_series t\n Output: 3\n Function Call: generate_series(1, 2)\n(7 rows)\n\n-- force_generic_plan\nexplain (verbose, costs off) execute q1(3);\n QUERY PLAN\n-----------------------------------------------------\n GroupAggregate\n Output: ($1), ($1)\n Group Key: $1\n Group Key: ()\n -> Function Scan on pg_catalog.generate_series t\n Output: $1\n Function Call: generate_series(1, 2)\n(7 rows)\n\nIn custom mode, the target entry 'c2' is a Const expression, and\nsetrefs.c does not replace it with an OUTER_VAR, despite there happens\nto be an identical Const below. As a result, when this OUTER_VAR goes\nto NULL due to the grouping sets, 'c2' remains as constant 3. Look at\nthis code in search_indexed_tlist_for_non_var:\n\n/*\n * If it's a simple Const, replacing it with a Var is silly, even if there\n * happens to be an identical Const below; a Var is more expensive to\n * execute than a Const. What's more, replacing it could confuse some\n * places in the executor that expect to see simple Consts for, eg,\n * dropped columns.\n */\nif (IsA(node, Const))\n return NULL;\n\nIn generic mode, the target entry 'c2' is a Param expression, and is\nreplaced with the OUTER_VAR (indicated by the parentheses around the\nsecond '$1'). So it goes to NULL when we're grouping by the set that\ndoes not contain this Var.\n\nIs this inconsistent behavior in different plan cache modes expected,\nor does it indicate a bug that needs to be fixed?\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 1 Aug 2024 16:44:22 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Query results vary depending on the plan cache mode used"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> While working on the grouping sets patches for queries with GROUP BY\n> items that are constants, I noticed $subject on master. As an\n> example, consider\n\nThis particular example seems like it's just an illustration of\nour known bugs with grouping sets using non-Var columns. I see\nno reason to blame the plan cache for it. Do you have any\nexamples that don't depend on such a bug?\n\n(And yes, I apologize for not yet having reviewed your patch\nto fix the grouping-sets bug.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2024 10:34:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query results vary depending on the plan cache mode used"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 10:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > While working on the grouping sets patches for queries with GROUP BY\n> > items that are constants, I noticed $subject on master. As an\n> > example, consider\n>\n> This particular example seems like it's just an illustration of\n> our known bugs with grouping sets using non-Var columns. I see\n> no reason to blame the plan cache for it. Do you have any\n> examples that don't depend on such a bug?\n\nYeah, it's not the fault of the plan cache. I noticed this because in\ncheck_ungrouped_columns, both Const and Param are treated as always\nacceptable. However, in setrefs.c these two expression types are\nhandled differently: Const is never matched to the lower tlist, while\nParam is always attempted to be matched to the lower tlist. This\ndiscrepancy causes the query in the example to behave differently\ndepending on the plan cache mode.\n\nI'm unsure which behavior is the expected one.\n\n\nAnother related question I have is about the behavior of the following\nquery:\n\nselect 3 as c1, 3 as c2 from generate_series(1,2) t group by\nrollup(c1) having 3 = 3;\n\nShould the target entry 'c2', as well as the Const expressions in\nhavingQual, be replaced with references to the grouping key?\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 2 Aug 2024 09:27:59 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query results vary depending on the plan cache mode used"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Yeah, it's not the fault of the plan cache. I noticed this because in\n> check_ungrouped_columns, both Const and Param are treated as always\n> acceptable. However, in setrefs.c these two expression types are\n> handled differently: Const is never matched to the lower tlist, while\n> Param is always attempted to be matched to the lower tlist. This\n> discrepancy causes the query in the example to behave differently\n> depending on the plan cache mode.\n\n> I'm unsure which behavior is the expected one.\n\nI'm unsure why you think they need to have the same behavior.\n\nThis code is correct given the assumption that a Const is as constant\nas it seems on the surface. The trouble we're hitting is that in\nthe presence of grouping-set keys that reduce to Consts, a reference\nto such a key is *not* constant. The right answer to that IMO is to\nnot represent those references as plain Const nodes.\n\n> Another related question I have is about the behavior of the following\n> query:\n\n> select 3 as c1, 3 as c2 from generate_series(1,2) t group by\n> rollup(c1) having 3 = 3;\n\n> Should the target entry 'c2', as well as the Const expressions in\n> havingQual, be replaced with references to the grouping key?\n\nThis seems rather analogous to our occasional debates about whether\ntextually distinct uses of \"random()\" should refer to the same\nvalue or different evaluations. I can't get very excited about it,\nbecause it seems to me this is an academic example not corresponding\nto any real use-case. The current code seems to effectively assume\nthat the constant-instances are distinct even though textually\nidentical, and that's fine with me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 00:00:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query results vary depending on the plan cache mode used"
}
] |
[
{
"msg_contents": "Stack Overflow 2024 developer survey[1] said VSCode\nis the most used development environment.\n\nIn a PostgreSQL Hacker Mentoring discussion, we talked\nabout how to use vscode to debug and running postgres,\nAndrey(ccd) has tons of tips for new developers, and\nI post my daily used devcontainer config[2] , Jelte(ccd)\nsuggested that it might be a good idea we integrate the\nconfig into postgres repo so that the barrier to entry for\nnew developers will be much lower.\n\n**Note**\n\nThis is not intended to change the workflow of experienced\nhackers, it is just hoping to make the life easier for\nbeginner developers.\n\n**How to use**\n\nOpen VSCode Command Palette(cmd/ctrl + shift + p),\nsearch devcontainer, then choose something like\n`Dev containers: Rebuild and Reopen in Container`, you are\ngood to go.\n\n**About the patch**\n\ndevcontainer.json:\n\nThe .devcontainer/devcontainer.json is the entry point for\nVSCode to *open folder to develop in a container*, it will build\nthe docker image for the first time you open in container,\nthis will take some time.\n\nThere are some parameters(runArgs) for running the container,\nwe need some settings and privileges to run perf or generate\ncore dumps.\n\nIt has a mount point mapping the hosts $HOME/freedom\nto container's /opt/freedom, I chose the name *freedom*\nbecause the container is kind of a jail.\n\nIt also installed some vscode extensions and did some\ncustomizations.\n\nAfter diving into the container, the postCreateCommand.sh\nwill be automatically called, it will do some configurations\nlike git, perf, .vscode, core_pattern, etc. It also downloads\nmichaelpq's pg_plugins and FlameGraph.\n\nDockerfile:\n\nIt is based on debian bookworm, it installed dependencies\nto build postgres, also IPC::Run to run TAP tests I guess.\n\nIt also has a .gdbinit to break elog.c:errfinish for elevel 21,\nthis will make the debugging easier why error is logged.\n\ngdbpg.py is adapted from https://github.com/tvondra/gdbpg,\nI think putting it here will make it evolve as time goes.\n\ntasks.json:\n\nThis is kind of like a bookkeeping for developers, it has\nthe following commands:\n\n- meson debug setup\n- meson release setup\n- ninja build\n- regression tests\n- ninja install\n- init cluster\n- start cluster\n- stop cluster\n- install pg_bsd_indent\n- pgindent\n- apply patch\n- generate flamegraph\n\nlaunch.json:\n\nIt has one configuration that makes it possible to attach\nto one process(e.g. postgres backend) and debug\nwith vscode.\n\nPFA and please give it a try if you are a VSCode user.\n\n[1]: https://survey.stackoverflow.co/2024/technology#1-integrated-development-environment\n[2]: https://github.com/atomicdb/devcontainer/tree/main/postgres\n\n--\nRegards\nJunwang Zhao",
"msg_date": "Thu, 1 Aug 2024 22:56:40 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Official devcontainer config"
},
{
"msg_contents": "On Thu, 1 Aug 2024 at 16:56, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> I post my daily used devcontainer config[2] , Jelte(ccd)\n> suggested that it might be a good idea we integrate the\n> config into postgres repo so that the barrier to entry for\n> new developers will be much lower.\n\nIn my experience adding a devcontainer config has definitely made it\neasier for people to contribute to Citus. So thank you for working on\nthis! This is not a full review, but an initial pass.\n\n> After diving into the container, the postCreateCommand.sh\n> will be automatically called, it will do some configurations\n> like git, perf, .vscode, core_pattern, etc. It also downloads\n> michaelpq's pg_plugins and FlameGraph.\n\nI think the .git settings don't fit well here, they are mostly aliases\nwhich are very much based on personal preference and not related to\nPostgres development. It seems better recommend users to use the\ndevcontainer dotfiles support for this:\nhttps://code.visualstudio.com/docs/devcontainers/containers#_personalizing-with-dotfile-repositories\n\n> - pgindent\n\nIt might make sense to install Tristan (ccd) his Postgres Hacker\nextension for vscode to make this a bit more userfriendly:\nhttps://marketplace.visualstudio.com/items?itemName=tristan957.postgres-hacker\n\n\n",
"msg_date": "Thu, 1 Aug 2024 18:29:19 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "\nOn 2024-08-01 Th 10:56 AM, Junwang Zhao wrote:\n> Stack Overflow 2024 developer survey[1] said VSCode\n> is the most used development environment.\n>\n> In a PostgreSQL Hacker Mentoring discussion, we talked\n> about how to use vscode to debug and running postgres,\n> Andrey(ccd) has tons of tips for new developers, and\n> I post my daily used devcontainer config[2] , Jelte(ccd)\n> suggested that it might be a good idea we integrate the\n> config into postgres repo so that the barrier to entry for\n> new developers will be much lower.\n>\n> **Note**\n>\n> This is not intended to change the workflow of experienced\n> hackers, it is just hoping to make the life easier for\n> beginner developers.\n>\n> **How to use**\n>\n> Open VSCode Command Palette(cmd/ctrl + shift + p),\n> search devcontainer, then choose something like\n> `Dev containers: Rebuild and Reopen in Container`, you are\n> good to go.\n>\n> **About the patch**\n>\n> devcontainer.json:\n>\n> The .devcontainer/devcontainer.json is the entry point for\n> VSCode to *open folder to develop in a container*, it will build\n> the docker image for the first time you open in container,\n> this will take some time.\n>\n> There are some parameters(runArgs) for running the container,\n> we need some settings and privileges to run perf or generate\n> core dumps.\n>\n> It has a mount point mapping the hosts $HOME/freedom\n> to container's /opt/freedom, I chose the name *freedom*\n> because the container is kind of a jail.\n>\n> It also installed some vscode extensions and did some\n> customizations.\n>\n> After diving into the container, the postCreateCommand.sh\n> will be automatically called, it will do some configurations\n> like git, perf, .vscode, core_pattern, etc. It also downloads\n> michaelpq's pg_plugins and FlameGraph.\n>\n> Dockerfile:\n>\n> It is based on debian bookworm, it installed dependencies\n> to build postgres, also IPC::Run to run TAP tests I guess.\n>\n> It also has a .gdbinit to break elog.c:errfinish for elevel 21,\n> this will make the debugging easier why error is logged.\n>\n> gdbpg.py is adapted from https://github.com/tvondra/gdbpg,\n> I think putting it here will make it evolve as time goes.\n>\n> tasks.json:\n>\n> This is kind of like a bookkeeping for developers, it has\n> the following commands:\n>\n> - meson debug setup\n> - meson release setup\n> - ninja build\n> - regression tests\n> - ninja install\n> - init cluster\n> - start cluster\n> - stop cluster\n> - install pg_bsd_indent\n> - pgindent\n> - apply patch\n> - generate flamegraph\n>\n> launch.json:\n>\n> It has one configuration that makes it possible to attach\n> to one process(e.g. postgres backend) and debug\n> with vscode.\n>\n> PFA and please give it a try if you are a VSCode user.\n>\n> [1]: https://survey.stackoverflow.co/2024/technology#1-integrated-development-environment\n> [2]: https://github.com/atomicdb/devcontainer/tree/main/postgres\n>\n\nNot totally opposed, and I will probably give it a try very soon, but \nI'm wondering if this really needs to go in the core repo. We've \ngenerally shied away from doing much in the way of editor / devenv \nsupport, trying to be fairly agnostic. It's true we carry .dir-locals.el \nand .editorconfig, so that's not entirely true, but those are really \njust about supporting our indentation etc. standards.\n\nAlso, might it not be better for this to be carried in a separate repo \nmaintained by people using vscode? I don't know how may committers do. 
\nMaybe lots do, and I'm just a dinosaur. If vscode really needs \n.devcontainer to live in the code root, maybe it could be symlinked. \nAnother reason not carry the code ourselves is that it will make it less \nconvenient for people who want to customize it.\n\nWithout having tried it, looks like a nice effort though. Well done.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 1 Aug 2024 17:38:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On 01.08.24 23:38, Andrew Dunstan wrote:\n> Not totally opposed, and I will probably give it a try very soon, but \n> I'm wondering if this really needs to go in the core repo. We've \n> generally shied away from doing much in the way of editor / devenv \n> support, trying to be fairly agnostic. It's true we carry .dir-locals.el \n> and .editorconfig, so that's not entirely true, but those are really \n> just about supporting our indentation etc. standards.\n\nYeah, the editor support in the tree ought to be minimal and factual, \nbased on coding standards and widely recognized best practices, not a \ncollection of one person's favorite aliases and scripts. If the scripts \nare good, let's look at them and maybe put them under src/tools/ for \neveryone to use. But a lot of this looks like it will requite active \nmaintenance if output formats or node formats or build targets etc. \nchange. And other things require specific local paths. That's fine for \na local script or something, but not for a mainline tool that the \ncommunity will need to maintain.\n\nI suggest to start with a very minimal configuration. What are the \nsettings that absolute everyone will need, maybe to set indentation \nstyle or something.\n\n\n\n",
"msg_date": "Fri, 2 Aug 2024 20:45:20 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "\nOn 2024-08-02 Fr 2:45 PM, Peter Eisentraut wrote:\n> On 01.08.24 23:38, Andrew Dunstan wrote:\n>> Not totally opposed, and I will probably give it a try very soon, but \n>> I'm wondering if this really needs to go in the core repo. We've \n>> generally shied away from doing much in the way of editor / devenv \n>> support, trying to be fairly agnostic. It's true we carry \n>> .dir-locals.el and .editorconfig, so that's not entirely true, but \n>> those are really just about supporting our indentation etc. standards.\n>\n> Yeah, the editor support in the tree ought to be minimal and factual, \n> based on coding standards and widely recognized best practices, not a \n> collection of one person's favorite aliases and scripts. If the \n> scripts are good, let's look at them and maybe put them under \n> src/tools/ for everyone to use. But a lot of this looks like it will \n> requite active maintenance if output formats or node formats or build \n> targets etc. change. And other things require specific local paths. \n> That's fine for a local script or something, but not for a mainline \n> tool that the community will need to maintain.\n>\n> I suggest to start with a very minimal configuration. What are the \n> settings that absolute everyone will need, maybe to set indentation \n> style or something.\n>\n\nI believe you can get VS Code to support editorconfig, so from that POV \nmaybe we don't need to do anything.\n\nI did try yesterday with the code from the OP's patch symlinked into my \nrepo, but got an error with the Docker build, which kinda reinforces \nyour point.\n\nYour point about \"one person's preferences\" is well taken - some of the \ngit aliases supplied clash with mine.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 3 Aug 2024 07:30:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 12:29 AM Jelte Fennema-Nio <postgres@jeltef.nl> wrote:\n>\n> On Thu, 1 Aug 2024 at 16:56, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > I post my daily used devcontainer config[2] , Jelte(ccd)\n> > suggested that it might be a good idea we integrate the\n> > config into postgres repo so that the barrier to entry for\n> > new developers will be much lower.\n>\n> In my experience adding a devcontainer config has definitely made it\n> easier for people to contribute to Citus. So thank you for working on\n> this! This is not a full review, but an initial pass.\n>\n> > After diving into the container, the postCreateCommand.sh\n> > will be automatically called, it will do some configurations\n> > like git, perf, .vscode, core_pattern, etc. It also downloads\n> > michaelpq's pg_plugins and FlameGraph.\n>\n> I think the .git settings don't fit well here, they are mostly aliases\n> which are very much based on personal preference and not related to\n> Postgres development. It seems better recommend users to use the\n> devcontainer dotfiles support for this:\n> https://code.visualstudio.com/docs/devcontainers/containers#_personalizing-with-dotfile-repositories\n>\n> > - pgindent\n>\n> It might make sense to install Tristan (ccd) his Postgres Hacker\n> extension for vscode to make this a bit more userfriendly:\n> https://marketplace.visualstudio.com/items?itemName=tristan957.postgres-hacker\n\nGood to know, I will try this later.\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 4 Aug 2024 09:59:45 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 5:38 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-08-01 Th 10:56 AM, Junwang Zhao wrote:\n> > Stack Overflow 2024 developer survey[1] said VSCode\n> > is the most used development environment.\n> >\n> > In a PostgreSQL Hacker Mentoring discussion, we talked\n> > about how to use vscode to debug and running postgres,\n> > Andrey(ccd) has tons of tips for new developers, and\n> > I post my daily used devcontainer config[2] , Jelte(ccd)\n> > suggested that it might be a good idea we integrate the\n> > config into postgres repo so that the barrier to entry for\n> > new developers will be much lower.\n> >\n> > **Note**\n> >\n> > This is not intended to change the workflow of experienced\n> > hackers, it is just hoping to make the life easier for\n> > beginner developers.\n> >\n> > **How to use**\n> >\n> > Open VSCode Command Palette(cmd/ctrl + shift + p),\n> > search devcontainer, then choose something like\n> > `Dev containers: Rebuild and Reopen in Container`, you are\n> > good to go.\n> >\n> > **About the patch**\n> >\n> > devcontainer.json:\n> >\n> > The .devcontainer/devcontainer.json is the entry point for\n> > VSCode to *open folder to develop in a container*, it will build\n> > the docker image for the first time you open in container,\n> > this will take some time.\n> >\n> > There are some parameters(runArgs) for running the container,\n> > we need some settings and privileges to run perf or generate\n> > core dumps.\n> >\n> > It has a mount point mapping the hosts $HOME/freedom\n> > to container's /opt/freedom, I chose the name *freedom*\n> > because the container is kind of a jail.\n> >\n> > It also installed some vscode extensions and did some\n> > customizations.\n> >\n> > After diving into the container, the postCreateCommand.sh\n> > will be automatically called, it will do some configurations\n> > like git, perf, .vscode, core_pattern, etc. It also downloads\n> > michaelpq's pg_plugins and FlameGraph.\n> >\n> > Dockerfile:\n> >\n> > It is based on debian bookworm, it installed dependencies\n> > to build postgres, also IPC::Run to run TAP tests I guess.\n> >\n> > It also has a .gdbinit to break elog.c:errfinish for elevel 21,\n> > this will make the debugging easier why error is logged.\n> >\n> > gdbpg.py is adapted from https://github.com/tvondra/gdbpg,\n> > I think putting it here will make it evolve as time goes.\n> >\n> > tasks.json:\n> >\n> > This is kind of like a bookkeeping for developers, it has\n> > the following commands:\n> >\n> > - meson debug setup\n> > - meson release setup\n> > - ninja build\n> > - regression tests\n> > - ninja install\n> > - init cluster\n> > - start cluster\n> > - stop cluster\n> > - install pg_bsd_indent\n> > - pgindent\n> > - apply patch\n> > - generate flamegraph\n> >\n> > launch.json:\n> >\n> > It has one configuration that makes it possible to attach\n> > to one process(e.g. postgres backend) and debug\n> > with vscode.\n> >\n> > PFA and please give it a try if you are a VSCode user.\n> >\n> > [1]: https://survey.stackoverflow.co/2024/technology#1-integrated-development-environment\n> > [2]: https://github.com/atomicdb/devcontainer/tree/main/postgres\n> >\n>\n> Not totally opposed, and I will probably give it a try very soon, but\n> I'm wondering if this really needs to go in the core repo. We've\n> generally shied away from doing much in the way of editor / devenv\n> support, trying to be fairly agnostic. 
It's true we carry .dir-locals.el\n> and .editorconfig, so that's not entirely true, but those are really\n> just about supporting our indentation etc. standards.\n>\n> Also, might it not be better for this to be carried in a separate repo\n> maintained by people using vscode? I don't know how may committers do.\n> Maybe lots do, and I'm just a dinosaur. If vscode really needs\n> .devcontainer to live in the code root, maybe it could be symlinked.\n> Another reason not carry the code ourselves is that it will make it less\n> convenient for people who want to customize it.\n\nAgree, if finally we make this into a separate repo, that can also\nbenefit new developers.\n\n>\n> Without having tried it, looks like a nice effort though. Well done.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 4 Aug 2024 10:02:55 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On Sat, Aug 3, 2024 at 2:45 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 01.08.24 23:38, Andrew Dunstan wrote:\n> > Not totally opposed, and I will probably give it a try very soon, but\n> > I'm wondering if this really needs to go in the core repo. We've\n> > generally shied away from doing much in the way of editor / devenv\n> > support, trying to be fairly agnostic. It's true we carry .dir-locals.el\n> > and .editorconfig, so that's not entirely true, but those are really\n> > just about supporting our indentation etc. standards.\n>\n> Yeah, the editor support in the tree ought to be minimal and factual,\n> based on coding standards and widely recognized best practices, not a\n> collection of one person's favorite aliases and scripts. If the scripts\n> are good, let's look at them and maybe put them under src/tools/ for\n> everyone to use. But a lot of this looks like it will requite active\n> maintenance if output formats or node formats or build targets etc.\n> change. And other things require specific local paths. That's fine for\n> a local script or something, but not for a mainline tool that the\n> community will need to maintain.\n\nYeah, personal favorite aliases and scripts are not good, that\nalso concerns me, I will delete those parts in future patches.\n\n>\n> I suggest to start with a very minimal configuration. What are the\n> settings that absolute everyone will need, maybe to set indentation\n> style or something.\n>\n\nYeah, reasonable, I will discuss it with Andrey after he tries .devcontainer.\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 4 Aug 2024 10:07:40 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On Sat, Aug 3, 2024 at 7:30 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-08-02 Fr 2:45 PM, Peter Eisentraut wrote:\n> > On 01.08.24 23:38, Andrew Dunstan wrote:\n> >> Not totally opposed, and I will probably give it a try very soon, but\n> >> I'm wondering if this really needs to go in the core repo. We've\n> >> generally shied away from doing much in the way of editor / devenv\n> >> support, trying to be fairly agnostic. It's true we carry\n> >> .dir-locals.el and .editorconfig, so that's not entirely true, but\n> >> those are really just about supporting our indentation etc. standards.\n> >\n> > Yeah, the editor support in the tree ought to be minimal and factual,\n> > based on coding standards and widely recognized best practices, not a\n> > collection of one person's favorite aliases and scripts. If the\n> > scripts are good, let's look at them and maybe put them under\n> > src/tools/ for everyone to use. But a lot of this looks like it will\n> > requite active maintenance if output formats or node formats or build\n> > targets etc. change. And other things require specific local paths.\n> > That's fine for a local script or something, but not for a mainline\n> > tool that the community will need to maintain.\n> >\n> > I suggest to start with a very minimal configuration. What are the\n> > settings that absolute everyone will need, maybe to set indentation\n> > style or something.\n> >\n>\n> I believe you can get VS Code to support editorconfig, so from that POV\n> maybe we don't need to do anything.\n>\n> I did try yesterday with the code from the OP's patch symlinked into my\n> repo, but got an error with the Docker build, which kinda reinforces\n> your point.\n\nThe reason symlink does not work is that configure_vscode needs to copy\nlaunch.json and tasks.json into .vscode, it has to be in the\nWORKDIR/.devcontainer.\n\n>\n> Your point about \"one person's preferences\" is well taken - some of the\n> git aliases supplied clash with mine.\n>\n\nYeah, I will remove that.\n\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 4 Aug 2024 10:13:49 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "\nOn 2024-08-03 Sa 10:13 PM, Junwang Zhao wrote:\n> On Sat, Aug 3, 2024 at 7:30 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 2024-08-02 Fr 2:45 PM, Peter Eisentraut wrote:\n>>> On 01.08.24 23:38, Andrew Dunstan wrote:\n>>>> Not totally opposed, and I will probably give it a try very soon, but\n>>>> I'm wondering if this really needs to go in the core repo. We've\n>>>> generally shied away from doing much in the way of editor / devenv\n>>>> support, trying to be fairly agnostic. It's true we carry\n>>>> .dir-locals.el and .editorconfig, so that's not entirely true, but\n>>>> those are really just about supporting our indentation etc. standards.\n>>> Yeah, the editor support in the tree ought to be minimal and factual,\n>>> based on coding standards and widely recognized best practices, not a\n>>> collection of one person's favorite aliases and scripts. If the\n>>> scripts are good, let's look at them and maybe put them under\n>>> src/tools/ for everyone to use. But a lot of this looks like it will\n>>> requite active maintenance if output formats or node formats or build\n>>> targets etc. change. And other things require specific local paths.\n>>> That's fine for a local script or something, but not for a mainline\n>>> tool that the community will need to maintain.\n>>>\n>>> I suggest to start with a very minimal configuration. What are the\n>>> settings that absolute everyone will need, maybe to set indentation\n>>> style or something.\n>>>\n>> I believe you can get VS Code to support editorconfig, so from that POV\n>> maybe we don't need to do anything.\n>>\n>> I did try yesterday with the code from the OP's patch symlinked into my\n>> repo, but got an error with the Docker build, which kinda reinforces\n>> your point.\n> The reason symlink does not work is that configure_vscode needs to copy\n> launch.json and tasks.json into .vscode, it has to be in the\n> WORKDIR/.devcontainer.\n\n\nThat's kind of awful. Anyway, I think we don't need to do anything about \nignoring those. The user should simply add entries for them to \n.git/info/exclude or their local global exclude file (I have \ncore.excludesfile = /home/andrew/.gitignore set.)\n\nI was eventually able to get it to work without using a symlink.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 4 Aug 2024 10:12:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "Hi hackers,\n\nOn Sun, Aug 4, 2024 at 10:12 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2024-08-03 Sa 10:13 PM, Junwang Zhao wrote:\n> > On Sat, Aug 3, 2024 at 7:30 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> On 2024-08-02 Fr 2:45 PM, Peter Eisentraut wrote:\n> >>> On 01.08.24 23:38, Andrew Dunstan wrote:\n> >>>> Not totally opposed, and I will probably give it a try very soon, but\n> >>>> I'm wondering if this really needs to go in the core repo. We've\n> >>>> generally shied away from doing much in the way of editor / devenv\n> >>>> support, trying to be fairly agnostic. It's true we carry\n> >>>> .dir-locals.el and .editorconfig, so that's not entirely true, but\n> >>>> those are really just about supporting our indentation etc. standards.\n> >>> Yeah, the editor support in the tree ought to be minimal and factual,\n> >>> based on coding standards and widely recognized best practices, not a\n> >>> collection of one person's favorite aliases and scripts. If the\n> >>> scripts are good, let's look at them and maybe put them under\n> >>> src/tools/ for everyone to use. But a lot of this looks like it will\n> >>> requite active maintenance if output formats or node formats or build\n> >>> targets etc. change. And other things require specific local paths.\n> >>> That's fine for a local script or something, but not for a mainline\n> >>> tool that the community will need to maintain.\n> >>>\n> >>> I suggest to start with a very minimal configuration. What are the\n> >>> settings that absolute everyone will need, maybe to set indentation\n> >>> style or something.\n> >>>\n> >> I believe you can get VS Code to support editorconfig, so from that POV\n> >> maybe we don't need to do anything.\n> >>\n> >> I did try yesterday with the code from the OP's patch symlinked into my\n> >> repo, but got an error with the Docker build, which kinda reinforces\n> >> your point.\n> > The reason symlink does not work is that configure_vscode needs to copy\n> > launch.json and tasks.json into .vscode, it has to be in the\n> > WORKDIR/.devcontainer.\n>\n>\n> That's kind of awful. Anyway, I think we don't need to do anything about\n> ignoring those. The user should simply add entries for them to\n> .git/info/exclude or their local global exclude file (I have\n> core.excludesfile = /home/andrew/.gitignore set.)\n>\n> I was eventually able to get it to work without using a symlink.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n\nI did some work on the **devcontainer config** today, hoping this can\naddress some of the concerns raised on this thread.\n\n1. I created a separate git repo named **Unofficial devcontainer\nconfig for Postgres**[1].\n2. The usage is quite simple, clone the repo directly into postgres's tree root,\n and reopen in DevContainer (See REAME in [1] if you are interested).\n3. Using Tristan's vscode code extension postgres-hacker as the formatter,\n pgindent will be called on file save, I found this quite convenient.\n4. 
Remove the git settings, Dev container copies the global\n~/.gitconfig into the container\n by default, so normally it is not necessary to add extra git aliases.\n\nWhat I haven't addressed is that the repo still uses specific local\npaths, I think\nthis is ok since the code is not going into the core.\n\nOne thing I want to ask is, is there any objection to adding the\n.devcontainer and .vscode to the .gitignore file?\n\nThere are *.vcproj and pgsql.sln in .gitignore, so I guess it's ok to add\n.devcontainer and .vscode?\n\n[1]: https://github.com/pghacking/.devcontainer\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Sat, 24 Aug 2024 20:49:34 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On 24.08.24 14:49, Junwang Zhao wrote:\n> What I haven't addressed is that the repo still uses specific local\n> paths, I think\n> this is ok since the code is not going into the core.\n\nI'm not among the target users of this, but I imagine that that would \nsignificantly reduce the utility of this for everyone besides you?\n\n> One thing I want to ask is, is there any objection to adding the\n> .devcontainer and .vscode to the .gitignore file?\n\nThe standing policy is that files related to IDEs and editors should not \nbe in our .gitignore, but you can put them into your personal ignore \nfile somewhere.\n\n> There are *.vcproj and pgsql.sln in .gitignore, so I guess it's ok to add\n> .devcontainer and .vscode?\n\nThose are files generated by the build process, so it is appropriate to \nhave them there. But in fact, they should have been removed now that \nthe MSVC build system is done.\n\n\n\n",
"msg_date": "Sat, 24 Aug 2024 15:47:23 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Official devcontainer config"
},
{
"msg_contents": "On Sat, Aug 24, 2024 at 9:47 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 24.08.24 14:49, Junwang Zhao wrote:\n> > What I haven't addressed is that the repo still uses specific local\n> > paths, I think\n> > this is ok since the code is not going into the core.\n>\n> I'm not among the target users of this, but I imagine that that would\n> significantly reduce the utility of this for everyone besides you?\n\nYeah, the reason why I started this thread is that we(at least Jelta and I)\nthink it may make some potential new contributors' lives easier.\n\n>\n> > One thing I want to ask is, is there any objection to adding the\n> > .devcontainer and .vscode to the .gitignore file?\n>\n> The standing policy is that files related to IDEs and editors should not\n> be in our .gitignore, but you can put them into your personal ignore\n> file somewhere.\n\nSure, I can put them in global ignore file in various ways.\n\nI just saw the policy in the comment, so I'm ok with it.\n\n>\n> > There are *.vcproj and pgsql.sln in .gitignore, so I guess it's ok to add\n> > .devcontainer and .vscode?\n>\n> Those are files generated by the build process, so it is appropriate to\n> have them there. But in fact, they should have been removed now that\n> the MSVC build system is done.\n>\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sat, 24 Aug 2024 22:18:58 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Official devcontainer config"
}
] |
[
{
"msg_contents": "I've recently committed some optimizations for dumping sequences and\npg_class information (commits 68e9629, bd15b7d, and 2329cad), and I noticed\nthat we are also executing a query per function in pg_dump. Commit be85727\noptimized this by preparing the query ahead of time, but I found that we\ncan improve performance further by gathering all the relevant data in a\nsingle query. Here are the results I see for a database with 10k simple\nfunctions with and without the attached patch:\n\n\twith patch:\n\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.04s user 0.01s system 40% cpu 0.118 total\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.04s user 0.01s system 41% cpu 0.107 total\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.04s user 0.01s system 42% cpu 0.103 total\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.04s user 0.01s system 44% cpu 0.105 total\n\n\twithout patch:\n\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.05s user 0.03s system 32% cpu 0.253 total\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.05s user 0.03s system 32% cpu 0.252 total\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.06s user 0.03s system 32% cpu 0.251 total\n\t$ time pg_dump postgres >/dev/null\n\tpg_dump postgres > /dev/null 0.06s user 0.03s system 33% cpu 0.254 total\n\nThis one looks a little different than the sequence/pg_class commits. Much\nof the function information isn't terribly conducive to parsing into\nfixed-size variables in an array, so instead I've opted to just leave the\nPGresult around for reference by dumpFunc(). This patch also creates an\nordered array of function OIDs to speed up locating the relevant index in\nthe PGresult for use in calls to PQgetvalue().\n\nI may be running out of opportunities where this style of optimization\nmakes much difference. I'll likely start focusing on the restore side\nsoon.\n\n-- \nnathan",
"msg_date": "Thu, 1 Aug 2024 11:52:57 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_dump: optimize dumpFunc()"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I've recently committed some optimizations for dumping sequences and\n> pg_class information (commits 68e9629, bd15b7d, and 2329cad), and I noticed\n> that we are also executing a query per function in pg_dump. Commit be85727\n> optimized this by preparing the query ahead of time, but I found that we\n> can improve performance further by gathering all the relevant data in a\n> single query. Here are the results I see for a database with 10k simple\n> functions with and without the attached patch:\n\nI'm a bit concerned about this on two grounds:\n\n1. Is it a win for DBs with not so many functions?\n\n2. On the other end of the scale, if you've got a *boatload* of\nfunctions, what does it do to pg_dump's memory requirements?\nI'm recalling my days at Salesforce, where they had quite a few\nthousand pl/pgsql functions totalling very many megabytes of source\ntext. (Don't recall precise numbers offhand, and they'd be obsolete\nby now even if I did.)\n\nI'm not sure that the results you're showing justify taking any\nrisk here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 01:33:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: optimize dumpFunc()"
},
{
"msg_contents": "On Fri, Aug 02, 2024 at 01:33:45AM -0400, Tom Lane wrote:\n> I'm a bit concerned about this on two grounds:\n> \n> 1. Is it a win for DBs with not so many functions?\n> \n> 2. On the other end of the scale, if you've got a *boatload* of\n> functions, what does it do to pg_dump's memory requirements?\n> I'm recalling my days at Salesforce, where they had quite a few\n> thousand pl/pgsql functions totalling very many megabytes of source\n> text. (Don't recall precise numbers offhand, and they'd be obsolete\n> by now even if I did.)\n> \n> I'm not sure that the results you're showing justify taking any\n> risk here.\n\nHm. I think this is sufficient reason to withdraw this patch on the spot.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 2 Aug 2024 15:00:45 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump: optimize dumpFunc()"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 4:00 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Fri, Aug 02, 2024 at 01:33:45AM -0400, Tom Lane wrote:\n> > 2. On the other end of the scale, if you've got a *boatload* of\n> > functions, what does it do to pg_dump's memory requirements?\n>\n> Hm. I think this is sufficient reason to withdraw this patch on the spot.\n\n\nWe could potentially rework the list of dumpable objects so that each list\nitem represents one or more objects of the same type that we can fetch via\na single query. We could then make whatever tradeoff we want in terms of\nfetch batch size vs client-side memory consumption.\n\nDebatable whether it is worth the hit to code readability though, and might\nbe a bit grotty to do the tsort we need to do for dependency handling...\n\nNeil\n\nOn Fri, Aug 2, 2024 at 4:00 PM Nathan Bossart <nathandbossart@gmail.com> wrote:On Fri, Aug 02, 2024 at 01:33:45AM -0400, Tom Lane wrote:> 2. On the other end of the scale, if you've got a *boatload* of\n> functions, what does it do to pg_dump's memory requirements?\nHm. I think this is sufficient reason to withdraw this patch on the spot.We could potentially rework the list of dumpable objects so that each list item represents one or more objects of the same type that we can fetch via a single query. We could then make whatever tradeoff we want in terms of fetch batch size vs client-side memory consumption.Debatable whether it is worth the hit to code readability though, and might be a bit grotty to do the tsort we need to do for dependency handling...Neil",
"msg_date": "Sat, 3 Aug 2024 14:50:14 -0400",
"msg_from": "Neil Conway <neil.conway@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: optimize dumpFunc()"
}
] |
[
{
"msg_contents": "EXPLAIN ANALYZE CREATE MATERIALIZED VIEW doesn't go through\nExecCreateTableAs(), but does use CreateIntoRelDestReceiver().\n\nThat bypasses the SECURITY_RESTRICTED_OPERATION in ExecCreateTableAs().\nThat is *not* a security problem, because the\nSECURITY_RESTRICTED_OPERATION in CREATE MATERIALIZED VIEW is merely for\nconsistency with a later REFRESH MATERIALIZED VIEW command where the\nSECURITY_RESTRICTED_OPERATION is important.\n\nBut it is inconsistent. For example:\n\n create or replace function set() returns int\n language plpgsql as $$\n begin\n create temp table x(i int);\n return 42;\n end;\n $$;\n create materialized view mv1 as select set(); -- fails\n explain analyze\n create materialized view mv1 as select set(); -- succeeds\n\nRelatedly, if we can EXPLAIN a CREATE MATERIALIZED VIEW, perhaps we\nshould be able to EXPLAIN a REFRESH MATERIALIZED VIEW, too?\n\nComments?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 01 Aug 2024 11:27:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> Relatedly, if we can EXPLAIN a CREATE MATERIALIZED VIEW, perhaps we\n> should be able to EXPLAIN a REFRESH MATERIALIZED VIEW, too?\nSure\n\n> Comments?\nSeems like this is indeed inconsistent behaviour and should be fixed\nin all PGDG-supported versions in the upcoming August release.\nDo you have any suggestions on how to fix this?\n\n\n> Regards,\n> Jeff Davis\n\n\n",
"msg_date": "Thu, 1 Aug 2024 23:41:18 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> EXPLAIN ANALYZE CREATE MATERIALIZED VIEW doesn't go through\n> ExecCreateTableAs(), but does use CreateIntoRelDestReceiver().\n\nEXPLAIN ANALYZE and regular query goes through create_ctas_internal\n(WITH NO DATA case too). Maybe we can simply move\nSetUserIdAndSecContext call in this function?\n\n\n",
"msg_date": "Fri, 2 Aug 2024 00:13:18 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Fri, 2024-08-02 at 00:13 +0500, Kirill Reshke wrote:\n> On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> > \n> > EXPLAIN ANALYZE CREATE MATERIALIZED VIEW doesn't go through\n> > ExecCreateTableAs(), but does use CreateIntoRelDestReceiver().\n> \n> EXPLAIN ANALYZE and regular query goes through create_ctas_internal\n> (WITH NO DATA case too). Maybe we can simply move\n> SetUserIdAndSecContext call in this function?\n\nWe need to set up the SECURITY_RESTRICTED_OPERATION before planning, in\ncase the planner executes some functions.\n\nI believe we need to do some more refactoring to make this work. In\nversion 17, I already refactored so that CREATE MATERIALIZED VIEW and\nREFRESH MATERIALIZED VIEW share the code. We can do something similar\nto extend that to EXPLAIN ... CREATE MATERIALIZED VIEW.\n\nAs for the August release, the code freeze is on Saturday. Perhaps it\ncan be done by then, but is there a reason we should rush it? This\naffects all supported versions, so we've lived with it for a while, and\nI don't see a security problem here. I wouldn't expect it to be a\ncommon use case, either.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 01 Aug 2024 13:34:51 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, 1 Aug 2024 23:41:18 +0500\nKirill Reshke <reshkekirill@gmail.com> wrote:\n\n> On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> > Relatedly, if we can EXPLAIN a CREATE MATERIALIZED VIEW, perhaps we\n> > should be able to EXPLAIN a REFRESH MATERIALIZED VIEW, too?\n> Sure\n\nREFRESH MATERIALIZED VIEW consists of not only the view query\nexecution in refresh_matview_datafill but also refresh_by_heap_swap\nor refresh_by_match_merge. The former doesn't execute any query, so\nit would not a problem, but the latter executes additional queries\nincluding SELECT, some DDL, DELETE, and INSERT. \n\nIf we would make EXPLAIN support REFRESH MATERIALIZED VIEW CONCURRENTLY\ncommand, how should we handle these additional queries?\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 7 Aug 2024 01:06:44 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, 6 Aug 2024 at 21:06, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>\n> On Thu, 1 Aug 2024 23:41:18 +0500\n> Kirill Reshke <reshkekirill@gmail.com> wrote:\n>\n> > On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> > > Relatedly, if we can EXPLAIN a CREATE MATERIALIZED VIEW, perhaps we\n> > > should be able to EXPLAIN a REFRESH MATERIALIZED VIEW, too?\n> > Sure\n>\n> REFRESH MATERIALIZED VIEW consists of not only the view query\n> execution in refresh_matview_datafill but also refresh_by_heap_swap\n> or refresh_by_match_merge. The former doesn't execute any query, so\n> it would not a problem, but the latter executes additional queries\n> including SELECT, some DDL, DELETE, and INSERT.\n>\n> If we would make EXPLAIN support REFRESH MATERIALIZED VIEW CONCURRENTLY\n> command, how should we handle these additional queries?\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n\nHmm, is it a big issue? Maybe we can just add them in proper places of\noutput the same way we do it with triggers?\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Tue, 6 Aug 2024 21:20:20 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, 01 Aug 2024 13:34:51 -0700\nJeff Davis <pgsql@j-davis.com> wrote:\n\n> On Fri, 2024-08-02 at 00:13 +0500, Kirill Reshke wrote:\n> > On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> > > \n> > > EXPLAIN ANALYZE CREATE MATERIALIZED VIEW doesn't go through\n> > > ExecCreateTableAs(), but does use CreateIntoRelDestReceiver().\n> > \n> > EXPLAIN ANALYZE and regular query goes through create_ctas_internal\n> > (WITH NO DATA case too). Maybe we can simply move\n> > SetUserIdAndSecContext call in this function?\n> \n> We need to set up the SECURITY_RESTRICTED_OPERATION before planning, in\n> case the planner executes some functions.\n> \n> I believe we need to do some more refactoring to make this work. In\n> version 17, I already refactored so that CREATE MATERIALIZED VIEW and\n> REFRESH MATERIALIZED VIEW share the code. We can do something similar\n> to extend that to EXPLAIN ... CREATE MATERIALIZED VIEW.\n\nI think the most simple way to fix this is to set up the userid and the\nthe SECURITY_RESTRICTED_OPERATION, and call RestrictSearchPath()\nbefore pg_plan_query in ExplainOneQuery(), as in the attached patch.\n\nHowever, this is similar to the old code of ExecCreateTableAs() before\nrefactored. If we want to reuse REFRESH even in EXPLAIN as similar as the\ncurrent CREATE MATERIALIZED code, I think we would have to refactor\nRefereshMatViewByOid to receive instrument_option and eflags as arguments\nand call it in ExplainOnePlan, for example.\n\nDo you think that is more preferred than setting up\nSECURITY_RESTRICTED_OPERATION in ExplainOneQuery?\n \n> As for the August release, the code freeze is on Saturday. Perhaps it\n> can be done by then, but is there a reason we should rush it? This\n> affects all supported versions, so we've lived with it for a while, and\n> I don't see a security problem here. I wouldn't expect it to be a\n> common use case, either.\n\nI agree that we don't have to rush it since it is not a security bug\nbut just it could make a materialized view that cannot be refreshed.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Wed, 7 Aug 2024 02:13:04 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, 6 Aug 2024 at 22:13, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>\n> On Thu, 01 Aug 2024 13:34:51 -0700\n> Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > On Fri, 2024-08-02 at 00:13 +0500, Kirill Reshke wrote:\n> > > On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> > > >\n> > > > EXPLAIN ANALYZE CREATE MATERIALIZED VIEW doesn't go through\n> > > > ExecCreateTableAs(), but does use CreateIntoRelDestReceiver().\n> > >\n> > > EXPLAIN ANALYZE and regular query goes through create_ctas_internal\n> > > (WITH NO DATA case too). Maybe we can simply move\n> > > SetUserIdAndSecContext call in this function?\n> >\n> > We need to set up the SECURITY_RESTRICTED_OPERATION before planning, in\n> > case the planner executes some functions.\n> >\n> > I believe we need to do some more refactoring to make this work. In\n> > version 17, I already refactored so that CREATE MATERIALIZED VIEW and\n> > REFRESH MATERIALIZED VIEW share the code. We can do something similar\n> > to extend that to EXPLAIN ... CREATE MATERIALIZED VIEW.\n>\n> I think the most simple way to fix this is to set up the userid and the\n> the SECURITY_RESTRICTED_OPERATION, and call RestrictSearchPath()\n> before pg_plan_query in ExplainOneQuery(), as in the attached patch.\n>\n> However, this is similar to the old code of ExecCreateTableAs() before\n> refactored. If we want to reuse REFRESH even in EXPLAIN as similar as the\n> current CREATE MATERIALIZED code, I think we would have to refactor\n> RefereshMatViewByOid to receive instrument_option and eflags as arguments\n> and call it in ExplainOnePlan, for example.\n>\n> Do you think that is more preferred than setting up\n> SECURITY_RESTRICTED_OPERATION in ExplainOneQuery?\n>\n> > As for the August release, the code freeze is on Saturday. Perhaps it\n> > can be done by then, but is there a reason we should rush it? This\n> > affects all supported versions, so we've lived with it for a while, and\n> > I don't see a security problem here. I wouldn't expect it to be a\n> > common use case, either.\n>\n> I agree that we don't have to rush it since it is not a security bug\n> but just it could make a materialized view that cannot be refreshed.\n>\n> Regards,\n> Yugo Nagata\n>\n> --\n> Yugo Nagata <nagata@sraoss.co.jp>\n\n> + /*\n> + * For CREATE MATERIALIZED VIEW command, switch to the owner's userid, so\n> + * that any functions are run as that user. Also lock down security-restricted\n> + * operations and arrange to make GUC variable changes local to this command.\n> + */\n> + if (into && into->viewQuery)\n> + {\n> + GetUserIdAndSecContext(&save_userid, &save_sec_context);\n> + SetUserIdAndSecContext(save_userid,\n> + save_sec_context | SECURITY_RESTRICTED_OPERATION);\n> + save_nestlevel = NewGUCNestLevel();\n> + RestrictSearchPath();\n> + }\n> +\n\nIn general, doing this ad-hoc coding for MV inside\nstandard_ExplainOneQuery function looks just out of place for me.\nstandard_ExplainOneQuery is on another level of abstraction.\n\n-- \nBest regards,\nKirill Reshke\n\n\n",
"msg_date": "Tue, 6 Aug 2024 22:35:50 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, 7 Aug 2024 02:13:04 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> On Thu, 01 Aug 2024 13:34:51 -0700\n> Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> > On Fri, 2024-08-02 at 00:13 +0500, Kirill Reshke wrote:\n> > > On Thu, 1 Aug 2024 at 23:27, Jeff Davis <pgsql@j-davis.com> wrote:\n> > > > \n> > > > EXPLAIN ANALYZE CREATE MATERIALIZED VIEW doesn't go through\n> > > > ExecCreateTableAs(), but does use CreateIntoRelDestReceiver().\n> > > \n> > > EXPLAIN ANALYZE and regular query goes through create_ctas_internal\n> > > (WITH NO DATA case too). Maybe we can simply move\n> > > SetUserIdAndSecContext call in this function?\n> > \n> > We need to set up the SECURITY_RESTRICTED_OPERATION before planning, in\n> > case the planner executes some functions.\n> > \n> > I believe we need to do some more refactoring to make this work. In\n> > version 17, I already refactored so that CREATE MATERIALIZED VIEW and\n> > REFRESH MATERIALIZED VIEW share the code. We can do something similar\n> > to extend that to EXPLAIN ... CREATE MATERIALIZED VIEW.\n> \n> I think the most simple way to fix this is to set up the userid and the\n> the SECURITY_RESTRICTED_OPERATION, and call RestrictSearchPath()\n> before pg_plan_query in ExplainOneQuery(), as in the attached patch.\n\nI'm sorry. After sending the mail, I found the patch didn't work.\nIf we call RestrictSearchPath() before creating a relation, it tries\nto create the relation in pg_catalog and it causes an error.\n\nPerhaps, we would need to create the rel before RestrictSearchPath() by calling\ncreate_ctas_nodata or something like this...\n\n> \n> However, this is similar to the old code of ExecCreateTableAs() before\n> refactored. If we want to reuse REFRESH even in EXPLAIN as similar as the\n> current CREATE MATERIALIZED code, I think we would have to refactor\n> RefereshMatViewByOid to receive instrument_option and eflags as arguments\n> and call it in ExplainOnePlan, for example.\n> \n> Do you think that is more preferred than setting up\n> SECURITY_RESTRICTED_OPERATION in ExplainOneQuery?\n> \n> > As for the August release, the code freeze is on Saturday. Perhaps it\n> > can be done by then, but is there a reason we should rush it? This\n> > affects all supported versions, so we've lived with it for a while, and\n> > I don't see a security problem here. I wouldn't expect it to be a\n> > common use case, either.\n> \n> I agree that we don't have to rush it since it is not a security bug\n> but just it could make a materialized view that cannot be refreshed.\n> \n> Regards,\n> Yugo Nagata\n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 7 Aug 2024 03:06:14 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, 2024-08-07 at 03:06 +0900, Yugo Nagata wrote:\n> I'm sorry. After sending the mail, I found the patch didn't work.\n> If we call RestrictSearchPath() before creating a relation, it tries\n> to create the relation in pg_catalog and it causes an error.\n\nYeah, that's exactly the problem I ran into in ExecCreateTableAs(),\nwhich was the reason I refactored it to use RefreshMatViewByOid().\n\n> Perhaps, we would need to create the rel before RestrictSearchPath()\n> by calling\n> create_ctas_nodata or something like this...\n\nI haven't looked at the details, but that makes sense: break it into\ntwo parts so that the create (without data) happens first without doing\nthe work of EXPLAIN, and the second part refreshes using the explain\nlogic. That means RefreshMatViewByOid() would need to work with\nEXPLAIN.\n\nAs you point out in the other email, it's not easy to make that all\nwork with REFRESH ... CONCURRENTLY, but perhaps it could work with\nCREATE MATERIALIZED VIEW and REFRESH (without CONCURRENTLY).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 06 Aug 2024 11:24:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> As you point out in the other email, it's not easy to make that all\n> work with REFRESH ... CONCURRENTLY, but perhaps it could work with\n> CREATE MATERIALIZED VIEW and REFRESH (without CONCURRENTLY).\n\nI'm not really sure I see the point of this, if it doesn't \"just work\"\nwith all variants of C.M.V. It's not like you can't easily EXPLAIN\nthe view's SELECT.\n\nIf REFRESH M. V. does something different than CREATE, there would\ncertainly be value in being able to EXPLAIN what that does --- but\nthat still isn't an argument for allowing EXPLAIN CREATE MATERIALIZED\nVIEW.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2024 14:36:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, 06 Aug 2024 11:24:03 -0700\nJeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2024-08-07 at 03:06 +0900, Yugo Nagata wrote:\n> > I'm sorry. After sending the mail, I found the patch didn't work.\n> > If we call RestrictSearchPath() before creating a relation, it tries\n> > to create the relation in pg_catalog and it causes an error.\n> \n> Yeah, that's exactly the problem I ran into in ExecCreateTableAs(),\n> which was the reason I refactored it to use RefreshMatViewByOid().\n\nI came up with an idea that we can rewrite into->rel to add the\ncurrent schemaname before calling RestrictSearchPath() as in the\nattached patch.\n\nIt seems a simpler fix at least, but I am not sure whether this design\nis better than using RefreshMatViewByOid from EXPLAIN considering\nto support EXPLAIN REFRESH ...\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Wed, 7 Aug 2024 04:06:29 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, 2024-08-06 at 14:36 -0400, Tom Lane wrote:\n> I'm not really sure I see the point of this, if it doesn't \"just\n> work\"\n> with all variants of C.M.V. It's not like you can't easily EXPLAIN\n> the view's SELECT.\n> \n> If REFRESH M. V. does something different than CREATE, there would\n> certainly be value in being able to EXPLAIN what that does --- but\n> that still isn't an argument for allowing EXPLAIN CREATE MATERIALIZED\n> VIEW.\n\nWe already allow EXPLAIN ANALYZE CREATE MATERIALIZED VIEW in all\nsupported versions.\n\nThat seems strange and it surprised me, but the parser structure is\nshared between SELECT ... INTO and CREATE MATERIALIZED VIEW, so I\nsuppose it was supported out of convenience.\n\nThe problem is that the implentation is split between the EXPLAIN\nANALYZE path and the non-EXPLAIN path. The long-ago commit f3ab5d4696\nmissed the EXPLAIN path. NB: I do not believe this is a security\nconcern, but it does create the inconsistency described in the email\nstarting this thread.\n\nOptions:\n\n1. Do nothing on the grounds that EXPLAIN ANALYZE CREATE MATERIALIZED\nVIEW is not common enough to worry about, and the consequences of the\ninconsistency are not bad enough.\n\n2. Refactor some more to make EXPLAIN ANALYZE CREATE MATERIALIZED VIEW\nshare the query part of the code path with REFRESH so that it benefits\nfrom the SECURITY_RESTRICTED_OPERATION and RestrictSearchPath().\n\n3. Do #2 but also make it work for REFRESH, but not CONCURRENTLY.\n\n4. Do #3 but also make it work for REFRESH ... CONCURRENTLY and provide\nnew information that's not available by only explaining the query.\n\nAnd also figure out if any of this should be back-patched.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 06 Aug 2024 14:20:37 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency with EXPLAIN ANALYZE CREATE MATERIALIZED VIEW"
}
] |
[
{
"msg_contents": "I complained in the discussion of bug #18564 [1] that it's quite\ninconsistent that you can cast a jsonb null to text and get\na SQL NULL:\n\n=# select ('{\"a\": null}'::jsonb)->>'a';\n ?column? \n----------\n \n(1 row)\n\nbut if you cast it to any other type it's an error:\n\n=# select (('{\"a\": null}'::jsonb)->'a')::float8;\nERROR: cannot cast jsonb null to type double precision\n\nI think this should be allowed and should produce a SQL NULL.\nIt doesn't look hard: the attached POC patch fixes this for\nthe float8 case only. If there's not conceptual objections\nI can flesh this out to cover the other jsonb-to-XXX\ncast functions.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/18564-5985f90678ed7512%40postgresql.org",
"msg_date": "Thu, 01 Aug 2024 18:51:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Casts from jsonb to other types should cope with json null"
},
{
"msg_contents": "On Thu, Aug 1, 2024 at 3:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I complained in the discussion of bug #18564 [1] that it's quite\n> inconsistent that you can cast a jsonb null to text and get\n> a SQL NULL:\n>\n> =# select ('{\"a\": null}'::jsonb)->>'a';\n> ?column?\n> ----------\n>\n> (1 row)\n\nOddly, it looks like you only get a null if you use the '->>'\noperator. With '->' and a subsequent cast to text, you get the string\n\"null\":\n\nmaciek=# select (('{\"a\":null}'::jsonb)->'a')::text;\n text\n------\n null\n(1 row)\n\nIs that expected?\n\n\n",
"msg_date": "Thu, 1 Aug 2024 16:29:58 -0700",
"msg_from": "Maciek Sakrejda <maciek@pganalyze.com>",
"msg_from_op": false,
"msg_subject": "Re: Casts from jsonb to other types should cope with json null"
},
{
"msg_contents": "Maciek Sakrejda <maciek@pganalyze.com> writes:\n> Oddly, it looks like you only get a null if you use the '->>'\n> operator. With '->' and a subsequent cast to text, you get the string\n> \"null\":\n\n> maciek=# select (('{\"a\":null}'::jsonb)->'a')::text;\n> text\n> ------\n> null\n> (1 row)\n\n> Is that expected?\n\nI think what is happening there is you're getting the fallback\n\"cast via I/O\" behavior. There's no jsonb->text cast function\nin the catalogs.\n\nPerhaps it's worth adding one, so that it can be made to behave\nsimilarly to the casts to other types. However, that would be\na compatibility break for a case that doesn't fail today, so\nit might be a harder sell than changing cases that do fail.\nI'm mildly in favor of doing it, but somebody else might\nthink differently.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2024 19:44:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Casts from jsonb to other types should cope with json null"
},
{
"msg_contents": "On Thursday, August 1, 2024, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Maciek Sakrejda <maciek@pganalyze.com> writes:\n> > Oddly, it looks like you only get a null if you use the '->>'\n> > operator. With '->' and a subsequent cast to text, you get the string\n> > \"null\":\n>\n> > maciek=# select (('{\"a\":null}'::jsonb)->'a')::text;\n> > text\n> > ------\n> > null\n> > (1 row)\n>\n> > Is that expected?\n>\n> I think what is happening there is you're getting the fallback\n> \"cast via I/O\" behavior. There's no jsonb->text cast function\n> in the catalogs.\n>\n> Perhaps it's worth adding one, so that it can be made to behave\n> similarly to the casts to other types.\n>\n\nI’m not too keen on opening Pandora’s box here even if I do regret our\ncurrent choices. Semantic casting of json scalar strings only, and doing\ndocument serialization as a function, would have been better in hindsight.\n\nI am fine with implementing the conversion of json null types to SQL null\nfor all casts that already do semantic value casting, and thus recognize\nbut prohibit the cast, as shown for float.\n\nI read the discussion thread [1] that added this and while one person\nmentioned json null no one replied to that point and seemingly no explicit\nconsideration for treating json null semantically was ever done - i.e. this\nfails only because in json null has its own type, and the test were type,\nnot value, oriented. As SQL null is a value only, whose type is whatever\nholds it, I’d argue our lack of doing this even constitutes a bug but\nwouldn’t - and turning errors into non-errors has a lower “bug acceptance\nthreshold”.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e%40postgrespro.ru\n\nOn Thursday, August 1, 2024, Tom Lane <tgl@sss.pgh.pa.us> wrote:Maciek Sakrejda <maciek@pganalyze.com> writes:\n> Oddly, it looks like you only get a null if you use the '->>'\n> operator. With '->' and a subsequent cast to text, you get the string\n> \"null\":\n\n> maciek=# select (('{\"a\":null}'::jsonb)->'a')::text;\n> text\n> ------\n> null\n> (1 row)\n\n> Is that expected?\n\nI think what is happening there is you're getting the fallback\n\"cast via I/O\" behavior. There's no jsonb->text cast function\nin the catalogs.\n\nPerhaps it's worth adding one, so that it can be made to behave\nsimilarly to the casts to other types. \nI’m not too keen on opening Pandora’s box here even if I do regret our current choices. Semantic casting of json scalar strings only, and doing document serialization as a function, would have been better in hindsight.I am fine with implementing the conversion of json null types to SQL null for all casts that already do semantic value casting, and thus recognize but prohibit the cast, as shown for float.I read the discussion thread [1] that added this and while one person mentioned json null no one replied to that point and seemingly no explicit consideration for treating json null semantically was ever done - i.e. this fails only because in json null has its own type, and the test were type, not value, oriented. As SQL null is a value only, whose type is whatever holds it, I’d argue our lack of doing this even constitutes a bug but wouldn’t - and turning errors into non-errors has a lower “bug acceptance threshold”.David J.[1] https://www.postgresql.org/message-id/flat/0154d35a-24ae-f063-5273-9ffcdf1c7f2e%40postgrespro.ru",
"msg_date": "Thu, 1 Aug 2024 21:36:21 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Casts from jsonb to other types should cope with json null"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thursday, August 1, 2024, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think what is happening there is you're getting the fallback\n>> \"cast via I/O\" behavior. There's no jsonb->text cast function\n>> in the catalogs.\n>> Perhaps it's worth adding one, so that it can be made to behave\n>> similarly to the casts to other types.\n\n> I’m not too keen on opening Pandora’s box here even if I do regret our\n> current choices. Semantic casting of json scalar strings only, and doing\n> document serialization as a function, would have been better in hindsight.\n> ...\n> I read the discussion thread [1] that added this and while one person\n> mentioned json null no one replied to that point and seemingly no explicit\n> consideration for treating json null semantically was ever done - i.e. this\n> fails only because in json null has its own type, and the test were type,\n> not value, oriented. As SQL null is a value only, whose type is whatever\n> holds it, I’d argue our lack of doing this even constitutes a bug but\n> wouldn’t - and turning errors into non-errors has a lower “bug acceptance\n> threshold”.\n\nYeah, it's clear that this wasn't thought about too hard, because\nI discovered that changing this behavior affects exactly zero existing\nregression test cases :-(. But I still think we should probably\nchange it. Aside from ->>, we have other operators/functions that\nconvert jsonb values to SQL types, such as #>>,\njsonb_array_elements_text, jsonb_each_text, and AFAICS every one\nof those translates JSON null to SQL NULL.\n\nAttached are some actual patches for this. 0001 just changes the\nexisting json-to-scalar casts; since those previously threw\nan error, I think that change is not too controversial. 0002 is\nPOC for changing the behavior of jsonb::text. I don't think\nit's committable as-is, because we have the same behavior for\njsonb::varchar and other string-category target types. That's\nnot all that hard to fix, but I've not done so pending a decision\non whether we want 0002.\n\nIt strikes me that there's also a question of whether json::text\nshould translate 'null' like this. I'm inclined to think not,\nsince json is in some sense a domain over text, but it's certainly\ndebatable.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 03 Aug 2024 16:10:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Casts from jsonb to other types should cope with json null"
}
] |
[
{
"msg_contents": "Hi,\n\nI found that pg_createsubscriber doesn't use NLS files. This is due to\nthe wrong reference name \"pg_createsubscriber\" being passed to\nset_pglocale_pgservice(). It should be \"pg_basebackup\" instead. See\nthe attached patch.\n\n# Sorry for being away for a while. I should be able to return soon.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 02 Aug 2024 11:57:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "wrong translation file reference in pg_createsubscriber"
},
{
"msg_contents": "Hello,\n\nOn 2024-Aug-02, Kyotaro Horiguchi wrote:\n\n> I found that pg_createsubscriber doesn't use NLS files. This is due to\n> the wrong reference name \"pg_createsubscriber\" being passed to\n> set_pglocale_pgservice(). It should be \"pg_basebackup\" instead. See\n> the attached patch.\n\nAbsolutely right. Pushed, thanks.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 2 Aug 2024 12:07:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: wrong translation file reference in pg_createsubscriber"
}
] |
[
{
"msg_contents": "Hi all,\n\nSome utility statements like Explain, CreateTableAs and DeclareCursor\ncontain a query which will be planned and executed. During post parse,\nonly the top utility statement is jumbled, leaving the contained query\nwithout a query_id set. Explain does the query jumble in ExplainQuery\nbut for the contained query but CreateTableAs and DeclareCursor were\nleft with unset query_id.\n\nThis leads to extensions relying on query_id like pg_stat_statements\nto not be able to track those nested queries as the query_id was 0.\n\nThis patch fixes this by recursively jumbling queries contained in\nthose utility statements during post_parse, setting the query_id for\nthose contained queries and removing the need for ExplainQuery to do\nit for the Explain statements.\n\nRegards,\nAnthonin Bonnefoy",
"msg_date": "Fri, 2 Aug 2024 10:21:53 +0200",
"msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>",
"msg_from_op": true,
"msg_subject": "Set query_id for query contained in utility statement"
},
{
"msg_contents": "I've realised my initial approach was wrong, calling the post parse\nfor all nested queries in analyze.c prevents extension like pgss to\ncorrectly track the query's nesting level.\n\nI've changed the approach to replicate what's done in ExplainQuery to\nboth CreateTableAs and DeclareCursor: Jumble the query contained by\nthe utility statement and call the post parse hook before it is\nplanned or executed. Additionally, explain's nested query can itself\nbe a CreateTableAs or DeclareCursor which also needs to be jumbled.\nThe issue is visible when explaining a CreateTableAs or DeclareCursor\nQuery, the queryId is missing despite the verbose.\n\nEXPLAIN (verbose) create table test_t as select 1;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n\nPost patch, the query id is correctly displayed.\n\nEXPLAIN (verbose) create table test_t as select 1;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n Query Identifier: 2800308901962295548\n\nRegards,\nAnthonin Bonnefoy",
"msg_date": "Mon, 5 Aug 2024 09:19:08 +0200",
"msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>",
"msg_from_op": true,
"msg_subject": "Re: Set query_id for query contained in utility statement"
},
{
"msg_contents": "On Mon, Aug 5, 2024 at 3:19 PM Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n> I've realised my initial approach was wrong, calling the post parse\n> for all nested queries in analyze.c prevents extension like pgss to\n> correctly track the query's nesting level.\n>\n> I've changed the approach to replicate what's done in ExplainQuery to\n> both CreateTableAs and DeclareCursor: Jumble the query contained by\n> the utility statement and call the post parse hook before it is\n> planned or executed. Additionally, explain's nested query can itself\n> be a CreateTableAs or DeclareCursor which also needs to be jumbled.\n> The issue is visible when explaining a CreateTableAs or DeclareCursor\n> Query, the queryId is missing despite the verbose.\n>\n> EXPLAIN (verbose) create table test_t as select 1;\n> QUERY PLAN\n> ------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=4)\n> Output: 1\n>\n> Post patch, the query id is correctly displayed.\n>\n> EXPLAIN (verbose) create table test_t as select 1;\n> QUERY PLAN\n> ------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=4)\n> Output: 1\n> Query Identifier: 2800308901962295548\n>\n\nplay with pg_stat_statements. settings:\n name | setting\n-----------------------------------+---------\n pg_stat_statements.max | 5000\n pg_stat_statements.save | on\n pg_stat_statements.track | all\n pg_stat_statements.track_planning | on\n pg_stat_statements.track_utility | on\n\nSELECT pg_stat_statements_reset();\nselect 1;\nselect 2;\nSELECT queryid, left(query, 60),plans, calls, rows FROM\npg_stat_statements ORDER BY calls DESC LIMIT 5;\nreturns:\n queryid | left\n | plans | calls | rows\n----------------------+--------------------------------------------------------------+-------+-------+------\n 2800308901962295548 | select $1\n | 2 | 2 | 2\n\nThe output is what we expect.\n\nnow after applying your patch.\nSELECT pg_stat_statements_reset();\nEXPLAIN (verbose) create table test_t as select 1;\nEXPLAIN (verbose) create table test_t as select 2;\nSELECT queryid, left(query, 60),plans, calls, rows FROM\npg_stat_statements ORDER BY calls DESC LIMIT 5;\n\nthe output is:\n queryid | left\n | plans | calls | rows\n----------------------+--------------------------------------------------------------+-------+-------+------\n 2800308901962295548 | EXPLAIN (verbose) create table test_t as\nselect 1; | 2 | 2 | 0\n 2093602470903273926 | EXPLAIN (verbose) create table test_t as\nselect $1 | 0 | 2 | 0\n -2694081619397734273 | SELECT pg_stat_statements_reset()\n | 0 | 1 | 1\n 2643570658797872815 | SELECT queryid, left(query, $1),plans, calls,\nrows FROM pg_s | 1 | 0 | 0\n\n\"EXPLAIN (verbose) create table test_t as select 1;\" called twice,\nis that what we expect?\n\nshould first row, the normalized query becomes\n\nEXPLAIN (verbose) create table test_t as select $1;\n\n?\n\n\n",
"msg_date": "Mon, 26 Aug 2024 11:26:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set query_id for query contained in utility statement"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 5:26 AM jian he <jian.universality@gmail.com> wrote:\n> queryid | left\n> | plans | calls | rows\n> ----------------------+--------------------------------------------------------------+-------+-------+------\n> 2800308901962295548 | EXPLAIN (verbose) create table test_t as\n> select 1; | 2 | 2 | 0\n> 2093602470903273926 | EXPLAIN (verbose) create table test_t as\n> select $1 | 0 | 2 | 0\n>\n> \"EXPLAIN (verbose) create table test_t as select 1;\" called twice,\n> is that what we expect?\n\npg_stat_statements reports nested queries and toplevel queries\nseparately. You can check with the toplevel column.\n2800308901962295548 is the nested ctas part while 2093602470903273926\nis the top explain utility statement (which explain why there's 0\nplans since it's an utility statement). Since the explain ctas query\nwas called twice, it will be reported as 2 toplevel queries and 2\nnested queries.\n\n> should first row, the normalized query becomes\n> EXPLAIN (verbose) create table test_t as select $1;\n\nGood point, the issue in this case was the nested query was stored by\npg_stat_statements during the ExecutorEnd hook. This hook doesn't have\nthe jstate and parseState at that point so pg_stat_statements can't\nnormalize the query.\n\nI've modified the patch to fix this. Instead of just jumbling the\nquery in ExplainQuery, I've moved jumbling in ExplainOneUtility which\nalready has specific code to handle ctas and dcs. Calling the post\nparse hook here allows pg_stat_statements to store the normalized\nversion of the query for this queryid and nesting level.",
"msg_date": "Mon, 26 Aug 2024 10:54:56 +0200",
"msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>",
"msg_from_op": true,
"msg_subject": "Re: Set query_id for query contained in utility statement"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 4:55 PM Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n\n /* Evaluate parameters, if any */\n if (entry->plansource->num_params)\n {\n- ParseState *pstate;\n-\n- pstate = make_parsestate(NULL);\n- pstate->p_sourcetext = queryString;\n\nyou deleted the above these lines, but passed (ParseState *pstate) in\nExplainExecuteQuery\nhow do you make sure ExplainExecuteQuery passed (ParseState *pstate)\nthe p_next_resno is 1 and p_resolve_unknowns is true.\nmaybe we can add some Asserts like in ExplainExecuteQuery\n\n /* Evaluate parameters, if any */\n if (entry->plansource->num_params)\n {\n Assert(pstate->p_next_resno == 1);\n Assert(pstate->p_resolve_unknowns == 1);\n }\n\n\n\nalso it's ok to use passed (ParseState *pstate) for\n{\n estate = CreateExecutorState();\n estate->es_param_list_info = params;\n paramLI = EvaluateParams(pstate, entry, execstmt->params, estate);\n}\n?\nI really don't know.\n\n\n\nsome of the change is refactoring, maybe you can put it into a separate patch.\n\n\n",
"msg_date": "Tue, 27 Aug 2024 17:12:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set query_id for query contained in utility statement"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 11:14 AM jian he <jian.universality@gmail.com> wrote:\n> also it's ok to use passed (ParseState *pstate) for\n> {\n> estate = CreateExecutorState();\n> estate->es_param_list_info = params;\n> paramLI = EvaluateParams(pstate, entry, execstmt->params, estate);\n> }\n> ?\n> I really don't know.\n>\n> some of the change is refactoring, maybe you can put it into a separate patch.\n\nThanks for the review. I think the parser state is mostly used for the\nerror callbacks and parser_errposition but I'm not 100% sure. Either\nway, you're right and it probably shouldn't be in the patch. I've\nmodified the patch to restrict the changes to only add the necessary\nquery jumble and post parse hook calls.",
"msg_date": "Fri, 30 Aug 2024 09:37:03 +0200",
"msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>",
"msg_from_op": true,
"msg_subject": "Re: Set query_id for query contained in utility statement"
},
{
"msg_contents": "PREPARE test_prepare_pgss1(int, int) AS select generate_series($1, $2);\nSELECT pg_stat_statements_reset() IS NOT NULL AS t;\nEXECUTE test_prepare_pgss1 (1,2);\nEXECUTE test_prepare_pgss1 (1,3);\nSELECT toplevel, calls, query, queryid, rows FROM pg_stat_statements\nORDER BY query COLLATE \"C\", toplevel;\nSELECT pg_stat_statements_reset() IS NOT NULL AS t;\n\n---the above works just fine. just for demo purpose\n\nexplain(verbose) EXECUTE test_prepare_pgss1(1, 2);\nexplain(verbose) EXECUTE test_prepare_pgss1(1, 3);\n\n\nSELECT toplevel, calls, query, queryid, rows FROM pg_stat_statements\nORDER BY query COLLATE \"C\", toplevel;\n toplevel | calls | query\n | queryid | rows\n----------+-------+------------------------------------------------------------------------+----------------------+------\n f | 2 | PREPARE test_prepare_pgss1(int, int) AS select\ngenerate_series($1, $2) | -3421048434214482065 | 0\n t | 1 | SELECT pg_stat_statements_reset() IS NOT NULL AS t\n | 3366652201587963567 | 1\n t | 0 | SELECT toplevel, calls, query, queryid, rows FROM\npg_stat_statements +| -6410939316132384446 | 0\n | | ORDER BY query COLLATE \"C\", toplevel\n | |\n t | 1 | explain(verbose) EXECUTE test_prepare_pgss1(1, 2)\n | 7618807962395633001 | 0\n t | 1 | explain(verbose) EXECUTE test_prepare_pgss1(1, 3)\n | -2281958002956676857 | 0\n\nIs it possible to normalize top level utilities explain query, make\nthese two have the same queryid?\nexplain(verbose) EXECUTE test_prepare_pgss1(1, 2);\nexplain(verbose) EXECUTE test_prepare_pgss1(1, 3);\n\n\nI guess this is a corner case.\n\n\n",
"msg_date": "Mon, 30 Sep 2024 23:14:01 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Set query_id for query contained in utility statement"
}
] |
[
{
"msg_contents": "I hate to be \"that guy\", but there are many places in sources where we use\nLOCKMODE lockmode; variable and exactly one where we use LOCKMODE\nlmode: it is vacuum_open_relation function.\nIs it worth a patch?",
"msg_date": "Fri, 2 Aug 2024 13:25:25 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small refactoring around vacuum_open_relation"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 1:55 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n>\n> I hate to be \"that guy\", but there are many places in sources where we use\n> LOCKMODE lockmode; variable and exactly one where we use LOCKMODE\n> lmode: it is vacuum_open_relation function.\n\nThere are more instances of LOCKMODE lmode; I spotted one in plancat.c as well.\n\nCase1:\ntoast_get_valid_index(Oid toastoid, LOCKMODE lock)\ntoast_close_indexes(Relation *toastidxs, int num_indexes, LOCKMODE lock)\nGetLockmodeName(LOCKMETHODID lockmethodid, LOCKMODE mode)\nLOCKMODE mode = 0;\n\nCase 2:\nqualified variable names like\nLOCKMODE heapLockmode;\nLOCKMODE memlockmode;\nLOCKMODE table_lockmode;\nLOCKMODE parentLockmode;\nLOCKMODE cmd_lockmode = AccessExclusiveLock; /* default for compiler */\nLOCK_PRINT(const char *where, const LOCK *lock, LOCKMODE type)\n\ncase3: some that have two LOCKMODE instances like\nDoLockModesConflict(LOCKMODE mode1, LOCKMODE mode2)\n\n> Is it worth a patch?\n\nWhen I see a variable with name lockmode, I know it's of type\nLOCKMODE. So changing the Case1 may be worth it. It's not a whole lot\nof code churn as well. May be patch backbranches.\n\nCase2 we should leave as is since the variable name has lockmode in it.\n\nCase3, worth changing to lockmode1 and lockmode2.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 2 Aug 2024 15:01:06 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Small refactoring around vacuum_open_relation"
},
{
"msg_contents": "Thanks for review!\n\nOn Fri, 2 Aug 2024 at 14:31, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Fri, Aug 2, 2024 at 1:55 PM Kirill Reshke <reshkekirill@gmail.com> wrote:\n> >\n> > I hate to be \"that guy\", but there are many places in sources where we use\n> > LOCKMODE lockmode; variable and exactly one where we use LOCKMODE\n> > lmode: it is vacuum_open_relation function.\n>\n> There are more instances of LOCKMODE lmode; I spotted one in plancat.c as well.\n\nNice catch!\n\n> Case1:\n> toast_get_valid_index(Oid toastoid, LOCKMODE lock)\n> toast_close_indexes(Relation *toastidxs, int num_indexes, LOCKMODE lock)\n> GetLockmodeName(LOCKMETHODID lockmethodid, LOCKMODE mode)\n> LOCKMODE mode = 0;\n> Case 2:\n> qualified variable names like\n> LOCKMODE heapLockmode;\n> LOCKMODE memlockmode;\n> LOCKMODE table_lockmode;\n> LOCKMODE parentLockmode;\n> LOCKMODE cmd_lockmode = AccessExclusiveLock; /* default for compiler */\n> LOCK_PRINT(const char *where, const LOCK *lock, LOCKMODE type)\n>\n> case3: some that have two LOCKMODE instances like\n> DoLockModesConflict(LOCKMODE mode1, LOCKMODE mode2)\n\nNice catch!\n\n> > Is it worth a patch?\n>\n> When I see a variable with name lockmode, I know it's of type\n> LOCKMODE. So changing the Case1 may be worth it. It's not a whole lot\n> of code churn as well. May be patch backbranches.\n>\n> Case2 we should leave as is since the variable name has lockmode in it.\n+1\n\n> Case3, worth changing to lockmode1 and lockmode2.\nAgree\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\nAttached v2 patch with your suggestions addressed.",
"msg_date": "Fri, 2 Aug 2024 15:30:45 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small refactoring around vacuum_open_relation"
}
] |
[
{
"msg_contents": "I propose to remove the obsolete RECHECK keyword completely. This used \nto be part of CREATE OPERATOR CLASS and ALTER OPERATOR FAMILY, but it \nhas done nothing (except issue a NOTICE) since PostgreSQL 8.4. Commit \n30e7c175b81 removed support for dumping from pre-9.2 servers, so this no \nlonger serves any need, it seems to me.\n\nThis now removes it completely, and you'd get a normal parse error if \nyou used it.",
"msg_date": "Fri, 2 Aug 2024 12:41:06 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Remove obsolete RECHECK keyword completely"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> I propose to remove the obsolete RECHECK keyword completely. This used \n> to be part of CREATE OPERATOR CLASS and ALTER OPERATOR FAMILY, but it \n> has done nothing (except issue a NOTICE) since PostgreSQL 8.4. Commit \n> 30e7c175b81 removed support for dumping from pre-9.2 servers, so this no \n> longer serves any need, it seems to me.\n\n+1 for the idea; didn't vet the patch closely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 09:55:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove obsolete RECHECK keyword completely"
},
{
"msg_contents": "Hi,\n\n> I propose to remove the obsolete RECHECK keyword completely. This used\n> to be part of CREATE OPERATOR CLASS and ALTER OPERATOR FAMILY, but it\n> has done nothing (except issue a NOTICE) since PostgreSQL 8.4. Commit\n> 30e7c175b81 removed support for dumping from pre-9.2 servers, so this no\n> longer serves any need, it seems to me.\n>\n> This now removes it completely, and you'd get a normal parse error if\n> you used it.\n\nI reviewed and tested the code. LGTM.\n\n--\nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 5 Aug 2024 16:44:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove obsolete RECHECK keyword completely"
},
{
"msg_contents": "On 05.08.24 15:44, Aleksander Alekseev wrote:\n>> I propose to remove the obsolete RECHECK keyword completely. This used\n>> to be part of CREATE OPERATOR CLASS and ALTER OPERATOR FAMILY, but it\n>> has done nothing (except issue a NOTICE) since PostgreSQL 8.4. Commit\n>> 30e7c175b81 removed support for dumping from pre-9.2 servers, so this no\n>> longer serves any need, it seems to me.\n>>\n>> This now removes it completely, and you'd get a normal parse error if\n>> you used it.\n> \n> I reviewed and tested the code. LGTM.\n\ncommitted, thanks\n\n\n\n",
"msg_date": "Fri, 9 Aug 2024 07:27:53 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: Remove obsolete RECHECK keyword completely"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a bug in the memory counter update in reorderbuffer. It was\nintroduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n\nIn ReorderBufferCleanupTXN() we zero the transaction size and then\nfree the transaction entry as follows:\n\n /* Update the memory counter */\n ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n\n /* deallocate */\n ReorderBufferReturnTXN(rb, txn);\n\nHowever, if the transaction entry has toast changes we free them in\nReorderBufferToastReset() called from ReorderBufferToastReset(), and\nend up subtracting the memory usage from zero. Which results in an\nassertion failure:\n\nTRAP: failed Assert(\"(rb->size >= sz) && (txn->size >= sz)\"), File:\n\"reorderbuffer.c\"\n\nThis can happen for example if an error occurs while replaying\ntransaction changes including toast changes.\n\nI've attached the patch that fixes the problem and includes a test\ncase (the test part might not be committed as it slows the test case).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 2 Aug 2024 12:50:54 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Sat, Aug 3, 2024 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I found a bug in the memory counter update in reorderbuffer. It was\n> introduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n>\n> In ReorderBufferCleanupTXN() we zero the transaction size and then\n> free the transaction entry as follows:\n>\n> /* Update the memory counter */\n> ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n>\n> /* deallocate */\n> ReorderBufferReturnTXN(rb, txn);\n>\n\nWhy do we need to zero the transaction size explicitly? Shouldn't it\nautomatically become zero after freeing all the changes?\n\n> However, if the transaction entry has toast changes we free them in\n> ReorderBufferToastReset() called from ReorderBufferToastReset(),\n>\n\nHere, you mean ReorderBufferToastReset() called from\nReorderBufferReturnTXN(), right?\n\nBTW, commit 5bec1d6bc5e also introduced\nReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\nis also worth considering while fixing the reported problem. It may\nnot have the same problem as you have reported but we can consider\nwhether setting txn size as zero explicitly is required or not.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Aug 2024 09:42:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Aug 3, 2024 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I found a bug in the memory counter update in reorderbuffer. It was\n> > introduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n> >\n> > In ReorderBufferCleanupTXN() we zero the transaction size and then\n> > free the transaction entry as follows:\n> >\n> > /* Update the memory counter */\n> > ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n> >\n> > /* deallocate */\n> > ReorderBufferReturnTXN(rb, txn);\n> >\n>\n> Why do we need to zero the transaction size explicitly? Shouldn't it\n> automatically become zero after freeing all the changes?\n\nIt will become zero after freeing all the changes. However, since\nupdating the max-heap when freeing each change could bring some\noverhead, we freed the changes without updating the memory counter,\nand then zerod it.\n\n>\n> > However, if the transaction entry has toast changes we free them in\n> > ReorderBufferToastReset() called from ReorderBufferToastReset(),\n> >\n>\n> Here, you mean ReorderBufferToastReset() called from\n> ReorderBufferReturnTXN(), right?\n\nRight. Thank you for pointing it out.\n\n> BTW, commit 5bec1d6bc5e also introduced\n> ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> is also worth considering while fixing the reported problem. It may\n> not have the same problem as you have reported but we can consider\n> whether setting txn size as zero explicitly is required or not.\n\nThe reason why it introduced ReorderBufferChangeMemoryUpdate() is the\nsame as I mentioned above. And yes, as you mentioned, it doesn't have\nthe same problem that I reported here.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Aug 2024 11:12:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 6, 2024 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Aug 3, 2024 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I found a bug in the memory counter update in reorderbuffer. It was\n> > > introduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n> > >\n> > > In ReorderBufferCleanupTXN() we zero the transaction size and then\n> > > free the transaction entry as follows:\n> > >\n> > > /* Update the memory counter */\n> > > ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n> > >\n> > > /* deallocate */\n> > > ReorderBufferReturnTXN(rb, txn);\n> > >\n> >\n> > Why do we need to zero the transaction size explicitly? Shouldn't it\n> > automatically become zero after freeing all the changes?\n>\n> It will become zero after freeing all the changes. However, since\n> updating the max-heap when freeing each change could bring some\n> overhead, we freed the changes without updating the memory counter,\n> and then zerod it.\n>\n\nI think this should be covered in comments as it is not apparent.\n\n>\n> > BTW, commit 5bec1d6bc5e also introduced\n> > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > is also worth considering while fixing the reported problem. It may\n> > not have the same problem as you have reported but we can consider\n> > whether setting txn size as zero explicitly is required or not.\n>\n> The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> same as I mentioned above. And yes, as you mentioned, it doesn't have\n> the same problem that I reported here.\n>\n\nI checked again and found that ReorderBufferResetTXN() first calls\nReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\nthat, it also tries to free spec_insert change which should have the\nsame problem. So, what saves this path from the same problem?\n\n*\n+ /*\n+ * Update the memory counter of the transaction, removing it from\n+ * the max-heap.\n+ */\n+ ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n+ Assert(txn->size == 0);\n+\n pfree(txn);\n\nJust before freeing the TXN, updating the size looks odd though I\nunderstand the idea is to remove TXN from max_heap. Anyway, let's\nfirst discuss whether the same problem exists in\nReorderBufferResetTXN() code path, and if so, how we want to fix it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Aug 2024 11:47:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Wed, Aug 7, 2024 at 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 6, 2024 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Aug 3, 2024 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I found a bug in the memory counter update in reorderbuffer. It was\n> > > > introduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n> > > >\n> > > > In ReorderBufferCleanupTXN() we zero the transaction size and then\n> > > > free the transaction entry as follows:\n> > > >\n> > > > /* Update the memory counter */\n> > > > ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n> > > >\n> > > > /* deallocate */\n> > > > ReorderBufferReturnTXN(rb, txn);\n> > > >\n> > >\n> > > Why do we need to zero the transaction size explicitly? Shouldn't it\n> > > automatically become zero after freeing all the changes?\n> >\n> > It will become zero after freeing all the changes. However, since\n> > updating the max-heap when freeing each change could bring some\n> > overhead, we freed the changes without updating the memory counter,\n> > and then zerod it.\n> >\n>\n> I think this should be covered in comments as it is not apparent.\n\nAgreed.\n\n>\n> >\n> > > BTW, commit 5bec1d6bc5e also introduced\n> > > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > > is also worth considering while fixing the reported problem. It may\n> > > not have the same problem as you have reported but we can consider\n> > > whether setting txn size as zero explicitly is required or not.\n> >\n> > The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> > same as I mentioned above. And yes, as you mentioned, it doesn't have\n> > the same problem that I reported here.\n> >\n>\n> I checked again and found that ReorderBufferResetTXN() first calls\n> ReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\n> that, it also tries to free spec_insert change which should have the\n> same problem. So, what saves this path from the same problem?\n\nGood catch. I've not created a test case for that but it seems to be\npossible to happen.\n\nI think that subtracting txn->size to reduce the memory counter to\nzero seems to be a wrong idea in the first place. If we want to save\nupdating memory counter and max-heap, we should use the exact memory\nsize that we freed. In other words, just change the memory usage\nupdate to a batch operation.\n\n>\n> *\n> + /*\n> + * Update the memory counter of the transaction, removing it from\n> + * the max-heap.\n> + */\n> + ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n> + Assert(txn->size == 0);\n> +\n> pfree(txn);\n>\n> Just before freeing the TXN, updating the size looks odd though I\n> understand the idea is to remove TXN from max_heap. Anyway, let's\n> first discuss whether the same problem exists in\n> ReorderBufferResetTXN() code path, and if so, how we want to fix it.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Aug 2024 01:12:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Thu, Aug 8, 2024 at 9:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2024 at 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > >\n> > > > BTW, commit 5bec1d6bc5e also introduced\n> > > > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > > > is also worth considering while fixing the reported problem. It may\n> > > > not have the same problem as you have reported but we can consider\n> > > > whether setting txn size as zero explicitly is required or not.\n> > >\n> > > The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> > > same as I mentioned above. And yes, as you mentioned, it doesn't have\n> > > the same problem that I reported here.\n> > >\n> >\n> > I checked again and found that ReorderBufferResetTXN() first calls\n> > ReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\n> > that, it also tries to free spec_insert change which should have the\n> > same problem. So, what saves this path from the same problem?\n>\n> Good catch. I've not created a test case for that but it seems to be\n> possible to happen.\n>\n> I think that subtracting txn->size to reduce the memory counter to\n> zero seems to be a wrong idea in the first place. If we want to save\n> updating memory counter and max-heap, we should use the exact memory\n> size that we freed. In other words, just change the memory usage\n> update to a batch operation.\n>\n\nSounds reasonable but how would you find the size for a batch update\noperation? Are you planning to track it while freeing the individual\nchanges?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 9 Aug 2024 12:00:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Fri, Aug 9, 2024 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 8, 2024 at 9:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Aug 7, 2024 at 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > >\n> > > > > BTW, commit 5bec1d6bc5e also introduced\n> > > > > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > > > > is also worth considering while fixing the reported problem. It may\n> > > > > not have the same problem as you have reported but we can consider\n> > > > > whether setting txn size as zero explicitly is required or not.\n> > > >\n> > > > The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> > > > same as I mentioned above. And yes, as you mentioned, it doesn't have\n> > > > the same problem that I reported here.\n> > > >\n> > >\n> > > I checked again and found that ReorderBufferResetTXN() first calls\n> > > ReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\n> > > that, it also tries to free spec_insert change which should have the\n> > > same problem. So, what saves this path from the same problem?\n> >\n> > Good catch. I've not created a test case for that but it seems to be\n> > possible to happen.\n> >\n> > I think that subtracting txn->size to reduce the memory counter to\n> > zero seems to be a wrong idea in the first place. If we want to save\n> > updating memory counter and max-heap, we should use the exact memory\n> > size that we freed. In other words, just change the memory usage\n> > update to a batch operation.\n> >\n>\n> Sounds reasonable but how would you find the size for a batch update\n> operation? Are you planning to track it while freeing the individual\n> changes?\n\nYes, one idea is to make ReorderBufferReturnChange() return the memory\nsize that it just freed. Then the caller who wants to update the\nmemory counter in a batch sums up the memory size.\n\nRegards,\n\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 10 Aug 2024 05:48:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 5:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Aug 9, 2024 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 8, 2024 at 9:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 7, 2024 at 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > >\n> > > > >\n> > > > > > BTW, commit 5bec1d6bc5e also introduced\n> > > > > > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > > > > > is also worth considering while fixing the reported problem. It may\n> > > > > > not have the same problem as you have reported but we can consider\n> > > > > > whether setting txn size as zero explicitly is required or not.\n> > > > >\n> > > > > The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> > > > > same as I mentioned above. And yes, as you mentioned, it doesn't have\n> > > > > the same problem that I reported here.\n> > > > >\n> > > >\n> > > > I checked again and found that ReorderBufferResetTXN() first calls\n> > > > ReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\n> > > > that, it also tries to free spec_insert change which should have the\n> > > > same problem. So, what saves this path from the same problem?\n> > >\n> > > Good catch. I've not created a test case for that but it seems to be\n> > > possible to happen.\n> > >\n> > > I think that subtracting txn->size to reduce the memory counter to\n> > > zero seems to be a wrong idea in the first place. If we want to save\n> > > updating memory counter and max-heap, we should use the exact memory\n> > > size that we freed. In other words, just change the memory usage\n> > > update to a batch operation.\n> > >\n> >\n> > Sounds reasonable but how would you find the size for a batch update\n> > operation? Are you planning to track it while freeing the individual\n> > changes?\n>\n> Yes, one idea is to make ReorderBufferReturnChange() return the memory\n> size that it just freed. Then the caller who wants to update the\n> memory counter in a batch sums up the memory size.\n\nI've drafted the patch. I didn't change ReorderBufferReturnChange()\nbut called ReorderBufferChangeSize() for individual change instead, as\nit's simpler.gi\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 12 Aug 2024 07:22:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Wed, 7 Aug 2024 at 11:48, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 6, 2024 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Aug 3, 2024 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I found a bug in the memory counter update in reorderbuffer. It was\n> > > > introduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n> > > >\n> > > > In ReorderBufferCleanupTXN() we zero the transaction size and then\n> > > > free the transaction entry as follows:\n> > > >\n> > > > /* Update the memory counter */\n> > > > ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n> > > >\n> > > > /* deallocate */\n> > > > ReorderBufferReturnTXN(rb, txn);\n> > > >\n> > >\n> > > Why do we need to zero the transaction size explicitly? Shouldn't it\n> > > automatically become zero after freeing all the changes?\n> >\n> > It will become zero after freeing all the changes. However, since\n> > updating the max-heap when freeing each change could bring some\n> > overhead, we freed the changes without updating the memory counter,\n> > and then zerod it.\n> >\n>\n> I think this should be covered in comments as it is not apparent.\n>\n> >\n> > > BTW, commit 5bec1d6bc5e also introduced\n> > > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > > is also worth considering while fixing the reported problem. It may\n> > > not have the same problem as you have reported but we can consider\n> > > whether setting txn size as zero explicitly is required or not.\n> >\n> > The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> > same as I mentioned above. And yes, as you mentioned, it doesn't have\n> > the same problem that I reported here.\n> >\n>\n> I checked again and found that ReorderBufferResetTXN() first calls\n> ReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\n> that, it also tries to free spec_insert change which should have the\n> same problem. So, what saves this path from the same problem?\n\nI tried testing this scenario and I was able to reproduce the crash in\nHEAD with this scenario. I have created a patch for the testcase.\nI also tested the same scenario with the latest patch shared by\nSawada-san in [1]. And confirm that this resolves the issue.\n\n[1]: https://www.postgresql.org/message-id/CAD21AoDHC4Sob%3DNEYTxgu5wd4rzCpSLY_hWapMUqf4WKrAxmyw%40mail.gmail.com\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Fri, 16 Aug 2024 12:52:06 +0530",
"msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "Hi,\n\nOn Fri, Aug 16, 2024 at 12:22 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> On Wed, 7 Aug 2024 at 11:48, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 7, 2024 at 7:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Aug 6, 2024 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Sat, Aug 3, 2024 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > I found a bug in the memory counter update in reorderbuffer. It was\n> > > > > introduced by commit 5bec1d6bc5e, so pg17 and master are affected.\n> > > > >\n> > > > > In ReorderBufferCleanupTXN() we zero the transaction size and then\n> > > > > free the transaction entry as follows:\n> > > > >\n> > > > > /* Update the memory counter */\n> > > > > ReorderBufferChangeMemoryUpdate(rb, NULL, txn, false, txn->size);\n> > > > >\n> > > > > /* deallocate */\n> > > > > ReorderBufferReturnTXN(rb, txn);\n> > > > >\n> > > >\n> > > > Why do we need to zero the transaction size explicitly? Shouldn't it\n> > > > automatically become zero after freeing all the changes?\n> > >\n> > > It will become zero after freeing all the changes. However, since\n> > > updating the max-heap when freeing each change could bring some\n> > > overhead, we freed the changes without updating the memory counter,\n> > > and then zerod it.\n> > >\n> >\n> > I think this should be covered in comments as it is not apparent.\n> >\n> > >\n> > > > BTW, commit 5bec1d6bc5e also introduced\n> > > > ReorderBufferChangeMemoryUpdate() in ReorderBufferTruncateTXN() which\n> > > > is also worth considering while fixing the reported problem. It may\n> > > > not have the same problem as you have reported but we can consider\n> > > > whether setting txn size as zero explicitly is required or not.\n> > >\n> > > The reason why it introduced ReorderBufferChangeMemoryUpdate() is the\n> > > same as I mentioned above. And yes, as you mentioned, it doesn't have\n> > > the same problem that I reported here.\n> > >\n> >\n> > I checked again and found that ReorderBufferResetTXN() first calls\n> > ReorderBufferTruncateTXN() and then ReorderBufferToastReset(). After\n> > that, it also tries to free spec_insert change which should have the\n> > same problem. So, what saves this path from the same problem?\n>\n> I tried testing this scenario and I was able to reproduce the crash in\n> HEAD with this scenario. I have created a patch for the testcase.\n> I also tested the same scenario with the latest patch shared by\n> Sawada-san in [1]. And confirm that this resolves the issue.\n\nThank you for testing the patch!\n\nI'm slightly hesitant to add a test under src/test/subscription since\nit's a bug in ReorderBuffer and not specific to logical replication.\nIf we reasonably cannot add a test under contrib/test_decoding, I'm\nokay with adding it under src/test/subscription.\n\nI've attached the updated patch with the commit message (but without a\ntest case for now).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Aug 2024 05:24:24 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 5:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Aug 16, 2024 at 12:22 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n> >\n>\n> Thank you for testing the patch!\n>\n> I'm slightly hesitant to add a test under src/test/subscription since\n> it's a bug in ReorderBuffer and not specific to logical replication.\n> If we reasonably cannot add a test under contrib/test_decoding, I'm\n> okay with adding it under src/test/subscription.\n>\n\nSounds like a reasonable approach.\n\n> I've attached the updated patch with the commit message (but without a\n> test case for now).\n>\n\nThe patch LGTM except for one minor comment.\n\n+ /* All changes must be returned */\n+ Assert(txn->size == 0);\n\nIsn't it better to say: \"All changes must be deallocated.\" in the above comment?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Aug 2024 09:59:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 20, 2024 at 5:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Aug 16, 2024 at 12:22 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n> > >\n> >\n> > Thank you for testing the patch!\n> >\n> > I'm slightly hesitant to add a test under src/test/subscription since\n> > it's a bug in ReorderBuffer and not specific to logical replication.\n> > If we reasonably cannot add a test under contrib/test_decoding, I'm\n> > okay with adding it under src/test/subscription.\n> >\n>\n> Sounds like a reasonable approach.\n\nI've managed to reproduce this issue without logical replication based\non the test shared by Shlok Kyal.\n\n>\n> > I've attached the updated patch with the commit message (but without a\n> > test case for now).\n> >\n>\n> The patch LGTM except for one minor comment.\n>\n> + /* All changes must be returned */\n> + Assert(txn->size == 0);\n>\n> Isn't it better to say: \"All changes must be deallocated.\" in the above comment?\n\nAgreed.\n\nI've updated the updated patch with regression tests.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 23 Aug 2024 03:13:22 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Fri, Aug 23, 2024 at 3:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 20, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I've updated the updated patch with regression tests.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Aug 2024 09:59:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Sun, Aug 25, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 23, 2024 at 3:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 20, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > I've updated the updated patch with regression tests.\n> >\n>\n> LGTM.\n\nThank you for reviewing the patch. Pushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 11:25:53 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 11:25:53AM -0700, Masahiko Sawada wrote:\n> Thank you for reviewing the patch. Pushed.\n\nnitpick: I think this one needs a pgindent run.\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 26 Aug 2024 16:27:15 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 2:27 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Aug 26, 2024 at 11:25:53AM -0700, Masahiko Sawada wrote:\n> > Thank you for reviewing the patch. Pushed.\n>\n> nitpick: I think this one needs a pgindent run.\n\nThank you for pointing it out. I forgot to check with pgindent. I've\npushed the change. Hope it\nfixes the problem.\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 16:20:25 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix memory counter update in reorderbuffer"
}
] |
[
{
"msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee219ee8c40d88e7a0ef52c3c1b76c90bbd0d164\n\nAs usual, please send corrections/comments by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2024 16:26:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Draft release notes for next week's releases are up"
},
{
"msg_contents": "On Fri, Aug 2, 2024 at 4:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> See\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ee219ee8c40d88e7a0ef52c3c1b76c90bbd0d164\n>\n> As usual, please send corrections/comments by Sunday.\n\nFor the \"Prevent infinite loop in VACUUM\" item, I noticed two things\n\n1) In REL_17_STABLE, the behavior is an ERROR -- not an infinite loop.\nPerhaps it is not worth mentioning the differing behavior because 17\nhasn't GA'd yet, but I just wanted to bring this up since I wasn't\nsure of the convention.\n\n2) The note says <command>VACUUM</command>. This issue is, of course,\nnot limited to explicitly issued vacuum commands (i.e.\nautovacuum-initiated vacuums can also encounter it). Again, I don't\nknow if this matters or what the convention is here.\n\n- Melanie\n\n\n",
"msg_date": "Sat, 3 Aug 2024 11:14:20 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Draft release notes for next week's releases are up"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> For the \"Prevent infinite loop in VACUUM\" item, I noticed two things\n\nThanks for looking!\n\n> 1) In REL_17_STABLE, the behavior is an ERROR -- not an infinite loop.\n> Perhaps it is not worth mentioning the differing behavior because 17\n> hasn't GA'd yet, but I just wanted to bring this up since I wasn't\n> sure of the convention.\n\nRight, at this point v17 is not a target of these notes, so I just\ndocumented what happens in <= 16. I did include the commit hashes\nfor v17/master in the comments, but that's mostly to reassure anyone\nwho looks that the issue was addressed in those branches.\n\n> 2) The note says <command>VACUUM</command>. This issue is, of course,\n> not limited to explicitly issued vacuum commands (i.e.\n> autovacuum-initiated vacuums can also encounter it). Again, I don't\n> know if this matters or what the convention is here.\n\nI think we usually just say VACUUM. If it affected autovacuum\ndifferently, that might be worth documenting; but IIUC it's the\nsame.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2024 11:27:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Draft release notes for next week's releases are up"
}
] |
[
{
"msg_contents": "I noticed (during [0]) to some uses of the function atol() seem \ninappropriate. Either they assume that sizeof(long)==8 and so might \ntruncate data if not, or they are gratuitous because the surrounding \ncode does not use the long type. This patch fixes these, by using \natoll() or atoi() instead. (There are still some atol() calls left \nafter this, which seemed ok to me.)\n\nIn the past, Windows didn't have atoll(), but the online documentation \nappears to indicate that this now works in VS 2015 and later, which is \nwhat we support at the moment. The Cirrus CI passes.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/5d216d1c-91f6-4cbe-95e2-b4cbd930520c@ewie.name",
"msg_date": "Sat, 3 Aug 2024 13:04:04 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Fix inappropriate uses of atol()"
},
{
"msg_contents": "On 03/08/2024 14:04, Peter Eisentraut wrote:\n> I noticed (during [0]) to some uses of the function atol() seem \n> inappropriate. Either they assume that sizeof(long)==8 and so might \n> truncate data if not, or they are gratuitous because the surrounding \n> code does not use the long type. This patch fixes these, by using \n> atoll() or atoi() instead. (There are still some atol() calls left \n> after this, which seemed ok to me.)\n> \n> In the past, Windows didn't have atoll(), but the online documentation \n> appears to indicate that this now works in VS 2015 and later, which is \n> what we support at the moment. The Cirrus CI passes.\n+1 except for this one:\n\n> diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer\n> index b2aa44f36dd..8ac1c5c9eda 100644\n> --- a/src/interfaces/ecpg/preproc/ecpg.trailer\n> +++ b/src/interfaces/ecpg/preproc/ecpg.trailer\n> @@ -217,7 +217,7 @@ char_variable: cvariable\n> \t\t\tenum ECPGttype type = p->type->type;\n> \n> \t\t\t/* If we have just one character this is not a string */\n> -\t\t\tif (atol(p->type->size) == 1)\n> +\t\t\tif (atoi(p->type->size) == 1)\n> \t\t\t\t\tmmerror(PARSE_ERROR, ET_ERROR, \"invalid data type\");\n> \t\t\telse\n> \t\t\t{\n> \n\nIn principle you can have an array larger than INT_MAX. However, this is \na pretty weak test anyway. I think this is what the error is meant for:\n\nEXEC SQL BEGIN DECLARE SECTION;\n\tchar connstr;\nEXEC SQL END DECLARE SECTION;\n EXEC SQL CONNECT TO :connstr;\n\nThis also produces the error, which seems fine:\n\nEXEC SQL BEGIN DECLARE SECTION;\n\tchar connstr[1];\nEXEC SQL END DECLARE SECTION;\n EXEC SQL CONNECT TO :connstr;\n\nThis also produces the error, which does not seem good (if you replace 1 \nwith 2 here, it works):\n\nEXEC SQL BEGIN DECLARE SECTION;\n\tchar connstr[1+100];\nEXEC SQL END DECLARE SECTION;\n EXEC SQL CONNECT TO :connstr;\n\nYou can work around that with:\n\n#define LEN (1 + 100)\nEXEC SQL BEGIN DECLARE SECTION;\n\tchar connstr[LEN];\nEXEC SQL END DECLARE SECTION;\n EXEC SQL CONNECT TO :connstr;\n\nThe grammar currently balks on arrays larger than INT_MAX, giving a \n\"syntax error\", which I don't think is correct, because at least my C \ncompiler accepts it in a non-ecpg context:\n\nEXEC SQL BEGIN DECLARE SECTION;\n\tchar connstr[2147483648];\nEXEC SQL END DECLARE SECTION;\n EXEC SQL CONNECT TO :connstr;\n\nOverall I think we should just leave this as it is. If we want to do \nsomething here, would be good to address those cases that are currently \nbogus, but it's probably not worth the effort.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sat, 3 Aug 2024 17:07:46 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix inappropriate uses of atol()"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 03/08/2024 14:04, Peter Eisentraut wrote:\n>> I noticed (during [0]) to some uses of the function atol() seem \n>> inappropriate.\n\n> +1 except for this one:\n\n>> /* If we have just one character this is not a string */\n>> -\t\t\tif (atol(p->type->size) == 1)\n>> +\t\t\tif (atoi(p->type->size) == 1)\n>> \t\t\t\tmmerror(PARSE_ERROR, ET_ERROR, \"invalid data type\");\n\nHow about\n\n-\t\t\tif (atol(p->type->size) == 1)\n+\t\t\tif (strcmp(p->type->size, \"1\") == 0)\n\n? I've not actually tested, but this should catch the cases the\nwarning is meant to catch while not complaining about any of the\nexamples you give. I'm not sure if leading/trailing spaces\nwould fool it (i.e., \"char foo[ 1 ];\"). But even if they do,\nthat doesn't seem disastrous.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2024 11:20:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix inappropriate uses of atol()"
},
{
"msg_contents": "On 03/08/2024 18:20, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 03/08/2024 14:04, Peter Eisentraut wrote:\n>>> I noticed (during [0]) to some uses of the function atol() seem\n>>> inappropriate.\n> \n>> +1 except for this one:\n> \n>>> /* If we have just one character this is not a string */\n>>> -\t\t\tif (atol(p->type->size) == 1)\n>>> +\t\t\tif (atoi(p->type->size) == 1)\n>>> \t\t\t\tmmerror(PARSE_ERROR, ET_ERROR, \"invalid data type\");\n> \n> How about\n> \n> -\t\t\tif (atol(p->type->size) == 1)\n> +\t\t\tif (strcmp(p->type->size, \"1\") == 0)\n> \n> ? I've not actually tested, but this should catch the cases the\n> warning is meant to catch while not complaining about any of the\n> examples you give.\n\nMakes sense.\n\n> I'm not sure if leading/trailing spaces\n> would fool it (i.e., \"char foo[ 1 ];\"). But even if they do,\n> that doesn't seem disastrous.\nRight. There are many ways to fool it in that direction, e.g.\n\n#define ONE 1\nchar foo[ONE];\n\nI'm actually not even sure if it's intentional to throw the error even \nwith \"char[1]\". It makes sense to give an error on \"char\", but who says \nthat \"char[1]\" isn't a valid string? Not a very useful string, because \nit can only hold the empty string, but a string nevertheless, and \nsometimes an empty string is exactly what you want.\n\nIf we can easily distinguish between \"char\" and any array of char here, \nmight be best to accept the all arrays regardless of the length.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sat, 3 Aug 2024 18:27:00 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix inappropriate uses of atol()"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> I'm actually not even sure if it's intentional to throw the error even \n> with \"char[1]\". It makes sense to give an error on \"char\", but who says \n> that \"char[1]\" isn't a valid string?\n\nI agree that that behavior looks more like an implementation artifact\nthan anything else.\n\n> If we can easily distinguish between \"char\" and any array of char here, \n> might be best to accept the all arrays regardless of the length.\n\nThe data structure is so poorly documented that I'm hesitant to try\nto do that. It might work to test for type == ECPGt_array, but then\nwhy is the immediately following code explicitly allowing for both\nthat case and not-array? I'm also fairly unsure how ECPGt_string\nfits in here. If this were an important point then it might be\nworth trying to reverse-engineer all this, but right now I have\nbetter things to do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2024 12:05:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix inappropriate uses of atol()"
},
{
"msg_contents": "On 03.08.24 16:07, Heikki Linnakangas wrote:\n> On 03/08/2024 14:04, Peter Eisentraut wrote:\n>> I noticed (during [0]) to some uses of the function atol() seem \n>> inappropriate. Either they assume that sizeof(long)==8 and so might \n>> truncate data if not, or they are gratuitous because the surrounding \n>> code does not use the long type. This patch fixes these, by using \n>> atoll() or atoi() instead. (There are still some atol() calls left \n>> after this, which seemed ok to me.)\n>>\n>> In the past, Windows didn't have atoll(), but the online documentation \n>> appears to indicate that this now works in VS 2015 and later, which is \n>> what we support at the moment. The Cirrus CI passes.\n> +1 except for this one:\n> \n>> diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer \n>> b/src/interfaces/ecpg/preproc/ecpg.trailer\n>> index b2aa44f36dd..8ac1c5c9eda 100644\n>> --- a/src/interfaces/ecpg/preproc/ecpg.trailer\n>> +++ b/src/interfaces/ecpg/preproc/ecpg.trailer\n>> @@ -217,7 +217,7 @@ char_variable: cvariable\n>> enum ECPGttype type = p->type->type;\n>>\n>> /* If we have just one character this is not a string */\n>> - if (atol(p->type->size) == 1)\n>> + if (atoi(p->type->size) == 1)\n>> mmerror(PARSE_ERROR, ET_ERROR, \"invalid data type\");\n>> else\n>> {\n\nI committed it without this hunk.\n\n\n\n",
"msg_date": "Sat, 10 Aug 2024 08:43:59 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: Fix inappropriate uses of atol()"
}
] |
[
{
"msg_contents": "I noticed in Commitfest App Add Review form's checkbox layout seems\nbroken as you can see in the attached snapshot. I see the same layout\nin Google Chrome and FireFox.\nFor Frequent Reviewers I guess it's fine as they can just guess how\nmany and which check boxes to click without a clear layout but for new\nusers it seems confusing.\n\n- Is this the right list to discuss or is there some other mailing\nlist covering CommitFest App development?\n- Is this a known issue and should it be improved ?\n\nRegards\nUmar Hayat",
"msg_date": "Sun, 4 Aug 2024 00:40:30 +0900",
"msg_from": "Umar Hayat <postgresql.wizard@gmail.com>",
"msg_from_op": true,
"msg_subject": "Broken layout: CommitFest Add Review Form"
},
{
"msg_contents": "On Sun, 4 Aug 2024 at 03:35, Umar Hayat <postgresql.wizard@gmail.com> wrote:\n> I noticed in Commitfest App Add Review form's checkbox layout seems\n> broken as you can see in the attached snapshot. I see the same layout\n> in Google Chrome and FireFox.\n> For Frequent Reviewers I guess it's fine as they can just guess how\n> many and which check boxes to click without a clear layout but for new\n> users it seems confusing.\n>\n> - Is this the right list to discuss or is there some other mailing\n> list covering CommitFest App development?\n> - Is this a known issue and should it be improved ?\n\nThe layout is also broken for me. Moving to www list.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Aug 2024 09:57:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken layout: CommitFest Add Review Form"
},
{
"msg_contents": "I also see a broken layout. Playing around in browser dev tools,\nremoving the \"form-control\" class on the checkbox input helps, but\ndoesn't entirely resolve the layout issues.\n\nThanks,\nMaciek\n\nOn Mon, Aug 12, 2024 at 2:57 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 4 Aug 2024 at 03:35, Umar Hayat <postgresql.wizard@gmail.com> wrote:\n> > I noticed in Commitfest App Add Review form's checkbox layout seems\n> > broken as you can see in the attached snapshot. I see the same layout\n> > in Google Chrome and FireFox.\n> > For Frequent Reviewers I guess it's fine as they can just guess how\n> > many and which check boxes to click without a clear layout but for new\n> > users it seems confusing.\n> >\n> > - Is this the right list to discuss or is there some other mailing\n> > list covering CommitFest App development?\n> > - Is this a known issue and should it be improved ?\n>\n> The layout is also broken for me. Moving to www list.\n>\n> David\n>\n>\n\n\n",
"msg_date": "Tue, 3 Sep 2024 08:58:51 -0700",
"msg_from": "Maciek Sakrejda <maciek@pganalyze.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken layout: CommitFest Add Review Form"
},
{
"msg_contents": "I recently sent an incorrectly-labeled review because of this issue\n[1]. It looks like I may not be the only one [2].\n\nHow can I help get this fixed?\n\n[1]: https://www.postgresql.org/message-id/CAOtHd0BmF5jqaaZFoV-EPVZ8Wyaye_hHbXak8Ti3z3wM6RmQ5Q@mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAGMVOduh9CyTiRtQzAneC_KbAPXiBwz-P_XBxKRHH9Rte5jXSQ@mail.gmail.com\n\nOn Tue, Sep 3, 2024 at 8:58 AM Maciek Sakrejda <maciek@pganalyze.com> wrote:\n>\n> I also see a broken layout. Playing around in browser dev tools,\n> removing the \"form-control\" class on the checkbox input helps, but\n> doesn't entirely resolve the layout issues.\n>\n> Thanks,\n> Maciek\n>\n> On Mon, Aug 12, 2024 at 2:57 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Sun, 4 Aug 2024 at 03:35, Umar Hayat <postgresql.wizard@gmail.com> wrote:\n> > > I noticed in Commitfest App Add Review form's checkbox layout seems\n> > > broken as you can see in the attached snapshot. I see the same layout\n> > > in Google Chrome and FireFox.\n> > > For Frequent Reviewers I guess it's fine as they can just guess how\n> > > many and which check boxes to click without a clear layout but for new\n> > > users it seems confusing.\n> > >\n> > > - Is this the right list to discuss or is there some other mailing\n> > > list covering CommitFest App development?\n> > > - Is this a known issue and should it be improved ?\n> >\n> > The layout is also broken for me. Moving to www list.\n> >\n> > David\n> >\n> >\n\n\n",
"msg_date": "Mon, 9 Sep 2024 12:34:38 -0700",
"msg_from": "Maciek Sakrejda <maciek@pganalyze.com>",
"msg_from_op": false,
"msg_subject": "Re: Broken layout: CommitFest Add Review Form"
}
] |
[
{
"msg_contents": "In our documentations, \"subquery\", \"sub-query\", \"sub-select\" and\n\"sub-SELECT\" are used. In English, are they interchangeable? I am\nasking because we are looking for Japanese translations for the\nwords. If they have the identical meaning in English, we can choose\nsingle Japanese word for them. If not, we would like to use different\nJapanese words to reflect the difference.\n\nI noticed that \"sub-SELECT\" only appears in syntax rules in the\nreference manuals. Maybe \"sub-SELECT\" should be tread differently from\nothers?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 04 Aug 2024 10:48:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@postgresql.org>",
"msg_from_op": true,
"msg_subject": "subquery and sub-SELECT"
},
{
"msg_contents": "Tatsuo Ishii <ishii@postgresql.org> writes:\n> In our documentations, \"subquery\", \"sub-query\", \"sub-select\" and\n> \"sub-SELECT\" are used. In English, are they interchangeable?\n\nPretty nearly. I think \"sub-query\" can include DML such as\nINSERT RETURNING, whereas \"sub-select\" should only be a SELECT.\n(I'm not claiming that we've been perfectly accurate about\nthat distinction.) The dashes definitely don't matter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2024 22:22:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: subquery and sub-SELECT"
},
{
"msg_contents": "> Tatsuo Ishii <ishii@postgresql.org> writes:\n>> In our documentations, \"subquery\", \"sub-query\", \"sub-select\" and\n>> \"sub-SELECT\" are used. In English, are they interchangeable?\n> \n> Pretty nearly. I think \"sub-query\" can include DML such as\n> INSERT RETURNING, whereas \"sub-select\" should only be a SELECT.\n> (I'm not claiming that we've been perfectly accurate about\n> that distinction.) The dashes definitely don't matter.\n\nThat makes sense. Thanks for the explanation!\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 04 Aug 2024 11:30:09 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: subquery and sub-SELECT"
}
] |
[
{
"msg_contents": "Fixed a minor typo in src/include/storage/bufpage.h.\nPlease let me know if any further action is required.\n\n---\nBest regards,\nSenglee Choi",
"msg_date": "Mon, 5 Aug 2024 10:23:42 +0900",
"msg_from": "Senglee Choi <tmdfl1027@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fixed small typo in bufpage.h"
},
{
"msg_contents": "Senglee Choi <tmdfl1027@gmail.com> 于2024年8月5日周一 09:24写道:\n\n> Fixed a minor typo in src/include/storage/bufpage.h.\n> Please let me know if any further action is required.\n>\n\nGood catch. +1\n\n\n-- \nTender Wang\n\nSenglee Choi <tmdfl1027@gmail.com> 于2024年8月5日周一 09:24写道:Fixed a minor typo in src/include/storage/bufpage.h.Please let me know if any further action is required.Good catch. +1 -- Tender Wang",
"msg_date": "Mon, 5 Aug 2024 10:29:17 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixed small typo in bufpage.h"
},
{
"msg_contents": "On Mon, Aug 5, 2024 at 6:54 AM Senglee Choi <tmdfl1027@gmail.com> wrote:\n>\n> Fixed a minor typo in src/include/storage/bufpage.h.\n>\n\nLGTM. BTW, we do use two spaces before the new sentence. See comments\nin the nearby code.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Aug 2024 14:19:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixed small typo in bufpage.h"
}
] |
[
{
"msg_contents": "The now preferred way to call realpath() is by passing NULL as the\nsecond argument and get a malloc'ed result. We still supported the\nold way of providing our own buffer as a second argument, for some\nplatforms that didn't support the new way yet. Those were only\nSolaris less than version 11 and some older AIX versions (7.1 and\nnewer appear to support the new variant). We don't support those\nplatforms versions anymore, so we can remove this extra code.",
"msg_date": "Mon, 5 Aug 2024 08:12:30 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Remove support for old realpath() API"
},
{
"msg_contents": "On 05/08/2024 09:12, Peter Eisentraut wrote:\n> The now preferred way to call realpath() is by passing NULL as the\n> second argument and get a malloc'ed result. We still supported the\n> old way of providing our own buffer as a second argument, for some\n> platforms that didn't support the new way yet. Those were only\n> Solaris less than version 11 and some older AIX versions (7.1 and\n> newer appear to support the new variant). We don't support those\n> platforms versions anymore, so we can remove this extra code.\n\n+1\n\nWe don't seem to have any mentions of POSIX or SuS in docs, in the \ninstallation sections. There are a few mentions of POSIX-1.2008 and \nPOSIX-1.2001 it in the commit log, though, where we require features \nspecified by those. Can we rely on everything from POSIX-1-2008 \nnowadays, or is it more on a case-by-case basis, depending on which \nparts of POSIX are supported by various platforms?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Aug 2024 10:41:25 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> We don't seem to have any mentions of POSIX or SuS in docs, in the \n> installation sections. There are a few mentions of POSIX-1.2008 and \n> POSIX-1.2001 it in the commit log, though, where we require features \n> specified by those. Can we rely on everything from POSIX-1-2008 \n> nowadays, or is it more on a case-by-case basis, depending on which \n> parts of POSIX are supported by various platforms?\n\nI'd say it's still case-by-case. Perhaps everything in POSIX-1.2008\nis supported now on every platform we care about, but perhaps not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2024 10:08:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "On Mon, Aug 05, 2024 at 10:08:04AM -0400, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> We don't seem to have any mentions of POSIX or SuS in docs, in the \n>> installation sections. There are a few mentions of POSIX-1.2008 and \n>> POSIX-1.2001 it in the commit log, though, where we require features \n>> specified by those. Can we rely on everything from POSIX-1-2008 \n>> nowadays, or is it more on a case-by-case basis, depending on which \n>> parts of POSIX are supported by various platforms?\n> \n> I'd say it's still case-by-case. Perhaps everything in POSIX-1.2008\n> is supported now on every platform we care about, but perhaps not.\n\nJust pointing at the message where this has been discussed previously,\nfor reference:\nhttps://www.postgresql.org/message-id/1457809.1662232534@sss.pgh.pa.us\n\nLeaving Solaris aside because there is nothing older than 11 in the\nbuildfarm currently, I am dubious that it is a good idea to remove\nthis code knowing that we have a thread from a few months ago about\nthe fact that we have folks complaining about AIX support and that we\nshould bring it back:\nhttps://www.postgresql.org/message-id/CY5PR11MB63928CC05906F27FB10D74D0FD322@CY5PR11MB6392.namprd11.prod.outlook.com\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 14:43:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "> On 6 Aug 2024, at 07:43, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I am dubious that it is a good idea to remove\n> this code knowing that we have a thread from a few months ago about\n> the fact that we have folks complaining about AIX support and that we\n> should bring it back:\n\nAccording to upthread it is supported since AIX 7.1 which shipped in 2010 so\neven if support materializes for AIX it still wouldn't be needed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 6 Aug 2024 09:23:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "On 05.08.24 09:41, Heikki Linnakangas wrote:\n> On 05/08/2024 09:12, Peter Eisentraut wrote:\n>> The now preferred way to call realpath() is by passing NULL as the\n>> second argument and get a malloc'ed result. We still supported the\n>> old way of providing our own buffer as a second argument, for some\n>> platforms that didn't support the new way yet. Those were only\n>> Solaris less than version 11 and some older AIX versions (7.1 and\n>> newer appear to support the new variant). We don't support those\n>> platforms versions anymore, so we can remove this extra code.\n> \n> +1\n\ncommitted\n\n\n\n",
"msg_date": "Mon, 12 Aug 2024 08:17:45 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "On Mon, Aug 12, 2024 at 6:18 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 05.08.24 09:41, Heikki Linnakangas wrote:\n> > On 05/08/2024 09:12, Peter Eisentraut wrote:\n> >> The now preferred way to call realpath() is by passing NULL as the\n> >> second argument and get a malloc'ed result. We still supported the\n> >> old way of providing our own buffer as a second argument, for some\n> >> platforms that didn't support the new way yet. Those were only\n> >> Solaris less than version 11 and some older AIX versions (7.1 and\n> >> newer appear to support the new variant). We don't support those\n> >> platforms versions anymore, so we can remove this extra code.\n\nI checked this in the AIX 7.3 manual and the POSIX 2008 way does not\nappear to be mentioned there:\n\nhttps://www.ibm.com/docs/en/aix/7.3?topic=r-realpath-subroutine\n\nThat's a bit confusing, or maybe there are just too many versioning\nsystems to keep track of and I've made a mistake, because it looks\nlike AIX 7.2.5+ has actual certification for Unix V7 AKA SUSv4 AKA\nPOSIX 2008... Or maybe the documentation is wrong and it does\nactually work. I guess the IBM crew will be forced to look into this\nas they continue to work on their PostgreSQL/AIX patch, if it doesn't\nwork...\n\n\n",
"msg_date": "Mon, 12 Aug 2024 18:47:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "On 12.08.24 08:47, Thomas Munro wrote:\n> On Mon, Aug 12, 2024 at 6:18 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> On 05.08.24 09:41, Heikki Linnakangas wrote:\n>>> On 05/08/2024 09:12, Peter Eisentraut wrote:\n>>>> The now preferred way to call realpath() is by passing NULL as the\n>>>> second argument and get a malloc'ed result. We still supported the\n>>>> old way of providing our own buffer as a second argument, for some\n>>>> platforms that didn't support the new way yet. Those were only\n>>>> Solaris less than version 11 and some older AIX versions (7.1 and\n>>>> newer appear to support the new variant). We don't support those\n>>>> platforms versions anymore, so we can remove this extra code.\n> \n> I checked this in the AIX 7.3 manual and the POSIX 2008 way does not\n> appear to be mentioned there:\n> \n> https://www.ibm.com/docs/en/aix/7.3?topic=r-realpath-subroutine\n> \n> That's a bit confusing, or maybe there are just too many versioning\n> systems to keep track of and I've made a mistake, because it looks\n> like AIX 7.2.5+ has actual certification for Unix V7 AKA SUSv4 AKA\n> POSIX 2008... Or maybe the documentation is wrong and it does\n> actually work. I guess the IBM crew will be forced to look into this\n> as they continue to work on their PostgreSQL/AIX patch, if it doesn't\n> work...\n\nTom had tested this on and found that it does actually work on AIX 7.1 \nand 7.3 but the documentation is wrong.\n\n\n\n",
"msg_date": "Mon, 12 Aug 2024 09:04:23 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 12.08.24 08:47, Thomas Munro wrote:\n>> I checked this in the AIX 7.3 manual and the POSIX 2008 way does not\n>> appear to be mentioned there:\n>> https://www.ibm.com/docs/en/aix/7.3?topic=r-realpath-subroutine\n\n> Tom had tested this on and found that it does actually work on AIX 7.1 \n> and 7.3 but the documentation is wrong.\n\nI too have a distinct recollection of having tested this (using the\ngcc compile farm machines), but I cannot find anything saying so in\nthe mailing list archives. I can go check it again, I guess.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Aug 2024 10:05:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter@eisentraut.org> writes:\n>> Tom had tested this on and found that it does actually work on AIX 7.1 \n>> and 7.3 but the documentation is wrong.\n\n> I too have a distinct recollection of having tested this (using the\n> gcc compile farm machines), but I cannot find anything saying so in\n> the mailing list archives. I can go check it again, I guess.\n\nI can confirm that the attached program works on cfarm111 (AIX 7.1)\nand cfarm119 (AIX 7.3), though \"man realpath\" denies it on both\nsystems.\n\nI also found leftover test files demonstrating that I checked this\nsame point on Apr 26 2024, so I'm not sure why that didn't turn up\nin a mail list search.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 12 Aug 2024 10:35:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
},
{
"msg_contents": "On Tue, Aug 13, 2024 at 2:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I can confirm that the attached program works on cfarm111 (AIX 7.1)\n> and cfarm119 (AIX 7.3), though \"man realpath\" denies it on both\n> systems.\n\nAnother example of this phenomenon: they have nl_langinfo_l(), a POSIX\n2008 feature we want in a nearby thread, but the online documentation\nand man pages also deny that.\n\n\n",
"msg_date": "Tue, 13 Aug 2024 08:15:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for old realpath() API"
}
] |
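For reference, the POSIX.1-2008 calling convention at issue in the thread above lets the caller pass NULL as the second argument, in which case realpath() returns a malloc'd buffer. The attachment Tom ran on the AIX machines is not preserved here; a minimal stand-alone check along the same lines (a hypothetical sketch, not the original test program) could look like this:

#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	const char *path = (argc > 1) ? argv[1] : "/usr/../etc";

	/* NULL second argument: realpath() allocates the result itself */
	char	   *resolved = realpath(path, NULL);

	if (resolved == NULL)
	{
		perror("realpath");
		return 1;
	}
	printf("%s -> %s\n", path, resolved);
	free(resolved);
	return 0;
}

On a platform that only implements the pre-2008 API, a NULL second argument is not accepted and the call fails, which is what the fallback code being removed used to work around.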
[
{
"msg_contents": "Hi,\n\n(This discussion started on the security@ list, but was deemed\nsuitable for the -hackers@ list.)\n\nPostgreSQL supports RADIUS authentication[1]. It's probably not\nwidely used, and generally any security technology based on MD5 alone\nshould by now be considered weak and obsolete, but there are a few\nreports of users. Recently, a paper was published[2] showing\nsuccessful attacks on a range of RADIUS clients, though no mention of\nPostgreSQL. It's not just an MD5 collision search, it's a more\nspecific cryptographic weakness in the protocol that allows messages\nto be forged in quite practical amounts of CPU time. We might be\nquite lucky though: our hard-coded RADIUS_TIMEOUT = 3s doesn't seem to\nbe enough time, based on my non-crypto-expert reading of the paper at\nleast.\n\nEven if we assume that is true, FreeRADIUS 3.2.5 has recently[3]\nstarted spewing scary warnings when PostgreSQL sends requests to it by\ndefault, and advising administrators to adjust the relevant setting\nfurther to just drop unsigned messages. I assume the commercial\nRADIUS implementations will do something like that too, based on\ncomments about the intended direction and expectations of clients in\nan in-development RFC[4].\n\nThe attached patch-set adds a basic TAP test for RADIUS\nauthentication, and then adds a Message-Authenticator attribute to all\noutgoing requests (an HMAC-MD5 signature for the whole message that\nwas already defined by the RFCs but marked optional and often\nomitted), and also *optionally* requires a Message-Authenticator to\nappear in responses from the RADIUS server, and verifies it if\npresent. With the change, FreeRADIUS 3.2.5 is happy to talk to\nPostgreSQL again.\n\nThe response requirement can be enabled by radiusrequirema=1 in\npg_hba.conf. For example, Debian stable is currently shipping\nFreeRADIUS 3.2.1 which doesn't yet send the MA in its responses, but\nFreeBSD and Debian \"testing\" have started shipping FreeRADIUS 3.2.5\nwhich is how I noticed all this. So it doesn't seem quite right to\nrequire it by default, yet?\n\nIt's quite easy to test this locally, if you have FreeRADIUS installed\non any Unixoid system. See the TAP test for some minimal\nconfiguration files required to try it out.\n\nI had originally developed the TAP test as part of a larger project[5]\nreally about something else, which is why it has Michael listed as a\nreviewer already, and this version incorporates some new improvements\nhe recommended (thanks!). I've created this new thread and new\nminimal test just to deal with the BlastRADIUS mitigation topic.\n\nWe might also consider just dropping RADIUS support in 18, if we don't\nget patches to bring it up to date with modern RADIUS recommendations\nbeyond the mitigation (deprecating UDP, adding TLS, probably more\nthings). Such patches would ideally be written by someone with a more\ndirect interest in the protocol (ie I am not volunteering). But even\nif we decide to drop it instead. I think we'd still want the change\nI'm proposing here for released branches.\n\nSince PostgreSQL v12 and v13 don't have the modern \"common/hmac.h\"\nAPI, I came up with a cheap kludge: locally #define those interfaces\nto point directly to the OpenSSL HMAC API, or just give up and drop\nMessage-Authenticator support if you didn't build with OpenSSL support\n(in practice everyone does). 
Better ideas?\n\n[1] https://www.postgresql.org/docs/current/auth-radius.html\n[2] https://www.blastradius.fail/\n[3] https://github.com/FreeRADIUS/freeradius-server/commit/6616be90346beb6050446bd00c8ed5bca1b8ef29\n[4] https://datatracker.ietf.org/doc/draft-ietf-radext-deprecating-radius/\n[5] https://www.postgresql.org/message-id/flat/CA%2BhUKGKxNoVjkMCksnj6z3BwiS3y2v6LN6z7_CisLK%2Brv%2B0V4g%40mail.gmail.com",
"msg_date": "Tue, 6 Aug 2024 00:43:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "BlastRADIUS mitigation"
},
{
"msg_contents": "On 05/08/2024 15:43, Thomas Munro wrote:\n> The response requirement can be enabled by radiusrequirema=1 in\n> pg_hba.conf. For example, Debian stable is currently shipping\n> FreeRADIUS 3.2.1 which doesn't yet send the MA in its responses, but\n> FreeBSD and Debian \"testing\" have started shipping FreeRADIUS 3.2.5\n> which is how I noticed all this. So it doesn't seem quite right to\n> require it by default, yet?\n\nAgreed.\n\n> Since PostgreSQL v12 and v13 don't have the modern \"common/hmac.h\"\n> API, I came up with a cheap kludge: locally #define those interfaces\n> to point directly to the OpenSSL HMAC API, or just give up and drop\n> Message-Authenticator support if you didn't build with OpenSSL support\n> (in practice everyone does). Better ideas?\n\nSeems reasonable. It probably wouldn't be hard to backport common/hmac.h \neither, perhaps in a limited fashion with just md5 support.\n\nReview on v1-0001-Add-simple-test-for-RADIUS-authentication.patch:\n\n> +if ($ENV{PG_TEST_EXTRA} !~ /\\bradius\\b/)\n> +{\n> +\tplan skip_all => 'radius not enabled in PG_TEST_EXTRA';\n> +}\n> +elsif ($^O eq 'freebsd')\n> +{\n> +\t$radiusd = '/usr/local/sbin/radiusd';\n> +}\n> +elsif ($^O eq 'linux' && -f '/usr/sbin/freeradius')\n> +{\n> +\t$radiusd = '/usr/sbin/freeradius';\n> +}\n> +elsif ($^O eq 'linux')\n> +{\n> +\t$radiusd = '/usr/sbin/radiusd';\n> +}\n> +elsif ($^O eq 'darwin' && -d '/opt/local')\n> +{\n> +\t# typical path for MacPorts\n> +\t$radiusd = '/opt/local/sbin/radiusd';\n> +\t$radiusd_prefix = '/opt/local';\n> +}\n> +elsif ($^O eq 'darwin' && -d '/opt/homebrew')\n> +{\n> +\t# typical path for Homebrew on ARM\n> +\t$radiusd = '/opt/homebrew/bin/radiusd';\n> +\t$radiusd_prefix = '/opt/homebrew';\n> +}\n> +elsif ($^O eq 'darwin' && -d '/usr/local')\n> +{\n> +\t# typical path for Homebrew on Intel\n> +\t$radiusd = '/usr/local/bin/radiusd';\n> +\t$radiusd_prefix = '/usr/local';\n> +}\n> +else\n> +{\n> +\tplan skip_all =>\n> +\t \"radius tests not supported on $^O or dependencies not installed\";\n> +}\n\nSeems that on linux or freebsd, you'd plow ahead even if the binary is \nnot found, and fail later, while on macOS you'd skip the tests. I think \nwe should always error out if the dependencies are not found. If you \nmake an effort to add PG_TEST_EXTRA=radius, presumably you really do \nwant to run the tests, and if dependencies are missing you'd like to \nknow about it.\n\n> + secret = \"shared-secret\"\n\nLet's use a random value for this, and for the Cleartext-Password. This \nonly runs locally, and only if you explicitly add it to PG_TEST_EXTRA, \nbut it still seems nice to protect from other users on the system when \nwe can do so easily.\n\n> +security {\n> + require_message_authenticator = \"yes\"\n> +}\n\n> +# Note that require_message_authenticator defaulted to \"no\" before 3.2.5, and\n> +# then switched to \"auto\" (a new mode that fills the logs up with warning\n> +# messages about clients that don't send MA), and presumably a later version\n> +# will default to \"yes\".\n\nThat's not quite accurate: the option didn't exist before version 3.2.5. \nWhat happens if you set it on an older server version? /me tests: seems \nthat FreeRadius 3.2.1 silently accepts the setting, or any other setting \nit doesn't recognize, and will do nothing with it. A little surprising, \nbut ok. I didn't find any mention in the docs on that.\n\n(Also, that will make the test fail, until the \nv1-0003-Mitigation-for-BlastRADIUS.patch is also applied. 
You could \nleave that out of the test in this first patch, and add it \nv1-0003-Mitigation-for-BlastRADIUS.patch)\n\nReview on v1-0003-Mitigation-for-BlastRADIUS.patch:\n\n> + <varlistentry>\n> + <term><literal>radiusrequirema</literal></term>\n> + <listitem>\n> + <para>\n> + Whether to require a valid <literal>Message-Authenticator</literal>\n> + attribute in messages received from RADIUS servers, and ignore messages\n> + that don't contain it. The default value\n> + is <literal>0</literal>, but it can be set to <literal>1</literal>\n> + to enable that requirement.\n> + This setting does not affect requests sent by\n> + <productname>PostgreSQL</productname> to the RADIUS server, which\n> + always include a <literal>Message-Authenticator</literal> attribute\n> + (but didn't in earlier releases).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI think this should include some advise on why and when you should set \nit. Something like:\n\nEnabling this mitigates the RADIUS protocol vulnerability described at \nblastradius.fail. It is recommended to always set this to 1, unless you \nare running an older RADIUS server version that does include the \nmitigation to include Message-Authenticator in all replies. The default \nwill be changed to 1 in a future PostgreSQL version.\n\n> \tattr = (radius_attribute *) (packet + RADIUS_HEADER_LENGTH);\n> \n> \twhile ((uint8 *) attr < end)\n> \t{\n> \t\t/* Would this attribute overflow the buffer? */\n> \t\tif (&attr->length >= end || (uint8 *) attr + attr->length > end)\n> \t\t\treturn false;\n\nIs this kind of pointer arithmetic safe? In theory, if a packet is \nmalformed so that attr->length points beyond the end of the packet, \n\"(uint8 *) attr + attr-length\" might overflow. That would require the \npacket buffer to be allocated just before the end of the address space, \nso probably cannot happen in practice. I don't remember if there are \nsome guarantees on that in C99 or other standards. Still, perhaps it \nwould be better to write this differently, e.g. using a separate \"size_t \nposition\" variable to track the current position in the buffer.\n\n(This also relies on the fact that \"struct radius_attribute\" doesn't \nrequire alignment, which is valid, and radius_add_attribute() made that \nassumption already. Maybe worth a comment though while we're at it; it \ncertainly raised my eyebrow while reading this)\n\nWhat if the message contains multiple attribute of the same type? If \nthere's a duplicate Message-Authenticator, we should surely reject the \npacket. I don't know if duplicate attributes are legal in general.\n\n> +\t/*\n> +\t * Add Message-Authenticator attribute first, which for now holds zeroes.\n> +\t * We remember where it is in the message so that we can fill it in later.\n> +\t */\n\nLet's mention Blast-RADIUS here as the reason to put this first. Reading \nthe paper though, I think it's only important in the server->client \nmessages, but I'm not sure, and shouldn't hurt anyway.\n\n> +\t\telse if (message_authenticator_location == NULL)\n> +\t\t{\n> +\t\t\tereport(LOG,\n> +\t\t\t\t\t(errmsg(\"RADIUS response from %s has no Message-Authenticator\",\n> +\t\t\t\t\t\t\tserver)));\n> +\n> +\t\t\t/* We'll ignore this message, unless pg_hba.conf told us not to. 
*/\n> +\t\t\tif (requirema)\n> +\t\t\t\tcontinue;\n> +\t\t}\n\nThis is going to be very noisy if you are running an older server.\n\n> +\tuint8\t\tmessage_authenticator_key[RADIUS_VECTOR_LENGTH];\n> +\tuint8\t\tmessage_authenticator[RADIUS_VECTOR_LENGTH];\n\nPerhaps use MD5_DIGEST_LENGTH for these. The Message-Authenticator is an \nHMAC-MD5, which indeed has the same length as the MD5 hash used on the \npassword, so it's just pro forma, but it seems a little coincidental. \nThere's no fundamental reason they would have to be the same length, if \nthe RFC author's had chosen to use a different hash algorithm for \nMessage-Authenticator, for example.\n\n> +\t/*\n> +\t * We use the first 16 bytes of the shared secret, zero-padded if too\n> +\t * short, as an HMAC-MD5 key for creating and validating\n> +\t * Message-Authenticator attributes.\n> +\t */\n> +\tmemset(message_authenticator_key, 0, lengthof(message_authenticator_key));\n> +\tmemcpy(message_authenticator_key,\n> +\t\t secret,\n> +\t\t Min(lengthof(message_authenticator_key), strlen(secret)));\n\nIf the secret is longer than 16 bytes, this truncates it. Is that \ncorrect? According to https://en.wikipedia.org/wiki/HMAC, you're \nsupposed derive the suitably-sized key by calling the hash function on \nthe longer key in that case.\n\n> +\telse if (strcmp(name, \"radiusrequirema\") == 0)\n> +\t{\n> +\t\tREQUIRE_AUTH_OPTION(uaRADIUS, \"radiusrequirema\", \"radius\");\n> +\t\tif (strcmp(val, \"1\") == 0)\n> +\t\t\thbaline->radiusrequirema = true;\n> +\t\telse\n> +\t\t\thbaline->radiusrequirema = false;\n> +\t}\n\nI was going to suggest throwing an error on any other val than \"1\" or \n\"0\", but I see that we're doing the same in many other boolean options, \nlike ldaptls. So I guess that's fine, but would be nice to tighten that \nup for all the options as a separate patch.\n\n# v1-0004-Teach-007_radius-test-about-Message-Authenticator.patch\n\nLooks good to me, although I'm not sure it's worthwhile to do this. \nWe're not reaching the codepath where we'd reject a message because of a \nmissing Message-Authenticator anyway. If the radiusrequirema option was \nbroken and had no effect, for example, the test would still pass.\n\n# v1-0005-XXX-BlastRADIUS-back-patch-kludge-for-12-and-13.patch\n\nPerhaps throw an error if you set \"radiusrequirema=1\", but the server is \ncompiled without OpenSSL.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 5 Aug 2024 17:41:21 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: BlastRADIUS mitigation"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 05/08/2024 15:43, Thomas Munro wrote:\n>> The response requirement can be enabled by radiusrequirema=1 in\n>> pg_hba.conf. For example, Debian stable is currently shipping\n>> FreeRADIUS 3.2.1 which doesn't yet send the MA in its responses, but\n>> FreeBSD and Debian \"testing\" have started shipping FreeRADIUS 3.2.5\n>> which is how I noticed all this. So it doesn't seem quite right to\n>> require it by default, yet?\n\n> Agreed.\n\nWe should think about that not in terms of the situation today,\nbut the situation when we ship this fix, possibly as much as\nthree months from now. (There was some mention in the security-list\ndiscussion of maybe making an off-cycle release to get this out\nsooner; but nothing was decided, and I doubt we'll do that unless\nwe start getting user complaints.) It seems likely to me that\nmost up-to-date systems will have BlastRADIUS mitigation in place\nby then, so maybe we should lean towards secure-by-default.\n\nWe don't necessarily have to make that decision today, either.\nWe could start with not-secure-by-default but reconsider\nwhenever the release is imminent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2024 14:50:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BlastRADIUS mitigation"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 2:41 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Seems that on linux or freebsd, you'd plow ahead even if the binary is\n> not found, and fail later, while on macOS you'd skip the tests. I think\n> we should always error out if the dependencies are not found. If you\n> make an effort to add PG_TEST_EXTRA=radius, presumably you really do\n> want to run the tests, and if dependencies are missing you'd like to\n> know about it.\n\nFixed.\n\n> > + secret = \"shared-secret\"\n>\n> Let's use a random value for this, and for the Cleartext-Password. This\n> only runs locally, and only if you explicitly add it to PG_TEST_EXTRA,\n> but it still seems nice to protect from other users on the system when\n> we can do so easily.\n\nOK, done.\n\n> > +security {\n> > + require_message_authenticator = \"yes\"\n> > +}\n>\n> > +# Note that require_message_authenticator defaulted to \"no\" before 3.2.5, and\n> > +# then switched to \"auto\" (a new mode that fills the logs up with warning\n> > +# messages about clients that don't send MA), and presumably a later version\n> > +# will default to \"yes\".\n>\n> That's not quite accurate: the option didn't exist before version 3.2.5.\n> What happens if you set it on an older server version? /me tests: seems\n> that FreeRadius 3.2.1 silently accepts the setting, or any other setting\n> it doesn't recognize, and will do nothing with it. A little surprising,\n> but ok. I didn't find any mention in the docs on that.\n\nHuh. Thanks, I was confused by that. Fixed.\n\n> (Also, that will make the test fail, until the\n> v1-0003-Mitigation-for-BlastRADIUS.patch is also applied. You could\n> leave that out of the test in this first patch, and add it\n> v1-0003-Mitigation-for-BlastRADIUS.patch)\n\nYeah, done.\n\n> Review on v1-0003-Mitigation-for-BlastRADIUS.patch:\n>\n> > + <varlistentry>\n> > + <term><literal>radiusrequirema</literal></term>\n> > + <listitem>\n> > + <para>\n> > + Whether to require a valid <literal>Message-Authenticator</literal>\n> > + attribute in messages received from RADIUS servers, and ignore messages\n> > + that don't contain it. The default value\n> > + is <literal>0</literal>, but it can be set to <literal>1</literal>\n> > + to enable that requirement.\n> > + This setting does not affect requests sent by\n> > + <productname>PostgreSQL</productname> to the RADIUS server, which\n> > + always include a <literal>Message-Authenticator</literal> attribute\n> > + (but didn't in earlier releases).\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> I think this should include some advise on why and when you should set\n> it. Something like:\n>\n> Enabling this mitigates the RADIUS protocol vulnerability described at\n> blastradius.fail. It is recommended to always set this to 1, unless you\n> are running an older RADIUS server version that does include the\n> mitigation to include Message-Authenticator in all replies. The default\n> will be changed to 1 in a future PostgreSQL version.\n\nDone, with small tweaks.\n\n> > attr = (radius_attribute *) (packet + RADIUS_HEADER_LENGTH);\n> >\n> > while ((uint8 *) attr < end)\n> > {\n> > /* Would this attribute overflow the buffer? */\n> > if (&attr->length >= end || (uint8 *) attr + attr->length > end)\n> > return false;\n>\n> Is this kind of pointer arithmetic safe? In theory, if a packet is\n> malformed so that attr->length points beyond the end of the packet,\n> \"(uint8 *) attr + attr-length\" might overflow. 
That would require the\n> packet buffer to be allocated just before the end of the address space,\n> so probably cannot happen in practice. I don't remember if there are\n> some guarantees on that in C99 or other standards. Still, perhaps it\n> would be better to write this differently, e.g. using a separate \"size_t\n> position\" variable to track the current position in the buffer.\n\nDone.\n\n> (This also relies on the fact that \"struct radius_attribute\" doesn't\n> require alignment, which is valid, and radius_add_attribute() made that\n> assumption already. Maybe worth a comment though while we're at it; it\n> certainly raised my eyebrow while reading this)\n\nComment added.\n\n> What if the message contains multiple attribute of the same type? If\n> there's a duplicate Message-Authenticator, we should surely reject the\n> packet. I don't know if duplicate attributes are legal in general.\n\nDuplicate attributes are legal in general per RFC 2865, which has a\ntable of attributes and their possible quantity; unfortunately this\none is an extension from RFC 2869, and I didn't find where it pins\nthat down. I suppose we could try to detect an unexpected duplicate,\nwhich might have the side benefit of checking the rest of the\nattributes for well-formedness (though in practice there aren't any).\nIs it worth bothering with that?\n\nI suppose if we wanted to be extra fastidious, we could also test with\na gallery of malformed packets crafted by a Perl script, but that\nfeels like overkill. On the other hand it would be bad if you could\ncrash a server by lobbing UDP packets at it because of some dumb\nthinko.\n\n> > + /*\n> > + * Add Message-Authenticator attribute first, which for now holds zeroes.\n> > + * We remember where it is in the message so that we can fill it in later.\n> > + */\n>\n> Let's mention Blast-RADIUS here as the reason to put this first. Reading\n> the paper though, I think it's only important in the server->client\n> messages, but I'm not sure, and shouldn't hurt anyway.\n\nDone.\n\n> > + else if (message_authenticator_location == NULL)\n> > + {\n> > + ereport(LOG,\n> > + (errmsg(\"RADIUS response from %s has no Message-Authenticator\",\n> > + server)));\n> > +\n> > + /* We'll ignore this message, unless pg_hba.conf told us not to. */\n> > + if (requirema)\n> > + continue;\n> > + }\n>\n> This is going to be very noisy if you are running an older server.\n\nSilenced.\n\n> > + uint8 message_authenticator_key[RADIUS_VECTOR_LENGTH];\n> > + uint8 message_authenticator[RADIUS_VECTOR_LENGTH];\n>\n> Perhaps use MD5_DIGEST_LENGTH for these. The Message-Authenticator is an\n> HMAC-MD5, which indeed has the same length as the MD5 hash used on the\n> password, so it's just pro forma, but it seems a little coincidental.\n> There's no fundamental reason they would have to be the same length, if\n> the RFC author's had chosen to use a different hash algorithm for\n> Message-Authenticator, for example.\n\nThe first one is now gone (see next).\n\nNow I have message_authenticator[MD5_DIGEST_LENGTH], and then the\nother places that need that number use\nlengthof(message_authenticator).\n\n> If the secret is longer than 16 bytes, this truncates it. Is that\n> correct? 
According to https://en.wikipedia.org/wiki/HMAC, you're\n> supposed derive the suitably-sized key by calling the hash function on\n> the longer key in that case.\n\nOh, actually I don't think we need that step at all: the HMAC init\nfunction takes a variable length key and does the required\npadding/hashing itself.\n\n> # v1-0004-Teach-007_radius-test-about-Message-Authenticator.patch\n>\n> Looks good to me, although I'm not sure it's worthwhile to do this.\n> We're not reaching the codepath where we'd reject a message because of a\n> missing Message-Authenticator anyway. If the radiusrequirema option was\n> broken and had no effect, for example, the test would still pass.\n\nDropped.\n\n> # v1-0005-XXX-BlastRADIUS-back-patch-kludge-for-12-and-13.patch\n>\n> Perhaps throw an error if you set \"radiusrequirema=1\", but the server is\n> compiled without OpenSSL.\n\nDone.\n\nI don't think I would turn this on in the build farm, because of the 3\nsecond timeout which might cause noise. Elsewhere I had a patch to\nmake the timeout configurable, so it could be set long for positive\ntests and short for negative tests, so we could maybe do that in\nmaster and think about turning the test on somewhere.",
"msg_date": "Tue, 6 Aug 2024 12:58:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: BlastRADIUS mitigation"
},
{
"msg_contents": "On Mon, Aug 05, 2024 at 05:41:21PM +0300, Heikki Linnakangas wrote:\n> On 05/08/2024 15:43, Thomas Munro wrote:\n>> Since PostgreSQL v12 and v13 don't have the modern \"common/hmac.h\"\n>> API, I came up with a cheap kludge: locally #define those interfaces\n>> to point directly to the OpenSSL HMAC API, or just give up and drop\n>> Message-Authenticator support if you didn't build with OpenSSL support\n>> (in practice everyone does). Better ideas?\n> \n> Seems reasonable. It probably wouldn't be hard to backport common/hmac.h\n> either, perhaps in a limited fashion with just md5 support.\n\nIt's a bit more than just backporting hmac.h and hmac.c.\nhmac_openssl.c only depends on OpenSSL to do its business, but the \nnon-OpenSSL fallback implementation depends also on the cryptohash\nfallbacks for SHA-NNN and MD5. So you would also need the parts\nrelated to cryptohash.c, sha{1,2}.c, etc. Not really complex as these\ncould be dropped as-is into the stable branches of 12 and 13, but not\nthat straight-forward either as we had the bad idea to use the\nfallback MD5 implementation even if linking to OpenSSL in v12 and v13,\nmeaning that you may need some tweaks to avoid API conflicts.\n\nRequiring OpenSSL and its HMAC APIs to do the job is much safer for a\nstable branch, IMO.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 15:28:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BlastRADIUS mitigation"
},
{
"msg_contents": "On 06/08/2024 03:58, Thomas Munro wrote:\n> On Tue, Aug 6, 2024 at 2:41 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> What if the message contains multiple attribute of the same type? If\n>> there's a duplicate Message-Authenticator, we should surely reject the\n>> packet. I don't know if duplicate attributes are legal in general.\n> \n> Duplicate attributes are legal in general per RFC 2865, which has a\n> table of attributes and their possible quantity; unfortunately this\n> one is an extension from RFC 2869, and I didn't find where it pins\n> that down. I suppose we could try to detect an unexpected duplicate,\n> which might have the side benefit of checking the rest of the\n> attributes for well-formedness (though in practice there aren't any).\n> Is it worth bothering with that?\n\nHmm, it does feel sloppy to not verify the format of the rest of the \nattributes. That's not new with this patch though. Maybe have a separate \nfunction to verify the packet's format, and call that before \nradius_find_attribute().\n\n> I suppose if we wanted to be extra fastidious, we could also test with\n> a gallery of malformed packets crafted by a Perl script, but that\n> feels like overkill. On the other hand it would be bad if you could\n> crash a server by lobbing UDP packets at it because of some dumb\n> thinko.\n\nThis would also be a easy target for fuzz testing. I'm not too worried \nthough, the packet format is pretty simple. Still, bugs happen. (Not a \nrequirement for this patch in any case)\n\n> +my $radius_port = PostgreSQL::Test::Cluster::get_free_port();\n\nThis isn't quite right because get_free_port() finds a free TCP port, \nwhile radius uses UDP.\n\n> +#else\n> +\t\t\tereport(elevel,\n> +\t\t\t\t\t(errcode(ERRCODE_CONFIG_FILE_ERROR),\n> +\t\t\t\t\t errmsg(\"this build does not support radiusrequirema=1\"),\n> +\t\t\t\t\t errcontext(\"line %d of configuration file \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\tline_num, file_name)));\n> +#endif\n\nMaybe something like:\n\n errmsg(\"radiusrequirema=1 is not supported because the server was \nbuilt without OpenSSL\")\n\nto give the user a hint what they need to do to enable it.\n\nOther than those little things, looks good to me.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 7 Aug 2024 15:55:33 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: BlastRADIUS mitigation"
},
{
"msg_contents": "On Wed, Aug 7, 2024 at 5:55 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 06/08/2024 03:58, Thomas Munro wrote:\n> > On Tue, Aug 6, 2024 at 2:41 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >> What if the message contains multiple attribute of the same type? If\n> >> there's a duplicate Message-Authenticator, we should surely reject the\n> >> packet. I don't know if duplicate attributes are legal in general.\n> >\n> > Duplicate attributes are legal in general per RFC 2865, which has a\n> > table of attributes and their possible quantity; unfortunately this\n> > one is an extension from RFC 2869, and I didn't find where it pins\n> > that down.\n\nThere's a similar table near the end of 2869:\n\n https://datatracker.ietf.org/doc/html/rfc2869#section-5.19\n\n> > I suppose if we wanted to be extra fastidious, we could also test with\n> > a gallery of malformed packets crafted by a Perl script, but that\n> > feels like overkill. On the other hand it would be bad if you could\n> > crash a server by lobbing UDP packets at it because of some dumb\n> > thinko.\n>\n> This would also be a easy target for fuzz testing. I'm not too worried\n> though, the packet format is pretty simple. Still, bugs happen. (Not a\n> requirement for this patch in any case)\n\n<tangent>\n\nI've been working on fuzzing JSON, and I spent some time adapting that\nto test this RADIUS code. No pressure to use any of it (the\nrefactoring to pull out the response validation is cowboy-quality at\nbest), but it might help anyone who wants to pursue it in the future?\n\nThis fuzzer hasn't been able to find anything in the response parser.\n(But the modifications I made make that claim a little weaker, since\nI'm not testing what's actually shipping.) The attached response\ncorpus could be used to seed a malformed-packet gallery like Thomas\nmentioned; it's built with the assumption that the request packet was\nall zeroes and the shared secret is `secret`.\n\nI was impressed with how quickly LLVM was able to find the packet\nshape, including valid signatures. The only thing I had to eventually\nadd manually was a valid Message-Authenticator case; I assume the\ninterdependency between the authenticator and the packet checksum was\na little too much for libfuzzer to figure out on its own.\n\n</tangent>\n\n--Jacob",
"msg_date": "Tue, 13 Aug 2024 11:22:43 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BlastRADIUS mitigation"
}
] |
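For reference, the Message-Authenticator attribute discussed in the thread above is defined by RFC 2869 as an HMAC-MD5 over the entire RADIUS packet, keyed with the shared secret, with the attribute's own 16-byte value set to zeroes while the digest is computed. A rough sketch of that computation using OpenSSL's one-shot HMAC() follows; it illustrates the RFC's rule only, the helper name and arguments are hypothetical, and it is not the code from the proposed patch:

#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/*
 * Fill in the Message-Authenticator of a request packet.  "packet" is the
 * complete packet of "packet_len" bytes, and "ma_value" points at the
 * attribute's 16-byte value field inside that packet.
 */
static void
fill_message_authenticator(unsigned char *packet, size_t packet_len,
						   unsigned char *ma_value, const char *secret)
{
	unsigned char digest[16];	/* HMAC-MD5 produces 16 bytes */
	unsigned int digest_len = sizeof(digest);

	/* the attribute value must be all zeroes while hashing */
	memset(ma_value, 0, 16);

	HMAC(EVP_md5(), secret, (int) strlen(secret),
		 packet, packet_len, digest, &digest_len);

	memcpy(ma_value, digest, 16);
}

Checking the attribute in a response works roughly the same way: the digest is recomputed over the response packet (with the request's authenticator substituted and the attribute value zeroed) and compared against the received value.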
[
{
"msg_contents": "Hello.\n\nAfter a recent commit f80b09bac8, make update-po fails with \"missing\nrequired files\". The commit moved some files to fe_utils but this\nchange was not reflected in pg_basebackup's nls.mk. The attached patch\nfixes this issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 06 Aug 2024 10:21:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing reflection in nls.mk from recent basebackup changes"
},
{
"msg_contents": "On Tue, Aug 06, 2024 at 10:21:23AM +0900, Kyotaro Horiguchi wrote:\n> After a recent commit f80b09bac8, make update-po fails with \"missing\n> required files\". The commit moved some files to fe_utils but this\n> change was not reflected in pg_basebackup's nls.mk. The attached patch\n> fixes this issue.\n\nYou are right. Still, it is not something that committers are\nrequired to update when introducing new files, isn't it? I don't see\nwhy we should be aggressive here for HEAD.\n\nad8877cb5137 has done a large batch of these for the v17 cycle.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 10:39:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing reflection in nls.mk from recent basebackup changes"
},
{
"msg_contents": "On 2024-Aug-06, Michael Paquier wrote:\n\n> On Tue, Aug 06, 2024 at 10:21:23AM +0900, Kyotaro Horiguchi wrote:\n> > After a recent commit f80b09bac8, make update-po fails with \"missing\n> > required files\". The commit moved some files to fe_utils but this\n> > change was not reflected in pg_basebackup's nls.mk. The attached patch\n> > fixes this issue.\n> \n> You are right. Still, it is not something that committers are\n> required to update when introducing new files, isn't it? I don't see\n> why we should be aggressive here for HEAD.\n\nWell, make targets should always work.\n\n> ad8877cb5137 has done a large batch of these for the v17 cycle.\n\nIIUC that's slightly different -- it concerns files that contain\n*additional* files that msgmerge need to scan in order to extract\ntranslatable strings. This patch is about fixing bogus file locations\nin the makefiles. So without this patch, update-po fails; without the\ncommit you mention, update-po continues to run, the only problem is it\nmisses a few files.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)\n\n\n",
"msg_date": "Mon, 12 Aug 2024 18:54:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing reflection in nls.mk from recent basebackup changes"
},
{
"msg_contents": "On 2024-Aug-06, Kyotaro Horiguchi wrote:\n\n> After a recent commit f80b09bac8, make update-po fails with \"missing\n> required files\". The commit moved some files to fe_utils but this\n> change was not reflected in pg_basebackup's nls.mk. The attached patch\n> fixes this issue.\n\nThanks, pushed!\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qué formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. Bradbury)\n\n\n",
"msg_date": "Mon, 12 Aug 2024 21:47:05 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing reflection in nls.mk from recent basebackup changes"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAttached is the release announcement draft for the 2024-08-08 release. \r\nPlease review for accuracy and omissions, and provide feedback no later \r\nthan 2024-08-08 12:00 UTC.\r\n\r\nThanks!\r\n\r\nJonathan",
"msg_date": "Mon, 5 Aug 2024 23:24:46 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2024-08-08 update release announcement draft"
}
] |
[
{
"msg_contents": "Hi all,\n\ndikkop has reported a failure with the regression tests of pg_combinebackup:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-08-04%2010%3A04%3A51\n\nThat's in the test 003_timeline.pl, from dc212340058b:\n# Failed test 'incremental backup from node1'\n# at t/003_timeline.pl line 43.\n\nThe node is extremely slow, so perhaps bumping up the timeout would be\nfine enough in this case (did not spend time analyzing it). I don't\nthink that this has been discussed, but perhaps I just missed a\nreference to it and the incremental backup thread is quite large.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 14:48:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Instability with incremental backup tests (pg_combinebackup,\n 003_timeline.pl)"
},
{
"msg_contents": "On 8/6/24 07:48, Michael Paquier wrote:\n> Hi all,\n> \n> dikkop has reported a failure with the regression tests of pg_combinebackup:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-08-04%2010%3A04%3A51\n> \n> That's in the test 003_timeline.pl, from dc212340058b:\n> # Failed test 'incremental backup from node1'\n> # at t/003_timeline.pl line 43.\n> \n> The node is extremely slow, so perhaps bumping up the timeout would be\n> fine enough in this case (did not spend time analyzing it). I don't\n> think that this has been discussed, but perhaps I just missed a\n> reference to it and the incremental backup thread is quite large.\n> \n\nYeah, it's a freebsd running on rpi4, from a USB flash disk, and in my\nexperience it's much slower than rpi4 running Linux. I'm not sure why is\nthat, never found a way to make it faster\n\nThe machine already has:\n\n export PGCTLTIMEOUT=600\n export PG_TEST_TIMEOUT_DEFAULT=600\n\nI doubt increasing it further will do the trick. Maybe there's some\nother timeout that I should increase?\n\nFWIW I just moved the buildfarm stuff to a proper SSD disk (still USB,\nbut hopefully better than the crappy flash disk).\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Tue, 6 Aug 2024 14:53:09 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Instability with incremental backup tests (pg_combinebackup,\n 003_timeline.pl)"
},
{
"msg_contents": "\n\nOn 8/6/24 14:53, Tomas Vondra wrote:\n> On 8/6/24 07:48, Michael Paquier wrote:\n>> Hi all,\n>>\n>> dikkop has reported a failure with the regression tests of pg_combinebackup:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-08-04%2010%3A04%3A51\n>>\n>> That's in the test 003_timeline.pl, from dc212340058b:\n>> # Failed test 'incremental backup from node1'\n>> # at t/003_timeline.pl line 43.\n>>\n>> The node is extremely slow, so perhaps bumping up the timeout would be\n>> fine enough in this case (did not spend time analyzing it). I don't\n>> think that this has been discussed, but perhaps I just missed a\n>> reference to it and the incremental backup thread is quite large.\n>>\n> \n> Yeah, it's a freebsd running on rpi4, from a USB flash disk, and in my\n> experience it's much slower than rpi4 running Linux. I'm not sure why is\n> that, never found a way to make it faster\n> \n> The machine already has:\n> \n> export PGCTLTIMEOUT=600\n> export PG_TEST_TIMEOUT_DEFAULT=600\n> \n> I doubt increasing it further will do the trick. Maybe there's some\n> other timeout that I should increase?\n> \n> FWIW I just moved the buildfarm stuff to a proper SSD disk (still USB,\n> but hopefully better than the crappy flash disk).\n> \n\nSeems the move to SSD helped a lot - the runs went from ~4h to ~40m. So\nchances are the instability won't be such a problem.\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 7 Aug 2024 01:23:17 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Instability with incremental backup tests (pg_combinebackup,\n 003_timeline.pl)"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 1:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n> dikkop has reported a failure with the regression tests of pg_combinebackup:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dikkop&dt=2024-08-04%2010%3A04%3A51\n>\n> That's in the test 003_timeline.pl, from dc212340058b:\n> # Failed test 'incremental backup from node1'\n> # at t/003_timeline.pl line 43.\n>\n> The node is extremely slow, so perhaps bumping up the timeout would be\n> fine enough in this case (did not spend time analyzing it). I don't\n> think that this has been discussed, but perhaps I just missed a\n> reference to it and the incremental backup thread is quite large.\n\nI just noticed, rather belatedly, that this thread is on the open\nitems list. This seems to be the cause of the failure:\n\n2024-08-04 12:46:34.986 UTC [4951:15] 003_timeline.pl STATEMENT:\nSTART_REPLICATION SLOT \"pg_basebackup_4951\" 0/4000000 TIMELINE 1\n2024-08-04 12:47:34.987 UTC [4951:16] 003_timeline.pl LOG:\nterminating walsender process due to replication timeout\n\nwal_sender_timeout is 60s by default, so that tracks. The command that\nprovokes this failure is:\n\npg_basebackup -D\n/mnt/data/buildfarm/buildroot/HEAD/pgsql.build/src/bin/pg_combinebackup/tmp_check/t_003_timeline_node1_data/backup/backup2\n--no-sync -cfast --incremental\n/mnt/data/buildfarm/buildroot/HEAD/pgsql.build/src/bin/pg_combinebackup/tmp_check/t_003_timeline_node1_data/backup/backup1/backup_manifest\n\nAll we're doing here is taking an incremental backup of 1-table\ndatabase that had 1 row at the time of the full backup and has had 1\nmore row inserted since then. On my system, the last time I ran this\nregression test, this step completed in 410ms. It shouldn't be\nexpensive. So I'm inclined to chalk this up to the machine not having\nenough resources. The only thing that I don't really understand is why\nthis particular test would fail vs. anything else. We have a bunch of\ntests that take backups. A possibly important difference here is that\nthis one is an incremental backup, so it would need to read WAL\nsummary files from the beginning of the full backup to the beginning\nof the current backup and combine them into one super-summary that it\ncould then use to decide what to include in the incremental backup.\nHowever, since this is an artificial example with just 1 insert\nbetween the full and the incremental, it's hard to imagine that being\nexpensive, unless there's some low-probability bug that makes it go\ninto an infinite loop or chew up a million CPU cycles or something.\nThat's not impossible, but given the discussion between you and Tomas,\nI'm kinda hoping it was just a hardware issue.\n\nBarring objections or other similar trouble reports, I think we should\njust close out this open item.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Aug 2024 08:58:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Instability with incremental backup tests (pg_combinebackup,\n 003_timeline.pl)"
},
{
"msg_contents": "On 8/21/24 14:58, Robert Haas wrote:\n> ...\n>\n> All we're doing here is taking an incremental backup of 1-table\n> database that had 1 row at the time of the full backup and has had 1\n> more row inserted since then. On my system, the last time I ran this\n> regression test, this step completed in 410ms. It shouldn't be\n> expensive. So I'm inclined to chalk this up to the machine not having\n> enough resources. The only thing that I don't really understand is why\n> this particular test would fail vs. anything else. We have a bunch of\n> tests that take backups. A possibly important difference here is that\n> this one is an incremental backup, so it would need to read WAL\n> summary files from the beginning of the full backup to the beginning\n> of the current backup and combine them into one super-summary that it\n> could then use to decide what to include in the incremental backup.\n> However, since this is an artificial example with just 1 insert\n> between the full and the incremental, it's hard to imagine that being\n> expensive, unless there's some low-probability bug that makes it go\n> into an infinite loop or chew up a million CPU cycles or something.\n> That's not impossible, but given the discussion between you and Tomas,\n> I'm kinda hoping it was just a hardware issue.\n> \n> Barring objections or other similar trouble reports, I think we should\n> just close out this open item.\n> \n\n+1 to just close it\n\nThe animal is running FreeBSD on rpi4, and used to be running from a\nflash disk. Seems FreeBSD has some trouble with that, which likely\ncontributed to the failures (a bit weird it affected just this test).\n\nMoving to a better storage (SATA SSD over USB) improved the situation\nquite a bit. It's a bit too early to say for sure, ofc. But I don't\nthink the test itself is broken.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 21 Aug 2024 15:18:49 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Instability with incremental backup tests (pg_combinebackup,\n 003_timeline.pl)"
},
{
"msg_contents": "On Wed, Aug 21, 2024 at 03:18:49PM +0200, Tomas Vondra wrote:\n> +1 to just close it\n\nSounds good to me. Thanks.\n--\nMichael",
"msg_date": "Thu, 22 Aug 2024 15:24:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Instability with incremental backup tests (pg_combinebackup,\n 003_timeline.pl)"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI use async commits. At some moment, I would like to make sure that all inserted WAL records are fsync-ed. I can use XLogFlush function but I have some doubts which LSN to specify. There is a number of functions which return write or insert LSNs but they are not applicable.\n\nI can't use GetXLogInsertRecPtr() because it returns a real insert LSN, not the end LSN of the last record. XLogFlush may fail with such LSN because the specified LSN may be \"in the future\" if the WAL record ends up to the page boundary (the real insert LSN is summed up with page header size).\n\nI can't use GetXLogWriteRecPtr() because it seems to be bounded to page boundaries. Some inserted WAL records may not be fsync-ed. Some other functions seems not applicable as well.\n\nThe first idea is to use GetLastImportantRecPtr() but this function returns the start LSN of the last important WAL record. I would use XLogFlush(GetLastImportantRecPtr() + 1) but I'm not sure that this way is conventional.\n\nAnother idea is to create a new function like GetXLogInsertRecPtr() which calls XLogBytePosToEndRecPtr() instead of XLogBytePosToRecPtr() inside it.\n\nCould you please advice which way to go?\n\nWith best regards,\nVitaly\n\nHi Hackers,I use async commits. At some moment, I would like to make sure that all inserted WAL records are fsync-ed. I can use XLogFlush function but I have some doubts which LSN to specify. There is a number of functions which return write or insert LSNs but they are not applicable.I can't use GetXLogInsertRecPtr() because it returns a real insert LSN, not the end LSN of the last record. XLogFlush may fail with such LSN because the specified LSN may be \"in the future\" if the WAL record ends up to the page boundary (the real insert LSN is summed up with page header size).I can't use GetXLogWriteRecPtr() because it seems to be bounded to page boundaries. Some inserted WAL records may not be fsync-ed. Some other functions seems not applicable as well.The first idea is to use GetLastImportantRecPtr() but this function returns the start LSN of the last important WAL record. I would use XLogFlush(GetLastImportantRecPtr() + 1) but I'm not sure that this way is conventional.Another idea is to create a new function like GetXLogInsertRecPtr() which calls XLogBytePosToEndRecPtr() instead of XLogBytePosToRecPtr() inside it.Could you please advice which way to go?With best regards,Vitaly",
"msg_date": "Tue, 06 Aug 2024 09:41:12 +0300",
"msg_from": "\"Vitaly Davydov\" <v.davydov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "Hi,\n\n> I use async commits. At some moment, I would like to make sure that all inserted WAL records are fsync-ed. I can use XLogFlush function but I have some doubts which LSN to specify. There is a number of functions which return write or insert LSNs but they are not applicable.\n>\n> I can't use GetXLogInsertRecPtr() because it returns a real insert LSN, not the end LSN of the last record. XLogFlush may fail with such LSN because the specified LSN may be \"in the future\" if the WAL record ends up to the page boundary (the real insert LSN is summed up with page header size).\n>\n> I can't use GetXLogWriteRecPtr() because it seems to be bounded to page boundaries. Some inserted WAL records may not be fsync-ed. Some other functions seems not applicable as well.\n>\n> The first idea is to use GetLastImportantRecPtr() but this function returns the start LSN of the last important WAL record. I would use XLogFlush(GetLastImportantRecPtr() + 1) but I'm not sure that this way is conventional.\n>\n> Another idea is to create a new function like GetXLogInsertRecPtr() which calls XLogBytePosToEndRecPtr() instead of XLogBytePosToRecPtr() inside it.\n>\n> Could you please advice which way to go?\n\nDoes pg_current_wal_flush_lsn() [1] return what you need?\n\n[1]: https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 7 Aug 2024 12:15:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "Hi,\n\n> > Could you please advice which way to go?\n>\n> Does pg_current_wal_flush_lsn() [1] return what you need?\n>\n> [1]: https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL\n\nIf not, take a look at its implementation and functions around,\nGetInsertRecPtr() and others. I believe you will find all you need for\nthe task.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 7 Aug 2024 12:19:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "Hi Aleksander,\n\nOn Wednesday, August 07, 2024 12:19 MSK, Aleksander Alekseev <aleksander@timescale.com> wrote:\n > Does pg_current_wal_flush_lsn() [1] return what you need?\n>\n> [1]: https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL\n\nIf not, take a look at its implementation and functions around,\nGetInsertRecPtr() and others. I believe you will find all you need for\nthe task.Thank you for the response. I need the LSN of the last inserted by not flushed WAL record. The function pg_current_wal_flush_lsn() doesn't help. It returns the current flush position. GetInsertRecPtr() doesn't help as well because it returns XLogCtl->LogwrtRqst.Write which is updated when the record crosses page boundary. I looked at the code and haven't found any suitable function except of GetLastImportantRecPtr() but it returns start LSN of the last inserted important record (but I need end lsn).\n\nI would propose a new function to fulfill my requirements like this (see below) but I prefer not to create new functions unreasonably:\n\nXLogRecPtr\nGetXLogLastInsertEndRecPtr(void)\n{\n XLogCtlInsert *Insert = &XLogCtl->Insert;\n uint64 current_bytepos;\n SpinLockAcquire(&Insert->insertpos_lck);\n current_bytepos = Insert->CurrBytePos;\n SpinLockRelease(&Insert->insertpos_lck);\n return XLogBytePosToEndRecPtr(current_bytepos);\n}\n\nThis function differs from the existing GetXLogInsertRecPtr() by calling XLogBytePosToEndRecPtr instead of XLogBytePosToRecPtr. \n\nWith best regards,\nVitaly\n\nHi Aleksander,On Wednesday, August 07, 2024 12:19 MSK, Aleksander Alekseev <aleksander@timescale.com> wrote: > Does pg_current_wal_flush_lsn() [1] return what you need?>> [1]: https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROLIf not, take a look at its implementation and functions around,GetInsertRecPtr() and others. I believe you will find all you need forthe task.Thank you for the response. I need the LSN of the last inserted by not flushed WAL record. The function pg_current_wal_flush_lsn() doesn't help. It returns the current flush position. GetInsertRecPtr() doesn't help as well because it returns XLogCtl->LogwrtRqst.Write which is updated when the record crosses page boundary. I looked at the code and haven't found any suitable function except of GetLastImportantRecPtr() but it returns start LSN of the last inserted important record (but I need end lsn).I would propose a new function to fulfill my requirements like this (see below) but I prefer not to create new functions unreasonably:XLogRecPtrGetXLogLastInsertEndRecPtr(void){ XLogCtlInsert *Insert = &XLogCtl->Insert; uint64 current_bytepos; SpinLockAcquire(&Insert->insertpos_lck); current_bytepos = Insert->CurrBytePos; SpinLockRelease(&Insert->insertpos_lck); return XLogBytePosToEndRecPtr(current_bytepos);}This function differs from the existing GetXLogInsertRecPtr() by calling XLogBytePosToEndRecPtr instead of XLogBytePosToRecPtr. With best regards,Vitaly",
"msg_date": "Wed, 07 Aug 2024 16:42:01 +0300",
"msg_from": "\"Vitaly Davydov\" <v.davydov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "Hi Vitaly,\n\n> I would propose a new function to fulfill my requirements like this (see below) but I prefer not to create new functions unreasonably:\n>\n> XLogRecPtr\n> GetXLogLastInsertEndRecPtr(void)\n> {\n> XLogCtlInsert *Insert = &XLogCtl->Insert;\n> uint64 current_bytepos;\n> SpinLockAcquire(&Insert->insertpos_lck);\n> current_bytepos = Insert->CurrBytePos;\n> SpinLockRelease(&Insert->insertpos_lck);\n> return XLogBytePosToEndRecPtr(current_bytepos);\n> }\n>\n> This function differs from the existing GetXLogInsertRecPtr() by calling XLogBytePosToEndRecPtr instead of XLogBytePosToRecPtr.\n\nPerhaps you could give more context on the use cases for this\nfunction? The value of it is not quite clear. What people typically\nneed is making sure if a given LSN was fsync'ed and/or replicated\nand/or applied on a replica. Your case(s) however is different and I\ndon't fully understand it.\n\nIn any case you will need to implement an SQL-wrapper in order to make\nthe function available to DBAs, cover it with tests and provide\ndocumentation.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 7 Aug 2024 16:55:36 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "On Wednesday, August 07, 2024 16:55 MSK, Aleksander Alekseev <aleksander@timescale.com> wrote:\n \nPerhaps you could give more context on the use cases for this\nfunction? The value of it is not quite clear. What people typically\nneed is making sure if a given LSN was fsync'ed and/or replicated\nand/or applied on a replica. Your case(s) however is different and I\ndon't fully understand it.\nI use asynchronous commit (without XLogFlush/fsync at commit). At some moment I would like to XLogFlush (fsync) all already asynchronously committed transactions (inserted but not flushed/fsynced yet WAL records). Assume, that there is no any active transactions at this moment, no any potential race conditions. My problem is to find a proper LSN which I can use as a parameter for XLogFlush. The problem is that I can't use GetXLogInsertRecPtr() because it may be \"in the future\" due to some reasons (added page header size). XLogFlush will fail in this case.\nIn any case you will need to implement an SQL-wrapper in order to make\nthe function available to DBAs, cover it with tests and provide\ndocumentation.Well, I would like to use such function in C language code, in some solution, not as a function to be used by users. \n\nWith best regards,\nVitaly\n\n Hi Vitaly,\n\n> I would propose a new function to fulfill my requirements like this (see below) but I prefer not to create new functions unreasonably:\n>\n> XLogRecPtr\n> GetXLogLastInsertEndRecPtr(void)\n> {\n> XLogCtlInsert *Insert = &XLogCtl->Insert;\n> uint64 current_bytepos;\n> SpinLockAcquire(&Insert->insertpos_lck);\n> current_bytepos = Insert->CurrBytePos;\n> SpinLockRelease(&Insert->insertpos_lck);\n> return XLogBytePosToEndRecPtr(current_bytepos);\n> }\n>\n> This function differs from the existing GetXLogInsertRecPtr() by calling XLogBytePosToEndRecPtr instead of XLogBytePosToRecPtr.\n\n\nIn any case you will need to implement an SQL-wrapper in order to make\nthe function available to DBAs, cover it with tests and provide\ndocumentation.\n\n--\nBest regards,\nAleksander Alekseev\n\n \n\n \n\nOn Wednesday, August 07, 2024 16:55 MSK, Aleksander Alekseev <aleksander@timescale.com> wrote: Perhaps you could give more context on the use cases for thisfunction? The value of it is not quite clear. What people typicallyneed is making sure if a given LSN was fsync'ed and/or replicatedand/or applied on a replica. Your case(s) however is different and Idon't fully understand it.I use asynchronous commit (without XLogFlush/fsync at commit). At some moment I would like to XLogFlush (fsync) all already asynchronously committed transactions (inserted but not flushed/fsynced yet WAL records). Assume, that there is no any active transactions at this moment, no any potential race conditions. My problem is to find a proper LSN which I can use as a parameter for XLogFlush. The problem is that I can't use GetXLogInsertRecPtr() because it may be \"in the future\" due to some reasons (added page header size). XLogFlush will fail in this case.In any case you will need to implement an SQL-wrapper in order to makethe function available to DBAs, cover it with tests and providedocumentation.Well, I would like to use such function in C language code, in some solution, not as a function to be used by users. 
With best regards,Vitaly Hi Vitaly,> I would propose a new function to fulfill my requirements like this (see below) but I prefer not to create new functions unreasonably:>> XLogRecPtr> GetXLogLastInsertEndRecPtr(void)> {> XLogCtlInsert *Insert = &XLogCtl->Insert;> uint64 current_bytepos;> SpinLockAcquire(&Insert->insertpos_lck);> current_bytepos = Insert->CurrBytePos;> SpinLockRelease(&Insert->insertpos_lck);> return XLogBytePosToEndRecPtr(current_bytepos);> }>> This function differs from the existing GetXLogInsertRecPtr() by calling XLogBytePosToEndRecPtr instead of XLogBytePosToRecPtr.In any case you will need to implement an SQL-wrapper in order to makethe function available to DBAs, cover it with tests and providedocumentation.--Best regards,Aleksander Alekseev",
"msg_date": "Wed, 07 Aug 2024 17:37:39 +0300",
"msg_from": "\"Vitaly Davydov\" <v.davydov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "Hi,\n\n> I use asynchronous commit (without XLogFlush/fsync at commit). At some moment I would like to XLogFlush (fsync) all already asynchronously committed transactions (inserted but not flushed/fsynced yet WAL records). Assume, that there is no any active transactions at this moment, no any potential race conditions. My problem is to find a proper LSN which I can use as a parameter for XLogFlush.\n\nHow is it different from `CHECKPOINT;` ?\n\n> Well, I would like to use such function in C language code, in some solution, not as a function to be used by users.\n\nAssuming the function has value, as you claim, I see no reason not to\nexpose it similarly to pg_current_wal_*(). On top of that you will\nhave to test-cover it anyway. The easiest way to do it will be to have\nan SQL-wrapper.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 7 Aug 2024 18:00:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "On Wed, Aug 07, 2024 at 06:00:45PM +0300, Aleksander Alekseev wrote:\n> Assuming the function has value, as you claim, I see no reason not to\n> expose it similarly to pg_current_wal_*(). On top of that you will\n> have to test-cover it anyway. The easiest way to do it will be to have\n> an SQL-wrapper.\n\nI cannot be absolutely without seeing a patch, but adding SQL\nfunctions in this area is usually very useful for monitoring purposes\nof external solutions.\n--\nMichael",
"msg_date": "Mon, 19 Aug 2024 15:35:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fsync (flush) all inserted WAL records"
},
{
"msg_contents": "Dear All,\n\nI would propose a new function like GetXLogInsertRecPtr(), but with some modifications (please, see the attached patch). The result LSN can be passed to XLogFLush() safely. I believe, it will not raise an error in any case. XLogFlush(GetXLogLastInsertEndRecPtr()) will flush (fsync) all already inserted records at the moment. It is what I would like to get.\n\nI'm not sure, we need a SQL function counterpart for this new C function, but it is not a big deal to implement.\n\nWith best regards,\nVitaly\n\nOn Monday, August 19, 2024 09:35 MSK, Michael Paquier <michael@paquier.xyz> wrote:\n On Wed, Aug 07, 2024 at 06:00:45PM +0300, Aleksander Alekseev wrote:\n> Assuming the function has value, as you claim, I see no reason not to\n> expose it similarly to pg_current_wal_*(). On top of that you will\n> have to test-cover it anyway. The easiest way to do it will be to have\n> an SQL-wrapper.\n\nI cannot be absolutely without seeing a patch, but adding SQL\nfunctions in this area is usually very useful for monitoring purposes\nof external solutions.\n--\nMichael",
"msg_date": "Tue, 20 Aug 2024 18:18:45 +0300",
"msg_from": "\"Vitaly Davydov\" <v.davydov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "=?utf-8?q?Re=3A?= Fsync (flush) all inserted WAL records"
}
] |
[
{
"msg_contents": "constexpr is a keyword in C23. Rename a conflicting identifier for\nfuture-proofing.\n\nObviously, C23 is way in the future, but this is a hard error that \nprevents any further exploration. (To be clear: This only happens if \nyou explicitly select C23 mode. I'm not aware of a compiler where this \nis the default yet.)",
"msg_date": "Tue, 6 Aug 2024 10:04:27 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Rename C23 keyword"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 4:04 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> constexpr is a keyword in C23. Rename a conflicting identifier for\n> future-proofing.\n>\n> Obviously, C23 is way in the future, but this is a hard error that\n> prevents any further exploration. (To be clear: This only happens if\n> you explicitly select C23 mode. I'm not aware of a compiler where this\n> is the default yet.)\n\nThis seems fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Aug 2024 10:00:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rename C23 keyword"
},
{
"msg_contents": "On 06.08.24 16:00, Robert Haas wrote:\n> On Tue, Aug 6, 2024 at 4:04 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> constexpr is a keyword in C23. Rename a conflicting identifier for\n>> future-proofing.\n>>\n>> Obviously, C23 is way in the future, but this is a hard error that\n>> prevents any further exploration. (To be clear: This only happens if\n>> you explicitly select C23 mode. I'm not aware of a compiler where this\n>> is the default yet.)\n> \n> This seems fine.\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 13 Aug 2024 06:29:22 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: Rename C23 keyword"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile working on [1], I came across what seems to be incorrect comments in\ninstr_time.h and an unneeded cast to int64.\n\nIndeed, 03023a2664 represented time as an int64 on all platforms but forgot to\nupdate the comment related to INSTR_TIME_GET_MICROSEC() and provided an incorrect\ncomment for INSTR_TIME_GET_NANOSEC().\n\nPlease find attached a tiny patch to correct those and, in passing, remove what\nI think is an unneeded cast to int64.\n\n[1]: https://www.postgresql.org/message-id/19E276C9-2C2B-435A-B275-8FA22222AEB8%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Aug 2024 08:54:23 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix comments in instr_time.h and remove an unneeded cast to int64"
},
{
"msg_contents": "On 06/08/2024 11:54, Bertrand Drouvot wrote:\n> Hi hackers,\n> \n> While working on [1], I came across what seems to be incorrect comments in\n> instr_time.h and an unneeded cast to int64.\n> \n> Indeed, 03023a2664 represented time as an int64 on all platforms but forgot to\n> update the comment related to INSTR_TIME_GET_MICROSEC() and provided an incorrect\n> comment for INSTR_TIME_GET_NANOSEC().\n> \n> Please find attached a tiny patch to correct those and, in passing, remove what\n> I think is an unneeded cast to int64.\n\nApplied, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 6 Aug 2024 14:28:38 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments in instr_time.h and remove an unneeded cast to int64"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 06/08/2024 11:54, Bertrand Drouvot wrote:\n>> Please find attached a tiny patch to correct those and, in passing, remove what\n>> I think is an unneeded cast to int64.\n\n> Applied, thanks!\n\nI think this comment change is a dis-improvement. It's removed the\ndocumentation of the important fact that INSTR_TIME_GET_MICROSEC and\nINSTR_TIME_GET_NANOSEC return a different data type from\nINSTR_TIME_GET_MILLISEC (ie, integer versus float). Also, the\nexpectation is that users of these APIs do not know the actual data\ntype of instr_time, and instead we tell them what the output of those\nmacros is. This patch just blew a hole in that abstraction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2024 10:20:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments in instr_time.h and remove an unneeded cast to int64"
},
{
"msg_contents": "On 06/08/2024 17:20, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 06/08/2024 11:54, Bertrand Drouvot wrote:\n>>> Please find attached a tiny patch to correct those and, in passing, remove what\n>>> I think is an unneeded cast to int64.\n> \n>> Applied, thanks!\n> \n> I think this comment change is a dis-improvement. It's removed the\n> documentation of the important fact that INSTR_TIME_GET_MICROSEC and\n> INSTR_TIME_GET_NANOSEC return a different data type from\n> INSTR_TIME_GET_MILLISEC (ie, integer versus float). Also, the\n> expectation is that users of these APIs do not know the actual data\n> type of instr_time, and instead we tell them what the output of those\n> macros is. This patch just blew a hole in that abstraction.\n\nHmm, ok I see. Then I propose:\n\n1. Revert\n2. Just fix the comment to say int64 instead of uint64.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Tue, 6 Aug 2024 17:49:32 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments in instr_time.h and remove an unneeded cast to int64"
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 06, 2024 at 05:49:32PM +0300, Heikki Linnakangas wrote:\n> On 06/08/2024 17:20, Tom Lane wrote:\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > > On 06/08/2024 11:54, Bertrand Drouvot wrote:\n> > > > Please find attached a tiny patch to correct those and, in passing, remove what\n> > > > I think is an unneeded cast to int64.\n> > \n> > > Applied, thanks!\n> > \n> > I think this comment change is a dis-improvement. It's removed the\n> > documentation of the important fact that INSTR_TIME_GET_MICROSEC and\n> > INSTR_TIME_GET_NANOSEC return a different data type from\n> > INSTR_TIME_GET_MILLISEC (ie, integer versus float). Also, the\n> > expectation is that users of these APIs do not know the actual data\n> > type of instr_time, and instead we tell them what the output of those\n> > macros is. This patch just blew a hole in that abstraction.\n\nOh ok, did not think about it that way, thanks for the feedback!\n\n> \n> Hmm, ok I see. Then I propose:\n> \n> 1. Revert\n> 2. Just fix the comment to say int64 instead of uint64.\n\nLGTM, thanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Aug 2024 14:57:28 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix comments in instr_time.h and remove an unneeded cast to int64"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Hmm, ok I see. Then I propose:\n\n> 1. Revert\n> 2. Just fix the comment to say int64 instead of uint64.\n\nYeah, it's probably reasonable to specify the output as int64\nnot uint64 (especially since it looks like that's what the\nmacros actually produce).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2024 11:16:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments in instr_time.h and remove an unneeded cast to int64"
},
{
"msg_contents": "On 06/08/2024 18:16, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> Hmm, ok I see. Then I propose:\n> \n>> 1. Revert\n>> 2. Just fix the comment to say int64 instead of uint64.\n> \n> Yeah, it's probably reasonable to specify the output as int64\n> not uint64 (especially since it looks like that's what the\n> macros actually produce).\n\nCommitted\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 6 Aug 2024 22:18:45 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Fix comments in instr_time.h and remove an unneeded cast to int64"
}
] |