[
{
"msg_contents": " From 73dbe66c0ed68ae588223639a2ba93f6a727b704 Mon Sep 17 00:00:00 2001\nFrom: Samuel Marks <807580+SamuelMarks@users.noreply.github.com>\nDate: Fri, 23 Aug 2024 16:03:41 -0500\nSubject: [PATCH] [src/include/pg_config_manual.h] Guard `CLOBBER_FREED_MEMORY`\n & `MEMORY_CONTEXT_CHECKING`\n\n---\n src/include/pg_config_manual.h | 5 +++--\n 1 file changed, 3 insertions(+), 2 deletions(-)\n\ndiff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h\nindex e799c298..1f28d585 100644\n--- a/src/include/pg_config_manual.h\n+++ b/src/include/pg_config_manual.h\n@@ -261,7 +261,7 @@\n * facilitate catching bugs that refer to already-freed values.\n * Right now, this gets defined automatically if --enable-cassert.\n */\n-#ifdef USE_ASSERT_CHECKING\n+#if defined(USE_ASSERT_CHECKING) && !defined(CLOBBER_FREED_MEMORY)\n #define CLOBBER_FREED_MEMORY\n #endif\n\n@@ -270,7 +270,8 @@\n * bytes than were allocated). Right now, this gets defined\n * automatically if --enable-cassert or USE_VALGRIND.\n */\n-#if defined(USE_ASSERT_CHECKING) || defined(USE_VALGRIND)\n+#if (defined(USE_ASSERT_CHECKING) || defined(USE_VALGRIND)) && \\\n+ !defined(MEMORY_CONTEXT_CHECKING)\n #define MEMORY_CONTEXT_CHECKING\n #endif\n\n-- \n2.43.0\n\nSamuel Marks, PhD\nhttps://linkedin.com/in/samuelmarks\n\n\n",
"msg_date": "Fri, 23 Aug 2024 16:09:22 -0500",
"msg_from": "Samuel Marks <samuelmarks@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Guard `CLOBBER_FREED_MEMORY` & `MEMORY_CONTEXT_CHECKING`"
},
{
"msg_contents": "Samuel Marks <samuelmarks@gmail.com> writes:\n> Subject: [PATCH] [src/include/pg_config_manual.h] Guard `CLOBBER_FREED_MEMORY`\n> & `MEMORY_CONTEXT_CHECKING`\n\nWhy is this a good idea?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Aug 2024 17:11:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Guard `CLOBBER_FREED_MEMORY` & `MEMORY_CONTEXT_CHECKING`"
},
{
"msg_contents": "It will resolve the large number of these warnings from\nhttps://github.com/pgcentralfoundation/pgrx/blob/6dfb9d1/cargo-pgrx/src/command/init.rs#L411-L412:\n\n<command-line>: note: this is the location of the previous definition\n./../../src/include/pg_config_manual.h:274: warning:\n\"MEMORY_CONTEXT_CHECKING\" redefined\n | #define MEMORY_CONTEXT_CHECKING\n |\n <command-line>: note: this is the location of the previous definition\n In file included from ../../../src/include/c.h:55,\n from ../../../src/include/postgres.h:46,\n from xactdesc.c:15:\n ../../../src/include/pg_config_manual.h:265: warning:\n\"CLOBBER_FREED_MEMORY\" redefined\n | #define CLOBBER_FREED_MEMORY\n\n\n\n\n\n\nand yes will be sending them a patch also. But there's no harm in not\nredefining symbols, so not sure why this is a controversial patch.\n\nOn Fri, Aug 23, 2024 at 4:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Samuel Marks <samuelmarks@gmail.com> writes:\n> > Subject: [PATCH] [src/include/pg_config_manual.h] Guard `CLOBBER_FREED_MEMORY`\n> > & `MEMORY_CONTEXT_CHECKING`\n>\n> Why is this a good idea?\n>\n> regards, tom lane\n\n\n",
"msg_date": "Fri, 23 Aug 2024 16:25:54 -0500",
"msg_from": "Samuel Marks <samuelmarks@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Guard `CLOBBER_FREED_MEMORY` & `MEMORY_CONTEXT_CHECKING`"
},
{
"msg_contents": "Samuel Marks <samuelmarks@gmail.com> writes:\n> It will resolve the large number of these warnings from\n> https://github.com/pgcentralfoundation/pgrx/blob/6dfb9d1/cargo-pgrx/src/command/init.rs#L411-L412:\n\nHmm, that seems like their problem not ours. It's not very clear\nto me why they'd want to force these flags from the compiler\ncommand line in the first place, but if they do they should be\nconsistent with the more usual ways to set them.\n\n> and yes will be sending them a patch also. But there's no harm in not\n> redefining symbols, so not sure why this is a controversial patch.\n\nThe reason I'm resistant to changing it is that the code you want\nto touch has been unchanged since 2003 in the first case, and 2013\nin the second. It's fairly unclear what external code might have\ngrown dependencies on the current behavior, but with that much\nhistory I'm not eager to bet that the answer is \"none\". Also,\nthe present setup makes it clear that you are supposed to test\n\"#ifdef CLOBBER_FREED_MEMORY\" not \"#if CLOBBER_FREED_MEMORY\".\nIf we stop locking down the expected contents of the macro, bugs\nof that sort could sneak in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Aug 2024 17:39:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Guard `CLOBBER_FREED_MEMORY` & `MEMORY_CONTEXT_CHECKING`"
},
{
"msg_contents": "Hmm… so you're worried about people setting the definition to 0 for\ndisabled and nonzero || present for enabled? - I suppose all the\n`#define`s in PostgreSQL could be guarded and checked for both\npresence and either empty value or value. But maybe you just don't\nwant to touch this side of the codebase?\n\nhttps://gcc.gnu.org/onlinedocs/cpp/If.html\nhttps://learn.microsoft.com/en-us/cpp/preprocessor/hash-if-hash-elif-hash-else-and-hash-endif-directives-c-cpp?view=msvc-170\n(assuming similar on clang and other compilers; I'm sure this\nbehaviour predated C89)\n\nIt would be odd for people to depend on behaviour like not defining\nCLOBBER_FREED_MEMORY at the CPPFLAGS level. Anyway, I assume it's your\ncall, so should I withdraw this PATCH then? - Or make a more\ncomprehensive PATCH checking for definition and (emptiness or\nnonzero)?\n\nPS: I did send through a PR to that\nbuild-PostgreSQL-extensions-with-Rust project\nhttps://github.com/pgcentralfoundation/pgrx/pull/1826\n\nSamuel Marks, PhD\nhttps://linkedin.com/in/samuelmarks\n\nOn Fri, Aug 23, 2024 at 4:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Samuel Marks <samuelmarks@gmail.com> writes:\n> > It will resolve the large number of these warnings from\n> > https://github.com/pgcentralfoundation/pgrx/blob/6dfb9d1/cargo-pgrx/src/command/init.rs#L411-L412:\n>\n> Hmm, that seems like their problem not ours. It's not very clear\n> to me why they'd want to force these flags from the compiler\n> command line in the first place, but if they do they should be\n> consistent with the more usual ways to set them.\n>\n> > and yes will be sending them a patch also. But there's no harm in not\n> > redefining symbols, so not sure why this is a controversial patch.\n>\n> The reason I'm resistant to changing it is that the code you want\n> to touch has been unchanged since 2003 in the first case, and 2013\n> in the second. It's fairly unclear what external code might have\n> grown dependencies on the current behavior, but with that much\n> history I'm not eager to bet that the answer is \"none\". Also,\n> the present setup makes it clear that you are supposed to test\n> \"#ifdef CLOBBER_FREED_MEMORY\" not \"#if CLOBBER_FREED_MEMORY\".\n> If we stop locking down the expected contents of the macro, bugs\n> of that sort could sneak in.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Fri, 23 Aug 2024 17:15:21 -0500",
"msg_from": "Samuel Marks <samuelmarks@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Guard `CLOBBER_FREED_MEMORY` & `MEMORY_CONTEXT_CHECKING`"
},
{
"msg_contents": "> Hmm, that seems like their problem not ours. It's not very clear\n> to me why they'd want to force these flags from the compiler\n> command line in the first place, but if they do they should be\n> consistent with the more usual ways to set them.\n\nThere is no particular need to merge this patch for our sakes, as\nthe only reason it is this way was haste to build valgrind support.\nClang often finds some issue with the Postgres code style, so\nI ignore its lint output, which is only regurgitated for errors anyway.\nAt least, in most build configurations. Happy to take any patches\nfor build stuff.\n\n\n",
"msg_date": "Fri, 23 Aug 2024 21:55:43 -0700",
"msg_from": "Jubilee Young <workingjubilee@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Guard `CLOBBER_FREED_MEMORY` & `MEMORY_CONTEXT_CHECKING`"
}
]
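For illustration, here is a minimal standalone C sketch of the `!defined()` guard idiom the patch in the thread above proposes, using a hypothetical macro name (MY_DEBUG_FLAG) rather than the real CLOBBER_FREED_MEMORY / MEMORY_CONTEXT_CHECKING symbols, and assuming the flag may also arrive via the compiler command line (the pgrx scenario described in the thread):

/* guard_demo.c -- build with "cc guard_demo.c" or "cc -DMY_DEBUG_FLAG guard_demo.c".
 * MY_DEBUG_FLAG is a placeholder, not a real PostgreSQL macro. */
#include <stdio.h>

/* Unguarded form: a plain "#define MY_DEBUG_FLAG" here would trigger a
 * "macro redefined" warning when -DMY_DEBUG_FLAG is also passed on the
 * command line.  Guarded form, as in the proposed patch: define the
 * macro only when it is not already defined. */
#if !defined(MY_DEBUG_FLAG)
#define MY_DEBUG_FLAG
#endif

int main(void)
{
    /* Test presence with #ifdef, not value with #if: the macro expands to
     * nothing, so "#if MY_DEBUG_FLAG" would not even preprocess, which is
     * the misuse Tom Lane's reply warns about. */
#ifdef MY_DEBUG_FLAG
    puts("MY_DEBUG_FLAG is enabled");
#endif
    return 0;
}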
[
{
"msg_contents": "Here is a patch to replace a getpwuid() call in the backend, for \nthread-safety.\n\nThis is AFAICT the only call in the getpw*() family that needs to be \ndealt with.\n\n(There is also a getgrnam() call, but that is called very early in the \npostmaster, before multiprocessing, so we can leave that as is.)\n\nThe solution here is actually quite attractive: We can replace the \ngetpwuid() call by the existing wrapper function pg_get_user_name(), \nwhich internally uses getpwuid_r(). This also makes the code a bit \nsimpler. The same function is already used in libpq for a purpose that \nmirrors the backend use, so it's also nicer to use the same function for \nconsistency.",
"msg_date": "Sat, 24 Aug 2024 10:42:53 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "thread-safety: getpwuid_r()"
},
{
"msg_contents": "On 24/08/2024 11:42, Peter Eisentraut wrote:\n> Here is a patch to replace a getpwuid() call in the backend, for\n> thread-safety.\n> \n> This is AFAICT the only call in the getpw*() family that needs to be\n> dealt with.\n> \n> (There is also a getgrnam() call, but that is called very early in the\n> postmaster, before multiprocessing, so we can leave that as is.)\n> \n> The solution here is actually quite attractive: We can replace the\n> getpwuid() call by the existing wrapper function pg_get_user_name(),\n> which internally uses getpwuid_r(). This also makes the code a bit\n> simpler. The same function is already used in libpq for a purpose that\n> mirrors the backend use, so it's also nicer to use the same function for\n> consistency.\n\nMakes sense.\n\nThe temporary buffers are a bit funky. pg_get_user_name() internally \nuses a BUFSIZE-sized area to hold the result of getpwuid_r(). If the \npg_get_user_name() caller passes a buffer smaller than BUFSIZE, the user \nid might get truncated. I don't think that's a concern on any real-world \nsystem, and the callers do pass a large-enough buffer so truncation \ncan't happen. At a minimum, it would be good to add a comment to \npg_get_user_name() along the lines of \"if 'buflen' is smaller than \nBUFSIZE, the result might be truncated\".\n\nCome to think of it, the pg_get_user_name() function is just a thin \nwrapper around getpwuid_r(). It doesn't provide a lot of value. Perhaps \nwe should remove pg_get_user_name() and pg_get_user_home_dir() \naltogether and call getpwuid_r() directly.\n\nBut no objection to committing this as it is, either.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Sat, 24 Aug 2024 16:55:06 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: thread-safety: getpwuid_r()"
},
{
"msg_contents": "On 24.08.24 15:55, Heikki Linnakangas wrote:\n> Come to think of it, the pg_get_user_name() function is just a thin \n> wrapper around getpwuid_r(). It doesn't provide a lot of value. Perhaps \n> we should remove pg_get_user_name() and pg_get_user_home_dir() \n> altogether and call getpwuid_r() directly.\n\nYeah, that seems better. These functions are somewhat strangely \ndesigned and as you described have faulty error handling. By calling \ngetpwuid_r() directly, we can handle the errors better and the code \nbecomes more transparent. (There used to be a lot more interesting \nportability complications in that file, but those are long gone.)\n\nI tried to be overly correct by using sysconf(_SC_GETPW_R_SIZE_MAX) to \nget the buffer size, but that doesn't work on FreeBSD. All the OS where \nI could find the source code internally use 1024 as the suggested buffer \nsize, so I just ended up hardcoding that. This should be no worse than \nwhat the code is currently handling.",
"msg_date": "Mon, 26 Aug 2024 19:38:59 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: thread-safety: getpwuid_r()"
},
{
"msg_contents": "On 26/08/2024 20:38, Peter Eisentraut wrote:\n> On 24.08.24 15:55, Heikki Linnakangas wrote:\n>> Come to think of it, the pg_get_user_name() function is just a thin \n>> wrapper around getpwuid_r(). It doesn't provide a lot of value. \n>> Perhaps we should remove pg_get_user_name() and pg_get_user_home_dir() \n>> altogether and call getpwuid_r() directly.\n> \n> Yeah, that seems better. These functions are somewhat strangely \n> designed and as you described have faulty error handling. By calling \n> getpwuid_r() directly, we can handle the errors better and the code \n> becomes more transparent. (There used to be a lot more interesting \n> portability complications in that file, but those are long gone.)\n\nNew patch looks good to me, thanks!\n\n> I tried to be overly correct by using sysconf(_SC_GETPW_R_SIZE_MAX) to \n> get the buffer size, but that doesn't work on FreeBSD. All the OS where \n> I could find the source code internally use 1024 as the suggested buffer \n> size, so I just ended up hardcoding that. This should be no worse than \n> what the code is currently handling.\n\nMaybe add a brief comment on that.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 20:54:13 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: thread-safety: getpwuid_r()"
},
{
"msg_contents": "On 26.08.24 19:54, Heikki Linnakangas wrote:\n> On 26/08/2024 20:38, Peter Eisentraut wrote:\n>> On 24.08.24 15:55, Heikki Linnakangas wrote:\n>>> Come to think of it, the pg_get_user_name() function is just a thin \n>>> wrapper around getpwuid_r(). It doesn't provide a lot of value. \n>>> Perhaps we should remove pg_get_user_name() and \n>>> pg_get_user_home_dir() altogether and call getpwuid_r() directly.\n>>\n>> Yeah, that seems better. These functions are somewhat strangely \n>> designed and as you described have faulty error handling. By calling \n>> getpwuid_r() directly, we can handle the errors better and the code \n>> becomes more transparent. (There used to be a lot more interesting \n>> portability complications in that file, but those are long gone.)\n> \n> New patch looks good to me, thanks!\n\ncommitted\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 09:28:21 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: thread-safety: getpwuid_r()"
}
]
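A minimal standalone sketch of the approach the committed patch in the thread above settles on: calling getpwuid_r() directly with a caller-provided, fixed 1024-byte buffer (the size discussed in the thread) instead of going through the pg_get_user_name() wrapper. This is an illustration only, not the PostgreSQL code itself; the real patch's error reporting differs.

/* getpwuid_r_demo.c -- standalone illustration, not PostgreSQL sources */
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct passwd pwdbuf;
    struct passwd *pw = NULL;
    char buf[1024];             /* fixed buffer size, as discussed above */
    int rc;

    /* Thread-safe lookup: the result is written into caller-owned storage
     * instead of a static buffer shared by all callers. */
    rc = getpwuid_r(getuid(), &pwdbuf, buf, sizeof(buf), &pw);
    if (rc != 0)
        fprintf(stderr, "getpwuid_r() failed: %s\n", strerror(rc));
    else if (pw == NULL)
        fprintf(stderr, "no passwd entry for the current user\n");
    else
        printf("user=%s home=%s\n", pw->pw_name, pw->pw_dir);
    return (rc == 0 && pw != NULL) ? 0 : 1;
}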
[
{
"msg_contents": "The list of acknowledgments for the PG17 release notes has been \ncommitted. You can see it here:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/release-17.sgml;h=08a479807ca2933668dede22e4e6f464b937ee45;hb=refs/heads/REL_17_STABLE#l3229\n\nAs usual, please check for problems such as wrong sorting, duplicate \nnames in different variants, or names in the wrong order etc.\n\n\n",
"msg_date": "Sat, 24 Aug 2024 16:27:23 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "list of acknowledgments for PG17"
},
{
"msg_contents": "On Sat, Aug 24, 2024 at 11:27 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> As usual, please check for problems such as wrong sorting, duplicate\n> names in different variants, or names in the wrong order etc.\n\nI think Japanese names are in the right order except “Sutou Kouhei”.\nI am 100% sure his given name is Kouhei.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 27 Aug 2024 18:26:04 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG17"
},
{
"msg_contents": "On 27.08.24 11:26, Etsuro Fujita wrote:\n> On Sat, Aug 24, 2024 at 11:27 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> As usual, please check for problems such as wrong sorting, duplicate\n>> names in different variants, or names in the wrong order etc.\n> \n> I think Japanese names are in the right order except “Sutou Kouhei”.\n> I am 100% sure his given name is Kouhei.\n\nFixed. Thank you for checking.\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 08:36:01 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG17"
},
{
"msg_contents": "On Sat, 24 Aug 2024, 16:27 Peter Eisentraut, <peter@eisentraut.org> wrote:\n>\n> The list of acknowledgments for the PG17 release notes has been\n> committed. You can see it here:\n>\n>\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/release-17.sgml;h=08a479807ca2933668dede22e4e6f464b937ee45;hb=refs/heads/REL_17_STABLE#l3229\n>\n> As usual, please check for problems such as wrong sorting, duplicate\n> names in different variants, or names in the wrong order etc.\n\nI've done a small check of the list (search for first names in the\ndocument), and found the following curiosities:\n\nI see various cases where what seem to be names with Chinese origin are\nstyled differently between this list and the feature list. Specifically,\n\"Acknowledged\", vs \"Feature List\":\n\n\"Mingli Zhang\", vs \"Zhang Mingli\"\n\"Zhijie Hou\", vs \"Hou Zhijie\" (cc-ed)\n\"Zongliang Quan\", vs \" Quan Zongliang\"\n\nI don't know if one order is preferred over the other, but it seems\ninconsistent. Hou, do you have more info on this?\n\nNext, I noticed some other inconsistent names:\n\n\"Hubert Lubaczewski\", vs \"Hubert Depesz Lubaczewski\"\n\"Paul Jungwirth\", vs \"Paul A. Jungwirth\"\n\nSidenote: Paul Amondson is Amonson, Paul D on this list, and is not a\nduplicate entry for Paul A. Jungwirth as might be suspected based on\ninitials.\n\n\nKind regards,\n\nMatthias van de Meent\n\nOn Sat, 24 Aug 2024, 16:27 Peter Eisentraut, <peter@eisentraut.org> wrote:\n>\n> The list of acknowledgments for the PG17 release notes has been\n> committed. You can see it here:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/release-17.sgml;h=08a479807ca2933668dede22e4e6f464b937ee45;hb=refs/heads/REL_17_STABLE#l3229\n>\n> As usual, please check for problems such as wrong sorting, duplicate\n> names in different variants, or names in the wrong order etc.\nI've done a small check of the list (search for first names in the document), and found the following curiosities:\nI see various cases where what seem to be \nnames \n\nwith Chinese origin are styled differently between this list and the feature list. Specifically, \"Acknowledged\", vs \"Feature List\":\"Mingli Zhang\", vs \"Zhang Mingli\"\n\"Zhijie Hou\", vs \n\"Hou Zhijie\" (cc-ed) \n\"Zongliang Quan\", vs \"\nQuan Zongliang\" I don't know if one order is preferred over the other, but it seems inconsistent. Hou, do you have more info on this?Next, I noticed some other inconsistent names:\n\"Hubert Lubaczewski\", vs \"Hubert Depesz Lubaczewski\"\"Paul Jungwirth\", vs \"Paul A. Jungwirth\" Sidenote: Paul Amondson is Amonson, Paul D on this list, and is not a duplicate entry for Paul A. Jungwirth as might be suspected based on initials.\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Tue, 3 Sep 2024 19:59:30 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of acknowledgments for PG17"
},
{
"msg_contents": "On 03.09.24 19:59, Matthias van de Meent wrote:\n> I've done a small check of the list (search for first names in the \n> document), and found the following curiosities:\n> \n> I see various cases where what seem to be names with Chinese origin are \n> styled differently between this list and the feature list. Specifically, \n> \"Acknowledged\", vs \"Feature List\":\n> \n> \"Mingli Zhang\", vs \"Zhang Mingli\"\n> \"Zhijie Hou\", vs \"Hou Zhijie\" (cc-ed)\n> \"Zongliang Quan\", vs \" Quan Zongliang\"\n> \n> I don't know if one order is preferred over the other, but it seems \n> inconsistent. Hou, do you have more info on this?\n\nThe list of acknowledgments consistently uses given name followed by \nfamily name. I suspect the release notes are not as rigid about that.\n\n> Next, I noticed some other inconsistent names:\n> \n> \"Hubert Lubaczewski\", vs \"Hubert Depesz Lubaczewski\"\n> \"Paul Jungwirth\", vs \"Paul A. Jungwirth\"\n\nIn some cases like that, I default to using the name variant that was \nused in past release notes, so it's consistent over time.\n\n\n\n",
"msg_date": "Wed, 4 Sep 2024 10:48:10 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: list of acknowledgments for PG17"
}
]
[
{
"msg_contents": "Hi,\n\nusing `PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, \ncompiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit`, I've observed the \nfollowing behavior:\n\n– keep in mind that this example is as simplified as possible, the \noriginal query involves foreign tables, and the failure to propagate / \npush down the condition results in a query plan that basically tries to \ndownload the complete foreign table, which is not a feasible execution \nstrategy:\n\nSetup:\n\nCREATE TABLE tbl1 (id INTEGER GENERATED ALWAYS AS IDENTITY, site_id \nINTEGER NOT NULL, data TEXT);\nCREATE TABLE tbl2 (id INTEGER GENERATED ALWAYS AS IDENTITY, site_id \nINTEGER NOT NULL, data TEXT);\nCREATE INDEX ON tbl1 (site_id);\nCREATE INDEX ON tbl2 (site_id);\n\nWorking queries:\n\nSELECT * FROM tbl1 WHERE tbl1.site_id = 1; -- \"trivial condition\"\nSELECT * FROM tbl2 WHERE tbl2.site_id = 1;\nSELECT * FROM tbl1 WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL; -- \n\"non-trivial condition\"\nSELECT * FROM tbl2 WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL;\n\n1) Exemplary Query Plan:\n\n# EXPLAIN SELECT * FROM tbl2 WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl2 (cost=8.40..19.08 rows=12 width=40)\n Recheck Cond: ((site_id = 1) OR (site_id IS NULL))\n -> BitmapOr (cost=8.40..8.40 rows=12 width=0)\n -> Bitmap Index Scan on tbl2_site_id_idx (cost=0.00..4.20 \nrows=6 width=0)\n Index Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl2_site_id_idx (cost=0.00..4.20 \nrows=6 width=0)\n Index Cond: (site_id IS NULL)\n(7 rows)\n\nThe key takeaway is, that the index can be used, because the condition \nis propagated deep enough.\n\n2) Still working example:\n\n# EXPLAIN SELECT * FROM tbl1 LEFT JOIN tbl2 ON tbl2.site_id = \ntbl1.site_id WHERE tbl1.site_id = 1;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=8.40..27.80 rows=36 width=80)\n -> Bitmap Heap Scan on tbl1 (cost=4.20..13.67 rows=6 width=40)\n Recheck Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl1_site_id_idx (cost=0.00..4.20 \nrows=6 width=0)\n Index Cond: (site_id = 1)\n -> Materialize (cost=4.20..13.70 rows=6 width=40)\n -> Bitmap Heap Scan on tbl2 (cost=4.20..13.67 rows=6 width=40)\n Recheck Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl2_site_id_idx \n(cost=0.00..4.20 rows=6 width=0)\n Index Cond: (site_id = 1)\n(10 rows)\n\nThe condition is propagated into BOTH branches of the join. 
The join \ncould also be an INNER join and might also be realized as a Merge Join \nor Hash Join: they all behave the same.\n\n3) Problematic example:\n\n# EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id = tbl1.site_id \nWHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Hash Join (cost=19.23..46.45 rows=72 width=80)\n Hash Cond: (tbl2.site_id = tbl1.site_id)\n -> Seq Scan on tbl2 (cost=0.00..22.00 rows=1200 width=40)\n -> Hash (cost=19.08..19.08 rows=12 width=40)\n -> Bitmap Heap Scan on tbl1 (cost=8.40..19.08 rows=12 width=40)\n Recheck Cond: ((site_id = 1) OR (site_id IS NULL))\n -> BitmapOr (cost=8.40..8.40 rows=12 width=0)\n -> Bitmap Index Scan on tbl1_site_id_idx \n(cost=0.00..4.20 rows=6 width=0)\n Index Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl1_site_id_idx \n(cost=0.00..4.20 rows=6 width=0)\n Index Cond: (site_id IS NULL)\n(11 rows)\n\nNow, a full seq scan used for tbl2, the condition is only pushed down on \nONE side of the JOIN!\n(with `WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL`, the Seq Scan \nwould have been on tbl1... [not so easily demostrated w/ LEFT JOINs]).\nAlso, `ON tbl1.site_id IS NOT DISTINCT FROM tbl2.site_id` does not help,\n\nThe weird thing is: The subqueries on both sides of the join are \nperfectly capable of accepting/using the \"non-trivial\" condition, as \ndemonstrated in 1), and JOINs are generally able to propagate conditions \nto both sides, as demonstrated in 2).\n\nIs there a magic knob to force postgres to do the right thing, or is \nthis basically a bug in the query planner?\n\n Tobias\n\n\n\n\n\n\n",
"msg_date": "Sun, 25 Aug 2024 17:10:26 +0200",
"msg_from": "Tobias Hoffmann <ldev-list@thax.hardliners.org>",
"msg_from_op": true,
"msg_subject": "Non-trivial condition is only propagated to one side of JOIN"
},
{
"msg_contents": "On Sunday, August 25, 2024, Tobias Hoffmann <ldev-list@thax.hardliners.org>\nwrote:\n\n>\n> 3) Problematic example:\n>\n> # EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id = tbl1.site_id\n> WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL;\n\n\nThe “is null” predicate in this query is doing nothing as your next comment\nalludes to; you will produce no rows out of the join with a null site_id\ndue to the use of the equals operator in the join.\n\n\n>\n> Also, `ON tbl1.site_id IS NOT DISTINCT FROM tbl2.site_id` does not help,\n>\n\nOthers may correct me but I’m guessing that indeed the optimizer has a gap\nhere that could be filled in, it’s just it feels like adding code to deal\nwith broken queries so isn’t overly motivated to work on. Joining using\ndistinct instead of equality is uncommon, since nearly all models join\nprimary keys to foreign keys and both of those are almost always non-null.\n\nDavid J.\n\nOn Sunday, August 25, 2024, Tobias Hoffmann <ldev-list@thax.hardliners.org> wrote:\n3) Problematic example:\n\n# EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id = tbl1.site_id WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL; The “is null” predicate in this query is doing nothing as your next comment alludes to; you will produce no rows out of the join with a null site_id due to the use of the equals operator in the join. \nAlso, `ON tbl1.site_id IS NOT DISTINCT FROM tbl2.site_id` does not help,\nOthers may correct me but I’m guessing that indeed the optimizer has a gap here that could be filled in, it’s just it feels like adding code to deal with broken queries so isn’t overly motivated to work on. Joining using distinct instead of equality is uncommon, since nearly all models join primary keys to foreign keys and both of those are almost always non-null.David J.",
"msg_date": "Sun, 25 Aug 2024 08:35:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Non-trivial condition is only propagated to one side of JOIN"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Sunday, August 25, 2024, Tobias Hoffmann <ldev-list@thax.hardliners.org>\n> wrote:\n>> 3) Problematic example:\n>> \n>> # EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id = tbl1.site_id\n>> WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL;\n\n> The “is null” predicate in this query is doing nothing as your next comment\n> alludes to; you will produce no rows out of the join with a null site_id\n> due to the use of the equals operator in the join.\n\nIndeed. This WHERE clause might be useful with a left join to tbl1,\nbut then the IS NULL means something totally different and can *not*\nbe pushed down.\n\n> Others may correct me but I’m guessing that indeed the optimizer has a gap\n> here that could be filled in, it’s just it feels like adding code to deal\n> with broken queries so isn’t overly motivated to work on.\n\nThe short answer is that we expend quite a lot of effort to deduce\nimplied equality clauses from combinations of equality clauses;\nthat is, given \"WHERE A = B AND B = C\" we can deduce \"A = C\" as well.\nNo such deductions can be made from equality clauses that are buried\nunder an OR, because they might not be true for every join row.\n\nMaybe some machinery could be built that would do something useful\nwith an OR clause of this form, but I doubt it would be useful\noften enough to justify the development effort and additional\nplanner cycles.\n\nAn important point here is that \"WHERE A = B AND p(A)\" does not permit\nus to deduce \"p(B)\" for arbitrary conditions p(), because we have some\nequality operators that will return true for values that sort equal\nbut are distinguishable in other ways. (Handy example: in float8\narithmetic, zero and minus zero compare equal, as required by the IEEE\nfloat spec.) We presently don't assume that this works for anything\nother than transitive equality across equality operators of a single\nbtree operator family. In particular, I don't think we could assume\nit for the given example, because \"WHERE A = B AND A IS NULL\" is\ntautologically false. You can deduce anything from a falsehood,\nso I'm not entirely sure that the proposed optimization is even\nlogically sound.\n\n> Joining using\n> distinct instead of equality is uncommon, since nearly all models join\n> primary keys to foreign keys and both of those are almost always non-null.\n\nYeah. I'll concede that we probably should work harder on building\nout planner and executor support for IS NOT DISTINCT FROM. But again,\nit just seems like a case that is not worth spending large amounts of\ndevelopment time on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Aug 2024 12:52:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Non-trivial condition is only propagated to one side of JOIN"
},
{
"msg_contents": "On 25/08/2024 17:35, David G. Johnston wrote:\n> On Sunday, August 25, 2024, Tobias Hoffmann \n> <ldev-list@thax.hardliners.org> wrote:\n>\n>\n> 3) Problematic example:\n>\n> # EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id =\n> tbl1.site_id WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL; \n>\n>\n> The “is null” predicate in this query is doing nothing as your next \n> comment alludes to; you will produce no rows out of the join with a \n> null site_id due to the use of the equals operator in the join.\n\n\nWell, that's why I said: \"keep in mind that this example is as \nsimplified as possible\"...\nEven though `tbl1.site_id = 1` is – in this case – completely equivalent \nto `tbl1.site_id = 1 OR tbl1.site_id IS NULL` – the first one is \ncompletely pushed down, but the second is not.\n\nA more complete example might look more like this:\n\nCREATE VIEW \"subview1\" AS\n SELECT tbl1.site_id, ... JOIN ... ON tbl1.site_id = tbl2.site_id \nWHERE ...;\n\nCREATE VIEW \"view1\" AS\n SELECT site_id, ... FROM subview1 -- maybe even: WHERE site_id IS \nNOT NULL\n UNION ALL\n SELECT null, ...;\n\nSELECT * FROM view1 WHERE (site_id = 1 OR site_id IS NULL);\n\nThe reason, why the outer query would have a more complicated condition \nmight have nothing to do with the subquery containing the JOIN.\n(This is also not a `UNION ALL` special case: `site_id IS NULL` could \nalso be generated by a LEFT JOIN, e.g.)\n\nBut not pushing down the condition has the grave impact, that those - \notherwise working VIEWs (i.e. subview1) become \"unfixably\" broken, for \ncertain WHERE-conditions on the outside.\n\nAnother reason why I said `site_id INTEGER NOT NULL` and `IS NOT \nDISTINCT FROM`, is that there might be some mathematical reason I'm not \nyet aware of where the propagation would not be sound, but which would \nnot apply for arbitrary site_id [nullable, ...].\nMy primary goal is to find at least *some* way to get the condition \npushed further in to avoid the full table scan, and to not have to \ncompletely rewrite all the VIEWs into a single big query, where I could \ninject the site_id parameter (e.g. \"$1\") in multiple places as needed...\n\n Tobias\n\n\n\n\n\n\n\nOn 25/08/2024 17:35, David G. Johnston\n wrote:\n\n\n\n On Sunday, August 25, 2024, Tobias Hoffmann <ldev-list@thax.hardliners.org>\n wrote:\n\n 3) Problematic example:\n\n # EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id =\n tbl1.site_id WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL; \n\n\nThe “is null” predicate in this query is doing nothing as\n your next comment alludes to; you will produce no rows out of\n the join with a null site_id due to the use of the equals\n operator in the join.\n\n\n\nWell, that's why I said: \"keep in mind that this example is as\n simplified as possible\"...\n Even though `tbl1.site_id = 1` is – in this case – completely\n equivalent to `tbl1.site_id = 1 OR tbl1.site_id IS NULL` – the\n first one is completely pushed down, but the second is not.\nA more complete example might look more like this:\nCREATE VIEW \"subview1\" AS\n SELECT tbl1.site_id, ... JOIN ... ON tbl1.site_id = tbl2.site_id\n WHERE ...; \n\nCREATE VIEW \"view1\" AS\n SELECT site_id, ... 
FROM subview1 -- maybe even: WHERE site_id\n IS NOT NULL\n UNION ALL\n SELECT null, ...;\n\n SELECT * FROM view1 WHERE (site_id = 1 OR site_id IS NULL);\nThe reason, why the outer query would have a more complicated\n condition might have nothing to do with the subquery containing\n the JOIN.\n (This is also not a `UNION ALL` special case: `site_id IS NULL`\n could also be generated by a LEFT JOIN, e.g.)\n\nBut not pushing down the condition has the grave impact, that\n those - otherwise working VIEWs (i.e. subview1) become \"unfixably\"\n broken, for certain WHERE-conditions on the outside.\nAnother reason why I said `site_id INTEGER NOT NULL` and `IS NOT\n DISTINCT FROM`, is that there might be some mathematical reason\n I'm not yet aware of where the propagation would not be sound, but\n which would not apply for arbitrary site_id [nullable, ...].\n My primary goal is to find at least *some* way to get the\n condition pushed further in to avoid the full table scan, and to\n not have to completely rewrite all the VIEWs into a single big\n query, where I could inject the site_id parameter (e.g. \"$1\") in\n multiple places as needed...\n\n Tobias",
"msg_date": "Sun, 25 Aug 2024 19:04:32 +0200",
"msg_from": "Tobias Hoffmann <ldev-list@thax.hardliners.org>",
"msg_from_op": true,
"msg_subject": "Re: Non-trivial condition is only propagated to one side of JOIN"
},
{
"msg_contents": "Tobias Hoffmann <ldev-list@thax.hardliners.org> writes:\n> A more complete example might look more like this:\n\n> CREATE VIEW \"subview1\" AS\n> SELECT tbl1.site_id, ... JOIN ... ON tbl1.site_id = tbl2.site_id \n> WHERE ...;\n\n> CREATE VIEW \"view1\" AS\n> SELECT site_id, ... FROM subview1 -- maybe even: WHERE site_id IS \n> NOT NULL\n> UNION ALL\n> SELECT null, ...;\n\n> SELECT * FROM view1 WHERE (site_id = 1 OR site_id IS NULL);\n\nFor this particular case, you could probably get somewhere by\nwriting\n\nSELECT * FROM view1 WHERE site_id = 1\nUNION ALL\nSELECT * FROM view1 WHERE site_id IS NULL;\n\nsince the sets of rows satisfying those two WHERE conditions\nmust be disjoint. (I recall working on a patch that essentially\ntried to do that transformation automatically, but it eventually\nfailed because things get too messy if the row sets might not\nbe disjoint.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Aug 2024 13:28:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Non-trivial condition is only propagated to one side of JOIN"
},
{
"msg_contents": "On 25/08/2024 19:28, Tom Lane wrote:\n> For this particular case, you could probably get somewhere by\n> writing\n>\n> SELECT * FROM view1 WHERE site_id = 1\n> UNION ALL\n> SELECT * FROM view1 WHERE site_id IS NULL;\n>\nThank you for your suggestion, Tom.\n\nUnfortunately, as I now understand, nothing *except* `var = const` can \never be propagated to the second branch of the join.\nIn particular, even just `WHERE site_id IS NULL` no longer propagates \nlike `WHERE site_id = 1` does.\nOther cases, that do not propagate, include `WHERE site_id IN (1, 2)`.\n\nI'll probably have to find another workaround to my current problem.\n\n----\n\nMore generally, I think that the currently possible set is very \nrestrictive and affects not just edge-cases;\n\nmy SQL-Engine-implementation-fu is far from good enough for the \nnecessary changes, though.\n\nHere are some more thoughts:\n\n> Maybe some machinery could be built that would do something useful\n> with an OR clause of this form,\n>\n> [...]\n> An important point here is that \"WHERE A = B AND p(A)\" does not permit\n> us to deduce \"p(B)\" for arbitrary conditions p(),\n\nIMO the relevant equality should be the `ON tbl2.site_id IS NOT DISTINCT \nFROM tbl1.site_id` (resp. `ON tbl2.site_id = tbl1.site_id`, but nulls \nprobably need special care), which should allow any predicate `p` only \ndepending on `tbl1.site_id` (i.e.`WHERE p(tbl1.site_id)`, from \n\"outside\") to be pulled \"inside\" the INNER JOIN or LEFT JOIN, because no \nrow which does not satisfy `p(tbl1.site_id)`, and, via equivalence, \n`p(tbl2.site_id)` could ever be part of the result.\n\nMore specifically, given a WHERE clause in CNF (i.e. `p0(...) AND \np1(...) AND (p2a(...) OR p2b(...)) AND ...`), every top-level term which \nonly uses variables which are deemed equivalent, should be allowed to \npropagate.\n\nIf this is too difficult, not just single constants (`var = const`), but \nsets of constants (`var = ANY(...)`) and/or especially `var IS NULL`) \nshould be considered.\n\nJust my 2ct...\n\n Tobias\n\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 09:42:33 +0200",
"msg_from": "Tobias Hoffmann <ldev-list@thax.hardliners.org>",
"msg_from_op": true,
"msg_subject": "Re: Non-trivial condition is only propagated to one side of JOIN"
},
{
"msg_contents": "You must use a where clause on the FDW table or you get a full load/SEQ scan of that table, per documentation.\n\nSelect * is not recommended for FDW tables.\n\nFrom: Tobias Hoffmann <ldev-list@thax.hardliners.org>\nDate: Sunday, August 25, 2024 at 8:10 AM\nTo: \"pgsql-hackers@postgresql.org\" <pgsql-hackers@postgresql.org>\nSubject: Non-trivial condition is only propagated to one side of JOIN\n\nHi, using `PostgreSQL 16. 2 (Debian 16. 2-1. pgdg120+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12. 2. 0-14) 12. 2. 0, 64-bit`, I've observed the following behavior: – keep in mind that this example is as simplified as possible, the original\n\n\nHi,\n\n\n\nusing `PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu,\n\ncompiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit`, I've observed the\n\nfollowing behavior:\n\n\n\n– keep in mind that this example is as simplified as possible, the\n\noriginal query involves foreign tables, and the failure to propagate /\n\npush down the condition results in a query plan that basically tries to\n\ndownload the complete foreign table, which is not a feasible execution\n\nstrategy:\n\n\n\nSetup:\n\n\n\nCREATE TABLE tbl1 (id INTEGER GENERATED ALWAYS AS IDENTITY, site_id\n\nINTEGER NOT NULL, data TEXT);\n\nCREATE TABLE tbl2 (id INTEGER GENERATED ALWAYS AS IDENTITY, site_id\n\nINTEGER NOT NULL, data TEXT);\n\nCREATE INDEX ON tbl1 (site_id);\n\nCREATE INDEX ON tbl2 (site_id);\n\n\n\nWorking queries:\n\n\n\nSELECT * FROM tbl1 WHERE tbl1.site_id = 1; -- \"trivial condition\"\n\nSELECT * FROM tbl2 WHERE tbl2.site_id = 1;\n\nSELECT * FROM tbl1 WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL; --\n\n\"non-trivial condition\"\n\nSELECT * FROM tbl2 WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL;\n\n\n\n1) Exemplary Query Plan:\n\n\n\n# EXPLAIN SELECT * FROM tbl2 WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL;\n\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------\n\n Bitmap Heap Scan on tbl2 (cost=8.40..19.08 rows=12 width=40)\n\n Recheck Cond: ((site_id = 1) OR (site_id IS NULL))\n\n -> BitmapOr (cost=8.40..8.40 rows=12 width=0)\n\n -> Bitmap Index Scan on tbl2_site_id_idx (cost=0.00..4.20\n\nrows=6 width=0)\n\n Index Cond: (site_id = 1)\n\n -> Bitmap Index Scan on tbl2_site_id_idx (cost=0.00..4.20\n\nrows=6 width=0)\n\n Index Cond: (site_id IS NULL)\n\n(7 rows)\n\n\n\nThe key takeaway is, that the index can be used, because the condition\n\nis propagated deep enough.\n\n\n\n2) Still working example:\n\n\n\n# EXPLAIN SELECT * FROM tbl1 LEFT JOIN tbl2 ON tbl2.site_id =\n\ntbl1.site_id WHERE tbl1.site_id = 1;\n\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n\n Nested Loop Left Join (cost=8.40..27.80 rows=36 width=80)\n\n -> Bitmap Heap Scan on tbl1 (cost=4.20..13.67 rows=6 width=40)\n\n Recheck Cond: (site_id = 1)\n\n -> Bitmap Index Scan on tbl1_site_id_idx (cost=0.00..4.20\n\nrows=6 width=0)\n\n Index Cond: (site_id = 1)\n\n -> Materialize (cost=4.20..13.70 rows=6 width=40)\n\n -> Bitmap Heap Scan on tbl2 (cost=4.20..13.67 rows=6 width=40)\n\n Recheck Cond: (site_id = 1)\n\n -> Bitmap Index Scan on tbl2_site_id_idx\n\n(cost=0.00..4.20 rows=6 width=0)\n\n Index Cond: (site_id = 1)\n\n(10 rows)\n\n\n\nThe condition is propagated into BOTH branches of the join. 
The join\n\ncould also be an INNER join and might also be realized as a Merge Join\n\nor Hash Join: they all behave the same.\n\n\n\n3) Problematic example:\n\n\n\n# EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id = tbl1.site_id\n\nWHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL;\n\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------\n\n Hash Join (cost=19.23..46.45 rows=72 width=80)\n\n Hash Cond: (tbl2.site_id = tbl1.site_id)\n\n -> Seq Scan on tbl2 (cost=0.00..22.00 rows=1200 width=40)\n\n -> Hash (cost=19.08..19.08 rows=12 width=40)\n\n -> Bitmap Heap Scan on tbl1 (cost=8.40..19.08 rows=12 width=40)\n\n Recheck Cond: ((site_id = 1) OR (site_id IS NULL))\n\n -> BitmapOr (cost=8.40..8.40 rows=12 width=0)\n\n -> Bitmap Index Scan on tbl1_site_id_idx\n\n(cost=0.00..4.20 rows=6 width=0)\n\n Index Cond: (site_id = 1)\n\n -> Bitmap Index Scan on tbl1_site_id_idx\n\n(cost=0.00..4.20 rows=6 width=0)\n\n Index Cond: (site_id IS NULL)\n\n(11 rows)\n\n\n\nNow, a full seq scan used for tbl2, the condition is only pushed down on\n\nONE side of the JOIN!\n\n(with `WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL`, the Seq Scan\n\nwould have been on tbl1... [not so easily demostrated w/ LEFT JOINs]).\n\nAlso, `ON tbl1.site_id IS NOT DISTINCT FROM tbl2.site_id` does not help,\n\n\n\nThe weird thing is: The subqueries on both sides of the join are\n\nperfectly capable of accepting/using the \"non-trivial\" condition, as\n\ndemonstrated in 1), and JOINs are generally able to propagate conditions\n\nto both sides, as demonstrated in 2).\n\n\n\nIs there a magic knob to force postgres to do the right thing, or is\n\nthis basically a bug in the query planner?\n\n\n\n Tobias\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nYou must use a where clause on the FDW table or you get a full load/SEQ scan of that table, per documentation.\n\nSelect * is not recommended for FDW tables.\n \n\nFrom:\nTobias Hoffmann <ldev-list@thax.hardliners.org>\nDate: Sunday, August 25, 2024 at 8:10 AM\nTo: \"pgsql-hackers@postgresql.org\" <pgsql-hackers@postgresql.org>\nSubject: Non-trivial condition is only propagated to one side of JOIN\n\n\n \n\n\nHi, using `PostgreSQL 16. 2 (Debian\n 16. 2-1. pgdg120+2)\n on x86_64-pc-linux-gnu, compiled by gcc (Debian 12. 2. 0-14)\n 12. 2. 
0,\n 64-bit`, I've observed the following behavior: – keep in mind that this example is as simplified as possible, the original\n\n\n\n\nHi,\n \nusing `PostgreSQL 16.2 (Debian 16.2-1.pgdg120+2) on x86_64-pc-linux-gnu, \ncompiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit`, I've observed the \nfollowing behavior:\n \n– keep in mind that this example is as simplified as possible, the \noriginal query involves foreign tables, and the failure to propagate / \npush down the condition results in a query plan that basically tries to \ndownload the complete foreign table, which is not a feasible execution \nstrategy:\n \nSetup:\n \nCREATE TABLE tbl1 (id INTEGER GENERATED ALWAYS AS IDENTITY, site_id \nINTEGER NOT NULL, data TEXT);\nCREATE TABLE tbl2 (id INTEGER GENERATED ALWAYS AS IDENTITY, site_id \nINTEGER NOT NULL, data TEXT);\nCREATE INDEX ON tbl1 (site_id);\nCREATE INDEX ON tbl2 (site_id);\n \nWorking queries:\n \nSELECT * FROM tbl1 WHERE tbl1.site_id = 1; -- \"trivial condition\"\nSELECT * FROM tbl2 WHERE tbl2.site_id = 1;\nSELECT * FROM tbl1 WHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL; -- \n\"non-trivial condition\"\nSELECT * FROM tbl2 WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL;\n \n1) Exemplary Query Plan:\n \n# EXPLAIN SELECT * FROM tbl2 WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------\n Bitmap Heap Scan on tbl2 (cost=8.40..19.08 rows=12 width=40)\n Recheck Cond: ((site_id = 1) OR (site_id IS NULL))\n -> BitmapOr (cost=8.40..8.40 rows=12 width=0)\n -> Bitmap Index Scan on tbl2_site_id_idx (cost=0.00..4.20 \nrows=6 width=0)\n Index Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl2_site_id_idx (cost=0.00..4.20 \nrows=6 width=0)\n Index Cond: (site_id IS NULL)\n(7 rows)\n \nThe key takeaway is, that the index can be used, because the condition \nis propagated deep enough.\n \n2) Still working example:\n \n# EXPLAIN SELECT * FROM tbl1 LEFT JOIN tbl2 ON tbl2.site_id = \ntbl1.site_id WHERE tbl1.site_id = 1;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=8.40..27.80 rows=36 width=80)\n -> Bitmap Heap Scan on tbl1 (cost=4.20..13.67 rows=6 width=40)\n Recheck Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl1_site_id_idx (cost=0.00..4.20 \nrows=6 width=0)\n Index Cond: (site_id = 1)\n -> Materialize (cost=4.20..13.70 rows=6 width=40)\n -> Bitmap Heap Scan on tbl2 (cost=4.20..13.67 rows=6 width=40)\n Recheck Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl2_site_id_idx \n(cost=0.00..4.20 rows=6 width=0)\n Index Cond: (site_id = 1)\n(10 rows)\n \nThe condition is propagated into BOTH branches of the join. 
The join \ncould also be an INNER join and might also be realized as a Merge Join \nor Hash Join: they all behave the same.\n \n3) Problematic example:\n \n# EXPLAIN SELECT * FROM tbl1 JOIN tbl2 ON tbl2.site_id = tbl1.site_id \nWHERE tbl1.site_id = 1 OR tbl1.site_id IS NULL;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Hash Join (cost=19.23..46.45 rows=72 width=80)\n Hash Cond: (tbl2.site_id = tbl1.site_id)\n -> Seq Scan on tbl2 (cost=0.00..22.00 rows=1200 width=40)\n -> Hash (cost=19.08..19.08 rows=12 width=40)\n -> Bitmap Heap Scan on tbl1 (cost=8.40..19.08 rows=12 width=40)\n Recheck Cond: ((site_id = 1) OR (site_id IS NULL))\n -> BitmapOr (cost=8.40..8.40 rows=12 width=0)\n -> Bitmap Index Scan on tbl1_site_id_idx \n(cost=0.00..4.20 rows=6 width=0)\n Index Cond: (site_id = 1)\n -> Bitmap Index Scan on tbl1_site_id_idx \n(cost=0.00..4.20 rows=6 width=0)\n Index Cond: (site_id IS NULL)\n(11 rows)\n \nNow, a full seq scan used for tbl2, the condition is only pushed down on \nONE side of the JOIN!\n(with `WHERE tbl2.site_id = 1 OR tbl2.site_id IS NULL`, the Seq Scan \nwould have been on tbl1... [not so easily demostrated w/ LEFT JOINs]).\nAlso, `ON tbl1.site_id IS NOT DISTINCT FROM tbl2.site_id` does not help,\n \nThe weird thing is: The subqueries on both sides of the join are \nperfectly capable of accepting/using the \"non-trivial\" condition, as \ndemonstrated in 1), and JOINs are generally able to propagate conditions \nto both sides, as demonstrated in 2).\n \nIs there a magic knob to force postgres to do the right thing, or is \nthis basically a bug in the query planner?\n \n Tobias",
"msg_date": "Mon, 26 Aug 2024 12:58:38 +0000",
"msg_from": "\"Wetmore, Matthew (CTR)\" <Matthew.Wetmore@mdlive.com>",
"msg_from_op": false,
"msg_subject": "Re: Non-trivial condition is only propagated to one side of JOIN"
}
]
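The float8 point in Tom Lane's reply above (zero and minus zero compare equal yet remain distinguishable) is easy to see in a few lines of standalone C. This only illustrates why "A = B AND p(A)" does not license deducing p(B) for arbitrary predicates p(); it says nothing about how the planner itself represents equivalence classes.

/* float_equal_demo.c -- "equal but distinguishable" IEEE 754 values */
#include <math.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double a = 0.0;
    double b = -0.0;

    /* The comparison operator says the two values are equal ... */
    printf("a == b                : %d\n", a == b);

    /* ... yet a predicate can still tell them apart. */
    printf("signbit(a), signbit(b): %d, %d\n", signbit(a) != 0, signbit(b) != 0);
    printf("identical bit pattern : %d\n", memcmp(&a, &b, sizeof(double)) == 0);
    return 0;
}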
[
{
"msg_contents": "Hi all,\n\nWhile working more on the cumulative pgstats and its interactions with\npg_stat_statements, one thing that I have been annoyed with is that\nthe dshash key for variable-numbered stats uses a pair of (Oid dboid,\nOid objoid), mostly to stick with the fact that most of the stats are\ndealing with system objects.\n\nThat's not completely true, though, as statistics can also implement\ntheir own index numbering without storing these numbers to disk, by\ndefining {from,to}_serialized_name. Replication slots do that, so we\nare already considering as OIDs numbers that are not that.\n\nFor pg_stat_statements, one issue with the current pgstats is that we\nwant to use the query ID as hash key, which is 8 bytes, while also\nhaving some knowledge of the database OID because we want to be able\nto clean up stats entries about specific databases.\n\nPlease find attached a patch switching PgStat_HashKey.objoid from an\nOid to uint64 to be able to handle cases of stats that want more\nspace. The size of PgStat_HashKey is increased from 12 to 16 bytes,\nbut with alignment the size of PgStatShared_HashEntry (what's stored\nin the dshash) is unchanged at 32 bytes.\n\nPerhaps what's proposed here is a bad idea for a good reason, and we\ncould just leave with storing 4 bytes of the query ID in the dshash\ninstead of 8. Anyway, we make a lot of efforts to use 8 bytes to\nreduce conflicts with different statements.\n\nAnother thing to note is the change for xl_xact_stats_item, requiring\na bump of XLOG_PAGE_MAGIC. A second thing is pg_stat_have_stats that\nneeds to use a different argument than an OID for the object,\nrequiring a catversion bump.\n\nAn interesting thing is that I have seen ubsan complain about this\npatch, due to the way WAL records xl_xact_commit are built with\nXACT_XINFO_HAS_DROPPED_STATS and parsed as xl_xact_stats_item requires\nan 8-byte alignment now (see pg_waldump TAP reports when using the\nattached), but we don't enforce anything as the data of such WAL\nrecords is added with a simple XLogRegisterData(), like:\n# xactdesc.c:91:28: runtime error: member access within misaligned\naddress 0x5651e996b86c for type 'struct xl_xact_stats_items', which\nrequires 8 byte alignment # 0x5651e996b86c: note: pointer points here\n\nTBH, I've looked at that for quite a bit, thinking about the addition\nof some \"dummy\" member to some of the parsed structures to force some\npadding, or play with the alignment macros, or for some alignment when\ninserting the record, or looked at pg_attribute_aligned().\n\nFirst I'm surprised that it did not show up as an issue yet in this\narea. Second, I could not get down to something \"nice\", but perhaps\nthere are preferred approaches when it comes to that and somebody has\na fancier idea? Or perhaps the problem is bigger than that due to\nthe way the record is designed and built? It also feels that I'm\nmissing something obvious, not sure what TBH. Still I'm OK to paint\nsome more MAXALIGN()s to make sure that all these deparsing pointers\nhave a correct alignment with some more TYPEALIGN()s or similar,\nbecause this deparsing stuff is about that, but I'm also wondering if\nthere is an argument for forcing that for the record itself? I'll\nthink more about that next week or so.\n\nAnyway, I'm attaching that to the next CF for discussion for now, as\nthere could be objections about this whole idea, as well.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Mon, 26 Aug 2024 09:58:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Switch PgStat_HashKey.objoid from Oid to uint64"
},
{
"msg_contents": "On 26/08/2024 03:58, Michael Paquier wrote:\n> An interesting thing is that I have seen ubsan complain about this\n> patch, due to the way WAL records xl_xact_commit are built with\n> XACT_XINFO_HAS_DROPPED_STATS and parsed as xl_xact_stats_item requires\n> an 8-byte alignment now (see pg_waldump TAP reports when using the\n> attached), but we don't enforce anything as the data of such WAL\n> records is added with a simple XLogRegisterData(), like:\n> # xactdesc.c:91:28: runtime error: member access within misaligned\n> address 0x5651e996b86c for type 'struct xl_xact_stats_items', which\n> requires 8 byte alignment # 0x5651e996b86c: note: pointer points here\n> \n> TBH, I've looked at that for quite a bit, thinking about the addition\n> of some \"dummy\" member to some of the parsed structures to force some\n> padding, or play with the alignment macros, or for some alignment when\n> inserting the record, or looked at pg_attribute_aligned().\n> \n> First I'm surprised that it did not show up as an issue yet in this\n> area. Second, I could not get down to something \"nice\", but perhaps\n> there are preferred approaches when it comes to that and somebody has\n> a fancier idea? Or perhaps the problem is bigger than that due to\n> the way the record is designed and built? It also feels that I'm\n> missing something obvious, not sure what TBH. Still I'm OK to paint\n> some more MAXALIGN()s to make sure that all these deparsing pointers\n> have a correct alignment with some more TYPEALIGN()s or similar,\n> because this deparsing stuff is about that, but I'm also wondering if\n> there is an argument for forcing that for the record itself? I'll\n> think more about that next week or so.\n\nCurrently, we rely on the fact that all the xl_xact_* structs require \nsizeof(int) alignment. See comment above struct xl_xact_xinfo.\n\nOne idea is to store the uint64 as two uint32's.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 09:32:54 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Switch PgStat_HashKey.objoid from Oid to uint64"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 09:32:54AM +0300, Heikki Linnakangas wrote:\n> Currently, we rely on the fact that all the xl_xact_* structs require\n> sizeof(int) alignment. See comment above struct xl_xact_xinfo.\n\nThanks, I have missed this part. So that explains the alignment I'd\nbetter use in the record.\n\n> One idea is to store the uint64 as two uint32's.\n\nNice, we could just do that. This idea makes me feel much better than\nsticking more aligment macros in the paths where the record is built.\n\nAttached is an updated patch doing that. ubsan is silent with that.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2024 08:56:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch PgStat_HashKey.objoid from Oid to uint64"
},
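As an aside, the two-uint32 trick agreed on above is simple enough to show in a standalone sketch (plain C99 stdint types here, not the PostgreSQL uint32/uint64 typedefs, and none of the WAL record plumbing): the 64-bit object id is split into high and low halves on the write side and recombined on the read side.

/* split64_demo.c -- standalone sketch of the hi/lo split, not the patch itself */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t objid = UINT64_C(0x1122334455667788);

    /* write side: store two 4-byte fields instead of one 8-byte field,
     * sidestepping the 8-byte alignment requirement discussed above */
    uint32_t objid_hi = (uint32_t) (objid >> 32);
    uint32_t objid_lo = (uint32_t) (objid & 0xFFFFFFFF);

    /* read side: reassemble the original 64-bit value */
    uint64_t roundtrip = ((uint64_t) objid_hi << 32) | objid_lo;

    printf("hi=0x%08" PRIX32 " lo=0x%08" PRIX32 " roundtrip=0x%016" PRIX64 "\n",
           objid_hi, objid_lo, roundtrip);
    return (roundtrip == objid) ? 0 : 1;
}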
{
"msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 08:56:59AM +0900, Michael Paquier wrote:\n> On Mon, Aug 26, 2024 at 09:32:54AM +0300, Heikki Linnakangas wrote:\n> > Currently, we rely on the fact that all the xl_xact_* structs require\n> > sizeof(int) alignment. See comment above struct xl_xact_xinfo.\n> \n> Thanks, I have missed this part. So that explains the alignment I'd\n> better use in the record.\n> \n> > One idea is to store the uint64 as two uint32's.\n> \n> Nice, we could just do that. This idea makes me feel much better than\n> sticking more aligment macros in the paths where the record is built.\n> \n> Attached is an updated patch doing that. ubsan is silent with that.\n\nThanks!\n\nYeah, indeed, with \"COPT=-fsanitize=alignment -fno-sanitize-recover=all\", then\n\n\"make -C src/test/modules/test_custom_rmgrs check\":\n\n- Is fine on master\n- Fails with v1 applied due to things like:\n\nxactdesc.c:91:28: runtime error: member access within misaligned address 0x5d29d22cc13c for type 'struct xl_xact_stats_items', which requires 8 byte alignment\n0x5d29d22cc13c: note: pointer points here\n 7f 06 00 00 02 00 00 00 fe 7f 00 00 03 00 00 00 05 00 00 00 05 40 00 00 00 00 00 00 03 00 00 00\n\n- Is fine with v2\n\nSo v2 does fix the alignment issue and I also think that's a nice way to fix it.\n\nLot of stuff that this patch does is mechanical changes:\n\n- replace \"objoid\" by \"objid\" in *stats* files \n- change the related type from Oid to uint64\n- make use of hash_bytes_extended() instead of hash_bytes when needed\n\nand I don't see any issues here.\n\nThere is also some manipulation around the 2 new uint32 fields (objid_hi and\nobjid_lo) in the xactdesc.c and pgstat_xact.c files that look good to me.\n\nBut now we end up having functions that accept Oid as parameters to call\nfunctions that accept uint64 as parameter (for the exact same parameter), for\nexample:\n\n\"\nvoid\npgstat_create_function(Oid proid)\n{\n pgstat_create_transactional(PGSTAT_KIND_FUNCTION,\n MyDatabaseId,\n proid);\n}\n\"\n\nas pgstat_create_transactional is now:\n\n-pgstat_create_transactional(PgStat_Kind kind, Oid dboid, Oid objoid)\n+pgstat_create_transactional(PgStat_Kind kind, Oid dboid, uint64 objid)\n\nThat's not an issue as both are unsigned and as we do those calls in that\norder (Oid -> uint64).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 12 Sep 2024 13:37:52 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch PgStat_HashKey.objoid from Oid to uint64"
},
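For reference, a minimal sketch of the hi/lo packing described above, using the objid_hi/objid_lo naming from the review; the helper names are illustrative and not part of the patch:

#include <stdint.h>

/* Split a 64-bit object id into two 32-bit halves when building the WAL
 * record, and recombine them when decoding it. */
static inline void
objid_split(uint64_t objid, uint32_t *objid_lo, uint32_t *objid_hi)
{
    *objid_lo = (uint32_t) objid;
    *objid_hi = (uint32_t) (objid >> 32);
}

static inline uint64_t
objid_join(uint32_t objid_lo, uint32_t objid_hi)
{
    return ((uint64_t) objid_hi << 32) | objid_lo;
}

Splitting on write and recombining on read transports the full 64-bit object id while the record itself never contains an 8-byte member.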
{
"msg_contents": "On Thu, Sep 12, 2024 at 01:37:52PM +0000, Bertrand Drouvot wrote:\n> There is also some manipulation around the 2 new uint32 fields (objid_hi and\n> objid_lo) in the xactdesc.c and pgstat_xact.c files that look good to me.\n\nThanks for the reviews. The high and low manipulations are still kind\nof OK to me as a solution for the record constructions.\n\n> But now we end up having functions that accept Oid as parameters to call\n> functions that accept uint64 as parameter (for the exact same parameter), for\n> example:\n> \n> \"\n> void\n> pgstat_create_function(Oid proid)\n> {\n> pgstat_create_transactional(PGSTAT_KIND_FUNCTION,\n> MyDatabaseId,\n> proid);\n> }\n> \"\n> \n> as pgstat_create_transactional is now:\n> \n> -pgstat_create_transactional(PgStat_Kind kind, Oid dboid, Oid objoid)\n> +pgstat_create_transactional(PgStat_Kind kind, Oid dboid, uint64 objid)\n> \n> That's not an issue as both are unsigned and as we do those calls in that\n> order (Oid -> uint64).\n\nYes, that's intentional. All the pgstats routines associated to a\nparticular object that depends on an OID should still use an OID, and\nanything that's generic enough to be used for all stats kinds had\nbetter use a uint64. I was wondering if it would be better hiding\nthat behind a dedicated type, but decided to stick with uint64.\n--\nMichael",
"msg_date": "Fri, 13 Sep 2024 07:34:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch PgStat_HashKey.objoid from Oid to uint64"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 13, 2024 at 07:34:21AM +0900, Michael Paquier wrote:\n> On Thu, Sep 12, 2024 at 01:37:52PM +0000, Bertrand Drouvot wrote:\n> > There is also some manipulation around the 2 new uint32 fields (objid_hi and\n> > objid_lo) in the xactdesc.c and pgstat_xact.c files that look good to me.\n> \n> Thanks for the reviews. The high and low manipulations are still kind\n> of OK to me as a solution for the record constructions.\n\nAgree.\n\n> > But now we end up having functions that accept Oid as parameters to call\n> > functions that accept uint64 as parameter (for the exact same parameter), for\n> > example:\n> > \n> > \"\n> > void\n> > pgstat_create_function(Oid proid)\n> > {\n> > pgstat_create_transactional(PGSTAT_KIND_FUNCTION,\n> > MyDatabaseId,\n> > proid);\n> > }\n> > \"\n> > \n> > as pgstat_create_transactional is now:\n> > \n> > -pgstat_create_transactional(PgStat_Kind kind, Oid dboid, Oid objoid)\n> > +pgstat_create_transactional(PgStat_Kind kind, Oid dboid, uint64 objid)\n> > \n> > That's not an issue as both are unsigned and as we do those calls in that\n> > order (Oid -> uint64).\n> \n> Yes, that's intentional. All the pgstats routines associated to a\n> particular object that depends on an OID should still use an OID, and\n> anything that's generic enough to be used for all stats kinds had\n> better use a uint64.\n\nYeah, that sounds good to me.\n\n> I was wondering if it would be better hiding\n> that behind a dedicated type, but decided to stick with uint64.\n\nThat makes sense to me.\n\nOverall, the patch LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 13 Sep 2024 04:03:13 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Switch PgStat_HashKey.objoid from Oid to uint64"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 04:03:13AM +0000, Bertrand Drouvot wrote:\n> Overall, the patch LGTM.\n\nThanks for the review, I've applied that, then, detailing in the\ncommit log what this changes and the three format bumps required.\n--\nMichael",
"msg_date": "Wed, 18 Sep 2024 12:54:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Switch PgStat_HashKey.objoid from Oid to uint64"
}
] |
[
{
"msg_contents": "Hi all,\n(Nathan in CC regarding c8b06bb969bf)\n\nWhile rebasing my patch set for sequence AMs, I've looked at what has\nbeen done with c8b06bb969bf and pg_sequence_read_tuple() because I can\nreuse it in the sequence AM patch to grab the last value of a sequence\nand if it has been called (the patch set implemented the same thing,\nwith a different function name), and got surprised that the function\nalso returns log_cnt, which is for the in-core sequence metadata an\ninternal counter to decide when a sequence should be WAL-logged.\n\nWhy do we need this field at all in this function? pg_dump only cares\nabout the last value and is_called to be able to rebuilt its sequence\nDDLs, and log_cnt is reset each time we restore or do a crash\nrecovery, bumping at the next sequence value based on an internal of\n32.\n\nI am also a bit dubious about the value it adds for debugging. The\nthing is that depending on the way sequences are computed, we may not\ncare about WAL at all, and log_cnt is something related to the way\nin-core sequences are computed and how its data is persistent. So\nthis makes the whole concept of sequence metadata a bit fuzzier\nbecause we mix data necessary for the sequence command and more\nthings. There is no need for it in pg_dump or pg_upgrade, either.\n\nlast_value and is_called are different and required all the time,\nof course, because these define how the sequence DDLs should be\ncreated.\n\nIt seems to me that we'd live better without it, at least it matters\nfor the sequence AM patch, because some of its callbacks are also\nshaped around the fact that WAL may not be required for sequence value\ncomputations. Providing a function that should be rather generic does\nnot fit well in this context, still I agree that it comes down to how\nthe callbacks are defined, of course. My point is that the use of WAL\nis something that should be optional, but log_cnt in this function\nmakes it a mandatory concept that all sequence AMs would need to deal\nwith.\n\nComments?\n--\nMichael",
"msg_date": "Mon, 26 Aug 2024 11:11:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Removing log_cnt from pg_sequence_read_tuple()"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 11:11:55AM +0900, Michael Paquier wrote:\n> It seems to me that we'd live better without it, at least it matters\n> for the sequence AM patch, because some of its callbacks are also\n> shaped around the fact that WAL may not be required for sequence value\n> computations. Providing a function that should be rather generic does\n> not fit well in this context, still I agree that it comes down to how\n> the callbacks are defined, of course. My point is that the use of WAL\n> is something that should be optional, but log_cnt in this function\n> makes it a mandatory concept that all sequence AMs would need to deal\n> with.\n\nI am fine with changes to this function that would allow it to be\ngenerically useful for all sequence AMs, provided that it can still be used\nfor dumpSequenceData(). I only included log_cnt because\npg_sequence_read_tuple() is intended to be a substitute for SELECT from the\nsequence, but I'm not aware of any real use-case for that column besides\ndebugging, which you've already addressed. Even if we remove log_cnt, you\ncan still find it with SELECT, too.\n\nThe patch looks reasonable to me. Do you think the name of the function\nstill makes sense now that 1) we might have different sequence AMs in the\nnear future and 2) it no longer returns everything in the sequence tuple?\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 26 Aug 2024 09:19:06 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing log_cnt from pg_sequence_read_tuple()"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 09:19:06AM -0500, Nathan Bossart wrote:\n> I am fine with changes to this function that would allow it to be\n> generically useful for all sequence AMs, provided that it can still be used\n> for dumpSequenceData(). I only included log_cnt because\n> pg_sequence_read_tuple() is intended to be a substitute for SELECT from the\n> sequence, but I'm not aware of any real use-case for that column besides\n> debugging, which you've already addressed.\n\nOkay, thanks.\n\n> Even if we remove log_cnt, you can still find it with SELECT, too.\n\nThe design used in the sequence AM patch makes it possible to assign\ncustom attributes to the sequence \"relation\" used for storage, with a\ntable AM used underneath that may not be heap. The AM callback\nplugged into the path used by pg_sequence_read_tuple() (previous\nversion for pg_sequence_last_value) returns the pair is_called and\nlast_value, to map to the row of the function used to rebuild the\ncommands in dumps and upgrades.\n\n> The patch looks reasonable to me. Do you think the name of the function\n> still makes sense now that 1) we might have different sequence AMs in the\n> near future and 2) it no longer returns everything in the sequence tuple?\n\nIndeed, pg_sequence_read_tuple() would not reflect the reality, some\nideas: \n- pg_sequence_read_data\n- pg_sequence_get_data\n- pg_sequence_data\n- More consistent with other catalog functions: pg_get_sequence_data,\nas we have already in the tree a lot of pg_get_* functions.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2024 08:00:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing log_cnt from pg_sequence_read_tuple()"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 08:00:52AM +0900, Michael Paquier wrote:\n> On Mon, Aug 26, 2024 at 09:19:06AM -0500, Nathan Bossart wrote:\n>> The patch looks reasonable to me. Do you think the name of the function\n>> still makes sense now that 1) we might have different sequence AMs in the\n>> near future and 2) it no longer returns everything in the sequence tuple?\n> \n> Indeed, pg_sequence_read_tuple() would not reflect the reality, some\n> ideas: \n> - pg_sequence_read_data\n> - pg_sequence_get_data\n> - pg_sequence_data\n> - More consistent with other catalog functions: pg_get_sequence_data,\n> as we have already in the tree a lot of pg_get_* functions.\n\npg_get_sequence_data() sounds fine to me.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 28 Aug 2024 20:28:03 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing log_cnt from pg_sequence_read_tuple()"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 08:28:03PM -0500, Nathan Bossart wrote:\n> On Thu, Aug 29, 2024 at 08:00:52AM +0900, Michael Paquier wrote:\n>> Indeed, pg_sequence_read_tuple() would not reflect the reality, some\n>> ideas: \n>> - pg_sequence_read_data\n>> - pg_sequence_get_data\n>> - pg_sequence_data\n>> - More consistent with other catalog functions: pg_get_sequence_data,\n>> as we have already in the tree a lot of pg_get_* functions.\n> \n> pg_get_sequence_data() sounds fine to me.\n\nOkay, here is a v2 of the patch using this name for the function.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2024 14:11:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing log_cnt from pg_sequence_read_tuple()"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 02:11:22PM +0900, Michael Paquier wrote:\n> Okay, here is a v2 of the patch using this name for the function.\n\nLGTM\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 29 Aug 2024 09:28:49 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing log_cnt from pg_sequence_read_tuple()"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 09:28:49AM -0500, Nathan Bossart wrote:\n> On Thu, Aug 29, 2024 at 02:11:22PM +0900, Michael Paquier wrote:\n> > Okay, here is a v2 of the patch using this name for the function.\n> \n> LGTM\n\nThanks, applied that, after one tweak for the #define name.\n--\nMichael",
"msg_date": "Fri, 30 Aug 2024 08:59:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Removing log_cnt from pg_sequence_read_tuple()"
}
] |
[
{
"msg_contents": "hello,all,\nI am a newcomer who recently joined the PostgreSQL community. I logged into the community using my GitHub account and wanted to start familiarizing myself with community work by reviewing patches. However, I am currently facing an issue. When I log into commitfest.postgresql.org, I encounter the following message. \n\n\n\n\n\"The site you are trying to log in to (commitfest.postgresql.org) requires a cool-off period between account creation and logging in.\n\n\n\n\nYou have not passed the cool off period yet.\"\n\n\n\n\nanyone advise me on how to resolve this, or how long this cool off period lasts? please!\n\n\n\n\nthanks!\nhello,all,I am a newcomer who recently joined the PostgreSQL community. I logged into the community using my GitHub account and wanted to start familiarizing myself with community work by reviewing patches. However, I am currently facing an issue. When I log into commitfest.postgresql.org, I encounter the following message. \"The site you are trying to log in to (commitfest.postgresql.org) requires a cool-off period between account creation and logging in.You have not passed the cool off period yet.\"anyone advise me on how to resolve this, or how long this cool off period lasts? please!thanks!",
"msg_date": "Mon, 26 Aug 2024 11:21:26 +0800 (CST)",
"msg_from": "=?GBK?B?zfW0urnw?= <wcg2008zl@126.com>",
"msg_from_op": true,
"msg_subject": "how to log into commitfest.postgresql.org and begin review patch"
},
{
"msg_contents": "> On 26 Aug 2024, at 05:21, 王春桂 <wcg2008zl@126.com> wrote:\n> \n> hello,all,\n> I am a newcomer who recently joined the PostgreSQL community. I logged into the community using my GitHub account and wanted to start familiarizing myself with community work by reviewing patches. However, I am currently facing an issue. When I log into commitfest.postgresql.org, I encounter the following message. \n> \n> \"The site you are trying to log in to (commitfest.postgresql.org) requires a cool-off period between account creation and logging in.\n> \n> You have not passed the cool off period yet.\"\n> \n> anyone advise me on how to resolve this, or how long this cool off period lasts? please!\n\nHi,\n\nThere is a cool-off period on new accounts since we were seeing SPAM from\nfreshly created accounts. Posting here means that admins can expedite the\nperiod for activating your account. In the meantime you can get started on\nreviewing though, the patches for review are on the mailinglist and the review\nis sent to the mailinglist as well, so you don't need to wait for the CF\naccount to dig in.\n\nThanks for your interest in postgres, if you have questions regarding the\nprocess etc there is material on the wiki and website, and you can always ask\nhere as well.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 09:58:00 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: how to log into commitfest.postgresql.org and begin review patch"
},
{
"msg_contents": "At 2024-08-26 15:58:00, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\n>> On 26 Aug 2024, at 05:21, 王春桂 <wcg2008zl@126.com> wrote:\n>> \n>> hello,all,\n>> I am a newcomer who recently joined the PostgreSQL community. I logged into the community using my GitHub account and wanted to start familiarizing myself with community work by reviewing patches. However, I am currently facing an issue. When I log into commitfest.postgresql.org, I encounter the following message. \n>> \n>> \"The site you are trying to log in to (commitfest.postgresql.org) requires a cool-off period between account creation and logging in.\n>> \n>> You have not passed the cool off period yet.\"\n>> \n>> anyone advise me on how to resolve this, or how long this cool off period lasts? please!\n>\n>Hi,\n>\n>There is a cool-off period on new accounts since we were seeing SPAM from\n>freshly created accounts. Posting here means that admins can expedite the\n>period for activating your account. In the meantime you can get started on\n>reviewing though, the patches for review are on the mailinglist and the review\n>is sent to the mailinglist as well, so you don't need to wait for the CF\n>account to dig in.\n>\n>Thanks for your interest in postgres, if you have questions regarding the\n>process etc there is material on the wiki and website, and you can always ask\n>here as well.\n>\n>--\n>Daniel Gustafsson\n>\n\n>\n\n\nThank you for your response. I understand now. I will keep studying the content on the mailing list and integrate into the community as soon as possible.:)\n\n\nAt 2024-08-26 15:58:00, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\n>> On 26 Aug 2024, at 05:21, 王春桂 <wcg2008zl@126.com> wrote:\n>> \n>> hello,all,\n>> I am a newcomer who recently joined the PostgreSQL community. I logged into the community using my GitHub account and wanted to start familiarizing myself with community work by reviewing patches. However, I am currently facing an issue. When I log into commitfest.postgresql.org, I encounter the following message. \n>> \n>> \"The site you are trying to log in to (commitfest.postgresql.org) requires a cool-off period between account creation and logging in.\n>> \n>> You have not passed the cool off period yet.\"\n>> \n>> anyone advise me on how to resolve this, or how long this cool off period lasts? please!\n>\n>Hi,\n>\n>There is a cool-off period on new accounts since we were seeing SPAM from\n>freshly created accounts. Posting here means that admins can expedite the\n>period for activating your account. In the meantime you can get started on\n>reviewing though, the patches for review are on the mailinglist and the review\n>is sent to the mailinglist as well, so you don't need to wait for the CF\n>account to dig in.\n>\n>Thanks for your interest in postgres, if you have questions regarding the\n>process etc there is material on the wiki and website, and you can always ask\n>here as well.\n>\n>--\n>Daniel Gustafsson\n>\n>Thank you for your response. I understand now. I will keep studying the content on the mailing list and integrate into the community as soon as possible.:)",
"msg_date": "Mon, 26 Aug 2024 16:45:17 +0800 (CST)",
"msg_from": "\"chungui.wcg\" <wcg2008zl@126.com>",
"msg_from_op": false,
"msg_subject": "Re: how to log into commitfest.postgresql.org and begin review\n patch"
},
{
"msg_contents": "> On 26 Aug 2024, at 10:45, chungui.wcg <wcg2008zl@126.com> wrote:\n> At 2024-08-26 15:58:00, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\n> >> On 26 Aug 2024, at 05:21, 王春桂 <wcg2008zl@126.com> wrote:\n> >> \n> >> hello,all,\n> >> I am a newcomer who recently joined the PostgreSQL community. I logged into the community using my GitHub account and wanted to start familiarizing myself with community work by reviewing patches. However, I am currently facing an issue. When I log into commitfest.postgresql.org, I encounter the following message. \n> >> \n> >> \"The site you are trying to log in to (commitfest.postgresql.org) requires a cool-off period between account creation and logging in.\n> >> \n> >> You have not passed the cool off period yet.\"\n> >> \n> >> anyone advise me on how to resolve this, or how long this cool off period lasts? please!\n> >\n> >Hi,\n> >\n> >There is a cool-off period on new accounts since we were seeing SPAM from\n> >freshly created accounts. Posting here means that admins can expedite the\n> >period for activating your account. In the meantime you can get started on\n> >reviewing though, the patches for review are on the mailinglist and the review\n> >is sent to the mailinglist as well, so you don't need to wait for the CF\n> >account to dig in.\n> >\n> >Thanks for your interest in postgres, if you have questions regarding the\n> >process etc there is material on the wiki and website, and you can always ask\n> >here as well.\n\n> Thank you for your response. I understand now. I will keep studying the content on the mailing list and integrate into the community as soon as possible.:)\n\nGreat, welcome!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 11:02:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: how to log into commitfest.postgresql.org and begin review patch"
},
{
"msg_contents": "On 8/26/24 03:58, Daniel Gustafsson wrote:\n>> On 26 Aug 2024, at 05:21, 王春桂 <wcg2008zl@126.com> wrote: \n>> hello,all, I am a newcomer who recently joined the PostgreSQL\n>> community. I logged into the community using my GitHub account and\n>> wanted to start familiarizing myself with community work by\n>> reviewing patches. However, I am currently facing an issue. When I\n>> log into commitfest.postgresql.org, I encounter the following\n>> message.\n>> \n>> \"The site you are trying to log in to (commitfest.postgresql.org)\n>> requires a cool-off period between account creation and logging\n>> in.\n>> \n>> You have not passed the cool off period yet.\"\n>> \n>> anyone advise me on how to resolve this, or how long this cool off\n>> period lasts? please!\n\n> There is a cool-off period on new accounts since we were seeing SPAM from\n> freshly created accounts. Posting here means that admins can expedite the\n> period for activating your account.\n\n\nCooling off period expedited.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 09:25:04 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: how to log into commitfest.postgresql.org and begin review patch"
}
] |
[
{
"msg_contents": "Hi\n\nI checked our implementation of xmlserialize, and it looks like there are\nfew issues.\n\n1. It doesn't conform to SQL/XML - there is some overlap with the proposed\nCANONICAL flag, but not full.\n\n2. There is significantly different behaviour of NO INDENT on Oracle and\nother db, that implements it. It really removes indentation from XML.\n\n3. Oracle doesn't conform SQL/XML too fully. SQL/XML knows two states\n\"INDENT\" x \"NO INDENT\". Oracle has three states: indeterminate (default),\nINDENT and NO INDENT. Probably, Oracle's default (INDENT or NO INDENT) is\nnot the same as PostgreSQL default (without change of formatting).\n\nHas somebody an idea, what should be expected behaviour?\n\nRegards\n\nPavel\n\nHiI checked our implementation of xmlserialize, and it looks like there are few issues.1. It doesn't conform to SQL/XML - there is some overlap with the proposed CANONICAL flag, but not full.2. There is significantly different behaviour of NO INDENT on Oracle and other db, that implements it. It really removes indentation from XML. 3. Oracle doesn't conform SQL/XML too fully. SQL/XML knows two states \"INDENT\" x \"NO INDENT\". Oracle has three states: indeterminate (default), INDENT and NO INDENT. Probably, Oracle's default (INDENT or NO INDENT) is not the same as PostgreSQL default (without change of formatting).Has somebody an idea, what should be expected behaviour? RegardsPavel",
"msg_date": "Mon, 26 Aug 2024 07:32:53 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "maybe buggy implementation of NO INDENT in xmlserialize"
}
] |
[
{
"msg_contents": "This patch allows using text position search functions with \nnondeterministic collations. These functions are\n\n- position, strpos\n- replace\n- split_part\n- string_to_array\n- string_to_table\n\nwhich all use common internal infrastructure.\n\n(This complements the patch \"Support LIKE with nondeterministic \ncollations\" but is independent.)\n\nSome exploratory testing could be useful here. The present test \ncoverage was already quite helpful during development, but there is \nalways the possibility that something was overlooked.",
"msg_date": "Mon, 26 Aug 2024 08:09:19 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Support POSITION with nondeterministic collations"
}
] |
[
{
"msg_contents": "I'm reposting this patch in a separate thread so I can make a separate \ncommitfest entry for it. The previous discussion is mixed in with [0].\n\nThe purpose of this patch is to allow using pg_upgrade between clusters \nthat have different checksum settings. When upgrading between instances \nwith different checksum settings, the --copy (default) mode \nautomatically sets (or unsets) the checksum on the fly.\n\nThis would be particularly useful if we switched to enabling checksums \nby default, as [0] proposes, but it's also useful without that.\n\nSome discussion points:\n\n- We have added a bunch of different transfer modes to pg_upgrade in \norder to give the user control over the expected file transfer \nperformance. Here, I have added this checksum rewriting to the existing \n--copy mode, and I have measured about a 5% overhead. An alternative \nwould be to make this an explicit mode (--copy-slow, \n--copy-and-make-adjustments).\n\n- Windows has a separate code path in the --copy mode. I don't know the \nreasons or advantages of that. So it's not clear how the checksum \nrewriting mode should be handled in that case. We could switch to the \nnon-Windows code path in that case, but then the performance difference \nbetween the normal path and the checksum-rewriting path is even more \nunclear.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CAKAnmmKwiMHik5AHmBEdf5vqzbOBbcwEPHo4-PioWeAbzwcTOQ@mail.gmail.com",
"msg_date": "Mon, 26 Aug 2024 08:23:44 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "pg_upgrade: Support for upgrading to checksums enabled"
},
{
"msg_contents": "Hi Peter,\n\nI've applied and tested your patch, it works at least on MacOS with\nmeson build. A couple of thoughts about this patch inline below.\n\nOn Mon, Aug 26, 2024 at 8:23 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> The purpose of this patch is to allow using pg_upgrade between clusters\n> that have different checksum settings. When upgrading between instances\n> with different checksum settings, the --copy (default) mode\n> automatically sets (or unsets) the checksum on the fly.\n\nI think the entire idea of this patch is great because it allows us to\nremove an additional step in upgrade procedure, i.e. enabling\nchecksums before upgrade. A part about which I am not quite sure, is\n\"automatically\". It is sufficient in most cases, but maybe also to\nhave an explicit flag would be a nice option as well.\n\nin the patch itself:\n\n> - * We might eventually allow upgrades from checksum to no-checksum\n> - * clusters.\n> - */\n> - if (oldctrl->data_checksum_version == 0 &&\n> - newctrl->data_checksum_version != 0)\n> - pg_fatal(\"old cluster does not use data checksums but the new one does\");\n> - else if (oldctrl->data_checksum_version != 0 &&\n> - newctrl->data_checksum_version == 0)\n> - pg_fatal(\"old cluster uses data checksums but the new one does not\");\n> - else if (oldctrl->data_checksum_version != newctrl->data_checksum_version)\n> - pg_fatal(\"old and new cluster pg_controldata checksum versions do not match\");\n> + if (oldctrl->data_checksum_version != newctrl->data_checksum_version)\n> + {\n> + if (user_opts.transfer_mode != TRANSFER_MODE_COPY)\n> + pg_fatal(\"when upgrading between clusters with different data checksum settings, transfer mode --copy must be used\");\n> + }\n\nI've tried to recall when I see the previous error message \"old and\nnew cluster pg_controldata checksum versions do not match\" at most. It\nwas almost always pg_upgrade with --link option, because we mostly use\npg_upgrade when copy is barely an option. Previous error message gave\na clear statement: go one step behind and enable checksums. The new\none gives imo a wrong message: \"your only option now is to use copy\".\n\nWould it be better to polish wording in direction \"when upgrading\nbetween clusters with different data checksum settings, transfer mode\n--copy must be used to enable checksum automatically\" or \"when\nupgrading between clusters with different data checksum settings,\ntransfer mode --copy must be used or data checksum settings of the old\ncluster must be changed manually before upgrade\"?\n\n\n> Some discussion points:\n>\n> - We have added a bunch of different transfer modes to pg_upgrade in\n> order to give the user control over the expected file transfer\n> performance. Here, I have added this checksum rewriting to the existing\n> --copy mode, and I have measured about a 5% overhead. An alternative\n> would be to make this an explicit mode (--copy-slow,\n> --copy-and-make-adjustments).\n\nMaybe a separate -k flag to enable this behavior explicitly?\n\nbest regards,\nIlya\n\n\n-- \nIlya Kosmodemiansky\nCEO, Founder\n\nData Egret GmbH\nYour remote PostgreSQL DBA team\nT.: +49 6821 919 3297\nik@dataegret.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 11:13:21 +0200",
"msg_from": "Ilya Kosmodemiansky <ik@dataegret.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Support for upgrading to checksums enabled"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 08:23:44AM +0200, Peter Eisentraut wrote:\n> The purpose of this patch is to allow using pg_upgrade between clusters that\n> have different checksum settings. When upgrading between instances with\n> different checksum settings, the --copy (default) mode automatically sets\n> (or unsets) the checksum on the fly.\n> \n> This would be particularly useful if we switched to enabling checksums by\n> default, as [0] proposes, but it's also useful without that.\n\nGiven enabling checksums can be rather expensive, I think it makes sense to\nadd a way to do it during pg_upgrade versus asking folks to run\npg_checksums separately. I'd anticipate arguments against enabling\nchecksums automatically, but as you noted, we can move it to a separate\noption (e.g., --copy --enable-checksums). Disabling checksums with\npg_checksums is fast because it just updates pg_control, so I don't see any\nneed for --disable-checkums in pg_upgrade.\n\n> - Windows has a separate code path in the --copy mode. I don't know the\n> reasons or advantages of that. So it's not clear how the checksum rewriting\n> mode should be handled in that case. We could switch to the non-Windows\n> code path in that case, but then the performance difference between the\n> normal path and the checksum-rewriting path is even more unclear.\n\nAFAICT the separate Windows path dates back to before pg_upgrade was first\nadded to the Postgres tree, and unfortunately I couldn't find any\ndiscussion about it.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 27 Aug 2024 16:56:53 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Support for upgrading to checksums enabled"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen I read the following documentation related to the \"synchronized_standby_slots\", I misunderstood that data loss would not occur in the case of synchronous physical replication. However, this is incorrect (see reproduce.txt).\n\n> Note that in the case of asynchronous replication, there remains a risk of data loss for transactions committed on the former primary server but have yet to be replicated to the new primary server.\nhttps://www.postgresql.org/docs/17/logical-replication-failover.html\n\nAm I missing something? IIUC, could you change the documentation as suggested in the attached patch? I also believe it would be better to move the sentence to the next paragraph because the note is related to \"synchronized_standby_slots.\".\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Mon, 26 Aug 2024 07:59:51 +0000",
"msg_from": "<Masahiro.Ikeda@nttdata.com>",
"msg_from_op": true,
"msg_subject": "Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 1:30 PM <Masahiro.Ikeda@nttdata.com> wrote:\n>\n> When I read the following documentation related to the \"synchronized_standby_slots\", I misunderstood that data loss would not occur in the case of synchronous physical replication. However, this is incorrect (see reproduce.txt).\n>\n> > Note that in the case of asynchronous replication, there remains a risk of data loss for transactions committed on the former primary server but have yet to be replicated to the new primary server.\n> https://www.postgresql.org/docs/17/logical-replication-failover.html\n>\n> Am I missing something?\n>\n\nIt seems part of the paragraph: \"Note that in the case of asynchronous\nreplication, there remains a risk of data loss for transactions\ncommitted on the former primary server but have yet to be replicated\nto the new primary server.\" is a bit confusing. Will it make things\nclear to me if we remove that part?\n\nI am keeping a few others involved in this feature development in Cc.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Aug 2024 15:07:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Monday, August 26, 2024 5:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Aug 26, 2024 at 1:30 PM <Masahiro.Ikeda@nttdata.com> wrote:\r\n> >\r\n> > When I read the following documentation related to the\r\n> \"synchronized_standby_slots\", I misunderstood that data loss would not occur\r\n> in the case of synchronous physical replication. However, this is incorrect (see\r\n> reproduce.txt).\r\n> >\r\n> > > Note that in the case of asynchronous replication, there remains a risk of\r\n> data loss for transactions committed on the former primary server but have yet\r\n> to be replicated to the new primary server.\r\n> > https://www.postgresql.org/docs/17/logical-replication-failover.html\r\n> >\r\n> > Am I missing something?\r\n> >\r\n> \r\n> It seems part of the paragraph: \"Note that in the case of asynchronous\r\n> replication, there remains a risk of data loss for transactions committed on the\r\n> former primary server but have yet to be replicated to the new primary server.\" is\r\n> a bit confusing. Will it make things clear to me if we remove that part?\r\n\r\nI think the intention is to address a complaint[1] that the date inserted on\r\nprimary after the primary disconnects with the standby is still lost after\r\nfailover. But after rethinking, maybe it's doesn't directly belong to the topic in\r\nthe logical failover section because it's a general fact for async replication.\r\nIf we think it matters, maybe we can remove this part and slightly modify\r\nanother part:\r\n\r\n parameter ensures a seamless transition of those subscriptions after the\r\n standby is promoted. They can continue subscribing to publications on the\r\n- new primary server without losing data.\r\n+ new primary server without losing that has already been replicated and\r\n+ flushed on the standby server.\r\n\r\n[1] https://www.postgresql.org/message-id/ZfRe2%2BOxMS0kvNvx%40ip-10-97-1-34.eu-west-3.compute.internal\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Mon, 26 Aug 2024 13:08:32 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 1:30 PM <Masahiro.Ikeda@nttdata.com> wrote:\n>\n> When I read the following documentation related to the \"synchronized_standby_slots\", I misunderstood that data loss would not occur in the case of synchronous physical replication. However, this is incorrect (see reproduce.txt).\n>\n\nI think you see such a behavior because you have disabled\n'synchronized_standby_slots' in your script (# disable\n\"synchronized_standby_slots\"). You need to enable that to avoid data\nloss. Considering that, I don't think your proposed text is an\nimprovement.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Aug 2024 08:49:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 6:38 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Monday, August 26, 2024 5:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Aug 26, 2024 at 1:30 PM <Masahiro.Ikeda@nttdata.com> wrote:\n> > >\n> > > When I read the following documentation related to the\n> > \"synchronized_standby_slots\", I misunderstood that data loss would not occur\n> > in the case of synchronous physical replication. However, this is incorrect (see\n> > reproduce.txt).\n> > >\n> > > > Note that in the case of asynchronous replication, there remains a risk of\n> > data loss for transactions committed on the former primary server but have yet\n> > to be replicated to the new primary server.\n> > > https://www.postgresql.org/docs/17/logical-replication-failover.html\n> > >\n> > > Am I missing something?\n> > >\n> >\n> > It seems part of the paragraph: \"Note that in the case of asynchronous\n> > replication, there remains a risk of data loss for transactions committed on the\n> > former primary server but have yet to be replicated to the new primary server.\" is\n> > a bit confusing. Will it make things clear to me if we remove that part?\n>\n> I think the intention is to address a complaint[1] that the date inserted on\n> primary after the primary disconnects with the standby is still lost after\n> failover. But after rethinking, maybe it's doesn't directly belong to the topic in\n> the logical failover section because it's a general fact for async replication.\n> If we think it matters, maybe we can remove this part and slightly modify\n> another part:\n>\n> parameter ensures a seamless transition of those subscriptions after the\n> standby is promoted. They can continue subscribing to publications on the\n> - new primary server without losing data.\n> + new primary server without losing that has already been replicated and\n> + flushed on the standby server.\n>\n\nYeah, we can change that way but not sure if that satisfies the OP's\nconcern. I am waiting for his response.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Aug 2024 08:50:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Monday, August 26, 2024, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Aug 26, 2024 at 6:38 PM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Monday, August 26, 2024 5:37 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > On Mon, Aug 26, 2024 at 1:30 PM <Masahiro.Ikeda@nttdata.com> wrote:\n> > > >\n> > > > When I read the following documentation related to the\n> > > \"synchronized_standby_slots\", I misunderstood that data loss would not\n> occur\n> > > in the case of synchronous physical replication. However, this is\n> incorrect (see\n> > > reproduce.txt).\n> > > >\n> > > > > Note that in the case of asynchronous replication, there remains a\n> risk of\n> > > data loss for transactions committed on the former primary server but\n> have yet\n> > > to be replicated to the new primary server.\n> > > > https://www.postgresql.org/docs/17/logical-replication-failover.html\n> > > >\n> > > > Am I missing something?\n> > > >\n> > >\n> > > It seems part of the paragraph: \"Note that in the case of asynchronous\n> > > replication, there remains a risk of data loss for transactions\n> committed on the\n> > > former primary server but have yet to be replicated to the new primary\n> server.\" is\n> > > a bit confusing. Will it make things clear to me if we remove that\n> part?\n> >\n> > I think the intention is to address a complaint[1] that the date\n> inserted on\n> > primary after the primary disconnects with the standby is still lost\n> after\n> > failover. But after rethinking, maybe it's doesn't directly belong to\n> the topic in\n> > the logical failover section because it's a general fact for async\n> replication.\n> > If we think it matters, maybe we can remove this part and slightly modify\n> > another part:\n> >\n> > parameter ensures a seamless transition of those subscriptions after\n> the\n> > standby is promoted. They can continue subscribing to publications on\n> the\n> > - new primary server without losing data.\n> > + new primary server without losing that has already been replicated\n> and\n> > + flushed on the standby server.\n> >\n>\n> Yeah, we can change that way but not sure if that satisfies the OP's\n> concern. I am waiting for his response.\n>\n\nI’d suggest getting rid of all mention of “without losing data” and just\nemphasize the fact that the subscribers can operate in a hot-standby\npublishing environment in an automated fashion by connecting using\n“failover” enabled slots, assuming the publishing group prevents any\nchanges from propagating to any logical subscriber until all standbys in\nthe group have been updated. Whether or not the primary-standby group is\nresilient in the face of failure during internal group synchronization is\nout of the hands of logical subscribers - rather they are only guaranteed\nto see a consistent linear history of activity coming out of the publishing\ngroup. Specifically, if the group synchronizes asynchronously there is no\nguarantee that every committed transaction on the primary makes its way\nthrough to the logical subscriber if a slot failover happens. 
But at the\nsame time its view of the world will be consistent with the newly chosen\nprimary.\n\nDavid J.",
"msg_date": "Mon, 26 Aug 2024 20:54:31 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "Thans for your responses.\n\n> I think you see such a behavior because you have disabled 'synchronized_standby_slots'\n> in your script (# disable \"synchronized_standby_slots\"). You need to enable that to\n> avoid data loss. Considering that, I don't think your proposed text is an improvement.\nYes, I know. \n\nAs David said, \"without losing data\" makes me confused because there are three patterns that users\nthink the data was lost though there may be other cases.\n\nPattern1. the data which clients get a committed response for from the old primary, but the new primary doesn’t have in the case of asynchronous replication\n -> we can avoid this with synchronous replication. This is not relevant to the failover feature.\n\nPattern2. the data which the new primary has, but the subscribers don't have\n -> we can avoid this with the failover feature.\n\nPattern3. the data which the subscribers have, but the new primary doesn't have\n -> we can avoid this with the 'synchronized_standby_slots' parameter. \n\nCurrently, I understand that the following documentation says\n* the failover feature makes publications without losing pattern 2 data.\n* pattern 1 data may be lost if you use asynchronous replication.\n* the following doesn't mention pattern 3 at all, which I misunderstood point.\n\n> They can continue subscribing to publications on the new primary server without losing data. \n> Note that in the case of asynchronous replication, there remains a risk of data loss for transactions\n> committed on the former primary server but have yet to be replicated to the new primary server\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Tue, 27 Aug 2024 04:48:22 +0000",
"msg_from": "<Masahiro.Ikeda@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 10:18 AM <Masahiro.Ikeda@nttdata.com> wrote:\n>\n> > I think you see such a behavior because you have disabled 'synchronized_standby_slots'\n> > in your script (# disable \"synchronized_standby_slots\"). You need to enable that to\n> > avoid data loss. Considering that, I don't think your proposed text is an improvement.\n> Yes, I know.\n>\n> As David said, \"without losing data\" makes me confused because there are three patterns that users\n> think the data was lost though there may be other cases.\n>\n\nSo, will it be okay if we just remove \".. without losing data\" from\nthe sentence? Will that avoid the confusion you have?\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:15:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "> So, will it be okay if we just remove \".. without losing data\" from the sentence? Will that\n> avoid the confusion you have?\nYes. Additionally, it would be better to add notes about data consistency after failover for example\n\nNote that data consistency after failover can vary depending on the configurations. If\n\"synchronized_standby_slots\" is not configured, there may be data that only the subscribers hold,\neven though the new primary does not. Additionally, in the case of asynchronous physical replication,\nthere remains a risk of data loss for transactions committed on the former primary server\nbut have yet to be replicated to the new primary server.\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 27 Aug 2024 09:35:24 +0000",
"msg_from": "<Masahiro.Ikeda@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 3:05 PM <Masahiro.Ikeda@nttdata.com> wrote:\n>\n> > So, will it be okay if we just remove \".. without losing data\" from the sentence? Will that\n> > avoid the confusion you have?\n> Yes. Additionally, it would be better to add notes about data consistency after failover for example\n>\n> Note that data consistency after failover can vary depending on the configurations. If\n> \"synchronized_standby_slots\" is not configured, there may be data that only the subscribers hold,\n> even though the new primary does not.\n>\n\nThis part can be inferred from the description of\nsynchronized_standby_slots [1] (See: This guarantees that logical\nreplication failover slots do not consume changes until those changes\nare received and flushed to corresponding physical standbys. If a\nlogical replication connection is meant to switch to a physical\nstandby after the standby is promoted, the physical replication slot\nfor the standby should be listed here.)\n\n>\n Additionally, in the case of asynchronous physical replication,\n> there remains a risk of data loss for transactions committed on the former primary server\n> but have yet to be replicated to the new primary server.\n>\n\nThis has nothing to do with failover slots. This is a known behavior\nof asynchronous replication, so adding here doesn't make much sense.\n\nIn general, adding more information unrelated to failover slots can\nconfuse users.\n\n[1] - https://www.postgresql.org/docs/17/runtime-config-replication.html#GUC-SYNCHRONIZED-STANDBY-SLOTS\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Aug 2024 15:54:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "> > > So, will it be okay if we just remove \".. without losing data\" from\n> > > the sentence? Will that avoid the confusion you have?\n> > Yes. Additionally, it would be better to add notes about data\n> > consistency after failover for example\n> >\n> > Note that data consistency after failover can vary depending on the\n> > configurations. If \"synchronized_standby_slots\" is not configured,\n> > there may be data that only the subscribers hold, even though the new primary does\n> not.\n> >\n> \n> This part can be inferred from the description of synchronized_standby_slots [1] (See:\n> This guarantees that logical replication failover slots do not consume changes until those\n> changes are received and flushed to corresponding physical standbys. If a logical\n> replication connection is meant to switch to a physical standby after the standby is\n> promoted, the physical replication slot for the standby should be listed here.)\n\nOK, it's enough for me just remove \".. without losing data\".\n\n> >\n> Additionally, in the case of asynchronous physical replication,\n> > there remains a risk of data loss for transactions committed on the\n> > former primary server but have yet to be replicated to the new primary server.\n> >\n> \n> This has nothing to do with failover slots. This is a known behavior of asynchronous\n> replication, so adding here doesn't make much sense.\n> \n> In general, adding more information unrelated to failover slots can confuse users.\n\nOK, I agreed to remove the sentence.\n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 28 Aug 2024 00:46:12 +0000",
"msg_from": "<Masahiro.Ikeda@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 6:16 AM <Masahiro.Ikeda@nttdata.com> wrote:\n>\n> > > > So, will it be okay if we just remove \".. without losing data\" from\n> > > > the sentence? Will that avoid the confusion you have?\n> > > Yes. Additionally, it would be better to add notes about data\n> > > consistency after failover for example\n> > >\n> > > Note that data consistency after failover can vary depending on the\n> > > configurations. If \"synchronized_standby_slots\" is not configured,\n> > > there may be data that only the subscribers hold, even though the new primary does\n> > not.\n> > >\n> >\n> > This part can be inferred from the description of synchronized_standby_slots [1] (See:\n> > This guarantees that logical replication failover slots do not consume changes until those\n> > changes are received and flushed to corresponding physical standbys. If a logical\n> > replication connection is meant to switch to a physical standby after the standby is\n> > promoted, the physical replication slot for the standby should be listed here.)\n>\n> OK, it's enough for me just remove \".. without losing data\".\n>\n\nThe next line related to asynchronous replication is also not\nrequired. See attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 28 Aug 2024 14:30:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "> > > > > So, will it be okay if we just remove \".. without losing data\"\n> > > > > from the sentence? Will that avoid the confusion you have?\n> > > > Yes. Additionally, it would be better to add notes about data\n> > > > consistency after failover for example\n> > > >\n> > > > Note that data consistency after failover can vary depending on\n> > > > the configurations. If \"synchronized_standby_slots\" is not\n> > > > configured, there may be data that only the subscribers hold, even\n> > > > though the new primary does\n> > > not.\n> > > >\n> > >\n> > > This part can be inferred from the description of synchronized_standby_slots [1]\n> (See:\n> > > This guarantees that logical replication failover slots do not\n> > > consume changes until those changes are received and flushed to\n> > > corresponding physical standbys. If a logical replication connection\n> > > is meant to switch to a physical standby after the standby is\n> > > promoted, the physical replication slot for the standby should be\n> > > listed here.)\n> >\n> > OK, it's enough for me just remove \".. without losing data\".\n> >\n> \n> The next line related to asynchronous replication is also not required. See attached.\n\nThanks, I found another \".. without losing data\". \n\nRegards,\n--\nMasahiro Ikeda\nNTT DATA CORPORATION",
"msg_date": "Wed, 28 Aug 2024 09:32:30 +0000",
"msg_from": "<Masahiro.Ikeda@nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 3:02 PM <Masahiro.Ikeda@nttdata.com> wrote:\n>\n> >\n> > The next line related to asynchronous replication is also not required. See attached.\n>\n> Thanks, I found another \".. without losing data\".\n>\n\nI'll push this tomorrow unless there are any other suggestions on this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Aug 2024 15:08:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: fix the note related to the GUC \"synchronized_standby_slots\""
}
] |
[
{
"msg_contents": "Hi PostgreSQL Community,\nI have encountered an issue when attempting to use pgstattuple extension\nwith sequences. When executing the following command:\n\nSELECT * FROM pgstattuple('serial');\nERROR: only heap AM is supported\n\nThis behaviour is observed in PostgreSQL versions post v11 [1] , where\nsequences support in pgstattuple used to work fine. However, this issue\nslipped through as we did not have any test cases to catch it.\n\nGiven the situation, I see two potential paths forward:\n*1/ Reintroduce Support for Sequences in pgstattuple*: This would be a\nrelatively small change. However, it's important to note that the purpose\nof pgstattuple is to provide statistics like the number of tuples, dead\ntuples, and free space in a relation. Sequences, on the other hand, return\nonly one value at a time and don’t have attributes like dead tuples.\nTherefore, the result for any sequence would consistently look something\nlike this:\n\nSELECT * FROM pgstattuple('serial');\n table_len | tuple_count | tuple_len | tuple_percent |\ndead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space |\nfree_percent\n-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n 8192 | 1 | 41 | 0.5 |\n0 | 0 | 0 | 8104 | 98.93\n(1 row)\n\n\n*2/ Explicitly Block Sequence Support in pgstattuple*: We could align\nsequences with other unsupported objects, such as foreign tables, by\nproviding a more explicit error message. For instance:\n\nSELECT * FROM pgstattuple('x');\nERROR: cannot get tuple-level statistics for relation \"x\"\nDETAIL: This operation is not supported for foreign tables.\n\nThis approach would ensure that the error handling for sequences is\nconsistent with how other unsupported objects are handled.\nPersonally, I lean towards the second approach, as it promotes consistency\nand clarity. However, I would greatly appreciate the community's feedback\nand suggestions on the best way to proceed.\nBased on the feedback received, I will work on the appropriate patch.\n\nLooking forward to your comments and feedback.\n\n[1]* Reference to Earlier Discussion:* For additional context, I previously\ndiscussed this issue on the pgsql-general mailing list. You can find the\ndiscussion\nhttps://www.postgresql.org/message-id/CACX%2BKaMOd3HHteOJNX7fkWxO%2BR%3DuLJkfKqE2-QUK8fKmKfOwqw%40mail.gmail.com.\nIn that thread, it was suggested that this could be considered a\ndocumentation bug, and that we might update the documentation and\nregression tests accordingly.\n\nRegards\nAyush Vatsa\nAWS\n\nHi PostgreSQL Community,I have encountered an issue when attempting to use pgstattuple extension with sequences. When executing the following command:SELECT * FROM pgstattuple('serial');ERROR: only heap AM is supportedThis behaviour is observed in PostgreSQL versions post v11 [1] , where sequences support in pgstattuple used to work fine. However, this issue slipped through as we did not have any test cases to catch it.Given the situation, I see two potential paths forward:1/ Reintroduce Support for Sequences in pgstattuple: This would be a relatively small change. However, it's important to note that the purpose of pgstattuple is to provide statistics like the number of tuples, dead tuples, and free space in a relation. Sequences, on the other hand, return only one value at a time and don’t have attributes like dead tuples. 
Therefore, the result for any sequence would consistently look something like this:SELECT * FROM pgstattuple('serial'); table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+-------------- 8192 | 1 | 41 | 0.5 | 0 | 0 | 0 | 8104 | 98.93(1 row)2/ Explicitly Block Sequence Support in pgstattuple: We could align sequences with other unsupported objects, such as foreign tables, by providing a more explicit error message. For instance:SELECT * FROM pgstattuple('x');ERROR: cannot get tuple-level statistics for relation \"x\"DETAIL: This operation is not supported for foreign tables.This approach would ensure that the error handling for sequences is consistent with how other unsupported objects are handled.Personally, I lean towards the second approach, as it promotes consistency and clarity. However, I would greatly appreciate the community's feedback and suggestions on the best way to proceed.Based on the feedback received, I will work on the appropriate patch.Looking forward to your comments and feedback.[1] Reference to Earlier Discussion: For additional context, I previously discussed this issue on the pgsql-general mailing list. You can find the discussion https://www.postgresql.org/message-id/CACX%2BKaMOd3HHteOJNX7fkWxO%2BR%3DuLJkfKqE2-QUK8fKmKfOwqw%40mail.gmail.com. In that thread, it was suggested that this could be considered a documentation bug, and that we might update the documentation and regression tests accordingly.RegardsAyush VatsaAWS",
"msg_date": "Mon, 26 Aug 2024 21:14:27 +0530",
"msg_from": "Ayush Vatsa <ayushvatsa1810@gmail.com>",
"msg_from_op": true,
"msg_subject": "Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 11:44 AM Ayush Vatsa <ayushvatsa1810@gmail.com> wrote:\n> Hi PostgreSQL Community,\n> I have encountered an issue when attempting to use pgstattuple extension with sequences. When executing the following command:\n>\n> SELECT * FROM pgstattuple('serial');\n> ERROR: only heap AM is supported\n>\n> This behaviour is observed in PostgreSQL versions post v11 [1] , where sequences support in pgstattuple used to work fine. However, this issue slipped through as we did not have any test cases to catch it.\n>\n> Given the situation, I see two potential paths forward:\n> 1/ Reintroduce Support for Sequences in pgstattuple: This would be a relatively small change. However, it's important to note that the purpose of pgstattuple is to provide statistics like the number of tuples, dead tuples, and free space in a relation. Sequences, on the other hand, return only one value at a time and don’t have attributes like dead tuples. Therefore, the result for any sequence would consistently look something like this:\n>\n> SELECT * FROM pgstattuple('serial');\n> table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent\n> -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> 8192 | 1 | 41 | 0.5 | 0 | 0 | 0 | 8104 | 98.93\n> (1 row)\n>\n>\n> 2/ Explicitly Block Sequence Support in pgstattuple: We could align sequences with other unsupported objects, such as foreign tables, by providing a more explicit error message. For instance:\n>\n> SELECT * FROM pgstattuple('x');\n> ERROR: cannot get tuple-level statistics for relation \"x\"\n> DETAIL: This operation is not supported for foreign tables.\n>\n> This approach would ensure that the error handling for sequences is consistent with how other unsupported objects are handled.\n> Personally, I lean towards the second approach, as it promotes consistency and clarity. However, I would greatly appreciate the community's feedback and suggestions on the best way to proceed.\n> Based on the feedback received, I will work on the appropriate patch.\n>\n> Looking forward to your comments and feedback.\n\nI don't really see what the problem is here. You state that the\ninformation pgstattuple provides isn't really useful for sequences, so\nthat means there's no real reason to do (1). As for (2), I'm not\nopposed to improving error messages but it's not clear to me why you\nthink that the current one is bad. You say that we should provide a\nmore explicit error message, but \"only heap AM is supported\" seems\npretty explicit to me: it doesn't spell out that this only works for\nrelkind='r', but since relam=heap is only possible for relkind='r',\nthere's not really any other reasonable interpretation, which IMHO\nmakes this pretty specific about what the problem is. Maybe you just\nfind it confusing, but that's a bit different from whether it's\nexplicit enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 13:03:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 09:14:27PM +0530, Ayush Vatsa wrote:\n> Given the situation, I see two potential paths forward:\n> *1/ Reintroduce Support for Sequences in pgstattuple*: This would be a\n> relatively small change. However, it's important to note that the purpose\n> of pgstattuple is to provide statistics like the number of tuples, dead\n> tuples, and free space in a relation. Sequences, on the other hand, return\n> only one value at a time and don�t have attributes like dead tuples.\n>\n> [...] \n> \n> *2/ Explicitly Block Sequence Support in pgstattuple*: We could align\n> sequences with other unsupported objects, such as foreign tables, by\n> providing a more explicit error message.\n\nWhile it is apparently pretty uncommon to use pgstattuple on sequences,\nthis is arguably a bug that should be fixed and back-patched. I've CC'd\nMichael Paquier, who is working on sequence AMs and may have thoughts. I\nhaven't looked at his patch set for that, but I'm assuming that it might\nfill in pg_class.relam for sequences, which would have the same effect as\noption 1.\n\nI see a couple of other places we might want to look into as part of this\nthread. Besides pgstattuple, I see that pageinspect and pg_surgery follow\na similar pattern. pgrowlocks does, too, but that one seems intentionally\nlimited to RELKIND_RELATION. I also see that amcheck explicitly allows\nsequences:\n\n\t/*\n\t * Sequences always use heap AM, but they don't show that in the catalogs.\n\t * Other relkinds might be using a different AM, so check.\n\t */\n\tif (ctx.rel->rd_rel->relkind != RELKIND_SEQUENCE &&\n\t\tctx.rel->rd_rel->relam != HEAP_TABLE_AM_OID)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n\t\t\t\t errmsg(\"only heap AM is supported\")));\n\nIMHO it would be good to establish some level of consistency here.\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 26 Aug 2024 12:26:27 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 1:26 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> While it is apparently pretty uncommon to use pgstattuple on sequences,\n> this is arguably a bug that should be fixed and back-patched.\n\nI don't understand what would make it a bug.\n\n> IMHO it would be good to establish some level of consistency here.\n\nSure, consistency is good, all other things being equal, but just\nsaying \"well this used to work one way and now it works another way\"\nisn't enough to say that there is a bug, or that something should be\nchanged.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 13:35:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "> You state that the\n> information pgstattuple provides isn't really useful for sequences, so\n> that means there's no real reason to do (1)\nThat's correct, but we should consider that up until v11,\nsequences were supported in pgstattuple. Their support\nwas removed unintentionally (I believe so). Therefore, it might be worth\ndiscussing whether it makes sense to reinstate support for sequences.\n\n> why you think that the current one is bad\nThe current implementation has some drawbacks.\nFor instance, when encountering other unsupported objects, the error looks\nlike this:\nERROR: cannot get tuple-level statistics for relation \"x\"\nDETAIL: This operation is not supported for foreign tables.\n\nHowever, for sequences, the message should explicitly\nstate that \"This operation is not supported for sequences.\"\n\nCurrently, we're deducing that the heap access method (AM) is\nfor relkind='r', so the message \"only heap AM is supported\" implies\nthat only relkind='r' are supported.\nThis prompted my thoughts on the matter.\n\nMoreover, if you refer to the code in pgstattuple.c\n<https://github.com/postgres/postgres/blob/master/contrib/pgstattuple/pgstattuple.c#L255-L256>\n,\nyou'll notice that sequences appear to be explicitly allowed in\npgstattuple,\nbut it results in an error encountered here -\nhttps://github.com/postgres/postgres/blob/master/contrib/pgstattuple/pgstattuple.c#L326-L329\nTherefore, I believe a small refactoring is needed to make the code cleaner\nand more consistent.\n\n> IMHO it would be good to establish some level of consistency here.\nAgree.\n\nLet me know your thoughts.\n\nRegards\nAyush Vatsa\nAWS\n\n> You state that the> information pgstattuple provides isn't really useful for sequences, so> that means there's no real reason to do (1)That's correct, but we should consider that up until v11, sequences were supported in pgstattuple. Their support was removed unintentionally (I believe so). Therefore, it might be worth discussing whether it makes sense to reinstate support for sequences.> why you think that the current one is badThe current implementation has some drawbacks. For instance, when encountering other unsupported objects, the error looks like this:ERROR: cannot get tuple-level statistics for relation \"x\"\nDETAIL: This operation is not supported for foreign tables.However, for sequences, the message should explicitly state that \"This operation is not supported for sequences.\"Currently, we're deducing that the heap access method (AM) is for relkind='r', so the message \"only heap AM is supported\" implies that only relkind='r' are supported. This prompted my thoughts on the matter.Moreover, if you refer to the code in pgstattuple.c, you'll notice that sequences appear to be explicitly allowed in pgstattuple, but it results in an error encountered here - https://github.com/postgres/postgres/blob/master/contrib/pgstattuple/pgstattuple.c#L326-L329Therefore, I believe a small refactoring is needed to make the code cleaner and more consistent.> IMHO it would be good to establish some level of consistency here.Agree.Let me know your thoughts.RegardsAyush VatsaAWS",
"msg_date": "Mon, 26 Aug 2024 23:39:35 +0530",
"msg_from": "Ayush Vatsa <ayushvatsa1810@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 01:35:52PM -0400, Robert Haas wrote:\n> On Mon, Aug 26, 2024 at 1:26 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> While it is apparently pretty uncommon to use pgstattuple on sequences,\n>> this is arguably a bug that should be fixed and back-patched.\n> \n> I don't understand what would make it a bug.\n> \n>> IMHO it would be good to establish some level of consistency here.\n> \n> Sure, consistency is good, all other things being equal, but just\n> saying \"well this used to work one way and now it works another way\"\n> isn't enough to say that there is a bug, or that something should be\n> changed.\n\nThe reason I think it's arguably a bug is because it used to work fine and\nthen started ERROR-ing after commit 4b82664. I'm fine with saying that we\ndon't think it's useful and intentionally deprecating it, but AFAICT no\nsuch determination has been made. I see no discussion about this on the\nthread for commit 4b82664, and the only caller of pgstat_heap()\nintentionally calls into the affected function for sequences (and has since\npgstattuple was introduced 18 years ago):\n\n\tif (RELKIND_HAS_TABLE_AM(rel->rd_rel->relkind) ||\n\t\trel->rd_rel->relkind == RELKIND_SEQUENCE)\n\t{\n\t\treturn pgstat_heap(rel, fcinfo);\n\t}\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 26 Aug 2024 13:14:28 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "Hi all,\nPlease find attached the patch that re-enables\nsupport for sequences within the pgstattuple extension.\nI have also included the necessary test cases for\nsequences, implemented in the form of regress tests.\n\nHere is the commitfest link for the same -\nhttps://commitfest.postgresql.org/49/5215/\n\nRegards\nAyush Vatsa\nAWS",
"msg_date": "Thu, 29 Aug 2024 22:17:57 +0530",
"msg_from": "Ayush Vatsa <ayushvatsa1810@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 10:17:57PM +0530, Ayush Vatsa wrote:\n> Please find attached the patch that re-enables\n> support for sequences within the pgstattuple extension.\n> I have also included the necessary test cases for\n> sequences, implemented in the form of regress tests.\n\nThanks. Robert, do you have any concerns with this?\n\n+select * from pgstattuple('serial');\n+ table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent \n+-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n+ 8192 | 1 | 41 | 0.5 | 0 | 0 | 0 | 8104 | 98.93\n+(1 row)\n\nI'm concerned that some of this might be platform-dependent and make the\ntest unstable. Perhaps we should just select count(*) here.\n\n+\t/**\n+\t * Sequences don't fall under heap AM but are still\n+\t * allowed for obtaining tuple-level statistics.\n+\t */\n\nI think we should be a bit more descriptive here, like the comment in\nverify_heapam.c:\n\n\t/*\n\t * Sequences always use heap AM, but they don't show that in the catalogs.\n\t * Other relkinds might be using a different AM, so check.\n\t */\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 29 Aug 2024 12:36:35 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 12:36:35PM -0500, Nathan Bossart wrote:\n> +select * from pgstattuple('serial');\n> + table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent \n> +-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------\n> + 8192 | 1 | 41 | 0.5 | 0 | 0 | 0 | 8104 | 98.93\n> +(1 row)\n> \n> I'm concerned that some of this might be platform-dependent and make the\n> test unstable. Perhaps we should just select count(*) here.\n\nSure enough, the CI testing for 32-bit is failing here [0].\n\n[0] https://api.cirrus-ci.com/v1/artifact/task/4798423386292224/testrun/build-32/testrun/pgstattuple/regress/regression.diffs\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 29 Aug 2024 12:38:33 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "> I'm concerned that some of this might be platform-dependent and make the\n> test unstable. Perhaps we should just select count(*) here.\n> Sure enough, the CI testing for 32-bit is failing here [0].\nThanks for catching that! I wasn't aware of this earlier.\n\n> I think we should be a bit more descriptive here\nRegarding the comment, I've tried to make it more\ndescriptive and simpler than the existing one in\nverify_heapam.c. Here’s the comment I propose:\n\n/*\n * Sequences implicitly use the heap AM, even though it's not explicitly\n * recorded in the catalogs. For other relation kinds, verify that the AM\n * is heap; otherwise, raise an error.\n */\n\nPlease let me know if this still isn’t clear enough,\nthen I can make further revisions in line with verify_heapam.c.\n\nThe patch with all the changes is attached.\n\nRegards\nAyush Vatsa\nAWS",
"msg_date": "Fri, 30 Aug 2024 00:17:47 +0530",
"msg_from": "Ayush Vatsa <ayushvatsa1810@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 12:17:47AM +0530, Ayush Vatsa wrote:\n> The patch with all the changes is attached.\n\nLooks generally reasonable to me.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 29 Aug 2024 14:25:11 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 1:36 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Thanks. Robert, do you have any concerns with this?\n\nI don't know if I'm exactly concerned but I don't understand what\nproblem we're solving, either. I thought Ayush said that the function\nwouldn't produce useful results for sequences; so then why do we need\nto change the code to enable it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Aug 2024 16:07:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 04:07:30PM -0400, Robert Haas wrote:\n> On Thu, Aug 29, 2024 at 1:36 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Thanks. Robert, do you have any concerns with this?\n> \n> I don't know if I'm exactly concerned but I don't understand what\n> problem we're solving, either. I thought Ayush said that the function\n> wouldn't produce useful results for sequences; so then why do we need\n> to change the code to enable it?\n\nI suppose it would be difficult to argue that it is actually useful, given\nit hasn't worked since v11 and apparently nobody noticed until recently.\nIf we're content to leave it unsupported, then sure, let's just remove the\n\"relkind == RELKIND_SEQUENCE\" check in pgstat_relation(). But I also don't\nhave a great reason to _not_ support it. It used to work (which appears to\nhave been intentional, based on the code), it was unintentionally broken,\nand it'd work again with a ~1 line change. \"SELECT count(*) FROM\nmy_sequence\" probably doesn't provide a lot of value, but I have no\nintention of proposing a patch that removes support for that.\n\nAll that being said, I don't have a terribly strong opinion, but I guess I\nlean towards re-enabling.\n\nAnother related inconsistency I just noticed in pageinspect:\n\n postgres=# select t_data from heap_page_items(get_raw_page('s', 0));\n t_data\n --------------------------------------\n \\x0100000000000000000000000000000000\n (1 row)\n\n postgres=# select tuple_data_split('s'::regclass, t_data, t_infomask, t_infomask2, t_bits) from heap_page_items(get_raw_page('s', 0));\n ERROR: only heap AM is supported\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 30 Aug 2024 16:06:03 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 04:06:03PM -0500, Nathan Bossart wrote:\n> I suppose it would be difficult to argue that it is actually useful, given\n> it hasn't worked since v11 and apparently nobody noticed until recently.\n> If we're content to leave it unsupported, then sure, let's just remove the\n> \"relkind == RELKIND_SEQUENCE\" check in pgstat_relation(). But I also don't\n> have a great reason to _not_ support it. It used to work (which appears to\n> have been intentional, based on the code), it was unintentionally broken,\n> and it'd work again with a ~1 line change. \"SELECT count(*) FROM\n> my_sequence\" probably doesn't provide a lot of value, but I have no\n> intention of proposing a patch that removes support for that.\n\nIMO, it can be useful to check the state of the page used by a\nsequence. We have a few tweaks in sequence.c like manipulations of\nt_infomask, and I can be good to check for its status on corrupted\nsystems.\n--\nMichael",
"msg_date": "Mon, 2 Sep 2024 08:44:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "Here is roughly what I had in mind to commit, but I'm not sure there's a\nconsensus on doing this.\n\n-- \nnathan",
"msg_date": "Tue, 3 Sep 2024 14:52:50 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Fri, 30 Aug 2024, 23:06 Nathan Bossart, <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Aug 30, 2024 at 04:07:30PM -0400, Robert Haas wrote:\n> > On Thu, Aug 29, 2024 at 1:36 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> Thanks. Robert, do you have any concerns with this?\n> >\n> > I don't know if I'm exactly concerned but I don't understand what\n> > problem we're solving, either. I thought Ayush said that the function\n> > wouldn't produce useful results for sequences; so then why do we need\n> > to change the code to enable it?\n>\n> I suppose it would be difficult to argue that it is actually useful, given\n> it hasn't worked since v11 and apparently nobody noticed until recently.\n> If we're content to leave it unsupported, then sure, let's just remove the\n> \"relkind == RELKIND_SEQUENCE\" check in pgstat_relation(). But I also don't\n> have a great reason to _not_ support it. It used to work (which appears to\n> have been intentional, based on the code), it was unintentionally broken,\n> and it'd work again with a ~1 line change. \"SELECT count(*) FROM\n> my_sequence\" probably doesn't provide a lot of value, but I have no\n> intention of proposing a patch that removes support for that.\n>\n> All that being said, I don't have a terribly strong opinion, but I guess I\n> lean towards re-enabling.\n>\n> Another related inconsistency I just noticed in pageinspect:\n>\n> postgres=# select t_data from heap_page_items(get_raw_page('s', 0));\n> t_data\n> --------------------------------------\n> \\x0100000000000000000000000000000000\n> (1 row)\n>\n> postgres=# select tuple_data_split('s'::regclass, t_data, t_infomask, t_infomask2, t_bits) from heap_page_items(get_raw_page('s', 0));\n> ERROR: only heap AM is supported\n\nI don't think this is an inconsistency:\n\nheap_page_items works on a raw page-as-bytea (produced by\nget_raw_page) without knowing about or accessing the actual relation\ntype of that page, so it doesn't have the context why it should error\nout if the page looks similar enough to a heap page. I could feed it\nan arbitrary bytea, and it should still work as long as that bytea\nlooks similar enough to a heap page.\ntuple_data_split, however, uses the regclass to decode the contents of\nthe tuple, and can thus determine with certainty based on that\nregclass that it was supplied incorrect (non-heapAM table's regclass)\narguments. It therefore has enough context to bail out and stop trying\nto decode the page's tuple data.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 3 Sep 2024 22:19:33 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Tue, Sep 03, 2024 at 10:19:33PM +0200, Matthias van de Meent wrote:\n> On Fri, 30 Aug 2024, 23:06 Nathan Bossart, <nathandbossart@gmail.com> wrote:\n>> Another related inconsistency I just noticed in pageinspect:\n>>\n>> postgres=# select t_data from heap_page_items(get_raw_page('s', 0));\n>> t_data\n>> --------------------------------------\n>> \\x0100000000000000000000000000000000\n>> (1 row)\n>>\n>> postgres=# select tuple_data_split('s'::regclass, t_data, t_infomask, t_infomask2, t_bits) from heap_page_items(get_raw_page('s', 0));\n>> ERROR: only heap AM is supported\n> \n> I don't think this is an inconsistency:\n> \n> heap_page_items works on a raw page-as-bytea (produced by\n> get_raw_page) without knowing about or accessing the actual relation\n> type of that page, so it doesn't have the context why it should error\n> out if the page looks similar enough to a heap page. I could feed it\n> an arbitrary bytea, and it should still work as long as that bytea\n> looks similar enough to a heap page.\n> tuple_data_split, however, uses the regclass to decode the contents of\n> the tuple, and can thus determine with certainty based on that\n> regclass that it was supplied incorrect (non-heapAM table's regclass)\n> arguments. It therefore has enough context to bail out and stop trying\n> to decode the page's tuple data.\n\nMy point is really that tuple_data_split() needlessly ERRORs for sequences.\nOther heap functions work fine for sequences, and we know it uses the heap\ntable AM, so why should tuple_data_split() fail? I agree that the others\nneedn't enforce relkind checks and that they might succeed in some cases\nwhere tuple_data_split() might not be appropriate.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 3 Sep 2024 15:40:53 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "Barring objections, I'm planning to commit v3 soon. Robert/Matthias, I'm\nnot sure you are convinced this is the right thing to do (or worth doing,\nrather), but I don't sense that you are actually opposed to it, either.\nPlease correct me if I am wrong.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 11 Sep 2024 15:36:12 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Wed, 11 Sept 2024 at 21:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Barring objections, I'm planning to commit v3 soon. Robert/Matthias, I'm\n> not sure you are convinced this is the right thing to do (or worth doing,\n> rather), but I don't sense that you are actually opposed to it, either.\n> Please correct me if I am wrong.\n\nCorrect: I do think making heapam-related inspection functions have a\nconsistent behaviour when applied to sequences is beneficial, with no\npreference towards specifically supporting or not supporting sequences\nin these functions. If people that work on sequences think it's better\nto also support inspection of sequences, then I think that's a good\nreason to add that support where it doesn't already exist.\n\nAs for patch v3, that seems fine with me.\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 11 Sep 2024 23:02:43 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 11:02:43PM +0100, Matthias van de Meent wrote:\n> On Wed, 11 Sept 2024 at 21:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n> > Barring objections, I'm planning to commit v3 soon. Robert/Matthias, I'm\n> > not sure you are convinced this is the right thing to do (or worth doing,\n> > rather), but I don't sense that you are actually opposed to it, either.\n> > Please correct me if I am wrong.\n> \n> Correct: I do think making heapam-related inspection functions have a\n> consistent behaviour when applied to sequences is beneficial, with no\n> preference towards specifically supporting or not supporting sequences\n> in these functions. If people that work on sequences think it's better\n> to also support inspection of sequences, then I think that's a good\n> reason to add that support where it doesn't already exist.\n> \n> As for patch v3, that seems fine with me.\n\n+1 from here as well, after looking at what v3 is doing for these two\nmodules.\n--\nMichael",
"msg_date": "Thu, 12 Sep 2024 11:19:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 11:19:00AM +0900, Michael Paquier wrote:\n> On Wed, Sep 11, 2024 at 11:02:43PM +0100, Matthias van de Meent wrote:\n>> As for patch v3, that seems fine with me.\n> \n> +1 from here as well, after looking at what v3 is doing for these two\n> modules.\n\nCommitted.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 12 Sep 2024 16:39:14 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 04:39:14PM -0500, Nathan Bossart wrote:\n> Committed.\n\nUgh, the buildfarm is unhappy with the new tests [0] [1]. Will fix.\n\n[0] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2024-09-12%2022%3A54%3A45\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-09-12%2022%3A38%3A13\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 12 Sep 2024 19:41:30 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 07:41:30PM -0500, Nathan Bossart wrote:\n> On Thu, Sep 12, 2024 at 04:39:14PM -0500, Nathan Bossart wrote:\n> > Committed.\n> \n> Ugh, the buildfarm is unhappy with the new tests [0] [1]. Will fix.\n\nI'd suggest to switch the test to return a count() and make sure that\none record exists. The data in the page does not really matter.\n--\nMichael",
"msg_date": "Fri, 13 Sep 2024 09:47:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 09:47:36AM +0900, Michael Paquier wrote:\n> On Thu, Sep 12, 2024 at 07:41:30PM -0500, Nathan Bossart wrote:\n>> Ugh, the buildfarm is unhappy with the new tests [0] [1]. Will fix.\n> \n> I'd suggest to switch the test to return a count() and make sure that\n> one record exists. The data in the page does not really matter.\n\nThat's what I had in mind. I see that skimmer is failing with this error:\n\n\tERROR: cannot access temporary tables during a parallel operation\n\nThis makes sense because that machine has\ndebug_parallel_query/force_parallel_mode set to \"regress\", but this test\nfile has used a temporary table for a couple of years without issue...\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 12 Sep 2024 19:56:56 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 07:56:56PM -0500, Nathan Bossart wrote:\n> I see that skimmer is failing with this error:\n> \n> \tERROR: cannot access temporary tables during a parallel operation\n> \n> This makes sense because that machine has\n> debug_parallel_query/force_parallel_mode set to \"regress\", but this test\n> file has used a temporary table for a couple of years without issue...\n\nOh, the answer seems to be commits aeaaf52 and 47a22dc. In short, we can't\nuse a temporary sequence in this test for versions older than v15.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 12 Sep 2024 20:42:09 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 08:42:09PM -0500, Nathan Bossart wrote:\n> Oh, the answer seems to be commits aeaaf52 and 47a22dc. In short, we can't\n> use a temporary sequence in this test for versions older than v15.\n\nHere's a patch to make the sequence permanent and to make the output of\ntuple_data_split() not depend on endianness.\n\n-- \nnathan",
"msg_date": "Thu, 12 Sep 2024 21:12:29 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Here's a patch to make the sequence permanent and to make the output of\n> tuple_data_split() not depend on endianness.\n\n+1 --- I checked this on mamba's host and it does produce\n\"\\\\x0100000000000001\" regardless of endianness.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Sep 2024 22:40:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on Potential\n Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 10:40:11PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Here's a patch to make the sequence permanent and to make the output of\n>> tuple_data_split() not depend on endianness.\n> \n> +1 --- I checked this on mamba's host and it does produce\n> \"\\\\x0100000000000001\" regardless of endianness.\n\nThanks for checking. I'll commit this fix in the morning.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 12 Sep 2024 22:41:27 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 10:41:27PM -0500, Nathan Bossart wrote:\n> Thanks for checking. I'll commit this fix in the morning.\n\nCommitted.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 13 Sep 2024 10:21:21 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pgstattuple on Sequences: Seeking Community Feedback on\n Potential Patch"
}
] |
[
{
"msg_contents": "Hi!\n\nThis is my first time contribution to the PostgreSQL, so I’m not really\nfamiliar with the whole process. The attached patch adds basic support\nfor Type=notify-reload systemd services, that is, sends readiness\nnotifications on service reload. This allows waiting for postmaster\nreload to complete (note that child reloads still happen asynchronously\nand we don’t wait for them).\n\n—\nIvan",
"msg_date": "Mon, 26 Aug 2024 19:03:46 +0300",
"msg_from": "mr.trubach@icloud.com",
"msg_from_op": true,
"msg_subject": "[PATCH] Support systemd readiness notifications on reload"
},
{
"msg_contents": "On 26.08.24 18:03, mr.trubach@icloud.com wrote:\n> This is my first time contribution to the PostgreSQL, so I’m not really\n> familiar with the whole process. The attached patch adds basic support\n> for Type=notify-reload systemd services, that is, sends readiness\n> notifications on service reload. This allows waiting for postmaster\n> reload to complete (note that child reloads still happen asynchronously\n> and we don’t wait for them).\n\nMy understanding of this new notify-reload type is that it would allow \nsystemd to sequence configuration reloads that depend on each other. \nBut if we're only waiting for the postmaster reload to complete, are we \nreally satisfying that purpose?\n\nIt could be quite useful if we could somehow get the information that \nall backends have completed a configuration reload, but that would \nobviously be a much more complicated feature.\n\nAbout the patch: For this purpose, I would not use \nINSTR_TIME_SET_CURRENT(), which is too much of an abstraction, but use \nclock_gettime(CLOCK_MONOTONIC) directly.\n\nAlso, there would need to be some documentation updates in \ndoc/src/sgml/runtime.sgml (but it's ok if the first patch version omits \nthat).\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 11:01:22 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support systemd readiness notifications on reload"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm getting back to work on the index prefetching patch [1], but one\nannoying aspect of that patch is that it's limited to the context of a\nsingle executor node. It can be very effective when there's an index\nscan with many matches for a key, but it's entirely useless for plans\nwith many tiny index scans.\n\nFor example, consider a plan like:\n\n Nested Loop\n -> ... some scan of a \"fact\" table ...\n -> Index Scan on PK of a \"dimension\" table\n\nFor this the index prefetching is entirely useless - there'll be just\none match for each outer row.\n\nBut there still is opportunity for prefetching - we could look ahead in\nthe outer relation (which can be arbitrary node feeding the nestloop),\nand request the index scan to prefetch the matching tuples.\n\nOf course, this is not something the nestloop can do on it's own, it\nwould require support from the executor. For some nodes prefetching is\npretty straightforward (e.g. index scan), for other nodes it can be\nimpossible or at least very hard / too expensive.\n\nI was a bit bored over the weekend, so I decided to experiment a bit and\nsee how difficult would this be, and how much could it gain. Attached is\nan early PoC version of that patch - it's very limited (essentially just\nNL + inner index scan), but it seems to be working. It's only about 250\ninsertions, to tiny.\n\n\nThe patch does this:\n--------------------\n\n1) ExecPrefetch executor callback\n\nThis call is meant to do the actual prefetching - the parent node sets\neverything up almost as if for ExecProcNode(), but does not expect the\nactual result. The child either does some prefetching or nothing.\n\n2) ExecSupportsPrefetch to identify what nodes accept ExecPrefetch()\n\nThis simply says if a given node supports prefetching. The only place\ncalling this is the nested loop, to enable prefetching only for nested\nloops with (parameterized) index scans.\n\n3) ExecPrefetchIndexScan doing prefetching in index scans\n\nThis is just trivial IndexNext() variant, getting TIDs and calling\nPrefetchBuffer() on them. Right now it just prefetches everything, but\nthat's seems wrong - this is where the original index prefetching patch\nshould kick in.\n\n4) ExecNestLoop changes\n\nThis is where the actual magic happens - if the inner child knows how to\nprefetch stuff (per ExecSupportsPrefetch), this switches to reading\nbatches of outer slots, and calls ExecPrefetch() on them. Right now the\nbatch size is hardcoded to 32, but it might use effective_io_concurrency\nor something like that. It's a bit naive in other aspects too - it\nalways reads and prefetches the whole batch at once, instead of ramping\nup and then consuming and prefetching slots one by one. Good enough for\nPoC, but probably needs improvements.\n\n5) adds enable_nestloop_prefetch to enable/disable this easily\n\n\nbenchmark\n---------\n\nOf course, the main promise of this is faster queries, so I did a simple\nbenchmark, with a query like this:\n\n SELECT * FROM fact_table f JOIN dimension d ON (f.id = d.id)\n WHERE f.r < 0.0001;\n\nThe \"r\" is simply a random value, allowing to select arbitrary fraction\nof the large fact table \"f\". Here it selects 0.01%, so ~10k rows from\n100M table. Dimensions have 10M rows. 
See the .sql file attached.\n\nFor a variable number of dimensions (1 to 4) the results look like this:\n\n prefetch 1 2 3 4\n ----------------------------------------\n off 3260 6193 8993 11980\n on 2145 3897 5531 7352\n ----------------------------------------\n 66% 63% 62% 61%\n\nThis is on \"cold\" data, with a restart + drop caches between runs. The\nresults suggest the prefetching makes it about twice as fast. I was\nhoping for more, but not bad for a Poc, chances are it can be improved.\n\n\nI just noticed there's a couple failures in the regression tests, if I\nchange the GUC to \"true\" by default. I haven't looked into that yet, but\nI guess there's some mistake in resetting the child node, or something\nlike that. Will investigate.\n\nWhat I find a bit annoying is the amount of processing required to\nhappen twice - once for the prefetching, once for the actual execution.\nIn particular, this needs to set up all the PARAM_EXEC slots as if the\ninner plan was to do regular execution.\n\nThe other thing is that while ExecPrefetchIndexScan() only prefetches\nthe heap page, it still needs to navigate the index to the leaf page. If\nthe index is huge, that may require I/O. But we'd have to pay that cost\nshortly after anyway. It just isn't asynchronous.\n\nOne minor detail is that I ran into some issues with tuple slots. I need\na bunch of them to stash the slots received from the outer plan, so I\ncreated a couple slots with TTSOpsVirtual. And that mostly works, except\nthat later ExecInterpExpr() happens to call slot_getsomeattrs() and that\nfails because tts_virtual_getsomeattrs() says:\n\n elog(ERROR, \"getsomeattrs is not required to be called on a virtual\n tuple table slot\");\n\nOK, that call is not necessary for virtual slots, it's noop. But it's\nnot me calling that, the interpreter does that for some reason. I did\ncomment that error out in the patch, but I wonder what's the proper way\nto make this work ...\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/cf85f46f-b02f-05b2-5248-5000b894ebab%40enterprisedb.com\n\n-- \nTomas Vondra",
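To illustrate the ExecPrefetchIndexScan() idea described above (a fragment only, not the PoC's actual code; "scandesc" and "node" are assumed to be the index scan's IndexScanDesc and IndexScanState), the prefetch callback can simply walk the index for the current parameter values and prefetch the referenced heap blocks without forming result tuples:

    ItemPointer tid;

    while ((tid = index_getnext_tid(scandesc, ForwardScanDirection)) != NULL)
        PrefetchBuffer(node->ss.ss_currentRelation,
                       MAIN_FORKNUM,
                       ItemPointerGetBlockNumber(tid));

As the message notes, this currently prefetches everything; this loop is also where the distance control from the original index prefetching patch would eventually have to kick in.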
"msg_date": "Mon, 26 Aug 2024 18:06:04 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": true,
"msg_subject": "PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
},
{
"msg_contents": "On 8/26/24 18:06, Tomas Vondra wrote:\n> \n> I just noticed there's a couple failures in the regression tests, if I\n> change the GUC to \"true\" by default. I haven't looked into that yet, but\n> I guess there's some mistake in resetting the child node, or something\n> like that. Will investigate.\n> \n\nTurned out to be a silly bug - not resetting the queue on rescan. Fixed,\nand also removed two not-quite-correct asserts. No other changes.\n\n\nregards\n\n-- \nTomas Vondra",
"msg_date": "Tue, 27 Aug 2024 12:38:48 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": true,
"msg_subject": "Re: PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
},
{
"msg_contents": "Hi,\n\nOn 2024-08-26 18:06:04 +0200, Tomas Vondra wrote:\n> I'm getting back to work on the index prefetching patch [1], but one\n> annoying aspect of that patch is that it's limited to the context of a\n> single executor node. It can be very effective when there's an index\n> scan with many matches for a key, but it's entirely useless for plans\n> with many tiny index scans.\n\nRight.\n\n\n\n> The patch does this:\n> --------------------\n>\n> 1) ExecPrefetch executor callback\n>\n> This call is meant to do the actual prefetching - the parent node sets\n> everything up almost as if for ExecProcNode(), but does not expect the\n> actual result. The child either does some prefetching or nothing.\n>\n> 2) ExecSupportsPrefetch to identify what nodes accept ExecPrefetch()\n>\n> This simply says if a given node supports prefetching. The only place\n> calling this is the nested loop, to enable prefetching only for nested\n> loops with (parameterized) index scans.\n>\n> 3) ExecPrefetchIndexScan doing prefetching in index scans\n>\n> This is just trivial IndexNext() variant, getting TIDs and calling\n> PrefetchBuffer() on them. Right now it just prefetches everything, but\n> that's seems wrong - this is where the original index prefetching patch\n> should kick in.\n>\n> 4) ExecNestLoop changes\n>\n> This is where the actual magic happens - if the inner child knows how to\n> prefetch stuff (per ExecSupportsPrefetch), this switches to reading\n> batches of outer slots, and calls ExecPrefetch() on them. Right now the\n> batch size is hardcoded to 32, but it might use effective_io_concurrency\n> or something like that. It's a bit naive in other aspects too - it\n> always reads and prefetches the whole batch at once, instead of ramping\n> up and then consuming and prefetching slots one by one. Good enough for\n> PoC, but probably needs improvements.\n>\n> 5) adds enable_nestloop_prefetch to enable/disable this easily\n\nHm. Doing this via more executor tree traversals doesn't seem optimal, that's\nnot exactly free. And because the prefetching can be beneficial even if there\nare nodes above the inner parametrized index node, we IMO would want to\niterate through multiple node levels.\n\nHave you considered instead expanding the parameterized scan logic? Right now\nnestloop passes down values one-by-one via PARAM_EXEC. What if we expanded\nthat to allow nodes, e.g. nestloop in this case, to pass down multiple values\nin one parameter? That'd e.g. allow passing down multiple rows to fetch from\nnodeNestloop.c to nodeIndexscan.c without needing to iterate over the executor\nstate tree. And it might be more powerful than just doing prefetching -\ne.g. we could use one ScalarArrayOps scan in the index instead of doing a\nseparate scan for each of the to-be-prefetched values.\n\n\n\n> benchmark\n> ---------\n> \n> Of course, the main promise of this is faster queries, so I did a simple\n> benchmark, with a query like this:\n> \n> SELECT * FROM fact_table f JOIN dimension d ON (f.id = d.id)\n> WHERE f.r < 0.0001;\n> \n> The \"r\" is simply a random value, allowing to select arbitrary fraction\n> of the large fact table \"f\". Here it selects 0.01%, so ~10k rows from\n> 100M table. Dimensions have 10M rows. 
See the .sql file attached.\n> \n> For a variable number of dimensions (1 to 4) the results look like this:\n> \n> prefetch 1 2 3 4\n> ----------------------------------------\n> off 3260 6193 8993 11980\n> on 2145 3897 5531 7352\n> ----------------------------------------\n> 66% 63% 62% 61%\n> \n> This is on \"cold\" data, with a restart + drop caches between runs. The\n> results suggest the prefetching makes it about twice as fast. I was\n> hoping for more, but not bad for a Poc, chances are it can be improved.\n\nI think that's indeed a pretty nice win.\n\n\n\n> One minor detail is that I ran into some issues with tuple slots. I need\n> a bunch of them to stash the slots received from the outer plan, so I\n> created a couple slots with TTSOpsVirtual. And that mostly works, except\n> that later ExecInterpExpr() happens to call slot_getsomeattrs() and that\n> fails because tts_virtual_getsomeattrs() says:\n>\n> elog(ERROR, \"getsomeattrs is not required to be called on a virtual\n> tuple table slot\");\n>\n> OK, that call is not necessary for virtual slots, it's noop. But it's\n> not me calling that, the interpreter does that for some reason. I did\n> comment that error out in the patch, but I wonder what's the proper way\n> to make this work ...\n\nHm, that's odd. slot_getsomeattrs() only calls tts_virtual_getsomeattrs() if\nslot->tts_nvalid is smaller than the requested attnum. Which should never be\nthe case for a virtual slot. So I suspect there might be some bug leading to\nwrong contents being stored in slots?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:40:06 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 2:40 PM Andres Freund <andres@anarazel.de> wrote:\n> Have you considered instead expanding the parameterized scan logic? Right now\n> nestloop passes down values one-by-one via PARAM_EXEC. What if we expanded\n> that to allow nodes, e.g. nestloop in this case, to pass down multiple values\n> in one parameter? That'd e.g. allow passing down multiple rows to fetch from\n> nodeNestloop.c to nodeIndexscan.c without needing to iterate over the executor\n> state tree.\n\nThis sounds a bit like block nested loop join.\n\n> And it might be more powerful than just doing prefetching -\n> e.g. we could use one ScalarArrayOps scan in the index instead of doing a\n> separate scan for each of the to-be-prefetched values.\n\nScalarArrayOps within nbtree are virtually the same thing as regular\nindex scans these days. That could make a big difference (perhaps this\nis obvious).\n\nOne reason to do it this way is because it cuts down on index descent\ncosts, and other executor overheads. But it is likely that it will\nalso make prefetching itself more effective, too -- just because\nprefetching will naturally end up with fewer, larger batches of\nlogically related work.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:53:28 -0400",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
},
{
"msg_contents": "On 8/27/24 20:40, Andres Freund wrote:\n> Hi,\n> \n> On 2024-08-26 18:06:04 +0200, Tomas Vondra wrote:\n>> I'm getting back to work on the index prefetching patch [1], but one\n>> annoying aspect of that patch is that it's limited to the context of a\n>> single executor node. It can be very effective when there's an index\n>> scan with many matches for a key, but it's entirely useless for plans\n>> with many tiny index scans.\n> \n> Right.\n> \n> \n> \n>> The patch does this:\n>> --------------------\n>>\n>> 1) ExecPrefetch executor callback\n>>\n>> This call is meant to do the actual prefetching - the parent node sets\n>> everything up almost as if for ExecProcNode(), but does not expect the\n>> actual result. The child either does some prefetching or nothing.\n>>\n>> 2) ExecSupportsPrefetch to identify what nodes accept ExecPrefetch()\n>>\n>> This simply says if a given node supports prefetching. The only place\n>> calling this is the nested loop, to enable prefetching only for nested\n>> loops with (parameterized) index scans.\n>>\n>> 3) ExecPrefetchIndexScan doing prefetching in index scans\n>>\n>> This is just trivial IndexNext() variant, getting TIDs and calling\n>> PrefetchBuffer() on them. Right now it just prefetches everything, but\n>> that's seems wrong - this is where the original index prefetching patch\n>> should kick in.\n>>\n>> 4) ExecNestLoop changes\n>>\n>> This is where the actual magic happens - if the inner child knows how to\n>> prefetch stuff (per ExecSupportsPrefetch), this switches to reading\n>> batches of outer slots, and calls ExecPrefetch() on them. Right now the\n>> batch size is hardcoded to 32, but it might use effective_io_concurrency\n>> or something like that. It's a bit naive in other aspects too - it\n>> always reads and prefetches the whole batch at once, instead of ramping\n>> up and then consuming and prefetching slots one by one. Good enough for\n>> PoC, but probably needs improvements.\n>>\n>> 5) adds enable_nestloop_prefetch to enable/disable this easily\n> \n> Hm. Doing this via more executor tree traversals doesn't seem optimal, that's\n> not exactly free.\n\nGood point. I was wondering what the cost of the executor call might be\ntoo, so I did a test with cached data (the results I presented in the\nfirst message were with a restart + page cache drop before each query,\ni.e. \"best case\" for prefetching).\n\nIf I run each query twice - uncached and cached - I get this:\n\n | prefetch=off | prefetch=on\n dimensions | cache nocache | cache nocache | cache nocache\n --------------------------------------------------------------------\n 1 | 61 3314 | 74 2172 | 121% 66%\n 2 | 100 6327 | 129 3900 | 129% 62%\n 3 | 137 9374 | 177 5637 | 129% 60%\n 4 | 169 12211 | 225 7319 | 133% 60%\n\nThe columns at the end are (prefetch=on)/(prefetch=off). This shows that\nfor uncached data, we get ~40% speedup, while for cached it's ~30%\nregression. That's not great, but where does the regression come from?\n\nPer flamegraphs, the wast majority of that is due to doing btgettuple\ntwice. ExecPrefetchIndexScan simply doing index_getnext_tid() too, just\nlike IndexNext().\n\nIf I remove that, leaving ExecPrefetchIndexScan() empty, the difference\nentirely disappears. The difference is ~1%, maybe. So at least in this\ncase the overhead of traversal is quite negligible. 
I'm actually\nsurprised copying slots and building the parameters twice does not cause\na regression, but that's what I see.\n\n> And because the prefetching can be beneficial even if there\n> are nodes above the inner parametrized index node, we IMO would want\n> to iterate through multiple node levels.\n\nI don't think we'd actually want that. It makes it very hard to\ndetermine how far ahead to prefetch, because how would you know what the\nchild nodes are doing? I think it'd be easy to end up prefetching way\ntoo much data. But also because the subnodes likely need to do sync I/O\nto do *their* prefetching.\n\nI mean, what if the inner path has another nestloop? Surely that needs\nto get the outer tuple? If we get those tuples, should we prefetch the\nmatching inner tuples too? Wouldn't that means we could be prefetching\nexponential number of tuples?\n\nI honestly don't know - maybe there are cases where this makes sense,\nbut I'm not sure why would that be \"incompatible\" with something like\nExecutorPrefetch().\n\nIn any case, I think it'd be fine to have prefetching at least for\nsimple cases, where we know it can help. It wasn't my ambition to make\nthe whole executor somehow asynchronous ;-)\n\n\n> Have you considered instead expanding the parameterized scan logic? Right now\n> nestloop passes down values one-by-one via PARAM_EXEC. What if we expanded\n> that to allow nodes, e.g. nestloop in this case, to pass down multiple values\n> in one parameter? That'd e.g. allow passing down multiple rows to fetch from\n> nodeNestloop.c to nodeIndexscan.c without needing to iterate over the executor\n> state tree. And it might be more powerful than just doing prefetching -\n> e.g. we could use one ScalarArrayOps scan in the index instead of doing a\n> separate scan for each of the to-be-prefetched values.\n> \n\nI have not, but it seems \"batching\" the prefetches in some way might be\na way to go. I'm not sure it'll be much more efficient (not just for\nbtree, what about other index AMs?).\n\nBut then that really starts to look like BNL - why would we even batch\nprefetches and then do the rest row-by-row? We could just as well pass\ndown the batch to the index scan, and let it handle the prefetches.\n\nThat'd be much less about prefetching and more about allowing batching\nfor some nodes.\n\n> \n> \n>> benchmark\n>> ---------\n>>\n>> Of course, the main promise of this is faster queries, so I did a simple\n>> benchmark, with a query like this:\n>>\n>> SELECT * FROM fact_table f JOIN dimension d ON (f.id = d.id)\n>> WHERE f.r < 0.0001;\n>>\n>> The \"r\" is simply a random value, allowing to select arbitrary fraction\n>> of the large fact table \"f\". Here it selects 0.01%, so ~10k rows from\n>> 100M table. Dimensions have 10M rows. See the .sql file attached.\n>>\n>> For a variable number of dimensions (1 to 4) the results look like this:\n>>\n>> prefetch 1 2 3 4\n>> ----------------------------------------\n>> off 3260 6193 8993 11980\n>> on 2145 3897 5531 7352\n>> ----------------------------------------\n>> 66% 63% 62% 61%\n>>\n>> This is on \"cold\" data, with a restart + drop caches between runs. The\n>> results suggest the prefetching makes it about twice as fast. I was\n>> hoping for more, but not bad for a Poc, chances are it can be improved.\n> \n> I think that's indeed a pretty nice win.\n> \n\nYeah. It's a bit of a \"best case\" workload, though.\n\n> \n> \n>> One minor detail is that I ran into some issues with tuple slots. 
I need\n>> a bunch of them to stash the slots received from the outer plan, so I\n>> created a couple slots with TTSOpsVirtual. And that mostly works, except\n>> that later ExecInterpExpr() happens to call slot_getsomeattrs() and that\n>> fails because tts_virtual_getsomeattrs() says:\n>>\n>> elog(ERROR, \"getsomeattrs is not required to be called on a virtual\n>> tuple table slot\");\n>>\n>> OK, that call is not necessary for virtual slots, it's noop. But it's\n>> not me calling that, the interpreter does that for some reason. I did\n>> comment that error out in the patch, but I wonder what's the proper way\n>> to make this work ...\n> \n> Hm, that's odd. slot_getsomeattrs() only calls tts_virtual_getsomeattrs() if\n> slot->tts_nvalid is smaller than the requested attnum. Which should never be\n> the case for a virtual slot. So I suspect there might be some bug leading to\n> wrong contents being stored in slots?\n> \n\nHmmm, it seems I can no longer reproduce this :-( Chances are it was\nhappening because of some other bug that I fixed, and didn't realize it\nwas causing this too. Sorry :-/\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 28 Aug 2024 00:38:44 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": true,
"msg_subject": "Re: PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
},
{
"msg_contents": "\n\nOn 8/27/24 20:53, Peter Geoghegan wrote:\n> On Tue, Aug 27, 2024 at 2:40 PM Andres Freund <andres@anarazel.de> wrote:\n>> Have you considered instead expanding the parameterized scan logic? Right now\n>> nestloop passes down values one-by-one via PARAM_EXEC. What if we expanded\n>> that to allow nodes, e.g. nestloop in this case, to pass down multiple values\n>> in one parameter? That'd e.g. allow passing down multiple rows to fetch from\n>> nodeNestloop.c to nodeIndexscan.c without needing to iterate over the executor\n>> state tree.\n> \n> This sounds a bit like block nested loop join.\n> \n\nYeah.\n\n>> And it might be more powerful than just doing prefetching -\n>> e.g. we could use one ScalarArrayOps scan in the index instead of doing a\n>> separate scan for each of the to-be-prefetched values.\n> \n> ScalarArrayOps within nbtree are virtually the same thing as regular\n> index scans these days. That could make a big difference (perhaps this\n> is obvious).\n> \n> One reason to do it this way is because it cuts down on index descent\n> costs, and other executor overheads. But it is likely that it will\n> also make prefetching itself more effective, too -- just because\n> prefetching will naturally end up with fewer, larger batches of\n> logically related work.\n> \n\nPerhaps. So nestloop would pass down multiple values, the inner subplan\nwould do whatever it wants (including prefetching), and then return the\nmatching rows, somehow? It's not very clear to me how would we return\nthe tuples for many matches, but it seems to shift the prefetching\ncloser to the \"normal\" index prefetching discussed elsewhere.\n\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 28 Aug 2024 00:44:50 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": true,
"msg_subject": "Re: PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 6:44 PM Tomas Vondra <tomas@vondra.me> wrote:\n> > One reason to do it this way is because it cuts down on index descent\n> > costs, and other executor overheads. But it is likely that it will\n> > also make prefetching itself more effective, too -- just because\n> > prefetching will naturally end up with fewer, larger batches of\n> > logically related work.\n> >\n>\n> Perhaps.\n\nI expect this to be particularly effective whenever there is naturally\noccuring locality. I think that's fairly common. We'll sort the SAOP\narray on the nbtree side, as we always do.\n\n> So nestloop would pass down multiple values, the inner subplan\n> would do whatever it wants (including prefetching), and then return the\n> matching rows, somehow?\n\nRight.\n\n> It's not very clear to me how would we return\n> the tuples for many matches, but it seems to shift the prefetching\n> closer to the \"normal\" index prefetching discussed elsewhere.\n\nIt'll be necessary to keep track of which outer side rows relate to\nwhich inner-side array values (within a given batch/block). Some new\ndata structure will be needed to manage that book keeping.\n\nCurrently, we deduplicate arrays for SAOP scans. I suppose that it\nworks that way because it's not really clear what it would mean for\nthe scan to have duplicate array keys. I don't see any need to change\nthat for block nested loop join/whatever this is. We would have to use\nthe new data structure to \"pair up\" outer side tuples with their\nassociated inner side result sets, at the end of processing each\nbatch/block. That way we avoid repeating the same inner index scan\nwithin a given block/batch -- a little like with a memoize node.\n\nObviously, that's the case where we can exploit naturally occuring\nlocality most effectively -- the case where multiple duplicate inner\nindex scans are literally combined into only one. But, as I already\ntouched on, locality will be important in a variety of cases, not just\nthis one.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 27 Aug 2024 19:17:45 -0400",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PoC: prefetching data between executor nodes (e.g. nestloop +\n indexscan)"
}
] |
[
{
"msg_contents": "I'm somewhat expecting to be flamed to a well-done crisp for saying\nthis, but I think we need better ways for extensions to control the\nbehavior of PostgreSQL's query planner. I know of two major reasons\nwhy somebody might want to do this. First, you might want to do\nsomething like what pg_hint_plan does, where it essentially implements\nOracle-style hints that can be either inline or stored in a side table\nand automatically applied to queries.[1] In addition to supporting\nOracle-style hints, it also supports some other kinds of hints so that\nyou can, for example, try to fix broken cardinality estimates. Second,\nyou might want to convince the planner to keep producing the same kind\nof plan that it produced previously. I believe this is what Amazon's\nquery plan management feature[2] does, although since it is closed\nsource and I don't work at Amazon maybe it's actually implemented\ncompletely differently. Regardless of what Amazon did in this case,\nplan stability is a feature people want. Just trying to keep using the\nsame plan data structure forever doesn't seem like a good strategy,\nbecause for example it would be fragile in the case of any DDL\nchanges, like dropping and recreating an index, or dropping or adding\na column. But you might want conceptually the same plan. Although it's\nnot frequently admitted on this mailing list, unexpected plan changes\nare a frequent cause of sudden database outages, and wanting to\nprevent that is a legitimate thing for a user to try to do. Naturally,\nthere is a risk that you might in so doing also prevent plan changes\nthat would have dramatically improved performance, or stick with a\nplan long after you've outgrown it, but that doesn't stop people from\nwanting it, or other databases (or proprietary forks of this database)\nfrom offering it, and I don't think it should.\n\nWe have some hooks right now that offer a few options in this area,\nbut there are problems. The hook that I believe to be closest to the\nright thing is this one:\n\n /*\n * Allow a plugin to editorialize on the set of Paths for this base\n * relation. It could add new paths (such as CustomPaths) by calling\n * add_path(), or add_partial_path() if parallel aware. It could also\n * delete or modify paths added by the core code.\n */\n if (set_rel_pathlist_hook)\n (*set_rel_pathlist_hook) (root, rel, rti, rte);\n\nUnfortunately, the part about the hook having the freedom to delete\npaths isn't really true. Perhaps technically you can delete a path\nthat you don't want to be chosen, but any paths that were dominated by\nthe path you deleted have already been thrown away and it's too late\nto get them back. You can modify paths if you don't want to change\ntheir costs, but if you change their costs then you have the same\nproblem: the contents of the pathlist at the time that you see it are\ndetermined by the costs that each path had when it was initially\nadded, and it's really too late to editorialize on that. So all you\ncan really do here in practice is add new paths.\nset_join_pathlist_hook, which applies to joinrels, is similarly\nlimited. appendrels don't even have an equivalent of this hook.\n\nSo, how could we do better?\n\nI think there are two basic approaches that are possible here. If\nsomeone sees a third option, let me know. First, we could allow users\nto hook add_path() and add_partial_path(). That certainly provides the\nflexibility on paper to accept or reject whatever paths you do or do\nnot want. 
However, I don't find this approach very appealing. One\nproblem is that it's likely to be fairly expensive, because add_path()\ngets called A LOT. A second problem is that you don't actually get an\nawful lot of context: I think anybody writing a hook would have to\nwrite code to basically analyze each proposed path and figure out why\nit was getting added and then decide what to do. In some cases that\nmight be fine, because for example accepting or rejecting paths based\non path type seems fairly straightforward with this approach, but as\nsoon as you want to do anything more complicated than that it starts\nto seem difficult. If, for example, you want relation R1 to be the\ndriving table for the whole query plan, you're going to have to\ndetermine whether or not that is the case for every single candidate\n(partial) path that someone hands you, so you're going to end up\nmaking that decision a whole lot of times. It doesn't sound\nparticularly fun. Third, even if you are doing something really simple\nlike trying to reject mergejoins, you've already lost the opportunity\nto skip a bunch of work. If you had known when you started planning\nthe joinrel that you didn't care about mergejoins, you could have\nskipped looking for merge-joinable clauses. Overall, while I wouldn't\nbe completely against further exploration of this option, I suspect\nit's pretty hard to do anything useful with it.\n\nThe other possible approach is to allow extensions to feed some\ninformation into the planner before path generation and let that\ninfluence which paths are generated. This is essentially what\npg_hint_plan is doing: it implements plan type hints by arranging to\nflip the various enable_* GUCs on and off during the planning of\nvarious rels. That's clever but ugly, and it ends up duplicating\nsubstantial chunks of planner code due to the inadequacy of the\nexisting hooks. With some refactoring and some additional hooks, we\ncould make this much less ugly. But that gets at what I believe to be\nthe core difficulty of this approach, which is that the core planner\ncode needs to be somewhat aware of and on board with what the user or\nthe extension is trying to do. If an extension wants to force the join\norder, that is likely to require different scaffolding than if it\nwants to force the join methods which is again different from if a\nhook wants to bias the query planner towards or against particular\nindexes. Putting in hooks or other infrastructure that allows an\nextension to control a particular aspect of planner behavior is to\nsome extent an endorsement of controlling the planner behavior in that\nparticular way. Since any amount of allowing the user to control the\nplanner tends to be controversial around here, that opens up the\nspectre of putting a whole lot of effort into arguing about which\nthings extensions should be allowed to do, getting most of the patches\nrejected, and ending up with nothing that's actually useful.\n\nBut on the other hand, it's not like we have to design everything in a\ngreenfield. Other database systems have provided in-core, user-facing\nfeatures to control the planner for decades, and we can look at those\nofferings -- and existing offerings in the PG space -- as we try to\njudge whether a particular use case is totally insane. 
I am not here\nto argue that everything that every system has done is completely\nperfect and without notable flaws, but our own system has its own\nshare of flaws, and the fact that you can do very little when a\npreviously unproblematic query starts suddenly producing a bad plan is\ndefinitely one of them. I believe we are long past the point where we\ncan simply hold our breath and pretend like there's no issue here. At\nthe very least, allowing extensions to control scan methods (including\nchoice of indexes), join methods, and join order (including which\ntable ends up on which side of a given join) and similar things for\naggregates and appendrels seems to me like it ought to be table\nstakes. And those extensions shouldn't have to duplicate large chunks\nof code or resort to gross hacks to do it. Eventually, maybe we'll\neven want to have directly user-facing features to do some of this\nstuff (in query hints, out of query hints, or whatever) but I think\nopening the door up to extensions doing it is a good first step,\nbecause (1) that allows different extensions to do different things\nwithout taking a position on what the One Right Thing To Do is and (2)\nif it becomes clear that something improvident has been done, it is a\nlot easier to back out a hook or some C API change than it is to\nback-out a user-visible feature. Or maybe we'll never want to expose a\nuser-visible feature here, but it can still be useful to enable\nextensions.\n\nThe attached patch, briefly mentioned above, essentially converts the\nenable_* GUCs into RelOptInfo properties where the defaults are set by\nthe corresponding GUCs. The idea is that a hook could then change this\non a per-RelOptInfo basis before path generation happens. For\nbaserels, I believe that could be done from get_relation_info_hook for\nbaserels, and we could introduce something similar for other kinds of\nrels. I don't think this is in any way the perfect approach. On the\none hand, it doesn't give you all the kinds of control over path\ngeneration that you might want. On the other hand, the more I look at\nwhat our enable_* GUCs actually do, the less impressed I am. IMHO,\nthings like enable_hashjoin make a lot of sense, but enable_sort seems\nlike it just controls an absolutely random smattering of behaviors in\na way that seems to me to have very little to recommend it, and I've\ncomplained elsewhere about how enable_indexscan and\nenable_indexonlyscan are really quite odd when you look at how they're\nimplemented. Still, this seemed like a fairly easy thing to do as a\nway of demonstrating the kind of thing that we could do to provide\nextensions with more control over planner behavior, and I believe it\nwould be concretely useful to pg_hint_plan in particular. But all that\nsaid, as much as anything, I want to get some feedback on what\napproaches and trade-offs people think might be acceptable here,\nbecause there's not much point in me spending a bunch of time writing\ncode that everyone (or a critical mass of people) are going to hate.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://github.com/ossc-db/pg_hint_plan\n[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Optimize.html",
"msg_date": "Mon, 26 Aug 2024 12:32:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "allowing extensions to control planner behavior"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm somewhat expecting to be flamed to a well-done crisp for saying\n> this, but I think we need better ways for extensions to control the\n> behavior of PostgreSQL's query planner.\n\nNah, I won't flame you for that, it's a reasonable thing to think\nabout. However, the devil is in the details, and ...\n\n> The attached patch, briefly mentioned above, essentially converts the\n> enable_* GUCs into RelOptInfo properties where the defaults are set by\n> the corresponding GUCs.\n\n... this doesn't seem like it's moving the football very far at all.\nThe enable_XXX GUCs are certainly blunt instruments, but I'm not sure\nhow much better it is if they're per-rel. For example, I don't see\nhow this gets us any closer to letting an extension fix a poor choice\nof join order. Or, if your problem is that the planner wants to scan\nindex A but you want it to scan index B, enable_indexscan won't help.\n\n> ... On the other hand, the more I look at\n> what our enable_* GUCs actually do, the less impressed I am. IMHO,\n> things like enable_hashjoin make a lot of sense, but enable_sort seems\n> like it just controls an absolutely random smattering of behaviors in\n> a way that seems to me to have very little to recommend it, and I've\n> complained elsewhere about how enable_indexscan and\n> enable_indexonlyscan are really quite odd when you look at how they're\n> implemented.\n\nYeah, these sorts of questions aren't made better this way either.\nIf anything, having extensions manipulating these variables will\nmake it even harder to rethink what they do.\n\nYou mentioned that there is prior art out there, but this proposal\ndoesn't seem like it's drawing on any such thing. What ideas should\nwe be stealing?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2024 13:37:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On 26/8/2024 18:32, Robert Haas wrote:\n> I'm somewhat expecting to be flamed to a well-done crisp for saying\n> this, but I think we need better ways for extensions to control the\n> behavior of PostgreSQL's query planner. I know of two major reasons\nIt is the change I have been waiting for a long time. Remember how many \nkludge codes in pg_hint_plan, aqo, citus, timescale, etc., are written \nfor only the reason of a small number of hooks - I guess many other \npeople could cheer such work.\n\n> why somebody might want to do this. First, you might want to do\n> something like what pg_hint_plan does, where it essentially implements\n> Oracle-style hints that can be either inline or stored in a side table\n> and automatically applied to queries.[1] In addition to supporting\n> Oracle-style hints, it also supports some other kinds of hints so that\n> you can, for example, try to fix broken cardinality estimates. Second,\nMy personal most wanted list:\n- Selectivity list estimation hook\n- Groups number estimation hook\n- hooks on memory estimations, involving work_mem\n- add_path() hook\n- Hook on final RelOptInfo pathlist\n- a custom list of nodes in RelOptinfo, PlannerStmt, Plan and Query \nstructures\n- Extensibility of extended and plain statistics\n- Hook on portal error processing\n- Canonicalise expressions hook\n\n> you might want to convince the planner to keep producing the same kind\n> of plan that it produced previously. I believe this is what Amazon's\n> query plan management feature[2] does, although since it is closed\n> source and I don't work at Amazon maybe it's actually implemented\n> completely differently. Regardless of what Amazon did in this case,\n> plan stability is a feature people want. Just trying to keep using the\n> same plan data structure forever doesn't seem like a good strategy,\n> because for example it would be fragile in the case of any DDL\n> changes, like dropping and recreating an index, or dropping or adding\nAs a designer of plan freezing feature [1] I can say it utilises \nplancache and, being under its invalidation callbacks it doesn't afraid \nDDL or any other stuff altering database objects.\n\n> Unfortunately, the part about the hook having the freedom to delete\n> paths isn't really true. Perhaps technically you can delete a path\n> that you don't want to be chosen, but any paths that were dominated by\n> the path you deleted have already been thrown away and it's too late\n> to get them back. You can modify paths if you don't want to change\n> their costs, but if you change their costs then you have the same\n> problem: the contents of the pathlist at the time that you see it are\n> determined by the costs that each path had when it was initially\n> added, and it's really too late to editorialize on that. So all you\n> can really do here in practice is add new paths.\n From my standpoint, it is enough to export routines creating paths and \ncalculating costs.\n\n> set_join_pathlist_hook, which applies to joinrels, is similarly\n> limited. appendrels don't even have an equivalent of this hook.\n> \n> So, how could we do better?\n> \n> I think there are two basic approaches that are possible here. If\n> someone sees a third option, let me know. First, we could allow users\n> to hook add_path() and add_partial_path(). 
That certainly provides the\n> flexibility on paper to accept or reject whatever paths you do or do\n+1\n\n> The attached patch, briefly mentioned above, essentially converts the\n> enable_* GUCs into RelOptInfo properties where the defaults are set by\n> the corresponding GUCs. The idea is that a hook could then change this\n> on a per-RelOptInfo basis before path generation happens. For\nIMO, it is better not to switch on/off algorithms, but allow extensions \nto change their cost multipliers, modifying the cost balance. 10E9 looks \nlike a disable, but multiplier == 10 for a cost node just provides more \nfreedom for hashing strategies.\n\n[1] https://postgrespro.com/docs/enterprise/16/sr-plan\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 20:00:54 +0200",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 1:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I'm somewhat expecting to be flamed to a well-done crisp for saying\n> > this, but I think we need better ways for extensions to control the\n> > behavior of PostgreSQL's query planner.\n>\n> Nah, I won't flame you for that, it's a reasonable thing to think\n> about. However, the devil is in the details, and ...\n\nThank you. Not being flamed is one of my favorite things. :-)\n\n> > The attached patch, briefly mentioned above, essentially converts the\n> > enable_* GUCs into RelOptInfo properties where the defaults are set by\n> > the corresponding GUCs.\n>\n> ... this doesn't seem like it's moving the football very far at all.\n> The enable_XXX GUCs are certainly blunt instruments, but I'm not sure\n> how much better it is if they're per-rel. For example, I don't see\n> how this gets us any closer to letting an extension fix a poor choice\n> of join order. Or, if your problem is that the planner wants to scan\n> index A but you want it to scan index B, enable_indexscan won't help.\n\nWell, I agree that this doesn't address everything you might want to\ndo, and I thought I said so, admittedly in the middle of a long wall\nof text. This would JUST be a step toward letting an extension control\nthe scan and join methods, not the join order or the choice of index\nor whatever else there is. But the fact that it doesn't do everything\nis not a strike against it unless there's some competing design that\nlets you take care of everything with a single mechanism, which I do\nnot see as realistic. If this proposal -- or really any proposal in\nthis area -- gets through, I will very happily propose more things to\naddress the other problems that I know about, but it doesn't make\nsense to do a huge amount of work to craft a comprehensive solution\nbefore we've had any discussion here.\n\n> Yeah, these sorts of questions aren't made better this way either.\n> If anything, having extensions manipulating these variables will\n> make it even harder to rethink what they do.\n\nCorrect, but my proposal to make enable_indexscan behave like\nenable_indexonlyscan, which I thought was a slam-dunk, just provoked a\nlot of grumbling. There's a kind of chicken and egg problem here. If\nthe existing GUCs were better designed, then using them here would\nmake sense. And the patch that I attached to my previous email were in\nmaster, then cleaning up the design of the GUCs would have more value.\nBut if I can't make any progress with either problem because the other\nproblem also exists, then I'm kind of boxed into a corner. I could\nalso propose something here that is diverges from the enable_*\nbehavior, but then people will complain that the two shouldn't be\ninconsistent, which I agree with, BTW. I thought maybe doing this\nfirst would make sense, and then we could refine afterwards.\n\n> You mentioned that there is prior art out there, but this proposal\n> doesn't seem like it's drawing on any such thing. What ideas should\n> we be stealing?\n\nDepends what you mean. As far as PostgreSQL-related things, the two\nthings that I mentioned in my opening paragraph and for which I\nprovided links seem to be me to the best examples we have. 
It's pretty\neasy to see how to make pg_hint_plan require less kludgery, and I\nthink we can just iterate through the various problems there and solve\nthem pretty easily by adding a few hooks here and there and a few\nextension-settable structure members here and there. I am engaging in\nsome serious hand-waving here, but this is not rocket science. I am\nconfident that if you made it your top priority to get stuff into PG 18\nwhich would thoroughly un-hackify pg_hint_plan, you could be\ndone in months, possibly weeks. It will take me longer, but if we have\nan agreement in principle that it is worth doing, I just can't see it\nas being particularly difficult.\n\nAmazon's query plan management stuff is a much tougher lift. For that,\nyou're asking the planner to try to create a new plan which is like\nsome old plan that you got before. So in a perfect world, you want to\ncontrol every planner decision. That's hard just because there are a\nlot of them. If for example you want to get the same index scan that\nyou got before, you need not only to get the same type of index scan\n(index, index-only, bitmap) and the same index, but also things like\nthe same non-native saop treatment, which seems like it would be\nasking an awful lot of a hook system. On the other hand, maybe you can\ncheat. If your regurgitate-the-same-plan system could force the same\njoin order, join methods, scan methods, choice of indexes, and\nprobably some stuff about aggregate and appendrel strategy, it might\nbe close enough to giving you the same plan you had before that nobody\nwould really care if the non-native saop treatment was different. I'm\nalmost positive it's better than not having a feature, which is where we\nare today. And although allowing control over just the major decisions\nin query planning doesn't seem like something we can do in one patch,\nI don't think it takes 100 patches either. Maybe five or ten.\n\nIf we step outside of the PostgreSQL ecosystem, I think we should look\nat Oracle as one example. I have never been a real believer in hints\nlike SeqScan(foo), because if you don't fix the cardinality estimate\nfor table foo, then the rest of your plan is going to suck, too. On\nthe other hand, \"hint everything\" for some people in some situations\nis a way to address that. It's stupid in a sense, but if you have an\nautomated way to do it, especially one that allows applying hints\nout-of-query, it's not THAT stupid. Also, Oracle offers some other\npretty useful hints. In particular, using the LEADING hint to set the\ndriving table for the query plan does not seem dumb to me at all.\nHinting that things should be parallel or not, and with what degree of\nparallelism, also seems quite reasonable. They've also got ALL_ROWS and\nFIRST_ROWS(n) hints, which let you say whether you want fast-start\nbehavior or not, and it hardly needs to be said how often we get that\nwrong or how badly. pg_hint_plan, which copies a lot of stuff that\nOracle does, innovates by allowing you to hint that a certain join\nwill return X number of rows or that the number of rows that the\nplanner thinks should be returned should be corrected by multiplying,\nadding, or subtracting some constant. I'm not sure how useful this is\nreally because I feel like a lot of times you'd just pick some join\norder where that particular join is no longer used, e.g. if 
A JOIN B\nJOIN C and I hint the AB join, perhaps the planner will just start by\njoining C to either A or B, and then that join will never occur.\nHowever, that can be avoided by also using LEADING, or maybe in some\nother cleverer way, like making an AB selectivity hint apply at\nwhatever point in the plan you join something that includes A to\nsomething that includes B.\n\nThere's some details on SQL server's hinting here:\nhttps://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql?view=sql-server-ver16\n\nIt looks pretty complicated, but some of the basic concepts that you'd\nexpect are also present here: force the join method, rule in or out,\nforce the use of an index or of no index, force the join order. Those\nseem to be the major things that \"everyone\" supports. I think we'd\nwant to expand a bit on that to allow forcing aggregate strategy and\nperhaps some PostgreSQL-specific things e.g. other systems won't have\na hint to force a TIDRangeScan or not because that's a\nPostgreSQL-specific concept, but it would be silly to make a system\nthat lets an extension control sequential scans and index scans but\nnot other, more rarely-used ways of scanning a relation, so probably\nwe want to do something.\n\nI don't know if that helps, in terms of context. If it doesn't, let me\nknow what would help. And just to be clear, I *absolutely* think we\nneed to take a light touch here. If we install a ton of new\nhighly-opinionated infrastructure we will make a lot of people mad and\nthat's definitely not where I want to end up. I just think we need to\ngrow beyond \"the planner is a black box and you shall not presume to\ndirect it.\" If every other system provides a way to control, say, the\njoin order, then it seems reasonable to suppose that a PostgreSQL\nextension should be able to control the join order too. A lot of\ndetails might be different but if multiple other systems have the\nconcept then the concept itself probably isn't ridiculous.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 15:28:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 2:00 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n> It is the change I have been waiting for a long time. Remember how many\n> kludge codes in pg_hint_plan, aqo, citus, timescale, etc., are written\n> for only the reason of a small number of hooks - I guess many other\n> people could cheer such work.\n\nI think so, too. I know there are going to be people who hate this,\nbut I think the cat is already out of the bag. It's not a question any\nmore of whether it will happen, it's just a question of whether we\nwant to collaborate with extension developers or try to make their\nlife difficult.\n\n> My personal most wanted list:\n> - Selectivity list estimation hook\n> - Groups number estimation hook\n> - hooks on memory estimations, involving work_mem\n> - add_path() hook\n> - Hook on final RelOptInfo pathlist\n> - a custom list of nodes in RelOptinfo, PlannerStmt, Plan and Query\n> structures\n> - Extensibility of extended and plain statistics\n> - Hook on portal error processing\n> - Canonicalise expressions hook\n\nOne of my chronic complaints about hooks is that people propose hooks\nthat are just in any random spot in the code where they happen to want\nto change something. If we accept that, we end up with a lot of hooks\nwhere nobody can say how the hook can be used usefully and maybe it\ncan't actually be used usefully even by the original author, or only\nthem and nobody else. So these kinds of proposals need detailed,\ncase-by-case scrutiny. It's unacceptable for the planner to get filled\nup with a bunch of poorly-designed hooks just as it is for any other\npart of the system, but well-designed hooks whose usefulness can\nclearly be seen should be just as welcome here as anywhere else.\n\n> IMO, it is better not to switch on/off algorithms, but allow extensions\n> to change their cost multipliers, modifying costs balance. 10E9 looks\n> like a disable, but multiplier == 10 for a cost node just provide more\n> freedom for hashing strategies.\n\nThat may be a valid use case, but I do not think it is a typical use\ncase. In my experience, when people want to force the planner to do\nsomething, they really mean it. They don't mean \"please do it this way\nunless you really, really don't feel like it.\" They mean \"please do it\nthis way, period.\" And that is also what other systems provide. Oracle\ncould provide a hint MERGE_COST(foo,10) meaning make merge joins look\nten times as expensive but in fact they only provide MERGE and\nNO_MERGE. And a \"reproduce this previous plan\" feature really demands\ninfrastructure that truly forces the planner to do what it's told,\nrather than just nicely suggesting that it might want to do as it's\ntold. I wouldn't be sad at all if we happen to end up with a system\nthat's powerful enough for an extension to implement \"make merge joins\nten times as expensive\"; in fact, I think that would be pretty cool.\nBut I don't think it should be the design center for what we\nimplement, because it looks nothing like what existing PG or non-PG\nsystems do, at least in my experience.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Aug 2024 15:44:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On 26/8/2024 21:44, Robert Haas wrote:\n> On Mon, Aug 26, 2024 at 2:00 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n>> My personal most wanted list:\n>> - Selectivity list estimation hook\n>> - Groups number estimation hook\n>> - hooks on memory estimations, involving work_mem\n>> - add_path() hook\n>> - Hook on final RelOptInfo pathlist\n>> - a custom list of nodes in RelOptinfo, PlannerStmt, Plan and Query\n>> structures\n>> - Extensibility of extended and plain statistics\n>> - Hook on portal error processing\n>> - Canonicalise expressions hook\n> \n> One of my chronic complaints about hooks is that people propose hooks\n> that are just in any random spot in the code where they happen to want\n> to change something. If we accept that, we end up with a lot of hooks\n> where nobody can say how the hook can be used usefully and maybe it\n> can't actually be used usefully even by the original author, or only\n> them and nobody else. So these kinds of proposals need detailed,\n> case-by-case scrutiny. It's unacceptable for the planner to get filled\n> up with a bunch of poorly-designed hooks just as it is for any other\n> part of the system, but well-designed hooks whose usefulness can\n> clearly be seen should be just as welcome here as anywhere else.\nDefinitely so. Think about that as a sketch proposal on the roadmap. \nRight now, I know about only one hook - selectivity hook - which we \nalready discussed and have Tomas Vondra's patch on the table. But even \nthis is a big deal, because multi-clause estimations are a huge pain for \nusers that can't be resolved with extensions for now without core patches.\n\n>> IMO, it is better not to switch on/off algorithms, but allow extensions\n>> to change their cost multipliers, modifying costs balance. 10E9 looks\n>> like a disable, but multiplier == 10 for a cost node just provide more\n>> freedom for hashing strategies.\n> \n> That may be a valid use case, but I do not think it is a typical use\n> case. In my experience, when people want to force the planner to do\n> something, they really mean it. They don't mean \"please do it this way\n> unless you really, really don't feel like it.\" They mean \"please do it\n> this way, period.\" And that is also what other systems provide. Oracle\n> could provide a hint MERGE_COST(foo,10) meaning make merge joins look\n> ten times as expensive but in fact they only provide MERGE and\n> NO_MERGE. And a \"reproduce this previous plan\" feature really demands\n> infrastructure that truly forces the planner to do what it's told,\n> rather than just nicely suggesting that it might want to do as it's\n> told. I wouldn't be sad at all if we happen to end up with a system\n> that's powerful enough for an extension to implement \"make merge joins\n> ten times as expensive\"; in fact, I think that would be pretty cool.\n> But I don't think it should be the design center for what we\n> implement, because it looks nothing like what existing PG or non-PG\n> systems do, at least in my experience.\nHeh, I meant not manual usage, but automatical one, provided by extensions.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 22:01:53 +0200",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "At 2024-08-27 00:32:53, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n>I'm somewhat expecting to be flamed to a well-done crisp for saying\n>this, but I think we need better ways for extensions to control the\n>behavior of PostgreSQL's query planner. I know of two major reasons\n>why somebody might want to do this. First, you might want to do\n>something like what pg_hint_plan does, where it essentially implements\n>Oracle-style hints that can be either inline or stored in a side table\n>and automatically applied to queries.[1] In addition to supporting\n>Oracle-style hints, it also supports some other kinds of hints so that\n>you can, for example, try to fix broken cardinality estimates. Second,\n>you might want to convince the planner to keep producing the same kind\n>of plan that it produced previously. I believe this is what Amazon's\n>query plan management feature[2] does, although since it is closed\n>source and I don't work at Amazon maybe it's actually implemented\n>completely differently. Regardless of what Amazon did in this case,\n>plan stability is a feature people want. Just trying to keep using the\n>same plan data structure forever doesn't seem like a good strategy,\n>because for example it would be fragile in the case of any DDL\n>changes, like dropping and recreating an index, or dropping or adding\n>a column. But you might want conceptually the same plan. Although it's\n>not frequently admitted on this mailing list, unexpected plan changes\n>are a frequent cause of sudden database outages, and wanting to\n>prevent that is a legitimate thing for a user to try to do. Naturally,\n>there is a risk that you might in so doing also prevent plan changes\n>that would have dramatically improved performance, or stick with a\n>plan long after you've outgrown it, but that doesn't stop people from\n>wanting it, or other databases (or proprietary forks of this database)\n>from offering it, and I don't think it should.\n>\n>We have some hooks right now that offer a few options in this area,\n>but there are problems. The hook that I believe to be closest to the\n>right thing is this one:\n>\n> /*\n> * Allow a plugin to editorialize on the set of Paths for this base\n> * relation. It could add new paths (such as CustomPaths) by calling\n> * add_path(), or add_partial_path() if parallel aware. It could also\n> * delete or modify paths added by the core code.\n> */\n> if (set_rel_pathlist_hook)\n> (*set_rel_pathlist_hook) (root, rel, rti, rte);\n>\n>Unfortunately, the part about the hook having the freedom to delete\n>paths isn't really true. Perhaps technically you can delete a path\n>that you don't want to be chosen, but any paths that were dominated by\n>the path you deleted have already been thrown away and it's too late\n>to get them back. You can modify paths if you don't want to change\n>their costs, but if you change their costs then you have the same\n>problem: the contents of the pathlist at the time that you see it are\n>determined by the costs that each path had when it was initially\n>added, and it's really too late to editorialize on that. So all you\n>can really do here in practice is add new paths.\n>set_join_pathlist_hook, which applies to joinrels, is similarly\n>limited. appendrels don't even have an equivalent of this hook.\n>\n>So, how could we do better?\n>\n>I think there are two basic approaches that are possible here. If\n>someone sees a third option, let me know. First, we could allow users\n>to hook add_path() and add_partial_path(). 
That certainly provides the\n>flexibility on paper to accept or reject whatever paths you do or do\n>not want. However, I don't find this approach very appealing. One\n>problem is that it's likely to be fairly expensive, because add_path()\n>gets called A LOT. A second problem is that you don't actually get an\n>awful lot of context: I think anybody writing a hook would have to\n>write code to basically analyze each proposed path and figure out why\n>it was getting added and then decide what to do. In some cases that\n>might be fine, because for example accepting or rejecting paths based\n>on path type seems fairly straightforward with this approach, but as\n>soon as you want to do anything more complicated than that it starts\n>to seem difficult. If, for example, you want relation R1 to be the\n>driving table for the whole query plan, you're going to have to\n>determine whether or not that is the case for every single candidate\n>(partial) path that someone hands you, so you're going to end up\n>making that decision a whole lot of times. It doesn't sound\n>particularly fun. Third, even if you are doing something really simple\n>like trying to reject mergejoins, you've already lost the opportunity\n>to skip a bunch of work. If you had known when you started planning\n>the joinrel that you didn't care about mergejoins, you could have\n>skipped looking for merge-joinable clauses. Overall, while I wouldn't\n>be completely against further exploration of this option, I suspect\n>it's pretty hard to do anything useful with it.\n>\n>The other possible approach is to allow extensions to feed some\n>information into the planner before path generation and let that\n>influence which paths are generated. This is essentially what\n>pg_hint_plan is doing: it implements plan type hints by arranging to\n>flip the various enable_* GUCs on and off during the planning of\n>various rels. That's clever but ugly, and it ends up duplicating\n>substantial chunks of planner code due to the inadequacy of the\n>existing hooks. With some refactoring and some additional hooks, we\n>could make this much less ugly. But that gets at what I believe to be\n>the core difficulty of this approach, which is that the core planner\n>code needs to be somewhat aware of and on board with what the user or\n>the extension is trying to do. If an extension wants to force the join\n>order, that is likely to require different scaffolding than if it\n>wants to force the join methods which is again different from if a\n>hook wants to bias the query planner towards or against particular\n>indexes. Putting in hooks or other infrastructure that allows an\n>extension to control a particular aspect of planner behavior is to\n>some extent an endorsement of controlling the planner behavior in that\n>particular way. Since any amount of allowing the user to control the\n>planner tends to be controversial around here, that opens up the\n>spectre of putting a whole lot of effort into arguing about which\n>things extensions should be allowed to do, getting most of the patches\n>rejected, and ending up with nothing that's actually useful.\n>\n>But on the other hand, it's not like we have to design everything in a\n>greenfield. Other database systems have provided in-core, user-facing\n>features to control the planner for decades, and we can look at those\n>offerings -- and existing offerings in the PG space -- as we try to\n>judge whether a particular use case is totally insane. 
I am not here\n>to argue that everything that every system has done is completely\n>perfect and without notable flaws, but our own system has its own\n>share of flaws, and the fact that you can do very little when a\n>previously unproblematic query starts suddenly producing a bad plan is\n>definitely one of them. I believe we are long past the point where we\n>can simply hold our breath and pretend like there's no issue here. At\n>the very least, allowing extensions to control scan methods (including\n>choice of indexes), join methods, and join order (including which\n>table ends up on which side of a given join) and similar things for\n>aggregates and appendrels seems to me like it ought to be table\n>stakes. And those extensions shouldn't have to duplicate large chunks\n>of code or resort to gross hacks to do it. Eventually, maybe we'll\n>even want to have directly user-facing features to do some of this\n>stuff (in query hints, out of query hints, or whatever) but I think\n>opening the door up to extensions doing it is a good first step,\n>because (1) that allows different extensions to do different things\n>without taking a position on what the One Right Thing To Do is and (2)\n>if it becomes clear that something improvident has been done, it is a\n>lot easier to back out a hook or some C API change than it is to\n>back-out a user-visible feature. Or maybe we'll never want to expose a\n>user-visible feature here, but it can still be useful to enable\n>extensions.\n>\n>The attached patch, briefly mentioned above, essentially converts the\n>enable_* GUCs into RelOptInfo properties where the defaults are set by\n>the corresponding GUCs. The idea is that a hook could then change this\n>on a per-RelOptInfo basis before path generation happens. For\n>baserels, I believe that could be done from get_relation_info_hook for\n>baserels, and we could introduce something similar for other kinds of\n>rels. I don't think this is in any way the perfect approach. On the\n>one hand, it doesn't give you all the kinds of control over path\n>generation that you might want. On the other hand, the more I look at\n>what our enable_* GUCs actually do, the less impressed I am. IMHO,\n>things like enable_hashjoin make a lot of sense, but enable_sort seems\n>like it just controls an absolutely random smattering of behaviors in\n>a way that seems to me to have very little to recommend it, and I've\n>complained elsewhere about how enable_indexscan and\n>enable_indexonlyscan are really quite odd when you look at how they're\n>implemented. Still, this seemed like a fairly easy thing to do as a\n>way of demonstrating the kind of thing that we could do to provide\n>extensions with more control over planner behavior, and I believe it\n>would be concretely useful to pg_hint_plan in particular. But all that\n>said, as much as anything, I want to get some feedback on what\n>approaches and trade-offs people think might be acceptable here,\n>because there's not much point in me spending a bunch of time writing\n>code that everyone (or a critical mass of people) are going to hate.\n>\n>Thanks,\n>\n>-- \n>Robert Haas\n>EDB: http://www.enterprisedb.com\n>\n>[1] https://github.com/ossc-db/pg_hint_plan\n\n>[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Optimize.html\n\n\n\n\nI really admire this idea.\n here is my confusion: Isn't the core of this idea whether to turn the planner into a framework? 
Personally, I think that under PostgreSQL's heap table storage, the optimizer might be better off focusing on optimizing the generation of execution plans. It’s possible that in some specific scenarios, developers might want to intervene in the generation of execution plans by extensions. I'm not sure if these scenarios usually occur when the storage structure is also extended by developers. If so, could existing solutions like \"planner_hook\" potentially solve the problem? \nAt 2024-08-27 00:32:53, \"Robert Haas\" <robertmhaas@gmail.com> wrote:\n>I'm somewhat expecting to be flamed to a well-done crisp for saying\n>this, but I think we need better ways for extensions to control the\n>behavior of PostgreSQL's query planner. I know of two major reasons\n>why somebody might want to do this. First, you might want to do\n>something like what pg_hint_plan does, where it essentially implements\n>Oracle-style hints that can be either inline or stored in a side table\n>and automatically applied to queries.[1] In addition to supporting\n>Oracle-style hints, it also supports some other kinds of hints so that\n>you can, for example, try to fix broken cardinality estimates. Second,\n>you might want to convince the planner to keep producing the same kind\n>of plan that it produced previously. I believe this is what Amazon's\n>query plan management feature[2] does, although since it is closed\n>source and I don't work at Amazon maybe it's actually implemented\n>completely differently. Regardless of what Amazon did in this case,\n>plan stability is a feature people want. Just trying to keep using the\n>same plan data structure forever doesn't seem like a good strategy,\n>because for example it would be fragile in the case of any DDL\n>changes, like dropping and recreating an index, or dropping or adding\n>a column. But you might want conceptually the same plan. Although it's\n>not frequently admitted on this mailing list, unexpected plan changes\n>are a frequent cause of sudden database outages, and wanting to\n>prevent that is a legitimate thing for a user to try to do. Naturally,\n>there is a risk that you might in so doing also prevent plan changes\n>that would have dramatically improved performance, or stick with a\n>plan long after you've outgrown it, but that doesn't stop people from\n>wanting it, or other databases (or proprietary forks of this database)\n>from offering it, and I don't think it should.\n>\n>We have some hooks right now that offer a few options in this area,\n>but there are problems. The hook that I believe to be closest to the\n>right thing is this one:\n>\n> /*\n> * Allow a plugin to editorialize on the set of Paths for this base\n> * relation. It could add new paths (such as CustomPaths) by calling\n> * add_path(), or add_partial_path() if parallel aware. It could also\n> * delete or modify paths added by the core code.\n> */\n> if (set_rel_pathlist_hook)\n> (*set_rel_pathlist_hook) (root, rel, rti, rte);\n>\n>Unfortunately, the part about the hook having the freedom to delete\n>paths isn't really true. Perhaps technically you can delete a path\n>that you don't want to be chosen, but any paths that were dominated by\n>the path you deleted have already been thrown away and it's too late\n>to get them back. 
You can modify paths if you don't want to change\n>their costs, but if you change their costs then you have the same\n>problem: the contents of the pathlist at the time that you see it are\n>determined by the costs that each path had when it was initially\n>added, and it's really too late to editorialize on that. So all you\n>can really do here in practice is add new paths.\n>set_join_pathlist_hook, which applies to joinrels, is similarly\n>limited. appendrels don't even have an equivalent of this hook.\n>\n>So, how could we do better?\n>\n>I think there are two basic approaches that are possible here. If\n>someone sees a third option, let me know. First, we could allow users\n>to hook add_path() and add_partial_path(). That certainly provides the\n>flexibility on paper to accept or reject whatever paths you do or do\n>not want. However, I don't find this approach very appealing. One\n>problem is that it's likely to be fairly expensive, because add_path()\n>gets called A LOT. A second problem is that you don't actually get an\n>awful lot of context: I think anybody writing a hook would have to\n>write code to basically analyze each proposed path and figure out why\n>it was getting added and then decide what to do. In some cases that\n>might be fine, because for example accepting or rejecting paths based\n>on path type seems fairly straightforward with this approach, but as\n>soon as you want to do anything more complicated than that it starts\n>to seem difficult. If, for example, you want relation R1 to be the\n>driving table for the whole query plan, you're going to have to\n>determine whether or not that is the case for every single candidate\n>(partial) path that someone hands you, so you're going to end up\n>making that decision a whole lot of times. It doesn't sound\n>particularly fun. Third, even if you are doing something really simple\n>like trying to reject mergejoins, you've already lost the opportunity\n>to skip a bunch of work. If you had known when you started planning\n>the joinrel that you didn't care about mergejoins, you could have\n>skipped looking for merge-joinable clauses. Overall, while I wouldn't\n>be completely against further exploration of this option, I suspect\n>it's pretty hard to do anything useful with it.\n>\n>The other possible approach is to allow extensions to feed some\n>information into the planner before path generation and let that\n>influence which paths are generated. This is essentially what\n>pg_hint_plan is doing: it implements plan type hints by arranging to\n>flip the various enable_* GUCs on and off during the planning of\n>various rels. That's clever but ugly, and it ends up duplicating\n>substantial chunks of planner code due to the inadequacy of the\n>existing hooks. With some refactoring and some additional hooks, we\n>could make this much less ugly. But that gets at what I believe to be\n>the core difficulty of this approach, which is that the core planner\n>code needs to be somewhat aware of and on board with what the user or\n>the extension is trying to do. If an extension wants to force the join\n>order, that is likely to require different scaffolding than if it\n>wants to force the join methods which is again different from if a\n>hook wants to bias the query planner towards or against particular\n>indexes. Putting in hooks or other infrastructure that allows an\n>extension to control a particular aspect of planner behavior is to\n>some extent an endorsement of controlling the planner behavior in that\n>particular way. 
Since any amount of allowing the user to control the\n>planner tends to be controversial around here, that opens up the\n>spectre of putting a whole lot of effort into arguing about which\n>things extensions should be allowed to do, getting most of the patches\n>rejected, and ending up with nothing that's actually useful.\n>\n>But on the other hand, it's not like we have to design everything in a\n>greenfield. Other database systems have provided in-core, user-facing\n>features to control the planner for decades, and we can look at those\n>offerings -- and existing offerings in the PG space -- as we try to\n>judge whether a particular use case is totally insane. I am not here\n>to argue that everything that every system has done is completely\n>perfect and without notable flaws, but our own system has its own\n>share of flaws, and the fact that you can do very little when a\n>previously unproblematic query starts suddenly producing a bad plan is\n>definitely one of them. I believe we are long past the point where we\n>can simply hold our breath and pretend like there's no issue here. At\n>the very least, allowing extensions to control scan methods (including\n>choice of indexes), join methods, and join order (including which\n>table ends up on which side of a given join) and similar things for\n>aggregates and appendrels seems to me like it ought to be table\n>stakes. And those extensions shouldn't have to duplicate large chunks\n>of code or resort to gross hacks to do it. Eventually, maybe we'll\n>even want to have directly user-facing features to do some of this\n>stuff (in query hints, out of query hints, or whatever) but I think\n>opening the door up to extensions doing it is a good first step,\n>because (1) that allows different extensions to do different things\n>without taking a position on what the One Right Thing To Do is and (2)\n>if it becomes clear that something improvident has been done, it is a\n>lot easier to back out a hook or some C API change than it is to\n>back-out a user-visible feature. Or maybe we'll never want to expose a\n>user-visible feature here, but it can still be useful to enable\n>extensions.\n>\n>The attached patch, briefly mentioned above, essentially converts the\n>enable_* GUCs into RelOptInfo properties where the defaults are set by\n>the corresponding GUCs. The idea is that a hook could then change this\n>on a per-RelOptInfo basis before path generation happens. For\n>baserels, I believe that could be done from get_relation_info_hook for\n>baserels, and we could introduce something similar for other kinds of\n>rels. I don't think this is in any way the perfect approach. On the\n>one hand, it doesn't give you all the kinds of control over path\n>generation that you might want. On the other hand, the more I look at\n>what our enable_* GUCs actually do, the less impressed I am. IMHO,\n>things like enable_hashjoin make a lot of sense, but enable_sort seems\n>like it just controls an absolutely random smattering of behaviors in\n>a way that seems to me to have very little to recommend it, and I've\n>complained elsewhere about how enable_indexscan and\n>enable_indexonlyscan are really quite odd when you look at how they're\n>implemented. Still, this seemed like a fairly easy thing to do as a\n>way of demonstrating the kind of thing that we could do to provide\n>extensions with more control over planner behavior, and I believe it\n>would be concretely useful to pg_hint_plan in particular. 
But all that\n>said, as much as anything, I want to get some feedback on what\n>approaches and trade-offs people think might be acceptable here,\n>because there's not much point in me spending a bunch of time writing\n>code that everyone (or a critical mass of people) are going to hate.\n>\n>Thanks,\n>\n>-- \n>Robert Haas\n>EDB: http://www.enterprisedb.com\n>\n>[1] https://github.com/ossc-db/pg_hint_plan\n>[2] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Optimize.html\n\nI really admire this idea.\nhere is my confusion: Isn't the core of this idea whether to turn the planner into a framework? Personally, I think that under PostgreSQL's heap table storage, the optimizer might be better off focusing on optimizing the generation of execution plans. It’s possible that in some specific scenarios, developers might want to intervene in the generation of execution plans by extensions. I'm not sure if these scenarios usually occur when the storage structure is also extended by developers. If so, could existing solutions like \"planner_hook\" potentially solve the problem?",
"msg_date": "Tue, 27 Aug 2024 14:43:51 +0800 (CST)",
"msg_from": "\"chungui.wcg\" <wcg2008zl@126.com>",
"msg_from_op": false,
"msg_subject": "Re:allowing extensions to control planner behavior"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 3:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well, I agree that this doesn't address everything you might want to\n> do, ... I will very happily propose more things to\n> address the other problems that I know about ...\n\nIn that vein, here's a new patch set where I've added a second patch\nthat allows extensions to control choice of index. It's 3 lines of new\ncode, plus 7 lines of comments and whitespace. Feeling inspired, I\nalso included a contrib module, initial_vowels_are_evil, to\ndemonstrate how this can be used by an extension that wants to disable\ncertain indexes but not others. This is obviously quite silly and we\nmight (or might not) want a more serious example in contrib, but it\ndemonstrates how easy this can be with just a tiny bit of core\ninfrastructure:\n\nrobert.haas=# load 'initial_vowels_are_evil';\nLOAD\nrobert.haas=# explain select count(*) from pgbench_accounts;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2854.29..2854.30 rows=1 width=8)\n -> Index Only Scan using pgbench_accounts_pkey on pgbench_accounts\n (cost=0.29..2604.29 rows=100000 width=0)\n(2 rows)\nrobert.haas=# alter index pgbench_accounts_pkey rename to\nevil_pgbench_accounts_pkey;\nALTER INDEX\nrobert.haas=# explain select count(*) from pgbench_accounts;\n QUERY PLAN\n------------------------------------------------------------------------------\n Aggregate (cost=2890.00..2890.01 rows=1 width=8)\n -> Seq Scan on pgbench_accounts (cost=0.00..2640.00 rows=100000 width=0)\n(2 rows)\nrobert.haas=#\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 27 Aug 2024 11:45:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On 8/27/24 11:45, Robert Haas wrote:\n> On Mon, Aug 26, 2024 at 3:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Well, I agree that this doesn't address everything you might want to\n>> do, ... I will very happily propose more things to\n>> address the other problems that I know about ...\n> \n> In that vein, here's a new patch set where I've added a second patch\n> that allows extensions to control choice of index. It's 3 lines of new\n> code, plus 7 lines of comments and whitespace. Feeling inspired, I\n> also included a contrib module, initial_vowels_are_evil, to\n> demonstrate how this can be used by an extension that wants to disable\n> certain indexes but not others. This is obviously quite silly and we\n> might (or might not) want a more serious example in contrib, but it\n> demonstrates how easy this can be with just a tiny bit of core\n> infrastructure:\n> \n> robert.haas=# load 'initial_vowels_are_evil';\n> LOAD\n> robert.haas=# explain select count(*) from pgbench_accounts;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2854.29..2854.30 rows=1 width=8)\n> -> Index Only Scan using pgbench_accounts_pkey on pgbench_accounts\n> (cost=0.29..2604.29 rows=100000 width=0)\n> (2 rows)\n> robert.haas=# alter index pgbench_accounts_pkey rename to\n> evil_pgbench_accounts_pkey;\n> ALTER INDEX\n> robert.haas=# explain select count(*) from pgbench_accounts;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Aggregate (cost=2890.00..2890.01 rows=1 width=8)\n> -> Seq Scan on pgbench_accounts (cost=0.00..2640.00 rows=100000 width=0)\n> (2 rows)\n> robert.haas=#\n\n\nNice!\n\nOn the one hand, excluding indexes by initial vowels is definitely \nsilly. On the other, I can see how it might be useful for an extension \nto exclude indexes based on a regex match of the index name or something \nsimilar, at least for testing.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Aug 2024 11:56:59 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 2:44 AM chungui.wcg <wcg2008zl@126.com> wrote:\n> I really admire this idea.\n\nThanks.\n\n> here is my confusion: Isn't the core of this idea whether to turn the planner into a framework? Personally, I think that under PostgreSQL's heap table storage, the optimizer might be better off focusing on optimizing the generation of execution plans. It’s possible that in some specific scenarios, developers might want to intervene in the generation of execution plans by extensions. I'm not sure if these scenarios usually occur when the storage structure is also extended by developers. If so, could existing solutions like \"planner_hook\" potentially solve the problem?\n\nYou could use planner_hook if you wanted to replace the entire planner\nwith your own planner. However, that doesn't seem like something\npractical, as the planner code is very large. The real use of the hook\nis to allow running some extra code when the planner is invoked, as\ndemonstrated by the pg_stat_statements contrib module. To get some\nmeaningful control over the planner, you need something more\nfine-grained. You need to be able to run code at specific points in\nthe planner, as we already allow with, for example,\nget_relation_info_hook or set_rel_pathlist_hook.\n\nWhether or not that constitutes \"turning the planner into a framework\"\nis, I suppose, a question of opinion. Perhaps a more positive way to\nphrase it would be \"allowing for some code reuse\". Right now, if you\nmostly like the behavior of the planner but want a few things to be\ndifferent, you've got to duplicate a lot of code and then hack it up.\nThat's not very nice. I think it's better to set things up so that you\ncan keep most of the planner behavior but override it in a few\nspecific cases without a lot of difficulty.\n\nCases where the data is stored in some different way are really a\nseparate issue from what I'm talking about here. In that case, you\ndon't want to override the planner behavior for all tables everywhere,\nso planner_hook still isn't a good solution. You only want to change\nthe behavior for the specific table AM that implements the new\nstorage. You would probably want there to be an option where\ncost_seqscan() calls a tableam-specific function instead of just doing\nthe same thing for every AM; and maybe something similar for indexes,\nalthough that is less clear. The details aren't quite clear, which is\nprobably part of why we haven't done anything yet.\n\nBut this patch set is really more about enabling use cases where the\nuser wants an extension to take control of the plan more explicitly,\nsay to avoid some bad plan that they got before and that they don't\nwant to get again.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Aug 2024 12:11:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 11:57 AM Joe Conway <mail@joeconway.com> wrote:\n> On the one hand, excluding indexes by initial vowels is definitely\n> silly. On the other, I can see how it might be useful for an extension\n> to exclude indexes based on a regex match of the index name or something\n> similar, at least for testing.\n\nRight. I deliberately picked a contrib module that implemented a silly\npolicy, because what I wanted to demonstrate with it is that this\nlittle bit of infrastructure provides enough mechanism to implement\nwhatever policy you want. And I think it demonstrates it quite well,\nbecause the whole contrib module to implement this has just 6 lines of\nexecutable code. If you wanted a policy that would be more\nrealistically useful, you'd need more code, but only however much is\nneeded to implement your policy. All you need do is replace this\nstrchr call with something else:\n\n if (name != NULL && strchr(\"aeiouAEIOU\", name[0]) != NULL)\n index->disabled = true;\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Aug 2024 12:36:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> In that vein, here's a new patch set where I've added a second patch\n> that allows extensions to control choice of index.\n\nI'm minus-several on this bit, because that is a solved problem and\nwe really don't need to introduce More Than One Way To Do It. The\nintention has always been that get_relation_info_hook can editorialize\non the rel's indexlist by removing entries (or adding fake ones,\nin the case of hypothetical-index extensions). For that matter,\nif you really want to do it by modifying the IndexInfo rather than\ndeleting it from the list, that's already possible: just set\nindexinfo->hypothetical = true.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2024 12:56:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > In that vein, here's a new patch set where I've added a second patch\n> > that allows extensions to control choice of index.\n>\n> I'm minus-several on this bit, because that is a solved problem and\n> we really don't need to introduce More Than One Way To Do It. The\n> intention has always been that get_relation_info_hook can editorialize\n> on the rel's indexlist by removing entries (or adding fake ones,\n> in the case of hypothetical-index extensions). For that matter,\n> if you really want to do it by modifying the IndexInfo rather than\n> deleting it from the list, that's already possible: just set\n> indexinfo->hypothetical = true.\n\nWell, now I'm confused. Just yesterday, in response to the 0001 patch\nthat allows extensions to exert control over the join strategy, you\ncomplained that \"Or, if your problem is that the planner wants to scan\nindex A but you want it to scan index B, enable_indexscan won't help.\"\nSo today, I produced a patch demonstrating how we could address that\nissue, and your response is \"well, actually we don't need to do\nanything about it because that problem is already solved.\" But if that\nis true, then the fact that yesterday's patch did nothing about it was\na feature, not a bug.\n\nAm I missing something here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Aug 2024 13:17:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Well, now I'm confused. Just yesterday, in response to the 0001 patch\n> that allows extensions to exert control over the join strategy, you\n> complained that \"Or, if your problem is that the planner wants to scan\n> index A but you want it to scan index B, enable_indexscan won't help.\"\n\nI was just using that to illustrate that making the enable_XXX GUCs\nrelation-local covers only a small part of the planner-control problem.\nYou had not, at that point, been very clear that you intended that\npatch as only a small part of a solution.\n\nI do think that index selection is pretty well under control already,\nthanks to stuff that we put in ages ago at the urging of people who\nwanted to write \"index advisor\" extensions. (The fact that that\narea seems a bit moribund is disheartening, though. Is it a lack\nof documentation?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:24:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 2:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I was just using that to illustrate that making the enable_XXX GUCs\n> relation-local covers only a small part of the planner-control problem.\n> You had not, at that point, been very clear that you intended that\n> patch as only a small part of a solution.\n\nAh, OK, apologies for the lack of clarity. I actually think it's a\nmedium part of the solution. I believe the minimum viable product here\nis probably something like:\n\n- control over scan methods\n- control over index selection\n- control over join methods\n- control over join order\n\nIt gets a lot better if we also have:\n\n- control over aggregation methods\n- something that I'm not quite sure about for appendrels\n- control over whether parallelism is used and the degree of parallelism\n\nIf control over index selection is already adequate, then the proposed\npatch is one way to get about 1/3 of the way to the MVP, which isn't\nnothing. Maybe I'm underestimating the amount of stuff that people are\ngoing to want here, but if you look at pg_hint_plan, it isn't doing a\nwhole lot more than this.\n\n> I do think that index selection is pretty well under control already,\n> thanks to stuff that we put in ages ago at the urging of people who\n> wanted to write \"index advisor\" extensions. (The fact that that\n> area seems a bit moribund is disheartening, though. Is it a lack\n> of documentation?)\n\nSo a couple of things about this.\n\nFirst, EDB maintains closed-source index advisor code that uses this\nmachinery. In fact, if I'm not mistaken, we now have two extensions\nthat use it. So it's not dead from that point of view, but of course\nanything closed-source can't be promoted through community channels.\nThere's open-source code around too; to my knowledge,\nhttps://github.com/HypoPG/hypopg is the leading open-source\nimplementation, but my knowledge may very well be incomplete.\n\nSecond, I do think that the lack of documentation poses somewhat of a\nchallenge, and our exchange about whether an IndexOptInfo needs a\ndisabled flag is perhaps an example of that. To be fair, now that I\nlook at it, the comment where get_relation_info_hook does say that you\ncan remove indexes from the index list, so maybe I should have\nrealized that the problem can be solved that way, but on the other\nhand, the comment for set_rel_pathlist_hook claims you can delete\npaths from the pathlist, which AFAICS is completely non-viable, so one\ncan't necessarily rely too much on the comments in this area to learn\nwhat actually does and does not work. Having some in-core examples\nshowing how to use this stuff correctly and demonstrating its full\npower would also be really helpful. Right now, I often find myself\nlooking at out-of-core code which is sometimes poorly written and\nfrequently resorts to nasty hacks. It can be hard to determine whether\nthose nasty hacks are there because they're the only way to implement\nsome bit of functionality or because the author missed an opportunity\nto do better.\n\nThird, I think there's simply a lack of critical mass in terms of our\nplanner hooks. While the ability to add hypothetical indexes has some\nuse, the ability to remove indexes from consideration is probably\nsignificantly more useful. But not if it's the only technique for\nfixing a bad plan that you have available. Nobody gets excited about a\ntoolbox that contains just one tool. 
That's why I'm keen to expand\nwhat can be done cleanly via hooks, and I think if we do that and also\nprovide either some very good documentation or some well-written\nexample implementations, we'll get more traction here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Aug 2024 15:11:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Mon, Aug 26, 2024 at 1:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> For example, I don't see\n> how this gets us any closer to letting an extension fix a poor choice\n> of join order.\n\nThinking more about this particular sub-problem, let's say we're\njoining four tables A, B, C, and D. An extension wants to compel join\norder A-B-C-D. Let's suppose, however, that it wants to do this in a\nway where planning won't fail if that's impossible, so it wants to use\ndisabled_nodes rather than skipping path generation entirely.\n\nWhen we're planning the baserels, we don't need to do anything\nspecial. When we plan 2-way joins, we need to mark all paths disabled\nexcept those originating from the A-B join. When we plan 3-way joins,\nwe need to mark all paths disabled except those originating from an\n(A-B)-C join. When we plan the final 4-way join, we don't really need\nto do anything extra: the only way to end up with a non-disabled path\nat the top level is to pick a path from the (A-B)-C join and a path\nfrom D.\n\nThere's a bit of nuance here, though. Suppose that when planning the\nA-B join, the planner generates HashJoin(SeqScan(B),Hash(A)). Does\nthat path need to be disabled? If you think that join order A-B-C-D\nmeans that table A should be the driving table, then the answer is\nyes, because that path will lead to a join order beginning with B-A,\nnot one beginning with A-B. But you might also have a mental model\nwhere it doesn't matter which side of the table is on which side of\nthe join, and as long as you start by joining A and B in some way,\nthat's good enough to qualify as an A-B join order. I believe actual\nimplementations vary in which approach they take.\n\nI think that the beginning of add_paths_to_joinrel() looks like a\nuseful spot to get control. You could, for example, have a hook there\nwhich returns a bitmask indicating which of merge-join, nested-loop,\nand hash join will be allowable for this call; that hook would then\nallow for control over the join method and the join order, and the\njoin order control is strong enough that you can implement either of\nthe two interpretations above. This idea theorizes that 0001 was wrong\nto make the path mask a per-RelOptInfo value, because there could be\nmany calls to add_paths_to_joinrel() for a single RelOptInfo and, in\nthis idea, every one of those can enforce a different mask.\n\nPotentially, such a hook could return additional information, either\nby using more bits of the bitmask or by returning other information\nvia some other data type. For instance, I still believe that\ndistinguishing between parameterized-nestloops and\nunparameterized-nestloops would be real darn useful, so we could have\nseparate bits for each; or you could have a bit to control whether\nforeign-join paths get disabled (or are considered at all), or you\ncould have separate bits for merge joins that involve 0, 1, or 2\nsorts. Whether we need or want any or all of that is certainly\ndebatable, but the point is that if you did want some of that, or\nsomething else, it doesn't look difficult to feed that information\nthrough to the places where you would need it to be available.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Aug 2024 16:07:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "Hi,\n\nOn Tue, Aug 27, 2024 at 03:11:15PM -0400, Robert Haas wrote:\n> Third, I think there's simply a lack of critical mass in terms of our\n> planner hooks. While the ability to add hypothetical indexes has some\n> use, the ability to remove indexes from consideration is probably\n> significantly more useful. \n\nJFTR, hypopg can also mask away/hide indexes since version 1.4.0:\n\nhttps://github.com/HypoPG/hypopg/commit/351f14a79daae8ab57339d2367d7f2fc639041f7\n\nI haven't looked closely at the implementation though, and maybe you\nmeant something else in the above entirely.\n\n\nMichael\n\n\n",
"msg_date": "Tue, 27 Aug 2024 22:08:01 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I believe the minimum viable product here\n> is probably something like:\n\n> - control over scan methods\n> - control over index selection\n> - control over join methods\n> - control over join order\n\nSeems reasonable. It might be possible to say that our answer\nto \"control over join order\" is to provide a hook that can modify\nthe \"joinlist\" before it's passed to make_one_rel. If you want\nto force a particular join order you can rearrange that\nlist-of-lists-of-range-table-indexes to do so. The thing this\nwould not give you is control over which rel is picked as outer\nin any given join step. Not sure how critical that bit is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2024 16:15:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 4:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Seems reasonable. It might be possible to say that our answer\n> to \"control over join order\" is to provide a hook that can modify\n> the \"joinlist\" before it's passed to make_one_rel. If you want\n> to force a particular join order you can rearrange that\n> list-of-lists-of-range-table-indexes to do so. The thing this\n> would not give you is control over which rel is picked as outer\n> in any given join step. Not sure how critical that bit is.\n\nThis has a big advantage over what I proposed yesterday in that it's\nbasically declarative. With one call to the hook, you get all the\ninformation about the join order that you could ever want. That's\nreally nice. However, I don't really think it quite works, partly\nbecause of what you mention here about not being able to control which\nrel ends up on which side of the join, which I do think is important,\nand also because if the join order isn't possible, planning will fail,\nrather than falling back to some other plan shape. If you have an idea\nhow we could address those things within this same general framework,\nI'd be keen to hear it.\n\nIt has occurred to me more than once that it might be really useful if\nwe could attempt to plan under a set of constraints and then, if we\ndon't end up finding a plan, retry without the constraints. But I\ndon't quite see how to make it work. When I tried to do that as a\nsolution to the disable_cost problem, it ended up meaning that once\nyou couldn't satisfy every constraint perfectly, you gave up on even\ntrying. I wasn't immediately certain that such behavior was\nunacceptable, but I didn't have to look any further than our own\nregression test suites to see that it was going to cause a lot of\nunhappiness. In this case, if we could attempt join planning with the\nuser-prescribed join order and then try it again if we fail to find a\npath, that would be really cool. Or if we could do all of planning\nwithout generating disabled paths *at all* and then go back and\nrestart if it becomes clear that's not working out, that would be\nslick. But, unless you have a clever idea, that all seems like\nadvanced magic that should wait until we have basic things working.\nRight now, I think we should focus on getting something in place where\nwe still try all the paths but an extension can arrange for some of\nthem to be disabled. Then all the right things will happen naturally;\nwe'll only be leaving some CPU cycles on the table. Which isn't\namazing, but I don't think it's a critical defect either, and we can\ntry to improve things later if we want to.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 08:37:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "Hi Robert,\n\nOn Mon, Aug 26, 2024 at 6:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I'm somewhat expecting to be flamed to a well-done crisp for saying\n> this, but I think we need better ways for extensions to control the\n> behavior of PostgreSQL's query planner.\n[..]\n> [..] But all that\n> said, as much as anything, I want to get some feedback on what\n> approaches and trade-offs people think might be acceptable here,\n> because there's not much point in me spending a bunch of time writing\n> code that everyone (or a critical mass of people) are going to hate.\n\nAs a somewhat tiny culprit of the self-flaming done by Robert (due to\nnagging him about this in the past on various occasions), I'm of\ncourse obligated to +1 to any work related to giving end-users/DBA the\nability to cage the plans generated by the optimizer.\n\nWhen dealing with issues like those, I have a feeling we have 2\nclasses of most frequent issues being reported (that's my subjective\nexperience):\na. cardinality misestimate leading usually to nest loop plans (e.g.\nJOIN estimates thread [1] could also somehow help and it also has nice\nreproducers)\nb. issues after upgrades\n\nSo the \"selectivity estimation hook(s)\" mentioned by Andrei seems to\nbe a must, but also the ability not to just guess & tweak (shape) the\nplan, but a way to export all SQL plans before upgrade with capability\nto import and override(lock) specific SQL query to specific plan from\nbefore upgrade.\n\nI'm not into the internals of optimizer at all, but here are other\nrandom thoughts/questions:\n- I do think that \"hints\" words have bad connotations and should not\nbe used. It might be because of embedding them in SQL query text of\nthe application itself. On one front they are localized to the SQL\n(good), but the PG operator has no realistic way of altering that once\nit's embedded in binary (bad), as the application team is usually\nseparate if not from an external company (very bad situation, but\nhappens almost always).\n- Would stacking of such extensions, each overriding the same planner\nhooks, be allowed or not in the long run ? Technically there's nothing\npreventing it and I think I could imagine someone attempting to run\nmultiple planner hook hotfixes for multiple issues, all at once?\n- Shouldn't EXPLAIN contain additional information that for that\nspecific plan the optimizer hooks changed at least 1 thing ? (e.g.\n\"Plan was tainted\" or something like that). Maybe extension could mark\nit politely that it did by setting a certain flag, or maybe there\nshould be routines exposed by the core to do that ?\n- the \"b\" (upgrade) seems like a much more heavy duty issue, as that\nwould require transfer of version-independent and textualized dump of\nSQL plan that could be back-filled into a newer version of optimizer.\nIs such a big thing realistic at all and it's better to just\nconcentrate on the hooks approach?\n\n-J.\n\n[1] - https://www.postgresql.org/message-id/flat/c8c0ff31-3a8a-7562-bbd3-78b2ec65f16c%40enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 15:46:05 +0200",
"msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 9:46 AM Jakub Wartak\n<jakub.wartak@enterprisedb.com> wrote:\n> As a somewhat tiny culprit of the self-flaming done by Robert (due to\n> nagging him about this in the past on various occasions), I'm of\n> course obligated to +1 to any work related to giving end-users/DBA the\n> ability to cage the plans generated by the optimizer.\n\nThanks.\n\n> When dealing with issues like those, I have a feeling we have 2\n> classes of most frequent issues being reported (that's my subjective\n> experience):\n> a. cardinality misestimate leading usually to nest loop plans (e.g.\n> JOIN estimates thread [1] could also somehow help and it also has nice\n> reproducers)\n> b. issues after upgrades\n>\n> So the \"selectivity estimation hook(s)\" mentioned by Andrei seems to\n> be a must, but also the ability not to just guess & tweak (shape) the\n> plan, but a way to export all SQL plans before upgrade with capability\n> to import and override(lock) specific SQL query to specific plan from\n> before upgrade.\n\nI'm not against some kind of selectivity estimation hook in principle,\nbut I don't know what the proposal is specifically, and I think it's\nseparate from the topic of this thread. On the other hand, being able\nto force the same plans after an upgrade that you were getting before\nan upgrade is the kind of thing that I'd like to enable with the\ninfrastructure proposed here. I do not propose to put something like\nthat in core, at least not any time soon, but I'd like to have the\ninfrastructure be good enough that people can try to do it in an\nextension module and learn from how it turns out.\n\nEver since I read\nhttps://15721.courses.cs.cmu.edu/spring2020/papers/22-costmodels/p204-leis.pdf\nI have believed that the cardinality misestimate leading to nested\nloop plans is just because we're doing something dumb. They write:\n\n\"When looking at the queries that did not finish in a reasonable time\nusing the estimates, we found that most have one thing in common:\nPostgreSQL’s optimizer decides to introduce a nestedloop join (without\nan index lookup) because of a very low cardinality estimate, whereas\nin reality the true cardinality is larger. As we saw in the previous\nsection, systematic underestimation happens very frequently, which\noccasionally results in the introduction of nested-loop joins. [...]\nif the cost estimate is 1,000,000 with the nested-loop join algorithm\nand 1,000,001 with a hash join, PostgreSQL will always prefer the\nnested-loop algorithm even if there is a equality join predicate,\nwhich allows one to use hashing. [...] given the fact that\nunderestimates are quite frequent, this decision is extremely risky.\nAnd even if the estimates happen to be correct, any potential\nperformance advantage of a nested-loop join in comparison with a hash\njoin is very small, so taking this high risk can only result in a very\nsmall payoff. Therefore, we disabled nested-loop joins (but not\nindex-nestedloop joins) in all following experiments.\"\n\nWe don't even have an option to turn off that kind of join, and we\ncould probably avoid a lot of pain if we did. 
This, too, is mostly\nseparate from the topic of this thread, but I just can't believe we've\nchosen to do literally nothing about this given that we all know this\nspecific thing hoses everybody, everywhere, all the time.\n\n> I'm not into the internals of optimizer at all, but here are other\n> random thoughts/questions:\n> - I do think that \"hints\" words have bad connotations and should not\n> be used. It might be because of embedding them in SQL query text of\n> the application itself. On one front they are localized to the SQL\n> (good), but the PG operator has no realistic way of altering that once\n> it's embedded in binary (bad), as the application team is usually\n> separate if not from an external company (very bad situation, but\n> happens almost always).\n\nI haven't quite figured out whether the problem is that hints are\nactually bad or whether it's more that we just hate saying the word\nhints. The reason I'm talking about hints here is mostly because\nthat's how other systems let users control the query planner. If we\nwant to let extensions control the query planner, we need to know in\nwhat ways it needs to be controllable, and looking to hints in other\nsystems is one way to understand what would be useful. As far as\nhaving hints in PostgreSQL, which admittedly is not really the topic\nof this thread either, one objection is that we should just make the\nquery planner better instead, and I used to believe that, but I no longer do,\nbecause I've been doing this PostgreSQL thing for 15+ years and we're\nnot much closer to a perfect query planner that never makes any\nmistakes than we were when I started. It's not really clear that\nperfection is possible, but it's extremely clear that we're not\ngetting there any time soon. Another objection to hints is that they\nrequire modifying the query text, which does indeed suck but it\ndoesn't mean they're useless either. There are also schemes that put\nthem out of line, including pg_hint_plan's optional use of a hint\ntable. Yet another objection is that you should fix cardinalities\ninstead of controlling the plan manually, and I agree that's often a\nbetter solution, but it again does not mean that using a hint is never\ndefensible in any situation. I think we've become so negative about\nhints that we rarely have a rational discussion about them. I'm no\nmore keen to see every PostgreSQL query in the universe decorated with\na bunch of hints than anyone else here, but I also don't enjoy telling\na customer \"hey, I know this query started misbehaving in the middle\nof the night on Christmas, but hints are bad and we shouldn't ever\nhave them so you'd better get started on redesigning your schema or\nalternatively you can just have your web site be down for the next 20\nyears while we try to improve the optimizer.\" I don't know what the\nright solution(s) are exactly but it's insane not to have some kind of\npressure relief valve that can be used in case of emergency.\n\n> - Would stacking of such extensions, each overriding the same planner\n> hooks, be allowed or not in the long run ? Technically there's nothing\n> preventing it and I think I could imagine someone attempting to run\n> multiple planner hook hotfixes for multiple issues, all at once?\n\nI suspect this would tend to work poorly in practice, but there might\nbe specific cases where it works OK. It's usually best if only one\nperson is steering a given vehicle at a time, and so here. 
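As far as the mechanics go, nothing in the hooks themselves prevents stacking, because the usual convention is for each extension to save the previous value of a hook and call it; sketched with planner_hook (which was mentioned upthread), the pattern looks like this, and any of the hooks discussed here would follow the same shape:\n\nstatic planner_hook_type prev_planner_hook = NULL;\n\nstatic PlannedStmt *\nmy_planner(Query *parse, const char *query_string,\n           int cursorOptions, ParamListInfo boundParams)\n{\n    /* this extension's adjustments would go here */\n    if (prev_planner_hook)\n        return prev_planner_hook(parse, query_string,\n                                 cursorOptions, boundParams);\n    return standard_planner(parse, query_string,\n                            cursorOptions, boundParams);\n}\n\nvoid\n_PG_init(void)\n{\n    prev_planner_hook = planner_hook;\n    planner_hook = my_planner;\n}\n\n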
But there's\nno intrinsic reason you couldn't use multiple extensions at once if\nyou happen to have multiple extensions that use the hooks in mutually\ncompatible ways.\n\n> - Shouldn't EXPLAIN contain additional information that for that\n> specific plan the optimizer hooks changed at least 1 thing ? (e.g.\n> \"Plan was tainted\" or something like that). Maybe extension could mark\n> it politely that it did by setting a certain flag, or maybe there\n> should be routines exposed by the core to do that ?\n\nThis could be really useful when things go wrong and someone is trying\nto figure out from an EXPLAIN ANALYZE output what in the world\nhappened. I'm not sure exactly what makes sense to do here but I think\nwe should come back to this topic after we've settled some of the\nbasics.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 10:57:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 8:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> This has a big advantage over what I proposed yesterday in that it's\n> basically declarative. With one call to the hook, you get all the\n> information about the join order that you could ever want. That's\n> really nice.\n\nHmm. On further thought, I suppose that another disadvantage of this\nkind of declarative approach is that there are some kinds of\nconstraints that you could conceivably want that you just can't\ndeclare, especially negative constraints. For example, imagine we're\njoining tables A1 ... A10 and we don't want A1 to be joined directly\nto A2. Or suppose you want to insist on a non-bushy plan. I don't\nthink there's a way to express those types of requirements by frobbing\nthe joinlist.\n\nI'm not quite sure whether those kinds of gaps are sufficiently\nserious that we should worry about them. After all, there's a lot of\nthings that you can express perfectly clearly with this kind of\nscheme. I don't think I know of something that you can do to control\nthe join order in an existing hinting system that cannot be expressed\nas a manipulation of the joinlist. That's not to say that I am 100%\nconfident that everything everyone could reasonably want to do can be\nexpressed this way; in fact, I don't think that's true at all. But I\n_think_ that all of the things that I know about that people are\nactually doing _could_ be expressed this way, but for the\njoin-direction and hard-failulre problems I mentioned in my earlier\nreply.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 13:06:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 10:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Ever since I read\n> https://15721.courses.cs.cmu.edu/spring2020/papers/22-costmodels/p204-leis.pdf\n> I have believed that the cardinality misestimate leading to nested\n> loop plans is just because we're doing something dumb.\n\n> We don't even have an option to turn off that kind of join, and we\n> could probably avoid a lot of pain if we did. This, too, is mostly\n> separate from the topic of this thread, but I just can't believe we've\n> chosen to do literally nothing about this given that we all know this\n> specific thing hoses everybody, everywhere, all the time.\n\nI couldn't agree more. I was really annoyed when your proposal was shot down.\n\nIt's an unusually clear-cut issue. Tying it to much broader and much\nmore complicated questions about how we model risk was a mistake.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 28 Aug 2024 13:20:28 -0400",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, 2024-08-27 at 15:11 -0400, Robert Haas wrote:\n> - control over scan methods\n> - control over index selection\n> - control over join methods\n> - control over join order\n\nI suggest we split join order into \"commutative\" and \"associative\".\n\nThe former is both useful and seems relatively easy -- A JOIN B or B\nJOIN A (though there's some nuance about when you try to make that\ndecision).\n\nThe latter requires controlling an explosion of possibilities, and\nwould be an entirely different kind of hook.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 12:23:52 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 3:23 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Tue, 2024-08-27 at 15:11 -0400, Robert Haas wrote:\n> > - control over scan methods\n> > - control over index selection\n> > - control over join methods\n> > - control over join order\n>\n> I suggest we split join order into \"commutative\" and \"associative\".\n>\n> The former is both useful and seems relatively easy -- A JOIN B or B\n> JOIN A (though there's some nuance about when you try to make that\n> decision).\n>\n> The latter requires controlling an explosion of possibilities, and\n> would be an entirely different kind of hook.\n\nMy proposal in http://postgr.es/m/CA+TgmoZQyVxnRU--4g2bJonJ8RyJqNi2CHpy-=nwwBTNpAj71A@mail.gmail.com\nseems like it can cover both cases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 16:12:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Mon, 2024-08-26 at 12:32 -0400, Robert Haas wrote:\n> I think there are two basic approaches that are possible here. If\n> someone sees a third option, let me know. First, we could allow users\n> to hook add_path() and add_partial_path(). \n\n...\n\n> The other possible approach is to allow extensions to feed some\n> information into the planner before path generation and let that\n> influence which paths are generated.\n\nPreserving a path for the right amount of time seems like the primary\nchallenge for most of the use cases you raised (removing paths is\neasier than resurrecting one that was pruned too early). If we try to\nkeep a path around, that implies that we need to keep parent paths\naround too, which leads to an explosion if we aren't careful.\n\nBut we already solved all of that for pathkeys. We keep the paths\naround if there's a reason to (a useful pathkey) and there's not some\nother cheaper path that also satisfies the same reason.\n\nIdea: generalize the idea of \"pathkeys\" to work for other reasons to\npreserve a path.\n\nMechanically, a hint to use an index could work very similarly: come up\nwith a custom reason to keep a path around, such as \"a hint suggests we\nuse index foo_idx for table foo\", and assign it a unique number. If\nthere's another hint that says we should also use index bar_idx for\ntable bar, then that reason would get a different unique reason number.\n(In other words, the number of reasons would not be fixed; there could\nbe one reason for each hint specified in the query, kind of like there\ncould be many interesting pathkeys for a query.)\n\nEach Path would have a \"preserve_for_these_reasons\" bitmapset holding\nall of the non-cost reasons we are preserving that path. If two paths\nhave exactly the same set of reasons, then add_path() would only keep\nthe cheaper one.\n\nWe could get fancy and have a compare_reasons_hook that would allow you\nto take two paths with the same reason and see if there are other\nfactors to consider that would cause both to still be preserved\n(similar to pathkey length).\n\nI suspect that we might see interesting applications of this mechanism\nin core as well: for instance, track partition keys or other properties\nrelevant to parallelism. That could allow us to keep parallel-friendly\npaths around and then decide later in the planning process whether to\nactually parallelize them or not.\n\n\nOnce we've generalized the \"reasons\" mechnism, it would be easy enough\nto have a hook to add reasons to a path as it's being generated to be\nsure it's not lost. These hooks should probably be called in the\nindividual create_*_path() functions where there's enough context to\nknow what's happening. There could be many such hooks, but I suspect\nonly a handful of important ones.\n\nThis idea allows the extension author to preserve the right paths long\nenough to use set_rel_pathlist_hook/set_join_pathlist_hook, which can\neditorialize on costs or do its own pruning.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 13:29:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 4:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Preserving a path for the right amount of time seems like the primary\n> challenge for most of the use cases you raised (removing paths is\n> easier than resurrecting one that was pruned too early). If we try to\n> keep a path around, that implies that we need to keep parent paths\n> around too, which leads to an explosion if we aren't careful.\n>\n> But we already solved all of that for pathkeys. We keep the paths\n> around if there's a reason to (a useful pathkey) and there's not some\n> other cheaper path that also satisfies the same reason.\n\nBut we've already solved it for this case, too. This is exactly what\nincrementing disabled_nodes does. This very recently replaced what we\ndid previously, which was adding disable_cost to the cost of every\npath. Either way, you just need a hook that lets you disable the paths\nthat you don't prefer. Once you do that, add_path() takes care of the\nrest: disabled paths lose to non-disabled paths, and disabled paths\nlose to more expensive disabled paths.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 16:35:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, 2024-08-28 at 16:35 -0400, Robert Haas wrote:\n> On Wed, Aug 28, 2024 at 4:29 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Preserving a path for the right amount of time seems like the\n> > primary\n> > challenge for most of the use cases you raised (removing paths is\n> > easier than resurrecting one that was pruned too early). If we try\n> > to\n> > keep a path around, that implies that we need to keep parent paths\n> > around too, which leads to an explosion if we aren't careful.\n> > \n> > But we already solved all of that for pathkeys. We keep the paths\n> > around if there's a reason to (a useful pathkey) and there's not\n> > some\n> > other cheaper path that also satisfies the same reason.\n> \n> But we've already solved it for this case, too. This is exactly what\n> incrementing disabled_nodes does.\n\nHints are often described as something positive: use this index, use a\nhash join here, etc. Trying to force a positive thing by adding\nnegative attributes to everything else is awkward. We've all had the\nexperience where we disable one plan type hoping for a good plan, and\nwe end up getting a different crazy plan that we didn't expect, and\nneed to disable a few more plan types.\n\nBeyond awkwardness, one case where it matters is the interaction\nbetween an extension that provides hints and an extension that offers a\nCustomScan. How is the hints extension supposed to disable a path it\ndoesn't know about?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 16:15:51 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> Beyond awkwardness, one case where it matters is the interaction\n> between an extension that provides hints and an extension that offers a\n> CustomScan. How is the hints extension supposed to disable a path it\n> doesn't know about?\n\nThis does not seem remarkably problematic to me, given Robert's\nproposal of a bitmask of allowed plan types per RelOptInfo.\nYou just do something like\n\n\trel->allowed_plan_types = DESIRED_PLAN_TYPE;\n\nThe names of the bits you aren't setting are irrelevant to you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Aug 2024 19:25:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 07:25:59PM -0400, Tom Lane wrote:\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > Beyond awkwardness, one case where it matters is the interaction\n> > between an extension that provides hints and an extension that offers a\n> > CustomScan. How is the hints extension supposed to disable a path it\n> > doesn't know about?\n\npg_hint_plan documents its hints here:\nhttps://pg-hint-plan.readthedocs.io/en/master/hint_list.html#hint-list\n\nHmm. I think that we should be careful to check that this works\ncorrectly with pg_hint_plan, at least. The module goes through a lot\nof tweaks and is a can of worms in terms of plan adjustments because\nwe can only rely on the planner hook to do the whole work. This leads\nto a lot of frustration for users because each feature somebody asks\nfor leads to just more tweaks to apply on the paths.\n\nThe bullet list sent here sounds pretty good to me:\nhttps://www.postgresql.org/message-id/3131957.1724789735@sss.pgh.pa.us\n\n> This does not seem remarkably problematic to me, given Robert's\n> proposal of a bitmask of allowed plan types per RelOptInfo.\n> You just do something like\n> \n> \trel->allowed_plan_types = DESIRED_PLAN_TYPE;\n> \n> The names of the bits you aren't setting are irrelevant to you.\n\nFor the types of scans to use, that would be OK. The module has a\nfeature where one can also have a regex to match for an index, and\nthe module is very funky with inheritance and partitioned tables.\n\nHow does that help if using a Leading hint to force a join order?\nThat's something people like a lot. But perhaps that's just the part\nof upthread where we'd need a extra hook? I am not completely sure to\nget the portion of the proposal for that. add_paths_to_joinrel() has\nbeen mentioned, and there is set_join_pathlist_hook already there.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2024 09:28:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Wed, 2024-08-28 at 19:25 -0400, Tom Lane wrote:\n> This does not seem remarkably problematic to me, given Robert's\n> proposal of a bitmask of allowed plan types per RelOptInfo.\n> You just do something like\n> \n> rel->allowed_plan_types = DESIRED_PLAN_TYPE;\n> \n> The names of the bits you aren't setting are irrelevant to you.\n\nI don't see that in the code yet, so I assume you are referring to the\ncomment at [1]?\n\nI still like my idea to generalize the pathkey infrastructure, and\nRobert asked for other approaches to consider. It would allow us to\nhold onto multiple paths for longer, similar to pathkeys, which might\noffer some benefits or simplifications.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/CA+TgmoZQyVxnRU--4g2bJonJ8RyJqNi2CHpy-=nwwBTNpAj71A@mail.gmail.com\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 15:49:12 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 6:49 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I don't see that in the code yet, so I assume you are referring to the\n> comment at [1]?\n\nFYI, I'm hacking on a revised approach but it's not ready to show to\nother people yet.\n\n> I still like my idea to generalize the pathkey infrastructure, and\n> Robert asked for other approaches to consider. It would allow us to\n> hold onto multiple paths for longer, similar to pathkeys, which might\n> offer some benefits or simplifications.\n\nThis is a fair point. I dislike the fact that add_path() is a thicket\nof if-statements that's actually quite hard to understand and easy to\nscrew up when you're making modifications. But I feel like it would be\ndifficult to generalize the infrastructure without making it\nsubstantially slower, which would probably cause too much of an\nincrease in planning time to be acceptable. So my guess is that this\nis a dead end, unless there's a clever idea that I'm not seeing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Aug 2024 07:33:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Fri, 2024-08-30 at 07:33 -0400, Robert Haas wrote:\n> This is a fair point. I dislike the fact that add_path() is a thicket\n> of if-statements that's actually quite hard to understand and easy to\n> screw up when you're making modifications. But I feel like it would\n> be\n> difficult to generalize the infrastructure without making it\n> substantially slower, which would probably cause too much of an\n> increase in planning time to be acceptable. So my guess is that this\n> is a dead end, unless there's a clever idea that I'm not seeing.\n\nAs far as performance goes, I'm only looking at branch in add_path()\nthat calls compare_pathkeys(). Do you have some example queries which\nwould be a worst case for that path?\n\nIn general if you can post some details about how you are measuring,\nthat would be helpful.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 10:42:52 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 1:42 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> As far as performance goes, I'm only looking at branch in add_path()\n> that calls compare_pathkeys(). Do you have some example queries which\n> would be a worst case for that path?\n\nI think we must be talking past each other somehow. It seems to me\nthat your scheme would require replacing that branch with something\nmore complicated or generalized. If it doesn't, then I don't\nunderstand the proposal. If it does, then that seems like it could be\na problem.\n\n> In general if you can post some details about how you are measuring,\n> that would be helpful.\n\nI'm not really measuring anything at this point, just recalling the\nmany previous times when add_path() has been discussed as a pain\npoint.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Aug 2024 16:04:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 4:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think that the beginning of add_paths_to_joinrel() looks like a\n> useful spot to get control. You could, for example, have a hook there\n> which returns a bitmask indicating which of merge-join, nested-loop,\n> and hash join will be allowable for this call; that hook would then\n> allow for control over the join method and the join order, and the\n> join order control is strong enough that you can implement either of\n> the two interpretations above. This idea theorizes that 0001 was wrong\n> to make the path mask a per-RelOptInfo value, because there could be\n> many calls to add_paths_to_joinrel() for a single RelOptInfo and, in\n> this idea, every one of those can enforce a different mask.\n\nHere is an implementation of this idea. I think this is significantly\nmore elegant than the previous patch. Functionally, it does a better\njob allowing for control over join planning than the previous patch,\nbecause you can control both the join method and the join order. It\ndoes not attempt to provide control over scan or appendrel methods; I\ncould build similar machinery for those cases, but let's talk about\nthis case, first. As non-for-commit proofs-of-concept, I've included\ntwo sample contrib modules here, one called alphabet_join and one\ncalled hint_via_alias. alphabet_join forces the join to be done in\nalphabetical order by table alias. Consider this test query:\n\nSELECT COUNT(*) FROM pgbench_accounts THE\nINNER JOIN pgbench_accounts QUICK on THE.aid = QUICK.aid\nINNER JOIN pgbench_accounts BROWN on THE.aid = BROWN.aid\nINNER JOIN pgbench_accounts FOX on THE.aid = FOX.aid\nINNER JOIN pgbench_accounts JUMPED on THE.aid = JUMPED.aid\nINNER JOIN pgbench_accounts OVER on THE.aid = OVER.aid\nINNER JOIN pgbench_accounts LAZY on THE.aid = LAZY.aid\nINNER JOIN pgbench_accounts DOG on THE.aid = DOG.aid;\n\nWhen you just execute this normally, the join order matches the order\nyou enter it: THE QUICK BROWN FOX JUMPED OVER LAZY DOG. But if you\nload alphabet_join, then the join order becomes BROWN DOG FOX JUMPED\nLAZY OVER QUICK THE. It is unlikely that anyone wants their join order\nto be determined by strict alphabetical order, but again, this is just\nintended to show that the hook works.\n\nhint_via_alias whose table alias starts with mj_, hj_, or nl_ using a\nmerge-join, hash-join, or nested loop, respectively. Here again, I\ndon't think that passing hints through the table alias names is\nprobably the best thing from a UI perspective, but unlike the previous\none which is clearly a toy, I can imagine someone actually trying to\nuse this one on a real server. If we want anything in contrib at all\nit should probably be something much better than this, but again at\nthis stage I'm just trying to showcase the hook.\n\n> Potentially, such a hook could return additional information, either\n> by using more bits of the bitmask or by returning other information\n> via some other data type. For instance, I still believe that\n> distinguishing between parameterized-nestloops and\n> unparameterized-nestloops would be real darn useful, so we could have\n> separate bits for each; or you could have a bit to control whether\n> foreign-join paths get disabled (or are considered at all), or you\n> could have separate bits for merge joins that involve 0, 1, or 2\n> sorts. 
Whether we need or want any or all of that is certainly\n> debatable, but the point is that if you did want some of that, or\n> something else, it doesn't look difficult to feed that information\n> through to the places where you would need it to be available.\n\nI spent a lot of time thinking about what should and should not be in\nscope for this hook and decided against both of the ideas above.\nThey're not necessarily bad ideas but they feel like examples of\narbitrary policy that you could want to implement, and I don't think\nit's viable to have every arbitrary policy that someone happens to\nfavor in the core code. If we want extensions to be able to achieve\nthese kinds of results, I think we're going to need a hook at either\ninitial_cost_XXX time that would be free to make arbitrary decisions\nabout cost and disabled nodes for each possible path we might\ngenerate, or a hook at final_cost_XXX time that could make paths more\ndisabled (but not less) or more costly (but not less costly unless it\nalso makes them more disabled). For now, I have not done that, because\nI think the hook that I've added to add_paths_to_joinrel is fairly\npowerful and significantly cheaper than a hook that has to fire for\nevery possible generated path. Also, even if we do add that, I bet\nit's useful to let this hook pass some state through to that hook, as\na way of avoiding recomputation.\n\nHowever, although I didn't end up including either of the policies\nmentioned above in this patch, I still did end up subdividing the\n\"merge join\" strategy according to whether or not a Materialize node\nis used, and the \"nested loop\" strategy according to whether we use\nMaterialize, Memoize, or neither. At least according to my reading of\nthe code, the planner really does consider these to be separate\nsub-strategies: it thinks about whether to use a nested loop without a\nmaterialize node, and it thinks about whether to do a nested loop with\na materialize node, and there's separate code for those things. So\nthis doesn't feel like an arbitrary distinction to me. In contrast,\nthe parameterized-vs-unparameterized nested loop thing is just a\nquestion of whether the outer path that we happen to choose happens to\nsatisfy some part of the parameterization of the inner path we happen\nto choose; the code neither knows nor cares whether that will occur.\nThere is also a pragmatic reason to make sure that the hook allows for\ncontrol over use of these sub-strategies: pg_hint_plan has Memoize and\nNoMemoize hints, and if whatever hook we add here can't replace what\npg_hint_plan is already doing, then it's clearly not up to the mark.\n\nI also spent some time thinking about what behavior this hook does and\ndoes not allow you to control. As noted, it allows you to control the\njoin method and the join order, including which table ends up on which\nside of the join. But, is that good enough to reproduce a very\nspecific plan, say one that you saw before and liked? Let's imagine\nthat you've arranged to disable every path in outerrel and innerrel\nother than the ones that you want to be chosen, either using some\nhackery or some future patch not included here. Now, you want to use\nthis patch to make sure that those are joined in the way that you want\nthem to be joined. Can you do that? I think the answer is \"mostly\".\nYou'll be able to get the join method you want used, and you'll be\nable to get Memoize and/or Materialize nodes if you want them or avoid\nthem if you don't. 
Also, join_path_setup_hook will see\nJOIN_UNIQUE_INNER or JOIN_UNIQUE_OUTER so if we're thinking of\nimplementing a semijoin via uniquify+join, the hook will be able to\nencourage or discourage that approach if it wants. However, it *won't*\nbe able to force the uniquification to happen using hashing rather\nthan sorting or vice versa, or at least not without doing something\npretty kludgy. Also, it won't be able to force a merge join produced\nby sort_inner_and_outer() to use the sort keys that it prefers. Merge\njoins produced by match_unsorted_outer() are entirely a function of\nthe input paths, but those produced by sort_inner_and_outer() are not.\nAside from these two cases, I found no other gaps.\n\nAFAICS, the only way to close the gap around unique-ification strategy\nwould be with some piece of bespoke infrastructure that just does\nexactly that. The inability to control the sort order selected by\nsort_inner_and_outer() could, I believe, be closed by a hook at\ninitial or final cost time. As noted above, such a hook is also useful\nwhen you're not trying to arrive at a specific plan, but rather have\nsome policy around what kind of plan you want to end up with and wish\nto penalize plans that don't comply with your policy. So maybe this is\nan argument for adding that hook. That said, even without that, this\nmight be close enough for government work. If the planner chooses the\ncorrect input paths and if it also chooses a merge join, how likely is\nit that it will now choose the wrong pathkeys to perform that merge\njoin? I bet it's quite unlikely, because I think the effect of the\nlogic we have will probably be to just do the smallest possible amount\nof sorting, and that's also probably the answer the user wants. So it\nmight be OK in practice to just not worry about this. On the other\nhand, a leading cause of disasters is people assuming that certain\nthings would not go wrong and then having those exact things go wrong,\nso it's probably unwise to be confident that the attached patch is all\nwe'll ever need.\n\nStill, I think it's a pretty useful starting point. It is mostly\nenough to give you control over join planning, and if combined with\nsimilar work for scan planning, I think it would be enough for\npg_hint_plan. If we also got control over appendrel and agg planning,\nthen you could do a bunch more cool things.\n\nComments?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 18 Sep 2024 11:48:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allowing extensions to control planner behavior"
},
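A minimal sketch, for illustration only, of the kind of hook interface described in the message above. The name join_path_setup_hook and the set of sub-strategies (plain nested loop, nested loop with Materialize or Memoize, merge join with and without Materialize, hash join) come from the message; the exact signature, the bit names, and the reuse of add_paths_to_joinrel()'s arguments are assumptions, not the patch's actual API.

#include "postgres.h"

#include "nodes/pathnodes.h"

/* Assumed bit names: one bit per join sub-strategy the hook may allow. */
#define JPS_NESTLOOP_PLAIN          (1 << 0)
#define JPS_NESTLOOP_MATERIALIZE    (1 << 1)
#define JPS_NESTLOOP_MEMOIZE        (1 << 2)
#define JPS_MERGEJOIN_PLAIN         (1 << 3)
#define JPS_MERGEJOIN_MATERIALIZE   (1 << 4)
#define JPS_HASHJOIN                (1 << 5)
#define JPS_ALL                     ((1 << 6) - 1)

/* Assumed signature, mirroring the inputs of add_paths_to_joinrel(). */
typedef uint32 (*join_path_setup_hook_type) (PlannerInfo *root,
                                             RelOptInfo *joinrel,
                                             RelOptInfo *outerrel,
                                             RelOptInfo *innerrel,
                                             JoinType jointype,
                                             JoinPathExtraData *extra);
extern PGDLLIMPORT join_path_setup_hook_type join_path_setup_hook;

/* An extension that wanted to rule out hash joins for every call could do: */
static uint32
forbid_hashjoin_hook(PlannerInfo *root, RelOptInfo *joinrel,
                     RelOptInfo *outerrel, RelOptInfo *innerrel,
                     JoinType jointype, JoinPathExtraData *extra)
{
    return JPS_ALL & ~JPS_HASHJOIN;
}

Returning the full mask keeps the planner's default behaviour for that add_paths_to_joinrel() call, while returning a narrower mask (or zero) shuts strategies off for just that pairing, which is how an extension can also steer join order as described above.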
{
"msg_contents": "On 18/9/2024 17:48, Robert Haas wrote:\n> Comments?\nLet me share my personal experience on path management in the planner.\nThe main thing important for extensions is flexibility - I would discuss \na decision that is not limited by join ordering but could be applied to \nimplement an index picking strategy, Memoize/Material choice versus a \nnon-cached one, choice of custom paths, etc.\n\nThe most flexible way I have found to this moment is a collaboration \nbetween the get_relation_info_hook and add_path hook. In \nget_relation_info, we have enough context and can add some information \nto RelOptInfo - I added an extra list to this structure where extensions \ncan add helpful info. Using extensible nodes, we can tackle interference \nbetween extensions.\nThe add_path hook can analyse new and old paths and also look into the \nextensible data inside RelOptInfo. The issue with lots of calls eases by \nquick return on the out-of-scope paths: usually extensions manage some \nspecific path type or relation and quick test of RelOptInfo::extra_list \nallow to sort out unnecessary cases.\n\nBeing flexible, this approach is less invasive. Now, I use it to \nimplement heuristics demanded by clients for cases when the estimator \npredicts only one row - usually, it means that the optimiser \nunderestimates cardinality. For example, in-place switch-off of NestLoop \nif it uses too many clauses, or rules to pick up index scan if we have \nalternative scans, each of them predicts only one tuple.\n\nPositive outcomes includes: we don't alter path costs; extension may be \nsure that core doesn't remove path from the list if the extension \nforbids it.\n\nIn attachment - hooks for add_path and add_partial_path. As you can see, \nbecause of differences in these routines hooks also implemented \ndifferently. Also the compare_path_costs_fuzzily is exported, but it is \nreally good stuff for an extension.\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Mon, 30 Sep 2024 16:50:40 +0700",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allowing extensions to control planner behavior"
}
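For reference, the get_relation_info_hook half of this approach can be sketched against the hook that already exists in core (declared in optimizer/plancat.h). The add_path/add_partial_path hooks and the exported compare_path_costs_fuzzily mentioned above exist only in the attached patch, so only the core-side registration is shown here; where exactly the extension stashes its per-relation data is left as a comment.

#include "postgres.h"

#include "fmgr.h"
#include "optimizer/plancat.h"

PG_MODULE_MAGIC;

static get_relation_info_hook_type prev_get_relation_info_hook = NULL;

static void
my_get_relation_info(PlannerInfo *root, Oid relationObjectId,
                     bool inhparent, RelOptInfo *rel)
{
    /* Chain to any previously installed hook first. */
    if (prev_get_relation_info_hook)
        prev_get_relation_info_hook(root, relationObjectId, inhparent, rel);

    /*
     * At this point the extension has enough context to record per-relation
     * hints (e.g. as an extensible node appended to a private list), to be
     * consulted cheaply again from an add_path-time hook.
     */
}

void
_PG_init(void)
{
    prev_get_relation_info_hook = get_relation_info_hook;
    get_relation_info_hook = my_get_relation_info;
}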
] |
[
{
"msg_contents": "Hi All,\n\nI've encountered a noticeable difference in execution time and query\nexecution plan row counts between PostgreSQL 13 and PostgreSQL 16 when\nrunning a query on information_schema tables. Surprisingly, PostgreSQL 16\nis performing slower than PostgreSQL 13.\n\nThe query executed on both versions is as follows:\nSELECT DISTINCT \"tc\".\"constraint_name\" AS \"ConstraintName\",\n\"ccu\".\"column_name\" AS \"ColumnName\"\n FROM\n information_schema.constraint_column_usage AS \"ccu\" right join\ninformation_schema.table_constraints AS \"tc\"\nON \"tc\".\"constraint_catalog\" = \"ccu\".\"constraint_catalog\"\n AND \"tc\".\"constraint_name\" = \"ccu\".\"constraint_name\"\n WHERE \"tc\".\"constraint_type\" = 'PRIMARY KEY'\n AND \"ccu\".\"table_name\" = 't_c56ng1_repository'\n\n\n Here are the details of the PostgreSQL versions and the execution plans:\n\n*4PostgreSQL 13.14 (PostgreSQL 13.14 on x86_64-pc-linux-gnu, compiled by\ngcc 11.4.0, 64-bit)*\nExecution plan: PG13.14 Execution Plan\n<https://explain.dalibo.com/plan/ag1a62a9d47dg29d>\n\n*PostgreSQL 16.4 (PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc\n11.4.0, 64-bit)*\nExecution plan: PG16.4 Execution Plan\n<https://explain.dalibo.com/plan/4c66fdfbf2hf9ed2>\n\nHas anyone else experienced similar behavior or could provide insights into\nwhy PostgreSQL 16 might be slower for this query? Any advice or suggestions\nfor optimization would be greatly appreciated.\n\nThank you!\nNOTE:- PFA the raw file of explain and analyze below.",
"msg_date": "Tue, 27 Aug 2024 03:19:04 +0530",
"msg_from": "nikhil raj <nikhilraj474@gmail.com>",
"msg_from_op": true,
"msg_subject": "Significant Execution Time Difference Between PG13.14 and PG16.4 for\n Query on information_schema Tables."
},
{
"msg_contents": "On 8/26/24 14:49, nikhil raj wrote:\n> Hi All,\n> \n> I've encountered a noticeable difference in execution time and query \n> execution plan row counts between PostgreSQL 13 and PostgreSQL 16 when \n> running a query on |information_schema| tables. Surprisingly, PostgreSQL \n> 16 is performing slower than PostgreSQL 13.\n\nDid you run ANALYZE on the Postgres 16 instance?\n\n> *4PostgreSQL 13.14 (PostgreSQL 13.14 on x86_64-pc-linux-gnu, compiled by \n> gcc 11.4.0, 64-bit)*\n> Execution plan: PG13.14 Execution Plan \n> <https://explain.dalibo.com/plan/ag1a62a9d47dg29d>\n> \n> *PostgreSQL 16.4 (PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by \n> gcc 11.4.0, 64-bit)*\n> Execution plan: PG16.4 Execution Plan \n> <https://explain.dalibo.com/plan/4c66fdfbf2hf9ed2>\n\n\nUse:\n\nhttps://explain.depesz.com/\n\nIt is easier to follow it's output.\n\n> \n> \n> Has anyone else experienced similar behavior or could provide insights \n> into why PostgreSQL 16 might be slower for this query? Any advice or \n> suggestions for optimization would be greatly appreciated.\n\nYes when ANALYZE was not run on a new instance.\n\n> \n> Thank you!\n> \n> NOTE:- PFA the raw file of explain and analyze below.\n> \n> \n> \n> \n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 15:10:06 -0700",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "Hi Adrian,\n\nThanks for the quick response.\n\nI've already performed a vacuum, reindex, and analyze on the entire\ndatabase, but the issue persists. As you can see from the execution plan,\nthe time difference in PostgreSQL 16 is still significantly higher, even\nafter all maintenance activities have been completed.\nIt seems there might be a bug in PostgreSQL 16 where the performance of\nqueries on *information_schema* tables is degraded. As both the tables are\npostgres system tables\n\nhttps://explain.depesz.com/s/bdO6b :-PG13\n<https://explain.depesz.com/s/bdO6b>\nhttps://explain.depesz.com/s/bpAU :- PG16\n<https://explain.depesz.com/s/bpAU>\n\nOn Tue 27 Aug, 2024, 3:40 AM Adrian Klaver, <adrian.klaver@aklaver.com>\nwrote:\n\n> On 8/26/24 14:49, nikhil raj wrote:\n> > Hi All,\n> >\n> > I've encountered a noticeable difference in execution time and query\n> > execution plan row counts between PostgreSQL 13 and PostgreSQL 16 when\n> > running a query on |information_schema| tables. Surprisingly, PostgreSQL\n> > 16 is performing slower than PostgreSQL 13.\n>\n> Did you run ANALYZE on the Postgres 16 instance?\n>\n> > *4PostgreSQL 13.14 (PostgreSQL 13.14 on x86_64-pc-linux-gnu, compiled by\n> > gcc 11.4.0, 64-bit)*\n> > Execution plan: PG13.14 Execution Plan\n> > <https://explain.dalibo.com/plan/ag1a62a9d47dg29d>\n> >\n> > *PostgreSQL 16.4 (PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by\n> > gcc 11.4.0, 64-bit)*\n> > Execution plan: PG16.4 Execution Plan\n> > <https://explain.dalibo.com/plan/4c66fdfbf2hf9ed2>\n>\n>\n> Use:\n>\n> https://explain.depesz.com/\n>\n> It is easier to follow it's output.\n>\n> >\n> >\n> > Has anyone else experienced similar behavior or could provide insights\n> > into why PostgreSQL 16 might be slower for this query? Any advice or\n> > suggestions for optimization would be greatly appreciated.\n>\n> Yes when ANALYZE was not run on a new instance.\n>\n> >\n> > Thank you!\n> >\n> > NOTE:- PFA the raw file of explain and analyze below.\n> >\n> >\n> >\n> >\n>\n> --\n> Adrian Klaver\n> adrian.klaver@aklaver.com\n>\n>\n\nHi Adrian,Thanks for the quick response.I've already performed a vacuum, reindex, and analyze on the entire database, but the issue persists. As you can see from the execution plan, the time difference in PostgreSQL 16 is still significantly higher, even after all maintenance activities have been completed.It seems there might be a bug in PostgreSQL 16 where the performance of queries on information_schema tables is degraded. As both the tables are postgres system tables https://explain.depesz.com/s/bdO6b :-PG13https://explain.depesz.com/s/bpAU :- PG16On Tue 27 Aug, 2024, 3:40 AM Adrian Klaver, <adrian.klaver@aklaver.com> wrote:On 8/26/24 14:49, nikhil raj wrote:\n> Hi All,\n> \n> I've encountered a noticeable difference in execution time and query \n> execution plan row counts between PostgreSQL 13 and PostgreSQL 16 when \n> running a query on |information_schema| tables. 
Surprisingly, PostgreSQL \n> 16 is performing slower than PostgreSQL 13.\n\nDid you run ANALYZE on the Postgres 16 instance?\n\n> *4PostgreSQL 13.14 (PostgreSQL 13.14 on x86_64-pc-linux-gnu, compiled by \n> gcc 11.4.0, 64-bit)*\n> Execution plan: PG13.14 Execution Plan \n> <https://explain.dalibo.com/plan/ag1a62a9d47dg29d>\n> \n> *PostgreSQL 16.4 (PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by \n> gcc 11.4.0, 64-bit)*\n> Execution plan: PG16.4 Execution Plan \n> <https://explain.dalibo.com/plan/4c66fdfbf2hf9ed2>\n\n\nUse:\n\nhttps://explain.depesz.com/\n\nIt is easier to follow it's output.\n\n> \n> \n> Has anyone else experienced similar behavior or could provide insights \n> into why PostgreSQL 16 might be slower for this query? Any advice or \n> suggestions for optimization would be greatly appreciated.\n\nYes when ANALYZE was not run on a new instance.\n\n> \n> Thank you!\n> \n> NOTE:- PFA the raw file of explain and analyze below.\n> \n> \n> \n> \n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com",
"msg_date": "Tue, 27 Aug 2024 04:11:36 +0530",
"msg_from": "nikhil raj <nikhilraj474@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On 8/26/24 15:41, nikhil raj wrote:\n> Hi Adrian,\n> \n> Thanks for the quick response.\n> \n> I've already performed a vacuum, reindex, and analyze on the entire \n> database, but the issue persists. As you can see from the execution \n> plan, the time difference in PostgreSQL 16 is still significantly \n> higher, even after all maintenance activities have been completed.\n> \n> It seems there might be a bug in PostgreSQL 16 where the performance of \n> queries on *information_schema* tables is degraded. As both the tables \n> are postgres system tables\n> \n> https://explain.depesz.com/s/bdO6b <https://explain.depesz.com/s/bdO6b> \n> :-PG13 <https://explain.depesz.com/s/bdO6b>\n> \n> https://explain.depesz.com/s/bpAU <https://explain.depesz.com/s/bpAU> \n> :- PG16 <https://explain.depesz.com/s/bpAU>\n\nWhat I see is Postgres 13:\n\nNested Loop (cost=9.54..119.02 rows=1 width=128) (actual \ntime=1.038..288.777 rows=1 loops=1)\n\n Join Filter: ((\"*SELECT* 1\".constraint_name)::name = \"*SELECT* \n1_1\".conname)\n Rows Removed by Join Filter: 935\n Buffers: shared hit=34,675\n\nvs Postgres 16\n\nNested Loop (cost=62.84..538.22 rows=1 width=128) (actual \ntime=1,905.153..14,006.921 rows=1 loops=1)\n\n Join Filter: (\"*SELECT* 1\".conname = (\"*SELECT* \n1_1\".constraint_name)::name)\n Rows Removed by Join Filter: 997\n Buffers: shared hit=5,153,054\n\n\nSo either switching this\n\n(\"*SELECT* 1\".constraint_name)::name = \"*SELECT* 1_1\".conname\n\nto\n\n\"*SELECT* 1\".conname = (\"*SELECT* 1_1\".constraint_name)::name\n\nis more of a change then I would expect.\n\nOr\n\nBuffers: shared hit=34,675\n\nvs\n\nBuffers: shared hit=5,153,054\n\nindicates a hardware/configuration difference.\n\nAre both instances running on the same machine?\n\nIs the configuration for both the same?\n\n> \n> On Tue 27 Aug, 2024, 3:40 AM Adrian Klaver, <adrian.klaver@aklaver.com \n> <mailto:adrian.klaver@aklaver.com>> wrote:\n> \n> On 8/26/24 14:49, nikhil raj wrote:\n> > Hi All,\n> >\n> > I've encountered a noticeable difference in execution time and query\n> > execution plan row counts between PostgreSQL 13 and PostgreSQL 16\n> when\n> > running a query on |information_schema| tables. Surprisingly,\n> PostgreSQL\n> > 16 is performing slower than PostgreSQL 13.\n> \n> Did you run ANALYZE on the Postgres 16 instance?\n> \n> > *4PostgreSQL 13.14 (PostgreSQL 13.14 on x86_64-pc-linux-gnu,\n> compiled by\n> > gcc 11.4.0, 64-bit)*\n> > Execution plan: PG13.14 Execution Plan\n> > <https://explain.dalibo.com/plan/ag1a62a9d47dg29d\n> <https://explain.dalibo.com/plan/ag1a62a9d47dg29d>>\n> >\n> > *PostgreSQL 16.4 (PostgreSQL 16.4 on x86_64-pc-linux-gnu,\n> compiled by\n> > gcc 11.4.0, 64-bit)*\n> > Execution plan: PG16.4 Execution Plan\n> > <https://explain.dalibo.com/plan/4c66fdfbf2hf9ed2\n> <https://explain.dalibo.com/plan/4c66fdfbf2hf9ed2>>\n> \n> \n> Use:\n> \n> https://explain.depesz.com/ <https://explain.depesz.com/>\n> \n> It is easier to follow it's output.\n> \n> >\n> >\n> > Has anyone else experienced similar behavior or could provide\n> insights\n> > into why PostgreSQL 16 might be slower for this query? Any advice or\n> > suggestions for optimization would be greatly appreciated.\n> \n> Yes when ANALYZE was not run on a new instance.\n> \n> >\n> > Thank you!\n> >\n> > NOTE:- PFA the raw file of explain and analyze below.\n> >\n> >\n> >\n> >\n> \n> -- \n> Adrian Klaver\n> adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>\n> \n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 17:32:21 -0700",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "nikhil raj <nikhilraj474@gmail.com> writes:\n> I've encountered a noticeable difference in execution time and query\n> execution plan row counts between PostgreSQL 13 and PostgreSQL 16 when\n> running a query on information_schema tables. Surprisingly, PostgreSQL 16\n> is performing slower than PostgreSQL 13.\n\nYeah, it looks like that condition on \"table_name\" is not getting\npushed down to the scan level anymore. I'm not sure why not,\nbut will look closer tomorrow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2024 21:40:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Tue, 27 Aug 2024 at 13:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, it looks like that condition on \"table_name\" is not getting\n> pushed down to the scan level anymore. I'm not sure why not,\n> but will look closer tomorrow.\n\nI was looking for the offending commit as at first I thought it might\nbe related to Memoize. It does not seem to be.\n\nI get the following up until 2489d76c, and from then on, it's a subquery filter.\n\n -> Index Scan using pg_class_relname_nsp_index on pg_class r_2\n(cost=0.27..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=0\nloops=1)\n Index Cond: (relname = 't_c56ng1_repository'::name)\n Filter: ((relkind = ANY ('{r,p}'::\"char\"[])) AND\npg_has_role(relowner, 'USAGE'::text))\n\nSo looks like it was the \"Make Vars be outer-join-aware.\" commit that\nchanged this.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Aug 2024 13:50:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 27 Aug 2024 at 13:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, it looks like that condition on \"table_name\" is not getting\n>> pushed down to the scan level anymore. I'm not sure why not,\n>> but will look closer tomorrow.\n\n> So looks like it was the \"Make Vars be outer-join-aware.\" commit that\n> changed this.\n\nYeah, I got that same result by bisecting. It seems like it's\nsomehow related to the cast to information_schema.sql_identifier:\nwe are able to get rid of that normally but seem to fail to do so\nin this query.\n\nThere was a smaller increase in the runtime at dfb75e478 \"Add primary\nkeys and unique constraints to system catalogs\", but that seems to\njust be due to there being more rows in the relevant catalogs.\n(That's from testing the query in an empty database; probably the\neffect of dfb75e478 would be swamped in a production DB anyway.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2024 22:03:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On 2024-08-27 11:50, David Rowley wrote:\n> On Tue, 27 Aug 2024 at 13:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, it looks like that condition on \"table_name\" is not getting\n>> pushed down to the scan level anymore. I'm not sure why not,\n>> but will look closer tomorrow.\n> \n> I was looking for the offending commit as at first I thought it might\n> be related to Memoize. It does not seem to be.\n\nAs a general thought, seeing that this might be an actual problem\nshould some kind of automated testing be added that checks for\nperformance regressions like this?\n\nRegards and best wishes,\n\nJustin Clift\n\n\n",
"msg_date": "Tue, 27 Aug 2024 16:00:13 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Tue, 27 Aug 2024 at 18:00, Justin Clift <justin@postgresql.org> wrote:\n> As a general thought, seeing that this might be an actual problem\n> should some kind of automated testing be added that checks for\n> performance regressions like this?\n\nWe normally try to catch these sorts of things with regression tests.\nOf course, that requires having a test that would catch a particular\nproblem, which we don't seem to have for this particular case. A\nperformance test would also require testing a particular scenario, so\nI don't see why that's better. A regression test is better suited as\nthere's no middle ground between pass and fail.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Aug 2024 22:14:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Tue, 27 Aug 2024 at 14:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I got that same result by bisecting. It seems like it's\n> somehow related to the cast to information_schema.sql_identifier:\n> we are able to get rid of that normally but seem to fail to do so\n> in this query.\n\nIn case it saves you a bit of time, I stripped as much of the\nunrelated stuff out as I could and got:\n\ncreate table t (a name, b int);\nexplain select * from (select a::varchar,b from (select distinct a,b\nfrom t) st) t right join t t2 on t.b=t2.b where t.a='test';\n\ngetting rid of the cast or swapping to INNER JOIN rather than RIGHT\nJOIN means that qual_is_pushdown_safe() gets a Var rather than a\nPlaceHolderVar.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Aug 2024 23:03:00 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "[ switching to -hackers list ]\n\nDavid Rowley <dgrowleyml@gmail.com> writes:\n> In case it saves you a bit of time, I stripped as much of the\n> unrelated stuff out as I could and got:\n\n> create table t (a name, b int);\n> explain select * from (select a::varchar,b from (select distinct a,b\n> from t) st) t right join t t2 on t.b=t2.b where t.a='test';\n\n> getting rid of the cast or swapping to INNER JOIN rather than RIGHT\n> JOIN means that qual_is_pushdown_safe() gets a Var rather than a\n> PlaceHolderVar.\n\nThanks. So it seems that what's happening is that we stick a\nPlaceHolderVar on the intermediate subquery's output (\"a::varchar\"),\nand then later when we realize that the RIGHT JOIN can be reduced to\nan inner join we run around and remove the right join from the\nPlaceHolderVar's nullingrels, leaving a useless PHV with no\nnullingrels. remove_nulling_relids explains\n\n * Note: it might seem desirable to remove the PHV altogether if\n * phnullingrels goes to empty. Currently we dare not do that\n * because we use PHVs in some cases to enforce separate identity\n * of subexpressions; see wrap_non_vars usages in prepjointree.c.\n\nHowever, then when we consider whether the upper WHERE condition\ncan be pushed down into the unflattened lower subquery,\nqual_is_pushdown_safe punts:\n\n * XXX Punt if we find any PlaceHolderVars in the restriction clause.\n * It's not clear whether a PHV could safely be pushed down, and even\n * less clear whether such a situation could arise in any cases of\n * practical interest anyway. So for the moment, just refuse to push\n * down.\n\nWe didn't see this particular behavior before 2489d76c49 because\npullup_replace_vars avoided inserting a PHV:\n\n * If it contains a Var of the subquery being pulled up, and\n * does not contain any non-strict constructs, then it's\n * certainly nullable so we don't need to insert a\n * PlaceHolderVar.\n\nI dropped that case in 2489d76c49 because now we need to attach\nnullingrels to the expression. You could imagine attaching the\nnullingrels to the contained Var(s) instead of putting a PHV on top,\nbut that seems like a mess and I'm not quite sure it's semantically\nthe same. In any case it wouldn't fix adjacent cases where there is\na non-strict construct in the subquery output expression.\n\nSo it seems like we need to fix one or the other of these\nimplementation shortcuts to restore the previous behavior.\nI'm wondering if it'd be okay for qual_is_pushdown_safe to accept\nPHVs that have no nullingrels. I'm not really thrilled about trying\nto back-patch any such fix though --- the odds of introducing new bugs\nseem nontrivial, and the problem case seems rather narrow. If we\nare willing to accept a HEAD-only fix, it'd likely be better to\nattack the other end and make it possible to remove no-op PHVs.\nI think that'd require marking PHVs that need to be kept because\nthey are serving to isolate subexpressions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2024 12:15:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "I wrote:\n> We didn't see this particular behavior before 2489d76c49 because\n> pullup_replace_vars avoided inserting a PHV:\n> * If it contains a Var of the subquery being pulled up, and\n> * does not contain any non-strict constructs, then it's\n> * certainly nullable so we don't need to insert a\n> * PlaceHolderVar.\n> I dropped that case in 2489d76c49 because now we need to attach\n> nullingrels to the expression. You could imagine attaching the\n> nullingrels to the contained Var(s) instead of putting a PHV on top,\n> but that seems like a mess and I'm not quite sure it's semantically\n> the same. In any case it wouldn't fix adjacent cases where there is\n> a non-strict construct in the subquery output expression.\n\nI realized that actually we do have the mechanism for making that\nwork: we could apply add_nulling_relids to the expression, if it\nmeets those same conditions. This is a kluge really, but it would\nrestore the status quo ante in a fairly localized fashion that\nseems like it might be safe enough to back-patch into v16.\n\nHere's a WIP patch that does it like that. One problem with it\nis that it requires rcon->relids to be calculated in cases where\nwe didn't need that before, which is probably not *that* expensive\nbut it's annoying. If we go forward with this, I'm thinking about\nchanging add_nulling_relids' API contract to say \"if target_relid\nis NULL then all level-zero Vars/PHVs are modified\", so that we\ndon't need that relid set in non-LATERAL cases.\n\nThe other problem with this is that it breaks one test case in\nmemoize.sql: a query that formerly generated a memoize plan\nnow does not use memoize. I am not sure why not --- does that\nmean anything to you?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 27 Aug 2024 17:52:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
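For concreteness, the contract change being floated could be documented roughly as below. The prototype follows add_nulling_relids() as it exists in rewriteManip.h as far as this sketch knows, but treat the parameter types as an assumption, and the comment as the proposed (not committed) behaviour.

/*
 * add_nulling_relids: add the given outer-join relids to the nullingrels
 * fields of Vars/PHVs in the given expression.
 *
 * Proposed contract: if target_relids is NULL, modify every level-zero
 * Var/PHV; otherwise modify only those belonging to target_relids.
 */
extern Node *add_nulling_relids(Node *node,
                                const Bitmapset *target_relids,
                                const Bitmapset *added_relids);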
{
"msg_contents": "On Wed, 28 Aug 2024 at 09:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The other problem with this is that it breaks one test case in\n> memoize.sql: a query that formerly generated a memoize plan\n> now does not use memoize. I am not sure why not --- does that\n> mean anything to you?\n\nThe reason it works in master is that get_memoize_path() calls\nextract_lateral_vars_from_PHVs() and finds PlaceHolderVars to use as\nthe Memoize keys. With your patch PlannerInfo.placeholder_list is\nempty.\n\nThe commit that made this work is 069d0ff02. Richard might be able to\nexplain better. I don't quite understand why RelOptInfo.lateral_vars\ndon't contain these in the first place.\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Aug 2024 11:03:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 28 Aug 2024 at 09:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The other problem with this is that it breaks one test case in\n>> memoize.sql: a query that formerly generated a memoize plan\n>> now does not use memoize. I am not sure why not --- does that\n>> mean anything to you?\n\n> The reason it works in master is that get_memoize_path() calls\n> extract_lateral_vars_from_PHVs() and finds PlaceHolderVars to use as\n> the Memoize keys. With your patch PlannerInfo.placeholder_list is\n> empty.\n\nThat seems like a pretty fishy way to do it. Are you saying that\nMemoize is never applicable if there aren't outer joins in the\nquery? Without OJs there probably won't be any PHVs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2024 19:15:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "I wrote:\n> That seems like a pretty fishy way to do it. Are you saying that\n> Memoize is never applicable if there aren't outer joins in the\n> query? Without OJs there probably won't be any PHVs.\n\nOh, scratch that, I see you mean this is an additional way to do it\nnot the only way to do it. But I'm confused why it works for\n\tt1.two+1 AS c1\nbut not\n\tt1.two+t2.two AS c1\nThose ought to look pretty much the same for this purpose.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2024 19:37:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Wed, 28 Aug 2024 at 11:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Oh, scratch that, I see you mean this is an additional way to do it\n> not the only way to do it. But I'm confused why it works for\n> t1.two+1 AS c1\n> but not\n> t1.two+t2.two AS c1\n> Those ought to look pretty much the same for this purpose.\n\nThe bms_overlap(pull_varnos(rcon->root, newnode), rcon->relids) test\nis false with t1.two+1. Looks like there needs to be a Var from t2\nfor the bms_overlap to be true\n\nDavid\n\n\n",
"msg_date": "Wed, 28 Aug 2024 11:57:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 5:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I realized that actually we do have the mechanism for making that\n> work: we could apply add_nulling_relids to the expression, if it\n> meets those same conditions.\n\nI think this should work, as long as we apply add_nulling_relids only\nto Vars/PHVs that belong to the subquery in this case, because only\nthose Vars/PHVs would be nulled by the outer joins contained in the\nnullingrels.\n\n> If we go forward with this, I'm thinking about\n> changing add_nulling_relids' API contract to say \"if target_relid\n> is NULL then all level-zero Vars/PHVs are modified\", so that we\n> don't need that relid set in non-LATERAL cases.\n\n+1. In LATERAL case, we can always find the subquery's relids in\nrcon->relids. In non-lateral case, any level-zero Vars/PHVs must\nbelong to the subquery - so if we change add_nulling_relids' API to be\nso, we do not need to have rcon->relids set.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 28 Aug 2024 11:30:45 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 11:30 AM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Wed, Aug 28, 2024 at 5:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I realized that actually we do have the mechanism for making that\n> > work: we could apply add_nulling_relids to the expression, if it\n> > meets those same conditions.\n>\n> I think this should work, as long as we apply add_nulling_relids only\n> to Vars/PHVs that belong to the subquery in this case, because only\n> those Vars/PHVs would be nulled by the outer joins contained in the\n> nullingrels.\n\nTo be more concrete, I know theoretically it is the whole expression\nthat is nullable by the outer joins, not its individual vars. But in\nthis case if the contained vars (that belong to the subquery) become\nNULL, the whole expression would be NULL too, because it does not\ncontain any non-strict constructs. That's why I think this approach\nshould work.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 28 Aug 2024 11:52:46 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On 2024-08-27 20:14, David Rowley wrote:\n> On Tue, 27 Aug 2024 at 18:00, Justin Clift <justin@postgresql.org> \n> wrote:\n>> As a general thought, seeing that this might be an actual problem\n>> should some kind of automated testing be added that checks for\n>> performance regressions like this?\n> \n> We normally try to catch these sorts of things with regression tests.\n> Of course, that requires having a test that would catch a particular\n> problem, which we don't seem to have for this particular case. A\n> performance test would also require testing a particular scenario, so\n> I don't see why that's better. A regression test is better suited as\n> there's no middle ground between pass and fail.\n\nYeah, that's the kind of thing I was thinking.\n\nAny idea who normally does those, and if it would be reasonable to add\ntest(s) for the internal information tables?\n\nRegards and best wishes,\n\nJustin Clift\n\n\n",
"msg_date": "Wed, 28 Aug 2024 16:58:56 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 12:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we\n> are willing to accept a HEAD-only fix, it'd likely be better to\n> attack the other end and make it possible to remove no-op PHVs.\n> I think that'd require marking PHVs that need to be kept because\n> they are serving to isolate subexpressions.\n\nI think it's always desirable to remove no-op PHVs, even if we end up\nwith a different approach to fix the issue discussed here. Doing that\ncould potentially open up opportunities for optimization in other\ncases. For example:\n\nexplain (costs off)\nselect * from t t1 left join\n lateral (select t1.a as x, * from t t2) s on true\nwhere t1.a = s.a;\n QUERY PLAN\n----------------------------\n Nested Loop\n -> Seq Scan on t t1\n -> Seq Scan on t t2\n Filter: (t1.a = a)\n(4 rows)\n\nThe target entry s.x is wrapped in a PHV that contains lateral\nreference to t1, which forces us to resort to nestloop join. However,\nsince the left join has been reduced to an inner join, and it is\nremoved from the PHV's nullingrels, leaving the nullingrels being\nempty, we should be able to remove this PHV and use merge or hash\njoins, depending on which is cheaper.\n\nI think there may be more cases where no-op PHVs constrain\noptimization opportunities.\n\nIn [1] when working on the fix-grouping-sets patch, I included a\nmechanism in 0003 to remove no-op PHVs by including a flag in\nPlaceHolderVar to indicate whether it is safe to remove the PHV when\nits phnullingrels becomes empty. In that patch this flag is only set\nin cases where the PHV is used to carry the nullingrel bit that\nrepresents the grouping step. Maybe we can extend its use to remove\nall no-op PHVs, except those that are serving to isolate\nsubexpressions.\n\nAny thoughts on this?\n\n[1] https://postgr.es/m/CAMbWs4_2t2pqqCFdS3NYJLwMMkAzYQKBOhKweFt-wE3YOi7rGg@mail.gmail.com\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 28 Aug 2024 15:08:16 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
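As a rough illustration of the flag idea, here is the PlaceHolderVar struct abridged from nodes/pathnodes.h with one added field; the field's name and placement are assumptions (the referenced 0003 patch may spell it differently), and the pg_node_attr annotations of the real definition are omitted.

typedef struct PlaceHolderVar
{
    Expr        xpr;
    Expr       *phexpr;         /* the represented expression */
    Relids      phrels;         /* base+OJ relids syntactically within expr */
    Relids      phnullingrels;  /* RT indexes of outer joins that can null it */
    bool        phremovable;    /* assumed name: OK to discard the PHV once
                                 * phnullingrels becomes empty? */
    Index       phid;           /* ID for PHV (unique within planner run) */
    Index       phlevelsup;     /* > 0 if PHV belongs to outer query */
} PlaceHolderVar;

Setting the flag only in the places that wrap expressions purely to carry nullingrels, and leaving it unset where the PHV serves to isolate a subexpression, would let the planner drop exactly the no-op PHVs discussed above.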
{
"msg_contents": "On Wed, Aug 28, 2024 at 7:58 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 28 Aug 2024 at 11:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Oh, scratch that, I see you mean this is an additional way to do it\n> > not the only way to do it. But I'm confused why it works for\n> > t1.two+1 AS c1\n> > but not\n> > t1.two+t2.two AS c1\n> > Those ought to look pretty much the same for this purpose.\n>\n> The bms_overlap(pull_varnos(rcon->root, newnode), rcon->relids) test\n> is false with t1.two+1. Looks like there needs to be a Var from t2\n> for the bms_overlap to be true\n\nExactly. What Tom's patch does is that if the expression contains\nVars/PHVs that belong to the subquery, and does not contain any\nnon-strict constructs, then it can escape being wrapped.\n\nIn expression 't1.two+t2.two', 't2.two' is a Var that belongs to the\nsubquery, and '+' is strict, so it can escape being wrapped.\n\nThe expression 't1.two+1' does not meet these conditions, so it is\nwrapped into a PHV, and the PHV contains lateral reference to t1,\nwhich results in a nestloop join with a parameterized inner path.\nThat's why Memoize can work in this query.\n\nThanks\nRichard\n\n\n",
"msg_date": "Wed, 28 Aug 2024 15:31:27 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Exactly. What Tom's patch does is that if the expression contains\n> Vars/PHVs that belong to the subquery, and does not contain any\n> non-strict constructs, then it can escape being wrapped.\n\n> In expression 't1.two+t2.two', 't2.two' is a Var that belongs to the\n> subquery, and '+' is strict, so it can escape being wrapped.\n\n> The expression 't1.two+1' does not meet these conditions, so it is\n> wrapped into a PHV, and the PHV contains lateral reference to t1,\n> which results in a nestloop join with a parameterized inner path.\n> That's why Memoize can work in this query.\n\nYeah. (I'd missed that t1.two is a lateral reference and t2.two is\nnot; sorry for the noise.)\n\nWhat happens as of HEAD is that, because we wrap this subquery output\nin a PHV marked as due to be evaluated at t2, the entire clause\n\n\t(t1.two+t2.two) = t2.unique1\n\nbecomes a base restriction clause for t2, so that when we generate\na path for t2 it will include this as a path qual (forcing the path\nto be laterally dependent on t1). Without the PHV, it's just an\nordinary join clause and it will not be evaluated at scan level\nunless it can be turned into an indexqual --- which it can't.\n\nThe preceding regression-test case with \"t1.two+1 = t2.unique1\"\ncan be made into a parameterized indexscan on t2.unique1, so it is,\nand then memoize can trigger off that.\n\nI'm inclined to think that treating such a clause as a join clause\nis strictly better than what happens now, so I'm not going to\napologize for the PHV not being there. If you wanted to cast\nblame, you could look to set_plain_rel_pathlist, where it says\n\n * We don't support pushing join clauses into the quals of a seqscan, but\n * it could still have required parameterization due to LATERAL refs in\n * its tlist.\n\n(This comment could stand some work, as it fails to note that\nlabeling the path with required parameterization can result in\n\"join clauses\" being evaluated there anyway.)\n\nIn the normal course of things I'd be dubious about the value of\npushing join clauses into a seqscan, but maybe the possibility of a\nmemoize'd join has moved the goalposts enough that we should\nconsider that. Alternatively, maybe get_memoized_path should take\nmore responsibility for devising plausible subpaths rather than\nassuming they'll be handed to it on a platter. (I don't remember\nall the conditions checked in add_path, but I wonder if we are\nmissing some potential memoize applications because suitable paths\nfail to survive the scan rel's add_path tournament.)\n\nIn the meantime, I think this test case is mighty artificial,\nand it wouldn't bother me any to just take it out again for the\ntime being.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Aug 2024 16:47:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In the meantime, I think this test case is mighty artificial,\n> and it wouldn't bother me any to just take it out again for the\n> time being.\n\nYeah, I think we can remove the 't1.two+t2.two' test case if we go\nwith your proposed patch to address this performance regression.\n\nThanks\nRichard\n\n\n",
"msg_date": "Thu, 29 Aug 2024 09:54:57 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "On Wed, 28 Aug 2024 at 18:59, Justin Clift <justin@postgresql.org> wrote:\n> Any idea who normally does those, and if it would be reasonable to add\n> test(s) for the internal information tables?\n\nThese tend to get added along with features and along with of bug\nfixes. I imagine any tests for the information_schema views would be\nfor the results of the views rather than for the expected plans.\nHowever, that seems very separate from this as the bug has nothing to\ndo with information_schema. It just happens to be a query to an\ninformation_schema view that helped highlight the bug. Those views\nare often quite complex and so are the resulting plans. With tests\nchecking the expected EXPLAIN output, it's much better to give these a\nvery narrow focus otherwise the expected output could be too unstable\nand the purpose of the test harder to determine for anyone working on\na new patch which results in a plan change of a preexisting test.\nI've seen tests before rendered useless by people blindly accepting\nthe plan change without properly determining what the test is supposed\nto be testing. That's much more likely to happen when the purpose of\nthe test is less clear due to some unwieldy and complex expected plan.\nI managed to get a reproducer for this down to something quite simple.\nProbably that or something similar would be a better test to make sure\nthis bug stays gone.\n\nDavid\n\n\n",
"msg_date": "Thu, 29 Aug 2024 18:49:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, Aug 29, 2024 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In the meantime, I think this test case is mighty artificial,\n>> and it wouldn't bother me any to just take it out again for the\n>> time being.\n\n> Yeah, I think we can remove the 't1.two+t2.two' test case if we go\n> with your proposed patch to address this performance regression.\n\nHere's a polished-up patchset for that. I made the memoize test\nremoval a separate patch because (a) it only applies to master\nand (b) it seems worth calling out as something we might be able\nto revert later.\n\nI found one bug in the draft patch: add_nulling_relids only processes\nVars of level zero, so we have to apply it before not after adjusting\nthe Vars' levelsup. An alternative could be to add a levelsup\nparameter to add_nulling_relids, but that seemed like unnecessary\ncomplication.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 29 Aug 2024 16:53:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
},
{
"msg_contents": "Hi All,\n\nI hope you're doing well.\n\nI'm writing to kindly requesting if there is a bug tracker ID or any\nreference number associated with this issue, I would appreciate it if you\ncould share it with me.\n\nThank you for your time and assistance. Please let me know if there's any\nadditional information you need from me.\n\nBest regards,\n\nNikhil\n\nOn Fri, 30 Aug, 2024, 2:23 am Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > On Thu, Aug 29, 2024 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> In the meantime, I think this test case is mighty artificial,\n> >> and it wouldn't bother me any to just take it out again for the\n> >> time being.\n>\n> > Yeah, I think we can remove the 't1.two+t2.two' test case if we go\n> > with your proposed patch to address this performance regression.\n>\n> Here's a polished-up patchset for that. I made the memoize test\n> removal a separate patch because (a) it only applies to master\n> and (b) it seems worth calling out as something we might be able\n> to revert later.\n>\n> I found one bug in the draft patch: add_nulling_relids only processes\n> Vars of level zero, so we have to apply it before not after adjusting\n> the Vars' levelsup. An alternative could be to add a levelsup\n> parameter to add_nulling_relids, but that seemed like unnecessary\n> complication.\n>\n> regards, tom lane\n>\n>\n\nHi All,I hope you're doing well.I'm writing to kindly requesting if there is a bug tracker ID or any reference number associated with this issue, I would appreciate it if you could share it with me.Thank you for your time and assistance. Please let me know if there's any additional information you need from me.Best regards,NikhilOn Fri, 30 Aug, 2024, 2:23 am Tom Lane, <tgl@sss.pgh.pa.us> wrote:Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, Aug 29, 2024 at 4:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> In the meantime, I think this test case is mighty artificial,\n>> and it wouldn't bother me any to just take it out again for the\n>> time being.\n\n> Yeah, I think we can remove the 't1.two+t2.two' test case if we go\n> with your proposed patch to address this performance regression.\n\nHere's a polished-up patchset for that. I made the memoize test\nremoval a separate patch because (a) it only applies to master\nand (b) it seems worth calling out as something we might be able\nto revert later.\n\nI found one bug in the draft patch: add_nulling_relids only processes\nVars of level zero, so we have to apply it before not after adjusting\nthe Vars' levelsup. An alternative could be to add a levelsup\nparameter to add_nulling_relids, but that seemed like unnecessary\ncomplication.\n\n regards, tom lane",
"msg_date": "Wed, 18 Sep 2024 11:49:30 +0530",
"msg_from": "nikhil raj <nikhilraj474@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Significant Execution Time Difference Between PG13.14 and PG16.4\n for Query on information_schema Tables."
}
] |
[
{
"msg_contents": "Hello hackers,\n\nThis is an attempt to resurrect the thread [1] to throttle WAL inserts\nbefore the point of commit.\n\nBackground:\n\nTransactions on commit, wait for replication and make sure WAL is\nflushed up to commit lsn on standby, when synchronous_commit is on.\n\nWhile commit is a mandatory sync/wait point, waiting for replication at\nsome periodic intervals en route may be desirable/efficient to act as\ngood citizen. Consider for example, a setup where primary and standby\ncan write at 20GB/sec, while network between them can only transfer at\n2GB/sec. Now if CTAS is run in such a setup for a large table, it can\ngenerate WAL very aggressively on primary, but can't be transferred at\nthat rate to standby. Hence, there would be pending WAL build-up on\nprimary. This exhibits two main things:\n\n- Fairness: new write transactions (even if single tuple I/U/D), and\n even read transactions (setting hint bits) would exhibit latency for\n amount of time equivalent to the pending WAL to be shipped and\n flushed to standby.\n\n- Primary needs to have space to hold that much WAL, since till the WAL\n is not shipped to standby, it can't be recycled, if replication slots\n are in use.\n\nProposed solution (patch attached):\n\n- Global (backend local) variable wal_bytes_written to track the amount\n of wal written by the backend since the start of transaction or the\n last time SyncReplWaitForLSN() was called for this transaction.\n\n- Whenever we find wal_bytes_written exceeds the new\n wait_for_replication_threshold GUC, we set the control flag\n XlogThrottlePending (similar in spirit to LogMemoryContextPending),\n which is then handled at ProcessInterrupts() time. This is the\n mechanism proposed in [2]. Doing it this way avoids issues such as\n holding locks inside a critical section.\n\n- To do the wait itself, we rely on SyncRepWaitForLSN(), with the cached\n value of the WAL flush point.\n\n[1] https://www.postgresql.org/message-id/flat/CAHg%2BQDcO_zhgBCMn5SosvhuuCoJ1vKmLjnVuqUEOd4S73B1urw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/20220105174643.lozdd3radxv4tlmx%40alap3.anarazel.de\n\nRegards,\nShirisha\nBroadcom Inc.\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.",
"msg_date": "Tue, 27 Aug 2024 16:20:40 +0530",
"msg_from": "Shirisha Shirisha <shirisha.sn@broadcom.com>",
"msg_from_op": true,
"msg_subject": "Redux: Throttle WAL inserts before commit"
},
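A minimal sketch of the mechanism described above, reusing the names given in the message (wal_bytes_written, XlogThrottlePending, wait_for_replication_threshold, SyncRepWaitForLSN); the helper names, the GUC's unit and default, and the use of GetXLogWriteRecPtr() in place of the patch's cached flush point are assumptions for illustration, not the attached patch itself.

#include "postgres.h"

#include "access/xlog.h"            /* GetXLogWriteRecPtr() */
#include "miscadmin.h"              /* InterruptPending */
#include "replication/syncrep.h"    /* SyncRepWaitForLSN() */

/* GUC from the proposal; unit (kB) and default here are assumptions. */
static int  wait_for_replication_threshold = 1024;

/* WAL produced since transaction start or the last throttle wait. */
static uint64 wal_bytes_written = 0;
static volatile sig_atomic_t XlogThrottlePending = false;

/*
 * Called from WAL-insert bookkeeping.  It only sets flags, so it is safe to
 * reach even while inside a critical section; the wait itself happens later.
 */
static inline void
XLogThrottleCheck(uint64 nbytes)
{
    wal_bytes_written += nbytes;
    if (wait_for_replication_threshold > 0 &&
        wal_bytes_written >= (uint64) wait_for_replication_threshold * 1024)
    {
        XlogThrottlePending = true;
        InterruptPending = true;
    }
}

/*
 * Called from ProcessInterrupts(), i.e. outside any critical section,
 * mirroring how LogMemoryContextPending is handled.
 */
static void
XLogThrottleIfPending(void)
{
    if (!XlogThrottlePending)
        return;
    XlogThrottlePending = false;
    wal_bytes_written = 0;
    SyncRepWaitForLSN(GetXLogWriteRecPtr(), false);
}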
{
"msg_contents": "On Tue, Aug 27, 2024 at 12:51 PM Shirisha Shirisha\n<shirisha.sn@broadcom.com> wrote:\n>\n> Hello hackers,\n>\n> This is an attempt to resurrect the thread [1] to throttle WAL inserts\n> before the point of commit.\n>\n> Background:\n>\n> Transactions on commit, wait for replication and make sure WAL is\n> flushed up to commit lsn on standby, when synchronous_commit is on.\n\nHi Shirisha,\n\nJust to let you know, there was a more recent attempt at that in [1]\nin Jan 2023 , also with a resurrection attempt there in Nov 2023 by\nTomas. Those patches there seemed to have received plenty of attention\nback then and were also based on SyncRepWaitForLSN(), but somehow\nmaybe we ran out of steam and there was not that big interest back\nthen.\n\nMaybe you could post a review there (for Tomas's more modern recent\npatch), if it is helping your use case even today. That way it could\nget some traction again?\n\n-Jakub Wartak.\n\n\n",
"msg_date": "Thu, 29 Aug 2024 09:28:16 +0200",
"msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Redux: Throttle WAL inserts before commit"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhen error \"trailing junk after numeric literal\" occurs at a number\nfollowed by a symbol that is presented by more than one byte, that symbol\nin the error message is not displayed correctly. Instead of that symbol\nthere is only its first byte. That makes the error message an invalid\nUTF-8 (or whatever encoding is set). The whole log file where this error\nmessage goes also becomes invalid. That could lead to problems with\nreading logs. You can see an invalid message by trying \"SELECT 123ä;\".\n\nRejecting trailing junk after numeric literals was introduced in commit\n2549f066 to prevent scanning a number immediately followed by an\nidentifier without whitespace as number and identifier. All the tokens\nthat made to catch such cases match a numeric literal and the next byte,\nand that is where the problem comes from. I thought that it could be fixed\njust by using tokens that match a numeric literal immediately followed by\nan identifier, not only one byte. This also improves error messages in\ncases with English letters. After these changes, for \"SELECT 123abc;\" the\nerror message will say that the error appeared at or near \"123abc\" instead\nof \"123a\".\n\nI've attached the patch. Are there any pitfalls I can't see? It just keeps\nbothering me why wasn't it done from the beginning. Matching the whole\nidentifier after a numeric literal just seems more obvious to me than\nmatching its first byte.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Tue, 27 Aug 2024 18:05:53 +0300",
"msg_from": "Karina Litskevich <litskevichkarina@gmail.com>",
"msg_from_op": true,
"msg_subject": "Invalid \"trailing junk\" error message when non-English letters are\n used"
},
{
"msg_contents": "Hi, Karina!\n\nOn Tue, 27 Aug 2024 at 19:06, Karina Litskevich <litskevichkarina@gmail.com>\nwrote:\n\n> Hi hackers,\n>\n> When error \"trailing junk after numeric literal\" occurs at a number\n> followed by a symbol that is presented by more than one byte, that symbol\n> in the error message is not displayed correctly. Instead of that symbol\n> there is only its first byte. That makes the error message an invalid\n> UTF-8 (or whatever encoding is set). The whole log file where this error\n> message goes also becomes invalid. That could lead to problems with\n> reading logs. You can see an invalid message by trying \"SELECT 123ä;\".\n>\n> Rejecting trailing junk after numeric literals was introduced in commit\n> 2549f066 to prevent scanning a number immediately followed by an\n> identifier without whitespace as number and identifier. All the tokens\n> that made to catch such cases match a numeric literal and the next byte,\n> and that is where the problem comes from. I thought that it could be fixed\n> just by using tokens that match a numeric literal immediately followed by\n> an identifier, not only one byte. This also improves error messages in\n> cases with English letters. After these changes, for \"SELECT 123abc;\" the\n> error message will say that the error appeared at or near \"123abc\" instead\n> of \"123a\".\n>\n> I've attached the patch. Are there any pitfalls I can't see? It just keeps\n> bothering me why wasn't it done from the beginning. Matching the whole\n> identifier after a numeric literal just seems more obvious to me than\n> matching its first byte.\n>\n\nI see the following compile time warnings:\nscan.l:1062: warning, rule cannot be matched\nscan.l:1066: warning, rule cannot be matched\nscan.l:1070: warning, rule cannot be matched\npgc.l:1030: warning, rule cannot be matched\npgc.l:1033: warning, rule cannot be matched\npgc.l:1036: warning, rule cannot be matched\npsqlscan.l:905: warning, rule cannot be matched\npsqlscan.l:908: warning, rule cannot be matched\npsqlscan.l:911: warning, rule cannot be matched\n\nFWIW output of the whole string in the error message doesnt' look nice to\nme, but other places of code do this anyway e.g:\nselect ('1'||repeat('p',1000000))::integer;\nThis may be worth fixing.\n\nRegards,\nPavel Borisov\nSupabase\n\nHi, Karina!On Tue, 27 Aug 2024 at 19:06, Karina Litskevich <litskevichkarina@gmail.com> wrote:Hi hackers,When error \"trailing junk after numeric literal\" occurs at a numberfollowed by a symbol that is presented by more than one byte, that symbolin the error message is not displayed correctly. Instead of that symbolthere is only its first byte. That makes the error message an invalidUTF-8 (or whatever encoding is set). The whole log file where this errormessage goes also becomes invalid. That could lead to problems withreading logs. You can see an invalid message by trying \"SELECT 123ä;\".Rejecting trailing junk after numeric literals was introduced in commit2549f066 to prevent scanning a number immediately followed by anidentifier without whitespace as number and identifier. All the tokensthat made to catch such cases match a numeric literal and the next byte,and that is where the problem comes from. I thought that it could be fixedjust by using tokens that match a numeric literal immediately followed byan identifier, not only one byte. This also improves error messages incases with English letters. 
After these changes, for \"SELECT 123abc;\" theerror message will say that the error appeared at or near \"123abc\" insteadof \"123a\".I've attached the patch. Are there any pitfalls I can't see? It just keepsbothering me why wasn't it done from the beginning. Matching the wholeidentifier after a numeric literal just seems more obvious to me thanmatching its first byte. I see the following compile time warnings:scan.l:1062: warning, rule cannot be matchedscan.l:1066: warning, rule cannot be matchedscan.l:1070: warning, rule cannot be matchedpgc.l:1030: warning, rule cannot be matchedpgc.l:1033: warning, rule cannot be matchedpgc.l:1036: warning, rule cannot be matchedpsqlscan.l:905: warning, rule cannot be matchedpsqlscan.l:908: warning, rule cannot be matchedpsqlscan.l:911: warning, rule cannot be matchedFWIW output of the whole string in the error message doesnt' look nice to me, but other places of code do this anyway e.g:select ('1'||repeat('p',1000000))::integer;This may be worth fixing.Regards,Pavel BorisovSupabase",
"msg_date": "Wed, 28 Aug 2024 01:06:24 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 12:06 AM Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> I see the following compile time warnings:\n> scan.l:1062: warning, rule cannot be matched\n> scan.l:1066: warning, rule cannot be matched\n> scan.l:1070: warning, rule cannot be matched\n> pgc.l:1030: warning, rule cannot be matched\n> pgc.l:1033: warning, rule cannot be matched\n> pgc.l:1036: warning, rule cannot be matched\n> psqlscan.l:905: warning, rule cannot be matched\n> psqlscan.l:908: warning, rule cannot be matched\n> psqlscan.l:911: warning, rule cannot be matched\n>\n\nThanks for the feedback!\n\nI somehow missed these warnings, my bad. The problem is with queries like\n\"select 0x12junk;\". In master \"0x\" matches decinteger_junk token and\n\"0x12j\" matches hexinteger_junk token and flex chooses the longest match,\nno conflict. But with the patch \"0x12junk\" matches both decinteger_junk\n(decinteger \"0\" + identifier \"x12junk\") and hexinteger_junk (hexinteger\n\"0x12\" + identifier \"junk\"). Since any match to hexinteger_junk also\nmatches decinteger_junk, and the rule for hexinteger_junk is below the\nrule for decinteger_junk, it's never reached.\n\nI see the two solutions here: either move the rule for decinteger_junk\nbelow the rules for hexinteger_junk, octinteger_junk and bininteger_junk,\nor just use a single rule decinteger_junk for all these cases, since the\nerror message is the same anyway. I implemented the latter in the second\nversion of the patch, also renamed this common rule to integer_junk.\n\n\nAdditionally, I noticed that this patch is going to change error messages\nin some cases, though I don't think it's a big deal. Example:\n\nWithout patch:\npostgres=# select 0xyz;\nERROR: invalid hexadecimal integer at or near \"0x\"\n\nWith patch:\npostgres=# select 0xyz;\nERROR: trailing junk after numeric literal at or near \"0xyz\"\n\n\n\n> FWIW output of the whole string in the error message doesnt' look nice to\n> me, but other places of code do this anyway e.g:\n> select ('1'||repeat('p',1000000))::integer;\n> This may be worth fixing.\n>\n\n That's interesting. I didn't know we could do that to create a long error\nmessage. At first I thought that it's not a problem for error messages\nfrom the scanner, since its \"at or near\" string cannot be longer than the\nquery typed in psql or written in a script file so it won't be enormously\nbig. But that's just not true, because we can send a generated query.\nSomething like that:\n\nWith patch:\npostgres=# select 'select '||'1'||repeat('p',1000000) \\gexec\nERROR: trailing junk after numeric literal at or near \"1ppp<lots of p>\"\n\nAnd another query that leads to this without patch:\npostgres=# select 'select 1'||repeat('@',1000000)||'1' \\gexec\nERROR: operator too long at or near \"@@@<lots of @>\"\n\nIt would be nice to prevent such long strings in error messages. Maybe a\nGUC variable to set the maximum length for such strings could work. But\nhow do we determine all places where it is needed?\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Wed, 28 Aug 2024 13:00:33 +0300",
"msg_from": "Karina Litskevich <litskevichkarina@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
},
{
"msg_contents": "Karina Litskevich <litskevichkarina@gmail.com> writes:\n> I see the two solutions here: either move the rule for decinteger_junk\n> below the rules for hexinteger_junk, octinteger_junk and bininteger_junk,\n> or just use a single rule decinteger_junk for all these cases, since the\n> error message is the same anyway. I implemented the latter in the second\n> version of the patch, also renamed this common rule to integer_junk.\n\nThat seems reasonable, but IMO this code was unacceptably\nundercommented before and what you've done has made it worse.\nWe really need a comment block associated with the flex macros,\nperhaps along the lines of\n\n/*\n * An identifier immediately following a numeric literal is disallowed\n * because in some cases it's ambiguous what is meant: for example,\n * 0x1234 could be either a hexinteger or a decinteger \"0\" and an\n * identifier \"x1234\". We can detect such problems by seeing if\n * integer_junk matches a longer substring than any of the XXXinteger\n * patterns. (One \"junk\" pattern is sufficient because this will match\n * all the same strings we'd match with {hexinteger}{identifier} etc.)\n * Note that the rule for integer_junk must appear after the ones for\n * XXXinteger to make this work correctly.\n */\n\n(Hmm, actually, is that last sentence true? My flex is a bit rusty.)\n\nparam_junk really needs a similar comment, or maybe we could put\nall the XXX_junk macros together and use one comment for all.\n\n> Additionally, I noticed that this patch is going to change error messages\n> in some cases, though I don't think it's a big deal. Example:\n> Without patch:\n> postgres=# select 0xyz;\n> ERROR: invalid hexadecimal integer at or near \"0x\"\n> With patch:\n> postgres=# select 0xyz;\n> ERROR: trailing junk after numeric literal at or near \"0xyz\"\n\nThat's sort of annoying, but I don't really see a better way,\nor at least not one that's worth the effort.\n\n>> FWIW output of the whole string in the error message doesnt' look nice to\n>> me, but other places of code do this anyway e.g:\n>> select ('1'||repeat('p',1000000))::integer;\n>> This may be worth fixing.\n\nI think this is nonsense: we are already in the habit of repeating the\nwhole failing query string in the STATEMENT field. In any case it's\nnot something for this patch to worry about.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Sep 2024 18:52:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Karina Litskevich <litskevichkarina@gmail.com> writes:\n> > I see the two solutions here: either move the rule for decinteger_junk\n> > below the rules for hexinteger_junk, octinteger_junk and\nbininteger_junk,\n> > or just use a single rule decinteger_junk for all these cases, since the\n> > error message is the same anyway. I implemented the latter in the second\n> > version of the patch, also renamed this common rule to integer_junk.\n>\n> That seems reasonable, but IMO this code was unacceptably\n> undercommented before and what you've done has made it worse.\n> We really need a comment block associated with the flex macros,\n> perhaps along the lines of\n>\n> /*\n> * An identifier immediately following a numeric literal is disallowed\n> * because in some cases it's ambiguous what is meant: for example,\n> * 0x1234 could be either a hexinteger or a decinteger \"0\" and an\n> * identifier \"x1234\". We can detect such problems by seeing if\n> * integer_junk matches a longer substring than any of the XXXinteger\n> * patterns. (One \"junk\" pattern is sufficient because this will match\n> * all the same strings we'd match with {hexinteger}{identifier} etc.)\n> * Note that the rule for integer_junk must appear after the ones for\n> * XXXinteger to make this work correctly.\n> */\n\nThank you, this piece of code definitely needs a comment.\n\n> (Hmm, actually, is that last sentence true? My flex is a bit rusty.)\n\nYes, the rule for integer_junk must appear after the ones for XXXinteger.\nHere is a quote from\nhttps://ftp.gnu.org/old-gnu/Manuals/flex-2.5.4/html_mono/flex.html\n\"If it finds more than one match, it takes the one matching the most text\n(...). If it finds two or more matches of the same length, the rule listed\nfirst in the flex input file is chosen.\"\nFor example, 0x123 is matched by both integer_junk and hexinteger, and we\nwant the rule for hexinteger to be chosen, so we should place it before\nthe rule for integer_junk.\n\n> param_junk really needs a similar comment, or maybe we could put\n> all the XXX_junk macros together and use one comment for all.\n\nIn v3 of the patch I grouped all the *_junk rules together and included\nthe suggested comment with a little added something.\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\nOn Thu, Sep 5, 2024 at 1:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:> Karina Litskevich <litskevichkarina@gmail.com> writes:> > I see the two solutions here: either move the rule for decinteger_junk> > below the rules for hexinteger_junk, octinteger_junk and bininteger_junk,> > or just use a single rule decinteger_junk for all these cases, since the> > error message is the same anyway. I implemented the latter in the second> > version of the patch, also renamed this common rule to integer_junk.>> That seems reasonable, but IMO this code was unacceptably> undercommented before and what you've done has made it worse.> We really need a comment block associated with the flex macros,> perhaps along the lines of>> /*> * An identifier immediately following a numeric literal is disallowed> * because in some cases it's ambiguous what is meant: for example,> * 0x1234 could be either a hexinteger or a decinteger \"0\" and an> * identifier \"x1234\". We can detect such problems by seeing if> * integer_junk matches a longer substring than any of the XXXinteger> * patterns. 
(One \"junk\" pattern is sufficient because this will match> * all the same strings we'd match with {hexinteger}{identifier} etc.)> * Note that the rule for integer_junk must appear after the ones for> * XXXinteger to make this work correctly.> */Thank you, this piece of code definitely needs a comment. > (Hmm, actually, is that last sentence true? My flex is a bit rusty.)Yes, the rule for integer_junk must appear after the ones for XXXinteger.Here is a quote from https://ftp.gnu.org/old-gnu/Manuals/flex-2.5.4/html_mono/flex.html\"If it finds more than one match, it takes the one matching the most text(...). If it finds two or more matches of the same length, the rule listedfirst in the flex input file is chosen.\"For example, 0x123 is matched by both integer_junk and hexinteger, and wewant the rule for hexinteger to be chosen, so we should place it beforethe rule for integer_junk. > param_junk really needs a similar comment, or maybe we could put> all the XXX_junk macros together and use one comment for all.In v3 of the patch I grouped all the *_junk rules together and includedthe suggested comment with a little added something.Best regards,Karina LitskevichPostgres Professional: http://postgrespro.com/",
"msg_date": "Thu, 5 Sep 2024 18:07:57 +0300",
"msg_from": "Karina Litskevich <litskevichkarina@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 6:07 PM Karina Litskevich\n<litskevichkarina@gmail.com> wrote:\n> In v3 of the patch I grouped all the *_junk rules together and included\n> the suggested comment with a little added something.\n\nOops, I forgot to attach the patch, here it is.\n\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/",
"msg_date": "Thu, 5 Sep 2024 18:11:20 +0300",
"msg_from": "Karina Litskevich <litskevichkarina@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
},
{
"msg_contents": "Karina Litskevich <litskevichkarina@gmail.com> writes:\n> On Thu, Sep 5, 2024 at 6:07 PM Karina Litskevich\n> <litskevichkarina@gmail.com> wrote:\n>> In v3 of the patch I grouped all the *_junk rules together and included\n>> the suggested comment with a little added something.\n\n> Oops, I forgot to attach the patch, here it is.\n\nPushed with a bit of further wordsmithing on the comment.\n\nI left out the proposed new test case \"SELECT 1ä;\". The trouble\nwith that is it'd introduce an encoding dependency into the test.\nFor example, it'd likely fail with some other error message in\na server encoding that lacks an equivalent to UTF8 \"ä\". While\nwe have methods for coping with such cases, it requires some\npushups, and I didn't see the value. The changes in existing\ntest case results are sufficient to show the patch does what\nwe want.\n\nAlso, while the bug exists in v15, the patch didn't apply at all.\nI got lazy and just did the minimal s/ident_start/identifier/ change\nin that branch, instead of back-patching the cosmetic aspects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2024 12:49:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
},
{
"msg_contents": "On Thu, 5 Sept 2024 at 20:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Karina Litskevich <litskevichkarina@gmail.com> writes:\n> > On Thu, Sep 5, 2024 at 6:07 PM Karina Litskevich\n> > <litskevichkarina@gmail.com> wrote:\n> >> In v3 of the patch I grouped all the *_junk rules together and included\n> >> the suggested comment with a little added something.\n>\n> > Oops, I forgot to attach the patch, here it is.\n>\n> Pushed with a bit of further wordsmithing on the comment.\n>\n> I left out the proposed new test case \"SELECT 1ä;\". The trouble\n> with that is it'd introduce an encoding dependency into the test.\n> For example, it'd likely fail with some other error message in\n> a server encoding that lacks an equivalent to UTF8 \"ä\". While\n> we have methods for coping with such cases, it requires some\n> pushups, and I didn't see the value. The changes in existing\n> test case results are sufficient to show the patch does what\n> we want.\n>\n> Also, while the bug exists in v15, the patch didn't apply at all.\n> I got lazy and just did the minimal s/ident_start/identifier/ change\n> in that branch, instead of back-patching the cosmetic aspects.\n>\n\nGood! Thank you!\nPavel\n\nOn Thu, 5 Sept 2024 at 20:49, Tom Lane <tgl@sss.pgh.pa.us> wrote:Karina Litskevich <litskevichkarina@gmail.com> writes:\n> On Thu, Sep 5, 2024 at 6:07 PM Karina Litskevich\n> <litskevichkarina@gmail.com> wrote:\n>> In v3 of the patch I grouped all the *_junk rules together and included\n>> the suggested comment with a little added something.\n\n> Oops, I forgot to attach the patch, here it is.\n\nPushed with a bit of further wordsmithing on the comment.\n\nI left out the proposed new test case \"SELECT 1ä;\". The trouble\nwith that is it'd introduce an encoding dependency into the test.\nFor example, it'd likely fail with some other error message in\na server encoding that lacks an equivalent to UTF8 \"ä\". While\nwe have methods for coping with such cases, it requires some\npushups, and I didn't see the value. The changes in existing\ntest case results are sufficient to show the patch does what\nwe want.\n\nAlso, while the bug exists in v15, the patch didn't apply at all.\nI got lazy and just did the minimal s/ident_start/identifier/ change\nin that branch, instead of back-patching the cosmetic aspects.Good! Thank you!Pavel",
"msg_date": "Thu, 5 Sep 2024 21:56:10 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid \"trailing junk\" error message when non-English letters\n are used"
}
] |
[
{
"msg_contents": "(creating new thread from [0])\n\nOn Wed, Apr 10, 2024 at 09:52:59PM -0400, Tom Lane wrote:\n> On fourth thought ... the number of tries to acquire the lock, or\n> in this case number of tries to observe the lock free, is not\n> NUM_DELAYS but NUM_DELAYS * spins_per_delay. Decreasing\n> spins_per_delay should therefore increase the risk of unexpected\n> \"stuck spinlock\" failures. And finish_spin_delay will decrement\n> spins_per_delay in any cycle where we slept at least once.\n> It's plausible therefore that this coding with finish_spin_delay\n> inside the main wait loop puts more downward pressure on\n> spins_per_delay than the algorithm is intended to cause.\n> \n> I kind of wonder whether the premises finish_spin_delay is written\n> on even apply anymore, given that nobody except some buildfarm\n> dinosaurs runs Postgres on single-processor hardware anymore.\n> Maybe we should rip out the whole mechanism and hard-wire\n> spins_per_delay at 1000 or so.\n\nI've been looking at spinlock contention on newer hardware, and while I do\nnot yet have any proposal to share for that, I saw this adaptive\nspins_per_delay code and wondered about this possibility of \"downward\npressure on spins_per_delay\" for contended locks. ISTM it could make\nmatters worse in some cases.\n\nAnyway, I'm inclined to agree that the premise of the adaptive\nspins_per_delay code probably doesn't apply anymore, so here's a patch to\nremove it.\n\n[0] https://postgr.es/m/65063.1712800379%40sss.pgh.pa.us\n\n-- \nnathan",
"msg_date": "Tue, 27 Aug 2024 11:16:15 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "remove adaptive spins_per_delay code"
},
{
"msg_contents": "Hi,\n\nOn 2024-08-27 11:16:15 -0500, Nathan Bossart wrote:\n> (creating new thread from [0])\n> \n> On Wed, Apr 10, 2024 at 09:52:59PM -0400, Tom Lane wrote:\n> > On fourth thought ... the number of tries to acquire the lock, or\n> > in this case number of tries to observe the lock free, is not\n> > NUM_DELAYS but NUM_DELAYS * spins_per_delay. Decreasing\n> > spins_per_delay should therefore increase the risk of unexpected\n> > \"stuck spinlock\" failures. And finish_spin_delay will decrement\n> > spins_per_delay in any cycle where we slept at least once.\n> > It's plausible therefore that this coding with finish_spin_delay\n> > inside the main wait loop puts more downward pressure on\n> > spins_per_delay than the algorithm is intended to cause.\n> > \n> > I kind of wonder whether the premises finish_spin_delay is written\n> > on even apply anymore, given that nobody except some buildfarm\n> > dinosaurs runs Postgres on single-processor hardware anymore.\n> > Maybe we should rip out the whole mechanism and hard-wire\n> > spins_per_delay at 1000 or so.\n> \n> I've been looking at spinlock contention on newer hardware, and while I do\n> not yet have any proposal to share for that, I saw this adaptive\n> spins_per_delay code and wondered about this possibility of \"downward\n> pressure on spins_per_delay\" for contended locks. ISTM it could make\n> matters worse in some cases.\n> \n> Anyway, I'm inclined to agree that the premise of the adaptive\n> spins_per_delay code probably doesn't apply anymore, so here's a patch to\n> remove it.\n\nFWIW, I've seen cases on multi-socket machines where performance was vastly\nworse under contention with some values of spins_per_delay. With good numbers\nbeing quite different on smaller machines. Most new-ish server CPUs these days\nbasically behave like a multi-socket machine internally, due to being\ninternally partitioned into multiple chiplets. And it's pretty clear that that\ntrend isn't going to go away. So finding a good value probably isn't easy.\n\n\n\nWe don't have a whole lot of contended spin.h spinlocks left, except that we\nhave one very critical one, XLogCtl->Insert.insertpos_lck. And of course we\nuse the same spinning logic for buffer header locks - which can be heavily\ncontended.\n\nI suspect that eventually we ought to replace all our userspace\nspinlock-like-things with a framework for writing properly \"waiting\" locks\nwith some spinning. We can't just use lwlocks because support for\nreader-writer locks makes them much more heavyweight (mainly because it\nimplies having to use an atomic operation for lock release, which shows up\nsubstantially).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Aug 2024 14:27:00 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: remove adaptive spins_per_delay code"
},
{
"msg_contents": "On Tue, Aug 27, 2024 at 02:27:00PM -0400, Andres Freund wrote:\n> FWIW, I've seen cases on multi-socket machines where performance was vastly\n> worse under contention with some values of spins_per_delay. With good numbers\n> being quite different on smaller machines. Most new-ish server CPUs these days\n> basically behave like a multi-socket machine internally, due to being\n> internally partitioned into multiple chiplets. And it's pretty clear that that\n> trend isn't going to go away. So finding a good value probably isn't easy.\n\nYeah.\n\n> We don't have a whole lot of contended spin.h spinlocks left, except that we\n> have one very critical one, XLogCtl->Insert.insertpos_lck. And of course we\n> use the same spinning logic for buffer header locks - which can be heavily\n> contended.\n\nAnother one I've been looking into is pgssEntry->mutex, which shows up\nprominently when pg_stat_statements.track_planning is on. There was some\nprevious discussion about this [0], which resulted in that parameter\ngetting turned off by default (commit d1763ea). I tried converting those\nlocks to LWLocks, but that actually hurt performance. I also tried\nchanging the counters to atomics, which AFAICT is mostly doable except for\n\"usage\". That one would require some more thought to be able to convert it\naway from a double.\n\n> I suspect that eventually we ought to replace all our userspace\n> spinlock-like-things with a framework for writing properly \"waiting\" locks\n> with some spinning. We can't just use lwlocks because support for\n> reader-writer locks makes them much more heavyweight (mainly because it\n> implies having to use an atomic operation for lock release, which shows up\n> substantially).\n\nAnother approach I'm investigating is adding exponential backoff via extra\nspins in perform_spin_delay(). I'm doubtful this will be a popular\nsuggestion, as appropriate settings seem to be hardware/workload dependent\nand therefore will require a GUC or two, but it does seem to help\nsubstantially on machines with many cores. In any case, I think we ought\nto do _something_ in this area for v18.\n\n[0] https://postgr.es/m/2895b53b033c47ccb22972b589050dd9%40EX13D05UWC001.ant.amazon.com\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 27 Aug 2024 13:55:35 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: remove adaptive spins_per_delay code"
}
] |
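The "exponential backoff via extra spins" idea mentioned at the end of the thread could look roughly like the standalone sketch below. It is not perform_spin_delay() and not the patch under discussion; the lock representation, constants, and function names are all placeholders chosen for illustration.

```c
/*
 * Standalone illustration of exponential backoff between spin attempts.
 * Not s_lock.c code; names and constants are placeholders.
 */
#include <stdint.h>

#define MIN_BACKOFF_SPINS	8
#define MAX_BACKOFF_SPINS	4096

static inline void
cpu_relax(void)
{
#if defined(__x86_64__) || defined(__i386__)
	__asm__ __volatile__("rep; nop");	/* PAUSE instruction */
#else
	__asm__ __volatile__("" ::: "memory");	/* plain compiler barrier */
#endif
}

static void
spin_lock_with_backoff(volatile int *lock)
{
	uint32_t	backoff = MIN_BACKOFF_SPINS;

	for (;;)
	{
		/* test-and-test-and-set: read before attempting the atomic swap */
		if (*lock == 0 && __sync_lock_test_and_set(lock, 1) == 0)
			return;

		/* back off for an exponentially growing number of pause cycles */
		for (uint32_t i = 0; i < backoff; i++)
			cpu_relax();
		if (backoff < MAX_BACKOFF_SPINS)
			backoff <<= 1;
	}
}

static void
spin_unlock_with_backoff(volatile int *lock)
{
	__sync_lock_release(lock);
}
```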
[
{
"msg_contents": "Hi hackers!\n\nRecently we faced issues with WAL recovery in our production. Issue\nreproduces not frequently, once in 1.5-2 month.\n\nSo, the problem looks like this in our production cluster logs:\n\nPostgresql startup process fails with signal 6.\n\"startup process (PID 369306) was terminated by signal 6:\nAborted\",,,,,,,,,\"\",\"postmaster\",,0\n\"PANIC: invalid magic number 0000 in log segment\n000000010000000000000000, offset 0\"\n\n=== Conditions to repro:\n\nThere are conditions in which this problem occurs:\n\n1) Postgresql process exited via OOM or something (crash recovery needed)\n2) RedoStartLSN < CheckPointLoc AND RedoStartLSN is in different WAL\nfile than CheckPointLoc\n3) Lets say RedoStartLSN is in file 000000010000000000000010. This\nfile IS not in the pg_wal directory (already archived).\n\np.3 is the tricky part, because I don't know if this WAL archive and\nremoval is OK or not. Maybe too early removal of WAL with RedoStartLSN\nis the problem itself.\n\n=== My repro:\n\nThis is how these conditions can be reproduced:\n\n1) initdb, create table tt(i int, j int);\n2) In two parallel sessions:\n s1: insert into tt select * from generate_series(1,1000) a,\ngenerate_series(1,1000)b;\n s2: (wait a little and) CHECKPOINT;\n3) killall -9 postgres\n4) check if RedoStartLSN and CheckPointLoc are in different WAL files.\nIf so, take the RedoStartLSN WAL file and move it somewhere out of\npg_wal.\n\nAfter i done this, this is pg_controldata output:\n\n```\nreshke@ygp-jammy:~/postgres$ /usr/local/pgsql/bin/pg_controldata ./db/\npg_control version number: 1300\nCatalog version number: 202209061\nDatabase system identifier: 7408076969759520641\nDatabase cluster state: in crash recovery\npg_control last modified: Wed Aug 28 07:53:27 2024\nLatest checkpoint location: 0/11400268\nLatest checkpoint's REDO location: 0/10E49720\nLatest checkpoint's REDO WAL file: 000000010000000000000010\nLatest checkpoint's TimeLineID: 1\n```\nTry to start PG:\n\n\n```\nreshke@ygp-jammy:~/postgres$ /usr/local/pgsql/bin/postgres --single -D ./db/\n2024-08-28 08:01:44.209 UTC [586161] DEBUG: invoking\nIpcMemoryCreate(size=149471232)\n2024-08-28 08:01:44.209 UTC [586161] DEBUG: mmap(150994944) with\nMAP_HUGETLB failed, huge pages disabled: Cannot allocate memory\n2024-08-28 08:01:44.209 UTC [586161] DEBUG: cleaning up dynamic\nshared memory control segment with ID 3875438492\n2024-08-28 08:01:44.217 UTC [586161] DEBUG: dynamic shared memory\nsystem will support 674 segments\n2024-08-28 08:01:44.217 UTC [586161] DEBUG: created dynamic shared\nmemory control segment 1123977252 (26976 bytes)\n2024-08-28 08:01:44.217 UTC [586161] DEBUG: InitPostgres\n2024-08-28 08:01:44.217 UTC [586161] DEBUG: my backend ID is 1\n2024-08-28 08:01:44.217 UTC [586161] LOG: database system was\ninterrupted while in recovery at 2024-08-28 07:53:27 UTC\n2024-08-28 08:01:44.217 UTC [586161] HINT: This probably means that\nsome data is corrupted and you will have to use the last backup for\nrecovery.\n2024-08-28 08:01:44.217 UTC [586161] DEBUG: removing all temporary WAL segments\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: checkpoint record is at 0/11400268\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: redo record is at\n0/10E49720; shutdown false\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: next transaction ID: 744;\nnext OID: 16392\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: next MultiXactId: 1; next\nMultiXactOffset: 0\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: oldest unfrozen\ntransaction ID: 716, in database 
1\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: oldest MultiXactId: 1, in\ndatabase 1\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: commit timestamp Xid\noldest/newest: 0/0\n2024-08-28 08:01:44.226 UTC [586161] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: transaction ID wrap limit\nis 2147484363, limited by database with OID 1\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: MultiXactId wrap limit is\n2147483648, limited by database with OID 1\n2024-08-28 08:01:44.226 UTC [586161] DEBUG: starting up replication slots\n2024-08-28 08:01:44.227 UTC [586161] DEBUG: xmin required by slots:\ndata 0, catalog 0\n2024-08-28 08:01:44.227 UTC [586161] DEBUG: starting up replication\norigin progress state\n2024-08-28 08:01:44.227 UTC [586161] DEBUG: didn't need to unlink\npermanent stats file \"pg_stat/pgstat.stat\" - didn't exist\n2024-08-28 08:01:44.231 UTC [586161] DEBUG: resetting unlogged\nrelations: cleanup 1 init 0\n2024-08-28 08:01:44.232 UTC [586161] DEBUG: could not open file\n\"pg_wal/000000010000000000000010\": No such file or directory\n2024-08-28 08:01:44.232 UTC [586161] LOG: redo is not required\n2024-08-28 08:01:44.232 UTC [586161] PANIC: invalid magic number 0000\nin log segment 000000010000000000000000, offset 0\nAborted (core dumped)\n```\n\n\nThis is obviously very bad, especially the `redo is not required`\npart, which is in fact simply skipping recovery, when recovery is 100%\nneeded.\n\n\nRun with asserts compiled:\n```\nreshke@ygp-jammy:~/postgres$ /usr/local/pgsql/bin/postgres --single -D ./db/\n2024-08-28 07:33:30.119 UTC [572905] DEBUG: invoking\nIpcMemoryCreate(size=149471232)\n2024-08-28 07:33:30.119 UTC [572905] DEBUG: mmap(150994944) with\nMAP_HUGETLB failed, huge pages disabled: Cannot allocate memory\n2024-08-28 07:33:30.119 UTC [572905] DEBUG: cleaning up dynamic\nshared memory control segment with ID 1601540488\n2024-08-28 07:33:30.151 UTC [572905] DEBUG: dynamic shared memory\nsystem will support 674 segments\n2024-08-28 07:33:30.152 UTC [572905] DEBUG: created dynamic shared\nmemory control segment 773415688 (26976 bytes)\n2024-08-28 07:33:30.152 UTC [572905] DEBUG: InitPostgres\n2024-08-28 07:33:30.152 UTC [572905] DEBUG: my backend ID is 1\n2024-08-28 07:33:30.152 UTC [572905] LOG: database system was\ninterrupted while in recovery at 2024-08-28 07:31:48 UTC\n2024-08-28 07:33:30.152 UTC [572905] HINT: This probably means that\nsome data is corrupted and you will have to use the last backup for\nrecovery.\n2024-08-28 07:33:30.152 UTC [572905] DEBUG: removing all temporary WAL segments\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: checkpoint record is at 0/11400268\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: redo record is at\n0/10E49720; shutdown false\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: next transaction ID: 744;\nnext OID: 16392\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: next MultiXactId: 1; next\nMultiXactOffset: 0\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: oldest unfrozen\ntransaction ID: 716, in database 1\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: oldest MultiXactId: 1, in\ndatabase 1\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: commit timestamp Xid\noldest/newest: 0/0\n2024-08-28 07:33:30.170 UTC [572905] LOG: database system was not\nproperly shut down; automatic recovery in progress\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: transaction ID wrap limit\nis 2147484363, limited by database with OID 1\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: MultiXactId wrap limit 
is\n2147483648, limited by database with OID 1\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: starting up replication slots\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: xmin required by slots:\ndata 0, catalog 0\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: starting up replication\norigin progress state\n2024-08-28 07:33:30.170 UTC [572905] DEBUG: didn't need to unlink\npermanent stats file \"pg_stat/pgstat.stat\" - didn't exist\n2024-08-28 07:33:30.195 UTC [572905] DEBUG: resetting unlogged\nrelations: cleanup 1 init 0\n2024-08-28 07:33:30.195 UTC [572905] DEBUG: could not open file\n\"pg_wal/000000010000000000000010\": No such file or directory\n2024-08-28 07:33:30.195 UTC [572905] LOG: redo is not required\nTRAP: FailedAssertion(\"!XLogRecPtrIsInvalid(RecPtr)\", File:\n\"xlogreader.c\", Line: 235, PID: 572905)\n/usr/local/pgsql/bin/postgres(ExceptionalCondition+0x99)[0x562a1a4311f9]\n/usr/local/pgsql/bin/postgres(+0x229d41)[0x562a1a05dd41]\n/usr/local/pgsql/bin/postgres(FinishWalRecovery+0x79)[0x562a1a062519]\n/usr/local/pgsql/bin/postgres(StartupXLOG+0x324)[0x562a1a0578e4]\n/usr/local/pgsql/bin/postgres(InitPostgres+0x72a)[0x562a1a44344a]\n/usr/local/pgsql/bin/postgres(PostgresMain+0xb1)[0x562a1a300261]\n/usr/local/pgsql/bin/postgres(PostgresSingleUserMain+0xf1)[0x562a1a3024d1]\n/usr/local/pgsql/bin/postgres(main+0x4f1)[0x562a19f918c1]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f12dc110d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f12dc110e40]\n/usr/local/pgsql/bin/postgres(_start+0x25)[0x562a19f91925]\nAborted (core dumped)\n```\n\nProblem tracks down to commit\n70e81861fadd9112fa2d425c762e163910a4ee52\nWe only observe this problem for PostgreSQL version 15. (15.6)\n\n=== My understanding\n\nSo, xlogrecovery internals wrongly assumes that if no WAL record was\nsuccessfully fetched, then redo is not needed.\n\n=== Proposed fix\n\nIs as simply as attached. WFM, but this is probably not a correct way\nto fix this.\n\n\n\n-- \nBest regards,\nKirill Reshke",
"msg_date": "Wed, 28 Aug 2024 11:18:13 +0300",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "[BUG?] WAL file archive leads to crash during startup"
}
] |
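The attached fix itself is not shown above. As a purely illustrative sketch of the guard being argued for (hypothetical function and parameter names, simplified control flow): if the checkpoint's REDO pointer precedes the checkpoint record, some WAL must be replayed, so reading nothing at RedoStartLSN should be treated as missing WAL rather than reported as "redo is not required".

```c
/*
 * Illustrative sketch only; not the attached patch and not actual
 * xlogrecovery.c code.  The function and parameter names are hypothetical.
 */
#include "postgres.h"

#include "access/xlogdefs.h"

static void
check_redo_was_possible(XLogRecPtr RedoStartLSN, XLogRecPtr CheckPointLoc,
						bool any_record_read)
{
	/*
	 * A checkpoint whose REDO pointer precedes the checkpoint record was not
	 * a shutdown checkpoint, so at least the WAL between RedoStartLSN and
	 * CheckPointLoc has to be replayed.  If nothing could be read there, the
	 * segment holding RedoStartLSN is missing and recovery cannot succeed;
	 * concluding "redo is not required" would silently skip needed redo.
	 */
	if (!any_record_read && RedoStartLSN < CheckPointLoc)
		ereport(FATAL,
				(errmsg("could not read WAL at REDO location %X/%X required for crash recovery",
						LSN_FORMAT_ARGS(RedoStartLSN))));
}
```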
[
{
"msg_contents": "Hi,\n\nWhile testing a feature reported by Pavel in this thread[1] I realized\nthat elements containing whitespaces between them won't be indented with\nXMLSERIALIZE( ... INDENT)\n\nSELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo> +\n \n(1 row)\n\nSELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\nINDENT);\n xmlserialize \n----------------------------\n <foo> <bar>42</bar> </foo>+\n \n(1 row)\n\n\nOther products have a different approach[2]\n\nPerhaps simply setting xmltotext_with_options' parameter \"perserve_whitespace\" to false when XMLSERIALIZE(.. INDENT) would do the trick.\n\ndoc = xml_parse(data, xmloption_arg, !indent ? true : false,\n\t\t\tGetDatabaseEncoding(),\n\t\t\t&parsed_xmloptiontype, &content_nodes,\n\t\t\t(Node *) &escontext);\n\n\n(diff attached)\n\nSELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\nINDENT);\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo> +\n \n(1 row)\n\nIf this is indeed the way to go I can update the regression tests accordingly.\n\nBest,\n\n-- \nJim\n\n1 - https://www.postgresql.org/message-id/cbd68a31-9776-4742-9c09-4344a4c5e6dc%40uni-muenster.de\n2 - https://dbfiddle.uk/zdKnfsqX",
"msg_date": "Wed, 28 Aug 2024 10:19:48 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "[BUG?] XMLSERIALIZE( ... INDENT) won't work with blank nodes"
},
{
"msg_contents": "On 28.08.24 10:19, Jim Jones wrote:\n> Hi,\n>\n> While testing a feature reported by Pavel in this thread[1] I realized\n> that elements containing whitespaces between them won't be indented with\n> XMLSERIALIZE( ... INDENT)\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n>\n> xmlserialize \n> -----------------\n> <foo> +\n> <bar>42</bar>+\n> </foo> +\n> \n> (1 row)\n>\n> SELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\n> INDENT);\n> xmlserialize \n> ----------------------------\n> <foo> <bar>42</bar> </foo>+\n> \n> (1 row)\n>\n>\n> Other products have a different approach[2]\n>\n> Perhaps simply setting xmltotext_with_options' parameter \"perserve_whitespace\" to false when XMLSERIALIZE(.. INDENT) would do the trick.\n>\n> doc = xml_parse(data, xmloption_arg, !indent ? true : false,\n> \t\t\tGetDatabaseEncoding(),\n> \t\t\t&parsed_xmloptiontype, &content_nodes,\n> \t\t\t(Node *) &escontext);\n>\n>\n> (diff attached)\n>\n> SELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\n> INDENT);\n> xmlserialize \n> -----------------\n> <foo> +\n> <bar>42</bar>+\n> </foo> +\n> \n> (1 row)\n>\n> If this is indeed the way to go I can update the regression tests accordingly.\n>\n> Best,\n>\n\nJust created a CF entry for this: https://commitfest.postgresql.org/49/5217/\nv1 attached includes regression tests.\n\n-- \nJim",
"msg_date": "Fri, 30 Aug 2024 00:06:46 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [BUG?] XMLSERIALIZE( ... INDENT) won't work with blank nodes"
},
{
"msg_contents": "\n\nOn 28.08.24 10:19, Jim Jones wrote:\n> Hi,\n>\n> While testing a feature reported by Pavel in this thread[1] I realized\n> that elements containing whitespaces between them won't be indented with\n> XMLSERIALIZE( ... INDENT)\n>\n\nmmh... xmlDocContentDumpOutput seems to add a trailing newline in the\nend of a document by default, making the serialization of the same xml\nstring with DOCUMENT and CONTENT different:\n\n-- postgres v16\n\nSELECT xmlserialize(CONTENT '<foo><bar>42</bar></foo>' AS text INDENT);\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo>\n(1 row)\n\nSELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo> +\n \n(1 row)\n\n\nI do recall a discussion along these lines some time ago, but I just\ncan't find it now. Does anyone know if this is the expected behaviour?\nOr should we in this case consider something like this in\nxmltotext_with_options()?\n\nresult = cstring_to_text_with_len((const char *) xmlBufferContent(buf),\nxmlBufferLength(buf) - 1);\n\n-- \nJim\n\n\n\n",
"msg_date": "Fri, 6 Sep 2024 13:55:06 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [BUG?] XMLSERIALIZE( ... INDENT) won't work with blank nodes"
},
{
"msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> mmh... xmlDocContentDumpOutput seems to add a trailing newline in the\n> end of a document by default, making the serialization of the same xml\n> string with DOCUMENT and CONTENT different:\n\nDoes seem a bit inconsistent.\n\n> Or should we in this case consider something like this in\n> xmltotext_with_options()?\n> result = cstring_to_text_with_len((const char *) xmlBufferContent(buf),\n> xmlBufferLength(buf) - 1);\n\nI think it'd be quite foolish to assume that every extant and future\nversion of libxml2 will share this glitch. Probably should use\nlogic more like pg_strip_crlf(), although we can't use that directly.\n\nWould it ever be the case that trailing whitespace would be valid\ndata? In a bit of testing, it seems like that could be true in\nCONTENT mode but not DOCUMENT mode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2024 12:34:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG?] XMLSERIALIZE( ... INDENT) won't work with blank nodes"
},
{
"msg_contents": "Hi Tom\n\nOn 06.09.24 18:34, Tom Lane wrote:\n> I think it'd be quite foolish to assume that every extant and future\n> version of libxml2 will share this glitch. Probably should use\n> logic more like pg_strip_crlf(), although we can't use that directly.\nMakes sense. I Introduced this logic in the end of\nxmltotext_with_options() in case it was called with INDENT and DOCUMENT\ntype xml string.\n\nSELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo>\n(1 row)\n\nThe regression tests were updated accordingly - see patch v2-0002.\n> Would it ever be the case that trailing whitespace would be valid\n> data? In a bit of testing, it seems like that could be true in\n> CONTENT mode but not DOCUMENT mode.\nYes, in case of CONTENT it is valid data and it will be preserved, as\nCONTENT can be pretty much anything.\n\nSELECT xmlserialize(CONTENT E'<foo><bar>42</bar></foo>\\n\\n\\t\\t\\t' AS\ntext INDENT);\n xmlserialize \n--------------------------\n <foo> +\n <bar>42</bar> +\n </foo> +\n +\n \n(1 row)\n\n\nWith DOCUMENT it is superfluous and should be removed after indentation.\nIIRC there's an xmlSaveToBuffer option called XML_SAVE_WSNONSIG that can\nbe used to preserve it.\n\nThanks\n\nBest, Jim",
"msg_date": "Sat, 7 Sep 2024 00:46:06 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [BUG?] XMLSERIALIZE( ... INDENT) won't work with blank nodes"
},
{
"msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> [ xmlserialize patches ]\n\nPushed with minor editorialization. Notably, I got rid of scribbling\non xmlBufferContent's buffer --- I don't know how likely that is to\nupset libxml2, but it seems like a fairly bad idea given that they\ndeclare the result as \"const xmlChar*\". Casting away the const is\npoor form in any case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:24:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG?] XMLSERIALIZE( ... INDENT) won't work with blank nodes"
}
] |
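A rough sketch of the trailing-newline trimming discussed in this thread: strip any newlines or carriage returns that libxml2 may append after serializing a DOCUMENT, without assuming it always does so and without modifying libxml2's buffer. The helper name is invented and this is not the committed code.

```c
/*
 * Rough sketch of the trimming discussed above; the helper name is invented
 * and this is not the committed PostgreSQL code.
 */
#include "postgres.h"

#include <libxml/tree.h>

#include "utils/builtins.h"		/* cstring_to_text_with_len() */

static text *
xml_buffer_to_text_trimmed(xmlBufferPtr buf)
{
	const char *str = (const char *) xmlBufferContent(buf);
	int			len = xmlBufferLength(buf);

	/*
	 * libxml2 may or may not append a newline after serializing a DOCUMENT,
	 * so trim trailing newlines/carriage returns here rather than assuming a
	 * fixed "length - 1", and do it without scribbling on libxml2's buffer.
	 */
	while (len > 0 && (str[len - 1] == '\n' || str[len - 1] == '\r'))
		len--;

	return cstring_to_text_with_len(str, len);
}
```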
[
{
"msg_contents": "These are ok:\n\nselect json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without wrapper);\n json_query\n------------\n 42\n\nselect json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \nunconditional wrapper);\n json_query\n------------\n [42]\n\nBut this appears to be wrong:\n\nselect json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with conditional \nwrapper);\n json_query\n------------\n [42]\n\nThis should return an unwrapped 42.\n\n\n",
"msg_date": "Wed, 28 Aug 2024 11:21:41 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "json_query conditional wrapper bug"
},
{
"msg_contents": "On 28.08.24 11:21, Peter Eisentraut wrote:\n> These are ok:\n> \n> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without wrapper);\n> json_query\n> ------------\n> 42\n> \n> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n> unconditional wrapper);\n> json_query\n> ------------\n> [42]\n> \n> But this appears to be wrong:\n> \n> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with conditional \n> wrapper);\n> json_query\n> ------------\n> [42]\n> \n> This should return an unwrapped 42.\n\nIf I make the code change illustrated in the attached patch, then I get \nthe correct result here. And various regression test results change, \nwhich, to me, all look more correct after this patch. I don't know what \nthe code I removed was supposed to accomplish, but it seems to be wrong \nsomehow. In the current implementation, the WITH CONDITIONAL WRAPPER \nclause doesn't appear to work correctly in any case I could identify.",
"msg_date": "Wed, 4 Sep 2024 12:16:31 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "\nOn 2024-09-04 We 6:16 AM, Peter Eisentraut wrote:\n> On 28.08.24 11:21, Peter Eisentraut wrote:\n>> These are ok:\n>>\n>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without \n>> wrapper);\n>> json_query\n>> ------------\n>> 42\n>>\n>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n>> unconditional wrapper);\n>> json_query\n>> ------------\n>> [42]\n>>\n>> But this appears to be wrong:\n>>\n>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n>> conditional wrapper);\n>> json_query\n>> ------------\n>> [42]\n>>\n>> This should return an unwrapped 42.\n>\n> If I make the code change illustrated in the attached patch, then I \n> get the correct result here. And various regression test results \n> change, which, to me, all look more correct after this patch. I don't \n> know what the code I removed was supposed to accomplish, but it seems \n> to be wrong somehow. In the current implementation, the WITH \n> CONDITIONAL WRAPPER clause doesn't appear to work correctly in any \n> case I could identify.\n\n\nAgree the code definitely looks wrong. If anything the test should \nprobably be reversed:\n\n wrap = count > 1 || !(\n IsAJsonbScalar(singleton) ||\n (singleton->type == jbvBinary &&\nJsonContainerIsScalar(singleton->val.binary.data)));\n\ni.e. in the count = 1 case wrap unless it's a scalar or a binary \nwrapping a scalar. The code could do with a comment about the logic.\n\nI know we're very close to release but we should fix this as it's a new \nfeature.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 4 Sep 2024 16:10:37 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On 2024-09-04 We 4:10 PM, Andrew Dunstan wrote:\n>\n> On 2024-09-04 We 6:16 AM, Peter Eisentraut wrote:\n>> On 28.08.24 11:21, Peter Eisentraut wrote:\n>>> These are ok:\n>>>\n>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without \n>>> wrapper);\n>>> json_query\n>>> ------------\n>>> 42\n>>>\n>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n>>> unconditional wrapper);\n>>> json_query\n>>> ------------\n>>> [42]\n>>>\n>>> But this appears to be wrong:\n>>>\n>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n>>> conditional wrapper);\n>>> json_query\n>>> ------------\n>>> [42]\n>>>\n>>> This should return an unwrapped 42.\n>>\n>> If I make the code change illustrated in the attached patch, then I \n>> get the correct result here. And various regression test results \n>> change, which, to me, all look more correct after this patch. I \n>> don't know what the code I removed was supposed to accomplish, but it \n>> seems to be wrong somehow. In the current implementation, the WITH \n>> CONDITIONAL WRAPPER clause doesn't appear to work correctly in any \n>> case I could identify.\n>\n>\n> Agree the code definitely looks wrong. If anything the test should \n> probably be reversed:\n>\n> wrap = count > 1 || !(\n> IsAJsonbScalar(singleton) ||\n> (singleton->type == jbvBinary &&\n> JsonContainerIsScalar(singleton->val.binary.data)));\n>\n> i.e. in the count = 1 case wrap unless it's a scalar or a binary \n> wrapping a scalar. The code could do with a comment about the logic.\n>\n> I know we're very close to release but we should fix this as it's a \n> new feature.\n\n\nI thought about this again.\n\nI don't know what the spec says, but the Oracle docs say:\n\n Specify |WITH| |CONDITIONAL| |WRAPPER| to include the array wrapper\n only if the path expression matches a single scalar value or\n multiple values of any type. If the path expression matches a single\n JSON object or JSON array, then the array wrapper is omitted.\n\nSo I now think the code that's there now is actually correct, and what \nyou say appears wrong is also correct.\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-04 We 4:10 PM, Andrew\n Dunstan wrote:\n\n\n\n On 2024-09-04 We 6:16 AM, Peter Eisentraut wrote:\n \nOn 28.08.24 11:21, Peter Eisentraut wrote:\n \nThese are ok:\n \n\n select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b'\n without wrapper);\n \n json_query\n \n ------------\n \n 42\n \n\n select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with\n unconditional wrapper);\n \n json_query\n \n ------------\n \n [42]\n \n\n But this appears to be wrong:\n \n\n select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with\n conditional wrapper);\n \n json_query\n \n ------------\n \n [42]\n \n\n This should return an unwrapped 42.\n \n\n\n If I make the code change illustrated in the attached patch,\n then I get the correct result here. And various regression test\n results change, which, to me, all look more correct after this\n patch. I don't know what the code I removed was supposed to\n accomplish, but it seems to be wrong somehow. In the current\n implementation, the WITH CONDITIONAL WRAPPER clause doesn't\n appear to work correctly in any case I could identify.\n \n\n\n\n Agree the code definitely looks wrong. 
If anything the test should\n probably be reversed:\n \n\n wrap = count > 1 || !(\n \n IsAJsonbScalar(singleton) ||\n \n (singleton->type == jbvBinary &&\n \n JsonContainerIsScalar(singleton->val.binary.data)));\n \n\n i.e. in the count = 1 case wrap unless it's a scalar or a binary\n wrapping a scalar. The code could do with a comment about the\n logic.\n \n\n I know we're very close to release but we should fix this as it's\n a new feature.\n \n\n\n\nI thought about this again.\nI don't know what the spec says, but the Oracle docs say:\n\nSpecify WITH CONDITIONAL\nWRAPPER to include the array wrapper\n only if the path expression matches a single scalar value or\n multiple values of any type. If the path expression matches a\n single JSON object or JSON array, then the array wrapper is\n omitted. \n\n\nSo I now think the code that's there now is actually correct, and\n what you say appears wrong is also correct.\n\n\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 5 Sep 2024 11:01:49 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On 05.09.24 17:01, Andrew Dunstan wrote:\n> \n> On 2024-09-04 We 4:10 PM, Andrew Dunstan wrote:\n>>\n>> On 2024-09-04 We 6:16 AM, Peter Eisentraut wrote:\n>>> On 28.08.24 11:21, Peter Eisentraut wrote:\n>>>> These are ok:\n>>>>\n>>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without \n>>>> wrapper);\n>>>> json_query\n>>>> ------------\n>>>> 42\n>>>>\n>>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n>>>> unconditional wrapper);\n>>>> json_query\n>>>> ------------\n>>>> [42]\n>>>>\n>>>> But this appears to be wrong:\n>>>>\n>>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with \n>>>> conditional wrapper);\n>>>> json_query\n>>>> ------------\n>>>> [42]\n>>>>\n>>>> This should return an unwrapped 42.\n>>>\n>>> If I make the code change illustrated in the attached patch, then I \n>>> get the correct result here. And various regression test results \n>>> change, which, to me, all look more correct after this patch. I \n>>> don't know what the code I removed was supposed to accomplish, but it \n>>> seems to be wrong somehow. In the current implementation, the WITH \n>>> CONDITIONAL WRAPPER clause doesn't appear to work correctly in any \n>>> case I could identify.\n>>\n>>\n>> Agree the code definitely looks wrong. If anything the test should \n>> probably be reversed:\n>>\n>> wrap = count > 1 || !(\n>> IsAJsonbScalar(singleton) ||\n>> (singleton->type == jbvBinary &&\n>> JsonContainerIsScalar(singleton->val.binary.data)));\n>>\n>> i.e. in the count = 1 case wrap unless it's a scalar or a binary \n>> wrapping a scalar. The code could do with a comment about the logic.\n>>\n>> I know we're very close to release but we should fix this as it's a \n>> new feature.\n> \n> \n> I thought about this again.\n> \n> I don't know what the spec says,\n\nHere is the relevant bit:\n\na) Case:\ni) If the length of SEQ is 0 (zero), then let WRAPIT be False.\nNOTE 479 — This ensures that the ON EMPTY behavior supersedes the \nWRAPPER behavior.\nii) If WRAPPER is WITHOUT ARRAY, then let WRAPIT be False.\niii) If WRAPPER is WITH UNCONDITIONAL ARRAY, then let WRAPIT be True.\niv) If WRAPPER is WITH CONDITIONAL ARRAY, then\nCase:\n1) If SEQ has a single SQL/JSON item, then let WRAPIT be False.\n2) Otherwise, let WRAPIT be True.\n\n > but the Oracle docs say:>\n> Specify |WITH| |CONDITIONAL| |WRAPPER| to include the array wrapper\n> only if the path expression matches a single scalar value or\n> multiple values of any type. If the path expression matches a single\n> JSON object or JSON array, then the array wrapper is omitted.\n> \n> So I now think the code that's there now is actually correct, and what \n> you say appears wrong is also correct.\n\nI tested the above test expressions as well as the regression test case \nagainst Oracle and it agrees with my solution. So it seems to me that \nthis piece of documentation is wrong.\n\n\n\n",
"msg_date": "Thu, 5 Sep 2024 17:51:39 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "\n\n\n> On Sep 5, 2024, at 11:51 AM, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 05.09.24 17:01, Andrew Dunstan wrote:\n>>> On 2024-09-04 We 4:10 PM, Andrew Dunstan wrote:\n>>> \n>>> On 2024-09-04 We 6:16 AM, Peter Eisentraut wrote:\n>>>> On 28.08.24 11:21, Peter Eisentraut wrote:\n>>>>> These are ok:\n>>>>> \n>>>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without wrapper);\n>>>>> json_query\n>>>>> ------------\n>>>>> 42\n>>>>> \n>>>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with unconditional wrapper);\n>>>>> json_query\n>>>>> ------------\n>>>>> [42]\n>>>>> \n>>>>> But this appears to be wrong:\n>>>>> \n>>>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with conditional wrapper);\n>>>>> json_query\n>>>>> ------------\n>>>>> [42]\n>>>>> \n>>>>> This should return an unwrapped 42.\n>>>> \n>>>> If I make the code change illustrated in the attached patch, then I get the correct result here. And various regression test results change, which, to me, all look more correct after this patch. I don't know what the code I removed was supposed to accomplish, but it seems to be wrong somehow. In the current implementation, the WITH CONDITIONAL WRAPPER clause doesn't appear to work correctly in any case I could identify.\n>>> \n>>> \n>>> Agree the code definitely looks wrong. If anything the test should probably be reversed:\n>>> \n>>> wrap = count > 1 || !(\n>>> IsAJsonbScalar(singleton) ||\n>>> (singleton->type == jbvBinary &&\n>>> JsonContainerIsScalar(singleton->val.binary.data)));\n>>> \n>>> i.e. in the count = 1 case wrap unless it's a scalar or a binary wrapping a scalar. The code could do with a comment about the logic.\n>>> \n>>> I know we're very close to release but we should fix this as it's a new feature.\n>> I thought about this again.\n>> I don't know what the spec says,\n> \n> Here is the relevant bit:\n> \n> a) Case:\n> i) If the length of SEQ is 0 (zero), then let WRAPIT be False.\n> NOTE 479 — This ensures that the ON EMPTY behavior supersedes the WRAPPER behavior.\n> ii) If WRAPPER is WITHOUT ARRAY, then let WRAPIT be False.\n> iii) If WRAPPER is WITH UNCONDITIONAL ARRAY, then let WRAPIT be True.\n> iv) If WRAPPER is WITH CONDITIONAL ARRAY, then\n> Case:\n> 1) If SEQ has a single SQL/JSON item, then let WRAPIT be False.\n> 2) Otherwise, let WRAPIT be True.\n> \n> > but the Oracle docs say:>\n>> Specify |WITH| |CONDITIONAL| |WRAPPER| to include the array wrapper\n>> only if the path expression matches a single scalar value or\n>> multiple values of any type. If the path expression matches a single\n>> JSON object or JSON array, then the array wrapper is omitted.\n>> So I now think the code that's there now is actually correct, and what you say appears wrong is also correct.\n> \n> I tested the above test expressions as well as the regression test case against Oracle and it agrees with my solution. So it seems to me that this piece of documentation is wrong.\n\nOh, odd. Then assuming a scalar is an SQL/JSON item your patch appears correct.\n\nCheers\n\nAndrew\n\n\n",
"msg_date": "Thu, 5 Sep 2024 12:44:56 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "Sorry for missing this report and thanks Andrew for the offlist heads up.\n\nOn Wed, Sep 4, 2024 at 7:16 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 28.08.24 11:21, Peter Eisentraut wrote:\n> > These are ok:\n> >\n> > select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without wrapper);\n> > json_query\n> > ------------\n> > 42\n> >\n> > select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with\n> > unconditional wrapper);\n> > json_query\n> > ------------\n> > [42]\n> >\n> > But this appears to be wrong:\n> >\n> > select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with conditional\n> > wrapper);\n> > json_query\n> > ------------\n> > [42]\n> >\n> > This should return an unwrapped 42.\n>\n> If I make the code change illustrated in the attached patch, then I get\n> the correct result here. And various regression test results change,\n> which, to me, all look more correct after this patch. I don't know what\n> the code I removed was supposed to accomplish, but it seems to be wrong\n> somehow. In the current implementation, the WITH CONDITIONAL WRAPPER\n> clause doesn't appear to work correctly in any case I could identify.\n\nAgreed that this looks wrong.\n\nI've wondered why the condition was like that but left it as-is,\nbecause I thought at one point that that's needed to ensure that the\nreturned single scalar SQL/JSON item is valid jsonb.\n\nI've updated your patch to include updated test outputs and a nearby\ncode comment expanded. Do you intend to commit it or do you prefer\nthat I do?\n\n-- \nThanks, Amit Langote",
"msg_date": "Tue, 10 Sep 2024 17:00:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On 10.09.24 10:00, Amit Langote wrote:\n> Sorry for missing this report and thanks Andrew for the offlist heads up.\n> \n> On Wed, Sep 4, 2024 at 7:16 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> On 28.08.24 11:21, Peter Eisentraut wrote:\n>>> These are ok:\n>>>\n>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without wrapper);\n>>> json_query\n>>> ------------\n>>> 42\n>>>\n>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with\n>>> unconditional wrapper);\n>>> json_query\n>>> ------------\n>>> [42]\n>>>\n>>> But this appears to be wrong:\n>>>\n>>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with conditional\n>>> wrapper);\n>>> json_query\n>>> ------------\n>>> [42]\n>>>\n>>> This should return an unwrapped 42.\n>>\n>> If I make the code change illustrated in the attached patch, then I get\n>> the correct result here. And various regression test results change,\n>> which, to me, all look more correct after this patch. I don't know what\n>> the code I removed was supposed to accomplish, but it seems to be wrong\n>> somehow. In the current implementation, the WITH CONDITIONAL WRAPPER\n>> clause doesn't appear to work correctly in any case I could identify.\n> \n> Agreed that this looks wrong.\n> \n> I've wondered why the condition was like that but left it as-is,\n> because I thought at one point that that's needed to ensure that the\n> returned single scalar SQL/JSON item is valid jsonb.\n> \n> I've updated your patch to include updated test outputs and a nearby\n> code comment expanded. Do you intend to commit it or do you prefer\n> that I do?\n\nThis change looks unrelated:\n\n-ERROR: new row for relation \"test_jsonb_constraints\" violates check \nconstraint \"test_jsonb_constraint4\"\n+ERROR: new row for relation \"test_jsonb_constraints\" violates check \nconstraint \"test_jsonb_constraint5\"\n\nIs this some randomness in the way these constraints are evaluated?\n\n\n",
"msg_date": "Tue, 10 Sep 2024 22:15:55 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 5:15 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 10.09.24 10:00, Amit Langote wrote:\n> > Sorry for missing this report and thanks Andrew for the offlist heads up.\n> >\n> > On Wed, Sep 4, 2024 at 7:16 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> >> On 28.08.24 11:21, Peter Eisentraut wrote:\n> >>> These are ok:\n> >>>\n> >>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' without wrapper);\n> >>> json_query\n> >>> ------------\n> >>> 42\n> >>>\n> >>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with\n> >>> unconditional wrapper);\n> >>> json_query\n> >>> ------------\n> >>> [42]\n> >>>\n> >>> But this appears to be wrong:\n> >>>\n> >>> select json_query('{\"a\": 1, \"b\": 42}'::jsonb, 'lax $.b' with conditional\n> >>> wrapper);\n> >>> json_query\n> >>> ------------\n> >>> [42]\n> >>>\n> >>> This should return an unwrapped 42.\n> >>\n> >> If I make the code change illustrated in the attached patch, then I get\n> >> the correct result here. And various regression test results change,\n> >> which, to me, all look more correct after this patch. I don't know what\n> >> the code I removed was supposed to accomplish, but it seems to be wrong\n> >> somehow. In the current implementation, the WITH CONDITIONAL WRAPPER\n> >> clause doesn't appear to work correctly in any case I could identify.\n> >\n> > Agreed that this looks wrong.\n> >\n> > I've wondered why the condition was like that but left it as-is,\n> > because I thought at one point that that's needed to ensure that the\n> > returned single scalar SQL/JSON item is valid jsonb.\n> >\n> > I've updated your patch to include updated test outputs and a nearby\n> > code comment expanded. Do you intend to commit it or do you prefer\n> > that I do?\n>\n> This change looks unrelated:\n>\n> -ERROR: new row for relation \"test_jsonb_constraints\" violates check\n> constraint \"test_jsonb_constraint4\"\n> +ERROR: new row for relation \"test_jsonb_constraints\" violates check\n> constraint \"test_jsonb_constraint5\"\n>\n> Is this some randomness in the way these constraints are evaluated?\n\nThe result of JSON_QUERY() in the CHECK constraint changes, so the\nconstraint that previously failed now succeeds after this change,\nbecause the comparison looked like this before and after:\n\n-- before\npostgres=# select jsonb '[10]' < jsonb '[10]';\n ?column?\n----------\n f\n(1 row)\n\n-- after\npostgres=# select jsonb '10' < jsonb '[10]';\n ?column?\n----------\n t\n(1 row)\n\nThat causes the next constraint to be evaluated and its failure\nreported instead.\n\nIn the attached, I've adjusted the constraint for the test case to be\na bit more relevant and removed a nearby somewhat redundant test,\nmainly because its output changes after the adjustment.\n\n--\nThanks, Amit Langote",
"msg_date": "Wed, 11 Sep 2024 16:51:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On 11.09.24 09:51, Amit Langote wrote:\n>>> I've updated your patch to include updated test outputs and a nearby\n>>> code comment expanded. Do you intend to commit it or do you prefer\n>>> that I do?\n>>\n>> This change looks unrelated:\n>>\n>> -ERROR: new row for relation \"test_jsonb_constraints\" violates check\n>> constraint \"test_jsonb_constraint4\"\n>> +ERROR: new row for relation \"test_jsonb_constraints\" violates check\n>> constraint \"test_jsonb_constraint5\"\n>>\n>> Is this some randomness in the way these constraints are evaluated?\n> \n> The result of JSON_QUERY() in the CHECK constraint changes, so the\n> constraint that previously failed now succeeds after this change,\n> because the comparison looked like this before and after:\n> \n> -- before\n> postgres=# select jsonb '[10]' < jsonb '[10]';\n> ?column?\n> ----------\n> f\n> (1 row)\n> \n> -- after\n> postgres=# select jsonb '10' < jsonb '[10]';\n> ?column?\n> ----------\n> t\n> (1 row)\n> \n> That causes the next constraint to be evaluated and its failure\n> reported instead.\n> \n> In the attached, I've adjusted the constraint for the test case to be\n> a bit more relevant and removed a nearby somewhat redundant test,\n> mainly because its output changes after the adjustment.\n\nOk, that looks good. Good that we could clear that up a bit.\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 11:57:14 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 6:57 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 11.09.24 09:51, Amit Langote wrote:\n> >>> I've updated your patch to include updated test outputs and a nearby\n> >>> code comment expanded. Do you intend to commit it or do you prefer\n> >>> that I do?\n> >>\n> >> This change looks unrelated:\n> >>\n> >> -ERROR: new row for relation \"test_jsonb_constraints\" violates check\n> >> constraint \"test_jsonb_constraint4\"\n> >> +ERROR: new row for relation \"test_jsonb_constraints\" violates check\n> >> constraint \"test_jsonb_constraint5\"\n> >>\n> >> Is this some randomness in the way these constraints are evaluated?\n> >\n> > The result of JSON_QUERY() in the CHECK constraint changes, so the\n> > constraint that previously failed now succeeds after this change,\n> > because the comparison looked like this before and after:\n> >\n> > -- before\n> > postgres=# select jsonb '[10]' < jsonb '[10]';\n> > ?column?\n> > ----------\n> > f\n> > (1 row)\n> >\n> > -- after\n> > postgres=# select jsonb '10' < jsonb '[10]';\n> > ?column?\n> > ----------\n> > t\n> > (1 row)\n> >\n> > That causes the next constraint to be evaluated and its failure\n> > reported instead.\n> >\n> > In the attached, I've adjusted the constraint for the test case to be\n> > a bit more relevant and removed a nearby somewhat redundant test,\n> > mainly because its output changes after the adjustment.\n>\n> Ok, that looks good. Good that we could clear that up a bit.\n\nThanks for checking. Would you like me to commit it?\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Wed, 11 Sep 2024 20:25:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On 11.09.24 13:25, Amit Langote wrote:\n> On Wed, Sep 11, 2024 at 6:57 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> On 11.09.24 09:51, Amit Langote wrote:\n>>>>> I've updated your patch to include updated test outputs and a nearby\n>>>>> code comment expanded. Do you intend to commit it or do you prefer\n>>>>> that I do?\n>>>>\n>>>> This change looks unrelated:\n>>>>\n>>>> -ERROR: new row for relation \"test_jsonb_constraints\" violates check\n>>>> constraint \"test_jsonb_constraint4\"\n>>>> +ERROR: new row for relation \"test_jsonb_constraints\" violates check\n>>>> constraint \"test_jsonb_constraint5\"\n>>>>\n>>>> Is this some randomness in the way these constraints are evaluated?\n>>>\n>>> The result of JSON_QUERY() in the CHECK constraint changes, so the\n>>> constraint that previously failed now succeeds after this change,\n>>> because the comparison looked like this before and after:\n>>>\n>>> -- before\n>>> postgres=# select jsonb '[10]' < jsonb '[10]';\n>>> ?column?\n>>> ----------\n>>> f\n>>> (1 row)\n>>>\n>>> -- after\n>>> postgres=# select jsonb '10' < jsonb '[10]';\n>>> ?column?\n>>> ----------\n>>> t\n>>> (1 row)\n>>>\n>>> That causes the next constraint to be evaluated and its failure\n>>> reported instead.\n>>>\n>>> In the attached, I've adjusted the constraint for the test case to be\n>>> a bit more relevant and removed a nearby somewhat redundant test,\n>>> mainly because its output changes after the adjustment.\n>>\n>> Ok, that looks good. Good that we could clear that up a bit.\n> \n> Thanks for checking. Would you like me to commit it?\n\nPlease do.\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 13:56:23 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: json_query conditional wrapper bug"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 8:56 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 11.09.24 13:25, Amit Langote wrote:\n> > On Wed, Sep 11, 2024 at 6:57 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> >> On 11.09.24 09:51, Amit Langote wrote:\n> >>>>> I've updated your patch to include updated test outputs and a nearby\n> >>>>> code comment expanded. Do you intend to commit it or do you prefer\n> >>>>> that I do?\n> >>>>\n> >>>> This change looks unrelated:\n> >>>>\n> >>>> -ERROR: new row for relation \"test_jsonb_constraints\" violates check\n> >>>> constraint \"test_jsonb_constraint4\"\n> >>>> +ERROR: new row for relation \"test_jsonb_constraints\" violates check\n> >>>> constraint \"test_jsonb_constraint5\"\n> >>>>\n> >>>> Is this some randomness in the way these constraints are evaluated?\n> >>>\n> >>> The result of JSON_QUERY() in the CHECK constraint changes, so the\n> >>> constraint that previously failed now succeeds after this change,\n> >>> because the comparison looked like this before and after:\n> >>>\n> >>> -- before\n> >>> postgres=# select jsonb '[10]' < jsonb '[10]';\n> >>> ?column?\n> >>> ----------\n> >>> f\n> >>> (1 row)\n> >>>\n> >>> -- after\n> >>> postgres=# select jsonb '10' < jsonb '[10]';\n> >>> ?column?\n> >>> ----------\n> >>> t\n> >>> (1 row)\n> >>>\n> >>> That causes the next constraint to be evaluated and its failure\n> >>> reported instead.\n> >>>\n> >>> In the attached, I've adjusted the constraint for the test case to be\n> >>> a bit more relevant and removed a nearby somewhat redundant test,\n> >>> mainly because its output changes after the adjustment.\n> >>\n> >> Ok, that looks good. Good that we could clear that up a bit.\n> >\n> > Thanks for checking. Would you like me to commit it?\n>\n> Please do.\n\nDone. Thanks for the report and the patch.\n\n-- \nThanks, Amit Langote\n\n\n",
"msg_date": "Thu, 12 Sep 2024 11:24:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: json_query conditional wrapper bug"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm trying to create an Index Access Method Roting.\nBuilding the index requires iterating over all rows and calculating,\nwhich is then used during index construction.\n\nThe methods in the IndexAmRoutine seem to deal with insertion / index build\none row at a time.\nIs there any workaround you can suggest that would allow me to calculate\nthe median of a column,\nstore it someplace and then use it during Inserts to the index?\n\nThanks,\nSushrut\n\nHi,I'm trying to create an Index Access Method Roting.Building the index requires iterating over all rows and calculating, which is then used during index construction.The methods in the IndexAmRoutine seem to deal with insertion / index build one row at a time.Is there any workaround you can suggest that would allow me to calculate the median of a column, store it someplace and then use it during Inserts to the index?Thanks,Sushrut",
"msg_date": "Wed, 28 Aug 2024 19:51:22 +0530",
"msg_from": "Sushrut Shivaswamy <sushrut.shivaswamy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reading all tuples in Index Access Method"
},
{
"msg_contents": "On Wed, 28 Aug 2024 at 16:21, Sushrut Shivaswamy\n<sushrut.shivaswamy@gmail.com> wrote:\n>\n> Hi,\n>\n> I'm trying to create an Index Access Method Roting.\n> Building the index requires iterating over all rows and calculating,\n> which is then used during index construction.\n>\n> The methods in the IndexAmRoutine seem to deal with insertion / index build one row at a time.\n> Is there any workaround you can suggest that would allow me to calculate the median of a column,\n> store it someplace and then use it during Inserts to the index?\n\nI'm not sure what to say. Index insertions through indam->aminsert\nhappen as users insert new values into the table, so I don't see how a\nonce-calculated median would remain correct across an index's\nlifespan: every time I insert a new value (or delete a tuple) the\nmedian will change. Furthermore, indexes will not know about deletions\nand updates until significantly after the deleting or updating\ntransaction got committed, so transactionally consistent aggregates\nare likely impossible to keep consistent while staying inside the\nindex AM API.\n\nHowever, if you only need this median (or other aggregate) at index\nbuild time, you should probably look at various indexes'\nindam->ambuild functions, as that function's purpose is to build a new\nindex from an existing table's dataset, usually by scanning the table\nwith table_index_build_scan.\n\nAs for storing such data more permanently: Practically all included\nindexes currently have a metapage at block 0 of the main data fork,\nwhich contains metadata and bookkeeping info about the index's\nstructure, and you're free to do the same for your index.\n\nI hope that helps?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Wed, 28 Aug 2024 17:59:32 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reading all tuples in Index Access Method"
},
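To make the advice above concrete, here is a rough sketch of an ambuild function that scans the table once with table_index_build_scan() and could compute an aggregate such as the median before writing it to a metapage. All names prefixed with my_ are hypothetical, the signatures are the ones I believe recent PostgreSQL versions use (the callback signature has changed across releases), and the median bookkeeping itself is elided:

#include "postgres.h"

#include "access/genam.h"
#include "access/tableam.h"
#include "nodes/execnodes.h"
#include "storage/itemptr.h"
#include "utils/rel.h"

typedef struct MyBuildState
{
    double      indtuples;      /* rows fed to the index so far */
    /* ... whatever accumulator the median computation needs ... */
} MyBuildState;

/* Per-row callback invoked by table_index_build_scan() */
static void
my_build_callback(Relation index, ItemPointer tid,
                  Datum *values, bool *isnull,
                  bool tupleIsAlive, void *state)
{
    MyBuildState *buildstate = (MyBuildState *) state;

    if (!isnull[0])
    {
        /* accumulate values[0] here for the median computation */
    }
    buildstate->indtuples += 1;
}

/* IndexAmRoutine->ambuild implementation (sketch) */
static IndexBuildResult *
my_ambuild(Relation heap, Relation index, IndexInfo *indexInfo)
{
    MyBuildState buildstate = {0};
    IndexBuildResult *result;
    double      reltuples;

    /* Feed every row of the table to the callback exactly once */
    reltuples = table_index_build_scan(heap, index, indexInfo, true, true,
                                       my_build_callback, &buildstate, NULL);

    /*
     * Compute the median from the accumulated values and write it into
     * block 0 (the metapage), so that aminsert can read it back later.
     */

    result = (IndexBuildResult *) palloc0(sizeof(IndexBuildResult));
    result->heap_tuples = reltuples;
    result->index_tuples = buildstate.indtuples;
    return result;
}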
{
"msg_contents": "Thanks Matthias, table_index_build_scan sounds like what I\"m looking for.\n\nOn Wed, Aug 28, 2024 at 9:29 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Wed, 28 Aug 2024 at 16:21, Sushrut Shivaswamy\n> <sushrut.shivaswamy@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I'm trying to create an Index Access Method Roting.\n> > Building the index requires iterating over all rows and calculating,\n> > which is then used during index construction.\n> >\n> > The methods in the IndexAmRoutine seem to deal with insertion / index\n> build one row at a time.\n> > Is there any workaround you can suggest that would allow me to calculate\n> the median of a column,\n> > store it someplace and then use it during Inserts to the index?\n>\n> I'm not sure what to say. Index insertions through indam->aminsert\n> happen as users insert new values into the table, so I don't see how a\n> once-calculated median would remain correct across an index's\n> lifespan: every time I insert a new value (or delete a tuple) the\n> median will change. Furthermore, indexes will not know about deletions\n> and updates until significantly after the deleting or updating\n> transaction got committed, so transactionally consistent aggregates\n> are likely impossible to keep consistent while staying inside the\n> index AM API.\n>\n> However, if you only need this median (or other aggregate) at index\n> build time, you should probably look at various indexes'\n> indam->ambuild functions, as that function's purpose is to build a new\n> index from an existing table's dataset, usually by scanning the table\n> with table_index_build_scan.\n>\n> As for storing such data more permanently: Practically all included\n> indexes currently have a metapage at block 0 of the main data fork,\n> which contains metadata and bookkeeping info about the index's\n> structure, and you're free to do the same for your index.\n>\n> I hope that helps?\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>\n\nThanks Matthias, table_index_build_scan sounds like what I\"m looking for.On Wed, Aug 28, 2024 at 9:29 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:On Wed, 28 Aug 2024 at 16:21, Sushrut Shivaswamy\n<sushrut.shivaswamy@gmail.com> wrote:\n>\n> Hi,\n>\n> I'm trying to create an Index Access Method Roting.\n> Building the index requires iterating over all rows and calculating,\n> which is then used during index construction.\n>\n> The methods in the IndexAmRoutine seem to deal with insertion / index build one row at a time.\n> Is there any workaround you can suggest that would allow me to calculate the median of a column,\n> store it someplace and then use it during Inserts to the index?\n\nI'm not sure what to say. Index insertions through indam->aminsert\nhappen as users insert new values into the table, so I don't see how a\nonce-calculated median would remain correct across an index's\nlifespan: every time I insert a new value (or delete a tuple) the\nmedian will change. 
Furthermore, indexes will not know about deletions\nand updates until significantly after the deleting or updating\ntransaction got committed, so transactionally consistent aggregates\nare likely impossible to keep consistent while staying inside the\nindex AM API.\n\nHowever, if you only need this median (or other aggregate) at index\nbuild time, you should probably look at various indexes'\nindam->ambuild functions, as that function's purpose is to build a new\nindex from an existing table's dataset, usually by scanning the table\nwith table_index_build_scan.\n\nAs for storing such data more permanently: Practically all included\nindexes currently have a metapage at block 0 of the main data fork,\nwhich contains metadata and bookkeeping info about the index's\nstructure, and you're free to do the same for your index.\n\nI hope that helps?\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)",
"msg_date": "Wed, 28 Aug 2024 22:27:58 +0530",
"msg_from": "Sushrut Shivaswamy <sushrut.shivaswamy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reading all tuples in Index Access Method"
}
] |
[
{
"msg_contents": "Hi,\n\nI would like to understand the pull_up_simple_values code a little bit more.\nPull-up of simple values was implemented in 2015 by commit f4abd02. In \nthe is_simple_values I see a check on the expression_returns_set() of \nthe RTE values list.\n\nBut since d43a619 in 2017 the check_srf_call_placement has reported an \nERROR in the case of set-returning function inside a VALUES expression. \nLet's demonstrate:\n\nSELECT * FROM (VALUES ((generate_series(1,1E2))));\nERROR: set-returning functions are not allowed in VALUES\nLINE 1: SELECT * FROM (VALUES ((generate_series(1,1E2))));\n\nI think, the expression_returns_set examination doesn't necessary and we \ncan replace it with an assertion, if needed (see attachment).\nAm I wrong?\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Wed, 28 Aug 2024 16:26:22 +0200",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove unnecessary check on set-returning functions in values_lists"
},
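The attachment itself is not reproduced in this archive; from the description, the change in is_simple_values() (prepjointree.c) would presumably demote the runtime test to an assertion, roughly along these lines. This is only a guess at its shape, not the actual patch:

/* before (sketch of the existing check) */
if (expression_returns_set((Node *) rte->values_lists))
    return false;

/* after (sketch of the proposed change) */
Assert(!expression_returns_set((Node *) rte->values_lists));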
{
"msg_contents": "Andrei Lepikhov <lepihov@gmail.com> writes:\n> I think, the expression_returns_set examination doesn't necessary and we \n> can replace it with an assertion, if needed (see attachment).\n\nI think you may be right that this test is not really necessary given\nthe upstream parser test, but nonetheless I'm not inclined to remove\nit. The upstream test is very far away in code terms, and there are\nnearby steps like SQL-function inlining that make it less than 100%\nobvious that an expression that was SRF-free at parse time still is\nwhen we get here. I also don't care for destroying the parallel that\nthe comment mentions to the checks done before pulling up a subquery.\n\nI'm reminded of Weinberg’s Law:\n\n\tIf builders built buildings the way programmers wrote\n\tprograms, then the first woodpecker that came along would\n\tdestroy civilization.\n\nUnless there's a demonstrable, nontrivial performance hit from\nthis check, I think we should leave it alone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Aug 2024 23:03:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove unnecessary check on set-returning functions in\n values_lists"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen working on the regex code I noticed that the labels of \nPG_Locale_Strategy had become inconsistent with the addition of \nPG_REGEX_BUILTIN but while at it I also noticed that \nPG_REGEX_LOCALE_WIDE_L and PG_REGEX_LOCALE_1BYTE_L did not make it\nobvious that they were libc-related so I propose a new naming scheme:\n\nPG_STRATEGY_<type>[_<subtype>]\n\nI am open for other suggestions of course like keeping the PG_LOCALE_* \nprefix, but in any case I think we should make the enum labels consistent.\n\nAndreas",
"msg_date": "Wed, 28 Aug 2024 16:58:16 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": true,
"msg_subject": "Minor refactor: Use more consistent names for the labels of\n PG_Locale_Strategy"
},
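Applying the proposed PG_STRATEGY_<type>[_<subtype>] scheme to the enum in regc_pg_locale.c would give something like the following; the exact membership and final spellings here are my guess, not necessarily what was committed:

typedef enum
{
    PG_STRATEGY_C,              /* was PG_REGEX_LOCALE_C */
    PG_STRATEGY_BUILTIN,        /* was PG_REGEX_BUILTIN */
    PG_STRATEGY_LIBC_WIDE,      /* was PG_REGEX_LOCALE_WIDE_L */
    PG_STRATEGY_LIBC_1BYTE,     /* was PG_REGEX_LOCALE_1BYTE_L */
    /* ... any remaining members renamed along the same lines ... */
} PG_Locale_Strategy;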
{
"msg_contents": "On Wed, Aug 28, 2024 at 04:58:16PM +0200, Andreas Karlsson wrote:\n> When working on the regex code I noticed that the labels of\n> PG_Locale_Strategy had become inconsistent with the addition of\n> PG_REGEX_BUILTIN but while at it I also noticed that PG_REGEX_LOCALE_WIDE_L\n> and PG_REGEX_LOCALE_1BYTE_L did not make it\n> obvious that they were libc-related so I propose a new naming scheme:\n> \n> PG_STRATEGY_<type>[_<subtype>]\n> \n> I am open for other suggestions of course like keeping the PG_LOCALE_*\n> prefix, but in any case I think we should make the enum labels consistent.\n\n+1 for your suggestion, as you are suggesting. The original intention\nwhen PG_Locale_Strategy got introduced was to have everything named as\nPG_REGEX_LOCALE_*, but with the built-in part coming in play in this\ncode adding \"STRATEGY\" is cleaner than just \"LOCALE\".\n--\nMichael",
"msg_date": "Thu, 29 Aug 2024 10:06:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor refactor: Use more consistent names for the labels of\n PG_Locale_Strategy"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 10:06:32AM +0900, Michael Paquier wrote:\n> +1 for your suggestion, as you are suggesting. The original intention\n> when PG_Locale_Strategy got introduced was to have everything named as\n> PG_REGEX_LOCALE_*, but with the built-in part coming in play in this\n> code adding \"STRATEGY\" is cleaner than just \"LOCALE\".\n\nApplied this one as 23138284cde4.\n--\nMichael",
"msg_date": "Mon, 2 Sep 2024 09:43:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor refactor: Use more consistent names for the labels of\n PG_Locale_Strategy"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThis patch introduces four new columns in pg_stat_database:\n\n* parallel_worker_planned\n* parallel_worker_launched\n* parallel_maint_worker_planned\n* parallel_maint_worker_launched\n\nThe intent is to help administrators evaluate the usage of parallel \nworkers in their databases and help sizing max_worker_processes, \nmax_parallel_workers or max_parallel_maintenance_workers).\n\nHere is a test script:\n\npsql << _EOF_\n\n-- Index creation\nDROP TABLE IF EXISTS test_pql;\nCREATE TABLE test_pql(i int, j int);\nINSERT INTO test_pql SELECT x,x FROM generate_series(1,1000000) as F(x);\n\n-- 0 planned / 0 launched\nEXPLAIN (ANALYZE)\n\tSELECT 1;\n\n-- 2 planned / 2 launched\nEXPLAIN (ANALYZE)\n\tSELECT i, avg(j) FROM test_pql GROUP BY i;\n\nSET max_parallel_workers TO 1;\n-- 4 planned / 1 launched\nEXPLAIN (ANALYZE)\n\tSELECT i, avg(j) FROM test_pql GROUP BY i\n\tUNION\n\tSELECT i, avg(j) FROM test_pql GROUP BY i;\n\nRESET max_parallel_workers;\n-- 1 planned / 1 launched\nCREATE INDEX ON test_pql(i);\n\nSET max_parallel_workers TO 0;\n-- 1 planned / 0 launched\nCREATE INDEX ON test_pql(j);\n-- 1 planned / 0 launched\nCREATE INDEX ON test_pql(i, j);\n\nSET maintenance_work_mem TO '96MB';\nRESET max_parallel_workers;\n-- 2 planned / 2 launched\nVACUUM (VERBOSE) test_pql;\n\nSET max_parallel_workers TO 1;\n-- 2 planned / 1 launched\nVACUUM (VERBOSE) test_pql;\n\n-- TOTAL: parallel workers: 6 planned / 3 launched\n-- TOTAL: parallel maint workers: 7 planned / 4 launched\n_EOF_\n\n\nAnd the output in pg_stat_database a fresh server without any \nconfiguration change except thoses in the script:\n\n[local]:5445 postgres@postgres=# SELECT datname, \nparallel_workers_planned, parallel_workers_launched, \nparallel_maint_workers_planned, parallel_maint_workers_launched FROM pg\n_stat_database WHERE datname = 'postgres' \\gx \n \n\n-[ RECORD 1 ]-------------------+--------- \n \n\ndatname | postgres \n \n\nparallel_workers_planned | 6 \n \n\nparallel_workers_launched | 3 \n \n\nparallel_maint_workers_planned | 7 \n \n\nparallel_maint_workers_launched | 4\n\nThanks to: Jehan-Guillaume de Rorthais, Guillaume Lelarge and Franck\nBoudehen for the help and motivation boost.\n\n---\nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Wed, 28 Aug 2024 17:10:26 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Parallel workers stats in pg_stat_database"
},
{
"msg_contents": "Hi,\n\nThis is a new patch which:\n\n* fixes some typos\n* changes the execgather / execgathermerge code so that the stats are \naccumulated in EState and inserted in pg_stat_database only once, during \nExecutorEnd\n* adds tests (very ugly, but I could get the parallel plan to be stable \nacross make check executions.)\n\n\nOn 8/28/24 17:10, Benoit Lobréau wrote:\n> Hi hackers,\n> \n> This patch introduces four new columns in pg_stat_database:\n> \n> * parallel_worker_planned\n> * parallel_worker_launched\n> * parallel_maint_worker_planned\n> * parallel_maint_worker_launched\n> \n> The intent is to help administrators evaluate the usage of parallel \n> workers in their databases and help sizing max_worker_processes, \n> max_parallel_workers or max_parallel_maintenance_workers).\n> \n> Here is a test script:\n> \n> psql << _EOF_\n> \n> -- Index creation\n> DROP TABLE IF EXISTS test_pql;\n> CREATE TABLE test_pql(i int, j int);\n> INSERT INTO test_pql SELECT x,x FROM generate_series(1,1000000) as F(x);\n> \n> -- 0 planned / 0 launched\n> EXPLAIN (ANALYZE)\n> SELECT 1;\n> \n> -- 2 planned / 2 launched\n> EXPLAIN (ANALYZE)\n> SELECT i, avg(j) FROM test_pql GROUP BY i;\n> \n> SET max_parallel_workers TO 1;\n> -- 4 planned / 1 launched\n> EXPLAIN (ANALYZE)\n> SELECT i, avg(j) FROM test_pql GROUP BY i\n> UNION\n> SELECT i, avg(j) FROM test_pql GROUP BY i;\n> \n> RESET max_parallel_workers;\n> -- 1 planned / 1 launched\n> CREATE INDEX ON test_pql(i);\n> \n> SET max_parallel_workers TO 0;\n> -- 1 planned / 0 launched\n> CREATE INDEX ON test_pql(j);\n> -- 1 planned / 0 launched\n> CREATE INDEX ON test_pql(i, j);\n> \n> SET maintenance_work_mem TO '96MB';\n> RESET max_parallel_workers;\n> -- 2 planned / 2 launched\n> VACUUM (VERBOSE) test_pql;\n> \n> SET max_parallel_workers TO 1;\n> -- 2 planned / 1 launched\n> VACUUM (VERBOSE) test_pql;\n> \n> -- TOTAL: parallel workers: 6 planned / 3 launched\n> -- TOTAL: parallel maint workers: 7 planned / 4 launched\n> _EOF_\n> \n> \n> And the output in pg_stat_database a fresh server without any \n> configuration change except thoses in the script:\n> \n> [local]:5445 postgres@postgres=# SELECT datname, \n> parallel_workers_planned, parallel_workers_launched, \n> parallel_maint_workers_planned, parallel_maint_workers_launched FROM pg\n> _stat_database WHERE datname = 'postgres' \\gx\n> \n> -[ RECORD 1 ]-------------------+---------\n> \n> datname | postgres\n> \n> parallel_workers_planned | 6\n> \n> parallel_workers_launched | 3\n> \n> parallel_maint_workers_planned | 7\n> \n> parallel_maint_workers_launched | 4\n> \n> Thanks to: Jehan-Guillaume de Rorthais, Guillaume Lelarge and Franck\n> Boudehen for the help and motivation boost.\n> \n> ---\n> Benoit Lobréau\n> Consultant\n> http://dalibo.com\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Thu, 29 Aug 2024 17:02:59 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel workers stats in pg_stat_database"
},
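A skeletal illustration of the v2 flow described above: accumulate per-query counters while Gather/GatherMerge launch workers, then report them once at executor end. Every identifier below is hypothetical; the real patch keeps its counters in EState and uses whatever pgstat entry point it introduces:

/* counters that would live in the executor state for the running query */
typedef struct ParallelWorkerCounters
{
    int         planned;        /* workers requested by Gather/GatherMerge */
    int         launched;       /* workers that actually started */
} ParallelWorkerCounters;

/* hypothetical pgstat entry point that bumps the pg_stat_database columns */
extern void pgstat_report_parallel_workers(int planned, int launched);

/* called from ExecGather()/ExecGatherMerge() after launching workers */
static void
accumulate_worker_counts(ParallelWorkerCounters *counters,
                         int nworkers_planned, int nworkers_launched)
{
    counters->planned += nworkers_planned;
    counters->launched += nworkers_launched;
}

/* called once from ExecutorEnd() so each query updates the stats only once */
static void
flush_worker_counts(const ParallelWorkerCounters *counters)
{
    if (counters->planned > 0)
        pgstat_report_parallel_workers(counters->planned, counters->launched);
}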
{
"msg_contents": "Hi,\n\nThis new version avoids updating the stats for non parallel queries.\n\nI noticed that the tests are still not stable. I tried using tenk2\nbut fail to have stable plans. I'd love to have pointers on that front.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Tue, 3 Sep 2024 14:34:06 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel workers stats in pg_stat_database"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 02:34:06PM +0200, Benoit Lobr�au wrote:\n> I noticed that the tests are still not stable. I tried using tenk2\n> but fail to have stable plans. I'd love to have pointers on that front.\n\nWhat about moving the tests to places where it's \"guaranteed\" to get \nparallel workers involved? For example, a \"parallel_maint_workers\" only test\ncould be done in vacuum_parallel.sql.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 06:46:35 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel workers stats in pg_stat_database"
},
{
"msg_contents": "On 9/4/24 08:46, Bertrand Drouvot wrote:> What about moving the tests to \nplaces where it's \"guaranteed\" to get\n> parallel workers involved? For example, a \"parallel_maint_workers\" only test\n> could be done in vacuum_parallel.sql.\n\nThank you ! I was too focussed on the stat part and missed the obvious.\nIt's indeed better with this file.\n\n... Which led me to discover that the area I choose to gather my stats \nis wrong (parallel_vacuum_end), it only traps workers allocated for \nparallel_vacuum_cleanup_all_indexes() and not \nparallel_vacuum_bulkdel_all_indexes().\n\nBack to the drawing board...\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 17:25:41 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel workers stats in pg_stat_database"
},
{
"msg_contents": "Here is an updated patch fixing the aforementionned problems\nwith tests and vacuum stats.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Tue, 17 Sep 2024 14:22:59 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel workers stats in pg_stat_database"
}
] |
[
{
"msg_contents": "Hi,\n\nSuppose you set compute_query_id=on and then execute a query using\neither the simple or the extended query protocol. With the simple\nquery protocol, the query ID is advertised during query execution;\nwith the extended query protocol, it's not. Here's a test case to\ndemonstrate:\n\nrobert.haas=# select query_id from pg_stat_activity where pid =\npg_backend_pid() \\g\n query_id\n---------------------\n 4332952175652079452\n(1 row)\n\nrobert.haas=# select query_id from pg_stat_activity where pid =\npg_backend_pid() \\bind \\g\n query_id\n----------\n\n(1 row)\n\nI found this surprising and decided to investigate why it was\nhappening. It turns out that exec_{parse,bind,execute}_message() all\nstart by calling pgstat_report_activity(STATE_RUNNING, ...) which\nresets the advertised query ID, which seems fair enough. After that,\nexec_parse_message() does parse analysis, which results in a call to\npgstat_report_query_id() that advertises the correct query ID. After\nresetting the advertised query ID initially, exec_bind_message() calls\nPortalStart() which calls ExecutorStart() which re-advertises the same\nquery ID that was computed at parse analysis time. At execute time, we\ncall PortalRun() which calls ExecutorRun() which does NOT re-advertise\nthe saved query ID. But as far as I can see, that's just an oversight\nin commit 4f0b0966c86, which added this hunk in ExecutorStart:\n\n+ /*\n+ * In some cases (e.g. an EXECUTE statement) a query execution will skip\n+ * parse analysis, which means that the queryid won't be reported. Note\n+ * that it's harmless to report the queryid multiple time, as the call will\n+ * be ignored if the top level queryid has already been reported.\n+ */\n+ pgstat_report_queryid(queryDesc->plannedstmt->queryId, false);\n\nBut I think that the entire first sentence of this comment is just\nwrong. Query execution can't skip parse analysis; parse analysis has\nto be performed before planning and planning has to be performed\nbefore execution. So any time parse analysis computes the query ID, we\nshould have it at execution time, as long as it got saved into the\nPlannedStmt (and why would we ever intentionally skip that?). The\ncomment is also wrong about the consequences: this not only affects\nthe EXECUTE statement but also an Execute protocol message, hence the\nbehavior demonstrated above.\n\nI propose that we should:\n\n- Add a call to\npgstat_report_query_id(queryDesc->plannedstmt->queryId, false) to the\ntop of ExecutorRun() with an appropriate comment.\n- Fix the incorrect comment mentioned above.\n- Back-patch to all releases containing 4f0b0966c86 i.e. v14+.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Aug 2024 16:27:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "query ID goes missing with extended query protocol"
},
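Concretely, the first item of the proposal above would amount to something like this in ExecutorRun() (a sketch of the proposed one-line addition; the hook dispatch below is the existing code):

void
ExecutorRun(QueryDesc *queryDesc,
            ScanDirection direction, uint64 count,
            bool execute_once)
{
    /*
     * Re-advertise the query ID saved in the PlannedStmt, since an Execute
     * protocol message has just reset the advertised value via
     * pgstat_report_activity(STATE_RUNNING, ...).
     */
    pgstat_report_query_id(queryDesc->plannedstmt->queryId, false);

    if (ExecutorRun_hook)
        (*ExecutorRun_hook) (queryDesc, direction, count, execute_once);
    else
        standard_ExecutorRun(queryDesc, direction, count, execute_once);
}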
{
"msg_contents": "On Wed, Aug 28, 2024 at 04:27:38PM -0400, Robert Haas wrote:\n> - Add a call to\n> pgstat_report_query_id(queryDesc->plannedstmt->queryId, false) to the\n> top of ExecutorRun() with an appropriate comment.\n> - Fix the incorrect comment mentioned above.\n> - Back-patch to all releases containing 4f0b0966c86 i.e. v14+.\n\nThis is being discussed already on a different thread:\n- Thread: https://www.postgresql.org/message-id/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com\n- CF entry: https://commitfest.postgresql.org/49/4963/\n\nThere is a patch proposed there, that attempts to deal with the two\ncases of a portal where ExecutorRun() is called once while holding a\ntuplestore and where ExecutorRun() is called multiple times. A few\napproaches have been discussed for the tests, where the new psql\nmetacommands added in d55322b0da60 should become handy. That's on my\nTODO spreadsheet of things to look at, but I did not come back to it\nyet.\n\nA backpatch would be adapted for that, yep. I'd like to see tests on\nHEAD at least.\n--\nMichael",
"msg_date": "Thu, 29 Aug 2024 07:48:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: query ID goes missing with extended query protocol"
},
{
"msg_contents": "On Wed, Aug 28, 2024 at 6:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n> This is being discussed already on a different thread:\n> - Thread: https://www.postgresql.org/message-id/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com\n> - CF entry: https://commitfest.postgresql.org/49/4963/\n>\n> There is a patch proposed there, that attempts to deal with the two\n> cases of a portal where ExecutorRun() is called once while holding a\n> tuplestore and where ExecutorRun() is called multiple times. A few\n> approaches have been discussed for the tests, where the new psql\n> metacommands added in d55322b0da60 should become handy. That's on my\n> TODO spreadsheet of things to look at, but I did not come back to it\n> yet.\n\nThat's interesting, but it sort of seems like it's reinventing the\nwheel, vs. the one-line change that I proposed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 11:24:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: query ID goes missing with extended query protocol"
},
{
"msg_contents": "On Mon, Sep 09, 2024 at 11:24:28AM -0400, Robert Haas wrote:\n> That's interesting, but it sort of seems like it's reinventing the\n> wheel, vs. the one-line change that I proposed.\n\nProbably. I'll try to look at all that this week (with some automated\ntests!).\n--\nMichael",
"msg_date": "Tue, 10 Sep 2024 07:44:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: query ID goes missing with extended query protocol"
}
] |
[
{
"msg_contents": "Hi Hackers!\n\nThis would be version v1 of this feature\n\nBasically, the subject says it all: pl/pgperl Patch for being able to \ntell which function you're in.\nThis is a hashref so it will be possible to populate new and exciting \nother details in the future as the need arises\n\nThis also greatly improves logging capabilities for things like catching \nwarnings, Because as it stands right now, there's no information that \ncan assist with locating the source of a warning like this:\n\n# tail -f /var/log/postgresql.log\n******* GOT A WARNING - Use of uninitialized value $prefix in \nconcatenation (.) or string at (eval 531) line 48.\n\nNow, with $_FN you can do this:\n\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\n\nuse warnings;\nuse strict;\nuse Data::Dumper;\n\n$SIG{__WARN__} = sub {\n elog(NOTICE, Dumper($_FN));\n\n print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n};\n\nmy $a;\nprint \"$a\"; # uninit!\n\nreturn undef;\n\n$function$\n;\n\nThis patch is against 12 which is still our production branch. This \ncould easily be also patched against newer releases as well.\n\nI've been using this code in production now for about 3 years, it's \ngreatly helped track down issues. And there shouldn't be anything \nplatform-specific here, it's all regular perl API\n\nI'm not sure about adding testing. This is my first postgres patch, so \nany guidance on adding regression testing would be appreciated.\n\nThe rationale for this has come from the need to know the source \nfunction name, and we've typically resorted to things like this in the past:\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\nmy $function_name = 'throw_warning';\n$SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n$_[0]\\n\"; }\n$function$\n;\n\nWe've literally had to copy/paste this all over and it's something that \npostgres should just 'give you' since it knows the name already, just \nlike when triggers pass you $_TD with all the pertinent information\n\nA wishlist item would be for postgres plperl to automatically prepend \nthe function name and schema when throwing perl warnings so you don't \nhave to do your own __WARN__ handler, but this is the next best thing.",
"msg_date": "Wed, 28 Aug 2024 17:53:50 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "pl/pgperl Patch for adding $_FN detail just like triggers have for\n $_TD"
},
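For context on what such a patch has to do on the C side: plperl would need to build a hash carrying the function's details and bind it to $_FN before calling the body, in the same spirit as the hash plperl_trigger_build_args() builds for $_TD. A minimal sketch using the raw Perl C API; the helper name, the fields, and the binding step are illustrative only, not the code in the posted patch:

/* plperl.c-style sketch: build the hashref that would back $_FN */
static SV *
plperl_build_function_info(const char *proname, const char *nspname)
{
    HV *hv = newHV();

    hv_store(hv, "name", 4, newSVpv(proname, 0), 0);
    hv_store(hv, "schema", 6, newSVpv(nspname, 0), 0);

    /* the caller would expose this to the function body as $_FN */
    return newRV_noinc((SV *) hv);
}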
{
"msg_contents": "\nOn 2024-08-28 We 5:53 PM, Mark Murawski wrote:\n> Hi Hackers!\n>\n> This would be version v1 of this feature\n>\n> Basically, the subject says it all: pl/pgperl Patch for being able to \n> tell which function you're in.\n> This is a hashref so it will be possible to populate new and exciting \n> other details in the future as the need arises\n>\n> This also greatly improves logging capabilities for things like \n> catching warnings, Because as it stands right now, there's no \n> information that can assist with locating the source of a warning like \n> this:\n>\n> # tail -f /var/log/postgresql.log\n> ******* GOT A WARNING - Use of uninitialized value $prefix in \n> concatenation (.) or string at (eval 531) line 48.\n>\n> Now, with $_FN you can do this:\n>\n>\n> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n> plperlu AS $function$\n>\n> use warnings;\n> use strict;\n> use Data::Dumper;\n>\n> $SIG{__WARN__} = sub {\n> elog(NOTICE, Dumper($_FN));\n>\n> print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n> };\n>\n> my $a;\n> print \"$a\"; # uninit!\n>\n> return undef;\n>\n> $function$\n> ;\n>\n> This patch is against 12 which is still our production branch. This \n> could easily be also patched against newer releases as well.\n>\n> I've been using this code in production now for about 3 years, it's \n> greatly helped track down issues. And there shouldn't be anything \n> platform-specific here, it's all regular perl API\n>\n> I'm not sure about adding testing. This is my first postgres patch, \n> so any guidance on adding regression testing would be appreciated.\n>\n> The rationale for this has come from the need to know the source \n> function name, and we've typically resorted to things like this in the \n> past:\n>\n> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n> plperlu AS $function$\n> my $function_name = 'throw_warning';\n> $SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n> $_[0]\\n\"; }\n> $function$\n> ;\n>\n> We've literally had to copy/paste this all over and it's something \n> that postgres should just 'give you' since it knows the name already, \n> just like when triggers pass you $_TD with all the pertinent information\n>\n> A wishlist item would be for postgres plperl to automatically prepend \n> the function name and schema when throwing perl warnings so you don't \n> have to do your own __WARN__ handler, but this is the next best thing.\n\n\n\nI'm not necessarily opposed to this, but the analogy to $_TD isn't \nreally apt. You can't know the trigger data at compile time, whereas \nyou can know the function's name at compile time, using just the \nmechanism you find irksome.\n\nAnd if we're going to do it for plperl, shouldn't we do it for other PLs?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 11:56:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "\n\nOn 8/29/24 11:56, Andrew Dunstan wrote:\n>\n> On 2024-08-28 We 5:53 PM, Mark Murawski wrote:\n>> Hi Hackers!\n>>\n>> This would be version v1 of this feature\n>>\n>> Basically, the subject says it all: pl/pgperl Patch for being able to \n>> tell which function you're in.\n>> This is a hashref so it will be possible to populate new and exciting \n>> other details in the future as the need arises\n>>\n>> This also greatly improves logging capabilities for things like \n>> catching warnings, Because as it stands right now, there's no \n>> information that can assist with locating the source of a warning \n>> like this:\n>>\n>> # tail -f /var/log/postgresql.log\n>> ******* GOT A WARNING - Use of uninitialized value $prefix in \n>> concatenation (.) or string at (eval 531) line 48.\n>>\n>> Now, with $_FN you can do this:\n>>\n>>\n>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>> plperlu AS $function$\n>>\n>> use warnings;\n>> use strict;\n>> use Data::Dumper;\n>>\n>> $SIG{__WARN__} = sub {\n>> elog(NOTICE, Dumper($_FN));\n>>\n>> print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n>> };\n>>\n>> my $a;\n>> print \"$a\"; # uninit!\n>>\n>> return undef;\n>>\n>> $function$\n>> ;\n>>\n>> This patch is against 12 which is still our production branch. This \n>> could easily be also patched against newer releases as well.\n>>\n>> I've been using this code in production now for about 3 years, it's \n>> greatly helped track down issues. And there shouldn't be anything \n>> platform-specific here, it's all regular perl API\n>>\n>> I'm not sure about adding testing. This is my first postgres patch, \n>> so any guidance on adding regression testing would be appreciated.\n>>\n>> The rationale for this has come from the need to know the source \n>> function name, and we've typically resorted to things like this in \n>> the past:\n>>\n>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>> plperlu AS $function$\n>> my $function_name = 'throw_warning';\n>> $SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n>> $_[0]\\n\"; }\n>> $function$\n>> ;\n>>\n>> We've literally had to copy/paste this all over and it's something \n>> that postgres should just 'give you' since it knows the name already, \n>> just like when triggers pass you $_TD with all the pertinent information\n>>\n>> A wishlist item would be for postgres plperl to automatically prepend \n>> the function name and schema when throwing perl warnings so you don't \n>> have to do your own __WARN__ handler, but this is the next best thing.\n>\n>\n>\n> I'm not necessarily opposed to this, but the analogy to $_TD isn't \n> really apt. You can't know the trigger data at compile time, whereas \n> you can know the function's name at compile time, using just the \n> mechanism you find irksome.\n>\n> And if we're going to do it for plperl, shouldn't we do it for other PLs?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> -- \n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n\nHi Andrew,\n\n\nThanks for the feedback.\n\n\n1) Why is this not similar to _TD? It literally operates identically. \nAt run-time it passes you $_TD for triggers. Same her for \nfunctions. This is all run-time. What exactly is the issue you're \ntrying to point out?\n\n\n2) I would agree that other PLs should get the same detail. I don't \nknow the other ones as I've been only working in pl/perl.\n\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 13:01:18 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "\nOn 2024-08-29 Th 1:01 PM, Mark Murawski wrote:\n>\n>\n> On 8/29/24 11:56, Andrew Dunstan wrote:\n>>\n>> On 2024-08-28 We 5:53 PM, Mark Murawski wrote:\n>>> Hi Hackers!\n>>>\n>>> This would be version v1 of this feature\n>>>\n>>> Basically, the subject says it all: pl/pgperl Patch for being able \n>>> to tell which function you're in.\n>>> This is a hashref so it will be possible to populate new and \n>>> exciting other details in the future as the need arises\n>>>\n>>> This also greatly improves logging capabilities for things like \n>>> catching warnings, Because as it stands right now, there's no \n>>> information that can assist with locating the source of a warning \n>>> like this:\n>>>\n>>> # tail -f /var/log/postgresql.log\n>>> ******* GOT A WARNING - Use of uninitialized value $prefix in \n>>> concatenation (.) or string at (eval 531) line 48.\n>>>\n>>> Now, with $_FN you can do this:\n>>>\n>>>\n>>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>>> plperlu AS $function$\n>>>\n>>> use warnings;\n>>> use strict;\n>>> use Data::Dumper;\n>>>\n>>> $SIG{__WARN__} = sub {\n>>> elog(NOTICE, Dumper($_FN));\n>>>\n>>> print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n>>> };\n>>>\n>>> my $a;\n>>> print \"$a\"; # uninit!\n>>>\n>>> return undef;\n>>>\n>>> $function$\n>>> ;\n>>>\n>>> This patch is against 12 which is still our production branch. This \n>>> could easily be also patched against newer releases as well.\n>>>\n>>> I've been using this code in production now for about 3 years, it's \n>>> greatly helped track down issues. And there shouldn't be anything \n>>> platform-specific here, it's all regular perl API\n>>>\n>>> I'm not sure about adding testing. This is my first postgres patch, \n>>> so any guidance on adding regression testing would be appreciated.\n>>>\n>>> The rationale for this has come from the need to know the source \n>>> function name, and we've typically resorted to things like this in \n>>> the past:\n>>>\n>>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>>> plperlu AS $function$\n>>> my $function_name = 'throw_warning';\n>>> $SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n>>> $_[0]\\n\"; }\n>>> $function$\n>>> ;\n>>>\n>>> We've literally had to copy/paste this all over and it's something \n>>> that postgres should just 'give you' since it knows the name \n>>> already, just like when triggers pass you $_TD with all the \n>>> pertinent information\n>>>\n>>> A wishlist item would be for postgres plperl to automatically \n>>> prepend the function name and schema when throwing perl warnings so \n>>> you don't have to do your own __WARN__ handler, but this is the next \n>>> best thing.\n>>\n>>\n>>\n>> I'm not necessarily opposed to this, but the analogy to $_TD isn't \n>> really apt. You can't know the trigger data at compile time, whereas \n>> you can know the function's name at compile time, using just the \n>> mechanism you find irksome.\n>>\n>> And if we're going to do it for plperl, shouldn't we do it for other \n>> PLs?\n>>\n>>\n>>\n>\n> Hi Andrew,\n>\n>\n> Thanks for the feedback.\n>\n>\n> 1) Why is this not similar to _TD? It literally operates identically. \n> At run-time it passes you $_TD for triggers. Same her for \n> functions. This is all run-time. What exactly is the issue you're \n> trying to point out?\n\n\nIt's not the same as the trigger data case because the function name is \nknowable at compile time, as in fact you have demonstrated. 
You just \nfind it a bit inconvenient to have to code for that knowledge. By \ncontrast, trigger data is ONLY knowable at run time.\n\nBut I don't see that it is such a heavy burden to have to write\n\n my $funcname = \"foo\";\n\nor similar in your function.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 16:54:15 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "\n\nOn 8/29/24 16:54, Andrew Dunstan wrote:\n>\n> On 2024-08-29 Th 1:01 PM, Mark Murawski wrote:\n>>\n>>\n>> On 8/29/24 11:56, Andrew Dunstan wrote:\n>>>\n>>> On 2024-08-28 We 5:53 PM, Mark Murawski wrote:\n>>>> Hi Hackers!\n>>>>\n>>>> This would be version v1 of this feature\n>>>>\n>>>> Basically, the subject says it all: pl/pgperl Patch for being able \n>>>> to tell which function you're in.\n>>>> This is a hashref so it will be possible to populate new and \n>>>> exciting other details in the future as the need arises\n>>>>\n>>>> This also greatly improves logging capabilities for things like \n>>>> catching warnings, Because as it stands right now, there's no \n>>>> information that can assist with locating the source of a warning \n>>>> like this:\n>>>>\n>>>> # tail -f /var/log/postgresql.log\n>>>> ******* GOT A WARNING - Use of uninitialized value $prefix in \n>>>> concatenation (.) or string at (eval 531) line 48.\n>>>>\n>>>> Now, with $_FN you can do this:\n>>>>\n>>>>\n>>>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>>>> plperlu AS $function$\n>>>>\n>>>> use warnings;\n>>>> use strict;\n>>>> use Data::Dumper;\n>>>>\n>>>> $SIG{__WARN__} = sub {\n>>>> elog(NOTICE, Dumper($_FN));\n>>>>\n>>>> print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n>>>> };\n>>>>\n>>>> my $a;\n>>>> print \"$a\"; # uninit!\n>>>>\n>>>> return undef;\n>>>>\n>>>> $function$\n>>>> ;\n>>>>\n>>>> This patch is against 12 which is still our production branch. This \n>>>> could easily be also patched against newer releases as well.\n>>>>\n>>>> I've been using this code in production now for about 3 years, it's \n>>>> greatly helped track down issues. And there shouldn't be anything \n>>>> platform-specific here, it's all regular perl API\n>>>>\n>>>> I'm not sure about adding testing. This is my first postgres \n>>>> patch, so any guidance on adding regression testing would be \n>>>> appreciated.\n>>>>\n>>>> The rationale for this has come from the need to know the source \n>>>> function name, and we've typically resorted to things like this in \n>>>> the past:\n>>>>\n>>>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>>>> plperlu AS $function$\n>>>> my $function_name = 'throw_warning';\n>>>> $SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n>>>> $_[0]\\n\"; }\n>>>> $function$\n>>>> ;\n>>>>\n>>>> We've literally had to copy/paste this all over and it's something \n>>>> that postgres should just 'give you' since it knows the name \n>>>> already, just like when triggers pass you $_TD with all the \n>>>> pertinent information\n>>>>\n>>>> A wishlist item would be for postgres plperl to automatically \n>>>> prepend the function name and schema when throwing perl warnings so \n>>>> you don't have to do your own __WARN__ handler, but this is the \n>>>> next best thing.\n>>>\n>>>\n>>>\n>>> I'm not necessarily opposed to this, but the analogy to $_TD isn't \n>>> really apt. You can't know the trigger data at compile time, \n>>> whereas you can know the function's name at compile time, using just \n>>> the mechanism you find irksome.\n>>>\n>>> And if we're going to do it for plperl, shouldn't we do it for other \n>>> PLs?\n>>>\n>>>\n>>>\n>>\n>> Hi Andrew,\n>>\n>>\n>> Thanks for the feedback.\n>>\n>>\n>> 1) Why is this not similar to _TD? It literally operates \n>> identically. At run-time it passes you $_TD for triggers. Same her \n>> for functions. This is all run-time. 
What exactly is the issue \n>> you're trying to point out?\n>\n>\n> It's not the same as the trigger data case because the function name \n> is knowable at compile time, as in fact you have demonstrated. You \n> just find it a bit inconvenient to have to code for that knowledge. By \n> contrast, trigger data is ONLY knowable at run time.\n>\n> But I don't see that it is such a heavy burden to have to write\n>\n> my $funcname = \"foo\";\n>\n> or similar in your function.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> -- \n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n>\n\nUnderstood, regarding knowability. Trigger data is definitely going to \nbe very dynamic in that regard.\n\nNo, It's not a heavy burden to hard code the function name. But what my \nideal goal would be is this:\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\nuse 'PostgresWarnHandler'; # <--- imagine this registers a WARN handler \nand outputs $_FN->{name} for you as part of the final warning\n\nmy $a;\nprint $a;\n\n.... etc\n\n\nand then there's nothing 'hard coded' regarding the name of the \nfunction, anywhere. It just seems nonsensical that postgres plperl \ncan't send you the name of your registered function and there's \nabsolutely no way to get it other than hard coding the name \n(redundantly, considering you're already know the name when you're \ndefining the function in the first place)\n\nBut even better would be catching the warn at the plperl level, \nprepending the function name to the warn message, and then you only need:\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\n\nmy $a;\nprint $a;\n\n.... etc\n\nAnd then in this hypothetical situation -- magic ensues, and you're left \nwith this:\n# tail -f /var/log/postgresql.log\n******* GOT A WARNING - Use of uninitialized value $a in concatenation \n(.) or string in function public.throw_warning() line 1\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 17:50:35 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "\nOn 2024-08-29 Th 5:50 PM, Mark Murawski wrote:\n>\n>\n> On 8/29/24 16:54, Andrew Dunstan wrote:\n>>\n>> On 2024-08-29 Th 1:01 PM, Mark Murawski wrote:\n>>>\n>>>\n>>> On 8/29/24 11:56, Andrew Dunstan wrote:\n>>>>\n>>>> On 2024-08-28 We 5:53 PM, Mark Murawski wrote:\n>>>>> Hi Hackers!\n>>>>>\n>>>>> This would be version v1 of this feature\n>>>>>\n>>>>> Basically, the subject says it all: pl/pgperl Patch for being able \n>>>>> to tell which function you're in.\n>>>>> This is a hashref so it will be possible to populate new and \n>>>>> exciting other details in the future as the need arises\n>>>>>\n>>>>> This also greatly improves logging capabilities for things like \n>>>>> catching warnings, Because as it stands right now, there's no \n>>>>> information that can assist with locating the source of a warning \n>>>>> like this:\n>>>>>\n>>>>> # tail -f /var/log/postgresql.log\n>>>>> ******* GOT A WARNING - Use of uninitialized value $prefix in \n>>>>> concatenation (.) or string at (eval 531) line 48.\n>>>>>\n>>>>> Now, with $_FN you can do this:\n>>>>>\n>>>>>\n>>>>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>>>>> plperlu AS $function$\n>>>>>\n>>>>> use warnings;\n>>>>> use strict;\n>>>>> use Data::Dumper;\n>>>>>\n>>>>> $SIG{__WARN__} = sub {\n>>>>> elog(NOTICE, Dumper($_FN));\n>>>>>\n>>>>> print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n>>>>> };\n>>>>>\n>>>>> my $a;\n>>>>> print \"$a\"; # uninit!\n>>>>>\n>>>>> return undef;\n>>>>>\n>>>>> $function$\n>>>>> ;\n>>>>>\n>>>>> This patch is against 12 which is still our production branch. \n>>>>> This could easily be also patched against newer releases as well.\n>>>>>\n>>>>> I've been using this code in production now for about 3 years, \n>>>>> it's greatly helped track down issues. And there shouldn't be \n>>>>> anything platform-specific here, it's all regular perl API\n>>>>>\n>>>>> I'm not sure about adding testing. This is my first postgres \n>>>>> patch, so any guidance on adding regression testing would be \n>>>>> appreciated.\n>>>>>\n>>>>> The rationale for this has come from the need to know the source \n>>>>> function name, and we've typically resorted to things like this in \n>>>>> the past:\n>>>>>\n>>>>> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n>>>>> plperlu AS $function$\n>>>>> my $function_name = 'throw_warning';\n>>>>> $SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n>>>>> $_[0]\\n\"; }\n>>>>> $function$\n>>>>> ;\n>>>>>\n>>>>> We've literally had to copy/paste this all over and it's something \n>>>>> that postgres should just 'give you' since it knows the name \n>>>>> already, just like when triggers pass you $_TD with all the \n>>>>> pertinent information\n>>>>>\n>>>>> A wishlist item would be for postgres plperl to automatically \n>>>>> prepend the function name and schema when throwing perl warnings \n>>>>> so you don't have to do your own __WARN__ handler, but this is the \n>>>>> next best thing.\n>>>>\n>>>>\n>>>>\n>>>> I'm not necessarily opposed to this, but the analogy to $_TD isn't \n>>>> really apt. You can't know the trigger data at compile time, \n>>>> whereas you can know the function's name at compile time, using \n>>>> just the mechanism you find irksome.\n>>>>\n>>>> And if we're going to do it for plperl, shouldn't we do it for \n>>>> other PLs?\n>>>>\n>>>>\n>>>>\n>>>\n>>> Hi Andrew,\n>>>\n>>>\n>>> Thanks for the feedback.\n>>>\n>>>\n>>> 1) Why is this not similar to _TD? It literally operates \n>>> identically. 
At run-time it passes you $_TD for triggers. Same her \n>>> for functions. This is all run-time. What exactly is the issue \n>>> you're trying to point out?\n>>\n>>\n>> It's not the same as the trigger data case because the function name \n>> is knowable at compile time, as in fact you have demonstrated. You \n>> just find it a bit inconvenient to have to code for that knowledge. \n>> By contrast, trigger data is ONLY knowable at run time.\n>>\n>> But I don't see that it is such a heavy burden to have to write\n>>\n>> my $funcname = \"foo\";\n>>\n>> or similar in your function.\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>> -- \n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n>>\n>>\n>>\n>\n> Understood, regarding knowability. Trigger data is definitely going \n> to be very dynamic in that regard.\n>\n> No, It's not a heavy burden to hard code the function name. But what \n> my ideal goal would be is this:\n>\n> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n> plperlu AS $function$\n> use 'PostgresWarnHandler'; # <--- imagine this registers a WARN \n> handler and outputs $_FN->{name} for you as part of the final warning\n>\n> my $a;\n> print $a;\n>\n> .... etc\n>\n>\n> and then there's nothing 'hard coded' regarding the name of the \n> function, anywhere. It just seems nonsensical that postgres plperl \n> can't send you the name of your registered function and there's \n> absolutely no way to get it other than hard coding the name \n> (redundantly, considering you're already know the name when you're \n> defining the function in the first place)\n>\n> But even better would be catching the warn at the plperl level, \n> prepending the function name to the warn message, and then you only need:\n>\n> CREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE \n> plperlu AS $function$\n>\n> my $a;\n> print $a;\n>\n> .... etc\n>\n> And then in this hypothetical situation -- magic ensues, and you're \n> left with this:\n> # tail -f /var/log/postgresql.log\n> ******* GOT A WARNING - Use of uninitialized value $a in concatenation \n> (.) or string in function public.throw_warning() line 1\n>\n>\n>\n>\n>\n>\n>\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 15:49:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "\nOn 2024-08-30 Fr 3:49 PM, Andrew Dunstan wrote:\n>\n>\n\nSorry for empty reply. Errant finger hit send.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 16:12:20 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "On 8/30/24 16:12, Andrew Dunstan wrote:\n>\n> Sorry for empty reply. Errant finger hit send.\n>\n\n\nNo problem.\n\nSo anyway... my main point is to highlight this:\n\n\n>\n> On 2024-08-29 Th 5:50 PM, Mark Murawski wrote:\n>>\n>>\n>> And then in this hypothetical situation -- magic ensues, and you're \n>> left with this:\n>> # tail -f /var/log/postgresql.log\n>> ******* GOT A WARNING - Use of uninitialized value $a in \n>> concatenation (.) or string in function public.throw_warning() line 1\n>>\n>>\n\n\nThe essential element here is: Why does every single developer who ever \nwants to develop in plperl be forced to figure out (typically at the \nworst possible time) that Postgres doesn't log the source function of \nwarning. And then be forced to hard code their own function name as a \nvariable inside their function. The typical situation is you test your \ncode, you push it to production, and then observe. And then production \ndoes something you didn't expect and throws a warning. With the current \ndesign, you have no idea what code threw the warning and you have to go \ninto every single possible plperl function and throw in hard coded \nfunction names for logging. To me this is highly nonsensical to force \nthis on developers.\n\nPretty much every modern scripting language I've come across, has a way \nto access dynamically: the name of the currently executing function. \nEither by way of a special variable, or a stack trace introspection. \nBeing that this is Perl, sure... we can get caller() or a stack trace. \nBut the design of plperl Postgres functions uses an anonymous coderef to \ndefine the function, so there is no function name defined. I don't see \nthe harm in adding more information so that the running function can \nknow its own name.\n\nMaybe another approach to the fix here is to give the function an actual \nname, and when calling it, now you know where you are executing from. \nBut in perl you cannot define a sub called schema.function, it \ndefinitely does not compile. So you would need some sort of name \nmangling like postgres_plperl_schema_function so it's painfully obvious \nwhere it came from. So... this is why it's handy to just have a \nvariable, since you can format the called schema.function properly.\n\nBut, Ideally the even better solution or just catching and re-throwing \nthe warn handler sounds like it would be the best option. I'll need to \nlook into this more since this is really my first jump into the perl-c \napi and I've never worked with warn handlers at this level.\n\n\n\n\n\n On 8/30/24 16:12, Andrew Dunstan wrote:\n\n\n Sorry for empty reply. Errant finger hit send.\n \n\n\n\n\n No problem.\n\n So anyway... my main point is to highlight this:\n\n\n\n\n\n On 2024-08-29 Th 5:50 PM, Mark Murawski wrote:\n \n\n\n\n And then in this hypothetical situation -- magic ensues, and\n you're left with this:\n \n # tail -f /var/log/postgresql.log\n \n ******* GOT A WARNING - Use of uninitialized value $a in\n concatenation (.) or string in function public.throw_warning()\n line 1\n \n\n\n\n\n\n\n\n The essential element here is: Why does every single developer who\n ever wants to develop in plperl be forced to figure out (typically\n at the worst possible time) that Postgres doesn't log the source\n function of warning. And then be forced to hard code their own\n function name as a variable inside their function. The typical\n situation is you test your code, you push it to production, and then\n observe. 
And then production does something you didn't expect and\n throws a warning. With the current design, you have no idea what\n code threw the warning and you have to go into every single possible\n plperl function and throw in hard coded function names for logging. \n To me this is highly nonsensical to force this on developers.\n\n Pretty much every modern scripting language I've come across, has a\n way to access dynamically: the name of the currently executing\n function. Either by way of a special variable, or a stack trace\n introspection. Being that this is Perl, sure... we can get caller()\n or a stack trace. But the design of plperl Postgres functions uses\n an anonymous coderef to define the function, so there is no function\n name defined. I don't see the harm in adding more information so\n that the running function can know its own name.\n\n Maybe another approach to the fix here is to give the function an\n actual name, and when calling it, now you know where you are\n executing from. But in perl you cannot define a sub called\n schema.function, it definitely does not compile. So you would need\n some sort of name mangling like postgres_plperl_schema_function so\n it's painfully obvious where it came from. So... this is why it's\n handy to just have a variable, since you can format the called\n schema.function properly.\n\n But, Ideally the even better solution or just catching and\n re-throwing the warn handler sounds like it would be the best\n option. I'll need to look into this more since this is really my\n first jump into the perl-c api and I've never worked with warn\n handlers at this level.",
"msg_date": "Fri, 30 Aug 2024 16:46:04 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
}
] |
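A rough sketch of what the plperl.c side of the $_FN idea could look like, for illustration only (this is not the submitted patch): plperl_setup_fn_hash and the "name" key are made-up names, while newHV(), hv_store(), newSVpv(), get_sv(), newRV_noinc() and SvREFCNT_dec() are standard Perl C API calls. A real implementation would follow plperl.c's own context, memory and error-handling conventions.

#include "EXTERN.h"
#include "perl.h"

/*
 * Hypothetical helper: build a hash holding the function's name and install
 * a reference to it as $main::_FN before the plperl function body runs, so
 * the body (or a __WARN__ handler it registers) can report where it is.
 */
static void
plperl_setup_fn_hash(pTHX_ const char *fn_name)
{
    HV  *hv = newHV();
    SV  *fn_ref;

    /* $_FN->{name} = "schema.function" (key name is illustrative) */
    (void) hv_store(hv, "name", 4, newSVpv(fn_name, 0), 0);

    /* install as $main::_FN, creating the variable if needed */
    fn_ref = newRV_noinc((SV *) hv);
    sv_setsv(get_sv("main::_FN", GV_ADD), fn_ref);
    SvREFCNT_dec(fn_ref);
}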
[
{
"msg_contents": "Hi Hackers!\n\nThis would be version v1 of this feature\n\nBasically, the subject says it all: pl/pgperl Patch for being able to \ntell which function you're in.\nThis is a hashref so it will be possible to populate new and exciting \nother details in the future as the need arises\n\nThis also greatly improves logging capabilities for things like catching \nwarnings, Because as it stands right now, there's no information that \ncan assist with locating the source of a warning like this:\n\n# tail -f /var/log/postgresql.log\n******* GOT A WARNING - Use of uninitialized value $prefix in \nconcatenation (.) or string at (eval 531) line 48.\n\nNow, with $_FN you can do this:\n\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\n\nuse warnings;\nuse strict;\nuse Data::Dumper;\n\n$SIG{__WARN__} = sub {\n elog(NOTICE, Dumper($_FN));\n\n print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n};\n\nmy $a;\nprint \"$a\"; # uninit!\n\nreturn undef;\n\n$function$\n;\n\nThis patch is against 12 which is still our production branch. This \ncould easily be also patched against newer releases as well.\n\nI've been using this code in production now for about 3 years, it's \ngreatly helped track down issues. And there shouldn't be anything \nplatform-specific here, it's all regular perl API\n\nI'm not sure about adding testing. This is my first postgres patch, so \nany guidance on adding regression testing would be appreciated.\n\nThe rationale for this has come from the need to know the source \nfunction name, and we've typically resorted to things like this in the past:\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\nmy $function_name = 'throw_warning';\n$SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n$_[0]\\n\"; }\n$function$\n;\n\nWe've literally had to copy/paste this all over and it's something that \npostgres should just 'give you' since it knows the name already, just \nlike when triggers pass you $_TD with all the pertinent information\n\nA wishlist item would be for postgres plperl to automatically prepend \nthe function name and schema when throwing perl warnings so you don't \nhave to do your own __WARN__ handler, but this is the next best thing.",
"msg_date": "Wed, 28 Aug 2024 17:54:46 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "pl/pgperl Patch for adding $_FN detail just like triggers have for\n $_TD"
},
{
"msg_contents": "Hi Hackers!\n\nThis would be version v1 of this feature\n\nBasically, the subject says it all: pl/pgperl Patch for being able to \ntell which function you're in.\nThis is a hashref so it will be possible to populate new and exciting \nother details in the future as the need arises\n\nThis also greatly improves logging capabilities for things like catching \nwarnings, Because as it stands right now, there's no information that \ncan assist with locating the source of a warning like this:\n\n# tail -f /var/log/postgresql.log\n******* GOT A WARNING - Use of uninitialized value $prefix in \nconcatenation (.) or string at (eval 531) line 48.\n\nNow, with $_FN you can do this:\n\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\n\nuse warnings;\nuse strict;\nuse Data::Dumper;\n\n$SIG{__WARN__} = sub {\n elog(NOTICE, Dumper($_FN));\n\n print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n};\n\nmy $a;\nprint \"$a\"; # uninit!\n\nreturn undef;\n\n$function$\n;\n\nThis patch is against 12 which is still our production branch. This \ncould easily be also patched against newer releases as well.\n\nI've been using this code in production now for about 3 years, it's \ngreatly helped track down issues. And there shouldn't be anything \nplatform-specific here, it's all regular perl API\n\nI'm not sure about adding testing. This is my first postgres patch, so \nany guidance on adding regression testing would be appreciated.\n\nThe rationale for this has come from the need to know the source \nfunction name, and we've typically resorted to things like this in the past:\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\nmy $function_name = 'throw_warning';\n$SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n$_[0]\\n\"; }\n$function$\n;\n\nWe've literally had to copy/paste this all over and it's something that \npostgres should just 'give you' since it knows the name already, just \nlike when triggers pass you $_TD with all the pertinent information\n\nA wishlist item would be for postgres plperl to automatically prepend \nthe function name and schema when throwing perl warnings so you don't \nhave to do your own __WARN__ handler, but this is the next best thing.",
"msg_date": "Wed, 28 Aug 2024 18:34:05 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "pl/pgperl Patch for adding $_FN detail just like triggers have for\n $_TD"
},
{
"msg_contents": "Hi Hackers!\n\nThis would be version v1 of this feature\n\nBasically, the subject says it all: pl/pgperl Patch for being able to \ntell which function you're in.\nThis is a hashref so it will be possible to populate new and exciting \nother details in the future as the need arises\n\nThis also greatly improves logging capabilities for things like catching \nwarnings, Because as it stands right now, there's no information that \ncan assist with locating the source of a warning like this:\n\n# tail -f /var/log/postgresql.log\n******* GOT A WARNING - Use of uninitialized value $prefix in \nconcatenation (.) or string at (eval 531) line 48.\n\nNow, with $_FN you can do this:\n\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\n\nuse warnings;\nuse strict;\nuse Data::Dumper;\n\n$SIG{__WARN__} = sub {\n elog(NOTICE, Dumper($_FN));\n\n print STDERR \"In Function: $_FN->{name}: $_[0]\\n\";\n};\n\nmy $a;\nprint \"$a\"; # uninit!\n\nreturn undef;\n\n$function$\n;\n\nThis patch is against 12 which is still our production branch. This \ncould easily be also patched against newer releases as well.\n\nI've been using this code in production now for about 3 years, it's \ngreatly helped track down issues. And there shouldn't be anything \nplatform-specific here, it's all regular perl API\n\nI'm not sure about adding testing. This is my first postgres patch, so \nany guidance on adding regression testing would be appreciated.\n\nThe rationale for this has come from the need to know the source \nfunction name, and we've typically resorted to things like this in the past:\n\nCREATE OR REPLACE FUNCTION throw_warning() RETURNS text LANGUAGE plperlu \nAS $function$\nmy $function_name = 'throw_warning';\n$SIG{__WARN__} = sub { print STDERR \"In Function: $function_name: \n$_[0]\\n\"; }\n$function$\n;\n\nWe've literally had to copy/paste this all over and it's something that \npostgres should just 'give you' since it knows the name already, just \nlike when triggers pass you $_TD with all the pertinent information\n\nA wishlist item would be for postgres plperl to automatically prepend \nthe function name and schema when throwing perl warnings so you don't \nhave to do your own __WARN__ handler, but this is the next best thing.",
"msg_date": "Wed, 28 Aug 2024 18:36:46 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "pl/pgperl Patch for adding $_FN detail just like triggers have for\n $_TD"
},
{
"msg_contents": "We don't really need four copies of this patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Aug 2024 18:38:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
},
{
"msg_contents": "Sorry! I'm having email delivery issues. I thought the first few \ndidn't go through. I'm working through email DKMS problems where we \nwere incompatible with the mailing list.\n\nIt sounds like it's fixed now! Sorry for the spam!\n\n\n\nOn 8/28/24 18:38, Tom Lane wrote:\n> We don't really need four copies of this patch.\n>\n> \t\t\tregards, tom lane\n\n\n\n",
"msg_date": "Wed, 28 Aug 2024 18:39:10 -0400",
"msg_from": "Mark Murawski <markm-lists@intellasoft.net>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgperl Patch for adding $_FN detail just like triggers have\n for $_TD"
}
] |
[
{
"msg_contents": "\nUsually I see printtup in the perf-report with a noticeable ratio. Take\n\"SELECT * FROM pg_class\" for example, we can see:\n\n85.65% 3.25% postgres postgres [.] printtup\n\nThe high level design of printtup is:\n\n1. Used a pre-allocated StringInfo DR_printtup.buf to store data for\neach tuples. \n2. for each datum in the tuple, it calls the type-specific out function\nand get a cstring. \n3. after get the cstring, we figure out the \"len\" and add both len and\n'data' into DR_printtup.buf. \n4. after all the datums are handled, socket_putmessage copies them into\nPqSendBuffer. \n5. When the usage of PgSendBuffer is up to PqSendBufferSize, using send\nsyscall to sent them into client (by copying the data from userspace to\nkernel space again). \n\nPart of the slowness is caused by \"memcpy\", \"strlen\" and palloc in\noutfunction. \n\n8.35% 8.35% postgres libc.so.6 [.] __strlen_avx2 \n4.27% 0.00% postgres libc.so.6 [.] __memcpy_avx_unaligned_erms\n3.93% 3.93% postgres postgres [.] palloc (part of them caused by\nout function) \n5.70% 5.70% postgres postgres [.] AllocSetAlloc (part of them\ncaused by printtup.) \n\nMy high level proposal is define a type specific print function like:\n\noidprint(Datum datum, StringInfo buf)\ntextprint(Datum datum, StringInfo buf)\n\nThis function should append both data and len into buf directly. \n\nfor the oidprint case, we can avoid:\n\n5. the dedicate palloc in oid function.\n6. the memcpy from the above memory into DR_printtup.buf\n\nfor the textprint case, we can avoid\n\n7. strlen, since we can figure out the length from varlena.vl_len\n\nint2/4/8/timestamp/date/time are similar with oid. and numeric, varchar\nare similar with text. This almost covers all the common used type.\n\nHard coding the relationship between common used type and {type}print\nfunction OID looks not cool, Adding a new attribute in pg_type looks too\naggressive however. Anyway this is the next topic to talk about.\n\nIf a type's print function is not defined, we can still using the out \nfunction (and PrinttupAttrInfo caches FmgrInfo rather than\nFunctionCallInfo, so there is some optimization in this step as well). \n\nThis proposal covers the step 2 & 3. If we can do something more\naggressively, we can let the xxxprint print to PqSendBuffer directly,\nbut this is more complex and need some infrastructure changes. the\nmemcpy in step 4 is: \"1.27% __memcpy_avx_unaligned_erms\" in my above\ncase. \n\nWhat do you think?\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 17:40:14 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Make printtup a bit faster"
},
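To make the idea above concrete, here is a minimal sketch of what two such print functions could look like; int4print and textprint are hypothetical names taken from the proposal, while pq_sendint32(), appendBinaryStringInfo(), DatumGetTextPP(), VARSIZE_ANY_EXHDR() and VARDATA_ANY() are existing helpers. Client-encoding conversion and the other details that printtup handles via pq_sendcountedtext() are deliberately ignored, so treat this as a sketch of the shape, not a drop-in replacement.

#include "postgres.h"
#include "fmgr.h"
#include "libpq/pqformat.h"

/* Append the length word and text of an int4 straight into the caller's buffer. */
static void
int4print(Datum value, StringInfo buf)
{
    char    tmp[12];        /* sign + 10 digits + NUL */
    int     len = snprintf(tmp, sizeof(tmp), "%d", DatumGetInt32(value));

    pq_sendint32(buf, (uint32) len);        /* length word, as printtup emits */
    appendBinaryStringInfo(buf, tmp, len);  /* value text, copied once */
}

/* For text the length is already known from the varlena header, so no strlen(). */
static void
textprint(Datum value, StringInfo buf)
{
    text   *t = DatumGetTextPP(value);
    int     len = VARSIZE_ANY_EXHDR(t);

    pq_sendint32(buf, (uint32) len);
    appendBinaryStringInfo(buf, VARDATA_ANY(t), len);
}

The saving in both cases comes from writing into one buffer the caller already owns, instead of palloc'ing a cstring, measuring it, and copying it a second time.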
{
"msg_contents": "On Thu, 29 Aug 2024 at 21:40, Andy Fan <zhihuifan1213@163.com> wrote:\n>\n>\n> Usually I see printtup in the perf-report with a noticeable ratio.\n\n> Part of the slowness is caused by \"memcpy\", \"strlen\" and palloc in\n> outfunction.\n\nYeah, it's a pretty inefficient API from a performance point of view.\n\n> My high level proposal is define a type specific print function like:\n>\n> oidprint(Datum datum, StringInfo buf)\n> textprint(Datum datum, StringInfo buf)\n\nI think what we should do instead is make the output functions take a\nStringInfo and just pass it the StringInfo where we'd like the bytes\nwritten.\n\nThat of course would require rewriting all the output functions for\nall the built-in types, so not a small task. Extensions make that job\nharder. I don't think it would be good to force extensions to rewrite\ntheir output functions, so perhaps some wrapper function could help us\nalign the APIs for extensions that have not been converted yet.\n\nThere's a similar problem with input functions not having knowledge of\nthe input length. You only have to look at textin() to see how useful\nthat could be. Fixing that would probably make COPY FROM horrendously\nfaster. Team that up with SIMD for the delimiter char search and COPY\ngo a bit better still. Neil Conway did propose the SIMD part in [1],\nbut it's just not nearly as good as it could be when having to still\nperform the strlen() calls.\n\nI had planned to work on this for PG18, but I'd be happy for some\nassistance if you're willing.\n\nDavid\n\n[1] https://postgr.es/m/CAOW5sYb1HprQKrzjCsrCP1EauQzZy+njZ-AwBbOUMoGJHJS7Sw@mail.gmail.com\n\n\n",
"msg_date": "Thu, 29 Aug 2024 23:51:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> [ redesign I/O function APIs ]\n> I had planned to work on this for PG18, but I'd be happy for some\n> assistance if you're willing.\n\nI'm skeptical that such a thing will ever be practical. To avoid\nbreaking un-converted data types, all the call sites would have to\nsupport both old and new APIs. To avoid breaking non-core callers,\nall the I/O functions would have to support both old and new APIs.\nThat probably adds enough overhead to negate whatever benefit you'd\nget.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Aug 2024 11:33:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n\nHello David,\n\n>> My high level proposal is define a type specific print function like:\n>>\n>> oidprint(Datum datum, StringInfo buf)\n>> textprint(Datum datum, StringInfo buf)\n>\n> I think what we should do instead is make the output functions take a\n> StringInfo and just pass it the StringInfo where we'd like the bytes\n> written.\n>\n> That of course would require rewriting all the output functions for\n> all the built-in types, so not a small task. Extensions make that job\n> harder. I don't think it would be good to force extensions to rewrite\n> their output functions, so perhaps some wrapper function could help us\n> align the APIs for extensions that have not been converted yet.\n\nI have the similar concern as Tom that this method looks too\naggressive. That's why I said: \n\n\"If a type's print function is not defined, we can still using the out \nfunction.\"\n\nAND\n\n\"Hard coding the relationship between [common] used type and {type}print\nfunction OID looks not cool, Adding a new attribute in pg_type looks too \naggressive however. Anyway this is the next topic to talk about.\"\n\nWhat would be the extra benefit we redesign all the out functions?\n\n> There's a similar problem with input functions not having knowledge of\n> the input length. You only have to look at textin() to see how useful\n> that could be. Fixing that would probably make COPY FROM horrendously\n> faster. Team that up with SIMD for the delimiter char search and COPY\n> go a bit better still. Neil Conway did propose the SIMD part in [1],\n> but it's just not nearly as good as it could be when having to still\n> perform the strlen() calls.\n\nOK, I think I can understand the needs to make in-function knows the\ninput length and good to know the SIMD part for delimiter char\nsearch. strlen looks like a delimiter char search ('\\0') as well. Not\nsure if \"strlen\" has been implemented with SIMD part, but if not, why? \n\n> I had planned to work on this for PG18, but I'd be happy for some\n> assistance if you're willing.\n\nI see you did many amazing work with cache-line-frindly data struct\ndesign, branch predition optimization and SIMD optimization. I'd like to\ntry one myself. I'm not sure if I can meet the target, what if we handle\nthe out/in function separately (can be by different people)? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 08:09:43 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "On Fri, 30 Aug 2024 at 03:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > [ redesign I/O function APIs ]\n> > I had planned to work on this for PG18, but I'd be happy for some\n> > assistance if you're willing.\n>\n> I'm skeptical that such a thing will ever be practical. To avoid\n> breaking un-converted data types, all the call sites would have to\n> support both old and new APIs. To avoid breaking non-core callers,\n> all the I/O functions would have to support both old and new APIs.\n> That probably adds enough overhead to negate whatever benefit you'd\n> get.\n\nScepticism is certainly good when it comes to such a large API change.\nI don't want to argue with you, but I'd like to state a few things\nabout why I think you're wrong on this...\n\nSo, we currently return cstrings in our output functions. Let's take\njsonb_out() as an example, to build that cstring, we make a *new*\nStringInfoData on *every call* inside JsonbToCStringWorker(). That\ngives you 1024 bytes before you need to enlarge it. However, it's\nmaybe not all bad as we have some size estimations there to call\nenlargeStringInfo(), only that's a bit wasteful as it does a\nrepalloc() which memcpys the freshly allocated 1024 bytes allocated in\ninitStringInfo() when it doesn't yet contain any data. After\njsonb_out() has returned and we have the cstring, only we forgot the\nlength of the string, so most places will immediately call strlen() or\ndo that indirectly via appendStringInfoString(). For larger JSON\ndocuments, that'll likely require pulling cachelines back into L1\nagain. I don't know how modern CPU cacheline eviction works, but if it\nwas as simple as FIFO then the strlen() would flush all those\ncachelines only for memcpy() to have to read them back again for\noutput strings larger than L1.\n\nIf we rewrote all of core's output functions to use the new API, then\nthe branch to test the function signature would be perfectly\npredictable and amount to an extra cmp and jne/je opcode. So, I just\ndon't agree with the overheads negating the benefits comment. You're\nprobably off by 1 order of magnitude at the minimum and for\nmedium/large varlena types likely 2-3+ orders. Even a simple int4out\nrequires a palloc()/memcpy. If we were outputting lots of data, e.g.\nin a COPY operation, the output buffer would seldom need to be\nenlarged as it would quickly adjust to the correct size.\n\nFor the input functions, the possible gains are extensive too.\ntextin() is a good example, it uses cstring_to_text(), but could be\nchanged to use cstring_to_text_with_len(). Knowing the input string\nlength also opens the door to SIMD. Take int4in() as an example, if\npg_strtoint32_safe() knew its input length there are a bunch of\nprechecks that could be done with either 64-bit SWAR or with SIMD.\nFor example, if you knew you had an 8-char string of decimal digits\nthen converting that to an int32 is quite cheap. It's impossible to\noverflow an int32 with 8 decimal digits, so no overflow checks need to\nbe done until there are at least 10 decimal digits. ca6fde922 seems\nlike good enough example of the possible gains of SIMD vs\nbyte-at-a-time processing. I saw some queries go 4x faster there and\nthat was me trying to keep the JSON document sizes realistic.\nbyte-at-a-time is just not enough to saturate RAM speed. 
Take DDR5,\nfor example, Wikipedia says it has a bandwidth of 32–64 GB/s, so\nunless we discover room temperature superconductors, we're not going\nto see any massive jump in clock speeds any time soon, and with 5 or\n6Ghz CPUs, there's just no way to get anywhere near that bandwidth by\nprocessing byte-at-a-time. For some sort of nieve strcpy() type\nfunction, you're going to need at least a cmp and mov, even if those\nwere latency=1 (which they're not, see [1]), you can only do 2.5\nbillion of those two per second on a 5Ghz processor. I've done tested,\nbut hypothetically (assuming latency=1) that amounts to processing\n2.5GB/s, i.e. a long way from DDR5 RAM speed and that's not taking\ninto account having to increment pointers to the next byte on each\nloop.\n\nDavid\n\n[1] https://www.agner.org/optimize/instruction_tables.pdf\n\n\n",
"msg_date": "Fri, 30 Aug 2024 12:31:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
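As a small illustration of the fixed-length point above (not PostgreSQL code, just a sketch): once the caller knows the input is exactly eight decimal digits, the per-digit overflow checks of a general-purpose parser can be dropped, because the largest possible value is 99999999. A SWAR or SIMD version would process the eight bytes in one or two steps instead of a loop.

#include <stdint.h>

/* Illustrative only: 8 known-good ASCII digits can never overflow int32. */
static inline int32_t
parse_8_digits(const char *s)
{
    int32_t v = 0;

    for (int i = 0; i < 8; i++)
        v = v * 10 + (s[i] - '0');
    return v;
}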
{
"msg_contents": "On Fri, 30 Aug 2024 at 12:10, Andy Fan <zhihuifan1213@163.com> wrote:\n> What would be the extra benefit we redesign all the out functions?\n\nIf I've understood your proposal correctly, it sounds like you want to\ninvent a new \"print\" output function for each type to output the Datum\nonto a StringInfo, if that's the case, what would be the point of\nhaving both versions? If there's anywhere we call output functions\nwhere the resulting value isn't directly appended to a StringInfo,\nthen we could just use a temporary StringInfo to obtain the cstring\nand its length.\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Aug 2024 12:38:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n\n> On Fri, 30 Aug 2024 at 12:10, Andy Fan <zhihuifan1213@163.com> wrote:\n>> What would be the extra benefit we redesign all the out functions?\n>\n> If I've understood your proposal correctly, it sounds like you want to\n> invent a new \"print\" output function for each type to output the Datum\n> onto a StringInfo,\n\nMostly yes, but not for [each type at once], just for the [common used\ntype], like int2/4/8, float4/8, date/time/timestamp, text/.. and so on.\n\n> if that's the case, what would be the point of having both versions?\n\nThe biggest benefit would be compatibility.\n\nIn my opinion, print function (not need to be in pg_type at all) is as\nan optimization and optional, in some performance critical path we can\nreplace the out-function with printfunction, like (printtup). if such\nperformance-critical path find a type without a print-function is\ndefined, just keep the old way.\n\nKind of like supportfunction for proc, this is for data type? Within\nthis way, changes would be much smaller and step-by-step.\n\n> If there's anywhere we call output functions\n> where the resulting value isn't directly appended to a StringInfo,\n> then we could just use a temporary StringInfo to obtain the cstring\n> and its length.\n\nI think this is true, but it requests some caller's code change.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 09:04:18 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "On Fri, 30 Aug 2024 at 13:04, Andy Fan <zhihuifan1213@163.com> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > If there's anywhere we call output functions\n> > where the resulting value isn't directly appended to a StringInfo,\n> > then we could just use a temporary StringInfo to obtain the cstring\n> > and its length.\n>\n> I think this is true, but it requests some caller's code change.\n\nYeah, calling code would need to be changed to take advantage of the\nnew API, however, the differences in which types support which API\ncould be hidden inside OutputFunctionCall(). That function could just\nfake up a StringInfo for any types that only support the old cstring\nAPI. That means we don't need to add handling for both cases\neverywhere we need to call the output function. It's possible that\ncould make some operations slightly slower when only the old API is\navailable, but then maybe not as we do now have read-only StringInfos.\nMaybe the StringInfoData.data field could just be set to point to the\ngiven cstring using initReadOnlyStringInfo() rather than doing\nappendBinaryStringInfo() onto yet another buffer for the old API.\n\nDavid\n\n\n",
"msg_date": "Fri, 30 Aug 2024 13:13:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
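One possible shape of that compatibility shim, purely as an illustration: DatumPrintFn and append_datum_text are made-up names, while OutputFunctionCall() and appendBinaryStringInfo() are the existing APIs. Callers see a single entry point whether or not the type has been converted to a StringInfo-writing output function.

#include "postgres.h"
#include "fmgr.h"
#include "lib/stringinfo.h"

/* Hypothetical signature for a converted, StringInfo-writing output function */
typedef void (*DatumPrintFn) (Datum value, StringInfo buf);

static void
append_datum_text(StringInfo buf, Datum value,
                  DatumPrintFn printfn,    /* NULL if the type is unconverted */
                  FmgrInfo *outfn)
{
    if (printfn != NULL)
        printfn(value, buf);               /* new style: writes directly */
    else
    {
        /* old style: build a cstring, then copy it once into the buffer */
        char   *cstr = OutputFunctionCall(outfn, value);

        appendBinaryStringInfo(buf, cstr, strlen(cstr));
        pfree(cstr);
    }
}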
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n\n> On Fri, 30 Aug 2024 at 13:04, Andy Fan <zhihuifan1213@163.com> wrote:\n>>\n>> David Rowley <dgrowleyml@gmail.com> writes:\n>> > If there's anywhere we call output functions\n>> > where the resulting value isn't directly appended to a StringInfo,\n>> > then we could just use a temporary StringInfo to obtain the cstring\n>> > and its length.\n>>\n>> I think this is true, but it requests some caller's code change.\n>\n> Yeah, calling code would need to be changed to take advantage of the\n> new API, however, the differences in which types support which API\n> could be hidden inside OutputFunctionCall(). That function could just\n> fake up a StringInfo for any types that only support the old cstring\n> API. That means we don't need to add handling for both cases\n> everywhere we need to call the output function.\n\nWe can do this, then the printtup case (stands for some performance\ncrital path) still need to change discard OutputFunctionCall() since it\nuses the fake StringInfo then a memcpy is needed again IIUC.\n\nBesides above, my major concerns about your proposal need to change [all\nthe type's outfunction at once] which is too aggresive for me. In the\nfresh setup without any extension is created, \"SELECT count(*) FROM\npg_type\" returns 627 already, So other piece of my previous reply is \nmore important to me. \n\nIt is great that both of us feeling the current stategy is not good for\nperformance:) \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 09:34:34 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n\n> On Fri, 30 Aug 2024 at 03:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> David Rowley <dgrowleyml@gmail.com> writes:\n>> > [ redesign I/O function APIs ]\n>> > I had planned to work on this for PG18, but I'd be happy for some\n>> > assistance if you're willing.\n>>\n>> I'm skeptical that such a thing will ever be practical. To avoid\n>> breaking un-converted data types, all the call sites would have to\n>> support both old and new APIs. To avoid breaking non-core callers,\n>> all the I/O functions would have to support both old and new APIs.\n>> That probably adds enough overhead to negate whatever benefit you'd\n>> get.\n>\n> So, we currently return cstrings in our output functions. Let's take\n> jsonb_out() as an example, to build that cstring, we make a *new*\n> StringInfoData on *every call* inside JsonbToCStringWorker(). That\n> gives you 1024 bytes before you need to enlarge it. However, it's\n> maybe not all bad as we have some size estimations there to call\n> enlargeStringInfo(), only that's a bit wasteful as it does a\n> repalloc() which memcpys the freshly allocated 1024 bytes allocated in\n> initStringInfo() when it doesn't yet contain any data. After\n> jsonb_out() has returned and we have the cstring, only we forgot the\n> length of the string, so most places will immediately call strlen() or\n> do that indirectly via appendStringInfoString(). For larger JSON\n> documents, that'll likely require pulling cachelines back into L1\n> again. I don't know how modern CPU cacheline eviction works, but if it\n> was as simple as FIFO then the strlen() would flush all those\n> cachelines only for memcpy() to have to read them back again for\n> output strings larger than L1.\n\nThe attached is PoC of this idea, not matter which method are adopted\n(rewrite all the outfunction or a optional print function), I think the\nbenefit will be similar. In the blew test case, it shows us 10%+\nimprovements. (0.134ms vs 0.110ms)\n\ncreate table demo as select oid as oid1, relname::text as text1, relam,\nrelname::text as text2 from pg_class; \n\npgbench: \nselect * from demo;\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 30 Aug 2024 13:00:50 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "On 8/29/24 1:51 PM, David Rowley wrote:\n> I had planned to work on this for PG18, but I'd be happy for some\n> assistance if you're willing.\n\nI am interested in working on this, unless Andy Fan wants to do this \nwork. :) I believe that optimizing the out, in and send functions would \nbe worth the pain. I get Tom's objections but I do not think adding a \nsmall check would add much overhead compared to the gains we can get.\n\nAnd given that all of in, out and send could be optimized I do not like \nthe idea of duplicating all three in the catalog.\n\nDavid, have you given any thought on the cleanest way to check for if \nthe new API or the old is the be used for these functions? If not I can \nfigure out something myself, just wondering if you already had something \nin mind.\n\nAndreas\n\n\n\n",
"msg_date": "Fri, 30 Aug 2024 18:20:39 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "Andy Fan <zhihuifan1213@163.com> writes:\n\n> The attached is PoC of this idea, not matter which method are adopted\n> (rewrite all the outfunction or a optional print function), I think the\n> benefit will be similar. In the blew test case, it shows us 10%+\n> improvements. (0.134ms vs 0.110ms)\n\nAfter working on more {type}_print functions, I'm thinking it is pretty\nlike the 3rd IO function which shows some confused maintainence\neffort. so I agree refactoring the existing out function is a better\nidea. I'd like to work on _print function body first for easy review and\ntesting. after all, if some common issues exists in these changes,\nit is better to know that before we working on the 700+ out functions.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Mon, 02 Sep 2024 03:18:23 +0000",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "\nHello David & Andreas, \n\n> On 8/29/24 1:51 PM, David Rowley wrote:\n>> I had planned to work on this for PG18, but I'd be happy for some\n>> assistance if you're willing.\n>\n> I am interested in working on this, unless Andy Fan wants to do this\n> work. :) I believe that optimizing the out, in and send functions would\n> be worth the pain. I get Tom's objections but I do not think adding a\n> small check would add much overhead compared to the gains we can get.\n\nJust to be clearer, I'd like work on the out function only due to my\ninternal assignment. (Since David planned it for PG18, so it is better\nsay things clearer eariler). I'd put parts of out(print) function\nrefactor in the next 2 days. I think it deserves a double check before\nworking on *all* the out function.\n\nselect count(*), count(distinct typoutput) from pg_type;\n count | count \n-------+-------\n 621 | 97\n(1 row)\n\nselect typoutput, count(*) from pg_type group by typoutput having\n count(*) > 1 order by 2 desc;\n \n typoutput | count \n-----------------+-------\n array_out | 296\n record_out | 214\n multirange_out | 6\n range_out | 6\n varcharout | 3\n int4out | 2\n timestamptz_out | 2\n nameout | 2\n textout | 2\n(9 rows)\n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 09:02:50 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": "Andy Fan <zhihuifan1213@163.com> writes:\n> Just to be clearer, I'd like work on the out function only due to my\n> internal assignment. (Since David planned it for PG18, so it is better\n> say things clearer eariler). I'd put parts of out(print) function\n> refactor in the next 2 days. I think it deserves a double check before\n> working on *all* the out function.\n\nWell, sure. You *cannot* write a patch that breaks existing output\nfunctions. Not at the start, and not at the end either. You\nshould focus on writing the infrastructure and, for starters,\nconverting just a few output functions as a demonstration. If\nthat gets accepted then you can work on converting other output\nfunctions a few at a time. But they'll never all be done, because\nwe can't realistically force extensions to convert.\n\nThere are lots of examples of similar incremental conversions in our\nproject's history. I think the most recent example is the \"soft error\nhandling\" work (d9f7f5d32, ccff2d20e, and many follow-on patches).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2024 21:18:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make printtup a bit faster"
},
{
"msg_contents": ">> ... I'd put parts of out(print) function\n>> refactor in the next 2 days. I think it deserves a double check before\n>> working on *all* the out function.\n>\n> Well, sure. You *cannot* write a patch that breaks existing output\n> functions. Not at the start, and not at the end either. You\n> should focus on writing the infrastructure and, for starters,\n> converting just a few output functions as a demonstration. If\n> that gets accepted then you can work on converting other output\n> functions a few at a time. But they'll never all be done, because\n> we can't realistically force extensions to convert.\n>\n> There are lots of examples of similar incremental conversions in our\n> project's history. I think the most recent example is the \"soft error\n> handling\" work (d9f7f5d32, ccff2d20e, and many follow-on patches).\n\nThank you for this example! What I want is a smaller step than you said.\nOur goal is to make out function take an extra StringInfo input to avoid\nthe extra palloc, memcpy, strlen. so the *final state* is:\n\n1). implement all the out functions with (datum, StringInfo) as inputs.\n2). change all the caller like printtup or any other function.\n3). any extensions which doesn't in core has to change their out function\nfor their data type. The patch in this thread can't help in this area,\nbut I guess it would not be very hard for extension's author.\n\nThe current (intermediate) stage is:\n- I finished parts of step (1), 17 functions in toally. and named it as\nprint function, the function body is exactly same as the out function in\nfinal stage, so this part is reviewable.\n- I use them in printtup user case. so it is testable (for correctness\nand performance test purpose).\n\nso I want some of you can have a double check on these function bodies, if\nanything wrong, I can change it easlier (vs I made the same efforts on\nall the type function). does it make sense?\n\nPatch 0001 ~ 0003 is something related and can be reviewed or committed\nseperately. and 0004 is the main part of the above. \n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 12 Sep 2024 10:31:41 +0000",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": true,
"msg_subject": "Re: Make printtup a bit faster"
}
] |
[
{
"msg_contents": "Our tool have detected that postgre in the version of REL9_6_18~ REL9_6_24\nmay also affected by the vulnerability CVE-2022-2625. The vulnerability\ndatabase does not include these versions and you may not fix it in the\nREL9_6 branch. Is there a need to backport the patch of CVE-2022-2625?\n\nOur tool have detected that postgre in the version of REL9_6_18~\n\nREL9_6_24 may also affected by the vulnerability \n\nCVE-2022-2625. The vulnerability database does not include these versions and you may not fix it in the REL9_6 branch. Is there a need to backport the patch of CVE-2022-2625?",
"msg_date": "Thu, 29 Aug 2024 20:54:44 +0800",
"msg_from": "James Watt <crispy.james.watt@gmail.com>",
"msg_from_op": true,
"msg_subject": "[BUG] Security bugs affected version detected."
},
{
"msg_contents": "> On 29 Aug 2024, at 14:54, James Watt <crispy.james.watt@gmail.com> wrote:\n> \n> Our tool have detected that postgre in the version of REL9_6_18~ REL9_6_24 may also affected by the vulnerability CVE-2022-2625. The vulnerability database does not include these versions and you may not fix it in the REL9_6 branch. Is there a need to backport the patch of CVE-2022-2625?\n\n9.6 was EOL at the time of 2022-2625 being announced and thus wasn't considered\nfor a backport of the fix, the project only applies fixes to supported\nversions. Anyone still running 9.6 in production is highly recommended to\nupgrade to a supported version.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 15:00:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Security bugs affected version detected."
}
] |
[
{
"msg_contents": "Hello,\n\nThis patch was a bit discussed on [1], and with more details on [2]. It\nintroduces four new columns in pg_stat_all_tables:\n\n* parallel_seq_scan\n* last_parallel_seq_scan\n* parallel_idx_scan\n* last_parallel_idx_scan\n\nand two new columns in pg_stat_all_indexes:\n\n* parallel_idx_scan\n* last_parallel_idx_scan\n\nAs Benoit said yesterday, the intent is to help administrators evaluate the\nusage of parallel workers in their databases and help configuring\nparallelization usage.\n\nA test script (test.sql) is attached. You can execute it with \"psql -Xef\ntest.sql your_database\" (your_database should not contain a t1 table as it\nwill be dropped and recreated).\n\nHere is its result, a bit commented:\n\nDROP TABLE IF EXISTS t1;\nDROP TABLE\nCREATE TABLE t1 (id integer);\nCREATE TABLE\nINSERT INTO t1 SELECT generate_series(1, 10_000_000);\nINSERT 0 10000000\nVACUUM ANALYZE t1;\nVACUUM\nSELECT relname, seq_scan, last_seq_scan, parallel_seq_scan,\nlast_parallel_seq_scan FROM pg_stat_user_tables WHERE relname='t1'\n-[ RECORD 1 ]----------+---\nrelname | t1\nseq_scan | 0\nlast_seq_scan |\nparallel_seq_scan | 0\nlast_parallel_seq_scan |\n\n==> no scan at all, the table has just been created\n\nSELECT * FROM t1 LIMIT 1;\n id\n----\n 1\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT relname, seq_scan, last_seq_scan, parallel_seq_scan,\nlast_parallel_seq_scan FROM pg_stat_user_tables WHERE relname='t1'\n-[ RECORD 1 ]----------+------------------------------\nrelname | t1\nseq_scan | 1\nlast_seq_scan | 2024-08-29 15:43:17.377182+02\nparallel_seq_scan | 0\nlast_parallel_seq_scan |\n\n==> one sequential scan, no parallelization\n\nSELECT count(*) FROM t1;\n count\n----------\n 10000000\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT relname, seq_scan, last_seq_scan, parallel_seq_scan,\nlast_parallel_seq_scan FROM pg_stat_user_tables WHERE relname='t1'\n-[ RECORD 1 ]----------+------------------------------\nrelname | t1\nseq_scan | 4\nlast_seq_scan | 2024-08-29 15:43:18.504533+02\nparallel_seq_scan | 3\nlast_parallel_seq_scan | 2024-08-29 15:43:18.504533+02\n\n==> one parallel sequential scan\n==> I use the default configuration, so parallel_leader_participation = on,\nmax_parallel_workers_per_gather = 2\n==> meaning 3 parallel sequential scans (1 leader, two workers)\n==> take note that seq_scan was also incremented... 
we didn't change the\nprevious behaviour for this column\n\nCREATE INDEX ON t1(id);\nCREATE INDEX\nSELECT\nindexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch\nFROM pg_stat_user_indexes WHERE relname='t1'\n-[ RECORD 1 ]----------+----------\nindexrelname | t1_id_idx\nidx_scan | 0\nlast_idx_scan |\nparallel_idx_scan | 0\nlast_parallel_idx_scan |\nidx_tup_read | 0\nidx_tup_fetch | 0\n\n==> no scan at all, the index has just been created\n\nSELECT * FROM t1 WHERE id=150000;\n id\n--------\n 150000\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT\nindexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch\nFROM pg_stat_user_indexes WHERE relname='t1'\n-[ RECORD 1 ]----------+------------------------------\nindexrelname | t1_id_idx\nidx_scan | 1\nlast_idx_scan | 2024-08-29 15:43:22.020853+02\nparallel_idx_scan | 0\nlast_parallel_idx_scan |\nidx_tup_read | 1\nidx_tup_fetch | 0\n\n==> one index scan, no parallelization\n\nSELECT * FROM t1 WHERE id BETWEEN 100000 AND 400000;\nSELECT pg_sleep(1);\n pg_sleep\n----------\n\n(1 row)\n\nSELECT\nindexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch\nFROM pg_stat_user_indexes WHERE relname='t1'\n-[ RECORD 1 ]----------+------------------------------\nindexrelname | t1_id_idx\nidx_scan | 2\nlast_idx_scan | 2024-08-29 15:43:23.136665+02\nparallel_idx_scan | 0\nlast_parallel_idx_scan |\nidx_tup_read | 300002\nidx_tup_fetch | 0\n\n==> another index scan, no parallelization\n\nSELECT count(*) FROM t1 WHERE id BETWEEN 100000 AND 400000;\n count\n--------\n 300001\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT\nindexrelname,idx_scan,last_idx_scan,parallel_idx_scan,last_parallel_idx_scan,idx_tup_read,idx_tup_fetch\nFROM pg_stat_user_indexes WHERE relname='t1'\n-[ RECORD 1 ]----------+-----------------------------\nindexrelname | t1_id_idx\nidx_scan | 5\nlast_idx_scan | 2024-08-29 15:43:24.16057+02\nparallel_idx_scan | 3\nlast_parallel_idx_scan | 2024-08-29 15:43:24.16057+02\nidx_tup_read | 600003\nidx_tup_fetch | 0\n\n==> one parallel index scan\n==> same thing, 3 parallel index scans (1 leader, two workers)\n==> also, take note that idx_scan was also incremented... we didn't change\nthe previous behaviour for this column\n\nFirst time I had to add new columns to a statistics catalog. I'm actually\nnot sure that we were right to change pg_proc.dat manually. We'll probably\nhave to fix this.\n\nDocumentation is done, but maybe we should also add that seq_scan and\nidx_scan also include parallel scan.\n\nYet to be done: tests. Once there's an agreement on this patch, we'll work\non the tests.\n\nThis has been a collective work with Benoit Lobréau, Jehan-Guillaume de\nRorthais, and Franck Boudehen.\n\nThanks.\n\nRegards.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/b4220d15-2e21-0e98-921b-b9892543cc93%40dalibo.com\n[2]\nhttps://www.postgresql.org/message-id/flat/d657df20-c4bf-63f6-e74c-cb85a81d0383%40dalibo.com\n\n\n-- \nGuillaume.",
"msg_date": "Thu, 29 Aug 2024 16:04:05 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 29, 2024 at 04:04:05PM +0200, Guillaume Lelarge wrote:\n> Hello,\n> \n> This patch was a bit discussed on [1], and with more details on [2]. It\n> introduces four new columns in pg_stat_all_tables:\n> \n> * parallel_seq_scan\n> * last_parallel_seq_scan\n> * parallel_idx_scan\n> * last_parallel_idx_scan\n> \n> and two new columns in pg_stat_all_indexes:\n> \n> * parallel_idx_scan\n> * last_parallel_idx_scan\n> \n> As Benoit said yesterday, the intent is to help administrators evaluate the\n> usage of parallel workers in their databases and help configuring\n> parallelization usage.\n\nThanks for the patch. I think that's a good idea to provide more instrumentation\nin this area. So, +1 regarding this patch.\n\nA few random comments:\n\n1 ===\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>parallel_seq_scan</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of parallel sequential scans initiated on this table\n+ </para></entry>\n+ </row>\n\nI wonder if we should not update the seq_scan too to indicate that it\nincludes the parallel_seq_scan.\n\nSame kind of comment for last_seq_scan, idx_scan and last_idx_scan.\n\n2 ===\n\n@@ -410,6 +410,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)\n */\n if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)\n pgstat_count_heap_scan(scan->rs_base.rs_rd);\n+ if (scan->rs_base.rs_parallel != NULL)\n+ pgstat_count_parallel_heap_scan(scan->rs_base.rs_rd);\n\nIndentation seems broken.\n\nShouldn't the parallel counter relies on the \"scan->rs_base.rs_flags & SO_TYPE_SEQSCAN\"\ntest too?\n\nWhat about to get rid of the pgstat_count_parallel_heap_scan and add an extra\nbolean parameter to pgstat_count_heap_scan to indicate if counts.parallelnumscans\nshould be incremented too?\n\nSomething like:\n\npgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel != NULL)\n\n3 ===\n\nSame comment for pgstat_count_index_scan (add an extra bolean parameter) and\nget rid of pgstat_count_parallel_index_scan()).\n\nI think that 2 === and 3 === would help to avoid missing increments should we\nadd those call to other places in the future.\n\n4 ===\n\n+ if (lstats->counts.numscans || lstats->counts.parallelnumscans)\n\nIs it possible to have (lstats->counts.parallelnumscans) whithout having\n(lstats->counts.numscans) ?\n\n> First time I had to add new columns to a statistics catalog. I'm actually\n> not sure that we were right to change pg_proc.dat manually.\n\nI think that's the right way to do.\n\nI don't see a CF entry for this patch. Would you mind creating one so that\nwe don't lost track of it?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 08:47:11 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Hi,\n\nLe mer. 4 sept. 2024 à 10:47, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\na écrit :\n\n> Hi,\n>\n> On Thu, Aug 29, 2024 at 04:04:05PM +0200, Guillaume Lelarge wrote:\n> > Hello,\n> >\n> > This patch was a bit discussed on [1], and with more details on [2]. It\n> > introduces four new columns in pg_stat_all_tables:\n> >\n> > * parallel_seq_scan\n> > * last_parallel_seq_scan\n> > * parallel_idx_scan\n> > * last_parallel_idx_scan\n> >\n> > and two new columns in pg_stat_all_indexes:\n> >\n> > * parallel_idx_scan\n> > * last_parallel_idx_scan\n> >\n> > As Benoit said yesterday, the intent is to help administrators evaluate\n> the\n> > usage of parallel workers in their databases and help configuring\n> > parallelization usage.\n>\n> Thanks for the patch. I think that's a good idea to provide more\n> instrumentation\n> in this area. So, +1 regarding this patch.\n>\n>\nThanks.\n\n\n> A few random comments:\n>\n> 1 ===\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>parallel_seq_scan</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of parallel sequential scans initiated on this table\n> + </para></entry>\n> + </row>\n>\n> I wonder if we should not update the seq_scan too to indicate that it\n> includes the parallel_seq_scan.\n>\n> Same kind of comment for last_seq_scan, idx_scan and last_idx_scan.\n>\n>\nYeah, not sure why I didn't do it at first. I was wondering the same thing.\nThe patch attached does this.\n\n\n> 2 ===\n>\n> @@ -410,6 +410,8 @@ initscan(HeapScanDesc scan, ScanKey key, bool\n> keep_startblock)\n> */\n> if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)\n> pgstat_count_heap_scan(scan->rs_base.rs_rd);\n> + if (scan->rs_base.rs_parallel != NULL)\n> + pgstat_count_parallel_heap_scan(scan->rs_base.rs_rd);\n>\n> Indentation seems broken.\n>\n>\nMy bad, sorry. Fixed in the attached patch.\n\n\n> Shouldn't the parallel counter relies on the \"scan->rs_base.rs_flags &\n> SO_TYPE_SEQSCAN\"\n> test too?\n>\n>\nYou're right. Fixed in the attached patch.\n\n\n> What about to get rid of the pgstat_count_parallel_heap_scan and add an\n> extra\n> bolean parameter to pgstat_count_heap_scan to indicate if\n> counts.parallelnumscans\n> should be incremented too?\n>\n> Something like:\n>\n> pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel !=\n> NULL)\n>\n> 3 ===\n>\n> Same comment for pgstat_count_index_scan (add an extra bolean parameter)\n> and\n> get rid of pgstat_count_parallel_index_scan()).\n>\n> I think that 2 === and 3 === would help to avoid missing increments should\n> we\n> add those call to other places in the future.\n>\n>\nOh OK, understood. Done for both.\n\n4 ===\n>\n> + if (lstats->counts.numscans || lstats->counts.parallelnumscans)\n>\n> Is it possible to have (lstats->counts.parallelnumscans) whithout having\n> (lstats->counts.numscans) ?\n>\n>\nNope, parallel scans are included in seq/index scans, as far as I can tell.\nI could remove the parallelnumscans testing but it would be less obvious to\nread.\n\n\n> > First time I had to add new columns to a statistics catalog. I'm actually\n> > not sure that we were right to change pg_proc.dat manually.\n>\n> I think that's the right way to do.\n>\n>\nOK, new patch attached.\n\n\n> I don't see a CF entry for this patch. Would you mind creating one so that\n> we don't lost track of it?\n>\n>\nI don't mind adding it, though I don't know if I should add it to the\nSeptember or November commit fest. 
Which one should I choose?\n\nThanks.\n\nRegards.\n\n\n-- \nGuillaume.",
"msg_date": "Wed, 4 Sep 2024 14:51:51 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:\n> Le mer. 4 sept. 2024 � 10:47, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\n> a �crit :\n> \n> > I don't see a CF entry for this patch. Would you mind creating one so that\n> > we don't lost track of it?\n> >\n> >\n> I don't mind adding it, though I don't know if I should add it to the\n> September or November commit fest. Which one should I choose?\n\nThanks! That should be the November one (as the September one already started).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 12:58:47 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Le mer. 4 sept. 2024 à 14:58, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\na écrit :\n\n> Hi,\n>\n> On Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:\n> > Le mer. 4 sept. 2024 ą 10:47, Bertrand Drouvot <\n> bertranddrouvot.pg@gmail.com>\n> > a écrit :\n> >\n> > > I don't see a CF entry for this patch. Would you mind creating one so\n> that\n> > > we don't lost track of it?\n> > >\n> > >\n> > I don't mind adding it, though I don't know if I should add it to the\n> > September or November commit fest. Which one should I choose?\n>\n> Thanks! That should be the November one (as the September one already\n> started).\n>\n>\nI should have gone to the commit fest website, it says the same. I had the\nrecollection that it started on the 15th. Anyway, added to the november\ncommit fest (https://commitfest.postgresql.org/50/5238/).\n\n\n-- \nGuillaume.\n\nLe mer. 4 sept. 2024 à 14:58, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> a écrit :Hi,\n\nOn Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:\n> Le mer. 4 sept. 2024 ą 10:47, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\n> a écrit :\n> \n> > I don't see a CF entry for this patch. Would you mind creating one so that\n> > we don't lost track of it?\n> >\n> >\n> I don't mind adding it, though I don't know if I should add it to the\n> September or November commit fest. Which one should I choose?\n\nThanks! That should be the November one (as the September one already started).\nI should have gone to the commit fest website, it says the same. I had the recollection that it started on the 15th. Anyway, added to the november commit fest (https://commitfest.postgresql.org/50/5238/). -- Guillaume.",
"msg_date": "Wed, 4 Sep 2024 15:25:03 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:\n> Hi,\n> \n> Le mer. 4 sept. 2024 � 10:47, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\n> a �crit :\n> \n> > What about to get rid of the pgstat_count_parallel_heap_scan and add an\n> > extra\n> > bolean parameter to pgstat_count_heap_scan to indicate if\n> > counts.parallelnumscans\n> > should be incremented too?\n> >\n> > Something like:\n> >\n> > pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel !=\n> > NULL)\n> >\n> > 3 ===\n> >\n> > Same comment for pgstat_count_index_scan (add an extra bolean parameter)\n> > and\n> > get rid of pgstat_count_parallel_index_scan()).\n> >\n> > I think that 2 === and 3 === would help to avoid missing increments should\n> > we\n> > add those call to other places in the future.\n> >\n> >\n> Oh OK, understood. Done for both.\n\nThanks for v2!\n\n1 ===\n\n-#define pgstat_count_heap_scan(rel)\n+#define pgstat_count_heap_scan(rel, parallel)\n do {\n- if (pgstat_should_count_relation(rel))\n- (rel)->pgstat_info->counts.numscans++;\n+ if (pgstat_should_count_relation(rel)) {\n+ if (!parallel)\n+ (rel)->pgstat_info->counts.numscans++;\n+ else\n+ (rel)->pgstat_info->counts.parallelnumscans++;\n+ }\n\nI think counts.numscans has to be incremented in all the cases (so even if\n\"parallel\" is true).\n\nSame comment for pgstat_count_index_scan().\n\n> 4 ===\n> >\n> > + if (lstats->counts.numscans || lstats->counts.parallelnumscans)\n> >\n> > Is it possible to have (lstats->counts.parallelnumscans) whithout having\n> > (lstats->counts.numscans) ?\n> >\n> >\n> Nope, parallel scans are included in seq/index scans, as far as I can tell.\n> I could remove the parallelnumscans testing but it would be less obvious to\n> read.\n\n2 ===\n\nWhat about adding a comment instead of this extra check?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 14:18:57 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Hi,\n\nLe mer. 4 sept. 2024 à 16:18, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\na écrit :\n\n> Hi,\n>\n> On Wed, Sep 04, 2024 at 02:51:51PM +0200, Guillaume Lelarge wrote:\n> > Hi,\n> >\n> > Le mer. 4 sept. 2024 à 10:47, Bertrand Drouvot <\n> bertranddrouvot.pg@gmail.com>\n> > a écrit :\n> >\n> > > What about to get rid of the pgstat_count_parallel_heap_scan and add an\n> > > extra\n> > > bolean parameter to pgstat_count_heap_scan to indicate if\n> > > counts.parallelnumscans\n> > > should be incremented too?\n> > >\n> > > Something like:\n> > >\n> > > pgstat_count_heap_scan(scan->rs_base.rs_rd, scan->rs_base.rs_parallel\n> !=\n> > > NULL)\n> > >\n> > > 3 ===\n> > >\n> > > Same comment for pgstat_count_index_scan (add an extra bolean\n> parameter)\n> > > and\n> > > get rid of pgstat_count_parallel_index_scan()).\n> > >\n> > > I think that 2 === and 3 === would help to avoid missing increments\n> should\n> > > we\n> > > add those call to other places in the future.\n> > >\n> > >\n> > Oh OK, understood. Done for both.\n>\n> Thanks for v2!\n>\n> 1 ===\n>\n> -#define pgstat_count_heap_scan(rel)\n> +#define pgstat_count_heap_scan(rel, parallel)\n> do {\n> - if (pgstat_should_count_relation(rel))\n> - (rel)->pgstat_info->counts.numscans++;\n> + if (pgstat_should_count_relation(rel)) {\n> + if (!parallel)\n> + (rel)->pgstat_info->counts.numscans++;\n> + else\n> +\n> (rel)->pgstat_info->counts.parallelnumscans++;\n> + }\n>\n> I think counts.numscans has to be incremented in all the cases (so even if\n> \"parallel\" is true).\n>\n> Same comment for pgstat_count_index_scan().\n>\n>\nYou're right, and I've been too quick. Fixed in v3.\n\n\n> > 4 ===\n> > >\n> > > + if (lstats->counts.numscans || lstats->counts.parallelnumscans)\n> > >\n> > > Is it possible to have (lstats->counts.parallelnumscans) whithout\n> having\n> > > (lstats->counts.numscans) ?\n> > >\n> > >\n> > Nope, parallel scans are included in seq/index scans, as far as I can\n> tell.\n> > I could remove the parallelnumscans testing but it would be less obvious\n> to\n> > read.\n>\n> 2 ===\n>\n> What about adding a comment instead of this extra check?\n>\n>\nDone too in v3.\n\n\n-- \nGuillaume.",
"msg_date": "Wed, 4 Sep 2024 16:37:19 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 04, 2024 at 04:37:19PM +0200, Guillaume Lelarge wrote:\n> Hi,\n> \n> Le mer. 4 sept. 2024 � 16:18, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\n> a �crit :\n> > What about adding a comment instead of this extra check?\n> >\n> >\n> Done too in v3.\n\nThanks!\n\n1 ===\n\n+ /*\n+ * Don't check counts.parallelnumscans because counts.numscans includes\n+ * counts.parallelnumscans\n+ */\n\n\".\" is missing at the end of the comment.\n\n2 ===\n\n- if (t > tabentry->lastscan)\n+ if (t > tabentry->lastscan && lstats->counts.numscans)\n\nThe extra check on lstats->counts.numscans is not needed as it's already done\na few lines before.\n\n3 ===\n\n+ if (t > tabentry->parallellastscan && lstats->counts.parallelnumscans)\n\nThis one makes sense.\n \nAnd now I'm wondering if the extra comment added in v3 is really worth it (and\ndoes not sound confusing)? I mean, the parallel check is done once we passe\nthe initial test on counts.numscans. I think the code is clear enough without\nthis extra comment, thoughts? \n\n4 ===\n\nWhat about adding a few tests? or do you want to wait a bit more to see if \"\nthere's an agreement on this patch\" (as you stated at the start of this thread).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Sep 2024 05:36:09 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
},
{
"msg_contents": "Le jeu. 5 sept. 2024 à 07:36, Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\na écrit :\n\n> Hi,\n>\n> On Wed, Sep 04, 2024 at 04:37:19PM +0200, Guillaume Lelarge wrote:\n> > Hi,\n> >\n> > Le mer. 4 sept. 2024 à 16:18, Bertrand Drouvot <\n> bertranddrouvot.pg@gmail.com>\n> > a écrit :\n> > > What about adding a comment instead of this extra check?\n> > >\n> > >\n> > Done too in v3.\n>\n> Thanks!\n>\n> 1 ===\n>\n> + /*\n> + * Don't check counts.parallelnumscans because counts.numscans\n> includes\n> + * counts.parallelnumscans\n> + */\n>\n> \".\" is missing at the end of the comment.\n>\n>\nFixed in v4.\n\n\n> 2 ===\n>\n> - if (t > tabentry->lastscan)\n> + if (t > tabentry->lastscan && lstats->counts.numscans)\n>\n> The extra check on lstats->counts.numscans is not needed as it's already\n> done\n> a few lines before.\n>\n>\nFixed in v4.\n\n\n> 3 ===\n>\n> + if (t > tabentry->parallellastscan &&\n> lstats->counts.parallelnumscans)\n>\n> This one makes sense.\n>\n> And now I'm wondering if the extra comment added in v3 is really worth it\n> (and\n> does not sound confusing)? I mean, the parallel check is done once we passe\n> the initial test on counts.numscans. I think the code is clear enough\n> without\n> this extra comment, thoughts?\n>\n>\nI'm not sure I understand you here. I kinda like the extra comment though.\n\n\n> 4 ===\n>\n> What about adding a few tests? or do you want to wait a bit more to see if\n> \"\n> there's an agreement on this patch\" (as you stated at the start of this\n> thread).\n>\n>\nGuess I can start working on that now. It will take some time as I've never\ndone it before. Good thing I added the patch on the November commit fest :)\n\nThanks again.\n\nRegards.\n\n\n-- \nGuillaume.",
"msg_date": "Thu, 5 Sep 2024 08:19:13 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Add parallel columns for seq scan and index scan on\n pg_stat_all_tables and _indexes"
}
] |
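The macro shape settled on in the thread above is worth seeing spelled out. The following is only an editorial sketch assembled from the hunks quoted in the messages (pgstat_count_heap_scan, pgstat_should_count_relation, rs_base.rs_flags, rs_parallel are taken from those quotes); the actual v3/v4 patch may differ in detail. The point Bertrand insisted on is that numscans is always incremented, and parallelnumscans only in addition for parallel scans:

#define pgstat_count_heap_scan(rel, parallel) \
    do { \
        if (pgstat_should_count_relation(rel)) \
        { \
            /* every scan counts, parallel or not */ \
            (rel)->pgstat_info->counts.numscans++; \
            if (parallel) \
                (rel)->pgstat_info->counts.parallelnumscans++; \
        } \
    } while (0)

/* call site in initscan(), following the quoted diff: only sequential scans
 * are counted, and rs_parallel distinguishes the parallel case */
if (scan->rs_base.rs_flags & SO_TYPE_SEQSCAN)
    pgstat_count_heap_scan(scan->rs_base.rs_rd,
                           scan->rs_base.rs_parallel != NULL);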
[
{
"msg_contents": "Hello Hackers,\n\nI’ve attached a patch to start adding SQL:2023 JSON simplified\naccessor support. This allows accessing JSON or JSONB fields using dot\nnotation (e.g., colname.field.field...), similar to composite types.\n\nCurrently, PostgreSQL uses nonstandard syntax like colname->x->y for\nJSON and JSONB, and colname['blah'] for JSONB. These existing syntaxes\npredate the standard. Oracle already supports the standard dot\nnotation syntax [1].\n\nThe full specification for the JSON simplified accessor format is as\nfollows:\n\n<JSON simplified accessor> ::=\n <value expression primary> <JSON simplified accessor op chain>\n<JSON simplified accessor op chain> ::=\n <JSON simplified accessor op>\n | <JSON simplified accessor op chain> <JSON simplified accessor op>\n<JSON simplified accessor op> ::=\n <JSON member accessor>\n | <JSON wildcard member accessor>\n | <JSON array accessor>\n | <JSON wildcard array accessor>\n | <JSON item method>\n\nI’ve implemented the member and array accessors and attached two\nalternative patches:\n\n1. v1-0001-Add-JSON-JSONB-simplified-accessor.patch: This patch\nenables dot access to JSON object fields and subscript access to\nindexed JSON array elements by converting \".\" and \"[]\" indirection\ninto a JSON_QUERY JsonFuncExpr node.\n\n2. v2-0001-Transform-JSON-dot-access-to-arrow-operator.txt: This\nalternative patch implements dot access to JSON object fields by\ntransforming the \".\" indirection into a \"->\" operator.\n\nThe upside of the v1 patch is that it strictly aligns with the SQL\nstandard, which specifies that the simplified access is equivalent to:\n\nJSON_QUERY (VEP, 'lax $.JC' WITH CONDITIONAL ARRAY WRAPPER NULL ON\nEMPTY NULL ON ERROR)\n\nHowever, the performance of JSON_QUERY might be suboptimal due to\nfunction call overhead. Therefore, I implemented the v2 alternative\nusing the \"->\" operator.\n\nThere is some uncertainty about the semantics of conditional array\nwrappers. Currently, there is at least one subtle difference between\nthe \"->\" operator and JSON_QUERY, as shown:\n\npostgres=# select '{\"a\": 42}'::json->'a';\n ?column?\n----------\n 42\n(1 row)\n\npostgres=# select json_query('{\"a\": 42}'::json, 'lax $.a' with\nconditional array wrapper null on empty null on error);\n json_query\n------------\n [42]\n(1 row)\n\nJSON_QUERY encloses the JSON value 42 in brackets, which may be a bug,\nas Peter noted [2]. If there are no other semantic differences, we\ncould implement simple access without using JSON_QUERY to avoid\nfunction call overhead.\n\nI aim to first enable standard dot notation access to JSON object\nfields. Both patches implement this, and I’m also open to alternative\napproaches.\n\nFor subscripting access to jsonb array elements, jsonb already\nsupports this via the subscripting handler interface. In the v1 patch,\nI added json support using JSON_QUERY, but I can easily adapt this for\nthe v2 patch using the -> operator. I did not leverage the\nsubscripting handler interface for json because implementing the\nfetch/assign functions for json seems challenging for plain text. 
Let\nme know if you have a different approach in mind.\n\nFinally, I have not implemented wildcard or item method accessors yet\nand would appreciate input on their necessity.\n\n[1] https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/simple-dot-notation-access-to-json-data.html#GUID-7249417B-A337-4854-8040-192D5CEFD576\n[2] https://www.postgresql.org/message-id/8022e067-818b-45d3-8fab-6e0d94d03626@eisentraut.org",
"msg_date": "Thu, 29 Aug 2024 11:33:28 -0500",
"msg_from": "Alexandra Wang <alexandra.wang.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "On 29.08.24 18:33, Alexandra Wang wrote:\n> I’ve implemented the member and array accessors and attached two\n> alternative patches:\n> \n> 1. v1-0001-Add-JSON-JSONB-simplified-accessor.patch: This patch\n> enables dot access to JSON object fields and subscript access to\n> indexed JSON array elements by converting \".\" and \"[]\" indirection\n> into a JSON_QUERY JsonFuncExpr node.\n> \n> 2. v2-0001-Transform-JSON-dot-access-to-arrow-operator.txt: This\n> alternative patch implements dot access to JSON object fields by\n> transforming the \".\" indirection into a \"->\" operator.\n> \n> The upside of the v1 patch is that it strictly aligns with the SQL\n> standard, which specifies that the simplified access is equivalent to:\n> \n> JSON_QUERY (VEP, 'lax $.JC' WITH CONDITIONAL ARRAY WRAPPER NULL ON\n> EMPTY NULL ON ERROR)\n> \n> However, the performance of JSON_QUERY might be suboptimal due to\n> function call overhead. Therefore, I implemented the v2 alternative\n> using the \"->\" operator.\n\nUsing the operator approach would also allow taking advantage of \noptimizations such as \n<https://www.postgresql.org/message-id/flat/CAKU4AWoqAVya6PBhn%2BBCbFaBMt3z-2%3Di5fKO3bW%3D6HPhbid2Dw%40mail.gmail.com>.\n\n> There is some uncertainty about the semantics of conditional array\n> wrappers. Currently, there is at least one subtle difference between\n> the \"->\" operator and JSON_QUERY, as shown:\n\nThat JSON_QUERY bug has been fixed.\n\nI suggest you rebase both of your patches over this, just to double \ncheck everything. But then I think you can drop the v1 patch and just \nsubmit a new version of v2.\n\nThe patch should eventually contain some documentation. It might be \ngood starting to look for a good spot where to put that documentation. \nIt might be either near the json types documentation or near the general \nqualified identifier syntax, not sure.\n\n\n\n",
"msg_date": "Mon, 16 Sep 2024 21:44:54 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "Hi Peter,\n\nThank you so much for helping!\n\nOn Mon, Sep 16, 2024 at 12:44 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 29.08.24 18:33, Alexandra Wang wrote:\n> > I’ve implemented the member and array accessors and attached two\n> > alternative patches:\n> >\n> > 1. v1-0001-Add-JSON-JSONB-simplified-accessor.patch: This patch\n> > enables dot access to JSON object fields and subscript access to\n> > indexed JSON array elements by converting \".\" and \"[]\" indirection\n> > into a JSON_QUERY JsonFuncExpr node.\n> >\n> > 2. v2-0001-Transform-JSON-dot-access-to-arrow-operator.txt: This\n> > alternative patch implements dot access to JSON object fields by\n> > transforming the \".\" indirection into a \"->\" operator.\n> >\n> > The upside of the v1 patch is that it strictly aligns with the SQL\n> > standard, which specifies that the simplified access is equivalent to:\n> >\n> > JSON_QUERY (VEP, 'lax $.JC' WITH CONDITIONAL ARRAY WRAPPER NULL ON\n> > EMPTY NULL ON ERROR)\n> >\n> > However, the performance of JSON_QUERY might be suboptimal due to\n> > function call overhead. Therefore, I implemented the v2 alternative\n> > using the \"->\" operator.\n> Using the operator approach would also allow taking advantage of\n> optimizations such as\n> <https://www.postgresql.org/message-id/flat/CAKU4AWoqAVya6PBhn%2BBCbFaBMt3z-2%3Di5fKO3bW%3D6HPhbid2Dw%40mail.gmail.com>.\n\nOK, that makes sense.\n\n> > There is some uncertainty about the semantics of conditional array\n> > wrappers. Currently, there is at least one subtle difference between\n> > the \"->\" operator and JSON_QUERY, as shown:\n>\n> That JSON_QUERY bug has been fixed.\n>\n> I suggest you rebase both of your patches over this, just to double\n> check everything. But then I think you can drop the v1 patch and just\n> submit a new version of v2.\n\nDone. I rebased both patches and confirmed they have the same test\noutputs. I attached v3, which also adds JSON subscript support on top\nof v2.\n\n> The patch should eventually contain some documentation. It might be\n> good starting to look for a good spot where to put that documentation.\n> It might be either near the json types documentation or near the general\n> qualified identifier syntax, not sure.\n\nRight, I’m not sure either. A third option, I think, would be to\ninclude it in the JSON Functions and Operators section [1].\n\n[1] https://www.postgresql.org/docs/devel/functions-json.html\n\nBest,\nAlex",
"msg_date": "Mon, 23 Sep 2024 12:22:20 -0700",
"msg_from": "Alexandra Wang <alexandra.wang.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "Hi,\n\nI didn’t run pgindent earlier, so here’s the updated version with the\ncorrect indentation. Hope this helps!\n\nBest,\nAlex",
"msg_date": "Thu, 26 Sep 2024 08:45:11 -0700",
"msg_from": "Alexandra Wang <alexandra.wang.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "\nOn 2024-09-26 Th 11:45 AM, Alexandra Wang wrote:\n> Hi,\n>\n> I didn’t run pgindent earlier, so here’s the updated version with the\n> correct indentation. Hope this helps!\n\n\nThis is a really nice feature, and provides a lot of expressive power \nfor such a small piece of code.\n\nI notice this doesn't seem to work for domains over json and jsonb.\n\n\nandrew@~=# create domain json_d as json;\nCREATE DOMAIN\nandrew@~=# create table test_json_dot(id int, test_json json_d);\nCREATE TABLE\nandrew@~=# insert into test_json_dot select 1, '{\"a\": 1, \"b\": 42}'::json;\nINSERT 0 1 | |\nandrew@~=# select (test_json_dot.test_json).b, json_query(test_json, \n'lax $.b' WITH CONDITIONAL WRAPPER NULL ON EMPTY NULL ON ERROR) as \nexpected from test_json_dot;\nERROR: column notation .b applied to type json_d, which is not a \ncomposite type\nLINE 1: select (test_json_dot.test_json).b, json_query(test_json, 'l...\n\n\nI'm not sure that's a terribly important use case, but we should \nprobably make it work. If it's a domain we should get the basetype of \nthe domain. There's some example code in src/backend/utils/adt/jsonfuncs.c\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 26 Sep 2024 13:16:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "On Sep 26, 2024, at 16:45, Alexandra Wang <alexandra.wang.oss@gmail.com> wrote:\n\n> I didn’t run pgindent earlier, so here’s the updated version with the\n> correct indentation. Hope this helps!\n\nOh, nice! I don’t suppose the standard also has defined an operator equivalent to ->>, though, has it? I tend to want the text output far more often than a JSON scalar.\n\nBest,\n\nDavid\n\n\n\n",
"msg_date": "Fri, 27 Sep 2024 10:49:31 +0100",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "\nOn 2024-09-27 Fr 5:49 AM, David E. Wheeler wrote:\n> On Sep 26, 2024, at 16:45, Alexandra Wang <alexandra.wang.oss@gmail.com> wrote:\n>\n>> I didn’t run pgindent earlier, so here’s the updated version with the\n>> correct indentation. Hope this helps!\n> Oh, nice! I don’t suppose the standard also has defined an operator equivalent to ->>, though, has it? I tend to want the text output far more often than a JSON scalar.\n>\n\nThat would defeat being able to chain these.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 27 Sep 2024 07:07:51 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
},
{
"msg_contents": "On Sep 27, 2024, at 12:07, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> That would defeat being able to chain these.\n\nNot if it’s a different operator. But I’m fine to just keep using ->> at the end of a chain.\n\nD\n\n\n\n",
"msg_date": "Fri, 27 Sep 2024 22:19:15 +0100",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL:2023 JSON simplified accessor support"
}
] |
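Andrew's domain-over-json report above comes down to a small lookup. The fragment below is a hedged sketch rather than code from the patch: it assumes the usual backend helpers exprType() and getBaseType() and the JSONOID/JSONBOID type OIDs, and only shows where the accessor transform would look through a domain such as json_d before deciding whether dot notation applies.

Oid     argtype = exprType(expr);

/* resolve domains (json_d -> json) the same way jsonfuncs.c does */
argtype = getBaseType(argtype);

if (argtype == JSONOID || argtype == JSONBOID)
{
    /* transform the "." indirection as for plain json/jsonb */
}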
[
{
"msg_contents": "Hi,\r\nwhen i read source code of the part of nbtree, i detected another kind of contradictory case postgresql has not eliminated \r\n(eg. x >4 and x <3) in the function _bt_preprocess_keys in nbtutils.c. this cause next unnecessary index scan and qual evaluation.\r\nit seems no one will write SQL like that, but maybe it generated from the ec or degenerate qual. \r\nthe function just invoked in function _bt_first during initializing index scan, so it’s cheap to eliminate this case in the function.\r\nwe just pay attention on the opposite operator, so there are four detailed cases (ie. x >4 and x <3 , x >4 and x <=3, x >=4 and x <3, x >=4 and x <=3).\r\nwhen test the case is contradictory or not, it's need to use the more restrictive operator when less and more restrictive operator both appeared, \r\ncase like x >4 and x <=4, if we choose <=, 4 <= 4 is satisfied, actually x >4 and x <=4 is contradictory. \r\nwhen >= with <= or > with <, it's ok to pick any one of them.\r\ni have added this feature in my local environment and tested. but i am not sure it's worth to spend more effort on it. \r\n\r\n\r\nbigbro_wq@hotmail.com\r\n\n\n\nHi, when i read source code of the part of nbtree, i detected another kind of contradictory case postgresql has not eliminated (eg. x >4 and x <3) in the function _bt_preprocess_keys in nbtutils.c. this cause next unnecessary index scan and qual evaluation.it seems no one will write SQL like that, but maybe it generated from the ec or degenerate qual. the function just invoked in function _bt_first during initializing index scan, so it’s cheap to eliminate this case in the function.we just pay attention on the opposite operator, so there are four detailed cases (ie. x >4 and x <3 , x >4 and x <=3, x >=4 and x <3, x >=4 and x <=3).when test the case is contradictory or not, it's need to use the more restrictive operator when less and more restrictive operator both appeared, case like x >4 and x <=4, if we choose <=, 4 <= 4 is satisfied, actually x >4 and x <=4 is contradictory. when >= with <= or > with <, it's ok to pick any one of them.i have added this feature in my local environment and tested. but i am not sure it's worth to spend more effort on it. \nbigbro_wq@hotmail.com",
"msg_date": "Fri, 30 Aug 2024 00:56:23 +0800",
"msg_from": "\"bigbro_wq@hotmail.com\" <bigbro_wq@hotmail.com>",
"msg_from_op": true,
"msg_subject": "bt Scankey in another contradictory case"
},
{
"msg_contents": "Hi,\r\n this is the patch attachment.\r\n\r\n________________________________\r\n发件人: b ro <bigbro_wq@hotmail.com>\r\n发送时间: 2024年8月30日 0:56\r\n收件人: pgsql-hackers <pgsql-hackers@lists.postgresql.org>\r\n主题: bt Scankey in another contradictory case\r\n\r\nHi,\r\nwhen i read source code of the part of nbtree, i detected another kind of contradictory case postgresql has not eliminated\r\n(eg. x >4 and x <3) in the function _bt_preprocess_keys in nbtutils.c. this cause next unnecessary index scan and qual evaluation.\r\nit seems no one will write SQL like that, but maybe it generated from the ec or degenerate qual.\r\nthe function just invoked in function _bt_first during initializing index scan, so it’s cheap to eliminate this case in the function.\r\nwe just pay attention on the opposite operator, so there are four detailed cases (ie. x >4 and x <3 , x >4 and x <=3, x >=4 and x <3, x >=4 and x <=3).\r\nwhen test the case is contradictory or not, it's need to use the more restrictive operator when less and more restrictive operator both appeared,\r\ncase like x >4 and x <=4, if we choose <=, 4 <= 4 is satisfied, actually x >4 and x <=4 is contradictory.\r\nwhen >= with <= or > with <, it's ok to pick any one of them.\r\ni have added this feature in my local environment and tested. but i am not sure it's worth to spend more effort on it.\r\n________________________________\r\nbigbro_wq@hotmail.com",
"msg_date": "Fri, 30 Aug 2024 08:27:44 +0000",
"msg_from": "b ro <bigbro_wq@hotmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?5Zue5aSNOiBidCBTY2Fua2V5IGluIGFub3RoZXIgY29udHJhZGljdG9yeSBj?=\n =?utf-8?Q?ase?="
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 7:36 AM b ro <bigbro_wq@hotmail.com> wrote:\n> this is the patch attachment.\n\nWe discussed this recently:\n\nhttps://www.postgresql.org/message-id/80384.1715458896%40sss.pgh.pa.us\n\nI think that we should do this.\n\nIt doesn't make a huge difference in practice, because we'll still end\nthe scan once the leaf level is reached. But it matters more when\narray keys are involved, where there might be more than one descent to\nthe leaf level. Plus we might as well just be thorough about this\nstuff.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Aug 2024 10:32:30 -0400",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: bt Scankey in another contradictory case"
},
{
"msg_contents": "Hi,\r\nI have reanalysed the code of function _bt_first. I notice that using a multi-attribute index\r\nif we can't identify the starting boundaries and the following attributes markded not required ,\r\nthat means we need start at first or last page in the index to examine every tuple to satisfy the \r\nqual or not, in the meantime the scan will be stopped while the first attribute evaluated failed.\r\n\r\nFor instance:\r\n create table c_s( x int, y int);\r\n insert into c_s select generate_series(1, 20000), generate_series(1, 20000);\r\n create index my_c_s_idx on c_s using btree(x,y);\r\n explain (analyze, buffers) select * from c_s where x >4000 and y >10 and y <10 order by x desc;\r\n--------------------------------------------------------------------------------------------------------------------------------\r\n Index Only Scan Backward using my_c_s_idx on c_s (cost=0.29..384.31 rows=1 width=8) (actual time=1.302..1.304 rows=0 loops=1)\r\n Index Cond: ((x > 4000) AND (y > 10) AND (y < 10))\r\n Heap Fetches: 0\r\n Buffers: shared read=46\r\n Planning:\r\n Buffers: shared hit=51 read=15\r\n Planning Time: 1.311 ms\r\n Execution Time: 1.435 ms\r\n(8 rows)\r\n\r\nThe instance is a little different for description above due to the implies NOT NULL Scankey, \r\nbut it has no effect on the whole situation. \r\n\r\nWhat's more, if i change the number 4000 to 1000.\r\n-----------------------------------------------------------------------------------------------------\r\n Sort (cost=441.01..441.01 rows=1 width=8) (actual time=2.974..2.975 rows=0 loops=1)\r\n Sort Key: x DESC\r\n Sort Method: quicksort Memory: 25kB\r\n Buffers: shared hit=89\r\n -> Seq Scan on c_s (cost=0.00..441.00 rows=1 width=8) (actual time=2.971..2.971 rows=0 loops=1)\r\n Filter: ((x > 1000) AND (y > 10) AND (y < 10))\r\n Rows Removed by Filter: 20000\r\n Buffers: shared hit=89\r\n Planning:\r\n Buffers: shared hit=2\r\n Planning Time: 0.113 ms\r\n Execution Time: 2.990 ms\r\n(12 rows)\r\n\r\nThe planner choosed the Seq Scan, and the executor have done the unnecessary jobs 20000 times.\r\n\r\nLet's don't confine to the plain attributes or row comparison and Seq Scan or Index Scan . \r\nWe can pretend row-comparison as multi-attributes comparison. The qual is implicit-AND format, \r\nthat means once one attribute is self-contradictory, we can abandon the qual immediately.\r\n\r\nMaybe we can do more efficient jobs during planning time. Like at the phase of function deconstruct_recurse \r\ninvoked, we can identify qual that is self-contradictory then flag it. 
With this information we know who is\r\na dummy relation named arbitrarily.\r\n\r\nFor instance:\r\n\r\nexplain (analyze, buffers) select * from c_s a , c_s b where a.y >10 and a.y<10 and a.x=b.x;\r\n QUERY PLAN \r\n-------------------------------------------------------------------------------------------------------\r\n Nested Loop (cost=0.29..393.31 rows=1 width=16) (actual time=1.858..1.858 rows=0 loops=1)\r\n Buffers: shared hit=89\r\n -> Seq Scan on c_s a (cost=0.00..389.00 rows=1 width=8) (actual time=1.857..1.857 rows=0 loops=1)\r\n Filter: ((y > 10) AND (y < 10))\r\n Rows Removed by Filter: 20000\r\n Buffers: shared hit=89\r\n -> Index Only Scan using my_c_s_idx on c_s b (cost=0.29..4.30 rows=1 width=8) (never executed)\r\n Index Cond: (x = a.x)\r\n Heap Fetches: 0\r\n Planning:\r\n Buffers: shared hit=12\r\n Planning Time: 0.236 ms\r\n Execution Time: 1.880 ms\r\n(13 rows)\r\n\r\nAfter deconstruct_recurse invoked, qual(a.y >10 and a.y<10) was distributed to relation \"a\" and relation \"a\" is \r\na dummy relation. When \"a\" and \"b\" make inner join, \"a\" will supply nothing. That means the inner-join rel is \r\na dummy relation too. If there is a outer join above the inner join and some one who can commute with it,\r\nwe can do more efficient jobs and so on.\r\nIt also has benefit on path-competing phase with this feature due to it doesn't cost anything.\r\n\r\nIt's need to discuss the idea whether is feasible or not and it will takes a lot of effort to complete this feature. \r\nAnyway we can fix these issues what we had encountered first.\r\n\r\n\r\n\r\nbigbro_wq@hotmail.com\r\n \r\nFrom: Peter Geoghegan\r\nDate: 2024-08-30 22:32\r\nTo: b ro\r\nCC: pgsql-hackers\r\nSubject: Re: bt Scankey in another contradictory case\r\nOn Fri, Aug 30, 2024 at 7:36 AM b ro <bigbro_wq@hotmail.com> wrote:\r\n> this is the patch attachment.\r\n \r\nWe discussed this recently:\r\n \r\nhttps://www.postgresql.org/message-id/80384.1715458896%40sss.pgh.pa.us\r\n \r\nI think that we should do this.\r\n \r\nIt doesn't make a huge difference in practice, because we'll still end\r\nthe scan once the leaf level is reached. But it matters more when\r\narray keys are involved, where there might be more than one descent to\r\nthe leaf level. Plus we might as well just be thorough about this\r\nstuff.\r\n \r\n-- \r\nPeter Geoghegan\r\n",
"msg_date": "Sun, 1 Sep 2024 23:44:01 +0800",
"msg_from": "\"bigbro_wq@hotmail.com\" <bigbro_wq@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: bt Scankey in another contradictory case"
}
] |
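To make the rule discussed above concrete, here is a self-contained sketch of the contradiction test on plain integers. It is not the ScanKey code in _bt_preprocess_keys, only an illustration of why the stricter operator has to decide the boundary case: x > 4 AND x <= 4 is contradictory even though 4 <= 4 holds.

#include <stdbool.h>

typedef enum { LOWER_GT, LOWER_GE } LowerOp;    /* x > c  or  x >= c */
typedef enum { UPPER_LT, UPPER_LE } UpperOp;    /* x < c  or  x <= c */

static bool
bounds_are_contradictory(LowerOp lop, int lower, UpperOp uop, int upper)
{
    if (lower > upper)
        return true;                /* e.g. x > 4 AND x < 3 */
    if (lower == upper)
        /* any strict bound excludes the single remaining value */
        return (lop == LOWER_GT || uop == UPPER_LT);
    return false;
}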
[
{
"msg_contents": "Currently, if you configure a hot standby server with a smaller \nmax_connections setting than the primary, the server refuses to start up:\n\nLOG: entering standby mode\nFATAL: recovery aborted because of insufficient parameter settings\nDETAIL: max_connections = 10 is a lower setting than on the primary \nserver, where its value was 100.\nHINT: You can restart the server after making the necessary \nconfiguration changes.\n\nOr if you change the setting in the primary while the standby is \nrunning, replay pauses:\n\nWARNING: hot standby is not possible because of insufficient parameter \nsettings\nDETAIL: max_connections = 100 is a lower setting than on the primary \nserver, where its value was 200.\nCONTEXT: WAL redo at 2/E10000D8 for XLOG/PARAMETER_CHANGE: \nmax_connections=200 max_worker_processes=8 max_wal_senders=10 \nmax_prepared_xacts=0 max_locks_per_xact=64 wal_level=logical \nwal_log_hints=off track_commit_timestamp=off\nLOG: recovery has paused\nDETAIL: If recovery is unpaused, the server will shut down.\nHINT: You can then restart the server after making the necessary \nconfiguration changes.\nCONTEXT: WAL redo at 2/E10000D8 for XLOG/PARAMETER_CHANGE: \nmax_connections=200 max_worker_processes=8 max_wal_senders=10 \nmax_prepared_xacts=0 max_locks_per_xact=64 wal_level=logical \nwal_log_hints=off track_commit_timestamp=off\n\nBoth of these are rather unpleasant behavior.\n\nI thought I could get rid of that limitation with my CSN snapshot patch \n[1], because it gets rid of the fixed-size known-assigned XIDs array, \nbut there's a second reason for these limitations. It's also used to \nensure that the standby has enough space in the lock manager to hold \npossible AccessExclusiveLocks taken by transactions in the primary.\n\nSo firstly, I think that's a bad tradeoff. In vast majority of cases, \nyou would not run out of lock space anyway, if you just started up the \nsystem. Secondly, that cross-check of settings doesn't fully prevent the \nproblem. It ensures that the lock tables are large enough to accommodate \nall the locks you could possibly hold in the primary, but that doesn't \ntake into account any additional locks held by read-only queries in the \nhot standby. 
So if you have queries running in the standby that take a \nlot of locks, this can happen anyway:\n\n2024-08-29 21:44:32.634 EEST [668327] FATAL: out of shared memory\n2024-08-29 21:44:32.634 EEST [668327] HINT: You might need to increase \n\"max_locks_per_transaction\".\n2024-08-29 21:44:32.634 EEST [668327] CONTEXT: WAL redo at 2/FD40FCC8 \nfor Standby/LOCK: xid 996 db 5 rel 154045\n2024-08-29 21:44:32.634 EEST [668327] WARNING: you don't own a lock of \ntype AccessExclusiveLock\n2024-08-29 21:44:32.634 EEST [668327] LOG: RecoveryLockHash contains \nentry for lock no longer recorded by lock manager: xid 996 database 5 \nrelation 154045\nTRAP: failed Assert(\"false\"), File: \n\"../src/backend/storage/ipc/standby.c\", Line: 1053, PID: 668327\npostgres: startup recovering \n0000000100000002000000FD(ExceptionalCondition+0x6e)[0x556a4588396e]\npostgres: startup recovering \n0000000100000002000000FD(+0x44156e)[0x556a4571356e]\npostgres: startup recovering \n0000000100000002000000FD(StandbyReleaseAllLocks+0x78)[0x556a45712738]\npostgres: startup recovering \n0000000100000002000000FD(ShutdownRecoveryTransactionEnvironment+0x15)[0x556a45712685]\npostgres: startup recovering \n0000000100000002000000FD(shmem_exit+0x111)[0x556a457062e1]\npostgres: startup recovering \n0000000100000002000000FD(+0x434132)[0x556a45706132]\npostgres: startup recovering \n0000000100000002000000FD(proc_exit+0x59)[0x556a45706079]\npostgres: startup recovering \n0000000100000002000000FD(errfinish+0x278)[0x556a45884708]\npostgres: startup recovering \n0000000100000002000000FD(LockAcquireExtended+0xa46)[0x556a45719386]\npostgres: startup recovering \n0000000100000002000000FD(StandbyAcquireAccessExclusiveLock+0x11d)[0x556a4571330d]\npostgres: startup recovering \n0000000100000002000000FD(standby_redo+0x70)[0x556a45713690]\npostgres: startup recovering \n0000000100000002000000FD(PerformWalRecovery+0x7b3)[0x556a4547d313]\npostgres: startup recovering \n0000000100000002000000FD(StartupXLOG+0xac3)[0x556a4546dae3]\npostgres: startup recovering \n0000000100000002000000FD(StartupProcessMain+0xe8)[0x556a45693558]\npostgres: startup recovering \n0000000100000002000000FD(+0x3ba95d)[0x556a4568c95d]\npostgres: startup recovering \n0000000100000002000000FD(+0x3bce41)[0x556a4568ee41]\npostgres: startup recovering \n0000000100000002000000FD(PostmasterMain+0x116e)[0x556a4568eaae]\npostgres: startup recovering \n0000000100000002000000FD(+0x2f960e)[0x556a455cb60e]\n/lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f10ef042c8a]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f10ef042d45]\npostgres: startup recovering \n0000000100000002000000FD(_start+0x21)[0x556a453af011]\n2024-08-29 21:44:32.641 EEST [668324] LOG: startup process (PID 668327) \nwas terminated by signal 6: Aborted\n2024-08-29 21:44:32.641 EEST [668324] LOG: terminating any other active \nserver processes\n2024-08-29 21:44:32.654 EEST [668324] LOG: shutting down due to startup \nprocess failure\n2024-08-29 21:44:32.729 EEST [668324] LOG: database system is shut down\n\nGranted, if you restart the server, it will probably succeed because \nrestarting the server will kill all the other queries that were holding \nlocks. But yuck. With assertions disabled, it looks a little less scary, \nbut not nice anyway.\n\nSo how to improve this? I see a few options:\n\na) Downgrade the error at startup to a warning, and allow starting the \nstandby with smaller settings in standby. At least with a smaller \nmax_locks_per_transactions. 
The other settings also affect the size of \nknown-assigned XIDs array, but if the CSN snapshots get committed, that \nwill get fixed. In most cases there is enough lock memory anyway, and it \nwill be fine. Just fix the assertion failure so that the error message \nis a little nicer.\n\nb) If you run out of lock space, kill running queries, and prevent new \nones from starting. Track the locks in startup process' private memory \nuntil there is enough space in the lock manager, and then re-open for \nqueries. In essence, go from hot standby mode to warm standby, until \nit's possible to go back to hot standby mode again.\n\nThoughts, better ideas?\n\n[1] https://commitfest.postgresql.org/49/4912/\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Thu, 29 Aug 2024 21:52:06 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Primary and standby setting cross-checks"
},
{
"msg_contents": "On Thu, Aug 29, 2024 at 09:52:06PM +0300, Heikki Linnakangas wrote:\n> Currently, if you configure a hot standby server with a smaller\n> max_connections setting than the primary, the server refuses to start up:\n> \n> LOG: entering standby mode\n> FATAL: recovery aborted because of insufficient parameter settings\n> DETAIL: max_connections = 10 is a lower setting than on the primary server,\n> where its value was 100.\n\n> happen anyway:\n> \n> 2024-08-29 21:44:32.634 EEST [668327] FATAL: out of shared memory\n> 2024-08-29 21:44:32.634 EEST [668327] HINT: You might need to increase\n> \"max_locks_per_transaction\".\n> 2024-08-29 21:44:32.634 EEST [668327] CONTEXT: WAL redo at 2/FD40FCC8 for\n> Standby/LOCK: xid 996 db 5 rel 154045\n> 2024-08-29 21:44:32.634 EEST [668327] WARNING: you don't own a lock of type\n> AccessExclusiveLock\n> 2024-08-29 21:44:32.634 EEST [668327] LOG: RecoveryLockHash contains entry\n> for lock no longer recorded by lock manager: xid 996 database 5 relation\n> 154045\n> TRAP: failed Assert(\"false\"), File: \"../src/backend/storage/ipc/standby.c\",\n\n> Granted, if you restart the server, it will probably succeed because\n> restarting the server will kill all the other queries that were holding\n> locks. But yuck.\n\nAgreed.\n\n> So how to improve this? I see a few options:\n> \n> a) Downgrade the error at startup to a warning, and allow starting the\n> standby with smaller settings in standby. At least with a smaller\n> max_locks_per_transactions. The other settings also affect the size of\n> known-assigned XIDs array, but if the CSN snapshots get committed, that will\n> get fixed. In most cases there is enough lock memory anyway, and it will be\n> fine. Just fix the assertion failure so that the error message is a little\n> nicer.\n> \n> b) If you run out of lock space, kill running queries, and prevent new ones\n> from starting. Track the locks in startup process' private memory until\n> there is enough space in the lock manager, and then re-open for queries. In\n> essence, go from hot standby mode to warm standby, until it's possible to go\n> back to hot standby mode again.\n\nEither seems fine. Having never encountered actual lock exhaustion from this,\nI'd lean toward (a) for simplicity.\n\n> Thoughts, better ideas?\n\nI worry about future code assuming a MaxBackends-sized array suffices for\nsomething. That could work almost all the time, breaking only when a standby\nreplays WAL from a server having a larger array. What could we do now to\ncatch that future mistake promptly? As a start, 027_stream_regress.pl could\nuse low settings on its standby.\n\n\n",
"msg_date": "Tue, 24 Sep 2024 20:03:45 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Primary and standby setting cross-checks"
}
] |
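As a back-of-the-envelope illustration of the sizing rule behind the cross-check discussed above: the shared lock table is sized for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) lockable objects, per the documentation. The toy program below only restates that arithmetic with made-up numbers; the point in the thread is that standby-local queries draw on the same pool, so matching the primary's settings cannot by itself guarantee space for replayed AccessExclusiveLocks.

#include <stdio.h>

int
main(void)
{
    /* made-up settings, for illustration only */
    int     max_connections = 100;
    int     max_prepared_transactions = 0;
    int     max_locks_per_transaction = 64;

    long    slots = (long) max_locks_per_transaction *
        (max_connections + max_prepared_transactions);

    printf("approximate lock table capacity: %ld objects\n", slots);
    return 0;
}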
[
{
"msg_contents": "Hello,\n\nThis patch was a bit discussed on [1], and with more details on [2]. It's\nbased on another patch sent in 2022 (see [3]). It introduces seven new\ncolumns in pg_stat_statements:\n\n * parallelized_queries_planned, number of times the query has been planned\nto be parallelized,\n * parallelized_queries_launched, number of times the query has been\nexecuted with parallelization,\n * parallelized_workers_planned, number of parallel workers planned for\nthis query,\n * parallelized_workers_launched, number of parallel workers executed for\nthis query,\n * parallelized_nodes, number of parallelized nodes,\n * parallelized_nodes_all_workers, number of parallelized nodes which had\nall requested workers,\n * parallelized_nodes_no_worker, number of parallelized nodes which had no\nrequested workers.\n\nAs Benoit said yesterday, the intent is to help administrators evaluate the\nusage of parallel workers in their databases and help configuring\nparallelization usage.\n\nA test script (test2.sql) is attached. You can execute it with \"psql -Xef\ntest2.sql your_database\" (your_database should not contain a t1 table as it\nwill be dropped and recreated).\n\nHere is its result, a bit commented:\n\nCREATE EXTENSION IF NOT EXISTS pg_stat_statements;\nCREATE EXTENSION\nSELECT pg_stat_statements_reset();\n pg_stat_statements_reset\n-------------------------------\n 2024-08-29 18:00:35.314557+02\n(1 row)\n\nDROP TABLE IF EXISTS t1;\nDROP TABLE\nCREATE TABLE t1 (id integer);\nCREATE TABLE\nINSERT INTO t1 SELECT generate_series(1, 10_000_000);\nINSERT 0 10000000\nVACUUM ANALYZE t1;\nVACUUM\nSELECT query,\n parallelized_queries_planned, parallelized_queries_launched,\n parallelized_workers_planned, parallelized_workers_launched,\n parallelized_nodes, parallelized_nodes_all_workers,\nparallelized_nodes_no_worker\nFROM pg_stat_statements\nWHERE query LIKE 'SELECT%t1%'\n(0 rows)\n\nSELECT * FROM t1 LIMIT 1;\n id\n----\n 1\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT query,\n parallelized_queries_planned, parallelized_queries_launched,\n parallelized_workers_planned, parallelized_workers_launched,\n parallelized_nodes, parallelized_nodes_all_workers,\nparallelized_nodes_no_worker\nFROM pg_stat_statements\nWHERE query LIKE 'SELECT%t1%'\n-[ RECORD 1 ]------------------+--------------------------\nquery | SELECT * FROM t1 LIMIT $1\nparallelized_queries_planned | 0\nparallelized_queries_launched | 0\nparallelized_workers_planned | 0\nparallelized_workers_launched | 0\nparallelized_nodes | 0\nparallelized_nodes_all_workers | 0\nparallelized_nodes_no_worker | 0\n\n==> no parallelization\n\nSELECT count(*) FROM t1;\n count\n----------\n 10000000\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT query,\n parallelized_queries_planned, parallelized_queries_launched,\n parallelized_workers_planned, parallelized_workers_launched,\n parallelized_nodes, parallelized_nodes_all_workers,\nparallelized_nodes_no_worker\nFROM pg_stat_statements\nWHERE query LIKE 'SELECT%t1%'\n-[ RECORD 1 ]------------------+--------------------------\nquery | SELECT count(*) FROM t1\nparallelized_queries_planned | 1\nparallelized_queries_launched | 1\nparallelized_workers_planned | 2\nparallelized_workers_launched | 2\nparallelized_nodes | 1\nparallelized_nodes_all_workers | 1\nparallelized_nodes_no_worker | 0\n-[ RECORD 2 ]------------------+--------------------------\nquery | SELECT * FROM t1 LIMIT $1\nparallelized_queries_planned | 0\nparallelized_queries_launched | 0\nparallelized_workers_planned | 0\nparallelized_workers_launched | 
0\nparallelized_nodes | 0\nparallelized_nodes_all_workers | 0\nparallelized_nodes_no_worker | 0\n\n==> one parallelized query\n==> I have the default configuration, so 2 for\nmax_parallel_worker_per_gather\n==> hence two workers, with one node with all workers\n\nSET max_parallel_workers_per_gather TO 5;\nSET\nSELECT count(*) FROM t1;\n count\n----------\n 10000000\n(1 row)\n\nSELECT pg_sleep(1);\nSELECT query,\n parallelized_queries_planned, parallelized_queries_launched,\n parallelized_workers_planned, parallelized_workers_launched,\n parallelized_nodes, parallelized_nodes_all_workers,\nparallelized_nodes_no_worker\nFROM pg_stat_statements\nWHERE query LIKE 'SELECT%t1%'\n-[ RECORD 1 ]------------------+--------------------------\nquery | SELECT count(*) FROM t1\nparallelized_queries_planned | 2\nparallelized_queries_launched | 2\nparallelized_workers_planned | 6\nparallelized_workers_launched | 6\nparallelized_nodes | 2\nparallelized_nodes_all_workers | 2\nparallelized_nodes_no_worker | 0\n-[ RECORD 2 ]------------------+--------------------------\nquery | SELECT * FROM t1 LIMIT $1\nparallelized_queries_planned | 0\nparallelized_queries_launched | 0\nparallelized_workers_planned | 0\nparallelized_workers_launched | 0\nparallelized_nodes | 0\nparallelized_nodes_all_workers | 0\nparallelized_nodes_no_worker | 0\n\n==> another parallelized query\n==> with 5 as max_parallel_workers_per_gather, but only 4 workers to use\n==> hence four workers, with one node with all workers\n\nThe biggest issue with this patch is that it's unable to know workers for\nmaintenance queries (CREATE INDEX for BTree and VACUUM).\n\nDocumentation is done, tests are missing. Once there's an agreement on this\npatch, we'll work on the tests.\n\nThis has been a collective work with Benoit Lobréau, Jehan-Guillaume de\nRorthais, and Franck Boudehen.\n\nThanks.\n\nRegards.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/b4220d15-2e21-0e98-921b-b9892543cc93%40dalibo.com\n[2]\nhttps://www.postgresql.org/message-id/flat/d657df20-c4bf-63f6-e74c-cb85a81d0383%40dalibo.com\n[3]\nhttps://www.postgresql.org/message-id/flat/6acbe570-068e-bd8e-95d5-00c737b865e8%40gmail.com\n\n\n-- \nGuillaume.",
"msg_date": "Thu, 29 Aug 2024 22:08:23 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Add parallel columns for pg_stat_statements"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI find there are some unnecessary load/store instructions being\nemitted by the JIT compiler.\n\nE.g.,\n\n```\nv_boolnull = l_load(b, TypeStorageBool, v_resnullp, \"\");\nv_boolvalue = l_load(b, TypeSizeT, v_resvaluep, \"\");\n\n/* set resnull to boolnull */\nLLVMBuildStore(b, v_boolnull, v_resnullp);\n/* set revalue to boolvalue */\nLLVMBuildStore(b, v_boolvalue, v_resvaluep);\n```\n\nv_boolnull is loaded from v_resnullp and stored to v_resnullp immediately.\nv_boolvalue is loaded from v_resvaluep and stored to v_resvaluep immediately.\n\nThe attached patch is trying to fix them.\n\nBest Regards,\nXing",
"msg_date": "Fri, 30 Aug 2024 11:55:17 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "JIT: Remove some unnecessary instructions."
},
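For readers without the attached patch handy, here is a minimal, self-contained sketch of the dead load/store pattern quoted above, written against the plain LLVM C API rather than PostgreSQL's llvmjit wrappers (l_load and friends); the function name and variable names are made up for illustration, and this is not the code from llvmjit_expr.c. Printing the module makes the no-op load/store pair easy to see; the fix under discussion amounts to simply not emitting such pairs.

```c
/*
 * Illustrative sketch only -- not PostgreSQL code.
 * Build with: cc demo.c $(llvm-config --cflags --ldflags --libs core)
 */
#include <llvm-c/Core.h>
#include <stdio.h>

int
main(void)
{
    LLVMContextRef ctx = LLVMContextCreate();
    LLVMModuleRef mod = LLVMModuleCreateWithNameInContext("demo", ctx);
    LLVMBuilderRef b = LLVMCreateBuilderInContext(ctx);

    LLVMTypeRef i8 = LLVMInt8TypeInContext(ctx);
    LLVMTypeRef i8ptr = LLVMPointerType(i8, 0);
    LLVMTypeRef argtypes[] = {i8ptr};
    LLVMTypeRef fnty = LLVMFunctionType(LLVMVoidTypeInContext(ctx), argtypes, 1, 0);
    LLVMValueRef fn = LLVMAddFunction(mod, "copy_isnull", fnty);
    LLVMBasicBlockRef entry = LLVMAppendBasicBlockInContext(ctx, fn, "entry");

    LLVMPositionBuilderAtEnd(b, entry);

    LLVMValueRef v_resnullp = LLVMGetParam(fn, 0);

    /*
     * Redundant pattern: load a value from v_resnullp and immediately
     * store the same value back to v_resnullp.  The pair has no effect,
     * which is why the patch drops it instead of leaving it for the
     * optimizer to clean up.
     */
    LLVMValueRef v_boolnull = LLVMBuildLoad2(b, i8, v_resnullp, "boolnull");
    LLVMBuildStore(b, v_boolnull, v_resnullp);

    LLVMBuildRetVoid(b);

    /* Dump the IR so the no-op load/store pair is visible. */
    char *ir = LLVMPrintModuleToString(mod);
    puts(ir);
    LLVMDisposeMessage(ir);

    LLVMDisposeBuilder(b);
    LLVMDisposeModule(mod);
    LLVMContextDispose(ctx);
    return 0;
}
```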
{
"msg_contents": "On 8/30/24 5:55 AM, Xing Guo wrote:\n> I find there are some unnecessary load/store instructions being\n> emitted by the JIT compiler.\n\nWell spotted! All of these are obvious dead instructions and while LLVM \nmight be able to optimize them away there is no reason to create extra \nwork for the optimizer.\n\nThe patch looks good, applies and the tests passes.\n\nAndreas\n\n\n",
"msg_date": "Fri, 30 Aug 2024 14:50:51 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: JIT: Remove some unnecessary instructions."
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 8:50 PM Andreas Karlsson <andreas@proxel.se> wrote:\n>\n> On 8/30/24 5:55 AM, Xing Guo wrote:\n> > I find there are some unnecessary load/store instructions being\n> > emitted by the JIT compiler.\n>\n> Well spotted! All of these are obvious dead instructions and while LLVM\n> might be able to optimize them away there is no reason to create extra\n> work for the optimizer.\n>\n> The patch looks good, applies and the tests passes.\n\nThanks for testing it! I spotted another unnecessary store instruction\nand added it in my V2 patch.\n\nBest Regards,\nXing",
"msg_date": "Mon, 2 Sep 2024 10:23:49 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: JIT: Remove some unnecessary instructions."
},
{
"msg_contents": "On 9/2/24 4:23 AM, Xing Guo wrote:\n> Thanks for testing it! I spotted another unnecessary store instruction\n> and added it in my V2 patch.\n\nAnother well-spotted unnecessary store. Nice!\n\nI think this patch is ready for committer. It is simple and pretty \nobviously correct.\n\nAndreas\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 21:06:27 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: JIT: Remove some unnecessary instructions."
},
{
"msg_contents": "On 9/2/24 9:06 PM, Andreas Karlsson wrote:\n> On 9/2/24 4:23 AM, Xing Guo wrote:\n>> Thanks for testing it! I spotted another unnecessary store instruction\n>> and added it in my V2 patch.\n> \n> Another well-spotted unnecessary store. Nice!\n> \n> I think this patch is ready for committer. It is simple and pretty \n> obviously correct.\n\nOh, and please add your patch to the commitfest app so it is not lost.\n\nhttps://commitfest.postgresql.org/50/\n\nAndreas\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 21:10:54 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: JIT: Remove some unnecessary instructions."
}
] |
[
{
"msg_contents": "H, hackersi! Tell me, please Do I understand correctly that\npg_dump/pg_restore has\nnot been taught to do COPY FREEZE when filling data? The last mention of\nthe plans was Bruce in 2013: https://postgrespro.com/list/thread-id/1815657\nhttps://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-copy-freeze/\nBest regards, Stepan Neretin.\n\nH, hackersi! Tell me, please Do I understand correctly that pg_dump/pg_restore hasnot been taught to do COPY FREEZE when filling data? The last mention ofthe plans was Bruce in 2013: https://postgrespro.com/list/thread-id/1815657https://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-copy-freeze/Best regards, Stepan Neretin.",
"msg_date": "Fri, 30 Aug 2024 08:34:08 +0300",
"msg_from": "Stepan Neretin <sndcppg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Supporting pg freeze in pg_dump, restore."
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at formatting.c file, I noticed a TODO about \"add support for\nroman number to standard number conversion\" (\nhttps://github.com/postgres/postgres/blob/master/src/backend/utils/adt/formatting.c#L52\n)\n\nI have attached the patch that adds support for this and converts roman\nnumerals to standard numbers (1 to 3999) while also checking if roman\nnumerals are valid or not.\nI have also added a new error code: ERRCODE_INVALID_ROMAN_NUMERAL in case\nof invalid numerals.\n\nA few examples:\n\npostgres=# SELECT to_number('MC', 'RN');\n to_number\n-----------\n 1100\n(1 row)\n\npostgres=# SELECT to_number('XIV', 'RN');\n to_number\n-----------\n 14\n(1 row)\n\npostgres=# SELECT to_number('MMMCMXCIX', 'RN');\n to_number\n-----------\n 3999\n(1 row)\n\npostgres=# SELECT to_number('MCCD', 'RN');\nERROR: invalid roman numeral\n\nI would appreciate your feedback on the following cases:\n- Is it okay for the function to handle Roman numerals in a\ncase-insensitive way? (e.g., 'XIV', 'xiv', and 'Xiv' are all seen as 14).\n- How should we handle Roman numerals with leading or trailing spaces, like\n' XIV' or 'MC '? Should we trim the spaces, or would it be better to throw\nan error in such cases?\n\nI have tested the changes and would appreciate any suggestions for\nimprovement. I will update the docs and regressions in the next version of\npatch.\n\nRegards,\nHunaid Sohail",
"msg_date": "Fri, 30 Aug 2024 12:21:52 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add roman support for to_number function"
},
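The patch itself is attached rather than inlined, so as a rough sketch only (not the code from the submitted patch, which lives in src/backend/utils/adt/formatting.c), here is one self-contained way such a conversion and strict validation can work in C: parse greedily, then regenerate the canonical spelling and compare. The helper names, the -1 error convention, and the 1..3999 range handling are illustrative assumptions; the regenerate-and-compare step is what rejects non-standard spellings such as 'MCCD', 'IIII', or 'viv'.

```c
/* Illustrative sketch only -- not the patch's implementation. */
#include <stdio.h>
#include <string.h>

static const struct
{
    const char *sym;
    int         val;
} tok[] = {
    {"M", 1000}, {"CM", 900}, {"D", 500}, {"CD", 400},
    {"C", 100}, {"XC", 90}, {"L", 50}, {"XL", 40},
    {"X", 10}, {"IX", 9}, {"V", 5}, {"IV", 4}, {"I", 1}
};

/* Write the canonical Roman spelling of n (1..3999) into buf. */
static void
int_to_roman(int n, char *buf)
{
    buf[0] = '\0';
    for (int i = 0; i < 13; i++)
        while (n >= tok[i].val)
        {
            strcat(buf, tok[i].sym);
            n -= tok[i].val;
        }
}

/* Return the value of a strict standard-form Roman numeral, or -1. */
static int
roman_to_int(const char *input)
{
    char        upper[32];
    char        canon[32];
    size_t      len = strlen(input);
    int         value = 0;
    const char *p;

    if (len == 0 || len > 15)   /* MMMDCCCLXXXVIII (3888) is the longest */
        return -1;
    for (size_t i = 0; i < len; i++)    /* ASCII-only upper-casing */
        upper[i] = (input[i] >= 'a' && input[i] <= 'z')
            ? (char) (input[i] - ('a' - 'A')) : input[i];
    upper[len] = '\0';

    /* Greedy parse over the token table. */
    p = upper;
    for (int i = 0; i < 13 && *p; i++)
        while (strncmp(p, tok[i].sym, strlen(tok[i].sym)) == 0)
        {
            value += tok[i].val;
            p += strlen(tok[i].sym);
        }
    if (*p != '\0' || value < 1 || value > 3999)
        return -1;

    /* Reject non-canonical spellings such as IIII or VIV. */
    int_to_roman(value, canon);
    return strcmp(canon, upper) == 0 ? value : -1;
}

int
main(void)
{
    const char *tests[] = {"MC", "xiv", "MMMCMXCIX", "MCCD", "IIII", "viv"};

    for (int i = 0; i < 6; i++)
        printf("%-10s -> %d\n", tests[i], roman_to_int(tests[i]));
    return 0;
}
```

Running the sketch prints 1100, 14, and 3999 for the first three inputs and -1 for the invalid ones, mirroring the examples in the message above.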
{
"msg_contents": "Thanks for the contribution.\n\nI took a look at the patch, and it works as advertised. It's too late\nfor the September commitfest, but I took the liberty of registering\nyour patch for the November CF [1]. In the course of that, I found an\nolder thread proposing this feature seven years ago [2]. That patch\nwas returned with feedback and (as far as I can tell), was not\nfollowed-up on by the author. You may want to review that thread for\nfeedback; I won't repeat it here.\n\nOn Fri, Aug 30, 2024 at 12:22 AM Hunaid Sohail <hunaidpgml@gmail.com> wrote:\n>While looking at formatting.c file, I noticed a TODO about \"add support for roman number to standard number conversion\" (https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/formatting.c#L52)\n\nYour patch should also remove the TODO =)\n\n> - Is it okay for the function to handle Roman numerals in a case-insensitive way? (e.g., 'XIV', 'xiv', and 'Xiv' are all seen as 14).\n\nThe patch in the thread I linked also took a case-insensitive\napproach. I did not see any objections to that, and it seems\nreasonable to me as well.\n\n> - How should we handle Roman numerals with leading or trailing spaces, like ' XIV' or 'MC '? Should we trim the spaces, or would it be better to throw an error in such cases?\n\nI thought we could reference existing to_number behavior here, but\nafter playing with it a bit, I'm not really sure what that is:\n\n-- single leading space\nmaciek=# select to_number(' 1', '9');\n to_number\n-----------\n 1\n(1 row)\n\n-- two leading spaces\nmaciek=# select to_number(' 1', '9');\nERROR: invalid input syntax for type numeric: \" \"\n-- two leading spaces and template pattern with a decimal\nmaciek=# select to_number(' 1', '9D9');\n to_number\n-----------\n 1\n(1 row)\n\nSeparately, I also noticed some unusual Roman representations work\nwith your patch:\n\npostgres=# select to_number('viv', 'RN');\n to_number\n-----------\n 9\n(1 row)\n\nIs this expected? In contrast, some somewhat common alternative\nrepresentations don't work:\n\npostgres=# select to_number('iiii', 'RN');\nERROR: invalid roman numeral\n\nI know this is expected, but is this the behavior we want? If so, we\nprobably want to reject the former case, too. If not, maybe that one\nis okay, too.\n\nI know I've probably offered more questions than answers, but I hope\nfinding the old thread here is useful.\n\nThanks,\nMaciek\n\n[1]: https://commitfest.postgresql.org/50/5233/\n[2]: https://www.postgresql.org/message-id/flat/CAGMVOduAJ9wKqJXBYnmFmEetKxapJxrG3afUwpbOZ6n_dWaUnA%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 2 Sep 2024 11:40:43 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "On Mon, Sep 2, 2024 at 11:41 PM Maciek Sakrejda <m.sakrejda@gmail.com>\nwrote:\n\n> Thanks for the contribution.\n>\n> I took a look at the patch, and it works as advertised. It's too late\n> for the September commitfest, but I took the liberty of registering\n> your patch for the November CF [1]. In the course of that, I found an\n> older thread proposing this feature seven years ago [2]. That patch\n> was returned with feedback and (as far as I can tell), was not\n> followed-up on by the author. You may want to review that thread for\n> feedback; I won't repeat it here.\n>\n\nI submitted the patch on Aug 30 because I read that new patches should be\nsubmitted in CF with \"Open\" status.\n\n\n> On Fri, Aug 30, 2024 at 12:22 AM Hunaid Sohail <hunaidpgml@gmail.com>\n> wrote:\n> >While looking at formatting.c file, I noticed a TODO about \"add support\n> for roman number to standard number conversion\" (\n> https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/formatting.c#L52\n> )\n>\n> Your patch should also remove the TODO =)\n>\n\nNoted.\n\n> - How should we handle Roman numerals with leading or trailing spaces,\n> like ' XIV' or 'MC '? Should we trim the spaces, or would it be better to\n> throw an error in such cases?\n>\n> I thought we could reference existing to_number behavior here, but\n> after playing with it a bit, I'm not really sure what that is:\n>\n> -- single leading space\n> maciek=# select to_number(' 1', '9');\n> to_number\n> -----------\n> 1\n> (1 row)\n>\n> -- two leading spaces\n> maciek=# select to_number(' 1', '9');\n> ERROR: invalid input syntax for type numeric: \" \"\n> -- two leading spaces and template pattern with a decimal\n> maciek=# select to_number(' 1', '9D9');\n> to_number\n> -----------\n> 1\n> (1 row)\n>\n\nYes, you are right. I can't understand the behaviour of trailing spaces\ntoo. Trailing spaces are ignored (doesn't matter how many spaces) but\nleading spaces are ignored if there is 1 leading space. For more leading\nspaces, error is returned.\nA few cases of trailing spaces.\npostgres=# select to_number(' 1', '9');\nERROR: invalid input syntax for type numeric: \" \"\npostgres=# select to_number('1 ', '9');\n to_number\n-----------\n 1\n(1 row)\n\npostgres=# select to_number('1 ', '9');\n to_number\n-----------\n 1\n(1 row)\n\npostgres=# select to_number('1 ', '9');\n to_number\n-----------\n 1\n(1 row)\n\n\nSeparately, I also noticed some unusual Roman representations work\n> with your patch:\n>\n> postgres=# select to_number('viv', 'RN');\n> to_number\n> -----------\n> 9\n> (1 row)\n>\n> Is this expected? In contrast, some somewhat common alternative\n> representations don't work:\n>\n> postgres=# select to_number('iiii', 'RN');\n> ERROR: invalid roman numeral\n>\n> I know this is expected, but is this the behavior we want? If so, we\n> probably want to reject the former case, too. If not, maybe that one\n> is okay, too.\n>\n\nYes, 'viv' is invalid. Thanks for pointing this out. Also, found a few\nother invalid cases like 'lxl' or 'dcd'. I will fix them in the next patch.\nThank you for your feedback.\n\nRegards,\nHunaid Sohail\n\nOn Mon, Sep 2, 2024 at 11:41 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:Thanks for the contribution.\n\nI took a look at the patch, and it works as advertised. It's too late\nfor the September commitfest, but I took the liberty of registering\nyour patch for the November CF [1]. In the course of that, I found an\nolder thread proposing this feature seven years ago [2]. 
That patch\nwas returned with feedback and (as far as I can tell), was not\nfollowed-up on by the author. You may want to review that thread for\nfeedback; I won't repeat it here.I submitted the patch on Aug 30 because I read that new patches should be submitted in CF with \"Open\" status.\n\nOn Fri, Aug 30, 2024 at 12:22 AM Hunaid Sohail <hunaidpgml@gmail.com> wrote:\n>While looking at formatting.c file, I noticed a TODO about \"add support for roman number to standard number conversion\" (https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/formatting.c#L52)\n\nYour patch should also remove the TODO =)Noted. \n> - How should we handle Roman numerals with leading or trailing spaces, like ' XIV' or 'MC '? Should we trim the spaces, or would it be better to throw an error in such cases?\n\nI thought we could reference existing to_number behavior here, but\nafter playing with it a bit, I'm not really sure what that is:\n\n-- single leading space\nmaciek=# select to_number(' 1', '9');\n to_number\n-----------\n 1\n(1 row)\n\n-- two leading spaces\nmaciek=# select to_number(' 1', '9');\nERROR: invalid input syntax for type numeric: \" \"\n-- two leading spaces and template pattern with a decimal\nmaciek=# select to_number(' 1', '9D9');\n to_number\n-----------\n 1\n(1 row)Yes, you are right. I can't understand the behaviour of trailing spaces too. Trailing spaces are ignored (doesn't matter how many spaces) but leading spaces are ignored if there is 1 leading space. For more leading spaces, error is returned.A few cases of trailing spaces.postgres=# select to_number(' 1', '9');ERROR: invalid input syntax for type numeric: \" \"postgres=# select to_number('1 ', '9'); to_number----------- 1(1 row)postgres=# select to_number('1 ', '9'); to_number----------- 1(1 row)postgres=# select to_number('1 ', '9'); to_number----------- 1(1 row) \nSeparately, I also noticed some unusual Roman representations work\nwith your patch:\n\npostgres=# select to_number('viv', 'RN');\n to_number\n-----------\n 9\n(1 row)\n\nIs this expected? In contrast, some somewhat common alternative\nrepresentations don't work:\n\npostgres=# select to_number('iiii', 'RN');\nERROR: invalid roman numeral\n\nI know this is expected, but is this the behavior we want? If so, we\nprobably want to reject the former case, too. If not, maybe that one\nis okay, too. Yes, 'viv' is invalid. Thanks for pointing this out. Also, found a few other invalid cases like 'lxl' or 'dcd'. I will fix them in the next patch.Thank you for your feedback.Regards,Hunaid Sohail",
"msg_date": "Tue, 3 Sep 2024 18:28:59 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 6:29 AM Hunaid Sohail <hunaidpgml@gmail.com> wrote:\n> I submitted the patch on Aug 30 because I read that new patches should be submitted in CF with \"Open\" status.\n\nOh my bad! I missed that you had submitted it to the September CF:\nhttps://commitfest.postgresql.org/49/5221/\n\nI don't see a way to just delete CF entries, so I've marked the\nNovember one that I created as Withdrawn.\n\nI'll add myself as reviewer to the September CF entry.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Tue, 3 Sep 2024 08:47:43 -0700",
"msg_from": "Maciek Sakrejda <maciek@pganalyze.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Hi,\n\nI have attached a new patch v2 with following changes:\n\n- Handled invalid cases like 'viv', 'lxl', and 'dcd'.\n- Changed errcode to 22P07 because 22P06 was already taken.\n- Removed TODO.\n- Added a few positive & negative tests.\n- Updated documentation.\n\nLooking forward to your feedback.\n\nRegards,\nHunaid Sohail\n\n\nOn Tue, Sep 3, 2024 at 8:47 PM Maciek Sakrejda <maciek@pganalyze.com> wrote:\n\n> On Tue, Sep 3, 2024 at 6:29 AM Hunaid Sohail <hunaidpgml@gmail.com> wrote:\n> > I submitted the patch on Aug 30 because I read that new patches should\n> be submitted in CF with \"Open\" status.\n>\n> Oh my bad! I missed that you had submitted it to the September CF:\n> https://commitfest.postgresql.org/49/5221/\n>\n> I don't see a way to just delete CF entries, so I've marked the\n> November one that I created as Withdrawn.\n>\n> I'll add myself as reviewer to the September CF entry.\n>\n> Thanks,\n> Maciek\n>",
"msg_date": "Thu, 5 Sep 2024 13:07:03 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Hi,\n\n> I have attached a new patch v2 with following changes:\n>\n> - Handled invalid cases like 'viv', 'lxl', and 'dcd'.\n> - Changed errcode to 22P07 because 22P06 was already taken.\n> - Removed TODO.\n> - Added a few positive & negative tests.\n> - Updated documentation.\n>\n> Looking forward to your feedback.\n\nWhile playing with the patch I noticed that to_char(..., 'RN') doesn't\nseem to be test-covered. I suggest adding the following test:\n\n```\nWITH rows AS (\n SELECT i, to_char(i, 'FMRN') AS roman\n FROM generate_series(1, 3999) AS i\n) SELECT bool_and(to_number(roman, 'RN') = i) FROM rows;\n\n bool_and\n----------\n t\n```\n\n... in order to fix this while on it. The query takes ~12 ms on my laptop.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 5 Sep 2024 12:41:42 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 2:41 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n>\n> While playing with the patch I noticed that to_char(..., 'RN') doesn't\n> seem to be test-covered. I suggest adding the following test:\n>\n> ```\n> WITH rows AS (\n> SELECT i, to_char(i, 'FMRN') AS roman\n> FROM generate_series(1, 3999) AS i\n> ) SELECT bool_and(to_number(roman, 'RN') = i) FROM rows;\n>\n> bool_and\n> ----------\n> t\n> ```\n>\n\nI also noticed there are no tests for to_char roman format. The test you\nprovided covers roman format in both to_char and to_number. I will add it.\nThank you.\n\nRegards,\nHunaid Sohail\n\nOn Thu, Sep 5, 2024 at 2:41 PM Aleksander Alekseev <aleksander@timescale.com> wrote:\nWhile playing with the patch I noticed that to_char(..., 'RN') doesn't\nseem to be test-covered. I suggest adding the following test:\n\n```\nWITH rows AS (\n SELECT i, to_char(i, 'FMRN') AS roman\n FROM generate_series(1, 3999) AS i\n) SELECT bool_and(to_number(roman, 'RN') = i) FROM rows;\n\n bool_and\n----------\n t\n```I also noticed there are no tests for to_char roman format. The test you provided covers roman format in both to_char and to_number. I will add it.Thank you.Regards,Hunaid Sohail",
"msg_date": "Thu, 5 Sep 2024 18:07:34 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Hi,\n\n\nOn Thu, Sep 5, 2024 at 2:41 PM Aleksander Alekseev <aleksander@timescale.com>\n> wrote:\n>\n>>\n>> While playing with the patch I noticed that to_char(..., 'RN') doesn't\n>> seem to be test-covered. I suggest adding the following test:\n>>\n>> ```\n>> WITH rows AS (\n>> SELECT i, to_char(i, 'FMRN') AS roman\n>> FROM generate_series(1, 3999) AS i\n>> ) SELECT bool_and(to_number(roman, 'RN') = i) FROM rows;\n>>\n>> bool_and\n>> ----------\n>> t\n>> ```\n>>\n>\n>\nI have added this test in the attached patch (v3). Thank you once again,\nAleksander, for the suggestion.\n\nRegards,\nHunaid Sohail",
"msg_date": "Sat, 7 Sep 2024 14:24:23 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, passed\n\nTested again, and the patch looks good. It does not accept leading or trailing whitespace, which seems reasonable, given the unclear behavior of to_number with other format strings. It also rejects less common Roman spellings like \"IIII\". I don't feel strongly about that one way or the other, but perhaps a test codifying that behavior would be useful to make it clear it's intentional.\r\n\r\nI'm marking it RfC.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Sat, 07 Sep 2024 18:51:29 +0000",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Sorry, it looks like I failed to accurately log my review in the\nreview app due to the current broken layout issues [1]. The summary\nshould be:\n\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested (not sure what the spec has to\nsay about this)\nDocumentation: tested, passed\n\n[1]: https://www.postgresql.org/message-id/flat/CAD68Dp1GgTeBiA0YVWXpfPCMC7%3DnqBCLn6%2BhuOkWURcKRnFw5g%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 7 Sep 2024 11:57:00 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> Tested again, and the patch looks good. It does not accept leading or trailing whitespace, which seems reasonable, given the unclear behavior of to_number with other format strings. It also rejects less common Roman spellings like \"IIII\". I don't feel strongly about that one way or the other, but perhaps a test codifying that behavior would be useful to make it clear it's intentional.\n\nYeah, I don't have a strong feeling about that either, but probably\nbeing strict is better. to_number has a big problem with \"garbage in\ngarbage out\" already, and being lax will make that worse.\n\nA few notes from a quick read of the patch:\n\n* roman_to_int() should have a header comment, notably explaining its\nresult convention. I find it fairly surprising that \"0\" means an\nerror --- I realize that Roman notation can't represent zero, but\nwouldn't it be better to use -1?\n\n* Do *not* rely on toupper(). There are multiple reasons why not,\nbut in this context the worst one is that in Turkish locales it's\nlikely to translate \"i\" to \"İ\", on which you will fail. I'd use\npg_ascii_toupper().\n\n* I think roman_to_int() is under-commented internally too.\nTo start with, where did the magic \"15\" come from? And why\nshould we have such a test anyway --- what if the format allows\nfor trailing stuff after the roman numeral? (That is, I think\nyou need to fix this so that roman_to_int reports how much data\nit ate, instead of assuming that it must read to the end of the\ninput.) The logic for detecting invalid numeral combinations\nfeels a little opaque too. Do you have a source for the rules\nyou're implementing, and if so could you cite it?\n\n* This code will get run through pgindent before commit, so you\nmight want to revisit whether your comments will still look nice\nafterwards. There's not a lot of consistency in them about initial\ncap versus lower case or trailing period versus not, too.\n\n* roman_result could be declared where used, no?\n\n* I'm not really convinced that we need a new errcode\nERRCODE_INVALID_ROMAN_NUMERAL rather than using a generic one like\nERRCODE_INVALID_TEXT_REPRESENTATION. However, if we do do it\nlike that, why did you pick the out-of-sequence value 22P07?\n\n* Might be good to have a few test cases demonstrating what happens\nwhen there's other stuff combined with the RN format spec. Notably,\neven if RN itself won't eat whitespace, there had better be a way\nto write the format string to allow that.\n\n* Further to Aleksander's point about lack of test coverage for\nthe to_char direction, I see from\nhttps://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\nthat almost none of the existing roman-number code paths are covered\ntoday. While it's not strictly within the charter of this patch\nto improve that, maybe we should try to do so --- at the very\nleast it'd raise formatting.c's coverage score a few notches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Sep 2024 19:09:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
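As a hypothetical sketch of the "report how much data it ate" interface suggested in the review above (again, not code from the patch), the parser below reads a Roman-numeral prefix and returns how many bytes it consumed, so a caller could continue scanning whatever the rest of the format string expects; strict standard-form validation as in the earlier sketch is left out for brevity.

```c
/* Hypothetical interface sketch only -- not from the patch. */
#include <stdio.h>
#include <string.h>

static const struct
{
    const char *sym;
    int         val;
} tok[] = {
    {"M", 1000}, {"CM", 900}, {"D", 500}, {"CD", 400},
    {"C", 100}, {"XC", 90}, {"L", 50}, {"XL", 40},
    {"X", 10}, {"IX", 9}, {"V", 5}, {"IV", 4}, {"I", 1}
};

/*
 * Parse a Roman-numeral prefix of "str".  Set *consumed to the number of
 * bytes read and return the value, or return -1 (with *consumed possibly
 * nonzero) if no acceptable numeral starts here.
 */
static int
roman_prefix_to_int(const char *str, int *consumed)
{
    const char *p = str;
    int         value = 0;

    for (int i = 0; i < 13; i++)
        while (strncmp(p, tok[i].sym, strlen(tok[i].sym)) == 0)
        {
            value += tok[i].val;
            p += strlen(tok[i].sym);
        }
    *consumed = (int) (p - str);
    return (*consumed > 0 && value <= 3999) ? value : -1;
}

int
main(void)
{
    int         used;
    int         val = roman_prefix_to_int("CXXIII foo", &used);

    /* Prints: value=123, consumed=6, rest=" foo" */
    printf("value=%d, consumed=%d, rest=\"%s\"\n", val, used, "CXXIII foo" + used);
    return 0;
}
```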
{
"msg_contents": "I wrote:\n> * Further to Aleksander's point about lack of test coverage for\n> the to_char direction, I see from\n> https://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\n> that almost none of the existing roman-number code paths are covered\n> today. While it's not strictly within the charter of this patch\n> to improve that, maybe we should try to do so --- at the very\n> least it'd raise formatting.c's coverage score a few notches.\n\nFor the archives' sake: I created a patch and a separate discussion\nthread [1] for that.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/2956175.1725831136@sss.pgh.pa.us\n\n\n",
"msg_date": "Sun, 08 Sep 2024 17:44:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Hi,\n\nOn Sun, Sep 8, 2024 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> A few notes from a quick read of the patch:\n>\n> * roman_to_int() should have a header comment, notably explaining its\n> result convention. I find it fairly surprising that \"0\" means an\n> error --- I realize that Roman notation can't represent zero, but\n> wouldn't it be better to use -1?\n>\n\nNoted.\n\n\n>\n> * Do *not* rely on toupper(). There are multiple reasons why not,\n> but in this context the worst one is that in Turkish locales it's\n> likely to translate \"i\" to \"İ\", on which you will fail. I'd use\n> pg_ascii_toupper().\n>\n\nNoted.\n\n\n>\n> * I think roman_to_int() is under-commented internally too.\n> To start with, where did the magic \"15\" come from? And why\n> should we have such a test anyway --- what if the format allows\n> for trailing stuff after the roman numeral? (That is, I think\n> you need to fix this so that roman_to_int reports how much data\n> it ate, instead of assuming that it must read to the end of the\n> input.)\n\n\nMMMDCCCLXXXVIII is the longest valid standard roman numeral (15\ncharacters). I will add appropriate comment on length check.\n\nI am not sure I am able to understand the latter part. Could you please\nelaborate it?\nAre you referring to cases like these?\nSELECT to_number('CXXIII foo', 'RN foo');\nSELECT to_number('CXXIII foo', 'RN');\nPlease check my comments on Oracle's behaviour below.\n\n\n\n> The logic for detecting invalid numeral combinations\n> feels a little opaque too. Do you have a source for the rules\n> you're implementing, and if so could you cite it?\n>\n\nThere are many sources on the internet.\nA few sources:\n1. https://www.toppr.com/guides/maths/knowing-our-numbers/roman-numerals/\n2. https://www.cuemath.com/numbers/roman-numerals/\n\nNote that we are following the standard form:\nhttps://en.wikipedia.org/wiki/Roman_numerals#Standard_form\n\n\n>\n> * This code will get run through pgindent before commit, so you\n> might want to revisit whether your comments will still look nice\n> afterwards. There's not a lot of consistency in them about initial\n> cap versus lower case or trailing period versus not, too.\n>\n\nNoted.\n\n\n>\n> * roman_result could be declared where used, no?\n>\n\nNoted.\n\n\n>\n> * I'm not really convinced that we need a new errcode\n> ERRCODE_INVALID_ROMAN_NUMERAL rather than using a generic one like\n> ERRCODE_INVALID_TEXT_REPRESENTATION. However, if we do do it\n> like that, why did you pick the out-of-sequence value 22P07?\n>\n\n22P07 is not out-of-sequence. 22P01 to 22P06 are already used.\nHowever, I do agree with you that we can use\nERRCODE_INVALID_TEXT_REPRESENTATION. I will change it.\n\n\n>\n> * Might be good to have a few test cases demonstrating what happens\n> when there's other stuff combined with the RN format spec. Notably,\n> even if RN itself won't eat whitespace, there had better be a way\n> to write the format string to allow that.\n>\n\nThe current patch (v3) simply ignores other formats with RN except when RN\nis used EEEE which returns error.\n```\npostgres=# SELECT to_number('CXXIII', 'foo RN');\n to_number\n-----------\n 123\n(1 row)\n\npostgres=# SELECT to_number('CXXIII', 'MIrn');\n to_number\n-----------\n 123\n(1 row)\n\npostgres=# SELECT to_number('CXXIII', 'EEEErn');\nERROR: \"EEEE\" must be the last pattern used\n```\n\nHowever, I think that other formats except FM should be rejected when used\nwith RN in NUMDesc_prepare function. 
Any opinions?\n\nIf we look into Oracle, they are strict in this regard and don't allow any\nother format with RN.\n1. SELECT to_char(12, 'MIrn') from dual;\n2. SELECT to_char(12, 'foo RN') from dual;\nresults in error:\nORA-01481: invalid number format model\n\nI also found that there is no check when RN is used twice (both in to_char\nand to_number) and that results in unexpected output.\n\n```\npostgres=# SELECT to_number('CXXIII', 'RNrn');\n to_number\n-----------\n 123\n(1 row)\n\npostgres=# SELECT to_char(12, 'RNrn');\n to_char\n--------------------------------\n XII xii\n(1 row)\n\npostgres=# SELECT to_char(12, 'rnrn');\n to_char\n--------------------------------\n xii xii\n(1 row)\n\npostgres=# SELECT to_char(12, 'FMrnrn');\n to_char\n---------\n xiixii\n(1 row)\n```\n\n\n> * Further to Aleksander's point about lack of test coverage for\n> the to_char direction, I see from\n>\n> https://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\n> that almost none of the existing roman-number code paths are covered\n> today. While it's not strictly within the charter of this patch\n> to improve that, maybe we should try to do so --- at the very\n> least it'd raise formatting.c's coverage score a few notches.\n>\n>\nI see that you have provided a patch to increase test coverage of to_char\nroman format including some other formats. I will try to add more tests for\nthe to_number roman format.\n\n\nI will provide the next patch soon.\n\nRegards,\nHunaid Sohail\n\nHi,On Sun, Sep 8, 2024 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nA few notes from a quick read of the patch:\n\n* roman_to_int() should have a header comment, notably explaining its\nresult convention. I find it fairly surprising that \"0\" means an\nerror --- I realize that Roman notation can't represent zero, but\nwouldn't it be better to use -1?Noted. \n\n* Do *not* rely on toupper(). There are multiple reasons why not,\nbut in this context the worst one is that in Turkish locales it's\nlikely to translate \"i\" to \"İ\", on which you will fail. I'd use\npg_ascii_toupper().Noted. \n\n* I think roman_to_int() is under-commented internally too.\nTo start with, where did the magic \"15\" come from? And why\nshould we have such a test anyway --- what if the format allows\nfor trailing stuff after the roman numeral? (That is, I think\nyou need to fix this so that roman_to_int reports how much data\nit ate, instead of assuming that it must read to the end of the\ninput.) MMMDCCCLXXXVIII is the longest valid standard roman numeral (15 characters). I will add appropriate comment on length check.I am not sure I am able to understand the latter part. Could you please elaborate it?Are you referring to cases like these?SELECT to_number('CXXIII foo', 'RN foo');SELECT to_number('CXXIII foo', 'RN');Please check my comments on Oracle's behaviour below. The logic for detecting invalid numeral combinations\nfeels a little opaque too. Do you have a source for the rules\nyou're implementing, and if so could you cite it?There are many sources on the internet.A few sources:1. https://www.toppr.com/guides/maths/knowing-our-numbers/roman-numerals/2. https://www.cuemath.com/numbers/roman-numerals/Note that we are following the standard form: https://en.wikipedia.org/wiki/Roman_numerals#Standard_form \n\n* This code will get run through pgindent before commit, so you\nmight want to revisit whether your comments will still look nice\nafterwards. 
There's not a lot of consistency in them about initial\ncap versus lower case or trailing period versus not, too.Noted. \n\n* roman_result could be declared where used, no?Noted. \n\n* I'm not really convinced that we need a new errcode\nERRCODE_INVALID_ROMAN_NUMERAL rather than using a generic one like\nERRCODE_INVALID_TEXT_REPRESENTATION. However, if we do do it\nlike that, why did you pick the out-of-sequence value 22P07?22P07 is not out-of-sequence. 22P01 to 22P06 are already used.However, I do agree with you that we can use ERRCODE_INVALID_TEXT_REPRESENTATION. I will change it. \n\n* Might be good to have a few test cases demonstrating what happens\nwhen there's other stuff combined with the RN format spec. Notably,\neven if RN itself won't eat whitespace, there had better be a way\nto write the format string to allow that.The current patch (v3) simply ignores other formats with RN except when RN is used EEEE which returns error.```postgres=# SELECT to_number('CXXIII', 'foo RN'); to_number----------- 123(1 row)postgres=# SELECT to_number('CXXIII', 'MIrn'); to_number----------- 123(1 row)postgres=# SELECT to_number('CXXIII', 'EEEErn');ERROR: \"EEEE\" must be the last pattern used```However, I think that other formats except FM should be rejected when used with RN in NUMDesc_prepare function. Any opinions?If we look into Oracle, they are strict in this regard and don't allow any other format with RN.1. SELECT to_char(12, 'MIrn') from dual;2. SELECT to_char(12, 'foo RN') from dual;results in error:ORA-01481: invalid number format modelI also found that there is no check when RN is used twice (both in to_char and to_number) and that results in unexpected output.```postgres=# SELECT to_number('CXXIII', 'RNrn'); to_number----------- 123(1 row)postgres=# SELECT to_char(12, 'RNrn'); to_char-------------------------------- XII xii(1 row)postgres=# SELECT to_char(12, 'rnrn'); to_char-------------------------------- xii xii(1 row)postgres=# SELECT to_char(12, 'FMrnrn'); to_char--------- xiixii(1 row)``` \n\n* Further to Aleksander's point about lack of test coverage for\nthe to_char direction, I see from\nhttps://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\nthat almost none of the existing roman-number code paths are covered\ntoday. While it's not strictly within the charter of this patch\nto improve that, maybe we should try to do so --- at the very\nleast it'd raise formatting.c's coverage score a few notches.I see that you have provided a patch to increase test coverage of to_char roman format including some other formats. I will try to add more tests for the to_number roman format.I will provide the next patch soon.Regards,Hunaid Sohail",
"msg_date": "Mon, 9 Sep 2024 17:45:17 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Hi,\n\nI have started working on the next version of the patch and have addressed\nthe smaller issues identified by Tom. Before proceeding further, I would\nlike to have opinions on some comments/questions in my previous email.\n\nRegards,\nHunaid Sohail\n\nOn Mon, Sep 9, 2024 at 5:45 PM Hunaid Sohail <hunaidpgml@gmail.com> wrote:\n\n> Hi,\n>\n> On Sun, Sep 8, 2024 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> A few notes from a quick read of the patch:\n>>\n>> * roman_to_int() should have a header comment, notably explaining its\n>> result convention. I find it fairly surprising that \"0\" means an\n>> error --- I realize that Roman notation can't represent zero, but\n>> wouldn't it be better to use -1?\n>>\n>\n> Noted.\n>\n>\n>>\n>> * Do *not* rely on toupper(). There are multiple reasons why not,\n>> but in this context the worst one is that in Turkish locales it's\n>> likely to translate \"i\" to \"İ\", on which you will fail. I'd use\n>> pg_ascii_toupper().\n>>\n>\n> Noted.\n>\n>\n>>\n>> * I think roman_to_int() is under-commented internally too.\n>> To start with, where did the magic \"15\" come from? And why\n>> should we have such a test anyway --- what if the format allows\n>> for trailing stuff after the roman numeral? (That is, I think\n>> you need to fix this so that roman_to_int reports how much data\n>> it ate, instead of assuming that it must read to the end of the\n>> input.)\n>\n>\n> MMMDCCCLXXXVIII is the longest valid standard roman numeral (15\n> characters). I will add appropriate comment on length check.\n>\n> I am not sure I am able to understand the latter part. Could you please\n> elaborate it?\n> Are you referring to cases like these?\n> SELECT to_number('CXXIII foo', 'RN foo');\n> SELECT to_number('CXXIII foo', 'RN');\n> Please check my comments on Oracle's behaviour below.\n>\n>\n>\n>> The logic for detecting invalid numeral combinations\n>> feels a little opaque too. Do you have a source for the rules\n>> you're implementing, and if so could you cite it?\n>>\n>\n> There are many sources on the internet.\n> A few sources:\n> 1. https://www.toppr.com/guides/maths/knowing-our-numbers/roman-numerals/\n> 2. https://www.cuemath.com/numbers/roman-numerals/\n>\n> Note that we are following the standard form:\n> https://en.wikipedia.org/wiki/Roman_numerals#Standard_form\n>\n>\n>>\n>> * This code will get run through pgindent before commit, so you\n>> might want to revisit whether your comments will still look nice\n>> afterwards. There's not a lot of consistency in them about initial\n>> cap versus lower case or trailing period versus not, too.\n>>\n>\n> Noted.\n>\n>\n>>\n>> * roman_result could be declared where used, no?\n>>\n>\n> Noted.\n>\n>\n>>\n>> * I'm not really convinced that we need a new errcode\n>> ERRCODE_INVALID_ROMAN_NUMERAL rather than using a generic one like\n>> ERRCODE_INVALID_TEXT_REPRESENTATION. However, if we do do it\n>> like that, why did you pick the out-of-sequence value 22P07?\n>>\n>\n> 22P07 is not out-of-sequence. 22P01 to 22P06 are already used.\n> However, I do agree with you that we can use\n> ERRCODE_INVALID_TEXT_REPRESENTATION. I will change it.\n>\n>\n>>\n>> * Might be good to have a few test cases demonstrating what happens\n>> when there's other stuff combined with the RN format spec. 
Notably,\n>> even if RN itself won't eat whitespace, there had better be a way\n>> to write the format string to allow that.\n>>\n>\n> The current patch (v3) simply ignores other formats with RN except when RN\n> is used EEEE which returns error.\n> ```\n> postgres=# SELECT to_number('CXXIII', 'foo RN');\n> to_number\n> -----------\n> 123\n> (1 row)\n>\n> postgres=# SELECT to_number('CXXIII', 'MIrn');\n> to_number\n> -----------\n> 123\n> (1 row)\n>\n> postgres=# SELECT to_number('CXXIII', 'EEEErn');\n> ERROR: \"EEEE\" must be the last pattern used\n> ```\n>\n> However, I think that other formats except FM should be rejected when used\n> with RN in NUMDesc_prepare function. Any opinions?\n>\n> If we look into Oracle, they are strict in this regard and don't allow any\n> other format with RN.\n> 1. SELECT to_char(12, 'MIrn') from dual;\n> 2. SELECT to_char(12, 'foo RN') from dual;\n> results in error:\n> ORA-01481: invalid number format model\n>\n> I also found that there is no check when RN is used twice (both in to_char\n> and to_number) and that results in unexpected output.\n>\n> ```\n> postgres=# SELECT to_number('CXXIII', 'RNrn');\n> to_number\n> -----------\n> 123\n> (1 row)\n>\n> postgres=# SELECT to_char(12, 'RNrn');\n> to_char\n> --------------------------------\n> XII xii\n> (1 row)\n>\n> postgres=# SELECT to_char(12, 'rnrn');\n> to_char\n> --------------------------------\n> xii xii\n> (1 row)\n>\n> postgres=# SELECT to_char(12, 'FMrnrn');\n> to_char\n> ---------\n> xiixii\n> (1 row)\n> ```\n>\n>\n>> * Further to Aleksander's point about lack of test coverage for\n>> the to_char direction, I see from\n>>\n>> https://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\n>> that almost none of the existing roman-number code paths are covered\n>> today. While it's not strictly within the charter of this patch\n>> to improve that, maybe we should try to do so --- at the very\n>> least it'd raise formatting.c's coverage score a few notches.\n>>\n>>\n> I see that you have provided a patch to increase test coverage of to_char\n> roman format including some other formats. I will try to add more tests for\n> the to_number roman format.\n>\n>\n> I will provide the next patch soon.\n>\n> Regards,\n> Hunaid Sohail\n>\n\nHi,I have started working on the next version of the patch and have addressed the smaller issues identified by Tom. Before proceeding further, I would like to have opinions on some comments/questions in my previous email.Regards,Hunaid SohailOn Mon, Sep 9, 2024 at 5:45 PM Hunaid Sohail <hunaidpgml@gmail.com> wrote:Hi,On Sun, Sep 8, 2024 at 4:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nA few notes from a quick read of the patch:\n\n* roman_to_int() should have a header comment, notably explaining its\nresult convention. I find it fairly surprising that \"0\" means an\nerror --- I realize that Roman notation can't represent zero, but\nwouldn't it be better to use -1?Noted. \n\n* Do *not* rely on toupper(). There are multiple reasons why not,\nbut in this context the worst one is that in Turkish locales it's\nlikely to translate \"i\" to \"İ\", on which you will fail. I'd use\npg_ascii_toupper().Noted. \n\n* I think roman_to_int() is under-commented internally too.\nTo start with, where did the magic \"15\" come from? And why\nshould we have such a test anyway --- what if the format allows\nfor trailing stuff after the roman numeral? 
(That is, I think\nyou need to fix this so that roman_to_int reports how much data\nit ate, instead of assuming that it must read to the end of the\ninput.) MMMDCCCLXXXVIII is the longest valid standard roman numeral (15 characters). I will add appropriate comment on length check.I am not sure I am able to understand the latter part. Could you please elaborate it?Are you referring to cases like these?SELECT to_number('CXXIII foo', 'RN foo');SELECT to_number('CXXIII foo', 'RN');Please check my comments on Oracle's behaviour below. The logic for detecting invalid numeral combinations\nfeels a little opaque too. Do you have a source for the rules\nyou're implementing, and if so could you cite it?There are many sources on the internet.A few sources:1. https://www.toppr.com/guides/maths/knowing-our-numbers/roman-numerals/2. https://www.cuemath.com/numbers/roman-numerals/Note that we are following the standard form: https://en.wikipedia.org/wiki/Roman_numerals#Standard_form \n\n* This code will get run through pgindent before commit, so you\nmight want to revisit whether your comments will still look nice\nafterwards. There's not a lot of consistency in them about initial\ncap versus lower case or trailing period versus not, too.Noted. \n\n* roman_result could be declared where used, no?Noted. \n\n* I'm not really convinced that we need a new errcode\nERRCODE_INVALID_ROMAN_NUMERAL rather than using a generic one like\nERRCODE_INVALID_TEXT_REPRESENTATION. However, if we do do it\nlike that, why did you pick the out-of-sequence value 22P07?22P07 is not out-of-sequence. 22P01 to 22P06 are already used.However, I do agree with you that we can use ERRCODE_INVALID_TEXT_REPRESENTATION. I will change it. \n\n* Might be good to have a few test cases demonstrating what happens\nwhen there's other stuff combined with the RN format spec. Notably,\neven if RN itself won't eat whitespace, there had better be a way\nto write the format string to allow that.The current patch (v3) simply ignores other formats with RN except when RN is used EEEE which returns error.```postgres=# SELECT to_number('CXXIII', 'foo RN'); to_number----------- 123(1 row)postgres=# SELECT to_number('CXXIII', 'MIrn'); to_number----------- 123(1 row)postgres=# SELECT to_number('CXXIII', 'EEEErn');ERROR: \"EEEE\" must be the last pattern used```However, I think that other formats except FM should be rejected when used with RN in NUMDesc_prepare function. Any opinions?If we look into Oracle, they are strict in this regard and don't allow any other format with RN.1. SELECT to_char(12, 'MIrn') from dual;2. SELECT to_char(12, 'foo RN') from dual;results in error:ORA-01481: invalid number format modelI also found that there is no check when RN is used twice (both in to_char and to_number) and that results in unexpected output.```postgres=# SELECT to_number('CXXIII', 'RNrn'); to_number----------- 123(1 row)postgres=# SELECT to_char(12, 'RNrn'); to_char-------------------------------- XII xii(1 row)postgres=# SELECT to_char(12, 'rnrn'); to_char-------------------------------- xii xii(1 row)postgres=# SELECT to_char(12, 'FMrnrn'); to_char--------- xiixii(1 row)``` \n\n* Further to Aleksander's point about lack of test coverage for\nthe to_char direction, I see from\nhttps://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\nthat almost none of the existing roman-number code paths are covered\ntoday. 
While it's not strictly within the charter of this patch\nto improve that, maybe we should try to do so --- at the very\nleast it'd raise formatting.c's coverage score a few notches.I see that you have provided a patch to increase test coverage of to_char roman format including some other formats. I will try to add more tests for the to_number roman format.I will provide the next patch soon.Regards,Hunaid Sohail",
"msg_date": "Thu, 12 Sep 2024 12:44:23 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
},
{
"msg_contents": "Hi,\n\nI have attached a new patch with following changes:\n- Addressed minor issues identified by Tom.\n- Rejected other formats with RN and updated the docs.\n- Added a few more tests.\n\n\nRegards,\nHunaid Sohail\n\n>",
"msg_date": "Thu, 19 Sep 2024 12:51:51 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add roman support for to_number function"
}
] |
[
{
"msg_contents": "I normally wouldn't mention my blog entries here, but this one was about\nthe hackers mailing list, so wanted to let people know about it in case you\ndon't follow Planet Postgres. I scanned the last year's worth of posts and\ngathered the most used acronyms and jargon. The most commonly used acronym\nwas IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC\n(if I understand correctly). The complete list can be found in the post\nbelow, I'll refrain from copying everything here.\n\nhttps://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-list\n\nCheers,\nGreg\n\nI normally wouldn't mention my blog entries here, but this one was about the hackers mailing list, so wanted to let people know about it in case you don't follow Planet Postgres. I scanned the last year's worth of posts and gathered the most used acronyms and jargon. The most commonly used acronym was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC (if I understand correctly). The complete list can be found in the post below, I'll refrain from copying everything here.https://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-listCheers,Greg",
"msg_date": "Fri, 30 Aug 2024 12:01:42 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Jargon and acronyms on this mailing list"
},
{
"msg_contents": "Greg Sabino Mullane <htamfids@gmail.com> writes:\n\n> I normally wouldn't mention my blog entries here, but this one was about\n> the hackers mailing list, so wanted to let people know about it in case you\n> don't follow Planet Postgres. I scanned the last year's worth of posts and\n> gathered the most used acronyms and jargon. The most commonly used acronym\n> was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC\n> (if I understand correctly). The complete list can be found in the post\n> below, I'll refrain from copying everything here.\n>\n> https://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-list\n\nNice write-up! Might it also be worth linking to the acronyms and\nglossary sections of the docs?\n\nhttps://www.postgresql.org/docs/current/acronyms.html\nhttps://www.postgresql.org/docs/current/glossary.html\n\n> Cheers,\n> Greg\n\n- ilmari\n\n\n",
"msg_date": "Mon, 02 Sep 2024 12:06:18 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "> On 2 Sep 2024, at 13:06, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n> \n> Greg Sabino Mullane <htamfids@gmail.com> writes:\n> \n>> I normally wouldn't mention my blog entries here, but this one was about\n>> the hackers mailing list, so wanted to let people know about it in case you\n>> don't follow Planet Postgres. I scanned the last year's worth of posts and\n>> gathered the most used acronyms and jargon. The most commonly used acronym\n>> was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC\n>> (if I understand correctly). The complete list can be found in the post\n>> below, I'll refrain from copying everything here.\n>> \n>> https://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-list\n> \n> Nice write-up!\n\n+1\n\n> Might it also be worth linking to the acronyms and\n> glossary sections of the docs?\n\nOr maybe on the site under https://www.postgresql.org/list/ in some way?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 13:38:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Fri, Aug 30, 2024 at 12:01:42PM -0400, Greg Sabino Mullane wrote:\n> I normally wouldn't mention my blog entries here, but this one was about\n> the hackers mailing list, so wanted to let people know about it in case you\n> don't follow Planet Postgres. I scanned the last year's worth of posts and\n> gathered the most used acronyms and jargon. The most commonly used acronym\n> was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC\n> (if I understand correctly). The complete list can be found in the post\n> below, I'll refrain from copying everything here.\n\nDo you think these acronyms make it difficult for some to contribute to\nPostgres? I've always felt that they were pretty easy to figure out and a\nnice way to save some typing for common phrases, but I'm not sure it's ever\nreally been discussed.\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 3 Sep 2024 10:50:20 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On 2024-Aug-30, Greg Sabino Mullane wrote:\n\n> I normally wouldn't mention my blog entries here, but this one was about\n> the hackers mailing list, so wanted to let people know about it in case you\n> don't follow Planet Postgres. I scanned the last year's worth of posts and\n> gathered the most used acronyms and jargon. The most commonly used acronym\n> was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC\n> (if I understand correctly). The complete list can be found in the post\n> below, I'll refrain from copying everything here.\n> \n> https://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-list\n\nGood post, thanks for taking the time.\n\nThis seems a great resource to link in the\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F\npage or maybe in\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\nor both ... and also this one\nhttp://rhaas.blogspot.com/2024/08/posting-your-patch-on-pgsql-hackers.html\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Niemand ist mehr Sklave, als der sich für frei hält, ohne es zu sein.\"\n Nadie está tan esclavizado como el que se cree libre no siéndolo\n (Johann Wolfgang von Goethe)\n\n\n",
"msg_date": "Thu, 5 Sep 2024 14:14:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Sat, 31 Aug 2024 at 04:02, Greg Sabino Mullane <htamfids@gmail.com> wrote:\n> I normally wouldn't mention my blog entries here, but this one was about the hackers mailing list, so wanted to let people know about it in case you don't follow Planet Postgres. I scanned the last year's worth of posts and gathered the most used acronyms and jargon. The most commonly used acronym was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC (if I understand correctly). The complete list can be found in the post below, I'll refrain from copying everything here.\n>\n> https://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-list\n\nI think this is useful. Thanks for posting.\n\nI think HEAD is commonly misused to mean master instead of the latest\ncommit of the current branch. I see the buildfarm even does that.\nThanks for getting that right in your blog post.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Sep 2024 12:23:51 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "> I normally wouldn't mention my blog entries here, but this one was about\n> the hackers mailing list, so wanted to let people know about it in case you\n> don't follow Planet Postgres. I scanned the last year's worth of posts and\n> gathered the most used acronyms and jargon. The most commonly used acronym\n> was IMO (in my opinion), followed by FWIW (for what it's worth), and IIUC\n> (if I understand correctly). The complete list can be found in the post\n> below, I'll refrain from copying everything here.\n> \n> https://www.crunchydata.com/blog/understanding-the-postgres-hackers-mailing-list\n\nThank you for the excellent article. I think it is very useful for\nnon-native English speakers like me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 09 Sep 2024 09:32:28 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I think HEAD is commonly misused to mean master instead of the latest\n> commit of the current branch. I see the buildfarm even does that.\n> Thanks for getting that right in your blog post.\n\nIIRC, HEAD *was* the technically correct term back when we were\nusing CVS. Old habits die hard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Sep 2024 23:35:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Sun, Sep 08, 2024 at 11:35:36PM -0400, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> I think HEAD is commonly misused to mean master instead of the latest\n>> commit of the current branch. I see the buildfarm even does that.\n>> Thanks for getting that right in your blog post.\n> \n> IIRC, HEAD *was* the technically correct term back when we were\n> using CVS. Old habits die hard.\n\nEven if it's a new habit from a new technology. I began getting\ninvolved with upstream the year when the moved from CVS to git\nhappened, and just inherited this habit from everybody else ;)\n--\nMichael",
"msg_date": "Mon, 9 Sep 2024 12:42:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "\nOn 2024-09-08 Su 11:35 PM, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> I think HEAD is commonly misused to mean master instead of the latest\n>> commit of the current branch. I see the buildfarm even does that.\n>> Thanks for getting that right in your blog post.\n> IIRC, HEAD *was* the technically correct term back when we were\n> using CVS. Old habits die hard.\n>\n> \t\n\n\nYeah. The reason we kept doing it that way in the buildfarm was that for \na period we actually had some animals using CVS and some using that \nnew-fangled git thing.\n\nI guess I could try to write code to migrate everything, but it would be \nsomewhat fragile. And what would we do if we ever decided to migrate \n\"master\" to another name like \"main\"? I do at least have code ready for \nthat eventuality, but it would (currently) still keep the visible name \nof HEAD.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 13:02:51 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 1:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I guess I could try to write code to migrate everything, but it would be\n> somewhat fragile. And what would we do if we ever decided to migrate\n> \"master\" to another name like \"main\"? I do at least have code ready for\n> that eventuality, but it would (currently) still keep the visible name\n> of HEAD.\n\nPersonally, I think using HEAD to mean master is really confusing. In\ngit, master is a branch name, and HEAD is the tip of some branch, or\nthe random commit you've checked out that isn't even a branch. I know\nthat's not how it worked in CVS, but CVS was a very long time ago.\n\nIf we rename master to main or devel or something, we'll have to\nadjust the way we speak again, but that's not a reason to keep using\nthe wrong terminology for the way things are now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 13:19:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "\nOn 2024-09-09 Mo 1:19 PM, Robert Haas wrote:\n> On Mon, Sep 9, 2024 at 1:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I guess I could try to write code to migrate everything, but it would be\n>> somewhat fragile. And what would we do if we ever decided to migrate\n>> \"master\" to another name like \"main\"? I do at least have code ready for\n>> that eventuality, but it would (currently) still keep the visible name\n>> of HEAD.\n> Personally, I think using HEAD to mean master is really confusing. In\n> git, master is a branch name, and HEAD is the tip of some branch, or\n> the random commit you've checked out that isn't even a branch. I know\n> that's not how it worked in CVS, but CVS was a very long time ago.\n>\n> If we rename master to main or devel or something, we'll have to\n> adjust the way we speak again, but that's not a reason to keep using\n> the wrong terminology for the way things are now.\n\n\nThere are some serious obstacles to changing it all over, though. I \ndon't want to rewrite all the history, for example.\n\nWhat we could do relatively simply is change what is seen publicly. e.g. \nwe could rewrite the status page to read \"Branch: master\". We could also \nchange URLs we generate to use master instead of HEAD (and change it \nback when processing the URLs. And so on.\n\nChanging things on the client side would be far more complicated and \ndifficult.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 15:54:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 7:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Sep 9, 2024 at 1:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > I guess I could try to write code to migrate everything, but it would be\n> > somewhat fragile. And what would we do if we ever decided to migrate\n> > \"master\" to another name like \"main\"? I do at least have code ready for\n> > that eventuality, but it would (currently) still keep the visible name\n> > of HEAD.\n>\n> Personally, I think using HEAD to mean master is really confusing. In\n> git, master is a branch name, and HEAD is the tip of some branch, or\n> the random commit you've checked out that isn't even a branch. I know\n> that's not how it worked in CVS, but CVS was a very long time ago.\n>\n\nYeah, and it gets extra confusing when some of the error messages in git\nexplicitly talk about HEAD and that HEAD is something completely different\nfrom our terminology.\n\n\nIf we rename master to main or devel or something, we'll have to\n> adjust the way we speak again, but that's not a reason to keep using\n> the wrong terminology for the way things are now.\n>\n\nAgreed in general. But also if we are going to end up making technical\nchanges to handle it, then if we're ever going to make the change master ->\nmain (or whatever), it would save work and pain to do the two at the same\ntime.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Sep 9, 2024 at 7:20 PM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Sep 9, 2024 at 1:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I guess I could try to write code to migrate everything, but it would be\n> somewhat fragile. And what would we do if we ever decided to migrate\n> \"master\" to another name like \"main\"? I do at least have code ready for\n> that eventuality, but it would (currently) still keep the visible name\n> of HEAD.\n\nPersonally, I think using HEAD to mean master is really confusing. In\ngit, master is a branch name, and HEAD is the tip of some branch, or\nthe random commit you've checked out that isn't even a branch. I know\nthat's not how it worked in CVS, but CVS was a very long time ago.Yeah, and it gets extra confusing when some of the error messages in git explicitly talk about HEAD and that HEAD is something completely different from our terminology.\nIf we rename master to main or devel or something, we'll have to\nadjust the way we speak again, but that's not a reason to keep using\nthe wrong terminology for the way things are now.Agreed in general. But also if we are going to end up making technical changes to handle it, then if we're ever going to make the change master -> main (or whatever), it would save work and pain to do the two at the same time. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 10 Sep 2024 11:31:45 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 11:50 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> Do you think these acronyms make it difficult for some to contribute to\n> Postgres? I've always felt that they were pretty easy to figure out and a\n> nice way to save some typing for common phrases, but I'm not sure it's ever\n> really been discussed\n>\n\nI do think it raises the bar a bit, especially for non-native-English\nspeakers. Granted, the learning curve is not super high, and context plus\nweb searching can usually help people out, but the lists are dense enough\nalready, so I wanted to help people out. Also, mailing lists in general are\na pretty foreign concept to young developers, and as AFAICT [1], not all\nthe acronyms have crossed to the texting world.\n\nCheers,\nGreg\n\n[1] See what I did there? [2]\n\n[2] Do people still learn about and use footnotes?\n\nOn Tue, Sep 3, 2024 at 11:50 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\nDo you think these acronyms make it difficult for some to contribute to\nPostgres? I've always felt that they were pretty easy to figure out and a\nnice way to save some typing for common phrases, but I'm not sure it's ever\nreally been discussedI do think it raises the bar a bit, especially for non-native-English speakers. Granted, the learning curve is not super high, and context plus web searching can usually help people out, but the lists are dense enough already, so I wanted to help people out. Also, mailing lists in general are a pretty foreign concept to young developers, and as AFAICT [1], not all the acronyms have crossed to the texting world.Cheers,Greg[1] See what I did there? [2][2] Do people still learn about and use footnotes?",
"msg_date": "Tue, 10 Sep 2024 10:41:18 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "\nOn 2024-09-09 Mo 3:54 PM, Andrew Dunstan wrote:\n>\n> On 2024-09-09 Mo 1:19 PM, Robert Haas wrote:\n>> On Mon, Sep 9, 2024 at 1:03 PM Andrew Dunstan <andrew@dunslane.net> \n>> wrote:\n>>> I guess I could try to write code to migrate everything, but it \n>>> would be\n>>> somewhat fragile. And what would we do if we ever decided to migrate\n>>> \"master\" to another name like \"main\"? I do at least have code ready for\n>>> that eventuality, but it would (currently) still keep the visible name\n>>> of HEAD.\n>> Personally, I think using HEAD to mean master is really confusing. In\n>> git, master is a branch name, and HEAD is the tip of some branch, or\n>> the random commit you've checked out that isn't even a branch. I know\n>> that's not how it worked in CVS, but CVS was a very long time ago.\n>>\n>> If we rename master to main or devel or something, we'll have to\n>> adjust the way we speak again, but that's not a reason to keep using\n>> the wrong terminology for the way things are now.\n>\n>\n> There are some serious obstacles to changing it all over, though. I \n> don't want to rewrite all the history, for example.\n>\n> What we could do relatively simply is change what is seen publicly. \n> e.g. we could rewrite the status page to read \"Branch: master\". We \n> could also change URLs we generate to use master instead of HEAD (and \n> change it back when processing the URLs. And so on.\n\n\nI've done this. Nothing in the client or the database has changed, but \nthe fact that we refer to \"master\" as \"HEAD\" is pretty much hidden now \nfrom the web app and the emails it sends out. That should help lessen \nany confusion in casual viewers.\n\nComments welcome. I don't think I have missed anything but it's always \npossible.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:51:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 3:54 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> There are some serious obstacles to changing it all over, though. I\n> don't want to rewrite all the history, for example.\n\nBecause of the way git works, that really wouldn't be an issue. We'd\njust push the tip of the master branch to main and then start\ncommitting to main and delete master. The history wouldn't change at\nall, because in git, a branch is really just a movable pointer to a\ncommit. The commits themselves don't know that they're part of a\nbranch.\n\nA lot of things would break, naturally. We'd still all have master\nbranches in our local repositories and somebody might accidentally try\nto push one of those branches back to the upstream repository and the\nbuildfarm and lots of other tooling would get confused and it would\nall be a mess for a while, but the history itself would not change.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Sep 2024 08:34:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
},
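The rename Robert sketches above corresponds, roughly, to the git commands below. This is an illustrative sketch only (the project has not made this change); it assumes a remote named "origin" and leaves the hosting service's default-branch setting aside.

# Shared-repository side: publish the new name, retire the old one.
git push origin master:main          # create "main" at the same commit as "master"
git push origin --delete master      # remove the old branch name upstream

# In each existing clone afterwards:
git fetch origin --prune
git branch -m master main            # rename the local branch
git branch -u origin/main main       # track the new upstream branch
git remote set-head origin -a        # refresh origin/HEAD to the remote's new default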
{
"msg_contents": "\nOn 2024-09-11 We 8:34 AM, Robert Haas wrote:\n> On Mon, Sep 9, 2024 at 3:54 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> There are some serious obstacles to changing it all over, though. I\n>> don't want to rewrite all the history, for example.\n> Because of the way git works, that really wouldn't be an issue. We'd\n> just push the tip of the master branch to main and then start\n> committing to main and delete master. The history wouldn't change at\n> all, because in git, a branch is really just a movable pointer to a\n> commit. The commits themselves don't know that they're part of a\n> branch.\n>\n> A lot of things would break, naturally. We'd still all have master\n> branches in our local repositories and somebody might accidentally try\n> to push one of those branches back to the upstream repository and the\n> buildfarm and lots of other tooling would get confused and it would\n> all be a mess for a while, but the history itself would not change.\n\n\n\nI think you misunderstood me. I wasn't referring to the git history, but \nthe buildfarm history.\n\nAnyway, I think what I have done should suffice. You should no longer \nsee the name HEAD on the buildfarm server, although it will continue to \nexists in the database.\n\nIncidentally, I wrote a blog post about changing the client default name \nsome years ago: \n<http://adpgtech.blogspot.com/2021/06/buildfarm-adopts-modern-git-naming.html>\n\nI also have scripting to do the git server changes (basically to set its \ndefault branch), although it's rather github-specific.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 11:04:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Jargon and acronyms on this mailing list"
}
] |
[
{
"msg_contents": "Hello Hackers,\n\nWhile working on inlining non-SQL SRFs [1] I noticed we don't have tests for when a PL/pgSQL \nfunction requires materialize mode but doesn't have a result TupleDesc. Here is a patch adding tests \nfor that, as well as some other conditions around SRF calls with `SETOF RECORD` vs `TABLE (...)`. \nThere aren't any code changes, just some new tests.\n\nBut IMO it might be better to change the code. This error message is a bit confusing:\n\n+-- materialize mode requires a result TupleDesc:\n+select array_to_set2(array['one', 'two']); -- fail\n+ERROR: materialize mode required, but it is not allowed in this context\n+CONTEXT: PL/pgSQL function array_to_set2(anyarray) line 3 at RETURN QUERY\n\nPerhaps it would be better to give the same error as here?:\n\n+select * from array_to_set2(array['one', 'two']); -- fail\n+ERROR: a column definition list is required for functions returning \"record\"\n+LINE 1: select * from array_to_set2(array['one', 'two']);\n\nIf folks agree, I can work on a patch for that. Otherwise, at least this patch documents the current \nbehavior and increases coverage.\n\n[1] https://commitfest.postgresql.org/49/5083/\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com",
"msg_date": "Fri, 30 Aug 2024 13:28:58 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Add tests for PL/pgSQL SRFs"
},
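A minimal sketch of the kind of function under test here (names and types are illustrative, not the exact ones from the proposed patch): a PL/pgSQL SRF declared SETOF record works in FROM with a column definition list, while calling it in the select list hits the materialize-mode error quoted above.

CREATE FUNCTION srf_record_sketch(arr text[]) RETURNS SETOF record AS $$
BEGIN
  -- RETURN QUERY materializes the result rows into a tuplestore
  RETURN QUERY SELECT i, arr[i] FROM generate_subscripts(arr, 1) AS g(i);
END;
$$ LANGUAGE plpgsql;

-- Works: the column definition list supplies the result row shape
SELECT * FROM srf_record_sketch(ARRAY['one', 'two']) AS t(idx int, val text);

-- Fails as shown above: no place to attach a column definition list here
SELECT srf_record_sketch(ARRAY['one', 'two']);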
{
"msg_contents": "Paul Jungwirth <pj@illuminatedcomputing.com> writes:\n> While working on inlining non-SQL SRFs [1] I noticed we don't have tests for when a PL/pgSQL \n> function requires materialize mode but doesn't have a result TupleDesc. Here is a patch adding tests \n> for that, as well as some other conditions around SRF calls with `SETOF RECORD` vs `TABLE (...)`. \n> There aren't any code changes, just some new tests.\n\nAFAICT this test case adds coverage of exactly one line of code,\nand that line is an unremarkable ereport(ERROR). I can't get\nexcited about spending test cycles on this forevermore, especially\nnot core-regression-test cycles.\n\n> But IMO it might be better to change the code. This error message is a bit confusing:\n\n> +-- materialize mode requires a result TupleDesc:\n> +select array_to_set2(array['one', 'two']); -- fail\n> +ERROR: materialize mode required, but it is not allowed in this context\n> +CONTEXT: PL/pgSQL function array_to_set2(anyarray) line 3 at RETURN QUERY\n\nA quick grep shows that there are ten other places throwing exactly\nthis same error message. About half of them are throwing it strictly\nfor\n\n if (!(rsinfo->allowedModes & SFRM_Materialize))\n\nand I think that that's a reasonable way to report that condition.\nBut the other half are throwing in other conditions such as\n\n if (!(rsinfo->allowedModes & SFRM_Materialize) ||\n rsinfo->expectedDesc == NULL)\n\nand I agree with you that maybe we're being too lazy there.\nI could get behind separate error messages for these conditions,\nlike\n\n if (!(rsinfo->allowedModes & SFRM_Materialize))\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"materialize mode required, but it is not allowed in this context\")));\n if (rsinfo->expectedDesc == NULL)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"a column definition list is required for functions returning \\\"record\\\"\")));\n\nIt's not quite clear to me if that's the same thing you're suggesting?\n\nI'm also a bit uncomfortable with using that phrasing of the second\nerror, because it seems to be making assumptions that are well beyond\nwhat this code knows to be true. Is it possible to get here in a\nfunction that *doesn't* return record? Maybe we should just say\n\"a column definition list is required for this function\", or words\nto that effect (throwing in the function name might be good).\n\nIn any case, if we do something about it I'd want to do so in all\nof the places that are taking similar shortcuts, not only plpgsql.\n\nA different line of thought is to try to remove this implementation\nrestriction, but I've not looked at what that would entail.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 31 Aug 2024 12:59:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add tests for PL/pgSQL SRFs"
}
] |
[
{
"msg_contents": "Hello\n\nHere I present another attempt at making not-null constraints be\ncatalogued. This is largely based on the code reverted at 9ce04b50e120,\nexcept that we now have a not-null constraint automatically created for\nevery column of a primary key, and such constraint cannot be removed\nwhile the PK exists. Thanks to this, a lot of rather ugly code is gone,\nboth in pg_dump and in backend -- in particular the handling of NO\nINHERIT, which was needed for pg_dump.\n\nNoteworthy psql difference: because there are now even more not-null\nconstraints than before, the \\d+ display would be far too noisy if we\njust let it grow. So here I've made it omit any constraints that\nunderlie the primary key. This should be OK since you can't do much\nwith those constraints while the PK is still there. If you drop the PK,\nthe next \\d+ will show those constraints.\n\nOne thing that regretfully I haven't yet had time for, is paring down\nthe original test code: a lot of it is verifying the old semantics,\nparticularly for NO INHERIT constraints, which had grown ugly special\ncases. It now mostly raises errors; or the tests are simply redundant.\nI'm going to remove that stuff as soon as I'm back on my normal work\ntimezone.\n\nsepgsql is untested.\n\nI'm adding this to the September commitfest.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)",
"msg_date": "Fri, 30 Aug 2024 23:58:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "not null constraints, again"
},
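To picture the announced behaviour, a rough sketch (table and column names are made up, and this shows the proposed behaviour, not what released PostgreSQL does; contype 'n' for the catalogued not-null rows is taken from the psql query discussed later in this thread):

-- A PK column would get its own catalogued not-null constraint:
CREATE TABLE nn_demo (a int PRIMARY KEY, b int NOT NULL);

-- Both not-null constraints would appear as pg_constraint rows with contype = 'n':
SELECT conname, contype, conkey
  FROM pg_catalog.pg_constraint
 WHERE conrelid = 'nn_demo'::regclass
 ORDER BY conname;

-- and the one backing the PK column is meant to be undroppable while the PK exists.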
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> 于2024年8月31日周六 11:59写道:\n\n> Hello\n>\n> Here I present another attempt at making not-null constraints be\n> catalogued. This is largely based on the code reverted at 9ce04b50e120,\n> except that we now have a not-null constraint automatically created for\n> every column of a primary key, and such constraint cannot be removed\n> while the PK exists. Thanks to this, a lot of rather ugly code is gone,\n> both in pg_dump and in backend -- in particular the handling of NO\n> INHERIT, which was needed for pg_dump.\n>\n> Noteworthy psql difference: because there are now even more not-null\n> constraints than before, the \\d+ display would be far too noisy if we\n> just let it grow. So here I've made it omit any constraints that\n> underlie the primary key. This should be OK since you can't do much\n> with those constraints while the PK is still there. If you drop the PK,\n> the next \\d+ will show those constraints.\n>\n> One thing that regretfully I haven't yet had time for, is paring down\n> the original test code: a lot of it is verifying the old semantics,\n> particularly for NO INHERIT constraints, which had grown ugly special\n> cases. It now mostly raises errors; or the tests are simply redundant.\n> I'm going to remove that stuff as soon as I'm back on my normal work\n> timezone.\n>\n> sepgsql is untested.\n>\n> I'm adding this to the September commitfest.\n\n\nThanks for working on this again.\n\n AT_PASS_OLD_COL_ATTRS, I didn't see any other code that used it. I don't\nthink it's necessary.\n\nI will take the time to look over the attached patch.\n\n-- \nTender Wang\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> 于2024年8月31日周六 11:59写道:Hello\n\nHere I present another attempt at making not-null constraints be\ncatalogued. This is largely based on the code reverted at 9ce04b50e120,\nexcept that we now have a not-null constraint automatically created for\nevery column of a primary key, and such constraint cannot be removed\nwhile the PK exists. Thanks to this, a lot of rather ugly code is gone,\nboth in pg_dump and in backend -- in particular the handling of NO\nINHERIT, which was needed for pg_dump.\n\nNoteworthy psql difference: because there are now even more not-null\nconstraints than before, the \\d+ display would be far too noisy if we\njust let it grow. So here I've made it omit any constraints that\nunderlie the primary key. This should be OK since you can't do much\nwith those constraints while the PK is still there. If you drop the PK,\nthe next \\d+ will show those constraints.\n\nOne thing that regretfully I haven't yet had time for, is paring down\nthe original test code: a lot of it is verifying the old semantics,\nparticularly for NO INHERIT constraints, which had grown ugly special\ncases. It now mostly raises errors; or the tests are simply redundant.\nI'm going to remove that stuff as soon as I'm back on my normal work\ntimezone.\n\nsepgsql is untested.\n\nI'm adding this to the September commitfest. Thanks for working on this again. AT_PASS_OLD_COL_ATTRS, I didn't see any other code that used it. I don't think it's necessary.I will take the time to look over the attached patch.-- Tender Wang",
"msg_date": "Sun, 1 Sep 2024 13:45:57 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> 于2024年8月31日周六 11:59写道:\n\n> Hello\n>\n> Here I present another attempt at making not-null constraints be\n> catalogued. This is largely based on the code reverted at 9ce04b50e120,\n> except that we now have a not-null constraint automatically created for\n> every column of a primary key, and such constraint cannot be removed\n> while the PK exists. Thanks to this, a lot of rather ugly code is gone,\n> both in pg_dump and in backend -- in particular the handling of NO\n> INHERIT, which was needed for pg_dump.\n>\n> Noteworthy psql difference: because there are now even more not-null\n> constraints than before, the \\d+ display would be far too noisy if we\n> just let it grow. So here I've made it omit any constraints that\n> underlie the primary key. This should be OK since you can't do much\n> with those constraints while the PK is still there. If you drop the PK,\n> the next \\d+ will show those constraints.\n>\n> One thing that regretfully I haven't yet had time for, is paring down\n> the original test code: a lot of it is verifying the old semantics,\n> particularly for NO INHERIT constraints, which had grown ugly special\n> cases. It now mostly raises errors; or the tests are simply redundant.\n> I'm going to remove that stuff as soon as I'm back on my normal work\n> timezone.\n>\n> sepgsql is untested.\n>\n> I'm adding this to the September commitfest.\n>\n\nThe attached patch adds List *nnconstraints, which store the not-null\ndefinition, in struct CreateStmt.\nThis makes me a little confused about List *constraints in struct\nCreateStmt. Actually, the List constraints\nstore ckeck constraint, and it will be better if the comments can reflect\nthat. Re-naming it to List *ckconstraints\nseems more reasonable. But a lot of codes that use stmt->constraints will\nbe changed.\n\nSince AddRelationNewConstraints() can now add not-null column constraint,\nthe comments about AddRelationNewConstraints()\nshould tweak a little.\n\"All entries in newColDefaults will be processed. Entries in\nnewConstraints\nwill be processed only if they are CONSTR_CHECK type.\"\nNow, the type of new constraints may be not-null constraints.\n\n\nIf the column has already had one not-null constraint, and we add same\nnot-null constraint again.\nThen the code will call AdjustNotNullInheritance1() in\nAddRelationNewConstraints().\nThe comments\nbefore entering AdjustNotNullInheritance1() in AddRelationNewConstraints()\nlook confusing to me.\nBecause constraint is not inherited.\n\n-- \nTender Wang\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> 于2024年8月31日周六 11:59写道:Hello\n\nHere I present another attempt at making not-null constraints be\ncatalogued. This is largely based on the code reverted at 9ce04b50e120,\nexcept that we now have a not-null constraint automatically created for\nevery column of a primary key, and such constraint cannot be removed\nwhile the PK exists. Thanks to this, a lot of rather ugly code is gone,\nboth in pg_dump and in backend -- in particular the handling of NO\nINHERIT, which was needed for pg_dump.\n\nNoteworthy psql difference: because there are now even more not-null\nconstraints than before, the \\d+ display would be far too noisy if we\njust let it grow. So here I've made it omit any constraints that\nunderlie the primary key. This should be OK since you can't do much\nwith those constraints while the PK is still there. 
If you drop the PK,\nthe next \\d+ will show those constraints.\n\nOne thing that regretfully I haven't yet had time for, is paring down\nthe original test code: a lot of it is verifying the old semantics,\nparticularly for NO INHERIT constraints, which had grown ugly special\ncases. It now mostly raises errors; or the tests are simply redundant.\nI'm going to remove that stuff as soon as I'm back on my normal work\ntimezone.\n\nsepgsql is untested.\n\nI'm adding this to the September commitfest.The attached patch adds List *nnconstraints, which store the not-null definition, in struct CreateStmt.This makes me a little confused about List *constraints in struct CreateStmt. Actually, the List constraintsstore ckeck constraint, and it will be better if the comments can reflect that. Re-naming it to List *ckconstraintsseems more reasonable. But a lot of codes that use stmt->constraints will be changed.Since AddRelationNewConstraints() can now add not-null column constraint, the comments about AddRelationNewConstraints()should tweak a little. \"All entries in newColDefaults will be processed. Entries in newConstraints will be processed only if they are CONSTR_CHECK type.\"Now, the type of new constraints may be not-null constraints.If the column has already had one not-null constraint, and we add same not-null constraint again. Then the code will call AdjustNotNullInheritance1() in AddRelationNewConstraints(). The commentsbefore entering AdjustNotNullInheritance1() in AddRelationNewConstraints() look confusing to me.Because constraint is not inherited.-- Tender Wang",
"msg_date": "Mon, 2 Sep 2024 18:33:18 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> 于2024年8月31日周六 11:59写道:\n\n> Hello\n>\n> Here I present another attempt at making not-null constraints be\n> catalogued. This is largely based on the code reverted at 9ce04b50e120,\n> except that we now have a not-null constraint automatically created for\n> every column of a primary key, and such constraint cannot be removed\n> while the PK exists. Thanks to this, a lot of rather ugly code is gone,\n> both in pg_dump and in backend -- in particular the handling of NO\n> INHERIT, which was needed for pg_dump.\n>\n> Noteworthy psql difference: because there are now even more not-null\n> constraints than before, the \\d+ display would be far too noisy if we\n> just let it grow. So here I've made it omit any constraints that\n> underlie the primary key. This should be OK since you can't do much\n> with those constraints while the PK is still there. If you drop the PK,\n> the next \\d+ will show those constraints.\n>\n> One thing that regretfully I haven't yet had time for, is paring down\n> the original test code: a lot of it is verifying the old semantics,\n> particularly for NO INHERIT constraints, which had grown ugly special\n> cases. It now mostly raises errors; or the tests are simply redundant.\n> I'm going to remove that stuff as soon as I'm back on my normal work\n> timezone.\n>\n> sepgsql is untested.\n>\n> I'm adding this to the September commitfest.\n>\n\nThe test case in constraints.sql, as below:\nCREATE TABLE notnull_tbl1 (a INTEGER NOT NULL NOT NULL);\n\n ^^^^^^^^^^\nThere are two not-null definitions, and is the second one redundant?\n\nWhen we drop the column not-null constraint, we will enter\nATExecDropNotNull().\nThen, it calls findNotNullConstraint() to get the constraint tuple. We\nalready have\nattnum before the call findNotNullConstraint(). Can we directly call\nfindNotNullConstraintAttnum()?\n\nIn RemoveConstraintById(), I see below comments:\n\n\"For not-null and primary key constraints,\nobtain the list of columns affected, so that we can reset their\nattnotnull flags below.\"\n\nHowever, there are no related codes that reflect the above comments.\n\n--\nThanks,\nTender Wang\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> 于2024年8月31日周六 11:59写道:Hello\n\nHere I present another attempt at making not-null constraints be\ncatalogued. This is largely based on the code reverted at 9ce04b50e120,\nexcept that we now have a not-null constraint automatically created for\nevery column of a primary key, and such constraint cannot be removed\nwhile the PK exists. Thanks to this, a lot of rather ugly code is gone,\nboth in pg_dump and in backend -- in particular the handling of NO\nINHERIT, which was needed for pg_dump.\n\nNoteworthy psql difference: because there are now even more not-null\nconstraints than before, the \\d+ display would be far too noisy if we\njust let it grow. So here I've made it omit any constraints that\nunderlie the primary key. This should be OK since you can't do much\nwith those constraints while the PK is still there. If you drop the PK,\nthe next \\d+ will show those constraints.\n\nOne thing that regretfully I haven't yet had time for, is paring down\nthe original test code: a lot of it is verifying the old semantics,\nparticularly for NO INHERIT constraints, which had grown ugly special\ncases. 
It now mostly raises errors; or the tests are simply redundant.\nI'm going to remove that stuff as soon as I'm back on my normal work\ntimezone.\n\nsepgsql is untested.\n\nI'm adding this to the September commitfest.The test case in constraints.sql, as below: CREATE TABLE notnull_tbl1 (a INTEGER NOT NULL NOT NULL); ^^^^^^^^^^There are two not-null definitions, and is the second one redundant?When we drop the column not-null constraint, we will enter ATExecDropNotNull().Then, it calls findNotNullConstraint() to get the constraint tuple. We already have attnum before the call findNotNullConstraint(). Can we directly call findNotNullConstraintAttnum()?In RemoveConstraintById(), I see below comments:\"For not-null and primary key constraints,obtain the list of columns affected, so that we can reset theirattnotnull flags below.\"However, there are no related codes that reflect the above comments.--Thanks,Tender Wang",
"msg_date": "Tue, 3 Sep 2024 15:17:33 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Sat, Aug 31, 2024 at 11:59 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hello\n>\n> Here I present another attempt at making not-null constraints be\n> catalogued. This is largely based on the code reverted at 9ce04b50e120,\n> except that we now have a not-null constraint automatically created for\n> every column of a primary key, and such constraint cannot be removed\n> while the PK exists. Thanks to this, a lot of rather ugly code is gone,\n> both in pg_dump and in backend -- in particular the handling of NO\n> INHERIT, which was needed for pg_dump.\n>\n> Noteworthy psql difference: because there are now even more not-null\n> constraints than before, the \\d+ display would be far too noisy if we\n> just let it grow. So here I've made it omit any constraints that\n> underlie the primary key. This should be OK since you can't do much\n> with those constraints while the PK is still there. If you drop the PK,\n> the next \\d+ will show those constraints.\n>\n\nhi.\nmy brief review.\n\ncreate table t1(a int, b int, c int not null, primary key(a, b));\n\\d+ t1\nERROR: operator is not unique: smallint[] <@ smallint[]\nLINE 8: coalesce(NOT ARRAY[at.attnum] <@ (SELECT conkey FROM pg_cata...\n ^\nHINT: Could not choose a best candidate operator. You might need to\nadd explicit type casts.\n\n\nthe regression test still passed, i have no idea why.\nanyway, the following changes make the above ERROR disappear.\nalso seems more lean.\n\nprintfPQExpBuffer(&buf,\n /* FIXME the coalesce trick looks silly. What's a better way? */\n \"SELECT co.conname, at.attname, co.connoinherit, co.conislocal,\\n\"\n \"co.coninhcount <> 0\\n\"\n \"FROM pg_catalog.pg_constraint co JOIN\\n\"\n \"pg_catalog.pg_attribute at ON\\n\"\n \"(at.attrelid = co.conrelid AND at.attnum = co.conkey[1])\\n\"\n \"WHERE co.contype = 'n' AND\\n\"\n \"co.conrelid = '%s'::pg_catalog.regclass AND\\n\"\n \"coalesce(NOT ARRAY[at.attnum] <@ (SELECT conkey FROM\npg_catalog.pg_constraint\\n\"\n \" WHERE contype = 'p' AND conrelid = '%s'::regclass), true)\\n\"\n \"ORDER BY at.attnum\",\n oid,\n oid);\n\nchange to\n\n printfPQExpBuffer(&buf,\n \"SELECT co.conname, at.attname,\nco.connoinherit, co.conislocal,\\n\"\n \"co.coninhcount <> 0\\n\"\n \"FROM pg_catalog.pg_constraint co JOIN\\n\"\n \"pg_catalog.pg_attribute at ON\\n\"\n \"(at.attrelid = co.conrelid AND\nat.attnum = co.conkey[1])\\n\"\n \"WHERE co.contype = 'n' AND\\n\"\n \"co.conrelid = '%s'::pg_catalog.regclass AND\\n\"\n \"NOT EXISTS (SELECT 1 FROM\npg_catalog.pg_constraint co1 where co1.contype = 'p'\\n\"\n \"AND at.attnum = any(co1.conkey) AND\nco1.conrelid = '%s'::pg_catalog.regclass)\\n\"\n \"ORDER BY at.attnum\",\n oid,\n oid);\n\nsteal idea from https://stackoverflow.com/a/75614278/15603477\n============\ncreate type comptype as (r float8, i float8);\ncreate domain dcomptype1 as comptype not null no inherit;\nwith cte as (\n SELECT oid, conrelid::regclass, conname FROM pg_catalog.pg_constraint\n where contypid in ('dcomptype1'::regtype))\nselect pg_get_constraintdef(oid) from cte;\ncurrent output is\nNOT NULL\n\nbut it's not the same as\nCREATE TABLE ATACC1 (a int, not null a no inherit);\nwith cte as ( SELECT oid, conrelid::regclass, conname FROM\npg_catalog.pg_constraint\n where conrelid in ('ATACC1'::regclass))\nselect pg_get_constraintdef(oid) from cte;\nNOT NULL a NO INHERIT\n\ni don't really sure the meaning of \"on inherit\" in\n\"create domain dcomptype1 as comptype not null no inherit;\"\n\n====================\nbold idea. 
print out the constraint name: violates not-null constraint \\\"%s\\\"\nfor the following code:\n ereport(ERROR,\n (errcode(ERRCODE_NOT_NULL_VIOLATION),\n errmsg(\"null value in column \\\"%s\\\" of\nrelation \\\"%s\\\" violates not-null constraint\",\n NameStr(att->attname),\n RelationGetRelationName(orig_rel)),\n val_desc ? errdetail(\"Failing row contains\n%s.\", val_desc) : 0,\n errtablecol(orig_rel, attrChk)));\n\n\n\n====================\nin extractNotNullColumn\nwe can Assert(colnum > 0);\n\n\n\ncreate table t3(a int , b int , c int ,not null a, not null c, not\nnull b, not null tableoid);\nthis should not be allowed?\n\n\n\n foreach(lc,\nRelationGetNotNullConstraints(RelationGetRelid(relation), false))\n {\n AlterTableCmd *atsubcmd;\n\n atsubcmd = makeNode(AlterTableCmd);\n atsubcmd->subtype = AT_AddConstraint;\n atsubcmd->def = (Node *) lfirst_node(Constraint, lc);\n atsubcmds = lappend(atsubcmds, atsubcmd);\n }\nforgive me for being hypocritical.\nI guess this is not a good coding pattern.\none reason would be: if you do:\n=\nlist *a = RelationGetNotNullConstraints(RelationGetRelid(relation), false);\nforeach(lc, a)\n=\nthen you can call pprint(a).\n\n\n+ /*\n+ * If INCLUDING INDEXES is not given and a primary key exists, we need to\n+ * add not-null constraints to the columns covered by the PK (except those\n+ * that already have one.) This is required for backwards compatibility.\n+ */\n+ if ((table_like_clause->options & CREATE_TABLE_LIKE_INDEXES) == 0)\n+ {\n+ Bitmapset *pkcols;\n+ int x = -1;\n+ Bitmapset *donecols = NULL;\n+ ListCell *lc;\n+\n+ /*\n+ * Obtain a bitmapset of columns on which we'll add not-null\n+ * constraints in expandTableLikeClause, so that we skip this for\n+ * those.\n+ */\n+ foreach(lc, RelationGetNotNullConstraints(RelationGetRelid(relation), true))\n+ {\n+ CookedConstraint *cooked = (CookedConstraint *) lfirst(lc);\n+\n+ donecols = bms_add_member(donecols, cooked->attnum);\n+ }\n+\n+ pkcols = RelationGetIndexAttrBitmap(relation,\n+ INDEX_ATTR_BITMAP_PRIMARY_KEY);\n+ while ((x = bms_next_member(pkcols, x)) >= 0)\n+ {\n+ Constraint *notnull;\n+ AttrNumber attnum = x + FirstLowInvalidHeapAttributeNumber;\n+ String *colname;\n+ Form_pg_attribute attForm;\n+\n+ /* ignore if we already have one for this column */\n+ if (bms_is_member(attnum, donecols))\n+ continue;\n+\n+ attForm = TupleDescAttr(tupleDesc, attnum - 1);\n+ colname = makeString(pstrdup(NameStr(attForm->attname)));\n+ notnull = makeNotNullConstraint(colname);\n+\n+ cxt->nnconstraints = lappend(cxt->nnconstraints, notnull);\n+ }\n+ }\nthis part (\" if (bms_is_member(attnum, donecols))\" will always be true?\ndonecols is all not-null attnums, pkcols is pk not-null attnums.\nso pkcols info will always be included in donecols.\ni placed a \"elog(INFO, \"%s we are in\", __func__);\"\nabove\n\"attForm = TupleDescAttr(tupleDesc, attnum - 1);\"\nall regression tests still passed.\n\n\n",
"msg_date": "Mon, 9 Sep 2024 09:36:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Mon, Sep 2, 2024 at 6:33 PM Tender Wang <tndrwang@gmail.com> wrote:\n>\n>\n>\n> The attached patch adds List *nnconstraints, which store the not-null definition, in struct CreateStmt.\n> This makes me a little confused about List *constraints in struct CreateStmt. Actually, the List constraints\n> store ckeck constraint, and it will be better if the comments can reflect that. Re-naming it to List *ckconstraints\n> seems more reasonable. But a lot of codes that use stmt->constraints will be changed.\n>\nhi.\nseems you forgot to attach the patch?\nI also noticed this minor issue.\nI have no preference for Renaming it to List *ckconstraints.\n+1 for better comments. maybe reword to\n>>>\nList *constraints; /* CHECK constraints (list of Constraint nodes) */\n>>>\n\n\n\nOn Tue, Sep 3, 2024 at 3:17 PM Tender Wang <tndrwang@gmail.com> wrote:\n>\n> The test case in constraints.sql, as below:\n> CREATE TABLE notnull_tbl1 (a INTEGER NOT NULL NOT NULL);\n> ^^^^^^^^^^\n> There are two not-null definitions, and is the second one redundant?\n>\n\nhi.\ni think this is ok. please see\nfunction transformColumnDefinition and variable saw_nullable.\n\nwe need to make sure this:\nCREATE TABLE notnull_tbl3 (a INTEGER NULL NOT NULL);\nfails.\n\n\nof course, it's also OK do this:\nCREATE TABLE notnull_tbl3 (a INTEGER NULL NULL);\n\n\n",
"msg_date": "Mon, 9 Sep 2024 16:31:33 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "jian he <jian.universality@gmail.com> 于2024年9月9日周一 16:31写道:\n\n> On Mon, Sep 2, 2024 at 6:33 PM Tender Wang <tndrwang@gmail.com> wrote:\n> >\n> >\n> >\n> > The attached patch adds List *nnconstraints, which store the not-null\n> definition, in struct CreateStmt.\n> > This makes me a little confused about List *constraints in struct\n> CreateStmt. Actually, the List constraints\n> > store ckeck constraint, and it will be better if the comments can\n> reflect that. Re-naming it to List *ckconstraints\n> > seems more reasonable. But a lot of codes that use stmt->constraints\n> will be changed.\n> >\n> hi.\n> seems you forgot to attach the patch?\n> I also noticed this minor issue.\n> I have no preference for Renaming it to List *ckconstraints.\n> +1 for better comments. maybe reword to\n> >>>\n> List *constraints; /* CHECK constraints (list of Constraint\n> nodes) */\n> >>>\n>\n\nI just gave advice; whether it is accepted or not, it's up to Alvaro.\nIf Alvaro agrees with the advice, he will patch a new one. We can continue\nto review the\nnew patch.\nIf Alvaro disagrees, he doesn't need to change the current patch. I think\nthis way will be\nmore straightforward for others who will review this feature.\n\n\n>\n> On Tue, Sep 3, 2024 at 3:17 PM Tender Wang <tndrwang@gmail.com> wrote:\n> >\n> > The test case in constraints.sql, as below:\n> > CREATE TABLE notnull_tbl1 (a INTEGER NOT NULL NOT NULL);\n> >\n> ^^^^^^^^^^\n> > There are two not-null definitions, and is the second one redundant?\n> >\n>\n> hi.\n> i think this is ok. please see\n> function transformColumnDefinition and variable saw_nullable.\n>\n\nYeah, it is ok.\n\n\n> we need to make sure this:\n> CREATE TABLE notnull_tbl3 (a INTEGER NULL NOT NULL);\n> fails.\n>\n>\n> of course, it's also OK do this:\n> CREATE TABLE notnull_tbl3 (a INTEGER NULL NULL);\n>\n\n\n-- \nThanks,\nTender Wang\n\njian he <jian.universality@gmail.com> 于2024年9月9日周一 16:31写道:On Mon, Sep 2, 2024 at 6:33 PM Tender Wang <tndrwang@gmail.com> wrote:\n>\n>\n>\n> The attached patch adds List *nnconstraints, which store the not-null definition, in struct CreateStmt.\n> This makes me a little confused about List *constraints in struct CreateStmt. Actually, the List constraints\n> store ckeck constraint, and it will be better if the comments can reflect that. Re-naming it to List *ckconstraints\n> seems more reasonable. But a lot of codes that use stmt->constraints will be changed.\n>\nhi.\nseems you forgot to attach the patch?\nI also noticed this minor issue.\nI have no preference for Renaming it to List *ckconstraints.\n+1 for better comments. maybe reword to\n>>>\nList *constraints; /* CHECK constraints (list of Constraint nodes) */\n>>>I just gave advice; whether it is accepted or not, it's up to Alvaro.If Alvaro agrees with the advice, he will patch a new one. We can continue to review thenew patch. If Alvaro disagrees, he doesn't need to change the current patch. I think this way will bemore straightforward for others who will review this feature. \n\n\nOn Tue, Sep 3, 2024 at 3:17 PM Tender Wang <tndrwang@gmail.com> wrote:\n>\n> The test case in constraints.sql, as below:\n> CREATE TABLE notnull_tbl1 (a INTEGER NOT NULL NOT NULL);\n> ^^^^^^^^^^\n> There are two not-null definitions, and is the second one redundant?\n>\n\nhi.\ni think this is ok. please see\nfunction transformColumnDefinition and variable saw_nullable.Yeah, it is ok. 
\n\nwe need to make sure this:\nCREATE TABLE notnull_tbl3 (a INTEGER NULL NOT NULL);\nfails.\n\n\nof course, it's also OK do this:\nCREATE TABLE notnull_tbl3 (a INTEGER NULL NULL);\n-- Thanks,Tender Wang",
"msg_date": "Mon, 9 Sep 2024 17:08:09 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-09, jian he wrote:\n\n> bold idea. print out the constraint name: violates not-null constraint \\\"%s\\\"\n> for the following code:\n> ereport(ERROR,\n> (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> errmsg(\"null value in column \\\"%s\\\" of\n> relation \\\"%s\\\" violates not-null constraint\",\n> NameStr(att->attname),\n> RelationGetRelationName(orig_rel)),\n> val_desc ? errdetail(\"Failing row contains\n> %s.\", val_desc) : 0,\n> errtablecol(orig_rel, attrChk)));\n\nI gave this a quick run, but I'm not sure it actually improves things\nmuch. Here's one change from the regression tests. What do you think?\n\n INSERT INTO reloptions_test VALUES (1, NULL), (NULL, NULL);\n -ERROR: null value in column \"i\" of relation \"reloptions_test\" violates not-null constraint\n +ERROR: null value in column \"i\" of relation \"reloptions_test\" violates not-null constraint \"reloptions_test_i_not_null\"\n\nWhat do I get from having the constraint name? It's not like I'm going\nto drop the constraint and retry the insert.\n\nHere's the POC-quality patch for this. I changes a lot of regression\ntests, which I don't patch here. (But that's not the reason for me\nthinking that this isn't worth it.)\n\ndiff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c\nindex 29e186fa73d..d84137f4f43 100644\n--- a/src/backend/executor/execMain.c\n+++ b/src/backend/executor/execMain.c\n@@ -1907,6 +1907,7 @@ ExecPartitionCheckEmitError(ResultRelInfo *resultRelInfo,\n * have been converted from the original input tuple after tuple routing.\n * 'resultRelInfo' is the final result relation, after tuple routing.\n */\n+#include \"catalog/pg_constraint.h\"\n void\n ExecConstraints(ResultRelInfo *resultRelInfo,\n \t\t\t\tTupleTableSlot *slot, EState *estate)\n@@ -1932,6 +1933,7 @@ ExecConstraints(ResultRelInfo *resultRelInfo,\n \t\t\t\tchar\t *val_desc;\n \t\t\t\tRelation\torig_rel = rel;\n \t\t\t\tTupleDesc\torig_tupdesc = RelationGetDescr(rel);\n+\t\t\t\tchar\t *conname;\n \n \t\t\t\t/*\n \t\t\t\t * If the tuple has been routed, it's been converted to the\n@@ -1970,14 +1972,24 @@ ExecConstraints(ResultRelInfo *resultRelInfo,\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t tupdesc,\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t modifiedCols,\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t 64);\n+\t\t\t\t{\n+\t\t\t\t\tHeapTuple\ttuple;\n+\t\t\t\t\tForm_pg_constraint conForm;\n+\n+\t\t\t\t\ttuple = findNotNullConstraintAttnum(RelationGetRelid(orig_rel),\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tatt->attnum);\n+\t\t\t\t\tconForm = (Form_pg_constraint) GETSTRUCT(tuple);\n+\t\t\t\t\tconname = pstrdup(NameStr(conForm->conname));\n+\t\t\t\t}\n \n \t\t\t\tereport(ERROR,\n-\t\t\t\t\t\t(errcode(ERRCODE_NOT_NULL_VIOLATION),\n-\t\t\t\t\t\t errmsg(\"null value in column \\\"%s\\\" of relation \\\"%s\\\" violates not-null constraint\",\n-\t\t\t\t\t\t\t\tNameStr(att->attname),\n-\t\t\t\t\t\t\t\tRelationGetRelationName(orig_rel)),\n-\t\t\t\t\t\t val_desc ? errdetail(\"Failing row contains %s.\", val_desc) : 0,\n-\t\t\t\t\t\t errtablecol(orig_rel, attrChk)));\n+\t\t\t\t\t\terrcode(ERRCODE_NOT_NULL_VIOLATION),\n+\t\t\t\t\t\terrmsg(\"null value in column \\\"%s\\\" of relation \\\"%s\\\" violates not-null constraint \\\"%s\\\"\",\n+\t\t\t\t\t\t\t NameStr(att->attname),\n+\t\t\t\t\t\t\t RelationGetRelationName(orig_rel),\n+\t\t\t\t\t\t\t conname),\n+\t\t\t\t\t\tval_desc ? 
errdetail(\"Failing row contains %s.\", val_desc) : 0,\n+\t\t\t\t\t\terrtablecol(orig_rel, attrChk));\n \t\t\t}\n \t\t}\n \t}\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:05:10 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-02, Tender Wang wrote:\n\n> The attached patch adds List *nnconstraints, which store the not-null\n> definition, in struct CreateStmt. This makes me a little confused\n> about List *constraints in struct CreateStmt. Actually, the List\n> constraints store ckeck constraint, and it will be better if the\n> comments can reflect that. Re-naming it to List *ckconstraints seems\n> more reasonable. But a lot of codes that use stmt->constraints will be\n> changed.\n\nWell, if you look at the comment about CreateStmt, there's this:\n\n/* ----------------------\n *\t\tCreate Table Statement\n *\n * NOTE: in the raw gram.y output, ColumnDef and Constraint nodes are\n * intermixed in tableElts, and constraints and nnconstraints are NIL. After\n * parse analysis, tableElts contains just ColumnDefs, nnconstraints contains\n * Constraint nodes of CONSTR_NOTNULL type from various sources, and\n * constraints contains just CONSTR_CHECK Constraint nodes.\n * ----------------------\n */\n\nSo if we were to rename 'constraints' to 'ckconstraints', it would no\nlonger reflect the fact that not-null ones can be in the former list\ninitially.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. francés)\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:09:20 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "Hello, here's a v2 of this patch. I have fixed --I think-- all the\nissues you and Tender Wang reported (unless I declined a fix in some\nprevious email).\n\nIt turns out I have not finished cleaning up the regression tests from\nnow-useless additions, because while doing so last time around I found\nsome bugs (especially one around comments in not-null constraints, which\nweren't being preserved by an ALTER TABLE TYPE command; that required a\nnew strange hack in RememberConstraintForRebuilding), but also the LIKE\nclause again). Also, in this version there's a problem in the\npg_upgrade test, which I hope to fix tomorrow.\n\nThis code can also be found at\nhttps://github.com/alvherre/postgres/tree/notnull-init-18\n(Please do ignore 89685e691f75309fec882723272c8b17106e6aa2, that was a\nmerge mistake).\n\nOn 2024-Sep-09, jian he wrote:\n\n> change to\n> \n> printfPQExpBuffer(&buf,\n> \"SELECT co.conname, at.attname,\n> co.connoinherit, co.conislocal,\\n\"\n> \"co.coninhcount <> 0\\n\"\n> \"FROM pg_catalog.pg_constraint co JOIN\\n\"\n> \"pg_catalog.pg_attribute at ON\\n\"\n> \"(at.attrelid = co.conrelid AND\n> at.attnum = co.conkey[1])\\n\"\n> \"WHERE co.contype = 'n' AND\\n\"\n> \"co.conrelid = '%s'::pg_catalog.regclass AND\\n\"\n> \"NOT EXISTS (SELECT 1 FROM\n> pg_catalog.pg_constraint co1 where co1.contype = 'p'\\n\"\n> \"AND at.attnum = any(co1.conkey) AND\n> co1.conrelid = '%s'::pg_catalog.regclass)\\n\"\n> \"ORDER BY at.attnum\",\n> oid,\n> oid);\n\nAh, obvious now that I see it, thanks.\n\n> ============\n> create type comptype as (r float8, i float8);\n> create domain dcomptype1 as comptype not null no inherit;\n> with cte as (\n> SELECT oid, conrelid::regclass, conname FROM pg_catalog.pg_constraint\n> where contypid in ('dcomptype1'::regtype))\n> select pg_get_constraintdef(oid) from cte;\n\n> i don't really sure the meaning of \"on inherit\" in\n> \"create domain dcomptype1 as comptype not null no inherit;\"\n\nYeah, I think we need to reject NO INHERIT constraints for domains.\nI've done so in this new version.\n\n> ====================\n> in extractNotNullColumn\n> we can Assert(colnum > 0);\n\nTrue, assuming we reject the case for system columns as you say below.\n\n> create table t3(a int , b int , c int ,not null a, not null c, not\n> null b, not null tableoid);\n> this should not be allowed?\n\nAdded explicit rejection here and in a couple of other places.\n\n> foreach(lc, RelationGetNotNullConstraints(RelationGetRelid(relation), false))\n\n> forgive me for being hypocritical.\n> I guess this is not a good coding pattern.\n> one reason would be: if you do:\n> =\n> list *a = RelationGetNotNullConstraints(RelationGetRelid(relation), false);\n> foreach(lc, a)\n> =\n> then you can call pprint(a).\n\nI'm undecided about this, but seeing that we don't use this pattern\nalmost anywhere, I've gone ahead and added the extra local variable.\n\n> + /*\n> + * If INCLUDING INDEXES is not given and a primary key exists, we need to\n> + * add not-null constraints to the columns covered by the PK (except those\n> + * that already have one.) 
This is required for backwards compatibility.\n\n\n> this part (\" if (bms_is_member(attnum, donecols))\" will always be true?\n> donecols is all not-null attnums, pkcols is pk not-null attnums.\n> so pkcols info will always be included in donecols.\n> i placed a \"elog(INFO, \"%s we are in\", __func__);\"\n> above\n> \"attForm = TupleDescAttr(tupleDesc, attnum - 1);\"\n> all regression tests still passed.\n\nYes, this code is completely unnecessary now. Removed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 10 Sep 2024 20:18:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 2:18 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hello, here's a v2 of this patch. I have fixed --I think-- all the\n> issues you and Tender Wang reported (unless I declined a fix in some\n> previous email).\n>\n\n+ /*\n+ * The constraint must appear as inherited in children, so create a\n+ * modified constraint object to use.\n+ */\n+ constr = copyObject(constr);\n+ constr->inhcount = 1;\n\nin ATAddCheckNNConstraint, we don't need the above copyObject call.\nbecause at the beginning of ATAddCheckNNConstraint, we do\n newcons = AddRelationNewConstraints(rel, NIL,\n list_make1(copyObject(constr)),\n recursing || is_readd, /*\nallow_merge */\n !recursing, /* is_local */\n is_readd, /* is_internal */\n NULL); /* queryString not available\n * here */\n\n\npg_constraint manual <<<<QUOTE<<<\nconislocal bool\nThis constraint is defined locally for the relation. Note that a\nconstraint can be locally defined and inherited simultaneously.\nconinhcount int2\nThe number of direct inheritance ancestors this constraint has. A\nconstraint with a nonzero number of ancestors cannot be dropped nor\nrenamed.\n<<<<END OF QUOTE\n\ndrop table idxpart cascade;\ncreate table idxpart (a int) partition by range (a);\ncreate table idxpart0 (like idxpart);\nalter table idxpart0 add primary key (a);\nalter table idxpart attach partition idxpart0 for values from (0) to (1000);\nalter table idxpart add primary key (a);\n\nalter table idxpart0 DROP CONSTRAINT idxpart0_pkey;\nalter table idxpart0 DROP CONSTRAINT idxpart0_a_not_null;\n\nFirst DROP CONSTRAINT failed as the doc said,\nbut the second success.\nbut the second DROP CONSTRAINT should fail?\nEven if you drop success, idxpart0_a_not_null still exists.\nit also conflicts with the pg_constraint I've quoted above.\n\n\ntransformTableLikeClause, expandTableLikeClause\ncan be further simplified when the relation don't have not-null as all like:\n\n /*\n * Reproduce not-null constraints by copying them. This doesn't require\n * any option to have been given.\n */\n if (tupleDesc->constr && tupleDesc->constr->has_not_null)\n {\n lst = RelationGetNotNullConstraints(RelationGetRelid(relation), false);\n cxt->nnconstraints = list_concat(cxt->nnconstraints, lst);\n }\n\n\n\n\nwe can do:\ncreate table parent (a text, b int);\ncreate table child () inherits (parent);\nalter table child no inherit parent;\n\nso comments in AdjustNotNullInheritance\n * AdjustNotNullInheritance\n * Adjust not-null constraints' inhcount/islocal for\n * ALTER TABLE [NO] INHERITS\n\n\"ALTER TABLE [NO] INHERITS\"\nshould be\n\"ALTER TABLE ALTER COLUMN [NO] INHERITS\"\n?\n\nAlso, seems AdjustNotNullInheritance never being called/used?\n\n\n",
"msg_date": "Wed, 11 Sep 2024 09:11:47 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 9:11 AM jian he <jian.universality@gmail.com> wrote:\n>\n> On Wed, Sep 11, 2024 at 2:18 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Hello, here's a v2 of this patch. I have fixed --I think-- all the\n> > issues you and Tender Wang reported (unless I declined a fix in some\n> > previous email).\n> >\n\nafter applying your changes.\n\nin ATExecAddConstraint, ATAddCheckNNConstraint.\nATAddCheckNNConstraint(wqueue, tab, rel,\n newConstraint, recurse, false, is_readd,\n lockmode);\nif passed to ATAddCheckNNConstraint rel is a partitioned table.\nATAddCheckNNConstraint itself can recurse to create not-null pg_constraint\nfor itself and it's partitions (children table).\nThis is fine as long as we only call ATExecAddConstraint once.\n\nbut ATExecAddConstraint itself will recurse, it will call\nthe partitioned table and each of the partitions.\n\nThe first time ATExecAddConstraint with a partitioned table with the\nfollowing calling chain\nATAddCheckNNConstraint-> AddRelationNewConstraints -> AdjustNotNullInheritance1\nworks fine.\n\nthe second time ATExecAddConstraint with the partitions\nATAddCheckNNConstraint-> AddRelationNewConstraints -> AdjustNotNullInheritance1\nAdjustNotNullInheritance1 will make the partitions\npg_constraint->coninhcount bigger than 1.\n\n\nfor example:\ndrop table if exists idxpart, idxpart0, idxpart1 cascade;\ncreate table idxpart (a int) partition by range (a);\ncreate table idxpart0 (a int primary key);\nalter table idxpart attach partition idxpart0 for values from (0) to (1000);\nalter table idxpart add primary key (a);\n\nAfter the above query\npg_constraint->coninhcount of idxpart0_a_not_null becomes 2.\nbut it should be 1\n?\n\n\n",
"msg_date": "Wed, 11 Sep 2024 17:27:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-11, jian he wrote:\n\n> On Wed, Sep 11, 2024 at 2:18 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Hello, here's a v2 of this patch. I have fixed --I think-- all the\n> > issues you and Tender Wang reported (unless I declined a fix in some\n> > previous email).\n> >\n> \n> + /*\n> + * The constraint must appear as inherited in children, so create a\n> + * modified constraint object to use.\n> + */\n> + constr = copyObject(constr);\n> + constr->inhcount = 1;\n> \n> in ATAddCheckNNConstraint, we don't need the above copyObject call.\n> because at the beginning of ATAddCheckNNConstraint, we do\n> newcons = AddRelationNewConstraints(rel, NIL,\n> list_make1(copyObject(constr)),\n> recursing || is_readd, /*\n> allow_merge */\n> !recursing, /* is_local */\n> is_readd, /* is_internal */\n> NULL); /* queryString not available\n> * here */\n\nI'm disinclined to change this. The purpose of creating a copy at this\npoint is to avoid modifying an object that doesn't belong to us. It\ndoesn't really matter much that we have an additional copy anyway; this\nisn't in any way performance-critical or memory-intensive.\n\n> create table idxpart (a int) partition by range (a);\n> create table idxpart0 (like idxpart);\n> alter table idxpart0 add primary key (a);\n> alter table idxpart attach partition idxpart0 for values from (0) to (1000);\n> alter table idxpart add primary key (a);\n> \n> alter table idxpart0 DROP CONSTRAINT idxpart0_pkey;\n> alter table idxpart0 DROP CONSTRAINT idxpart0_a_not_null;\n> \n> First DROP CONSTRAINT failed as the doc said,\n> but the second success.\n> but the second DROP CONSTRAINT should fail?\n> Even if you drop success, idxpart0_a_not_null still exists.\n> it also conflicts with the pg_constraint I've quoted above.\n\nHmm, this is because we allow a DROP CONSTRAINT to set conislocal from\ntrue to false. So the constraint isn't *actually* dropped. If you try\nthe DROP CONSTRAINT a second time, you'll get an error then. Maybe I\nshould change the order of checks here, so that we forbid doing the\nconislocal change; that would be more consistent with what we document.\nI'm undecided about this TBH -- maybe I should reword the documentation\nyou cite in a different way.\n\n> transformTableLikeClause, expandTableLikeClause\n> can be further simplified when the relation don't have not-null as all like:\n> \n> /*\n> * Reproduce not-null constraints by copying them. This doesn't require\n> * any option to have been given.\n> */\n> if (tupleDesc->constr && tupleDesc->constr->has_not_null)\n> {\n> lst = RelationGetNotNullConstraints(RelationGetRelid(relation), false);\n> cxt->nnconstraints = list_concat(cxt->nnconstraints, lst);\n> }\n\nTrue.\n\n> Also, seems AdjustNotNullInheritance never being called/used?\n\nOh, right, I removed the last callsite recently. I'll remove the\nfunction, and rename AdjustNotNullInheritance1 to\nAdjustNotNullInheritance now that that name is free.\n\nThanks for reviewing! I'll handle your other comment tomorrow.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 11 Sep 2024 21:32:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
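To make the conislocal behavior described above concrete, here is a hedged sketch using the idxpart example quoted in that message; the outcomes are what the message describes at this point in the thread (a later message changes the code so that the first DROP errors out as well):

  create table idxpart (a int) partition by range (a);
  create table idxpart0 (like idxpart);
  alter table idxpart0 add primary key (a);
  alter table idxpart attach partition idxpart0 for values from (0) to (1000);
  alter table idxpart add primary key (a);

  alter table idxpart0 drop constraint idxpart0_a_not_null;  -- only flips conislocal from true to false
  select conname, conislocal, coninhcount
    from pg_constraint
   where conrelid = 'idxpart0'::regclass and contype = 'n';
  alter table idxpart0 drop constraint idxpart0_a_not_null;  -- second attempt: "cannot drop inherited constraint"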
{
"msg_contents": "> > On Wed, Sep 11, 2024 at 2:18 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > Hello, here's a v2 of this patch. I have fixed --I think-- all the\n> > > issues you and Tender Wang reported (unless I declined a fix in some\n> > > previous email).\n> > >\n\nPREPARE constr_meta (name[]) AS\nwith cte as\n(\nselect con.oid as conoid, conrelid::regclass, pc.relkind as relkind,\ncon.coninhcount as inhcnt\n,con.conname, con.contype, con.connoinherit as noinherit\n,con.conislocal as islocal, pa.attname, pa.attnum\nfrom pg_constraint con join pg_class pc on pc.oid = con.conrelid\njoin pg_attribute pa on pa.attrelid = pc.oid\nand pa.attnum = any(conkey)\nwhere con.contype in ('n', 'p', 'c') and\npc.relname = ANY ($1)\n)\nselect pg_get_constraintdef(conoid), * from cte\norder by contype, inhcnt, islocal, attnum;\n\nThe above constr_meta is used to query meta info, nothing fancy.\n\n---exampleA\ndrop table pp1,cc1, cc2;\ncreate table pp1 (f1 int);\ncreate table cc1 (f2 text, f3 int) inherits (pp1);\ncreate table cc2(f4 float) inherits(pp1,cc1);\nalter table pp1 alter column f1 set not null;\nexecute constr_meta('{pp1,cc1, cc2}');\n\n---exampleB\ndrop table pp1,cc1, cc2;\ncreate table pp1 (f1 int not null);\ncreate table cc1 (f2 text, f3 int) inherits (pp1);\ncreate table cc2(f4 float) inherits(pp1,cc1);\nexecute constr_meta('{pp1,cc1, cc2}');\n\nShould exampleA and exampleB\nreturn same pg_constraint->coninhcount\nfor not-null constraint \"cc2_f1_not_null\"\n?\n\n\n\n\n\nWe only have this Synopsis\nALTER [ COLUMN ] column_name { SET | DROP } NOT NULL\n\n--tests from src/test/regress/sql/inherit.sql\nCREATE TABLE inh_nn_parent (a int, NOT NULL a NO INHERIT);\nALTER TABLE inh_nn_parent ALTER a SET NOT NULL;\ncurrent fail at ATExecSetNotNull\nERROR: cannot change NO INHERIT status of NOT NULL constraint\n\"inh_nn_parent_a_not_null\" on relation \"inh_nn_parent\"\n\nseems we translate\nALTER TABLE inh_nn_parent ALTER a SET NOT NULL;\nto\nALTER TABLE inh_nn_parent ALTER a SET NOT NULL INHERIT\n\nbut we cannot (syntax error)\nALTER TABLE inh_nn_parent ALTER a SET NOT NULL NO INHERIT;\n\nIn this case, why not make it no-op, this column \"a\" already NOT NULL.\n\nso ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL\nwill change not-null information, no need to consider other not-null\nrelated information.\n\n\n /*\n- * Return the address of the modified column. If the column was already NOT\n- * NULL, InvalidObjectAddress is returned.\n+ * ALTER TABLE ALTER COLUMN SET NOT NULL\n+ *\n+ * Add a not-null constraint to a single table and its children. 
Returns\n+ * the address of the constraint added to the parent relation, if one gets\n+ * added, or InvalidObjectAddress otherwise.\n+ *\n+ * We must recurse to child tables during execution, rather than using\n+ * ALTER TABLE's normal prep-time recursion.\n */\n static ObjectAddress\n-ATExecSetNotNull(AlteredTableInfo *tab, Relation rel,\n- const char *colName, LOCKMODE lockmode)\n+ATExecSetNotNull(List **wqueue, Relation rel, char *conName, char *colName,\n+ bool recurse, bool recursing, List **readyRels,\n+ LOCKMODE lockmode)\n\nyou introduced two boolean \"recurse\", \"recursing\", don't have enough\nexplanation.\nThat is too much to comprehend.\n\n\" * We must recurse to child tables during execution, rather than using\n\" * ALTER TABLE's normal prep-time recursion.\nWhat does this sentence mean for these two boolean \"recurse\", \"recursing\"?\n\nFinally, I did some cosmetic changes, also improved error message\nin MergeConstraintsIntoExisting",
"msg_date": "Thu, 12 Sep 2024 16:18:57 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-12, jian he wrote:\n\n> ---exampleA\n> drop table pp1,cc1, cc2;\n> create table pp1 (f1 int);\n> create table cc1 (f2 text, f3 int) inherits (pp1);\n> create table cc2(f4 float) inherits(pp1,cc1);\n> alter table pp1 alter column f1 set not null;\n> execute constr_meta('{pp1,cc1, cc2}');\n> \n> ---exampleB\n> drop table pp1,cc1, cc2;\n> create table pp1 (f1 int not null);\n> create table cc1 (f2 text, f3 int) inherits (pp1);\n> create table cc2(f4 float) inherits(pp1,cc1);\n> execute constr_meta('{pp1,cc1, cc2}');\n> \n> Should exampleA and exampleB\n> return same pg_constraint->coninhcount\n> for not-null constraint \"cc2_f1_not_null\"\n> ?\n\nYes, they should be identical. In this case example A is in the wrong,\nthe constraint in cc2 should have inhcount=2 (which example B has) and\nit has inhcount=1. This becomes obvious if you do ALTER TABLE NO\nINHERIT of both parents -- in example A, it fails the second one with\n ERROR: relation 43823 has non-inherited constraint \"cc2_f1_not_null\"\nbecause the inhcount was set wrong by SET NOT NULL. Will fix. (I think\nthe culprit is the \"readyRels\" stuff I had added -- I should nuke that\nand add a CommandCounterIncrement in the right spot ...)\n\n\n> We only have this Synopsis\n> ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL\n\nYeah, this syntax is intended to add a \"normal\" not-null constraint,\ni.e. one that inherits.\n\n> --tests from src/test/regress/sql/inherit.sql\n> CREATE TABLE inh_nn_parent (a int, NOT NULL a NO INHERIT);\n> ALTER TABLE inh_nn_parent ALTER a SET NOT NULL;\n> current fail at ATExecSetNotNull\n> ERROR: cannot change NO INHERIT status of NOT NULL constraint\n> \"inh_nn_parent_a_not_null\" on relation \"inh_nn_parent\"\n\nThis is correct, because here you want a normal not-null constraint but\nthe table already has the weird ones that don't inherit.\n\n> seems we translate\n> ALTER TABLE inh_nn_parent ALTER a SET NOT NULL;\n> to\n> ALTER TABLE inh_nn_parent ALTER a SET NOT NULL INHERIT\n\nWell, we don't \"translate\" it as such. It's just what's normal.\n\n> but we cannot (syntax error)\n> ALTER TABLE inh_nn_parent ALTER a SET NOT NULL NO INHERIT;\n\nI don't feel a need to support this syntax. You can do with with the\nADD CONSTRAINT syntax if you need it.\n\n\n> /*\n> - * Return the address of the modified column. If the column was already NOT\n> - * NULL, InvalidObjectAddress is returned.\n> + * ALTER TABLE ALTER COLUMN SET NOT NULL\n> + *\n> + * Add a not-null constraint to a single table and its children. Returns\n> + * the address of the constraint added to the parent relation, if one gets\n> + * added, or InvalidObjectAddress otherwise.\n> + *\n> + * We must recurse to child tables during execution, rather than using\n> + * ALTER TABLE's normal prep-time recursion.\n> */\n> static ObjectAddress\n> -ATExecSetNotNull(AlteredTableInfo *tab, Relation rel,\n> - const char *colName, LOCKMODE lockmode)\n> +ATExecSetNotNull(List **wqueue, Relation rel, char *conName, char *colName,\n> + bool recurse, bool recursing, List **readyRels,\n> + LOCKMODE lockmode)\n> \n> you introduced two boolean \"recurse\", \"recursing\", don't have enough\n> explanation.\n> That is too much to comprehend.\n\nApologies. I think it's a well-established pattern in tablecmds.c.\n\"bool recurse\" is for the caller (ATRewriteCatalogs) to request\nrecursion. \"bool recursing\" is for the function informing itself that\nit is calling itself recursively, i.e. \"I'm already recursing\". This is\nmostly (?) 
used to skip things like permission checks.\n\n\n> \" * We must recurse to child tables during execution, rather than using\n> \" * ALTER TABLE's normal prep-time recursion.\n> What does this sentence mean for these two boolean \"recurse\", \"recursing\"?\n\nHere \"recurse during execution\" means ALTER TABLE's phase 2, that is,\nATRewriteCatalogs (which means some ATExecFoo function needs to\nimplement recursion internally), and \"normal prep-time recursion\" means\nthe recursion set up by phase 1 (ATPrepCmd), which creates separate\nAlterTableCmd nodes for each child table. See the comments for\nAlterTable and the code for ATController.\n\n> Finally, I did some cosmetic changes, also improved error message\n> in MergeConstraintsIntoExisting\n\nThanks, will incorporate.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n",
"msg_date": "Thu, 12 Sep 2024 12:41:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "Sadly, there were some other time-wasting events that I failed to\nconsider, but here's now v3 which has fixed (AFAICS) all the problems\nyou reported.\n\nOn 2024-Sep-11, jian he wrote:\n\n> after applying your changes.\n> \n> in ATExecAddConstraint, ATAddCheckNNConstraint.\n> ATAddCheckNNConstraint(wqueue, tab, rel,\n> newConstraint, recurse, false, is_readd,\n> lockmode);\n> if passed to ATAddCheckNNConstraint rel is a partitioned table.\n> ATAddCheckNNConstraint itself can recurse to create not-null pg_constraint\n> for itself and it's partitions (children table).\n> This is fine as long as we only call ATExecAddConstraint once.\n> \n> but ATExecAddConstraint itself will recurse, it will call\n> the partitioned table and each of the partitions.\n\nYeah, this is because ATPrepAddPrimaryKey was queueing SetNotNull nodes\nfor each column on each children, which is repetitive and causes the\nproblem you see. That was a leftover from the previous way we handled\nPKs; we no longer need it to work that way. I have changed it so that\nit queues one constraint addition per column, on the same table that\nreceives the PK. It now works correctly as far as I can tell.\n\nSadly, there's one more pg_dump issue, which causes the pg_upgrade tests\nto fail. The problem is that if you have this sequence (taken from\nconstraints.sql):\n\nCREATE TABLE notnull_tbl4 (a INTEGER PRIMARY KEY INITIALLY DEFERRED);\nCREATE TABLE notnull_tbl4_cld2 (PRIMARY KEY (a) DEFERRABLE) INHERITS (notnull_tbl4);\n\nthis is dumped by pg_dump in this other way:\n\nCREATE TABLE public.notnull_tbl4 (a integer NOT NULL);\nCREATE TABLE public.notnull_tbl4_cld2 () INHERITS (public.notnull_tbl4);\nALTER TABLE ONLY public.notnull_tbl4_cld2 ADD CONSTRAINT notnull_tbl4_cld2_pkey PRIMARY KEY (a) DEFERRABLE;\nALTER TABLE ONLY public.notnull_tbl4 ADD CONSTRAINT notnull_tbl4_pkey PRIMARY KEY (a) DEFERRABLE INITIALLY DEFERRED;\n\nThis is almost exactly the same, except that the PK for\nnotnull_tbl4_cld2 is created in a separate command ... and IIUC this\ncauses the not-null constraint to obtain a different name, or a\ndifferent inheritance characteristic, and then from the\nrestored-by-pg_upgrade database, it's dumped by pg_dump separately.\nThis is what causes the pg_upgrade test to fail.\n\nAnyway, this made me realize that there is a more general problem, to\nwit, that pg_dump is not dumping not-null constraint names correctly --\nsometimes they just not dumped, which is Not Good. I'll have to look\ninto that once more.\n\n\n(Also: there are still a few additional test stanzas in regress/ that\nought to be removed; also, I haven't re-tested sepgsql, so it's probably\nbroken ATM.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 16 Sep 2024 19:47:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 1:47 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n\n\nstill digging inheritance related issues.\n\ndrop table if exists pp1,cc1, cc2;\ncreate table pp1 (f1 int, constraint nn check (f1 > 1));\ncreate table cc1 (f2 text, f3 int ) inherits (pp1);\ncreate table cc2(f4 float, constraint nn check (f1 > 1)) inherits(pp1,cc1);\nexecute constr_meta('{pp1,cc1, cc2}');\nalter table only cc2 drop constraint nn;\nERROR: cannot drop inherited constraint \"nn\" of relation \"cc2\"\n\nSo:\n\ndrop table if exists pp1,cc1, cc2;\ncreate table pp1 (f1 int not null);\ncreate table cc1 (f2 text, f3 int not null no inherit) inherits (pp1);\ncreate table cc2(f4 float, f1 int not null) inherits(pp1,cc1);\nexecute constr_meta('{pp1,cc1, cc2}');\nalter table only cc2 drop constraint cc2_f1_not_null;\n\nLast \"alter table only cc2\" should fail?\nbecause it violates catalog-pg-constraint coninhcount description:\n\"The number of direct inheritance ancestors this constraint has. A\nconstraint with a nonzero number of ancestors cannot be dropped nor\nrenamed.\"\n\nalso\nalter table only cc2 drop constraint cc2_f1_not_null;\nsuccess executed.\nsome pg_constraint attribute info may change.\nbut constraint cc2_f1_not_null still exists.\n\n\n",
"msg_date": "Thu, 19 Sep 2024 11:30:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "still based on v3.\nin src/sgml/html/ddl-partitioning.html\n<<<QUOTE<<\nBoth CHECK and NOT NULL constraints of a partitioned table are always\ninherited by all its partitions.\nCHECK constraints that are marked NO INHERIT are not allowed to be\ncreated on partitioned tables.\nYou cannot drop a NOT NULL constraint on a partition's column if the\nsame constraint is present in the parent table.\n<<<QUOTE<<\nwe can change\n\"CHECK constraints that are marked NO INHERIT are not allowed to be\ncreated on partitioned tables.\"\nto\n\"CHECK and NOT NULL constraints that are marked NO INHERIT are not\nallowed to be created on partitioned tables.\"\n\n\n\nin sql-altertable.html we have:\n<<<QUOTE<<\nATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT }\nIf any of the CHECK constraints of the table being attached are marked\nNO INHERIT, the command will fail; such constraints must be recreated\nwithout the NO INHERIT clause.\n<<<QUOTE<<\n\ncreate table idxpart (a int constraint nn not null) partition by range (a);\ncreate table idxpart0 (a int constraint nn not null no inherit);\nalter table idxpart attach partition idxpart0 for values from (0) to (1000);\n\nIn the above sql query case,\nwe changed a constraint (\"nn\" on idxpart0) connoinherit attribute\nafter ATTACH PARTITION.\n(connoinherit from true to false)\nDo we need extra sentences to explain it?\nhere not-null constraint behavior seems to divert from CHECK constraint.\n\n\n\ndrop table if exists idxpart, idxpart0, idxpart1 cascade;\ncreate table idxpart (a int) partition by range (a);\ncreate table idxpart0 (a int primary key);\nalter table idxpart attach partition idxpart0 for values from (0) to (1000);\nalter table idxpart alter column a set not null;\nalter table idxpart0 alter column a drop not null;\nalter table idxpart0 drop constraint idxpart0_a_not_null;\n\n\"alter table idxpart0 alter column a drop not null;\"\nis logically equivalent to\n\"alter table idxpart0 drop constraint idxpart0_a_not_null;\"\n\nthe first one (alter column) ERROR out,\nthe second success.\nthe second \"drop constraint\" should also ERROR out?\nsince it violates the sentence in ddl-partitioning.html\n\"You cannot drop a NOT NULL constraint on a partition's column if the\nsame constraint is present in the parent table.\"\n\n\n",
"msg_date": "Thu, 19 Sep 2024 14:40:12 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-19, jian he wrote:\n\n> still based on v3.\n> in src/sgml/html/ddl-partitioning.html\n> <<<QUOTE<<\n> Both CHECK and NOT NULL constraints of a partitioned table are always\n> inherited by all its partitions.\n> CHECK constraints that are marked NO INHERIT are not allowed to be\n> created on partitioned tables.\n> You cannot drop a NOT NULL constraint on a partition's column if the\n> same constraint is present in the parent table.\n> <<<QUOTE<<\n> we can change\n> \"CHECK constraints that are marked NO INHERIT are not allowed to be\n> created on partitioned tables.\"\n> to\n> \"CHECK and NOT NULL constraints that are marked NO INHERIT are not\n> allowed to be created on partitioned tables.\"\n\nRight. Your proposed text is correct but sounds a bit repetitive with\nthe phrase just prior, and also the next one about inability to drop a\nNOT NULL applies equally to CHECK constraints; so I modified the whole\nparagraph to this:\n\n Both <literal>CHECK</literal> and <literal>NOT NULL</literal>\n constraints of a partitioned table are always inherited by all its\n partitions; it is not allowed to create <literal>NO INHERIT<literal>\n constraints of those types.\n You cannot drop a constraint of those types if the same constraint\n is present in the parent table.\n\n\n> in sql-altertable.html we have:\n> <<<QUOTE<<\n> ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT }\n> If any of the CHECK constraints of the table being attached are marked\n> NO INHERIT, the command will fail; such constraints must be recreated\n> without the NO INHERIT clause.\n> <<<QUOTE<<\n> \n> create table idxpart (a int constraint nn not null) partition by range (a);\n> create table idxpart0 (a int constraint nn not null no inherit);\n> alter table idxpart attach partition idxpart0 for values from (0) to (1000);\n> \n> In the above sql query case,\n> we changed a constraint (\"nn\" on idxpart0) connoinherit attribute\n> after ATTACH PARTITION.\n> (connoinherit from true to false)\n> Do we need extra sentences to explain it?\n> here not-null constraint behavior seems to divert from CHECK constraint.\n\nAh, yeah, the docs are misleading: we do allow these constraints to\nmutate from NO INHERIT to INHERIT. There's no danger in this, because\nsuch a table cannot have children: no inheritance children (because\ninheritance-parent tables cannot be partitions) and no partitions\neither, because partitioned tables are not allowed to have NOT NULL NO INHERIT \nconstraints. 
So this can only happen on a standalone table, and thus\nchanging the existing not-null constraint from NO INHERIT to normal does\nno harm.\n\nI think we could make CHECK behave the same way on this point; but in the\nmeantime, I propose this text:\n\n If any of the <literal>CHECK</literal> constraints of the table being\n attached are marked <literal>NO INHERIT</literal>, the command will fail;\n such constraints must be recreated without the\n <literal>NO INHERIT</literal> clause.\n By contrast, a <literal>NOT NULL</literal> constraint that was created\n as <literal>NO INHERIT</literal> will be changed to a normal inheriting\n one during attach.\n\n\n> drop table if exists idxpart, idxpart0, idxpart1 cascade;\n> create table idxpart (a int) partition by range (a);\n> create table idxpart0 (a int primary key);\n> alter table idxpart attach partition idxpart0 for values from (0) to (1000);\n> alter table idxpart alter column a set not null;\n> alter table idxpart0 alter column a drop not null;\n> alter table idxpart0 drop constraint idxpart0_a_not_null;\n> \n> \"alter table idxpart0 alter column a drop not null;\"\n> is logically equivalent to\n> \"alter table idxpart0 drop constraint idxpart0_a_not_null;\"\n> \n> the first one (alter column) ERROR out,\n> the second success.\n> the second \"drop constraint\" should also ERROR out?\n> since it violates the sentence in ddl-partitioning.html\n> \"You cannot drop a NOT NULL constraint on a partition's column if the\n> same constraint is present in the parent table.\"\n\nYeah, I modified this code already a few days ago, and now it does error\nout like this\n\nERROR: cannot drop inherited constraint \"idxpart0_a_not_null\" of relation \"idxpart0\"\n\nAnyway, as I mentioned back then, the DROP CONSTRAINT didn't _actually_\nremove the constraint; it only marked the constraint as no longer\nlocally defined (conislocal=false), which had no practical effect other\nthan changing the representation during pg_dump. Even detaching the\npartition after having \"dropped\" the constraint would make the not-null\nconstraint appear again as coninhcount=0,conislocal=true rather than\ndrop it.\n\n\nSpeaking of pg_dump, I'm still on the nightmarish trip to get it to\nbehave correctly for all cases (esp. for pg_upgrade). It seems I\ntripped up on my own code from the previous round, having\nunder-commented and misunderstood it :-(\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n",
"msg_date": "Thu, 19 Sep 2024 10:26:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
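A small sketch of the NO INHERIT-to-inheriting conversion discussed above; the catalog values shown are what the message and the quoted report imply, not verified output:

  create table idxpart (a int constraint nn not null) partition by range (a);
  create table idxpart0 (a int constraint nn not null no inherit);
  select connoinherit from pg_constraint
   where conrelid = 'idxpart0'::regclass and conname = 'nn';  -- t before attach
  alter table idxpart attach partition idxpart0 for values from (0) to (1000);
  select connoinherit from pg_constraint
   where conrelid = 'idxpart0'::regclass and conname = 'nn';  -- expected: f after attach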
{
"msg_contents": "On Thu, Sep 19, 2024 at 4:26 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n>\n> > drop table if exists idxpart, idxpart0, idxpart1 cascade;\n> > create table idxpart (a int) partition by range (a);\n> > create table idxpart0 (a int primary key);\n> > alter table idxpart attach partition idxpart0 for values from (0) to (1000);\n> > alter table idxpart alter column a set not null;\n> > alter table idxpart0 alter column a drop not null;\n> > alter table idxpart0 drop constraint idxpart0_a_not_null;\n> >\n> > \"alter table idxpart0 alter column a drop not null;\"\n> > is logically equivalent to\n> > \"alter table idxpart0 drop constraint idxpart0_a_not_null;\"\n> >\n> > the first one (alter column) ERROR out,\n> > the second success.\n> > the second \"drop constraint\" should also ERROR out?\n> > since it violates the sentence in ddl-partitioning.html\n> > \"You cannot drop a NOT NULL constraint on a partition's column if the\n> > same constraint is present in the parent table.\"\n>\n> Yeah, I modified this code already a few days ago, and now it does error\n> out like this\n>\n> ERROR: cannot drop inherited constraint \"idxpart0_a_not_null\" of relation \"idxpart0\"\n>\n> Anyway, as I mentioned back then, the DROP CONSTRAINT didn't _actually_\n> remove the constraint; it only marked the constraint as no longer\n> locally defined (conislocal=false), which had no practical effect other\n> than changing the representation during pg_dump. Even detaching the\n> partition after having \"dropped\" the constraint would make the not-null\n> constraint appear again as coninhcount=0,conislocal=true rather than\n> drop it.\n>\n\nfunny.\nas the previously sql example, if you execute\n\"alter table idxpart0 drop constraint idxpart0_a_not_null;\"\nagain\n\nthen\nERROR: cannot drop inherited constraint \"idxpart0_a_not_null\" of\nrelation \"idxpart0\"\n\nI am not sure if that's logically OK or if the user can deduce the\nlogic from the manual.\nlike, the first time you use \"alter table drop constraint\"\nto drop a constraint, the constraint is not totally dropped,\nthe second time you execute it again the constraint cannot be dropped directly.\n\n\ni think the issue is the changes we did in dropconstraint_internal\nin dropconstraint_internal, we have:\n-----------\n if (con->contype == CONSTRAINT_NOTNULL &&\n con->conislocal && con->coninhcount > 0)\n {\n HeapTuple copytup;\n copytup = heap_copytuple(constraintTup);\n con = (Form_pg_constraint) GETSTRUCT(copytup);\n con->conislocal = false;\n CatalogTupleUpdate(conrel, ©tup->t_self, copytup);\n ObjectAddressSet(conobj, ConstraintRelationId, con->oid);\n CommandCounterIncrement();\n table_close(conrel, RowExclusiveLock);\n return conobj;\n }\n /* Don't allow drop of inherited constraints */\n if (con->coninhcount > 0 && !recursing)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n errmsg(\"cannot drop inherited constraint \\\"%s\\\" of\nrelation \\\"%s\\\"\",\n constrName, RelationGetRelationName(rel))));\n-----------\n\n\n\ncomments in dropconstraint_internal\n\"* Reset pg_constraint.attnotnull, if this is a not-null constraint.\"\nshould be\n\"pg_attribute.attnotnull\"\n\n\n\nalso, we don't have tests for not-null constraint similar to check\nconstraint tests on\nsrc/test/regress/sql/alter_table.sql (line 2067 to line 2073)\n\n\n",
"msg_date": "Thu, 19 Sep 2024 17:26:59 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "another bug.\nI will dig later, just want to share it first.\n\nminimum producer:\ndrop table if exists pp1,cc1, cc2,cc3;\ncreate table pp1 (f1 int );\ncreate table cc1 () inherits (pp1);\ncreate table cc2() inherits(pp1,cc1);\ncreate table cc3() inherits(pp1,cc1,cc2);\n\nalter table pp1 alter f1 set not null;\nERROR: tuple already updated by self\n\n\n",
"msg_date": "Fri, 20 Sep 2024 11:34:15 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "jian he <jian.universality@gmail.com> 于2024年9月20日周五 11:34写道:\n\n> another bug.\n> I will dig later, just want to share it first.\n>\n> minimum producer:\n> drop table if exists pp1,cc1, cc2,cc3;\n> create table pp1 (f1 int );\n> create table cc1 () inherits (pp1);\n> create table cc2() inherits(pp1,cc1);\n> create table cc3() inherits(pp1,cc1,cc2);\n>\n> alter table pp1 alter f1 set not null;\n> ERROR: tuple already updated by self\n>\n\nI guess some place needs call CommandCounterIncrement().\n\n-- \nThanks,\nTender Wang\n\njian he <jian.universality@gmail.com> 于2024年9月20日周五 11:34写道:another bug.\nI will dig later, just want to share it first.\n\nminimum producer:\ndrop table if exists pp1,cc1, cc2,cc3;\ncreate table pp1 (f1 int );\ncreate table cc1 () inherits (pp1);\ncreate table cc2() inherits(pp1,cc1);\ncreate table cc3() inherits(pp1,cc1,cc2);\n\nalter table pp1 alter f1 set not null;\nERROR: tuple already updated by self\nI guess some place needs call CommandCounterIncrement().-- Thanks,Tender Wang",
"msg_date": "Fri, 20 Sep 2024 12:14:12 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "By the way, the v3 failed applying on Head(d35e293878)\ngit am v3-0001-Catalog-not-null-constraints.patch\nApplying: Catalog not-null constraints\nerror: patch failed: doc/src/sgml/ref/create_table.sgml:77\nerror: doc/src/sgml/ref/create_table.sgml: patch does not apply\nerror: patch failed: src/backend/commands/tablecmds.c:4834\nerror: src/backend/commands/tablecmds.c: patch does not apply\nerror: patch failed: src/backend/parser/gram.y:4141\nerror: src/backend/parser/gram.y: patch does not apply\nerror: patch failed: src/backend/parser/parse_utilcmd.c:2385\nerror: src/backend/parser/parse_utilcmd.c: patch does not apply\nPatch failed at 0001 Catalog not-null constraints\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> 于2024年9月17日周二 01:47写道:\n\n> Sadly, there were some other time-wasting events that I failed to\n> consider, but here's now v3 which has fixed (AFAICS) all the problems\n> you reported.\n>\n> On 2024-Sep-11, jian he wrote:\n>\n> > after applying your changes.\n> >\n> > in ATExecAddConstraint, ATAddCheckNNConstraint.\n> > ATAddCheckNNConstraint(wqueue, tab, rel,\n> > newConstraint, recurse, false, is_readd,\n> > lockmode);\n> > if passed to ATAddCheckNNConstraint rel is a partitioned table.\n> > ATAddCheckNNConstraint itself can recurse to create not-null\n> pg_constraint\n> > for itself and it's partitions (children table).\n> > This is fine as long as we only call ATExecAddConstraint once.\n> >\n> > but ATExecAddConstraint itself will recurse, it will call\n> > the partitioned table and each of the partitions.\n>\n> Yeah, this is because ATPrepAddPrimaryKey was queueing SetNotNull nodes\n> for each column on each children, which is repetitive and causes the\n> problem you see. That was a leftover from the previous way we handled\n> PKs; we no longer need it to work that way. I have changed it so that\n> it queues one constraint addition per column, on the same table that\n> receives the PK. It now works correctly as far as I can tell.\n>\n> Sadly, there's one more pg_dump issue, which causes the pg_upgrade tests\n> to fail. The problem is that if you have this sequence (taken from\n> constraints.sql):\n>\n> CREATE TABLE notnull_tbl4 (a INTEGER PRIMARY KEY INITIALLY DEFERRED);\n> CREATE TABLE notnull_tbl4_cld2 (PRIMARY KEY (a) DEFERRABLE) INHERITS\n> (notnull_tbl4);\n>\n> this is dumped by pg_dump in this other way:\n>\n> CREATE TABLE public.notnull_tbl4 (a integer NOT NULL);\n> CREATE TABLE public.notnull_tbl4_cld2 () INHERITS (public.notnull_tbl4);\n> ALTER TABLE ONLY public.notnull_tbl4_cld2 ADD CONSTRAINT\n> notnull_tbl4_cld2_pkey PRIMARY KEY (a) DEFERRABLE;\n> ALTER TABLE ONLY public.notnull_tbl4 ADD CONSTRAINT notnull_tbl4_pkey\n> PRIMARY KEY (a) DEFERRABLE INITIALLY DEFERRED;\n>\n> This is almost exactly the same, except that the PK for\n> notnull_tbl4_cld2 is created in a separate command ... and IIUC this\n> causes the not-null constraint to obtain a different name, or a\n> different inheritance characteristic, and then from the\n> restored-by-pg_upgrade database, it's dumped by pg_dump separately.\n> This is what causes the pg_upgrade test to fail.\n>\n> Anyway, this made me realize that there is a more general problem, to\n> wit, that pg_dump is not dumping not-null constraint names correctly --\n> sometimes they just not dumped, which is Not Good. 
I'll have to look\n> into that once more.\n>\n>\n> (Also: there are still a few additional test stanzas in regress/ that\n> ought to be removed; also, I haven't re-tested sepgsql, so it's probably\n> broken ATM.)\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n>\n\n\n-- \nThanks,\nTender Wang",
"msg_date": "Fri, 20 Sep 2024 14:31:09 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-20, Tender Wang wrote:\n\n> jian he <jian.universality@gmail.com> 于2024年9月20日周五 11:34写道:\n> \n> > another bug.\n> > I will dig later, just want to share it first.\n> >\n> > minimum producer:\n> > drop table if exists pp1,cc1, cc2,cc3;\n> > create table pp1 (f1 int );\n> > create table cc1 () inherits (pp1);\n> > create table cc2() inherits(pp1,cc1);\n> > create table cc3() inherits(pp1,cc1,cc2);\n> >\n> > alter table pp1 alter f1 set not null;\n> > ERROR: tuple already updated by self\n> \n> I guess some place needs call CommandCounterIncrement().\n\nYeah ... this fixes it:\n\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 579b8075b5..3f66e43b9a 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -7877,12 +7877,6 @@ ATExecSetNotNull(List **wqueue, Relation rel, char *conName, char *colName,\n \t{\n \t\tList\t *children;\n \n-\t\t/*\n-\t\t * Make previous addition visible, in case we process the same\n-\t\t * relation again while chasing down multiple inheritance trees.\n-\t\t */\n-\t\tCommandCounterIncrement();\n-\n \t\tchildren = find_inheritance_children(RelationGetRelid(rel),\n \t\t\t\t\t\t\t\t\t\t\t lockmode);\n \n@@ -7890,6 +7884,8 @@ ATExecSetNotNull(List **wqueue, Relation rel, char *conName, char *colName,\n \t\t{\n \t\t\tRelation\tchildrel = table_open(childoid, NoLock);\n \n+\t\t\tCommandCounterIncrement();\n+\n \t\t\tATExecSetNotNull(wqueue, childrel, conName, colName,\n \t\t\t\t\t\t\t recurse, true, lockmode);\n \t\t\ttable_close(childrel, NoLock);\n\n\nI was trying to save on the number of CCIs that we perform, but it's\nlikely not a wise expenditure of time given that this isn't a very\ncommon scenario anyway. (Nobody with thousands of millions of children\ntables will try to run thousands of commands in a single transaction\nanyway ... so saving a few increments doesn't make any actual\ndifference. If such people exist, they can show us their use case and\nwe can investigate and fix it then.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n",
"msg_date": "Fri, 20 Sep 2024 09:31:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-20, Tender Wang wrote:\n\n> By the way, the v3 failed applying on Head(d35e293878)\n> git am v3-0001-Catalog-not-null-constraints.patch\n> Applying: Catalog not-null constraints\n> error: patch failed: doc/src/sgml/ref/create_table.sgml:77\n\nYeah, there's a bunch of conflicts in current master. I rebased\nyesterday but I'm still composing the email for v4. Coming soon.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Fri, 20 Sep 2024 09:32:27 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "about set_attnotnull.\n\nwe can make set_attnotnull look less recursive.\ninstead of calling find_inheritance_children,\nlet's just one pass, directly call find_all_inheritors\noverall, I think it would be more intuitive.\n\nplease check the attached refactored set_attnotnull.\nregress test passed, i only test regress.\n\nI am also beginning to wonder if ATExecSetNotNull inside can also call\nfind_all_inheritors.",
"msg_date": "Fri, 20 Sep 2024 16:27:28 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "> > We only have this Synopsis\n> > ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL\n>\n> Yeah, this syntax is intended to add a \"normal\" not-null constraint,\n> i.e. one that inherits.\n>\n> > --tests from src/test/regress/sql/inherit.sql\n> > CREATE TABLE inh_nn_parent (a int, NOT NULL a NO INHERIT);\n> > ALTER TABLE inh_nn_parent ALTER a SET NOT NULL;\n> > current fail at ATExecSetNotNull\n> > ERROR: cannot change NO INHERIT status of NOT NULL constraint\n> > \"inh_nn_parent_a_not_null\" on relation \"inh_nn_parent\"\n>\n> This is correct, because here you want a normal not-null constraint but\n> the table already has the weird ones that don't inherit.\n>\n\ni found a case,that in a sense kind of support to make it a no-op.\nno-op means, if this attribute is already not-null, ALTER column SET NOT NULL;\nwon't have any effect.\nor maybe there is a bug somewhere.\n\ndrop table if exists pp1;\ncreate table pp1 (f1 int not null no inherit);\nALTER TABLE pp1 ALTER f1 SET NOT NULL;\nALTER TABLE ONLY pp1 ALTER f1 SET NOT NULL;\n\nThere is no child table, no partition, just a single regular table.\nso, in this case, with or without ONLY should behave the same?\nnow \"ALTER TABLE ONLY\" works, \"ALTER TABLE\" error out.\n\nper sql-altertable.html:\nname\nThe name (optionally schema-qualified) of an existing table to alter.\nIf ONLY is specified before the table name, only that table is\naltered. If ONLY is not specified, the table and all its descendant\ntables (if any) are altered.\n\n\n\n\ndiff --git a/doc/src/sgml/ref/create_table.sgml\nb/doc/src/sgml/ref/create_table.sgml\nindex 93b3f664f2..57c4ecd93a 100644\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -77,6 +77,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } |\nUNLOGGED ] TABLE [ IF NOT EXI\n\n [ CONSTRAINT <replaceable class=\"parameter\">constraint_name</replaceable> ]\n { CHECK ( <replaceable class=\"parameter\">expression</replaceable> ) [\nNO INHERIT ] |\n+ NOT NULL <replaceable class=\"parameter\">column_name</replaceable> [\nNO INHERIT ] |\n UNIQUE [ NULLS [ NOT ] DISTINCT ] ( <replaceable\nclass=\"parameter\">column_name</replaceable> [, ... ] ) <replaceable\nclass=\"parameter\">index_parameters</replaceable> |\n PRIMARY KEY ( <replaceable\nclass=\"parameter\">column_name</replaceable> [, ... ] ) <replaceable\nclass=\"parameter\">index_parameters</replaceable> |\n EXCLUDE [ USING <replaceable\nclass=\"parameter\">index_method</replaceable> ] ( <replaceable\nclass=\"parameter\">exclude_element</replaceable> WITH <replaceable\nclass=\"parameter\">operator</replaceable> [, ... ] ) <replaceable\nclass=\"parameter\">index_parameters</replaceable> [ WHERE (\n<replaceable class=\"parameter\">predicate</replaceable> ) ] |\n\nwe can\ncreate table pp1 (f1 int not null no inherit);\ncreate table pp1 (f1 int, constraint nn not null f1 no inherit);\n\n\"NO INHERIT\" should be applied for column_constraint and table_constraint?\n\n\n",
"msg_date": "Fri, 20 Sep 2024 17:31:24 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-20, jian he wrote:\n\n> about set_attnotnull.\n> \n> we can make set_attnotnull look less recursive.\n> instead of calling find_inheritance_children,\n> let's just one pass, directly call find_all_inheritors\n> overall, I think it would be more intuitive.\n> \n> please check the attached refactored set_attnotnull.\n> regress test passed, i only test regress.\n\nHmm, what do we gain from doing this change? It's longer in number of\nlines of code, and it's not clear to me that it is simpler.\n\n> I am also beginning to wonder if ATExecSetNotNull inside can also call\n> find_all_inheritors.\n\nThe point of descending levels one by one in ATExecSetNotNull is that we\ncan stop for any child on which a constraint already exists. We don't\nneed to scan any children thereof, which saves work.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n",
"msg_date": "Fri, 20 Sep 2024 14:08:28 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-20, Alvaro Herrera wrote:\n\n> Yeah, there's a bunch of conflicts in current master. I rebased\n> yesterday but I'm still composing the email for v4. Coming soon.\n\nOkay, so here is v4 with these problems fixed, including correct\npropagation of constraint names to children tables, which I had\ninadvertently broken earlier. This one does pass the pg_upgrade tests\nand as far as I can see pg_dump does all the correct things also. I\ncleaned up the tests to remove everything that's unneeded, redundant, or\ntesting behavior that no longer exists.\n\nI changed the behavior of ALTER TABLE ONLY <parent> ADD PRIMARY KEY, so\nthat it throws error in case a child does not have a NOT NULL constraint\non one of the columns, rather than silently creating such a constraint.\n(This is how `master` currently behaves). I think this is better\nbehavior, because it lets the user decide whether they want to scan the\ntable to create that constraint or not. It's a bit crude at present,\nbecause (1) a child could have a NO INHERIT constraint and have further\nchildren, which would foil the check (I think changing\nfind_inheritance_children to find_all_inheritors would be sufficient to\nfix this, but that's only needed in legacy inheritance not\npartitioning); (2) the error message doesn't have an errcode, and the\nwording might need work.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329",
"msg_date": "Fri, 20 Sep 2024 23:15:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
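A hedged sketch of the ALTER TABLE ONLY ... ADD PRIMARY KEY behavior change described above; the setup is approximated from the indexing regression test, and the error text comes from the regression diff in the follow-up message:

  create table idxpart (a int) partition by range (a);
  create table idxpart0 (like idxpart);
  alter table idxpart attach partition idxpart0 for values from (0) to (1000);
  alter table only idxpart add primary key (a);
  -- expected to fail because the partition has no not-null constraint on a:
  -- ERROR:  column "a" of table "idxpart0" is not marked NOT NULL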
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> 于2024年9月21日周六 05:15写道:\n\n> On 2024-Sep-20, Alvaro Herrera wrote:\n>\n> > Yeah, there's a bunch of conflicts in current master. I rebased\n> > yesterday but I'm still composing the email for v4. Coming soon.\n>\n> Okay, so here is v4 with these problems fixed, including correct\n> propagation of constraint names to children tables, which I had\n> inadvertently broken earlier. This one does pass the pg_upgrade tests\n> and as far as I can see pg_dump does all the correct things also. I\n> cleaned up the tests to remove everything that's unneeded, redundant, or\n> testing behavior that no longer exists.\n>\n> I changed the behavior of ALTER TABLE ONLY <parent> ADD PRIMARY KEY, so\n> that it throws error in case a child does not have a NOT NULL constraint\n> on one of the columns, rather than silently creating such a constraint.\n> (This is how `master` currently behaves). I think this is better\n> behavior, because it lets the user decide whether they want to scan the\n> table to create that constraint or not. It's a bit crude at present,\n> because (1) a child could have a NO INHERIT constraint and have further\n> children, which would foil the check (I think changing\n> find_inheritance_children to find_all_inheritors would be sufficient to\n> fix this, but that's only needed in legacy inheritance not\n> partitioning); (2) the error message doesn't have an errcode, and the\n> wording might need work.\n>\n\nThe indexing test case in regress failed with v4 patch.\nalter table only idxpart add primary key (a); -- fail, no not-null\nconstraint\n-ERROR: column a of table idxpart0 is not marked not null\n+ERROR: column \"a\" of table \"idxpart0\" is not marked NOT NULL\n\nIt seemed the error message forgot to change.\n\n--\nThanks,\nTender Wang\nhttps://www.openpie.com/\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> 于2024年9月21日周六 05:15写道:On 2024-Sep-20, Alvaro Herrera wrote:\n\n> Yeah, there's a bunch of conflicts in current master. I rebased\n> yesterday but I'm still composing the email for v4. Coming soon.\n\nOkay, so here is v4 with these problems fixed, including correct\npropagation of constraint names to children tables, which I had\ninadvertently broken earlier. This one does pass the pg_upgrade tests\nand as far as I can see pg_dump does all the correct things also. I\ncleaned up the tests to remove everything that's unneeded, redundant, or\ntesting behavior that no longer exists.\n\nI changed the behavior of ALTER TABLE ONLY <parent> ADD PRIMARY KEY, so\nthat it throws error in case a child does not have a NOT NULL constraint\non one of the columns, rather than silently creating such a constraint.\n(This is how `master` currently behaves). I think this is better\nbehavior, because it lets the user decide whether they want to scan the\ntable to create that constraint or not. 
It's a bit crude at present,\nbecause (1) a child could have a NO INHERIT constraint and have further\nchildren, which would foil the check (I think changing\nfind_inheritance_children to find_all_inheritors would be sufficient to\nfix this, but that's only needed in legacy inheritance not\npartitioning); (2) the error message doesn't have an errcode, and the\nwording might need work.The indexing test case in regress failed with v4 patch.alter table only idxpart add primary key (a); -- fail, no not-null constraint-ERROR: column a of table idxpart0 is not marked not null+ERROR: column \"a\" of table \"idxpart0\" is not marked NOT NULLIt seemed the error message forgot to change.--Thanks,Tender Wanghttps://www.openpie.com/",
"msg_date": "Mon, 23 Sep 2024 14:40:00 +0800",
"msg_from": "Tender Wang <tndrwang@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Sat, Sep 21, 2024 at 5:15 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Okay, so here is v4 with these problems fixed, including correct\n> propagation of constraint names to children tables, which I had\n> inadvertently broken earlier. This one does pass the pg_upgrade tests\n> and as far as I can see pg_dump does all the correct things also. I\n> cleaned up the tests to remove everything that's unneeded, redundant, or\n> testing behavior that no longer exists.\n>\n\nin findNotNullConstraintAttnum\n if (con->contype != CONSTRAINT_NOTNULL)\n continue;\n if (!con->convalidated)\n continue;\n\nif con->convalidated is false, then we have a bigger problem?\nmaybe we can change to ERROR to expose/capture potential problems.\nlike:\n if (con->contype != CONSTRAINT_NOTNULL)\n continue;\n if (!con->convalidated)\n elog(ERROR, \"not-null constraint is not validated\");\n\n------<<<<<<<<------------------\nHeapTuple\nfindNotNullConstraint(Oid relid, const char *colname)\n{\n AttrNumber attnum = get_attnum(relid, colname);\n return findNotNullConstraintAttnum(relid, attnum);\n}\n\nwe can change to\n\nHeapTuple\nfindNotNullConstraint(Oid relid, const char *colname)\n{\n AttrNumber attnum = get_attnum(relid, colname);\n if (attnum <= InvalidAttrNumber)\n return NULL;\n return findNotNullConstraintAttnum(relid, attnum);\n}\n------<<<<<<<<------------------\n\nsql-createtable.html\nSECTION: LIKE source_table [ like_option ... ]\nINCLUDING CONSTRAINTS\nCHECK constraints will be copied. No distinction is made between\ncolumn constraints and table constraints. Not-null constraints are\nalways copied to the new table.\n\ndrop table if exists t, t_1,ssa;\ncreate table t(a int, b int, not null a no inherit);\ncreate table ssa (like t INCLUDING all);\n\nHere create table like won't include no inherit not-null constraint,\nseems to conflict with the doc?\n\n------<<<<<<<<------------------\ndrop table if exists t, t_1;\ncreate table t(a int primary key, b int, not null a no inherit);\ncreate table t_1 () inherits (t);\n\nt_1 will inherit the not-null constraint from t,\nso the syntax \"not null a no inherit\" information is ignored.\n\nother cases:\ncreate table t(a int not null, b int, not null a no inherit);\ncreate table t(a int not null no inherit, b int, not null a);\n\nseems currently, column constraint have not-null constraint, then use\nit and table constraint (not-null)\nare ignored.\nbut if column constraint don't have not-null then according to table constraint.\n\n\n",
"msg_date": "Tue, 24 Sep 2024 11:22:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 8:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Sep-20, jian he wrote:\n>\n> > about set_attnotnull.\n> >\n> > we can make set_attnotnull look less recursive.\n> > instead of calling find_inheritance_children,\n> > let's just one pass, directly call find_all_inheritors\n> > overall, I think it would be more intuitive.\n> >\n> > please check the attached refactored set_attnotnull.\n> > regress test passed, i only test regress.\n>\n> Hmm, what do we gain from doing this change? It's longer in number of\n> lines of code, and it's not clear to me that it is simpler.\n>\n\n\nstatic void\nset_attnotnull(List **wqueue, Relation rel, AttrNumber attnum, bool recurse,\n LOCKMODE lockmode)\n{\n HeapTuple tuple;\n Form_pg_attribute attForm;\n bool changed = false;\n List *all_oids;\n Relation thisrel;\n AttrNumber childattno;\n const char *attrname;\n CheckAlterTableIsSafe(rel);\n attrname = get_attname(RelationGetRelid(rel), attnum, false);\n if (recurse)\n all_oids = find_all_inheritors(RelationGetRelid(rel), lockmode,\n NULL);\n else\n all_oids = list_make1_int(RelationGetRelid(rel));\n foreach_oid(reloid, all_oids)\n {\n thisrel = table_open(reloid, NoLock);\n if (reloid != RelationGetRelid(rel))\n CheckAlterTableIsSafe(thisrel);\n childattno = get_attnum(reloid, attrname);\n tuple = SearchSysCacheCopyAttNum(reloid, childattno);\n if (!HeapTupleIsValid(tuple))\n elog(ERROR, \"cache lookup failed for attribute %d of relation %s\",\n attnum, RelationGetRelationName(thisrel));\n attForm = (Form_pg_attribute) GETSTRUCT(tuple);\n if (!attForm->attnotnull)\n {\n Relation attr_rel;\n attr_rel = table_open(AttributeRelationId, RowExclusiveLock);\n attForm->attnotnull = true;\n CatalogTupleUpdate(attr_rel, &tuple->t_self, tuple);\n table_close(attr_rel, RowExclusiveLock);\n if (wqueue && !NotNullImpliedByRelConstraints(thisrel, attForm))\n {\n AlteredTableInfo *tab;\n tab = ATGetQueueEntry(wqueue, thisrel);\n tab->verify_new_notnull = true;\n }\n changed = true;\n }\n if (changed)\n CommandCounterIncrement();\n changed = false;\n table_close(thisrel, NoLock);\n }\n}\n\n\nWhat do you think of the above refactor?\n(I intentionally deleted empty new line)\n\n\n",
"msg_date": "Tue, 24 Sep 2024 15:03:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "static Oid\nStoreRelNotNull(Relation rel, const char *nnname, AttrNumber attnum,\n bool is_validated, bool is_local, int inhcount,\n bool is_no_inherit)\n{\n Oid constrOid;\n Assert(attnum > InvalidAttrNumber);\n constrOid =\n CreateConstraintEntry(nnname,\n RelationGetNamespace(rel),\n CONSTRAINT_NOTNULL,\n false,\n false,\n is_validated\n....\n}\nis is_validated always true, can we add an Assert on it?\n\n\nin AddRelationNotNullConstraints\nfor (int outerpos = 0; outerpos < list_length(old_notnulls); outerpos++)\n{\n}\nCookedConstraint struct already has \"int inhcount;\"\ncan we rely on that, rather than using add_inhcount?\nwe can also add an Assert: \"Assert(!cooked->is_no_inherit);\"\n\nI've put these points into a patch,\nplease check the attached.\n\n\n\n\n /*\n * Remember primary key index, if any. We do this only if the index\n * is valid; but if the table is partitioned, then we do it even if\n * it's invalid.\n *\n * The reason for returning invalid primary keys for foreign tables is\n * because of pg_dump of NOT NULL constraints, and the fact that PKs\n * remain marked invalid until the partitions' PKs are attached to it.\n * If we make rd_pkindex invalid, then the attnotnull flag is reset\n * after the PK is created, which causes the ALTER INDEX ATTACH\n * PARTITION to fail with 'column ... is not marked NOT NULL'. With\n * this, dropconstraint_internal() will believe that the columns must\n * not have attnotnull reset, so the PKs-on-partitions can be attached\n * correctly, until finally the PK-on-parent is marked valid.\n *\n * Also, this doesn't harm anything, because rd_pkindex is not a\n * \"real\" index anyway, but a RELKIND_PARTITIONED_INDEX.\n */\n if (index->indisprimary &&\n (index->indisvalid ||\n relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n {\n pkeyIndex = index->indexrelid;\n pkdeferrable = !index->indimmediate;\n }\nThe comment (especially paragraph \"The reason for returning invalid\nprimary keys\") is overwhelming.\nCan you also add some sql examples into the comments.\nI guess some sql examples, people can understand it more easily?",
"msg_date": "Tue, 24 Sep 2024 17:04:31 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-24, jian he wrote:\n\n> static Oid\n> StoreRelNotNull(Relation rel, const char *nnname, AttrNumber attnum,\n> bool is_validated, bool is_local, int inhcount,\n> bool is_no_inherit)\n> {\n> Oid constrOid;\n> Assert(attnum > InvalidAttrNumber);\n> constrOid =\n> CreateConstraintEntry(nnname,\n> RelationGetNamespace(rel),\n> CONSTRAINT_NOTNULL,\n> false,\n> false,\n> is_validated\n> ....\n> }\n> is is_validated always true, can we add an Assert on it?\n\nSure. FWIW the reason it's a parameter at all, is that the obvious next\npatch is to add support for NOT VALID constraints. I don't want to\nintroduce support for NOT VALID immediately with the first patch because\nI'm sure some wrinkles will appear; but a followup patch will surely\nfollow shortly.\n\n> in AddRelationNotNullConstraints\n> for (int outerpos = 0; outerpos < list_length(old_notnulls); outerpos++)\n> {\n> }\n> CookedConstraint struct already has \"int inhcount;\"\n> can we rely on that, rather than using add_inhcount?\n> we can also add an Assert: \"Assert(!cooked->is_no_inherit);\"\n\nI'm not sure that works, because if your parent has two parents, you\ndon't want to add two -- you still have only one immediate parent.\n\nI think the best way to check for correctness is to set up a scenario\nwhere you would have that cooked->inhcount=2 (using whatever CREATE\nTABLEs are necessary) and then see if ALTER TABLE NO INHERIT reach the\ncorrect count (0) when all [immediate] parents are detached. But\nanyway, keep in mind that inhcount keeps the number of _immediate_\nparents, not the number of ancestors.\n\n> /*\n> * Remember primary key index, if any. We do this only if the index\n> * is valid; but if the table is partitioned, then we do it even if\n> * it's invalid.\n> *\n> * The reason for returning invalid primary keys for foreign tables is\n> * because of pg_dump of NOT NULL constraints, and the fact that PKs\n> * remain marked invalid until the partitions' PKs are attached to it.\n> * If we make rd_pkindex invalid, then the attnotnull flag is reset\n> * after the PK is created, which causes the ALTER INDEX ATTACH\n> * PARTITION to fail with 'column ... is not marked NOT NULL'. With\n> * this, dropconstraint_internal() will believe that the columns must\n> * not have attnotnull reset, so the PKs-on-partitions can be attached\n> * correctly, until finally the PK-on-parent is marked valid.\n> *\n> * Also, this doesn't harm anything, because rd_pkindex is not a\n> * \"real\" index anyway, but a RELKIND_PARTITIONED_INDEX.\n> */\n> if (index->indisprimary &&\n> (index->indisvalid ||\n> relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE))\n> {\n> pkeyIndex = index->indexrelid;\n> pkdeferrable = !index->indimmediate;\n> }\n> The comment (especially paragraph \"The reason for returning invalid\n> primary keys\") is overwhelming.\n> Can you also add some sql examples into the comments.\n> I guess some sql examples, people can understand it more easily?\n\nOoh, thanks for catching this -- this comment is a leftover from\nprevious idea that you could have PKs without NOT NULL. I think it\nmostly needs to be removed, and maybe the whole \"if\" clause put back to\nits original form.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\n\n",
"msg_date": "Tue, 24 Sep 2024 11:43:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-24, jian he wrote:\n\n> static void\n> set_attnotnull(List **wqueue, Relation rel, AttrNumber attnum, bool recurse,\n> LOCKMODE lockmode)\n> {\n\n> What do you think of the above refactor?\n> (I intentionally deleted empty new line)\n\nLooks nicer ... but you know what? After spending some more time with\nit, I realized that one caller is dead code (in\nAttachPartitionEnsureIndexes) and another caller doesn't need to ask for\nrecursion, because it recurses itself (in ATAddCheckNNConstraint). So\nthat leaves us with a grand total of zero callers that need the\nrecursion here ... which means we can simplify it to the case that it\nonly examines a single relation and never recurses.\n\nSo I've stripped it down to its bare minimum:\n\n/*\n * Helper to set pg_attribute.attnotnull if it isn't set, and to tell phase 3\n * to verify it.\n *\n * When called to alter an existing table, 'wqueue' must be given so that we\n * can queue a check that existing tuples pass the constraint. When called\n * from table creation, 'wqueue' should be passed as NULL.\n */\nstatic void\nset_attnotnull(List **wqueue, Relation rel, AttrNumber attnum,\n\t\t\t LOCKMODE lockmode)\n{\n\tOid\t\t\treloid = RelationGetRelid(rel);\n\tHeapTuple\ttuple;\n\tForm_pg_attribute attForm;\n\n\tCheckAlterTableIsSafe(rel);\n\n\ttuple = SearchSysCacheCopyAttNum(reloid, attnum);\n\tif (!HeapTupleIsValid(tuple))\n\t\telog(ERROR, \"cache lookup failed for attribute %d of relation %u\",\n\t\t\t attnum, reloid);\n\tattForm = (Form_pg_attribute) GETSTRUCT(tuple);\n\tif (!attForm->attnotnull)\n\t{\n\t\tRelation\tattr_rel;\n\n\t\tattr_rel = table_open(AttributeRelationId, RowExclusiveLock);\n\n\t\tattForm->attnotnull = true;\n\t\tCatalogTupleUpdate(attr_rel, &tuple->t_self, tuple);\n\n\t\tif (wqueue && !NotNullImpliedByRelConstraints(rel, attForm))\n\t\t{\n\t\t\tAlteredTableInfo *tab;\n\n\t\t\ttab = ATGetQueueEntry(wqueue, rel);\n\t\t\ttab->verify_new_notnull = true;\n\t\t}\n\n\t\tCommandCounterIncrement();\n\n\t\ttable_close(attr_rel, RowExclusiveLock);\n\t}\n\n\theap_freetuple(tuple);\n}\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 24 Sep 2024 12:51:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-24, jian he wrote:\n\n> sql-createtable.html\n> SECTION: LIKE source_table [ like_option ... ]\n> INCLUDING CONSTRAINTS\n> CHECK constraints will be copied. No distinction is made between\n> column constraints and table constraints. Not-null constraints are\n> always copied to the new table.\n> \n> drop table if exists t, t_1,ssa;\n> create table t(a int, b int, not null a no inherit);\n> create table ssa (like t INCLUDING all);\n> \n> Here create table like won't include no inherit not-null constraint,\n> seems to conflict with the doc?\n\nHmm, actually I think this is a bug, because if you have CHECK\nconstraint with NO INHERIT, it will be copied:\n\ncreate table t (a int check (a > 0) no inherit);\ncreate table ssa (like t including constraints);\n\n55490 18devel 141626=# \\d+ ssa\n Tabla «public.ssa»\n Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión │ Almacenamiento>\n─────────┼─────────┼──────────────┼─────────┼─────────────┼───────────────>\n a │ integer │ │ │ │ plain >\nRestricciones CHECK:\n \"t_a_check\" CHECK (a > 0) NO INHERIT\nMétodo de acceso: heap\n\nIt seems that NOT NULL constraint should behave the same as CHECK\nconstraints in this regard, i.e., we should not heed NO INHERIT in this\ncase.\n\n\nI have made these changes and added some tests, and will be posting a v5\nshortly.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Tue, 24 Sep 2024 18:59:46 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-24, Alvaro Herrera wrote:\n\n> I have made these changes and added some tests, and will be posting a v5\n> shortly.\n\nI ran the coverage report and found a couple of ereports are not covered\nby any tests. I'm adding those. May add more tomorrow, after looking\nat the coverage report some more.\n\nI should give a try at running Andres' differential coverage report[1]\nat some point ...\n\n[1] https://postgr.es/m/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 24 Sep 2024 21:07:14 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "copy from src/test/regress/sql/index_including.sql\n-- Unique index and unique constraint\nCREATE TABLE tbl_include_unique1 (c1 int, c2 int, c3 int, c4 box);\nINSERT INTO tbl_include_unique1 SELECT x, 2*x, 3*x, box('4,4,4,4')\nFROM generate_series(1,10) AS x;\nCREATE UNIQUE INDEX tbl_include_unique1_idx_unique ON\ntbl_include_unique1 using btree (c1, c2) INCLUDE (c3, c4);\nALTER TABLE tbl_include_unique1 add UNIQUE USING INDEX\ntbl_include_unique1_idx_unique;\n\\d+ tbl_include_unique1\n\ntransformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt)\n /* Ensure these columns get a NOT NULL constraint */\n cxt->nnconstraints =\n lappend(cxt->nnconstraints,\n makeNotNullConstraint(makeString(attname)));\nthe above code can only apply when (constraint->contype ==\nCONSTR_UNIQUE ) is false.\nThe above sql example shows that (constraint->contype == CONSTR_UNIQUE\n) can be true.\n\n\n\ndrop table if exists idxpart, idxpart0 cascade;\ncreate table idxpart (a int) partition by range (a);\ncreate table idxpart0 (a int not null);\nalter table idxpart attach partition idxpart0 for values from (0) to (100);\nalter table idxpart alter column a set not null;\nalter table idxpart alter column a drop not null;\n\n\"alter table idxpart alter column a set not null;\"\nwill make idxpart0_a_not_null constraint islocal and inhertited,\nwhich is not OK?\nfor partition trees, only the top level/root can be local for not-null\nconstraint?\n\n\"alter table idxpart alter column a drop not null;\"\nshould cascade to idxpart0?\n\n\n\n <para>\n However, a column can have at most one explicit not-null constraint.\n </para>\nmaybe we can add a sentence:\n\"Adding not-null constraints on a column marked as not-null is a no-op.\"\nthen we can easily explain case like:\ncreate table t(a int primary key , b int, constraint nn not null a );\nthe final not-null constraint name is \"t_a_not_null1\"\n\n\n\n /*\n * Run through the constraints that need to generate an index, and do so.\n *\n * For PRIMARY KEY, in addition we set each column's attnotnull flag true.\n * We do not create a separate not-null constraint, as that would be\n * redundant: the PRIMARY KEY constraint itself fulfills that role. Other\n * constraint types don't need any not-null markings.\n */\nthe above comments in transformIndexConstraints is wrong\nand not necessary?\n\"create table t(a int primary key)\"\nwe create a primary key and also do create separate a not-null\nconstraint for \"t\"\n\n\n\n /*\n * column is defined in the new table. For PRIMARY KEY, we\n * can apply the not-null constraint cheaply here. Note that\n * this isn't effective in ALTER TABLE, unless the column is\n * being added in the same command.\n */\nin transformIndexConstraint, i am not sure the meaning of the third\nsentence in above comments\n\n\n\ni see no error message like\nERROR: NOT NULL constraints cannot be marked NOT VALID\nERROR: not-null constraints for domains cannot be marked NO INHERIT\nin regress tests. we can add some in src/test/regress/sql/domain.sql\nlike:\n\ncreate domain d1 as text not null no inherit;\ncreate domain d1 as text constraint nn not null no inherit;\ncreate domain d1 as text constraint nn not null;\nALTER DOMAIN d1 ADD constraint nn not null NOT VALID;\ndrop domain d1;\n\n\n",
"msg_date": "Wed, 25 Sep 2024 16:31:38 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "in ATExecSetNotNull\n /*\n * If we find an appropriate constraint, we're almost done, but just\n * need to change some properties on it: if we're recursing, increment\n * coninhcount; if not, set conislocal if not already set.\n */\n if (recursing)\n {\n conForm->coninhcount++;\n changed = true;\n }\n else if (!conForm->conislocal)\n {\n conForm->conislocal = true;\n changed = true;\n elog(INFO, \"constraint islocal attribute changed\");\n }\n if (recursing && !conForm->conislocal)\n elog(INFO, \"should not happenX\");\n\n\n\"should not happenX\" appeared in regression.diff, but not\n\"constraint islocal attribute changed\"\nDoes that mean the IF, ELSE IF logic is not right?\n\n\n\n\nin doc/src/sgml/ref/create_table.sgml\n[ NO INHERIT ]\ncan apply to\n<replaceable class=\"parameter\">table_constraint</replaceable>\nand\n<replaceable class=\"parameter\">column_constraint</replaceable>\nso we should change create_table.sgml\naccordingly?\n\n\n",
"msg_date": "Wed, 25 Sep 2024 20:10:23 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-25, jian he wrote:\n\n> copy from src/test/regress/sql/index_including.sql\n> -- Unique index and unique constraint\n> CREATE TABLE tbl_include_unique1 (c1 int, c2 int, c3 int, c4 box);\n> INSERT INTO tbl_include_unique1 SELECT x, 2*x, 3*x, box('4,4,4,4')\n> FROM generate_series(1,10) AS x;\n> CREATE UNIQUE INDEX tbl_include_unique1_idx_unique ON\n> tbl_include_unique1 using btree (c1, c2) INCLUDE (c3, c4);\n> ALTER TABLE tbl_include_unique1 add UNIQUE USING INDEX\n> tbl_include_unique1_idx_unique;\n> \\d+ tbl_include_unique1\n> \n> transformIndexConstraint(Constraint *constraint, CreateStmtContext *cxt)\n> /* Ensure these columns get a NOT NULL constraint */\n> cxt->nnconstraints =\n> lappend(cxt->nnconstraints,\n> makeNotNullConstraint(makeString(attname)));\n> the above code can only apply when (constraint->contype ==\n> CONSTR_UNIQUE ) is false.\n> The above sql example shows that (constraint->contype == CONSTR_UNIQUE\n> ) can be true.\n\nDoh, yeah. Fixed and added a test for this.\n\n> drop table if exists idxpart, idxpart0 cascade;\n> create table idxpart (a int) partition by range (a);\n> create table idxpart0 (a int not null);\n> alter table idxpart attach partition idxpart0 for values from (0) to (100);\n> alter table idxpart alter column a set not null;\n> alter table idxpart alter column a drop not null;\n> \n> \"alter table idxpart alter column a set not null;\"\n> will make idxpart0_a_not_null constraint islocal and inhertited,\n> which is not OK?\n> for partition trees, only the top level/root can be local for not-null\n> constraint?\n> \n> \"alter table idxpart alter column a drop not null;\"\n> should cascade to idxpart0?\n\nHmm, I think this behaves OK. It's valid to have a child with a\nconstraint that the parent doesn't have. And then if the parent\nacquires one and passes it down to the children, then deleting it from\nthe parent should not leave the child unprotected. This is the whole\nreason we have the \"inhcount/islocal\" system, after all.\n\nOne small glitch here is that detaching a partition (or removing\ninheritance) does not remove the constraint, even if islocal=false and\ninhcount reaches 0. Instead, we turn islocal=true, so that the\nconstraint continues to exist. This is a bit weird, but the intent is\nto preserve properties and give the user an explicit choice; they can\nstill drop the constraint after detaching. 
Also, columns also work that\nway:\n\ncreate table parent (a int);\ncreate table child () inherits (parent);\nselect attrelid::regclass, attname, attislocal, attinhcount from pg_attribute where attname = 'a';\n attrelid │ attname │ attislocal │ attinhcount \n──────────┼─────────┼────────────┼─────────────\n parent │ a │ t │ 0\n child │ a │ f │ 1\n\nalter table child no inherit parent;\n\nselect attrelid::regclass, attname, attislocal, attinhcount from pg_attribute where attname = 'a';\n attrelid │ attname │ attislocal │ attinhcount \n──────────┼─────────┼────────────┼─────────────\n parent │ a │ t │ 0\n child │ a │ t │ 0\n\nHere the column on child, which didn't have a local definition, becomes\na local column during NO INHERIT.\n\n\n> <para>\n> However, a column can have at most one explicit not-null constraint.\n> </para>\n> maybe we can add a sentence:\n> \"Adding not-null constraints on a column marked as not-null is a no-op.\"\n> then we can easily explain case like:\n> create table t(a int primary key , b int, constraint nn not null a );\n> the final not-null constraint name is \"t_a_not_null1\"\n\nYeah, I've been thinking about this in connection with the restriction I\njust added to forbid two NOT NULLs with differing NO INHERIT flags: we\nneed to preserve a constraint name if it's specified, or raise an error\nif two different names are specified. This requires a change in\nAddRelationNotNullConstraints() to propagate a name specified later in\nthe constraint list. This made me realize that\ntransformColumnDefinition() also has a related problem, in that it\nignores a subsequent constraint if multiple ones are defined on the same\ncolumn, such as in\n create table notnull_tbl2 (a int primary key generated by default as\n identity constraint foo not null constraint foo not null no inherit);\nhere, the constraint lacks the NO INHERIT flag even though it was\nspecifically requested the second time.\n\n\n> /*\n> * Run through the constraints that need to generate an index, and do so.\n> *\n> * For PRIMARY KEY, in addition we set each column's attnotnull flag true.\n> * We do not create a separate not-null constraint, as that would be\n> * redundant: the PRIMARY KEY constraint itself fulfills that role. Other\n> * constraint types don't need any not-null markings.\n> */\n> the above comments in transformIndexConstraints is wrong\n> and not necessary?\n> \"create table t(a int primary key)\"\n> we create a primary key and also do create separate a not-null\n> constraint for \"t\"\n\nI'm going to replace it with \"For PRIMARY KEY, we queue not-null\nconstraints for each column.\"\n\n> /*\n> * column is defined in the new table. For PRIMARY KEY, we\n> * can apply the not-null constraint cheaply here. 
Note that\n> * this isn't effective in ALTER TABLE, unless the column is\n> * being added in the same command.\n> */\n> in transformIndexConstraint, i am not sure the meaning of the third\n> sentence in above comments\n\nYeah, this is mostly a preexisting comment (though it was originally\ntalking about tables OF TYPE, which is a completely different thing):\n\ncreate type atype as (a int, b text);\ncreate table atable of atype (not null a no inherit);\n\\d+ atable \n Tabla «public.atable»\n Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión │ Almacenamiento │ Compresió>\n─────────┼─────────┼──────────────┼──────────┼─────────────┼────────────────┼──────────>\n a │ integer │ │ not null │ │ plain │ >\n b │ text │ │ │ │ extended │ >\nNot-null constraints:\n \"atable_a_not_null\" NOT NULL \"a\" NO INHERIT\nTabla tipada de tipo: atype\n\n\nAnyway, what this comment means is that if the ALTER TABLE is doing ADD\nCONSTRAINT on columns that already exist on the table (as opposed to\ndoing it on columns that the same ALTER TABLE command is doing ADD\nCOLUMN for), then \"this isn't effective\" (ie. it doesn't do anything).\nIn reality, this comment is now wrong, because during ALTER TABLE the\nNOT NULL constraints are added by ATPrepAddPrimaryKey, which occurs\nbefore this code runs, so the column->is_not_null clause is always true\nand this block is not executed. This code is only used during CREATE\nTABLE. So the comment needs to be removed, or maybe done this way with\nan extra assertion:\n\n /*\n+ * column is defined in the new table. For CREATE TABLE with\n+ * a PRIMARY KEY, we can apply the not-null constraint cheaply\n+ * here. Note that ALTER TABLE never needs this, because\n+ * those constraints have already been added by\n+ * ATPrepAddPrimaryKey.\n */\n if (constraint->contype == CONSTR_PRIMARY &&\n !column->is_not_null)\n {\n+ Assert(!cxt->isalter); /* doesn't occur in ALTER TABLE */\n column->is_not_null = true;\n cxt->nnconstraints =\n lappend(cxt->nnconstraints,\n makeNotNullConstraint(makeString(key)));\n }\n\n\n> i see no error message like\n> ERROR: NOT NULL constraints cannot be marked NOT VALID\n> ERROR: not-null constraints for domains cannot be marked NO INHERIT\n> in regress tests. we can add some in src/test/regress/sql/domain.sql\n> like:\n> \n> create domain d1 as text not null no inherit;\n> create domain d1 as text constraint nn not null no inherit;\n> create domain d1 as text constraint nn not null;\n> ALTER DOMAIN d1 ADD constraint nn not null NOT VALID;\n> drop domain d1;\n\nYeah, I too noticed the lack of tests for not-valid not-null constraints\non domains a few days ago. While I was exploring that I noticed that\nthey have some NO INHERIT that seems to be doing nothing (as it should,\nbecause what would it actually mean?), so we should remove the gram.y\nbits that try to handle it. We could add these tests you suggest\nirrespective of this not-nulls patch in this thread.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 25 Sep 2024 22:14:53 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "Please check the attached minor doc changes.\nmake the create_foreign_table.sgml, alter_foreign_table.sgml\nnot-null description\nconsistent with normal tables.\n\nchange\ndoc/src/sgml/ref/create_table.sgml\nParameters section\nfrom\n<term><literal>NOT NULL </literal></term>\nto\n<term><literal>NOT NULL [ NO INHERIT ] </literal></term>.\n\n\n\nin doc/src/sgml/ref/alter_table.sgml\n Adding a constraint recurses only for <literal>CHECK</literal> constraints\n that are not marked <literal>NO INHERIT</literal>.\n\nThis sentence needs to be rephrased to:\n Adding a constraint recurses for <literal>CHECK</literal> and\n<literal>NOT NULL </literal> constraints\n that are not marked <literal>NO INHERIT</literal>.",
"msg_date": "Thu, 26 Sep 2024 15:41:05 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "+-- a PK in parent must have a not-null in child that it can mark inherited\n+create table inh_parent (a int primary key);\n+create table inh_child (a int primary key);\n+alter table inh_child inherit inh_parent; -- nope\n+alter table inh_child alter a set not null;\n+alter table inh_child inherit inh_parent; -- now it works\n+ERROR: relation \"inh_parent\" would be inherited from more than once\nin src/test/regress/sql/inherit.sql, the comments at the end of the\ncommand, seem to conflict with the output?\n\n-------------------------------------------------------------------------------\n\nALTER TABLE ALTER COLUMN SET NOT NULL\nimplicitly means\nALTER TABLE ALTER COLUMN SET NOT NULL NO INHERIT.\n\nSo in ATExecSetNotNull\n if (conForm->connoinherit && recurse)\n ereport(ERROR,\n errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot change NO INHERIT status of NOT\nNULL constraint \\\"%s\\\" on relation \\\"%s\\\"\",\n NameStr(conForm->conname),\n RelationGetRelationName(rel)));\nshould be\n if (conForm->connoinherit)\n ereport(ERROR,\n errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot change NO INHERIT status of NOT\nNULL constraint \\\"%s\\\" on relation \\\"%s\\\"\",\n NameStr(conForm->conname),\n RelationGetRelationName(rel)));\n\nthen we can avoid the weird case like below:\n\ndrop table if exists pp1;\ncreate table pp1 (f1 int not null no inherit);\nALTER TABLE pp1 ALTER f1 SET NOT NULL;\nALTER TABLE ONLY pp1 ALTER f1 SET NOT NULL;\n\n-------------------------------------------------------------------------------\n\n+ else if (rel->rd_rel->relhassubclass &&\n+ find_inheritance_children(RelationGetRelid(rel), NoLock) != NIL)\n+ {\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n+ errmsg(\"not-null constraint on column \\\"%s\\\" must be removed in\nchild tables too\",\n+ colName),\n+ errhint(\"Do not specify the ONLY keyword.\"));\n+ }\nthis part in ATExecDropNotNull is not necessary?\n\nper alter_table.sql\n<<<<<<---------->>>>>>\n-- make sure we can drop a constraint on the parent but it remains on the child\nCREATE TABLE test_drop_constr_parent (c text CHECK (c IS NOT NULL));\nCREATE TABLE test_drop_constr_child () INHERITS (test_drop_constr_parent);\nALTER TABLE ONLY test_drop_constr_parent DROP CONSTRAINT\n\"test_drop_constr_parent_c_check\";\n<<<<<<---------->>>>>>\nby the same way, we can drop a not-null constraint ONLY on the parent,\nbut it remains on the child.\nif we not remove the above part then\nALTER TABLE ONLY DROP CONSTRAINT\nwill behave differently from\nALTER TABLE ONLY ALTER COLUMN DROP NOT NULL.\n\nexample:\ndrop table pp1,cc1, cc2;\ncreate table pp1 (f1 int not null);\ncreate table cc1 (f2 text, f3 int) inherits (pp1);\ncreate table cc2(f4 float) inherits(pp1,cc1);\n\nalter table only pp1 drop constraint pp1_f1_not_null; --works.\nalter table only pp1 alter column f1 drop not null; --- error, should also work.\n-------------------------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 26 Sep 2024 20:19:46 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-26, jian he wrote:\n\n> +-- a PK in parent must have a not-null in child that it can mark inherited\n> +create table inh_parent (a int primary key);\n> +create table inh_child (a int primary key);\n> +alter table inh_child inherit inh_parent; -- nope\n> +alter table inh_child alter a set not null;\n> +alter table inh_child inherit inh_parent; -- now it works\n> +ERROR: relation \"inh_parent\" would be inherited from more than once\n> in src/test/regress/sql/inherit.sql, the comments at the end of the\n> command, seem to conflict with the output?\n\nOutdated, useless -- removed.\n\n> -------------------------------------------------------------------------------\n> \n> ALTER TABLE ALTER COLUMN SET NOT NULL\n> implicitly means\n> ALTER TABLE ALTER COLUMN SET NOT NULL NO INHERIT.\n> \n> So in ATExecSetNotNull\n> if (conForm->connoinherit && recurse)\n> ereport(ERROR,\n> errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"cannot change NO INHERIT status of NOT\n> NULL constraint \\\"%s\\\" on relation \\\"%s\\\"\",\n> NameStr(conForm->conname),\n> RelationGetRelationName(rel)));\n> should be\n> if (conForm->connoinherit)\n> ereport(ERROR,\n> errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"cannot change NO INHERIT status of NOT\n> NULL constraint \\\"%s\\\" on relation \\\"%s\\\"\",\n> NameStr(conForm->conname),\n> RelationGetRelationName(rel)));\n> \n> then we can avoid the weird case like below:\n> \n> drop table if exists pp1;\n> create table pp1 (f1 int not null no inherit);\n> ALTER TABLE pp1 ALTER f1 SET NOT NULL;\n> ALTER TABLE ONLY pp1 ALTER f1 SET NOT NULL;\n\nHmm, I don't understand why you say SET NOT NULL implicitly means SET\nNOT NULL NO INHERIT. That's definitely not the intention. As I\nexplained earlier, the normal state is that a constraint is inheritable,\nso if you do SET NOT NULL you want that constraint to be INHERIT.\n\nAnyway, I don't see what you see as weird in the commands you list. 
To\nme it reacts like this:\n\n=# create table pp1 (f1 int not null no inherit);\nCREATE TABLE\n=# ALTER TABLE pp1 ALTER f1 SET NOT NULL;\nERROR: cannot change NO INHERIT status of NOT NULL constraint \"pp1_f1_not_null\" on relation \"pp1\"\n=# ALTER TABLE ONLY pp1 ALTER f1 SET NOT NULL;\nALTER TABLE\n=# \\d+ pp1\n Tabla «public.pp1»\n Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión │ Almacenamiento │ Compresión │ Estadísticas │ Descripción \n─────────┼─────────┼──────────────┼──────────┼─────────────┼────────────────┼────────────┼──────────────┼─────────────\n f1 │ integer │ │ not null │ │ plain │ │ │ \nNot-null constraints:\n \"pp1_f1_not_null\" NOT NULL \"f1\" NO INHERIT\nMétodo de acceso: heap\n\nwhich seems to be exactly what we want.\n\n\n> -------------------------------------------------------------------------------\n> \n> + else if (rel->rd_rel->relhassubclass &&\n> + find_inheritance_children(RelationGetRelid(rel), NoLock) != NIL)\n> + {\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> + errmsg(\"not-null constraint on column \\\"%s\\\" must be removed in\n> child tables too\",\n> + colName),\n> + errhint(\"Do not specify the ONLY keyword.\"));\n> + }\n> this part in ATExecDropNotNull is not necessary?\n> \n> per alter_table.sql\n> <<<<<<---------->>>>>>\n> -- make sure we can drop a constraint on the parent but it remains on the child\n> CREATE TABLE test_drop_constr_parent (c text CHECK (c IS NOT NULL));\n> CREATE TABLE test_drop_constr_child () INHERITS (test_drop_constr_parent);\n> ALTER TABLE ONLY test_drop_constr_parent DROP CONSTRAINT\n> \"test_drop_constr_parent_c_check\";\n> <<<<<<---------->>>>>>\n> by the same way, we can drop a not-null constraint ONLY on the parent,\n> but it remains on the child.\n> if we not remove the above part then\n> ALTER TABLE ONLY DROP CONSTRAINT\n> will behave differently from\n> ALTER TABLE ONLY ALTER COLUMN DROP NOT NULL.\n> \n> example:\n> drop table pp1,cc1, cc2;\n> create table pp1 (f1 int not null);\n> create table cc1 (f2 text, f3 int) inherits (pp1);\n> create table cc2(f4 float) inherits(pp1,cc1);\n> \n> alter table only pp1 drop constraint pp1_f1_not_null; --works.\n> alter table only pp1 alter column f1 drop not null; --- error, should also work.\n> -------------------------------------------------------------------------------\n\nHmm. I'm not sure I like this behavior, but there is precedent in\nCHECK, and since DROP CONSTRAINT also already works that way, I suppose\nDROP NOT NULL should do that too. I'll get it changed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n",
"msg_date": "Thu, 26 Sep 2024 18:53:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
},
{
"msg_contents": "On 2024-Sep-25, jian he wrote:\n\n> in ATExecSetNotNull\n> /*\n> * If we find an appropriate constraint, we're almost done, but just\n> * need to change some properties on it: if we're recursing, increment\n> * coninhcount; if not, set conislocal if not already set.\n> */\n> if (recursing)\n> {\n> conForm->coninhcount++;\n> changed = true;\n> }\n> else if (!conForm->conislocal)\n> {\n> conForm->conislocal = true;\n> changed = true;\n> elog(INFO, \"constraint islocal attribute changed\");\n> }\n> if (recursing && !conForm->conislocal)\n> elog(INFO, \"should not happenX\");\n> \n> \n> \"should not happenX\" appeared in regression.diff, but not\n> \"constraint islocal attribute changed\"\n> Does that mean the IF, ELSE IF logic is not right?\n\nI don't see a problem here. It means recursing is true, therefore we're\ndown one level already and don't need to set conislocal. Modifying\nconinhcount is enough.\n\nI attach v6 of this patch, including the requisite removal of the\nATExecDropNotNull ereport(ERROR) that I mentioned in the other\nthread[1]. I think I have made fixes for all your comments, though I\nwould like to go back and verify all of them once again, as well as read\nit in full.\n\n[1] https://postgr.es/m/202409261752.nbvlawkxsttf@alvherre.pgsql\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Digital and video cameras have this adjustment and film cameras don't for the\nsame reason dogs and cats lick themselves: because they can.\" (Ken Rockwell)",
"msg_date": "Fri, 27 Sep 2024 15:07:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: not null constraints, again"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe nullness of casetest.value can be determined at the JIT compile\ntime. We can emit fewer codes by utilizing this property. The attached\npatch is trying to fix it.\n\nBest Regards,\nXing",
"msg_date": "Sat, 31 Aug 2024 16:04:40 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "JIT: The nullness of casetest.value can be determined at the JIT\n compile time."
},
{
"msg_contents": "On 8/31/24 10:04 AM, Xing Guo wrote:\n> The nullness of casetest.value can be determined at the JIT compile\n> time. We can emit fewer codes by utilizing this property. The attached\n> patch is trying to fix it.\n\nI have not reviewed the code yet but the idea seems good.\n\nBut I wonder if we shouldn't instead simplify the code a bit by \nspecializing these steps when generating them instead of doing the work \nruntime/while generating machine code. Yes, I doubt the performance \nbenefits matter but I personally think the code is cleaner before my \npatch than after it.\n\nLong term it would be nice to get rid off \ncaseValue_datum/domainValue_datum as mentioned by Andres[1] but that is \na bigger job so think that either your patch or my patch would make \nsense to apply before that.\n\n1. \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a7f107df2b700c859e4d9ad2ca66b07a465d6223\n\nAndreas",
"msg_date": "Tue, 3 Sep 2024 14:09:04 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: JIT: The nullness of casetest.value can be determined at the JIT\n compile time."
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 8:09 PM Andreas Karlsson <andreas@proxel.se> wrote:\n>\n> On 8/31/24 10:04 AM, Xing Guo wrote:\n> > The nullness of casetest.value can be determined at the JIT compile\n> > time. We can emit fewer codes by utilizing this property. The attached\n> > patch is trying to fix it.\n>\n> I have not reviewed the code yet but the idea seems good.\n>\n> But I wonder if we shouldn't instead simplify the code a bit by\n> specializing these steps when generating them instead of doing the work\n> runtime/while generating machine code. Yes, I doubt the performance\n> benefits matter but I personally think the code is cleaner before my\n> patch than after it.\n\n+1 to the idea.\n\n> Long term it would be nice to get rid off\n> caseValue_datum/domainValue_datum as mentioned by Andres[1] but that is\n> a bigger job so think that either your patch or my patch would make\n> sense to apply before that.\n\nI think your patch makes more sense than mine! Thanks!\n\nBest Regards,\nXing\n\n\n",
"msg_date": "Tue, 3 Sep 2024 23:18:39 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: JIT: The nullness of casetest.value can be determined at the JIT\n compile time."
}
] |
[
{
"msg_contents": "Hi,\n\nIt's been quite a while since the last version of the AIO patchset that I have\nposted. Of course parts of the larger project have since gone upstream [1].\n\nA lot of time since the last versions was spent understanding the performance\ncharacteristics of using AIO with WAL and understanding some other odd\nperformance characteristics I didn't understand. I think I mostly understand\nthat now and what the design implications for an AIO subsystem are.\n\nThe prototype I had been working on unfortunately suffered from a few design\nissues that weren't trivial to fix.\n\nThe biggest was that each backend could essentially have hard references to\nunbounded numbers of \"AIO handles\" and that these references prevented these\nhandles from being reused. Because \"AIO handles\" have to live in shared memory\n(so other backends can wait on them, that IO workers can perform them, etc)\nthat's obviously an issue. There was always a way to just run out of AIO\nhandles. I went through quite a few iterations of a design for how to resolve\nthat - I think I finally got there.\n\nAnother significant issue was that when I wrote the AIO prototype,\nbufmgr.c/smgr.c/md.c only issued IOs in BLCKSZ increments, with the AIO\nsubsystem merging them into larger IOs. Thomas et al's work around streaming\nread make bufmgr.c issue larger IOs - which is good for performance. But it\nwas surprisingly hard to fit into my older design.\n\n\nIt took me much longer than I had hoped to address these issues in\nprototype. In the end I made progress by working on a rewriting the patchset\nfrom scratch (well, with a bit of copy & paste).\n\nThe main reason I had previously implemented WAL AIO etc was to know the\ndesign implications - but now that they're somewhat understood, I'm planning\nto keep the patchset much smaller, with the goal of making it upstreamable.\n\n\nWhile making v2 somewhat presentable I unfortunately found a few more design\nissues - they're now mostly resolved, I think. But I only resolved the last\none a few hours ago, who knows what a few nights of sleeping on it will\nbring. Unfortunately that prevented me from doing some of the polishing that I\nhad wanted to finish...\n\n\nBecause of the aforementioned move, I currently do not have access to my\nworkstation. 
I just have access to my laptop - which has enough thermal issues\nto make benchmarks not particularly reliable.\n\n\nSo here are just a few teaser numbers, on an PCIe v4 NVMe SSD, note however\nthat this is with the BAS_BULKREAD size increased, with the default 256kB, we\ncan only keep one IO in flight at a time (due to io_combine_limit building\nlarger IOs) - we'll need to do something better than this, but that's yet\nanother separate discussion.\n\n\nWorkload: pg_prewarm('pgbench_accounts') of a scale 5k database, which is\nbigger than memory:\n\n time\nmaster: 59.097\naio v2.0, worker: 11.211\naio v2.0, uring *: 19.991\naio v2.0, direct, worker: 09.617\naio v2.0, direct, uring *: 09.802\n\nWorkload: SELECT sum(abalance) FROM pgbench_accounts;\n\n 0 workers 1 worker 2 workers 4 workers\nmaster: 65.753 33.246 21.095 12.918\naio v2.0, worker: 21.519 12.636 10.450 10.004\naio v2.0, uring*: 31.446 17.745 12.889 10.395\naio v2.0, uring** 23.497 13.824 10.881 10.589\naio v2.0, direct, worker: 22.377 11.989 09.915 09.772\naio v2.0, direct, uring*: 24.502 12.603 10.058 09.759\n\n* the reason io_uring is slower is that workers effectively parallelize\n *memcpy, at the cost of increased CPU usage\n** a simple heuristic to use IOSQE_ASYNC to force some parallelism of memcpys\n\n\n\nWorkload: checkpointing ~20GB of dirty data, mostly sequential:\n\n time\nmaster: 10.209\naio v2.0, worker: 05.391\naio v2.0, uring: 04.593\naio v2.0, direct, worker: 07.745\naio v2.0, direct, uring: 03.351\n\n\nTo solve the issue with an unbounded number of AIO references there are few\nchanges compared to the prior approach:\n\n1) Only one AIO handle can be \"handed out\" to a backend, without being\n defined. Previously the process of getting an AIO handle wasn't super\n lightweight, which made it appealing to cache AIO handles - which was one\n part of the problem for running out of AIO handles.\n\n2) Nothing in a backend can force a \"defined\" AIO handle (i.e. one that is a\n valid operation) to stay around, it's always possible to execute the AIO\n operation and then reuse the handle. This provides a forward guarantee, by\n ensuring that completing AIOs can free up handles (previously they couldn't\n be reused until the backend local reference was released).\n\n3) Callbacks on AIOs are not allowed to error out anymore, unless it's ok to\n take the server down.\n\n4) Obviously some code needs to know the result of AIO operation and be able\n to error out. To allow for that the issuer of an AIO can provide a pointer\n to local memory that'll receive the result of an AIO, including details\n about what kind of errors occurred (possible errors are e.g. a read failing\n or a buffer's checksum validation failing).\n\n\nIn the next few days I'll add a bunch more documentation and comments as well\nas some better perf numbers (assuming my workstation survived...).\n\nBesides that, I am planning to introduce \"io_method=sync\", which will just\nexecute IO synchrously. Besides that being a good capability to have, it'll\nalso make it more sensible to split off worker mode support into its own\ncommit(s).\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] bulk relation extension, streaming read\n[2] personal health challenges, family health challenges and now moving from\n the US West Coast to the East Coast, ...",
"msg_date": "Sun, 1 Sep 2024 02:27:50 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "AIO v2.0"
},
{
"msg_contents": "On 01/09/2024 09:27, Andres Freund wrote:\n> The main reason I had previously implemented WAL AIO etc was to know the\n> design implications - but now that they're somewhat understood, I'm planning\n> to keep the patchset much smaller, with the goal of making it upstreamable.\n\n+1 on that approach.\n\n> To solve the issue with an unbounded number of AIO references there are few\n> changes compared to the prior approach:\n> \n> 1) Only one AIO handle can be \"handed out\" to a backend, without being\n> defined. Previously the process of getting an AIO handle wasn't super\n> lightweight, which made it appealing to cache AIO handles - which was one\n> part of the problem for running out of AIO handles.\n> \n> 2) Nothing in a backend can force a \"defined\" AIO handle (i.e. one that is a\n> valid operation) to stay around, it's always possible to execute the AIO\n> operation and then reuse the handle. This provides a forward guarantee, by\n> ensuring that completing AIOs can free up handles (previously they couldn't\n> be reused until the backend local reference was released).\n> \n> 3) Callbacks on AIOs are not allowed to error out anymore, unless it's ok to\n> take the server down.\n> \n> 4) Obviously some code needs to know the result of AIO operation and be able\n> to error out. To allow for that the issuer of an AIO can provide a pointer\n> to local memory that'll receive the result of an AIO, including details\n> about what kind of errors occurred (possible errors are e.g. a read failing\n> or a buffer's checksum validation failing).\n> \n> \n> In the next few days I'll add a bunch more documentation and comments as well\n> as some better perf numbers (assuming my workstation survived...).\n\nYeah, a high-level README would be nice. Without that, it's hard to \nfollow what \"handed out\" and \"defined\" above means for example.\n\nA few quick comments the patches:\n\nv2.0-0001-bufmgr-Return-early-in-ScheduleBufferTagForWrit.patch\n\n+1, this seems ready to be committed right away.\n\nv2.0-0002-Allow-lwlocks-to-be-unowned.patch\n\nWith LOCK_DEBUG, LWLock->owner will point to the backend that acquired \nthe lock, but it doesn't own it anymore. That's reasonable, but maybe \nadd a boolean to the LWLock to mark whether the lock is currently owned \nor not.\n\nThe LWLockReleaseOwnership() name is a bit confusing together with \nLWLockReleaseUnowned() and LWLockrelease(). From the names, you might \nthink that they all release the lock, but LWLockReleaseOwnership() just \ndisassociates it from the current process. Rename it to LWLockDisown() \nperhaps.\n\nv2.0-0003-Use-aux-process-resource-owner-in-walsender.patch\n\n+1. The old comment \"We don't currently need any ResourceOwner in a \nwalsender process\" was a bit misleading, because the walsender did \ncreate the short-lived \"base backup\" resource owner, so it's nice to get \nthat fixed.\n\nv2.0-0008-aio-Skeleton-IO-worker-infrastructure.patch\n\nMy refactoring around postmaster.c child process handling will conflict \nwith this [1]. Not in any fundamental way, but can I ask you to review \nthose patch, please? After those patches, AIO workers should also have \nPMChild slots (formerly known as Backend structs).\n\n[1] \nhttps://www.postgresql.org/message-id/a102f15f-eac4-4ff2-af02-f9ff209ec66f@iki.fi\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 2 Sep 2024 13:03:07 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "Hi,\n\nOn 2024-09-02 13:03:07 +0300, Heikki Linnakangas wrote:\n> On 01/09/2024 09:27, Andres Freund wrote:\n> > In the next few days I'll add a bunch more documentation and comments as well\n> > as some better perf numbers (assuming my workstation survived...).\n> \n> Yeah, a high-level README would be nice. Without that, it's hard to follow\n> what \"handed out\" and \"defined\" above means for example.\n\nYea - I had actually written a bunch of that before, but then redesigns just\nobsoleted most of it :(\n\nFWIW, \"handed out\" is an IO handle acquired by code, which doesn't yet have an\noperation associated with it. Once \"defined\" it actually could be - but isn't\nyet - executed.\n\n\n> A few quick comments the patches:\n> \n> v2.0-0001-bufmgr-Return-early-in-ScheduleBufferTagForWrit.patch\n> \n> +1, this seems ready to be committed right away.\n\nCool\n\n\n> v2.0-0002-Allow-lwlocks-to-be-unowned.patch\n> \n> With LOCK_DEBUG, LWLock->owner will point to the backend that acquired the\n> lock, but it doesn't own it anymore. That's reasonable, but maybe add a\n> boolean to the LWLock to mark whether the lock is currently owned or not.\n\nHm, not sure it's worth doing that...\n\n\n> The LWLockReleaseOwnership() name is a bit confusing together with\n> LWLockReleaseUnowned() and LWLockrelease(). From the names, you might think\n> that they all release the lock, but LWLockReleaseOwnership() just\n> disassociates it from the current process. Rename it to LWLockDisown()\n> perhaps.\n\nYea, that makes sense.\n\n\n> v2.0-0008-aio-Skeleton-IO-worker-infrastructure.patch\n> \n> My refactoring around postmaster.c child process handling will conflict with\n> this [1]. Not in any fundamental way, but can I ask you to review those\n> patch, please? After those patches, AIO workers should also have PMChild\n> slots (formerly known as Backend structs).\n\nI'll try to do that soonish!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Sep 2024 10:29:07 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "I hope there can be a high-level design document that includes a\ndescription, high-level architecture, and low-level design.\nThis way, others can also participate in reviewing the code.\nFor example, which paths were modified in the AIO module? Is it the\npath for writing WAL logs, or the path for flushing pages, etc.?\n\nAlso, I recommend keeping this patch as small as possible.\nFor example, the first step could be to introduce libaio only, without\nconsidering io_uring, as that would make it too complex.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 01:37:34 +0800",
"msg_from": "=?UTF-8?B?6ZmI5a6X5b+X?= <baotiao@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "On Sun, 1 Sept 2024 at 18:28, Andres Freund <andres@anarazel.de> wrote:\n> 0 workers 1 worker 2 workers 4 workers\n> master: 65.753 33.246 21.095 12.918\n> aio v2.0, worker: 21.519 12.636 10.450 10.004\n> aio v2.0, uring*: 31.446 17.745 12.889 10.395\n> aio v2.0, uring** 23.497 13.824 10.881 10.589\n> aio v2.0, direct, worker: 22.377 11.989 09.915 09.772\n> aio v2.0, direct, uring*: 24.502 12.603 10.058 09.759\n\nI took this for a test drive on an AMD 3990x machine with a 1TB\nSamsung 980 Pro SSD on PCIe 4. I only tried io_method = io_uring, but\nI did try with and without direct IO.\n\nThis machine has 64GB RAM and I was using ClickBench Q2 [1], which is\n\"SELECT SUM(AdvEngineID), COUNT(*), AVG(ResolutionWidth) FROM hits;\"\n(for some reason they use 0-based query IDs). This table is 64GBs\nwithout indexes.\n\nI'm seeing direct IO slower than buffered IO with smaller worker\ncounts. That's counter to what I would have expected as I'd have\nexpected the memcpys from the kernel space to be quite an overhead in\nthe buffered IO case. With larger worker counts the bottleneck is\ncertainly disk. The part that surprised me was that the bottleneck is\nreached more quickly with buffered IO. I was seeing iotop going up to\n5.54GB/s at higher worker counts.\n\ntimes in milliseconds\nworkers buffered direct cmp\n0 58880 102852 57%\n1 33622 53538 63%\n2 24573 40436 61%\n4 18557 27359 68%\n8 14844 17330 86%\n16 12491 12754 98%\n32 11802 11956 99%\n64 11895 11941 100%\n\nIs there some other information I can provide to help this make sense?\n(Or maybe it does already to you.)\n\nDavid\n\n[1] https://github.com/ClickHouse/ClickBench/blob/main/postgresql-tuned/queries.sql\n\n\n",
"msg_date": "Fri, 6 Sep 2024 13:42:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "Hi,\n\nAttached is the next version of the patchset. Changes:\n\n- added \"sync\" io method, the main benefit of that is that the main AIO commit\n doesn't need to include worker mode\n\n- split worker and io_uring methods into their own commits\n\n- added src/backend/storage/aio/README.md, explaining design constraints and\n the resulting design on a high level\n\n- renamed LWLockReleaseOwnership as suggested by Heikki\n\n- a bunch of small cleanups and improvements\n\nThere's plenty more to do, but I thought this would be a useful checkpoint.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 6 Sep 2024 15:38:16 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "Hi,\n\nOn 2024-09-05 01:37:34 +0800, 陈宗志 wrote:\n> I hope there can be a high-level design document that includes a\n> description, high-level architecture, and low-level design.\n> This way, others can also participate in reviewing the code.\n\nYep, that was already on my todo list. The version I just posted includes\nthat.\n\n\n> For example, which paths were modified in the AIO module?\n> Is it the path for writing WAL logs, or the path for flushing pages, etc.?\n\nI don't think it's good to document this in a design document - that's just\nbound to get out of date.\n\nFor now the patchset causes AIO to be used for\n\n1) all users of read_stream.h, e.g. sequential scans\n\n2) bgwriter / checkpointer, mainly to have way to exercise the write path. As\n mentioned in my email upthread, the code for that is in a somewhat rough\n shape as Thomas Munro is working on a more general abstraction for some of\n this.\n\nThe earlier patchset added a lot more AIO uses because I needed to know all\nthe design constraints. It e.g. added AIO use in WAL. While that allowed me to\nlearn a lot, it's not something that makes sense to continue working on for\nnow, as it requires a lot of work that's independent of AIO. Thus I am\nfocusing on the above users for now.\n\n\n> Also, I recommend keeping this patch as small as possible.\n\nYep. That's my goal (as mentioned upthread).\n\n\n> For example, the first step could be to introduce libaio only, without\n> considering io_uring, as that would make it too complex.\n\nCurrently the patchset doesn't contain libaio support and I am not planning to\nwork on using libaio. Nor do I think it makes sense for anybody else to do so\n- libaio doesn't work for buffered IO, making it imo not particularly useful\nfor us.\n\nThe io_uring specific code isn't particularly complex / large compared to the\nmain AIO infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Sep 2024 15:47:35 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "Hi Andres\n\nThanks for the AIO patch update. I gave it a try and ran into a FATAL\nin bgwriter when executing a benchmark.\n\n2024-09-12 01:38:00.851 PDT [2780939] PANIC: no more bbs\n2024-09-12 01:38:00.854 PDT [2780473] LOG: background writer process\n(PID 2780939) was terminated by signal 6: Aborted\n2024-09-12 01:38:00.854 PDT [2780473] LOG: terminating any other\nactive server processes\n\nI debugged a bit and found that BgBufferSync() is not capping the\nbatch size under io_bounce_buffers like BufferSync() for checkpoint.\nHere is a small patch to fix it.\n\nBest regards\nRobert\n\n\n\n\nOn Fri, Sep 6, 2024 at 12:47 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2024-09-05 01:37:34 +0800, 陈宗志 wrote:\n> > I hope there can be a high-level design document that includes a\n> > description, high-level architecture, and low-level design.\n> > This way, others can also participate in reviewing the code.\n>\n> Yep, that was already on my todo list. The version I just posted includes\n> that.\n>\n>\n> > For example, which paths were modified in the AIO module?\n> > Is it the path for writing WAL logs, or the path for flushing pages, etc.?\n>\n> I don't think it's good to document this in a design document - that's just\n> bound to get out of date.\n>\n> For now the patchset causes AIO to be used for\n>\n> 1) all users of read_stream.h, e.g. sequential scans\n>\n> 2) bgwriter / checkpointer, mainly to have way to exercise the write path. As\n> mentioned in my email upthread, the code for that is in a somewhat rough\n> shape as Thomas Munro is working on a more general abstraction for some of\n> this.\n>\n> The earlier patchset added a lot more AIO uses because I needed to know all\n> the design constraints. It e.g. added AIO use in WAL. While that allowed me to\n> learn a lot, it's not something that makes sense to continue working on for\n> now, as it requires a lot of work that's independent of AIO. Thus I am\n> focusing on the above users for now.\n>\n>\n> > Also, I recommend keeping this patch as small as possible.\n>\n> Yep. That's my goal (as mentioned upthread).\n>\n>\n> > For example, the first step could be to introduce libaio only, without\n> > considering io_uring, as that would make it too complex.\n>\n> Currently the patchset doesn't contain libaio support and I am not planning to\n> work on using libaio. Nor do I think it makes sense for anybody else to do so\n> - libaio doesn't work for buffered IO, making it imo not particularly useful\n> for us.\n>\n> The io_uring specific code isn't particularly complex / large compared to the\n> main AIO infrastructure.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>",
"msg_date": "Thu, 12 Sep 2024 14:55:49 -0700",
"msg_from": "Robert Pang <robertpang@google.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "On Fri, Sep 06, 2024 at 03:38:16PM -0400, Andres Freund wrote:\n> There's plenty more to do, but I thought this would be a useful checkpoint.\n\nI find patches 1-5 are Ready for Committer.\n\n> +typedef enum PgAioHandleState\n\nThis enum clarified a lot for me, so I wish I had read it before anything\nelse. I recommend referring to it in README.md. Would you also cover the\nvalid state transitions and which of them any backend can do vs. which are\nspecific to the defining backend?\n\n> +{\n> +\t/* not in use */\n> +\tAHS_IDLE = 0,\n> +\n> +\t/* returned by pgaio_io_get() */\n> +\tAHS_HANDED_OUT,\n> +\n> +\t/* pgaio_io_start_*() has been called, but IO hasn't been submitted yet */\n> +\tAHS_DEFINED,\n> +\n> +\t/* subjects prepare() callback has been called */\n> +\tAHS_PREPARED,\n> +\n> +\t/* IO is being executed */\n> +\tAHS_IN_FLIGHT,\n\nLet's align terms between functions and states those functions reach. For\nexample, I recommend calling this state AHS_SUBMITTED, because\npgaio_io_prepare_submit() is the function reaching this state.\n(Alternatively, use in_flight in the function name.)\n\n> +\n> +\t/* IO finished, but result has not yet been processed */\n> +\tAHS_REAPED,\n> +\n> +\t/* IO completed, shared completion has been called */\n> +\tAHS_COMPLETED_SHARED,\n> +\n> +\t/* IO completed, local completion has been called */\n> +\tAHS_COMPLETED_LOCAL,\n> +} PgAioHandleState;\n\n> +void\n> +pgaio_io_release_resowner(dlist_node *ioh_node, bool on_error)\n> +{\n> +\tPgAioHandle *ioh = dlist_container(PgAioHandle, resowner_node, ioh_node);\n> +\n> +\tAssert(ioh->resowner);\n> +\n> +\tResourceOwnerForgetAioHandle(ioh->resowner, &ioh->resowner_node);\n> +\tioh->resowner = NULL;\n> +\n> +\tswitch (ioh->state)\n> +\t{\n> +\t\tcase AHS_IDLE:\n> +\t\t\telog(ERROR, \"unexpected\");\n> +\t\t\tbreak;\n> +\t\tcase AHS_HANDED_OUT:\n> +\t\t\tAssert(ioh == my_aio->handed_out_io || my_aio->handed_out_io == NULL);\n> +\n> +\t\t\tif (ioh == my_aio->handed_out_io)\n> +\t\t\t{\n> +\t\t\t\tmy_aio->handed_out_io = NULL;\n> +\t\t\t\tif (!on_error)\n> +\t\t\t\t\telog(WARNING, \"leaked AIO handle\");\n> +\t\t\t}\n> +\n> +\t\t\tpgaio_io_reclaim(ioh);\n> +\t\t\tbreak;\n> +\t\tcase AHS_DEFINED:\n> +\t\tcase AHS_PREPARED:\n> +\t\t\t/* XXX: Should we warn about this when is_commit? */\n\nYes.\n\n> +\t\t\tpgaio_submit_staged();\n> +\t\t\tbreak;\n> +\t\tcase AHS_IN_FLIGHT:\n> +\t\tcase AHS_REAPED:\n> +\t\tcase AHS_COMPLETED_SHARED:\n> +\t\t\t/* this is expected to happen */\n> +\t\t\tbreak;\n> +\t\tcase AHS_COMPLETED_LOCAL:\n> +\t\t\t/* XXX: unclear if this ought to be possible? */\n> +\t\t\tpgaio_io_reclaim(ioh);\n> +\t\t\tbreak;\n> +\t}\n\n> +void\n> +pgaio_io_ref_wait(PgAioHandleRef *ior)\n> +{\n> +\tuint64\t\tref_generation;\n> +\tPgAioHandleState state;\n> +\tbool\t\tam_owner;\n> +\tPgAioHandle *ioh;\n> +\n> +\tioh = pgaio_io_from_ref(ior, &ref_generation);\n> +\n> +\tam_owner = ioh->owner_procno == MyProcNumber;\n> +\n> +\n> +\tif (pgaio_io_was_recycled(ioh, ref_generation, &state))\n> +\t\treturn;\n> +\n> +\tif (am_owner)\n> +\t{\n> +\t\tif (state == AHS_DEFINED || state == AHS_PREPARED)\n> +\t\t{\n> +\t\t\t/* XXX: Arguably this should be prevented by callers? */\n> +\t\t\tpgaio_submit_staged();\n\nAgreed for AHS_DEFINED, if not both. AHS_DEFINED here would suggest a past\nlongjmp out of pgaio_io_prepare() w/o a subxact rollback to cleanup. 
Even so,\nthe next point might remove the need here:\n\n> +void\n> +pgaio_io_prepare(PgAioHandle *ioh, PgAioOp op)\n> +{\n> +\tAssert(ioh->state == AHS_HANDED_OUT);\n> +\tAssert(pgaio_io_has_subject(ioh));\n> +\n> +\tioh->op = op;\n> +\tioh->state = AHS_DEFINED;\n> +\tioh->result = 0;\n> +\n> +\t/* allow a new IO to be staged */\n> +\tmy_aio->handed_out_io = NULL;\n> +\n> +\tpgaio_io_prepare_subject(ioh);\n> +\n> +\tioh->state = AHS_PREPARED;\n\nAs defense in depth, let's add a critical section from before assigning\nAHS_DEFINED to here. This code already needs to be safe for that (per\nREADME.md). When running outside a critical section, an ERROR in a subject\ncallback could leak the lwlock disowned in shared_buffer_prepare_common(). I\ndoubt there's a plausible way to reach that leak today, but future subject\ncallbacks could add risk over time.\n\n> +if test \"$with_liburing\" = yes; then\n> + PKG_CHECK_MODULES(LIBURING, liburing)\n> +fi\n\nI used the attached makefile patch to build w/ liburing.\n\n> +pgaio_uring_shmem_init(bool first_time)\n> +{\n> +\tuint32\t\tTotalProcs = MaxBackends + NUM_AUXILIARY_PROCS - MAX_IO_WORKERS;\n> +\tbool\t\tfound;\n> +\n> +\taio_uring_contexts = (PgAioUringContext *)\n> +\t\tShmemInitStruct(\"AioUring\", pgaio_uring_shmem_size(), &found);\n> +\n> +\tif (found)\n> +\t\treturn;\n> +\n> +\tfor (int contextno = 0; contextno < TotalProcs; contextno++)\n> +\t{\n> +\t\tPgAioUringContext *context = &aio_uring_contexts[contextno];\n> +\t\tint\t\t\tret;\n> +\n> +\t\t/*\n> +\t\t * XXX: Probably worth sharing the WQ between the different rings,\n> +\t\t * when supported by the kernel. Could also cause additional\n> +\t\t * contention, I guess?\n> +\t\t */\n> +#if 0\n> +\t\tif (!AcquireExternalFD())\n> +\t\t\telog(ERROR, \"No external FD available\");\n> +#endif\n> +\t\tret = io_uring_queue_init(io_max_concurrency, &context->io_uring_ring, 0);\n\nWith EXEC_BACKEND, \"make check PG_TEST_INITDB_EXTRA_OPTS=-cio_method=io_uring\"\nfails early:\n\n2024-09-15 12:46:08.168 PDT postmaster[2069397] LOG: starting PostgreSQL 18devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 13.2.0-13) 13.2.0, 64-bit\n2024-09-15 12:46:08.168 PDT postmaster[2069397] LOG: listening on Unix socket \"/tmp/pg_regress-xgQOPH/.s.PGSQL.65312\"\n2024-09-15 12:46:08.203 PDT startup[2069423] LOG: database system was shut down at 2024-09-15 12:46:07 PDT\n2024-09-15 12:46:08.209 PDT client backend[2069425] [unknown] FATAL: the database system is starting up\n2024-09-15 12:46:08.222 PDT postmaster[2069397] LOG: database system is ready to accept connections\n2024-09-15 12:46:08.254 PDT autovacuum launcher[2069435] PANIC: failed: -9/Bad file descriptor\n2024-09-15 12:46:08.286 PDT client backend[2069444] [unknown] PANIC: failed: -95/Operation not supported\n2024-09-15 12:46:08.355 PDT client backend[2069455] [unknown] PANIC: unexpected: -95/Operation not supported: No such file or directory\n2024-09-15 12:46:08.370 PDT postmaster[2069397] LOG: received fast shutdown request\n\nI expect that's from io_uring_queue_init() stashing in shared memory a file\ndescriptor and mmap address, which aren't valid in EXEC_BACKEND children.\nReattaching descriptors and memory in each child may work, or one could just\nblock io_method=io_uring under EXEC_BACKEND.\n\n> +pgaio_uring_submit(uint16 num_staged_ios, PgAioHandle **staged_ios)\n> +{\n> +\tstruct io_uring *uring_instance = &my_shared_uring_context->io_uring_ring;\n> +\n> +\tAssert(num_staged_ios <= PGAIO_SUBMIT_BATCH_SIZE);\n> +\n> +\tfor (int i = 0; i < 
num_staged_ios; i++)\n> +\t{\n> +\t\tPgAioHandle *ioh = staged_ios[i];\n> +\t\tstruct io_uring_sqe *sqe;\n> +\n> +\t\tsqe = io_uring_get_sqe(uring_instance);\n> +\n> +\t\tpgaio_io_prepare_submit(ioh);\n> +\t\tpgaio_uring_sq_from_io(ioh, sqe);\n> +\t}\n> +\n> +\twhile (true)\n> +\t{\n> +\t\tint\t\t\tret;\n> +\n> +\t\tpgstat_report_wait_start(WAIT_EVENT_AIO_SUBMIT);\n> +\t\tret = io_uring_submit(uring_instance);\n> +\t\tpgstat_report_wait_end();\n> +\n> +\t\tif (ret == -EINTR)\n> +\t\t{\n> +\t\t\telog(DEBUG3, \"submit EINTR, nios: %d\", num_staged_ios);\n> +\t\t\tcontinue;\n> +\t\t}\n\nSince io_uring_submit() is a wrapper around io_uring_enter(), this should also\nretry on EAGAIN. \"man io_uring_enter\" has:\n\n EAGAIN The kernel was unable to allocate memory for the request, or\n otherwise ran out of resources to handle it. The application should wait\n for some completions and try again.\n\n> +FileStartWriteV(struct PgAioHandle *ioh, File file,\n> +\t\t\t\tint iovcnt, off_t offset,\n> +\t\t\t\tuint32 wait_event_info)\n> +{\n> +\tint\t\t\treturnCode;\n> +\tVfd\t\t *vfdP;\n> +\n> +\tAssert(FileIsValid(file));\n> +\n> +\tDO_DB(elog(LOG, \"FileStartWriteV: %d (%s) \" INT64_FORMAT \" %d\",\n> +\t\t\t file, VfdCache[file].fileName,\n> +\t\t\t (int64) offset,\n> +\t\t\t iovcnt));\n> +\n> +\treturnCode = FileAccess(file);\n> +\tif (returnCode < 0)\n> +\t\treturn returnCode;\n> +\n> +\tvfdP = &VfdCache[file];\n> +\n> +\t/* FIXME: think about / reimplement temp_file_limit */\n> +\n> +\tpgaio_io_prep_writev(ioh, vfdP->fd, iovcnt, offset);\n> +\n> +\treturn 0;\n> +}\n\nFileStartWriteV() gets to state AHS_PREPARED, so let's align with the state\nname by calling it FilePrepareWriteV (or FileWriteVPrepare or whatever).\n\n\nFor non-sync IO methods, I gather it's essential that a process other than the\nIO definer be scanning for incomplete IOs and completing them. Otherwise,\ndeadlocks like this would happen:\n\nbackend1 locks blk1 for non-IO reasons\nbackend2 locks blk2, starts AIO write\nbackend1 waits for lock on blk2 for non-IO reasons\nbackend2 waits for lock on blk1 for non-IO reasons\n\nIf that's right, in worker mode, the IO worker resolves that deadlock. What\nresolves it under io_uring? Another process that happens to do\npgaio_io_ref_wait() would dislodge things, but I didn't locate the code to\nmake that happen systematically. Could you add a mention of \"deadlock\" in the\ncomment at whichever code achieves that?\n\n\nI could share more-tactical observations about patches 6-20, but they're\nprobably things you'd change without those observations. Is there any\nspecific decision you'd like to settle before patch 6 exits WIP?\n\nThanks,\nnm",
"msg_date": "Mon, 16 Sep 2024 07:43:49 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
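A standalone liburing sketch (not code from the patchset) of the submit loop quoted above: io_uring_submit() reports failure as a negative errno, so EINTR can simply be retried, while the EAGAIN case Noah cites needs some completions reaped before retrying. It assumes at least one IO is already in flight when EAGAIN occurs, which is the very caveat debated in the follow-up messages.

    #include <errno.h>
    #include <liburing.h>

    /*
     * Submit all queued SQEs, retrying on EINTR and, on EAGAIN, reaping one
     * completion before trying again.
     */
    static int
    submit_with_retry(struct io_uring *ring)
    {
        for (;;)
        {
            int         ret = io_uring_submit(ring);

            if (ret >= 0)
                return ret;             /* number of SQEs submitted */

            if (ret == -EINTR)
                continue;               /* interrupted by a signal; retry */

            if (ret == -EAGAIN)
            {
                struct io_uring_cqe *cqe;

                /* kernel ran out of resources: wait for one completion */
                if (io_uring_wait_cqe(ring, &cqe) == 0)
                    io_uring_cqe_seen(ring, cqe);
                continue;
            }

            return ret;                 /* any other error goes to the caller */
        }
    }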
{
"msg_contents": "Hi,\n\nThanks for the review!\n\nOn 2024-09-16 07:43:49 -0700, Noah Misch wrote:\n> On Fri, Sep 06, 2024 at 03:38:16PM -0400, Andres Freund wrote:\n> > There's plenty more to do, but I thought this would be a useful checkpoint.\n>\n> I find patches 1-5 are Ready for Committer.\n\nCool!\n\n\n> > +typedef enum PgAioHandleState\n>\n> This enum clarified a lot for me, so I wish I had read it before anything\n> else. I recommend referring to it in README.md.\n\nMakes sense.\n\n\n> Would you also cover the valid state transitions and which of them any\n> backend can do vs. which are specific to the defining backend?\n\nYea, we should. I earlier had something, but because details were still\nchanging it was hard to keep up2date.\n\n\n> > +{\n> > +\t/* not in use */\n> > +\tAHS_IDLE = 0,\n> > +\n> > +\t/* returned by pgaio_io_get() */\n> > +\tAHS_HANDED_OUT,\n> > +\n> > +\t/* pgaio_io_start_*() has been called, but IO hasn't been submitted yet */\n> > +\tAHS_DEFINED,\n> > +\n> > +\t/* subjects prepare() callback has been called */\n> > +\tAHS_PREPARED,\n> > +\n> > +\t/* IO is being executed */\n> > +\tAHS_IN_FLIGHT,\n>\n> Let's align terms between functions and states those functions reach. For\n> example, I recommend calling this state AHS_SUBMITTED, because\n> pgaio_io_prepare_submit() is the function reaching this state.\n> (Alternatively, use in_flight in the function name.)\n\nThere used to be a separate SUBMITTED, but I removed it at some point as not\nnecessary anymore. Arguably it might be useful to re-introduce it so that\ne.g. with worker mode one can tell the difference between the IO being queued\nand the IO actually being processed.\n\n\n> > +void\n> > +pgaio_io_ref_wait(PgAioHandleRef *ior)\n> > +{\n> > +\tuint64\t\tref_generation;\n> > +\tPgAioHandleState state;\n> > +\tbool\t\tam_owner;\n> > +\tPgAioHandle *ioh;\n> > +\n> > +\tioh = pgaio_io_from_ref(ior, &ref_generation);\n> > +\n> > +\tam_owner = ioh->owner_procno == MyProcNumber;\n> > +\n> > +\n> > +\tif (pgaio_io_was_recycled(ioh, ref_generation, &state))\n> > +\t\treturn;\n> > +\n> > +\tif (am_owner)\n> > +\t{\n> > +\t\tif (state == AHS_DEFINED || state == AHS_PREPARED)\n> > +\t\t{\n> > +\t\t\t/* XXX: Arguably this should be prevented by callers? */\n> > +\t\t\tpgaio_submit_staged();\n>\n> Agreed for AHS_DEFINED, if not both. AHS_DEFINED here would suggest a past\n> longjmp out of pgaio_io_prepare() w/o a subxact rollback to cleanup.\n\nThat, or not having submitted the IO. One thing I've been thinking about as\nbeing potentially helpful infrastructure is to have something similar to a\ncritical section, except that it asserts that one is not allowed to block or\nforget submitting staged IOs.\n\n\n\n> > +void\n> > +pgaio_io_prepare(PgAioHandle *ioh, PgAioOp op)\n> > +{\n> > +\tAssert(ioh->state == AHS_HANDED_OUT);\n> > +\tAssert(pgaio_io_has_subject(ioh));\n> > +\n> > +\tioh->op = op;\n> > +\tioh->state = AHS_DEFINED;\n> > +\tioh->result = 0;\n> > +\n> > +\t/* allow a new IO to be staged */\n> > +\tmy_aio->handed_out_io = NULL;\n> > +\n> > +\tpgaio_io_prepare_subject(ioh);\n> > +\n> > +\tioh->state = AHS_PREPARED;\n>\n> As defense in depth, let's add a critical section from before assigning\n> AHS_DEFINED to here. This code already needs to be safe for that (per\n> README.md). When running outside a critical section, an ERROR in a subject\n> callback could leak the lwlock disowned in shared_buffer_prepare_common(). 
I\n> doubt there's a plausible way to reach that leak today, but future subject\n> callbacks could add risk over time.\n\nMakes sense.\n\n\n> > +if test \"$with_liburing\" = yes; then\n> > + PKG_CHECK_MODULES(LIBURING, liburing)\n> > +fi\n>\n> I used the attached makefile patch to build w/ liburing.\n\nThanks, will incorporate.\n\n\n> With EXEC_BACKEND, \"make check PG_TEST_INITDB_EXTRA_OPTS=-cio_method=io_uring\"\n> fails early:\n\nRight - that's to be expected.\n\n> 2024-09-15 12:46:08.168 PDT postmaster[2069397] LOG: starting PostgreSQL 18devel on x86_64-pc-linux-gnu, compiled by gcc (Debian 13.2.0-13) 13.2.0, 64-bit\n> 2024-09-15 12:46:08.168 PDT postmaster[2069397] LOG: listening on Unix socket \"/tmp/pg_regress-xgQOPH/.s.PGSQL.65312\"\n> 2024-09-15 12:46:08.203 PDT startup[2069423] LOG: database system was shut down at 2024-09-15 12:46:07 PDT\n> 2024-09-15 12:46:08.209 PDT client backend[2069425] [unknown] FATAL: the database system is starting up\n> 2024-09-15 12:46:08.222 PDT postmaster[2069397] LOG: database system is ready to accept connections\n> 2024-09-15 12:46:08.254 PDT autovacuum launcher[2069435] PANIC: failed: -9/Bad file descriptor\n> 2024-09-15 12:46:08.286 PDT client backend[2069444] [unknown] PANIC: failed: -95/Operation not supported\n> 2024-09-15 12:46:08.355 PDT client backend[2069455] [unknown] PANIC: unexpected: -95/Operation not supported: No such file or directory\n> 2024-09-15 12:46:08.370 PDT postmaster[2069397] LOG: received fast shutdown request\n>\n> I expect that's from io_uring_queue_init() stashing in shared memory a file\n> descriptor and mmap address, which aren't valid in EXEC_BACKEND children.\n> Reattaching descriptors and memory in each child may work, or one could just\n> block io_method=io_uring under EXEC_BACKEND.\n\nI think the latter option is saner - I don't think there's anything to be\ngained by supporting io_uring in this situation. It's not like anybody will\nuse it for real-world workloads where performance matters. Nor would it be\nuseful fo portability testing.\n\n\n> > +pgaio_uring_submit(uint16 num_staged_ios, PgAioHandle **staged_ios)\n> > +{\n\n> > +\t\tif (ret == -EINTR)\n> > +\t\t{\n> > +\t\t\telog(DEBUG3, \"submit EINTR, nios: %d\", num_staged_ios);\n> > +\t\t\tcontinue;\n> > +\t\t}\n>\n> Since io_uring_submit() is a wrapper around io_uring_enter(), this should also\n> retry on EAGAIN. \"man io_uring_enter\" has:\n>\n> EAGAIN The kernel was unable to allocate memory for the request, or\n> otherwise ran out of resources to handle it. The application should wait\n> for some completions and try again.\n\nHm. I'm not sure that makes sense. We only allow a limited number of IOs to be\nin flight for each uring instance. That's different to a use of uring to\ne.g. wait for incoming network data on thousands of sockets, where you could\nhave essentially unbounded amount of requests outstanding.\n\nWhat would we wait for? What if we were holding a critical lock in that\nmoment? Would it be safe to just block for some completions? What if there's\nactually no IO in progress?\n\n\n> > +FileStartWriteV(struct PgAioHandle *ioh, File file,\n> > +\t\t\t\tint iovcnt, off_t offset,\n> > +\t\t\t\tuint32 wait_event_info)\n> > +{\n> > ...\n>\n> FileStartWriteV() gets to state AHS_PREPARED, so let's align with the state\n> name by calling it FilePrepareWriteV (or FileWriteVPrepare or whatever).\n\nHm - that doesn't necessarily seem right to me. 
I don't think the caller\nshould assume that the IO will just be prepared and not already completed by\nthe time FileStartWriteV() returns - we might actually do the IO\nsynchronously.\n\n\n> For non-sync IO methods, I gather it's essential that a process other than the\n> IO definer be scanning for incomplete IOs and completing them.\n\nYep - it's something I've been fighting with / redesigning a *lot*. Earlier\nthe AIO subsystem could transparently retry IOs, but that ends up being a\nnightmare - or at least I couldn't find a way to not make it a\nnightmare. There are two main complexities:\n\n1) What if the IO is being completed in a critical section? We can't reopen\n the file in that situation. My initial fix for this was to defer retries,\n but that's problematic too:\n\n2) Acquiring an IO needs to be able to guarantee forward progress. Because\n there's a limited number of IOs that means we need to be able to complete\n IOs while acquiring an IO. So we can't just keep the IO handle around -\n which in turn means that we'd need to save the state for retrying\n somewhere. Which would require some pre-allocated memory to save that\n state.\n\nThus I think it's actually better if we delegate retries to the callsites. I\nwas thinking that for partial reads of shared buffers we ought to not set\nBM_IO_ERROR though...\n\n\n> Otherwise, deadlocks like this would happen:\n\n> backend1 locks blk1 for non-IO reasons\n> backend2 locks blk2, starts AIO write\n> backend1 waits for lock on blk2 for non-IO reasons\n> backend2 waits for lock on blk1 for non-IO reasons\n>\n> If that's right, in worker mode, the IO worker resolves that deadlock. What\n> resolves it under io_uring? Another process that happens to do\n> pgaio_io_ref_wait() would dislodge things, but I didn't locate the code to\n> make that happen systematically.\n\nYea, it's code that I haven't forward ported yet. I think basically\nLockBuffer[ForCleanup] ought to call pgaio_io_ref_wait() when it can't\nimmediately acquire the lock and if the buffer has IO going on.\n\n\n> I could share more-tactical observations about patches 6-20, but they're\n> probably things you'd change without those observations.\n\nAgreed.\n\n\n> Is there any specific decision you'd like to settle before patch 6 exits\n> WIP?\n\nPatch 6 specifically? That I really mainly kept separate for review - it\ndoesn't seem particularly interesting to commit it earlier than 7, or do you\nthink differently?\n\nIn case you mean 6+7 or 6 to ~11, I can think of the following:\n\n- I am worried about the need for bounce buffers for writes of checksummed\n buffers. That quickly ends up being a significant chunk of memory,\n particularly when using a small shared_buffers with a higher than default\n number of connection. I'm currently hacking up a prototype that'd prevent us\n from setting hint bits with just a share lock. 
I'm planning to start a\n separate thread about that.\n\n- The header split doesn't yet quite seem right yet\n\n- I'd like to implement retries in the later patches, to make sure that it\n doesn't have design implications\n\n- Worker mode needs to be able to automatically adjust the number of running\n workers, I think - otherwise it's going to be too hard to tune.\n\n- I think the PgAioHandles need to be slimmed down a bit - there's some design\n evolution visible that should not end up in the tree.\n\n- I'm not sure that I like name \"subject\" for the different things AIO is\n performed for\n\n- I am wondering if the need for pgaio_io_set_io_data_32() (to store the set\n of buffer ids that are affected by one IO) could be replaced by repurposing\n BufferDesc->freeNext or something along those lines. I don't like the amount\n of memory required for storing those arrays, even if it's not that much\n compared to needing space to store struct iovec[PG_IOV_MAX] for each AIO\n handle.\n\n- I'd like to extend the test module to actually test more cases, it's too\n hard to reach some paths, particularly without [a lot] of users yet. That's\n not strictly a dependency of the earlier patches - since the initial patches\n can't actually do much in the way of IO.\n\n- We shouldn't reserve AioHandles etc for io workers - but because different\n tpes of aux processes don't use a predetermined ProcNumber, that's not\n entirely trivial without adding more complexity. I've actually wondered\n whether IO workes should be their own \"top-level\" kind of process, rather\n than an aux process. But that seems quite costly.\n\n- Right now the io_uring mode has each backend's io_uring instance visible to\n each other process. That ends up using a fair number of FDs. That's OK from\n an efficiency perspective, but I think we'd need to add code to adjust the\n soft RLIMIT_NOFILE (it's set to 1024 on most distros because there are\n various programs that iterate over all possible FDs, causing significant\n slowdowns when the soft limit defaults to something high). I earlier had a\n limited number of io_uring instances, but that added a fair amount of\n overhead because then submitting IO would require a lock.\n\n That again doesn't have to be solved as part of the earlier patches but\n might have some minor design impact.\n\n\nThanks again,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Sep 2024 13:51:42 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "Hi,\n\nOn 2024-09-12 14:55:49 -0700, Robert Pang wrote:\n> Hi Andres\n> \n> Thanks for the AIO patch update. I gave it a try and ran into a FATAL\n> in bgwriter when executing a benchmark.\n> \n> 2024-09-12 01:38:00.851 PDT [2780939] PANIC: no more bbs\n> 2024-09-12 01:38:00.854 PDT [2780473] LOG: background writer process\n> (PID 2780939) was terminated by signal 6: Aborted\n> 2024-09-12 01:38:00.854 PDT [2780473] LOG: terminating any other\n> active server processes\n> \n> I debugged a bit and found that BgBufferSync() is not capping the\n> batch size under io_bounce_buffers like BufferSync() for checkpoint.\n> Here is a small patch to fix it.\n\nGood catch, thanks!\n\n\nI am hoping (as described in my email to Noah a few minutes ago) that we can\nget away from needing bounce buffers. They are a quite expensive solution to a\nproblem we made for ourselves...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Sep 2024 13:56:09 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "On Mon, Sep 16, 2024 at 01:51:42PM -0400, Andres Freund wrote:\n> On 2024-09-16 07:43:49 -0700, Noah Misch wrote:\n> > On Fri, Sep 06, 2024 at 03:38:16PM -0400, Andres Freund wrote:\n\n> > Reattaching descriptors and memory in each child may work, or one could just\n> > block io_method=io_uring under EXEC_BACKEND.\n> \n> I think the latter option is saner\n\nWorks for me.\n\n> > > +pgaio_uring_submit(uint16 num_staged_ios, PgAioHandle **staged_ios)\n> > > +{\n> \n> > > +\t\tif (ret == -EINTR)\n> > > +\t\t{\n> > > +\t\t\telog(DEBUG3, \"submit EINTR, nios: %d\", num_staged_ios);\n> > > +\t\t\tcontinue;\n> > > +\t\t}\n> >\n> > Since io_uring_submit() is a wrapper around io_uring_enter(), this should also\n> > retry on EAGAIN. \"man io_uring_enter\" has:\n> >\n> > EAGAIN The kernel was unable to allocate memory for the request, or\n> > otherwise ran out of resources to handle it. The application should wait\n> > for some completions and try again.\n> \n> Hm. I'm not sure that makes sense. We only allow a limited number of IOs to be\n> in flight for each uring instance. That's different to a use of uring to\n> e.g. wait for incoming network data on thousands of sockets, where you could\n> have essentially unbounded amount of requests outstanding.\n> \n> What would we wait for? What if we were holding a critical lock in that\n> moment? Would it be safe to just block for some completions? What if there's\n> actually no IO in progress?\n\nI'd try the following. First, scan for all IOs of all processes at\nAHS_DEFINED and later, advancing them to AHS_COMPLETED_SHARED. This might be\nunsafe today, but discovering why it's unsafe likely will inform design beyond\nEAGAIN returns. I don't specifically know of a way it's unsafe. Do just one\npass of that; there may be newer IOs in progress afterward. If submit still\ngets EAGAIN, sleep a bit and retry. Like we do in pgwin32_open_handle(), fail\nafter a fixed number of iterations. This isn't great if we hold a critical\nlock, but it beats the alternative of PANIC on the first EAGAIN.\n\n> > > +FileStartWriteV(struct PgAioHandle *ioh, File file,\n> > > +\t\t\t\tint iovcnt, off_t offset,\n> > > +\t\t\t\tuint32 wait_event_info)\n> > > +{\n> > > ...\n> >\n> > FileStartWriteV() gets to state AHS_PREPARED, so let's align with the state\n> > name by calling it FilePrepareWriteV (or FileWriteVPrepare or whatever).\n> \n> Hm - that doesn't necessarily seem right to me. I don't think the caller\n> should assume that the IO will just be prepared and not already completed by\n> the time FileStartWriteV() returns - we might actually do the IO\n> synchronously.\n\nYes. Even if it doesn't become synchronous IO, some other process may advance\nthe IO to AHS_COMPLETED_SHARED by the next wake-up of the process that defined\nthe IO. Still, I think this shouldn't use the term \"Start\" while no state\nname uses that term. What else could remove that mismatch?\n\n> > Is there any specific decision you'd like to settle before patch 6 exits\n> > WIP?\n> \n> Patch 6 specifically? That I really mainly kept separate for review - it\n\nNo. I'll rephrase as \"Is there any specific decision you'd like to settle\nbefore the next cohort of patches exits WIP?\"\n\n> doesn't seem particularly interesting to commit it earlier than 7, or do you\n> think differently?\n\nNo, I agree a lone commit of 6 isn't a win. Roughly, the eight patches\n6-9,12-15 could be a minimal attractive unit. 
I've not thought through that\ngrouping much.\n\n> In case you mean 6+7 or 6 to ~11, I can think of the following:\n> \n> - I am worried about the need for bounce buffers for writes of checksummed\n> buffers. That quickly ends up being a significant chunk of memory,\n> particularly when using a small shared_buffers with a higher than default\n> number of connection. I'm currently hacking up a prototype that'd prevent us\n> from setting hint bits with just a share lock. I'm planning to start a\n> separate thread about that.\n\nAioChooseBounceBuffers() limits usage to 256 blocks (2MB) per MaxBackends.\nDoing better is nice, but I don't consider this a blocker. I recommend\ndealing with the worry by reducing the limit initially (128 blocks?). Can\nalways raise it later.\n\n> - The header split doesn't yet quite seem right yet\n\nI won't have a strong opinion on that one. The aio.c/aio_io.c split did catch\nmy attention. I made a note to check it again once those files get header\ncomments.\n\n> - I'd like to implement retries in the later patches, to make sure that it\n> doesn't have design implications\n\nYes, that's a blocker to me.\n\n> - Worker mode needs to be able to automatically adjust the number of running\n> workers, I think - otherwise it's going to be too hard to tune.\n\nChanging that later wouldn't affect much else, so I'd not consider it a\nblocker. (The worst case is that we think the initial AIO release will be a\nloss for most users, so we wrap it in debug_ terminology like we did for\ndebug_io_direct. I'm not saying worker scaling will push AIO from one side of\nthat line to another, but that's why I'm fine with commits that omit\nself-contained optimizations.)\n\n> - I think the PgAioHandles need to be slimmed down a bit - there's some design\n> evolution visible that should not end up in the tree.\n\nOkay.\n\n> - I'm not sure that I like name \"subject\" for the different things AIO is\n> performed for\n\nHow about one of these six terms:\n\n- listener, observer [if you view smgr as an observer of IOs in the sense of https://en.wikipedia.org/wiki/Observer_pattern]\n- class, subclass, type, tag [if you view an SmgrIO as a subclass of an IO, in the object-oriented sense]\n\n> - I am wondering if the need for pgaio_io_set_io_data_32() (to store the set\n> of buffer ids that are affected by one IO) could be replaced by repurposing\n> BufferDesc->freeNext or something along those lines. I don't like the amount\n> of memory required for storing those arrays, even if it's not that much\n> compared to needing space to store struct iovec[PG_IOV_MAX] for each AIO\n> handle.\n\nHere too, changing that later wouldn't affect much else, so I'd not consider\nit a blocker.\n\n> - I'd like to extend the test module to actually test more cases, it's too\n> hard to reach some paths, particularly without [a lot] of users yet. That's\n> not strictly a dependency of the earlier patches - since the initial patches\n> can't actually do much in the way of IO.\n\nAgreed. Among the post-patch check-world coverage, which uncovered parts have\nthe most risk?\n\n> - We shouldn't reserve AioHandles etc for io workers - but because different\n> tpes of aux processes don't use a predetermined ProcNumber, that's not\n> entirely trivial without adding more complexity. I've actually wondered\n> whether IO workes should be their own \"top-level\" kind of process, rather\n> than an aux process. 
But that seems quite costly.\n\nHere too, changing that later wouldn't affect much else, so I'd not consider\nit a blocker. Of these ones I'm calling non-blockers, which would you most\nregret deferring?\n\n> - Right now the io_uring mode has each backend's io_uring instance visible to\n> each other process. That ends up using a fair number of FDs. That's OK from\n> an efficiency perspective, but I think we'd need to add code to adjust the\n> soft RLIMIT_NOFILE (it's set to 1024 on most distros because there are\n> various programs that iterate over all possible FDs, causing significant\n\nAgreed on raising the soft limit. Docs and/or errhint() likely will need to\nmention system configuration nonetheless, since some users will encounter\nRLIMIT_MEMLOCK or /proc/sys/kernel/io_uring_disabled.\n\n> slowdowns when the soft limit defaults to something high). I earlier had a\n> limited number of io_uring instances, but that added a fair amount of\n> overhead because then submitting IO would require a lock.\n> \n> That again doesn't have to be solved as part of the earlier patches but\n> might have some minor design impact.\n\nHow far do you see the design impact spreading on that one?\n\nThanks,\nnm\n\n\n",
"msg_date": "Tue, 17 Sep 2024 11:08:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
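On the RLIMIT_NOFILE point both of the last two messages touch on, here is a standalone POSIX sketch of raising the soft limit toward the hard limit at startup. The desired_fds figure would come from the number of backends and io_uring instances; nothing here is code from the patchset.

    #include <sys/resource.h>

    /*
     * Raise the soft open-file limit so per-backend io_uring instances don't
     * exhaust the common soft default of 1024.  Returns 0 on success
     * (possibly clamped to the hard limit), or -1 with errno set.
     */
    static int
    raise_nofile_soft_limit(rlim_t desired_fds)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;

        if (rl.rlim_cur >= desired_fds)
            return 0;                   /* already high enough */

        rl.rlim_cur = (desired_fds < rl.rlim_max) ? desired_fds : rl.rlim_max;

        return setrlimit(RLIMIT_NOFILE, &rl);
    }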
{
"msg_contents": "Hi,\n\nOn 2024-09-17 11:08:19 -0700, Noah Misch wrote:\n> > - I am worried about the need for bounce buffers for writes of checksummed\n> > buffers. That quickly ends up being a significant chunk of memory,\n> > particularly when using a small shared_buffers with a higher than default\n> > number of connection. I'm currently hacking up a prototype that'd prevent us\n> > from setting hint bits with just a share lock. I'm planning to start a\n> > separate thread about that.\n> \n> AioChooseBounceBuffers() limits usage to 256 blocks (2MB) per MaxBackends.\n> Doing better is nice, but I don't consider this a blocker. I recommend\n> dealing with the worry by reducing the limit initially (128 blocks?). Can\n> always raise it later.\n\nOn storage that has nontrivial latency, like just about all cloud storage,\neven 256 will be too low. Particularly for checkpointer.\n\nAssuming 1ms latency - which isn't the high end of cloud storage latency - 256\nblocks in flight limits you to <= 256MByte/s, even on storage that can have a\nlot more throughput. With 3ms, which isn't uncommon, it's 85MB/s.\n\nOf course this could be addressed by tuning, but it seems like something that\nshouldn't need to be tuned by the majority of folks running postgres.\n\n\nWe also discussed the topic at https://postgr.es/m/20240925020022.c5.nmisch%40google.com\n> ... neither BM_SETTING_HINTS nor keeping bounce buffers looks like a bad\n> decision. From what I've heard so far of the performance effects, if it were\n> me, I would keep the bounce buffers. I'd pursue BM_SETTING_HINTS and bounce\n> buffer removal as a distinct project after the main AIO capability. Bounce\n> buffers have an implementation. They aren't harming other design decisions.\n> The AIO project is big, so I'd want to err on the side of not designating\n> other projects as its prerequisites.\n\nGiven the issues that modifying pages while in flight causes, not just with PG\nlevel checksums, but also filesystem level checksum, I don't feel like it's a\nparticularly promising approach.\n\nHowever, I think this doesn't have to mean that the BM_SETTING_HINTS stuff has\nto be completed before we can move forward with AIO. If I split out the write\nportion from the read portion a bit further, the main AIO changes and the\nshared-buffer read user can be merged before there's a dependency on the hint\nbit stuff being done.\n\nDoes that seem reasonable?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Sep 2024 10:49:17 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "On Mon, 30 Sept 2024 at 16:49, Andres Freund <andres@anarazel.de> wrote:\n> On 2024-09-17 11:08:19 -0700, Noah Misch wrote:\n> > > - I am worried about the need for bounce buffers for writes of checksummed\n> > > buffers. That quickly ends up being a significant chunk of memory,\n> > > particularly when using a small shared_buffers with a higher than default\n> > > number of connection. I'm currently hacking up a prototype that'd prevent us\n> > > from setting hint bits with just a share lock. I'm planning to start a\n> > > separate thread about that.\n> >\n> > AioChooseBounceBuffers() limits usage to 256 blocks (2MB) per MaxBackends.\n> > Doing better is nice, but I don't consider this a blocker. I recommend\n> > dealing with the worry by reducing the limit initially (128 blocks?). Can\n> > always raise it later.\n>\n> On storage that has nontrivial latency, like just about all cloud storage,\n> even 256 will be too low. Particularly for checkpointer.\n>\n> Assuming 1ms latency - which isn't the high end of cloud storage latency - 256\n> blocks in flight limits you to <= 256MByte/s, even on storage that can have a\n> lot more throughput. With 3ms, which isn't uncommon, it's 85MB/s.\n\nFYI, I think you're off by a factor 8, i.e. that would be 2GB/sec and\n666MB/sec respectively, given a normal page size of 8kB and exactly\n1ms/3ms full round trip latency:\n\n1 page/1 ms * 8kB/page * 256 concurrency = 256 pages/ms * 8kB/page =\n2MiB/ms ~= 2GiB/sec.\nfor 3ms divide by 3 -> ~666MiB/sec.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 30 Sep 2024 17:55:37 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
},
{
"msg_contents": "On Mon, Sep 30, 2024 at 10:49:17AM -0400, Andres Freund wrote:\n> We also discussed the topic at https://postgr.es/m/20240925020022.c5.nmisch%40google.com\n> > ... neither BM_SETTING_HINTS nor keeping bounce buffers looks like a bad\n> > decision. From what I've heard so far of the performance effects, if it were\n> > me, I would keep the bounce buffers. I'd pursue BM_SETTING_HINTS and bounce\n> > buffer removal as a distinct project after the main AIO capability. Bounce\n> > buffers have an implementation. They aren't harming other design decisions.\n> > The AIO project is big, so I'd want to err on the side of not designating\n> > other projects as its prerequisites.\n> \n> Given the issues that modifying pages while in flight causes, not just with PG\n> level checksums, but also filesystem level checksum, I don't feel like it's a\n> particularly promising approach.\n> \n> However, I think this doesn't have to mean that the BM_SETTING_HINTS stuff has\n> to be completed before we can move forward with AIO. If I split out the write\n> portion from the read portion a bit further, the main AIO changes and the\n> shared-buffer read user can be merged before there's a dependency on the hint\n> bit stuff being done.\n> \n> Does that seem reasonable?\n\nYes.\n\n\n",
"msg_date": "Mon, 30 Sep 2024 12:29:53 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: AIO v2.0"
}
] |
[
{
"msg_contents": "adf97c156 added support to allow ExprStates to support hashing and\nadjusted Hash Join to make use of that. That allowed a speedup in hash\nvalue generation as it allowed JIT compilation of hash values. It also\nallowed more efficient tuple deforming as all required attributes are\ndeformed in one go rather than on demand when hashing each join key.\n\nThe attached does the same for GROUP BY and hashed SubPlans. The win\nfor the tuple deformation does not exist here, but there does seem to\nbe some gains still to be had from JIT compilation.\n\nUsing a scale=1 TPC-H lineitem table, I ran the attached script.\n\nThe increase is far from impressive, but likely worth migrating these\nover to use ExprState too.\n\nmaster:\n\nalter system set jit = 0;\nlatency average = 1509.116 ms\nlatency average = 1502.496 ms\nlatency average = 1507.560 ms\nalter system set jit = 1;\nlatency average = 1396.015 ms\nlatency average = 1392.138 ms\nlatency average = 1396.476 ms\nalter system set jit_optimize_above_cost = 0;\nlatency average = 1290.463 ms\nlatency average = 1293.364 ms\nlatency average = 1290.366 ms\nalter system set jit_inline_above_cost = 0;\nlatency average = 1294.540 ms\nlatency average = 1300.970 ms\nlatency average = 1302.181 ms\n\npatched:\n\nalter system set jit = 0;\nlatency average = 1500.183 ms\nlatency average = 1500.911 ms\nlatency average = 1504.150 ms (+0.31%)\nalter system set jit = 1;\nlatency average = 1367.427 ms\nlatency average = 1367.329 ms\nlatency average = 1366.473 ms (+2.03%)\nalter system set jit_optimize_above_cost = 0;\nlatency average = 1273.453 ms\nlatency average = 1265.348 ms\nlatency average = 1272.598 ms (+1.65%)\nalter system set jit_inline_above_cost = 0;\nlatency average = 1264.657 ms\nlatency average = 1272.661 ms\nlatency average = 1273.179 ms (+2.29%)\n\nDavid",
"msg_date": "Sun, 1 Sep 2024 23:49:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add ExprState hashing for GROUP BY and hashed SubPlans"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18598\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 17beta3\nOperating system: Ubuntu 22.04\nDescription: \n\nThe following query:\r\nSELECT JSON_OBJECTAGG(i: (i)::text FORMAT JSON WITH UNIQUE)\r\n FROM generate_series(1, 100000) i;\r\n\r\ntriggers an asan-detected error:\r\n==973230==ERROR: AddressSanitizer: heap-use-after-free on address\n0x7fde473f4428 at pc 0x558af80f20a6 bp 0x7ffe6b8e2df0 sp 0x7ffe6b8e2598\r\nREAD of size 7 at 0x7fde473f4428 thread T0\r\n #0 0x558af80f20a5 in __interceptor_strncmp.part.0\n(.../usr/local/pgsql/bin/postgres+0x32d40a5)\r\n #1 0x558af9ed5276 in json_unique_hash_match\n.../src/backend/utils/adt/json.c:922\r\n #2 0x558afa49c6ce in hash_search_with_hash_value\n.../src/backend/utils/hash/dynahash.c:1021\r\n #3 0x558afa49bfbc in hash_search\n.../src/backend/utils/hash/dynahash.c:960\r\n #4 0x558af9ed58b4 in json_unique_check_key\n.../src/backend/utils/adt/json.c:967\r\n #5 0x558af9ed6a71 in json_object_agg_transfn_worker\n.../src/backend/utils/adt/json.c:1116\r\n #6 0x558af9ed6fc5 in json_object_agg_unique_transfn\n.../src/backend/utils/adt/json.c:1163\r\n #7 0x558af8e3dcbe in ExecAggPlainTransByVal\n.../src/backend/executor/execExprInterp.c:5382\r\n...\r\n0x7fde473f4428 is located 506920 bytes inside of 524352-byte region\n[0x7fde47378800,0x7fde473f8840)\r\nfreed by thread T0 here:\r\n #0 0x558af8114038 in realloc\n(.../usr/local/pgsql/bin/postgres+0x32f6038)\r\n #1 0x558afa52c970 in AllocSetRealloc\n.../src/backend/utils/mmgr/aset.c:1226\r\n #2 0x558afa56c0e9 in repalloc .../src/backend/utils/mmgr/mcxt.c:1566\r\n #3 0x558afa66c94a in enlargeStringInfo .../src/common/stringinfo.c:349\r\n #4 0x558afa66be4a in appendBinaryStringInfo\n.../src/common/stringinfo.c:238\r\n #5 0x558afa66b612 in appendStringInfoString\n.../src/common/stringinfo.c:184\r\n #6 0x558af9ed66b9 in json_object_agg_transfn_worker\n.../src/backend/utils/adt/json.c:1102\r\n #7 0x558af9ed6fc5 in json_object_agg_unique_transfn\n.../src/backend/utils/adt/json.c:1163\r\n #8 0x558af8e3dcbe in ExecAggPlainTransByVal\n.../src/backend/executor/execExprInterp.c:5382\r\n...\r\npreviously allocated by thread T0 here:\r\n #0 0x558af8114038 in realloc\n(.../usr/local/pgsql/bin/postgres+0x32f6038)\r\n #1 0x558afa52c970 in AllocSetRealloc\n.../src/backend/utils/mmgr/aset.c:1226\r\n #2 0x558afa56c0e9 in repalloc .../src/backend/utils/mmgr/mcxt.c:1566\r\n #3 0x558afa66c94a in enlargeStringInfo .../src/common/stringinfo.c:349\r\n #4 0x558afa66be4a in appendBinaryStringInfo\n.../src/common/stringinfo.c:238\r\n #5 0x558afa66b612 in appendStringInfoString\n.../src/common/stringinfo.c:184\r\n #6 0x558af9ed0559 in datum_to_json_internal\n.../src/backend/utils/adt/json.c:279\r\n #7 0x558af9ed6ee3 in json_object_agg_transfn_worker\n.../src/backend/utils/adt/json.c:1132\r\n #8 0x558af9ed6fc5 in json_object_agg_unique_transfn\n.../src/backend/utils/adt/json.c:1163\r\n #9 0x558af8e3dcbe in ExecAggPlainTransByVal\n.../src/backend/executor/execExprInterp.c:5382\r\n...\r\n\r\nReproduced starting from 7081ac46a.",
"msg_date": "Sun, 01 Sep 2024 19:00:01 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "On 9/1/24 21:00, PG Bug reporting form wrote:\n> The following bug has been logged on the website:\n> \n> Bug reference: 18598\n> Logged by: Alexander Lakhin\n> Email address: exclusion@gmail.com\n> PostgreSQL version: 17beta3\n> Operating system: Ubuntu 22.04\n> Description: \n> \n> The following query:\n> SELECT JSON_OBJECTAGG(i: (i)::text FORMAT JSON WITH UNIQUE)\n> FROM generate_series(1, 100000) i;\n> \n> triggers an asan-detected error:\n> ==973230==ERROR: AddressSanitizer: heap-use-after-free on address\n> 0x7fde473f4428 at pc 0x558af80f20a6 bp 0x7ffe6b8e2df0 sp 0x7ffe6b8e2598\n> READ of size 7 at 0x7fde473f4428 thread T0\n> #0 0x558af80f20a5 in __interceptor_strncmp.part.0\n> (.../usr/local/pgsql/bin/postgres+0x32d40a5)\n> #1 0x558af9ed5276 in json_unique_hash_match\n> ...\n> \n> Reproduced starting from 7081ac46a.\n> \n\nFWIW I can reproduce this using valgrind, with the same stacks reported.\n\nThis feels very much like a classical memory context bug - pointing to\nmemory in a short-lived memory context. I see datum_to_json_internal()\nallocates the result in ExprContext, and that's bound to be reset pretty\noften. But I'm not too familiar with the JSON aggregate stuff enough to\npinpoint what it does wrong.\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 4 Sep 2024 09:21:10 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
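A minimal, self-contained illustration (not the json.c code itself) of the failure mode described above: a pointer saved into a StringInfo buffer goes stale once later appends make enlargeStringInfo() repalloc the buffer, which is exactly what the ASan/valgrind traces show.

    #include "postgres.h"
    #include "lib/stringinfo.h"

    static void
    dangling_stringinfo_pointer_demo(void)
    {
        StringInfoData buf;
        const char *key;

        initStringInfo(&buf);               /* initial 1024-byte allocation */
        appendStringInfoString(&buf, "key1");
        key = buf.data;                     /* points into buf's current buffer */

        /* grow the buffer well past its initial allocation ... */
        for (int i = 0; i < 1000; i++)
            appendStringInfoString(&buf, "xxxxxxxxxx");

        /* ... so buf.data has likely been repalloc'd and "key" now dangles */
        elog(LOG, "%s", key);               /* the use-after-free ASan reports */
    }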
{
"msg_contents": "On Wed, Sep 4, 2024 at 3:21 PM Tomas Vondra <tomas@vondra.me> wrote:\n>\n> On 9/1/24 21:00, PG Bug reporting form wrote:\n> > The following bug has been logged on the website:\n> >\n> > Bug reference: 18598\n> > Logged by: Alexander Lakhin\n> > Email address: exclusion@gmail.com\n> > PostgreSQL version: 17beta3\n> > Operating system: Ubuntu 22.04\n> > Description:\n> >\n> > The following query:\n> > SELECT JSON_OBJECTAGG(i: (i)::text FORMAT JSON WITH UNIQUE)\n> > FROM generate_series(1, 100000) i;\n> >\n> > triggers an asan-detected error:\n> > ==973230==ERROR: AddressSanitizer: heap-use-after-free on address\n> > 0x7fde473f4428 at pc 0x558af80f20a6 bp 0x7ffe6b8e2df0 sp 0x7ffe6b8e2598\n> > READ of size 7 at 0x7fde473f4428 thread T0\n> > #0 0x558af80f20a5 in __interceptor_strncmp.part.0\n> > (.../usr/local/pgsql/bin/postgres+0x32d40a5)\n> > #1 0x558af9ed5276 in json_unique_hash_match\n> > ...\n> >\n> > Reproduced starting from 7081ac46a.\n> >\n>\n> FWIW I can reproduce this using valgrind, with the same stacks reported.\n>\n> This feels very much like a classical memory context bug - pointing to\n> memory in a short-lived memory context. I see datum_to_json_internal()\n> allocates the result in ExprContext, and that's bound to be reset pretty\n> often. But I'm not too familiar with the JSON aggregate stuff enough to\n> pinpoint what it does wrong.\n>\n> regards\n>\n> --\n> Tomas Vondra\n>\n>\n\nISTM that the JsonUniqueHashEntry.key point to an address later got\ninvalidated by enlargeStringInfo, we can resolve this by explicitly\npstrdup the key in the same MemoryContext of JsonAggState, like:\n\n@@ -1009,6 +1009,7 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n Datum arg;\n bool skip;\n int key_offset;\n+ const char *key;\n\n if (!AggCheckCallContext(fcinfo, &aggcontext))\n {\n@@ -1111,7 +1112,9 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n\n if (unique_keys)\n {\n- const char *key = &out->data[key_offset];\n+ oldcontext = MemoryContextSwitchTo(aggcontext);\n+ key = pstrdup(&out->data[key_offset]);\n+ MemoryContextSwitchTo(oldcontext);\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 4 Sep 2024 17:55:29 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "On 9/4/24 11:55, Junwang Zhao wrote:\n> ...\n> \n> ISTM that the JsonUniqueHashEntry.key point to an address later got\n> invalidated by enlargeStringInfo, we can resolve this by explicitly\n> pstrdup the key in the same MemoryContext of JsonAggState, like:\n\nYes, this fixes the issue (at least per valgrind).\n\n> @@ -1009,6 +1009,7 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> Datum arg;\n> bool skip;\n> int key_offset;\n> + const char *key;\n> \n> if (!AggCheckCallContext(fcinfo, &aggcontext))\n> {\n> @@ -1111,7 +1112,9 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> \n> if (unique_keys)\n> {\n> - const char *key = &out->data[key_offset];\n> + oldcontext = MemoryContextSwitchTo(aggcontext);\n> + key = pstrdup(&out->data[key_offset]);\n> + MemoryContextSwitchTo(oldcontext);\n> \n\nI think you don't need the new key declaration (there's already a local\none), and you can simply do just\n\n const char *key = MemoryContextStrdup(aggcontext,\n &out->data[key_offset]);\n\nI wonder if the other json_unique_check_key() call might have a similar\nissue. I've not succeeded in constructing a broken query, but perhaps\nyou could give it a try too?\n\n\nThanks!\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 4 Sep 2024 13:54:56 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
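A sketch of the fix shape converged on just above, using the variable names from the quoted diff; the call into json_unique_check_key() and the duplicate-key error report follow the existing json.c code from memory and may not match the tree word for word.

    if (unique_keys)
    {
        /*
         * Copy the key out of the StringInfo: out->data can be moved by a
         * later repalloc, while the unique-key hash table keeps this pointer
         * for the rest of the aggregation, so it must live in aggcontext.
         */
        const char *key = MemoryContextStrdup(aggcontext,
                                              &out->data[key_offset]);

        if (!json_unique_check_key(&state->unique_check.check, key, 0))
            ereport(ERROR,
                    errcode(ERRCODE_DUPLICATE_JSON_OBJECT_KEY_VALUE),
                    errmsg("duplicate JSON object key value: %s", key));
    }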
{
"msg_contents": "On Wed, Sep 4, 2024 at 7:54 PM Tomas Vondra <tomas@vondra.me> wrote:\n>\n> On 9/4/24 11:55, Junwang Zhao wrote:\n> > ...\n> >\n> > ISTM that the JsonUniqueHashEntry.key point to an address later got\n> > invalidated by enlargeStringInfo, we can resolve this by explicitly\n> > pstrdup the key in the same MemoryContext of JsonAggState, like:\n>\n> Yes, this fixes the issue (at least per valgrind).\n>\n> > @@ -1009,6 +1009,7 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> > Datum arg;\n> > bool skip;\n> > int key_offset;\n> > + const char *key;\n> >\n> > if (!AggCheckCallContext(fcinfo, &aggcontext))\n> > {\n> > @@ -1111,7 +1112,9 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> >\n> > if (unique_keys)\n> > {\n> > - const char *key = &out->data[key_offset];\n> > + oldcontext = MemoryContextSwitchTo(aggcontext);\n> > + key = pstrdup(&out->data[key_offset]);\n> > + MemoryContextSwitchTo(oldcontext);\n> >\n>\n> I think you don't need the new key declaration (there's already a local\n> one), and you can simply do just\n>\n> const char *key = MemoryContextStrdup(aggcontext,\n> &out->data[key_offset]);\n>\n\nSure, I will file a patch later.\n\n> I wonder if the other json_unique_check_key() call might have a similar\n> issue. I've not succeeded in constructing a broken query, but perhaps\n> you could give it a try too?\n\nSure, I will give it a try, thanks for the comment.\n>\n>\n> Thanks!\n>\n> --\n> Tomas Vondra\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 4 Sep 2024 20:24:12 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "CC'd hackers list.\n\nOn Wed, Sep 4, 2024 at 7:54 PM Tomas Vondra <tomas@vondra.me> wrote:\n>\n> On 9/4/24 11:55, Junwang Zhao wrote:\n> > ...\n> >\n> > ISTM that the JsonUniqueHashEntry.key point to an address later got\n> > invalidated by enlargeStringInfo, we can resolve this by explicitly\n> > pstrdup the key in the same MemoryContext of JsonAggState, like:\n>\n> Yes, this fixes the issue (at least per valgrind).\n>\n> > @@ -1009,6 +1009,7 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> > Datum arg;\n> > bool skip;\n> > int key_offset;\n> > + const char *key;\n> >\n> > if (!AggCheckCallContext(fcinfo, &aggcontext))\n> > {\n> > @@ -1111,7 +1112,9 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> >\n> > if (unique_keys)\n> > {\n> > - const char *key = &out->data[key_offset];\n> > + oldcontext = MemoryContextSwitchTo(aggcontext);\n> > + key = pstrdup(&out->data[key_offset]);\n> > + MemoryContextSwitchTo(oldcontext);\n> >\n>\n> I think you don't need the new key declaration (there's already a local\n> one), and you can simply do just\n>\n> const char *key = MemoryContextStrdup(aggcontext,\n> &out->data[key_offset]);\n>\n> I wonder if the other json_unique_check_key() call might have a similar\n> issue. I've not succeeded in constructing a broken query, but perhaps\n> you could give it a try too?\n\nI found two other places called json_unique_check_key.\n\nOne is *json_build_object_worker*, and the usage is the same as\n*json_object_agg_transfn_worker*, I fix that the same way, PSA\n\nThe following sql should trigger the problem, I haven't tried asan\nbut traced the *repalloc* in gdb, I will try this later when I set up my\nasan building.\n\nSELECT JSON_OBJECT(1: 1, '2': NULL, '3': 1, repeat('x', 1000): 1, 2:\nrepeat('a', 100) WITH UNIQUE);\n\nThe other place is json_unique_object_field_start, it is set\nas a callback of JsonSemAction, and in the parse state machine,\nthe fname used by the callback has been correctly handled.\nsee [1] & [2].\n\n[1]: https://github.com/postgres/postgres/blob/master/src/common/jsonapi.c#L1069-L1087\n[2]: https://github.com/postgres/postgres/blob/master/src/common/jsonapi.c#L785-L823\n\n\n>\n>\n> Thanks!\n>\n> --\n> Tomas Vondra\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 5 Sep 2024 12:06:46 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "On Wed, Sep 4, 2024 at 7:54 PM Tomas Vondra <tomas@vondra.me> wrote:\n>\n> On 9/4/24 11:55, Junwang Zhao wrote:\n> > ...\n> >\n> > ISTM that the JsonUniqueHashEntry.key point to an address later got\n> > invalidated by enlargeStringInfo, we can resolve this by explicitly\n> > pstrdup the key in the same MemoryContext of JsonAggState, like:\n>\n> Yes, this fixes the issue (at least per valgrind).\n>\n> > @@ -1009,6 +1009,7 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> > Datum arg;\n> > bool skip;\n> > int key_offset;\n> > + const char *key;\n> >\n> > if (!AggCheckCallContext(fcinfo, &aggcontext))\n> > {\n> > @@ -1111,7 +1112,9 @@ json_object_agg_transfn_worker(FunctionCallInfo fcinfo,\n> >\n> > if (unique_keys)\n> > {\n> > - const char *key = &out->data[key_offset];\n> > + oldcontext = MemoryContextSwitchTo(aggcontext);\n> > + key = pstrdup(&out->data[key_offset]);\n> > + MemoryContextSwitchTo(oldcontext);\n> >\n>\n> I think you don't need the new key declaration (there's already a local\n> one), and you can simply do just\n>\n> const char *key = MemoryContextStrdup(aggcontext,\n> &out->data[key_offset]);\n>\n> I wonder if the other json_unique_check_key() call might have a similar\n> issue. I've not succeeded in constructing a broken query, but perhaps\n> you could give it a try too?\n\nI found two other places called json_unique_check_key.\n\nOne is *json_build_object_worker*, and the usage is the same as\n*json_object_agg_transfn_worker*, I fix that the same way, PSA\n\nThe following sql should trigger the problem, I haven't tried asan\nbut traced the *repalloc* in gdb, I will try this later when I set up my\nasan building.\n\nSELECT JSON_OBJECT(1: 1, '2': NULL, '3': 1, repeat('x', 1000): 1, 2:\nrepeat('a', 100) WITH UNIQUE);\n\nThe other place is json_unique_object_field_start, it is set\nas a callback of JsonSemAction, and in the parse state machine,\nthe fname used by the callback has been correctly handled.\nsee [1] & [2].\n\n[1]: https://github.com/postgres/postgres/blob/master/src/common/jsonapi.c#L1069-L1087\n[2]: https://github.com/postgres/postgres/blob/master/src/common/jsonapi.c#L785-L823\n\n>\n>\n> Thanks!\n>\n> --\n> Tomas Vondra\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Thu, 5 Sep 2024 13:20:20 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "On 9/5/24 06:06, Junwang Zhao wrote:\n> \n> ...\n> \n> I found two other places called json_unique_check_key.\n> \n> One is *json_build_object_worker*, and the usage is the same as\n> *json_object_agg_transfn_worker*, I fix that the same way, PSA\n> \n> The following sql should trigger the problem, I haven't tried asan\n> but traced the *repalloc* in gdb, I will try this later when I set up my\n> asan building.\n> \n> SELECT JSON_OBJECT(1: 1, '2': NULL, '3': 1, repeat('x', 1000): 1, 2:\n> repeat('a', 100) WITH UNIQUE);\n> \n> The other place is json_unique_object_field_start, it is set\n> as a callback of JsonSemAction, and in the parse state machine,\n> the fname used by the callback has been correctly handled.\n> see [1] & [2].\n> \n> [1]: https://github.com/postgres/postgres/blob/master/src/common/jsonapi.c#L1069-L1087\n> [2]: https://github.com/postgres/postgres/blob/master/src/common/jsonapi.c#L785-L823\n> \n> \n\n\nThanks for the fix and the idea for a function triggering issue in the\nother place. I verified that it indeed triggers a valgrind report, and\nthe fix makes it go away.\n\nAttached is a patch with proper commit message, explaining the issue,\nand various other details. Can you please proofread it and check that I\ngot all the details right?\n\nThe patch also adds two regression tests, triggering the issue (without\nthe rest of the patch applied). It took me a while to realize the\nexisting tests pass simply because the objects are tiny and don't\nrequire enlarging the buffer, and thus the repalloc.\n\nThe only question that bothers me a little bit is the possibility of a\nmemory leak - could it happen that we keep the copied key much longer\nthan needed? Or does aggcontext have with the right life span? AFAICS\nthat's where we allocate the aggregate state, so it seems fine.\n\nAlso, how far back do we need to backpatch this? ITSM PG15 does not have\nthis issue, and it was introduced with the SQL/JSON stuff in PG16. Is\nthat correct?\n\n\nregards\n\n-- \nTomas Vondra",
"msg_date": "Tue, 10 Sep 2024 21:47:16 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "On 9/10/24 21:47, Tomas Vondra wrote:\n> ...\n>\n> The only question that bothers me a little bit is the possibility of a\n> memory leak - could it happen that we keep the copied key much longer\n> than needed? Or does aggcontext have with the right life span? AFAICS\n> that's where we allocate the aggregate state, so it seems fine.\n> \n> Also, how far back do we need to backpatch this? ITSM PG15 does not have\n> this issue, and it was introduced with the SQL/JSON stuff in PG16. Is\n> that correct?\n> \n\nNah, I spent a bit of time looking for a memory leak, but I don't think\nthere's one, or at least not a new one. We use the same memory context\nas for the hash table / buffer, so that should be fine.\n\nBut this made me realize the code in json_build_object_worker() can\nsimply use pstrdup() to copy the key into CurrentMemoryContext, which is\nwhere the hash table of unique keys is. In fact, using unique_check.mcxt\nwould not be quite right:\n\n MemoryContext mcxt; /* context for saving skipped keys */\n\nAnd this has nothing to do with skipped keys.\n\nSo I adjusted that way and pushed.\n\n\n\nThanks for the report / patch.\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Wed, 11 Sep 2024 14:08:23 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
},
{
"msg_contents": "Hi Tomas,\n\nOn Wed, Sep 11, 2024 at 8:08 PM Tomas Vondra <tomas@vondra.me> wrote:\n>\n> On 9/10/24 21:47, Tomas Vondra wrote:\n> > ...\n> >\n> > The only question that bothers me a little bit is the possibility of a\n> > memory leak - could it happen that we keep the copied key much longer\n> > than needed? Or does aggcontext have with the right life span? AFAICS\n> > that's where we allocate the aggregate state, so it seems fine.\n> >\n> > Also, how far back do we need to backpatch this? ITSM PG15 does not have\n> > this issue, and it was introduced with the SQL/JSON stuff in PG16. Is\n> > that correct?\n> >\n>\n> Nah, I spent a bit of time looking for a memory leak, but I don't think\n> there's one, or at least not a new one. We use the same memory context\n> as for the hash table / buffer, so that should be fine.\n>\n> But this made me realize the code in json_build_object_worker() can\n> simply use pstrdup() to copy the key into CurrentMemoryContext, which is\n> where the hash table of unique keys is. In fact, using unique_check.mcxt\n> would not be quite right:\n>\n> MemoryContext mcxt; /* context for saving skipped keys */\n>\n> And this has nothing to do with skipped keys.\n>\n> So I adjusted that way and pushed.\n>\n\nI didn't get the time to reply to you quickly, sorry about that.\nThank you for improving the patch and appreciate your time\nfor working on this.\n\n>\n>\n> Thanks for the report / patch.\n>\n> --\n> Tomas Vondra\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 11 Sep 2024 22:56:33 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #18598: AddressSanitizer detects use after free inside\n json_unique_hash_match()"
}
] |
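The root cause discussed in the thread above is a general hazard: a pointer into a StringInfo buffer was stored for later use, and enlargeStringInfo()/repalloc() can move that buffer the next time the string grows, leaving the stored key dangling. The backend-style C sketch below illustrates the pattern and the copy-based fix the thread converges on; it is illustrative only, not the committed patch, and the struct and function names other than the StringInfo and MemoryContext APIs are made up here.

/* Sketch only: shows the dangling-pointer hazard and the copy-based fix. */
#include "postgres.h"
#include "lib/stringinfo.h"

typedef struct UniqueKeyEntry
{
    const char *key;            /* stored for later duplicate checks */
} UniqueKeyEntry;

static void
remember_key(UniqueKeyEntry *entry, StringInfo out, int key_offset,
             MemoryContext longlived_cxt)
{
    /*
     * Broken: &out->data[key_offset] dangles as soon as a later
     * appendStringInfo*() call triggers enlargeStringInfo(), which may
     * repalloc() and move out->data.
     */
    /* entry->key = &out->data[key_offset]; */

    /*
     * Fixed: copy the key into memory that lives at least as long as the
     * entry does (the aggregate or parse context in the real code).
     */
    entry->key = MemoryContextStrdup(longlived_cxt, &out->data[key_offset]);
}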
[
{
"msg_contents": "Hi hackers. While reviewing another thread I had cause to look at the\ndocs for the pg_replication_slot 'inactive_since' field [1] for the\nfirst time.\n\nI was confused by the description, which is as follows:\n----\ninactive_since timestamptz\nThe time since the slot has become inactive. NULL if the slot is\ncurrently being used.\n----\n\nThen I found the github history for the patch [2], and the\naccompanying long thread discussion [3] about the renaming of that\nfield. I have no intention to re-open that can-of-worms, but OTOH I\nfeel the first sentence of the field description is wrong and needs\nfixing.\n\nSpecifically, IMO describing something as \"The time since...\" means\nsome amount of elapsed time since some occurrence, but that is not the\ncorrect description for this timestamp field.\n\nThis is not just a case of me being pedantic. For example, here is\nwhat Chat-GPT had to say:\n----\nI asked:\nWhat does \"The time since the slot has become inactive.\" mean?\n\nChatGPT said:\n\"The time since the slot has become inactive\" refers to the duration\nthat has passed from the moment a specific slot (likely a database\nreplication slot or a similar entity) stopped being active. In other\nwords, it measures how much time has elapsed since the slot\ntransitioned from an active state to an inactive state.\n\nFor example, if a slot became inactive 2 hours ago, \"the time since\nthe slot has become inactive\" would be 2 hours.\n----\n\nTo summarize, the current description wrongly describes the field as a\ntime duration:\n\"The time since the slot has become inactive.\"\n\nI suggest replacing it with:\n\"The slot has been inactive since this time.\"\n\nThe attached patch makes this suggested change.\n\n======\n[1] docs - https://www.postgresql.org/docs/devel/view-pg-replication-slots.html\n[2] thread - https://www.postgresql.org/message-id/CA+Tgmob_Ta-t2ty8QrKHBGnNLrf4ZYcwhGHGFsuUoFrAEDw4sA@mail.gmail.com\n[3] push - https://github.com/postgres/postgres/commit/6d49c8d4b4f4a20eb5b4c501d78cf894fa13c0ea\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 2 Sep 2024 10:16:50 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi hackers. While reviewing another thread I had cause to look at the\n> docs for the pg_replication_slot 'inactive_since' field [1] for the\n> first time.\n>\n> I was confused by the description, which is as follows:\n> ----\n> inactive_since timestamptz\n> The time since the slot has become inactive. NULL if the slot is\n> currently being used.\n> ----\n>\n> Then I found the github history for the patch [2], and the\n> accompanying long thread discussion [3] about the renaming of that\n> field. I have no intention to re-open that can-of-worms, but OTOH I\n> feel the first sentence of the field description is wrong and needs\n> fixing.\n>\n> Specifically, IMO describing something as \"The time since...\" means\n> some amount of elapsed time since some occurrence, but that is not the\n> correct description for this timestamp field.\n>\n> This is not just a case of me being pedantic. For example, here is\n> what Chat-GPT had to say:\n> ----\n> I asked:\n> What does \"The time since the slot has become inactive.\" mean?\n>\n> ChatGPT said:\n> \"The time since the slot has become inactive\" refers to the duration\n> that has passed from the moment a specific slot (likely a database\n> replication slot or a similar entity) stopped being active. In other\n> words, it measures how much time has elapsed since the slot\n> transitioned from an active state to an inactive state.\n>\n> For example, if a slot became inactive 2 hours ago, \"the time since\n> the slot has become inactive\" would be 2 hours.\n> ----\n>\n> To summarize, the current description wrongly describes the field as a\n> time duration:\n> \"The time since the slot has become inactive.\"\n>\n> I suggest replacing it with:\n> \"The slot has been inactive since this time.\"\n>\n\n+1 for the change. If I had read the document without knowing about\nthe patch, I too would have interpreted it as a duration.\n\nthanks\nShveta\n\n\n",
"msg_date": "Mon, 2 Sep 2024 09:13:48 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > ----\n> >\n> > To summarize, the current description wrongly describes the field as a\n> > time duration:\n> > \"The time since the slot has become inactive.\"\n> >\n> > I suggest replacing it with:\n> > \"The slot has been inactive since this time.\"\n> >\n>\n> +1 for the change. If I had read the document without knowing about\n> the patch, I too would have interpreted it as a duration.\n>\n\nThe suggested change looks good to me as well. I'll wait for a day or\ntwo before pushing to see if anyone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Sep 2024 10:43:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 10:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > To summarize, the current description wrongly describes the field as a\n> > > time duration:\n> > > \"The time since the slot has become inactive.\"\n> > >\n> > > I suggest replacing it with:\n> > > \"The slot has been inactive since this time.\"\n> > >\n> >\n> > +1 for the change. If I had read the document without knowing about\n> > the patch, I too would have interpreted it as a duration.\n> >\n>\n> The suggested change looks good to me as well. I'll wait for a day or\n> two before pushing to see if anyone thinks otherwise.\n\nShall we make the change in code-comment as well:\n\ntypedef struct ReplicationSlot\n{\n...\n /* The time since the slot has become inactive */\n TimestampTz inactive_since;\n}\n\nthanks\nShveta\n\n\n",
"msg_date": "Tue, 3 Sep 2024 11:42:32 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 10:43:14AM +0530, Amit Kapila wrote:\n> On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > ----\n> > >\n> > > To summarize, the current description wrongly describes the field as a\n> > > time duration:\n> > > \"The time since the slot has become inactive.\"\n> > >\n> > > I suggest replacing it with:\n> > > \"The slot has been inactive since this time.\"\n> > >\n> >\n> > +1 for the change. If I had read the document without knowing about\n> > the patch, I too would have interpreted it as a duration.\n> >\n> \n> The suggested change looks good to me as well. I'll wait for a day or\n> two before pushing to see if anyone thinks otherwise.\n\nI'm not 100% convinced the current wording is confusing because:\n\n- the field type is described as a \"timestamptz\".\n- there is no duration unit in the wording (if we were to describe a duration,\nwe would probably add an unit to it, like \"The time (in s)...\").\n\nThat said, if we want to highlight that this is not a duration, what about?\n\n\"The time (not duration) since the slot has become inactive.\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Sep 2024 06:35:03 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 4:35 PM Bertrand Drouvot\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On Tue, Sep 03, 2024 at 10:43:14AM +0530, Amit Kapila wrote:\n> > On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > ----\n> > > >\n> > > > To summarize, the current description wrongly describes the field as a\n> > > > time duration:\n> > > > \"The time since the slot has become inactive.\"\n> > > >\n> > > > I suggest replacing it with:\n> > > > \"The slot has been inactive since this time.\"\n> > > >\n> > >\n> > > +1 for the change. If I had read the document without knowing about\n> > > the patch, I too would have interpreted it as a duration.\n> > >\n> >\n> > The suggested change looks good to me as well. I'll wait for a day or\n> > two before pushing to see if anyone thinks otherwise.\n>\n> I'm not 100% convinced the current wording is confusing because:\n>\n> - the field type is described as a \"timestamptz\".\n> - there is no duration unit in the wording (if we were to describe a duration,\n> we would probably add an unit to it, like \"The time (in s)...\").\n>\n\nHmm. I assure you it is confusing because in English \"The time since\"\nimplies duration, and that makes the sentence contrary to the\ntimestamptz field type. Indeed, I cited the Chat-GPT's interpretation\nabove specifically so that people would not just take this as my\nopinion.\n\n> That said, if we want to highlight that this is not a duration, what about?\n>\n> \"The time (not duration) since the slot has become inactive.\"\n>\n\nThere is no need to \"highlight\" anything. To avoid confusion we only\nneed to say what we mean.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 3 Sep 2024 17:52:53 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 05:52:53PM +1000, Peter Smith wrote:\n> On Tue, Sep 3, 2024 at 4:35 PM Bertrand Drouvot\n> <bertranddrouvot.pg@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Tue, Sep 03, 2024 at 10:43:14AM +0530, Amit Kapila wrote:\n> > > On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com> wrote:\n> > > >\n> > > > On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > > ----\n> > > > >\n> > > > > To summarize, the current description wrongly describes the field as a\n> > > > > time duration:\n> > > > > \"The time since the slot has become inactive.\"\n> > > > >\n> > > > > I suggest replacing it with:\n> > > > > \"The slot has been inactive since this time.\"\n> > > > >\n> > > >\n> > > > +1 for the change. If I had read the document without knowing about\n> > > > the patch, I too would have interpreted it as a duration.\n> > > >\n> > >\n> > > The suggested change looks good to me as well. I'll wait for a day or\n> > > two before pushing to see if anyone thinks otherwise.\n> >\n> > I'm not 100% convinced the current wording is confusing because:\n> >\n> > - the field type is described as a \"timestamptz\".\n> > - there is no duration unit in the wording (if we were to describe a duration,\n> > we would probably add an unit to it, like \"The time (in s)...\").\n> >\n> \n> Hmm. I assure you it is confusing because in English \"The time since\"\n> implies duration, and that makes the sentence contrary to the\n> timestamptz field type.\n\nOh if that implies duration (I'm not a native English speaker and would have\nassumed that it does not imply a duration 100% of the time) then yeah there is\nsome contradiction between the wording and the returned type.\n\n> Indeed, I cited the Chat-GPT's interpretation\n> above specifically so that people would not just take this as my\n> opinion.\n\nRight, I was just wondering what would Chat-GPT answer if you add \"knowing\nthat the time is of timestamptz datatype\" to the question?\n\n> To avoid confusion we only need to say what we mean.\n\nSure, I was just saying that I did not see any confusion given the returned\ndatatype. Now that you say that \"The time since\" implies duration then yeah, in\nthat case, it's better to provide the right wording then.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Sep 2024 08:39:08 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 4:12 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n...\n> Shall we make the change in code-comment as well:\n>\n> typedef struct ReplicationSlot\n> {\n> ...\n> /* The time since the slot has become inactive */\n> TimestampTz inactive_since;\n> }\n>\n\nHi Shveta,\n\nYes, I think so. I hadn't bothered to include this in the v1 patch\nonly because the docs are user-facing, but this code comment isn't.\nHowever, now that you've mentioned it, I made the same change here\nalso. Thanks. See patch v2.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 4 Sep 2024 14:59:20 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "At Tue, 3 Sep 2024 10:43:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \r\n> On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com> wrote:\r\n> >\r\n> > On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> > > ----\r\n> > >\r\n> > > To summarize, the current description wrongly describes the field as a\r\n> > > time duration:\r\n> > > \"The time since the slot has become inactive.\"\r\n> > >\r\n> > > I suggest replacing it with:\r\n> > > \"The slot has been inactive since this time.\"\r\n> > >\r\n> >\r\n> > +1 for the change. If I had read the document without knowing about\r\n> > the patch, I too would have interpreted it as a duration.\r\n> >\r\n> \r\n> The suggested change looks good to me as well. I'll wait for a day or\r\n> two before pushing to see if anyone thinks otherwise.\r\n\r\nIf possible, I'd prefer to use \"the time\" as the subject. For example,\r\nwould \"The time this slot was inactivated\" be acceptable? However,\r\nthis loses the sense of continuation up to that point, so if that's\r\ncrucial, the current proposal might be better.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 04 Sep 2024 14:32:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since'\n description"
},
{
"msg_contents": "On Tuesday, September 3, 2024, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 3 Sep 2024 10:43:14 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com>\n> wrote:\n> > >\n> > > On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> > > > ----\n> > > >\n> > > > To summarize, the current description wrongly describes the field as\n> a\n> > > > time duration:\n> > > > \"The time since the slot has become inactive.\"\n> > > >\n> > > > I suggest replacing it with:\n> > > > \"The slot has been inactive since this time.\"\n> > > >\n> > >\n> > > +1 for the change. If I had read the document without knowing about\n> > > the patch, I too would have interpreted it as a duration.\n> > >\n> >\n> > The suggested change looks good to me as well. I'll wait for a day or\n> > two before pushing to see if anyone thinks otherwise.\n>\n> If possible, I'd prefer to use \"the time\" as the subject. For example,\n> would \"The time this slot was inactivated\" be acceptable? However,\n> this loses the sense of continuation up to that point, so if that's\n> crucial, the current proposal might be better.\n>\n\nAgree on sticking with “The time…”\n\nThus I suggest either:\n\nThe time when the slot became inactive.\nThe time when the slot was deactivated.\n\nApparently inactivate is a valid choice here but it definitely sounds like\nunusual usage in this context. Existing usage (via GibHub search…someone\nmay want to grep) seems to support deactivate as well.\n\nI like the first suggestion more, especially since becoming inactive can\nhappen without user input. Saying deactivate could be seen to imply user\nintervention.\n\nDavid J.\n\nOn Tuesday, September 3, 2024, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Tue, 3 Sep 2024 10:43:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Sep 2, 2024 at 9:14 AM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Mon, Sep 2, 2024 at 5:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > ----\n> > >\n> > > To summarize, the current description wrongly describes the field as a\n> > > time duration:\n> > > \"The time since the slot has become inactive.\"\n> > >\n> > > I suggest replacing it with:\n> > > \"The slot has been inactive since this time.\"\n> > >\n> >\n> > +1 for the change. If I had read the document without knowing about\n> > the patch, I too would have interpreted it as a duration.\n> >\n> \n> The suggested change looks good to me as well. I'll wait for a day or\n> two before pushing to see if anyone thinks otherwise.\n\nIf possible, I'd prefer to use \"the time\" as the subject. For example,\nwould \"The time this slot was inactivated\" be acceptable? However,\nthis loses the sense of continuation up to that point, so if that's\ncrucial, the current proposal might be better.\nAgree on sticking with “The time…”Thus I suggest either:The time when the slot became inactive.The time when the slot was deactivated.Apparently inactivate is a valid choice here but it definitely sounds like unusual usage in this context. Existing usage (via GibHub search…someone may want to grep) seems to support deactivate as well.I like the first suggestion more, especially since becoming inactive can happen without user input. Saying deactivate could be seen to imply user intervention.David J.",
"msg_date": "Tue, 3 Sep 2024 23:04:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Wed, Sep 4, 2024 at 11:34 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n\n>\n>\n> Agree on sticking with “The time…”\n>\n> Thus I suggest either:\n>\n> The time when the slot became inactive.\n\n+1. Definitely removes confusion caused by \"since\" and keeps The time\nas subject.\n\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 4 Sep 2024 18:48:39 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "Saying \"The time...\" is fine, but the suggestions given seem backwards to me:\n- The time this slot was inactivated\n- The time when the slot became inactive.\n- The time when the slot was deactivated.\n\ne.g. It is not like light switch. So, there is no moment when the\nactive slot \"became inactive\" or \"was deactivated\".\n\nRather, the 'inactive_since' timestamp field is simply:\n- The time the slot was last active.\n- The last time the slot was active.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 9 Sep 2024 11:54:57 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Sun, Sep 8, 2024, 18:55 Peter Smith <smithpb2250@gmail.com> wrote:\n\n> Saying \"The time...\" is fine, but the suggestions given seem backwards to\n> me:\n> - The time this slot was inactivated\n> - The time when the slot became inactive.\n> - The time when the slot was deactivated.\n>\n> e.g. It is not like light switch. So, there is no moment when the\n> active slot \"became inactive\" or \"was deactivated\".\n>\n\nWhile this is plausible the existing wording and the name of the field\ndefinitely fail to convey this.\n\n\n> Rather, the 'inactive_since' timestamp field is simply:\n> - The time the slot was last active.\n> - The last time the slot was active.\n>\n\nI see your point but that wording is also quite confusing when an active\nslot returns null for this field.\n\nAt this point I'm confused enough to need whatever wording is taken to be\nsupported by someone explaining the code that interacts with this field.\n\nI suppose I'm expecting something like: The time the last activity\nfinished, or null if an activity is in-progress.\n\nDavid J.\n\n\n>\n>\n\nOn Sun, Sep 8, 2024, 18:55 Peter Smith <smithpb2250@gmail.com> wrote:Saying \"The time...\" is fine, but the suggestions given seem backwards to me:\n- The time this slot was inactivated\n- The time when the slot became inactive.\n- The time when the slot was deactivated.\n\ne.g. It is not like light switch. So, there is no moment when the\nactive slot \"became inactive\" or \"was deactivated\".While this is plausible the existing wording and the name of the field definitely fail to convey this.\n\nRather, the 'inactive_since' timestamp field is simply:\n- The time the slot was last active.\n- The last time the slot was active.I see your point but that wording is also quite confusing when an active slot returns null for this field.At this point I'm confused enough to need whatever wording is taken to be supported by someone explaining the code that interacts with this field.I suppose I'm expecting something like: The time the last activity finished, or null if an activity is in-progress.David J.",
"msg_date": "Sun, 8 Sep 2024 19:20:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 12:20 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>\n>\n> On Sun, Sep 8, 2024, 18:55 Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> Saying \"The time...\" is fine, but the suggestions given seem backwards to me:\n>> - The time this slot was inactivated\n>> - The time when the slot became inactive.\n>> - The time when the slot was deactivated.\n>>\n>> e.g. It is not like light switch. So, there is no moment when the\n>> active slot \"became inactive\" or \"was deactivated\".\n>\n>\n> While this is plausible the existing wording and the name of the field definitely fail to convey this.\n>\n>>\n>> Rather, the 'inactive_since' timestamp field is simply:\n>> - The time the slot was last active.\n>> - The last time the slot was active.\n>\n>\n> I see your point but that wording is also quite confusing when an active slot returns null for this field.\n>\n> At this point I'm confused enough to need whatever wording is taken to be supported by someone explaining the code that interacts with this field.\n>\n\nMe too. I created this thread primarily to get the description changed\nto clarify this field represents a moment in time, rather than a\nduration. So I will be happy with any wording that addresses that.\n\n> I suppose I'm expecting something like: The time the last activity finished, or null if an activity is in-progress.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 9 Sep 2024 13:15:32 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: DOCS - pg_replication_slot . Fix the 'inactive_since' description"
}
] |
[
{
"msg_contents": "Hi,\nI have created my own output plugin for logical decoding.\nI am storing decoded data in Apache kafka via pg_recvlogical utility.\nUsing pg_recvlogical I am updating confirmed_flush_lsn of slot based on the\nvalue that I'm storing in kafka,this is done for every 10 secs.\n\nIn case of walsender shutdown I found that canditate_restart_lsn is not\ncleared from shared memory.I also found just after restarting\nsometimes canditate_restart_lsn is far more greater than actual restart_lsn\nof slot,due to this frequent checkpoints are deleting the required WAL\nfiles.\n\nCan I clear the canditate_restart_lsn in plugin_start callback.Is there\nany consequences for this?\n\nThanks and Regards,\nG R MANJUNATH.\n\nHi,I have created my own output plugin for logical decoding.I am storing decoded data in Apache kafka via pg_recvlogical utility.Using pg_recvlogical I am updating confirmed_flush_lsn of slot based on the value that I'm storing in kafka,this is done for every 10 secs.In case of walsender shutdown I found that canditate_restart_lsn is not cleared from shared memory.I also found just after restarting sometimes canditate_restart_lsn is far more greater than actual restart_lsn of slot,due to this frequent checkpoints are deleting the required WAL files.Can I clear the canditate_restart_lsn in plugin_start callback.Is there any consequences for this?Thanks and Regards,G R MANJUNATH.",
"msg_date": "Mon, 2 Sep 2024 10:25:45 +0530",
"msg_from": "reddy manjunath <rmanjunath050@gmail.com>",
"msg_from_op": true,
"msg_subject": "Regarding canditate_restart_lsn in logical decoding."
}
] |
[
{
"msg_contents": "While working on some other code I noticed that in\nFindReplTupleInLocalRel() there is an assert [1] that seems to be\npassing IndexRelation to GetRelationIdentityOrPK() whereas it should\nbe passing normal relation.\n\n[1]\nif (OidIsValid(localidxoid))\n{\n#ifdef USE_ASSERT_CHECKING\n Relation idxrel = index_open(localidxoid, AccessShareLock);\n\n /* Index must be PK, RI, or usable for REPLICA IDENTITY FULL tables */\n Assert(GetRelationIdentityOrPK(idxrel) == localidxoid ||\n IsIndexUsableForReplicaIdentityFull(BuildIndexInfo(idxrel),\n edata->targetRel->attrmap));\n index_close(idxrel, AccessShareLock);\n#endif\n\nIn the above code, we are passing idxrel to GetRelationIdentityOrPK(),\nwhereas we should be passing localrel\n\nFix should be\n\n\n@@ -2929,7 +2929,7 @@ FindReplTupleInLocalRel(ApplyExecutionData\n*edata, Relation localrel,\n\n Relation idxrel = index_open(localidxoid,\nAccessShareLock);\n\n /* Index must be PK, RI, or usable for REPLICA\nIDENTITY FULL tables */\n- Assert(GetRelationIdentityOrPK(idxrel) == localidxoid ||\n+ Assert(GetRelationIdentityOrPK(localrel) == localidxoid ||\n\nIsIndexUsableForReplicaIdentityFull(BuildIndexInfo(idxrel),\n\n edata->targetRel->attrmap));\n\n index_close(idxrel, AccessShareLock);\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Sep 2024 11:21:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Mon, Sep 2, 2024 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> While working on some other code I noticed that in\n> FindReplTupleInLocalRel() there is an assert [1] that seems to be\n> passing IndexRelation to GetRelationIdentityOrPK() whereas it should\n> be passing normal relation.\n>\n\nAgreed. But this should lead to assertion failure. Did you try testing it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 Sep 2024 15:32:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Mon, Sep 2, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 2, 2024 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > While working on some other code I noticed that in\n> > FindReplTupleInLocalRel() there is an assert [1] that seems to be\n> > passing IndexRelation to GetRelationIdentityOrPK() whereas it should\n> > be passing normal relation.\n> >\n>\n> Agreed. But this should lead to assertion failure. Did you try testing it?\n\nNo, I did not test this particular case, it impacted me with my other\naddition of the code where I got Index Relation as input to the\nRelationGetIndexList() function, and my local changes were impacted by\nthat. I will write a test for this stand-alone case so that it hits\nthe assert. Thanks for looking into this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Sep 2024 18:22:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Mon, 2 Sept 2024 at 18:22, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Sep 2, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Sep 2, 2024 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > While working on some other code I noticed that in\n> > > FindReplTupleInLocalRel() there is an assert [1] that seems to be\n> > > passing IndexRelation to GetRelationIdentityOrPK() whereas it should\n> > > be passing normal relation.\n> > >\n> >\n> > Agreed. But this should lead to assertion failure. Did you try testing it?\n>\n> No, I did not test this particular case, it impacted me with my other\n> addition of the code where I got Index Relation as input to the\n> RelationGetIndexList() function, and my local changes were impacted by\n> that. I will write a test for this stand-alone case so that it hits\n> the assert. Thanks for looking into this.\n\nThe FindReplTupleInLocalRel function can be triggered by both update\nand delete operations, but this only occurs if the relation has been\nmarked as updatable by the logicalrep_rel_mark_updatable function. If\nthe relation is marked as non-updatable, an error will be thrown by\ncheck_relation_updatable. Given this, if a relation is updatable, the\nIsIndexUsableForReplicaIdentityFull condition might always evaluate to\ntrue due to the previous checks in logicalrep_rel_mark_updatable.\nTherefore, it’s possible that we might not encounter the Assert\nstatement, as IsIndexUsableForReplicaIdentityFull may consistently be\ntrue.\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Sep 2024 16:48:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Fri, Sep 6, 2024 at 4:48 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 2 Sept 2024 at 18:22, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Sep 2, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 2, 2024 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > While working on some other code I noticed that in\n> > > > FindReplTupleInLocalRel() there is an assert [1] that seems to be\n> > > > passing IndexRelation to GetRelationIdentityOrPK() whereas it should\n> > > > be passing normal relation.\n> > > >\n> > >\n> > > Agreed. But this should lead to assertion failure. Did you try testing it?\n> >\n> > No, I did not test this particular case, it impacted me with my other\n> > addition of the code where I got Index Relation as input to the\n> > RelationGetIndexList() function, and my local changes were impacted by\n> > that. I will write a test for this stand-alone case so that it hits\n> > the assert. Thanks for looking into this.\n>\n> The FindReplTupleInLocalRel function can be triggered by both update\n> and delete operations, but this only occurs if the relation has been\n> marked as updatable by the logicalrep_rel_mark_updatable function. If\n> the relation is marked as non-updatable, an error will be thrown by\n> check_relation_updatable. Given this, if a relation is updatable, the\n> IsIndexUsableForReplicaIdentityFull condition might always evaluate to\n> true due to the previous checks in logicalrep_rel_mark_updatable.\n> Therefore, it’s possible that we might not encounter the Assert\n> statement, as IsIndexUsableForReplicaIdentityFull may consistently be\n> true.\n> Thoughts?\n\nWith that it seems that the first Assert condition is useless isn't it?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 11:44:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 11:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Sep 6, 2024 at 4:48 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, 2 Sept 2024 at 18:22, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 2, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Sep 2, 2024 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > While working on some other code I noticed that in\n> > > > > FindReplTupleInLocalRel() there is an assert [1] that seems to be\n> > > > > passing IndexRelation to GetRelationIdentityOrPK() whereas it should\n> > > > > be passing normal relation.\n> > > > >\n> > > >\n> > > > Agreed. But this should lead to assertion failure. Did you try testing it?\n> > >\n> > > No, I did not test this particular case, it impacted me with my other\n> > > addition of the code where I got Index Relation as input to the\n> > > RelationGetIndexList() function, and my local changes were impacted by\n> > > that. I will write a test for this stand-alone case so that it hits\n> > > the assert. Thanks for looking into this.\n> >\n> > The FindReplTupleInLocalRel function can be triggered by both update\n> > and delete operations, but this only occurs if the relation has been\n> > marked as updatable by the logicalrep_rel_mark_updatable function. If\n> > the relation is marked as non-updatable, an error will be thrown by\n> > check_relation_updatable. Given this, if a relation is updatable, the\n> > IsIndexUsableForReplicaIdentityFull condition might always evaluate to\n> > true due to the previous checks in logicalrep_rel_mark_updatable.\n> > Therefore, it’s possible that we might not encounter the Assert\n> > statement, as IsIndexUsableForReplicaIdentityFull may consistently be\n> > true.\n> > Thoughts?\n>\n> With that it seems that the first Assert condition is useless isn't it?\n>\n\nThe second part of the assertion is incomplete. The\nIsIndexUsableForReplicaIdentityFull() should be used only when the\nremote relation has REPLICA_IDENTITY_FULL set. I haven't tested all\npossible cases yet but I think the attached should be a better way to\nwrite this assertion.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 9 Sep 2024 13:12:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Mon, 9 Sept 2024 at 13:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 9, 2024 at 11:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Sep 6, 2024 at 4:48 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Mon, 2 Sept 2024 at 18:22, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, Sep 2, 2024 at 3:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Sep 2, 2024 at 11:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > While working on some other code I noticed that in\n> > > > > > FindReplTupleInLocalRel() there is an assert [1] that seems to be\n> > > > > > passing IndexRelation to GetRelationIdentityOrPK() whereas it should\n> > > > > > be passing normal relation.\n> > > > > >\n> > > > >\n> > > > > Agreed. But this should lead to assertion failure. Did you try testing it?\n> > > >\n> > > > No, I did not test this particular case, it impacted me with my other\n> > > > addition of the code where I got Index Relation as input to the\n> > > > RelationGetIndexList() function, and my local changes were impacted by\n> > > > that. I will write a test for this stand-alone case so that it hits\n> > > > the assert. Thanks for looking into this.\n> > >\n> > > The FindReplTupleInLocalRel function can be triggered by both update\n> > > and delete operations, but this only occurs if the relation has been\n> > > marked as updatable by the logicalrep_rel_mark_updatable function. If\n> > > the relation is marked as non-updatable, an error will be thrown by\n> > > check_relation_updatable. Given this, if a relation is updatable, the\n> > > IsIndexUsableForReplicaIdentityFull condition might always evaluate to\n> > > true due to the previous checks in logicalrep_rel_mark_updatable.\n> > > Therefore, it’s possible that we might not encounter the Assert\n> > > statement, as IsIndexUsableForReplicaIdentityFull may consistently be\n> > > true.\n> > > Thoughts?\n> >\n> > With that it seems that the first Assert condition is useless isn't it?\n> >\n>\n> The second part of the assertion is incomplete. The\n> IsIndexUsableForReplicaIdentityFull() should be used only when the\n> remote relation has REPLICA_IDENTITY_FULL set. I haven't tested all\n> possible cases yet but I think the attached should be a better way to\n> write this assertion.\n\nThe changes look good to me.\nI have verified the following test which hits the Assert condition:\nCheck the first condition in the assert statement with the following test:\n-- pub\ncreate table test_update_assert(c1 int primary key, c2 int);\ncreate publication pub1 for table test_update_assert;\n--sub\ncreate table test_update_assert(c1 int primary key, c2 int);\ncreate subscription ...;\n\n--pub\ninsert into test_update_assert values(1,1);\nupdate test_update_assert set c1 = 2;\n\nI also verified that if we replace localrel with idxrel, the assert\nwill fail for the above test. This is the original issue reported by\nDilip.\n\nCheck the 2nd condition in assert with the following test:\n-- pub\ncreate table test_update_assert1(c1 int, c2 int);\nalter table test_update_assert1 replica identity full;\ncreate publication pub1 for table test_update_assert1;\n\n-- sub\ncreate table test_update_assert1(c1 int, c2 int);\ncreate unique index idx1 on test_update_assert1(c1,c2);\ncreate subscription ...;\n\n--pub\ninsert into test_update_assert1 values(1,1);\nupdate test_update_assert1 set c1 = 2;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 10 Sep 2024 11:36:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 11:36 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 9 Sept 2024 at 13:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > The second part of the assertion is incomplete. The\n> > IsIndexUsableForReplicaIdentityFull() should be used only when the\n> > remote relation has REPLICA_IDENTITY_FULL set. I haven't tested all\n> > possible cases yet but I think the attached should be a better way to\n> > write this assertion.\n>\n> The changes look good to me.\n>\n\nThanks, I'll push this tomorrow unless Dilip or anyone else has any\ncomments. BTW, as the current code doesn't lead to any bug or\nassertion failure, I am planning to push this change to HEAD only, let\nme know if you think otherwise.\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Sep 2024 14:16:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 2:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 10, 2024 at 11:36 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, 9 Sept 2024 at 13:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > The second part of the assertion is incomplete. The\n> > > IsIndexUsableForReplicaIdentityFull() should be used only when the\n> > > remote relation has REPLICA_IDENTITY_FULL set. I haven't tested all\n> > > possible cases yet but I think the attached should be a better way to\n> > > write this assertion.\n> >\n> > The changes look good to me.\n> >\n>\n> Thanks, I'll push this tomorrow unless Dilip or anyone else has any\n> comments. BTW, as the current code doesn't lead to any bug or\n> assertion failure, I am planning to push this change to HEAD only, let\n> me know if you think otherwise.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Sep 2024 10:05:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
},
{
"msg_contents": "On Wed, 11 Sept 2024 at 10:05, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 10, 2024 at 2:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 10, 2024 at 11:36 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Mon, 9 Sept 2024 at 13:12, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > The second part of the assertion is incomplete. The\n> > > > IsIndexUsableForReplicaIdentityFull() should be used only when the\n> > > > remote relation has REPLICA_IDENTITY_FULL set. I haven't tested all\n> > > > possible cases yet but I think the attached should be a better way to\n> > > > write this assertion.\n> > >\n> > > The changes look good to me.\n> > >\n> >\n> > Thanks, I'll push this tomorrow unless Dilip or anyone else has any\n> > comments. BTW, as the current code doesn't lead to any bug or\n> > assertion failure, I am planning to push this change to HEAD only, let\n> > me know if you think otherwise.\n> >\n>\n> Pushed.\n\nI noticed that Drongo failed the subscription-check for the 015_stream\ntest, as reported at [1].\n\nIn publisher log I could see the following statement executed\n015_stream.pl line 253:\n$node_publisher->safe_psql('postgres', \"TRUNCATE TABLE test_tab_2\");\n\nWith the following log content from 015_stream_publisher.log:\n2024-09-11 09:13:36.886 UTC [512:4] 015_stream.pl LOG: statement:\nTRUNCATE TABLE test_tab_2\n\nand lastly executed wait_for_catchup 015_stream.pl line 254:\n2024-09-11 09:13:38.811 UTC [4320:4] 015_stream.pl LOG: statement:\nSELECT '0/20E3BC8' <= replay_lsn AND state = 'streaming'\n FROM pg_catalog.pg_stat_replication\n WHERE application_name IN ('tap_sub', 'walreceiver')\n\nIn subscriber log I could see the following statement executed\n015_stream.pl line 255:\n$node_subscriber->safe_psql('postgres',\n\"CREATE UNIQUE INDEX idx_tab on test_tab_2(a)\");\n\nWith the following log content from 015_stream_subscriber.log:\n2024-09-11 09:13:39.610 UTC [3096:4] 015_stream.pl LOG: statement:\nCREATE UNIQUE INDEX idx_tab on test_tab_2(a)\n\nHowever, there is no clear indication of what transpired afterward or\nwhy the remaining statements were not executed. It appears this issue\nis unrelated to the recent patch, as the changes introduced by the\npatch affect only update and delete operations.\nI will keep monitoring the buildfarm for additional runs to determine\nif the issue recurs and draw conclusions based on the observations.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-11%2007%3A24%3A53\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Sep 2024 22:29:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Invalid Assert while validating REPLICA IDENTITY?"
}
] |
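Combining the two points settled in the thread above (pass the heap relation, not the index relation, to GetRelationIdentityOrPK(), and fall back to IsIndexUsableForReplicaIdentityFull() only when the remote relation uses REPLICA IDENTITY FULL), the assertion would take roughly the shape sketched below. This is a reconstruction from the discussion, not the pushed commit; in particular, "remoterel" here stands for the remote relation descriptor (something like &edata->targetRel->remoterel), and the exact committed wording may differ.

/* Sketch reconstructed from the discussion; not the committed change. */
if (OidIsValid(localidxoid))
{
#ifdef USE_ASSERT_CHECKING
    Relation    idxrel = index_open(localidxoid, AccessShareLock);

    /* Index must be PK, RI, or usable for REPLICA IDENTITY FULL tables */
    Assert(GetRelationIdentityOrPK(localrel) == localidxoid ||
           (remoterel->replident == REPLICA_IDENTITY_FULL &&
            IsIndexUsableForReplicaIdentityFull(BuildIndexInfo(idxrel),
                                                edata->targetRel->attrmap)));
    index_close(idxrel, AccessShareLock);
#endif
}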
[
{
"msg_contents": "There's a category[1] of random build farm/CI failures where Windows\nbehaves differently and our stuff breaks, which also affects end\nusers. A recent BF failure[2] that looks like one of those jangled my\nnerves when I pushed a commit, so I looked into a new theory on how to\nfix it. First, let me restate my understanding of the two categories\nof known message loss on Windows, since the information is scattered\nfar and wide across many threads:\n\n1. When a backend exits without closing the socket gracefully, which\nwas briefly fixed[3] but later reverted because it broke something\nelse, a Windows server's network stack might fail to send data that it\nhad buffered but not yet physically sent[4].\n\nThe reason we reverted that and went back to abortive socket shutdown\n(ie just exit()) is that our WL_SOCKET_READABLE was buggy, and could\nmiss FD_CLOSE events from graceful but not abortive shutdowns (which\nkeep reporting themselves repeatedly, something to do with being an\nerror state (?)). Sometimes a libpq socket we're waiting for with\nWaitLatchOrSocket() on the client end of the socket could hang\nforever. Concretely: a replication connection or postgres_fdw running\ninside another PostgreSQL server. We fixed that event loss, albeit in\na gross kludgy way[5], because other ideas seemed too complicated (to\nwit, various ways to manage extra state associated with each socket,\nreally hard to retro-fit in a satisfying way). Graceful shutdown\nshould fix the race cases where the next thing the client calls is\nrecv(), as far as I know.\n\n2. If a Windows client tries to send() and gets an ECONNRESET/EPIPE\nerror, then the network stack seems to drop already received data, so\na following recv() will never see it. In other words, it depends on\nwhether the application-level protocol is strictly request/response\nbased, or has sequence points at which both ends might send(). AFAIK\nthe main consequence for real users is that FATAL recovery conflict,\nidle termination, etc messages are not delivered to clients, leaving\njust \"server closed the connection unexpectedly\".\n\nI have wondered whether it might help to kludgify the Windows TCP code\neven more by doing an extra poll() for POLLRD before every single\nsend(). \"Hey network stack, before I try to send this message, is\nthere anything the server wanted to tell me?\", but I guess that must\nbe racy because the goodbye message could arrive between poll() and\nsend(). Annoyingly, I suspect it would *mostly* work.\n\nThe new thought I had about the second category of problem is: if you\nuse asynchronous networking APIs, then the kernel *can't* throw your\ndata out, because it doesn't even have it. If the server's FATAL\nmessage arrives before the client calls send(), then the data is\nalready written to user space memory and the I/O is marked as\ncomplete. If it arrives after, then there's no issue, because\ncomputers can't see into the future yet. That's my hypothesis,\nanyway. To try that, I started with a very simple program[6] on my\nlocal FreeBSD system that does a failing send, and tries synchronous\nand asynchronous recv():\n\n=== synchronous ===\nsend -> -1, error = 32\nrecv -> \"FATAL: flux capacitor failed\", error = 0\n=== posix aio ===\nsend -> -1, error = 32\nasync recv -> \"FATAL: flux capacitor failed\", error = 0\n\n... and then googled enough Windows-fu to translate it and run it on\nCI, and saw the known category 2 failure with the plain old\nsynchronous version. 
The good news is that the async version sees the\ngoodbye message:\n\n=== synchronous ===\nsend -> 14, error = 0\nrecv -> \"\", error = 10054\n=== windows overlapped ===\nsend -> 14, error = 0\nasync recv -> \"FATAL: flux capacitor failed\", error = 0\n\nThat's not the same as a torture test for weird timings, and I have\nzero knowledge of the implementation of this stuff, but I currently\ncan't imagine how it could possibly be implemented in any way that\ncould give a different answer.\n\nPerhaps we could figure out a way to use that API to simulate\nsynchronous recv() built on top of that stuff, but I think a more\nsatisfying use of our time and energy would be to redesign all our\nnetworking code to do cross-platform AIO. I think that will mostly\ncome down to a bunch of network buffer management redesign work.\nAnyway, I don't have anything concrete there, I just wanted to share\nthis observation.\n\n[1] https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures#Miscellaneous_tests_fail_on_Windows_due_to_a_connection_closed_before_receiving_a_final_error_message\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-08-31%2007%3A54%3A58\n[3] https://github.com/postgres/postgres/commit/6051857fc953a62db318329c4ceec5f9668fd42a\n[4] https://learn.microsoft.com/en-us/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2\n[5] https://github.com/postgres/postgres/commit/a8458f508a7a441242e148f008293128676df003\n[6] https://github.com/macdice/hello-windows/blob/socket-hacking/test.c\n\n\n",
"msg_date": "Mon, 2 Sep 2024 21:20:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Windows socket problems, interesting connection to AIO"
},
{
"msg_contents": "On Mon, Sep 02, 2024 at 09:20:21PM +1200, Thomas Munro wrote:\n> 2. If a Windows client tries to send() and gets an ECONNRESET/EPIPE\n> error, then the network stack seems to drop already received data, so\n> a following recv() will never see it. In other words, it depends on\n> whether the application-level protocol is strictly request/response\n> based, or has sequence points at which both ends might send(). AFAIK\n> the main consequence for real users is that FATAL recovery conflict,\n> idle termination, etc messages are not delivered to clients, leaving\n> just \"server closed the connection unexpectedly\".\n\n> The new thought I had about the second category of problem is: if you\n> use asynchronous networking APIs, then the kernel *can't* throw your\n> data out, because it doesn't even have it. If the server's FATAL\n> message arrives before the client calls send(), then the data is\n> already written to user space memory and the I/O is marked as\n> complete.\n\nGood point.\n\n> just wanted to share this observation.\n\nThanks for sharing that and the test program.\n\n\n",
"msg_date": "Tue, 10 Sep 2024 15:40:52 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows socket problems, interesting connection to AIO"
}
] |
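The "windows overlapped" output shown above relies on posting the receive asynchronously, so the incoming FATAL message is copied into a user-space buffer before the failing send() gives the network stack a chance to discard it. A minimal sketch of that ordering follows; it assumes an already-connected SOCKET with Winsock initialised and ws2_32 linked in, trims error handling, and is not the linked test.c.

/* Sketch of the overlapped-recv-before-send pattern; link with ws2_32. */
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>

static void
overlapped_recv_demo(SOCKET s)
{
    char        buf[1024];
    WSABUF      wbuf = {(ULONG) sizeof(buf), buf};
    WSAOVERLAPPED ov = {0};
    DWORD       received = 0;
    DWORD       flags = 0;

    ov.hEvent = WSACreateEvent();

    /* Post the receive first: the buffer is now owned by the pending I/O. */
    if (WSARecv(s, &wbuf, 1, NULL, &flags, &ov, NULL) == SOCKET_ERROR &&
        WSAGetLastError() != WSA_IO_PENDING)
    {
        WSACloseEvent(ov.hEvent);
        return;                 /* real code would report the error */
    }

    /* This send may fail once the server has exited abortively... */
    send(s, "hello", 5, 0);

    /* ...but the already-posted receive still completes with the goodbye. */
    if (WSAGetOverlappedResult(s, &ov, &received, TRUE, &flags))
        printf("async recv -> \"%.*s\"\n", (int) received, buf);
    else
        printf("async recv failed, error = %d\n", WSAGetLastError());

    WSACloseEvent(ov.hEvent);
}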
[
{
"msg_contents": "Hi,\n\nWhile reviewing another thread I was looking at the view\n'pg_stats_subscription_stats' view. In particular, I was looking at\nthe order of the \"*_count\" columns of that view.\n\nIMO there is an intuitive/natural ordering for the logical replication\noperations (LR) being counted. For example, LR \"initial tablesync\"\nalways comes before LR \"apply\".\n\nI propose that the columns of the view should also be in this same\nintuitive order: Specifically, \"sync_error_count\" should come before\n\"apply_error_count\" (left-to-right in view, top-to-bottom in docs).\n\nCurrently, they are not arranged that way.\n\nThe view today has only 2 count columns in HEAD, so this proposal\nseems trivial, but there is another patch [2] soon to be pushed, which\nwill add more conflict count columns. As the number of columns\nincreases IMO it becomes more important that each column is where you\nwould intuitively expect to find it.\n\n~\n\nChanges would be needed in several places:\n- docs (doc/src/sgml/monitoring.sgml)\n- function pg_stat_get_subscription_stats (pg_proc.dat)\n- view pg_stat_subscription_stats (src/backend/catalog/system_views.sql)\n- TAP test SELECTs (test/subscription/t/026_stats.pl)\n\n~\n\nThoughts?\n\n======\n[1] docs - https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION-STATS\n[2] stats for conflicts -\nhttps://www.postgresql.org/message-id/flat/OS0PR01MB57160A07BD575773045FC214948F2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 2 Sep 2024 19:24:20 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_stats_subscription_stats order of the '*_count' columns"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPlease find attached a patch to implement $SUBJECT.\n\nWhile pg_stat_io provides cluster-wide I/O statistics, this patch adds a new\npg_my_stat_io view to display \"my\" backend I/O statistics and a new\npg_stat_get_backend_io() function to retrieve the I/O statistics for a given\nbackend pid.\n\nBy having the per backend level of granularity, one could for example identify\nwhich running backend is responsible for most of the reads, most of the extends\nand so on... The pg_my_stat_io view could also be useful to check the\nimpact on the I/O made by some operations, queries,... in the current session.\n\nSome remarks:\n\n- it is split in 2 sub patches: 0001 introducing the necessary changes to provide\nthe pg_my_stat_io view and 0002 to add the pg_stat_get_backend_io() function.\n- the idea of having per backend I/O statistics has already been mentioned in\n[1] by Andres.\n\nSome implementation choices:\n\n- The KIND_IO stats are still \"fixed amount\" ones as the maximum number of\nbackend is fixed.\n- The statistics snapshot is made for the global stats (the aggregated ones) and\nfor my backend stats. The snapshot is not build for all the backend stats (that\ncould be memory expensive depending on the number of max connections and given\nthe fact that PgStat_IO is 16KB long).\n- The above point means that pg_stat_get_backend_io() behaves as if\nstats_fetch_consistency is set to none (each execution re-fetches counters\nfrom shared memory).\n- The above 2 points are also the reasons why the pg_my_stat_io view has been\nadded (as its results takes care of the stats_fetch_consistency setting). I think\nthat makes sense to rely on it in that case, while I'm not sure that would make\na lot of sense to retrieve other's backend I/O stats and taking care of\nstats_fetch_consistency.\n\n\n[1]: https://www.postgresql.org/message-id/20230309003438.rectf7xo7pw5t5cj%40awork3.anarazel.de\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 2 Sep 2024 14:55:52 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "per backend I/O statistics"
},
{
"msg_contents": "At Mon, 2 Sep 2024 14:55:52 +0000, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote in \n> Hi hackers,\n> \n> Please find attached a patch to implement $SUBJECT.\n> \n> While pg_stat_io provides cluster-wide I/O statistics, this patch adds a new\n> pg_my_stat_io view to display \"my\" backend I/O statistics and a new\n> pg_stat_get_backend_io() function to retrieve the I/O statistics for a given\n> backend pid.\n> \n> By having the per backend level of granularity, one could for example identify\n> which running backend is responsible for most of the reads, most of the extends\n> and so on... The pg_my_stat_io view could also be useful to check the\n> impact on the I/O made by some operations, queries,... in the current session.\n> \n> Some remarks:\n> \n> - it is split in 2 sub patches: 0001 introducing the necessary changes to provide\n> the pg_my_stat_io view and 0002 to add the pg_stat_get_backend_io() function.\n> - the idea of having per backend I/O statistics has already been mentioned in\n> [1] by Andres.\n> \n> Some implementation choices:\n> \n> - The KIND_IO stats are still \"fixed amount\" ones as the maximum number of\n> backend is fixed.\n> - The statistics snapshot is made for the global stats (the aggregated ones) and\n> for my backend stats. The snapshot is not build for all the backend stats (that\n> could be memory expensive depending on the number of max connections and given\n> the fact that PgStat_IO is 16KB long).\n> - The above point means that pg_stat_get_backend_io() behaves as if\n> stats_fetch_consistency is set to none (each execution re-fetches counters\n> from shared memory).\n> - The above 2 points are also the reasons why the pg_my_stat_io view has been\n> added (as its results takes care of the stats_fetch_consistency setting). I think\n> that makes sense to rely on it in that case, while I'm not sure that would make\n> a lot of sense to retrieve other's backend I/O stats and taking care of\n> stats_fetch_consistency.\n> \n> \n> [1]: https://www.postgresql.org/message-id/20230309003438.rectf7xo7pw5t5cj%40awork3.anarazel.de\n\nI'm not sure about the usefulness of having the stats only available\nfrom the current session. Since they are stored in shared memory,\nshouldn't we make them accessible to all backends? However, this would\nintroduce permission considerations and could become complex.\n\nWhen I first looked at this patch, my initial thought was whether we\nshould let these stats stay \"fixed.\" The reason why the current\nPGSTAT_KIND_IO is fixed is that there is only one global statistics\nstorage for the entire database. If we have stats for a flexible\nnumber of backends, it would need to be non-fixed, perhaps with the\nentry for INVALID_PROC_NUMBER storing the global I/O stats, I\nsuppose. However, one concern with that approach would be the impact\non performance due to the frequent creation and deletion of stats\nentries caused by high turnover of backends.\n\nJust to be clear, the above comments are not meant to oppose the\ncurrent implementation approach. They are purely for the sake of\ndiscussing comparisons with other possible approaches.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 03 Sep 2024 15:37:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "At Tue, 03 Sep 2024 15:37:49 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> When I first looked at this patch, my initial thought was whether we\n> should let these stats stay \"fixed.\" The reason why the current\n> PGSTAT_KIND_IO is fixed is that there is only one global statistics\n> storage for the entire database. If we have stats for a flexible\n> number of backends, it would need to be non-fixed, perhaps with the\n> entry for INVALID_PROC_NUMBER storing the global I/O stats, I\n> suppose. However, one concern with that approach would be the impact\n> on performance due to the frequent creation and deletion of stats\n> entries caused by high turnover of backends.\n\nAs an additional benefit of this approach, the client can set a\nconnection variable, for example, no_backend_iostats to true, or set\nits inverse variable to false, to restrict memory usage to only the\nrequired backends.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 03 Sep 2024 16:07:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 03:37:49PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 2 Sep 2024 14:55:52 +0000, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote in \n> > Hi hackers,\n> > \n> > Please find attached a patch to implement $SUBJECT.\n> > \n> > While pg_stat_io provides cluster-wide I/O statistics, this patch adds a new\n> > pg_my_stat_io view to display \"my\" backend I/O statistics and a new\n> > pg_stat_get_backend_io() function to retrieve the I/O statistics for a given\n> > backend pid.\n> > \n> \n> I'm not sure about the usefulness of having the stats only available\n> from the current session. Since they are stored in shared memory,\n> shouldn't we make them accessible to all backends?\n\nThanks for the feedback!\n\nThe stats are accessible to all backends thanks to 0002 and the introduction\nof the pg_stat_get_backend_io() function.\n\n> However, this would\n> introduce permission considerations and could become complex.\n\nNot sure that the data exposed here is sensible enough to consider permission\nrestriction.\n\n> When I first looked at this patch, my initial thought was whether we\n> should let these stats stay \"fixed.\" The reason why the current\n> PGSTAT_KIND_IO is fixed is that there is only one global statistics\n> storage for the entire database. If we have stats for a flexible\n> number of backends, it would need to be non-fixed, perhaps with the\n> entry for INVALID_PROC_NUMBER storing the global I/O stats, I\n> suppose. However, one concern with that approach would be the impact\n> on performance due to the frequent creation and deletion of stats\n> entries caused by high turnover of backends.\n>\n\nThe pros of using the fixed amount are:\n\n- less code change (I think as I did not write the non fixed counterpart)\n- probably better performance and less scalabilty impact (in case of high rate\nof backends creation/ deletion)\n\nCons is probably allocating shared memory space that might not be used (\nsizeof(PgStat_IO) is 16392 so that could be a concern for a high number of\nunused connection). OTOH, if a high number of connections is not used that might\nbe worth to reduce the max_connections setting.\n\n\"Conceptually\" speaking, we know what the maximum number of backend is, so I\nthink that using the fixed amount approach makes sense (somehow I think it can\nbe compared to PGSTAT_KIND_SLRU which relies on SLRU_NUM_ELEMENTS).\n\n> Just to be clear, the above comments are not meant to oppose the\n> current implementation approach. They are purely for the sake of\n> discussing comparisons with other possible approaches.\n\nNo problem at all, thanks for your feedback and sharing your thoughts!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Sep 2024 07:21:23 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 04:07:58PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 03 Sep 2024 15:37:49 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > When I first looked at this patch, my initial thought was whether we\n> > should let these stats stay \"fixed.\" The reason why the current\n> > PGSTAT_KIND_IO is fixed is that there is only one global statistics\n> > storage for the entire database. If we have stats for a flexible\n> > number of backends, it would need to be non-fixed, perhaps with the\n> > entry for INVALID_PROC_NUMBER storing the global I/O stats, I\n> > suppose. However, one concern with that approach would be the impact\n> > on performance due to the frequent creation and deletion of stats\n> > entries caused by high turnover of backends.\n> \n> As an additional benefit of this approach, the client can set a\n> connection variable, for example, no_backend_iostats to true, or set\n> its inverse variable to false, to restrict memory usage to only the\n> required backends.\n\nThanks for the feedback!\n\nIf we were to add an on/off switch button, I think I'd vote for a global one\ninstead. Indeed, I see this feature more like an \"Administrator\" one, where\nthe administrator wants to be able to find out which session is reponsible of\nwhat (from an I/O point of view): like being able to anwser \"which session is\ngenerating this massive amount of reads\"?\n\nIf we allow each session to disable the feature then the administrator\nwould lost this ability.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 04:45:24 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "On 2024-Sep-03, Bertrand Drouvot wrote:\n\n> Cons is probably allocating shared memory space that might not be used (\n> sizeof(PgStat_IO) is 16392 so that could be a concern for a high number of\n> unused connection). OTOH, if a high number of connections is not used that might\n> be worth to reduce the max_connections setting.\n\nI was surprised by the size you mention so I went to look for the\ndefinition of that struct:\n\ntypedef struct PgStat_IO\n{\n\tTimestampTz stat_reset_timestamp;\n\tPgStat_BktypeIO stats[BACKEND_NUM_TYPES];\n} PgStat_IO;\n\n(It would be good to have more than zero comments here BTW)\n\nSo what's happening is that struct PgStat_IO stores the data for every\nsingle process type, be it regular backends, backend-like auxiliary\nprocesses, and all other potential postmaster children. So it seems to\nme that storing one of these struct for \"my backend stats\" is wrong: I\nthink you should be using PgStat_BktypeIO instead (or maybe another\nstruct which has a reset timestamp plus BktypeIO, if you care about the\nreset time). That struct is only 1024 bytes long, so it seems much more\nreasonable to have one of those per backend.\n\nAnother way to think about this might be to say that the B_BACKEND\nelement of the PgStat_Snapshot.io array should really be spread out to\nMaxBackends separate elements. This would make it more difficult to\nobtain a total value accumulating ops done by all backends (since it's\nrequire to sum the values of each backend), but it allows you to obtain\nbackend-specific values, including those of remote backends rather than\nonly your own, as Kyotaro suggests.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 5 Sep 2024 15:03:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Thu, Sep 05, 2024 at 03:03:32PM +0200, Alvaro Herrera wrote:\n> On 2024-Sep-03, Bertrand Drouvot wrote:\n> \n> > Cons is probably allocating shared memory space that might not be used (\n> > sizeof(PgStat_IO) is 16392 so that could be a concern for a high number of\n> > unused connection). OTOH, if a high number of connections is not used that might\n> > be worth to reduce the max_connections setting.\n> \n> I was surprised by the size you mention so I went to look for the\n> definition of that struct:\n>\n\nThanks for looking at it!\n\n \n> typedef struct PgStat_IO\n> {\n> \tTimestampTz stat_reset_timestamp;\n> \tPgStat_BktypeIO stats[BACKEND_NUM_TYPES];\n> } PgStat_IO;\n> \n> (It would be good to have more than zero comments here BTW)\n> \n> So what's happening is that struct PgStat_IO stores the data for every\n> single process type, be it regular backends, backend-like auxiliary\n> processes, and all other potential postmaster children.\n\nYeap.\n\n> So it seems to\n> me that storing one of these struct for \"my backend stats\" is wrong: I\n> think you should be using PgStat_BktypeIO instead (or maybe another\n> struct which has a reset timestamp plus BktypeIO, if you care about the\n> reset time). That struct is only 1024 bytes long, so it seems much more\n> reasonable to have one of those per backend.\n\nYeah, that could be an area of improvement. Thanks, I'll look at it.\nCurrently the filtering is done when retrieving the per backend stats but better\nto do it when storing the stats.\n\n> Another way to think about this might be to say that the B_BACKEND\n> element of the PgStat_Snapshot.io array should really be spread out to\n> MaxBackends separate elements. This would make it more difficult to\n> obtain a total value accumulating ops done by all backends (since it's\n> require to sum the values of each backend), but it allows you to obtain\n> backend-specific values, including those of remote backends rather than\n> only your own, as Kyotaro suggests.\n>\n\nOne backend can already see the stats of the other backends thanks to the\npg_stat_get_backend_io() function (that takes a backend pid as parameter)\nthat is introduced in the 0002 sub-patch.\n\nI'll ensure that's still the case with the next version of the patch.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Sep 2024 14:14:47 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Thu, Sep 05, 2024 at 02:14:47PM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Thu, Sep 05, 2024 at 03:03:32PM +0200, Alvaro Herrera wrote:\n> > So it seems to\n> > me that storing one of these struct for \"my backend stats\" is wrong: I\n> > think you should be using PgStat_BktypeIO instead (or maybe another\n> > struct which has a reset timestamp plus BktypeIO, if you care about the\n> > reset time). That struct is only 1024 bytes long, so it seems much more\n> > reasonable to have one of those per backend.\n> \n> Yeah, that could be an area of improvement. Thanks, I'll look at it.\n> Currently the filtering is done when retrieving the per backend stats but better\n> to do it when storing the stats.\n\nI ended up creating (in v2 attached):\n\n\"\ntypedef struct PgStat_Backend_IO\n{\n TimestampTz stat_reset_timestamp;\n BackendType bktype;\n PgStat_BktypeIO stats;\n} PgStat_Backend_IO;\n\"\n\nThe stat_reset_timestamp is there so that one knows when a particular backend\nhad its I/O stats reset. Also the backend type is part of the struct to\nbe able to filter the stats correctly when we display them.\n\nThe struct size is 1040 Bytes and that's much more reasonable than the size\nneeded for per backend I/O stats in v1 (about 16KB).\n\n> One backend can already see the stats of the other backends thanks to the\n> pg_stat_get_backend_io() function (that takes a backend pid as parameter)\n> that is introduced in the 0002 sub-patch.\n\n0002 still provides the pg_stat_get_backend_io() function so that one could\nget the stats of other backends.\n\nExample:\n\npostgres=# select backend_type,object,context,reads,extends,hits from pg_stat_get_backend_io(3779502);\n backend_type | object | context | reads | extends | hits\n----------------+---------------+-----------+-------+---------+--------\n client backend | relation | bulkread | 0 | | 0\n client backend | relation | bulkwrite | 0 | 0 | 0\n client backend | relation | normal | 73 | 2216 | 504674\n client backend | relation | vacuum | 0 | 0 | 0\n client backend | temp relation | normal | 0 | 0 | 0\n(5 rows)\n\ncould be an individual walsender:\n\npostgres=# select pid, backend_type from pg_stat_activity where backend_type = 'walsender';\n pid | backend_type\n---------+--------------\n 3779565 | walsender\n(1 row)\n\npostgres=# select backend_type,object,context,reads,hits from pg_stat_get_backend_io(3779565);\n backend_type | object | context | reads | hits\n--------------+---------------+-----------+-------+------\n walsender | relation | bulkread | 0 | 0\n walsender | relation | bulkwrite | 0 | 0\n walsender | relation | normal | 6 | 48\n walsender | relation | vacuum | 0 | 0\n walsender | temp relation | normal | 0 | 0\n(5 rows)\n\nand so on...\n\nRemarks:\n\n- As stated up-thread, the pg_stat_get_backend_io() behaves as if\nstats_fetch_consistency is set to none (each execution re-fetches counters\nfrom shared memory). 
Indeed, the snapshot is not build in each backend to copy\nall the others backends stats, as 1/ there is no use case (there is no need to\nget others backends I/O statistics while taking care of the stats_fetch_consistency)\nand 2/ that could be memory expensive depending of the number of max connections.\n\n- If we agree on the current proposal then I'll do some refactoring in \npg_stat_get_backend_io(), pg_stat_get_my_io() and pg_stat_get_io() to avoid\nduplicated code (it's not done yet to ease the review).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 6 Sep 2024 15:03:17 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 06, 2024 at 03:03:17PM +0000, Bertrand Drouvot wrote:\n> - If we agree on the current proposal then I'll do some refactoring in \n> pg_stat_get_backend_io(), pg_stat_get_my_io() and pg_stat_get_io() to avoid\n> duplicated code (it's not done yet to ease the review).\n\nPlease find attached v3, mandatory rebase due to fc415edf8c.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 9 Sep 2024 06:29:18 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 06, 2024 at 03:03:17PM +0000, Bertrand Drouvot wrote:\n> The struct size is 1040 Bytes and that's much more reasonable than the size\n> needed for per backend I/O stats in v1 (about 16KB).\n\nOne could think that's a high increase of shared memory usage with a high\nnumber of connections (due to the fact that it's implemented as \"fixed amount\"\nstats based on the max number of backends).\n\nSo out of curiosity, I did some measurements with:\n\ndynamic_shared_memory_type=sysv\nshared_memory_type=sysv\nmax_connections=20000\n\nOn my lab, ipcs -m gives me:\n\n- on master\n\nkey shmid owner perms bytes nattch status\n0x00113a04 51347487 postgres 600 1149394944 6\n0x4bc9f2fa 51347488 postgres 600 4006976 6\n0x46790800 51347489 postgres 600 1048576 2\n\n- with v3 applied\n\nkey shmid owner perms bytes nattch status\n0x00113a04 51347477 postgres 600 1170227200 6\n0x08e04b0a 51347478 postgres 600 4006976 6\n0x74688c9c 51347479 postgres 600 1048576 2\n\nSo, with 20000 max_connections (not advocating that's a reasonable number in \nall the cases), that's a 1170227200 - 1149394944 = 20 MB increase of shared\nmemory (expected with 20K max_connections and the new struct size of 1040 Bytes)\nas compare to master which is 1096 MB. It means that v3 produces about 2% shared\nmemory increase with 20000 max_connections.\n\nOut of curiosity with max_connections=100000, then:\n\n- on master:\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x00113a04 52953134 postgres 600 5053915136 6\n0x37abf5ce 52953135 postgres 600 20006976 6\n0x71c07d5c 52953136 postgres 600 1048576 2\n\n- with v3:\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x00113a04 52953144 postgres 600 5157945344 6\n0x7afb24de 52953145 postgres 600 20006976 6\n0x2695af58 52953146 postgres 600 1048576 2\n\nSo, with 100000 max_connections (not advocating that's a reasonable number in \nall the cases), that's a 5157945344 - 5053915136 = 100 MB increase of shared\nmemory (expected with 100K max_connections and the new struct size of 1040 Bytes)\nas compare to master which is about 4800 MB. It means that v3 produces about\n2% shared memory increase with 100000 max_connections.\n\n\nThen, based on those numbers, I think that the \"fixed amount\" approach is a good\none as 1/ the amount of shared memory increase is \"relatively\" small and 2/ most\nprobably provides performance benefits as compare to the \"non fixed\" approach.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 13 Sep 2024 04:25:46 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nThanks for working on this!\n\nYour patch applies and builds cleanly.\n\nOn Fri, 6 Sept 2024 at 18:03, Bertrand Drouvot\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> - As stated up-thread, the pg_stat_get_backend_io() behaves as if\n> stats_fetch_consistency is set to none (each execution re-fetches counters\n> from shared memory). Indeed, the snapshot is not build in each backend to copy\n> all the others backends stats, as 1/ there is no use case (there is no need to\n> get others backends I/O statistics while taking care of the stats_fetch_consistency)\n> and 2/ that could be memory expensive depending of the number of max connections.\n\nI believe this is the correct approach.\n\nI manually tested your patches, and they work as expected. Here is\nsome feedback:\n\n- The stats_reset column is NULL in both pg_my_stat_io and\npg_stat_get_backend_io() until the first call to reset io statistics.\nWhile I'm not sure if it's necessary, it might be useful to display\nthe more recent of the two times in the stats_reset column: the\nstatistics reset time or the backend creation time.\n\n- The pgstat_reset_io_counter_internal() is called in the\npgstat_shutdown_hook(). This causes the stats_reset column showing the\ntermination time of the old backend when its proc num is reassigned to\na new backend.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 13 Sep 2024 16:45:08 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 13, 2024 at 04:45:08PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> Thanks for working on this!\n> \n> Your patch applies and builds cleanly.\n\nThanks for looking at it!\n\n> On Fri, 6 Sept 2024 at 18:03, Bertrand Drouvot\n> <bertranddrouvot.pg@gmail.com> wrote:\n> >\n> > - As stated up-thread, the pg_stat_get_backend_io() behaves as if\n> > stats_fetch_consistency is set to none (each execution re-fetches counters\n> > from shared memory). Indeed, the snapshot is not build in each backend to copy\n> > all the others backends stats, as 1/ there is no use case (there is no need to\n> > get others backends I/O statistics while taking care of the stats_fetch_consistency)\n> > and 2/ that could be memory expensive depending of the number of max connections.\n> \n> I believe this is the correct approach.\n\nThanks for sharing your thoughts.\n\n> I manually tested your patches, and they work as expected. Here is\n> some feedback:\n> \n> - The stats_reset column is NULL in both pg_my_stat_io and\n> pg_stat_get_backend_io() until the first call to reset io statistics.\n> While I'm not sure if it's necessary, it might be useful to display\n> the more recent of the two times in the stats_reset column: the\n> statistics reset time or the backend creation time.\n\nI'm not sure about that as one can already get the backend \"creation time\"\nthrough pg_stat_activity.backend_start.\n\n> - The pgstat_reset_io_counter_internal() is called in the\n> pgstat_shutdown_hook(). This causes the stats_reset column showing the\n> termination time of the old backend when its proc num is reassigned to\n> a new backend.\n\ndoh! Nice catch, thanks!\n\nAnd also new backends that are not re-using a previous \"existing\" process slot\nare getting the last reset time (if any). So the right place to fix this is in\npgstat_io_init_backend_cb(): done in v4 attached. v4 also sets the reset time to\n0 in pgstat_shutdown_hook() (but that's not necessary though, that's just to be\nright \"conceptually\" speaking).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 13 Sep 2024 16:09:40 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Fri, 13 Sept 2024 at 19:09, Bertrand Drouvot\n<bertranddrouvot.pg@gmail.com> wrote:\n> On Fri, Sep 13, 2024 at 04:45:08PM +0300, Nazir Bilal Yavuz wrote:\n> > - The pgstat_reset_io_counter_internal() is called in the\n> > pgstat_shutdown_hook(). This causes the stats_reset column showing the\n> > termination time of the old backend when its proc num is reassigned to\n> > a new backend.\n>\n> doh! Nice catch, thanks!\n>\n> And also new backends that are not re-using a previous \"existing\" process slot\n> are getting the last reset time (if any). So the right place to fix this is in\n> pgstat_io_init_backend_cb(): done in v4 attached. v4 also sets the reset time to\n> 0 in pgstat_shutdown_hook() (but that's not necessary though, that's just to be\n> right \"conceptually\" speaking).\n\nThanks, I confirm that it is fixed.\n\nYou mentioned some refactoring upthread:\n\nOn Fri, 6 Sept 2024 at 18:03, Bertrand Drouvot\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> - If we agree on the current proposal then I'll do some refactoring in\n> pg_stat_get_backend_io(), pg_stat_get_my_io() and pg_stat_get_io() to avoid\n> duplicated code (it's not done yet to ease the review).\n\nCould we remove pg_stat_get_my_io() completely and use\npg_stat_get_backend_io() with the current backend's pid to get the\ncurrent backend's stats? If you meant the same thing, please don't\nmind it.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 17 Sep 2024 14:52:01 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 17, 2024 at 02:52:01PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Fri, 13 Sept 2024 at 19:09, Bertrand Drouvot\n> <bertranddrouvot.pg@gmail.com> wrote:\n> > On Fri, Sep 13, 2024 at 04:45:08PM +0300, Nazir Bilal Yavuz wrote:\n> > > - The pgstat_reset_io_counter_internal() is called in the\n> > > pgstat_shutdown_hook(). This causes the stats_reset column showing the\n> > > termination time of the old backend when its proc num is reassigned to\n> > > a new backend.\n> >\n> > doh! Nice catch, thanks!\n> >\n> > And also new backends that are not re-using a previous \"existing\" process slot\n> > are getting the last reset time (if any). So the right place to fix this is in\n> > pgstat_io_init_backend_cb(): done in v4 attached. v4 also sets the reset time to\n> > 0 in pgstat_shutdown_hook() (but that's not necessary though, that's just to be\n> > right \"conceptually\" speaking).\n> \n> Thanks, I confirm that it is fixed.\n\nThanks!\n\n> You mentioned some refactoring upthread:\n> \n> On Fri, 6 Sept 2024 at 18:03, Bertrand Drouvot\n> <bertranddrouvot.pg@gmail.com> wrote:\n> >\n> > - If we agree on the current proposal then I'll do some refactoring in\n> > pg_stat_get_backend_io(), pg_stat_get_my_io() and pg_stat_get_io() to avoid\n> > duplicated code (it's not done yet to ease the review).\n> \n> Could we remove pg_stat_get_my_io() completely and use\n> pg_stat_get_backend_io() with the current backend's pid to get the\n> current backend's stats?\n\nThe reason why I keep pg_stat_get_my_io() is because (as mentioned in [1]), the\nstatistics snapshot is build for \"my backend stats\" (means it depends of the\nstats_fetch_consistency value). I can see use cases for that.\n\nOTOH, pg_stat_get_backend_io() behaves as if stats_fetch_consistency is set to\nnone (each execution re-fetches counters from shared memory) (see [2]). Indeed,\nthe snapshot is not build in each backend to copy all the others backends stats,\nas 1/ I think that there is no use case (there is no need to get others backends\nI/O statistics while taking care of the stats_fetch_consistency) and 2/ that\ncould be memory expensive depending of the number of max connections.\n\nSo I think it's better to keep both functions as they behave differently.\n\nThoughts?\n\n> If you meant the same thing, please don't\n> mind it.\n\nWhat I meant to say is to try to remove the duplicate code that we can find in\nthose 3 functions (say creating a new function that would contain the duplicate\ncode and make use of this new function in the 3 others). Did not look at it in\ndetails yet.\n\n[1]: https://www.postgresql.org/message-id/ZtXR%2BCtkEVVE/LHF%40ip-10-97-1-34.eu-west-3.compute.internal\n[2]: https://www.postgresql.org/message-id/ZtsZtaRza9bFFeF8%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Sep 2024 13:07:07 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Tue, 17 Sept 2024 at 16:07, Bertrand Drouvot\n<bertranddrouvot.pg@gmail.com> wrote:\n> On Tue, Sep 17, 2024 at 02:52:01PM +0300, Nazir Bilal Yavuz wrote:\n> > Could we remove pg_stat_get_my_io() completely and use\n> > pg_stat_get_backend_io() with the current backend's pid to get the\n> > current backend's stats?\n>\n> The reason why I keep pg_stat_get_my_io() is because (as mentioned in [1]), the\n> statistics snapshot is build for \"my backend stats\" (means it depends of the\n> stats_fetch_consistency value). I can see use cases for that.\n>\n> OTOH, pg_stat_get_backend_io() behaves as if stats_fetch_consistency is set to\n> none (each execution re-fetches counters from shared memory) (see [2]). Indeed,\n> the snapshot is not build in each backend to copy all the others backends stats,\n> as 1/ I think that there is no use case (there is no need to get others backends\n> I/O statistics while taking care of the stats_fetch_consistency) and 2/ that\n> could be memory expensive depending of the number of max connections.\n>\n> So I think it's better to keep both functions as they behave differently.\n>\n> Thoughts?\n\nYes, that is correct. Sorry, you already had explained it and I had\nagreed with it but I forgot.\n\n> > If you meant the same thing, please don't\n> > mind it.\n>\n> What I meant to say is to try to remove the duplicate code that we can find in\n> those 3 functions (say creating a new function that would contain the duplicate\n> code and make use of this new function in the 3 others). Did not look at it in\n> details yet.\n\nI got it, thanks for the explanation.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 17 Sep 2024 16:47:51 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 17, 2024 at 04:47:51PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Tue, 17 Sept 2024 at 16:07, Bertrand Drouvot\n> <bertranddrouvot.pg@gmail.com> wrote:\n> > So I think it's better to keep both functions as they behave differently.\n> >\n> > Thoughts?\n> \n> Yes, that is correct. Sorry, you already had explained it and I had\n> agreed with it but I forgot.\n\nNo problem at all! (I re-explained because I'm not always 100% sure that my\nexplanations are crystal clear ;-) )\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Sep 2024 13:56:34 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 04:45:24AM +0000, Bertrand Drouvot wrote:\n> On Tue, Sep 03, 2024 at 04:07:58PM +0900, Kyotaro Horiguchi wrote:\n>> As an additional benefit of this approach, the client can set a\n>> connection variable, for example, no_backend_iostats to true, or set\n>> its inverse variable to false, to restrict memory usage to only the\n>> required backends.\n> \n> Thanks for the feedback!\n> \n> If we were to add an on/off switch button, I think I'd vote for a global one\n> instead. Indeed, I see this feature more like an \"Administrator\" one, where\n> the administrator wants to be able to find out which session is reponsible of\n> what (from an I/O point of view): like being able to anwser \"which session is\n> generating this massive amount of reads\"?\n> \n> If we allow each session to disable the feature then the administrator\n> would lost this ability.\n\nHmm, I've been studying this patch, and I am not completely sure to\nagree with this feeling of using fixed-numbered stats, actually, after\nreading the whole and seeing the structure of the patch\n(FLEXIBLE_ARRAY_MEMBER is a new way to handle the fact that we don't\nknow exactly the number of slots we need to know for the\nfixed-numbered stats as MaxBackends may change). If we make these\nkind of stats variable-numbered, does it have to actually involve many\ncreations or removals of the stats entries, though? One point is that\nthe number of entries to know about is capped by max_connections,\nwhich is a PGC_POSTMASTER. That's the same kind of control as\nreplication slots. So one approach would be to reuse entries in the\ndshash and use in the hashing key the number in the procarrays. If a\nnew connection spawns and reuses a slot that was used in the past,\nthen reset all the existing fields and assign its PID.\n\nAnother thing is the consistency of the data that we'd like to keep at\nshutdown. If the connections have a balanced amount of stats shared\namong them, doing decision-making based on them is kind of easy. But\nthat may cause confusion if the activity is unbalanced across the\nsessions. We could also not flush them to disk as an option, but it\nstill seems more useful to me to save this data across restarts if one\ntakes frequent snapshots of the new system view reporting everything,\nso as it is possible to get an idea of the deltas across the snapshots\nfor each connection slot.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2024 12:53:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 01:56:34PM +0000, Bertrand Drouvot wrote:\n> No problem at all! (I re-explained because I'm not always 100% sure that my\n> explanations are crystal clear ;-) )\n\nWe've discussed a bit this patch offline, but after studying the patch\nI doubt that this part is a good idea:\n\n+\t/* has to be at the end due to FLEXIBLE_ARRAY_MEMBER */\n+\tPgStatShared_IO io;\n } PgStat_ShmemControl;\n\nWe are going to be in trouble if we introduce a second member in this\nroutine that has a FLEXIBLE_ARRAY_MEMBER, because PgStat_ShmemControl\nrelies on the fact that all its members after deterministic offset\npositions in this structure. So this lacks flexibility. This choice\nis caused by the fact that we don't exactly know the number of\nbackends because that's controlled by the PGC_POSTMASTER GUC\nmax_connections so the size of the structure would be undefined.\n\nThere is a parallel with replication slot statistics here, where we\nsave the replication slot data in the dshash based on their index\nnumber in shmem. Hence, wouldn't it be better to do like replication\nslot stats, where we use the dshash and a key based on the procnum of\neach backend or auxiliary process (ProcNumber in procnumber.h)? If at\nrestart max_connections is lower than what was previously used, we\ncould discard entries that would not fit anymore into the charts.\nThis is probably not something that happens often, so I'm not really\nworried about forcing the removal of these stats depending on how the\nupper-bound of ProcNumber evolves.\n\nSo, using a new kind of ID and making this kind variable-numbered may\nease the implementation quite a bit, while avoiding any extensibility\nissues with the shmem portion of the patch if these are\nfixed-numbered. The reporting of these stats comes down to having a\nparallel with pgstat_count_io_op_time(), but to make sure that the\nstats are split by connection slot number rather than the current\nsplit of pg_stat_io. All its callers are in localbuf.c, bufmgr.c and\nmd.c, living with some duplication in the code paths to gather the\nstats may be OK.\n\npg_stat_get_my_io() is based on a O(n^3). IOOBJECT_NUM_TYPES is\nfortunately low, still that's annoying.\n\nThis would rely on the fact that we would use the ProcNumber for the\ndshash key, and this information is not provided in pg_stat_activity.\nPerhaps we should add this information in pg_stat_activity so as it\nwould be easily possible to do joins with a SQL function that returns\na SRF with all the stats associated with a given connection slot\n(auxiliary or backend process)? That would be a separate patch.\nPerhaps that's even something that has popped up for the work with\nthreading (did not follow this part closely, TBH)?\n\nThe active PIDs of the live sessions are not stored in the active\nstats, why not? Perhaps that's useless anyway if we expose the\nProcNumbers in pg_stat_activity and make the stats available with a\nsingle function taking in input a ProcNumber. Just mentioning an\noption to consider.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2024 13:26:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: per backend I/O statistics"
}
] |
[
{
"msg_contents": "There are only a few (not necessarily thread-safe) strerror() calls in \nthe backend; most other potential users use %m in a format string.\n\nIn two cases, the reason for using strerror() was that we needed to \nprint the error message twice, and so errno has to be reset for the \nsecond time. And/or some of this code is from before snprintf() gained \n%m support. This can easily be simplified now.\n\nThe other is a workaround for OpenSSL that we have already handled in an \nequivalent way in libpq.\n\n(And there is one in postmaster.c, but that one is before forking.)\n\nI think we can apply these patches now to check this off the list of \nnot-thread-safe functions to check.",
"msg_date": "Mon, 2 Sep 2024 21:45:03 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "thread-safety: strerror_r()"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> I think we can apply these patches now to check this off the list of \n> not-thread-safe functions to check.\n\n+1 for the first patch. I'm less happy with\n\n-\tstatic char errbuf[36];\n+\tstatic char errbuf[128];\n\nAs a minor point, shouldn't this be\n\n+\tstatic char errbuf[PG_STRERROR_R_BUFLEN];\n\nBut the bigger issue is that the use of a static buffer makes\nthis not thread-safe, so having it use strerror_r to fill that\nbuffer is just putting lipstick on a pig. If we really want\nto make this thread-ready, we need to adopt the approach used\nin libpq's fe-secure-openssl.c, where callers have to free the\nbuffer later. Or maybe we could just palloc the result, and\ntrust that it's not in a long-lived context?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Sep 2024 15:56:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: thread-safety: strerror_r()"
},
{
"msg_contents": "On 02.09.24 21:56, Tom Lane wrote:\n> Peter Eisentraut <peter@eisentraut.org> writes:\n>> I think we can apply these patches now to check this off the list of\n>> not-thread-safe functions to check.\n> \n> +1 for the first patch. I'm less happy with\n> \n> -\tstatic char errbuf[36];\n> +\tstatic char errbuf[128];\n> \n> As a minor point, shouldn't this be\n> \n> +\tstatic char errbuf[PG_STRERROR_R_BUFLEN];\n> \n> But the bigger issue is that the use of a static buffer makes\n> this not thread-safe, so having it use strerror_r to fill that\n> buffer is just putting lipstick on a pig. If we really want\n> to make this thread-ready, we need to adopt the approach used\n> in libpq's fe-secure-openssl.c, where callers have to free the\n> buffer later. Or maybe we could just palloc the result, and\n> trust that it's not in a long-lived context?\n\nOk, I have committed the first patch. We can think about the best \nsolution for the second issue when we get to the business end of all \nthis. The current situation doesn't really prevent making any progress \non that.\n\n\n\n",
"msg_date": "Wed, 4 Sep 2024 14:54:09 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: thread-safety: strerror_r()"
}
] |
[
{
"msg_contents": "Hi all,\n\nCurrently, the backend-level initialization of pgstats happens in\npgstat_initialize(), where we are using a shortcut for the WAL stats,\nwith pgstat_init_wal().\n\nI'd like to propose to add a callback for that, so as in-core or\ncustom pgstats kinds can assign custom actions when initializing a\nbackend. The use-case I had for this one are pretty close to what we\ndo for WAL, where we would rely on a difference of state depending on\nwhat may have existed before reaching the initialization path. So\nthis can offer more precision. Another case, perhaps less\ninteresting, is to be able to initialize some static backend state.\n\nI wanted to get that sent for the current commit fest, but did not get\nback in time to it. Anyway, here it is. This gives the simple patch\nattached.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 3 Sep 2024 10:52:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Add callback in pgstats for backend initialization "
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 03, 2024 at 10:52:20AM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> Currently, the backend-level initialization of pgstats happens in\n> pgstat_initialize(), where we are using a shortcut for the WAL stats,\n> with pgstat_init_wal().\n> \n> I'd like to propose to add a callback for that, so as in-core or\n> custom pgstats kinds can assign custom actions when initializing a\n> backend. The use-case I had for this one are pretty close to what we\n> do for WAL, where we would rely on a difference of state depending on\n> what may have existed before reaching the initialization path. So\n> this can offer more precision. Another case, perhaps less\n> interesting, is to be able to initialize some static backend state.\n\nI think the proposal makes sense and I can see the use cases too, so +1.\n\n> This gives the simple patch\n> attached.\n\nShould we add a few words about this new callback (and the others in\nPgStat_KindInfo while at it) in the \"Custom Cumulative Statistics\" section of\nxfunc.sgml?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Sep 2024 05:00:54 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
},
{
"msg_contents": "On Tue, Sep 03, 2024 at 05:00:54AM +0000, Bertrand Drouvot wrote:\n> Should we add a few words about this new callback (and the others in\n> PgStat_KindInfo while at it) in the \"Custom Cumulative Statistics\" section of\n> xfunc.sgml?\n\nNot sure if it is worth having. This adds extra maintenance and\nthere's already a mention of src/include/utils/pgstat_internal.h\ntelling where the callbacks are described.\n--\nMichael",
"msg_date": "Tue, 3 Sep 2024 14:58:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
},
{
"msg_contents": "At Tue, 3 Sep 2024 05:00:54 +0000, Bertrand Drouvot <bertranddrouvot.pg@gmail.com> wrote in \n> Hi,\n> \n> On Tue, Sep 03, 2024 at 10:52:20AM +0900, Michael Paquier wrote:\n> > Hi all,\n> > \n> > Currently, the backend-level initialization of pgstats happens in\n> > pgstat_initialize(), where we are using a shortcut for the WAL stats,\n> > with pgstat_init_wal().\n> > \n> > I'd like to propose to add a callback for that, so as in-core or\n> > custom pgstats kinds can assign custom actions when initializing a\n> > backend. The use-case I had for this one are pretty close to what we\n> > do for WAL, where we would rely on a difference of state depending on\n> > what may have existed before reaching the initialization path. So\n> > this can offer more precision. Another case, perhaps less\n> > interesting, is to be able to initialize some static backend state.\n> \n> I think the proposal makes sense and I can see the use cases too, so +1.\n\n+1, too.\n\nThe name \"init_backend\" makes it sound like the function initializes\nthe backend. backend_init might be a better choice, but I'm not sure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Sep 2024 14:15:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 02:15:43PM +0900, Kyotaro Horiguchi wrote:\n> The name \"init_backend\" makes it sound like the function initializes\n> the backend. backend_init might be a better choice, but I'm not sure.\n\nWe (kind of) tend to prefer $verb_$thing-or-action_cb for the name of\nthe callbacks, which is why I chose this naming. If you feel strongly\nabout \"backend_init_cb\", that's also fine by me; you are the original\nauthor of this code area with Andres.\n--\nMichael",
"msg_date": "Wed, 4 Sep 2024 15:04:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
},
{
"msg_contents": "At Wed, 4 Sep 2024 15:04:09 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Sep 04, 2024 at 02:15:43PM +0900, Kyotaro Horiguchi wrote:\n> > The name \"init_backend\" makes it sound like the function initializes\n> > the backend. backend_init might be a better choice, but I'm not sure.\n> \n> We (kind of) tend to prefer $verb_$thing-or-action_cb for the name of\n> the callbacks, which is why I chose this naming. If you feel strongly\n> about \"backend_init_cb\", that's also fine by me; you are the original\n> author of this code area with Andres.\n\nMore accurately, I believe this applies when the sentence follows a\nverb-object structure. In this case, the function’s meaning is “pgstat\ninitialization on backend,” which seems somewhat different from the\npolicy you mentioned. However, it could also be interpreted as\n“initialize backend status related to pgstat.” Either way, I’m okay\nwith the current name if you are, based on the discussion above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Sep 2024 18:42:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 06:42:33PM +0900, Kyotaro Horiguchi wrote:\n> More accurately, I believe this applies when the sentence follows a\n> verb-object structure. In this case, the function’s meaning is “pgstat\n> initialization on backend,” which seems somewhat different from the\n> policy you mentioned. However, it could also be interpreted as\n> “initialize backend status related to pgstat.” Either way, I’m okay\n> with the current name if you are, based on the discussion above.\n\nNot sure which one is better than the other, TBH. Either way, we\nstill have a full release cycle to finalize that, and I am OK to\nswitch the name to something else if there are more folks in favor of\none, the other or even something entirely different but descriptive\nenough.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2024 12:41:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
},
{
"msg_contents": "On Thu, Sep 05, 2024 at 12:41:09PM +0900, Michael Paquier wrote:\n> Not sure which one is better than the other, TBH. Either way, we\n> still have a full release cycle to finalize that, and I am OK to\n> switch the name to something else if there are more folks in favor of\n> one, the other or even something entirely different but descriptive\n> enough.\n\nI've used init_backend_cb at the end in 1b373aed20e6. A change would\nbe OK by me, if needed.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2024 16:08:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callback in pgstats for backend initialization"
}
] |
[
{
"msg_contents": "In [1] there was a short discussion about removing no-op\nPlaceHolderVars. This thread is for continuing that discussion.\n\nWe may insert PlaceHolderVars when pulling up a subquery that is\nwithin the nullable side of an outer join: if subquery pullup needs to\nreplace a subquery-referencing Var that has nonempty varnullingrels\nwith an expression that is not simply a Var, we may need to wrap the\nreplacement expression into a PlaceHolderVar. However, if the outer\njoin above is later reduced to an inner join, the PHVs would become\nno-ops with no phnullingrels bits.\n\nI think it's always desirable to remove these no-op PlaceHolderVars\nbecause they can constrain optimization opportunities. The issue\nreported in [1] shows that they can block subexpression folding, which\ncan prevent an expression from being matched to an index. I provided\nanother example in that thread which shows that no-op PlaceHolderVars\ncan force join orders, potentially forcing us to resort to nestloop\njoin in some cases.\n\nAs explained in the comment of remove_nulling_relids_mutator, PHVs are\nused in some cases to enforce separate identity of subexpressions. In\nother cases, however, it should be safe to remove a PHV if its\nphnullingrels becomes empty.\n\nAttached is a WIP patch that marks PHVs that need to be kept because\nthey are serving to isolate subexpressions, and removes all other PHVs\nin remove_nulling_relids_mutator if their phnullingrels bits become\nempty. One problem with it is that a PHV's contained expression may\nnot have been fully preprocessed. For example if the PHV is used as a\nqual clause, its contained expression is not processed for AND/OR\nflattening, which could trigger the Assert in make_restrictinfo that\nthe clause shouldn't be an AND clause.\n\nFor example:\n\ncreate table t (a boolean, b boolean);\n\nselect * from t t1 left join\n lateral (select (t1.a and t1.b) as x, * from t t2) s on true\nwhere s.x;\n\nThe other problem with this is that it breaks 17 test cases. We need\nto modify them one by one to ensure that they still test what they are\nsupposed to, which is not a trivial task.\n\nBefore delving into these two problems, I'd like to know whether this\noptimization is worthwhile, and whether I'm going in the right\ndirection.\n\n[1] https://postgr.es/m/CAMbWs4-9dYrF44pkpkFrPoLXc_YH15DL8xT8J-oHggp_WvsLLA@mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Tue, 3 Sep 2024 10:53:21 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove no-op PlaceHolderVars"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Attached is a WIP patch that marks PHVs that need to be kept because\n> they are serving to isolate subexpressions, and removes all other PHVs\n> in remove_nulling_relids_mutator if their phnullingrels bits become\n> empty. One problem with it is that a PHV's contained expression may\n> not have been fully preprocessed.\n\nYeah. I've been mulling over how we could do this, and the real\nproblem is that the expression containing the PHV *has* been fully\npreprocessed by the time we get to outer join strength reduction\n(cf. file header comment in prepjointree.c). Simply dropping the PHV\ncan break various invariants that expression preprocessing is supposed\nto establish, such as \"no RelabelType directly above another\" or \"no\nAND clause directly above another\". I haven't thought of a reliable\nway to fix that short of re-running eval_const_expressions afterwards,\nwhich seems like a mighty expensive answer. We could try to make\nremove_nulling_relids_mutator preserve all these invariants, but\nkeeping it in sync with what eval_const_expressions does seems like\na maintenance nightmare.\n\n> The other problem with this is that it breaks 17 test cases.\n\nI've not looked into that, but yeah, it would need some tedious\nanalysis to verify whether the changes are OK.\n\n> Before delving into these two problems, I'd like to know whether this\n> optimization is worthwhile, and whether I'm going in the right\n> direction.\n\nI believe this is an area worth working on. I've been wondering\nwhether it'd be better to handle the expression-identity problems by\nintroducing an \"expression wrapper\" node type that is distinct from\nPHV and has the sole purpose of demarcating a subexpression we don't\nwant to be folded into its parent. I think that doesn't really move\nthe football in terms of fixing the problems mentioned above, but\nperhaps it's conceptually cleaner than adding a bool field to PHV.\n\nAnother thought is that grouping sets provide one of the big reasons\nwhy we need an \"expression wrapper\" or equivalent functionality.\nSo maybe we should try to move forward on your other patch to change\nthe representation of those before we spend too much effort here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Sep 2024 23:31:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove no-op PlaceHolderVars"
},
{
"msg_contents": "On Tue, Sep 3, 2024 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. I've been mulling over how we could do this, and the real\n> problem is that the expression containing the PHV *has* been fully\n> preprocessed by the time we get to outer join strength reduction\n> (cf. file header comment in prepjointree.c). Simply dropping the PHV\n> can break various invariants that expression preprocessing is supposed\n> to establish, such as \"no RelabelType directly above another\" or \"no\n> AND clause directly above another\".\n\nYeah, this is what the problem is.\n\n> I haven't thought of a reliable\n> way to fix that short of re-running eval_const_expressions afterwards,\n> which seems like a mighty expensive answer.\n\nThis seems likely to result in a lot of duplicate work.\n\n> We could try to make\n> remove_nulling_relids_mutator preserve all these invariants, but\n> keeping it in sync with what eval_const_expressions does seems like\n> a maintenance nightmare.\n\nYeah, and it seems that we also need to know the EXPRKIND code for the\nexpression containing the to-be-dropped PHV in remove_nulling_relids\nto know how we should preserve all these invariants, which seems to\nrequire a lot of code changes.\n\nLooking at the routines run in preprocess_expression, most of them\nrecurse into PlaceHolderVars and preprocess the contained expressions.\nThe two exceptions are canonicalize_qual and make_ands_implicit. I\nwonder if we can modify them to also recurse into PlaceHolderVars.\nWill this resolve our problem here?\n\n> I believe this is an area worth working on. I've been wondering\n> whether it'd be better to handle the expression-identity problems by\n> introducing an \"expression wrapper\" node type that is distinct from\n> PHV and has the sole purpose of demarcating a subexpression we don't\n> want to be folded into its parent. I think that doesn't really move\n> the football in terms of fixing the problems mentioned above, but\n> perhaps it's conceptually cleaner than adding a bool field to PHV.\n>\n> Another thought is that grouping sets provide one of the big reasons\n> why we need an \"expression wrapper\" or equivalent functionality.\n> So maybe we should try to move forward on your other patch to change\n> the representation of those before we spend too much effort here.\n\nHmm, in the case of grouping sets, we have a similar situation where\nwe do not want a subexpression that is part of grouping items to be\nfolded into its parent, because otherwise we may fail to match these\nsubexpressions to lower target items.\n\nFor grouping sets, this problem is much easier to resolve, because\nwe've already replaced such subexpressions with Vars referencing the\nGROUP RTE. As a result, during expression preprocessing, these\nsubexpressions will not be folded into their parents.\n\nAn ensuing effect of this approach is that a HAVING clause may contain\nexpressions that are not fully preprocessed if they are part of\ngrouping items. This is not an issue as long as the clause remains in\nHAVING, because these expressions will be matched to lower target\nitems in setrefs.c. However, if the clause is moved or copied into\nWHERE, we need to re-preprocess these expressions. This should not be\ntoo expensive, as we only need to re-preprocess the HAVING clauses\nthat are moved to WHERE, not the entire tree. The v13 patch in that\nthread implements this approach.\n\nThanks\nRichard\n\n\n",
"msg_date": "Tue, 3 Sep 2024 16:50:07 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove no-op PlaceHolderVars"
}
] |
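For readers who want a concrete picture of the case discussed in the thread above, here is a minimal SQL sketch of how a no-op PlaceHolderVar can arise; the table, column, and index names are invented for illustration, and whether the expression index is actually matched (or not) depends on the server version and planner details:

-- expression index we would like the qual below to be matched against
create table t2 (a int, b int);
create index on t2 ((a + b));
create table t1 (a int);

-- The subquery output (a + b) is not a simple Var, so when the subquery is
-- pulled up from the nullable side of the left join, the reference to sub.s
-- above the join may be wrapped in a PlaceHolderVar to carry its
-- varnullingrels.  The strict WHERE clause then lets the planner reduce the
-- left join to an inner join, leaving a PHV with empty phnullingrels -- the
-- "no-op" PHV that can keep (a + b) from being matched to the index above.
explain
select *
from t1 left join (select a, b, a + b as s from t2) sub on t1.a = sub.a
where sub.s > 10;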
[
{
"msg_contents": "Hi all,\n\nThe last TODO item I had in my bucket about the generalization of\npgstats is the option to a better control on the flush of the stats\ndepending on the kind for fixed-numbered stats. Currently, this is\ncontrolled by pgstat_report_stat(), that includes special handling for\nWAL, IO and SLRU stats, with two generic concepts:\n- Check if there are pending entries, allowing a fast-path exit.\n- Do the actual flush, with a recheck on pending entries.\n\nSLRU and IO control that with one variable each, and WAL uses a\nroutine for the same called pgstat_have_pending_wal(). Please find\nattached a patch to generalize the concept, with two new callbacks\nthat can be used for fixed-numbered stats. SLRU, IO and WAL are\nswitched to use these (the two pgstat_flush_* routines have been kept\non purpose). This brings some clarity in the code, by making\nhave_iostats and have_slrustats static in their respective files. The\ntwo pgstat_flush_* wrappers do not need a boolean as return result.\n\nRunning Postgres on scissors with a read-only workload that does not\ntrigger stats, I was not able to see a difference in runtime, but that\nwas on my own laptop, and I am planning to do more measurements on a\nbigger machine.\n\nThis is in the same line of thoughts as the recent thread about the\nbackend init callback, generalizing more the whole facility:\nhttps://www.postgresql.org/message-id/ZtZr1K4PLdeWclXY@paquier.xyz\n\nLike the other one, I wanted to send that a few days ago, but well,\nlife likes going its own ways sometimes.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 3 Sep 2024 13:48:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "At Tue, 3 Sep 2024 13:48:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> The last TODO item I had in my bucket about the generalization of\n> pgstats is the option to a better control on the flush of the stats\n> depending on the kind for fixed-numbered stats. Currently, this is\n> controlled by pgstat_report_stat(), that includes special handling for\n> WAL, IO and SLRU stats, with two generic concepts:\n> - Check if there are pending entries, allowing a fast-path exit.\n> - Do the actual flush, with a recheck on pending entries.\n> \n> SLRU and IO control that with one variable each, and WAL uses a\n> routine for the same called pgstat_have_pending_wal(). Please find\n> attached a patch to generalize the concept, with two new callbacks\n> that can be used for fixed-numbered stats. SLRU, IO and WAL are\n> switched to use these (the two pgstat_flush_* routines have been kept\n> on purpose). This brings some clarity in the code, by making\n> have_iostats and have_slrustats static in their respective files. The\n> two pgstat_flush_* wrappers do not need a boolean as return result.\n\nThe generalization sounds good to me, and hiding the private flags in\nprivate places also seems good.\n\nRegarding pgstat_io_flush_cb, I think it no longer needs to check the\nhave_iostats variable, and that check should be moved to the new\npgstat_flush_io(). The same applies to pgstat_wal_flush_cb() (and\npgstat_flush_wal()). As for pgstat_slru_flush_cb, it simply doesn't\nneed the check anymore.\n\n> Running Postgres on scissors with a read-only workload that does not\n> trigger stats, I was not able to see a difference in runtime, but that\n> was on my own laptop, and I am planning to do more measurements on a\n> bigger machine.\n\nI don't think it matters, since the actual flushing occurs at\n10-second intervals during busy times. We could change the check from\na callback to a variable, but that would just shift the function call\noverhead to a more frequently called side.\n\n> This is in the same line of thoughts as the recent thread about the\n> backend init callback, generalizing more the whole facility:\n> https://www.postgresql.org/message-id/ZtZr1K4PLdeWclXY@paquier.xyz\n> \n> Like the other one, I wanted to send that a few days ago, but well,\n> life likes going its own ways sometimes.\n\nreards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Sep 2024 14:05:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 04, 2024 at 02:05:46PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 3 Sep 2024 13:48:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > Hi all,\n> > \n> > The last TODO item I had in my bucket about the generalization of\n> > pgstats is the option to a better control on the flush of the stats\n> > depending on the kind for fixed-numbered stats. Currently, this is\n> > controlled by pgstat_report_stat(), that includes special handling for\n> > WAL, IO and SLRU stats, with two generic concepts:\n> > - Check if there are pending entries, allowing a fast-path exit.\n> > - Do the actual flush, with a recheck on pending entries.\n> > \n> > SLRU and IO control that with one variable each, and WAL uses a\n> > routine for the same called pgstat_have_pending_wal(). Please find\n> > attached a patch to generalize the concept, with two new callbacks\n> > that can be used for fixed-numbered stats. SLRU, IO and WAL are\n> > switched to use these (the two pgstat_flush_* routines have been kept\n> > on purpose). This brings some clarity in the code, by making\n> > have_iostats and have_slrustats static in their respective files. The\n> > two pgstat_flush_* wrappers do not need a boolean as return result.\n> \n> The generalization sounds good to me, and hiding the private flags in\n> private places also seems good.\n\n+1 on the idea.\n\n> Regarding pgstat_io_flush_cb, I think it no longer needs to check the\n> have_iostats variable, and that check should be moved to the new\n> pgstat_flush_io(). The same applies to pgstat_wal_flush_cb() (and\n> pgstat_flush_wal()). As for pgstat_slru_flush_cb, it simply doesn't\n> need the check anymore.\n\nAgree. Another option could be to add only one callback (the flush_fixed_cb one)\nand get rid of the have_fixed_pending_cb one. Then add an extra parameter to the\nflush_fixed_cb that would only check if there is pending stats (without\nflushing them). I think those 2 callbacks are highly related that's why I \nthink we could \"merge\" them, thoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 05:28:56 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 05:28:56AM +0000, Bertrand Drouvot wrote:\n> On Wed, Sep 04, 2024 at 02:05:46PM +0900, Kyotaro Horiguchi wrote:\n>> The generalization sounds good to me, and hiding the private flags in\n>> private places also seems good.\n> \n> +1 on the idea.\n\nThanks for the feedback.\n\n>> Regarding pgstat_io_flush_cb, I think it no longer needs to check the\n>> have_iostats variable, and that check should be moved to the new\n>> pgstat_flush_io(). The same applies to pgstat_wal_flush_cb() (and\n>> pgstat_flush_wal()). As for pgstat_slru_flush_cb, it simply doesn't\n>> need the check anymore.\n> \n> Agree. Another option could be to add only one callback (the flush_fixed_cb one)\n> and get rid of the have_fixed_pending_cb one. Then add an extra parameter to the\n> flush_fixed_cb that would only check if there is pending stats (without\n> flushing them). I think those 2 callbacks are highly related that's why I \n> think we could \"merge\" them, thoughts?\n\nI would still value the shortcut that we can use if there is no\nactivity to avoid the clock check with GetCurrentTimestamp(), though,\nwhich is why I'd rather stick with two callbacks as that can lead to a\nmuch cheaper path.\n\nAnyway, I am not sure to follow your point about removing the boolean\nchecks in the flush callbacks. It's surely always a better to never\ncall LWLockAcquire() or LWLockConditionalAcquire() if we don't have\nto, and we would go through the whole flush dance as long as we detect\nsome stats activity in at least one stats kind. I mean, that's cheap\nenough to keep around.\n--\nMichael",
"msg_date": "Wed, 4 Sep 2024 15:12:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 04, 2024 at 03:12:37PM +0900, Michael Paquier wrote:\n> On Wed, Sep 04, 2024 at 05:28:56AM +0000, Bertrand Drouvot wrote:\n> > On Wed, Sep 04, 2024 at 02:05:46PM +0900, Kyotaro Horiguchi wrote:\n> >> The generalization sounds good to me, and hiding the private flags in\n> >> private places also seems good.\n> > \n> > +1 on the idea.\n> \n> Thanks for the feedback.\n> \n> >> Regarding pgstat_io_flush_cb, I think it no longer needs to check the\n> >> have_iostats variable, and that check should be moved to the new\n> >> pgstat_flush_io(). The same applies to pgstat_wal_flush_cb() (and\n> >> pgstat_flush_wal()). As for pgstat_slru_flush_cb, it simply doesn't\n> >> need the check anymore.\n> > \n> > Agree. Another option could be to add only one callback (the flush_fixed_cb one)\n> > and get rid of the have_fixed_pending_cb one. Then add an extra parameter to the\n> > flush_fixed_cb that would only check if there is pending stats (without\n> > flushing them). I think those 2 callbacks are highly related that's why I \n> > think we could \"merge\" them, thoughts?\n> \n> I would still value the shortcut that we can use if there is no\n> activity to avoid the clock check with GetCurrentTimestamp(),\n\nAgree. The idea was to add an additional parameter (say \"check_only\") to the\nflush_fixed_cb. If this parameter is set to true then the flush_fixed_cb would\ndo nothing (no flush at all) but return a boolean that would indicate if there\nis pending stats. In case it returns false, then we could avoid the clock check.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 06:27:43 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "At Wed, 4 Sep 2024 15:12:37 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Sep 04, 2024 at 05:28:56AM +0000, Bertrand Drouvot wrote:\n> > On Wed, Sep 04, 2024 at 02:05:46PM +0900, Kyotaro Horiguchi wrote:\n> >> The generalization sounds good to me, and hiding the private flags in\n> >> private places also seems good.\n> > \n> > +1 on the idea.\n> \n> Thanks for the feedback.\n> \n> >> Regarding pgstat_io_flush_cb, I think it no longer needs to check the\n> >> have_iostats variable, and that check should be moved to the new\n> >> pgstat_flush_io(). The same applies to pgstat_wal_flush_cb() (and\n> >> pgstat_flush_wal()). As for pgstat_slru_flush_cb, it simply doesn't\n> >> need the check anymore.\n> > \n> > Agree. Another option could be to add only one callback (the flush_fixed_cb one)\n> > and get rid of the have_fixed_pending_cb one. Then add an extra parameter to the\n> > flush_fixed_cb that would only check if there is pending stats (without\n> > flushing them). I think those 2 callbacks are highly related that's why I \n> > think we could \"merge\" them, thoughts?\n> \n> I would still value the shortcut that we can use if there is no\n> activity to avoid the clock check with GetCurrentTimestamp(), though,\n> which is why I'd rather stick with two callbacks as that can lead to a\n> much cheaper path.\n\nYeah, I was wrong in this point. \n\n> Anyway, I am not sure to follow your point about removing the boolean\n> checks in the flush callbacks. It's surely always a better to never\n> call LWLockAcquire() or LWLockConditionalAcquire() if we don't have\n> to, and we would go through the whole flush dance as long as we detect\n> some stats activity in at least one stats kind. I mean, that's cheap\n> enough to keep around.\n\nI doubt that overprotection is always better, but in this case, it's\nnot overprotection at all. The flush callbacks are called\nunconditionally once we decide to flush anything. Sorry for the noise.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 04 Sep 2024 18:37:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 06:37:01PM +0900, Kyotaro Horiguchi wrote:\n> I doubt that overprotection is always better, but in this case, it's\n> not overprotection at all. The flush callbacks are called\n> unconditionally once we decide to flush anything. Sorry for the noise.\n\nYes, it's intended to be a fastpath, with checks happening if we have\nno data pending for any variable-numbered stats.\n--\nMichael",
"msg_date": "Mon, 9 Sep 2024 11:17:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 06:27:43AM +0000, Bertrand Drouvot wrote:\n> Agree. The idea was to add an additional parameter (say \"check_only\") to the\n> flush_fixed_cb. If this parameter is set to true then the flush_fixed_cb would\n> do nothing (no flush at all) but return a boolean that would indicate if there\n> is pending stats. In case it returns false, then we could avoid the clock check.\n\nHmm. It's the same thing in terms of logic, but I am not really\nconvinced that it is a good idea to mix a code path for a sanity check\nwith a code path dealing with the action, particularly considering\nthat there is the nowait option to think about in the flush callback\nwhich would require one to think about 4 possible states rather than\ntwo possibilities. So that's more error-prone for extension\ndevelopers, IMO.\n\nAt the end, I have used my original suggestions for the callbacks. If\nthere are voices in favor of something different, we still have a good\nchunk of the development and beta cycle for that.\n--\nMichael",
"msg_date": "Mon, 9 Sep 2024 11:20:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add callbacks for fixed-numbered stats flush in pgstats"
}
] |
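To make the division of labor between the two callbacks concrete, here is a rough standalone C sketch modeled on the have_iostats pattern mentioned in the thread above; the PgStat_MyPendingStats struct and the pgstat_my_* names are invented, the shared-memory locking is stubbed out, and the exact callback signatures are whatever the committed PgStat_KindInfo members end up being:

#include <stdbool.h>

/* invented per-backend pending counters for an imaginary fixed-numbered kind */
typedef struct PgStat_MyPendingStats
{
	long		foo_count;
	long		bar_count;
} PgStat_MyPendingStats;

static PgStat_MyPendingStats pending_mystats;
static bool have_mystats = false;	/* file-private, like have_iostats */

/* "have pending" callback: cheap and lock-free, used for the fast-path exit */
static bool
pgstat_my_have_pending_cb(void)
{
	return have_mystats;
}

/* "flush" callback: does the real work once some stats kind reported activity */
static bool
pgstat_my_flush_cb(bool nowait)
{
	if (!have_mystats)
		return false;			/* nothing to do */

	/*
	 * The real callback would lock the shared stats entry here, giving up
	 * (and returning true, i.e. "work remains") when nowait is set and the
	 * lock is busy; that part is stubbed out in this sketch.
	 */
	(void) nowait;

	pending_mystats.foo_count = 0;
	pending_mystats.bar_count = 0;
	have_mystats = false;
	return false;
}

Keeping the "have pending" side this cheap is what preserves the fast-path exit in pgstat_report_stat(), i.e. the GetCurrentTimestamp() shortcut that the thread wants to retain.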
[
{
"msg_contents": "Hi\n\nI try to install devel package\n\n[student@localhost ~]$ LANG=C sudo dnf install postgresql15-devel\nLast metadata expiration check: 0:23:04 ago on Tue Sep 3 06:08:29 2024.\nError:\n Problem: cannot install the best candidate for the job\n - nothing provides perl(IPC::Run) needed by\npostgresql15-devel-15.8-1PGDG.rhel8.x86_64 from pgdg15\n\nRegards\n\nPavel\n\nHiI try to install devel package[student@localhost ~]$ LANG=C sudo dnf install postgresql15-develLast metadata expiration check: 0:23:04 ago on Tue Sep 3 06:08:29 2024.Error: Problem: cannot install the best candidate for the job - nothing provides perl(IPC::Run) needed by postgresql15-devel-15.8-1PGDG.rhel8.x86_64 from pgdg15RegardsPavel",
"msg_date": "Tue, 3 Sep 2024 12:34:03 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "broken devel package for rocky linux"
},
{
"msg_contents": "Hi Pavel,\n\nOn Tue, 2024-09-03 at 12:34 +0200, Pavel Stehule wrote:\n> Hi\n> \n> I try to install devel package\n> \n> [student@localhost ~]$ LANG=C sudo dnf install postgresql15-devel\n> Last metadata expiration check: 0:23:04 ago on Tue Sep 3 06:08:29\n> 2024.\n> Error:\n> Problem: cannot install the best candidate for the job\n> - nothing provides perl(IPC::Run) needed by\n> postgresql15-devel-15.8-1PGDG.rhel8.x86_64 from pgdg15\n\n dnf config-manager --enable powertools\n\nFWIW packaging discussions are in the pgsql-pkg-yum mailing list, not\nhere.\n\nCheers,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Tue, 03 Sep 2024 13:38:22 +0300",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>",
"msg_from_op": false,
"msg_subject": "Re: broken devel package for rocky linux"
},
{
"msg_contents": "Hi\n\nút 3. 9. 2024 v 12:38 odesílatel Devrim Gündüz <devrim@gunduz.org> napsal:\n\n> Hi Pavel,\n>\n> On Tue, 2024-09-03 at 12:34 +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > I try to install devel package\n> >\n> > [student@localhost ~]$ LANG=C sudo dnf install postgresql15-devel\n> > Last metadata expiration check: 0:23:04 ago on Tue Sep 3 06:08:29\n> > 2024.\n> > Error:\n> > Problem: cannot install the best candidate for the job\n> > - nothing provides perl(IPC::Run) needed by\n> > postgresql15-devel-15.8-1PGDG.rhel8.x86_64 from pgdg15\n>\n> dnf config-manager --enable powertools\n>\n\nworks perfectly, thank you\n\n\n>\n> FWIW packaging discussions are in the pgsql-pkg-yum mailing list, not\n> here.\n>\n\nI am sorry\n\nRegards\n\nPavel\n\n\n>\n> Cheers,\n> --\n> Devrim Gündüz\n> Open Source Solution Architect, PostgreSQL Major Contributor\n> Twitter: @DevrimGunduz , @DevrimGunduzTR\n>\n\nHiút 3. 9. 2024 v 12:38 odesílatel Devrim Gündüz <devrim@gunduz.org> napsal:Hi Pavel,\n\nOn Tue, 2024-09-03 at 12:34 +0200, Pavel Stehule wrote:\n> Hi\n> \n> I try to install devel package\n> \n> [student@localhost ~]$ LANG=C sudo dnf install postgresql15-devel\n> Last metadata expiration check: 0:23:04 ago on Tue Sep 3 06:08:29\n> 2024.\n> Error:\n> Problem: cannot install the best candidate for the job\n> - nothing provides perl(IPC::Run) needed by\n> postgresql15-devel-15.8-1PGDG.rhel8.x86_64 from pgdg15\n\n dnf config-manager --enable powertoolsworks perfectly, thank you \n\nFWIW packaging discussions are in the pgsql-pkg-yum mailing list, not\nhere.I am sorryRegardsPavel \n\nCheers,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Tue, 3 Sep 2024 12:40:52 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken devel package for rocky linux"
},
{
"msg_contents": "Hello Devrim,\n\n\n<comments-inline>\n\n\nOn Tue, Sep 3, 2024 at 3:38 PM Devrim Gündüz <devrim@gunduz.org> wrote:\n\n> Hi Pavel,\n>\n> On Tue, 2024-09-03 at 12:34 +0200, Pavel Stehule wrote:\n> > Hi\n> >\n> > I try to install devel package\n> >\n> > [student@localhost ~]$ LANG=C sudo dnf install postgresql15-devel\n> > Last metadata expiration check: 0:23:04 ago on Tue Sep 3 06:08:29\n> > 2024.\n> > Error:\n> > Problem: cannot install the best candidate for the job\n> > - nothing provides perl(IPC::Run) needed by\n> > postgresql15-devel-15.8-1PGDG.rhel8.x86_64 from pgdg15\n>\n> dnf config-manager --enable powertools\n>\n\nThis information is missing on the PostgreSQL packages installation\nconfiguration page. I think it's helpful if we provide the information on\nthe relevant page.\nhttps://www.postgresql.org/download/linux/redhat/\n\n[image: image.png]\n\n\n\n>\n> FWIW packaging discussions are in the pgsql-pkg-yum mailing list, not\n> here.\n>\n> Cheers,\n> --\n> Devrim Gündüz\n> Open Source Solution Architect, PostgreSQL Major Contributor\n> Twitter: @DevrimGunduz , @DevrimGunduzTR\n>",
"msg_date": "Tue, 3 Sep 2024 15:48:27 +0500",
"msg_from": "Zaid Shabbir <zaidshabbir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: broken devel package for rocky linux"
},
{
"msg_contents": "Hi,\n\nOn Tue, 2024-09-03 at 15:48 +0500, Zaid Shabbir wrote:\n> This information is missing on the PostgreSQL packages installation\n> configuration page. I think it's helpful if we provide the information\n> on\n> the relevant page.\n> https://www.postgresql.org/download/linux/redhat/\n\nThat page is not intended for -devel subpackage users. Please read\ndetails instructions here:\n\nhttps://yum.postgresql.org/howto/\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Tue, 03 Sep 2024 13:55:12 +0300",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>",
"msg_from_op": false,
"msg_subject": "Re: broken devel package for rocky linux"
},
{
"msg_contents": "Got it, Thank you.\n\nOn Tue, Sep 3, 2024 at 3:55 PM Devrim Gündüz <devrim@gunduz.org> wrote:\n\n> Hi,\n>\n> On Tue, 2024-09-03 at 15:48 +0500, Zaid Shabbir wrote:\n> > This information is missing on the PostgreSQL packages installation\n> > configuration page. I think it's helpful if we provide the information\n> > on\n> > the relevant page.\n> > https://www.postgresql.org/download/linux/redhat/\n>\n> That page is not intended for -devel subpackage users. Please read\n> details instructions here:\n>\n> https://yum.postgresql.org/howto/\n>\n> Regards,\n> --\n> Devrim Gündüz\n> Open Source Solution Architect, PostgreSQL Major Contributor\n> Twitter: @DevrimGunduz , @DevrimGunduzTR\n>\n\nGot it, Thank you.On Tue, Sep 3, 2024 at 3:55 PM Devrim Gündüz <devrim@gunduz.org> wrote:Hi,\n\nOn Tue, 2024-09-03 at 15:48 +0500, Zaid Shabbir wrote:\n> This information is missing on the PostgreSQL packages installation\n> configuration page. I think it's helpful if we provide the information\n> on\n> the relevant page.\n> https://www.postgresql.org/download/linux/redhat/\n\nThat page is not intended for -devel subpackage users. Please read\ndetails instructions here:\n\nhttps://yum.postgresql.org/howto/\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Wed, 4 Sep 2024 09:49:26 +0500",
"msg_from": "Zaid Shabbir <zaidshabbir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: broken devel package for rocky linux"
}
] |
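A small aside for anyone hitting the same perl(IPC::Run) dependency error on an EL9-based distribution (Rocky Linux 9, AlmaLinux 9, RHEL 9): the PowerTools repository was renamed there, so, as far as I know, the equivalent of the command above should be:

# CRB is the EL9 counterpart of EL8's "powertools" repository
sudo dnf config-manager --enable crb
sudo dnf install postgresql15-devel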
[
{
"msg_contents": "Hi,\r\n\r\nI ran into an issue (previously discussed[1]; quoting Andres out of \r\ncontext that not addressing it then would \"[a]ll but guarantee that \r\nwe'll have this discussion again\"[2]) when trying to build a very large \r\nexpression index that did not fit within the page boundary. The \r\nreal-world use case was related to a vector search technique where I \r\nwanted to use binary quantization based on the relationship between a \r\nconstant vector (the average at a point-in-time across the entire data \r\nset) and the target vector[3][4]. An example:\r\n\r\nCREATE INDEX ON embeddings\r\n USING hnsw((quantization_func(embedding, $VECTOR)) bit_hamming_ops);\r\n\r\nHowever, I ran into the issue in[1], where pg_index was identified as \r\ncatalog that is missing a toast table, even though `indexprs` is marked \r\nfor extended storage. Providing a very simple reproducer in psql below:\r\n\r\n----\r\nCREATE TABLE def (id int);\r\nSELECT array_agg(n) b FROM generate_series(1,10_000) n \\gset\r\nCREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS bool \r\nAS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;\r\nCREATE INDEX ON def (vec_quantizer(id, :'b'));\r\n\r\nERROR: row is too big: size 29448, maximum size 8160\r\n---\r\n\r\nThis can come up with vector searches as vectors can be quite large - \r\nthe case I was testing involved a 1536-dim floating point vector (~6KB), \r\nand the node parse tree pushed past the page boundary by about 2KB.\r\n\r\nOne could argue that pgvector or an extension can build in capabilities \r\nto handle quantization internally without requiring the user to provide \r\na source vector (pgvectorscale does this). However, this also limits \r\nflexibility to users, as they may want to bring their own quantization \r\nfunctions to vector searches, e.g., as different quantization techniques \r\nemerge, or if a particular technique is more suitable for a person's \r\ndataset.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/flat/84ddff04-f122-784b-b6c5-3536804495f8%40joeconway.com\r\n[2] \r\nhttps://www.postgresql.org/message-id/20180720000356.5zkhvfpsqswngyob%40alap3.anarazel.de\r\n[3] https://github.com/pgvector/pgvector\r\n[4] https://jkatz05.com/post/postgres/pgvector-scalar-binary-quantization/",
"msg_date": "Tue, 3 Sep 2024 12:35:42 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Tue, Sep 03, 2024 at 12:35:42PM -0400, Jonathan S. Katz wrote:\n> However, I ran into the issue in[1], where pg_index was identified as\n> catalog that is missing a toast table, even though `indexprs` is marked for\n> extended storage. Providing a very simple reproducer in psql below:\n\nThanks to commit 96cdeae, only a few catalogs remain that are missing TOAST\ntables: pg_attribute, pg_class, pg_index, pg_largeobject, and\npg_largeobject_metadata. I've attached a short patch to add one for\npg_index, which resolves the issue cited here. This passes \"check-world\"\nand didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I\nhaven't spent too much time investigating possible circularity issues, but\nI'll note that none of the system indexes presently use the indexprs and\nindpred columns.\n\nIf we do want to proceed with adding a TOAST table to pg_index, IMHO it\nwould be better to do it sooner than later so that it has plenty of time to\nbake.\n\n-- \nnathan",
"msg_date": "Wed, 4 Sep 2024 13:40:49 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST\n> tables: pg_attribute, pg_class, pg_index, pg_largeobject, and\n> pg_largeobject_metadata. I've attached a short patch to add one for\n> pg_index, which resolves the issue cited here. This passes \"check-world\"\n> and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I\n> haven't spent too much time investigating possible circularity issues, but\n> I'll note that none of the system indexes presently use the indexprs and\n> indpred columns.\n\nYeah, the possibility of circularity seems like the main hazard, but\nI agree it's unlikely that the entries for system indexes could ever\nneed out-of-line storage. There are many other things that would have\nto be improved before a system index could use indexprs or indpred.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Sep 2024 15:08:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On 9/4/24 3:08 PM, Tom Lane wrote:\r\n> Nathan Bossart <nathandbossart@gmail.com> writes:\r\n>> Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST\r\n>> tables: pg_attribute, pg_class, pg_index, pg_largeobject, and\r\n>> pg_largeobject_metadata. I've attached a short patch to add one for\r\n>> pg_index, which resolves the issue cited here. This passes \"check-world\"\r\n>> and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I\r\n>> haven't spent too much time investigating possible circularity issues, but\r\n>> I'll note that none of the system indexes presently use the indexprs and\r\n>> indpred columns.\r\n> \r\n> Yeah, the possibility of circularity seems like the main hazard, but\r\n> I agree it's unlikely that the entries for system indexes could ever\r\n> need out-of-line storage. There are many other things that would have\r\n> to be improved before a system index could use indexprs or indpred.\r\n\r\nAgreed on the unlikeliness of that, certainly in the short-to-mid term. \r\nThe impetus driving this is dealing with a data type that can be quite \r\nlarge, and it's unlikely system catalogs will be dealing with anything \r\nof that nature, or requiring very long expressions that couldn't be \r\nencapsulated in a different way.\r\n\r\nJust to be fair, in the case I presented there's an argument that what \r\nI'm trying to do is fairly inefficient for an expression, given I'm \r\npassing around an additional several KB payload into the query. However, \r\nwe'd likely have to do that anyway for this problem space, and the \r\noverall performance hit is negligible compared to the search relevancy \r\nboost.\r\n\r\nI'm working on a much more robust test, but using a known 10MM 768-dim \r\ndataset and two C-based quantization functions (one using the \r\nexpression), I got a 3% relevancy boost with a 2% reduction in latency \r\nand throughput. On some other known datasets, I was able to improve \r\nrelevancy 40% or more, though given they were initially returning with \r\n0% relevancy in some cases, it's not fair to compare performance numbers.\r\n\r\nThere are other ways to solve the problem as well, but allowing for the \r\nlarger expression gives users more choices in how they can approach it.\r\n\r\nJonathan",
"msg_date": "Wed, 4 Sep 2024 15:20:33 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 03:20:33PM -0400, Jonathan S. Katz wrote:\n> On 9/4/24 3:08 PM, Tom Lane wrote:\n>> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> > Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST\n>> > tables: pg_attribute, pg_class, pg_index, pg_largeobject, and\n>> > pg_largeobject_metadata. I've attached a short patch to add one for\n>> > pg_index, which resolves the issue cited here. This passes \"check-world\"\n>> > and didn't fail for a few ad hoc tests (e.g., VACUUM FULL on pg_index). I\n>> > haven't spent too much time investigating possible circularity issues, but\n>> > I'll note that none of the system indexes presently use the indexprs and\n>> > indpred columns.\n>> \n>> Yeah, the possibility of circularity seems like the main hazard, but\n>> I agree it's unlikely that the entries for system indexes could ever\n>> need out-of-line storage. There are many other things that would have\n>> to be improved before a system index could use indexprs or indpred.\n> \n> Agreed on the unlikeliness of that, certainly in the short-to-mid term. The\n> impetus driving this is dealing with a data type that can be quite large,\n> and it's unlikely system catalogs will be dealing with anything of that\n> nature, or requiring very long expressions that couldn't be encapsulated in\n> a different way.\n\nAny objections to committing this? I've still been unable to identify any\nbreakage, and adding it now would give us ~1 year of testing before it'd be\navailable in a GA release. Perhaps we should at least add something to\nmisc_sanity.sql that verifies no system indexes are using pg_index's TOAST\ntable.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 18 Sep 2024 09:47:45 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Sep 04, 2024 at 03:20:33PM -0400, Jonathan S. Katz wrote:\n>> On 9/4/24 3:08 PM, Tom Lane wrote:\n>>> Nathan Bossart <nathandbossart@gmail.com> writes:\n>>>> Thanks to commit 96cdeae, only a few catalogs remain that are missing TOAST\n>>>> tables: pg_attribute, pg_class, pg_index, pg_largeobject, and\n>>>> pg_largeobject_metadata. I've attached a short patch to add one for\n>>>> pg_index, which resolves the issue cited here.\n\n> Any objections to committing this?\n\nNope.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2024 10:54:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 10:54:56AM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Any objections to committing this?\n> \n> Nope.\n\nCommitted. I waffled on whether to add a test for system indexes that used\npg_index's varlena columns, but I ended up leaving it out. I've attached\nit here in case anyone thinks we should add it.\n\n-- \nnathan",
"msg_date": "Wed, 18 Sep 2024 14:52:53 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "Hello Nathan,\n\n18.09.2024 22:52, Nathan Bossart wrote:\n> Committed. I waffled on whether to add a test for system indexes that used\n> pg_index's varlena columns, but I ended up leaving it out. I've attached\n> it here in case anyone thinks we should add it.\n\nI've discovered that Jonathan's initial script:\nCREATE TABLE def (id int);\nSELECT array_agg(n) b FROM generate_series(1,10_000) n \\gset\nCREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS bool\nAS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;\nCREATE INDEX ON def (vec_quantizer(id, :'b'));\n\ncompleted with:\nDROP INDEX CONCURRENTLY def_vec_quantizer_idx;\n\ntriggers an assertion failure:\nTRAP: failed Assert(\"HaveRegisteredOrActiveSnapshot()\"), File: \"toast_internals.c\", Line: 668, PID: 3723372\n\nwith the following stack trace:\nExceptionalCondition at assert.c:52:13\ninit_toast_snapshot at toast_internals.c:670:2\ntoast_delete_datum at toast_internals.c:429:60\ntoast_tuple_cleanup at toast_helper.c:303:30\nheap_toast_insert_or_update at heaptoast.c:335:9\nheap_update at heapam.c:3752:14\nsimple_heap_update at heapam.c:4210:11\nCatalogTupleUpdate at indexing.c:324:2\nindex_set_state_flags at index.c:3522:2\nindex_concurrently_set_dead at index.c:1848:2\nindex_drop at index.c:2286:3\ndoDeletion at dependency.c:1362:5\ndeleteOneObject at dependency.c:1279:12\ndeleteObjectsInList at dependency.c:229:3\nperformMultipleDeletions at dependency.c:393:2\nRemoveRelations at tablecmds.c:1594:2\nExecDropStmt at utility.c:2008:4\n...\n\nThis class of assert failures is not new, see e. g., bugs #13809, #18127,\nbut this concrete instance (with index_set_state_flags()) emerged with\nb52c4fc3c and may be worth fixing while on it...\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Nathan,\n\n 18.09.2024 22:52, Nathan Bossart wrote:\n\n\nCommitted. I waffled on whether to add a test for system indexes that used\npg_index's varlena columns, but I ended up leaving it out. I've attached\nit here in case anyone thinks we should add it.\n\n\n\n I've discovered that Jonathan's initial script:\n CREATE TABLE def (id int);\n SELECT array_agg(n) b FROM generate_series(1,10_000) n \\gset\n CREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS\n bool \n AS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;\n CREATE INDEX ON def (vec_quantizer(id, :'b'));\n\n completed with:\n DROP INDEX CONCURRENTLY def_vec_quantizer_idx;\n\n triggers an assertion failure:\n TRAP: failed Assert(\"HaveRegisteredOrActiveSnapshot()\"), File:\n \"toast_internals.c\", Line: 668, PID: 3723372\n\n with the following stack trace:\n ExceptionalCondition at assert.c:52:13\n init_toast_snapshot at toast_internals.c:670:2\n toast_delete_datum at toast_internals.c:429:60\n toast_tuple_cleanup at toast_helper.c:303:30\n heap_toast_insert_or_update at heaptoast.c:335:9\n heap_update at heapam.c:3752:14\n simple_heap_update at heapam.c:4210:11\n CatalogTupleUpdate at indexing.c:324:2\n index_set_state_flags at index.c:3522:2\n index_concurrently_set_dead at index.c:1848:2\n index_drop at index.c:2286:3\n doDeletion at dependency.c:1362:5\n deleteOneObject at dependency.c:1279:12\n deleteObjectsInList at dependency.c:229:3\n performMultipleDeletions at dependency.c:393:2\n RemoveRelations at tablecmds.c:1594:2\n ExecDropStmt at utility.c:2008:4\n ...\n\n This class of assert failures is not new, see e. 
g., bugs #13809,\n #18127,\n but this concrete instance (with index_set_state_flags()) emerged\n with\n b52c4fc3c and may be worth fixing while on it...\n\n Best regards,\n Alexander",
"msg_date": "Thu, 19 Sep 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Thu, Sep 19, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:\n> I've discovered that Jonathan's initial script:\n> CREATE TABLE def (id int);\n> SELECT array_agg(n) b FROM generate_series(1,10_000) n \\gset\n> CREATE OR REPLACE FUNCTION vec_quantizer (a int, b int[]) RETURNS bool\n> AS $$ SELECT true $$ LANGUAGE SQL IMMUTABLE;\n> CREATE INDEX ON def (vec_quantizer(id, :'b'));\n> \n> completed with:\n> DROP INDEX CONCURRENTLY def_vec_quantizer_idx;\n> \n> triggers an assertion failure:\n> TRAP: failed Assert(\"HaveRegisteredOrActiveSnapshot()\"), File: \"toast_internals.c\", Line: 668, PID: 3723372\n\nHa, that was fast. The attached patch seems to fix the assertion failures.\nIt's probably worth checking if any of the adjacent code paths are\naffected, too.\n\n-- \nnathan",
"msg_date": "Thu, 19 Sep 2024 13:36:36 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Thu, Sep 19, 2024 at 01:36:36PM -0500, Nathan Bossart wrote:\n> +\t\tPushActiveSnapshot(GetTransactionSnapshot());\n> \n> \t\t/*\n> \t\t * Now we must wait until no running transaction could be using the\n> @@ -2283,8 +2284,10 @@ index_drop(Oid indexId, bool concurrent, bool concurrent_lock_mode)\n> \t\t * Again, commit the transaction to make the pg_index update visible\n> \t\t * to other sessions.\n> \t\t */\n> +\t\tPopActiveSnapshot();\n> \t\tCommitTransactionCommand();\n> \t\tStartTransactionCommand();\n> +\t\tPushActiveSnapshot(GetTransactionSnapshot());\n> \n> \t\t/*\n> \t\t * Wait till every transaction that saw the old index state has\n> @@ -2387,6 +2390,8 @@ index_drop(Oid indexId, bool concurrent, bool concurrent_lock_mode)\n> \t{\n> \t\tUnlockRelationIdForSession(&heaprelid, ShareUpdateExclusiveLock);\n> \t\tUnlockRelationIdForSession(&indexrelid, ShareUpdateExclusiveLock);\n> +\n> +\t\tPopActiveSnapshot();\n> \t}\n> }\n\nPerhaps the reason why these snapshots are pushed should be documented\nwith a comment?\n--\nMichael",
"msg_date": "Fri, 20 Sep 2024 08:16:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "Hello Nathan,\n\n19.09.2024 21:36, Nathan Bossart wrote:\n> On Thu, Sep 19, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:\n>> completed with:\n>> DROP INDEX CONCURRENTLY def_vec_quantizer_idx;\n>>\n>> triggers an assertion failure:\n>> TRAP: failed Assert(\"HaveRegisteredOrActiveSnapshot()\"), File: \"toast_internals.c\", Line: 668, PID: 3723372\n> Ha, that was fast. The attached patch seems to fix the assertion failures.\n> It's probably worth checking if any of the adjacent code paths are\n> affected, too.\n>\n\nThank you for your attention to that issue!\n\nI've found another two paths to reach that condition:\nCREATE INDEX CONCURRENTLY ON def (vec_quantizer(id, :'b'));\nERROR: cannot fetch toast data without an active snapshot\n\nREINDEX INDEX CONCURRENTLY def_vec_quantizer_idx;\n(or REINDEX TABLE CONCURRENTLY def;)\nTRAP: failed Assert(\"HaveRegisteredOrActiveSnapshot()\"), File: \"toast_internals.c\", Line: 668, PID: 2934502\nExceptionalCondition at assert.c:52:13\ninit_toast_snapshot at toast_internals.c:670:2\ntoast_delete_datum at toast_internals.c:429:60\ntoast_tuple_cleanup at toast_helper.c:303:30\nheap_toast_insert_or_update at heaptoast.c:335:9\nheap_update at heapam.c:3752:14\nsimple_heap_update at heapam.c:4210:11\nCatalogTupleUpdate at indexing.c:324:2\nindex_concurrently_swap at index.c:1649:2\nReindexRelationConcurrently at indexcmds.c:4270:3\nReindexIndex at indexcmds.c:2962:1\nExecReindex at indexcmds.c:2884:4\nProcessUtilitySlow at utility.c:1570:22\n...\n\nPerhaps it would make sense to check all CatalogTupleUpdate(pg_index, ...)\ncalls (I've found 10 such instances, but haven't checked them yet).\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 20 Sep 2024 07:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 08:16:24AM +0900, Michael Paquier wrote:\n> Perhaps the reason why these snapshots are pushed should be documented\n> with a comment?\n\nDefinitely. I'll add those once we are more confident that we've\nidentified all the bugs.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 20 Sep 2024 11:50:02 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 07:00:00AM +0300, Alexander Lakhin wrote:\n> I've found another two paths to reach that condition:\n> CREATE INDEX CONCURRENTLY ON def (vec_quantizer(id, :'b'));\n> ERROR:� cannot fetch toast data without an active snapshot\n> \n> REINDEX INDEX CONCURRENTLY def_vec_quantizer_idx;\n> (or REINDEX TABLE CONCURRENTLY def;)\n> TRAP: failed Assert(\"HaveRegisteredOrActiveSnapshot()\"), File: \"toast_internals.c\", Line: 668, PID: 2934502\n\nHere's a (probably naive) attempt at fixing these, too. I'll give each\npath a closer look once it feels like we've identified all the bugs.\n\n> Perhaps it would make sense to check all CatalogTupleUpdate(pg_index, ...)\n> calls (I've found 10 such instances, but haven't checked them yet).\n\nIndeed.\n\n-- \nnathan",
"msg_date": "Fri, 20 Sep 2024 11:51:50 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "Hello Nathan,\n\n20.09.2024 19:51, Nathan Bossart wrote:\n> Here's a (probably naive) attempt at fixing these, too. I'll give each\n> path a closer look once it feels like we've identified all the bugs.\n\nThank you for the updated patch!\n\nI tested it with two code modifications (1st is to make each created\nexpression index TOASTed (by prepending 1M of spaces to the indexeprs\nvalue) and 2nd to make each created index an expression index (by\nmodifying index_elem_options in gram.y) — both modifications are kludgy so\nI don't dare to publish them) and found no other snapshot-related issues\nduring `make check-world`.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 23 Sep 2024 16:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Mon, Sep 23, 2024 at 04:00:00PM +0300, Alexander Lakhin wrote:\n> I tested it with two code modifications (1st is to make each created\n> expression index TOASTed (by prepending 1M of spaces to the indexeprs\n> value) and 2nd to make each created index an expression index (by\n> modifying index_elem_options in gram.y) - both modifications are kludgy so\n> I don't dare to publish them) and found no other snapshot-related issues\n> during `make check-world`.\n\nThanks. Here is an updated patch with tests and comments. I've also moved\nthe calls to PushActiveSnapshot()/PopActiveSnapshot() to surround only the\nsection of code where the snapshot is needed. Besides being more similar\nin style to other fixes I found, I think this is safer because much of this\ncode is cautious to avoid deadlocks. For example, DefineIndex() has the\nfollowing comment:\n\n\t/*\n\t * The snapshot subsystem could still contain registered snapshots that\n\t * are holding back our process's advertised xmin; in particular, if\n\t * default_transaction_isolation = serializable, there is a transaction\n\t * snapshot that is still active. The CatalogSnapshot is likewise a\n\t * hazard. To ensure no deadlocks, we must commit and start yet another\n\t * transaction, and do our wait before any snapshot has been taken in it.\n\t */\n\nI carefully inspected all the code paths this patch touches, and I think\nI've got all the details right, but I would be grateful if someone else\ncould take a look.\n\n-- \nnathan",
"msg_date": "Mon, 23 Sep 2024 10:50:21 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Mon, Sep 23, 2024 at 10:50:21AM -0500, Nathan Bossart wrote:\n> I carefully inspected all the code paths this patch touches, and I think\n> I've got all the details right, but I would be grateful if someone else\n> could take a look.\n\nNo objections from here with putting the snapshots pops and pushes\noutside the inner routines of reindex/drop concurrently, meaning that\nReindexRelationConcurrently(), DefineIndex() and index_drop() are fine\nto do these operations.\n\nLooking at the patch, we could just add an assertion based on\nActiveSnapshotSet() in index_set_state_flags().\n\nActually, thinking more... Could it be better to have some more\nsanity checks in the stack outside the toast code for catalogs with\ntoast tables? For example, I could imagine adding a check in\nCatalogTupleUpdate() so as all catalog accessed that have a toast \nrelation require an active snapshot. That would make checks more\naggressive, because we would not need any toast data in a catalog to\nmake sure that there is a snapshot set. This strikes me as something\nwe could do better to improve the detection of failures like the one\nreported by Alexander when updating catalog tuples as this can be\ntriggered each time we do a CatalogTupleUpdate() when dirtying a\ncatalog tuple. The idea is then to have something before the\nHaveRegisteredOrActiveSnapshot() in the toast internals, for catalogs,\nand we would not require toast data to detect problems.\n--\nMichael",
"msg_date": "Tue, 24 Sep 2024 13:21:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 01:21:45PM +0900, Michael Paquier wrote:\n> On Mon, Sep 23, 2024 at 10:50:21AM -0500, Nathan Bossart wrote:\n>> I carefully inspected all the code paths this patch touches, and I think\n>> I've got all the details right, but I would be grateful if someone else\n>> could take a look.\n> \n> No objections from here with putting the snapshots pops and pushes\n> outside the inner routines of reindex/drop concurrently, meaning that\n> ReindexRelationConcurrently(), DefineIndex() and index_drop() are fine\n> to do these operations.\n\nGreat. I plan to push 0001 shortly.\n\n> Actually, thinking more... Could it be better to have some more\n> sanity checks in the stack outside the toast code for catalogs with\n> toast tables? For example, I could imagine adding a check in\n> CatalogTupleUpdate() so as all catalog accessed that have a toast \n> relation require an active snapshot. That would make checks more\n> aggressive, because we would not need any toast data in a catalog to\n> make sure that there is a snapshot set. This strikes me as something\n> we could do better to improve the detection of failures like the one\n> reported by Alexander when updating catalog tuples as this can be\n> triggered each time we do a CatalogTupleUpdate() when dirtying a\n> catalog tuple. The idea is then to have something before the\n> HaveRegisteredOrActiveSnapshot() in the toast internals, for catalogs,\n> and we would not require toast data to detect problems.\n\nI gave this a try and, unsurprisingly, found a bunch of other problems. I\nhastily hacked together the attached patch that should fix all of them, but\nI'd still like to comb through the code a bit more. The three catalogs\nwith problems are pg_replication_origin, pg_subscription, and\npg_constraint. pg_contraint has had a TOAST table for a while, and I don't\nthink it's unheard of for conbin to be large, so this one is probably worth\nfixing. pg_subscription hasn't had its TOAST table for quite as long, but\npresumably subpublications could be large enough to require out-of-line\nstorage. pg_replication_origin, however, only has one varlena column:\nroname. Three out of the seven problem areas involve\npg_replication_origin, but AFAICT that'd only ever be a problem if the name\nof your replication origin requires out-of-line storage. So... maybe we\nshould just remove pg_replication_origin's TOAST table instead...\n\n-- \nnathan",
"msg_date": "Tue, 24 Sep 2024 14:26:08 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 02:26:08PM -0500, Nathan Bossart wrote:\n> I gave this a try and, unsurprisingly, found a bunch of other problems. I\n> hastily hacked together the attached patch that should fix all of them, but\n> I'd still like to comb through the code a bit more. The three catalogs\n> with problems are pg_replication_origin, pg_subscription, and\n> pg_constraint.\n\nRegression tests don't blow up after this patch and the reindex parts.\n\n> pg_contraint has had a TOAST table for a while, and I don't\n> think it's unheard of for conbin to be large, so this one is probably worth\n> fixing.\n\nAhh. That's the tablecmds.c part for the partition detach.\n\n> pg_subscription hasn't had its TOAST table for quite as long, but\n> presumably subpublications could be large enough to require out-of-line\n> storage. pg_replication_origin, however, only has one varlena column:\n> roname. Three out of the seven problem areas involve\n> pg_replication_origin, but AFAICT that'd only ever be a problem if the name\n> of your replication origin requires out-of-line storage. So... maybe we\n> should just remove pg_replication_origin's TOAST table instead...\n\nI'd rather keep it, FWIW. Contrary to pg_authid it does not imply\nproblems at the same scale because we would have access to the toast\nrelation in all the code paths with logical workers or table syncs.\nThe other one was at early authentication stages.\n\n+\t/*\n+\t * If we might need TOAST access, make sure the caller has set up a valid\n+\t * snapshot.\n+\t */\n+\tAssert(HaveRegisteredOrActiveSnapshot() ||\n+\t\t !OidIsValid(heapRel->rd_rel->reltoastrelid) ||\n+\t\t !IsNormalProcessingMode());\n+\n\nI didn't catch that we could just reuse the opened Relation in these\npaths and check for reltoastrelid. Nice.\n\nIt sounds to me that we should be much more proactive in detecting\nthese failures and add something like that on HEAD. That's cheap\nenough. As the checks are the same for all these code paths, perhaps\njust hide them behind a local macro to reduce the duplication?\n\nNot the responsibility of this patch, but the business with\nclear_subscription_skip_lsn() with its conditional transaction start\nfeels messy. This comes down to the way handles work for 2PC and the\nstreaming, which may or may not be in a transaction depending on the\nstate of the upper caller. Your patch looks right in the way\nsnapshots are set, as far as I've checked.\n--\nMichael",
"msg_date": "Wed, 25 Sep 2024 13:05:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 02:26:08PM -0500, Nathan Bossart wrote:\n> On Tue, Sep 24, 2024 at 01:21:45PM +0900, Michael Paquier wrote:\n>> On Mon, Sep 23, 2024 at 10:50:21AM -0500, Nathan Bossart wrote:\n>>> I carefully inspected all the code paths this patch touches, and I think\n>>> I've got all the details right, but I would be grateful if someone else\n>>> could take a look.\n>> \n>> No objections from here with putting the snapshots pops and pushes\n>> outside the inner routines of reindex/drop concurrently, meaning that\n>> ReindexRelationConcurrently(), DefineIndex() and index_drop() are fine\n>> to do these operations.\n> \n> Great. I plan to push 0001 shortly.\n\nCommitted this one.\n\n-- \nnathan\n\n\n",
"msg_date": "Thu, 26 Sep 2024 15:53:33 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Large expressions in indexes can't be stored (non-TOASTable)"
}
] |
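To summarize the coding pattern the fixes in this thread converge on, here is a short C sketch; update_toastable_catalog_row() is an invented wrapper rather than a function from the patches, and it only illustrates where the snapshot push and pop go relative to the catalog update:

#include "postgres.h"

#include "access/htup.h"
#include "catalog/indexing.h"
#include "utils/rel.h"
#include "utils/snapmgr.h"

/*
 * Invented helper: update a catalog row (say, in pg_index) whose varlena
 * columns may need TOAST access.  The point is that an active snapshot must
 * be set up around the update, because toasting -- including deleting a
 * replaced out-of-line value -- asserts HaveRegisteredOrActiveSnapshot().
 */
static void
update_toastable_catalog_row(Relation catrel, HeapTuple newtup)
{
	PushActiveSnapshot(GetTransactionSnapshot());
	CatalogTupleUpdate(catrel, &newtup->t_self, newtup);
	PopActiveSnapshot();
}

In the concurrent DDL paths touched by the patches above, the pushes and pops additionally have to respect the intermediate transaction commits and the deadlock-avoidance waits, which is why they end up wrapped tightly around the individual catalog updates rather than around the whole command.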
[
{
"msg_contents": "Hello hackers,\n(Cc people involved in the earlier discussion)\n\nI would like to discuss the $Subject.\n\nWhile discussing Logical Replication's Conflict Detection and\nResolution (CDR) design in [1] , it came to our notice that the\ncommit LSN and timestamp may not correlate perfectly i.e. commits may\nhappen with LSN1 < LSN2 but with Ts1 > Ts2. This issue may arise\nbecause, during the commit process, the timestamp (xactStopTimestamp)\nis captured slightly earlier than when space is reserved in the WAL.\n\n ~~\n\n Reproducibility of conflict-resolution problem due to the timestamp inversion\n------------------------------------------------\nIt was suggested that timestamp inversion *may* impact the time-based\nresolutions such as last_update_wins (targeted to be implemented in\n[1]) as we may end up making wrong decisions if timestamps and LSNs\nare not correctly ordered. And thus we tried some tests but failed to\nfind any practical scenario where it could be a problem.\n\nBasically, the proposed conflict resolution is a row-level resolution,\nand to cause the row value to be inconsistent, we need to modify the\nsame row in concurrent transactions and commit the changes\nconcurrently. But this doesn't seem possible because concurrent\nupdates on the same row are disallowed (e.g., the later update will be\nblocked due to the row lock). See [2] for the details.\n\nWe tried to give some thoughts on multi table cases as well e.g.,\nupdate table A with foreign key and update the table B that table A\nrefers to. But update on table A will block the update on table B as\nwell, so we could not reproduce data-divergence due to the\nLSN/timestamp mismatch issue there.\n\n ~~\n\nIdea proposed to fix the timestamp inversion issue\n------------------------------------------------\nThere was a suggestion in [3] to acquire the timestamp while reserving\nthe space (because that happens in LSN order). The clock would need to\nbe monotonic (easy enough with CLOCK_MONOTONIC), but also cheap. The\nmain problem why it's being done outside the critical section, because\ngettimeofday() may be quite expensive. There's a concept of hybrid\nclock, combining \"time\" and logical counter, which might be useful\nindependently of CDR.\n\nOn further analyzing this idea, we found that CLOCK_MONOTONIC can be\naccepted only by clock_gettime() which has more precision than\ngettimeofday() and thus is equally or more expensive theoretically (we\nplan to test it and post the results). It does not look like a good\nidea to call any of these when holding spinlock to reserve the wal\nposition. As for the suggested solution \"hybrid clock\", it might not\nhelp here because the logical counter is only used to order the\ntransactions with the same timestamp. The problem here is how to get\nthe timestamp along with wal position\nreservation(ReserveXLogInsertLocation).\n\n ~~\n\n We can explore further but as we are not able to see any real-time\nscenario where this could actually be problem, it may or may not be\nworth to spend time on this. 
Thoughts?\n\n\n[1]:\n(See: \"How is this going to deal with the fact that commit LSN and\ntimestamps may not correlate perfectly?\").\nhttps://www.postgresql.org/message-id/CAJpy0uBWBEveM8LO2b7wNZ47raZ9tVJw3D2_WXd8-b6LSqP6HA%40mail.gmail.com\n\n[2]:\nhttps://www.postgresql.org/message-id/CAA4eK1JTMiBOoGqkt%3DaLPLU8Rs45ihbLhXaGHsz8XC76%2BOG3%2BQ%40mail.gmail.com\n\n[3]:\n(See: \"The clock would need to be monotonic (easy enough with\nCLOCK_MONOTONIC\").\nhttps://www.postgresql.org/message-id/a3a70a19-a35e-426c-8646-0898cdc207c8%40enterprisedb.com\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 4 Sep 2024 12:23:11 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": true,
"msg_subject": "Commit Timestamp and LSN Inversion issue"
},
{
"msg_contents": "Hi Shveta,\n\n> While discussing Logical Replication's Conflict Detection and\n> Resolution (CDR) design in [1] , it came to our notice that the\n> commit LSN and timestamp may not correlate perfectly i.e. commits may\n> happen with LSN1 < LSN2 but with Ts1 > Ts2. This issue may arise\n> because, during the commit process, the timestamp (xactStopTimestamp)\n> is captured slightly earlier than when space is reserved in the WAL.\n> [...]\n> There was a suggestion in [3] to acquire the timestamp while reserving\n> the space (because that happens in LSN order). The clock would need to\n> be monotonic (easy enough with CLOCK_MONOTONIC), but also cheap. The\n> main problem why it's being done outside the critical section, because\n> gettimeofday() may be quite expensive. There's a concept of hybrid\n> clock, combining \"time\" and logical counter, which might be useful\n> independently of CDR.\n\nI don't think you can rely on a system clock for conflict resolution.\nIn a corner case a DBA can move the clock forward or backward between\nrecordings of Ts1 and Ts2. On top of that there is no guarantee that\n2+ servers have synchronised clocks. It seems to me that what you are\nproposing will just hide the problem instead of solving it in the\ngeneral case.\n\nThis is the reason why {latest|earliest}_timestamp_wins strategies you\nare proposing to use for CDR are poor strategies. In practice they\nwork as random_key_wins which is not extremely useful (what it does is\nbasically _deleting_ random data, not solving conflicts). On top of\nthat strategies like this don't take into account checks and\nconstraints the database may have, including high-level constraints\nthat may not be explicitly defined in the DBMS but may exist in the\napplication logic.\n\nUnfortunately I can't reference a particular article or white paper on\nthe subject but I know that \"last write wins\" was widely criticized\nback in the 2010s when people were building distributed systems on\ncommodity hardware. In this time period I worked on several projects\nas a backend software engineer and I can assure you that LWW is not\nsomething you want.\n\nIMO the right approach to the problem would be defining procedures for\nconflict resolution that may not only semi-randomly choose between two\ntuples but also implement a user-defined logic. Similarly to INSERT\nINTO ... ON CONFLICT ... semantics, or similar approaches from\nlong-lived and well-explored distributed system, e.g. Riak.\nAlternatively / additionally we could support CRDTs in Postgres.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 4 Sep 2024 11:34:51 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit Timestamp and LSN Inversion issue"
},
{
"msg_contents": "On Wed, Sep 4, 2024 at 2:05 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > While discussing Logical Replication's Conflict Detection and\n> > Resolution (CDR) design in [1] , it came to our notice that the\n> > commit LSN and timestamp may not correlate perfectly i.e. commits may\n> > happen with LSN1 < LSN2 but with Ts1 > Ts2. This issue may arise\n> > because, during the commit process, the timestamp (xactStopTimestamp)\n> > is captured slightly earlier than when space is reserved in the WAL.\n> > [...]\n> > There was a suggestion in [3] to acquire the timestamp while reserving\n> > the space (because that happens in LSN order). The clock would need to\n> > be monotonic (easy enough with CLOCK_MONOTONIC), but also cheap. The\n> > main problem why it's being done outside the critical section, because\n> > gettimeofday() may be quite expensive. There's a concept of hybrid\n> > clock, combining \"time\" and logical counter, which might be useful\n> > independently of CDR.\n>\n> I don't think you can rely on a system clock for conflict resolution.\n> In a corner case a DBA can move the clock forward or backward between\n> recordings of Ts1 and Ts2. On top of that there is no guarantee that\n> 2+ servers have synchronised clocks. It seems to me that what you are\n> proposing will just hide the problem instead of solving it in the\n> general case.\n>\n\nIt is possible that we can't rely on the system clock for conflict\nresolution but that is not the specific point of this thread. As\nmentioned in the subject of this thread, the problem is \"Commit\nTimestamp and LSN Inversion issue\". The LSN value and timestamp for a\ncommit are not generated atomically, so two different transactions can\nhave them in different order.\n\nYour point as far as I can understand is that in the first place, it\nis not a good idea to have a strategy like \"last_write_wins\" which\nrelies on the system clock. So, even if LSN->Timestamp ordering has\nany problem, it won't matter to us. Now, we can discuss whether\n\"last_write_wins\" is a poor strategy or not but if possible, for the\nsake of the point of this thread, let's assume that users using the\nresolution feature (\"last_write_wins\") ensure that clocks are synced\nor they won't enable this feature and then see if we can think of any\nproblem w.r.t the current code.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 Sep 2024 17:04:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit Timestamp and LSN Inversion issue"
},
{
"msg_contents": "Hi Amit,\n\n> > I don't think you can rely on a system clock for conflict resolution.\n> > In a corner case a DBA can move the clock forward or backward between\n> > recordings of Ts1 and Ts2. On top of that there is no guarantee that\n> > 2+ servers have synchronised clocks. It seems to me that what you are\n> > proposing will just hide the problem instead of solving it in the\n> > general case.\n> >\n>\n> It is possible that we can't rely on the system clock for conflict\n> resolution but that is not the specific point of this thread. As\n> mentioned in the subject of this thread, the problem is \"Commit\n> Timestamp and LSN Inversion issue\". The LSN value and timestamp for a\n> commit are not generated atomically, so two different transactions can\n> have them in different order.\n\nHm.... Then I'm having difficulties understanding why this is a\nproblem and why it was necessary to mention CDR in this context in the\nfirst place.\n\nOK, let's forget about CDR completely. Who is affected by the current\nbehavior and why would it be beneficial changing it?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 4 Sep 2024 16:04:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit Timestamp and LSN Inversion issue"
},
{
"msg_contents": "On Wed, Sep 4, 2024 at 6:35 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > > I don't think you can rely on a system clock for conflict resolution.\n> > > In a corner case a DBA can move the clock forward or backward between\n> > > recordings of Ts1 and Ts2. On top of that there is no guarantee that\n> > > 2+ servers have synchronised clocks. It seems to me that what you are\n> > > proposing will just hide the problem instead of solving it in the\n> > > general case.\n> > >\n> >\n> > It is possible that we can't rely on the system clock for conflict\n> > resolution but that is not the specific point of this thread. As\n> > mentioned in the subject of this thread, the problem is \"Commit\n> > Timestamp and LSN Inversion issue\". The LSN value and timestamp for a\n> > commit are not generated atomically, so two different transactions can\n> > have them in different order.\n>\n> Hm.... Then I'm having difficulties understanding why this is a\n> problem\n\nThis is a potential problem pointed out during discussion of CDR [1]\n(Please read the point starting from \"How is this going to deal ..\"\nand response by Shveta). The point of this thread is that though it\nappears to be a problem but practically there is no scenario where it\ncan impact even when we implement \"last_write_wins\" startegy as\nexplained in the initial email. If you or someone sees a problem due\nto LSN<->timestamp inversion then we need to explore the solution for\nit.\n\n>\n> and why it was necessary to mention CDR in this context in the\n> first place.\n>\n> OK, let's forget about CDR completely. Who is affected by the current\n> behavior and why would it be beneficial changing it?\n>\n\nWe can't forget CDR completely as this could only be a potential\nproblem in that context. Right now, we don't have any built-in\nresolution strategies, so this can't impact but if this is a problem\nthen we need to have a solution for it before considering a solution\nlike \"last_write_wins\" strategy.\n\nNow, instead of discussing LSN<->timestamp inversion issue, you\nstarted to discuss \"last_write_wins\" strategy itself which we have\ndiscussed to some extent in the thread [2]. BTW, we are planning to\nstart a separate thread as well just to discuss the clock skew problem\nw.r.t resolution strategies like \"last_write_wins\" strategy. So, we\ncan discuss clock skew in that thread and keep the focus of this\nthread LSN<->timestamp inversion problem.\n\n[1] - https://www.postgresql.org/message-id/CAJpy0uBWBEveM8LO2b7wNZ47raZ9tVJw3D2_WXd8-b6LSqP6HA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAJpy0uD0-DpYVMtsxK5R%3DzszXauZBayQMAYET9sWr_w0CNWXxQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 11:09:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit Timestamp and LSN Inversion issue"
},
{
"msg_contents": "On Wed, Sep 4, 2024 at 12:23 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> Hello hackers,\n> (Cc people involved in the earlier discussion)\n>\n> I would like to discuss the $Subject.\n>\n> While discussing Logical Replication's Conflict Detection and\n> Resolution (CDR) design in [1] , it came to our notice that the\n> commit LSN and timestamp may not correlate perfectly i.e. commits may\n> happen with LSN1 < LSN2 but with Ts1 > Ts2. This issue may arise\n> because, during the commit process, the timestamp (xactStopTimestamp)\n> is captured slightly earlier than when space is reserved in the WAL.\n>\n> ~~\n>\n> Reproducibility of conflict-resolution problem due to the timestamp inversion\n> ------------------------------------------------\n> It was suggested that timestamp inversion *may* impact the time-based\n> resolutions such as last_update_wins (targeted to be implemented in\n> [1]) as we may end up making wrong decisions if timestamps and LSNs\n> are not correctly ordered. And thus we tried some tests but failed to\n> find any practical scenario where it could be a problem.\n>\n> Basically, the proposed conflict resolution is a row-level resolution,\n> and to cause the row value to be inconsistent, we need to modify the\n> same row in concurrent transactions and commit the changes\n> concurrently. But this doesn't seem possible because concurrent\n> updates on the same row are disallowed (e.g., the later update will be\n> blocked due to the row lock). See [2] for the details.\n>\n> We tried to give some thoughts on multi table cases as well e.g.,\n> update table A with foreign key and update the table B that table A\n> refers to. But update on table A will block the update on table B as\n> well, so we could not reproduce data-divergence due to the\n> LSN/timestamp mismatch issue there.\n>\n> ~~\n>\n> Idea proposed to fix the timestamp inversion issue\n> ------------------------------------------------\n> There was a suggestion in [3] to acquire the timestamp while reserving\n> the space (because that happens in LSN order). The clock would need to\n> be monotonic (easy enough with CLOCK_MONOTONIC), but also cheap. The\n> main problem why it's being done outside the critical section, because\n> gettimeofday() may be quite expensive. There's a concept of hybrid\n> clock, combining \"time\" and logical counter, which might be useful\n> independently of CDR.\n>\n> On further analyzing this idea, we found that CLOCK_MONOTONIC can be\n> accepted only by clock_gettime() which has more precision than\n> gettimeofday() and thus is equally or more expensive theoretically (we\n> plan to test it and post the results). It does not look like a good\n> idea to call any of these when holding spinlock to reserve the wal\n> position. As for the suggested solution \"hybrid clock\", it might not\n> help here because the logical counter is only used to order the\n> transactions with the same timestamp. 
The problem here is how to get\n> the timestamp along with wal position\n> reservation(ReserveXLogInsertLocation).\n>\n\nHere are the tests done to compare clock_gettime() and gettimeofday()\nperformance.\n\nMachine details :\nIntel(R) Xeon(R) CPU E7-4890 v2 @ 2.80GHz\nCPU(s): 120; 800GB RAM\n\nThree functions were tested across three different call volumes (1\nmillion, 100 million, and 1 billion):\n1) clock_gettime() with CLOCK_REALTIME\n2) clock_gettime() with CLOCK_MONOTONIC\n3) gettimeofday()\n\n--> clock_gettime() with CLOCK_MONOTONIC sometimes shows slightly\nbetter performance, but not consistently. The difference in time taken\nby all three functions is minimal, with averages varying by no more\nthan ~2.5%. Overall, the performance between CLOCK_MONOTONIC and\ngettimeofday() is essentially the same.\n\nBelow are the test results -\n(each test was run twice for consistency)\n\n1) For 1 million calls:\n 1a) clock_gettime() with CLOCK_REALTIME:\n - Run 1: 0.01770 seconds, Run 2: 0.01772 seconds, Average: 0.01771 seconds.\n 1b) clock_gettime() with CLOCK_MONOTONIC:\n - Run 1: 0.01753 seconds, Run 2: 0.01748 seconds, Average: 0.01750 seconds.\n 1c) gettimeofday():\n - Run 1: 0.01742 seconds, Run 2: 0.01777 seconds, Average: 0.01760 seconds.\n\n2) For 100 million calls:\n 2a) clock_gettime() with CLOCK_REALTIME:\n - Run 1: 1.76649 seconds, Run 2: 1.76602 seconds, Average: 1.76625 seconds.\n 2b) clock_gettime() with CLOCK_MONOTONIC:\n - Run 1: 1.72768 seconds, Run 2: 1.72988 seconds, Average: 1.72878 seconds.\n 2c) gettimeofday():\n - Run 1: 1.72436 seconds, Run 2: 1.72174 seconds, Average: 1.72305 seconds.\n\n3) For 1 billion calls:\n 3a) clock_gettime() with CLOCK_REALTIME:\n - Run 1: 17.63859 seconds, Run 2: 17.65529 seconds, Average:\n17.64694 seconds.\n 3b) clock_gettime() with CLOCK_MONOTONIC:\n - Run 1: 17.15109 seconds, Run 2: 17.27406 seconds, Average:\n17.21257 seconds.\n 3c) gettimeofday():\n - Run 1: 17.21368 seconds, Run 2: 17.22983 seconds, Average:\n17.22175 seconds.\n~~~~\nAttached the scripts used for tests.\n\n--\nThanks,\nNisha",
"msg_date": "Mon, 9 Sep 2024 11:41:13 +0530",
"msg_from": "Nisha Moond <nisha.moond412@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit Timestamp and LSN Inversion issue"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile reviewing a patch, I noticed that enum RecoveryTargetAction is\nstill in xlog_internal.h, even though it seems like it should be in\nxlogrecovery.h. Commit 70e81861fa separated out xlogrecovery.c/h and\nmoved several enums related to recovery targets to\nxlogrecovery.h. However, it appears that enum RecoveryTargetAction was\ninadvertently left behind in that commit.\n\nPlease find the attached patch, which addresses this oversight.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 04 Sep 2024 17:30:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "RecoveryTargetAction is left out in xlog_interna.h"
}
] |
[
{
"msg_contents": "Hi,\n\nIn the AIO patchset there are cases where we have to LOG failures inside a\ncritical section. This is necessary because e.g. a buffer read might complete\nwhile we are waiting for a WAL write inside a critical section.\n\nWe can't just defer the log message, as the IO might end up being\nwaited-on/completed-by another backend than the backend that issued the IO, so\nwe'd defer logging issues until an effectively arbitrary later time.\n\nIn general emitting a LOG inside a critical section isn't a huge issue - we\nmade sure that elog.c has a reserve of memory to be able to log without\ncrashing.\n\nHowever, the current message for buffer IO issues use relpath*() (ending up in\na call to GetRelationPath()). Which in turn uses psprintf() to generate the\npath. Which in turn violates the no-memory-allocations-in-critical-sections\nrule, as the containing memory context will typically not have\n->allowInCritSection == true.\n\nIt's not obvious to me what the best way to deal with this is.\n\nOne idea I had was to add an errrelpath() that switches to\nedata->assoc_context before calling relpath(), but that would end up leaking\nmemory, as FreeErrorDataContents() wouldn't know about the allocation.\n\nObviously we could add a version of GetRelationPath() that just prints into a\ncaller provided buffer - but that's somewhat awkward API wise.\n\nA third approach would be to have a dedicated memory context for this kind of\nthing that's reset after logging the message - but that comes awkwardly close\nto duplicating ErrorContext.\n\n\nI wonder if we're lacking a bit of infrastructure here...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Sep 2024 11:58:33 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "GetRelationPath() vs critical sections"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 11:58:33AM -0400, Andres Freund wrote:\n> In general emitting a LOG inside a critical section isn't a huge issue - we\n> made sure that elog.c has a reserve of memory to be able to log without\n> crashing.\n> \n> However, the current message for buffer IO issues use relpath*() (ending up in\n> a call to GetRelationPath()). Which in turn uses psprintf() to generate the\n> path. Which in turn violates the no-memory-allocations-in-critical-sections\n> rule, as the containing memory context will typically not have\n> ->allowInCritSection == true.\n> \n> It's not obvious to me what the best way to deal with this is.\n> \n> One idea I had was to add an errrelpath() that switches to\n> edata->assoc_context before calling relpath(), but that would end up leaking\n> memory, as FreeErrorDataContents() wouldn't know about the allocation.\n> \n> Obviously we could add a version of GetRelationPath() that just prints into a\n> caller provided buffer - but that's somewhat awkward API wise.\n\nAgreed.\n\n> A third approach would be to have a dedicated memory context for this kind of\n> thing that's reset after logging the message - but that comes awkwardly close\n> to duplicating ErrorContext.\n\nThat's how I'd try to do it. Today's ErrorContext is the context for\nallocations FreeErrorDataContents() knows how to find. The new context would\nbe for calling into arbitrary code unknown to FreeErrorDataContents(). Most\nof the time, we'd avoid reset work for the new context, since it would have no\nallocations.\n\nIdeally, errstart() would switch to the new context before returning true, and\nerrfinish() would switch back. That way, you could just call relpath*()\nwithout an errrelpath() to help. This does need functions called in ereport()\nargument lists to not use CurrentMemoryContext for allocations that need to\nsurvive longer. I'd not be concerned about imposing that in a major release.\nWhat obstacles would arise if we did that?\n\n> I wonder if we're lacking a bit of infrastructure here...\n\nConceptually, the ereport() argument list should be a closure that runs in a\nsuitable mcxt. I think we're not far from the goal.\n\n\n",
"msg_date": "Wed, 4 Sep 2024 12:03:53 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: GetRelationPath() vs critical sections"
},
{
"msg_contents": "On Thu, Sep 5, 2024 at 3:58 AM Andres Freund <andres@anarazel.de> wrote:\n> Obviously we could add a version of GetRelationPath() that just prints into a\n> caller provided buffer - but that's somewhat awkward API wise.\n\nFor the record, that's exactly what I did in the patch I proposed to\nfix our long standing RelationTruncate() data-eating bug:\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2B5nfWcpnZ%3DZ%3DUpGvY1tTF%3D4QU_0U_07EFaKmH7Nr%2BNLQ%40mail.gmail.com#aa061db119ee7a4b5390af56e24f475d\n\n\n",
"msg_date": "Thu, 5 Sep 2024 08:46:57 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GetRelationPath() vs critical sections"
}
] |
[
{
"msg_contents": "Hi.\n\nPer coverity.\n\nI think that commit c582b75\n<https://github.com/postgres/postgres/commit/c582b75851c2d096ce050d753494505a957cee75>,\nleft an oversight.\n\nThe report is:\nCID 1559993: (#1 of 1): Logically dead code (DEADCODE)\n\nTrivial patch attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Wed, 4 Sep 2024 13:50:24 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid dead code (contrib/pg_visibility/pg_visibility.c)"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 01:50:24PM -0300, Ranier Vilela wrote:\n> I think that commit c582b75\n> <https://github.com/postgres/postgres/commit/c582b75851c2d096ce050d753494505a957cee75>,\n> left an oversight.\n> \n> The report is:\n> CID 1559993: (#1 of 1): Logically dead code (DEADCODE)\n> \n> Trivial patch attached.\n\nI am not sure to understand what you mean here and if this is still\nrelevant as of Noah's latest commit in 65c310b310a6.\n--\nMichael",
"msg_date": "Thu, 12 Sep 2024 14:18:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid dead code (contrib/pg_visibility/pg_visibility.c)"
},
{
"msg_contents": "Hi,\n\nOn Thu, 12 Sept 2024 at 08:19, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Sep 04, 2024 at 01:50:24PM -0300, Ranier Vilela wrote:\n> > I think that commit c582b75\n> > <https://github.com/postgres/postgres/commit/c582b75851c2d096ce050d753494505a957cee75>,\n> > left an oversight.\n> >\n> > The report is:\n> > CID 1559993: (#1 of 1): Logically dead code (DEADCODE)\n\nThanks for the report!\n\n> I am not sure to understand what you mean here and if this is still\n> relevant as of Noah's latest commit in 65c310b310a6.\n\nThis should be fixed in 65c310b310a6.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 12 Sep 2024 09:10:44 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid dead code (contrib/pg_visibility/pg_visibility.c)"
},
{
"msg_contents": "Em qui., 12 de set. de 2024 às 02:18, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, Sep 04, 2024 at 01:50:24PM -0300, Ranier Vilela wrote:\n> > I think that commit c582b75\n> > <\n> https://github.com/postgres/postgres/commit/c582b75851c2d096ce050d753494505a957cee75\n> >,\n> > left an oversight.\n> >\n> > The report is:\n> > CID 1559993: (#1 of 1): Logically dead code (DEADCODE)\n> >\n> > Trivial patch attached.\n>\n> I am not sure to understand what you mean here and if this is still\n> relevant as of Noah's latest commit in 65c310b310a6.\n>\nSorry Michael, but this patch became irrelevant after ddfc556\n<http://ddfc556a644404a8942e77651f75f09aa5188782>\nNote the omission to connect the dots, from the commit.\n\nbest regards,\nRanier Vilela\n\nEm qui., 12 de set. de 2024 às 02:18, Michael Paquier <michael@paquier.xyz> escreveu:On Wed, Sep 04, 2024 at 01:50:24PM -0300, Ranier Vilela wrote:\n> I think that commit c582b75\n> <https://github.com/postgres/postgres/commit/c582b75851c2d096ce050d753494505a957cee75>,\n> left an oversight.\n> \n> The report is:\n> CID 1559993: (#1 of 1): Logically dead code (DEADCODE)\n> \n> Trivial patch attached.\n\nI am not sure to understand what you mean here and if this is still\nrelevant as of Noah's latest commit in 65c310b310a6.Sorry Michael, but this patch became irrelevant after ddfc556Note the omission to connect the dots, from the commit.best regards,Ranier Vilela",
"msg_date": "Thu, 12 Sep 2024 09:07:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid dead code (contrib/pg_visibility/pg_visibility.c)"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 09:10:44AM +0300, Nazir Bilal Yavuz wrote:\n> This should be fixed in 65c310b310a6.\n\nThanks for the confirmation.\n--\nMichael",
"msg_date": "Fri, 13 Sep 2024 09:10:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid dead code (contrib/pg_visibility/pg_visibility.c)"
}
] |
[
{
"msg_contents": "Hi.\n\nPer Coverity.\n\nI think that commit 6ebeeae\n<http://6ebeeae29626e742bbe16db3fa6fccf1186c0dfb> left out an oversight.\n\nThe report is:\nCID 1559991: (#1 of 1): Dereference null return value (NULL_RETURNS)\n\nThe function *findTypeByOid* can return NULL.\nIt is necessary to check the function's return,\nas is already done in other parts of the source.\n\npatch attached.\n\nBest regards,\nRanier Vilela",
"msg_date": "Wed, 4 Sep 2024 14:10:28 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid possible dereference null pointer (src/bin/pg_dump/pg_dump.c)"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 02:10:28PM -0300, Ranier Vilela wrote:\n> I think that commit 6ebeeae\n> <http://6ebeeae29626e742bbe16db3fa6fccf1186c0dfb> left out an oversight.\n> \n> The report is:\n> CID 1559991: (#1 of 1): Dereference null return value (NULL_RETURNS)\n> \n> The function *findTypeByOid* can return NULL.\n> It is necessary to check the function's return,\n> as is already done in other parts of the source.\n> \n> patch attached.\n\nYeah, that looks like a problem to me. I've cc'd Daniel here.\n\n-- \nnathan\n\n\n",
"msg_date": "Wed, 4 Sep 2024 12:30:01 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid possible dereference null pointer\n (src/bin/pg_dump/pg_dump.c)"
},
{
"msg_contents": "> On 4 Sep 2024, at 19:30, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Wed, Sep 04, 2024 at 02:10:28PM -0300, Ranier Vilela wrote:\n>> I think that commit 6ebeeae\n>> <http://6ebeeae29626e742bbe16db3fa6fccf1186c0dfb> left out an oversight.\n>> \n>> The report is:\n>> CID 1559991: (#1 of 1): Dereference null return value (NULL_RETURNS)\n>> \n>> The function *findTypeByOid* can return NULL.\n>> It is necessary to check the function's return,\n>> as is already done in other parts of the source.\n>> \n>> patch attached.\n> \n> Yeah, that looks like a problem to me. I've cc'd Daniel here.\n\nThanks for the report, it does seem genuine to me too. I'll get that handled\nlater today.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 4 Sep 2024 20:35:51 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Avoid possible dereference null pointer\n (src/bin/pg_dump/pg_dump.c)"
},
{
"msg_contents": "> On 4 Sep 2024, at 20:35, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 4 Sep 2024, at 19:30, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Wed, Sep 04, 2024 at 02:10:28PM -0300, Ranier Vilela wrote:\n\n>>> patch attached.\n>> \n>> Yeah, that looks like a problem to me. I've cc'd Daniel here.\n> \n> Thanks for the report, it does seem genuine to me too. I'll get that handled\n> later today.\n\nApplied, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 5 Sep 2024 15:39:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Avoid possible dereference null pointer\n (src/bin/pg_dump/pg_dump.c)"
},
{
"msg_contents": "Em qui., 5 de set. de 2024 às 10:39, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 4 Sep 2024, at 20:35, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> On 4 Sep 2024, at 19:30, Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n> >> On Wed, Sep 04, 2024 at 02:10:28PM -0300, Ranier Vilela wrote:\n>\n> >>> patch attached.\n> >>\n> >> Yeah, that looks like a problem to me. I've cc'd Daniel here.\n> >\n> > Thanks for the report, it does seem genuine to me too. I'll get that\n> handled\n> > later today.\n>\n> Applied, thanks!\n>\nThank you Daniel.\n\nbest regards,\nRanier Vilela\n\nEm qui., 5 de set. de 2024 às 10:39, Daniel Gustafsson <daniel@yesql.se> escreveu:> On 4 Sep 2024, at 20:35, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 4 Sep 2024, at 19:30, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Wed, Sep 04, 2024 at 02:10:28PM -0300, Ranier Vilela wrote:\n\n>>> patch attached.\n>> \n>> Yeah, that looks like a problem to me. I've cc'd Daniel here.\n> \n> Thanks for the report, it does seem genuine to me too. I'll get that handled\n> later today.\n\nApplied, thanks!Thank you Daniel.best regards,Ranier Vilela",
"msg_date": "Thu, 5 Sep 2024 10:43:24 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid possible dereference null pointer\n (src/bin/pg_dump/pg_dump.c)"
}
] |
[
{
"msg_contents": "Hi.\n\nPer Coverity.\n\nThe commit 9758174 <http://9758174e2e5cd278cf37e0980da76b51890e0011>,\nincluded the source src/backend/replication/logical/conflict.c.\nThe function *errdetail_apply_conflict* reports potential conflicts.\nBut do not care about possible resource leaks.\nHowever, the leaked size can be considerable, since it can have logs with\nthe LOG level.\nThe function *ReportSlotInvalidation* has similar utility, but on the\ncontrary, be careful not to leak.\n\nIMO, these potential leaks need to be fixed.\n\nPatch attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Wed, 4 Sep 2024 14:53:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix possible resource leaks\n (src/backend/replication/logical/conflict.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Per Coverity.\n\nCoverity is never to be trusted about \"leaks\" in the backend,\nbecause it has no idea about short- versus long-lived memory\ncontexts.\n\n> The function *errdetail_apply_conflict* reports potential conflicts.\n> But do not care about possible resource leaks.\n> However, the leaked size can be considerable, since it can have logs with\n> the LOG level.\n> The function *ReportSlotInvalidation* has similar utility, but on the\n> contrary, be careful not to leak.\n> IMO, these potential leaks need to be fixed.\n\nThis is nonsense. If there is a problem here, then we also have\nleaks to worry about in the immediate caller ReportApplyConflict,\nnot to mention its callers. The correct solution is to be sure that\nthe whole thing happens in a short-lived context, which I believe\nis true --- looks like it should be running in ApplyMessageContext,\nwhich will be reset after each replication message.\n\n(I'm inclined to suspect that that pfree in ReportSlotInvalidation\nis equally useless, but I didn't track down its call sites.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Sep 2024 14:06:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible resource leaks\n (src/backend/replication/logical/conflict.c)"
}
] |
[
{
"msg_contents": "Hi.\n\nPer Coverity.\n\nThe commit 7949d95 <http://7949d9594582ab49dee221e1db1aa5401ace49d4>, left\nout an oversight.\n\nThe report is:\nCID 1559468: (#1 of 1): Overflowed array index read (INTEGER_OVERFLOW)\n\nI think that Coverity is right.\nIn the function *pgstat_read_statsfile* It is necessary to first check\nwhether it is the most restrictive case.\n\nOtherwise, if PgStat_Kind is greater than 11, a negative index may occur.\n\nPatch attached.\n\nbest regards,\nRanier Vilela",
"msg_date": "Wed, 4 Sep 2024 15:14:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid overflowed array index (src/backend/utils/activity/pgstat.c)"
},
{
"msg_contents": "On Wed, Sep 04, 2024 at 03:14:34PM -0300, Ranier Vilela wrote:\n> The commit 7949d95 <http://7949d9594582ab49dee221e1db1aa5401ace49d4>, left\n> out an oversight.\n> \n> The report is:\n> CID 1559468: (#1 of 1): Overflowed array index read (INTEGER_OVERFLOW)\n> \n> I think that Coverity is right.\n> In the function *pgstat_read_statsfile* It is necessary to first check\n> whether it is the most restrictive case.\n> \n> Otherwise, if PgStat_Kind is greater than 11, a negative index may occur.\n\nYou are missing the fact that there is a call to\npgstat_is_kind_valid() a couple of lines above, meaning that we are\nsure that the kind ID we are dealing with is within the range [1,11]\nfor built-in kinds or [128,256] for the custom kinds, so any ID not\nwithin the first range would just be within the second range.\n\nSpeaking of which, I am spotting two possible pointer dereferences\nwhen reading the stats file if we are loading custom stats that do not\nexist anymore compared to the moment when they were written, for two\ncases:\n- Fixed-numbered stats entries.\n- Named entries, like replication slot stats, but for the custom case.\nIt would mean that we'd crash at startup when reading stats depending\non how shared_preload_libraries has changed, which is not the original\nintention. The patch includes details to inform what was found\nwrong with two new WARNING messages. Will fix in a bit, attaching it\nfor now.\n\nKind of interesting that your tool did not spot that, and missed the\ntwo I have noticed considering that we're dealing with the same code\npaths. The community coverity did not complain on any of them, AFAIK.\n--\nMichael",
"msg_date": "Thu, 5 Sep 2024 08:58:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflowed array index\n (src/backend/utils/activity/pgstat.c)"
},
{
"msg_contents": "m qua., 4 de set. de 2024 às 20:58, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, Sep 04, 2024 at 03:14:34PM -0300, Ranier Vilela wrote:\n> > The commit 7949d95 <http://7949d9594582ab49dee221e1db1aa5401ace49d4>,\n> left\n> > out an oversight.\n> >\n> > The report is:\n> > CID 1559468: (#1 of 1): Overflowed array index read (INTEGER_OVERFLOW)\n> >\n> > I think that Coverity is right.\n> > In the function *pgstat_read_statsfile* It is necessary to first check\n> > whether it is the most restrictive case.\n> >\n> > Otherwise, if PgStat_Kind is greater than 11, a negative index may occur.\n>\n> You are missing the fact that there is a call to\n> pgstat_is_kind_valid() a couple of lines above, meaning that we are\n> sure that the kind ID we are dealing with is within the range [1,11]\n> for built-in kinds or [128,256] for the custom kinds, so any ID not\n> within the first range would just be within the second range.\n>\nYeah, it seems that I and Coverity are mistaken about this warning.\nSorry for the noise.\n\n\n>\n> Speaking of which, I am spotting two possible pointer dereferences\n> when reading the stats file if we are loading custom stats that do not\n> exist anymore compared to the moment when they were written, for two\n> cases:\n> - Fixed-numbered stats entries.\n> - Named entries, like replication slot stats, but for the custom case.\n> It would mean that we'd crash at startup when reading stats depending\n> on how shared_preload_libraries has changed, which is not the original\n> intention. The patch includes details to inform what was found\n> wrong with two new WARNING messages. Will fix in a bit, attaching it\n> for now.\n>\n> Kind of interesting that your tool did not spot that, and missed the\n> two I have noticed considering that we're dealing with the same code\n> paths. The community coverity did not complain on any of them, AFAIK.\n>\nYeah, Coverity do not spot this.\n\nAfter reading the code more carefully, I found some possible issues.\nI think it's worth reviewing the attached patch more carefully.\n\nTherefore, I attach the patch for your consideration,\nthat tries to fix these issues.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 5 Sep 2024 09:18:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflowed array index\n (src/backend/utils/activity/pgstat.c)"
},
{
"msg_contents": "Em qui., 5 de set. de 2024 às 09:18, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> m qua., 4 de set. de 2024 às 20:58, Michael Paquier <michael@paquier.xyz>\n> escreveu:\n>\n>> On Wed, Sep 04, 2024 at 03:14:34PM -0300, Ranier Vilela wrote:\n>> > The commit 7949d95 <http://7949d9594582ab49dee221e1db1aa5401ace49d4>,\n>> left\n>> > out an oversight.\n>> >\n>> > The report is:\n>> > CID 1559468: (#1 of 1): Overflowed array index read (INTEGER_OVERFLOW)\n>> >\n>> > I think that Coverity is right.\n>> > In the function *pgstat_read_statsfile* It is necessary to first check\n>> > whether it is the most restrictive case.\n>> >\n>> > Otherwise, if PgStat_Kind is greater than 11, a negative index may\n>> occur.\n>>\n>> You are missing the fact that there is a call to\n>> pgstat_is_kind_valid() a couple of lines above, meaning that we are\n>> sure that the kind ID we are dealing with is within the range [1,11]\n>> for built-in kinds or [128,256] for the custom kinds, so any ID not\n>> within the first range would just be within the second range.\n>>\n> Yeah, it seems that I and Coverity are mistaken about this warning.\n> Sorry for the noise.\n>\n>\n>>\n>> Speaking of which, I am spotting two possible pointer dereferences\n>> when reading the stats file if we are loading custom stats that do not\n>> exist anymore compared to the moment when they were written, for two\n>> cases:\n>> - Fixed-numbered stats entries.\n>> - Named entries, like replication slot stats, but for the custom case.\n>> It would mean that we'd crash at startup when reading stats depending\n>> on how shared_preload_libraries has changed, which is not the original\n>> intention. The patch includes details to inform what was found\n>> wrong with two new WARNING messages. Will fix in a bit, attaching it\n>> for now.\n>>\n>> Kind of interesting that your tool did not spot that, and missed the\n>> two I have noticed considering that we're dealing with the same code\n>> paths. The community coverity did not complain on any of them, AFAIK.\n>>\n> Yeah, Coverity do not spot this.\n>\n> After reading the code more carefully, I found some possible issues.\n> I think it's worth reviewing the attached patch more carefully.\n>\n> Therefore, I attach the patch for your consideration,\n> that tries to fix these issues.\n>\nPlease, disregard the first patch, it contains a bug.\nNew version attached, v1.\n\nbest regards,\nRanier Vilela",
"msg_date": "Thu, 5 Sep 2024 09:25:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid overflowed array index\n (src/backend/utils/activity/pgstat.c)"
},
{
"msg_contents": "On Thu, Sep 05, 2024 at 09:25:11AM -0300, Ranier Vilela wrote:\n> Please, disregard the first patch, it contains a bug.\n> New version attached, v1.\n\nThese are wrong, because they are changing code paths where we know\nthat stats kinds should be set. For custom stats, particularly, this\nwould silently skip doing something for code paths that would expect\nan action to be taken. (I thought originally about using some\nelog(ERRORs) in these areas, refrained from it.)\n\nThe change in pgstat_write_statsfile() is equally unnecessary: we are\nnot going to write to the pgstats file an entry that we have not found\npreviously, as per the knowledge that these would be compiled with the\ncode for the builtin code, or added at startup for the custom ones.\n\nI'd suggest to study the code a bit more. Perhaps more documentation\nis required, not sure about that yet.\n--\nMichael",
"msg_date": "Fri, 6 Sep 2024 08:21:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid overflowed array index\n (src/backend/utils/activity/pgstat.c)"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAttached is the draft of the PostgreSQL 17 release announcement. This is \r\na draft of the text that will go into the press kit, with the key \r\nportions to review starting from the top of the document, up until the \r\n\"About PostgreSQL\" section.\r\n\r\nPlease provide feedback on content accuracy, notable omissions or items \r\nthat should be excluded, or if an explanation is unclear and needs \r\nbetter phrasing. On the last point, I'm looking to ensure the wording is \r\nclear and is easy to translate into different languages.\r\n\r\nBased on feedback, I'll be posting a revision once a day (if there's \r\nfeedback) until the review cut-off. We'll have to freeze the \r\nannouncement by Mon, Sep 9 @ 12:00 UTC so we can begin the translation \r\nprocess.\r\n\r\nThank you for your help with the release process!\r\n\r\nJonathan",
"msg_date": "Wed, 4 Sep 2024 17:04:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On 9/4/24 5:04 PM, Jonathan S. Katz wrote:\r\n> Hi,\r\n> \r\n> Attached is the draft of the PostgreSQL 17 release announcement. This is \r\n> a draft of the text that will go into the press kit, with the key \r\n> portions to review starting from the top of the document, up until the \r\n> \"About PostgreSQL\" section.\r\n> \r\n> Please provide feedback on content accuracy, notable omissions or items \r\n> that should be excluded, or if an explanation is unclear and needs \r\n> better phrasing. On the last point, I'm looking to ensure the wording is \r\n> clear and is easy to translate into different languages.\r\n> \r\n> Based on feedback, I'll be posting a revision once a day (if there's \r\n> feedback) until the review cut-off. We'll have to freeze the \r\n> announcement by Mon, Sep 9 @ 12:00 UTC so we can begin the translation \r\n> process.\r\n> \r\n> Thank you for your help with the release process!\r\n\r\nPlease see v2 attached. As per original note, please provide feedback \r\nbefore Mon, Sep 9 @ 12:00 UTC so we can begin the translation process.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Fri, 6 Sep 2024 13:03:54 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On Fri, 6 Sept 2024 at 19:04, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 9/4/24 5:04 PM, Jonathan S. Katz wrote:\n> > Hi,\n> >\n> > Attached is the draft of the PostgreSQL 17 release announcement. This is\n> > a draft of the text that will go into the press kit, with the key\n> > portions to review starting from the top of the document, up until the\n> > \"About PostgreSQL\" section.\n> >\n> > Please provide feedback on content accuracy, notable omissions or items\n> > that should be excluded, or if an explanation is unclear and needs\n> > better phrasing. On the last point, I'm looking to ensure the wording is\n> > clear and is easy to translate into different languages.\n> >\n> > Based on feedback, I'll be posting a revision once a day (if there's\n> > feedback) until the review cut-off. We'll have to freeze the\n> > announcement by Mon, Sep 9 @ 12:00 UTC so we can begin the translation\n> > process.\n> >\n> > Thank you for your help with the release process!\n>\n> Please see v2 attached. As per original note, please provide feedback\n> before Mon, Sep 9 @ 12:00 UTC so we can begin the translation process.\n\n> [`EXPLAIN`](...) now shows the time spent for I/O block reads and writes\n\nI think this needs some adjustment: IIUC the new feature in PG17's\n295c36c0 is that we now also track (and show) timings for local\nblocks. I/O timings on shared and temp blocks were already tracked\n(and displayed with the BUFFERS option) when track_io_timing was\nenabled: temp timing was introduced with efb0ef90 in early April 2022,\nand the output of IO timings for shared blocks has existed since the\nintroduction of track_io_timing in 40b9b957 back in late March of\n2012.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Fri, 6 Sep 2024 20:01:44 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On Fri, 6 Sept 2024 at 19:04, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Please see v2 attached. As per original note, please provide feedback\n> before Mon, Sep 9 @ 12:00 UTC so we can begin the translation process.\n\nThe following sentence was part of the beta1 release announcement, but\nis not part of this draft. Did that happen by accident or was it done\non purpose? If on purpose, I'm curious why since I contributed the\nfeature.\n\n\"PostgreSQL 17 also provides better support for asynchronous and more\nsecure query cancellation routines, which drivers can adopt using the\nlibpq API.\"\n\n\n",
"msg_date": "Sat, 7 Sep 2024 00:40:12 +0200",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On 9/6/24 6:40 PM, Jelte Fennema-Nio wrote:\r\n> On Fri, 6 Sept 2024 at 19:04, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> Please see v2 attached. As per original note, please provide feedback\r\n>> before Mon, Sep 9 @ 12:00 UTC so we can begin the translation process.\r\n> \r\n> The following sentence was part of the beta1 release announcement, but\r\n> is not part of this draft. Did that happen by accident or was it done\r\n> on purpose? If on purpose, I'm curious why since I contributed the\r\n> feature.\r\n> \r\n> \"PostgreSQL 17 also provides better support for asynchronous and more\r\n> secure query cancellation routines, which drivers can adopt using the\r\n> libpq API.\"\r\n\r\nThe beta announcement typically calls out more of the features available \r\nto ensure we can get folks testing them. Features that target API \r\nintegrations/internals aren't usually mentioned in the GA announcement.\r\n\r\nJonathan",
"msg_date": "Sat, 7 Sep 2024 14:05:00 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On 9/6/24 2:01 PM, Matthias van de Meent wrote:\r\n\r\n> I think this needs some adjustment: IIUC the new feature in PG17's\r\n> 295c36c0 is that we now also track (and show) timings for local\r\n> blocks. I/O timings on shared and temp blocks were already tracked\r\n> (and displayed with the BUFFERS option) when track_io_timing was\r\n> enabled: temp timing was introduced with efb0ef90 in early April 2022,\r\n> and the output of IO timings for shared blocks has existed since the\r\n> introduction of track_io_timing in 40b9b957 back in late March of\r\n> 2012.\r\nThanks Matthias. I updated the draft to specify this is for local \r\nblocks. I've attached the latest copy.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 7 Sep 2024 14:44:37 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On Sun, 8 Sept 2024 at 06:44, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> I've attached the latest copy.\n\nIs \"This release expands on functionality both for managing data in\npartitions\" still relevant given the MERGE/SPLIT PARTITION was\nreverted [1]?\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3890d90c1508125729ed20038d90513694fc3a7b\n\n\n",
"msg_date": "Sun, 8 Sep 2024 10:58:33 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
},
{
"msg_contents": "On 9/7/24 6:58 PM, David Rowley wrote:\r\n> On Sun, 8 Sept 2024 at 06:44, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> I've attached the latest copy.\r\n> \r\n> Is \"This release expands on functionality both for managing data in\r\n> partitions\" still relevant given the MERGE/SPLIT PARTITION was\r\n> reverted [1]?\r\n\r\nAFAICT yes, as identity columns and exclusion constraints can now be \r\nused. It looks like there are a few items with the optimizer (which \r\nlooking at the release notes, you'd be aware of :) but unsure if those \r\nshould be added.\r\n\r\nTiming wise, we're rapidly approaching the Sep 9 12:00 UTC cutoff; if \r\nthere's no additional feedback, once I wake up I'll do one more pass \r\nover the text and then freeze it.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sun, 8 Sep 2024 22:47:19 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 17 release announcement draft"
}
] |
[
{
"msg_contents": "Hi,\n\nI need to assign role permissions from one role to another. However, after\ngranting the role, I see that the permission list for the target role has\nnot been updated. For this process, I followed the PostgreSQL documentation\navailable at PostgreSQL Role Membership\n<https://www.postgresql.org/docs/current/role-membership.html>. Please let\nme know if I've missed anything.\n\nI am using PostgreSQL version 16 and I have followed these steps.\npostgres=# select version();\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0\n20210514 (Red Hat 8.5.0-22), 64-bit\n(1 row)\n\n1. Create a role with specific permissions\n\nCREATE ROLE rep_admin WITH LOGIN CREATEDB CREATEROLE REPLICATION;\n\n2.Create another role named replication_expert:\n\nCREATE ROLE replication_expert;\n\n3.Grant the rep_admin role to the replication_expert role with inheritance:\n\n GRANT rep_admin TO replication_expert with INHERIT true;\nGRANT ROLE\n\n4.Attempt to log in using the replication_expert role:\n\npostgres=# \\c postgres replication_expert\nconnection to server on socket \"/run/postgresql/.s.PGSQL.5432\" failed:\nFATAL: role \"replication_expert\" is not permitted to log in\n\n5.Check the role attributes to see if they have been reflected:\n\npostgres=# \\du+\n List of roles\n Role name | Attributes\n | Description\n--------------------+------------------------------------------------------------+-------------\n postgres | Superuser, Create role, Create DB, Replication,\nBypass RLS |\n rep_admin | Create role, Create DB, Replication\n |\n replication_expert | Cannot login\n\n6.Examine the pg_roles table to confirm that the permissions for\nreplication_expert have not been updated:\n\npostgres=# SELECT rolname,rolinherit, rolcreaterole, rolcreatedb,\nrolcanlogin,rolreplication\nFROM pg_roles where rolname in('rep_admin','replication_expert');;\n rolname | rolinherit | rolcreaterole | rolcreatedb |\nrolcanlogin | rolreplication\n--------------------+------------+---------------+-------------+-------------+----------------\n rep_admin | t | t | t | t\n | t\n replication_expert | t | f | f | f\n | f\n(2 rows)\n\npostgres=#\n\nRegards,\nMuhammad Imtiaz\n\nHi,I need to assign role permissions from one role to another. However, after granting the role, I see that the permission list for the target role has not been updated. For this process, I followed the PostgreSQL documentation available at PostgreSQL Role Membership. Please let me know if I've missed anything.I am using PostgreSQL version 16 and I have followed these steps.postgres=# select version(); version--------------------------------------------------------------------------------------------------------- PostgreSQL 16.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22), 64-bit(1 row)1. 
Create a role with specific permissionsCREATE ROLE rep_admin WITH LOGIN CREATEDB CREATEROLE REPLICATION;2.Create another role named replication_expert:CREATE ROLE replication_expert;3.Grant the rep_admin role to the replication_expert role with inheritance: GRANT rep_admin TO replication_expert with INHERIT true;GRANT ROLE4.Attempt to log in using the replication_expert role:postgres=# \\c postgres replication_expertconnection to server on socket \"/run/postgresql/.s.PGSQL.5432\" failed: FATAL: role \"replication_expert\" is not permitted to log in5.Check the role attributes to see if they have been reflected:postgres=# \\du+ List of roles Role name | Attributes | Description--------------------+------------------------------------------------------------+------------- postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | rep_admin | Create role, Create DB, Replication | replication_expert | Cannot login 6.Examine the pg_roles table to confirm that the permissions for replication_expert have not been updated:postgres=# SELECT rolname,rolinherit, rolcreaterole, rolcreatedb, rolcanlogin,rolreplicationFROM pg_roles where rolname in('rep_admin','replication_expert');; rolname | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication--------------------+------------+---------------+-------------+-------------+---------------- rep_admin | t | t | t | t | t replication_expert | t | f | f | f | f(2 rows)postgres=#Regards,Muhammad Imtiaz",
"msg_date": "Thu, 5 Sep 2024 09:05:05 +0500",
"msg_from": "Muhammad Imtiaz <imtiazpg712@gmail.com>",
"msg_from_op": true,
"msg_subject": "Role Granting Issues in PostgreSQL: Need Help"
},
{
"msg_contents": "On Wednesday, September 4, 2024, Muhammad Imtiaz <imtiazpg712@gmail.com>\nwrote:\n\n>\n> 1. Create a role with specific permissions\n>\n> CREATE ROLE rep_admin WITH LOGIN CREATEDB CREATEROLE REPLICATION;\n>\n> List of roles\n> Role name | Attributes\n> | Description\n> --------------------+---------------------------------------\n> ---------------------+-------------\n> postgres | Superuser, Create role, Create DB, Replication,\n> Bypass RLS |\n> rep_admin | Create role, Create DB, Replication\n> |\n> replication_expert | Cannot login\n>\n>\n> 6.Examine the pg_roles table to confirm that the permissions for\n> replication_expert have not been updated:\n>\n> postgres=# SELECT rolname,rolinherit, rolcreaterole, rolcreatedb,\n> rolcanlogin,rolreplication\n> FROM pg_roles where rolname in('rep_admin','replication_expert');;\n> rolname | rolinherit | rolcreaterole | rolcreatedb |\n> rolcanlogin | rolreplication\n> --------------------+------------+---------------+----------\n> ---+-------------+----------------\n> rep_admin | t | t | t | t\n> | t\n> replication_expert | t | f | f | f\n> | f\n> (2 rows)\n>\n>\nThose are not permissions, they are attributes, and attributes are not\ninherited.\n\nDavid J.\n\nOn Wednesday, September 4, 2024, Muhammad Imtiaz <imtiazpg712@gmail.com> wrote:1. Create a role with specific permissionsCREATE ROLE rep_admin WITH LOGIN CREATEDB CREATEROLE REPLICATION; List of roles Role name | Attributes | Description--------------------+------------------------------------------------------------+------------- postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | rep_admin | Create role, Create DB, Replication | replication_expert | Cannot login 6.Examine the pg_roles table to confirm that the permissions for replication_expert have not been updated:postgres=# SELECT rolname,rolinherit, rolcreaterole, rolcreatedb, rolcanlogin,rolreplicationFROM pg_roles where rolname in('rep_admin','replication_expert');; rolname | rolinherit | rolcreaterole | rolcreatedb | rolcanlogin | rolreplication--------------------+------------+---------------+-------------+-------------+---------------- rep_admin | t | t | t | t | t replication_expert | t | f | f | f | f(2 rows)Those are not permissions, they are attributes, and attributes are not inherited.David J.",
"msg_date": "Wed, 4 Sep 2024 21:14:28 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Granting Issues in PostgreSQL: Need Help"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wednesday, September 4, 2024, Muhammad Imtiaz <imtiazpg712@gmail.com>\n> wrote:\n>> replication_expert | Cannot login\n\n> Those are not permissions, they are attributes, and attributes are not\n> inherited.\n\nSpecifically: the NOLOGIN attribute on a role is a hard block on\nlogging in with that role, independently of any and every other\ncondition.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2024 00:25:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Role Granting Issues in PostgreSQL: Need Help"
}
] |
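A short sketch of the fix the two replies point to, using the same role names as the report above: role attributes are never inherited through membership, so they have to be set on replication_expert itself.

-- NOLOGIN is a hard block on connecting; LOGIN and the other attributes
-- must be granted to the role directly:
ALTER ROLE replication_expert WITH LOGIN CREATEDB CREATEROLE REPLICATION;

-- GRANT rep_admin TO replication_expert WITH INHERIT TRUE still conveys
-- rep_admin's privileges (object GRANTs and memberships), just not its
-- role attributes.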
[
{
"msg_contents": "hi.\nWhile reviewing virtual generated columns,\nI find some issues with the updatable view interacting with stored\ngenerated columns.\n\n-----------------------\ndrop table if exists base_tbl cascade;\nCREATE TABLE base_tbl (a int, b int GENERATED ALWAYS AS (22) stored, d\nint default 22);\ncreate view rw_view1 as select * from base_tbl;\ninsert into rw_view1(a) values (12) returning *;\n\nalter view rw_view1 alter column b set default 11.1;\ninsert into rw_view1(a,b,d) values ( 12, default,33) returning *;\ninsert into rw_view1(a,d) values (12,33) returning *;\ninsert into rw_view1 default values returning *;\n\nSELECT events & 4 != 0 AS can_upd,\n events & 8 != 0 AS can_ins,\n events & 16 != 0 AS can_del\nFROM pg_catalog.pg_relation_is_updatable('rw_view1'::regclass, false) t(events);\n-----------------------\n\n\"alter view rw_view1 alter column b set default 11.1;\"\nbecause rw_view1 view relation in ATExecColumnDefault\nTupleDesc->attgenerated == '\\0',\notherwise it can error out in ATExecColumnDefault.\nNow after we set default, we cannot insert any value to rw_view1,\nwhich makes it not updatable.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 16:08:23 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "updatable view set default interact with base rel generated stored\n columns"
},
{
"msg_contents": "jian he <jian.universality@gmail.com> writes:\n> -----------------------\n> drop table if exists base_tbl cascade;\n> CREATE TABLE base_tbl (a int, b int GENERATED ALWAYS AS (22) stored, d\n> int default 22);\n> create view rw_view1 as select * from base_tbl;\n> insert into rw_view1(a) values (12) returning *;\n\n> alter view rw_view1 alter column b set default 11.1;\n> insert into rw_view1(a,b,d) values ( 12, default,33) returning *;\n> insert into rw_view1(a,d) values (12,33) returning *;\n> insert into rw_view1 default values returning *;\n\n> SELECT events & 4 != 0 AS can_upd,\n> events & 8 != 0 AS can_ins,\n> events & 16 != 0 AS can_del\n> FROM pg_catalog.pg_relation_is_updatable('rw_view1'::regclass, false) t(events);\n> -----------------------\n\nI don't really see anything wrong here. Yeah, you broke insertions\ninto the view yet it still claims to be updatable. But there is\nnothing about the view that makes it not-updatable; it's something\nthat happens at runtime in the base table that is problematic.\nIf we try to detect all such cases we'll be solving the halting\nproblem. That is, I don't see any functional difference between\nthis example and, say, a default value attached to the view that\nviolates a CHECK constraint of the base table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2024 09:58:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: updatable view set default interact with base rel generated\n stored columns"
}
] |
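A sketch of the analogous case Tom Lane mentions, where the view still reports itself as updatable even though the view-level default can only fail at runtime in the base table (names follow the earlier example; the CHECK constraint is added here purely for illustration):

DROP TABLE IF EXISTS base_tbl CASCADE;
CREATE TABLE base_tbl (a int, b int DEFAULT 1 CHECK (b < 100));
CREATE VIEW rw_view1 AS SELECT * FROM base_tbl;
ALTER VIEW rw_view1 ALTER COLUMN b SET DEFAULT 1000;  -- accepted without complaint
INSERT INTO rw_view1(a) VALUES (1);   -- fails only at runtime: CHECK (b < 100) is violated
SELECT pg_relation_is_updatable('rw_view1'::regclass, false);  -- still reports the view as updatable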
[
{
"msg_contents": "The meson build currently does not produce llvm bitcode (.bc) files. \nAFAIK, this is the last major regression for using meson for production \nbuilds.\n\nIs anyone working on that? I vaguely recall that some in-progress code \nwas shared a couple of years ago, but I haven't seen anything since. It \nwould be great if we could collect any existing code and notes to maybe \nget this moving again.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 10:56:26 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "meson vs. llvm bitcode files"
},
{
"msg_contents": "Hi,\n\nOn Thu, 5 Sept 2024 at 11:56, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> The meson build currently does not produce llvm bitcode (.bc) files.\n> AFAIK, this is the last major regression for using meson for production\n> builds.\n>\n> Is anyone working on that? I vaguely recall that some in-progress code\n> was shared a couple of years ago, but I haven't seen anything since. It\n> would be great if we could collect any existing code and notes to maybe\n> get this moving again.\n\nI found that Andres shared a patch\n(v17-0021-meson-Add-LLVM-bitcode-emission.patch) a while ago [1].\n\n[1] https://www.postgresql.org/message-id/20220927011951.j3h4o7n6bhf7dwau%40awork3.anarazel.de\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Thu, 5 Sep 2024 12:24:51 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson vs. llvm bitcode files"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI am starting a new thread to discuss and propose the conflict detection for\nupdate_deleted scenarios during logical replication. This conflict occurs when\nthe apply worker cannot find the target tuple to be updated, as the tuple might\nhave been removed by another origin.\n\n---\nBACKGROUND\n---\n\nCurrently, when the apply worker cannot find the target tuple during an update,\nan update_missing conflict is logged. However, to facilitate future automatic\nconflict resolution, it has been agreed[1][2] that we need to detect both\nupdate_missing and update_deleted conflicts. Specifically, we will detect an\nupdate_deleted conflict if any dead tuple matching the old key value of the\nupdate operation is found; otherwise, it will be classified as update_missing.\n\nDetecting both update_deleted and update_missing conflicts is important for\nachieving eventual consistency in a bidirectional cluster, because the\nresolution for each conflict type can differs. For example, for an\nupdate_missing conflict, a feasible solution might be converting the update to\nan insert and applying it. While for an update_deleted conflict, the preferred\napproach could be to skip the update or compare the timestamps of the delete\ntransactions with the remote update transaction's and choose the most recent\none. For additional context, please refer to [3], which gives examples about\nhow these differences could lead to data divergence.\n\n---\nISSUES and SOLUTION\n---\n\nTo detect update_deleted conflicts, we need to search for dead tuples in the\ntable. However, dead tuples can be removed by VACUUM at any time. Therefore, to\nensure consistent and accurate conflict detection, tuples deleted by other\norigins must not be removed by VACUUM before the conflict detection process. If\nthe tuples are removed prematurely, it might lead to incorrect conflict\nidentification and resolution, causing data divergence between nodes.\n\nHere is an example of how VACUUM could affect conflict detection and how to\nprevent this issue. Assume we have a bidirectional cluster with two nodes, A\nand B.\n\nNode A:\n T1: INSERT INTO t (id, value) VALUES (1,1);\n T2: DELETE FROM t WHERE id = 1;\n\nNode B:\n T3: UPDATE t SET value = 2 WHERE id = 1;\n\nTo retain the deleted tuples, the initial idea was that once transaction T2 had\nbeen applied to both nodes, there was no longer a need to preserve the dead\ntuple on Node A. However, a scenario arises where transactions T3 and T2 occur\nconcurrently, with T3 committing slightly earlier than T2. In this case, if\nNode B applies T2 and Node A removes the dead tuple (1,1) via VACUUM, and then\nNode A applies T3 after the VACUUM operation, it can only result in an\nupdate_missing conflict. Given that the default resolution for update_missing\nconflicts is apply_or_skip (e.g. convert update to insert if possible and apply\nthe insert), Node A will eventually hold a row (1,2) while Node B becomes\nempty, causing data inconsistency.\n\nTherefore, the strategy needs to be expanded as follows: Node A cannot remove\nthe dead tuple until:\n(a) The DELETE operation is replayed on all remote nodes, *AND*\n(b) The transactions on logical standbys occurring before the replay of Node\nA's DELETE are replayed on Node A as well.\n\n---\nTHE DESIGN\n---\n\nTo achieve the above, we plan to allow the logical walsender to maintain and\nadvance the slot.xmin to protect the data in the user table and introduce a new\nlogical standby feedback message. 
This message reports the WAL position that\nhas been replayed on the logical standby *AND* the changes occurring on the\nlogical standby before the WAL position are also replayed to the walsender's\nnode (where the walsender is running). After receiving the new feedback\nmessage, the walsender will advance the slot.xmin based on the flush info,\nsimilar to the advancement of catalog_xmin. Currently, the effective_xmin/xmin\nof logical slot are unused during logical replication, so I think it's safe and\nwon't cause side-effect to reuse the xmin for this feature.\n\nWe have introduced a new subscription option (feedback_slots='slot1,...'),\nwhere these slots will be used to check condition (b): the transactions on\nlogical standbys occurring before the replay of Node A's DELETE are replayed on\nNode A as well. Therefore, on Node B, users should specify the slots\ncorresponding to Node A in this option. The apply worker will get the oldest\nconfirmed flush LSN among the specified slots and send the LSN as a feedback\nmessage to the walsender. -- I also thought of making it an automaic way, e.g.\nlet apply worker select the slots that acquired by the walsenders which connect\nto the same remote server(e.g. if apply worker's connection info or some other\nflags is same as the walsender's connection info). But it seems tricky because\nif some slots are inactive which means the walsenders are not there, the apply\nworker could not find the correct slots to check unless we save the host along\nwith the slot's persistence data.\n\nThe new feedback message is sent only if feedback_slots is not NULL. If the\nslots in feedback_slots are removed, a final message containing\nInvalidXLogRecPtr will be sent to inform the walsender to forget about the\nslot.xmin.\n\nTo detect update_deleted conflicts during update operations, if the target row\ncannot be found, we perform an additional scan of the table using snapshotAny.\nThis scan aims to locate the most recently deleted row that matches the old\ncolumn values from the remote update operation and has not yet been removed by\nVACUUM. If any such tuples are found, we report the update_deleted conflict\nalong with the origin and transaction information that deleted the tuple.\n\nPlease refer to the attached POC patch set which implements above design. The\npatch set is split into some parts to make it easier for the initial review.\nPlease note that each patch is interdependent and cannot work independently.\n\nThanks a lot to Kuroda-San and Amit for the off-list discussion.\n\nSuggestions and comments are highly appreciated !\n\n[1]https://www.postgresql.org/message-id/CAJpy0uCov4JfZJeOvY0O21_gk9bcgNUDp4jf8%2BBbMp%2BEAv8cVQ%40mail.gmail.com\n[2]https://www.postgresql.org/message-id/CAA4eK1Lj-PWrP789KnKxZydisHajd38rSihWXO8MVBLDwxG1Kg%40mail.gmail.com\n[3]https://www.postgresql.org/message-id/CAJpy0uC6Zs5WwwiyuvG_kEB6Q3wyDWpya7PXm3SMT_YG%3DXJJ1w%40mail.gmail.com\n\nBest Regards,\nHou Zhijie",
"msg_date": "Thu, 5 Sep 2024 11:36:38 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Conflict detection for update_deleted in logical replication"
},
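For reference, under the proposed design it is the logical slot's xmin (normally left unset during logical replication today) that would hold back VACUUM on the node retaining the dead tuples. Assuming that POC behaviour, its advancement could be observed with an ordinary catalog query on that node:

SELECT slot_name, xmin, catalog_xmin, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical';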
{
"msg_contents": "On Thu, Sep 5, 2024 at 5:07 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> Hi hackers,\n>\n> I am starting a new thread to discuss and propose the conflict detection for\n> update_deleted scenarios during logical replication. This conflict occurs when\n> the apply worker cannot find the target tuple to be updated, as the tuple might\n> have been removed by another origin.\n>\n> ---\n> BACKGROUND\n> ---\n>\n> Currently, when the apply worker cannot find the target tuple during an update,\n> an update_missing conflict is logged. However, to facilitate future automatic\n> conflict resolution, it has been agreed[1][2] that we need to detect both\n> update_missing and update_deleted conflicts. Specifically, we will detect an\n> update_deleted conflict if any dead tuple matching the old key value of the\n> update operation is found; otherwise, it will be classified as update_missing.\n>\n> Detecting both update_deleted and update_missing conflicts is important for\n> achieving eventual consistency in a bidirectional cluster, because the\n> resolution for each conflict type can differs. For example, for an\n> update_missing conflict, a feasible solution might be converting the update to\n> an insert and applying it. While for an update_deleted conflict, the preferred\n> approach could be to skip the update or compare the timestamps of the delete\n> transactions with the remote update transaction's and choose the most recent\n> one. For additional context, please refer to [3], which gives examples about\n> how these differences could lead to data divergence.\n>\n> ---\n> ISSUES and SOLUTION\n> ---\n>\n> To detect update_deleted conflicts, we need to search for dead tuples in the\n> table. However, dead tuples can be removed by VACUUM at any time. Therefore, to\n> ensure consistent and accurate conflict detection, tuples deleted by other\n> origins must not be removed by VACUUM before the conflict detection process. If\n> the tuples are removed prematurely, it might lead to incorrect conflict\n> identification and resolution, causing data divergence between nodes.\n>\n> Here is an example of how VACUUM could affect conflict detection and how to\n> prevent this issue. Assume we have a bidirectional cluster with two nodes, A\n> and B.\n>\n> Node A:\n> T1: INSERT INTO t (id, value) VALUES (1,1);\n> T2: DELETE FROM t WHERE id = 1;\n>\n> Node B:\n> T3: UPDATE t SET value = 2 WHERE id = 1;\n>\n> To retain the deleted tuples, the initial idea was that once transaction T2 had\n> been applied to both nodes, there was no longer a need to preserve the dead\n> tuple on Node A. However, a scenario arises where transactions T3 and T2 occur\n> concurrently, with T3 committing slightly earlier than T2. In this case, if\n> Node B applies T2 and Node A removes the dead tuple (1,1) via VACUUM, and then\n> Node A applies T3 after the VACUUM operation, it can only result in an\n> update_missing conflict. Given that the default resolution for update_missing\n> conflicts is apply_or_skip (e.g. 
convert update to insert if possible and apply\n> the insert), Node A will eventually hold a row (1,2) while Node B becomes\n> empty, causing data inconsistency.\n>\n> Therefore, the strategy needs to be expanded as follows: Node A cannot remove\n> the dead tuple until:\n> (a) The DELETE operation is replayed on all remote nodes, *AND*\n> (b) The transactions on logical standbys occurring before the replay of Node\n> A's DELETE are replayed on Node A as well.\n>\n> ---\n> THE DESIGN\n> ---\n>\n> To achieve the above, we plan to allow the logical walsender to maintain and\n> advance the slot.xmin to protect the data in the user table and introduce a new\n> logical standby feedback message. This message reports the WAL position that\n> has been replayed on the logical standby *AND* the changes occurring on the\n> logical standby before the WAL position are also replayed to the walsender's\n> node (where the walsender is running). After receiving the new feedback\n> message, the walsender will advance the slot.xmin based on the flush info,\n> similar to the advancement of catalog_xmin. Currently, the effective_xmin/xmin\n> of logical slot are unused during logical replication, so I think it's safe and\n> won't cause side-effect to reuse the xmin for this feature.\n>\n> We have introduced a new subscription option (feedback_slots='slot1,...'),\n> where these slots will be used to check condition (b): the transactions on\n> logical standbys occurring before the replay of Node A's DELETE are replayed on\n> Node A as well. Therefore, on Node B, users should specify the slots\n> corresponding to Node A in this option. The apply worker will get the oldest\n> confirmed flush LSN among the specified slots and send the LSN as a feedback\n> message to the walsender. -- I also thought of making it an automaic way, e.g.\n> let apply worker select the slots that acquired by the walsenders which connect\n> to the same remote server(e.g. if apply worker's connection info or some other\n> flags is same as the walsender's connection info). But it seems tricky because\n> if some slots are inactive which means the walsenders are not there, the apply\n> worker could not find the correct slots to check unless we save the host along\n> with the slot's persistence data.\n>\n> The new feedback message is sent only if feedback_slots is not NULL. If the\n> slots in feedback_slots are removed, a final message containing\n> InvalidXLogRecPtr will be sent to inform the walsender to forget about the\n> slot.xmin.\n>\n> To detect update_deleted conflicts during update operations, if the target row\n> cannot be found, we perform an additional scan of the table using snapshotAny.\n> This scan aims to locate the most recently deleted row that matches the old\n> column values from the remote update operation and has not yet been removed by\n> VACUUM. If any such tuples are found, we report the update_deleted conflict\n> along with the origin and transaction information that deleted the tuple.\n>\n> Please refer to the attached POC patch set which implements above design. The\n> patch set is split into some parts to make it easier for the initial review.\n> Please note that each patch is interdependent and cannot work independently.\n>\n> Thanks a lot to Kuroda-San and Amit for the off-list discussion.\n>\n> Suggestions and comments are highly appreciated !\n>\n\nThank You Hou-San for explaining the design. 
But to make it easier to\nunderstand, would you be able to explain the sequence/timeline of the\n*new* actions performed by the walsender and the apply processes for\nthe given example along with new feedback_slot config needed\n\nNode A: (Procs: walsenderA, applyA)\n T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM\n T2: DELETE FROM t WHERE id = 1; ts=10.02 AM\n\nNode B: (Procs: walsenderB, applyB)\n T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM\n\nthanks\nShveta\n\n\n",
"msg_date": "Tue, 10 Sep 2024 12:14:55 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tuesday, September 10, 2024 2:45 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> > ---\r\n> > THE DESIGN\r\n> > ---\r\n> >\r\n> > To achieve the above, we plan to allow the logical walsender to\r\n> > maintain and advance the slot.xmin to protect the data in the user\r\n> > table and introduce a new logical standby feedback message. This\r\n> > message reports the WAL position that has been replayed on the logical\r\n> > standby *AND* the changes occurring on the logical standby before the\r\n> > WAL position are also replayed to the walsender's node (where the\r\n> > walsender is running). After receiving the new feedback message, the\r\n> > walsender will advance the slot.xmin based on the flush info, similar\r\n> > to the advancement of catalog_xmin. Currently, the effective_xmin/xmin\r\n> > of logical slot are unused during logical replication, so I think it's safe and\r\n> won't cause side-effect to reuse the xmin for this feature.\r\n> >\r\n> > We have introduced a new subscription option\r\n> > (feedback_slots='slot1,...'), where these slots will be used to check\r\n> > condition (b): the transactions on logical standbys occurring before\r\n> > the replay of Node A's DELETE are replayed on Node A as well.\r\n> > Therefore, on Node B, users should specify the slots corresponding to\r\n> > Node A in this option. The apply worker will get the oldest confirmed\r\n> > flush LSN among the specified slots and send the LSN as a feedback\r\n> message to the walsender. -- I also thought of making it an automaic way, e.g.\r\n> > let apply worker select the slots that acquired by the walsenders\r\n> > which connect to the same remote server(e.g. if apply worker's\r\n> > connection info or some other flags is same as the walsender's\r\n> > connection info). But it seems tricky because if some slots are\r\n> > inactive which means the walsenders are not there, the apply worker\r\n> > could not find the correct slots to check unless we save the host along with\r\n> the slot's persistence data.\r\n> >\r\n> > The new feedback message is sent only if feedback_slots is not NULL.\r\n> > If the slots in feedback_slots are removed, a final message containing\r\n> > InvalidXLogRecPtr will be sent to inform the walsender to forget about\r\n> > the slot.xmin.\r\n> >\r\n> > To detect update_deleted conflicts during update operations, if the\r\n> > target row cannot be found, we perform an additional scan of the table using\r\n> snapshotAny.\r\n> > This scan aims to locate the most recently deleted row that matches\r\n> > the old column values from the remote update operation and has not yet\r\n> > been removed by VACUUM. If any such tuples are found, we report the\r\n> > update_deleted conflict along with the origin and transaction information\r\n> that deleted the tuple.\r\n> >\r\n> > Please refer to the attached POC patch set which implements above\r\n> > design. The patch set is split into some parts to make it easier for the initial\r\n> review.\r\n> > Please note that each patch is interdependent and cannot work\r\n> independently.\r\n> >\r\n> > Thanks a lot to Kuroda-San and Amit for the off-list discussion.\r\n> >\r\n> > Suggestions and comments are highly appreciated !\r\n> >\r\n> \r\n> Thank You Hou-San for explaining the design. 
But to make it easier to\r\n> understand, would you be able to explain the sequence/timeline of the\r\n> *new* actions performed by the walsender and the apply processes for the\r\n> given example along with new feedback_slot config needed\r\n> \r\n> Node A: (Procs: walsenderA, applyA)\r\n> T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM\r\n> T2: DELETE FROM t WHERE id = 1; ts=10.02 AM\r\n> \r\n> Node B: (Procs: walsenderB, applyB)\r\n> T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM\r\n\r\nThanks for reviewing! Let me elaborate further on the example:\r\n\r\nOn node A, feedback_slots should include the logical slot that used to replicate changes\r\nfrom Node A to Node B. On node B, feedback_slots should include the logical\r\nslot that replicate changes from Node B to Node A.\r\n\r\nAssume the slot.xmin on Node A has been initialized to a valid number(740) before the\r\nfollowing flow:\r\n\r\nNode A executed T1\t\t\t\t\t\t\t\t\t- 10.00 AM\r\nT1 replicated and applied on Node B\t\t\t\t\t\t\t- 10.0001 AM\r\nNode B executed T3\t\t\t\t\t\t\t\t\t- 10.01 AM\r\nNode A executed T2 (741)\t\t\t\t\t\t\t\t- 10.02 AM\r\nT2 replicated and applied on Node B\t(delete_missing)\t\t\t\t- 10.03 AM\r\nT3 replicated and applied on Node A\t(new action, detect update_deleted)\t\t- 10.04 AM\r\n\r\n(new action) Apply worker on Node B has confirmed that T2 has been applied\r\nlocally and the transactions before T2 (e.g., T3) has been replicated and\r\napplied to Node A (e.g. feedback_slot.confirmed_flush_lsn >= lsn of the local\r\nreplayed T2), thus send the new feedback message to Node A.\t\t\t\t- 10.05 AM\t\t\t\t\t\t\t\t\t\t\t\t\t\t\r\n\r\n(new action) Walsender on Node A received the message and would advance the slot.xmin.- 10.06 AM\r\n\r\nThen, after the slot.xmin is advanced to a number greater than 741, the VACUUM would be able to\r\nremove the dead tuple on Node A.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 10 Sep 2024 08:10:33 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 1:40 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 10, 2024 2:45 PM shveta malik <shveta.malik@gmail.com> wrote:\n> > > ---\n> > > THE DESIGN\n> > > ---\n> > >\n> > > To achieve the above, we plan to allow the logical walsender to\n> > > maintain and advance the slot.xmin to protect the data in the user\n> > > table and introduce a new logical standby feedback message. This\n> > > message reports the WAL position that has been replayed on the logical\n> > > standby *AND* the changes occurring on the logical standby before the\n> > > WAL position are also replayed to the walsender's node (where the\n> > > walsender is running). After receiving the new feedback message, the\n> > > walsender will advance the slot.xmin based on the flush info, similar\n> > > to the advancement of catalog_xmin. Currently, the effective_xmin/xmin\n> > > of logical slot are unused during logical replication, so I think it's safe and\n> > won't cause side-effect to reuse the xmin for this feature.\n> > >\n> > > We have introduced a new subscription option\n> > > (feedback_slots='slot1,...'), where these slots will be used to check\n> > > condition (b): the transactions on logical standbys occurring before\n> > > the replay of Node A's DELETE are replayed on Node A as well.\n> > > Therefore, on Node B, users should specify the slots corresponding to\n> > > Node A in this option. The apply worker will get the oldest confirmed\n> > > flush LSN among the specified slots and send the LSN as a feedback\n> > message to the walsender. -- I also thought of making it an automaic way, e.g.\n> > > let apply worker select the slots that acquired by the walsenders\n> > > which connect to the same remote server(e.g. if apply worker's\n> > > connection info or some other flags is same as the walsender's\n> > > connection info). But it seems tricky because if some slots are\n> > > inactive which means the walsenders are not there, the apply worker\n> > > could not find the correct slots to check unless we save the host along with\n> > the slot's persistence data.\n> > >\n> > > The new feedback message is sent only if feedback_slots is not NULL.\n> > > If the slots in feedback_slots are removed, a final message containing\n> > > InvalidXLogRecPtr will be sent to inform the walsender to forget about\n> > > the slot.xmin.\n> > >\n> > > To detect update_deleted conflicts during update operations, if the\n> > > target row cannot be found, we perform an additional scan of the table using\n> > snapshotAny.\n> > > This scan aims to locate the most recently deleted row that matches\n> > > the old column values from the remote update operation and has not yet\n> > > been removed by VACUUM. If any such tuples are found, we report the\n> > > update_deleted conflict along with the origin and transaction information\n> > that deleted the tuple.\n> > >\n> > > Please refer to the attached POC patch set which implements above\n> > > design. The patch set is split into some parts to make it easier for the initial\n> > review.\n> > > Please note that each patch is interdependent and cannot work\n> > independently.\n> > >\n> > > Thanks a lot to Kuroda-San and Amit for the off-list discussion.\n> > >\n> > > Suggestions and comments are highly appreciated !\n> > >\n> >\n> > Thank You Hou-San for explaining the design. 
But to make it easier to\n> > understand, would you be able to explain the sequence/timeline of the\n> > *new* actions performed by the walsender and the apply processes for the\n> > given example along with new feedback_slot config needed\n> >\n> > Node A: (Procs: walsenderA, applyA)\n> > T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM\n> > T2: DELETE FROM t WHERE id = 1; ts=10.02 AM\n> >\n> > Node B: (Procs: walsenderB, applyB)\n> > T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM\n>\n> Thanks for reviewing! Let me elaborate further on the example:\n>\n> On node A, feedback_slots should include the logical slot that used to replicate changes\n> from Node A to Node B. On node B, feedback_slots should include the logical\n> slot that replicate changes from Node B to Node A.\n>\n> Assume the slot.xmin on Node A has been initialized to a valid number(740) before the\n> following flow:\n>\n> Node A executed T1 - 10.00 AM\n> T1 replicated and applied on Node B - 10.0001 AM\n> Node B executed T3 - 10.01 AM\n> Node A executed T2 (741) - 10.02 AM\n> T2 replicated and applied on Node B (delete_missing) - 10.03 AM\n\nNot related to this feature, but do you mean delete_origin_differ here?\n\n> T3 replicated and applied on Node A (new action, detect update_deleted) - 10.04 AM\n>\n> (new action) Apply worker on Node B has confirmed that T2 has been applied\n> locally and the transactions before T2 (e.g., T3) has been replicated and\n> applied to Node A (e.g. feedback_slot.confirmed_flush_lsn >= lsn of the local\n> replayed T2), thus send the new feedback message to Node A. - 10.05 AM\n>\n> (new action) Walsender on Node A received the message and would advance the slot.xmin.- 10.06 AM\n>\n> Then, after the slot.xmin is advanced to a number greater than 741, the VACUUM would be able to\n> remove the dead tuple on Node A.\n>\n\nThanks for the example. Can you please review below and let me know if\nmy understanding is correct.\n\n1)\nIn a bidirectional replication setup, the user has to create slots in\na way that NodeA's sub's slot is Node B's feedback_slot and Node B's\nsub's slot is Node A's feedback slot. And then only this feature will\nwork well, is it correct to say?\n\n2)\nNow coming back to multiple feedback_slots in a subscription, is the\nbelow correct:\n\nSay Node A has publications and subscriptions as follow:\n------------------\nA_pub1\n\nA_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)\nA_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)\nA_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)\n\n\nSay Node B has publications and subscriptions as follow:\n------------------\nB_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)\n\nB_pub1\nB_pub2\nB_pub3\n\nThen what will be the feedback_slot configuration for all\nsubscriptions of A and B? Is below correct:\n------------------\nA_sub1, A_sub2, A_sub3: feedback_slots=B_sub1\nB_sub1: feedback_slots=A_sub1,A_sub2, A_sub3\n\n3)\nIf the above is true, then do we have a way to make sure that the user\n has given this configuration exactly the above way? If users end up\ngiving feedback_slots as some random slot (say A_slot4 or incomplete\nlist), do we validate that? 
(I have not looked at code yet, just\ntrying to understand design first).\n\n4)\nNow coming to this:\n\n> The apply worker will get the oldest\n> confirmed flush LSN among the specified slots and send the LSN as a feedback\n> message to the walsender.\n\n There will be one apply worker on B which will be due to B_sub1, so\nwill it check confirmed_lsn of all slots A_sub1,A_sub2, A_sub3? Won't\nit be sufficient to check confirmed_lsn of say slot A_sub1 alone which\nhas subscribed to table 't' on which delete has been performed? Rest\nof the slots (A_sub2, A_sub3) might have subscribed to different\ntables?\n\nthanks\nShveta\n\n\n",
"msg_date": "Tue, 10 Sep 2024 15:25:58 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
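The oldest confirmed flush LSN among the specified slots that the design has the apply worker send is essentially the following computation, expressed here as SQL run on the node where the apply worker and the feedback slots live (Node B in the layout above; the slot names are the hypothetical ones from this example):

SELECT min(confirmed_flush_lsn) AS oldest_confirmed_flush
FROM pg_replication_slots
WHERE slot_name IN ('a_sub1', 'a_sub2', 'a_sub3');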
{
"msg_contents": "On Tuesday, September 10, 2024 5:56 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> \r\n> On Tue, Sep 10, 2024 at 1:40 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Tuesday, September 10, 2024 2:45 PM shveta malik\r\n> <shveta.malik@gmail.com> wrote:\r\n> > >\r\n> > > Thank You Hou-San for explaining the design. But to make it easier\r\n> > > to understand, would you be able to explain the sequence/timeline of\r\n> > > the\r\n> > > *new* actions performed by the walsender and the apply processes for\r\n> > > the given example along with new feedback_slot config needed\r\n> > >\r\n> > > Node A: (Procs: walsenderA, applyA)\r\n> > > T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM\r\n> > > T2: DELETE FROM t WHERE id = 1; ts=10.02 AM\r\n> > >\r\n> > > Node B: (Procs: walsenderB, applyB)\r\n> > > T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM\r\n> >\r\n> > Thanks for reviewing! Let me elaborate further on the example:\r\n> >\r\n> > On node A, feedback_slots should include the logical slot that used to\r\n> > replicate changes from Node A to Node B. On node B, feedback_slots\r\n> > should include the logical slot that replicate changes from Node B to Node A.\r\n> >\r\n> > Assume the slot.xmin on Node A has been initialized to a valid\r\n> > number(740) before the following flow:\r\n> >\r\n> > Node A executed T1 - 10.00 AM\r\n> > T1 replicated and applied on Node B - 10.0001 AM\r\n> > Node B executed T3 - 10.01 AM\r\n> > Node A executed T2 (741) - 10.02 AM\r\n> > T2 replicated and applied on Node B (delete_missing) - 10.03 AM\r\n> \r\n> Not related to this feature, but do you mean delete_origin_differ here?\r\n\r\nOh sorry, It's a miss. I meant delete_origin_differ.\r\n\r\n> \r\n> > T3 replicated and applied on Node A (new action, detect\r\n> update_deleted) - 10.04 AM\r\n> >\r\n> > (new action) Apply worker on Node B has confirmed that T2 has been\r\n> > applied locally and the transactions before T2 (e.g., T3) has been\r\n> > replicated and applied to Node A (e.g. feedback_slot.confirmed_flush_lsn\r\n> >= lsn of the local\r\n> > replayed T2), thus send the new feedback message to Node A.\r\n> - 10.05 AM\r\n> >\r\n> > (new action) Walsender on Node A received the message and would\r\n> > advance the slot.xmin.- 10.06 AM\r\n> >\r\n> > Then, after the slot.xmin is advanced to a number greater than 741,\r\n> > the VACUUM would be able to remove the dead tuple on Node A.\r\n> >\r\n> \r\n> Thanks for the example. Can you please review below and let me know if my\r\n> understanding is correct.\r\n> \r\n> 1)\r\n> In a bidirectional replication setup, the user has to create slots in a way that\r\n> NodeA's sub's slot is Node B's feedback_slot and Node B's sub's slot is Node\r\n> A's feedback slot. 
And then only this feature will work well, is it correct to say?\r\n\r\nYes, your understanding is correct.\r\n\r\n> \r\n> 2)\r\n> Now coming back to multiple feedback_slots in a subscription, is the below\r\n> correct:\r\n> \r\n> Say Node A has publications and subscriptions as follow:\r\n> ------------------\r\n> A_pub1\r\n> \r\n> A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)\r\n> A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)\r\n> A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)\r\n> \r\n> \r\n> Say Node B has publications and subscriptions as follow:\r\n> ------------------\r\n> B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)\r\n> \r\n> B_pub1\r\n> B_pub2\r\n> B_pub3\r\n> \r\n> Then what will be the feedback_slot configuration for all subscriptions of A and\r\n> B? Is below correct:\r\n> ------------------\r\n> A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1\r\n> B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3\r\n\r\nRight. The above configurations are correct.\r\n\r\n> \r\n> 3)\r\n> If the above is true, then do we have a way to make sure that the user has\r\n> given this configuration exactly the above way? If users end up giving\r\n> feedback_slots as some random slot (say A_slot4 or incomplete list), do we\r\n> validate that? (I have not looked at code yet, just trying to understand design\r\n> first).\r\n\r\nThe patch doesn't validate if the feedback slots belong to the correct\r\nsubscriptions on remote server. It only validates if the slot is an existing,\r\nvalid, logical slot. I think there are few challenges to validate it further.\r\nE.g. We need a way to identify the which server the slot is replicating\r\nchanges to, which could be tricky as the slot currently doesn't have any info\r\nto identify the remote server. Besides, the slot could be inactive temporarily\r\ndue to some subscriber side error, in which case we cannot verify the\r\nsubscription that used it.\r\n\r\n> \r\n> 4)\r\n> Now coming to this:\r\n> \r\n> > The apply worker will get the oldest\r\n> > confirmed flush LSN among the specified slots and send the LSN as a\r\n> > feedback message to the walsender.\r\n> \r\n> There will be one apply worker on B which will be due to B_sub1, so will it\r\n> check confirmed_lsn of all slots A_sub1,A_sub2, A_sub3? Won't it be\r\n> sufficient to check confimed_lsn of say slot A_sub1 alone which has\r\n> subscribed to table 't' on which delete has been performed? Rest of the lots\r\n> (A_sub2, A_sub3) might have subscribed to different tables?\r\n\r\nI think it's theoretically correct to only check the A_sub1. We could document\r\nthat user can do this by identifying the tables that each subscription\r\nreplicates, but it may not be user friendly.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n",
"msg_date": "Tue, 10 Sep 2024 11:00:01 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
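Put as DDL, the simplest two-node case of the configuration discussed above might look roughly like this under the POC. Note that feedback_slots is the option proposed by the patch set and does not exist in released PostgreSQL; the connection strings are placeholders, and origin = none is the usual setting for bidirectional setups.

-- On Node A:
CREATE PUBLICATION a_pub1 FOR ALL TABLES;
CREATE SUBSCRIPTION a_sub1
    CONNECTION 'host=node_b dbname=postgres'
    PUBLICATION b_pub1
    -- b_sub1 is the slot on Node A that carries A's changes to Node B
    WITH (origin = none, feedback_slots = 'b_sub1');

-- On Node B:
CREATE PUBLICATION b_pub1 FOR ALL TABLES;
CREATE SUBSCRIPTION b_sub1
    CONNECTION 'host=node_a dbname=postgres'
    PUBLICATION a_pub1
    -- a_sub1 is the slot on Node B that carries B's changes to Node A
    WITH (origin = none, feedback_slots = 'a_sub1');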
{
"msg_contents": "On Thu, Sep 5, 2024 at 5:07 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> ---\n> ISSUES and SOLUTION\n> ---\n>\n> To detect update_deleted conflicts, we need to search for dead tuples in the\n> table. However, dead tuples can be removed by VACUUM at any time. Therefore, to\n> ensure consistent and accurate conflict detection, tuples deleted by other\n> origins must not be removed by VACUUM before the conflict detection process. If\n> the tuples are removed prematurely, it might lead to incorrect conflict\n> identification and resolution, causing data divergence between nodes.\n>\n> Here is an example of how VACUUM could affect conflict detection and how to\n> prevent this issue. Assume we have a bidirectional cluster with two nodes, A\n> and B.\n>\n> Node A:\n> T1: INSERT INTO t (id, value) VALUES (1,1);\n> T2: DELETE FROM t WHERE id = 1;\n>\n> Node B:\n> T3: UPDATE t SET value = 2 WHERE id = 1;\n>\n> To retain the deleted tuples, the initial idea was that once transaction T2 had\n> been applied to both nodes, there was no longer a need to preserve the dead\n> tuple on Node A. However, a scenario arises where transactions T3 and T2 occur\n> concurrently, with T3 committing slightly earlier than T2. In this case, if\n> Node B applies T2 and Node A removes the dead tuple (1,1) via VACUUM, and then\n> Node A applies T3 after the VACUUM operation, it can only result in an\n> update_missing conflict. Given that the default resolution for update_missing\n> conflicts is apply_or_skip (e.g. convert update to insert if possible and apply\n> the insert), Node A will eventually hold a row (1,2) while Node B becomes\n> empty, causing data inconsistency.\n>\n> Therefore, the strategy needs to be expanded as follows: Node A cannot remove\n> the dead tuple until:\n> (a) The DELETE operation is replayed on all remote nodes, *AND*\n> (b) The transactions on logical standbys occurring before the replay of Node\n> A's DELETE are replayed on Node A as well.\n>\n> ---\n> THE DESIGN\n> ---\n>\n> To achieve the above, we plan to allow the logical walsender to maintain and\n> advance the slot.xmin to protect the data in the user table and introduce a new\n> logical standby feedback message. This message reports the WAL position that\n> has been replayed on the logical standby *AND* the changes occurring on the\n> logical standby before the WAL position are also replayed to the walsender's\n> node (where the walsender is running). After receiving the new feedback\n> message, the walsender will advance the slot.xmin based on the flush info,\n> similar to the advancement of catalog_xmin. Currently, the effective_xmin/xmin\n> of logical slot are unused during logical replication, so I think it's safe and\n> won't cause side-effect to reuse the xmin for this feature.\n>\n> We have introduced a new subscription option (feedback_slots='slot1,...'),\n> where these slots will be used to check condition (b): the transactions on\n> logical standbys occurring before the replay of Node A's DELETE are replayed on\n> Node A as well. Therefore, on Node B, users should specify the slots\n> corresponding to Node A in this option. The apply worker will get the oldest\n> confirmed flush LSN among the specified slots and send the LSN as a feedback\n> message to the walsender. -- I also thought of making it an automaic way, e.g.\n> let apply worker select the slots that acquired by the walsenders which connect\n> to the same remote server(e.g. 
if apply worker's connection info or some other\n> flags is same as the walsender's connection info). But it seems tricky because\n> if some slots are inactive which means the walsenders are not there, the apply\n> worker could not find the correct slots to check unless we save the host along\n> with the slot's persistence data.\n>\n> The new feedback message is sent only if feedback_slots is not NULL.\n>\n\nDon't you need to deal with versioning stuff for sending this new\nmessage? I mean what if the receiver side of this message is old and\ndoesn't support this new message.\n\nOne minor comment on 0003\n=======================\n1.\nget_slot_confirmed_flush()\n{\n...\n+ /*\n+ * To prevent concurrent slot dropping and creation while filtering the\n+ * slots, take the ReplicationSlotControlLock outside of the loop.\n+ */\n+ LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n+\n+ foreach_ptr(String, name, MySubscription->feedback_slots)\n+ {\n+ XLogRecPtr confirmed_flush;\n+ ReplicationSlot *slot;\n+\n+ slot = ValidateAndGetFeedbackSlot(strVal(name));\n\nWhy do we need to validate slots each time here? Isn't it better to do it once?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Sep 2024 16:54:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tuesday, September 10, 2024 7:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Sep 5, 2024 at 5:07 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > ---\r\n> > ISSUES and SOLUTION\r\n> > ---\r\n> >\r\n> > To detect update_deleted conflicts, we need to search for dead tuples\r\n> > in the table. However, dead tuples can be removed by VACUUM at any\r\n> > time. Therefore, to ensure consistent and accurate conflict detection,\r\n> > tuples deleted by other origins must not be removed by VACUUM before\r\n> > the conflict detection process. If the tuples are removed prematurely,\r\n> > it might lead to incorrect conflict identification and resolution, causing data\r\n> divergence between nodes.\r\n> >\r\n> > Here is an example of how VACUUM could affect conflict detection and\r\n> > how to prevent this issue. Assume we have a bidirectional cluster with\r\n> > two nodes, A and B.\r\n> >\r\n> > Node A:\r\n> > T1: INSERT INTO t (id, value) VALUES (1,1);\r\n> > T2: DELETE FROM t WHERE id = 1;\r\n> >\r\n> > Node B:\r\n> > T3: UPDATE t SET value = 2 WHERE id = 1;\r\n> >\r\n> > To retain the deleted tuples, the initial idea was that once\r\n> > transaction T2 had been applied to both nodes, there was no longer a\r\n> > need to preserve the dead tuple on Node A. However, a scenario arises\r\n> > where transactions T3 and T2 occur concurrently, with T3 committing\r\n> > slightly earlier than T2. In this case, if Node B applies T2 and Node\r\n> > A removes the dead tuple (1,1) via VACUUM, and then Node A applies T3\r\n> > after the VACUUM operation, it can only result in an update_missing\r\n> > conflict. Given that the default resolution for update_missing\r\n> > conflicts is apply_or_skip (e.g. convert update to insert if possible\r\n> > and apply the insert), Node A will eventually hold a row (1,2) while Node B\r\n> becomes empty, causing data inconsistency.\r\n> >\r\n> > Therefore, the strategy needs to be expanded as follows: Node A cannot\r\n> > remove the dead tuple until:\r\n> > (a) The DELETE operation is replayed on all remote nodes, *AND*\r\n> > (b) The transactions on logical standbys occurring before the replay\r\n> > of Node A's DELETE are replayed on Node A as well.\r\n> >\r\n> > ---\r\n> > THE DESIGN\r\n> > ---\r\n> >\r\n> > To achieve the above, we plan to allow the logical walsender to\r\n> > maintain and advance the slot.xmin to protect the data in the user\r\n> > table and introduce a new logical standby feedback message. This\r\n> > message reports the WAL position that has been replayed on the logical\r\n> > standby *AND* the changes occurring on the logical standby before the\r\n> > WAL position are also replayed to the walsender's node (where the\r\n> > walsender is running). After receiving the new feedback message, the\r\n> > walsender will advance the slot.xmin based on the flush info, similar\r\n> > to the advancement of catalog_xmin. Currently, the effective_xmin/xmin\r\n> > of logical slot are unused during logical replication, so I think it's safe and\r\n> won't cause side-effect to reuse the xmin for this feature.\r\n> >\r\n> > We have introduced a new subscription option\r\n> > (feedback_slots='slot1,...'), where these slots will be used to check\r\n> > condition (b): the transactions on logical standbys occurring before\r\n> > the replay of Node A's DELETE are replayed on Node A as well.\r\n> > Therefore, on Node B, users should specify the slots corresponding to\r\n> > Node A in this option. 
The apply worker will get the oldest confirmed\r\n> > flush LSN among the specified slots and send the LSN as a feedback\r\n> message to the walsender. -- I also thought of making it an automaic way, e.g.\r\n> > let apply worker select the slots that acquired by the walsenders\r\n> > which connect to the same remote server(e.g. if apply worker's\r\n> > connection info or some other flags is same as the walsender's\r\n> > connection info). But it seems tricky because if some slots are\r\n> > inactive which means the walsenders are not there, the apply worker\r\n> > could not find the correct slots to check unless we save the host along with\r\n> the slot's persistence data.\r\n> >\r\n> > The new feedback message is sent only if feedback_slots is not NULL.\r\n> >\r\n> \r\n> Don't you need to deal with versioning stuff for sending this new message? I\r\n> mean what if the receiver side of this message is old and doesn't support this\r\n> new message.\r\n\r\nYes, I think we can avoid sending the new message if the remote server version\r\ndoesn't support handling this message (e.g. server_version < 18). Will address\r\nthis in next version.\r\n\r\n> \r\n> One minor comment on 0003\r\n> =======================\r\n> 1.\r\n> get_slot_confirmed_flush()\r\n> {\r\n> ...\r\n> + /*\r\n> + * To prevent concurrent slot dropping and creation while filtering the\r\n> + * slots, take the ReplicationSlotControlLock outside of the loop.\r\n> + */\r\n> + LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\r\n> +\r\n> + foreach_ptr(String, name, MySubscription->feedback_slots) { XLogRecPtr\r\n> + confirmed_flush; ReplicationSlot *slot;\r\n> +\r\n> + slot = ValidateAndGetFeedbackSlot(strVal(name));\r\n> \r\n> Why do we need to validate slots each time here? Isn't it better to do it once?\r\n\r\nI think it's possible that the slot was correct but changed or dropped later,\r\nso it could be useful to give a warning in this case to hint user to adjust the\r\nslots, otherwise, the xmin of the publisher's slot won't be advanced and might\r\ncause dead tuples accumulation. This is similar to the checks we performed for\r\nthe slots in \"synchronized_standby_slots\". (E.g. StandbySlotsHaveCaughtup)\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 11 Sep 2024 03:02:48 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 4:30 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 10, 2024 5:56 PM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Tue, Sep 10, 2024 at 1:40 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\n> > wrote:\n> > >\n> > > On Tuesday, September 10, 2024 2:45 PM shveta malik\n> > <shveta.malik@gmail.com> wrote:\n> > > >\n> > > > Thank You Hou-San for explaining the design. But to make it easier\n> > > > to understand, would you be able to explain the sequence/timeline of\n> > > > the\n> > > > *new* actions performed by the walsender and the apply processes for\n> > > > the given example along with new feedback_slot config needed\n> > > >\n> > > > Node A: (Procs: walsenderA, applyA)\n> > > > T1: INSERT INTO t (id, value) VALUES (1,1); ts=10.00 AM\n> > > > T2: DELETE FROM t WHERE id = 1; ts=10.02 AM\n> > > >\n> > > > Node B: (Procs: walsenderB, applyB)\n> > > > T3: UPDATE t SET value = 2 WHERE id = 1; ts=10.01 AM\n> > >\n> > > Thanks for reviewing! Let me elaborate further on the example:\n> > >\n> > > On node A, feedback_slots should include the logical slot that used to\n> > > replicate changes from Node A to Node B. On node B, feedback_slots\n> > > should include the logical slot that replicate changes from Node B to Node A.\n> > >\n> > > Assume the slot.xmin on Node A has been initialized to a valid\n> > > number(740) before the following flow:\n> > >\n> > > Node A executed T1 - 10.00 AM\n> > > T1 replicated and applied on Node B - 10.0001 AM\n> > > Node B executed T3 - 10.01 AM\n> > > Node A executed T2 (741) - 10.02 AM\n> > > T2 replicated and applied on Node B (delete_missing) - 10.03 AM\n> >\n> > Not related to this feature, but do you mean delete_origin_differ here?\n>\n> Oh sorry, It's a miss. I meant delete_origin_differ.\n>\n> >\n> > > T3 replicated and applied on Node A (new action, detect\n> > update_deleted) - 10.04 AM\n> > >\n> > > (new action) Apply worker on Node B has confirmed that T2 has been\n> > > applied locally and the transactions before T2 (e.g., T3) has been\n> > > replicated and applied to Node A (e.g. feedback_slot.confirmed_flush_lsn\n> > >= lsn of the local\n> > > replayed T2), thus send the new feedback message to Node A.\n> > - 10.05 AM\n> > >\n> > > (new action) Walsender on Node A received the message and would\n> > > advance the slot.xmin.- 10.06 AM\n> > >\n> > > Then, after the slot.xmin is advanced to a number greater than 741,\n> > > the VACUUM would be able to remove the dead tuple on Node A.\n> > >\n> >\n> > Thanks for the example. Can you please review below and let me know if my\n> > understanding is correct.\n> >\n> > 1)\n> > In a bidirectional replication setup, the user has to create slots in a way that\n> > NodeA's sub's slot is Node B's feedback_slot and Node B's sub's slot is Node\n> > A's feedback slot. 
And then only this feature will work well, is it correct to say?\n>\n> Yes, your understanding is correct.\n>\n> >\n> > 2)\n> > Now coming back to multiple feedback_slots in a subscription, is the below\n> > correct:\n> >\n> > Say Node A has publications and subscriptions as follow:\n> > ------------------\n> > A_pub1\n> >\n> > A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)\n> > A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)\n> > A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)\n> >\n> >\n> > Say Node B has publications and subscriptions as follow:\n> > ------------------\n> > B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)\n> >\n> > B_pub1\n> > B_pub2\n> > B_pub3\n> >\n> > Then what will be the feedback_slot configuration for all subscriptions of A and\n> > B? Is below correct:\n> > ------------------\n> > A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1\n> > B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3\n>\n> Right. The above configurations are correct.\n\nOkay. It seems difficult to understand configuration from user's perspective.\n\n> >\n> > 3)\n> > If the above is true, then do we have a way to make sure that the user has\n> > given this configuration exactly the above way? If users end up giving\n> > feedback_slots as some random slot (say A_slot4 or incomplete list), do we\n> > validate that? (I have not looked at code yet, just trying to understand design\n> > first).\n>\n> The patch doesn't validate if the feedback slots belong to the correct\n> subscriptions on remote server. It only validates if the slot is an existing,\n> valid, logical slot. I think there are few challenges to validate it further.\n> E.g. We need a way to identify the which server the slot is replicating\n> changes to, which could be tricky as the slot currently doesn't have any info\n> to identify the remote server. Besides, the slot could be inactive temporarily\n> due to some subscriber side error, in which case we cannot verify the\n> subscription that used it.\n\nOkay, I understand the challenges here.\n\n> >\n> > 4)\n> > Now coming to this:\n> >\n> > > The apply worker will get the oldest\n> > > confirmed flush LSN among the specified slots and send the LSN as a\n> > > feedback message to the walsender.\n> >\n> > There will be one apply worker on B which will be due to B_sub1, so will it\n> > check confirmed_lsn of all slots A_sub1,A_sub2, A_sub3? Won't it be\n> > sufficient to check confimed_lsn of say slot A_sub1 alone which has\n> > subscribed to table 't' on which delete has been performed? Rest of the lots\n> > (A_sub2, A_sub3) might have subscribed to different tables?\n>\n> I think it's theoretically correct to only check the A_sub1. We could document\n> that user can do this by identifying the tables that each subscription\n> replicates, but it may not be user friendly.\n>\n\nSorry, I fail to understand how user can identify the tables and give\nfeedback_slots accordingly? I thought feedback_slots is a one time\nconfiguration when replication is setup (or say setup changes in\nfuture); it can not keep on changing with each query. Or am I missing\nsomething?\n\nIMO, it is something which should be identified internally. Since the\nquery is on table 't1', feedback-slot which is for 't1' shall be used\nto check lsn. But on rethinking,this optimization may not be worth the\neffort, the identification part could be tricky, so it might be okay\nto check all the slots.\n\n~~\n\nAnother query is about 3 node setup. 
I couldn't figure out what would\nbe feedback_slots setting when it is not bidirectional, as in consider\nthe case where there are three nodes A,B,C. Node C is subscribing to\nboth Node A and Node B. Node A and Node B are the ones doing\nconcurrent \"update\" and \"delete\" which will both be replicated to Node\nC. In this case what will be the feedback_slots setting on Node C? We\ndon't have any slots here which will be replicating changes from Node\nC to Node A and Node C to Node B. This is given in [3] in your first\nemail ([1])\n\n[1]:\nhttps://www.postgresql.org/message-id/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 11 Sep 2024 09:48:25 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Wednesday, September 11, 2024 12:18 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> \r\n> On Tue, Sep 10, 2024 at 4:30 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Tuesday, September 10, 2024 5:56 PM shveta malik\r\n> <shveta.malik@gmail.com> wrote:\r\n> > >\r\n> > > Thanks for the example. Can you please review below and let me know\r\n> > > if my understanding is correct.\r\n> > >\r\n> > > 1)\r\n> > > In a bidirectional replication setup, the user has to create slots\r\n> > > in a way that NodeA's sub's slot is Node B's feedback_slot and Node\r\n> > > B's sub's slot is Node A's feedback slot. And then only this feature will\r\n> work well, is it correct to say?\r\n> >\r\n> > Yes, your understanding is correct.\r\n> >\r\n> > >\r\n> > > 2)\r\n> > > Now coming back to multiple feedback_slots in a subscription, is the\r\n> > > below\r\n> > > correct:\r\n> > >\r\n> > > Say Node A has publications and subscriptions as follow:\r\n> > > ------------------\r\n> > > A_pub1\r\n> > >\r\n> > > A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)\r\n> > > A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)\r\n> > > A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)\r\n> > >\r\n> > >\r\n> > > Say Node B has publications and subscriptions as follow:\r\n> > > ------------------\r\n> > > B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)\r\n> > >\r\n> > > B_pub1\r\n> > > B_pub2\r\n> > > B_pub3\r\n> > >\r\n> > > Then what will be the feedback_slot configuration for all\r\n> > > subscriptions of A and B? Is below correct:\r\n> > > ------------------\r\n> > > A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1\r\n> > > B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3\r\n> >\r\n> > Right. The above configurations are correct.\r\n> \r\n> Okay. It seems difficult to understand configuration from user's perspective.\r\n\r\nRight. I think we could give an example in the document to make it clear.\r\n\r\n> \r\n> > >\r\n> > > 3)\r\n> > > If the above is true, then do we have a way to make sure that the\r\n> > > user has given this configuration exactly the above way? If users\r\n> > > end up giving feedback_slots as some random slot (say A_slot4 or\r\n> > > incomplete list), do we validate that? (I have not looked at code\r\n> > > yet, just trying to understand design first).\r\n> >\r\n> > The patch doesn't validate if the feedback slots belong to the correct\r\n> > subscriptions on remote server. It only validates if the slot is an\r\n> > existing, valid, logical slot. I think there are few challenges to validate it\r\n> further.\r\n> > E.g. We need a way to identify the which server the slot is\r\n> > replicating changes to, which could be tricky as the slot currently\r\n> > doesn't have any info to identify the remote server. 
Besides, the slot\r\n> > could be inactive temporarily due to some subscriber side error, in\r\n> > which case we cannot verify the subscription that used it.\r\n> \r\n> Okay, I understand the challenges here.\r\n> \r\n> > >\r\n> > > 4)\r\n> > > Now coming to this:\r\n> > >\r\n> > > > The apply worker will get the oldest confirmed flush LSN among the\r\n> > > > specified slots and send the LSN as a feedback message to the\r\n> > > > walsender.\r\n> > >\r\n> > > There will be one apply worker on B which will be due to B_sub1, so\r\n> > > will it check confirmed_lsn of all slots A_sub1,A_sub2, A_sub3?\r\n> > > Won't it be sufficient to check confimed_lsn of say slot A_sub1\r\n> > > alone which has subscribed to table 't' on which delete has been\r\n> > > performed? Rest of the lots (A_sub2, A_sub3) might have subscribed to\r\n> different tables?\r\n> >\r\n> > I think it's theoretically correct to only check the A_sub1. We could\r\n> > document that user can do this by identifying the tables that each\r\n> > subscription replicates, but it may not be user friendly.\r\n> >\r\n> \r\n> Sorry, I fail to understand how user can identify the tables and give\r\n> feedback_slots accordingly? I thought feedback_slots is a one time\r\n> configuration when replication is setup (or say setup changes in future); it can\r\n> not keep on changing with each query. Or am I missing something?\r\n\r\nI meant that user have all the publication information(including the tables\r\nadded in a publication) that the subscription subscribes to, and could also\r\nhave the slot_name, so I think it's possible to identify the tables that each\r\nsubscription includes and add the feedback_slots correspondingly before\r\nstarting the replication. It would be pretty complicate although possible, so I\r\nprefer to not mention it in the first place if it could not bring much\r\nbenefits.\r\n\r\n> \r\n> IMO, it is something which should be identified internally. Since the query is on\r\n> table 't1', feedback-slot which is for 't1' shall be used to check lsn. But on\r\n> rethinking,this optimization may not be worth the effort, the identification part\r\n> could be tricky, so it might be okay to check all the slots.\r\n\r\nI agree that identifying these internally would add complexity.\r\n\r\n> \r\n> ~~\r\n> \r\n> Another query is about 3 node setup. I couldn't figure out what would be\r\n> feedback_slots setting when it is not bidirectional, as in consider the case\r\n> where there are three nodes A,B,C. Node C is subscribing to both Node A and\r\n> Node B. Node A and Node B are the ones doing concurrent \"update\" and\r\n> \"delete\" which will both be replicated to Node C. In this case what will be the\r\n> feedback_slots setting on Node C? We don't have any slots here which will be\r\n> replicating changes from Node C to Node A and Node C to Node B. This is given\r\n> in [3] in your first email ([1])\r\n\r\nThanks for pointing this, the link was a bit misleading. I think the solution\r\nproposed in this thread is only used to allow detecting update_deleted reliably\r\nin a bidirectional cluster. For non- bidirectional cases, it would be more\r\ntricky to predict the timing till when should we retain the dead tuples.\r\n\r\n\r\n> \r\n> [1]:\r\n> https://www.postgresql.org/message-id/OS0PR01MB5716BE80DAEB0EE2A\r\n> 6A5D1F5949D2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 11 Sep 2024 04:45:08 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
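To make the configuration agreed in the message above concrete, here is a sketch in SQL. It assumes the feedback_slots subscription option proposed in this thread (the option name and syntax are part of the proposal, not of released PostgreSQL) and uses the subscription and slot names from the example, where each subscription's slot keeps its default name:

-- On Node A: its subscriptions list the slot used by Node B's subscription
-- (that slot, b_sub1, resides on Node A itself):
ALTER SUBSCRIPTION a_sub1 SET (feedback_slots = 'b_sub1');
ALTER SUBSCRIPTION a_sub2 SET (feedback_slots = 'b_sub1');
ALTER SUBSCRIPTION a_sub3 SET (feedback_slots = 'b_sub1');

-- On Node B: its single subscription lists the slots used by Node A's
-- subscriptions (those slots reside on Node B):
ALTER SUBSCRIPTION b_sub1 SET (feedback_slots = 'a_sub1, a_sub2, a_sub3');

The shape of the problem is visible in the sketch: every subscription on one node has to list every slot carrying changes in the opposite direction, which is the part the reviewers found hard to get right by hand.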
{
"msg_contents": "On Wed, Sep 11, 2024 at 10:15 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, September 11, 2024 12:18 PM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Tue, Sep 10, 2024 at 4:30 PM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com>\n> > wrote:\n> > >\n> > > On Tuesday, September 10, 2024 5:56 PM shveta malik\n> > <shveta.malik@gmail.com> wrote:\n> > > >\n> > > > Thanks for the example. Can you please review below and let me know\n> > > > if my understanding is correct.\n> > > >\n> > > > 1)\n> > > > In a bidirectional replication setup, the user has to create slots\n> > > > in a way that NodeA's sub's slot is Node B's feedback_slot and Node\n> > > > B's sub's slot is Node A's feedback slot. And then only this feature will\n> > work well, is it correct to say?\n> > >\n> > > Yes, your understanding is correct.\n> > >\n> > > >\n> > > > 2)\n> > > > Now coming back to multiple feedback_slots in a subscription, is the\n> > > > below\n> > > > correct:\n> > > >\n> > > > Say Node A has publications and subscriptions as follow:\n> > > > ------------------\n> > > > A_pub1\n> > > >\n> > > > A_sub1 (subscribing to B_pub1 with the default slot_name of A_sub1)\n> > > > A_sub2 (subscribing to B_pub2 with the default slot_name of A_sub2)\n> > > > A_sub3 (subscribing to B_pub3 with the default slot_name of A_sub3)\n> > > >\n> > > >\n> > > > Say Node B has publications and subscriptions as follow:\n> > > > ------------------\n> > > > B_sub1 (subscribing to A_pub1 with the default slot_name of B_sub1)\n> > > >\n> > > > B_pub1\n> > > > B_pub2\n> > > > B_pub3\n> > > >\n> > > > Then what will be the feedback_slot configuration for all\n> > > > subscriptions of A and B? Is below correct:\n> > > > ------------------\n> > > > A_sub1, A_sub2, A_sub3: feedback_slots=B_sub1\n> > > > B_sub1: feedback_slots=A_sub1,A_sub2, A_sub3\n> > >\n> > > Right. The above configurations are correct.\n> >\n> > Okay. It seems difficult to understand configuration from user's perspective.\n>\n> Right. I think we could give an example in the document to make it clear.\n>\n> >\n> > > >\n> > > > 3)\n> > > > If the above is true, then do we have a way to make sure that the\n> > > > user has given this configuration exactly the above way? If users\n> > > > end up giving feedback_slots as some random slot (say A_slot4 or\n> > > > incomplete list), do we validate that? (I have not looked at code\n> > > > yet, just trying to understand design first).\n> > >\n> > > The patch doesn't validate if the feedback slots belong to the correct\n> > > subscriptions on remote server. It only validates if the slot is an\n> > > existing, valid, logical slot. I think there are few challenges to validate it\n> > further.\n> > > E.g. We need a way to identify the which server the slot is\n> > > replicating changes to, which could be tricky as the slot currently\n> > > doesn't have any info to identify the remote server. 
Besides, the slot\n> > > could be inactive temporarily due to some subscriber side error, in\n> > > which case we cannot verify the subscription that used it.\n> >\n> > Okay, I understand the challenges here.\n> >\n> > > >\n> > > > 4)\n> > > > Now coming to this:\n> > > >\n> > > > > The apply worker will get the oldest confirmed flush LSN among the\n> > > > > specified slots and send the LSN as a feedback message to the\n> > > > > walsender.\n> > > >\n> > > > There will be one apply worker on B which will be due to B_sub1, so\n> > > > will it check confirmed_lsn of all slots A_sub1,A_sub2, A_sub3?\n> > > > Won't it be sufficient to check confimed_lsn of say slot A_sub1\n> > > > alone which has subscribed to table 't' on which delete has been\n> > > > performed? Rest of the lots (A_sub2, A_sub3) might have subscribed to\n> > different tables?\n> > >\n> > > I think it's theoretically correct to only check the A_sub1. We could\n> > > document that user can do this by identifying the tables that each\n> > > subscription replicates, but it may not be user friendly.\n> > >\n> >\n> > Sorry, I fail to understand how user can identify the tables and give\n> > feedback_slots accordingly? I thought feedback_slots is a one time\n> > configuration when replication is setup (or say setup changes in future); it can\n> > not keep on changing with each query. Or am I missing something?\n>\n> I meant that user have all the publication information(including the tables\n> added in a publication) that the subscription subscribes to, and could also\n> have the slot_name, so I think it's possible to identify the tables that each\n> subscription includes and add the feedback_slots correspondingly before\n> starting the replication. It would be pretty complicate although possible, so I\n> prefer to not mention it in the first place if it could not bring much\n> benefits.\n>\n> >\n> > IMO, it is something which should be identified internally. Since the query is on\n> > table 't1', feedback-slot which is for 't1' shall be used to check lsn. But on\n> > rethinking,this optimization may not be worth the effort, the identification part\n> > could be tricky, so it might be okay to check all the slots.\n>\n> I agree that identifying these internally would add complexity.\n>\n> >\n> > ~~\n> >\n> > Another query is about 3 node setup. I couldn't figure out what would be\n> > feedback_slots setting when it is not bidirectional, as in consider the case\n> > where there are three nodes A,B,C. Node C is subscribing to both Node A and\n> > Node B. Node A and Node B are the ones doing concurrent \"update\" and\n> > \"delete\" which will both be replicated to Node C. In this case what will be the\n> > feedback_slots setting on Node C? We don't have any slots here which will be\n> > replicating changes from Node C to Node A and Node C to Node B. This is given\n> > in [3] in your first email ([1])\n>\n> Thanks for pointing this, the link was a bit misleading. I think the solution\n> proposed in this thread is only used to allow detecting update_deleted reliably\n> in a bidirectional cluster. For non- bidirectional cases, it would be more\n> tricky to predict the timing till when should we retain the dead tuples.\n>\n\nSo in brief, this solution is only for bidrectional setup? 
For\nnon-bidirectional, feedback_slots is non-configurable and thus\nirrelevant.\n\nIrrespective of above, if user ends up setting feedback_slot to some\nrandom but existing slot which is not at all consuming changes, then\nit may so happen that the node will never send feedback msg to another\nnode resulting in accumulation of dead tuples on another node. Is that\na possibility?\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 11 Sep 2024 10:32:48 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Wednesday, September 11, 2024 1:03 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> \r\n> On Wed, Sep 11, 2024 at 10:15 AM Zhijie Hou (Fujitsu)\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, September 11, 2024 12:18 PM shveta malik\r\n> <shveta.malik@gmail.com> wrote:\r\n> > >\r\n> > > ~~\r\n> > >\r\n> > > Another query is about 3 node setup. I couldn't figure out what\r\n> > > would be feedback_slots setting when it is not bidirectional, as in\r\n> > > consider the case where there are three nodes A,B,C. Node C is\r\n> > > subscribing to both Node A and Node B. Node A and Node B are the\r\n> > > ones doing concurrent \"update\" and \"delete\" which will both be\r\n> > > replicated to Node C. In this case what will be the feedback_slots\r\n> > > setting on Node C? We don't have any slots here which will be\r\n> > > replicating changes from Node C to Node A and Node C to Node B. This\r\n> > > is given in [3] in your first email ([1])\r\n> >\r\n> > Thanks for pointing this, the link was a bit misleading. I think the\r\n> > solution proposed in this thread is only used to allow detecting\r\n> > update_deleted reliably in a bidirectional cluster. For non-\r\n> > bidirectional cases, it would be more tricky to predict the timing till when\r\n> should we retain the dead tuples.\r\n> >\r\n> \r\n> So in brief, this solution is only for bidrectional setup? For non-bidirectional,\r\n> feedback_slots is non-configurable and thus irrelevant.\r\n\r\nRight.\r\n\r\n> \r\n> Irrespective of above, if user ends up setting feedback_slot to some random but\r\n> existing slot which is not at all consuming changes, then it may so happen that\r\n> the node will never send feedback msg to another node resulting in\r\n> accumulation of dead tuples on another node. Is that a possibility?\r\n\r\nYes, It's possible. I think this is a common situation for this kind of user\r\nspecified options. Like the user DML will be blocked, if any inactive standby\r\nnames are added synchronous_standby_names.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Wed, 11 Sep 2024 05:36:55 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
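The synchronous_standby_names analogy drawn above can be made concrete: a standby name that never reports back stalls synchronous commits in much the same way a feedback slot that never advances would stall dead-tuple cleanup on the other node. This is only an illustration of the analogy, and 'inactive_standby' is a made-up name:

-- With synchronous_commit at its default setting, commits now wait for a
-- standby that will never acknowledge them:
ALTER SYSTEM SET synchronous_standby_names = 'inactive_standby';
SELECT pg_reload_conf();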
{
"msg_contents": "On Wed, Sep 11, 2024 at 8:32 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 10, 2024 7:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > One minor comment on 0003\n> > =======================\n> > 1.\n> > get_slot_confirmed_flush()\n> > {\n> > ...\n> > + /*\n> > + * To prevent concurrent slot dropping and creation while filtering the\n> > + * slots, take the ReplicationSlotControlLock outside of the loop.\n> > + */\n> > + LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n> > +\n> > + foreach_ptr(String, name, MySubscription->feedback_slots) { XLogRecPtr\n> > + confirmed_flush; ReplicationSlot *slot;\n> > +\n> > + slot = ValidateAndGetFeedbackSlot(strVal(name));\n> >\n> > Why do we need to validate slots each time here? Isn't it better to do it once?\n>\n> I think it's possible that the slot was correct but changed or dropped later,\n> so it could be useful to give a warning in this case to hint user to adjust the\n> slots, otherwise, the xmin of the publisher's slot won't be advanced and might\n> cause dead tuples accumulation. This is similar to the checks we performed for\n> the slots in \"synchronized_standby_slots\". (E.g. StandbySlotsHaveCaughtup)\n>\n\nIn the case of \"synchronized_standby_slots\", we seem to be invoking\nsuch checks via StandbySlotsHaveCaughtup() when we need to wait for\nWAL and also we have some optimizations that avoid the frequent\nchecking for validation checks. OTOH, this patch doesn't have any such\noptimizations. We can optimize it by maintaining a local copy of\nfeedback slots to avoid looping all the slots each time (if this is\nrequired, we can make it a top-up patch so that it can be reviewed\nseparately). I have also thought of maintaining the updated value of\nconfirmed_flush_lsn for feedback slots corresponding to a subscription\nin shared memory but that seems tricky because then we have to\nmaintain slot->subscription mapping. Can you think of any other ways?\n\nHaving said that it is better to profile this in various scenarios\nlike by increasing the frequency of keep_alieve message and or in idle\nsubscriber cases where we try to send this new feedback message.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Sep 2024 14:40:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
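For reference, the quantity the apply worker computes in the quoted code (the oldest confirmed flush LSN among the configured feedback slots) can be expressed against the existing catalog view. This is only an SQL illustration of the internal computation, reusing the slot names from the earlier example:

-- Run on the node that hosts the feedback slots:
SELECT min(confirmed_flush_lsn) AS oldest_confirmed_flush
FROM pg_replication_slots
WHERE slot_type = 'logical'
  AND slot_name IN ('a_sub1', 'a_sub2', 'a_sub3');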
{
"msg_contents": "On Wed, Sep 11, 2024 at 11:07 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, September 11, 2024 1:03 PM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > > >\n> > > > Another query is about 3 node setup. I couldn't figure out what\n> > > > would be feedback_slots setting when it is not bidirectional, as in\n> > > > consider the case where there are three nodes A,B,C. Node C is\n> > > > subscribing to both Node A and Node B. Node A and Node B are the\n> > > > ones doing concurrent \"update\" and \"delete\" which will both be\n> > > > replicated to Node C. In this case what will be the feedback_slots\n> > > > setting on Node C? We don't have any slots here which will be\n> > > > replicating changes from Node C to Node A and Node C to Node B. This\n> > > > is given in [3] in your first email ([1])\n> > >\n> > > Thanks for pointing this, the link was a bit misleading. I think the\n> > > solution proposed in this thread is only used to allow detecting\n> > > update_deleted reliably in a bidirectional cluster. For non-\n> > > bidirectional cases, it would be more tricky to predict the timing till when\n> > should we retain the dead tuples.\n> > >\n> >\n> > So in brief, this solution is only for bidrectional setup? For non-bidirectional,\n> > feedback_slots is non-configurable and thus irrelevant.\n>\n> Right.\n>\n\nOne possible idea to address the non-bidirectional case raised by\nShveta is to use a time-based cut-off to remove dead tuples. As\nmentioned earlier in my email [1], we can define a new GUC parameter\nsay vacuum_committs_age which would indicate that we will allow rows\nto be removed only if the modified time of the tuple as indicated by\ncommitts module is greater than the vacuum_committs_age. We could keep\nthis parameter a table-level option without introducing a GUC as this\nmay not apply to all tables. I checked and found that some other\nreplication solutions like GoldenGate also allowed similar parameters\n(tombstone_deletes) to be specified at table level [2]. The other\nadvantage of allowing it at table level is that it won't hamper the\nperformance of hot-pruning or vacuum in general. Note, I am careful\nhere because to decide whether to remove a dead tuple or not we need\nto compare its committs_time both during hot-pruning and vacuum.\n\nNote that tombstones_deletes is a general concept used by replication\nsolutions to detect updated_deleted conflict and time-based purging is\nrecommended. See [3][4]. We previously discussed having tombstone\ntables to keep the deleted records information but it was suggested to\nprevent the vacuum from removing the required dead tuples as that\nwould be simpler than inventing a new kind of tables/store for\ntombstone_deletes [5]. So, we came up with the idea of feedback slots\ndiscussed in this email but that didn't work out in all cases and\nappears difficult to configure as pointed out by Shveta. 
So, now, we\nare back to one of the other ideas [1] discussed previously to solve\nthis problem.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Lj-PWrP789KnKxZydisHajd38rSihWXO8MVBLDwxG1Kg%40mail.gmail.com\n[2] -\nBEGIN\n DBMS_GOLDENGATE_ADM.ALTER_AUTO_CDR(\n schema_name => 'hr',\n table_name => 'employees',\n tombstone_deletes => TRUE);\nEND;\n/\n[3] - https://en.wikipedia.org/wiki/Tombstone_(data_store)\n[4] - https://docs.oracle.com/en/middleware/goldengate/core/19.1/oracle-db/automatic-conflict-detection-and-resolution1.html#GUID-423C6EE8-1C62-4085-899C-8454B8FB9C92\n[5] - https://www.postgresql.org/message-id/e4cdb849-d647-4acf-aabe-7049ae170fbf%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Sep 2024 11:37:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
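Two pieces would be needed on the user side for the idea above. Commit timestamp tracking is an existing prerequisite; the table-level option itself is only a proposal at this point, so the name, unit, and reloption syntax below are illustrative assumptions rather than an existing feature:

-- Existing prerequisite for any committs-based cut-off (needs a restart):
ALTER SYSTEM SET track_commit_timestamp = on;

-- Hypothetical table-level option from the proposal, read as "retain dead
-- tuples whose deleting transaction committed less than 300 seconds ago":
ALTER TABLE t SET (vacuum_committs_age = 300);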
{
"msg_contents": "On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > So in brief, this solution is only for bidrectional setup? For non-bidirectional,\n> > > feedback_slots is non-configurable and thus irrelevant.\n> >\n> > Right.\n> >\n>\n> One possible idea to address the non-bidirectional case raised by\n> Shveta is to use a time-based cut-off to remove dead tuples. As\n> mentioned earlier in my email [1], we can define a new GUC parameter\n> say vacuum_committs_age which would indicate that we will allow rows\n> to be removed only if the modified time of the tuple as indicated by\n> committs module is greater than the vacuum_committs_age. We could keep\n> this parameter a table-level option without introducing a GUC as this\n> may not apply to all tables. I checked and found that some other\n> replication solutions like GoldenGate also allowed similar parameters\n> (tombstone_deletes) to be specified at table level [2]. The other\n> advantage of allowing it at table level is that it won't hamper the\n> performance of hot-pruning or vacuum in general. Note, I am careful\n> here because to decide whether to remove a dead tuple or not we need\n> to compare its committs_time both during hot-pruning and vacuum.\n\n+1 on the idea, but IIUC this value doesn’t need to be significant; it\ncan be limited to just a few minutes. The one which is sufficient to\nhandle replication delays caused by network lag or other factors,\nassuming clock skew has already been addressed.\n\nThis new parameter is necessary only for cases where an UPDATE and\nDELETE on the same row occur concurrently, but the replication order\nto a third node is not preserved, which could result in data\ndivergence. Consider the following example:\n\nNode A:\n T1: INSERT INTO t (id, value) VALUES (1,1); (10.01 AM)\n T2: DELETE FROM t WHERE id = 1; (10.03 AM)\n\nNode B:\n T3: UPDATE t SET value = 2 WHERE id = 1; (10.02 AM)\n\nAssume a third node (Node C) subscribes to both Node A and Node B. The\n\"correct\" order of messages received by Node C would be T1-T3-T2, but\nit could also receive them in the order T1-T2-T3, wherein sayT3 is\nreceived with a lag of say 2 mins. In such a scenario, T3 should be\nable to recognize that the row was deleted by T2 on Node C, thereby\ndetecting the update-deleted conflict and skipping the apply.\n\nThe 'vacuum_committs_age' parameter should account for this lag, which\ncould lead to the order reversal of UPDATE and DELETE operations.\n\nAny subsequent attempt to update the same row after conflict detection\nand resolution should not pose an issue. For example, if Node A\ntriggers the following at 10:20 AM:\nUPDATE t SET value = 3 WHERE id = 1;\n\nSince the row has already been deleted, the UPDATE will not proceed\nand therefore will not generate a replication operation on the other\nnodes, indicating that vacuum need not to preserve the dead row to\nthis far.\n\nthanks\nShveta\n\n\n",
"msg_date": "Fri, 13 Sep 2024 13:25:52 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 12:56 AM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > So in brief, this solution is only for bidrectional setup? For non-bidirectional,\n> > > > feedback_slots is non-configurable and thus irrelevant.\n> > >\n> > > Right.\n> > >\n> >\n> > One possible idea to address the non-bidirectional case raised by\n> > Shveta is to use a time-based cut-off to remove dead tuples. As\n> > mentioned earlier in my email [1], we can define a new GUC parameter\n> > say vacuum_committs_age which would indicate that we will allow rows\n> > to be removed only if the modified time of the tuple as indicated by\n> > committs module is greater than the vacuum_committs_age. We could keep\n> > this parameter a table-level option without introducing a GUC as this\n> > may not apply to all tables. I checked and found that some other\n> > replication solutions like GoldenGate also allowed similar parameters\n> > (tombstone_deletes) to be specified at table level [2]. The other\n> > advantage of allowing it at table level is that it won't hamper the\n> > performance of hot-pruning or vacuum in general. Note, I am careful\n> > here because to decide whether to remove a dead tuple or not we need\n> > to compare its committs_time both during hot-pruning and vacuum.\n>\n> +1 on the idea,\n\nI agree that this idea is much simpler than the idea originally\nproposed in this thread.\n\nIIUC vacuum_committs_age specifies a time rather than an XID age. But\nhow can we implement it? If it ends up affecting the vacuum cutoff, we\nshould be careful not to end up with the same result of\nvacuum_defer_cleanup_age that was discussed before[1]. Also, I think\nthe implementation needs not to affect the performance of\nComputeXidHorizons().\n\n> but IIUC this value doesn’t need to be significant; it\n> can be limited to just a few minutes. The one which is sufficient to\n> handle replication delays caused by network lag or other factors,\n> assuming clock skew has already been addressed.\n\nI think that in a non-bidirectional case the value could need to be a\nlarge number. Is that right?\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20230317230930.nhsgk3qfk7f4axls%40awork3.anarazel.de\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 16 Sep 2024 17:38:16 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 13, 2024 at 12:56 AM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > So in brief, this solution is only for bidrectional setup? For non-bidirectional,\n> > > > > feedback_slots is non-configurable and thus irrelevant.\n> > > >\n> > > > Right.\n> > > >\n> > >\n> > > One possible idea to address the non-bidirectional case raised by\n> > > Shveta is to use a time-based cut-off to remove dead tuples. As\n> > > mentioned earlier in my email [1], we can define a new GUC parameter\n> > > say vacuum_committs_age which would indicate that we will allow rows\n> > > to be removed only if the modified time of the tuple as indicated by\n> > > committs module is greater than the vacuum_committs_age. We could keep\n> > > this parameter a table-level option without introducing a GUC as this\n> > > may not apply to all tables. I checked and found that some other\n> > > replication solutions like GoldenGate also allowed similar parameters\n> > > (tombstone_deletes) to be specified at table level [2]. The other\n> > > advantage of allowing it at table level is that it won't hamper the\n> > > performance of hot-pruning or vacuum in general. Note, I am careful\n> > > here because to decide whether to remove a dead tuple or not we need\n> > > to compare its committs_time both during hot-pruning and vacuum.\n> >\n> > +1 on the idea,\n>\n> I agree that this idea is much simpler than the idea originally\n> proposed in this thread.\n>\n> IIUC vacuum_committs_age specifies a time rather than an XID age.\n>\n\nYour understanding is correct that vacuum_committs_age specifies a time.\n\n>\n> But\n> how can we implement it? If it ends up affecting the vacuum cutoff, we\n> should be careful not to end up with the same result of\n> vacuum_defer_cleanup_age that was discussed before[1]. Also, I think\n> the implementation needs not to affect the performance of\n> ComputeXidHorizons().\n>\n\nI haven't thought about the implementation details yet but I think\nduring pruning (for example in heap_prune_satisfies_vacuum()), apart\nfrom checking if the tuple satisfies\nHeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's\ncommitts is greater than configured vacuum_committs_age (for the\ntable) to decide whether tuple can be removed. One thing to consider\nis what to do in case of aggressive vacuum where we expect\nrelfrozenxid to be advanced to FreezeLimit (at a minimum). We may want\nto just ignore vacuum_committs_age during aggressive vacuum and LOG if\nwe end up removing some tuple. This will allow users to retain deleted\ntuples by respecting the freeze limits which also avoid xid_wrap\naround. I think we can't retain tuples forever if the user\nmisconfigured vacuum_committs_age and to avoid that we can keep the\nmaximum limit on this parameter to say an hour or so. Also, users can\ntune freeze parameters if they want to retain tuples for longer.\n\n> > but IIUC this value doesn’t need to be significant; it\n> > can be limited to just a few minutes. The one which is sufficient to\n> > handle replication delays caused by network lag or other factors,\n> > assuming clock skew has already been addressed.\n>\n> I think that in a non-bidirectional case the value could need to be a\n> large number. 
Is that right?\n>\n\nAs per my understanding, even for non-bidirectional cases, the value\nshould be small. For example, in the case, pointed out by Shveta [1],\nwhere the updates from 2 nodes are received by a third node, this\nsetting is expected to be small. This setting primarily deals with\nconcurrent transactions on multiple nodes, so it should be small but I\ncould be missing something.\n\n[1] - https://www.postgresql.org/message-id/CAJpy0uAzzOzhXGH-zBc7Zt8ndXRf6r4OnLzgRrHyf8cvd%2Bfpwg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Sep 2024 12:23:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
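The comparison being proposed for pruning can be previewed from SQL, which may help to see what "committs greater than vacuum_committs_age" means in practice. This is purely an illustration: it looks at live rows via their xmin (dead rows are not visible from SQL, and pruning would really look at the deleting transaction), and it assumes track_commit_timestamp is on:

-- Would the transaction that last wrote this row still fall inside a
-- 5-minute retention window?  (The result is NULL if the commit timestamp
-- is no longer available, e.g. for frozen rows.)
SELECT ctid, xmin,
       pg_xact_commit_timestamp(xmin) AS committed_at,
       now() - pg_xact_commit_timestamp(xmin) < interval '5 minutes'
           AS within_retention_window
FROM t;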
{
"msg_contents": "On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Sep 13, 2024 at 12:56 AM shveta malik <shveta.malik@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 13, 2024 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > >\n> > > > > > So in brief, this solution is only for bidrectional setup? For non-bidirectional,\n> > > > > > feedback_slots is non-configurable and thus irrelevant.\n> > > > >\n> > > > > Right.\n> > > > >\n> > > >\n> > > > One possible idea to address the non-bidirectional case raised by\n> > > > Shveta is to use a time-based cut-off to remove dead tuples. As\n> > > > mentioned earlier in my email [1], we can define a new GUC parameter\n> > > > say vacuum_committs_age which would indicate that we will allow rows\n> > > > to be removed only if the modified time of the tuple as indicated by\n> > > > committs module is greater than the vacuum_committs_age. We could keep\n> > > > this parameter a table-level option without introducing a GUC as this\n> > > > may not apply to all tables. I checked and found that some other\n> > > > replication solutions like GoldenGate also allowed similar parameters\n> > > > (tombstone_deletes) to be specified at table level [2]. The other\n> > > > advantage of allowing it at table level is that it won't hamper the\n> > > > performance of hot-pruning or vacuum in general. Note, I am careful\n> > > > here because to decide whether to remove a dead tuple or not we need\n> > > > to compare its committs_time both during hot-pruning and vacuum.\n> > >\n> > > +1 on the idea,\n> >\n> > I agree that this idea is much simpler than the idea originally\n> > proposed in this thread.\n> >\n> > IIUC vacuum_committs_age specifies a time rather than an XID age.\n> >\n>\n> Your understanding is correct that vacuum_committs_age specifies a time.\n>\n> >\n> > But\n> > how can we implement it? If it ends up affecting the vacuum cutoff, we\n> > should be careful not to end up with the same result of\n> > vacuum_defer_cleanup_age that was discussed before[1]. Also, I think\n> > the implementation needs not to affect the performance of\n> > ComputeXidHorizons().\n> >\n>\n> I haven't thought about the implementation details yet but I think\n> during pruning (for example in heap_prune_satisfies_vacuum()), apart\n> from checking if the tuple satisfies\n> HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's\n> committs is greater than configured vacuum_committs_age (for the\n> table) to decide whether tuple can be removed.\n\nSounds very costly. I think we need to do performance tests. Even if\nthe vacuum gets slower only on the particular table having the\nvacuum_committs_age setting, it would affect overall autovacuum\nperformance. Also, it would affect HOT pruning performance.\n\n>\n> > > but IIUC this value doesn’t need to be significant; it\n> > > can be limited to just a few minutes. The one which is sufficient to\n> > > handle replication delays caused by network lag or other factors,\n> > > assuming clock skew has already been addressed.\n> >\n> > I think that in a non-bidirectional case the value could need to be a\n> > large number. Is that right?\n> >\n>\n> As per my understanding, even for non-bidirectional cases, the value\n> should be small. 
For example, in the case, pointed out by Shveta [1],\n> where the updates from 2 nodes are received by a third node, this\n> setting is expected to be small. This setting primarily deals with\n> concurrent transactions on multiple nodes, so it should be small but I\n> could be missing something.\n>\n\nI might be missing something but the scenario I was thinking of is\nsomething below.\n\nSuppose that we setup uni-directional logical replication between Node\nA and Node B (e.g., Node A -> Node B) and both nodes have the same row\nwith key = 1:\n\nNode A:\n T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)\n -> This change is applied on Node B at 10:01 AM.\n\nNode B:\n T2: DELETE FROM t WHERE key = 1; (05:00 AM)\n\nIf a vacuum runs on Node B at 06:00 AM, the change of T1 coming from\nNode A would raise an \"update_missing\" conflict. On the other hand, if\na vacuum runs on Node B at 11:00 AM, the change would raise an\n\"update_deleted\" conflict. It looks whether we detect an\n\"update_deleted\" or an \"updated_missing\" depends on the timing of\nvacuum, and to avoid such a situation, we would need to set\nvacuum_committs_age to more than 5 hours.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Sep 2024 10:54:05 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I haven't thought about the implementation details yet but I think\n> > during pruning (for example in heap_prune_satisfies_vacuum()), apart\n> > from checking if the tuple satisfies\n> > HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's\n> > committs is greater than configured vacuum_committs_age (for the\n> > table) to decide whether tuple can be removed.\n>\n> Sounds very costly. I think we need to do performance tests. Even if\n> the vacuum gets slower only on the particular table having the\n> vacuum_committs_age setting, it would affect overall autovacuum\n> performance. Also, it would affect HOT pruning performance.\n>\n\nAgreed that we should do some performance testing and additionally\nthink of any better way to implement. I think the cost won't be much\nif the tuples to be removed are from a single transaction because the\nrequired commit_ts information would be cached but when the tuples are\nfrom different transactions, we could see a noticeable impact. We need\nto test to say anything concrete on this.\n\n> >\n> > > > but IIUC this value doesn’t need to be significant; it\n> > > > can be limited to just a few minutes. The one which is sufficient to\n> > > > handle replication delays caused by network lag or other factors,\n> > > > assuming clock skew has already been addressed.\n> > >\n> > > I think that in a non-bidirectional case the value could need to be a\n> > > large number. Is that right?\n> > >\n> >\n> > As per my understanding, even for non-bidirectional cases, the value\n> > should be small. For example, in the case, pointed out by Shveta [1],\n> > where the updates from 2 nodes are received by a third node, this\n> > setting is expected to be small. This setting primarily deals with\n> > concurrent transactions on multiple nodes, so it should be small but I\n> > could be missing something.\n> >\n>\n> I might be missing something but the scenario I was thinking of is\n> something below.\n>\n> Suppose that we setup uni-directional logical replication between Node\n> A and Node B (e.g., Node A -> Node B) and both nodes have the same row\n> with key = 1:\n>\n> Node A:\n> T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)\n> -> This change is applied on Node B at 10:01 AM.\n>\n> Node B:\n> T2: DELETE FROM t WHERE key = 1; (05:00 AM)\n>\n> If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from\n> Node A would raise an \"update_missing\" conflict. On the other hand, if\n> a vacuum runs on Node B at 11:00 AM, the change would raise an\n> \"update_deleted\" conflict. It looks whether we detect an\n> \"update_deleted\" or an \"updated_missing\" depends on the timing of\n> vacuum, and to avoid such a situation, we would need to set\n> vacuum_committs_age to more than 5 hours.\n>\n\nYeah, in this case, it would detect a different conflict (if we don't\nset vacuum_committs_age to greater than 5 hours) but as per my\nunderstanding, the primary purpose of conflict detection and\nresolution is to avoid data inconsistency in a bi-directional setup.\nAssume, in the above case it is a bi-directional setup, then we want\nto have the same data in both nodes. 
Now, if there are other cases\nlike the one you mentioned that require to detect the conflict\nreliably than I agree this value could be large and probably not the\nbest way to achieve it. I think we can mention in the docs that the\nprimary purpose of this is to achieve data consistency among\nbi-directional kind of setups.\n\nHaving said that even in the above case, the result should be the same\nwhether the vacuum has removed the row or not. Say, if the vacuum has\nnot yet removed the row (due to vacuum_committs_age or otherwise) then\nalso because the incoming update has a later timestamp, we will\nconvert the update to insert as per last_update_wins resolution\nmethod, so the conflict will be considered as update_missing. And,\nsay, the vacuum has removed the row and the conflict detected is\nupdate_missing, then also we will convert the update to insert. In\nshort, if UPDATE has lower commit-ts, DELETE should win and if UPDATE\nhas higher commit-ts, UPDATE should win.\n\nSo, we can expect data consistency in bidirectional cases and expect a\ndeterministic behavior in other cases (e.g. the final data in a table\ndoes not depend on the order of applying the transactions from other\nnodes).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Sep 2024 09:59:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 17, 2024 at 11:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I haven't thought about the implementation details yet but I think\n> > > during pruning (for example in heap_prune_satisfies_vacuum()), apart\n> > > from checking if the tuple satisfies\n> > > HeapTupleSatisfiesVacuumHorizon(), we should also check if the tuple's\n> > > committs is greater than configured vacuum_committs_age (for the\n> > > table) to decide whether tuple can be removed.\n> >\n> > Sounds very costly. I think we need to do performance tests. Even if\n> > the vacuum gets slower only on the particular table having the\n> > vacuum_committs_age setting, it would affect overall autovacuum\n> > performance. Also, it would affect HOT pruning performance.\n> >\n>\n> Agreed that we should do some performance testing and additionally\n> think of any better way to implement. I think the cost won't be much\n> if the tuples to be removed are from a single transaction because the\n> required commit_ts information would be cached but when the tuples are\n> from different transactions, we could see a noticeable impact. We need\n> to test to say anything concrete on this.\n\nAgreed.\n\n>\n> > >\n> > > > > but IIUC this value doesn’t need to be significant; it\n> > > > > can be limited to just a few minutes. The one which is sufficient to\n> > > > > handle replication delays caused by network lag or other factors,\n> > > > > assuming clock skew has already been addressed.\n> > > >\n> > > > I think that in a non-bidirectional case the value could need to be a\n> > > > large number. Is that right?\n> > > >\n> > >\n> > > As per my understanding, even for non-bidirectional cases, the value\n> > > should be small. For example, in the case, pointed out by Shveta [1],\n> > > where the updates from 2 nodes are received by a third node, this\n> > > setting is expected to be small. This setting primarily deals with\n> > > concurrent transactions on multiple nodes, so it should be small but I\n> > > could be missing something.\n> > >\n> >\n> > I might be missing something but the scenario I was thinking of is\n> > something below.\n> >\n> > Suppose that we setup uni-directional logical replication between Node\n> > A and Node B (e.g., Node A -> Node B) and both nodes have the same row\n> > with key = 1:\n> >\n> > Node A:\n> > T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)\n> > -> This change is applied on Node B at 10:01 AM.\n> >\n> > Node B:\n> > T2: DELETE FROM t WHERE key = 1; (05:00 AM)\n> >\n> > If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from\n> > Node A would raise an \"update_missing\" conflict. On the other hand, if\n> > a vacuum runs on Node B at 11:00 AM, the change would raise an\n> > \"update_deleted\" conflict. 
It looks whether we detect an\n> > \"update_deleted\" or an \"updated_missing\" depends on the timing of\n> > vacuum, and to avoid such a situation, we would need to set\n> > vacuum_committs_age to more than 5 hours.\n> >\n>\n> Yeah, in this case, it would detect a different conflict (if we don't\n> set vacuum_committs_age to greater than 5 hours) but as per my\n> understanding, the primary purpose of conflict detection and\n> resolution is to avoid data inconsistency in a bi-directional setup.\n> Assume, in the above case it is a bi-directional setup, then we want\n> to have the same data in both nodes. Now, if there are other cases\n> like the one you mentioned that require to detect the conflict\n> reliably than I agree this value could be large and probably not the\n> best way to achieve it. I think we can mention in the docs that the\n> primary purpose of this is to achieve data consistency among\n> bi-directional kind of setups.\n>\n> Having said that even in the above case, the result should be the same\n> whether the vacuum has removed the row or not. Say, if the vacuum has\n> not yet removed the row (due to vacuum_committs_age or otherwise) then\n> also because the incoming update has a later timestamp, we will\n> convert the update to insert as per last_update_wins resolution\n> method, so the conflict will be considered as update_missing. And,\n> say, the vacuum has removed the row and the conflict detected is\n> update_missing, then also we will convert the update to insert. In\n> short, if UPDATE has lower commit-ts, DELETE should win and if UPDATE\n> has higher commit-ts, UPDATE should win.\n>\n> So, we can expect data consistency in bidirectional cases and expect a\n> deterministic behavior in other cases (e.g. the final data in a table\n> does not depend on the order of applying the transactions from other\n> nodes).\n\nAgreed.\n\nI think that such a time-based configuration parameter would be a\nreasonable solution. The current concerns are that it might affect\nvacuum performance and lead to a similar bug we had with\nvacuum_defer_cleanup_age.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 19 Sep 2024 11:49:18 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "\r\n\r\n> -----Original Message-----\r\n> From: Masahiko Sawada <sawada.mshk@gmail.com>\r\n> Sent: Friday, September 20, 2024 2:49 AM\r\n> To: Amit Kapila <amit.kapila16@gmail.com>\r\n> Cc: shveta malik <shveta.malik@gmail.com>; Hou, Zhijie/侯 志杰\r\n> <houzj.fnst@fujitsu.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\r\n> Subject: Re: Conflict detection for update_deleted in logical replication\r\n> \r\n> On Tue, Sep 17, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Sep 17, 2024 at 11:24 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Sep 16, 2024 at 11:53 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > >\r\n> > > > On Tue, Sep 17, 2024 at 6:08 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > I haven't thought about the implementation details yet but I think\r\n> > > > during pruning (for example in heap_prune_satisfies_vacuum()),\r\n> > > > apart from checking if the tuple satisfies\r\n> > > > HeapTupleSatisfiesVacuumHorizon(), we should also check if the\r\n> > > > tuple's committs is greater than configured vacuum_committs_age\r\n> > > > (for the\r\n> > > > table) to decide whether tuple can be removed.\r\n> > >\r\n> > > Sounds very costly. I think we need to do performance tests. Even if\r\n> > > the vacuum gets slower only on the particular table having the\r\n> > > vacuum_committs_age setting, it would affect overall autovacuum\r\n> > > performance. Also, it would affect HOT pruning performance.\r\n> > >\r\n> >\r\n> > Agreed that we should do some performance testing and additionally\r\n> > think of any better way to implement. I think the cost won't be much\r\n> > if the tuples to be removed are from a single transaction because the\r\n> > required commit_ts information would be cached but when the tuples are\r\n> > from different transactions, we could see a noticeable impact. We need\r\n> > to test to say anything concrete on this.\r\n> \r\n> Agreed.\r\n> \r\n> >\r\n> > > >\r\n> > > > > > but IIUC this value doesn’t need to be significant; it can be\r\n> > > > > > limited to just a few minutes. The one which is sufficient to\r\n> > > > > > handle replication delays caused by network lag or other\r\n> > > > > > factors, assuming clock skew has already been addressed.\r\n> > > > >\r\n> > > > > I think that in a non-bidirectional case the value could need to\r\n> > > > > be a large number. Is that right?\r\n> > > > >\r\n> > > >\r\n> > > > As per my understanding, even for non-bidirectional cases, the\r\n> > > > value should be small. For example, in the case, pointed out by\r\n> > > > Shveta [1], where the updates from 2 nodes are received by a third\r\n> > > > node, this setting is expected to be small. 
This setting primarily\r\n> > > > deals with concurrent transactions on multiple nodes, so it should\r\n> > > > be small but I could be missing something.\r\n> > > >\r\n> > >\r\n> > > I might be missing something but the scenario I was thinking of is\r\n> > > something below.\r\n> > >\r\n> > > Suppose that we setup uni-directional logical replication between\r\n> > > Node A and Node B (e.g., Node A -> Node B) and both nodes have the\r\n> > > same row with key = 1:\r\n> > >\r\n> > > Node A:\r\n> > > T1: UPDATE t SET val = 2 WHERE key = 1; (10:00 AM)\r\n> > > -> This change is applied on Node B at 10:01 AM.\r\n> > >\r\n> > > Node B:\r\n> > > T2: DELETE FROM t WHERE key = 1; (05:00 AM)\r\n> > >\r\n> > > If a vacuum runs on Node B at 06:00 AM, the change of T1 coming from\r\n> > > Node A would raise an \"update_missing\" conflict. On the other hand,\r\n> > > if a vacuum runs on Node B at 11:00 AM, the change would raise an\r\n> > > \"update_deleted\" conflict. It looks whether we detect an\r\n> > > \"update_deleted\" or an \"updated_missing\" depends on the timing of\r\n> > > vacuum, and to avoid such a situation, we would need to set\r\n> > > vacuum_committs_age to more than 5 hours.\r\n> > >\r\n> >\r\n> > Yeah, in this case, it would detect a different conflict (if we don't\r\n> > set vacuum_committs_age to greater than 5 hours) but as per my\r\n> > understanding, the primary purpose of conflict detection and\r\n> > resolution is to avoid data inconsistency in a bi-directional setup.\r\n> > Assume, in the above case it is a bi-directional setup, then we want\r\n> > to have the same data in both nodes. Now, if there are other cases\r\n> > like the one you mentioned that require to detect the conflict\r\n> > reliably than I agree this value could be large and probably not the\r\n> > best way to achieve it. I think we can mention in the docs that the\r\n> > primary purpose of this is to achieve data consistency among\r\n> > bi-directional kind of setups.\r\n> >\r\n> > Having said that even in the above case, the result should be the same\r\n> > whether the vacuum has removed the row or not. Say, if the vacuum has\r\n> > not yet removed the row (due to vacuum_committs_age or otherwise) then\r\n> > also because the incoming update has a later timestamp, we will\r\n> > convert the update to insert as per last_update_wins resolution\r\n> > method, so the conflict will be considered as update_missing. And,\r\n> > say, the vacuum has removed the row and the conflict detected is\r\n> > update_missing, then also we will convert the update to insert. In\r\n> > short, if UPDATE has lower commit-ts, DELETE should win and if UPDATE\r\n> > has higher commit-ts, UPDATE should win.\r\n> >\r\n> > So, we can expect data consistency in bidirectional cases and expect a\r\n> > deterministic behavior in other cases (e.g. the final data in a table\r\n> > does not depend on the order of applying the transactions from other\r\n> > nodes).\r\n> \r\n> Agreed.\r\n> \r\n> I think that such a time-based configuration parameter would be a reasonable\r\n> solution. The current concerns are that it might affect vacuum performance and\r\n> lead to a similar bug we had with vacuum_defer_cleanup_age.\r\n\r\nThanks for the feedback!\r\n\r\nI am working on the POC patch and doing some initial performance tests on this idea.\r\nI will share the results after finishing.\r\n\r\nApart from the vacuum_defer_cleanup_age idea. 
we’ve given more thought to our\r\napproach for retaining dead tuples and have come up with another idea that can\r\nreliably detect conflicts without requiring users to choose a wise value for\r\nthe vacuum_committs_age. This new idea could also reduce the performance\r\nimpact. Thanks a lot to Amit for off-list discussion.\r\n\r\nThe concept of the new idea is that, the dead tuples are only useful to detect\r\nconflicts when applying *concurrent* transactions from remotes. Any subsequent\r\nUPDATE from a remote node after removing the dead tuples should have a later\r\ntimestamp, meaning it's reasonable to detect an update_missing scenario and\r\nconvert the UPDATE to an INSERT when applying it.\r\n\r\nTo achieve above, we can create an additional replication slot on the\r\nsubscriber side, maintained by the apply worker. This slot is used to retain\r\nthe dead tuples. The apply worker will advance the slot.xmin after confirming\r\nthat all the concurrent transaction on publisher has been applied locally.\r\n\r\nThe process of advancing the slot.xmin could be:\r\n\r\n1) the apply worker call GetRunningTransactionData() to get the\r\n'oldestRunningXid' and consider this as 'candidate_xmin'.\r\n2) the apply worker send a new message to walsender to request the latest wal\r\nflush position(GetFlushRecPtr) on publisher, and save it to\r\n'candidate_remote_wal_lsn'. Here we could introduce a new feedback message or\r\nextend the existing keepalive message(e,g extends the requestReply bit in\r\nkeepalive message to add a 'request_wal_position' value)\r\n3) The apply worker can continue to apply changes. After applying all the WALs\r\nupto 'candidate_remote_wal_lsn', the apply worker can then advance the\r\nslot.xmin to 'candidate_xmin'.\r\n\r\nThis approach ensures that dead tuples are not removed until all concurrent\r\ntransactions have been applied. It can be effective for both bidirectional and\r\nnon-bidirectional replication cases.\r\n\r\nWe could introduce a boolean subscription option (retain_dead_tuples) to\r\ncontrol whether this feature is enabled. Each subscription intending to detect\r\nupdate-delete conflicts should set retain_dead_tuples to true.\r\n\r\nThe following explains how it works in different cases to achieve data\r\nconsistency:\r\n\r\n--\r\n2 nodes, bidirectional case 1:\r\n--\r\nNode A:\r\n T1: INSERT INTO t (id, value) VALUES (1,1);\t\tts=10.00 AM\r\n T2: DELETE FROM t WHERE id = 1;\t\t\tts=10.02 AM\r\n\r\nNode B:\r\n T3: UPDATE t SET value = 2 WHERE id = 1;\t\tts=10.01 AM\r\n\r\nsubscription retain_dead_tuples = true/false\r\n\r\nAfter executing T2, the apply worker on Node A will check the latest wal flush\r\nlocation on Node B. Till that time, the T3 should have finished, so the xmin\r\nwill be advanced only after applying the WALs that is later than T3. So, the\r\ndead tuple will not be removed before applying the T3, which means the\r\nupdate_delete can be detected.\r\n\r\n--\r\n2 nodes, bidirectional case 2:\r\n--\r\nNode A:\r\n T1: INSERT INTO t (id, value) VALUES (1,1);\t\tts=10.00 AM\r\n T2: DELETE FROM t WHERE id = 1;\t\t\tts=10.01 AM\r\n\r\nNode B:\r\n T3: UPDATE t SET value = 2 WHERE id = 1;\t\tts=10.02 AM\r\n\r\nAfter executing T2, the apply worker on Node A will request the latest wal\r\nflush location on Node B. And the T3 is either running concurrently or has not\r\nstarted. In both cases, the T3 must have a later timestamp. 
So, even if the\r\ndead tuple is removed in this cases and update_missing is detected, the default\r\nresolution is to convert UDPATE to INSERT which is OK because the data are\r\nstill consistent on Node A and B.\r\n\r\n--\r\n3 nodes, non-bidirectional, Node C subscribes to both Node A and Node B:\r\n--\r\n\r\nNode A:\r\n T1: INSERT INTO t (id, value) VALUES (1,1);\t\tts=10.00 AM\r\n T2: DELETE FROM t WHERE id = 1;\t\t\tts=10.01 AM\r\n\r\nNode B:\r\n T3: UPDATE t SET value = 2 WHERE id = 1;\t\tts=10.02 AM\r\n\r\nNode C:\r\n\tapply T1, T2, T3\r\n\r\nAfter applying T2, the apply worker on Node C will check the latest wal flush\r\nlocation on Node B. Till that time, the T3 should have finished, so the xmin\r\nwill be advanced only after applying the WALs that is later than T3. So, the\r\ndead tuple will not be removed before applying the T3, which means the\r\nupdate_delete can be detected.\r\n\r\nYour feedback on this idea would be greatly appreciated.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Fri, 20 Sep 2024 02:54:59 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
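A sketch of how the proposal above would look from the user's side, assuming the boolean retain_dead_tuples option and the extra subscriber-side slot described in the message (both are part of the proposal, not of released PostgreSQL; the connection string, subscription, publication, and slot names are placeholders):

-- Enable dead-tuple retention on a subscription that needs reliable
-- update_deleted detection:
CREATE SUBSCRIPTION sub_from_a
    CONNECTION 'host=node_a dbname=postgres'
    PUBLICATION pub_a
    WITH (retain_dead_tuples = true);

-- The additional slot maintained by the apply worker would appear here; its
-- xmin is what holds back removal of dead tuples until all concurrent
-- publisher transactions have been applied locally:
SELECT slot_name, slot_type, xmin
FROM pg_replication_slots;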
{
"msg_contents": "On Friday, September 20, 2024 10:55 AM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:\r\n> On Friday, September 20, 2024 2:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> > \r\n> >\r\n> > I think that such a time-based configuration parameter would be a\r\n> > reasonable solution. The current concerns are that it might affect\r\n> > vacuum performance and lead to a similar bug we had with\r\n> vacuum_defer_cleanup_age.\r\n> \r\n> Thanks for the feedback!\r\n> \r\n> I am working on the POC patch and doing some initial performance tests on\r\n> this idea.\r\n> I will share the results after finishing.\r\n> \r\n> Apart from the vacuum_defer_cleanup_age idea. we’ve given more thought to\r\n> our approach for retaining dead tuples and have come up with another idea that\r\n> can reliably detect conflicts without requiring users to choose a wise value for\r\n> the vacuum_committs_age. This new idea could also reduce the performance\r\n> impact. Thanks a lot to Amit for off-list discussion.\r\n> \r\n> The concept of the new idea is that, the dead tuples are only useful to detect\r\n> conflicts when applying *concurrent* transactions from remotes. Any\r\n> subsequent UPDATE from a remote node after removing the dead tuples\r\n> should have a later timestamp, meaning it's reasonable to detect an\r\n> update_missing scenario and convert the UPDATE to an INSERT when\r\n> applying it.\r\n> \r\n> To achieve above, we can create an additional replication slot on the subscriber\r\n> side, maintained by the apply worker. This slot is used to retain the dead tuples.\r\n> The apply worker will advance the slot.xmin after confirming that all the\r\n> concurrent transaction on publisher has been applied locally.\r\n> \r\n> The process of advancing the slot.xmin could be:\r\n> \r\n> 1) the apply worker call GetRunningTransactionData() to get the\r\n> 'oldestRunningXid' and consider this as 'candidate_xmin'.\r\n> 2) the apply worker send a new message to walsender to request the latest wal\r\n> flush position(GetFlushRecPtr) on publisher, and save it to\r\n> 'candidate_remote_wal_lsn'. Here we could introduce a new feedback\r\n> message or extend the existing keepalive message(e,g extends the\r\n> requestReply bit in keepalive message to add a 'request_wal_position' value)\r\n> 3) The apply worker can continue to apply changes. After applying all the WALs\r\n> upto 'candidate_remote_wal_lsn', the apply worker can then advance the\r\n> slot.xmin to 'candidate_xmin'.\r\n> \r\n> This approach ensures that dead tuples are not removed until all concurrent\r\n> transactions have been applied. It can be effective for both bidirectional and\r\n> non-bidirectional replication cases.\r\n> \r\n> We could introduce a boolean subscription option (retain_dead_tuples) to\r\n> control whether this feature is enabled. 
Each subscription intending to detect\r\n> update-delete conflicts should set retain_dead_tuples to true.\r\n> \r\n> The following explains how it works in different cases to achieve data\r\n> consistency:\r\n...\r\n> --\r\n> 3 nodes, non-bidirectional, Node C subscribes to both Node A and Node B:\r\n> --\r\n\r\nSorry for a typo here, the time of T2 and T3 were reversed.\r\nPlease see the following correction:\r\n\r\n> \r\n> Node A:\r\n> T1: INSERT INTO t (id, value) VALUES (1,1);\t\tts=10.00 AM\r\n> T2: DELETE FROM t WHERE id = 1;\t\t\tts=10.01 AM\r\n\r\nHere T2 should be at ts=10.02 AM\r\n\r\n> \r\n> Node B:\r\n> T3: UPDATE t SET value = 2 WHERE id = 1;\t\tts=10.02 AM\r\n\r\nT3 should be at ts=10.01 AM\r\n\r\n> \r\n> Node C:\r\n> \tapply T1, T2, T3\r\n> \r\n> After applying T2, the apply worker on Node C will check the latest wal flush\r\n> location on Node B. Till that time, the T3 should have finished, so the xmin will\r\n> be advanced only after applying the WALs that is later than T3. So, the dead\r\n> tuple will not be removed before applying the T3, which means the\r\n> update_delete can be detected.\r\n> \r\n> Your feedback on this idea would be greatly appreciated.\r\n> \r\n\r\nBest Regards,\r\nHou zj \r\n\r\n",
"msg_date": "Fri, 20 Sep 2024 03:59:07 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Apart from the vacuum_defer_cleanup_age idea.\n>\n\nI think you meant to say vacuum_committs_age idea.\n\n> we’ve given more thought to our\n> approach for retaining dead tuples and have come up with another idea that can\n> reliably detect conflicts without requiring users to choose a wise value for\n> the vacuum_committs_age. This new idea could also reduce the performance\n> impact. Thanks a lot to Amit for off-list discussion.\n>\n> The concept of the new idea is that, the dead tuples are only useful to detect\n> conflicts when applying *concurrent* transactions from remotes. Any subsequent\n> UPDATE from a remote node after removing the dead tuples should have a later\n> timestamp, meaning it's reasonable to detect an update_missing scenario and\n> convert the UPDATE to an INSERT when applying it.\n>\n> To achieve above, we can create an additional replication slot on the\n> subscriber side, maintained by the apply worker. This slot is used to retain\n> the dead tuples. The apply worker will advance the slot.xmin after confirming\n> that all the concurrent transaction on publisher has been applied locally.\n>\n> The process of advancing the slot.xmin could be:\n>\n> 1) the apply worker call GetRunningTransactionData() to get the\n> 'oldestRunningXid' and consider this as 'candidate_xmin'.\n> 2) the apply worker send a new message to walsender to request the latest wal\n> flush position(GetFlushRecPtr) on publisher, and save it to\n> 'candidate_remote_wal_lsn'. Here we could introduce a new feedback message or\n> extend the existing keepalive message(e,g extends the requestReply bit in\n> keepalive message to add a 'request_wal_position' value)\n> 3) The apply worker can continue to apply changes. After applying all the WALs\n> upto 'candidate_remote_wal_lsn', the apply worker can then advance the\n> slot.xmin to 'candidate_xmin'.\n>\n> This approach ensures that dead tuples are not removed until all concurrent\n> transactions have been applied. It can be effective for both bidirectional and\n> non-bidirectional replication cases.\n>\n> We could introduce a boolean subscription option (retain_dead_tuples) to\n> control whether this feature is enabled. Each subscription intending to detect\n> update-delete conflicts should set retain_dead_tuples to true.\n>\n\nAs each apply worker needs a separate slot to retain deleted rows, the\nrequirement for slots will increase. The other possibility is to\nmaintain one slot by launcher or some other central process that\ntraverses all subscriptions, remember the ones marked with\nretain_dead_rows (let's call this list as retain_sub_list). Then using\nrunning_transactions get the oldest running_xact, and then get the\nremote flush location from the other node (publisher node) and store\nthose as candidate values (candidate_xmin and\ncandidate_remote_wal_lsn) in slot. We can probably reuse existing\ncandidate variables of the slot. Next, we can check the remote_flush\nlocations from all the origins corresponding in retain_sub_list and if\nall are ahead of candidate_remote_wal_lsn, we can update the slot's\nxmin to candidate_xmin.\n\nI think in the above idea we can an optimization to combine the\nrequest for remote wal LSN from different subscriptions pointing to\nthe same node to avoid sending multiple requests to the same node. 
I\nam not sure if using pg_subscription.subconninfo is sufficient for\nthis, if not we can probably leave this optimization.\n\nIf this idea is feasible then it would reduce the number of slots\nrequired to retain the deleted rows but the launcher needs to get the\nremote wal location corresponding to each publisher node. There are\ntwo ways to achieve that (a) launcher requests one of the apply\nworkers corresponding to subscriptions pointing to the same publisher\nnode to get this information; (b) launcher launches another worker to\nget the remote wal flush location.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 Sep 2024 15:16:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "Hi,\n\nThank you for considering another idea.\n\nOn Fri, Sep 20, 2024 at 2:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Apart from the vacuum_defer_cleanup_age idea.\n> >\n>\n> I think you meant to say vacuum_committs_age idea.\n>\n> > we’ve given more thought to our\n> > approach for retaining dead tuples and have come up with another idea that can\n> > reliably detect conflicts without requiring users to choose a wise value for\n> > the vacuum_committs_age. This new idea could also reduce the performance\n> > impact. Thanks a lot to Amit for off-list discussion.\n> >\n> > The concept of the new idea is that, the dead tuples are only useful to detect\n> > conflicts when applying *concurrent* transactions from remotes. Any subsequent\n> > UPDATE from a remote node after removing the dead tuples should have a later\n> > timestamp, meaning it's reasonable to detect an update_missing scenario and\n> > convert the UPDATE to an INSERT when applying it.\n> >\n> > To achieve above, we can create an additional replication slot on the\n> > subscriber side, maintained by the apply worker. This slot is used to retain\n> > the dead tuples. The apply worker will advance the slot.xmin after confirming\n> > that all the concurrent transaction on publisher has been applied locally.\n\nThe replication slot used for this purpose will be a physical one or\nlogical one? And IIUC such a slot doesn't need to retain WAL but if we\ndo that, how do we advance the LSN of the slot?\n\n> > 2) the apply worker send a new message to walsender to request the latest wal\n> > flush position(GetFlushRecPtr) on publisher, and save it to\n> > 'candidate_remote_wal_lsn'. Here we could introduce a new feedback message or\n> > extend the existing keepalive message(e,g extends the requestReply bit in\n> > keepalive message to add a 'request_wal_position' value)\n\nThe apply worker sends a keepalive message when it didn't receive\nanything more than wal_receiver_timeout / 2. So in a very active\nsystem, we cannot rely on piggybacking new information to the\nkeepalive messages to get the latest remote flush LSN.\n\n> > 3) The apply worker can continue to apply changes. After applying all the WALs\n> > upto 'candidate_remote_wal_lsn', the apply worker can then advance the\n> > slot.xmin to 'candidate_xmin'.\n> >\n> > This approach ensures that dead tuples are not removed until all concurrent\n> > transactions have been applied. It can be effective for both bidirectional and\n> > non-bidirectional replication cases.\n> >\n> > We could introduce a boolean subscription option (retain_dead_tuples) to\n> > control whether this feature is enabled. 
Each subscription intending to detect\n> > update-delete conflicts should set retain_dead_tuples to true.\n> >\n\nI'm still studying this idea but let me confirm the following scenario.\n\nSuppose both Node-A and Node-B have the same row (1,1) in table t, and\nXIDs and commit LSNs of T2 and T3 are the following:\n\nNode A\n T2: DELETE FROM t WHERE id = 1 (10:02 AM) XID:100, commit-LSN:1000\n\nNode B\n T3: UPDATE t SET value = 2 WHERE id 1 (10:01 AM) XID:500, commit-LSN:5000\n\nFurther suppose that it's now 10:05 AM, and the latest XID and the\nlatest flush WAL position of Node-A and Node-B are following:\n\nNode A\n current XID: 300\n latest flush LSN; 3000\n\nNode B\n current XID: 700\n latest flush LSN: 7000\n\nBoth T2 and T3 are NOT sent to Node B and Node A yet, respectively\n(i.e., the logical replication is delaying for 5 min).\n\nConsider the following scenario:\n\n1. The apply worker on Node-A calls GetRunningTransactionData() and\ngets 301 (set as candidate_xmin).\n2. The apply worker on Node-A requests the latest WAL flush position\nfrom Node-B, and gets 7000 (set as candidate_remote_wal_lsn).\n3. T2 is applied on Node-B, and the latest flush position of Node-B is now 8000.\n4. The apply worker on Node-A continues applying changes, and applies\nthe transactions up to remote (commit) LSN 7100.\n5. Now that the apply worker on Node-A applied all changes smaller\nthan candidate_remote_wal_lsn (7000), it increases the slot.xmin to\n301 (candidate_xmin).\n6. On Node-A, vacuum runs and physically removes the tuple that was\ndeleted by T2.\n\nHere, on Node-B, there might be a transition between LSN 7100 and 8000\nthat might require the tuple that is deleted by T2.\n\nFor example, \"UPDATE t SET value = 3 WHERE id = 1\" (say T4) is\nexecuted on Node-B at LSN 7200, and it's sent to Node-A after step 6.\nOn Node-A, whether we detect \"update_deleted\" or \"update_missing\"\nstill depends on when vacuum removes the tuple deleted by T2.\n\nIf applying T4 raises an \"update_missing\" (i.e. the changes are\napplied in the order of T2->T3->(vacuum)->T4), it converts into an\ninsert, resulting in the table having a row with value = 3.\n\nIf applying T4 raises an \"update_deleted\" (i.e. the changes are\napplied in the order of T2->T3->T4->(vacuum)), it's skipped, resulting\nin the table having no row.\n\nOn the other hand, in this scenario, Node-B applies changes in the\norder of T3->T4->T2, and applying T2 raises a \"delete_origin_differ\",\nresulting in the table having a row with val=3 (assuming\nlatest_committs_win is the default resolver for this confliction).\n\nPlease confirm this scenario as I might be missing something.\n\n>\n> As each apply worker needs a separate slot to retain deleted rows, the\n> requirement for slots will increase. The other possibility is to\n> maintain one slot by launcher or some other central process that\n> traverses all subscriptions, remember the ones marked with\n> retain_dead_rows (let's call this list as retain_sub_list). Then using\n> running_transactions get the oldest running_xact, and then get the\n> remote flush location from the other node (publisher node) and store\n> those as candidate values (candidate_xmin and\n> candidate_remote_wal_lsn) in slot. We can probably reuse existing\n> candidate variables of the slot. 
Next, we can check the remote_flush\n> locations from all the origins corresponding in retain_sub_list and if\n> all are ahead of candidate_remote_wal_lsn, we can update the slot's\n> xmin to candidate_xmin.\n\nDoes it mean that we use one candiate_remote_wal_lsn in a slot for all\nsubscriptions (in retain_sub_list)? IIUC candiate_remote_wal_lsn is a\nLSN of one of publishers, so other publishers could have completely\ndifferent LSNs. How do we compare the candidate_remote_wal_lsn to\nremote_flush locations from all the origins?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 Sep 2024 14:05:17 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tuesday, September 24, 2024 5:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Thank you for considering another idea.\r\n\r\nThanks for reviewing the idea!\r\n\r\n> \r\n> On Fri, Sep 20, 2024 at 2:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Apart from the vacuum_defer_cleanup_age idea.\r\n> > >\r\n> >\r\n> > I think you meant to say vacuum_committs_age idea.\r\n> >\r\n> > > we’ve given more thought to our\r\n> > > approach for retaining dead tuples and have come up with another idea\r\n> that can\r\n> > > reliably detect conflicts without requiring users to choose a wise value for\r\n> > > the vacuum_committs_age. This new idea could also reduce the\r\n> performance\r\n> > > impact. Thanks a lot to Amit for off-list discussion.\r\n> > >\r\n> > > The concept of the new idea is that, the dead tuples are only useful to\r\n> detect\r\n> > > conflicts when applying *concurrent* transactions from remotes. Any\r\n> subsequent\r\n> > > UPDATE from a remote node after removing the dead tuples should have a\r\n> later\r\n> > > timestamp, meaning it's reasonable to detect an update_missing scenario\r\n> and\r\n> > > convert the UPDATE to an INSERT when applying it.\r\n> > >\r\n> > > To achieve above, we can create an additional replication slot on the\r\n> > > subscriber side, maintained by the apply worker. This slot is used to retain\r\n> > > the dead tuples. The apply worker will advance the slot.xmin after\r\n> confirming\r\n> > > that all the concurrent transaction on publisher has been applied locally.\r\n> \r\n> The replication slot used for this purpose will be a physical one or\r\n> logical one? And IIUC such a slot doesn't need to retain WAL but if we\r\n> do that, how do we advance the LSN of the slot?\r\n\r\nI think it would be a logical slot. We can keep the\r\nrestart_lsn/confirmed_flush_lsn as invalid because we don't need to retain the\r\nWALs for decoding purpose.\r\n\r\n> \r\n> > > 2) the apply worker send a new message to walsender to request the latest\r\n> wal\r\n> > > flush position(GetFlushRecPtr) on publisher, and save it to\r\n> > > 'candidate_remote_wal_lsn'. Here we could introduce a new feedback\r\n> message or\r\n> > > extend the existing keepalive message(e,g extends the requestReply bit in\r\n> > > keepalive message to add a 'request_wal_position' value)\r\n> \r\n> The apply worker sends a keepalive message when it didn't receive\r\n> anything more than wal_receiver_timeout / 2. So in a very active\r\n> system, we cannot rely on piggybacking new information to the\r\n> keepalive messages to get the latest remote flush LSN.\r\n\r\nRight. I think we need to send this new message at some interval independent of\r\nwal_receiver_timeout.\r\n\r\n> \r\n> > > 3) The apply worker can continue to apply changes. After applying all the\r\n> WALs\r\n> > > upto 'candidate_remote_wal_lsn', the apply worker can then advance the\r\n> > > slot.xmin to 'candidate_xmin'.\r\n> > >\r\n> > > This approach ensures that dead tuples are not removed until all\r\n> concurrent\r\n> > > transactions have been applied. It can be effective for both bidirectional\r\n> and\r\n> > > non-bidirectional replication cases.\r\n> > >\r\n> > > We could introduce a boolean subscription option (retain_dead_tuples) to\r\n> > > control whether this feature is enabled. 
Each subscription intending to\r\n> detect\r\n> > > update-delete conflicts should set retain_dead_tuples to true.\r\n> > >\r\n> \r\n> I'm still studying this idea but let me confirm the following scenario.\r\n> \r\n> Suppose both Node-A and Node-B have the same row (1,1) in table t, and\r\n> XIDs and commit LSNs of T2 and T3 are the following:\r\n> \r\n> Node A\r\n> T2: DELETE FROM t WHERE id = 1 (10:02 AM) XID:100, commit-LSN:1000\r\n> \r\n> Node B\r\n> T3: UPDATE t SET value = 2 WHERE id 1 (10:01 AM) XID:500,\r\n> commit-LSN:5000\r\n> \r\n> Further suppose that it's now 10:05 AM, and the latest XID and the\r\n> latest flush WAL position of Node-A and Node-B are following:\r\n> \r\n> Node A\r\n> current XID: 300\r\n> latest flush LSN; 3000\r\n> \r\n> Node B\r\n> current XID: 700\r\n> latest flush LSN: 7000\r\n> \r\n> Both T2 and T3 are NOT sent to Node B and Node A yet, respectively\r\n> (i.e., the logical replication is delaying for 5 min).\r\n> \r\n> Consider the following scenario:\r\n> \r\n> 1. The apply worker on Node-A calls GetRunningTransactionData() and\r\n> gets 301 (set as candidate_xmin).\r\n> 2. The apply worker on Node-A requests the latest WAL flush position\r\n> from Node-B, and gets 7000 (set as candidate_remote_wal_lsn).\r\n> 3. T2 is applied on Node-B, and the latest flush position of Node-B is now 8000.\r\n> 4. The apply worker on Node-A continues applying changes, and applies\r\n> the transactions up to remote (commit) LSN 7100.\r\n> 5. Now that the apply worker on Node-A applied all changes smaller\r\n> than candidate_remote_wal_lsn (7000), it increases the slot.xmin to\r\n> 301 (candidate_xmin).\r\n> 6. On Node-A, vacuum runs and physically removes the tuple that was\r\n> deleted by T2.\r\n> \r\n> Here, on Node-B, there might be a transition between LSN 7100 and 8000\r\n> that might require the tuple that is deleted by T2.\r\n> \r\n> For example, \"UPDATE t SET value = 3 WHERE id = 1\" (say T4) is\r\n> executed on Node-B at LSN 7200, and it's sent to Node-A after step 6.\r\n> On Node-A, whether we detect \"update_deleted\" or \"update_missing\"\r\n> still depends on when vacuum removes the tuple deleted by T2.\r\n\r\nI think in this case, no matter we detect \"update_delete\" or \"update_missing\",\r\nthe final data is the same. Because T4's commit timestamp should be later than\r\nT2 on node A, so in the case of \"update_deleted\", it will compare the commit\r\ntimestamp of the deleted tuple's xmax with T4's timestamp, and T4 should win,\r\nwhich means we will convert the update into insert and apply. Even if the\r\ndeleted tuple is deleted and \"update_missing\" is detected, the update will\r\nstill be converted into insert and applied. So, the result is the same.\r\n\r\n> \r\n> If applying T4 raises an \"update_missing\" (i.e. the changes are\r\n> applied in the order of T2->T3->(vacuum)->T4), it converts into an\r\n> insert, resulting in the table having a row with value = 3.\r\n> \r\n> If applying T4 raises an \"update_deleted\" (i.e. 
the changes are\r\n> applied in the order of T2->T3->T4->(vacuum)), it's skipped, resulting\r\n> in the table having no row.\r\n> \r\n> On the other hand, in this scenario, Node-B applies changes in the\r\n> order of T3->T4->T2, and applying T2 raises a \"delete_origin_differ\",\r\n> resulting in the table having a row with val=3 (assuming\r\n> latest_committs_win is the default resolver for this confliction).\r\n> \r\n> Please confirm this scenario as I might be missing something.\r\n\r\nAs explained above, I think the data can be consistent in this case as well.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 24 Sep 2024 03:32:33 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 2:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > As each apply worker needs a separate slot to retain deleted rows, the\n> > requirement for slots will increase. The other possibility is to\n> > maintain one slot by launcher or some other central process that\n> > traverses all subscriptions, remember the ones marked with\n> > retain_dead_rows (let's call this list as retain_sub_list). Then using\n> > running_transactions get the oldest running_xact, and then get the\n> > remote flush location from the other node (publisher node) and store\n> > those as candidate values (candidate_xmin and\n> > candidate_remote_wal_lsn) in slot. We can probably reuse existing\n> > candidate variables of the slot. Next, we can check the remote_flush\n> > locations from all the origins corresponding in retain_sub_list and if\n> > all are ahead of candidate_remote_wal_lsn, we can update the slot's\n> > xmin to candidate_xmin.\n>\n> Does it mean that we use one candiate_remote_wal_lsn in a slot for all\n> subscriptions (in retain_sub_list)? IIUC candiate_remote_wal_lsn is a\n> LSN of one of publishers, so other publishers could have completely\n> different LSNs. How do we compare the candidate_remote_wal_lsn to\n> remote_flush locations from all the origins?\n>\n\nThis should be an array/list with one element per publisher. We can\ncopy candidate_xmin to actual xmin only when the\ncandiate_remote_wal_lsn's corresponding to all publishers have been\napplied aka their remote_flush locations (present in origins) are\nahead. The advantages I see with this are (a) reduces the number of\nslots required to achieve the retention of deleted rows for conflict\ndetection, (b) in some cases we can avoid sending messages to the\npublisher because with this we only need to send message to a\nparticular publisher once rather than by all the apply workers\ncorresponding to same publisher node.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Sep 2024 09:35:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 9:02 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 24, 2024 5:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for considering another idea.\n>\n> Thanks for reviewing the idea!\n>\n> >\n> > On Fri, Sep 20, 2024 at 2:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > Apart from the vacuum_defer_cleanup_age idea.\n> > > >\n> > >\n> > > I think you meant to say vacuum_committs_age idea.\n> > >\n> > > > we’ve given more thought to our\n> > > > approach for retaining dead tuples and have come up with another idea\n> > that can\n> > > > reliably detect conflicts without requiring users to choose a wise value for\n> > > > the vacuum_committs_age. This new idea could also reduce the\n> > performance\n> > > > impact. Thanks a lot to Amit for off-list discussion.\n> > > >\n> > > > The concept of the new idea is that, the dead tuples are only useful to\n> > detect\n> > > > conflicts when applying *concurrent* transactions from remotes. Any\n> > subsequent\n> > > > UPDATE from a remote node after removing the dead tuples should have a\n> > later\n> > > > timestamp, meaning it's reasonable to detect an update_missing scenario\n> > and\n> > > > convert the UPDATE to an INSERT when applying it.\n> > > >\n> > > > To achieve above, we can create an additional replication slot on the\n> > > > subscriber side, maintained by the apply worker. This slot is used to retain\n> > > > the dead tuples. The apply worker will advance the slot.xmin after\n> > confirming\n> > > > that all the concurrent transaction on publisher has been applied locally.\n> >\n> > The replication slot used for this purpose will be a physical one or\n> > logical one? And IIUC such a slot doesn't need to retain WAL but if we\n> > do that, how do we advance the LSN of the slot?\n>\n> I think it would be a logical slot. We can keep the\n> restart_lsn/confirmed_flush_lsn as invalid because we don't need to retain the\n> WALs for decoding purpose.\n>\n\nAs per my understanding, one of the main reasons to keep it logical is\nto allow syncing it to standbys (slotsync functionality). It is\nrequired because after promotion the subscriptions replicated to\nstandby could be enabled to make it a subscriber. If that is not\npossible due to any reason then we can consider it to be a physical\nslot as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Sep 2024 10:49:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Mon, Sep 23, 2024 at 8:32 PM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 24, 2024 5:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for considering another idea.\n>\n> Thanks for reviewing the idea!\n>\n> >\n> > On Fri, Sep 20, 2024 at 2:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > Apart from the vacuum_defer_cleanup_age idea.\n> > > >\n> > >\n> > > I think you meant to say vacuum_committs_age idea.\n> > >\n> > > > we’ve given more thought to our\n> > > > approach for retaining dead tuples and have come up with another idea\n> > that can\n> > > > reliably detect conflicts without requiring users to choose a wise value for\n> > > > the vacuum_committs_age. This new idea could also reduce the\n> > performance\n> > > > impact. Thanks a lot to Amit for off-list discussion.\n> > > >\n> > > > The concept of the new idea is that, the dead tuples are only useful to\n> > detect\n> > > > conflicts when applying *concurrent* transactions from remotes. Any\n> > subsequent\n> > > > UPDATE from a remote node after removing the dead tuples should have a\n> > later\n> > > > timestamp, meaning it's reasonable to detect an update_missing scenario\n> > and\n> > > > convert the UPDATE to an INSERT when applying it.\n> > > >\n> > > > To achieve above, we can create an additional replication slot on the\n> > > > subscriber side, maintained by the apply worker. This slot is used to retain\n> > > > the dead tuples. The apply worker will advance the slot.xmin after\n> > confirming\n> > > > that all the concurrent transaction on publisher has been applied locally.\n> >\n> > The replication slot used for this purpose will be a physical one or\n> > logical one? And IIUC such a slot doesn't need to retain WAL but if we\n> > do that, how do we advance the LSN of the slot?\n>\n> I think it would be a logical slot. We can keep the\n> restart_lsn/confirmed_flush_lsn as invalid because we don't need to retain the\n> WALs for decoding purpose.\n>\n> >\n> > > > 2) the apply worker send a new message to walsender to request the latest\n> > wal\n> > > > flush position(GetFlushRecPtr) on publisher, and save it to\n> > > > 'candidate_remote_wal_lsn'. Here we could introduce a new feedback\n> > message or\n> > > > extend the existing keepalive message(e,g extends the requestReply bit in\n> > > > keepalive message to add a 'request_wal_position' value)\n> >\n> > The apply worker sends a keepalive message when it didn't receive\n> > anything more than wal_receiver_timeout / 2. So in a very active\n> > system, we cannot rely on piggybacking new information to the\n> > keepalive messages to get the latest remote flush LSN.\n>\n> Right. I think we need to send this new message at some interval independent of\n> wal_receiver_timeout.\n>\n> >\n> > > > 3) The apply worker can continue to apply changes. After applying all the\n> > WALs\n> > > > upto 'candidate_remote_wal_lsn', the apply worker can then advance the\n> > > > slot.xmin to 'candidate_xmin'.\n> > > >\n> > > > This approach ensures that dead tuples are not removed until all\n> > concurrent\n> > > > transactions have been applied. It can be effective for both bidirectional\n> > and\n> > > > non-bidirectional replication cases.\n> > > >\n> > > > We could introduce a boolean subscription option (retain_dead_tuples) to\n> > > > control whether this feature is enabled. 
Each subscription intending to\n> > detect\n> > > > update-delete conflicts should set retain_dead_tuples to true.\n> > > >\n> >\n> > I'm still studying this idea but let me confirm the following scenario.\n> >\n> > Suppose both Node-A and Node-B have the same row (1,1) in table t, and\n> > XIDs and commit LSNs of T2 and T3 are the following:\n> >\n> > Node A\n> > T2: DELETE FROM t WHERE id = 1 (10:02 AM) XID:100, commit-LSN:1000\n> >\n> > Node B\n> > T3: UPDATE t SET value = 2 WHERE id 1 (10:01 AM) XID:500,\n> > commit-LSN:5000\n> >\n> > Further suppose that it's now 10:05 AM, and the latest XID and the\n> > latest flush WAL position of Node-A and Node-B are following:\n> >\n> > Node A\n> > current XID: 300\n> > latest flush LSN; 3000\n> >\n> > Node B\n> > current XID: 700\n> > latest flush LSN: 7000\n> >\n> > Both T2 and T3 are NOT sent to Node B and Node A yet, respectively\n> > (i.e., the logical replication is delaying for 5 min).\n> >\n> > Consider the following scenario:\n> >\n> > 1. The apply worker on Node-A calls GetRunningTransactionData() and\n> > gets 301 (set as candidate_xmin).\n> > 2. The apply worker on Node-A requests the latest WAL flush position\n> > from Node-B, and gets 7000 (set as candidate_remote_wal_lsn).\n> > 3. T2 is applied on Node-B, and the latest flush position of Node-B is now 8000.\n> > 4. The apply worker on Node-A continues applying changes, and applies\n> > the transactions up to remote (commit) LSN 7100.\n> > 5. Now that the apply worker on Node-A applied all changes smaller\n> > than candidate_remote_wal_lsn (7000), it increases the slot.xmin to\n> > 301 (candidate_xmin).\n> > 6. On Node-A, vacuum runs and physically removes the tuple that was\n> > deleted by T2.\n> >\n> > Here, on Node-B, there might be a transition between LSN 7100 and 8000\n> > that might require the tuple that is deleted by T2.\n> >\n> > For example, \"UPDATE t SET value = 3 WHERE id = 1\" (say T4) is\n> > executed on Node-B at LSN 7200, and it's sent to Node-A after step 6.\n> > On Node-A, whether we detect \"update_deleted\" or \"update_missing\"\n> > still depends on when vacuum removes the tuple deleted by T2.\n>\n> I think in this case, no matter we detect \"update_delete\" or \"update_missing\",\n> the final data is the same. Because T4's commit timestamp should be later than\n> T2 on node A, so in the case of \"update_deleted\", it will compare the commit\n> timestamp of the deleted tuple's xmax with T4's timestamp, and T4 should win,\n> which means we will convert the update into insert and apply. Even if the\n> deleted tuple is deleted and \"update_missing\" is detected, the update will\n> still be converted into insert and applied. So, the result is the same.\n\nThe \"latest_timestamp_wins\" is the default resolution method for\n\"update_deleted\"? When I checked the wiki page[1], the \"skip\" was the\ndefault solution method for that.\n\nRegards,\n\n[1] https://wiki.postgresql.org/wiki/Conflict_Detection_and_Resolution#Defaults\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 Sep 2024 23:42:15 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tuesday, September 24, 2024 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Mon, Sep 23, 2024 at 8:32 PM Zhijie Hou (Fujitsu)\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, September 24, 2024 5:05 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > I'm still studying this idea but let me confirm the following scenario.\r\n> > >\r\n> > > Suppose both Node-A and Node-B have the same row (1,1) in table t,\r\n> > > and XIDs and commit LSNs of T2 and T3 are the following:\r\n> > >\r\n> > > Node A\r\n> > > T2: DELETE FROM t WHERE id = 1 (10:02 AM) XID:100,\r\n> commit-LSN:1000\r\n> > >\r\n> > > Node B\r\n> > > T3: UPDATE t SET value = 2 WHERE id 1 (10:01 AM) XID:500,\r\n> > > commit-LSN:5000\r\n> > >\r\n> > > Further suppose that it's now 10:05 AM, and the latest XID and the\r\n> > > latest flush WAL position of Node-A and Node-B are following:\r\n> > >\r\n> > > Node A\r\n> > > current XID: 300\r\n> > > latest flush LSN; 3000\r\n> > >\r\n> > > Node B\r\n> > > current XID: 700\r\n> > > latest flush LSN: 7000\r\n> > >\r\n> > > Both T2 and T3 are NOT sent to Node B and Node A yet, respectively\r\n> > > (i.e., the logical replication is delaying for 5 min).\r\n> > >\r\n> > > Consider the following scenario:\r\n> > >\r\n> > > 1. The apply worker on Node-A calls GetRunningTransactionData() and\r\n> > > gets 301 (set as candidate_xmin).\r\n> > > 2. The apply worker on Node-A requests the latest WAL flush position\r\n> > > from Node-B, and gets 7000 (set as candidate_remote_wal_lsn).\r\n> > > 3. T2 is applied on Node-B, and the latest flush position of Node-B is now\r\n> 8000.\r\n> > > 4. The apply worker on Node-A continues applying changes, and\r\n> > > applies the transactions up to remote (commit) LSN 7100.\r\n> > > 5. Now that the apply worker on Node-A applied all changes smaller\r\n> > > than candidate_remote_wal_lsn (7000), it increases the slot.xmin to\r\n> > > 301 (candidate_xmin).\r\n> > > 6. On Node-A, vacuum runs and physically removes the tuple that was\r\n> > > deleted by T2.\r\n> > >\r\n> > > Here, on Node-B, there might be a transition between LSN 7100 and\r\n> > > 8000 that might require the tuple that is deleted by T2.\r\n> > >\r\n> > > For example, \"UPDATE t SET value = 3 WHERE id = 1\" (say T4) is\r\n> > > executed on Node-B at LSN 7200, and it's sent to Node-A after step 6.\r\n> > > On Node-A, whether we detect \"update_deleted\" or \"update_missing\"\r\n> > > still depends on when vacuum removes the tuple deleted by T2.\r\n> >\r\n> > I think in this case, no matter we detect \"update_delete\" or\r\n> > \"update_missing\", the final data is the same. Because T4's commit\r\n> > timestamp should be later than\r\n> > T2 on node A, so in the case of \"update_deleted\", it will compare the\r\n> > commit timestamp of the deleted tuple's xmax with T4's timestamp, and\r\n> > T4 should win, which means we will convert the update into insert and\r\n> > apply. Even if the deleted tuple is deleted and \"update_missing\" is\r\n> > detected, the update will still be converted into insert and applied. So, the\r\n> result is the same.\r\n> \r\n> The \"latest_timestamp_wins\" is the default resolution method for\r\n> \"update_deleted\"? 
When I checked the wiki page[1], the \"skip\" was the default\r\n> solution method for that.\r\n\r\nRight, I think the wiki needs some update.\r\n\r\nI think using 'skip' as default for update_delete could easily cause data\r\ndivergence when the dead tuple is deleted by an old transaction while the\r\nUPDATE has a newer timestamp like the case you mentioned. It's necessary to\r\nfollow the last update win strategy when the incoming update has later\r\ntimestamp, which is to convert update to insert.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 24 Sep 2024 07:14:53 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Tue, Sep 24, 2024 at 12:14 AM Zhijie Hou (Fujitsu)\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 24, 2024 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 23, 2024 at 8:32 PM Zhijie Hou (Fujitsu)\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, September 24, 2024 5:05 AM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > > I'm still studying this idea but let me confirm the following scenario.\n> > > >\n> > > > Suppose both Node-A and Node-B have the same row (1,1) in table t,\n> > > > and XIDs and commit LSNs of T2 and T3 are the following:\n> > > >\n> > > > Node A\n> > > > T2: DELETE FROM t WHERE id = 1 (10:02 AM) XID:100,\n> > commit-LSN:1000\n> > > >\n> > > > Node B\n> > > > T3: UPDATE t SET value = 2 WHERE id 1 (10:01 AM) XID:500,\n> > > > commit-LSN:5000\n> > > >\n> > > > Further suppose that it's now 10:05 AM, and the latest XID and the\n> > > > latest flush WAL position of Node-A and Node-B are following:\n> > > >\n> > > > Node A\n> > > > current XID: 300\n> > > > latest flush LSN; 3000\n> > > >\n> > > > Node B\n> > > > current XID: 700\n> > > > latest flush LSN: 7000\n> > > >\n> > > > Both T2 and T3 are NOT sent to Node B and Node A yet, respectively\n> > > > (i.e., the logical replication is delaying for 5 min).\n> > > >\n> > > > Consider the following scenario:\n> > > >\n> > > > 1. The apply worker on Node-A calls GetRunningTransactionData() and\n> > > > gets 301 (set as candidate_xmin).\n> > > > 2. The apply worker on Node-A requests the latest WAL flush position\n> > > > from Node-B, and gets 7000 (set as candidate_remote_wal_lsn).\n> > > > 3. T2 is applied on Node-B, and the latest flush position of Node-B is now\n> > 8000.\n> > > > 4. The apply worker on Node-A continues applying changes, and\n> > > > applies the transactions up to remote (commit) LSN 7100.\n> > > > 5. Now that the apply worker on Node-A applied all changes smaller\n> > > > than candidate_remote_wal_lsn (7000), it increases the slot.xmin to\n> > > > 301 (candidate_xmin).\n> > > > 6. On Node-A, vacuum runs and physically removes the tuple that was\n> > > > deleted by T2.\n> > > >\n> > > > Here, on Node-B, there might be a transition between LSN 7100 and\n> > > > 8000 that might require the tuple that is deleted by T2.\n> > > >\n> > > > For example, \"UPDATE t SET value = 3 WHERE id = 1\" (say T4) is\n> > > > executed on Node-B at LSN 7200, and it's sent to Node-A after step 6.\n> > > > On Node-A, whether we detect \"update_deleted\" or \"update_missing\"\n> > > > still depends on when vacuum removes the tuple deleted by T2.\n> > >\n> > > I think in this case, no matter we detect \"update_delete\" or\n> > > \"update_missing\", the final data is the same. Because T4's commit\n> > > timestamp should be later than\n> > > T2 on node A, so in the case of \"update_deleted\", it will compare the\n> > > commit timestamp of the deleted tuple's xmax with T4's timestamp, and\n> > > T4 should win, which means we will convert the update into insert and\n> > > apply. Even if the deleted tuple is deleted and \"update_missing\" is\n> > > detected, the update will still be converted into insert and applied. So, the\n> > result is the same.\n> >\n> > The \"latest_timestamp_wins\" is the default resolution method for\n> > \"update_deleted\"? 
When I checked the wiki page[1], the \"skip\" was the default\n> > solution method for that.\n>\n> Right, I think the wiki needs some update.\n>\n> I think using 'skip' as default for update_delete could easily cause data\n> divergence when the dead tuple is deleted by an old transaction while the\n> UPDATE has a newer timestamp like the case you mentioned. It's necessary to\n> follow the last update win strategy when the incoming update has later\n> timestamp, which is to convert update to insert.\n\nRight. If \"latest_timestamp_wins\" is the default resolution for\n\"update_deleted\", I think your idea works fine unless I'm missing\ncorner cases.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 Sep 2024 10:24:56 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 2:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Apart from the vacuum_defer_cleanup_age idea.\n> >\n>\n> I think you meant to say vacuum_committs_age idea.\n>\n> > we’ve given more thought to our\n> > approach for retaining dead tuples and have come up with another idea that can\n> > reliably detect conflicts without requiring users to choose a wise value for\n> > the vacuum_committs_age. This new idea could also reduce the performance\n> > impact. Thanks a lot to Amit for off-list discussion.\n> >\n> > The concept of the new idea is that, the dead tuples are only useful to detect\n> > conflicts when applying *concurrent* transactions from remotes. Any subsequent\n> > UPDATE from a remote node after removing the dead tuples should have a later\n> > timestamp, meaning it's reasonable to detect an update_missing scenario and\n> > convert the UPDATE to an INSERT when applying it.\n> >\n> > To achieve above, we can create an additional replication slot on the\n> > subscriber side, maintained by the apply worker. This slot is used to retain\n> > the dead tuples. The apply worker will advance the slot.xmin after confirming\n> > that all the concurrent transaction on publisher has been applied locally.\n> >\n> > The process of advancing the slot.xmin could be:\n> >\n> > 1) the apply worker call GetRunningTransactionData() to get the\n> > 'oldestRunningXid' and consider this as 'candidate_xmin'.\n> > 2) the apply worker send a new message to walsender to request the latest wal\n> > flush position(GetFlushRecPtr) on publisher, and save it to\n> > 'candidate_remote_wal_lsn'. Here we could introduce a new feedback message or\n> > extend the existing keepalive message(e,g extends the requestReply bit in\n> > keepalive message to add a 'request_wal_position' value)\n> > 3) The apply worker can continue to apply changes. After applying all the WALs\n> > upto 'candidate_remote_wal_lsn', the apply worker can then advance the\n> > slot.xmin to 'candidate_xmin'.\n> >\n> > This approach ensures that dead tuples are not removed until all concurrent\n> > transactions have been applied. It can be effective for both bidirectional and\n> > non-bidirectional replication cases.\n> >\n> > We could introduce a boolean subscription option (retain_dead_tuples) to\n> > control whether this feature is enabled. Each subscription intending to detect\n> > update-delete conflicts should set retain_dead_tuples to true.\n> >\n>\n> As each apply worker needs a separate slot to retain deleted rows, the\n> requirement for slots will increase. The other possibility is to\n> maintain one slot by launcher or some other central process that\n> traverses all subscriptions, remember the ones marked with\n> retain_dead_rows (let's call this list as retain_sub_list). Then using\n> running_transactions get the oldest running_xact, and then get the\n> remote flush location from the other node (publisher node) and store\n> those as candidate values (candidate_xmin and\n> candidate_remote_wal_lsn) in slot. We can probably reuse existing\n> candidate variables of the slot. 
Next, we can check the remote_flush\n> locations from all the origins corresponding in retain_sub_list and if\n> all are ahead of candidate_remote_wal_lsn, we can update the slot's\n> xmin to candidate_xmin.\n\nYeah, I think that such an idea to reduce the number required slots\nwould be necessary.\n\n>\n> I think in the above idea we can an optimization to combine the\n> request for remote wal LSN from different subscriptions pointing to\n> the same node to avoid sending multiple requests to the same node. I\n> am not sure if using pg_subscription.subconninfo is sufficient for\n> this, if not we can probably leave this optimization.\n>\n> If this idea is feasible then it would reduce the number of slots\n> required to retain the deleted rows but the launcher needs to get the\n> remote wal location corresponding to each publisher node. There are\n> two ways to achieve that (a) launcher requests one of the apply\n> workers corresponding to subscriptions pointing to the same publisher\n> node to get this information; (b) launcher launches another worker to\n> get the remote wal flush location.\n\nI think the remote wal flush location is asked using a replication\nprotocol. Therefore, if a new worker is responsible for asking wal\nflush location from multiple publishers (like the idea (b)), the\ncorresponding process would need to be launched on publisher sides and\nlogical replication would also need to start on each connection. I\nthink it would be better to get the remote wal flush location using\nthe existing logical replication connection (i.e., between the logical\nwal sender and the apply worker), and advertise the locations on the\nshared memory. Then, the central process who holds the slot to retain\nthe deleted row versions traverses them and increases slot.xmin if\npossible.\n\nThe cost of requesting the remote wal flush location would not be huge\nif we don't ask it very frequently. So probably we can start by having\neach apply worker (in the retain_sub_list) ask the remote wal flush\nlocation and can leave the optimization of avoiding sending the\nrequest for the same publisher.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 Sep 2024 11:22:36 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Friday, September 20, 2024 11:59 AM Hou, Zhijie/侯 志杰 wrote:\r\n> \r\n> On Friday, September 20, 2024 10:55 AM Zhijie Hou (Fujitsu)\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > On Friday, September 20, 2024 2:49 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > >\r\n> > > I think that such a time-based configuration parameter would be a\r\n> > > reasonable solution. The current concerns are that it might affect\r\n> > > vacuum performance and lead to a similar bug we had with\r\n> > vacuum_defer_cleanup_age.\r\n> >\r\n> > Thanks for the feedback!\r\n> >\r\n> > I am working on the POC patch and doing some initial performance tests\r\n> > on this idea.\r\n> > I will share the results after finishing.\r\n\r\nHere is a POC patch for vacuum_committs_age idea. The patch adds a GUC\r\nvacuum_committs_age to prevent dead rows from being removed if the age of the\r\ndelete transaction (xmax) has not exceeded the vacuum_committs_age threshold.\r\nE.g. , it ensures the row is retained if now() - commit_timestamp_of_xmax <\r\nvacuum_committs_age.\r\n\r\nHowever, please note that the patch is still unfinished due to a few\r\nissues that need to be addressed. For instance: We need to prevent\r\nrelfrozenxid/datfrozenxid from being advanced in both aggressive and\r\nnon-aggressive vacuum modes. Otherwise, the commit timestamp data is cleaned\r\nup after advancing frozenxid, and we won’t be able to compute the age of a tuple.\r\n\r\nAdditionally, the patch has a noticeable performance impact on vacuum\r\noperations when rows in a table are deleted by multiple transactions. Here are\r\nthe results of VACUUMing a table after deleting each row in a separate\r\ntransaction (total of 10000000 dead rows) and the xmax ages of all the dead\r\ntuples have exceeded the vacuum_committs_age in patched tests (see attachment\r\nfor the basic configuration of the tests):\r\n\r\n HEAD:\t\t\tTime: 848.637 ms\r\n patched, SLRU 8MB:\tTime: 1423.915 ms\r\n patched, SLRU 1G:\t\tTime: 1310.869 ms\r\n\r\nSince we have discussed about an alternative approach that can reliably retain\r\ndead tuples without modifying vacuum process. We plan to shift our focus to\r\nthis new approach [1]. I am currently working on another POC patch based on this\r\nnew approach and will share it later.\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoD%3Dm-YHceYMpsdu0HnGCaezeyVhaCPFxDLHU7aN0wgzqg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Wed, 25 Sep 2024 05:44:00 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
},
{
"msg_contents": "On Wednesday, September 25, 2024 2:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Fri, Sep 20, 2024 at 2:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Sep 20, 2024 at 8:25 AM Zhijie Hou (Fujitsu)\r\n> > <houzj.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Apart from the vacuum_defer_cleanup_age idea.\r\n> > >\r\n> >\r\n> > I think you meant to say vacuum_committs_age idea.\r\n> >\r\n> > > we’ve given more thought to our\r\n> > > approach for retaining dead tuples and have come up with another\r\n> > > idea that can reliably detect conflicts without requiring users to\r\n> > > choose a wise value for the vacuum_committs_age. This new idea could\r\n> > > also reduce the performance impact. Thanks a lot to Amit for off-list\r\n> discussion.\r\n> > >\r\n> > > The concept of the new idea is that, the dead tuples are only useful\r\n> > > to detect conflicts when applying *concurrent* transactions from\r\n> > > remotes. Any subsequent UPDATE from a remote node after removing the\r\n> > > dead tuples should have a later timestamp, meaning it's reasonable\r\n> > > to detect an update_missing scenario and convert the UPDATE to an\r\n> INSERT when applying it.\r\n> > >\r\n> > > To achieve above, we can create an additional replication slot on\r\n> > > the subscriber side, maintained by the apply worker. This slot is\r\n> > > used to retain the dead tuples. The apply worker will advance the\r\n> > > slot.xmin after confirming that all the concurrent transaction on publisher\r\n> has been applied locally.\r\n> > >\r\n> > > The process of advancing the slot.xmin could be:\r\n> > >\r\n> > > 1) the apply worker call GetRunningTransactionData() to get the\r\n> > > 'oldestRunningXid' and consider this as 'candidate_xmin'.\r\n> > > 2) the apply worker send a new message to walsender to request the\r\n> > > latest wal flush position(GetFlushRecPtr) on publisher, and save it\r\n> > > to 'candidate_remote_wal_lsn'. Here we could introduce a new\r\n> > > feedback message or extend the existing keepalive message(e,g\r\n> > > extends the requestReply bit in keepalive message to add a\r\n> > > 'request_wal_position' value)\r\n> > > 3) The apply worker can continue to apply changes. After applying\r\n> > > all the WALs upto 'candidate_remote_wal_lsn', the apply worker can\r\n> > > then advance the slot.xmin to 'candidate_xmin'.\r\n> > >\r\n> > > This approach ensures that dead tuples are not removed until all\r\n> > > concurrent transactions have been applied. It can be effective for\r\n> > > both bidirectional and non-bidirectional replication cases.\r\n> > >\r\n> > > We could introduce a boolean subscription option\r\n> > > (retain_dead_tuples) to control whether this feature is enabled.\r\n> > > Each subscription intending to detect update-delete conflicts should set\r\n> retain_dead_tuples to true.\r\n> > >\r\n> >\r\n> > As each apply worker needs a separate slot to retain deleted rows, the\r\n> > requirement for slots will increase. The other possibility is to\r\n> > maintain one slot by launcher or some other central process that\r\n> > traverses all subscriptions, remember the ones marked with\r\n> > retain_dead_rows (let's call this list as retain_sub_list). Then using\r\n> > running_transactions get the oldest running_xact, and then get the\r\n> > remote flush location from the other node (publisher node) and store\r\n> > those as candidate values (candidate_xmin and\r\n> > candidate_remote_wal_lsn) in slot. 
We can probably reuse existing\r\n> > candidate variables of the slot. Next, we can check the remote_flush\r\n> > locations from all the origins corresponding in retain_sub_list and if\r\n> > all are ahead of candidate_remote_wal_lsn, we can update the slot's\r\n> > xmin to candidate_xmin.\r\n> \r\n> Yeah, I think that such an idea to reduce the number required slots would be\r\n> necessary.\r\n> \r\n> >\r\n> > I think in the above idea we can an optimization to combine the\r\n> > request for remote wal LSN from different subscriptions pointing to\r\n> > the same node to avoid sending multiple requests to the same node. I\r\n> > am not sure if using pg_subscription.subconninfo is sufficient for\r\n> > this, if not we can probably leave this optimization.\r\n> >\r\n> > If this idea is feasible then it would reduce the number of slots\r\n> > required to retain the deleted rows but the launcher needs to get the\r\n> > remote wal location corresponding to each publisher node. There are\r\n> > two ways to achieve that (a) launcher requests one of the apply\r\n> > workers corresponding to subscriptions pointing to the same publisher\r\n> > node to get this information; (b) launcher launches another worker to\r\n> > get the remote wal flush location.\r\n> \r\n> I think the remote wal flush location is asked using a replication protocol.\r\n> Therefore, if a new worker is responsible for asking wal flush location from\r\n> multiple publishers (like the idea (b)), the corresponding process would need\r\n> to be launched on publisher sides and logical replication would also need to\r\n> start on each connection. I think it would be better to get the remote wal flush\r\n> location using the existing logical replication connection (i.e., between the\r\n> logical wal sender and the apply worker), and advertise the locations on the\r\n> shared memory. Then, the central process who holds the slot to retain the\r\n> deleted row versions traverses them and increases slot.xmin if possible.\r\n> \r\n> The cost of requesting the remote wal flush location would not be huge if we\r\n> don't ask it very frequently. So probably we can start by having each apply\r\n> worker (in the retain_sub_list) ask the remote wal flush location and can leave\r\n> the optimization of avoiding sending the request for the same publisher.\r\n\r\nAgreed. Here is the POC patch set based on this idea.\r\n\r\nThe implementation is as follows:\r\n\r\nA subscription option is added to allow users to specify whether dead\r\ntuples on the subscriber, which are useful for detecting update_deleted\r\nconflicts, should be retained. The default setting is false. If set to true,\r\nthe detection of update_deleted will be enabled, and an additional replication\r\nslot named pg_conflict_detection will be created on the subscriber to prevent\r\ndead tuples from being removed. Note that if multiple subscriptions on one node\r\nenable this option, only one replication slot will be created.\r\n\r\nThis additional slot will be used to retain dead tuples. 
Each apply worker will\r\nmaintain its own non-removable transaction ID by following the steps:\r\n\r\n1) Calling GetRunningTransactionData() to take oldestRunningXid as the\r\ncandidate xid and send a new message to request the remote WAL position from\r\nthe walsender.\r\n2) It then waits (non-blocking) to receive the WAL position from the walsender.\r\n3) After receiving the WAL position, the non-removable transaction ID is\r\nadvanced if the current flush location has reached or surpassed the received\r\nWAL position.\r\n\r\nThese steps are repeated at intervals defined by wal_receiver_status_interval\r\nto minimize performance impact. It ensures that dead tuples are not\r\nremoved until all concurrent transactions have been applied.\r\n\r\nThe launcher periodically collects the oldest_nonremovable_xid from all apply\r\nworkers. It then computes the minimum transaction ID and advances the xmin\r\nvalue of the replication slot if it precedes the computed value.\r\n\r\nI will keep testing the patch internally and analyze whether it's necessary to enable\r\nfailover for this new replication slot.\r\n\r\nPlease refer to the commit message of V2-0001 for the overall design.\r\nThe patch set is split into some parts to make it easier for the initial\r\nreview. Please note that each patch is interdependent and cannot work\r\nindependently.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Mon, 30 Sep 2024 06:32:42 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Conflict detection for update_deleted in logical replication"
}
] |
[
{
"msg_contents": "hi.\none minor issue for ATExecColumnDefault comments.\n\n\n/*\n * ALTER TABLE ALTER COLUMN SET/DROP DEFAULT\n *\n * Return the address of the affected column.\n */\nstatic ObjectAddress\nATExecColumnDefault(Relation rel, const char *colName,\n Node *newDefault, LOCKMODE lockmode)\n\nthe comment should be:\n * ALTER TABLE ALTER COLUMN SET/DROP DEFAULT\n * ALTER VIEW ALTER COLUMN SET/DROP DEFAULT\n?\n\n\nSometimes, comments are useful for quickly glance at what this function does.\nTherefore I guess reword it would be helpful.\n\n\n",
"msg_date": "Thu, 5 Sep 2024 21:50:26 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "ATExecColumnDefault comments"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIt's quite common that poor query performance can be attributed to \ninaccurate row estimations by the planner. To make it easier to detect \nthese discrepancies, rather than scrutinizing the estimates manually, it \nwould be helpful to output a dedicated |NOTICE| message.\n\nIn the current patch, I've introduced a new GUC parameter called \n'estimated_rows_scale_factor'. If the ratio of the estimated rows to the \nactual rows is less than this factor, or if the estimated rows \nsignificantly exceed the actual rows (when the ratio is greater than \nthis factor), a NOTICE message will be printed. The message reads: \n\"Estimated rows (%.0f) less(greater) than actual rows (%.0f).\"\n\n\nHere is an example:\n\nCREATE TABLE t(a int, b int);\nINSERT INTO t SELECT x/10, x FROM generate_series(1,10000000) g(x);\nANALYZE;\n\nSET estimated_rows_scale_factor = 0.9;\n\nEXPLAIN ANALYZE SELECT * FROM t WHERE a > 10 AND b <= 200;\nNOTICE: Estimated rows (1000) greater than actual rows (91).\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..107848.00 rows=1000 width=8) (actual \ntime=0.446..122.476 rows=91 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on t (cost=0.00..106748.00 rows=417 width=8) \n(actual time=77.657..118.000 rows=30 loops=3)\n Filter: ((a > 10) AND (b <= 200))\n Rows Removed by Filter: 3333303\n Planning Time: 0.097 ms\n Execution Time: 122.502 ms\n(8 rows)\n\nEXPLAIN ANALYZE SELECT * FROM t WHERE a = 10 AND b <= 200;\nNOTICE: Estimated rows (1) less than actual rows (10).\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..107748.10 rows=1 width=8) (actual \ntime=0.280..104.752 rows=10 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on t (cost=0.00..106748.00 rows=1 width=8) \n(actual time=66.493..101.102 rows=3 loops=3)\n Filter: ((b <= 200) AND (a = 10))\n Rows Removed by Filter: 3333330\n Planning Time: 0.129 ms\n Execution Time: 104.768 ms\n(8 rows)\n\n\nIf you have any suggestions regarding the wording of the message, its \nplacement, or if you'd like to see a different criterion used, I would \ngreatly appreciate your feedback.\n\nLooking forward to your thoughts and suggestions.\n\n-- \nRegards,\nIlia Evdokimov,\nTantor Labs LCC.",
"msg_date": "Thu, 5 Sep 2024 19:05:06 +0300",
"msg_from": "Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>",
"msg_from_op": true,
"msg_subject": "Adding NOTICE for differences between estimated and actual rows"
},
{
"msg_contents": "On Thu, Sep 5, 2024, at 1:05 PM, Ilia Evdokimov wrote:\n> It's quite common that poor query performance can be attributed to inaccurate row estimations by the planner. To make it easier to detect these discrepancies, rather than scrutinizing the estimates manually, it would be helpful to output a dedicated `NOTICE` message.\n> \n\nI don't know if NOTICE is a good UI for an inaccurate estimation. The main issue\nwith your proposal is that it does not indicate where it is. It is easier to\ninspect small query plans but what if you have a plan with hundreds of lines?\n\nIMO the client should provide this feature. The shell provides a way to change\nthe color and/or style from the output. I have a perl script that reads an\nEXPLAIN output and mark with different colors (red, yellow) if the estimations\nare off. psql could do the same.\n\nIn your case if the output was changed to something like:\n\n\\033[0;1;31mGather (cost=1000.00..107848.00 rows=1000 width=8) (actual time=0.446..122.476 rows=91 loops=1)\\033[0m\n Workers Planned: 2\n Workers Launched: 2\n -> \\033[0;1;31mParallel Seq Scan on t (cost=0.00..106748.00 rows=417 width=8) (actual time=77.657..118.000 rows=30 loops=3)\\033[0m\n Filter: ((a > 10) AND (b <= 200))\n Rows Removed by Filter: 3333303\nPlanning Time: 0.097 ms\nExecution Time: 122.502 ms\n(8 rows)\n\nNote \"\\033[0;1;31m\" and \"\\033[0m\" that means foreground bold red and default,\nrespectively.\n\nAnother alternative if you don't want to modify psql is to use the pager. Create\na script that contains your logic to apply color and/or style to the desired\n(sub)string(s). The following example that I extracted from [1] can apply colors\nto psql output.\n\n$ cat /tmp/pcc.pl\n#!/usr/bin/perl -n\nprint \"\\033[1m\\033[35m$1\\033[36m$2\\033[32m$3\\033[33m$4\\033[m\" while /([|+-]+)|([0-9]+)|([a-zA-Z_]+)|([^\\w])/g;\n\nand then you can start psql as:\n\n$ PAGER=\"/c/mypager.pl\" psql\n\n\n[1] https://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux/28938235#28938235\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Sep 5, 2024, at 1:05 PM, Ilia Evdokimov wrote:It's quite common that poor query performance can be attributed\n to inaccurate row estimations by the planner. To make it easier to\n detect these discrepancies, rather than scrutinizing the estimates\n manually, it would be helpful to output a dedicated NOTICE message.I don't know if NOTICE is a good UI for an inaccurate estimation. The main issuewith your proposal is that it does not indicate where it is. It is easier toinspect small query plans but what if you have a plan with hundreds of lines?IMO the client should provide this feature. The shell provides a way to changethe color and/or style from the output. I have a perl script that reads anEXPLAIN output and mark with different colors (red, yellow) if the estimationsare off. 
psql could do the same.In your case if the output was changed to something like:\\033[0;1;31mGather (cost=1000.00..107848.00 rows=1000 width=8) (actual time=0.446..122.476 rows=91 loops=1)\\033[0m Workers Planned: 2 Workers Launched: 2 -> \\033[0;1;31mParallel Seq Scan on t (cost=0.00..106748.00 rows=417 width=8) (actual time=77.657..118.000 rows=30 loops=3)\\033[0m Filter: ((a > 10) AND (b <= 200)) Rows Removed by Filter: 3333303Planning Time: 0.097 msExecution Time: 122.502 ms(8 rows)Note \"\\033[0;1;31m\" and \"\\033[0m\" that means foreground bold red and default,respectively.Another alternative if you don't want to modify psql is to use the pager. Createa script that contains your logic to apply color and/or style to the desired(sub)string(s). The following example that I extracted from [1] can apply colorsto psql output.$ cat /tmp/pcc.pl#!/usr/bin/perl -nprint \"\\033[1m\\033[35m$1\\033[36m$2\\033[32m$3\\033[33m$4\\033[m\" while /([|+-]+)|([0-9]+)|([a-zA-Z_]+)|([^\\w])/g;and then you can start psql as:$ PAGER=\"/c/mypager.pl\" psql[1] https://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux/28938235#28938235--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 05 Sep 2024 17:32:46 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding NOTICE for differences between estimated and actual rows"
},
{
"msg_contents": "On Thu, Sep 5, 2024, at 5:32 PM, Euler Taveira wrote:\n> $ cat /tmp/pcc.pl\n> #!/usr/bin/perl -n\n> print \"\\033[1m\\033[35m$1\\033[36m$2\\033[32m$3\\033[33m$4\\033[m\" while /([|+-]+)|([0-9]+)|([a-zA-Z_]+)|([^\\w])/g;\n> \n> and then you can start psql as:\n> \n> $ PAGER=\"/c/mypager.pl\" psql\n\nI meant:\n\n$ PAGER=\"/tmp/pcc.pl\" psql\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Sep 5, 2024, at 5:32 PM, Euler Taveira wrote:$ cat /tmp/pcc.pl#!/usr/bin/perl -nprint \"\\033[1m\\033[35m$1\\033[36m$2\\033[32m$3\\033[33m$4\\033[m\" while /([|+-]+)|([0-9]+)|([a-zA-Z_]+)|([^\\w])/g;and then you can start psql as:$ PAGER=\"/c/mypager.pl\" psqlI meant:$ PAGER=\"/tmp/pcc.pl\" psql--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 05 Sep 2024 17:57:44 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding NOTICE for differences between estimated and actual rows"
},
{
"msg_contents": "On 05.09.2024 23:32, Euler Taveira wrote:\n> On Thu, Sep 5, 2024, at 1:05 PM, Ilia Evdokimov wrote:\n>>\n>> It's quite common that poor query performance can be attributed to \n>> inaccurate row estimations by the planner. To make it easier to \n>> detect these discrepancies, rather than scrutinizing the estimates \n>> manually, it would be helpful to output a dedicated |NOTICE| message.\n>>\n>\n> I don't know if NOTICE is a good UI for an inaccurate estimation. The \n> main issue\n> with your proposal is that it does not indicate where it is. It is \n> easier to\n> inspect small query plans but what if you have a plan with hundreds of \n> lines?\n>\n> IMO the client should provide this feature. The shell provides a way \n> to change\n> the color and/or style from the output. I have a perl script that reads an\n> EXPLAIN output and mark with different colors (red, yellow) if the \n> estimations\n> are off. psql could do the same.\n>\n> In your case if the output was changed to something like:\n>\n> \\033[0;1;31mGather (cost=1000.00..107848.00 rows=1000 width=8) \n> (actual time=0.446..122.476 rows=91 loops=1)\\033[0m\n> Workers Planned: 2\n> Workers Launched: 2\n> -> \\033[0;1;31mParallel Seq Scan on t (cost=0.00..106748.00 \n> rows=417 width=8) (actual time=77.657..118.000 rows=30 loops=3)\\033[0m\n> Filter: ((a > 10) AND (b <= 200))\n> Rows Removed by Filter: 3333303\n> Planning Time: 0.097 ms\n> Execution Time: 122.502 ms\n> (8 rows)\n>\n> Note \"\\033[0;1;31m\" and \"\\033[0m\" that means foreground bold red and \n> default,\n> respectively.\n>\n> Another alternative if you don't want to modify psql is to use the \n> pager. Create\n> a script that contains your logic to apply color and/or style to the \n> desired\n> (sub)string(s). The following example that I extracted from [1] can \n> apply colors\n> to psql output.\n>\n> $ cat /tmp/pcc.pl\n> #!/usr/bin/perl -n\n> print \"\\033[1m\\033[35m$1\\033[36m$2\\033[32m$3\\033[33m$4\\033[m\" while \n> /([|+-]+)|([0-9]+)|([a-zA-Z_]+)|([^\\w])/g;\n>\n> and then you can start psql as:\n>\n> $ PAGER=\"/c/mypager.pl\" psql\n>\n>\n> [1] \n> https://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux/28938235#28938235\n>\n>\n> --\n> Euler Taveira\n> EDB https://www.enterprisedb.com/\n>\n\nYes, you are right. It probably doesn't make sense to implement such a \nnotification on the server side. It makes more sense to handle this on \nthe client side, where there are many different tools, including your \nsuggestion, to highlight inaccurate estimates.\n\nThank you very much for the review!\n\n--\n\nRegards,\nIlia Evdokimov,\nTantor Labs LCC.\n\n\n\n\n\n\n\n\nOn 05.09.2024 23:32, Euler Taveira\n wrote:\n\n\n\n\n\nOn Thu, Sep 5, 2024, at 1:05 PM, Ilia Evdokimov wrote:\n\n\nIt's quite common that poor query performance can be\n attributed to inaccurate row estimations by the planner. To\n make it easier to detect these discrepancies, rather than\n scrutinizing the estimates manually, it would be helpful to\n output a dedicated NOTICE message.\n\n\n\n\nI don't know if NOTICE is a good UI for an inaccurate\n estimation. The main issue\n\nwith your proposal is that it does not indicate where it is.\n It is easier to\n\ninspect small query plans but what if you have a plan with\n hundreds of lines?\n\n\n\nIMO the client should provide this feature. The shell\n provides a way to change\n\nthe color and/or style from the output. 
I have a perl script\n that reads an\n\nEXPLAIN output and mark with different colors (red, yellow)\n if the estimations\n\nare off. psql could do the same.\n\n\n\nIn your case if the output was changed to something like:\n\n\n\n\\033[0;1;31mGather (cost=1000.00..107848.00 rows=1000\n width=8) (actual time=0.446..122.476 rows=91 loops=1)\\033[0m\n\n Workers Planned: 2\n\n Workers Launched: 2\n\n -> \\033[0;1;31mParallel Seq Scan on t \n (cost=0.00..106748.00 rows=417 width=8) (actual\n time=77.657..118.000 rows=30 loops=3)\\033[0m\n\n Filter: ((a > 10) AND (b <= 200))\n\n Rows Removed by Filter: 3333303\n\nPlanning Time: 0.097 ms\n\nExecution Time: 122.502 ms\n\n(8 rows)\n\n\n\nNote \"\\033[0;1;31m\" and \"\\033[0m\" that means foreground bold\n red and default,\n\nrespectively.\n\n\n\nAnother alternative if you don't want to modify psql is to\n use the pager. Create\n\na script that contains your logic to apply color and/or style\n to the desired\n\n(sub)string(s). The following example that I extracted from\n [1] can apply colors\n\nto psql output.\n\n\n\n$ cat /tmp/pcc.pl\n\n#!/usr/bin/perl -n\n\nprint \"\\033[1m\\033[35m$1\\033[36m$2\\033[32m$3\\033[33m$4\\033[m\"\n while /([|+-]+)|([0-9]+)|([a-zA-Z_]+)|([^\\w])/g;\n\n\n\nand then you can start psql as:\n\n\n\n$ PAGER=\"/c/mypager.pl\" psql\n\n\n\n\n\n[1] https://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux/28938235#28938235\n\n\n\n\n\n\n--\n\nEuler Taveira\n\nEDB https://www.enterprisedb.com/\n\n\n\n\n\n\nYes, you are right. It probably doesn't make sense to implement\n such a notification on the server side. It makes more sense to\n handle this on the client side, where there are many different\n tools, including your suggestion, to highlight inaccurate\n estimates.\nThank you very much for the review!\n--\nRegards,\n Ilia Evdokimov,\n Tantor Labs LCC.",
"msg_date": "Fri, 6 Sep 2024 00:11:08 +0300",
"msg_from": "Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Adding NOTICE for differences between estimated and actual rows"
}
] |
[
{
"msg_contents": "Hi I am trying to install Postgresql 16 on my freebsd 14.1 by compiling it\nhosted in an ec2 machine on AWS.\n\nI am using GCC13 to compile the binaries and I keep on running into\n\n\n*gcc13: fatal error: cannot read spec file './specs': Is a directory *Please\nHelp\n\nhere is the code of my bash script I have also uploaded the same code in\ngithub gist :-\n\nhttps://gist.github.com/mokshchadha/6c5590e339959a91dc6985b96dee25cb\n\nCode in the above mentioned gist\n\n #!/bin/sh\n\n\n# Configuration variables\nPOSTGRES_VERSION=\"16.3\"\nPOSTGRES_PREFIX=\"/usr/local/pgsql\"\nDATA_DIR=\"$POSTGRES_PREFIX/data\"\nLOGFILE=\"$POSTGRES_PREFIX/logfile\"\nBUILD_DIR=\"/tmp/postgresql-build\"\nPYTHON3_PATH=$(which python3)\n\n# Helper functions\nerror_exit() {\n echo \"Error: $1\" >&2\n cleanup\n exit 1\n}\n\nwarning_message() {\n echo \"Warning: $1\" >&2\n}\n\ncleanup() {\n echo \"Cleaning up...\"\n [ -d \"$BUILD_DIR\" ] && rm -rf \"$BUILD_DIR\" || warning_message\n\"Failed to remove build directory.\"\n}\n\ncheck_prerequisites() {\n # Check for GCC 13\n if ! command -v gcc13 >/dev/null 2>&1; then\n echo \"GCC 13 is not installed. Installing GCC 13...\"\n pkg install -y gcc13 || error_exit \"Failed to install GCC 13.\nPlease install it manually using 'pkg install gcc13'.\"\n else\n echo \"GCC 13 is installed. Checking version and configuration...\"\n gcc13 --version\n gcc13 -v 2>&1 | grep \"Configured with\"\n\n # Check for specs file issue\n GCC_LIBDIR=\"/usr/local/lib/gcc13\"\n SPECS_FILE=\"$GCC_LIBDIR/specs\"\n\n if [ ! -f \"$SPECS_FILE\" ]; then\n echo \"specs file not found. Attempting to create...\"\n if ! gcc13 -dumpspecs > \"$SPECS_FILE\" 2>/dev/null; then\n error_exit \"Failed to create specs file. Please check\nGCC 13 installation.\"\n fi\n fi\n\n # Verify GCC functionality\n if ! gcc13 -v >/dev/null 2>&1; then\n error_exit \"GCC 13 is not functioning correctly. Please\ncheck your GCC installation.\"\n fi\n fi\n\n # Check for GNU Make\n if ! command -v gmake >/dev/null 2>&1; then\n echo \"GNU Make is not installed. Installing GNU Make...\"\n pkg install -y gmake || error_exit \"Failed to install GNU\nMake. Please install it manually using 'pkg install gmake'.\"\n fi\n\n command -v fetch >/dev/null 2>&1 || error_exit \"fetch is required\nbut not installed. Please install it using 'pkg install fetch'.\"\n command -v python3 >/dev/null 2>&1 || error_exit \"Python3 is\nrequired but not installed. Please install it using 'pkg install\npython3'.\"\n command -v openssl >/dev/null 2>&1 || error_exit \"OpenSSL is\nrequired but not installed. Please install it using 'pkg install\nopenssl'.\"\n\n # Check for pkg-config\n if ! command -v pkg-config >/dev/null 2>&1; then\n echo \"pkg-config is not installed. Installing pkg-config...\"\n pkg install -y pkgconf || error_exit \"Failed to install\npkg-config. Please install it manually using 'pkg install pkgconf'.\"\n fi\n\n # Check for LZ4\n if ! pkg info -e liblz4 >/dev/null 2>&1; then\n echo \"LZ4 is not installed. Installing LZ4...\"\n pkg install -y liblz4 || error_exit \"Failed to install LZ4.\nPlease install it manually using 'pkg install liblz4'.\"\n fi\n\n # Verify LZ4 installation\n if ! pkg-config --exists liblz4; then\n error_exit \"LZ4 library not found by pkg-config. Please check\nyour LZ4 installation.\"\n fi\n\n # Check for ICU\n if ! pkg info -e icu >/dev/null 2>&1; then\n echo \"ICU is not installed. 
Installing ICU...\"\n pkg install -y icu || error_exit \"Failed to install ICU.\nPlease install it manually using 'pkg install icu'.\"\n fi\n\n # Verify ICU installation\n if [ -f /usr/local/lib/libicuuc.so ]; then\n echo \"ICU library found at /usr/local/lib/libicuuc.so\"\n else\n error_exit \"ICU library not found at expected location. Please\ncheck your ICU installation.\"\n fi\n\n # Print ICU version\n echo \"ICU version:\"\n pkg info icu | grep Version\n\n # Print LZ4 version\n echo \"LZ4 version:\"\n pkg info liblz4 | grep Version\n}\n\nensure_install_directory() {\n if [ ! -d \"$POSTGRES_PREFIX\" ]; then\n mkdir -p \"$POSTGRES_PREFIX\" || error_exit \"Failed to create\ninstallation directory.\"\n elif [ ! -w \"$POSTGRES_PREFIX\" ]; then\n chmod u+w \"$POSTGRES_PREFIX\" || error_exit \"Failed to set\npermissions on installation directory.\"\n fi\n}\n\ncreate_postgres_user() {\n if ! pw groupshow postgres >/dev/null 2>&1; then\n echo \"Creating 'postgres' group...\"\n pw groupadd postgres || error_exit \"Failed to create 'postgres' group.\"\n fi\n\n if ! pw usershow postgres >/dev/null 2>&1; then\n echo \"Creating 'postgres' user...\"\n pw useradd postgres -g postgres -m -s /usr/local/bin/bash ||\nerror_exit \"Failed to create 'postgres' user.\"\n else\n echo \"'postgres' user already exists.\"\n fi\n}\n\ndownload_postgresql() {\n echo \"Downloading PostgreSQL $POSTGRES_VERSION...\"\n mkdir -p \"$BUILD_DIR\" || error_exit \"Failed to create build directory.\"\n cd \"$BUILD_DIR\" || error_exit \"Failed to enter build directory.\"\n\n if [ ! -f \"postgresql-$POSTGRES_VERSION.tar.bz2\" ]; then\n fetch \"https://ftp.postgresql.org/pub/source/v$POSTGRES_VERSION/postgresql-$POSTGRES_VERSION.tar.bz2\"\n|| error_exit \"Failed to download PostgreSQL source.\"\n else\n echo \"Source tarball already exists, skipping download.\"\n fi\n\n if [ ! 
-d \"postgresql-$POSTGRES_VERSION\" ]; then\n tar -xvf \"postgresql-$POSTGRES_VERSION.tar.bz2\" || error_exit\n\"Failed to extract PostgreSQL source.\"\n else\n echo \"Source directory already exists, skipping extraction.\"\n fi\n\n cd \"postgresql-$POSTGRES_VERSION\" || error_exit \"Failed to enter\nPostgreSQL source directory.\"\n}\n\nconfigure_postgresql() {\n echo \"Configuring PostgreSQL with custom options...\"\n PYTHON_INCLUDE_DIR=$($PYTHON3_PATH -c \"from distutils.sysconfig\nimport get_python_inc; print(get_python_inc())\")\n PYTHON_LIB_DIR=$($PYTHON3_PATH -c \"from distutils.sysconfig import\nget_config_var; print(get_config_var('LIBDIR'))\")\n\n # Add ICU library and include paths\n ICU_LIBS=\"-L/usr/local/lib -licui18n -licuuc -licudata\"\n ICU_CFLAGS=\"-I/usr/local/include\"\n\n # Add LZ4 library and include paths\n LZ4_LIBS=$(pkg-config --libs liblz4)\n LZ4_CFLAGS=$(pkg-config --cflags liblz4)\n\n export CC=gcc13\n export LDFLAGS=\"-L/usr/local/lib -L$PYTHON_LIB_DIR $ICU_LIBS $LZ4_LIBS\"\n export CPPFLAGS=\"-I/usr/local/include -I$PYTHON_INCLUDE_DIR\n$ICU_CFLAGS $LZ4_CFLAGS\"\n export ICU_LIBS\n export ICU_CFLAGS\n export LZ4_LIBS\n export LZ4_CFLAGS\n export LD_LIBRARY_PATH=\"/usr/local/lib:$LD_LIBRARY_PATH\"\n export LIBRARY_PATH=\"/usr/local/lib:$LIBRARY_PATH\"\n export PKG_CONFIG_PATH=\"/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH\"\n\n config_command=\"./configure \\\n CC=gcc13 \\\n --prefix=\\\"$POSTGRES_PREFIX\\\" \\\n --with-blocksize=32 \\\n --with-segsize=8 \\\n --with-openssl \\\n --with-ssl=openssl \\\n --with-lz4 \\\n --with-python \\\n --with-icu \\\n --with-includes=\\\"/usr/local/include $PYTHON_INCLUDE_DIR\\\" \\\n --with-libraries=\\\"/usr/local/lib $PYTHON_LIB_DIR\\\"\"\n echo \"Configuration command: $config_command\"\n echo \"LDFLAGS: $LDFLAGS\"\n echo \"CPPFLAGS: $CPPFLAGS\"\n echo \"ICU_LIBS: $ICU_LIBS\"\n echo \"ICU_CFLAGS: $ICU_CFLAGS\"\n echo \"LZ4_LIBS: $LZ4_LIBS\"\n echo \"LZ4_CFLAGS: $LZ4_CFLAGS\"\n echo \"LD_LIBRARY_PATH: $LD_LIBRARY_PATH\"\n echo \"LIBRARY_PATH: $LIBRARY_PATH\"\n echo \"PKG_CONFIG_PATH: $PKG_CONFIG_PATH\"\n\n # Run configure and capture output\n if ! eval $config_command > configure_output.log 2>&1; then\n echo \"Configuration failed. Last 50 lines of output:\"\n tail -n 50 configure_output.log\n error_exit \"Configuration failed. See configure_output.log for details.\"\n fi\n}\n\nverify_compilation_options() {\n echo \"Verifying compilation options...\"\n grep -E \"BLCKSZ|RELSEG_SIZE\" src/include/pg_config.h\n}\n\ncompile_postgresql() {\n echo \"Compiling PostgreSQL...\"\n gmake || error_exit \"Compilation failed.\"\n\n echo \"Compiling contrib modules (including pg_trgm)...\"\n cd contrib || error_exit \"Failed to enter contrib directory.\"\n gmake || error_exit \"Compilation of contrib modules failed.\"\n\n cd .. || error_exit \"Failed to return to main PostgreSQL directory.\"\n verify_compilation_options\n}\n\ninstall_postgresql() {\n echo \"Installing PostgreSQL...\"\n gmake install || error_exit \"Installation failed.\"\n\n echo \"Installing contrib modules (including pg_trgm)...\"\n cd contrib || error_exit \"Failed to enter contrib directory.\"\n gmake install || error_exit \"Installation of contrib modules failed.\"\n\n cd .. || error_exit \"Failed to return to main PostgreSQL directory.\"\n}\n\nsetup_environment() {\n echo \"Setting up environment variables...\"\n if ! 
grep -q \"$POSTGRES_PREFIX/bin\" /etc/profile; then\n echo \"export PATH=\\\"$POSTGRES_PREFIX/bin:\\$PATH\\\"\" >>\n/etc/profile || warning_message \"Failed to update /etc/profile.\"\n . /etc/profile || warning_message \"Failed to source /etc/profile.\"\n else\n echo \"PATH already includes $POSTGRES_PREFIX/bin.\"\n fi\n}\n\ninitialize_database() {\n echo \"Initializing the PostgreSQL database...\"\n mkdir -p \"$DATA_DIR\" || error_exit \"Failed to create data directory.\"\n chown postgres:postgres \"$DATA_DIR\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/initdb -D $DATA_DIR\" ||\nerror_exit \"Database initialization failed.\"\n}\n\ncreate_extension_pg_trgm() {\n echo \"Creating pg_trgm extension...\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/psql -d postgres -c\n'CREATE EXTENSION IF NOT EXISTS pg_trgm;'\" || warning_message \"Failed\nto create pg_trgm extension. You may need to create it manually in\nyour databases.\"\n}\n\nstart_postgresql() {\n echo \"Starting PostgreSQL...\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/pg_ctl -D $DATA_DIR -l\n$LOGFILE -w start\"\n sleep 5 # Give the server a moment to start up\n if ! su -m postgres -c \"$POSTGRES_PREFIX/bin/pg_isready -q\"; then\n check_log_file\n error_exit \"Failed to start PostgreSQL.\"\n fi\n echo \"PostgreSQL started successfully.\"\n}\n\ncheck_log_file() {\n echo \"Checking PostgreSQL log file for errors...\"\n if [ -f \"$LOGFILE\" ]; then\n tail -n 50 \"$LOGFILE\"\n else\n echo \"Log file not found at $LOGFILE\"\n fi\n}\n\nverify_custom_options() {\n echo \"Verifying custom build options...\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/psql -d postgres -c \\\"SHOW\nblock_size;\\\"\" || warning_message \"Failed to verify block size.\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/psql -d postgres -c \\\"SHOW\nsegment_size;\\\"\" || warning_message \"Failed to verify segment size.\"\n echo \"Checking PostgreSQL version and compile-time options:\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/postgres -V\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/pg_config --configure\"\n\n echo \"Verifying pg_trgm extension installation:\"\n su -m postgres -c \"$POSTGRES_PREFIX/bin/psql -d postgres -c\n\\\"SELECT * FROM pg_extension WHERE extname = 'pg_trgm';\\\"\" ||\nwarning_message \"Failed to verify pg_trgm extension.\"\n}\n\nstop_postgresql() {\n echo \"Stopping PostgreSQL...\"\n if command -v \"$POSTGRES_PREFIX/bin/pg_ctl\" > /dev/null 2>&1; then\n su -m postgres -c \"$POSTGRES_PREFIX/bin/pg_ctl -D $DATA_DIR\nstop -m fast\" || warning_message \"Failed to stop PostgreSQL.\"\n else\n echo \"pg_ctl command not found; assuming PostgreSQL is not running.\"\n fi\n}\n\nuninstall_postgresql() {\n echo \"Uninstalling PostgreSQL...\"\n stop_postgresql\n if [ -d \"$POSTGRES_PREFIX\" ]; then\n rm -rf \"$POSTGRES_PREFIX\" || warning_message \"Failed to remove\nPostgreSQL directories.\"\n echo \"PostgreSQL uninstalled successfully.\"\n else\n echo \"No PostgreSQL installation detected.\"\n fi\n}\n\nperform_installation() {\n check_prerequisites\n create_postgres_user\n ensure_install_directory\n download_postgresql\n configure_postgresql\n compile_postgresql\n install_postgresql\n setup_environment\n initialize_database\n start_postgresql\n create_extension_pg_trgm\n check_log_file\n verify_custom_options\n echo \"PostgreSQL installed and configured successfully with\npg_trgm extension!\"\n}\n\n# Ensure cleanup happens on script exit\ntrap cleanup EXIT\n\n# Main function\ncase \"$1\" in\n stop)\n stop_postgresql\n ;;\n uninstall)\n uninstall_postgresql\n ;;\n 
install)\n        perform_installation\n    ;;\n    *)\n        echo \"Usage: $0 {install|stop|uninstall}\"\n        exit 1\n        ;;\nesac",
"msg_date": "Thu, 5 Sep 2024 22:06:48 +0530",
"msg_from": "Moksh Chadha <chadhamoksh@gmail.com>",
"msg_from_op": true,
"msg_subject": "Not able to compile PG16 with custom flags in freebsd 14.1 - Please\n Help"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile working on the per backend I/O statistics patch ([1]), I noticed that\nthere is an unnecessary call to TimestampTzGetDatum() in pg_stat_get_io() (\nas the reset_time is already a Datum).\n\nPlease find attached a tiny patch to remove this unnecessary call.\n\n\n[1]: https://www.postgresql.org/message-id/flat/ZtXR%2BCtkEVVE/LHF%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 6 Sep 2024 10:48:25 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove one TimestampTzGetDatum call in pg_stat_get_io()"
},
{
"msg_contents": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com> writes:\n> While working on the per backend I/O statistics patch ([1]), I noticed that\n> there is an unnecessary call to TimestampTzGetDatum() in pg_stat_get_io() (\n> as the reset_time is already a Datum).\n\nHmm, TimestampTzGetDatum is not a no-op on 32-bit machines. If you're\ncorrect about this, why are our 32-bit BF animals not crashing? Are\nwe failing to test this code?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2024 10:38:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove one TimestampTzGetDatum call in pg_stat_get_io()"
},
{
"msg_contents": "I wrote:\n> Hmm, TimestampTzGetDatum is not a no-op on 32-bit machines. If you're\n> correct about this, why are our 32-bit BF animals not crashing? Are\n> we failing to test this code?\n\nOh, I had the polarity backwards: this error doesn't result in trying\nto dereference something that's not a pointer, but rather in\nconstructing an extra indirection layer, with the end result being\nthat the timestamp displayed in the pg_stat_io view is garbage\n(I saw output like \"1999-12-31 19:11:45.880208-05\" while testing in\na 32-bit VM). It's not so surprising that our regression tests are\ninsensitive to the values being displayed there.\n\nI confirm that this fixes the problem. Will push shortly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Sep 2024 11:40:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove one TimestampTzGetDatum call in pg_stat_get_io()"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 06, 2024 at 11:40:56AM -0400, Tom Lane wrote:\n> I wrote:\n> > Hmm, TimestampTzGetDatum is not a no-op on 32-bit machines. If you're\n> > correct about this, why are our 32-bit BF animals not crashing? Are\n> > we failing to test this code?\n> \n> Oh, I had the polarity backwards: this error doesn't result in trying\n> to dereference something that's not a pointer, but rather in\n> constructing an extra indirection layer, with the end result being\n> that the timestamp displayed in the pg_stat_io view is garbage\n> (I saw output like \"1999-12-31 19:11:45.880208-05\" while testing in\n> a 32-bit VM). It's not so surprising that our regression tests are\n> insensitive to the values being displayed there.\n> \n> I confirm that this fixes the problem. Will push shortly.\n\nThanks! Yeah was going to reply that that would display incorrect results on\n32-bit (and not crashing). And since the tests don't check the values then\nwe did not notice.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 6 Sep 2024 15:45:27 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove one TimestampTzGetDatum call in pg_stat_get_io()"
}
] |
[
{
"msg_contents": "Hi,\n\nSince 1bb2558046c, XLogFileRead() doesn't use the emode argument. \nAlso, since abf5c5c9a4f, XLogFileReadAnyTLI() is called just once\nand emode is always DEBUG2. So, I think we can remove the emode\nargument from these functions. I've atached the patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Fri, 6 Sep 2024 20:10:43 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Remove emode argument from XLogFileRead/XLogFileReadAnyTLI"
},
{
"msg_contents": "On Fri, Sep 06, 2024 at 08:10:43PM +0900, Yugo Nagata wrote:\n> Since 1bb2558046c, XLogFileRead() doesn't use the emode argument. \n> Also, since abf5c5c9a4f, XLogFileReadAnyTLI() is called just once\n> and emode is always DEBUG2. So, I think we can remove the emode\n> argument from these functions. I've atached the patch.\n\nIt's true that the last relevant caller of XLogFileReadAnyTLI() that\nrequired an emode is abf5c5c9a4f1, as you say, that's also what I am\ntracking down. Any objections to that?\n--\nMichael",
"msg_date": "Mon, 9 Sep 2024 12:16:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove emode argument from XLogFileRead/XLogFileReadAnyTLI"
},
{
"msg_contents": "On Mon, 9 Sep 2024 12:16:01 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Sep 06, 2024 at 08:10:43PM +0900, Yugo Nagata wrote:\n> > Since 1bb2558046c, XLogFileRead() doesn't use the emode argument. \n> > Also, since abf5c5c9a4f, XLogFileReadAnyTLI() is called just once\n> > and emode is always DEBUG2. So, I think we can remove the emode\n> > argument from these functions. I've atached the patch.\n> \n> It's true that the last relevant caller of XLogFileReadAnyTLI() that\n> required an emode is abf5c5c9a4f1, as you say, that's also what I am\n> tracking down. Any objections to that?\n\nThank you for looking into this.\n\nI mean to remove emode from XLogFileRead, too, but this fix is accidentally\nmissed in the previous patch. I attached the updated patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 9 Sep 2024 17:45:13 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Remove emode argument from XLogFileRead/XLogFileReadAnyTLI"
},
{
"msg_contents": "On Mon, Sep 09, 2024 at 05:45:13PM +0900, Yugo NAGATA wrote:\n> I mean to remove emode from XLogFileRead, too, but this fix is accidentally\n> missed in the previous patch. I attached the updated patch.\n\nThis is neat because we don't need to guess how XLogFileRead() should\nfail on PANIC, allow things with a DEBUG2 or something else, and\nXLogFileReadAnyTLI()'s sole caller used DEBUG2. Applied.\n--\nMichael",
"msg_date": "Tue, 10 Sep 2024 08:45:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove emode argument from XLogFileRead/XLogFileReadAnyTLI"
}
] |
[
{
"msg_contents": "Hello!\n\nWhile working on [1], I have found a small issue with correctness\nof set_indexsafe_procflags usage in ReindexRelationConcurrently introduced\nin [2].\n\n> idx->safe = (indexRel->rd_indexprs == NIL && indexRel->rd_indpred == NIL);\n\nIt is always true because there are no RelationGetIndexExpressions\nand RelationGetIndexPredicate before that check.\n\nTwo patches with reproducer + fix are attached.\n\nThe issue is simple, but I'll register this in commitfest just in case.\n\nBest regards,\nMikhail.\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/CANtu0ogBOtd9ravu1CUbuZWgq6qvn1rny38PGKDPk9zzQPH8_A%40mail.gmail.com#d4be02ff70f3002522f9fadbd165d631\n[2]:\nhttps://github.com/postgres/postgres/commit/f9900df5f94936067e6fa24a9df609863eb08da2",
"msg_date": "Fri, 6 Sep 2024 13:27:12 +0200",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "PATCH: Issue with set_indexsafe_procflags in\n ReindexRelationConcurrently"
},
{
"msg_contents": "> The issue is simple, but I'll register this in commitfest just in case.\n\nhttps://commitfest.postgresql.org/50/5243/\n\n>\n\n> The issue is simple, but I'll register this in commitfest just in case.https://commitfest.postgresql.org/50/5243/",
"msg_date": "Fri, 6 Sep 2024 13:39:54 +0200",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PATCH: Issue with set_indexsafe_procflags in\n ReindexRelationConcurrently"
},
{
"msg_contents": "On Fri, Sep 06, 2024 at 01:27:12PM +0200, Michail Nikolaev wrote:\n> While working on [1], I have found a small issue with correctness\n> of set_indexsafe_procflags usage in ReindexRelationConcurrently introduced\n> in [2].\n> \n> > idx->safe = (indexRel->rd_indexprs == NIL && indexRel->rd_indpred == NIL);\n> \n> It is always true because there are no RelationGetIndexExpressions\n> and RelationGetIndexPredicate before that check.\n> \n> Two patches with reproducer + fix are attached.\n> \n> The issue is simple, but I'll register this in commitfest just in case.\n\nUgh. It means that we've always considered as \"safe\" concurrent\nrebuilds of indexes that have expressions or predicates, but they're\nnot safe at all. Other concurrent jobs should wait for them.\n\nAdding these two calls as you are suggesting is probably a good idea\nanyway to force a correct setup of the flag. Will see about that.\n\nI don't understand why an isolation test is required here if we were\nto add a validity test, because you can cause a failure in the REINDEX\nwith a set of SQLs in a single session. I'm OK to add a test, perhaps\nwith a NOTICE set when the safe flag is true. Anyway, what you have\nis more than enough to prove your point. Thanks for that.\n--\nMichael",
"msg_date": "Mon, 9 Sep 2024 09:02:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: Issue with set_indexsafe_procflags in\n ReindexRelationConcurrently"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nWhile trying to reproduce a recent fairywren (a Windows animal) failure,\nI ran amcheck/amcheck/003_cic_2pc in parallel inside a slowed-down\nVM and came across another issue:\n### Stopping node \"CIC_2PC_test\" using mode fast\n# Running: pg_ctl -D C:\\src\\postgresql\\build/testrun/amcheck_17/003_cic_2pc\\data/t_003_cic_2pc_CIC_2PC_test_data/pgdata \n-m fast stop\nwaiting for server to shut down..... failed\npg_ctl: server does not shut down\n# pg_ctl stop failed: 256\n# Postmaster PID for node \"CIC_2PC_test\" is 6048\n[08:24:52.915](12.792s) Bail out! pg_ctl stop failed\n\nSo \"pg_ctl stop\" failed due to not a timeout, but some other reason.\n\nWith extra logging added, I got:\n### Stopping node \"CIC_2PC_test\" using mode fast\n# Running: pg_ctl -D C:\\src\\postgresql\\build/testrun/amcheck_3/003_cic_2pc\\data/t_003_cic_2pc_CIC_2PC_test_data/pgdata \n-m fast stop\nwaiting for server to shut down......!!!pgkill| GetLastError(): 231\npostmaster (9596) died untimely? res: -1, errno: 22\n failed\n\nThus, CallNamedPipe() in pgkill() returned ERROR_PIPE_BUSY (All pipe\ninstances are busy) and it was handled as an unexpected error.\n(The error code 231 returned 10 times out of 10 failures of this ilk for\nme.)\n\nNoah, what do you think of handling this error in line with handling of\nERROR_BROKEN_PIPE and ERROR_BAD_PIPE (which was done in 0ea1f2a3a)?\n\nI tried the following change:\n switch (GetLastError())\n {\n case ERROR_BROKEN_PIPE:\n case ERROR_BAD_PIPE:\n+ case ERROR_PIPE_BUSY:\nand saw no issues.\n\nThe reason I'd like to bring your attention to the issue (if you don't\nmind), is that it's impossible to understand the reason of such false\nfailure if it happens in the buildfarm/CI.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 7 Sep 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Yet another way for pg_ctl stop to fail on Windows"
},
{
"msg_contents": "On Sat, Sep 07, 2024 at 03:00:00PM +0300, Alexander Lakhin wrote:\n> With extra logging added, I got:\n> ### Stopping node \"CIC_2PC_test\" using mode fast\n> # Running: pg_ctl -D C:\\src\\postgresql\\build/testrun/amcheck_3/003_cic_2pc\\data/t_003_cic_2pc_CIC_2PC_test_data/pgdata\n> -m fast stop\n> waiting for server to shut down......!!!pgkill| GetLastError(): 231\n> postmaster (9596) died untimely? res: -1, errno: 22\n> �failed\n> \n> Thus, CallNamedPipe() in pgkill() returned ERROR_PIPE_BUSY (All pipe\n> instances are busy) and it was handled as an unexpected error.\n> (The error code 231 returned 10 times out of 10 failures of this ilk for\n> me.)\n\nThanks for discovering that.\n\n> Noah, what do you think of handling this error in line with handling of\n> ERROR_BROKEN_PIPE and ERROR_BAD_PIPE (which was done in 0ea1f2a3a)?\n> \n> I tried the following change:\n> ������� switch (GetLastError())\n> ������� {\n> ��������������� case ERROR_BROKEN_PIPE:\n> ��������������� case ERROR_BAD_PIPE:\n> +�������������� case ERROR_PIPE_BUSY:\n> and saw no issues.\n\nThat would be a strict improvement over returning EINVAL like we do today. We\ndo use PIPE_UNLIMITED_INSTANCES, so I expect the causes of ERROR_PIPE_BUSY are\nprocess exit and ENOMEM-like situations. While that change is the best thing\nif the process is exiting, it could silently drop the signal in ENOMEM-like\nsituations. Consider the following alternative. If sig==0, just return 0\nlike you propose, because the process isn't completely gone. Otherwise, sleep\nand retry the signal, like pgwin32_open_handle() retries after certain errors.\nWhat do you think of that?\n\n\n",
"msg_date": "Sat, 7 Sep 2024 11:11:43 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Yet another way for pg_ctl stop to fail on Windows"
},
{
"msg_contents": "07.09.2024 21:11, Noah Misch wrote:\n\n>\n>> Noah, what do you think of handling this error in line with handling of\n>> ERROR_BROKEN_PIPE and ERROR_BAD_PIPE (which was done in 0ea1f2a3a)?\n>>\n>> I tried the following change:\n>> switch (GetLastError())\n>> {\n>> case ERROR_BROKEN_PIPE:\n>> case ERROR_BAD_PIPE:\n>> + case ERROR_PIPE_BUSY:\n>> and saw no issues.\n> That would be a strict improvement over returning EINVAL like we do today. We\n> do use PIPE_UNLIMITED_INSTANCES, so I expect the causes of ERROR_PIPE_BUSY are\n> process exit and ENOMEM-like situations. While that change is the best thing\n> if the process is exiting, it could silently drop the signal in ENOMEM-like\n> situations. Consider the following alternative. If sig==0, just return 0\n> like you propose, because the process isn't completely gone. Otherwise, sleep\n> and retry the signal, like pgwin32_open_handle() retries after certain errors.\n> What do you think of that?\n\nThank you for your attention to the issue!\n\nI agree with your approach. It looks like Microsoft recommends to loop on\nERROR_PIPE_BUSY: [1] (they say \"Calling CallNamedPipe is equivalent to\ncalling the CreateFile ...\" at [2]).\n\nSo if we aim to not only fix \"pg_ctl stop\", but to make pgkill() robust,\nit's the way to go, IMHO. I'm not sure about an infinite loop they show,\nI'd vote for a loop with the same characteristics as in\npgwin32_open_handle().\n\nI've managed to trigger ERROR_PIPE_BUSY with \"pg_ctl reload\", when running\n20 TAP tests (see attached) in parallel (with 20 threads executing\n$node->reload() in each), and with the kill() call inside do_reload()\nrepeated x100 (see the modification patch attached too):\n...\n# Running: pg_ctl -D .../099_test_pgkill\\data/t_099_test_pgkill_node_data/pgdata reload\n### Reloading node \"node\"\n# Running: pg_ctl -D .../099_test_pgkill\\data/t_099_test_pgkill_node_data/pgdata reload\n[13:41:46.850](2.400s) # 18\nserver signaled\nserver signaled\nserver signaled\nserver signaled\nserver signaled\nserver signaled\nserver signaled\nserver signaled\n!!!pgkill| GetLastError(): 231\npg_ctl: could not send reload signal (PID: 3912), iteration: 81, res: -1, errno: 22: Invalid argument\nserver signaled\n[13:41:52.594](5.744s) # 19\n...\n\n[1] https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipe-client\n[2] https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-callnamedpipea\n\nBest regards,\nAlexander",
"msg_date": "Sun, 8 Sep 2024 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Yet another way for pg_ctl stop to fail on Windows"
},
{
"msg_contents": "On Sun, Sep 08, 2024 at 06:00:00PM +0300, Alexander Lakhin wrote:\n> 07.09.2024 21:11, Noah Misch wrote:\n\n> > > Noah, what do you think of handling this error in line with handling of\n> > > ERROR_BROKEN_PIPE and ERROR_BAD_PIPE (which was done in 0ea1f2a3a)?\n> > > \n> > > I tried the following change:\n> > > ������� switch (GetLastError())\n> > > ������� {\n> > > ��������������� case ERROR_BROKEN_PIPE:\n> > > ��������������� case ERROR_BAD_PIPE:\n> > > +�������������� case ERROR_PIPE_BUSY:\n> > > and saw no issues.\n> > That would be a strict improvement over returning EINVAL like we do today. We\n> > do use PIPE_UNLIMITED_INSTANCES, so I expect the causes of ERROR_PIPE_BUSY are\n> > process exit and ENOMEM-like situations. While that change is the best thing\n> > if the process is exiting, it could silently drop the signal in ENOMEM-like\n> > situations. Consider the following alternative. If sig==0, just return 0\n> > like you propose, because the process isn't completely gone. Otherwise, sleep\n> > and retry the signal, like pgwin32_open_handle() retries after certain errors.\n> > What do you think of that?\n\n> I agree with your approach. It looks like Microsoft recommends to loop on\n> ERROR_PIPE_BUSY: [1] (they say \"Calling CallNamedPipe is equivalent to\n> calling the CreateFile ...\" at [2]).\n\nI see Microsoft suggests WaitNamedPipeA() as opposed to just polling.\nWaitNamedPipeA() should be more responsive. Given how rare this has been, it\nlikely doesn't matter whether we use WaitNamedPipeA() or polling. I'd lean\ntoward whichever makes the code simpler, probably polling.\n\n> So if we aim to not only fix \"pg_ctl stop\", but to make pgkill() robust,\n> it's the way to go, IMHO. I'm not sure about an infinite loop they show,\n> I'd vote for a loop with the same characteristics as in\n> pgwin32_open_handle().\n\nI agree with bounding the total time of each kill(), like\npgwin32_open_handle() does for open().\n\n> [1] https://learn.microsoft.com/en-us/windows/win32/ipc/named-pipe-client\n> [2] https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-callnamedpipea\n\n\n",
"msg_date": "Sun, 8 Sep 2024 09:53:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Yet another way for pg_ctl stop to fail on Windows"
}
] |
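A rough standalone sketch of the bounded-retry idea discussed in the thread above. It is not the actual src/port/kill.c code: the pipe-name convention, the 1-second CallNamedPipeA timeout, and the retry count and backoff are illustrative assumptions only.

#include <windows.h>
#include <errno.h>

/*
 * Sketch: deliver one signal byte to a process's signal pipe, retrying a
 * bounded number of times on ERROR_PIPE_BUSY, in the spirit of how
 * pgwin32_open_handle() retries open() after certain errors.
 */
static int
send_signal_with_retry(const char *pipename, BYTE sig)
{
	BYTE		sigret = 0;
	DWORD		bytes = 0;
	int			retries = 0;

	for (;;)
	{
		if (CallNamedPipeA(pipename, &sig, 1, &sigret, 1, &bytes, 1000))
			return 0;			/* signal delivered */

		switch (GetLastError())
		{
			case ERROR_BROKEN_PIPE:
			case ERROR_BAD_PIPE:
				errno = ESRCH;	/* process is gone (or never existed) */
				return -1;
			case ERROR_PIPE_BUSY:
				/* all pipe instances busy: back off and retry for a while */
				if (++retries < 30)
				{
					Sleep(100);
					continue;
				}
				errno = EINVAL;
				return -1;
			default:
				errno = EINVAL;
				return -1;
		}
	}
}

A caller would build the pipe name for the target process (something like \\.\pipe\pgsignal_<pid>; the exact convention is the server's, not this sketch's) and could treat sig == 0 specially, returning success without retrying, as suggested in the thread.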
[
{
"msg_contents": "Some days ago Tom Lane said ...\n\nSELECT events & 4 != 0 AS can_upd, events & 8 != 0 AS can_ins, events & 16\n!= 0 AS can_del FROM\npg_catalog.pg_relation_is_updatable('_pessoa'::regclass, false) t(events);\n\nWell, I didn't find that function on DOCs and then I thought, are there\nother functions which are not documented ? Why ?\nThen if I get all functions from pg_catalog\nselect string_agg(distinct format('\"%s\"',proname),',') from pg_proc where\npronamespace::regnamespace::text = 'pg_catalog'\n\nusing PowerShell declare a variable, on the SGML folder I use ...\n\n$vv = (result of that select)\nforeach ($v in $vv) {if (!((Get-Content . | %{$_ -match $v}) -contains\n$true)) {\nWrite-Host $v}}\n\nI'll get all functions which are on pg_catalog but not on SGML files, so\nare not documented.\n\nExample, elem_contained_by_range is not documented. I know I can use\nselect 2 <@ '[1,3]'::int4range\nBut why is that function not documented ?\nselect elem_contained_by_range(2,'[1,3]'::int4range);\n\nAnd what other functions are cool to use but are not documented ?\n\nregards\nMarcos\n\nSome days ago Tom Lane said ...SELECT events & 4 != 0 AS can_upd, events & 8 != 0 AS can_ins, events & 16 != 0 AS can_del FROM pg_catalog.pg_relation_is_updatable('_pessoa'::regclass, false) t(events);Well, I didn't find that function on DOCs and then I thought, are there other functions which are not documented ? Why ?Then if I get all functions from pg_catalogselect string_agg(distinct format('\"%s\"',proname),',') from pg_proc where pronamespace::regnamespace::text = 'pg_catalog'using PowerShell declare a variable, on the SGML folder I use ...$vv = (result of that select)foreach ($v in $vv) {if (!((Get-Content . | %{$_ -match $v}) -contains $true)) {Write-Host $v}}I'll get all functions which are on pg_catalog but not on SGML files, so are not documented. Example, elem_contained_by_range is not documented. I know I can use select 2 <@ '[1,3]'::int4rangeBut why is that function not documented ?select elem_contained_by_range(2,'[1,3]'::int4range);And what other functions are cool to use but are not documented ?regardsMarcos",
"msg_date": "Sat, 7 Sep 2024 15:58:00 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Undocumented functions"
},
{
"msg_contents": "Hi\n\nso 7. 9. 2024 v 20:58 odesílatel Marcos Pegoraro <marcos@f10.com.br> napsal:\n\n> Some days ago Tom Lane said ...\n>\n> SELECT events & 4 != 0 AS can_upd, events & 8 != 0 AS can_ins, events & 16\n> != 0 AS can_del FROM\n> pg_catalog.pg_relation_is_updatable('_pessoa'::regclass, false) t(events);\n>\n> Well, I didn't find that function on DOCs and then I thought, are there\n> other functions which are not documented ? Why ?\n> Then if I get all functions from pg_catalog\n> select string_agg(distinct format('\"%s\"',proname),',') from pg_proc where\n> pronamespace::regnamespace::text = 'pg_catalog'\n>\n> using PowerShell declare a variable, on the SGML folder I use ...\n>\n> $vv = (result of that select)\n> foreach ($v in $vv) {if (!((Get-Content . | %{$_ -match $v}) -contains\n> $true)) {\n> Write-Host $v}}\n>\n> I'll get all functions which are on pg_catalog but not on SGML files, so\n> are not documented.\n>\n> Example, elem_contained_by_range is not documented. I know I can use\n> select 2 <@ '[1,3]'::int4range\n> But why is that function not documented ?\n> select elem_contained_by_range(2,'[1,3]'::int4range);\n>\n> And what other functions are cool to use but are not documented ?\n>\n\nthere are lot of useful undocumented functions - see queries from\nhttps://github.com/postgres/postgres/blob/master/src/bin/psql/describe.c\n\nI see the main reason for the existence of undocumented functions inside\nPostgres or MSSQL is a missing guarantee of stability or existence.\n\nEverything in the pg_catalog schema can be different with every major\nrelease. pg_relation_is_updatable is part of FDW support, it is not\ndesigned for users.\n\nRegards\n\nPavel\n\n\n\n\n> regards\n> Marcos\n>\n\nHiso 7. 9. 2024 v 20:58 odesílatel Marcos Pegoraro <marcos@f10.com.br> napsal:Some days ago Tom Lane said ...SELECT events & 4 != 0 AS can_upd, events & 8 != 0 AS can_ins, events & 16 != 0 AS can_del FROM pg_catalog.pg_relation_is_updatable('_pessoa'::regclass, false) t(events);Well, I didn't find that function on DOCs and then I thought, are there other functions which are not documented ? Why ?Then if I get all functions from pg_catalogselect string_agg(distinct format('\"%s\"',proname),',') from pg_proc where pronamespace::regnamespace::text = 'pg_catalog'using PowerShell declare a variable, on the SGML folder I use ...$vv = (result of that select)foreach ($v in $vv) {if (!((Get-Content . | %{$_ -match $v}) -contains $true)) {Write-Host $v}}I'll get all functions which are on pg_catalog but not on SGML files, so are not documented. Example, elem_contained_by_range is not documented. I know I can use select 2 <@ '[1,3]'::int4rangeBut why is that function not documented ?select elem_contained_by_range(2,'[1,3]'::int4range);And what other functions are cool to use but are not documented ?there are lot of useful undocumented functions - see queries from https://github.com/postgres/postgres/blob/master/src/bin/psql/describe.cI see the main reason for the existence of undocumented functions inside Postgres or MSSQL is a missing guarantee of stability or existence.Everything in the pg_catalog schema can be different with every major release. pg_relation_is_updatable is part of FDW support, it is not designed for users.RegardsPavel regardsMarcos",
"msg_date": "Sat, 7 Sep 2024 21:51:44 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented functions"
},
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> Example, elem_contained_by_range is not documented. I know I can use\n> select 2 <@ '[1,3]'::int4range\n> But why is that function not documented ?\n\nFunctions that are primarily meant to implement operators are\nnormally not documented separately: we feel it would bloat the\ndocs without adding a lot. There are pg_description entries for\nthem, eg\n\nregression=# \\df+ elem_contained_by_range\n List of functions\n Schema | Name | Result data type | Argument data types | Type | Volatility | Parallel | Owner | Security | Access privileges | Language | Internal name | Description \n------------+-------------------------+------------------+----------------------+------+------------+----------+----------+----------+-------------------+----------+-------------------------+-------------------------------\n pg_catalog | elem_contained_by_range | boolean | anyelement, anyrange | func | immutable | safe | postgres | invoker | | internal | elem_contained_by_range | implementation of <@ operator\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(1 row)\n\nI think pg_relation_is_updatable is primarily meant as support for the\ninformation_schema views, which may explain why it's not in the docs\neither. There's less of a formal policy about functions underlying\nsystem views, but the majority of them probably aren't documented.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Sep 2024 16:18:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented functions"
},
{
"msg_contents": "Em sáb., 7 de set. de 2024 às 17:18, Tom Lane <tgl@sss.pgh.pa.us> escreveu\nFunctions that are primarily meant to implement operators are normally not\ndocumented separately: we feel it would bloat the\ndocs without adding a lot\n\nThose two functions, elem_contained_by_range and pg_relation_is_updatable\nwere only examples of hundreds of functions which are not documented.\nThe real question here is not for the ones that are used internally because\noperators or types need them, I'm talking about those ones which does not\nhave a way to replace it ?\n\npg_get_shmem_allocations is cool and is not mentioned on DOC\npg_is_in_backup was mentioned until version 12, then removed. Why, it´s not\nused anymore.\n\nThis is the question, what functions exist and are useful but are not\ndocumented ?\n\nregards\nMarcos\n\nMarcos\n\nEm sáb., 7 de set. de 2024 às 17:18, Tom Lane <tgl@sss.pgh.pa.us> escreveuFunctions that are primarily meant to implement operators are normally not documented separately: we feel it would bloat thedocs without adding a lotThose two functions, elem_contained_by_range and pg_relation_is_updatable were only examples of hundreds of functions which are not documented. The real question here is not for the ones that are used internally because operators or types need them, I'm talking about those ones which does not have a way to replace it ? pg_get_shmem_allocations is cool and is not mentioned on DOCpg_is_in_backup was mentioned until version 12, then removed. Why, it´s not used anymore.This is the question, what functions exist and are useful but are not documented ?regardsMarcosMarcos",
"msg_date": "Mon, 9 Sep 2024 10:12:13 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Undocumented functions"
}
] |
[
{
"msg_contents": "Hi all, I found that there is a race condition between two global transaction, which may cause instance\r\nrestart failed with error 'could not access status of transaction xxx\",\"Could not open file \"\"pg_xact/xxx\"\": No such file or directory'.\r\n\r\n\r\n The scenery to reproduce the problem is:\r\n 1. gxact1 is doing `FinishPreparedTransaction` and checkpoint\r\n is issued, so gxact1 will generate a 2pc file.\r\n 2. then gxact1 was removed from `TwoPhaseState->prepXacts` and\r\n its state memory was returned to freelist.\r\n 3. but just before gxact1 remove its 2pc file, gxact2 is issued,\r\n gxact2 will reuse the same state memory of gxact1 and will\r\n reset `gxact->ondisk` to false.\r\n 4. gxact1 continue and found that `gxact->ondisk` is false, it won't\r\n remove its 2pc file. This file is orphaned.\r\n\r\n\r\n If gxact1's local xid is not frozen, the startup process will remove\r\nthe orphaned 2pc file. However, if the xid's corresponding clog file is\r\ntruncated by `vacuum`, the startup process will raise error 'could not\r\naccess status of transaction xxx', due to it could not found the\r\ntransaction's status file in dir `pg_xact`.\r\n\r\n\r\n\r\n The potential fix is attached.",
"msg_date": "Sun, 8 Sep 2024 13:01:37 +0800",
"msg_from": "\"=?gb18030?B?x+XHsw==?=\" <drec.wu@foxmail.com>",
"msg_from_op": true,
"msg_subject": "Fix orphaned 2pc file which may casue instance restart failed"
},
{
"msg_contents": "On Sun, Sep 08, 2024 at 01:01:37PM +0800, 清浅 wrote:\n> Hi all, I found that there is a race condition\n> between two global transaction, which may cause instance restart\n> failed with error 'could not access status of transaction\n> xxx\",\"Could not open file \"\"pg_xact/xxx\"\": No such file or\n> directory'.\n> \n> \n> The scenery to reproduce the problem is:\n> 1. gxact1 is doing `FinishPreparedTransaction` and checkpoint\n> is issued, so gxact1 will generate a 2pc file.\n> 2. then gxact1 was removed from `TwoPhaseState->prepXacts` and\n> its state memory was returned to freelist.\n> 3. but just before gxact1 remove its 2pc file, gxact2 is issued,\n> gxact2 will reuse the same state memory of gxact1 and will\n> reset `gxact->ondisk` to false.\n> 4. gxact1 continue and found that `gxact->ondisk` is false, it won't\n> remove its 2pc file. This file is orphaned.\n> \n> If gxact1's local xid is not frozen, the startup process will remove\n> the orphaned 2pc file. However, if the xid's corresponding clog file is\n> truncated by `vacuum`, the startup process will raise error 'could not\n> access status of transaction xxx', due to it could not found the\n> transaction's status file in dir `pg_xact`.\n\nHmm. I've not seen that in the field. Let me check that..\n--\nMichael",
"msg_date": "Wed, 11 Sep 2024 18:21:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix orphaned 2pc file which may casue instance restart failed"
}
] |
[
{
"msg_contents": "Basically, when we are inserting into a leaf relation (or lower level\nrelation of the partitioned relation), we acquire the lock on the leaf\nrelation during parsing time itself whereas parent lock is acquire\nduring generate_partition_qual(). Now concurrently if we try to drop\nthe partition root then it will acquire the lock in reverse order,\ni.e. parent first and then child so this will create a deadlock.\nBelow example reproduce this case.\n\nSetup:\n--------\n\nCREATE TABLE test(a int, b int) partition by range(a);\nCREATE TABLE test1 partition of test for values from (1) to (100000);\n\nTest:\n------\n--Session1:\nINSERT INTO test1 VALUES (1, 4);\n-- let session is lock the relation test1 and make it wait before it\nlocks test (put breakpoint in ExecInitModifyTable)\n\n--Session2:\n-- try to drop the top table which will try to take AccessExclusive\nlock on all partitions\nDROP TABLE test;\n\n--session3\n-- see PG_LOCKS\n-- we can see that session1 has locked locked root table test(16384)\nwaiting on test1(16387) as session1 is holding that lock\n\n locktype | database | relation | pid | mode | granted\n---------------+----------+---------------+-------+---------------------+------------\n relation | 5 | 16387 | 30368 | RowExclusiveLock | t\n relation | 5 | 16387 | 30410 | AccessExclusiveLock | f\n relation | 5 | 16384 | 30410 | AccessExclusiveLock | t\n(11 rows)\n\n\n--Session1, now as soon as you continue in gdb in session 1 it will\nhit the deadlock\nERROR: 40P01: deadlock detected\nDETAIL: Process 30368 waits for AccessShareLock on relation 16384 of\ndatabase 5; blocked by process 30410.\nProcess 30410 waits for AccessExclusiveLock on relation 16387 of\ndatabase 5; blocked by process 30368.\nHINT: See server log for query details.\nLOCATION: DeadLockReport, deadlock.c:1135\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 8 Sep 2024 15:50:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Deadlock due to locking order violation while inserting into a leaf\n relation"
}
] |
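This is not PostgreSQL code, but the hazard described above is the classic lock-ordering cycle: the INSERT path locks child then parent, while DROP locks parent then child. A minimal pthread illustration of the standard remedy (a single global acquisition order) is sketched below; how, or whether, such an ordering could be imposed on the partitioning code is exactly what the report leaves open.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t child_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Both workers take the locks in the same order (parent, then child), so no
 * cycle can form.  If one of them took child first and parent second, the
 * two could block each other exactly as in the deadlock report above.
 */
static void *
worker(void *arg)
{
	pthread_mutex_lock(&parent_lock);
	pthread_mutex_lock(&child_lock);
	printf("%s: holds parent and child locks\n", (const char *) arg);
	pthread_mutex_unlock(&child_lock);
	pthread_mutex_unlock(&parent_lock);
	return NULL;
}

int
main(void)
{
	pthread_t	a, b;

	pthread_create(&a, NULL, worker, "inserter");
	pthread_create(&b, NULL, worker, "dropper");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}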
[
{
"msg_contents": "Dear hackers,\n\nI'm the maintainer of ruby-pg the ruby interface to the PostgreSQL \ndatabase. This binding uses the asynchronous API of libpq by default to \nfacilitate the ruby IO wait and scheduling mechanisms.\n\nThis works well with the vanilla postgresql server, but it leads to \nstarvation with other types of servers using the postgresql wire \nprotocol 3. This is because the current functioning of the libpq async \ninterface depends on a maximum size of SSL records of 8kB.\n\nThe following servers were reported to starve with ruby-pg:\n\n* AWS RDS Aurora Serverless [1]\n* YugabyteDb [2]\n* CockroachDB [3]\n\nThey block infinitely on certain message sizes sent from the backend to \nthe libpq frontend. It is best described in [4]. A repro docker \ncomposition is provided by YugabyteDB at [2].\n\nTo fix this issue the attached patch calls pqReadData() repeatedly in \nPQconsumeInput() until there is no buffered SSL data left to be read. \nAnother solution could be to process buffered SSL read bytes in \nPQisBusy() instead of PQconsumeInput() .\n\nThe synchronous libpq API isn't affected, since it supports arbitrary \nSSL record sizes already. That's why I think that the asynchronous API \nshould also support bigger SSL record sizes.\n\nRegards, Lars\n\n\n[1] https://github.com/ged/ruby-pg/issues/325\n[2] https://github.com/ged/ruby-pg/issues/588\n[3] https://github.com/ged/ruby-pg/issues/583\n[4] https://github.com/ged/ruby-pg/issues/325#issuecomment-737561270",
"msg_date": "Sun, 8 Sep 2024 22:07:53 +0200",
"msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>",
"msg_from_op": true,
"msg_subject": "libpq: Process buffered SSL read bytes to support records >8kB on\n async API"
},
{
"msg_contents": "On Sun, Sep 8, 2024 at 1:08 PM Lars Kanis <lars@greiz-reinsdorf.de> wrote:\n> I'm the maintainer of ruby-pg the ruby interface to the PostgreSQL\n> database. This binding uses the asynchronous API of libpq by default to\n> facilitate the ruby IO wait and scheduling mechanisms.\n>\n> This works well with the vanilla postgresql server, but it leads to\n> starvation with other types of servers using the postgresql wire\n> protocol 3. This is because the current functioning of the libpq async\n> interface depends on a maximum size of SSL records of 8kB.\n\nThanks for the report! I wanted evidence that this wasn't a\nruby-pg-specific problem, so I set up a test case with\nPython/psycopg2.\n\nI was able to reproduce a hang when all of the following were true:\n- psycopg2's async mode was enabled\n- the client performs a PQconsumeInput/PQisBusy loop, waiting on\nsocket read events when the connection is busy (I used\npsycopg2.extras.wait_select() for this)\n- the server splits a large message over many large TLS records\n- the server packs the final ReadyForQuery message into the same\nrecord as the split message's final fragment\n\nGory details of the packet sizes, if it's helpful:\n- max TLS record size is 12k, because it made the math easier\n- server sends DataRow of 32006 bytes, followed by DataRow of 806\nbytes, followed by CommandComplete/ReadyForQuery\n- so there are three TLS records on the wire containing\n 1) DataRow 1 fragment 1 (12k bytes)\n 2) DataRow 1 fragment 2 (12k bytes)\n 3) DataRow 1 fragment 3 (7430 bytes) + DataRow 2 (806 bytes)\n + CommandComplete + ReadyForQuery\n\n> To fix this issue the attached patch calls pqReadData() repeatedly in\n> PQconsumeInput() until there is no buffered SSL data left to be read.\n> Another solution could be to process buffered SSL read bytes in\n> PQisBusy() instead of PQconsumeInput() .\n\nI agree that PQconsumeInput() needs to ensure that the transport\nbuffers are all drained. But I'm not sure this is a complete solution;\ndoesn't GSS have the same problem? And are there any other sites that\nneed to make the same guarantee before returning?\n\nI need to switch away from this for a bit. Would you mind adding this\nto the next Commitfest as a placeholder?\n\n https://commitfest.postgresql.org/50/\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 10 Sep 2024 11:49:38 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq: Process buffered SSL read bytes to support records >8kB on\n async API"
},
{
"msg_contents": "Thank you Jacob for verifying this issue!\n\n> Gory details of the packet sizes, if it's helpful:\n> - max TLS record size is 12k, because it made the math easier\n> - server sends DataRow of 32006 bytes, followed by DataRow of 806\n> bytes, followed by CommandComplete/ReadyForQuery\n> - so there are three TLS records on the wire containing\n> 1) DataRow 1 fragment 1 (12k bytes)\n> 2) DataRow 1 fragment 2 (12k bytes)\n> 3) DataRow 1 fragment 3 (7430 bytes) + DataRow 2 (806 bytes)\n> + CommandComplete + ReadyForQuery\n\nHow did you verify the issue on the server side - with YugabyteDB or \nwith a modified Postgres server? I'd like to verify the GSSAPI part and \nI'm familiar with the Postgres server only.\n\n\n> I agree that PQconsumeInput() needs to ensure that the transport\n> buffers are all drained. But I'm not sure this is a complete solution;\n> doesn't GSS have the same problem? And are there any other sites that\n> need to make the same guarantee before returning?\n\nWhich other sites do you mean? The synchronous transfer already works, \nsince the select() is short-circuit in case of pending bytes: [1]\n\n\n> I need to switch away from this for a bit. Would you mind adding this\n> to the next Commitfest as a placeholder?\n\nNo problem; registered: https://commitfest.postgresql.org/50/5251/\n\n--\n\nRegards, Lars\n\n[1] \nhttps://github.com/postgres/postgres/blob/77761ee5dddc0518235a51c533893e81e5f375b9/src/interfaces/libpq/fe-misc.c#L1070\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 21:08:45 +0200",
"msg_from": "Lars Kanis <lars@greiz-reinsdorf.de>",
"msg_from_op": true,
"msg_subject": "Re: libpq: Process buffered SSL read bytes to support records >8kB on\n async API"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 12:08 PM Lars Kanis <lars@greiz-reinsdorf.de> wrote:\n> How did you verify the issue on the server side - with YugabyteDB or\n> with a modified Postgres server? I'd like to verify the GSSAPI part and\n> I'm familiar with the Postgres server only.\n\nNeither, unfortunately -- I have a protocol testbed that I use for\nthis kind of stuff. I'm happy to share once I get it cleaned up, but\nit's unlikely to help you in this case since I haven't implemented\ngssenc support. Patching the Postgres server itself seems like a good\nway to go.\n\n> > And are there any other sites that\n> > need to make the same guarantee before returning?\n>\n> Which other sites do you mean?\n\nI'm mostly worried that other parts of libpq might assume that a\nsingle call to pqReadData will drain the buffers. If not, great! --\nbut I haven't had time to check all the call sites.\n\n> > I need to switch away from this for a bit. Would you mind adding this\n> > to the next Commitfest as a placeholder?\n>\n> No problem; registered: https://commitfest.postgresql.org/50/5251/\n\nThank you!\n\n--Jacob\n\n\n",
"msg_date": "Wed, 11 Sep 2024 16:00:25 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: libpq: Process buffered SSL read bytes to support records >8kB on\n async API"
}
] |
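For readers unfamiliar with the API pattern under discussion, here is a minimal C client using the same kind of PQconsumeInput()/PQisBusy() loop that ruby-pg and psycopg2 rely on; the connection string and query are placeholders. The hang arises when PQconsumeInput() leaves decrypted TLS bytes buffered: PQisBusy() keeps returning true, but the socket never becomes readable again, so the select() below waits forever.

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");	/* connection parameters taken from the environment */
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	if (!PQsendQuery(conn, "SELECT repeat('x', 100000)"))
	{
		fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
		return 1;
	}

	/* Wait for readability, feed libpq, and repeat until the reply is complete. */
	while (PQisBusy(conn))
	{
		int			sock = PQsocket(conn);
		fd_set		readfds;

		FD_ZERO(&readfds);
		FD_SET(sock, &readfds);
		if (select(sock + 1, &readfds, NULL, NULL, NULL) < 0)
			return 1;
		if (!PQconsumeInput(conn))	/* must drain everything buffered by the transport */
			return 1;
	}

	while ((res = PQgetResult(conn)) != NULL)
		PQclear(res);

	PQfinish(conn);
	return 0;
}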
[
{
"msg_contents": "Over in the thread about teaching to_number() to handle Roman\nnumerals [1], it was noted that we currently have precisely zero\ntest coverage for to_char()'s existing Roman-numeral code, except\nin the timestamp code path which shares nothing with the numeric\ncode path.\n\nIn looking at this, I found that there's also no test coverage\nfor the EEEE, V, or PL format codes. Also, the possibility of\noverflow while converting an input value to int in order to\npass it to int_to_roman was ignored. Attached is a patch that\nadds more test coverage and cleans up the Roman-numeral code\na little bit.\n\nBTW, I also discovered that there is a little bit of support\nfor a \"B\" format code: we can parse it, but then we ignore it.\nAnd it's not documented. Oracle defines this code as a flag\nthat:\n\n\tReturns blanks for the integer part of a fixed-point number\n\twhen the integer part is zero (regardless of zeros in the\n\tformat model).\n\nIt doesn't seem super hard to implement that, but given the\ncomplete lack of complaints about it being missing, maybe we\nshould just rip out the incomplete support instead?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CAMWA6ybh4M1VQqpmnu2tfSwO%2B3gAPeA8YKnMHVADeB%3DXDEvT_A%40mail.gmail.com",
"msg_date": "Sun, 08 Sep 2024 17:32:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Test improvements and minor code fixes for formatting.c."
},
{
"msg_contents": "I wrote:\n> [ v1-formatting-test-improvements.patch ]\n\nMeh ... cfbot points out I did the float-to-int conversions wrong.\nv2 attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 Sep 2024 18:39:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Test improvements and minor code fixes for formatting.c."
},
{
"msg_contents": "Hi Tom,\n\n> Meh ... cfbot points out I did the float-to-int conversions wrong.\n> v2 attached.\n\nI'm having difficulties applying the patch. Could you please do `git\nformat-patch` and resend it?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 11 Sep 2024 12:32:50 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Test improvements and minor code fixes for formatting.c."
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 11, 2024 at 2:33 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> I'm having difficulties applying the patch. Could you please do `git\n> format-patch` and resend it?\n>\n\nYes, I guess there is some issue with the patch but somehow I was able to\napply it.\n\nmake installcheck-world -> tested, passed\nDocumentation -> tested, passed\n\nRegards,\nHunaid Sohail\n\nHi,On Wed, Sep 11, 2024 at 2:33 PM Aleksander Alekseev <aleksander@timescale.com> wrote:\nI'm having difficulties applying the patch. Could you please do `git\nformat-patch` and resend it?Yes, I guess there is some issue with the patch but somehow I was able to apply it.make installcheck-world -> tested, passedDocumentation -> tested, passedRegards,Hunaid Sohail",
"msg_date": "Wed, 11 Sep 2024 16:29:52 +0500",
"msg_from": "Hunaid Sohail <hunaidpgml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Test improvements and minor code fixes for formatting.c."
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> I'm having difficulties applying the patch. Could you please do `git\n> format-patch` and resend it?\n\npatch(1) is generally far more forgiving than 'git am'.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Sep 2024 10:01:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Test improvements and minor code fixes for formatting.c."
},
{
"msg_contents": "On Sun, Sep 08, 2024 at 05:32:16PM -0400, Tom Lane wrote:\n> In looking at this, I found that there's also no test coverage\n> for the EEEE, V, or PL format codes. Also, the possibility of\n> overflow while converting an input value to int in order to\n> pass it to int_to_roman was ignored. Attached is a patch that\n> adds more test coverage and cleans up the Roman-numeral code\n> a little bit.\n\nI stared at the patch for a while, and it looks good to me.\n\n> BTW, I also discovered that there is a little bit of support\n> for a \"B\" format code: we can parse it, but then we ignore it.\n> And it's not documented. Oracle defines this code as a flag\n> that:\n> \n> \tReturns blanks for the integer part of a fixed-point number\n> \twhen the integer part is zero (regardless of zeros in the\n> \tformat model).\n> \n> It doesn't seem super hard to implement that, but given the\n> complete lack of complaints about it being missing, maybe we\n> should just rip out the incomplete support instead?\n\nAFAICT it's been like that since it was introduced [0]. I searched the\narchives and couldn't find any discussion about this format code. Given\nthat, I don't have any concerns about removing it unless it causes ERRORs\nfor calls that currently succeed, but even then, it's probably fine. This\nstrikes me as something that might be fun for an aspiring hacker, though.\n\n[0] https://postgr.es/c/b866d2e\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 17 Sep 2024 16:59:37 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Test improvements and minor code fixes for formatting.c."
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sun, Sep 08, 2024 at 05:32:16PM -0400, Tom Lane wrote:\n>> In looking at this, I found that there's also no test coverage\n>> for the EEEE, V, or PL format codes. Also, the possibility of\n>> overflow while converting an input value to int in order to\n>> pass it to int_to_roman was ignored. Attached is a patch that\n>> adds more test coverage and cleans up the Roman-numeral code\n>> a little bit.\n\n> I stared at the patch for a while, and it looks good to me.\n\nPushed, thanks for looking!\n\n>> BTW, I also discovered that there is a little bit of support\n>> for a \"B\" format code: we can parse it, but then we ignore it.\n\n> AFAICT it's been like that since it was introduced [0]. I searched the\n> archives and couldn't find any discussion about this format code. Given\n> that, I don't have any concerns about removing it unless it causes ERRORs\n> for calls that currently succeed, but even then, it's probably fine. This\n> strikes me as something that might be fun for an aspiring hacker, though.\n\nYeah, I left that alone for now. I don't have much interest in\nmaking it work, but perhaps someone else will.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Sep 2024 11:04:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Test improvements and minor code fixes for formatting.c."
}
] |
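As a side note for anyone poking at the Roman-numeral path, a toy standalone conversion with an explicit range guard is sketched below. It is not the formatting.c implementation, and the patch's actual overflow handling may differ; it only illustrates the kind of bounds check needed before handing a value to an int_to_roman-style routine (to_char() fills the field with '#' characters for input outside 1..3999).

#include <stdio.h>
#include <string.h>

static const char *
int_to_roman_sketch(int value, char *buf, size_t buflen)
{
	static const int	vals[] = {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
	static const char *syms[] = {"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"};

	/* Roman numerals can only represent 1..3999; reject everything else. */
	if (value < 1 || value > 3999)
		return NULL;

	buf[0] = '\0';
	for (int i = 0; i < 13; i++)
	{
		while (value >= vals[i])
		{
			strncat(buf, syms[i], buflen - strlen(buf) - 1);
			value -= vals[i];
		}
	}
	return buf;
}

int
main(void)
{
	char		buf[32];

	printf("485  -> %s\n", int_to_roman_sketch(485, buf, sizeof(buf)));		/* CDLXXXV */
	printf("3999 -> %s\n", int_to_roman_sketch(3999, buf, sizeof(buf)));	/* MMMCMXCIX */
	printf("4000 -> %s\n", int_to_roman_sketch(4000, buf, sizeof(buf)) ? buf : "out of range");
	return 0;
}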
[
{
"msg_contents": "In rescanLatestTimeLine(), if a new target timeline is found, expectedTLEs is \nreplaced with newExpectedTLEs that is newly allocated by readTimeLineHistory(),\nand old expectedTLEs is released using list_free_deep().\n\nHowever, if the current timeline is not part of the history of the new timeline, \nthe function returns without using newExpectedTLEs, nor releasing it. I wonder \nthis is a memory leak and it is better to release it, although the affect may\nbe not so much. \n\nI've attached the patch.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>",
"msg_date": "Mon, 9 Sep 2024 10:53:35 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Fix possible memory leak in rescanLatestTimeLine()"
}
] |
[
{
"msg_contents": "While looking at label_sort_with_costsize() due to another issue, I\nnoticed that we build explicit Sort nodes but not explicit Incremental\nSort nodes. I wonder why this is the case. It seems to me that\nIncremental Sorts are preferable in some cases. For example:\n\ncreate table t (a int, b int);\ncreate index on t (a);\n\nset enable_seqscan to off;\n\nexplain (costs off)\nselect * from t t1 join t t2 on t1.a = t2.a and t1.b = t2.b;\n QUERY PLAN\n-------------------------------------------------\n Merge Join\n Merge Cond: ((t1.a = t2.a) AND (t1.b = t2.b))\n -> Sort\n Sort Key: t1.a, t1.b\n -> Index Scan using t_a_idx on t t1\n -> Sort\n Sort Key: t2.a, t2.b\n -> Index Scan using t_a_idx on t t2\n(8 rows)\n\nFor the outer path of a mergejoin, I think we can leverage Incremental\nSort to save effort. For the inner path, though, we cannot do this\nbecause Incremental Sort does not support mark/restore.\n\nIt could be argued that what if a mergejoin with an Incremental Sort\nas the outer path is selected as the inner path of another mergejoin?\nWell, I don't think this is a problem, because mergejoin itself does\nnot support mark/restore either, and we will add a Material node on\ntop of it anyway in this case (see final_cost_mergejoin).\n\nlabel_sort_with_costsize is also called in create_append_plan,\ncreate_merge_append_plan and create_unique_plan. In these cases, we\nmay also consider using Incremental Sort. But I haven't looked into\nthese cases.\n\nAny thoughts?\n\nThanks\nRichard\n\n\n",
"msg_date": "Mon, 9 Sep 2024 17:39:24 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "Hi,\n\nOn 9/9/24 11:39, Richard Guo wrote:\n> While looking at label_sort_with_costsize() due to another issue, I\n> noticed that we build explicit Sort nodes but not explicit Incremental\n> Sort nodes. I wonder why this is the case. It seems to me that\n> Incremental Sorts are preferable in some cases. For example:\n> \n\nI think we intentionally added incremental sort ... incrementally ;-)\n\nI don't recall the reasoning exactly, but I think we realized the\nincremental sort costing can be a bit shaky (and AFAIK we saw a couple\ncases of that reported). So we added it to places where the reasoning\nwas it wouldn't be a problem and the incremental sort would be a clear\nwin, e.g. thanks to the \"cheap startup\" thing.\n\n> create table t (a int, b int);\n> create index on t (a);\n> \n> set enable_seqscan to off;\n> \n> explain (costs off)\n> select * from t t1 join t t2 on t1.a = t2.a and t1.b = t2.b;\n> QUERY PLAN\n> -------------------------------------------------\n> Merge Join\n> Merge Cond: ((t1.a = t2.a) AND (t1.b = t2.b))\n> -> Sort\n> Sort Key: t1.a, t1.b\n> -> Index Scan using t_a_idx on t t1\n> -> Sort\n> Sort Key: t2.a, t2.b\n> -> Index Scan using t_a_idx on t t2\n> (8 rows)\n> \n> For the outer path of a mergejoin, I think we can leverage Incremental\n> Sort to save effort. For the inner path, though, we cannot do this\n> because Incremental Sort does not support mark/restore.\n> \n> It could be argued that what if a mergejoin with an Incremental Sort\n> as the outer path is selected as the inner path of another mergejoin?\n> Well, I don't think this is a problem, because mergejoin itself does\n> not support mark/restore either, and we will add a Material node on\n> top of it anyway in this case (see final_cost_mergejoin).\n> \n> label_sort_with_costsize is also called in create_append_plan,\n> create_merge_append_plan and create_unique_plan. In these cases, we\n> may also consider using Incremental Sort. But I haven't looked into\n> these cases.\n> \n> Any thoughts?\n> \n\nI think one challenge with this case is that create_mergejoin_plan\ncreates these Sort plans explicitly - it's not \"pathified\" so it doesn't\ngo through the usual cost comparison etc. And it certainly is not the\ncase that incremental sort would win always, so we'd need to replicate\nthe cost comparison logic too.\n\nI have not thought about this particular case too much, but how likely\nis it that mergejoin will win for this plan in practice? If we consider\na query that only needs a fraction of rows (say, thanks to LIMIT),\naren't we likely to pick a nested loop (which can do incremental sort\nfor the outer plan)? For joins that need to run to completion it might\nbe a win, but there's also the higher risk of a poor costing.\n\nI'm not saying it's not worth exploring, I'm just trying recall reasons\nwhy it might not work. Also I don't think it's fundamentally impossible\nto do mark/restore for incremental sort - I just haven't tried doing it\nbecause it wasn't necessary. In the worst case we could simply add a\nMaterialize node on top, no?\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Mon, 9 Sep 2024 12:22:50 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 6:22 PM Tomas Vondra <tomas@vondra.me> wrote:\n> I think we intentionally added incremental sort ... incrementally ;-)\n\nHaha, right.\n\n> I think one challenge with this case is that create_mergejoin_plan\n> creates these Sort plans explicitly - it's not \"pathified\" so it doesn't\n> go through the usual cost comparison etc. And it certainly is not the\n> case that incremental sort would win always, so we'd need to replicate\n> the cost comparison logic too.\n>\n> I have not thought about this particular case too much, but how likely\n> is it that mergejoin will win for this plan in practice? If we consider\n> a query that only needs a fraction of rows (say, thanks to LIMIT),\n> aren't we likely to pick a nested loop (which can do incremental sort\n> for the outer plan)? For joins that need to run to completion it might\n> be a win, but there's also the higher risk of a poor costing.\n\nYeah, currently mergejoin path always assumes that full sort would be\nused on top of the outer path and inner path if they are not already\nordered appropriately. So initial_cost_mergejoin directly charges the\ncost of full sort into outer/inner path's cost, without going through\nthe usual cost comparison with incremental sort.\n\nIt seems to me that some parts of our code assume that, for a given\ninput path that is partially ordered, an incremental sort is always\npreferable to a full sort (see commit 4a29eabd1). Am I getting this\ncorrectly? If this is the case, then I think using the following\nouter path for the merge join:\n\n -> Incremental Sort\n Sort Key: t1.a, t1.b\n Presorted Key: t1.a\n -> Index Scan using t1_a_idx on t1\n\n... would be an immediate improvement over the current path, which is:\n\n -> Sort\n Sort Key: t1.a, t1.b\n -> Index Scan using t1_a_idx on t1\n\n\nThe shaky cost estimates for incremental sort that you mentioned are\nindeed a concern. Fortunately we have the enable_incremental_sort GUC\nalready. As in may other parts of the code (such as in the function\nadd_paths_to_grouping_rel), we can always disable incremental sort to\nfall back to a full sort if needed.\n\n> I'm not saying it's not worth exploring, I'm just trying recall reasons\n> why it might not work. Also I don't think it's fundamentally impossible\n> to do mark/restore for incremental sort - I just haven't tried doing it\n> because it wasn't necessary. In the worst case we could simply add a\n> Materialize node on top, no?\n\nYeah, perhaps we could support mark/restore for incremental sort\nsomeday. This would allow us to apply incremental sort to the inner\npath of a merge join too. But if we apply a Material node on top of\nthe inner due to the lack of mark/restore support for incremental\nsort, then we will need to compare two paths:\n\npartially sorted path -> incremental sort -> material\n\nVS.\n\npartially sorted path -> full sort\n\nI think it's hard to tell which is cheaper without a cost comparison,\nwhich we do not have for now.\n\n\nTo help with the discussion, I drafted a WIP patch as attached, which\nchooses an incremental sort over a full sort on the given outer path\nof a mergejoin whenever possible. There is one ensuing plan diff in\nthe regression tests (not included in the patch). It seems that some\nquery in the tests begins to use incremental sort for mergejoin.\n\nThanks\nRichard",
"msg_date": "Fri, 13 Sep 2024 12:04:01 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 6:22 PM Tomas Vondra <tomas@vondra.me> wrote:\n> I have not thought about this particular case too much, but how likely\n> is it that mergejoin will win for this plan in practice? If we consider\n> a query that only needs a fraction of rows (say, thanks to LIMIT),\n> aren't we likely to pick a nested loop (which can do incremental sort\n> for the outer plan)? For joins that need to run to completion it might\n> be a win, but there's also the higher risk of a poor costing.\n\nI think one situation where mergejoin tends to outperform hashjoin and\nnestloop is when ORDER BY clauses are present. For example, for the\nquery below, mergejoin runs much faster than hashjoin and nestloop,\nand mergejoin with incremental sort is even faster than mergejoin with\nfull sort.\n\ncreate table t (a int, b int);\ninsert into t select random(1,100), random(1,100) from\ngenerate_series(1,100000);\n\nanalyze t;\n\n-- on patched\nexplain (analyze, costs off, timing off)\nselect * from (select * from t order by a) t1 join t t2 on t1.a = t2.a\nand t1.b = t2.b order by t1.a, t1.b;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Merge Join (actual rows=1100372 loops=1)\n Merge Cond: ((t.a = t2.a) AND (t.b = t2.b))\n -> Incremental Sort (actual rows=100000 loops=1)\n Sort Key: t.a, t.b\n Presorted Key: t.a\n Full-sort Groups: 100 Sort Method: quicksort Average\nMemory: 26kB Peak Memory: 26kB\n Pre-sorted Groups: 100 Sort Method: quicksort Average\nMemory: 74kB Peak Memory: 74kB\n -> Sort (actual rows=100000 loops=1)\n Sort Key: t.a\n Sort Method: external merge Disk: 1768kB\n -> Seq Scan on t (actual rows=100000 loops=1)\n -> Sort (actual rows=1100367 loops=1)\n Sort Key: t2.a, t2.b\n Sort Method: external sort Disk: 2160kB\n -> Seq Scan on t t2 (actual rows=100000 loops=1)\n Planning Time: 0.912 ms\n Execution Time: 854.502 ms\n(17 rows)\n\n-- disable incremental sort\nset enable_incremental_sort to off;\n\nexplain (analyze, costs off, timing off)\nselect * from (select * from t order by a) t1 join t t2 on t1.a = t2.a\nand t1.b = t2.b order by t1.a, t1.b;\n QUERY PLAN\n--------------------------------------------------------------\n Merge Join (actual rows=1100372 loops=1)\n Merge Cond: ((t.a = t2.a) AND (t.b = t2.b))\n -> Sort (actual rows=100000 loops=1)\n Sort Key: t.a, t.b\n Sort Method: external merge Disk: 1768kB\n -> Sort (actual rows=100000 loops=1)\n Sort Key: t.a\n Sort Method: external merge Disk: 1768kB\n -> Seq Scan on t (actual rows=100000 loops=1)\n -> Sort (actual rows=1100367 loops=1)\n Sort Key: t2.a, t2.b\n Sort Method: external sort Disk: 2160kB\n -> Seq Scan on t t2 (actual rows=100000 loops=1)\n Planning Time: 1.451 ms\n Execution Time: 958.607 ms\n(15 rows)\n\n\n-- with hashjoin\nset enable_mergejoin to off;\n\nexplain (analyze, costs off, timing off)\nselect * from (select * from t order by a) t1 join t t2 on t1.a = t2.a\nand t1.b = t2.b order by t1.a, t1.b;\n QUERY PLAN\n--------------------------------------------------------------------\n Sort (actual rows=1100372 loops=1)\n Sort Key: t.a, t.b\n Sort Method: external merge Disk: 28000kB\n -> Hash Join (actual rows=1100372 loops=1)\n Hash Cond: ((t2.a = t.a) AND (t2.b = t.b))\n -> Seq Scan on t t2 (actual rows=100000 loops=1)\n -> Hash (actual rows=100000 loops=1)\n Buckets: 131072 Batches: 1 Memory Usage: 4931kB\n -> Sort (actual rows=100000 loops=1)\n Sort Key: t.a\n Sort Method: external merge Disk: 1768kB\n -> Seq Scan on t 
(actual rows=100000 loops=1)\n Planning Time: 1.998 ms\n Execution Time: 2469.954 ms\n(14 rows)\n\n-- with nestloop, my small machine seems to run forever.\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 13 Sep 2024 19:18:55 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "\n\nOn 9/13/24 06:04, Richard Guo wrote:\n> On Mon, Sep 9, 2024 at 6:22 PM Tomas Vondra <tomas@vondra.me> wrote:\n>> I think we intentionally added incremental sort ... incrementally ;-)\n> \n> Haha, right.\n> \n>> I think one challenge with this case is that create_mergejoin_plan\n>> creates these Sort plans explicitly - it's not \"pathified\" so it doesn't\n>> go through the usual cost comparison etc. And it certainly is not the\n>> case that incremental sort would win always, so we'd need to replicate\n>> the cost comparison logic too.\n>>\n>> I have not thought about this particular case too much, but how likely\n>> is it that mergejoin will win for this plan in practice? If we consider\n>> a query that only needs a fraction of rows (say, thanks to LIMIT),\n>> aren't we likely to pick a nested loop (which can do incremental sort\n>> for the outer plan)? For joins that need to run to completion it might\n>> be a win, but there's also the higher risk of a poor costing.\n> \n> Yeah, currently mergejoin path always assumes that full sort would be\n> used on top of the outer path and inner path if they are not already\n> ordered appropriately. So initial_cost_mergejoin directly charges the\n> cost of full sort into outer/inner path's cost, without going through\n> the usual cost comparison with incremental sort.\n> \n> It seems to me that some parts of our code assume that, for a given\n> input path that is partially ordered, an incremental sort is always\n> preferable to a full sort (see commit 4a29eabd1). Am I getting this\n> correctly?\n\nI don't think we're making such assumption. I don't know which of the\nmany places modified by 4a29eabd1 you have in mind, but IIRC we always\nconsider both full and incremental sort.\n\n> If this is the case, then I think using the following\n> outer path for the merge join:\n> \n> -> Incremental Sort\n> Sort Key: t1.a, t1.b\n> Presorted Key: t1.a\n> -> Index Scan using t1_a_idx on t1\n> \n> ... would be an immediate improvement over the current path, which is:\n> \n> -> Sort\n> Sort Key: t1.a, t1.b\n> -> Index Scan using t1_a_idx on t1\n> \n> \n> The shaky cost estimates for incremental sort that you mentioned are\n> indeed a concern. Fortunately we have the enable_incremental_sort GUC\n> already. As in may other parts of the code (such as in the function\n> add_paths_to_grouping_rel), we can always disable incremental sort to\n> fall back to a full sort if needed.\n> \n\nI think our goal should be to not rely on the enable_incremental_sort\nGUC as a defense very much. It's a very blunt instrument, that often\nforces users to pick whether they prefer to improve one query while\nharming some other queries. I personally consider these enable_ GUC more\na development tool than something suitable for users.\n\n>> I'm not saying it's not worth exploring, I'm just trying recall reasons\n>> why it might not work. Also I don't think it's fundamentally impossible\n>> to do mark/restore for incremental sort - I just haven't tried doing it\n>> because it wasn't necessary. In the worst case we could simply add a\n>> Materialize node on top, no?\n> \n> Yeah, perhaps we could support mark/restore for incremental sort\n> someday. This would allow us to apply incremental sort to the inner\n> path of a merge join too. 
But if we apply a Material node on top of\n> the inner due to the lack of mark/restore support for incremental\n> sort, then we will need to compare two paths:\n> \n> partially sorted path -> incremental sort -> material\n> \n> VS.\n> \n> partially sorted path -> full sort\n> \n> I think it's hard to tell which is cheaper without a cost comparison,\n> which we do not have for now.\n> \n\nHow is this different from the incremental sort costing we already have?\n\n> \n> To help with the discussion, I drafted a WIP patch as attached, which\n> chooses an incremental sort over a full sort on the given outer path\n> of a mergejoin whenever possible. There is one ensuing plan diff in\n> the regression tests (not included in the patch). It seems that some\n> query in the tests begins to use incremental sort for mergejoin.\n> \n\nI'm not against the patch in principle, but I haven't thought about the\ncosting and risk very much. If I have time I'll try to do some more\nexperiments to see how it behaves, but no promises.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 13 Sep 2024 13:35:35 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On 9/13/24 13:18, Richard Guo wrote:\n> On Mon, Sep 9, 2024 at 6:22 PM Tomas Vondra <tomas@vondra.me> wrote:\n>> I have not thought about this particular case too much, but how likely\n>> is it that mergejoin will win for this plan in practice? If we consider\n>> a query that only needs a fraction of rows (say, thanks to LIMIT),\n>> aren't we likely to pick a nested loop (which can do incremental sort\n>> for the outer plan)? For joins that need to run to completion it might\n>> be a win, but there's also the higher risk of a poor costing.\n> \n> I think one situation where mergejoin tends to outperform hashjoin and\n> nestloop is when ORDER BY clauses are present. For example, for the\n> query below, mergejoin runs much faster than hashjoin and nestloop,\n> and mergejoin with incremental sort is even faster than mergejoin with\n> full sort.\n> \n\nSure, an incremental sort can improve things if things go well, no doubt\nabout that. But how significant can the improvement be, especially if we\nneed to sort everything? In your example it's ~15%, which is nice, and\nmaybe the benefits can be much larger if the incremental sort can do\neverything in memory, without flushing to disk.\n\nBut what about the risks? What will happen if the groups are not this\nuniformly and independently sized, and stuff like that? Maybe it'll be\ncosted well enough, I haven't tried.\n\nI don't recall the exact reasoning for why we didn't add incremental\nsort in various places, I'd have to dig into the old threads, or\nsomething. But I believe thinking about these risks was part of it -\ntrying to handle places where the potential benefits are much larger\nthan the risks.\n\nAs I wrote earlier, it's not my intent to discourage you from working on\nthis. Please do, I'm sure it can be improved.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Fri, 13 Sep 2024 13:51:55 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 7:35 PM Tomas Vondra <tomas@vondra.me> wrote:\n> On 9/13/24 06:04, Richard Guo wrote:\n> > It seems to me that some parts of our code assume that, for a given\n> > input path that is partially ordered, an incremental sort is always\n> > preferable to a full sort (see commit 4a29eabd1). Am I getting this\n> > correctly?\n>\n> I don't think we're making such assumption. I don't know which of the\n> many places modified by 4a29eabd1 you have in mind, but IIRC we always\n> consider both full and incremental sort.\n\nHmm, I don't think it's true that we always consider both full and\nincremental sort on the same path. It was true initially, but that’s\nno longer the case.\n\nI checked the 9 callers of create_incremental_sort_path, and they all\nfollow the logic that if there are presorted keys, only incremental\nsort is considered. To quote a comment from one of the callers:\n\n * ... We've no need to consider both\n * sort and incremental sort on the same path. We assume that\n * incremental sort is always faster when there are presorted\n * keys.\n\nI think this is also explained in the commit message of 4a29eabd1,\nquoted here:\n\n\"\nPreviously we would generally create a sort path on the cheapest input\npath (if that wasn't sorted already) and incremental sort paths on any\npath which had presorted keys. This meant that if the cheapest input path\nwasn't completely sorted but happened to have presorted keys, we would\ncreate a full sort path *and* an incremental sort path on that input path.\nHere we change this logic so that if there are presorted keys, we only\ncreate an incremental sort path, and create sort paths only when a full\nsort is required.\n\"\n\nThanks\nRichard\n\n\n",
"msg_date": "Sat, 14 Sep 2024 11:37:21 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 7:51 PM Tomas Vondra <tomas@vondra.me> wrote:\n> Sure, an incremental sort can improve things if things go well, no doubt\n> about that. But how significant can the improvement be, especially if we\n> need to sort everything? In your example it's ~15%, which is nice, and\n> maybe the benefits can be much larger if the incremental sort can do\n> everything in memory, without flushing to disk.\n>\n> But what about the risks? What will happen if the groups are not this\n> uniformly and independently sized, and stuff like that? Maybe it'll be\n> costed well enough, I haven't tried.\n\nI understand the concern and agree that there is a risk of regression\nif there is a skew in the number of rows per pre-sorted group.\n\nActually there were discussions about this during the work on commit\n4a29eabd1. Please see [1]. I will repeat David's demonstration and\nrerun his query on the current master branch to see what happens.\n\ncreate table ab (a int not null, b int not null);\ninsert into ab select 0,x from generate_Series(1,999000)x union all\nselect x%1000+1,0 from generate_Series(999001,1000000)x;\n\ncreate index on ab (a);\n\nanalyze ab;\n\nSo this is a table with a very large skew: the 0 group has 999000\nrows, and the remaining groups 1-1000 have just 1 row each.\n\n-- on master\nexplain (analyze, timing off) select * from ab order by a,b;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Incremental Sort (cost=2767.38..109344.55 rows=1000000 width=8)\n(actual rows=1000000 loops=1)\n Sort Key: a, b\n Presorted Key: a\n Full-sort Groups: 33 Sort Method: quicksort Average Memory: 26kB\nPeak Memory: 26kB\n Pre-sorted Groups: 1 Sort Method: external merge Average Disk:\n17680kB Peak Disk: 17680kB\n -> Index Scan using ab_a_idx on ab (cost=0.42..22832.42\nrows=1000000 width=8) (actual rows=1000000 loops=1)\n Planning Time: 0.829 ms\n Execution Time: 1028.892 ms\n(8 rows)\n\n-- manually disable incremental sort\nset enable_incremental_sort to off;\nexplain (analyze, timing off) select * from ab order by a,b;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Sort (cost=127757.34..130257.34 rows=1000000 width=8) (actual\nrows=1000000 loops=1)\n Sort Key: a, b\n Sort Method: external merge Disk: 17704kB\n -> Seq Scan on ab (cost=0.00..14425.00 rows=1000000 width=8)\n(actual rows=1000000 loops=1)\n Planning Time: 0.814 ms\n Execution Time: 765.198 ms\n(6 rows)\n\nLook, regression happens on current master!\n\nThis is a question I’ve often pondered: each time we introduce a new\noptimization, there are always potential cases where it could lead to\nregressions. What should we do about this? I kind of agree on\nDavid's option that, as in the commit message of 4a29eabd1:\n\n\"\nThat, of course, as with teaching the planner any new tricks,\nmeans an increased likelihood that the planner will perform an incremental\nsort when it's not the best method. Our standard escape hatch for these\ncases is an enable_* GUC.\n\"\n\nI know ideally we should not rely on these enable_* GUCs, but in\nreality it seems that sometimes we do not have a better solution.\n\n[1] https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com\n\nThanks\nRichard\n\n\n",
"msg_date": "Sat, 14 Sep 2024 11:50:30 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "\n\nOn 9/14/24 05:50, Richard Guo wrote:\n> On Fri, Sep 13, 2024 at 7:51 PM Tomas Vondra <tomas@vondra.me> wrote:\n>> Sure, an incremental sort can improve things if things go well, no doubt\n>> about that. But how significant can the improvement be, especially if we\n>> need to sort everything? In your example it's ~15%, which is nice, and\n>> maybe the benefits can be much larger if the incremental sort can do\n>> everything in memory, without flushing to disk.\n>>\n>> But what about the risks? What will happen if the groups are not this\n>> uniformly and independently sized, and stuff like that? Maybe it'll be\n>> costed well enough, I haven't tried.\n> \n> I understand the concern and agree that there is a risk of regression\n> if there is a skew in the number of rows per pre-sorted group.\n> \n> Actually there were discussions about this during the work on commit\n> 4a29eabd1. Please see [1]. I will repeat David's demonstration and\n> rerun his query on the current master branch to see what happens.\n> \n> create table ab (a int not null, b int not null);\n> insert into ab select 0,x from generate_Series(1,999000)x union all\n> select x%1000+1,0 from generate_Series(999001,1000000)x;\n> \n> create index on ab (a);\n> \n> analyze ab;\n> \n> So this is a table with a very large skew: the 0 group has 999000\n> rows, and the remaining groups 1-1000 have just 1 row each.\n> \n> -- on master\n> explain (analyze, timing off) select * from ab order by a,b;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------\n> Incremental Sort (cost=2767.38..109344.55 rows=1000000 width=8)\n> (actual rows=1000000 loops=1)\n> Sort Key: a, b\n> Presorted Key: a\n> Full-sort Groups: 33 Sort Method: quicksort Average Memory: 26kB\n> Peak Memory: 26kB\n> Pre-sorted Groups: 1 Sort Method: external merge Average Disk:\n> 17680kB Peak Disk: 17680kB\n> -> Index Scan using ab_a_idx on ab (cost=0.42..22832.42\n> rows=1000000 width=8) (actual rows=1000000 loops=1)\n> Planning Time: 0.829 ms\n> Execution Time: 1028.892 ms\n> (8 rows)\n> \n> -- manually disable incremental sort\n> set enable_incremental_sort to off;\n> explain (analyze, timing off) select * from ab order by a,b;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------\n> Sort (cost=127757.34..130257.34 rows=1000000 width=8) (actual\n> rows=1000000 loops=1)\n> Sort Key: a, b\n> Sort Method: external merge Disk: 17704kB\n> -> Seq Scan on ab (cost=0.00..14425.00 rows=1000000 width=8)\n> (actual rows=1000000 loops=1)\n> Planning Time: 0.814 ms\n> Execution Time: 765.198 ms\n> (6 rows)\n> \n> Look, regression happens on current master!\n> > This is a question I’ve often pondered: each time we introduce a new\n> optimization, there are always potential cases where it could lead to\n> regressions. What should we do about this? I kind of agree on\n> David's option that, as in the commit message of 4a29eabd1:\n> \n\nRight, or as Goetz Graefe said \"choice is confusion.\"\n\nThe funny thing is it also matters when the alternative plans are\nintroduced. Had it been at the very beginning, we'd consider the\nbehavior (including choosing sub-optimal plans) the baseline, and it'd\nbe okay-ish. 
But when we're introducing those alternative paths later,\nit's more likely to be seen as a \"regression\" ...\n\n> \"\n> That, of course, as with teaching the planner any new tricks,\n> means an increased likelihood that the planner will perform an incremental\n> sort when it's not the best method. Our standard escape hatch for these\n> cases is an enable_* GUC.\n> \"\n> \n> I know ideally we should not rely on these enable_* GUCs, but in\n> reality it seems that sometimes we do not have a better solution.\n> \n\nRight, the basic truth is there's no way to teach the optimizer to do\nnew stuff without a risk of regression. We simply can't do perfect\ndecisions based on incomplete information (which is what the statistics\nare, intentionally). It is up to us to reason about the risks, and\nideally deal with that later at execution time.\n\nI still don't think we should rely on GUCs too much for this.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Sat, 14 Sep 2024 23:50:09 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On 9/14/24 05:37, Richard Guo wrote:\n> On Fri, Sep 13, 2024 at 7:35 PM Tomas Vondra <tomas@vondra.me> wrote:\n>> On 9/13/24 06:04, Richard Guo wrote:\n>>> It seems to me that some parts of our code assume that, for a given\n>>> input path that is partially ordered, an incremental sort is always\n>>> preferable to a full sort (see commit 4a29eabd1). Am I getting this\n>>> correctly?\n>>\n>> I don't think we're making such assumption. I don't know which of the\n>> many places modified by 4a29eabd1 you have in mind, but IIRC we always\n>> consider both full and incremental sort.\n> \n> Hmm, I don't think it's true that we always consider both full and\n> incremental sort on the same path. It was true initially, but that’s\n> no longer the case.\n> \n> I checked the 9 callers of create_incremental_sort_path, and they all\n> follow the logic that if there are presorted keys, only incremental\n> sort is considered. To quote a comment from one of the callers:\n> \n> * ... We've no need to consider both\n> * sort and incremental sort on the same path. We assume that\n> * incremental sort is always faster when there are presorted\n> * keys.\n> \n> I think this is also explained in the commit message of 4a29eabd1,\n> quoted here:\n> \n> \"\n> Previously we would generally create a sort path on the cheapest input\n> path (if that wasn't sorted already) and incremental sort paths on any\n> path which had presorted keys. This meant that if the cheapest input path\n> wasn't completely sorted but happened to have presorted keys, we would\n> create a full sort path *and* an incremental sort path on that input path.\n> Here we change this logic so that if there are presorted keys, we only\n> create an incremental sort path, and create sort paths only when a full\n> sort is required.\n> \"\n> \n\nHmmm ... I wasn't involved in that discussion and I haven't studied the\nthread now, but this seems a bit weird to me. If the presorted keys have\nlow cardinality, can't the full sort be faster?\n\nMaybe it's not possible with the costing changes (removing the\npenalization), in which case it would be fine to not consider the full\nsort, because we'd just throw it away immediately. But if the full sort\ncould be costed as cheaper, I'd say that might be wrong.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Sun, 15 Sep 2024 00:01:55 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Sun, Sep 15, 2024 at 6:01 AM Tomas Vondra <tomas@vondra.me> wrote:\n> Hmmm ... I wasn't involved in that discussion and I haven't studied the\n> thread now, but this seems a bit weird to me. If the presorted keys have\n> low cardinality, can't the full sort be faster?\n>\n> Maybe it's not possible with the costing changes (removing the\n> penalization), in which case it would be fine to not consider the full\n> sort, because we'd just throw it away immediately. But if the full sort\n> could be costed as cheaper, I'd say that might be wrong.\n\nIIUC, we initially applied a 1.5x pessimism factor to the cost\nestimates of incremental sort in an effort to avoid performance\nregressions in cases with a large skew in the number of rows within\nthe presorted groups. OTOH, this also restricted our ability to\nleverage incremental sort when it would be preferable.\n\nI agree with you that sometimes the definition of 'regression' can\ndepend on when the alternative plans are introduced. Imagine if we\ninitially did not have the 1.5x pessimism factor and introduced it\nlater, it would be treated as a 'regression' because some queries that\ncould benefit from incremental sort would then have to resort to full\nsort.\n\nIn commit 4a29eabd1, we removed the 1.5x pessimism factor to allow\nincremental sort to have a fairer chance at being chosen against a\nfull sort. With this change, the cost model now tends to favor\nincremental sort as being cheaper than full sort in the presence of\npresorted keys, making it unnecessary to consider full sort in such\ncases, because, as you mentioned, we'd just throw the full sort path\naway immediately. So 4a29eabd1 also modified the logic so that if\nthere are presorted keys, we only create an incremental sort path.\nAs for the potential performance regressions caused by these changes,\n4a29eabd1 opted to use enable_incremental_sort as an escape hatch.\n\nI think the same theory applies to mergejoins too. We can leverage\nincremental sort if it is enabled and there are presorted keys,\nassuming that it is always faster than full sort, and use the GUC as\nan escape hatch in the worst case.\n\nSo here is the v2 patch, which is almost the same as v1, but includes\nchanges in test cases and comments, and also includes a commit\nmessage. I'll put it in the commitfest.\n\nThanks\nRichard",
"msg_date": "Fri, 20 Sep 2024 16:48:11 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Fri, 20 Sept 2024 at 20:48, Richard Guo <guofenglinux@gmail.com> wrote:\n> I agree with you that sometimes the definition of 'regression' can\n> depend on when the alternative plans are introduced. Imagine if we\n> initially did not have the 1.5x pessimism factor and introduced it\n> later, it would be treated as a 'regression' because some queries that\n> could benefit from incremental sort would then have to resort to full\n> sort.\n\nI think this is a good way of looking at it. I think it's a bad idea\nwhen introducing new planner/executor abilities to penalise the costs\nfor that ability. It might make the committer sleep better at night\nafter committing some new feature, but it's just not a future-proof\nendeavour. We should aim for our cost models to be as close to the\ntruth as possible. As soon as you start introducing \"just in case\"\npenalties, you're telling lies. Lies don't work out well, especially\nwhen compounded with other lies, which is exactly what you get when\nlayering Paths atop of Paths with just-in-case penalties added at each\nlevel.\n\nI was in this position with Memoize. The problem there is that when we\ndon't have any stats, we assume the n_distinct is 200. Memoize can\nlook quite attractive with such a small n_distinct estimate. I\ninvented SELFLAG_USED_DEFAULT to allow Memoize to steer clear when the\nn_distinct estimate used the hard-coded default. I think that's an ok\nthing to do as otherwise it could work out very badly if Nested Loop\n-> Memoize was used instead of, say a Hash Join on a join problem with\nmillions of rows, all of them with distinct join values.\n\n> So here is the v2 patch, which is almost the same as v1, but includes\n> changes in test cases and comments, and also includes a commit\n> message. I'll put it in the commitfest.\n\nJust looking at the commit message:\n\n> The rationale is based on the assumption that incremental sort is\n> always faster than full sort when there are presorted keys, a premise\n> that has been applied in various parts of the code. This assumption\n> does not always hold, particularly in cases with a large skew in the\n> number of rows within the presorted groups.\n\nMy understanding is that the worst case for incremental sort is the\nsame as sort, i.e. only 1 presorted group, which is the same effort to\nsort. Is there something I'm missing?\n\nDavid\n\n\n",
"msg_date": "Sun, 22 Sep 2024 17:38:23 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Sun, Sep 22, 2024 at 1:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Just looking at the commit message:\n>\n> > The rationale is based on the assumption that incremental sort is\n> > always faster than full sort when there are presorted keys, a premise\n> > that has been applied in various parts of the code. This assumption\n> > does not always hold, particularly in cases with a large skew in the\n> > number of rows within the presorted groups.\n>\n> My understanding is that the worst case for incremental sort is the\n> same as sort, i.e. only 1 presorted group, which is the same effort to\n> sort. Is there something I'm missing?\n\nI was thinking that when there’s a large skew in the number of rows\nper pre-sorted group, incremental sort might underperform full sort.\nAs mentioned in the comments for cost_incremental_sort, it has to\ndetect the sort groups, plus it needs to perform tuplesort_reset after\neach group. However, I doubt these factors would have a substantial\nimpact on the performance of incremental sort. So maybe in the worst\nskewed groups case, incremental sort may still perform similarly to\nfull sort.\n\nAnother less obvious reason is that in cases of skewed groups, we may\nsignificantly underestimate the cost of incremental sort. This could\nresult in choosing a more expensive subpath under the sort. Such as\nthe example in [1], we end up with IndexScan->IncrementalSort rather\nthan SeqScan->FullSort, while the former plan is more expensive to\nexecute. However, this point does not affect this patch, because for\na mergejoin, we only consider outerrel's cheapest-total-cost when we\nassume that an explicit sort atop is needed, i.e., we do not have a\nchance to select which subpath to use in this case.\n\n[1] https://postgr.es/m/CAMbWs49+CXsrgbq0LD1at-5jR=AHHN0YtDy9YvgXAsMfndZe-w@mail.gmail.com\n\nThanks\nRichard\n\n\n",
"msg_date": "Mon, 23 Sep 2024 11:03:37 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On 9/23/24 05:03, Richard Guo wrote:\n> On Sun, Sep 22, 2024 at 1:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> Just looking at the commit message:\n>>\n>>> The rationale is based on the assumption that incremental sort is\n>>> always faster than full sort when there are presorted keys, a premise\n>>> that has been applied in various parts of the code. This assumption\n>>> does not always hold, particularly in cases with a large skew in the\n>>> number of rows within the presorted groups.\n>>\n>> My understanding is that the worst case for incremental sort is the\n>> same as sort, i.e. only 1 presorted group, which is the same effort to\n>> sort. Is there something I'm missing?\n> \n> I was thinking that when there’s a large skew in the number of rows\n> per pre-sorted group, incremental sort might underperform full sort.\n> As mentioned in the comments for cost_incremental_sort, it has to\n> detect the sort groups, plus it needs to perform tuplesort_reset after\n> each group. However, I doubt these factors would have a substantial\n> impact on the performance of incremental sort. So maybe in the worst\n> skewed groups case, incremental sort may still perform similarly to\n> full sort.\n> \n> Another less obvious reason is that in cases of skewed groups, we may\n> significantly underestimate the cost of incremental sort. This could\n> result in choosing a more expensive subpath under the sort. Such as\n> the example in [1], we end up with IndexScan->IncrementalSort rather\n> than SeqScan->FullSort, while the former plan is more expensive to\n> execute. However, this point does not affect this patch, because for\n> a mergejoin, we only consider outerrel's cheapest-total-cost when we\n> assume that an explicit sort atop is needed, i.e., we do not have a\n> chance to select which subpath to use in this case.\n> \n> [1] https://postgr.es/m/CAMbWs49+CXsrgbq0LD1at-5jR=AHHN0YtDy9YvgXAsMfndZe-w@mail.gmail.com\n> \n\nYou don't need any skew. Consider this perfectly uniform dataset:\n\ncreate table t (a int, b int);\ninsert into t select 100000 * random(), 100 * random()\n from generate_series(1,1000000) s(i);\ncreate index on t (a);\nvacuum analyze;\ncheckpoint;\n\nexplain analyze select * from t order by a, b;\n\n QUERY PLAN\n-----------------------------------------------------------------\n Sort (cost=127757.34..130257.34 rows=1000000 width=8)\n (actual time=186.288..275.813 rows=1000000 loops=1)\n Sort Key: a, b\n Sort Method: external merge Disk: 17704kB\n -> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=8)\n (actual time=0.005..35.989 rows=1000000 loops=1)\n Planning Time: 0.075 ms\n Execution Time: 301.143 ms\n(6 rows)\n\n\nset enable_incremental_sort = on;\nexplain analyze select * from t order by a, b;\n\n QUERY PLAN\n-----------------------------------------------------------------\n Incremental Sort (cost=1.03..68908.13 rows=1000000 width=8)\n (actual time=0.102..497.143 rows=1000000 loops=1)\n Sort Key: a, b\n Presorted Key: a\n Full-sort Groups: 27039 Sort Method: quicksort\n Average Memory: 25kB Peak Memory: 25kB\n -> Index Scan using t_a_idx on t\n (cost=0.42..37244.25 rows=1000000 width=8)\n (actual time=0.050..376.403 rows=1000000 loops=1)\n Planning Time: 0.057 ms\n Execution Time: 519.301 ms\n(7 rows)\n\nSure, the table is very small, but the example is not crazy. In fact,\nthis is the \"nicest\" example for estimation - it's perfectly random, no\ncorrelations, no skew.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Mon, 23 Sep 2024 16:01:42 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "On Tue, 24 Sept 2024 at 02:01, Tomas Vondra <tomas@vondra.me> wrote:\n>\n> On 9/23/24 05:03, Richard Guo wrote:\n> > On Sun, Sep 22, 2024 at 1:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >> Just looking at the commit message:\n> >>\n> >>> The rationale is based on the assumption that incremental sort is\n> >>> always faster than full sort when there are presorted keys, a premise\n> >>> that has been applied in various parts of the code. This assumption\n> >>> does not always hold, particularly in cases with a large skew in the\n> >>> number of rows within the presorted groups.\n> >>\n> >> My understanding is that the worst case for incremental sort is the\n> >> same as sort, i.e. only 1 presorted group, which is the same effort to\n> >> sort. Is there something I'm missing?\n> >\n> > I was thinking that when there’s a large skew in the number of rows\n> > per pre-sorted group, incremental sort might underperform full sort.\n> > As mentioned in the comments for cost_incremental_sort, it has to\n> > detect the sort groups, plus it needs to perform tuplesort_reset after\n> > each group. However, I doubt these factors would have a substantial\n> > impact on the performance of incremental sort. So maybe in the worst\n> > skewed groups case, incremental sort may still perform similarly to\n> > full sort.\n> >\n> > Another less obvious reason is that in cases of skewed groups, we may\n> > significantly underestimate the cost of incremental sort. This could\n> > result in choosing a more expensive subpath under the sort. Such as\n> > the example in [1], we end up with IndexScan->IncrementalSort rather\n> > than SeqScan->FullSort, while the former plan is more expensive to\n> > execute. However, this point does not affect this patch, because for\n> > a mergejoin, we only consider outerrel's cheapest-total-cost when we\n> > assume that an explicit sort atop is needed, i.e., we do not have a\n> > chance to select which subpath to use in this case.\n> >\n> > [1] https://postgr.es/m/CAMbWs49+CXsrgbq0LD1at-5jR=AHHN0YtDy9YvgXAsMfndZe-w@mail.gmail.com\n> >\n>\n> You don't need any skew. Consider this perfectly uniform dataset:\n>\n> Sort (cost=127757.34..130257.34 rows=1000000 width=8)\n> (actual time=186.288..275.813 rows=1000000 loops=1)\n> Sort Key: a, b\n> Sort Method: external merge Disk: 17704kB\n> -> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=8)\n> (actual time=0.005..35.989 rows=1000000 loops=1)\n> Planning Time: 0.075 ms\n> Execution Time: 301.143 ms\n>\n> set enable_incremental_sort = on;\n\n> Incremental Sort (cost=1.03..68908.13 rows=1000000 width=8)\n> (actual time=0.102..497.143 rows=1000000 loops=1)\n> Sort Key: a, b\n> Presorted Key: a\n> Full-sort Groups: 27039 Sort Method: quicksort\n> Average Memory: 25kB Peak Memory: 25kB\n> -> Index Scan using t_a_idx on t\n> (cost=0.42..37244.25 rows=1000000 width=8)\n> (actual time=0.050..376.403 rows=1000000 loops=1)\n> Planning Time: 0.057 ms\n> Execution Time: 519.301 ms\n\nOk, let's first look at the total Seq Scan cost of the first EXPLAIN.\n14425.00 units and 35.989 milliseconds to execute. That's about 400.81\nunits per millisecond. The Index Scan is only being charged 98.94\nunits per millisecond of execution. If the Index Scan was costed the\nsame per unit as the Seq Scan, the total Index Scan cost would be\n150868 which is already more than the Seq Scan plan without even\nadding the Incremental Sort costs on. 
To me, that seems to be an\ninaccuracy either with the Seq Scan costings coming out too expensive\nor Index Scan coming out too cheap.\n\nIf you think that the Incremental Sort plan shouldn't be chosen\nbecause the Index Scan costing came out too cheap (or the Seq Scan\ncosting was too expensive) then I disagree. Applying some penalty to\none node type because some other node type isn't costed accurately is\njust not a practice we should be doing. Instead, we should be trying\nour best to cost each node type as accurately as possible. If you\nthink there's some inaccuracy with Incremental Sort, then let's look\ninto that. If you want to re-add the penalty because Index Scan\ncosting isn't good, let's see if we can fix Index Scan costings.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 Sep 2024 10:21:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
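To spell out the arithmetic behind the per-millisecond figures above (all inputs come from the two EXPLAIN outputs quoted in that message, rounded):

    Seq Scan rate:    14425.00 units / 35.989 ms   ≈ 400.81 units per ms
    Index Scan rate:  37244.25 units / 376.403 ms  ≈  98.9  units per ms
    Index Scan repriced at the Seq Scan rate:
                      376.403 ms × (14425.00 / 35.989) units/ms ≈ 150868 units

which is where the observation comes from that the repriced Index Scan alone already costs more than the entire Seq Scan plus Sort plan (total cost 130257.34).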
{
"msg_contents": "On Mon, Sep 23, 2024 at 10:01 PM Tomas Vondra <tomas@vondra.me> wrote:\n> You don't need any skew. Consider this perfectly uniform dataset:\n>\n> create table t (a int, b int);\n> insert into t select 100000 * random(), 100 * random()\n> from generate_series(1,1000000) s(i);\n> create index on t (a);\n> vacuum analyze;\n> checkpoint;\n>\n> explain analyze select * from t order by a, b;\n\nI think if we want to compare the performance of incremental sort vs.\nfull sort on this dataset, we need to ensure that other variables are\nkept constant, ie we need to ensure that both methods use the same\nsubpath, whether it's Index Scan or Seq Scan.\n\nThis is especially true in the scenario addressed by this patch, as\nfor a mergejoin, we only consider outerrel's cheapest_total_path when\nwe assume that an explicit sort atop is needed. That is to say, the\nsubpath has already been chosen when it comes to determine whether to\nuse incremental sort or full sort.\n\nAccording to this theory, here is what I got on this same dataset:\n\n-- incremental sort\nexplain analyze select * from t order by a, b;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Incremental Sort (cost=1.02..68838.65 rows=1000000 width=8) (actual\ntime=1.292..1564.265 rows=1000000 loops=1)\n Sort Key: a, b\n Presorted Key: a\n Full-sort Groups: 27022 Sort Method: quicksort Average Memory:\n25kB Peak Memory: 25kB\n -> Index Scan using t_a_idx on t (cost=0.42..37244.20\nrows=1000000 width=8) (actual time=0.408..1018.785 rows=1000000\nloops=1)\n Planning Time: 0.998 ms\n Execution Time: 1602.861 ms\n(7 rows)\n\n\n-- full sort\nset enable_incremental_sort to off;\nset enable_seqscan to off;\nexplain analyze select * from t order by a, b;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=150576.54..153076.54 rows=1000000 width=8) (actual\ntime=1760.090..1927.598 rows=1000000 loops=1)\n Sort Key: a, b\n Sort Method: external merge Disk: 17704kB\n -> Index Scan using t_a_idx on t (cost=0.42..37244.20\nrows=1000000 width=8) (actual time=0.531..1010.931 rows=1000000\nloops=1)\n Planning Time: 0.980 ms\n Execution Time: 1970.287 ms\n(6 rows)\n\nSo incremental sort outperforms full sort on this dataset.\n\nThanks\nRichard\n\n\n",
"msg_date": "Tue, 24 Sep 2024 10:21:28 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
},
{
"msg_contents": "\n\nOn 9/24/24 00:21, David Rowley wrote:\n> On Tue, 24 Sept 2024 at 02:01, Tomas Vondra <tomas@vondra.me> wrote:\n>>\n>> On 9/23/24 05:03, Richard Guo wrote:\n>>> On Sun, Sep 22, 2024 at 1:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>>> Just looking at the commit message:\n>>>>\n>>>>> The rationale is based on the assumption that incremental sort is\n>>>>> always faster than full sort when there are presorted keys, a premise\n>>>>> that has been applied in various parts of the code. This assumption\n>>>>> does not always hold, particularly in cases with a large skew in the\n>>>>> number of rows within the presorted groups.\n>>>>\n>>>> My understanding is that the worst case for incremental sort is the\n>>>> same as sort, i.e. only 1 presorted group, which is the same effort to\n>>>> sort. Is there something I'm missing?\n>>>\n>>> I was thinking that when there’s a large skew in the number of rows\n>>> per pre-sorted group, incremental sort might underperform full sort.\n>>> As mentioned in the comments for cost_incremental_sort, it has to\n>>> detect the sort groups, plus it needs to perform tuplesort_reset after\n>>> each group. However, I doubt these factors would have a substantial\n>>> impact on the performance of incremental sort. So maybe in the worst\n>>> skewed groups case, incremental sort may still perform similarly to\n>>> full sort.\n>>>\n>>> Another less obvious reason is that in cases of skewed groups, we may\n>>> significantly underestimate the cost of incremental sort. This could\n>>> result in choosing a more expensive subpath under the sort. Such as\n>>> the example in [1], we end up with IndexScan->IncrementalSort rather\n>>> than SeqScan->FullSort, while the former plan is more expensive to\n>>> execute. However, this point does not affect this patch, because for\n>>> a mergejoin, we only consider outerrel's cheapest-total-cost when we\n>>> assume that an explicit sort atop is needed, i.e., we do not have a\n>>> chance to select which subpath to use in this case.\n>>>\n>>> [1] https://postgr.es/m/CAMbWs49+CXsrgbq0LD1at-5jR=AHHN0YtDy9YvgXAsMfndZe-w@mail.gmail.com\n>>>\n>>\n>> You don't need any skew. Consider this perfectly uniform dataset:\n>>\n>> Sort (cost=127757.34..130257.34 rows=1000000 width=8)\n>> (actual time=186.288..275.813 rows=1000000 loops=1)\n>> Sort Key: a, b\n>> Sort Method: external merge Disk: 17704kB\n>> -> Seq Scan on t (cost=0.00..14425.00 rows=1000000 width=8)\n>> (actual time=0.005..35.989 rows=1000000 loops=1)\n>> Planning Time: 0.075 ms\n>> Execution Time: 301.143 ms\n>>\n>> set enable_incremental_sort = on;\n> \n>> Incremental Sort (cost=1.03..68908.13 rows=1000000 width=8)\n>> (actual time=0.102..497.143 rows=1000000 loops=1)\n>> Sort Key: a, b\n>> Presorted Key: a\n>> Full-sort Groups: 27039 Sort Method: quicksort\n>> Average Memory: 25kB Peak Memory: 25kB\n>> -> Index Scan using t_a_idx on t\n>> (cost=0.42..37244.25 rows=1000000 width=8)\n>> (actual time=0.050..376.403 rows=1000000 loops=1)\n>> Planning Time: 0.057 ms\n>> Execution Time: 519.301 ms\n> \n> Ok, let's first look at the total Seq Scan cost of the first EXPLAIN.\n> 14425.00 units and 35.989 milliseconds to execute. That's about 400.81\n> units per millisecond. The Index Scan is only being charged 98.94\n> units per millisecond of execution. If the Index Scan was costed the\n> same per unit as the Seq Scan, the total Index Scan cost would be\n> 150868 which is already more than the Seq Scan plan without even\n> adding the Incremental Sort costs on. 
To me, that seems to be an\n> inaccuracy either with the Seq Scan costings coming out too expensive\n> or Index Scan coming out too cheap.\n> \n> If you think that the Incremental Sort plan shouldn't be chosen\n> because the Index Scan costing came out too cheap (or the Seq Scan\n> costing was too expensive) then I disagree. Applying some penalty to\n> one node type because some other node type isn't costed accurately is\n> just not a practice we should be doing. Instead, we should be trying\n> our best to cost each node type as accurately as possible. If you\n> think there's some inaccuracy with Incremental Sort, then let's look\n> into that. If you want to re-add the penalty because Index Scan\n> costing isn't good, let's see if we can fix Index Scan costings.\n> \n\nYou're right, this wasn't a good example. I tried to come up with\nsomething quickly, and I didn't realize the extra time comes from the\nother node in the plan, not the sort :-(\n\nI vaguely remember there were a couple reports about slow queries\ninvolving incremental sort, but I didn't find any that would show\nincremental sort itself being slower. So maybe I'm wrong ...\n\nIIRC the concerns were more about planning - what happens when the\nmulti-column ndistinct estimates (which are quite shaky) are wrong, etc.\nOr how it interacts with LIMIT with hidden correlations, etc.\n\nSure, those are not problems of incremental sort, it just makes it\neasier to hit some of these issues. Of course, that doesn't mean the\n1.5x penalty was a particularly good solution.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n",
"msg_date": "Tue, 24 Sep 2024 11:52:11 +0200",
"msg_from": "Tomas Vondra <tomas@vondra.me>",
"msg_from_op": false,
"msg_subject": "Re: Why don't we consider explicit Incremental Sort?"
}
] |
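The patch under discussion concerns the explicit sort a merge join may need above its outer input. Below is a minimal sketch of that situation, assuming hypothetical tables big and small that do not appear in the thread, so the full-sort versus incremental-sort choice can be observed directly:

    create table big (a int not null, b int not null);
    create table small (a int not null, b int not null);
    insert into big   select i % 50000, i % 100 from generate_series(1, 1000000) i;
    insert into small select i % 5000,  i % 100 from generate_series(1, 50000) i;
    create index on big (a);
    analyze big, small;

    -- With hash and nested-loop joins disabled, the planner must produce
    -- explicit sorts on (a, b) for the merge join.
    set enable_hashjoin = off;
    set enable_nestloop = off;
    explain (costs off) select * from big join small using (a, b);

    -- The escape hatch discussed in the thread, should the incremental
    -- sort lose on a particular data distribution:
    set enable_incremental_sort = off;
    explain (costs off) select * from big join small using (a, b);

Whether the sort above big comes out as a full sort or as an incremental sort on the presorted key a depends on which outer path is cheapest and on the costing behavior discussed in the thread.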
[
{
"msg_contents": "Hi\n\nI try new Fedora 41. Build fails\n\necho 'Name: libpq' >>libpq.pc\necho 'Description: PostgreSQL libpq library' >>libpq.pc\necho 'URL: https://www.postgresql.org/' >>libpq.pc\necho 'Version: 18devel' >>libpq.pc\necho 'Requires: ' >>libpq.pc\necho 'Requires.private: libssl, libcrypto' >>libpq.pc\necho 'Cflags: -I${includedir}' >>libpq.pc\necho 'Libs: -L${libdir} -lpq' >>libpq.pc\necho 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm' >>libpq.pc\nfe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo\nsoubor neexistuje\n 62 | #include <openssl/engine.h>\n | ^~~~~~~~~~~~~~~~~~\ncompilation terminated.\n\nRegards\n\nPavel\n\nHiI try new Fedora 41. Build failsecho 'Name: libpq' >>libpq.pcecho 'Description: PostgreSQL libpq library' >>libpq.pcecho 'URL: https://www.postgresql.org/' >>libpq.pcecho 'Version: 18devel' >>libpq.pcecho 'Requires: ' >>libpq.pcecho 'Requires.private: libssl, libcrypto' >>libpq.pcecho 'Cflags: -I${includedir}' >>libpq.pcecho 'Libs: -L${libdir} -lpq' >>libpq.pcecho 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm' >>libpq.pcfe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo soubor neexistuje 62 | #include <openssl/engine.h> | ^~~~~~~~~~~~~~~~~~compilation terminated.RegardsPavel",
"msg_date": "Mon, 9 Sep 2024 13:45:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "broken build - FC 41"
},
{
"msg_contents": "On Mon, 09 Sep 2024 13:45:50 +0200, Pavel Stehule wrote:\n> \n> Hi\n> \n> I try new Fedora 41. Build fails\n> \n> echo 'Name: libpq' >>libpq.pc\n> echo 'Description: PostgreSQL libpq library' >>libpq.pc\n> echo 'URL: https://www.postgresql.org/' >>libpq.pc\n> echo 'Version: 18devel' >>libpq.pc\n> echo 'Requires: ' >>libpq.pc\n> echo 'Requires.private: libssl, libcrypto' >>libpq.pc\n> echo 'Cflags: -I${includedir}' >>libpq.pc\n> echo 'Libs: -L${libdir} -lpq' >>libpq.pc\n> echo 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm' >>libpq.pc\n> fe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo\n> soubor neexistuje\n> 62 | #include <openssl/engine.h>\n> | ^~~~~~~~~~~~~~~~~~\n> compilation terminated.\n\nI am not a Fedora user but have you installed openssl-devel-engine?\n\n<https://packages.fedoraproject.org/pkgs/openssl/openssl-devel-engine/fedora-41.html#files>\n\n--\nHerbert\n\n\n",
"msg_date": "Mon, 09 Sep 2024 13:57:31 +0200",
"msg_from": "\"Herbert J. Skuhra\" <herbert@gojira.at>",
"msg_from_op": false,
"msg_subject": "Re: broken build - FC 41"
},
{
"msg_contents": "> On 9 Sep 2024, at 13:45, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> echo 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm' >>libpq.pc\n> fe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo soubor neexistuje\n> 62 | #include <openssl/engine.h>\n> | ^~~~~~~~~~~~~~~~~~\n> compilation terminated.\n\nThat implies OPENSSL_NO_ENGINE isn't defined while the engine header is\nmissing, which isn't really a workable combination. Which version of OpenSSL\nis this?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 13:57:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: broken build - FC 41"
},
{
"msg_contents": "po 9. 9. 2024 v 13:58 odesílatel Herbert J. Skuhra <herbert@gojira.at>\nnapsal:\n\n> On Mon, 09 Sep 2024 13:45:50 +0200, Pavel Stehule wrote:\n> >\n> > Hi\n> >\n> > I try new Fedora 41. Build fails\n> >\n> > echo 'Name: libpq' >>libpq.pc\n> > echo 'Description: PostgreSQL libpq library' >>libpq.pc\n> > echo 'URL: https://www.postgresql.org/' >>libpq.pc\n> > echo 'Version: 18devel' >>libpq.pc\n> > echo 'Requires: ' >>libpq.pc\n> > echo 'Requires.private: libssl, libcrypto' >>libpq.pc\n> > echo 'Cflags: -I${includedir}' >>libpq.pc\n> > echo 'Libs: -L${libdir} -lpq' >>libpq.pc\n> > echo 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm'\n> >>libpq.pc\n> > fe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo\n> > soubor neexistuje\n> > 62 | #include <openssl/engine.h>\n> > | ^~~~~~~~~~~~~~~~~~\n> > compilation terminated.\n>\n> I am not a Fedora user but have you installed openssl-devel-engine?\n>\n> <\n> https://packages.fedoraproject.org/pkgs/openssl/openssl-devel-engine/fedora-41.html#files\n> >\n>\n\nIt helps\n\nThank you.\n\n\nPavel\n\n\n>\n> --\n> Herbert\n>\n>\n>\n\npo 9. 9. 2024 v 13:58 odesílatel Herbert J. Skuhra <herbert@gojira.at> napsal:On Mon, 09 Sep 2024 13:45:50 +0200, Pavel Stehule wrote:\n> \n> Hi\n> \n> I try new Fedora 41. Build fails\n> \n> echo 'Name: libpq' >>libpq.pc\n> echo 'Description: PostgreSQL libpq library' >>libpq.pc\n> echo 'URL: https://www.postgresql.org/' >>libpq.pc\n> echo 'Version: 18devel' >>libpq.pc\n> echo 'Requires: ' >>libpq.pc\n> echo 'Requires.private: libssl, libcrypto' >>libpq.pc\n> echo 'Cflags: -I${includedir}' >>libpq.pc\n> echo 'Libs: -L${libdir} -lpq' >>libpq.pc\n> echo 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm' >>libpq.pc\n> fe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo\n> soubor neexistuje\n> 62 | #include <openssl/engine.h>\n> | ^~~~~~~~~~~~~~~~~~\n> compilation terminated.\n\nI am not a Fedora user but have you installed openssl-devel-engine?\n\n<https://packages.fedoraproject.org/pkgs/openssl/openssl-devel-engine/fedora-41.html#files>It helpsThank you.Pavel \n\n--\nHerbert",
"msg_date": "Mon, 9 Sep 2024 15:20:18 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken build - FC 41"
},
{
"msg_contents": "po 9. 9. 2024 v 13:57 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> > On 9 Sep 2024, at 13:45, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > echo 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm'\n> >>libpq.pc\n> > fe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo\n> soubor neexistuje\n> > 62 | #include <openssl/engine.h>\n> > | ^~~~~~~~~~~~~~~~~~\n> > compilation terminated.\n>\n> That implies OPENSSL_NO_ENGINE isn't defined while the engine header is\n> missing, which isn't really a workable combination. Which version of\n> OpenSSL\n> is this?\n>\n\nI needed to install\n\nName : openssl-devel-engine\nEpoch : 1\nVersion : 3.2.2\nRelease : 5.fc41\nArchitecture : x86_64\nDownload size : 44.0 KiB\nInstalled size : 52.8 KiB\nSource : openssl-3.2.2-5.fc41.src.rpm\nRepository : fedora\nSummary : Files for development of applications which will use\nOpenSSL and use deprecated ENGINE API.\nURL : http://www.openssl.org/\nLicense : Apache-2.0\nDescription : OpenSSL is a toolkit for supporting cryptography. The\nopenssl-devel-engine\n : package contains include files needed to develop\napplications which\n : use deprecated OpenSSL ENGINE functionality.\nVendor : Fedora Project\npavel@nemesis:~$ sudo dnf install openssl-devel-engine\nUpdating and loading repositories:\nRepositories loaded.\nPackage\n\nToday I upgraded from FC40 to FC41, and only this library was installed to\nmake the build.\n\nThe question is why the missing header was not detected by configure?\n\nThe description of this package says so the OpenSSL ENGINE is deprecated?\n\nRegards\n\nPavel\n\n\n\n\n\n> --\n> Daniel Gustafsson\n>\n>\n\npo 9. 9. 2024 v 13:57 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:> On 9 Sep 2024, at 13:45, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> echo 'Libs.private: -L/usr/lib64 -lpgcommon -lpgport -lssl -lm' >>libpq.pc\n> fe-secure-openssl.c:62:10: fatal error: openssl/engine.h: Adresář nebo soubor neexistuje\n> 62 | #include <openssl/engine.h>\n> | ^~~~~~~~~~~~~~~~~~\n> compilation terminated.\n\nThat implies OPENSSL_NO_ENGINE isn't defined while the engine header is\nmissing, which isn't really a workable combination. Which version of OpenSSL\nis this?I needed to installName : openssl-devel-engineEpoch : 1Version : 3.2.2Release : 5.fc41Architecture : x86_64Download size : 44.0 KiBInstalled size : 52.8 KiBSource : openssl-3.2.2-5.fc41.src.rpmRepository : fedoraSummary : Files for development of applications which will use OpenSSL and use deprecated ENGINE API.URL : http://www.openssl.org/License : Apache-2.0Description : OpenSSL is a toolkit for supporting cryptography. The openssl-devel-engine : package contains include files needed to develop applications which : use deprecated OpenSSL ENGINE functionality.Vendor : Fedora Projectpavel@nemesis:~$ sudo dnf install openssl-devel-engineUpdating and loading repositories:Repositories loaded.Package Today I upgraded from FC40 to FC41, and only this library was installed to make the build.The question is why the missing header was not detected by configure?The description of this package says so the OpenSSL ENGINE is deprecated?RegardsPavel\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 9 Sep 2024 15:20:21 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken build - FC 41"
},
{
"msg_contents": "> On 9 Sep 2024, at 15:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> The question is why the missing header was not detected by configure?\n\nWe don't test for every 3rd party header we include. If engines were separate\nfrom OpenSSL we'd probably probe for it, but this separation is a packager\ndecision and not one from the OpenSSL project.\n\n> The description of this package says so the OpenSSL ENGINE is deprecated?\n\nOpenSSL deprecated the concept of engines in favor of providers in OpenSSL 3.0,\nbut as is common with OpenSSL they are still around and there is a way to keep\nthem running albeit in a limited fashion.\n\nPostgreSQL still support OpenSSL 1.1.1 where engines aren't deprecated, and I\nexpect we will for some time.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 09:54:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: broken build - FC 41"
},
{
"msg_contents": "Hi\n\nst 11. 9. 2024 v 9:54 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:\n\n> > On 9 Sep 2024, at 15:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> > The question is why the missing header was not detected by configure?\n>\n> We don't test for every 3rd party header we include. If engines were\n> separate\n> from OpenSSL we'd probably probe for it, but this separation is a packager\n> decision and not one from the OpenSSL project.\n>\n> > The description of this package says so the OpenSSL ENGINE is deprecated?\n>\n> OpenSSL deprecated the concept of engines in favor of providers in OpenSSL\n> 3.0,\n> but as is common with OpenSSL they are still around and there is a way to\n> keep\n> them running albeit in a limited fashion.\n>\n> PostgreSQL still support OpenSSL 1.1.1 where engines aren't deprecated,\n> and I\n> expect we will for some time.\n>\n\nok\n\nThank you for the reply\n\nRegards\n\nPavel\n\n>\n> --\n> Daniel Gustafsson\n>\n>\n\nHist 11. 9. 2024 v 9:54 odesílatel Daniel Gustafsson <daniel@yesql.se> napsal:> On 9 Sep 2024, at 15:20, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> The question is why the missing header was not detected by configure?\n\nWe don't test for every 3rd party header we include. If engines were separate\nfrom OpenSSL we'd probably probe for it, but this separation is a packager\ndecision and not one from the OpenSSL project.\n\n> The description of this package says so the OpenSSL ENGINE is deprecated?\n\nOpenSSL deprecated the concept of engines in favor of providers in OpenSSL 3.0,\nbut as is common with OpenSSL they are still around and there is a way to keep\nthem running albeit in a limited fashion.\n\nPostgreSQL still support OpenSSL 1.1.1 where engines aren't deprecated, and I\nexpect we will for some time.okThank you for the replyRegardsPavel \n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 11 Sep 2024 11:42:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken build - FC 41"
}
] |
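To summarize the fix that worked above, for anyone landing here with the same Fedora 41 failure: the ENGINE headers now live in a separate package, so install it and rebuild. The configure flag below is only a reminder; reuse whatever invocation you already had.

    sudo dnf install openssl-devel-engine   # provides openssl/engine.h on Fedora 41
    ./configure --with-ssl=openssl          # or your existing configure options
    make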
[
{
"msg_contents": "I'm the maintainer of the PL/Haskell language extension. (\nhttps://github.com/ed-o-saurus/PLHaskell/)\n\nI want to be able to have a function received and/or return numeric data.\nHowever, I'm having trouble getting data from Datums and building Datums to\nreturn. numeric.h does not contain the macros to do this. They are in\nnumeric.c.\n\nIs there a way to access the values in the numeric structures? If not,\nwould a PR to move macros to numeric.h be welcome in the next commitfest?\n\n -Ed\n\nI'm the maintainer of the PL/Haskell language extension. (https://github.com/ed-o-saurus/PLHaskell/)I want to be able to have a function received and/or return numeric data. However, I'm having trouble getting data from Datums and building Datums to return. numeric.h does not contain the macros to do this. They are in numeric.c. Is there a way to access the values in the numeric structures? If not, would a PR to move macros to numeric.h be welcome in the next commitfest? -Ed",
"msg_date": "Mon, 9 Sep 2024 07:57:59 -0400",
"msg_from": "Ed Behn <ed@behn.us>",
"msg_from_op": true,
"msg_subject": "access numeric data in module"
},
{
"msg_contents": "Ed Behn <ed@behn.us> writes:\n> I want to be able to have a function received and/or return numeric data.\n> However, I'm having trouble getting data from Datums and building Datums to\n> return. numeric.h does not contain the macros to do this. They are in\n> numeric.c.\n\n> Is there a way to access the values in the numeric structures? If not,\n> would a PR to move macros to numeric.h be welcome in the next commitfest?\n\nIt's intentional that that stuff is not exposed, so no.\n\nWhat actual functionality do you need that numeric.h doesn't expose?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2024 10:14:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 10:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ed Behn <ed@behn.us> writes:\n> > I want to be able to have a function received and/or return numeric data.\n> > However, I'm having trouble getting data from Datums and building Datums to\n> > return. numeric.h does not contain the macros to do this. They are in\n> > numeric.c.\n>\n> > Is there a way to access the values in the numeric structures? If not,\n> > would a PR to move macros to numeric.h be welcome in the next commitfest?\n>\n> It's intentional that that stuff is not exposed, so no.\n>\n> What actual functionality do you need that numeric.h doesn't expose?\n\nI don't agree with this reponse at all. It seems entirely reasonable\nfor third-party code to want to have a way to construct and interpret\nnumeric datums. Keeping the details private would MAYBE make sense if\nthe internal details were changing release to release, but that's\nclearly not the case. Even if it were, an extension author is\ncompletely entitled to say \"hey, I'd rather have access to an unstable\nAPI and update my code for new releases\" and we should accommodate\nthat. If we don't, people don't give up on writing the code that they\nwant to write -- they just cut-and-paste private declarations/code\ninto their own source tree, which is WAY worse than if we just put the\nstuff in a .h file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 13:00:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Sep 9, 2024 at 10:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's intentional that that stuff is not exposed, so no.\n>> What actual functionality do you need that numeric.h doesn't expose?\n\n> I don't agree with this reponse at all. It seems entirely reasonable\n> for third-party code to want to have a way to construct and interpret\n> numeric datums. Keeping the details private would MAYBE make sense if\n> the internal details were changing release to release, but that's\n> clearly not the case.\n\nWe have changed numeric's internal representation in the past, and\nI'd like to keep the freedom to do so again. There's been discussion\nfor example of reconsidering the choice of NBASE to make more sense\non 64-bit hardware. Yeah, maintaining on-disk compatibility limits\nwhat we can do there, but not as much as if some external module\nis in bed with the representation.\n\n> Even if it were, an extension author is\n> completely entitled to say \"hey, I'd rather have access to an unstable\n> API and update my code for new releases\" and we should accommodate\n> that. If we don't, people don't give up on writing the code that they\n> want to write -- they just cut-and-paste private declarations/code\n> into their own source tree, which is WAY worse than if we just put the\n> stuff in a .h file.\n\nIMO it'd be a lot better if numeric.c exposed whatever functionality\nEd feels is missing, while keeping the contents of a numeric opaque.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2024 13:25:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> We have changed numeric's internal representation in the past, and\n> I'd like to keep the freedom to do so again. There's been discussion\n> for example of reconsidering the choice of NBASE to make more sense\n> on 64-bit hardware. Yeah, maintaining on-disk compatibility limits\n> what we can do there, but not as much as if some external module\n> is in bed with the representation.\n\nI disagree with the idea that a contrib module looking at the details\nof a Numeric value means we can't make these kinds of updates.\n\n> > Even if it were, an extension author is\n> > completely entitled to say \"hey, I'd rather have access to an unstable\n> > API and update my code for new releases\" and we should accommodate\n> > that. If we don't, people don't give up on writing the code that they\n> > want to write -- they just cut-and-paste private declarations/code\n> > into their own source tree, which is WAY worse than if we just put the\n> > stuff in a .h file.\n>\n> IMO it'd be a lot better if numeric.c exposed whatever functionality\n> Ed feels is missing, while keeping the contents of a numeric opaque.\n\nWe could certainly expose a bunch of functions, but I think that would\nactually be a bigger maintenance burden for us than just exposing some\nof the details that are currently private to numeric.c. It would also\npresumably be less performant, since it means somebody has to call a\nfunction rather than just using a macro.\n\nAlso, this seems to me to be holding the numeric data type to a\ndifferent standard than other things. For numeric, we have\nNumericData, NumericChoice, NumericShort, and NumericLong as structs\nthat define the on-disk representation. They're in numeric.c. But\nArrayType is in array.h. RangeType is in rangetypes.h. MultiRangeType\nis in multirangetypes.h. PATH and POLYGON are in geo_decls.h. inet and\ninet_data are in inet.h. int2vector and oidvector are in c.h (which\nseems like questionable placement, but I digress). And there must be\ntons of third-party code out there that knows how to interpret a text\nor bytea varlena. So it's not like we have some principled\nproject-wide policy of hiding these implementation details. At first\nlook, numeric seems like an outlier.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 13:51:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Sep 9, 2024 at 1:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> IMO it'd be a lot better if numeric.c exposed whatever functionality\n>> Ed feels is missing, while keeping the contents of a numeric opaque.\n\n> We could certainly expose a bunch of functions, but I think that would\n> actually be a bigger maintenance burden for us than just exposing some\n> of the details that are currently private to numeric.c.\n\nThis whole argument is contingent on details that haven't been\nprovided, namely exactly what it is that Ed wants to do that he can't\ndo today. I think we should investigate that before deciding that\npublishing previously-private detail is the best solution.\n\n> Also, this seems to me to be holding the numeric data type to a\n> different standard than other things.\n\nBy that argument, we should move every declaration in every .c file\ninto c.h and be done. I'd personally be happier if we had *not*\nexposed the other data structure details you mention, but that\nship has sailed.\n\nIf we do do what you're advocating, I'd at least insist that the\ndeclarations go into a new file numeric_internal.h, so that it's\nclear to all concerned that they're playing with fire if they\ndepend on that stuff.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2024 14:02:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "On 09/09/24 13:00, Robert Haas wrote:\n> I don't agree with this reponse at all. It seems entirely reasonable\n> for third-party code to want to have a way to construct and interpret\n> numeric datums. Keeping the details private would MAYBE make sense if\n> the internal details were changing release to release, but that's\n> clearly not the case. Even if it were, an extension author is\n> completely entitled to say \"hey, I'd rather have access to an unstable\n> API and update my code for new releases\" and we should accommodate\n> that. If we don't, people don't give up on writing the code that they\n> want to write -- they just cut-and-paste private declarations/code\n> into their own source tree, which is WAY worse than if we just put the\n> stuff in a .h file.\n\nAmen.\n\nhttps://tada.github.io/pljava/preview1.7/pljava-api/apidocs/org.postgresql.pljava/org/postgresql/pljava/adt/Numeric.html\n\nThe above API documentation was written when the PostgreSQL source\ncomments read \"values of NBASE other than 10000 are considered of historical\ninterest only and are no longer supported in any sense\".\nI will have to generalize it a bit more if other NBASEs are now\nto be considered again.\n\nIf Tom prefers the idea of keeping the datum layout strongly encapsulated\n(pretty much uniquely among PG data types) and providing only a callable\nC API for manipulating it, then I might propose something like the above-\nlinked Java API as one source of API ideas.\n\nI think it's worth remembering that most PLs will have their own\nlibraries (sometimes multiple alternatives) for arbitrary-precision numbers,\nand it's hard to generalize about /those/ libraries regarding what API\nthey will provide for most efficiently and faithfully converting a\nforeign representation to or from their own. Conversion through a decimal\nstring (a) may not be most efficient, and (b) may not faithfully roundtrip\npossible combinations of digits, displayScale, and weight.\n\n From Java's perspective, there has historically been a significant JNI\noverhead for calling from Java into a C API, so that it's advantageous\nto know the memory layout and keep the processing in Java. There is\nat last a batteries-included Java foreign-function interface that can\nmake it less costly to call into a C API, but that has only landed in\nJava 22, which not everyone will be using right away.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 9 Sep 2024 14:05:41 -0400",
"msg_from": "Chapman Flack <jcflack@acm.org>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 2:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> By that argument, we should move every declaration in every .c file\n> into c.h and be done. I'd personally be happier if we had *not*\n> exposed the other data structure details you mention, but that\n> ship has sailed.\n\nNot every declaration in every .c file is of general interest, but the\nones that are probably should be moved into .h files. The on-disk\nrepresentation of a commonly-used data type certainly qualifies.\n\nYou can see from Chapman's reply that I'm not making this up: when we\ndon't expose things, it doesn't keep people from depending on them, it\njust makes them copy our code into their own repository. That's not a\nwin. It makes those extensions more fragile, not less, and it makes\nthe PostgreSQL extension ecosystem worse. pg_hint_plan is another,\nrecently-discussed example of this phenomenon: refuse to give people\nthe keys, and they start hot-wiring stuff.\n\n> If we do do what you're advocating, I'd at least insist that the\n> declarations go into a new file numeric_internal.h, so that it's\n> clear to all concerned that they're playing with fire if they\n> depend on that stuff.\n\nI think that's a bit pointless considering that we don't do it in any\nof the other cases. I'd rather be consistent with our usual practice.\nBut if it ends up in a separate header file that's still better than\nthe status quo.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 14:45:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "Sorry for taking so long to respond. I was at my day-job.\n\nAs I mentioned, I am the maintainer of the PL/Haskell language extension.\nThis extension allows users to write code in the Haskell language. In order\nto use numeric types, I will need to create a Haskell type equivalent.\nSomething like\n\ndata Numeric = PosInfinity | NegInfinity | NaN | Number Integer Int16\n\nwhere the Number constructor represents a numeric's mantissa and weight.\n\nIn order to get or return data, I would need to be able to access those\nfields of the numeric type.\n\nI'm not proposing giving access to the actual numeric structure. Rather,\nthe data should be accessed by function call or macro. This would allow\nfuture changes to the inner workings without breaking compatibility as long\nas the interface is maintained. It looks to me like all of the code to\naccess data exists, it should simply be made accessible. An additional\nfunction should exist that allows an extension to create a numeric\nstructure by passing the needed data.\n\n -Ed\n\n\nOn Mon, Sep 9, 2024 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Sep 9, 2024 at 2:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > By that argument, we should move every declaration in every .c file\n> > into c.h and be done. I'd personally be happier if we had *not*\n> > exposed the other data structure details you mention, but that\n> > ship has sailed.\n>\n> Not every declaration in every .c file is of general interest, but the\n> ones that are probably should be moved into .h files. The on-disk\n> representation of a commonly-used data type certainly qualifies.\n>\n> You can see from Chapman's reply that I'm not making this up: when we\n> don't expose things, it doesn't keep people from depending on them, it\n> just makes them copy our code into their own repository. That's not a\n> win. It makes those extensions more fragile, not less, and it makes\n> the PostgreSQL extension ecosystem worse. pg_hint_plan is another,\n> recently-discussed example of this phenomenon: refuse to give people\n> the keys, and they start hot-wiring stuff.\n>\n> > If we do do what you're advocating, I'd at least insist that the\n> > declarations go into a new file numeric_internal.h, so that it's\n> > clear to all concerned that they're playing with fire if they\n> > depend on that stuff.\n>\n> I think that's a bit pointless considering that we don't do it in any\n> of the other cases. I'd rather be consistent with our usual practice.\n> But if it ends up in a separate header file that's still better than\n> the status quo.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nSorry for taking so long to respond. I was at my day-job. As I mentioned, I am the maintainer of the PL/Haskell language extension. This extension allows users to write code in the Haskell language. In order to use numeric types, I will need to create a Haskell type equivalent. Something likedata Numeric = PosInfinity | NegInfinity | NaN | Number Integer Int16where the Number constructor represents a numeric's mantissa and weight. In order to get or return data, I would need to be able to access those fields of the numeric type. I'm not proposing giving access to the actual numeric structure. Rather, the data should be accessed by function call or macro. This would allow future changes to the inner workings without breaking compatibility as long as the interface is maintained. 
It looks to me like all of the code to access data exists, it should simply be made accessible. An additional function should exist that allows an extension to create a numeric structure by passing the needed data. -EdOn Mon, Sep 9, 2024 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Sep 9, 2024 at 2:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> By that argument, we should move every declaration in every .c file\n> into c.h and be done. I'd personally be happier if we had *not*\n> exposed the other data structure details you mention, but that\n> ship has sailed.\n\nNot every declaration in every .c file is of general interest, but the\nones that are probably should be moved into .h files. The on-disk\nrepresentation of a commonly-used data type certainly qualifies.\n\nYou can see from Chapman's reply that I'm not making this up: when we\ndon't expose things, it doesn't keep people from depending on them, it\njust makes them copy our code into their own repository. That's not a\nwin. It makes those extensions more fragile, not less, and it makes\nthe PostgreSQL extension ecosystem worse. pg_hint_plan is another,\nrecently-discussed example of this phenomenon: refuse to give people\nthe keys, and they start hot-wiring stuff.\n\n> If we do do what you're advocating, I'd at least insist that the\n> declarations go into a new file numeric_internal.h, so that it's\n> clear to all concerned that they're playing with fire if they\n> depend on that stuff.\n\nI think that's a bit pointless considering that we don't do it in any\nof the other cases. I'd rather be consistent with our usual practice.\nBut if it ends up in a separate header file that's still better than\nthe status quo.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 9 Sep 2024 20:40:21 -0400",
"msg_from": "Ed Behn <ed@behn.us>",
"msg_from_op": true,
"msg_subject": "Re: access numeric data in module"
},
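Aside: a minimal C sketch of the accessor-style interface Ed describes above. Everything here is invented for illustration; PostgreSQL does not expose such functions today, and the real numeric format carries more state (sign, display scale, short vs. long headers) than this mock-up shows. Only the idea of base-10000 digits plus a weight is taken from how numeric actually stores finite values.

/* Hypothetical accessor-style API; none of these names exist in PostgreSQL. */
#include <stdbool.h>
#include <stdint.h>

typedef struct NumericData *Numeric;    /* opaque, as in utils/numeric.h */

typedef struct NumericParts
{
    bool        is_nan;         /* value is NaN */
    bool        is_pinf;        /* value is +Infinity */
    bool        is_ninf;        /* value is -Infinity */
    int         sign;           /* +1 or -1 for finite values */
    int         weight;         /* weight of the first base-10000 digit */
    int         ndigits;        /* number of base-10000 digits */
    const int16_t *digits;      /* digit array, most significant first */
} NumericParts;

/* Decompose a numeric value without exposing struct NumericData itself. */
extern void numeric_get_parts(Numeric num, NumericParts *out);

/* Build a numeric value from parts, validating the inputs. */
extern Numeric numeric_from_parts(const NumericParts *in);

An interface of roughly this shape would let an extension such as PL/Haskell convert to and from its own representation without ever depending on the struct layout, which is the compatibility argument made in the message above.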
{
"msg_contents": "Good afternoon-\n Was there a resolution of this? I'm wondering if it is worth it for me\nto submit a PR for the next commitfest.\n\n -Ed\n\nOn Mon, Sep 9, 2024 at 8:40 PM Ed Behn <ed@behn.us> wrote:\n\n> Sorry for taking so long to respond. I was at my day-job.\n>\n> As I mentioned, I am the maintainer of the PL/Haskell language extension.\n> This extension allows users to write code in the Haskell language. In order\n> to use numeric types, I will need to create a Haskell type equivalent.\n> Something like\n>\n> data Numeric = PosInfinity | NegInfinity | NaN | Number Integer Int16\n>\n> where the Number constructor represents a numeric's mantissa and weight.\n>\n> In order to get or return data, I would need to be able to access those\n> fields of the numeric type.\n>\n> I'm not proposing giving access to the actual numeric structure. Rather,\n> the data should be accessed by function call or macro. This would allow\n> future changes to the inner workings without breaking compatibility as long\n> as the interface is maintained. It looks to me like all of the code to\n> access data exists, it should simply be made accessible. An additional\n> function should exist that allows an extension to create a numeric\n> structure by passing the needed data.\n>\n> -Ed\n>\n>\n> On Mon, Sep 9, 2024 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Mon, Sep 9, 2024 at 2:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > By that argument, we should move every declaration in every .c file\n>> > into c.h and be done. I'd personally be happier if we had *not*\n>> > exposed the other data structure details you mention, but that\n>> > ship has sailed.\n>>\n>> Not every declaration in every .c file is of general interest, but the\n>> ones that are probably should be moved into .h files. The on-disk\n>> representation of a commonly-used data type certainly qualifies.\n>>\n>> You can see from Chapman's reply that I'm not making this up: when we\n>> don't expose things, it doesn't keep people from depending on them, it\n>> just makes them copy our code into their own repository. That's not a\n>> win. It makes those extensions more fragile, not less, and it makes\n>> the PostgreSQL extension ecosystem worse. pg_hint_plan is another,\n>> recently-discussed example of this phenomenon: refuse to give people\n>> the keys, and they start hot-wiring stuff.\n>>\n>> > If we do do what you're advocating, I'd at least insist that the\n>> > declarations go into a new file numeric_internal.h, so that it's\n>> > clear to all concerned that they're playing with fire if they\n>> > depend on that stuff.\n>>\n>> I think that's a bit pointless considering that we don't do it in any\n>> of the other cases. I'd rather be consistent with our usual practice.\n>> But if it ends up in a separate header file that's still better than\n>> the status quo.\n>>\n>> --\n>> Robert Haas\n>> EDB: http://www.enterprisedb.com\n>>\n>\n\nGood afternoon- Was there a resolution of this? I'm wondering if it is worth it for me to submit a PR for the next commitfest. -EdOn Mon, Sep 9, 2024 at 8:40 PM Ed Behn <ed@behn.us> wrote:Sorry for taking so long to respond. I was at my day-job. As I mentioned, I am the maintainer of the PL/Haskell language extension. This extension allows users to write code in the Haskell language. In order to use numeric types, I will need to create a Haskell type equivalent. 
Something likedata Numeric = PosInfinity | NegInfinity | NaN | Number Integer Int16where the Number constructor represents a numeric's mantissa and weight. In order to get or return data, I would need to be able to access those fields of the numeric type. I'm not proposing giving access to the actual numeric structure. Rather, the data should be accessed by function call or macro. This would allow future changes to the inner workings without breaking compatibility as long as the interface is maintained. It looks to me like all of the code to access data exists, it should simply be made accessible. An additional function should exist that allows an extension to create a numeric structure by passing the needed data. -EdOn Mon, Sep 9, 2024 at 2:45 PM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Sep 9, 2024 at 2:02 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> By that argument, we should move every declaration in every .c file\n> into c.h and be done. I'd personally be happier if we had *not*\n> exposed the other data structure details you mention, but that\n> ship has sailed.\n\nNot every declaration in every .c file is of general interest, but the\nones that are probably should be moved into .h files. The on-disk\nrepresentation of a commonly-used data type certainly qualifies.\n\nYou can see from Chapman's reply that I'm not making this up: when we\ndon't expose things, it doesn't keep people from depending on them, it\njust makes them copy our code into their own repository. That's not a\nwin. It makes those extensions more fragile, not less, and it makes\nthe PostgreSQL extension ecosystem worse. pg_hint_plan is another,\nrecently-discussed example of this phenomenon: refuse to give people\nthe keys, and they start hot-wiring stuff.\n\n> If we do do what you're advocating, I'd at least insist that the\n> declarations go into a new file numeric_internal.h, so that it's\n> clear to all concerned that they're playing with fire if they\n> depend on that stuff.\n\nI think that's a bit pointless considering that we don't do it in any\nof the other cases. I'd rather be consistent with our usual practice.\nBut if it ends up in a separate header file that's still better than\nthe status quo.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 14 Sep 2024 14:09:58 -0400",
"msg_from": "Ed Behn <ed@behn.us>",
"msg_from_op": true,
"msg_subject": "Re: access numeric data in module"
},
{
"msg_contents": "On Sat, Sep 14, 2024 at 2:10 PM Ed Behn <ed@behn.us> wrote:\n> Was there a resolution of this? I'm wondering if it is worth it for me to submit a PR for the next commitfest.\n\nWell, it seems like what you want is different than what I want, and\nwhat Tom wants is different from both of us. I'd like there to be a\nway forward here but at the moment I'm not quite sure what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Sep 2024 09:50:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: access numeric data in module"
}
] |
[
{
"msg_contents": "Commit a70e01d430 removed support for OpenSSL 1.0.2 in order to simplify the\ncode by removing the need for finicky initialization of the library. Based on\nour API usage the new minimum version was defined as 1.1.0.\n\nThe patchset in https://commitfest.postgresql.org/49/5025/ which adds support\nfor configuring cipher suites in TLS 1.3 handshakes require an API available in\nOpenSSL 1.1.1 and onwards. With that as motivation I'd like to propose that we\nremove support for OpenSSL 1.1.0 and set the minimum required version to 1.1.1.\nOpenSSL 1.1.0 was EOL in September 2019 and was never an LTS version, so it's\nnot packaged in anything anymore AFAICT and should be very rare in production\nuse in conjunction with an updated postgres. 1.1.1 LTS will be 2 years EOL by\nthe time v18 ships so I doubt this will be all that controversial.\n\nThe attached is the 0001 from the above mentioned patchset for illustration.\nThe removal should happen when pushing the rest of the patchset.\n\nDoes anyone see any reason not to go to 1.1.1 as the minimum?\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 9 Sep 2024 14:22:19 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Retire support for OpenSSL 1.1.1 due to raised API requirements"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> The patchset in https://commitfest.postgresql.org/49/5025/ which adds support\n> for configuring cipher suites in TLS 1.3 handshakes require an API available in\n> OpenSSL 1.1.1 and onwards. With that as motivation I'd like to propose that we\n> remove support for OpenSSL 1.1.0 and set the minimum required version to 1.1.1.\n> OpenSSL 1.1.0 was EOL in September 2019 and was never an LTS version, so it's\n> not packaged in anything anymore AFAICT and should be very rare in production\n> use in conjunction with an updated postgres. 1.1.1 LTS will be 2 years EOL by\n> the time v18 ships so I doubt this will be all that controversial.\n\nYeah ... the alternative would be to conditionally compile the new\nfunctionality. That doesn't seem like a productive use of developer\ntime if it's supporting just one version that should be extinct in\nthe wild by now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2024 10:48:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Retire support for OpenSSL 1.1.1 due to raised API requirements"
},
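Aside: a rough sketch of the kind of version guard that keeping 1.1.0 support would force around the new cipher-suite code, i.e. the conditional compilation alternative mentioned above. SSL_CTX_set_ciphersuites() and OPENSSL_VERSION_NUMBER are real OpenSSL interfaces (the former appearing in 1.1.1); the wrapper function itself is only an illustration.

#include <openssl/opensslv.h>
#include <openssl/ssl.h>

/*
 * Apply a TLSv1.3 cipher-suite list only where the API exists.  Raising the
 * minimum OpenSSL version to 1.1.1 makes guards like this unnecessary.
 */
static int
apply_tls13_ciphersuites(SSL_CTX *ctx, const char *suites)
{
#if OPENSSL_VERSION_NUMBER >= 0x10101000L
	return SSL_CTX_set_ciphersuites(ctx, suites);	/* returns 1 on success */
#else
	/* OpenSSL 1.1.0: no TLSv1.3 cipher-suite API, nothing to configure. */
	(void) ctx;
	(void) suites;
	return 1;
#endif
}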
{
"msg_contents": "> On 9 Sep 2024, at 16:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> The patchset in https://commitfest.postgresql.org/49/5025/ which adds support\n>> for configuring cipher suites in TLS 1.3 handshakes require an API available in\n>> OpenSSL 1.1.1 and onwards. With that as motivation I'd like to propose that we\n>> remove support for OpenSSL 1.1.0 and set the minimum required version to 1.1.1.\n>> OpenSSL 1.1.0 was EOL in September 2019 and was never an LTS version, so it's\n>> not packaged in anything anymore AFAICT and should be very rare in production\n>> use in conjunction with an updated postgres. 1.1.1 LTS will be 2 years EOL by\n>> the time v18 ships so I doubt this will be all that controversial.\n> \n> Yeah ... the alternative would be to conditionally compile the new\n> functionality. That doesn't seem like a productive use of developer\n> time if it's supporting just one version that should be extinct in\n> the wild by now.\n\nAgreed. OpenSSL 1.1.1 is very different story and I suspect we'll be stuck on\nthat level for some time, but 1.1.0 is gone from production use.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 23:29:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Retire support for OpenSSL 1.1.1 due to raised API requirements"
},
{
"msg_contents": "On Mon, Sep 09, 2024 at 11:29:09PM +0200, Daniel Gustafsson wrote:\n> Agreed. OpenSSL 1.1.1 is very different story and I suspect we'll be stuck on\n> that level for some time, but 1.1.0 is gone from production use.\n\nThe cleanup induced by the removal of 1.1.0 is minimal. I'm on board\nabout your argument with SSL_CTX_set_ciphersuites() to drop 1.1.0 and\nsimplify the other feature.\n\nI was wondering about HAVE_SSL_CTX_SET_NUM_TICKETS for a few seconds,\nbut morepork that relies on LibreSSL 3.3.2 disagrees with me.\n--\nMichael",
"msg_date": "Tue, 10 Sep 2024 07:53:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Retire support for OpenSSL 1.1.1 due to raised API requirements"
},
{
"msg_contents": "> On 10 Sep 2024, at 00:53, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Sep 09, 2024 at 11:29:09PM +0200, Daniel Gustafsson wrote:\n>> Agreed. OpenSSL 1.1.1 is very different story and I suspect we'll be stuck on\n>> that level for some time, but 1.1.0 is gone from production use.\n> \n> The cleanup induced by the removal of 1.1.0 is minimal. I'm on board\n> about your argument with SSL_CTX_set_ciphersuites() to drop 1.1.0 and\n> simplify the other feature.\n\nYeah, the change to existing code is trivial but avoiding adding a kluge to\nhandle versions without the relevant API will save complexity. Thanks for\nreview.\n\nThis change will be committed together with the TLSv1.3 cipher suite pathcset,\njust wanted to bring it up here and not hide it in another thread.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:44:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Retire support for OpenSSL 1.1.1 due to raised API requirements"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 10:44:42AM +0200, Daniel Gustafsson wrote:\n> This change will be committed together with the TLSv1.3 cipher suite pathcset,\n> just wanted to bring it up here and not hide it in another thread.\n\nAs you wish ;)\n--\nMichael",
"msg_date": "Wed, 11 Sep 2024 07:23:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Retire support for OpenSSL 1.1.1 due to raised API requirements"
}
] |
[
{
"msg_contents": "Hi,\n\nIn v8.2, the RULE privilege for tables was removed, but for backward compatibility,\nGRANT/REVOKE RULE, has_table_privilege(..., 'RULE') etc are still accepted,\nthough they don't perform any actions.\n\nDo we still need to maintain this backward compatibility?\nCould we consider removing the RULE privilege entirely?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 23:36:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Remove old RULE privilege completely"
},
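Aside: a simplified, self-contained sketch of the kind of keyword-to-privilege-bit mapping involved. It is not the actual PostgreSQL code in acl.c, but it shows what the lingering compatibility case amounts to: a keyword that is still accepted and then maps to no rights at all.

#include <stdio.h>
#include <strings.h>

typedef unsigned int AclMode;

#define ACL_NO_RIGHTS	0
#define ACL_INSERT		(1 << 0)
#define ACL_SELECT		(1 << 1)
#define ACL_UPDATE		(1 << 2)
/* ... further privilege bits elided ... */

static AclMode
priv_keyword_to_aclmode(const char *name)
{
	if (strcasecmp(name, "insert") == 0)
		return ACL_INSERT;
	if (strcasecmp(name, "select") == 0)
		return ACL_SELECT;
	if (strcasecmp(name, "update") == 0)
		return ACL_UPDATE;

	/* Pre-removal behaviour: "rule" is still parsed but grants nothing. */
	if (strcasecmp(name, "rule") == 0)
		return ACL_NO_RIGHTS;

	fprintf(stderr, "unrecognized privilege type: \"%s\"\n", name);
	return ACL_NO_RIGHTS;
}

Removing the dead branch means the keyword is rejected like any other unknown privilege name, which is the behaviour change being proposed in this thread.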
{
"msg_contents": "On Mon, Sep 9, 2024 at 10:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> In v8.2, the RULE privilege for tables was removed, but for backward compatibility,\n> GRANT/REVOKE RULE, has_table_privilege(..., 'RULE') etc are still accepted,\n> though they don't perform any actions.\n>\n> Do we still need to maintain this backward compatibility?\n> Could we consider removing the RULE privilege entirely?\n\n8.2 is a long time ago. If it's really been dead since then, I think\nwe should remove it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Sep 2024 12:02:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove old RULE privilege completely"
},
{
"msg_contents": "On 2024/09/10 1:02, Robert Haas wrote:\n> On Mon, Sep 9, 2024 at 10:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> In v8.2, the RULE privilege for tables was removed, but for backward compatibility,\n>> GRANT/REVOKE RULE, has_table_privilege(..., 'RULE') etc are still accepted,\n>> though they don't perform any actions.\n>>\n>> Do we still need to maintain this backward compatibility?\n>> Could we consider removing the RULE privilege entirely?\n> \n> 8.2 is a long time ago. If it's really been dead since then, I think\n> we should remove it.\n\nOk, so, patch attached.\n\nThere was a test to check if has_table_privilege() accepted the keyword RULE.\nThe patch removed it since it's now unnecessary and would only waste cycles\ntesting that has_table_privilege() no longer accepts the keyword.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 10 Sep 2024 02:45:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove old RULE privilege completely"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 02:45:37AM +0900, Fujii Masao wrote:\n> On 2024/09/10 1:02, Robert Haas wrote:\n>> On Mon, Sep 9, 2024 at 10:37 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> > In v8.2, the RULE privilege for tables was removed, but for backward compatibility,\n>> > GRANT/REVOKE RULE, has_table_privilege(..., 'RULE') etc are still accepted,\n>> > though they don't perform any actions.\n>> > \n>> > Do we still need to maintain this backward compatibility?\n>> > Could we consider removing the RULE privilege entirely?\n>> \n>> 8.2 is a long time ago. If it's really been dead since then, I think\n>> we should remove it.\n\n+1. It seems more likely to cause confusion at this point.\n\n> Ok, so, patch attached.\n> \n> There was a test to check if has_table_privilege() accepted the keyword RULE.\n> The patch removed it since it's now unnecessary and would only waste cycles\n> testing that has_table_privilege() no longer accepts the keyword.\n\nLGTM\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 9 Sep 2024 14:49:58 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove old RULE privilege completely"
},
{
"msg_contents": "\n\nOn 2024/09/10 4:49, Nathan Bossart wrote:\n>> Ok, so, patch attached.\n>>\n>> There was a test to check if has_table_privilege() accepted the keyword RULE.\n>> The patch removed it since it's now unnecessary and would only waste cycles\n>> testing that has_table_privilege() no longer accepts the keyword.\n> \n> LGTM\n\nThanks for the review! Barring any objections, I'll commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Wed, 11 Sep 2024 23:55:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove old RULE privilege completely"
}
] |
[
{
"msg_contents": "Hi all,\n\npg_utf8_string_len() doesn't check the remaining string length before\ncalling pg_utf8_is_legal(), so there's a possibility of jumping a\ncouple of bytes past the end of the string. (The overread stops there,\nbecause the function won't validate a sequence containing a null\nbyte.)\n\nHere's a quick patch to fix it. I didn't see any other uses of\npg_utf8_is_legal() with missing length checks.\n\nThanks,\n--Jacob",
"msg_date": "Mon, 9 Sep 2024 08:29:17 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix small overread during SASLprep"
},
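Aside: a self-contained sketch of the general fix pattern, not the actual pg_utf8_string_len()/pg_utf8_is_legal() code. The idea is to work out the sequence length from the lead byte first, and only validate the continuation bytes once it is known that many bytes really remain in the buffer.

#include <stdbool.h>
#include <stddef.h>

/* Expected length of a UTF-8 sequence based on its lead byte (0 = bad lead). */
static int
utf8_seq_len(unsigned char lead)
{
	if (lead < 0x80)
		return 1;
	if ((lead & 0xE0) == 0xC0)
		return 2;
	if ((lead & 0xF0) == 0xE0)
		return 3;
	if ((lead & 0xF8) == 0xF0)
		return 4;
	return 0;
}

/*
 * True only when the whole multibyte sequence starting at p fits inside the
 * remaining buffer, so a torn character at the end of the input can never
 * make the validator read past the buffer.
 */
static bool
utf8_char_fits(const unsigned char *p, size_t remaining)
{
	int			len = utf8_seq_len(*p);

	return len != 0 && (size_t) len <= remaining;
}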
{
"msg_contents": "> On 9 Sep 2024, at 17:29, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n\n> pg_utf8_string_len() doesn't check the remaining string length before\n> calling pg_utf8_is_legal(), so there's a possibility of jumping a\n> couple of bytes past the end of the string. (The overread stops there,\n> because the function won't validate a sequence containing a null\n> byte.)\n> \n> Here's a quick patch to fix it. I didn't see any other uses of\n> pg_utf8_is_legal() with missing length checks.\n\nJust to make sure I understand, this is for guarding against overreads in\nvalidation of strings containing torn MB characters? Assuming I didn't\nmisunderstand you this patch seems correct to me.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 20:30:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix small overread during SASLprep"
},
{
"msg_contents": "On Mon, Sep 9, 2024 at 11:30 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Just to make sure I understand, this is for guarding against overreads in\n> validation of strings containing torn MB characters?\n\nRight. Our SASLprep code doesn't require/enforce UTF8-encoded inputs.\n\n> Assuming I didn't\n> misunderstand you this patch seems correct to me.\n\nThanks for the review!\n\n--Jacob\n\n\n",
"msg_date": "Mon, 9 Sep 2024 11:41:11 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix small overread during SASLprep"
},
{
"msg_contents": "> On 9 Sep 2024, at 20:41, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n> \n> On Mon, Sep 9, 2024 at 11:30 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Just to make sure I understand, this is for guarding against overreads in\n>> validation of strings containing torn MB characters?\n> \n> Right. Our SASLprep code doesn't require/enforce UTF8-encoded inputs.\n\nThanks for confirming, I'll have another look in the morning and will apply\nthen unless there are objections.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 9 Sep 2024 23:21:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix small overread during SASLprep"
},
{
"msg_contents": "> On 9 Sep 2024, at 23:21, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 9 Sep 2024, at 20:41, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n>> \n>> On Mon, Sep 9, 2024 at 11:30 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Just to make sure I understand, this is for guarding against overreads in\n>>> validation of strings containing torn MB characters?\n>> \n>> Right. Our SASLprep code doesn't require/enforce UTF8-encoded inputs.\n> \n> Thanks for confirming, I'll have another look in the morning and will apply\n> then unless there are objections.\n\nPushed, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 10 Sep 2024 13:39:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix small overread during SASLprep"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 4:39 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Pushed, thanks!\n\nThank you!\n\n--Jacob\n\n\n",
"msg_date": "Tue, 10 Sep 2024 08:16:32 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix small overread during SASLprep"
}
] |
[
{
"msg_contents": "Hi,\n\nALTER_REPLICATION_SLOT on invalidated replication slots is unnecessary\nas there is no way to get the invalidated (logical) slot to work.\nPlease find the patch to add an error in such cases. Relevant\ndiscussion is at [1].\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CAA4eK1%2Bszcosq0nS109mMSxPWyNT1Q%3DUNYCJgXKYuCceaPS%2BhA%40mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 10 Sep 2024 00:11:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Disallow altering invalidated replication slots"
},
{
"msg_contents": "Hi, here are some review comments for patch v1.\n\n======\nCommit message\n\n1.\nALTER_REPLICATION_SLOT on invalidated replication slots is unnecessary\nas there is no way...\n\nsuggestion:\nALTER_REPLICATION_SLOT for invalid replication slots should not be\nallowed because there is no way...\n\n======\n2. Missing docs update\n\nShould this docs page [1] be updated to say ALTER_REPLICATION_SLOT is\nnot allowed for invalid slots?\n\n======\nsrc/backend/replication/slot.c\n\n3.\n+ if (MyReplicationSlot->data.invalidated != RS_INVAL_NONE)\n+ ereport(ERROR,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot alter replication slot \\\"%s\\\"\", name),\n+ errdetail(\"This replication slot was invalidated due to \\\"%s\\\".\",\n+ SlotInvalidationCauses[MyReplicationSlot->data.invalidated]));\n+\n\nI thought including the reason \"invalid\" (e.g. \"cannot alter invalid\nreplication slot \\\"%s\\\"\") in the message might be better, but OTOH I\nsee the patch message is the same as an existing one. Maybe see what\nothers think.\n\n======\nsrc/test/recovery/t/035_standby_logical_decoding.pl\n\n3.\nThere is already a comment about this test:\n##################################################\n# Recovery conflict: Invalidate conflicting slots, including in-use slots\n# Scenario 1: hot_standby_feedback off and vacuum FULL\n#\n# In passing, ensure that replication slot stats are not removed when the\n# active slot is invalidated.\n##################################################\n\nIMO we should update that \"In passing...\" sentence to something like:\n\nIn passing, ensure that replication slot stats are not removed when\nthe active slot is invalidated, and check that an error occurs when\nattempting to alter the invalid slot.\n\n======\n[1] docs - https://www.postgresql.org/docs/devel/protocol-replication.html\n\nKind Regards,\nPeter Smith.\nFujitsu Austalia\n\n\n",
"msg_date": "Tue, 10 Sep 2024 13:09:45 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 12:11 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> ALTER_REPLICATION_SLOT on invalidated replication slots is unnecessary\n> as there is no way to get the invalidated (logical) slot to work.\n> Please find the patch to add an error in such cases. Relevant\n> discussion is at [1].\n>\n> Thoughts?\n>\n\n+1 on the idea.\n\n+ errmsg(\"cannot alter replication slot \\\"%s\\\"\", name),\n+ errdetail(\"This replication slot was invalidated due to \\\"%s\\\".\",\n\nMaybe we shall have: \"This slot has been invalidated due to ...\"\nThis is similar to all other occurrences where such errors are raised,\nsee logical.c for instance.\n\nthanks\nShveta\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:37:19 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 8:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, here are some review comments for patch v1.\n>\n> ======\n> Commit message\n>\n> 1.\n> ALTER_REPLICATION_SLOT on invalidated replication slots is unnecessary\n> as there is no way...\n>\n> suggestion:\n> ALTER_REPLICATION_SLOT for invalid replication slots should not be\n> allowed because there is no way...\n>\n> ======\n> 2. Missing docs update\n>\n> Should this docs page [1] be updated to say ALTER_REPLICATION_SLOT is\n> not allowed for invalid slots?\n>\n> ======\n> src/backend/replication/slot.c\n>\n> 3.\n> + if (MyReplicationSlot->data.invalidated != RS_INVAL_NONE)\n> + ereport(ERROR,\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot alter replication slot \\\"%s\\\"\", name),\n> + errdetail(\"This replication slot was invalidated due to \\\"%s\\\".\",\n> + SlotInvalidationCauses[MyReplicationSlot->data.invalidated]));\n> +\n>\n> I thought including the reason \"invalid\" (e.g. \"cannot alter invalid\n> replication slot \\\"%s\\\"\") in the message might be better,\n>\n\nAgreed, I could see a similar case with a message (\"cannot alter\ninvalid database \\\"%s\\\"\") in the code. Additionally, we should also\ninclude Shveta's suggestion to change the detailed message to other\nsimilar messages in logical.c\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:53:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
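Aside: folding Peter's and Shveta's wording suggestions into the hunk quoted earlier in the thread gives roughly the following shape. This is a reconstruction for illustration only, not the literal contents of the v2 patch.

	/* Inside the ALTER_REPLICATION_SLOT path, once the slot is acquired. */
	if (MyReplicationSlot->data.invalidated != RS_INVAL_NONE)
		ereport(ERROR,
				errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
				errmsg("cannot alter invalid replication slot \"%s\"", name),
				errdetail("This slot has been invalidated due to \"%s\".",
						  SlotInvalidationCauses[MyReplicationSlot->data.invalidated]));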
{
"msg_contents": "Hi,\n\nThanks for reviewing.\n\nOn Tue, Sep 10, 2024 at 8:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Commit message\n>\n> 1.\n> ALTER_REPLICATION_SLOT on invalidated replication slots is unnecessary\n> as there is no way...\n>\n> suggestion:\n> ALTER_REPLICATION_SLOT for invalid replication slots should not be\n> allowed because there is no way...\n\nModified.\n\n> ======\n> 2. Missing docs update\n>\n> Should this docs page [1] be updated to say ALTER_REPLICATION_SLOT is\n> not allowed for invalid slots?\n\nHaven't noticed for other ERROR cases in the docs, e.g. slots being\nsynced, temporary slots. Not sure if it's worth adding every ERROR\ncase to the docs.\n\n> ======\n> src/backend/replication/slot.c\n>\n> 3.\n> + if (MyReplicationSlot->data.invalidated != RS_INVAL_NONE)\n> + ereport(ERROR,\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot alter replication slot \\\"%s\\\"\", name),\n> + errdetail(\"This replication slot was invalidated due to \\\"%s\\\".\",\n> + SlotInvalidationCauses[MyReplicationSlot->data.invalidated]));\n> +\n>\n> I thought including the reason \"invalid\" (e.g. \"cannot alter invalid\n> replication slot \\\"%s\\\"\") in the message might be better, but OTOH I\n> see the patch message is the same as an existing one. Maybe see what\n> others think.\n\nChanged.\n\n> ======\n> src/test/recovery/t/035_standby_logical_decoding.pl\n>\n> 3.\n> There is already a comment about this test:\n> ##################################################\n> # Recovery conflict: Invalidate conflicting slots, including in-use slots\n> # Scenario 1: hot_standby_feedback off and vacuum FULL\n> #\n> # In passing, ensure that replication slot stats are not removed when the\n> # active slot is invalidated.\n> ##################################################\n>\n> IMO we should update that \"In passing...\" sentence to something like:\n>\n> In passing, ensure that replication slot stats are not removed when\n> the active slot is invalidated, and check that an error occurs when\n> attempting to alter the invalid slot.\n\nAdded. But, keeping it closer to the test case doesn't hurt.\n\nPlease find the attached v2 patch also having Shveta's review comments\naddressed.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 10 Sep 2024 23:24:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 3:54 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Thanks for reviewing.\n>\n> On Tue, Sep 10, 2024 at 8:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Commit message\n> >\n> > 1.\n> > ALTER_REPLICATION_SLOT on invalidated replication slots is unnecessary\n> > as there is no way...\n> >\n> > suggestion:\n> > ALTER_REPLICATION_SLOT for invalid replication slots should not be\n> > allowed because there is no way...\n>\n> Modified.\n>\n> > ======\n> > 2. Missing docs update\n> >\n> > Should this docs page [1] be updated to say ALTER_REPLICATION_SLOT is\n> > not allowed for invalid slots?\n>\n> Haven't noticed for other ERROR cases in the docs, e.g. slots being\n> synced, temporary slots. Not sure if it's worth adding every ERROR\n> case to the docs.\n>\n\nOK.\n\n> > ======\n> > src/backend/replication/slot.c\n> >\n> > 3.\n> > + if (MyReplicationSlot->data.invalidated != RS_INVAL_NONE)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"cannot alter replication slot \\\"%s\\\"\", name),\n> > + errdetail(\"This replication slot was invalidated due to \\\"%s\\\".\",\n> > + SlotInvalidationCauses[MyReplicationSlot->data.invalidated]));\n> > +\n> >\n> > I thought including the reason \"invalid\" (e.g. \"cannot alter invalid\n> > replication slot \\\"%s\\\"\") in the message might be better, but OTOH I\n> > see the patch message is the same as an existing one. Maybe see what\n> > others think.\n>\n> Changed.\n>\n> > ======\n> > src/test/recovery/t/035_standby_logical_decoding.pl\n> >\n> > 3.\n> > There is already a comment about this test:\n> > ##################################################\n> > # Recovery conflict: Invalidate conflicting slots, including in-use slots\n> > # Scenario 1: hot_standby_feedback off and vacuum FULL\n> > #\n> > # In passing, ensure that replication slot stats are not removed when the\n> > # active slot is invalidated.\n> > ##################################################\n> >\n> > IMO we should update that \"In passing...\" sentence to something like:\n> >\n> > In passing, ensure that replication slot stats are not removed when\n> > the active slot is invalidated, and check that an error occurs when\n> > attempting to alter the invalid slot.\n>\n> Added. But, keeping it closer to the test case doesn't hurt.\n>\n> Please find the attached v2 patch also having Shveta's review comments\n> addressed.\n>\n\nThe v2 patch looks OK to me.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 11 Sep 2024 11:29:03 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 11:24 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>\n> Please find the attached v2 patch also having Shveta's review comments\n> addressed.\n\nThe v2 patch looks good to me.\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 11 Sep 2024 08:41:35 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 8:41 AM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Tue, Sep 10, 2024 at 11:24 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> >\n> > Please find the attached v2 patch also having Shveta's review comments\n> > addressed.\n>\n> The v2 patch looks good to me.\n>\n\nLGTM as well. I'll push this tomorrow morning unless there are more\ncomments or suggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Sep 2024 16:24:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 4:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 11, 2024 at 8:41 AM shveta malik <shveta.malik@gmail.com> wrote:\n> >\n> > On Tue, Sep 10, 2024 at 11:24 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > >\n> > > Please find the attached v2 patch also having Shveta's review comments\n> > > addressed.\n> >\n> > The v2 patch looks good to me.\n> >\n>\n> LGTM as well. I'll push this tomorrow morning unless there are more\n> comments or suggestions.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Sep 2024 11:46:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow altering invalidated replication slots"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nThis is my first time posting here, and I’d like to propose a new feature\nrelated to PostgreSQL indexes. If this idea resonates, I’d be happy to\nfollow up with a patch as well.\n\n*Problem*:\nAdding and removing indexes is a common operation in PostgreSQL. On larger\ndatabases, however, these operations can be resource-intensive. When\nevaluating the performance impact of one or more indexes, dropping them\nmight not be ideal since as a user you may want a quicker way to test their\neffects without fully committing to removing & adding them back again.\nWhich can be a time taking operation on larger tables.\n\n*Proposal*:\nI propose adding an ALTER INDEX command that allows for enabling or\ndisabling an index globally. This could look something like:\n\nALTER INDEX index_name ENABLE;\nALTER INDEX index_name DISABLE;\n\nA disabled index would still receive updates and enforce constraints as\nusual but would not be used for queries. This allows users to assess\nwhether an index impacts query performance before deciding to drop it\nentirely.\n\n*Implementation*:\nTo keep this simple, I suggest toggling the indisvalid flag in pg_index\nduring the enable/disable operation.\n\n*Additional Considerations*:\n- Keeping the index up-to-date while it’s disabled seems preferable, as it\navoids the need to rebuild the index if it’s re-enabled later. The\nalternative would be dropping and rebuilding the index upon re-enabling,\nwhich I believe would introduce additional overhead in terms of application\nlogic & complexity.\n- I am also proposing to reuse the existing indisvalid flag to avoid adding\nnew state and the maintenance that comes with it, but I’m open to feedback\nif this approach has potential downsides.\n- To keep the scope minimal for now, I propose that we only allow enabling\nand disabling indexes globally, and not locally, by supporting it\nexclusively in ALTER INDEX. I would love to know if this would break any\nSQL grammar promises that I might be unaware of.\n\nI would love to learn if this sounds like a good idea and how it can be\nimproved further. Accordingly, as a next step I would be very happy to\npropose a patch as well.\n\nBest regards,\nShayon Mukherjee\n\nHello hackers,This is my first time posting here, and I’d like to propose a new feature related to PostgreSQL indexes. If this idea resonates, I’d be happy to follow up with a patch as well.Problem:Adding and removing indexes is a common operation in PostgreSQL. On larger databases, however, these operations can be resource-intensive. When evaluating the performance impact of one or more indexes, dropping them might not be ideal since as a user you may want a quicker way to test their effects without fully committing to removing & adding them back again. Which can be a time taking operation on larger tables.Proposal:I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:ALTER INDEX index_name ENABLE;ALTER INDEX index_name DISABLE;A disabled index would still receive updates and enforce constraints as usual but would not be used for queries. 
This allows users to assess whether an index impacts query performance before deciding to drop it entirely.Implementation:To keep this simple, I suggest toggling the indisvalid flag in pg_index during the enable/disable operation.Additional Considerations:- Keeping the index up-to-date while it’s disabled seems preferable, as it avoids the need to rebuild the index if it’s re-enabled later. The alternative would be dropping and rebuilding the index upon re-enabling, which I believe would introduce additional overhead in terms of application logic & complexity.- I am also proposing to reuse the existing indisvalid flag to avoid adding new state and the maintenance that comes with it, but I’m open to feedback if this approach has potential downsides.- To keep the scope minimal for now, I propose that we only allow enabling and disabling indexes globally, and not locally, by supporting it exclusively in ALTER INDEX. I would love to know if this would break any SQL grammar promises that I might be unaware of.I would love to learn if this sounds like a good idea and how it can be improved further. Accordingly, as a next step I would be very happy to propose a patch as well. Best regards, Shayon Mukherjee",
"msg_date": "Mon, 9 Sep 2024 17:38:35 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Tue, 10 Sept 2024 at 09:39, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> Adding and removing indexes is a common operation in PostgreSQL. On larger databases, however, these operations can be resource-intensive. When evaluating the performance impact of one or more indexes, dropping them might not be ideal since as a user you may want a quicker way to test their effects without fully committing to removing & adding them back again. Which can be a time taking operation on larger tables.\n>\n> Proposal:\n> I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:\n>\n> ALTER INDEX index_name ENABLE;\n> ALTER INDEX index_name DISABLE;\n>\n> A disabled index would still receive updates and enforce constraints as usual but would not be used for queries. This allows users to assess whether an index impacts query performance before deciding to drop it entirely.\n\nI personally think having some way to alter an index to stop it from\nbeing used in query plans would be very useful for the reasons you\nmentioned. I don't have any arguments against the syntax you've\nproposed. We'd certainly have to clearly document that constraints\nare still enforced. Perhaps there is some other syntax which would\nself-document slightly better. I just can't think of it right now.\n\n> Implementation:\n> To keep this simple, I suggest toggling the indisvalid flag in pg_index during the enable/disable operation.\n\nThat's not a good idea as it would allow ALTER INDEX ... ENABLE; to be\nused to make valid a failed concurrently created index. I think this\nwould need a new flag and everywhere in the planner would need to be\nadjusted to ignore indexes when that flag is false.\n\n> Additional Considerations:\n> - Keeping the index up-to-date while it’s disabled seems preferable, as it avoids the need to rebuild the index if it’s re-enabled later. The alternative would be dropping and rebuilding the index upon re-enabling, which I believe would introduce additional overhead in terms of application logic & complexity.\n\nI think the primary use case here is to assist in dropping useless\nindexes in a way that can very quickly be undone if the index is more\nuseful than thought. If you didn't keep the index up-to-date then that\nwould make the feature useless for that purpose.\n\nIf we get the skip scan feature for PG18, then there's likely going to\nbe lots of people with indexes that they might want to consider\nremoving after upgrading. Maybe this is a good time to consider this\nfeature as it possibly won't ever be more useful than it will be after\nwe get skip scans.\n\nDavid\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:16:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
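Aside: a rough sketch of the shape David's suggestion could take when the planner gathers a relation's indexes. The pg_index column name indisenabled is hypothetical, and the fragment is heavily simplified compared with what get_relation_info() really does; it is only meant to show the flag being consulted at plan time while index maintenance stays untouched.

	/* Hypothetical: skip disabled indexes while collecting candidates. */
	ListCell   *lc;

	foreach(lc, RelationGetIndexList(relation))
	{
		Oid			indexoid = lfirst_oid(lc);
		Relation	indexRel = index_open(indexoid, AccessShareLock);
		Form_pg_index index = indexRel->rd_index;

		if (!index->indisvalid || !index->indisenabled)	/* indisenabled: invented */
		{
			index_close(indexRel, NoLock);
			continue;			/* invisible to the planner, still maintained */
		}

		/* ... build the IndexOptInfo as usual ... */
	}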
{
"msg_contents": "Hi Shayon\n Thank you for your work on this , I think it's great to have this\nfeature implemented ,I checked the doucment on other databases,It seems\nboth MySQL 8.0 and oracle supports it, sql server need to rebuild indexes\nafter disabled,It seems disable the index, it's equivalent to deleting\nthe index, except that the index's metadata is still retained:\nhttps://docs.oracle.com/cd/E17952_01/mysql-8.0-en/invisible-indexes.html\nhttps://learn.microsoft.com/en-us/sql/t-sql/statements/alter-index-transact-sql?view=sql-server-ver16\nhttps://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ALTER-INDEX.html\n->A disabled index would still receive updates and enforce constraints as\nusual but would not be used for queries. This allows users to assess ->\n->whether an index impacts query performance before deciding to drop it\nentirely.\nMySQL 8.0 and oracle settings are not visible, index information is always\nupdated, I would then suggest that the statement be changed to set the\nindex invisible and visible.\n\n\n\nThanks\n\nDavid Rowley <dgrowleyml@gmail.com> 于2024年9月10日周二 06:17写道:\n\n> On Tue, 10 Sept 2024 at 09:39, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> > Adding and removing indexes is a common operation in PostgreSQL. On\n> larger databases, however, these operations can be resource-intensive. When\n> evaluating the performance impact of one or more indexes, dropping them\n> might not be ideal since as a user you may want a quicker way to test their\n> effects without fully committing to removing & adding them back again.\n> Which can be a time taking operation on larger tables.\n> >\n> > Proposal:\n> > I propose adding an ALTER INDEX command that allows for enabling or\n> disabling an index globally. This could look something like:\n> >\n> > ALTER INDEX index_name ENABLE;\n> > ALTER INDEX index_name DISABLE;\n> >\n> > A disabled index would still receive updates and enforce constraints as\n> usual but would not be used for queries. This allows users to assess\n> whether an index impacts query performance before deciding to drop it\n> entirely.\n>\n> I personally think having some way to alter an index to stop it from\n> being used in query plans would be very useful for the reasons you\n> mentioned. I don't have any arguments against the syntax you've\n> proposed. We'd certainly have to clearly document that constraints\n> are still enforced. Perhaps there is some other syntax which would\n> self-document slightly better. I just can't think of it right now.\n>\n> > Implementation:\n> > To keep this simple, I suggest toggling the indisvalid flag in pg_index\n> during the enable/disable operation.\n>\n> That's not a good idea as it would allow ALTER INDEX ... ENABLE; to be\n> used to make valid a failed concurrently created index. I think this\n> would need a new flag and everywhere in the planner would need to be\n> adjusted to ignore indexes when that flag is false.\n>\n> > Additional Considerations:\n> > - Keeping the index up-to-date while it’s disabled seems preferable, as\n> it avoids the need to rebuild the index if it’s re-enabled later. The\n> alternative would be dropping and rebuilding the index upon re-enabling,\n> which I believe would introduce additional overhead in terms of application\n> logic & complexity.\n>\n> I think the primary use case here is to assist in dropping useless\n> indexes in a way that can very quickly be undone if the index is more\n> useful than thought. 
If you didn't keep the index up-to-date then that\n> would make the feature useless for that purpose.\n>\n> If we get the skip scan feature for PG18, then there's likely going to\n> be lots of people with indexes that they might want to consider\n> removing after upgrading. Maybe this is a good time to consider this\n> feature as it possibly won't ever be more useful than it will be after\n> we get skip scans.\n>\n> David\n>\n>\n>\n",
"msg_date": "Tue, 10 Sep 2024 18:06:47 +0800",
"msg_from": "wenhui qiu <qiuwenhuifx@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hi,\n\nOn Tue, Sep 10, 2024 at 10:16:34AM +1200, David Rowley wrote:\n> On Tue, 10 Sept 2024 at 09:39, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> > Adding and removing indexes is a common operation in PostgreSQL. On\n> > larger databases, however, these operations can be\n> > resource-intensive. When evaluating the performance impact of one or\n> > more indexes, dropping them might not be ideal since as a user you\n> > may want a quicker way to test their effects without fully\n> > committing to removing & adding them back again. Which can be a time\n> > taking operation on larger tables.\n> >\n> > Proposal:\n> > I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:\n> >\n> > ALTER INDEX index_name ENABLE;\n> > ALTER INDEX index_name DISABLE;\n> >\n> > A disabled index would still receive updates and enforce constraints\n> > as usual but would not be used for queries. This allows users to\n> > assess whether an index impacts query performance before deciding to\n> > drop it entirely.\n> \n> I personally think having some way to alter an index to stop it from\n> being used in query plans would be very useful for the reasons you\n> mentioned. I don't have any arguments against the syntax you've\n> proposed. We'd certainly have to clearly document that constraints\n> are still enforced. Perhaps there is some other syntax which would\n> self-document slightly better. I just can't think of it right now.\n> \n> > Implementation:\n> > To keep this simple, I suggest toggling the indisvalid flag in\n> > pg_index during the enable/disable operation.\n> \n> That's not a good idea as it would allow ALTER INDEX ... ENABLE; to be\n> used to make valid a failed concurrently created index. I think this\n> would need a new flag and everywhere in the planner would need to be\n> adjusted to ignore indexes when that flag is false.\n\nHow about the indislive flag instead? I haven't looked at the code, but\nfrom the documentation (\"If false, the index is in process of being\ndropped, and\nshould be ignored for all purposes\") it sounds like we made be able to\npiggy-back on that instead?\n\n\nMichael\n\n\n",
"msg_date": "Tue, 10 Sep 2024 12:46:15 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Tue, 10 Sept 2024 at 22:46, Michael Banck <mbanck@gmx.net> wrote:\n> How about the indislive flag instead? I haven't looked at the code, but\n> from the documentation (\"If false, the index is in process of being\n> dropped, and\n> should be ignored for all purposes\") it sounds like we made be able to\n> piggy-back on that instead?\n\nDoing that could cause an UPDATE which would ordinarily not be\neligible for a HOT-update to become a HOT-update. That would cause\nissues if the index is enabled again as the index wouldn't have been\nupdated during the UPDATE.\n\nI don't see the big deal with adding a new flag. There's even a free\npadding byte to put this flag in after indisreplident, so we don't\nhave to worry about using more memory.\n\nDavid\n\n\n",
"msg_date": "Wed, 11 Sep 2024 00:02:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
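Aside: a purely schematic view of where such a flag could sit, using the padding byte David mentions. The struct name and the indisenabled column are invented for this sketch and do not exist in catalog/pg_index.h.

typedef struct FormData_pg_index_sketch
{
	/* ... existing fixed-width columns elided ... */
	bool		indisreplident;	/* existing: is this the replica identity index? */
	bool		indisenabled;	/* hypothetical: may the planner use this index? */
	/* ... variable-length columns elided ... */
} FormData_pg_index_sketch;

As David notes above, a new bool placed in the existing alignment padding after indisreplident would not make pg_index rows any wider.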
{
"msg_contents": "+1 for the new flag as well, since it'd be nice to be able to\nenable/disable indexes without having to worry about the missed updates /\nhaving to rebuild it.\nShayon\n\nOn Tue, Sep 10, 2024 at 8:02 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 10 Sept 2024 at 22:46, Michael Banck <mbanck@gmx.net> wrote:\n> > How about the indislive flag instead? I haven't looked at the code, but\n> > from the documentation (\"If false, the index is in process of being\n> > dropped, and\n> > should be ignored for all purposes\") it sounds like we made be able to\n> > piggy-back on that instead?\n>\n> Doing that could cause an UPDATE which would ordinarily not be\n> eligible for a HOT-update to become a HOT-update. That would cause\n> issues if the index is enabled again as the index wouldn't have been\n> updated during the UPDATE.\n>\n> I don't see the big deal with adding a new flag. There's even a free\n> padding byte to put this flag in after indisreplident, so we don't\n> have to worry about using more memory.\n>\n> David\n>\n\n\n-- \nKind Regards,\nShayon Mukherjee\n\n+1 for the new flag as well, since it'd be nice to be able to enable/disable indexes without having to worry about the missed updates / having to rebuild it.ShayonOn Tue, Sep 10, 2024 at 8:02 AM David Rowley <dgrowleyml@gmail.com> wrote:On Tue, 10 Sept 2024 at 22:46, Michael Banck <mbanck@gmx.net> wrote:\n> How about the indislive flag instead? I haven't looked at the code, but\n> from the documentation (\"If false, the index is in process of being\n> dropped, and\n> should be ignored for all purposes\") it sounds like we made be able to\n> piggy-back on that instead?\n\nDoing that could cause an UPDATE which would ordinarily not be\neligible for a HOT-update to become a HOT-update. That would cause\nissues if the index is enabled again as the index wouldn't have been\nupdated during the UPDATE.\n\nI don't see the big deal with adding a new flag. There's even a free\npadding byte to put this flag in after indisreplident, so we don't\nhave to worry about using more memory.\n\nDavid\n-- Kind Regards,Shayon Mukherjee",
"msg_date": "Tue, 10 Sep 2024 08:15:19 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hello,\n\nThank you for the detailed information and feedback David. Comments inline.\n\nP.S Re-sending it to the mailing list, because I accidentally didn't select\nreply-all on the last reply.\n\nOn Mon, Sep 9, 2024 at 6:16 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 10 Sept 2024 at 09:39, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> > Adding and removing indexes is a common operation in PostgreSQL. On\n> larger databases, however, these operations can be resource-intensive. When\n> evaluating the performance impact of one or more indexes, dropping them\n> might not be ideal since as a user you may want a quicker way to test their\n> effects without fully committing to removing & adding them back again.\n> Which can be a time taking operation on larger tables.\n> >\n> > Proposal:\n> > I propose adding an ALTER INDEX command that allows for enabling or\n> disabling an index globally. This could look something like:\n> >\n> > ALTER INDEX index_name ENABLE;\n> > ALTER INDEX index_name DISABLE;\n> >\n> > A disabled index would still receive updates and enforce constraints as\n> usual but would not be used for queries. This allows users to assess\n> whether an index impacts query performance before deciding to drop it\n> entirely.\n>\n> I personally think having some way to alter an index to stop it from\n> being used in query plans would be very useful for the reasons you\n> mentioned. I don't have any arguments against the syntax you've\n> proposed. We'd certainly have to clearly document that constraints\n> are still enforced. Perhaps there is some other syntax which would\n> self-document slightly better. I just can't think of it right now.\n>\n\nThank you and likewise. I was thinking of piggy backing off of VALID / NOT\nVALID, but that might have similar issues (if not more confusion) to the\ncurrent proposed syntax. Will be sure to update the documentation.\n\n\n\n>\n> > Implementation:\n> > To keep this simple, I suggest toggling the indisvalid flag in pg_index\n> during the enable/disable operation.\n>\n> That's not a good idea as it would allow ALTER INDEX ... ENABLE; to be\n> used to make valid a failed concurrently created index. I think this\n> would need a new flag and everywhere in the planner would need to be\n> adjusted to ignore indexes when that flag is false.\n>\n\nThat is a great call and I wasn't thinking of the semantics with the\nexisting usage of concurrently created indexes.\n\n\n>\n> > Additional Considerations:\n> > - Keeping the index up-to-date while it’s disabled seems preferable, as\n> it avoids the need to rebuild the index if it’s re-enabled later. The\n> alternative would be dropping and rebuilding the index upon re-enabling,\n> which I believe would introduce additional overhead in terms of application\n> logic & complexity.\n>\n> I think the primary use case here is to assist in dropping useless\n> indexes in a way that can very quickly be undone if the index is more\n> useful than thought. If you didn't keep the index up-to-date then that\n> would make the feature useless for that purpose.\n>\n\n+1\n\n\n>\n> If we get the skip scan feature for PG18, then there's likely going to\n> be lots of people with indexes that they might want to consider\n> removing after upgrading. 
Maybe this is a good time to consider this\n> feature as it possibly won't ever be more useful than it will be after\n> we get skip scans.\n>\n> David\n>\n\nThank you for the feedback again, I will look into the changes required and\naccordingly propose a PATCH.\n\n-- \nKind Regards,\nShayon Mukherjee\n\nHello,Thank you for the detailed information and feedback David. Comments inline.P.S Re-sending it to the mailing list, because I accidentally didn't select reply-all on the last reply. On Mon, Sep 9, 2024 at 6:16 PM David Rowley <dgrowleyml@gmail.com> wrote:On Tue, 10 Sept 2024 at 09:39, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> Adding and removing indexes is a common operation in PostgreSQL. On larger databases, however, these operations can be resource-intensive. When evaluating the performance impact of one or more indexes, dropping them might not be ideal since as a user you may want a quicker way to test their effects without fully committing to removing & adding them back again. Which can be a time taking operation on larger tables.\n>\n> Proposal:\n> I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:\n>\n> ALTER INDEX index_name ENABLE;\n> ALTER INDEX index_name DISABLE;\n>\n> A disabled index would still receive updates and enforce constraints as usual but would not be used for queries. This allows users to assess whether an index impacts query performance before deciding to drop it entirely.\n\nI personally think having some way to alter an index to stop it from\nbeing used in query plans would be very useful for the reasons you\nmentioned. I don't have any arguments against the syntax you've\nproposed. We'd certainly have to clearly document that constraints\nare still enforced. Perhaps there is some other syntax which would\nself-document slightly better. I just can't think of it right now. Thank you and likewise. I was thinking of piggy backing off of VALID / NOT VALID, but that might have similar issues (if not more confusion) to the current proposed syntax. Will be sure to update the documentation. \n\n> Implementation:\n> To keep this simple, I suggest toggling the indisvalid flag in pg_index during the enable/disable operation.\n\nThat's not a good idea as it would allow ALTER INDEX ... ENABLE; to be\nused to make valid a failed concurrently created index. I think this\nwould need a new flag and everywhere in the planner would need to be\nadjusted to ignore indexes when that flag is false.That is a great call and I wasn't thinking of the semantics with the existing usage of concurrently created indexes. \n\n> Additional Considerations:\n> - Keeping the index up-to-date while it’s disabled seems preferable, as it avoids the need to rebuild the index if it’s re-enabled later. The alternative would be dropping and rebuilding the index upon re-enabling, which I believe would introduce additional overhead in terms of application logic & complexity.\n\nI think the primary use case here is to assist in dropping useless\nindexes in a way that can very quickly be undone if the index is more\nuseful than thought. If you didn't keep the index up-to-date then that\nwould make the feature useless for that purpose.+1 \n\nIf we get the skip scan feature for PG18, then there's likely going to\nbe lots of people with indexes that they might want to consider\nremoving after upgrading. 
Maybe this is a good time to consider this\nfeature as it possibly won't ever be more useful than it will be after\nwe get skip scans.\n\nDavid\nThank you for the feedback again, I will look into the changes required and accordingly propose a PATCH. -- Kind Regards,Shayon Mukherjee",
"msg_date": "Tue, 10 Sep 2024 08:25:37 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 10:16:34AM +1200, David Rowley wrote:\n> I think the primary use case here is to assist in dropping useless\n> indexes in a way that can very quickly be undone if the index is more\n> useful than thought. If you didn't keep the index up-to-date then that\n> would make the feature useless for that purpose.\n> \n> If we get the skip scan feature for PG18, then there's likely going to\n> be lots of people with indexes that they might want to consider\n> removing after upgrading. Maybe this is a good time to consider this\n> feature as it possibly won't ever be more useful than it will be after\n> we get skip scans.\n\n+1, this is something I've wanted for some time. There was some past\ndiscussion, too [0].\n\n[0] https://postgr.es/m/flat/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n\n-- \nnathan\n\n\n",
"msg_date": "Tue, 10 Sep 2024 10:12:22 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Wed, 11 Sept 2024 at 03:12, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Sep 10, 2024 at 10:16:34AM +1200, David Rowley wrote:\n> > If we get the skip scan feature for PG18, then there's likely going to\n> > be lots of people with indexes that they might want to consider\n> > removing after upgrading. Maybe this is a good time to consider this\n> > feature as it possibly won't ever be more useful than it will be after\n> > we get skip scans.\n>\n> +1, this is something I've wanted for some time. There was some past\n> discussion, too [0].\n>\n> [0] https://postgr.es/m/flat/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n\nThanks for digging that up. I'd forgotten about that. I see there was\npushback from having this last time, which is now over 6 years ago.\nIn the meantime, we still have nothing to make this easy for people.\n\nI think the most important point I read in that thread is [1]. Maybe\nwhat I mentioned in [2] is a good workaround.\n\nAdditionally, I think there will need to be syntax in CREATE INDEX for\nthis. Without that pg_get_indexdef() might return SQL that does not\nreflect the current state of the index. MySQL seems to use \"CREATE\nINDEX name ON table (col) [VISIBLE|INVISIBLE]\".\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20180618215635.m5vrnxdxhxytvmcm%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/CAKJS1f_L7y_BTGESp5Qd6BSRHXP0mj3x9O9C_U27GU478UwpBw%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 11 Sep 2024 09:35:16 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hello,\n\nThank you for all the feedback and insights. Work was busy, so I didn't get\nto follow up earlier.\n\nThis patch introduces the ability to enable or disable indexes using ALTER\nINDEX\nand CREATE INDEX commands.\n\nOriginal motivation for the problem and proposal for a patch\ncan be found here[0]\n\nThis patch contains the relevant implementation details, new regression\ntests and documentation.\nIt passes all the existing specs and the newly added regression tests. It\ncompiles, so the\npatch can be applied for testing as well.\n\nI have attached the patch in this email, and have also shared it on my\nGithub fork[1]. Mostly so\nthat I can ensure the full CI passes.\n\n\n*Implementation details:*\n- New Grammar:\n * ALTER INDEX ... ENABLE/DISABLE\n * CREATE INDEX ... DISABLE\n\n- Default state is enabled. Indexes are only disabled when explicitly\n instructed via CREATE INDEX ... DISABLE or ALTER INDEX ... DISABLE.\n\n- Primary Key and Unique constraint indexes are always enabled as well. The\n ENABLE/DISABLE grammar is not supported for these types of indexes. They\ncan\n be later disabled via ALTER INDEX ... ENABLE/DISABLE.\n\n- ALTER INDEX ... ENABLE/DISABLE performs an in-place update of the\npg_index\n catalog to protect against indcheckxmin.\n\n- pg_get_indexdef() support for the new functionality and grammar. This\nchange is\n reflected in \\d output for tables and pg_dump. We show the DISABLE syntax\naccordingly.\n\n- Updated create_index.sql regression test to cover the new grammar and\nverify\n that disabled indexes are not used in queries.\n\n- Modified get_index_paths() and build_index_paths() to exclude disabled\n indexes from consideration during query planning.\n\n- No changes are made to stop the index from getting rebuilt. This way we\nensure no\n data miss or corruption when index is re-enabled.\n\n- TOAST indexes are supported and enabled by default.\n\n- REINDEX CONCURRENTLY is supported as well and the existing state of\npg_index.indisenabled\n is carried over accordingly.\n\n- catversion.h is updated with a new CATALOG_VERSION_NO to reflect change\nin pg_index\n schema.\n\n- See the changes in create_index.sql to get an idea of the grammar and sql\nstatements.\n\n- See the changes in create_index.out to get an idea of the catalogue\nstates and EXPLAIN\n output to see when an index is getting used or isn't (when disabled).\n\nI am looking forward to any and all feedback on this patch, including but\nnot limited to\ncode quality, tests, and fundamental logic.\n\nThank you for the reviews and feedback.\n\n[0]\nhttps://www.postgresql.org/message-id/CANqtF-oXKe0M%3D0QOih6H%2BsZRjE2BWAbkW_1%2B9nMEAMLxUJg5jA%40mail.gmail.com\n[1] https://github.com/shayonj/postgres/pull/1\n\nBest,\nShayon\n\nOn Tue, Sep 10, 2024 at 5:35 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 11 Sept 2024 at 03:12, Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n> >\n> > On Tue, Sep 10, 2024 at 10:16:34AM +1200, David Rowley wrote:\n> > > If we get the skip scan feature for PG18, then there's likely going to\n> > > be lots of people with indexes that they might want to consider\n> > > removing after upgrading. Maybe this is a good time to consider this\n> > > feature as it possibly won't ever be more useful than it will be after\n> > > we get skip scans.\n> >\n> > +1, this is something I've wanted for some time. 
There was some past\n> > discussion, too [0].\n> >\n> > [0]\n> https://postgr.es/m/flat/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n>\n> Thanks for digging that up. I'd forgotten about that. I see there was\n> pushback from having this last time, which is now over 6 years ago.\n> In the meantime, we still have nothing to make this easy for people.\n>\n> I think the most important point I read in that thread is [1]. Maybe\n> what I mentioned in [2] is a good workaround.\n>\n> Additionally, I think there will need to be syntax in CREATE INDEX for\n> this. Without that pg_get_indexdef() might return SQL that does not\n> reflect the current state of the index. MySQL seems to use \"CREATE\n> INDEX name ON table (col) [VISIBLE|INVISIBLE]\".\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/20180618215635.m5vrnxdxhxytvmcm%40alap3.anarazel.de\n> [2]\n> https://www.postgresql.org/message-id/CAKJS1f_L7y_BTGESp5Qd6BSRHXP0mj3x9O9C_U27GU478UwpBw%40mail.gmail.com\n>\n\n\n-- \nKind Regards,\nShayon Mukherjee",
"msg_date": "Sun, 22 Sep 2024 13:42:48 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hello,\n\nI realized there were some white spaces in the diff and a compiler warning\nerror from CI, so I have fixed those and the updated patch with v2 is now\nattached.\n\nShayon\n\nOn Sun, Sep 22, 2024 at 1:42 PM Shayon Mukherjee <shayonj@gmail.com> wrote:\n\n> Hello,\n>\n> Thank you for all the feedback and insights. Work was busy, so I didn't\n> get to follow up earlier.\n>\n> This patch introduces the ability to enable or disable indexes using ALTER\n> INDEX\n> and CREATE INDEX commands.\n>\n> Original motivation for the problem and proposal for a patch\n> can be found here[0]\n>\n> This patch contains the relevant implementation details, new regression\n> tests and documentation.\n> It passes all the existing specs and the newly added regression tests. It\n> compiles, so the\n> patch can be applied for testing as well.\n>\n> I have attached the patch in this email, and have also shared it on my\n> Github fork[1]. Mostly so\n> that I can ensure the full CI passes.\n>\n>\n> *Implementation details:*\n> - New Grammar:\n> * ALTER INDEX ... ENABLE/DISABLE\n> * CREATE INDEX ... DISABLE\n>\n> - Default state is enabled. Indexes are only disabled when explicitly\n> instructed via CREATE INDEX ... DISABLE or ALTER INDEX ... DISABLE.\n>\n> - Primary Key and Unique constraint indexes are always enabled as well.\n> The\n> ENABLE/DISABLE grammar is not supported for these types of indexes. They\n> can\n> be later disabled via ALTER INDEX ... ENABLE/DISABLE.\n>\n> - ALTER INDEX ... ENABLE/DISABLE performs an in-place update of the\n> pg_index\n> catalog to protect against indcheckxmin.\n>\n> - pg_get_indexdef() support for the new functionality and grammar. This\n> change is\n> reflected in \\d output for tables and pg_dump. We show the DISABLE\n> syntax accordingly.\n>\n> - Updated create_index.sql regression test to cover the new grammar and\n> verify\n> that disabled indexes are not used in queries.\n>\n> - Modified get_index_paths() and build_index_paths() to exclude disabled\n> indexes from consideration during query planning.\n>\n> - No changes are made to stop the index from getting rebuilt. 
This way we\n> ensure no\n> data miss or corruption when index is re-enabled.\n>\n> - TOAST indexes are supported and enabled by default.\n>\n> - REINDEX CONCURRENTLY is supported as well and the existing state of\n> pg_index.indisenabled\n> is carried over accordingly.\n>\n> - catversion.h is updated with a new CATALOG_VERSION_NO to reflect change\n> in pg_index\n> schema.\n>\n> - See the changes in create_index.sql to get an idea of the grammar and\n> sql statements.\n>\n> - See the changes in create_index.out to get an idea of the catalogue\n> states and EXPLAIN\n> output to see when an index is getting used or isn't (when disabled).\n>\n> I am looking forward to any and all feedback on this patch, including but\n> not limited to\n> code quality, tests, and fundamental logic.\n>\n> Thank you for the reviews and feedback.\n>\n> [0]\n> https://www.postgresql.org/message-id/CANqtF-oXKe0M%3D0QOih6H%2BsZRjE2BWAbkW_1%2B9nMEAMLxUJg5jA%40mail.gmail.com\n> [1] https://github.com/shayonj/postgres/pull/1\n>\n> Best,\n> Shayon\n>\n> On Tue, Sep 10, 2024 at 5:35 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> On Wed, 11 Sept 2024 at 03:12, Nathan Bossart <nathandbossart@gmail.com>\n>> wrote:\n>> >\n>> > On Tue, Sep 10, 2024 at 10:16:34AM +1200, David Rowley wrote:\n>> > > If we get the skip scan feature for PG18, then there's likely going to\n>> > > be lots of people with indexes that they might want to consider\n>> > > removing after upgrading. Maybe this is a good time to consider this\n>> > > feature as it possibly won't ever be more useful than it will be after\n>> > > we get skip scans.\n>> >\n>> > +1, this is something I've wanted for some time. There was some past\n>> > discussion, too [0].\n>> >\n>> > [0]\n>> https://postgr.es/m/flat/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n>>\n>> Thanks for digging that up. I'd forgotten about that. I see there was\n>> pushback from having this last time, which is now over 6 years ago.\n>> In the meantime, we still have nothing to make this easy for people.\n>>\n>> I think the most important point I read in that thread is [1]. Maybe\n>> what I mentioned in [2] is a good workaround.\n>>\n>> Additionally, I think there will need to be syntax in CREATE INDEX for\n>> this. Without that pg_get_indexdef() might return SQL that does not\n>> reflect the current state of the index. MySQL seems to use \"CREATE\n>> INDEX name ON table (col) [VISIBLE|INVISIBLE]\".\n>>\n>> David\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/20180618215635.m5vrnxdxhxytvmcm%40alap3.anarazel.de\n>> [2]\n>> https://www.postgresql.org/message-id/CAKJS1f_L7y_BTGESp5Qd6BSRHXP0mj3x9O9C_U27GU478UwpBw%40mail.gmail.com\n>>\n>\n>\n> --\n> Kind Regards,\n> Shayon Mukherjee\n>\n\n\n-- \nKind Regards,\nShayon Mukherjee",
"msg_date": "Sun, 22 Sep 2024 14:20:53 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Mon, 23 Sept 2024 at 05:43, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> - Modified get_index_paths() and build_index_paths() to exclude disabled\n> indexes from consideration during query planning.\n\nThere are quite a large number of other places you also need to modify.\n\nHere are 2 places where the index should be ignored but isn't:\n\n1. expression indexes seem to still be used for statistical estimations:\n\ncreate table b as select generate_series(1,1000)b;\ncreate index on b((b%10));\nanalyze b;\nexplain select distinct b%10 from b;\n-- HashAggregate (cost=23.00..23.12 rows=10 width=4)\n\nalter index b_expr_idx disable;\nexplain select distinct b%10 from b;\n-- HashAggregate (cost=23.00..23.12 rows=10 width=4) <-- should be 1000 rows\n\ndrop index b_expr_idx;\nexplain select distinct b%10 from b;\n-- HashAggregate (cost=23.00..35.50 rows=1000 width=4)\n\n2. Indexes seem to still be used for join removals.\n\ncreate table c (c int primary key);\nexplain select c1.* from c c1 left join c c2 on c1.c=c2.c; --\ncorrectly removes join.\nalter index c_pkey disable;\nexplain select c1.* from c c1 left join c c2 on c1.c=c2.c; -- should\nnot remove join.\n\nPlease carefully look over all places that RelOptInfo.indexlist is\nlooked at and consider skipping disabled indexes. Please also take\ntime to find SQL that exercises each of those places so you can verify\nthat the behaviour is correct after your change. This is also a good\nway to learn exactly all cases where indexes are used. Using this\nmethod would have led you to find places like\nrel_supports_distinctness(), where you should be skipping disabled\nindexes.\n\nThe planner should not be making use of disabled indexes for any\noptimisations at all.\n\n> - catversion.h is updated with a new CATALOG_VERSION_NO to reflect change in pg_index\n> schema.\n\nPlease leave that up to the committer. Patch authors doing this just\nresults in the patch no longer applying as soon as someone commits a\nversion bump.\n\nAlso, please get rid of these notices. The command tag serves that\npurpose. It's not interesting that the index is already disabled.\n\n# alter index a_pkey disable;\nNOTICE: index \"a_pkey\" is now disabled\nALTER INDEX\n# alter index a_pkey disable;\nNOTICE: index \"a_pkey\" is already disabled\nALTER INDEX\n\nI've only given the code a very quick glance. I don't quite understand\nwhy you're checking the index is enabled in create_index_paths() and\nget_index_paths(). I think the check should be done only in\ncreate_index_paths(). Primarily, you'll find code such as \"if\n(index->indpred != NIL && !index->predOK)\" in the locations you need\nto consider skipping the disabled index. I think your new code should\nbe located very close to those places or perhaps within the same if\ncondition unless it makes it overly complex for the human reader.\n\nI think the documents should also mention that disabling an index is a\nuseful way to verify an index is not being used before dropping it as\nthe index can be enabled again at the first sign that performance has\nbeen effected. (It might also be good to mention that checking\npg_stat_user_indexes.idx_scan should be the first port of call when\nchecking for unused indexes)\n\nDavid\n\n\n",
"msg_date": "Mon, 23 Sep 2024 10:44:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hi David,\n\nThank you so much for the review and pointers. I totally missed expression indexes. I am going to do another proper pass along with your feedback and follow up with an updated patch and any questions. \n\nExcited to be learning so much about the internals. \nShayon\n\n> On Sep 22, 2024, at 6:44 PM, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> On Mon, 23 Sept 2024 at 05:43, Shayon Mukherjee <shayonj@gmail.com> wrote:\n>> - Modified get_index_paths() and build_index_paths() to exclude disabled\n>> indexes from consideration during query planning.\n> \n> There are quite a large number of other places you also need to modify.\n> \n> Here are 2 places where the index should be ignored but isn't:\n> \n> 1. expression indexes seem to still be used for statistical estimations:\n> \n> create table b as select generate_series(1,1000)b;\n> create index on b((b%10));\n> analyze b;\n> explain select distinct b%10 from b;\n> -- HashAggregate (cost=23.00..23.12 rows=10 width=4)\n> \n> alter index b_expr_idx disable;\n> explain select distinct b%10 from b;\n> -- HashAggregate (cost=23.00..23.12 rows=10 width=4) <-- should be 1000 rows\n> \n> drop index b_expr_idx;\n> explain select distinct b%10 from b;\n> -- HashAggregate (cost=23.00..35.50 rows=1000 width=4)\n> \n> 2. Indexes seem to still be used for join removals.\n> \n> create table c (c int primary key);\n> explain select c1.* from c c1 left join c c2 on c1.c=c2.c; --\n> correctly removes join.\n> alter index c_pkey disable;\n> explain select c1.* from c c1 left join c c2 on c1.c=c2.c; -- should\n> not remove join.\n> \n> Please carefully look over all places that RelOptInfo.indexlist is\n> looked at and consider skipping disabled indexes. Please also take\n> time to find SQL that exercises each of those places so you can verify\n> that the behaviour is correct after your change. This is also a good\n> way to learn exactly all cases where indexes are used. Using this\n> method would have led you to find places like\n> rel_supports_distinctness(), where you should be skipping disabled\n> indexes.\n> \n> The planner should not be making use of disabled indexes for any\n> optimisations at all.\n> \n>> - catversion.h is updated with a new CATALOG_VERSION_NO to reflect change in pg_index\n>> schema.\n> \n> Please leave that up to the committer. Patch authors doing this just\n> results in the patch no longer applying as soon as someone commits a\n> version bump.\n> \n> Also, please get rid of these notices. The command tag serves that\n> purpose. It's not interesting that the index is already disabled.\n> \n> # alter index a_pkey disable;\n> NOTICE: index \"a_pkey\" is now disabled\n> ALTER INDEX\n> # alter index a_pkey disable;\n> NOTICE: index \"a_pkey\" is already disabled\n> ALTER INDEX\n> \n> I've only given the code a very quick glance. I don't quite understand\n> why you're checking the index is enabled in create_index_paths() and\n> get_index_paths(). I think the check should be done only in\n> create_index_paths(). Primarily, you'll find code such as \"if\n> (index->indpred != NIL && !index->predOK)\" in the locations you need\n> to consider skipping the disabled index. 
I think your new code should\n> be located very close to those places or perhaps within the same if\n> condition unless it makes it overly complex for the human reader.\n> \n> I think the documents should also mention that disabling an index is a\n> useful way to verify an index is not being used before dropping it as\n> the index can be enabled again at the first sign that performance has\n> been effected. (It might also be good to mention that checking\n> pg_stat_user_indexes.idx_scan should be the first port of call when\n> checking for unused indexes)\n> \n> David\n\n\n\n",
"msg_date": "Mon, 23 Sep 2024 07:07:11 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On 09.09.24 23:38, Shayon Mukherjee wrote:\n> *Problem*:\n> Adding and removing indexes is a common operation in PostgreSQL. On \n> larger databases, however, these operations can be resource-intensive. \n> When evaluating the performance impact of one or more indexes, dropping \n> them might not be ideal since as a user you may want a quicker way to \n> test their effects without fully committing to removing & adding them \n> back again. Which can be a time taking operation on larger tables.\n> \n> *Proposal*:\n> I propose adding an ALTER INDEX command that allows for enabling or \n> disabling an index globally. This could look something like:\n> \n> ALTER INDEX index_name ENABLE;\n> ALTER INDEX index_name DISABLE;\n> \n> A disabled index would still receive updates and enforce constraints as \n> usual but would not be used for queries. This allows users to assess \n> whether an index impacts query performance before deciding to drop it \n> entirely.\n\nI think a better approach would be to make the list of disabled indexes \na GUC setting, which would then internally have an effect similar to \nenable_indexscan, meaning it would make the listed indexes unattractive \nto the planner.\n\nThis seems better than the proposed DDL command, because you'd be able \nto use this per-session, instead of forcing a global state, and even \nunprivileged users could use it.\n\n(I think we have had proposals like this before, but I can't find the \ndiscussion I'm thinking of right now.)\n\n\n\n",
"msg_date": "Mon, 23 Sep 2024 17:14:15 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "That's a good point.\n\n+1 for the idea of the GUC setting, especially since, as you mentioned, it allows unprivileged users to access it and being per-session..\n\nI am happy to draft a patch for this as well. I think I have a working idea so far of where the necessary checks might go. However if you don’t mind, can you elaborate further on how the effect would be similar to enable_indexscan? \n\nI was thinking we could introduce a new GUC option called `disabled_indexes` and perform a check against in all places for each index being considered with its OID via get_relname_relid through a helper function in the various places we need to prompt the planner to not use the index (like in indxpath.c as an example).\n\nCurious to learn if you have a different approach in mind perhaps?\n\nThank you,\nShayon\n\n\n> On Sep 23, 2024, at 11:14 AM, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 09.09.24 23:38, Shayon Mukherjee wrote:\n>> *Problem*:\n>> Adding and removing indexes is a common operation in PostgreSQL. On larger databases, however, these operations can be resource-intensive. When evaluating the performance impact of one or more indexes, dropping them might not be ideal since as a user you may want a quicker way to test their effects without fully committing to removing & adding them back again. Which can be a time taking operation on larger tables.\n>> *Proposal*:\n>> I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:\n>> ALTER INDEX index_name ENABLE;\n>> ALTER INDEX index_name DISABLE;\n>> A disabled index would still receive updates and enforce constraints as usual but would not be used for queries. This allows users to assess whether an index impacts query performance before deciding to drop it entirely.\n> \n> I think a better approach would be to make the list of disabled indexes a GUC setting, which would then internally have an effect similar to enable_indexscan, meaning it would make the listed indexes unattractive to the planner.\n> \n> This seems better than the proposed DDL command, because you'd be able to use this per-session, instead of forcing a global state, and even unprivileged users could use it.\n> \n> (I think we have had proposals like this before, but I can't find the discussion I'm thinking of right now.)\n> \n\n\n\n",
"msg_date": "Mon, 23 Sep 2024 16:51:57 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "I found an old thread here [0].\n\nAlso, a question: If we go with the GUC approach, how do we expect `pg_get_indexdef` to behave?\n\nI suppose it would behave no differently than it otherwise would, because there's no new SQL grammar to support and, given its GUC status, it seems reasonable that `pg_get_indexdef` doesn’t reflect whether an index is enabled or not. \n\nIf so, then I wonder if using a dedicated `ALTER` command and keeping the state in `pg_index` would be better for consistency's sake?\n\n[0]https://postgrespro.com/list/id/20151212.112536.1628974191058745674.t-ishii@sraoss.co.jp\n\nThank you\nShayon\n\n> On Sep 23, 2024, at 4:51 PM, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> \n> That's a good point.\n> \n> +1 for the idea of the GUC setting, especially since, as you mentioned, it allows unprivileged users to access it and being per-session..\n> \n> I am happy to draft a patch for this as well. I think I have a working idea so far of where the necessary checks might go. However if you don’t mind, can you elaborate further on how the effect would be similar to enable_indexscan? \n> \n> I was thinking we could introduce a new GUC option called `disabled_indexes` and perform a check against in all places for each index being considered with its OID via get_relname_relid through a helper function in the various places we need to prompt the planner to not use the index (like in indxpath.c as an example).\n> \n> Curious to learn if you have a different approach in mind perhaps?\n> \n> Thank you,\n> Shayon\n> \n> \n>> On Sep 23, 2024, at 11:14 AM, Peter Eisentraut <peter@eisentraut.org> wrote:\n>> \n>> On 09.09.24 23:38, Shayon Mukherjee wrote:\n>>> *Problem*:\n>>> Adding and removing indexes is a common operation in PostgreSQL. On larger databases, however, these operations can be resource-intensive. When evaluating the performance impact of one or more indexes, dropping them might not be ideal since as a user you may want a quicker way to test their effects without fully committing to removing & adding them back again. Which can be a time taking operation on larger tables.\n>>> *Proposal*:\n>>> I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:\n>>> ALTER INDEX index_name ENABLE;\n>>> ALTER INDEX index_name DISABLE;\n>>> A disabled index would still receive updates and enforce constraints as usual but would not be used for queries. This allows users to assess whether an index impacts query performance before deciding to drop it entirely.\n>> \n>> I think a better approach would be to make the list of disabled indexes a GUC setting, which would then internally have an effect similar to enable_indexscan, meaning it would make the listed indexes unattractive to the planner.\n>> \n>> This seems better than the proposed DDL command, because you'd be able to use this per-session, instead of forcing a global state, and even unprivileged users could use it.\n>> \n>> (I think we have had proposals like this before, but I can't find the discussion I'm thinking of right now.)\n>> \n> \n\n\nI found an old thread here [0].Also, a question: If we go with the GUC approach, how do we expect `pg_get_indexdef` to behave?I suppose it would behave no differently than it otherwise would, because there's no new SQL grammar to support and, given its GUC status, it seems reasonable that `pg_get_indexdef` doesn’t reflect whether an index is enabled or not. 
If so, then I wonder if using a dedicated `ALTER` command and keeping the state in `pg_index` would be better for consistency's sake?[0]https://postgrespro.com/list/id/20151212.112536.1628974191058745674.t-ishii@sraoss.co.jpThank youShayonOn Sep 23, 2024, at 4:51 PM, Shayon Mukherjee <shayonj@gmail.com> wrote:That's a good point.+1 for the idea of the GUC setting, especially since, as you mentioned, it allows unprivileged users to access it and being per-session..I am happy to draft a patch for this as well. I think I have a working idea so far of where the necessary checks might go. However if you don’t mind, can you elaborate further on how the effect would be similar to enable_indexscan? I was thinking we could introduce a new GUC option called `disabled_indexes` and perform a check against in all places for each index being considered with its OID via get_relname_relid through a helper function in the various places we need to prompt the planner to not use the index (like in indxpath.c as an example).Curious to learn if you have a different approach in mind perhaps?Thank you,ShayonOn Sep 23, 2024, at 11:14 AM, Peter Eisentraut <peter@eisentraut.org> wrote:On 09.09.24 23:38, Shayon Mukherjee wrote:*Problem*:Adding and removing indexes is a common operation in PostgreSQL. On larger databases, however, these operations can be resource-intensive. When evaluating the performance impact of one or more indexes, dropping them might not be ideal since as a user you may want a quicker way to test their effects without fully committing to removing & adding them back again. Which can be a time taking operation on larger tables.*Proposal*:I propose adding an ALTER INDEX command that allows for enabling or disabling an index globally. This could look something like:ALTER INDEX index_name ENABLE;ALTER INDEX index_name DISABLE;A disabled index would still receive updates and enforce constraints as usual but would not be used for queries. This allows users to assess whether an index impacts query performance before deciding to drop it entirely.I think a better approach would be to make the list of disabled indexes a GUC setting, which would then internally have an effect similar to enable_indexscan, meaning it would make the listed indexes unattractive to the planner.This seems better than the proposed DDL command, because you'd be able to use this per-session, instead of forcing a global state, and even unprivileged users could use it.(I think we have had proposals like this before, but I can't find the discussion I'm thinking of right now.)",
"msg_date": "Mon, 23 Sep 2024 17:03:41 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Tue, 24 Sept 2024 at 03:14, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 09.09.24 23:38, Shayon Mukherjee wrote:\n> > ALTER INDEX index_name ENABLE;\n> > ALTER INDEX index_name DISABLE;\n>\n> I think a better approach would be to make the list of disabled indexes\n> a GUC setting, which would then internally have an effect similar to\n> enable_indexscan, meaning it would make the listed indexes unattractive\n> to the planner.\n\nI understand the last discussion went down that route too. For me, it\nseems strange that adding some global variable is seen as cleaner than\nstoring the property in the same location as all the other index\nproperties.\n\nHow would you ensure no cached plans are still using the index after\nchanging the GUC?\n\n> This seems better than the proposed DDL command, because you'd be able\n> to use this per-session, instead of forcing a global state, and even\n> unprivileged users could use it.\n\nThat's true.\n\n> (I think we have had proposals like this before, but I can't find the\n> discussion I'm thinking of right now.)\n\nI think it's the one that was already linked by Nathan. [1]? The GUC\nseems to have been first suggested on the same thread in [2].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n[2] https://www.postgresql.org/message-id/29800.1529359024%40sss.pgh.pa.us\n\n\n",
"msg_date": "Tue, 24 Sep 2024 12:30:59 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On Mon, Sep 23, 2024 at 8:31 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 24 Sept 2024 at 03:14, Peter Eisentraut <peter@eisentraut.org>\n> wrote:\n> >\n> > On 09.09.24 23:38, Shayon Mukherjee wrote:\n> > > ALTER INDEX index_name ENABLE;\n> > > ALTER INDEX index_name DISABLE;\n> >\n> > I think a better approach would be to make the list of disabled indexes\n> > a GUC setting, which would then internally have an effect similar to\n> > enable_indexscan, meaning it would make the listed indexes unattractive\n> > to the planner.\n>\n> I understand the last discussion went down that route too. For me, it\n> seems strange that adding some global variable is seen as cleaner than\n> storing the property in the same location as all the other index\n> properties.\n>\n>\nThat was my first instinct as well. Although, being able to control this\nsetting on a per session level and as an unprivileged user is somewhat\nattractive.\n\n\n> How would you ensure no cached plans are still using the index after\n> changing the GUC?\n>\n\nCould we call ResetPlanCache() to invalidate all plan caches from the\nassign_ hook on GUC when it's set (and doesn't match the old value).\nSomething like this (assuming the GUC is called `disabled_indexes`)\n\nvoid\nassign_disabled_indexes(const char *newval, void *extra)\n{\nif (disabled_indexes != newval)\nResetPlanCache();\n}\n\nA bit heavy-handed, but perhaps it's OK, since it's not meant to be used\nfrequently also ?\n\n\n> > This seems better than the proposed DDL command, because you'd be able\n> > to use this per-session, instead of forcing a global state, and even\n> > unprivileged users could use it.\n>\n> That's true.\n>\n> > (I think we have had proposals like this before, but I can't find the\n> > discussion I'm thinking of right now.)\n>\n> I think it's the one that was already linked by Nathan. [1]? The GUC\n> seems to have been first suggested on the same thread in [2].\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n> [2] https://www.postgresql.org/message-id/29800.1529359024%40sss.pgh.pa.us\n>\n\n\nShayon\n\nOn Mon, Sep 23, 2024 at 8:31 PM David Rowley <dgrowleyml@gmail.com> wrote:On Tue, 24 Sept 2024 at 03:14, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 09.09.24 23:38, Shayon Mukherjee wrote:\n> > ALTER INDEX index_name ENABLE;\n> > ALTER INDEX index_name DISABLE;\n>\n> I think a better approach would be to make the list of disabled indexes\n> a GUC setting, which would then internally have an effect similar to\n> enable_indexscan, meaning it would make the listed indexes unattractive\n> to the planner.\n\nI understand the last discussion went down that route too. For me, it\nseems strange that adding some global variable is seen as cleaner than\nstoring the property in the same location as all the other index\nproperties.\nThat was my first instinct as well. Although, being able to control this setting on a per session level and as an unprivileged user is somewhat attractive. \nHow would you ensure no cached plans are still using the index after\nchanging the GUC?Could we call ResetPlanCache() to invalidate all plan caches from the assign_ hook on GUC when it's set (and doesn't match the old value). 
Something like this (assuming the GUC is called `disabled_indexes`)voidassign_disabled_indexes(const char *newval, void *extra){\tif (disabled_indexes != newval)\t\tResetPlanCache();}A bit heavy-handed, but perhaps it's OK, since it's not meant to be used frequently also ? \n\n> This seems better than the proposed DDL command, because you'd be able\n> to use this per-session, instead of forcing a global state, and even\n> unprivileged users could use it.\n\nThat's true.\n\n> (I think we have had proposals like this before, but I can't find the\n> discussion I'm thinking of right now.)\n\nI think it's the one that was already linked by Nathan. [1]? The GUC\nseems to have been first suggested on the same thread in [2].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com\n[2] https://www.postgresql.org/message-id/29800.1529359024%40sss.pgh.pa.us\nShayon",
"msg_date": "Mon, 23 Sep 2024 21:44:17 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "If one of the use cases is soft-dropping indexes, would a GUC approach\nstill support that? ALTER TABLE?\n\nIf one of the use cases is soft-dropping indexes, would a GUC approach still support that? ALTER TABLE?",
"msg_date": "Mon, 23 Sep 2024 21:37:58 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hello,\n\nRegarding GUC implementation for index disabling, I was imagining something\nlike the attached PATCH. The patch compiles and can be applied for testing.\nIt's not meant to be production ready, but I am sharing it as a way to get\na sense of the nuts and bolts. It requires more proper test cases and docs,\netc. Example towards the end of the email.\n\nThat said, I am still quite torn between GUC setting or having a dedicated\nALTER grammar. My additional thoughts which is mostly a summary of what\nDavid and Peter have already very nicely raised earlier are:\n\n- GUC allows a non-privileged user to disable one or more indexes per\nsession.\n\n- If we think of the task of disabling indexes temporarily (without\nstopping any updates to the index), then it feels more in the territory of\nquery tuning than index maintenance. In which case, a GUC setting makes\nmore sense and sits well with others in the team like enable_indexscan,\nenable_indexonlyscan and so on.\n\n- At the same time, as David pointed out earlier, GUC is also a global\nsetting and perhaps storing the state of whether or not an index is being\nused is perhaps better situated along with other index properties in\npg_index.\n\n- One of my original motivations for the proposal was also that we can\ndisable an index for _all_ sessions quickly without it impacting index\nbuild and turn it back on quickly as well. To do so with GUC, we would need\nto do something like the following, if I am not mistaken, in which case\nthat is not something an unprivileged user may be able to perform, so just\ncalling it out.\n\n ALTER USER example_user SET disabled_indexes = 'idx_foo_bar';\n\n- For an ALTER statement, I think an ALTER INDEX makes more sense than\nALTER TABLE, especially since we have the existing ALTER INDEX grammar and\nfunctionality. But let me know if I am missing something here.\n\n- Resetting plan cache could still be an open question for GUC. I was\nwondering if we can reset the plan cache local to the session for GUC (like\nthe one in the PATCH attached) and if that is enough? This concern doesn't\napply with managing property in pg_index.\n\n- With a GUC attribute, the state of an index being enabled/disabled won't\nbe captured in pg_get_indexdef(), and that is likely OK, but maybe that\nwould need to be made explicit through docs.\n\nExample 1\n\nCREATE TABLE b AS SELECT generate_series(1,1000) AS b;\nCREATE INDEX ON b((b%10));\nANALYZE b;\nEXPLAIN SELECT DISTINCT b%10 FROM b;\n\nSET disabled_indexes = 'b_expr_idx';\n\nEXPLAIN SELECT DISTINCT b%10 FROM b; -- HashAggregate rows=10000\n\nExample 2\n\nCREATE TABLE disabled_index_test(id int PRIMARY KEY, data text);\nINSERT INTO disabled_index_test SELECT g, 'data ' || g FROM\ngenerate_series(1, 1000) g;\nCREATE INDEX disabled_index_idx1 ON disabled_index_test(data);\nEXPLAIN (COSTS OFF) SELECT * FROM disabled_index_test WHERE data = 'data\n500';\n\nSET disabled_indexes = 'b_expr_idx, disabled_index_idx1';\n\nEXPLAIN SELECT * FROM disabled_index_test WHERE data = 'data 500'; -- no\nindex is used\n\nWrapping up...\n\nI am sure there are things I am missing or unintentionally overlooking.\nSince this would be a nice feature to have, I'd love some guidance on which\napproach seems like a good next step to take. I am happy to work\naccordingly on the patch.\n\nThank you\nShayon\n\nOn Tue, Sep 24, 2024 at 12:38 AM Maciek Sakrejda <m.sakrejda@gmail.com>\nwrote:\n\n> If one of the use cases is soft-dropping indexes, would a GUC approach\n> still support that? 
ALTER TABLE?\n>\n\n\n-- \nKind Regards,\nShayon Mukherjee",
"msg_date": "Tue, 24 Sep 2024 09:19:45 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On 23.09.24 22:51, Shayon Mukherjee wrote:\n> I am happy to draft a patch for this as well. I think I have a working\n> idea so far of where the necessary checks might go. However if you don’t\n> mind, can you elaborate further on how the effect would be similar to\n> enable_indexscan?\n\nPlanner settings like enable_indexscan used to just add a large number \n(disable_cost) to the estimated plan node costs. It's a bit more \nsophisticated in PG17. But in any case, I imagine \"disabling an index\" \ncould use the same mechanism. Or maybe not, maybe the setting would \njust completely ignore the index.\n\n\n",
"msg_date": "Tue, 24 Sep 2024 20:08:10 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "On 24.09.24 02:30, David Rowley wrote:\n> I understand the last discussion went down that route too. For me, it\n> seems strange that adding some global variable is seen as cleaner than\n> storing the property in the same location as all the other index\n> properties.\n\nIt's arguably not actually a property of the index, it's a property of \nthe user's session. Like, kind of, the search path is a session \nproperty, not a property of a schema.\n\n> How would you ensure no cached plans are still using the index after\n> changing the GUC?\n\nSomething for the patch author to figure out. ;-)\n\n\n",
"msg_date": "Tue, 24 Sep 2024 20:11:23 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> On 23.09.24 22:51, Shayon Mukherjee wrote:\n>> I am happy to draft a patch for this as well. I think I have a working\n>> idea so far of where the necessary checks might go. However if you don’t\n>> mind, can you elaborate further on how the effect would be similar to\n>> enable_indexscan?\n\n> Planner settings like enable_indexscan used to just add a large number \n> (disable_cost) to the estimated plan node costs. It's a bit more \n> sophisticated in PG17. But in any case, I imagine \"disabling an index\" \n> could use the same mechanism. Or maybe not, maybe the setting would \n> just completely ignore the index.\n\nYeah, I'd be inclined to implement this by having create_index_paths\njust not make any paths for rejected indexes. Or you could back up\nanother step and keep plancat.c from building IndexOptInfos for them.\nThe latter might have additional effects, such as preventing the plan\nfrom relying on a uniqueness condition enforced by the index. Not\nclear to me if that's desirable or not.\n\n[ thinks... ] One good reason for implementing it in plancat.c is\nthat you'd have the index relation open and be able to see its name\nfor purposes of matching to the filter. Anywhere else, getting the\nname would involve additional overhead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Sep 2024 14:21:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Thank you for the historical context and working, I understand what you were referring to before now. \n\nShayon\n\n> On Sep 24, 2024, at 2:08 PM, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 23.09.24 22:51, Shayon Mukherjee wrote:\n>> I am happy to draft a patch for this as well. I think I have a working\n>> idea so far of where the necessary checks might go. However if you don’t\n>> mind, can you elaborate further on how the effect would be similar to\n>> enable_indexscan?\n> \n> Planner settings like enable_indexscan used to just add a large number (disable_cost) to the estimated plan node costs. It's a bit more sophisticated in PG17. But in any case, I imagine \"disabling an index\" could use the same mechanism. Or maybe not, maybe the setting would just completely ignore the index.\n\n\n\n",
"msg_date": "Tue, 24 Sep 2024 15:38:08 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
},
{
"msg_contents": "Hello,\n\nI am back with a PATCH :). Thanks to everyone in the threads for all the helpful discussions.\n\nThis proposal is for a PATCH to introduce a GUC variable to disable specific indexes during query planning.\n\nThis is an alternative approach to the previous PATCH I had proposed and is improved upon after some of the recent discussions in the thread. The PATCH contains the relevant changes, regression tests, and documentation.\n\nI went with the GUC approach to introduce a way for a user to disable indexes during query planning over dedicated SQL Grammar and introducing the `isenabled` attribute in `pg_index` for the following reasons:\n\n- Inspired by the discussions brought in earlier about this setting being something that unprivileged users can benefit from versus an ALTER statement.\n- A GUC variable felt more closely aligned with the query tuning purpose, which this feature would serve, over index maintenance, the state of which is more closely reflected in `pg_index`.\n\nImplementation details:\n\nThe patch introduces a new GUC parameter `disabled_indexes` that allows users to specify a comma-separated list of indexes to be ignored during query planning. Key aspects:\n\n- Adds a new `isdisabled` attribute to the `IndexOptInfo` structure.\n- Modifies `get_relation_info` in `plancat.c` to skip disabled indexes entirely, thus reducing the number of places we need to check if an index is disabled or not.\n- Implements GUC hooks for parameter validation and assignment.\n- Resets the plan cache when the `disabled_indexes` list is modified through `ResetPlanCache()`\n\nI chose to modify the logic within `get_relation_info` as compared to, say, reducing the cost to make the planner not consider an index during planning, mostly to keep the number of changes being introduced to a minimum and also the logic itself being self-contained and easier to under perhaps (?).\n\nAs mentioned before, this does not impact the building of the index. That still happens.\n\nI have added regression tests for:\n\n- Basic single-column and multi-column indexes\n- Partial indexes\n- Expression indexes\n- Join indexes\n- GIN and GiST indexes\n- Covering indexes\n- Range indexes\n- Unique indexes and constraints\n\nI'd love to hear any feedback on the proposed PATCH and also the overall approach.\n\n\n\n> On Sep 24, 2024, at 9:19 AM, Shayon Mukherjee <shayonj@gmail.com> wrote:\n> \n> -- \n> Kind Regards,\n> Shayon Mukherjee\n> <v1-0001-Proof-of-Concept-Ability-to-enable-disable-indexe.patch>",
"msg_date": "Thu, 26 Sep 2024 13:39:23 -0400",
"msg_from": "Shayon Mukherjee <shayonj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal to Enable/Disable Index using ALTER INDEX"
}
] |
[
{
"msg_contents": "hi,\none minor issue. not that minor,\nsince many DDLs need to consider the system attribute.\n\nlooking at these functions:\nSearchSysCacheCopyAttName\nSearchSysCacheAttName\nget_attnum\n\nget_attnum says:\nReturns InvalidAttrNumber if the attr doesn't exist (or is dropped).\n\nSo I conclude that \"attnum == 0\" is not related to the idea of a system column.\n\n\nfor example, ATExecColumnDefault, following code snippet,\nthe second ereport should be \"if (attnum < 0)\"\n==========\n attnum = get_attnum(RelationGetRelid(rel), colName);\n if (attnum == InvalidAttrNumber)\n ereport(ERROR,\n (errcode(ERRCODE_UNDEFINED_COLUMN),\n errmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" does not exist\",\n colName, RelationGetRelationName(rel))));\n\n /* Prevent them from altering a system attribute */\n if (attnum <= 0)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"cannot alter system column \\\"%s\\\"\",\n colName)));\n==========\nbut there are many occurrences of \"attnum <= 0\".\nI am sure tablecmds.c, we can change to \"attnum < 0\".\nnot that sure with other places.\n\nIn some places in tablecmd.c,\nwe already use \"attnum < 0\" to represent the system attribute.\nso it's kind of inconsistent already.\n\nShould we do the change?\n\n\n",
"msg_date": "Tue, 10 Sep 2024 11:16:34 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "change \"attnum <=0\" to \"attnum <0\" for better reflect system\n attribute"
},
{
"msg_contents": "jian he <jian.universality@gmail.com> writes:\n> get_attnum says:\n> Returns InvalidAttrNumber if the attr doesn't exist (or is dropped).\n\n> So I conclude that \"attnum == 0\" is not related to the idea of a system column.\n\nattnum = 0 is also used for whole-row Vars. This is a pretty\nunfortunate choice given the alternative meaning of \"invalid\",\nbut cleaning it up would be a daunting task (with not a whole\nlot of payoff in the end, AFAICS). It's deeply embedded.\n\nThat being the case, you have to tread *very* carefully when\nconsidering making changes like this.\n\n> for example, ATExecColumnDefault, following code snippet,\n> the second ereport should be \"if (attnum < 0)\"\n\n> /* Prevent them from altering a system attribute */\n> if (attnum <= 0)\n\nI think that's just fine as-is. Sure, the == case is unreachable,\nbut it is very very common to consider whole-row Vars as being more\nlike system attributes than user attributes. In this particular\ncase, for sure we don't want to permit attaching a default to a\nwhole-row Var. So I'm content to allow the duplicative rejection.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Sep 2024 23:30:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: change \"attnum <=0\" to \"attnum <0\" for better reflect system\n attribute"
},
{
"msg_contents": "On Tue, Sep 10, 2024 at 8:46 AM jian he <jian.universality@gmail.com> wrote:\n>\n> hi,\n> one minor issue. not that minor,\n> since many DDLs need to consider the system attribute.\n>\n> looking at these functions:\n> SearchSysCacheCopyAttName\n> SearchSysCacheAttName\n> get_attnum\n>\n> get_attnum says:\n> Returns InvalidAttrNumber if the attr doesn't exist (or is dropped).\n>\n> So I conclude that \"attnum == 0\" is not related to the idea of a system column.\n>\n>\n> for example, ATExecColumnDefault, following code snippet,\n> the second ereport should be \"if (attnum < 0)\"\n> ==========\n> attnum = get_attnum(RelationGetRelid(rel), colName);\n> if (attnum == InvalidAttrNumber)\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_COLUMN),\n> errmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" does not exist\",\n> colName, RelationGetRelationName(rel))));\n>\n> /* Prevent them from altering a system attribute */\n> if (attnum <= 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"cannot alter system column \\\"%s\\\"\",\n> colName)));\n> ==========\n> but there are many occurrences of \"attnum <= 0\".\n> I am sure tablecmds.c, we can change to \"attnum < 0\".\n> not that sure with other places.\n\nWhat it really means is \"Prevent them from altering any attribute not\ndefined by user\" - a whole row reference is not defined explicitly by\nuser; it's collection of user defined attributes and it's not\ncataloged.\n\nI think we generally confuse between system attribute and !(user\nattribute); the grey being attnum = 0. It might be better to create\nmacros for these cases and use them to make their usage clear.\n\ne.g. #define ATTNUM_IS_SYSTEM(attnum) ((attnum) < 0)\n #define ATTNUM_IS_USER_DEFINED(attnum) ((attnum) > 0)\n #define WholeRowAttrNumber 0\nadd good comments about usage near their definitions and use\nappropriately in the code.\n\n Example above would then turn into (notice ! in the condition)\n /* Prevent them from altering an attribute not defined by user */\n if (!ATTNUM_IS_USER_DEFINED(attnum) )\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"attribute \\\"%s\\\" is not a user-defined attribute\",\n colName)));\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 10 Sep 2024 09:36:21 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: change \"attnum <=0\" to \"attnum <0\" for better reflect system\n attribute"
}
] |
[
{
"msg_contents": "In some cases, we may want to transfer a HAVING clause into WHERE in\nhopes of eliminating tuples before aggregation instead of after.\n\nPreviously, we couldn't do this if there were any nonempty grouping\nsets, because we didn't have a way to tell if the HAVING clause\nreferenced any columns that were nullable by the grouping sets, and\nmoving such a clause into WHERE could potentially change the results.\n\nNow, with expressions marked nullable by grouping sets with the RT\nindex of the RTE_GROUP RTE, it is much easier to identify those\nclauses that reference any nullable-by-grouping-sets columns: we just\nneed to check if the RT index of the RTE_GROUP RTE is present in the\nclause. For other HAVING clauses, they can be safely pushed down.\n\nI'm not sure how common it is in practice to have a HAVING clause\nwhere all referenced columns are present in all the grouping sets.\nBut it seems to me that this optimization doesn't cost too much. Not\nimplementing it seems like leaving money on the table.\n\nAny thoughts?\n\nThanks\nRichard",
"msg_date": "Wed, 11 Sep 2024 11:43:47 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow pushdown of HAVING clauses with grouping sets"
}
] |
[
{
"msg_contents": "At the end of SetupLockInTable(), there is a check for the \"lock already\nheld\" error.\nBecause the nRequested and requested[lockmode] value of a lock is bumped\nbefore \"lock already held\" error, and there is no way to reduce them later\nfor\nthis situation, then it will keep the inconsistency in lock structure until\ncluster restart or reset.\n\nThe inconsistency is:\n* nRequested will never reduce to zero, the lock will never be\ngarbage-collected\n* if there is a waitMask for this lock, the waitMast will never be removed,\nthen\n new proc will be blocked to wait for a lock with zero holder\n (looks weird in the pg_locks table)\n\nI think moving the \"lock already held\" error before the bump operation of\nnRequested\nand requested[lockmode] value in SetupLockInTable() will fix it.\n(maybe also fix the lock_twophase_recover() function)\n\nTo recreate the inconsistency:\n1. create a backend 1 to lock table a, keep it idle in transaction\n2. terminate backend 1 and hack it to skip the LockReleaseAll() function\n3. create another backend 2 to lock table a, it will wait for the lock to\nrelease\n4. reuse the backend 1 (reuse the same proc) to lock table a again,\n it will trigger the \"lock already held\" error\n5. quit both backend 1 and 2\n6. create backend 3 to lock table a, it will wait for the lock's waitMask\n7. check the pg_locks table\n\n-- \nGaoZengqi\npgf00a@gmail.com\nzengqigao@gmail.com\n\nAt the end of SetupLockInTable(), there is a check for the \"lock already held\" error.Because the nRequested and requested[lockmode] value of a lock is bumpedbefore \"lock already held\" error, and there is no way to reduce them later forthis situation, then it will keep the inconsistency in lock structure untilcluster restart or reset.The inconsistency is:* nRequested will never reduce to zero, the lock will never be garbage-collected* if there is a waitMask for this lock, the waitMast will never be removed, then new proc will be blocked to wait for a lock with zero holder (looks weird in the pg_locks table)I think moving the \"lock already held\" error before the bump operation of nRequestedand requested[lockmode] value in SetupLockInTable() will fix it.(maybe also fix the lock_twophase_recover() function)To recreate the inconsistency:1. create a backend 1 to lock table a, keep it idle in transaction2. terminate backend 1 and hack it to skip the LockReleaseAll() function3. create another backend 2 to lock table a, it will wait for the lock to release4. reuse the backend 1 (reuse the same proc) to lock table a again, it will trigger the \"lock already held\" error5. quit both backend 1 and 26. create backend 3 to lock table a, it will wait for the lock's waitMask7. check the pg_locks table-- GaoZengqipgf00a@gmail.comzengqigao@gmail.com",
"msg_date": "Wed, 11 Sep 2024 14:30:49 +0800",
"msg_from": "=?UTF-8?B?6auY5aKe55Cm?= <pgf00a@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inconsistency in lock structure after reporting \"lock already held\"\n error"
},
{
"msg_contents": "The attached patch attempts to fix this.\n\n高增琦 <pgf00a@gmail.com> 于2024年9月11日周三 14:30写道:\n\n> At the end of SetupLockInTable(), there is a check for the \"lock already\n> held\" error.\n> Because the nRequested and requested[lockmode] value of a lock is bumped\n> before \"lock already held\" error, and there is no way to reduce them later\n> for\n> this situation, then it will keep the inconsistency in lock structure until\n> cluster restart or reset.\n>\n> The inconsistency is:\n> * nRequested will never reduce to zero, the lock will never be\n> garbage-collected\n> * if there is a waitMask for this lock, the waitMast will never be\n> removed, then\n> new proc will be blocked to wait for a lock with zero holder\n> (looks weird in the pg_locks table)\n>\n> I think moving the \"lock already held\" error before the bump operation of\n> nRequested\n> and requested[lockmode] value in SetupLockInTable() will fix it.\n> (maybe also fix the lock_twophase_recover() function)\n>\n> To recreate the inconsistency:\n> 1. create a backend 1 to lock table a, keep it idle in transaction\n> 2. terminate backend 1 and hack it to skip the LockReleaseAll() function\n> 3. create another backend 2 to lock table a, it will wait for the lock to\n> release\n> 4. reuse the backend 1 (reuse the same proc) to lock table a again,\n> it will trigger the \"lock already held\" error\n> 5. quit both backend 1 and 2\n> 6. create backend 3 to lock table a, it will wait for the lock's waitMask\n> 7. check the pg_locks table\n>\n> --\n> GaoZengqi\n> pgf00a@gmail.com\n> zengqigao@gmail.com\n>\n\n\n-- \nGaoZengqi\npgf00a@gmail.com\nzengqigao@gmail.com",
"msg_date": "Thu, 12 Sep 2024 10:28:42 +0800",
"msg_from": "=?UTF-8?B?6auY5aKe55Cm?= <pgf00a@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in lock structure after reporting \"lock already\n held\" error"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nCurrently, in the pg_stat_io view, IOs are counted as blocks. However,\r\nthere are two issues with this approach:\r\n\r\n1- The actual number of IO requests to the kernel is lower because IO\r\nrequests can be merged before sending the final request. Additionally, it\r\nappears that all IOs are counted in block size.\r\n2- Some IOs may not align with block size. For example, WAL read IOs are\r\ndone in variable bytes and it is not possible to correctly show these IOs\r\nin the pg_stat_io view [1].\r\n\r\nTo address this, I propose showing the total number of IO requests to the\r\nkernel (as smgr function calls) and the total number of bytes in the IO. To\r\nimplement this change, the op_bytes column will be removed from the\r\npg_stat_io view. Instead, the [reads | writes | extends] columns will track\r\nthe total number of IO requests, and newly added [read | write |\r\nextend]_bytes columns will track the total number of bytes in the IO.\r\n\r\nExample benefit of this change:\r\n\r\nRunning query [2], the result is:\r\n\r\n╔═══════════════════╦══════════╦══════════╦═══════════════╗\r\n║ backend_type ║ object ║ context ║ avg_io_blocks ║\r\n╠═══════════════════╬══════════╬══════════╬═══════════════╣\r\n║ client backend ║ relation ║ bulkread ║ 15.99 ║\r\n╠═══════════════════╬══════════╬══════════╬═══════════════╣\r\n║ background worker ║ relation ║ bulkread ║ 15.99 ║\r\n╚═══════════════════╩══════════╩══════════╩═══════════════╝\r\n\r\nYou can rerun the same query [2] after setting io_combine_limit to 32 [3].\r\nThe result is:\r\n\r\n╔═══════════════════╦══════════╦══════════╦═══════════════╗\r\n║ backend_type ║ object ║ context ║ avg_io_blocks ║\r\n╠═══════════════════╬══════════╬══════════╬═══════════════╣\r\n║ client backend ║ relation ║ bulkread ║ 31.70 ║\r\n╠═══════════════════╬══════════╬══════════╬═══════════════╣\r\n║ background worker ║ relation ║ bulkread ║ 31.60 ║\r\n╚═══════════════════╩══════════╩══════════╩═══════════════╝\r\n\r\nI believe that having visibility into avg_io_[bytes | blocks] is valuable\r\ninformation that could help optimize Postgres.\r\n\r\nAny feedback would be appreciated.\r\n\r\n[1]\r\nhttps://www.postgresql.org/message-id/CAN55FZ1ny%2B3kpdm5X3nGZ2Jp3wxZO-744eFgxktS6YQ3%3DOKR-A%40mail.gmail.com\r\n\r\n[2]\r\nCREATE TABLE t as select i, repeat('a', 600) as filler from\r\ngenerate_series(1, 10000000) as i;\r\nSELECT pg_stat_reset_shared('io');\r\nSELECT * FROM t WHERE i = 0;\r\nSELECT backend_type, object, context, TRUNC((read_bytes / reads / (SELECT\r\ncurrent_setting('block_size')::numeric)), 2) as avg_io_blocks FROM\r\npg_stat_io WHERE reads > 0;\r\n\r\n[3] SET io_combine_limit TO 32;\r\n\r\n-- \r\nRegards,\r\nNazir Bilal Yavuz\r\nMicrosoft",
"msg_date": "Wed, 11 Sep 2024 14:19:02 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make pg_stat_io view count IOs as bytes instead of blocks"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile working on a new pg_logicalinspect module ([1]), I reached a point where\nall the CI tests were green except the compiler warnings one. Then, to save time\naddressing the issue, I modified the .cirrus.tasks.yml file to $SUBJECT.\n\nI think this could be useful for others too, so please find attached this tiny\npatch.\n\nNote that the patch does not add an extra \"ci-task-only\", but for simplicity it\nit renames ci-os-only to ci-task-only.\n\n\n[1]: https://www.postgresql.org/message-id/ZuF2Okt7aBR//bxu%40ip-10-97-1-34.eu-west-3.compute.internal\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 11 Sep 2024 12:36:25 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow CI to only run the compiler warnings task"
},
{
"msg_contents": "Hi,\n\nOn Wed, 11 Sept 2024 at 15:36, Bertrand Drouvot\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> While working on a new pg_logicalinspect module ([1]), I reached a point where\n> all the CI tests were green except the compiler warnings one. Then, to save time\n> addressing the issue, I modified the .cirrus.tasks.yml file to $SUBJECT.\n>\n> I think this could be useful for others too, so please find attached this tiny\n> patch.\n\n+1 for this. I encountered the same issue before.\n\n> Note that the patch does not add an extra \"ci-task-only\", but for simplicity it\n> it renames ci-os-only to ci-task-only.\n\nI think this change makes sense. I gave a quick try to your patch with\nci-task-only: [\"\", \"linux\", \"compilerwarnings\"] and it worked as\nexpected.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 11 Sep 2024 16:39:57 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow CI to only run the compiler warnings task"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 11, 2024 at 04:39:57PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> On Wed, 11 Sept 2024 at 15:36, Bertrand Drouvot\n> <bertranddrouvot.pg@gmail.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > While working on a new pg_logicalinspect module ([1]), I reached a point where\n> > all the CI tests were green except the compiler warnings one. Then, to save time\n> > addressing the issue, I modified the .cirrus.tasks.yml file to $SUBJECT.\n> >\n> > I think this could be useful for others too, so please find attached this tiny\n> > patch.\n> \n> +1 for this. I encountered the same issue before.\n\nThanks for the feedback!\n\n> \n> > Note that the patch does not add an extra \"ci-task-only\", but for simplicity it\n> > it renames ci-os-only to ci-task-only.\n> \n> I think this change makes sense. I gave a quick try to your patch with\n> ci-task-only: [\"\", \"linux\", \"compilerwarnings\"] and it worked as\n> expected.\n\nAnd for the testing.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 12 Sep 2024 05:21:43 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow CI to only run the compiler warnings task"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit 4ed8f0913bfd introduced long SLRU file names. The proposed\npatch removes SlruCtl->long_segment_names flag and makes SLRU always\nuse long file names. This simplifies both the code and the API.\nCorresponding changes to pg_upgrade are included.\n\nOne drawback I see is that technically SLRU is an exposed API and\nchanging it may affect third-party code. I'm not sure if we should\nseriously worry about this. Firstly, the change is trivial and\nsecondly, it's not clear whether such third-party code even exists (we\nbroke this API just recently in 4ed8f0913bfd and no one complained).\n\nI didn't include any tests for the new pg_upgrade code. To my\nknowledge we test it manually, with buildfarm members and during\nalpha- and beta-testing periods. Please let me know if you think there\nshould be a corresponding TAP test.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 11 Sep 2024 16:07:06 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Refactor SLRU to always use long file names"
},
{
"msg_contents": "On Wed, Sep 11, 2024 at 04:07:06PM +0300, Aleksander Alekseev wrote:\n> Commit 4ed8f0913bfd introduced long SLRU file names. The proposed\n> patch removes SlruCtl->long_segment_names flag and makes SLRU always\n> use long file names. This simplifies both the code and the API.\n> Corresponding changes to pg_upgrade are included.\n\nThat's leaner, indeed.\n\n> One drawback I see is that technically SLRU is an exposed API and\n> changing it may affect third-party code. I'm not sure if we should\n> seriously worry about this. Firstly, the change is trivial and\n> secondly, it's not clear whether such third-party code even exists (we\n> broke this API just recently in 4ed8f0913bfd and no one complained).\n\nAny third-party code using custom SLRUs would need to take care of\nhandling their upgrade path outside pg_upgrade. Not sure there are\nany of them, TBH, but let's see.\n\n> I didn't include any tests for the new pg_upgrade code. To my\n> knowledge we test it manually, with buildfarm members and during\n> alpha- and beta-testing periods. Please let me know if you think there\n> should be a corresponding TAP test.\n\nRemoving the old API means that it is impossible to test a move from\nshort to long file names. That's OK by me to rely on the pg_upgrade\npaths in the buildfarm code. We have a few of them.\n\nThere is one thing I am wondering, here, though, which is to think\nharder about a validity check at the end of 002_pg_upgrade.pl to make\nsure that all the SLRU use long file names after running the tests.\nThat would mean thinking about a mechanism to list all of them from a\nbackend, rather than hardcode a list of them. Perhaps that's not\nworth it, just dropping an idea in the bucket of ideas. I would guess\nin the shape of a catalog that's able to represent at SQL level all\nthe SLRUs that exist in a backend.\n--\nMichael",
"msg_date": "Thu, 12 Sep 2024 08:55:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Refactor SLRU to always use long file names"
},
{
"msg_contents": "Hi Michael,\n\n> On Wed, Sep 11, 2024 at 04:07:06PM +0300, Aleksander Alekseev wrote:\n> > Commit 4ed8f0913bfd introduced long SLRU file names. The proposed\n> > patch removes SlruCtl->long_segment_names flag and makes SLRU always\n> > use long file names. This simplifies both the code and the API.\n> > Corresponding changes to pg_upgrade are included.\n>\n> That's leaner, indeed.\n>\n> > One drawback I see is that technically SLRU is an exposed API and\n> > changing it may affect third-party code. I'm not sure if we should\n> > seriously worry about this. Firstly, the change is trivial and\n> > secondly, it's not clear whether such third-party code even exists (we\n> > broke this API just recently in 4ed8f0913bfd and no one complained).\n>\n> Any third-party code using custom SLRUs would need to take care of\n> handling their upgrade path outside pg_upgrade. Not sure there are\n> any of them, TBH, but let's see.\n>\n> > I didn't include any tests for the new pg_upgrade code. To my\n> > knowledge we test it manually, with buildfarm members and during\n> > alpha- and beta-testing periods. Please let me know if you think there\n> > should be a corresponding TAP test.\n>\n> Removing the old API means that it is impossible to test a move from\n> short to long file names. That's OK by me to rely on the pg_upgrade\n> paths in the buildfarm code. We have a few of them.\n\nThanks for the feedback.\n\n> There is one thing I am wondering, here, though, which is to think\n> harder about a validity check at the end of 002_pg_upgrade.pl to make\n> sure that all the SLRU use long file names after running the tests.\n> That would mean thinking about a mechanism to list all of them from a\n> backend, rather than hardcode a list of them. Perhaps that's not\n> worth it, just dropping an idea in the bucket of ideas. I would guess\n> in the shape of a catalog that's able to represent at SQL level all\n> the SLRUs that exist in a backend.\n\nHmm... IMO it would be a rather niche facility to maintain in PG core.\nAt least I'm not aware of cases when a DBA wanted to list initialized\nSLRUs. Would it be convenient for core / extensions developers?\nCreating a breakpoint on SimpleLruInit() or adding a temporary elog()\nsounds simpler to me.\n\nIt wouldn't hurt re-checking the segment file names in the TAP test\nbut this would mean hardcoding catalog names which as I understand you\nwant to avoid. With high probability PG wouldn't start if the\ncorresponding piece of pg_upgrade is wrong (I checked more than once\n:). So I'm not entirely sure if it's worth the effort, but let's see\nwhat others think.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 12 Sep 2024 12:33:14 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Refactor SLRU to always use long file names"
}
] |
[
{
"msg_contents": "Hi,\n\nDuring the PGDAY.Spain'2024 Yurii Rashkovskii found out boring behaviour \nrelated to functions versioning:\nIf you open a transaction in one session and execute the function, then \nreplace this function with a new version or drop it in a concurrent \nsession; the first session doesn't see any changes: it will use the old \nversion of the function in the following query.\nIf you execute the function without explicit transaction, subsequent \nquery execution will employ the new version of the function.\n\nIt happens because of documented behaviour [1], which doesn't guarantee \nisolation levels for internal access to the system catalogues. But it \nlooks more consistent for a user to see changes the same way with and \nwithout explicit transactions, doesn't it? At least, an extension \ndeveloper may be confident that after updating to a new version, no one \nuser will employ the old version of the extension's function and not \nthink about the consequences it may cause.\n\nI don't know whether to classify this as a bug. The sketch of the patch \nwith an example isolation test is attached.\n\n[1] https://www.postgresql.org/docs/16/mvcc-caveats.html\n\n-- \nregards, Andrei Lepikhov",
"msg_date": "Wed, 11 Sep 2024 21:50:44 +0200",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Accept invalidation messages before the query starts inside a\n transaction"
},
{
"msg_contents": "Andrei Lepikhov <lepihov@gmail.com> writes:\n> I don't know whether to classify this as a bug. The sketch of the patch \n> with an example isolation test is attached.\n\nThis seems like an extremely strange place (and an extremely\nbrute-force way) to insert an AcceptInvalidationMessages call.\nUnder what circumstances wouldn't we do one or more AIMs later\non, eg while acquiring locks?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Sep 2024 15:55:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Accept invalidation messages before the query starts inside a\n transaction"
},
{
"msg_contents": "\n\n> On 12 Sep 2024, at 00:50, Andrei Lepikhov <lepihov@gmail.com> wrote:\n> \n> It happens because of documented behaviour [1], which doesn't guarantee isolation levels for internal access to the system catalogues. \n\nAs far as I understood you are proposing not isolation guaranties, but allowed isolation anomaly.\nI think we do not guaranty anomalies.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 12 Sep 2024 09:48:57 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Accept invalidation messages before the query starts inside a\n transaction"
},
{
"msg_contents": "On Wednesday, September 11, 2024, Andrei Lepikhov <lepihov@gmail.com> wrote:\n>\n>\n> I don't know whether to classify this as a bug.\n>\n> [1] https://www.postgresql.org/docs/16/mvcc-caveats.html\n>\n>\nSeems we need to add another sentence to that final paragraph. Something\nlike:\n\nHowever, once an object is accessed within a transaction its definition is\ncached for the duration of the transaction. Of particular note are routine\nbodies modified using create or replace. If a replacement gets committed\nmid-transaction the old code body will still be executed for the remainder\nof the transaction.\n\nDavid J.\n\nOn Wednesday, September 11, 2024, Andrei Lepikhov <lepihov@gmail.com> wrote:\n\nI don't know whether to classify this as a bug.\n\n[1] https://www.postgresql.org/docs/16/mvcc-caveats.htmlSeems we need to add another sentence to that final paragraph. Something like:However, once an object is accessed within a transaction its definition is cached for the duration of the transaction. Of particular note are routine bodies modified using create or replace. If a replacement gets committed mid-transaction the old code body will still be executed for the remainder of the transaction.David J.",
"msg_date": "Wed, 11 Sep 2024 23:31:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Accept invalidation messages before the query starts inside a\n transaction"
},
{
"msg_contents": "On 12/9/2024 08:31, David G. Johnston wrote:\n> On Wednesday, September 11, 2024, Andrei Lepikhov <lepihov@gmail.com \n> <mailto:lepihov@gmail.com>> wrote:\n> \n> \n> I don't know whether to classify this as a bug.\n> \n> [1] https://www.postgresql.org/docs/16/mvcc-caveats.html\n> <https://www.postgresql.org/docs/16/mvcc-caveats.html>\n> \n> \n> Seems we need to add another sentence to that final paragraph. \n> Something like:\n> \n> However, once an object is accessed within a transaction its definition \n> is cached for the duration of the transaction. Of particular note are \n> routine bodies modified using create or replace. If a replacement gets \n> committed mid-transaction the old code body will still be executed for \n> the remainder of the transaction.\nI'm not sure we need any changes here yet unless Yurii provides an \nexample of a real issue. But for the record, let me explain the initial \nreason in more detail.\nExtensions have always been challenging to maintain because they include \nhook calls from the core and stored procedures visible in databases \nwhere the extension was created. At the same time, we should control the \nUPDATE/DROP/CREATE extension commands. And don't forget, we have some \ndata in shared memory as well that can't be easily versioned.\n\nOur primary goal is to establish a stable point that can provide \nextension developers (and users who call their functions) with a \nreliable guarantee that we are up to date with all the changes made to \nthe extension.\n\nLast week, Michael Paquer's patch introduced sys caches and invalidation \ncallbacks to watch such actions. But curiously, while we just execute \nsome extension's functions like:\nSELECT pg_extension.smth()\nwe don't lock any table, don't apply invalidation messages at all, don't \nhave an information about updating/altering/deletion of our extension's \nobjects!\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Thu, 12 Sep 2024 11:32:47 +0200",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Accept invalidation messages before the query starts inside a\n transaction"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe have several reports that logical decoding uses memory much more\nthan logical_decoding_work_mem[1][2][3]. For instance in one of the\nreports[1], even though users set logical_decoding_work_mem to\n'256MB', a walsender process was killed by OOM because of using more\nthan 4GB memory.\n\nAs per the discussion in these threads so far, what happened was that\nthere was huge memory fragmentation in rb->tup_context.\nrb->tup_context uses GenerationContext with 8MB memory blocks. We\ncannot free memory blocks until all memory chunks in the block are\nfreed. If there is a long-running transaction making changes, its\nchanges could be spread across many memory blocks and we end up not\nbeing able to free memory blocks unless the long-running transaction\nis evicted or completed. Since we don't account fragmentation, block\nheader size, nor chunk header size into per-transaction memory usage\n(i.e. txn->size), rb->size could be less than\nlogical_decoding_work_mem but the actual allocated memory in the\ncontext is much higher than logical_decoding_work_mem.\n\nWe can reproduce this issue with the attached patch\nrb_excessive_memory_reproducer.patch (not intended to commit) that\nadds a memory usage reporting and includes the test. After running the\ntap test contrib/test_decoding/t/002_rb_memory.pl, we can see a memory\nusage report in the server logs like follows:\n\nLOG: reorderbuffer memory: logical_decoding_work_mem=268435456\nrb->size=17816832 rb->tup_context=1082130304 rb->context=1086267264\n\nWhich means that the logical decoding allocated 1GB memory in spite of\nlogical_decoding_work_mem being 256MB.\n\nOne idea to deal with this problem is that we use\nMemoryContextMemAllocated() as the reorderbuffer's memory usage\ninstead of txn->total_size. That is, we evict transactions until the\nvalue returned by MemoryContextMemAllocated() gets lower than\nlogical_decoding_work_mem. However, it could end up evicting a lot of\n(possibly all) transactions because the transaction whose decoded\ntuples data are spread across memory context blocks could be evicted\nafter all other larger transactions are evicted (note that we evict\ntransactions from largest to smallest). Therefore, if we want to do\nthat, we would need to change the eviction algorithm to for example\nLSN order instead of transaction size order. Furthermore,\nreorderbuffer's changes that are counted in txn->size (and therefore\nrb->size too) are stored in different memory contexts depending on the\nkinds. For example, decoded tuples are stored in rb->context,\nReorderBufferChange are stored in rb->change_context, and snapshot\ndata are stored in builder->context. So we would need to sort out\nwhich data is stored into which memory context and which memory\ncontext should be accounted for in the reorderbuffer's memory usage.\nWhich could be a large amount of work.\n\nAnother idea is to have memory context for storing decoded tuples per\ntransaction. That way, we can ensure that all memory blocks of the\ncontext are freed when the transaction is evicted or completed. I\nthink that this idea would be simpler and worth considering, so I\nattached the PoC patch, use_tup_context_per_ctx_v1.patch. Since the\ndecoded tuple data is created individually when the corresponding WAL\nrecord is decoded but is released collectively when it is released\n(e.g., transaction eviction), the bump memory context would fit the\nbest this case. 
One exception is that we immediately free the decoded\ntuple data if the transaction is already aborted. However, in this\ncase, I think it is possible to skip the WAL decoding in the first\nplace.\n\nOne thing we need to consider is that the number of transaction\nentries and memory contexts in the reorderbuffer could be very high\nsince it has entries for both top-level transaction entries and\nsub-transactions. To deal with that, the patch changes that decoded\ntuples of a subtransaction are stored into its top-level transaction's\ntuple context. We always evict sub-transactions and its top-level\ntransaction at the same time, I think it would not be a problem.\nChecking performance degradation due to using many memory contexts\nwould have to be done.\n\nEven with this idea, we would still have a mismatch between the actual\namount of allocated memory and rb->size, but it would be much better\nthan today. And something like the first idea would be necessary to\naddress this mismatch, and we can discuss it in a separate thread.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAMnUB3oYugXCBLSkih%2BqNsWQPciEwos6g_AMbnz_peNoxfHwyw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/17974-f8c9d353a62f414d%40postgresql.org\n[3] https://www.postgresql.org/message-id/DB9PR07MB71808AC6C7770AF2FB36B95BCB252%40DB9PR07MB7180.eurprd07.prod.outlook.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
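A standalone sketch of the per-transaction idea: give each top-level transaction its own arena for decoded tuple data so that eviction or commit releases whole blocks at once. This models the concept only; the TxnTupleArena type is illustrative, and the actual PoC uses PostgreSQL memory contexts (bump/generation) rather than raw malloc:

/* Toy per-transaction tuple arena; shows wholesale release on eviction. */
#include <stdlib.h>
#include <string.h>

typedef struct ArenaBlock
{
    struct ArenaBlock *next;
    size_t  used;
    char    data[8 * 1024];          /* small blocks, per the discussion */
} ArenaBlock;

typedef struct TxnTupleArena
{
    ArenaBlock *blocks;              /* all blocks owned by one transaction */
} TxnTupleArena;

static void *
arena_alloc(TxnTupleArena *a, size_t size)
{
    ArenaBlock *b = a->blocks;

    /* start a new block when the current one cannot hold the request */
    if (b == NULL || sizeof(b->data) - b->used < size)
    {
        b = calloc(1, sizeof(ArenaBlock));
        b->next = a->blocks;
        a->blocks = b;
    }
    b->used += size;
    return b->data + b->used - size;
}

/* Evicting or finishing the transaction frees every block it ever used. */
static void
arena_reset(TxnTupleArena *a)
{
    while (a->blocks)
    {
        ArenaBlock *next = a->blocks->next;

        free(a->blocks);
        a->blocks = next;
    }
}

int
main(void)
{
    TxnTupleArena txn = {NULL};

    for (int i = 0; i < 1000; i++)
        memset(arena_alloc(&txn, 100), 0, 100);
    arena_reset(&txn);               /* no per-tuple frees, no fragmentation */
    return 0;
}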
"msg_date": "Wed, 11 Sep 2024 15:32:39 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On 2024-09-12 07:32, Masahiko Sawada wrote:\n\nThanks a lot for working on this!\n\n> Hi all,\n> \n> We have several reports that logical decoding uses memory much more\n> than logical_decoding_work_mem[1][2][3]. For instance in one of the\n> reports[1], even though users set logical_decoding_work_mem to\n> '256MB', a walsender process was killed by OOM because of using more\n> than 4GB memory.\n> \n> As per the discussion in these threads so far, what happened was that\n> there was huge memory fragmentation in rb->tup_context.\n> rb->tup_context uses GenerationContext with 8MB memory blocks. We\n> cannot free memory blocks until all memory chunks in the block are\n> freed. If there is a long-running transaction making changes, its\n> changes could be spread across many memory blocks and we end up not\n> being able to free memory blocks unless the long-running transaction\n> is evicted or completed. Since we don't account fragmentation, block\n> header size, nor chunk header size into per-transaction memory usage\n> (i.e. txn->size), rb->size could be less than\n> logical_decoding_work_mem but the actual allocated memory in the\n> context is much higher than logical_decoding_work_mem.\n> \n> We can reproduce this issue with the attached patch\n> rb_excessive_memory_reproducer.patch (not intended to commit) that\n> adds a memory usage reporting and includes the test. After running the\n> tap test contrib/test_decoding/t/002_rb_memory.pl, we can see a memory\n> usage report in the server logs like follows:\n> \n> LOG: reorderbuffer memory: logical_decoding_work_mem=268435456\n> rb->size=17816832 rb->tup_context=1082130304 rb->context=1086267264\n> \n> Which means that the logical decoding allocated 1GB memory in spite of\n> logical_decoding_work_mem being 256MB.\n> \n> One idea to deal with this problem is that we use\n> MemoryContextMemAllocated() as the reorderbuffer's memory usage\n> instead of txn->total_size. That is, we evict transactions until the\n> value returned by MemoryContextMemAllocated() gets lower than\n> logical_decoding_work_mem. However, it could end up evicting a lot of\n> (possibly all) transactions because the transaction whose decoded\n> tuples data are spread across memory context blocks could be evicted\n> after all other larger transactions are evicted (note that we evict\n> transactions from largest to smallest). Therefore, if we want to do\n> that, we would need to change the eviction algorithm to for example\n> LSN order instead of transaction size order. Furthermore,\n> reorderbuffer's changes that are counted in txn->size (and therefore\n> rb->size too) are stored in different memory contexts depending on the\n> kinds. For example, decoded tuples are stored in rb->context,\n> ReorderBufferChange are stored in rb->change_context, and snapshot\n> data are stored in builder->context. So we would need to sort out\n> which data is stored into which memory context and which memory\n> context should be accounted for in the reorderbuffer's memory usage.\n> Which could be a large amount of work.\n> \n> Another idea is to have memory context for storing decoded tuples per\n> transaction. That way, we can ensure that all memory blocks of the\n> context are freed when the transaction is evicted or completed. I\n> think that this idea would be simpler and worth considering, so I\n> attached the PoC patch, use_tup_context_per_ctx_v1.patch. 
Since the\n> decoded tuple data is created individually when the corresponding WAL\n> record is decoded but is released collectively when it is released\n> (e.g., transaction eviction), the bump memory context would fit the\n> best this case. One exception is that we immediately free the decoded\n> tuple data if the transaction is already aborted. However, in this\n> case, I think it is possible to skip the WAL decoding in the first\n> place.\n\nI haven't read the patch yet, but it seems a reasonable approach.\n\n> One thing we need to consider is that the number of transaction\n> entries and memory contexts in the reorderbuffer could be very high\n> since it has entries for both top-level transaction entries and\n> sub-transactions. To deal with that, the patch changes that decoded\n> tuples of a subtransaction are stored into its top-level transaction's\n> tuple context. We always evict sub-transactions and its top-level\n> transaction at the same time, I think it would not be a problem.\n> Checking performance degradation due to using many memory contexts\n> would have to be done.\n\nYeah, and I imagine there would be cases where the current \nimplementation shows better performance, such as when there are no long \ntransactions, but compared to unexpected memory bloat and OOM kill, I \nthink it's far more better.\n\n> Even with this idea, we would still have a mismatch between the actual\n> amount of allocated memory and rb->size, but it would be much better\n> than today. And something like the first idea would be necessary to\n> address this mismatch, and we can discuss it in a separate thread.\n> \n> Regards,\n> \n> [1] \n> https://www.postgresql.org/message-id/CAMnUB3oYugXCBLSkih%2BqNsWQPciEwos6g_AMbnz_peNoxfHwyw%40mail.gmail.com\n> [2] \n> https://www.postgresql.org/message-id/17974-f8c9d353a62f414d%40postgresql.org\n> [3] \n> https://www.postgresql.org/message-id/DB9PR07MB71808AC6C7770AF2FB36B95BCB252%40DB9PR07MB7180.eurprd07.prod.outlook.com\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Fri, 13 Sep 2024 14:26:27 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> We have several reports that logical decoding uses memory much more\n> than logical_decoding_work_mem[1][2][3]. For instance in one of the\n> reports[1], even though users set logical_decoding_work_mem to\n> '256MB', a walsender process was killed by OOM because of using more\n> than 4GB memory.\n>\n> As per the discussion in these threads so far, what happened was that\n> there was huge memory fragmentation in rb->tup_context.\n> rb->tup_context uses GenerationContext with 8MB memory blocks. We\n> cannot free memory blocks until all memory chunks in the block are\n> freed. If there is a long-running transaction making changes, its\n> changes could be spread across many memory blocks and we end up not\n> being able to free memory blocks unless the long-running transaction\n> is evicted or completed. Since we don't account fragmentation, block\n> header size, nor chunk header size into per-transaction memory usage\n> (i.e. txn->size), rb->size could be less than\n> logical_decoding_work_mem but the actual allocated memory in the\n> context is much higher than logical_decoding_work_mem.\n>\n\nIt is not clear to me how the fragmentation happens. Is it because of\nsome interleaving transactions which are even ended but the memory\ncorresponding to them is not released? Can we try reducing the size of\n8MB memory blocks? The comment atop allocation says: \"XXX the\nallocation sizes used below pre-date generation context's block\ngrowing code. These values should likely be benchmarked and set to\nmore suitable values.\", so do we need some tuning here?\n\n> We can reproduce this issue with the attached patch\n> rb_excessive_memory_reproducer.patch (not intended to commit) that\n> adds a memory usage reporting and includes the test. After running the\n> tap test contrib/test_decoding/t/002_rb_memory.pl, we can see a memory\n> usage report in the server logs like follows:\n>\n> LOG: reorderbuffer memory: logical_decoding_work_mem=268435456\n> rb->size=17816832 rb->tup_context=1082130304 rb->context=1086267264\n>\n> Which means that the logical decoding allocated 1GB memory in spite of\n> logical_decoding_work_mem being 256MB.\n>\n> One idea to deal with this problem is that we use\n> MemoryContextMemAllocated() as the reorderbuffer's memory usage\n> instead of txn->total_size. That is, we evict transactions until the\n> value returned by MemoryContextMemAllocated() gets lower than\n> logical_decoding_work_mem. However, it could end up evicting a lot of\n> (possibly all) transactions because the transaction whose decoded\n> tuples data are spread across memory context blocks could be evicted\n> after all other larger transactions are evicted (note that we evict\n> transactions from largest to smallest). Therefore, if we want to do\n> that, we would need to change the eviction algorithm to for example\n> LSN order instead of transaction size order. Furthermore,\n> reorderbuffer's changes that are counted in txn->size (and therefore\n> rb->size too) are stored in different memory contexts depending on the\n> kinds. For example, decoded tuples are stored in rb->context,\n> ReorderBufferChange are stored in rb->change_context, and snapshot\n> data are stored in builder->context. 
So we would need to sort out\n> which data is stored into which memory context and which memory\n> context should be accounted for in the reorderbuffer's memory usage.\n> Which could be a large amount of work.\n>\n> Another idea is to have memory context for storing decoded tuples per\n> transaction. That way, we can ensure that all memory blocks of the\n> context are freed when the transaction is evicted or completed. I\n> think that this idea would be simpler and worth considering, so I\n> attached the PoC patch, use_tup_context_per_ctx_v1.patch. Since the\n> decoded tuple data is created individually when the corresponding WAL\n> record is decoded but is released collectively when it is released\n> (e.g., transaction eviction), the bump memory context would fit the\n> best this case. One exception is that we immediately free the decoded\n> tuple data if the transaction is already aborted. However, in this\n> case, I think it is possible to skip the WAL decoding in the first\n> place.\n>\n> One thing we need to consider is that the number of transaction\n> entries and memory contexts in the reorderbuffer could be very high\n> since it has entries for both top-level transaction entries and\n> sub-transactions. To deal with that, the patch changes that decoded\n> tuples of a subtransaction are stored into its top-level transaction's\n> tuple context.\n>\n\nWon't that impact the calculation for ReorderBufferLargestTXN() which\ncan select either subtransaction or top-level xact?\n\n> We always evict sub-transactions and its top-level\n> transaction at the same time, I think it would not be a problem.\n> Checking performance degradation due to using many memory contexts\n> would have to be done.\n>\n\nThe generation context has been introduced in commit a4ccc1ce which\nclaims that it has significantly reduced logical decoding's memory\nusage and improved its performance. Won't changing it requires us to\nvalidate all the cases which led us to use generation context in the\nfirst place?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Sep 2024 16:28:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > We have several reports that logical decoding uses memory much more\n> > than logical_decoding_work_mem[1][2][3]. For instance in one of the\n> > reports[1], even though users set logical_decoding_work_mem to\n> > '256MB', a walsender process was killed by OOM because of using more\n> > than 4GB memory.\n> >\n> > As per the discussion in these threads so far, what happened was that\n> > there was huge memory fragmentation in rb->tup_context.\n> > rb->tup_context uses GenerationContext with 8MB memory blocks. We\n> > cannot free memory blocks until all memory chunks in the block are\n> > freed. If there is a long-running transaction making changes, its\n> > changes could be spread across many memory blocks and we end up not\n> > being able to free memory blocks unless the long-running transaction\n> > is evicted or completed. Since we don't account fragmentation, block\n> > header size, nor chunk header size into per-transaction memory usage\n> > (i.e. txn->size), rb->size could be less than\n> > logical_decoding_work_mem but the actual allocated memory in the\n> > context is much higher than logical_decoding_work_mem.\n> >\n>\n> It is not clear to me how the fragmentation happens. Is it because of\n> some interleaving transactions which are even ended but the memory\n> corresponding to them is not released?\n\nIn a generation context, we can free a memory block only when all\nmemory chunks there are freed. Therefore, individual tuple buffers are\nalready pfree()'ed but the underlying memory blocks are not freed.\n\n> Can we try reducing the size of\n> 8MB memory blocks? The comment atop allocation says: \"XXX the\n> allocation sizes used below pre-date generation context's block\n> growing code. These values should likely be benchmarked and set to\n> more suitable values.\", so do we need some tuning here?\n\nReducing the size of the 8MB memory block would be one solution and\ncould be better as it could be back-patchable. It would mitigate the\nproblem but would not resolve it. I agree to try reducing it and do\nsome benchmark tests. If it reasonably makes the problem less likely\nto happen, it would be a good solution.\n\n>\n> > We can reproduce this issue with the attached patch\n> > rb_excessive_memory_reproducer.patch (not intended to commit) that\n> > adds a memory usage reporting and includes the test. After running the\n> > tap test contrib/test_decoding/t/002_rb_memory.pl, we can see a memory\n> > usage report in the server logs like follows:\n> >\n> > LOG: reorderbuffer memory: logical_decoding_work_mem=268435456\n> > rb->size=17816832 rb->tup_context=1082130304 rb->context=1086267264\n> >\n> > Which means that the logical decoding allocated 1GB memory in spite of\n> > logical_decoding_work_mem being 256MB.\n> >\n> > One idea to deal with this problem is that we use\n> > MemoryContextMemAllocated() as the reorderbuffer's memory usage\n> > instead of txn->total_size. That is, we evict transactions until the\n> > value returned by MemoryContextMemAllocated() gets lower than\n> > logical_decoding_work_mem. 
However, it could end up evicting a lot of\n> > (possibly all) transactions because the transaction whose decoded\n> > tuples data are spread across memory context blocks could be evicted\n> > after all other larger transactions are evicted (note that we evict\n> > transactions from largest to smallest). Therefore, if we want to do\n> > that, we would need to change the eviction algorithm to for example\n> > LSN order instead of transaction size order. Furthermore,\n> > reorderbuffer's changes that are counted in txn->size (and therefore\n> > rb->size too) are stored in different memory contexts depending on the\n> > kinds. For example, decoded tuples are stored in rb->context,\n> > ReorderBufferChange are stored in rb->change_context, and snapshot\n> > data are stored in builder->context. So we would need to sort out\n> > which data is stored into which memory context and which memory\n> > context should be accounted for in the reorderbuffer's memory usage.\n> > Which could be a large amount of work.\n> >\n> > Another idea is to have memory context for storing decoded tuples per\n> > transaction. That way, we can ensure that all memory blocks of the\n> > context are freed when the transaction is evicted or completed. I\n> > think that this idea would be simpler and worth considering, so I\n> > attached the PoC patch, use_tup_context_per_ctx_v1.patch. Since the\n> > decoded tuple data is created individually when the corresponding WAL\n> > record is decoded but is released collectively when it is released\n> > (e.g., transaction eviction), the bump memory context would fit the\n> > best this case. One exception is that we immediately free the decoded\n> > tuple data if the transaction is already aborted. However, in this\n> > case, I think it is possible to skip the WAL decoding in the first\n> > place.\n> >\n> > One thing we need to consider is that the number of transaction\n> > entries and memory contexts in the reorderbuffer could be very high\n> > since it has entries for both top-level transaction entries and\n> > sub-transactions. To deal with that, the patch changes that decoded\n> > tuples of a subtransaction are stored into its top-level transaction's\n> > tuple context.\n> >\n>\n> Won't that impact the calculation for ReorderBufferLargestTXN() which\n> can select either subtransaction or top-level xact?\n\nYeah, I missed that we could evict only sub-transactions when choosing\nthe largest transaction. We need to find a better solution.\n\n>\n> > We always evict sub-transactions and its top-level\n> > transaction at the same time, I think it would not be a problem.\n> > Checking performance degradation due to using many memory contexts\n> > would have to be done.\n> >\n>\n> The generation context has been introduced in commit a4ccc1ce which\n> claims that it has significantly reduced logical decoding's memory\n> usage and improved its performance. Won't changing it requires us to\n> validate all the cases which led us to use generation context in the\n> first place?\n\nAgreed. Will investigate the thread and check the cases.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 16 Sep 2024 10:12:50 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "Hi,\r\n\r\n> We have several reports that logical decoding uses memory much more\r\n> than logical_decoding_work_mem[1][2][3]. For instance in one of the\r\n> reports[1], even though users set logical_decoding_work_mem to\r\n> '256MB', a walsender process was killed by OOM because of using more\r\n> than 4GB memory.\r\n\r\nI appreciate your work on logical replication and am interested in the thread.\r\nI've heard this issue from others, and this has been the barrier to using logical\r\nreplication. Please let me know if I can help with benchmarking, other\r\nmeasurements, etc.\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 17 Sep 2024 05:56:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > We have several reports that logical decoding uses memory much more\n> > > than logical_decoding_work_mem[1][2][3]. For instance in one of the\n> > > reports[1], even though users set logical_decoding_work_mem to\n> > > '256MB', a walsender process was killed by OOM because of using more\n> > > than 4GB memory.\n> > >\n> > > As per the discussion in these threads so far, what happened was that\n> > > there was huge memory fragmentation in rb->tup_context.\n> > > rb->tup_context uses GenerationContext with 8MB memory blocks. We\n> > > cannot free memory blocks until all memory chunks in the block are\n> > > freed. If there is a long-running transaction making changes, its\n> > > changes could be spread across many memory blocks and we end up not\n> > > being able to free memory blocks unless the long-running transaction\n> > > is evicted or completed. Since we don't account fragmentation, block\n> > > header size, nor chunk header size into per-transaction memory usage\n> > > (i.e. txn->size), rb->size could be less than\n> > > logical_decoding_work_mem but the actual allocated memory in the\n> > > context is much higher than logical_decoding_work_mem.\n> > >\n> >\n> > It is not clear to me how the fragmentation happens. Is it because of\n> > some interleaving transactions which are even ended but the memory\n> > corresponding to them is not released?\n>\n> In a generation context, we can free a memory block only when all\n> memory chunks there are freed. Therefore, individual tuple buffers are\n> already pfree()'ed but the underlying memory blocks are not freed.\n>\n\nI understood this part but didn't understand the cases leading to this\nproblem. For example, if there is a large (and only) transaction in\nthe system that allocates many buffers for change records during\ndecoding, in the end, it should free memory for all the buffers\nallocated in the transaction. So, wouldn't that free all the memory\nchunks corresponding to the memory blocks allocated? My guess was that\nwe couldn't free all the chunks because there were small interleaving\ntransactions that allocated memory but didn't free it before the large\ntransaction ended.\n\n> > Can we try reducing the size of\n> > 8MB memory blocks? The comment atop allocation says: \"XXX the\n> > allocation sizes used below pre-date generation context's block\n> > growing code. These values should likely be benchmarked and set to\n> > more suitable values.\", so do we need some tuning here?\n>\n> Reducing the size of the 8MB memory block would be one solution and\n> could be better as it could be back-patchable. It would mitigate the\n> problem but would not resolve it. I agree to try reducing it and do\n> some benchmark tests. If it reasonably makes the problem less likely\n> to happen, it would be a good solution.\n>\n\nmakes sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Sep 2024 14:35:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 2:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 12, 2024 at 4:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > We have several reports that logical decoding uses memory much more\n> > > > than logical_decoding_work_mem[1][2][3]. For instance in one of the\n> > > > reports[1], even though users set logical_decoding_work_mem to\n> > > > '256MB', a walsender process was killed by OOM because of using more\n> > > > than 4GB memory.\n> > > >\n> > > > As per the discussion in these threads so far, what happened was that\n> > > > there was huge memory fragmentation in rb->tup_context.\n> > > > rb->tup_context uses GenerationContext with 8MB memory blocks. We\n> > > > cannot free memory blocks until all memory chunks in the block are\n> > > > freed. If there is a long-running transaction making changes, its\n> > > > changes could be spread across many memory blocks and we end up not\n> > > > being able to free memory blocks unless the long-running transaction\n> > > > is evicted or completed. Since we don't account fragmentation, block\n> > > > header size, nor chunk header size into per-transaction memory usage\n> > > > (i.e. txn->size), rb->size could be less than\n> > > > logical_decoding_work_mem but the actual allocated memory in the\n> > > > context is much higher than logical_decoding_work_mem.\n> > > >\n> > >\n> > > It is not clear to me how the fragmentation happens. Is it because of\n> > > some interleaving transactions which are even ended but the memory\n> > > corresponding to them is not released?\n> >\n> > In a generation context, we can free a memory block only when all\n> > memory chunks there are freed. Therefore, individual tuple buffers are\n> > already pfree()'ed but the underlying memory blocks are not freed.\n> >\n>\n> I understood this part but didn't understand the cases leading to this\n> problem. For example, if there is a large (and only) transaction in\n> the system that allocates many buffers for change records during\n> decoding, in the end, it should free memory for all the buffers\n> allocated in the transaction. So, wouldn't that free all the memory\n> chunks corresponding to the memory blocks allocated? My guess was that\n> we couldn't free all the chunks because there were small interleaving\n> transactions that allocated memory but didn't free it before the large\n> transaction ended.\n\nWe haven't actually checked with the person who reported the problem,\nso this is just a guess, but I think there were concurrent\ntransactions, including long-running INSERT transactions. For example,\nsuppose a transaction that inserts 10 million rows and many OLTP-like\n(short) transactions are running at the same time. The scenario I\nthought of was that one 8MB Generation Context Block contains 1MB of\nthe large insert transaction changes, and the other 7MB contains short\nOLTP transaction changes. If there are just 256 such blocks, even\nafter all short-transactions have completed, the Generation Context\nwill have allocated 2GB of memory until we decode the commit record of\nthe large transaction, but rb->size will say 256MB.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Sep 2024 11:23:44 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 2:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Can we try reducing the size of\n> > > 8MB memory blocks? The comment atop allocation says: \"XXX the\n> > > allocation sizes used below pre-date generation context's block\n> > > growing code. These values should likely be benchmarked and set to\n> > > more suitable values.\", so do we need some tuning here?\n> >\n> > Reducing the size of the 8MB memory block would be one solution and\n> > could be better as it could be back-patchable. It would mitigate the\n> > problem but would not resolve it. I agree to try reducing it and do\n> > some benchmark tests. If it reasonably makes the problem less likely\n> > to happen, it would be a good solution.\n> >\n>\n> makes sense.\n\nI've done some benchmark tests for three different code bases with\ndifferent test cases. In short, reducing the generation memory context\nblock size to 8kB seems to be promising; it mitigates the problem\nwhile keeping a similar performance.\n\nHere are three code bases that I used:\n\n* head: current head code.\n* per-tx-bump: the proposed idea (with a slight change; each sub and\ntop-level transactions have its own bump memory context to store\ndecoded tuples).\n* 8kb-mem-block: same as head except for changing the generation\nmemory block size from 8MB to 8kB.\n\nAnd here are test cases and results:\n\n1. Memory usage check\n\nI've run the test that I shared before and checked the maximum amount\nof memory allocated in the reorderbuffer context shown by\nMemoryContextMemAllocated(). Here are results:\n\nhead: 2.1GB (while rb->size showing 43MB)\nper-tx-bump: 50MB (while rb->size showing 43MB)\n8kb-mem-block: 54MB (while rb->size showing 43MB)\n\nI've confirmed that the excessive memory usage issue didn't happen in\nthe per-tx-bump case and the 8kb-mem-block cases.\n\n2. Decoding many sub transactions\n\nIIUC this kind of workload was a trigger to make us invent the\nGeneration Context for logical decoding[1]. The single top-level\ntransaction has 1M sub-transactions each of which insert a tuple. Here\nare results:\n\nhead: 31694.163 ms (00:31.694)\nper-tx-bump: 32661.752 ms (00:32.662)\n8kb-mem-block: 31834.872 ms (00:31.835)\n\nThe head and 8kb-mem-block showed similar results whereas I see there\nis a bit of regression on per-tx-bump. I think this is because of the\noverhead of creating and deleting memory contexts for each\nsub-transactions.\n\n3. Decoding a big transaction\n\nThe next test case I did is to decode a single big transaction that\ninserts 10M rows. I set logical_decoding_work_mem large enough to\navoid spilling behavior. Here are results:\n\nhead: 19859.113 ms (00:19.859)\nper-tx-bump: 19422.308 ms (00:19.422)\n8kb-mem-block: 19923.600 ms (00:19.924)\n\nThere were no big differences. FYI, I also checked the maximum memory\nusage for this test case as well:\n\nhead: 1.53GB\nper-tx-bump: 1.4GB\n8kb-mem-block: 1.53GB\n\nThe per-tx-bump used a bit lesser memory probably thanks to bump\nmemory contexts.\n\n4. Decoding many short transactions.\n\nThe last test case I did is to decode a bunch of short pgbench\ntransactions (10k transactions). 
Here are results:\n\nhead: 31694.163 ms (00:31.694)\nper-tx-bump: 32661.752 ms (00:32.662)\n8kb-mem-block: 31834.872 ms (00:31.835)\n\nI can see a similar trend to test case #2 above.\n\nOverall, reducing the generation context memory block size to 8kB\nseems to be promising. And using the bump memory context per\ntransaction didn't bring as much performance improvement as I expected in\nthese cases.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/20160706185502.1426.28143@wrigleys.postgresql.org\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 18 Sep 2024 16:53:27 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
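For reference, the 8kb-mem-block variant above amounts to a one-line change to the memory context that stores decoded tuples. The sketch below shows roughly what that change looks like; the call site and the exact GenerationContextCreate() arguments are recalled from recent branches and may not match the benchmarked code exactly.

```c
/*
 * Sketch of the "8kb-mem-block" variant: create the reorder buffer's
 * tuple context with 8kB generation blocks instead of the current 8MB
 * (SLAB_LARGE_BLOCK_SIZE) blocks.  Treat this as illustrative rather
 * than a drop-in patch.
 */
buffer->tup_context = GenerationContextCreate(new_ctx,
                                              "Tuples",
                                              SLAB_DEFAULT_BLOCK_SIZE,  /* 8kB */
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE);
```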
{
"msg_contents": "On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I've done some benchmark tests for three different code bases with\n> different test cases. In short, reducing the generation memory context\n> block size to 8kB seems to be promising; it mitigates the problem\n> while keeping a similar performance.\n\nDid you try any sizes between 8KB and 8MB? 1000x reduction seems\nquite large a jump. There is additional overhead from having more\nblocks. It means more malloc() work and more free() work when deleting\na context. It would be nice to see some numbers with all powers of 2\nbetween 8KB and 8MB. I imagine the returns are diminishing as the\nblock size is reduced further.\n\nAnother alternative idea would be to defragment transactions with a\nlarge number of changes after they grow to some predefined size.\nDefragmentation would just be a matter of performing\npalloc/memcpy/pfree for each change. If that's all done at once, all\nthe changes for that transaction would be contiguous in memory. If\nyou're smart about what the trigger point is for performing the\ndefragmentation then I imagine there's not much risk of performance\nregression for the general case. For example, you might only want to\ntrigger it when MemoryContextMemAllocated() for the generation context\nexceeds logical_decoding_work_mem by some factor and only do it for\ntransactions where the size of the changes exceeds some threshold.\n\n> Overall, reducing the generation context memory block size to 8kB\n> seems to be promising. And using the bump memory context per\n> transaction didn't bring performance improvement than I expected in\n> these cases.\n\nHaving a bump context per transaction would cause a malloc() every\ntime you create a new context and a free() each time you delete the\ncontext when cleaning up the reorder buffer for the transaction.\nGenerationContext has a \"freeblock\" field that it'll populate instead\nof freeing a block. That block will be reused next time a new block is\nrequired. For truly FIFO workloads that never need an oversized\nblock, all new blocks will come from the freeblock field once the\nfirst block becomes unused. See the comments in GenerationFree(). I\nexpect this is why bump contexts are slower than the generation\ncontext for your short transaction workload.\n\nDavid\n\n\n",
"msg_date": "Thu, 19 Sep 2024 13:16:03 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
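A minimal sketch of the defragmentation idea described above, assuming a hypothetical helper name and simplified change handling. The real ReorderBufferChange layout varies across versions and covers several tuple-bearing change types, so the field accesses here are illustrative only.

```c
/*
 * Hypothetical defragmentation pass: walk a transaction's changes and
 * re-pack each tuple payload into a fresh allocation so the data becomes
 * contiguous again and mostly-empty generation blocks can be freed.
 */
static void
ReorderBufferDefragTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)
{
    dlist_iter  iter;

    dlist_foreach(iter, &txn->changes)
    {
        ReorderBufferChange *change =
            dlist_container(ReorderBufferChange, node, iter.cur);
        HeapTuple   oldtup = change->data.tp.newtuple;  /* illustrative field */
        HeapTuple   newtup;
        Size        len;

        if (oldtup == NULL)
            continue;

        /* palloc/memcpy/pfree: copy the tuple and drop the old allocation */
        len = HEAPTUPLESIZE + oldtup->t_len;
        newtup = (HeapTuple) MemoryContextAlloc(rb->tup_context, len);
        memcpy(newtup, oldtup, len);
        newtup->t_data = (HeapTupleHeader) ((char *) newtup + HEAPTUPLESIZE);
        pfree(oldtup);
        change->data.tp.newtuple = newtup;
    }
}
```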
{
"msg_contents": "\n\nOn 2024/09/19 8:53, Masahiko Sawada wrote:\n> On Tue, Sep 17, 2024 at 2:06 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, Sep 16, 2024 at 10:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>\n>>> On Fri, Sep 13, 2024 at 3:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> Can we try reducing the size of\n>>>> 8MB memory blocks? The comment atop allocation says: \"XXX the\n>>>> allocation sizes used below pre-date generation context's block\n>>>> growing code. These values should likely be benchmarked and set to\n>>>> more suitable values.\", so do we need some tuning here?\n>>>\n>>> Reducing the size of the 8MB memory block would be one solution and\n>>> could be better as it could be back-patchable. It would mitigate the\n>>> problem but would not resolve it. I agree to try reducing it and do\n>>> some benchmark tests. If it reasonably makes the problem less likely\n>>> to happen, it would be a good solution.\n>>>\n>>\n>> makes sense.\n> \n> I've done some benchmark tests for three different code bases with\n> different test cases. In short, reducing the generation memory context\n> block size to 8kB seems to be promising; it mitigates the problem\n> while keeping a similar performance.\n\nSounds good!\n\nI believe the memory bloat issue in the reorder buffer should be\nconsidered a bug. Since this solution isn’t too invasive, I think\nit’s worth considering back-patch to older versions.\n\nThen, if we find a better approach, we can apply it to v18 or later.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Thu, 19 Sep 2024 12:53:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Thu, Sep 19, 2024 at 6:46 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've done some benchmark tests for three different code bases with\n> > different test cases. In short, reducing the generation memory context\n> > block size to 8kB seems to be promising; it mitigates the problem\n> > while keeping a similar performance.\n>\n> Did you try any sizes between 8KB and 8MB? 1000x reduction seems\n> quite large a jump. There is additional overhead from having more\n> blocks. It means more malloc() work and more free() work when deleting\n> a context. It would be nice to see some numbers with all powers of 2\n> between 8KB and 8MB. I imagine the returns are diminishing as the\n> block size is reduced further.\n>\n\nGood idea.\n\n> Another alternative idea would be to defragment transactions with a\n> large number of changes after they grow to some predefined size.\n> Defragmentation would just be a matter of performing\n> palloc/memcpy/pfree for each change. If that's all done at once, all\n> the changes for that transaction would be contiguous in memory. If\n> you're smart about what the trigger point is for performing the\n> defragmentation then I imagine there's not much risk of performance\n> regression for the general case. For example, you might only want to\n> trigger it when MemoryContextMemAllocated() for the generation context\n> exceeds logical_decoding_work_mem by some factor and only do it for\n> transactions where the size of the changes exceeds some threshold.\n>\n\nAfter collecting the changes that exceed 'logical_decoding_work_mem',\none can choose to stream the transaction and free the changes to avoid\nhitting this problem, however, we can use that or some other constant\nto decide the point of defragmentation. The other point we need to\nthink in this idea is whether we actually need any defragmentation at\nall. This will depend on whether there are concurrent transactions\nbeing decoded. This would require benchmarking to see the performance\nimpact.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Sep 2024 09:25:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Wed, Sep 18, 2024 at 8:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2024 at 6:46 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > I've done some benchmark tests for three different code bases with\n> > > different test cases. In short, reducing the generation memory context\n> > > block size to 8kB seems to be promising; it mitigates the problem\n> > > while keeping a similar performance.\n> >\n> > Did you try any sizes between 8KB and 8MB? 1000x reduction seems\n> > quite large a jump. There is additional overhead from having more\n> > blocks. It means more malloc() work and more free() work when deleting\n> > a context. It would be nice to see some numbers with all powers of 2\n> > between 8KB and 8MB. I imagine the returns are diminishing as the\n> > block size is reduced further.\n> >\n>\n> Good idea.\n\nAgreed.\n\nI've done other benchmarking tests while changing the memory block\nsizes from 8kB to 8MB. I measured the execution time of logical\ndecoding of one transaction that inserted 10M rows. I set\nlogical_decoding_work_mem large enough to avoid spilling behavior. In\nthis scenario, we allocate many memory chunks while decoding the\ntransaction and resulting in calling more malloc() in smaller memory\nblock sizes. Here are results (an average of 3 executions):\n\n8kB: 19747.870 ms\n16kB: 19780.025 ms\n32kB: 19760.575 ms\n64kB: 19772.387 ms\n128kB: 19825.385 ms\n256kB: 19781.118 ms\n512kB: 19808.138 ms\n1MB: 19757.640 ms\n2MB: 19801.429 ms\n4MB: 19673.996 ms\n8MB: 19643.547 ms\n\nInterestingly, there were no noticeable differences in the execution\ntime. I've checked the number of allocated memory blocks in each case\nand more blocks are allocated in smaller block size cases. For\nexample, when the logical decoding used the maximum memory (about\n1.5GB), we allocated about 80k blocks in 8kb memory block size case\nand 80 blocks in 8MB memory block cases.\n\nIt could have different results in different environments. I've\nattached the patch that I used to change the memory block size via\nGUC. It would be appreciated if someone also could do a similar test\nto see the differences.\n\n>\n> > Another alternative idea would be to defragment transactions with a\n> > large number of changes after they grow to some predefined size.\n> > Defragmentation would just be a matter of performing\n> > palloc/memcpy/pfree for each change. If that's all done at once, all\n> > the changes for that transaction would be contiguous in memory. If\n> > you're smart about what the trigger point is for performing the\n> > defragmentation then I imagine there's not much risk of performance\n> > regression for the general case. For example, you might only want to\n> > trigger it when MemoryContextMemAllocated() for the generation context\n> > exceeds logical_decoding_work_mem by some factor and only do it for\n> > transactions where the size of the changes exceeds some threshold.\n> >\n\nInteresting idea.\n\n> After collecting the changes that exceed 'logical_decoding_work_mem',\n> one can choose to stream the transaction and free the changes to avoid\n> hitting this problem, however, we can use that or some other constant\n> to decide the point of defragmentation. The other point we need to\n> think in this idea is whether we actually need any defragmentation at\n> all. This will depend on whether there are concurrent transactions\n> being decoded. 
This would require benchmarking to see the performance\n> impact.\n>\n\nThe fact that we're using rb->size and txn->size to choose the\ntransactions to evict could make this idea less attractive. Even if we\ndefragment transactions, rb->size and txn->size don't change.\nTherefore, it doesn't mean we can avoid evicting transactions due to\nfull of logical_decoding_work_mem, but just mean the amount of\nallocated memory might have been reduced.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 19 Sep 2024 10:03:18 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
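The patch mentioned above is attached to the original mail and is not reproduced in this archive. As a rough idea only, a developer GUC for this experiment could look like the sketch below; the name rb_mem_block_size appears in later replies, but the exact definition, limits and wiring here are assumptions, not the actual patch.

```c
/* Assumed developer GUC for experimenting with the tuple-context block
 * size; value in kilobytes.  Not the actual patch from this thread. */
int         rb_mem_block_size = 8;

/* Entry to add to ConfigureNamesInt[] in guc_tables.c (sketch). */
{
    {"rb_mem_block_size", PGC_USERSET, DEVELOPER_OPTIONS,
        gettext_noop("Block size of the reorder buffer's tuple memory context."),
        NULL,
        GUC_UNIT_KB | GUC_NOT_IN_SAMPLE
    },
    &rb_mem_block_size,
    8, 8, 8 * 1024,             /* default 8kB, range 8kB .. 8MB */
    NULL, NULL, NULL
},

/* The value (converted to bytes) would then be fed into the
 * GenerationContextCreate() call for the "Tuples" context. */
```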
{
"msg_contents": "Hi,\n\nOn Mon, Sep 16, 2024 at 10:56 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi,\n>\n> > We have several reports that logical decoding uses memory much more\n> > than logical_decoding_work_mem[1][2][3]. For instance in one of the\n> > reports[1], even though users set logical_decoding_work_mem to\n> > '256MB', a walsender process was killed by OOM because of using more\n> > than 4GB memory.\n>\n> I appreciate your work on logical replication and am interested in the thread.\n> I've heard this issue from others, and this has been the barrier to using logical\n> replication. Please let me know if I can help with benchmarking, other\n> measurements, etc.\n\nThank you for your interest in this patch. I've just shared some\nbenchmark results (with a patch) that could be different depending on\nthe environment[1]. I would be appreciated if you also do similar\ntests and share the results.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAaN4jaJ%3DW%2BWprHvc0cGCf80SkiFQhRc6R%2BX3-05HAFqw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 19 Sep 2024 10:06:06 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},

{
"msg_contents": "On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I've done other benchmarking tests while changing the memory block\n> sizes from 8kB to 8MB. I measured the execution time of logical\n> decoding of one transaction that inserted 10M rows. I set\n> logical_decoding_work_mem large enough to avoid spilling behavior. In\n> this scenario, we allocate many memory chunks while decoding the\n> transaction and resulting in calling more malloc() in smaller memory\n> block sizes. Here are results (an average of 3 executions):\n\nI was interested in seeing the memory consumption with the test that\nwas causing an OOM due to the GenerationBlock fragmentation. You saw\nbad results with 8MB blocks and good results with 8KB blocks. The\nmeasure that's interesting here is the MemoryContextMemAllocated() for\nthe GenerationContext in question.\n\n> The fact that we're using rb->size and txn->size to choose the\n> transactions to evict could make this idea less attractive. Even if we\n> defragment transactions, rb->size and txn->size don't change.\n> Therefore, it doesn't mean we can avoid evicting transactions due to\n> full of logical_decoding_work_mem, but just mean the amount of\n> allocated memory might have been reduced.\n\nI had roughly imagined that you'd add an extra field to store\ntxn->size when the last fragmentation was done and only defrag the\ntransaction when the ReorderBufferTXN->size is, say, double the last\nsize when the changes were last defragmented. Of course, if the first\ndefragmentation was enough to drop MemoryContextMemAllocated() below\nthe concerning threshold above logical_decoding_work_mem then further\ndefrags wouldn't be done, at least, until such times as the\nMemoryContextMemAllocated() became concerning again. If you didn't\nwant to a full Size variable for the defragmentation size, you could\njust store a uint8 to store which power of 2 ReorderBufferTXN->size\nwas when it was last defragmented. There are plenty of holds in that\nstruct that could be filled without enlarging the struct.\n\nIn general, it's a bit annoying to have to code around this\nGenerationContext fragmentation issue. Unfortunately, AllocSet could\nalso suffer from fragmentation issues too if lots of chunks end up on\nfreelists that are never reused, so using another context type might\njust create a fragmentation issue for a different workload.\n\nDavid\n\n\n",
"msg_date": "Fri, 20 Sep 2024 11:43:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
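A sketch of the trigger check described above, assuming a new (currently nonexistent) defrag_size_log2 field in ReorderBufferTXN and an arbitrary 2x factor over logical_decoding_work_mem:

```c
/*
 * Hypothetical trigger for defragmentation: only consider it when the
 * context holds far more than the configured budget, and only re-defrag
 * a transaction once its tracked size has roughly doubled since the
 * previous pass.  defrag_size_log2 is an assumed uint8 field.
 */
static bool
ReorderBufferShouldDefrag(ReorderBuffer *rb, ReorderBufferTXN *txn)
{
    Size    allocated = MemoryContextMemAllocated(rb->tup_context, true);
    int     size_log2;

    if (allocated < (Size) logical_decoding_work_mem * 1024 * 2)
        return false;

    size_log2 = pg_leftmost_one_pos64((uint64) Max(txn->size, 1));
    if (size_log2 <= txn->defrag_size_log2)     /* hypothetical field */
        return false;

    txn->defrag_size_log2 = (uint8) size_log2;
    return true;
}
```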
{
"msg_contents": "On Thu, Sep 19, 2024 at 10:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Sep 18, 2024 at 8:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Sep 19, 2024 at 6:46 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >\n> > > On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > I've done some benchmark tests for three different code bases with\n> > > > different test cases. In short, reducing the generation memory context\n> > > > block size to 8kB seems to be promising; it mitigates the problem\n> > > > while keeping a similar performance.\n> > >\n> > > Did you try any sizes between 8KB and 8MB? 1000x reduction seems\n> > > quite large a jump. There is additional overhead from having more\n> > > blocks. It means more malloc() work and more free() work when deleting\n> > > a context. It would be nice to see some numbers with all powers of 2\n> > > between 8KB and 8MB. I imagine the returns are diminishing as the\n> > > block size is reduced further.\n> > >\n> >\n> > Good idea.\n>\n> Agreed.\n>\n> I've done other benchmarking tests while changing the memory block\n> sizes from 8kB to 8MB. I measured the execution time of logical\n> decoding of one transaction that inserted 10M rows. I set\n> logical_decoding_work_mem large enough to avoid spilling behavior. In\n> this scenario, we allocate many memory chunks while decoding the\n> transaction and resulting in calling more malloc() in smaller memory\n> block sizes. Here are results (an average of 3 executions):\n>\n> 8kB: 19747.870 ms\n> 16kB: 19780.025 ms\n> 32kB: 19760.575 ms\n> 64kB: 19772.387 ms\n> 128kB: 19825.385 ms\n> 256kB: 19781.118 ms\n> 512kB: 19808.138 ms\n> 1MB: 19757.640 ms\n> 2MB: 19801.429 ms\n> 4MB: 19673.996 ms\n> 8MB: 19643.547 ms\n>\n> Interestingly, there were no noticeable differences in the execution\n> time. I've checked the number of allocated memory blocks in each case\n> and more blocks are allocated in smaller block size cases. For\n> example, when the logical decoding used the maximum memory (about\n> 1.5GB), we allocated about 80k blocks in 8kb memory block size case\n> and 80 blocks in 8MB memory block cases.\n>\n\nWhat exactly do these test results mean? Do you want to prove that\nthere is no regression by using smaller block sizes?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 Sep 2024 11:13:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've done other benchmarking tests while changing the memory block\n> > sizes from 8kB to 8MB. I measured the execution time of logical\n> > decoding of one transaction that inserted 10M rows. I set\n> > logical_decoding_work_mem large enough to avoid spilling behavior. In\n> > this scenario, we allocate many memory chunks while decoding the\n> > transaction and resulting in calling more malloc() in smaller memory\n> > block sizes. Here are results (an average of 3 executions):\n>\n> I was interested in seeing the memory consumption with the test that\n> was causing an OOM due to the GenerationBlock fragmentation.\n>\n\n+1. That test will be helpful.\n\n> > The fact that we're using rb->size and txn->size to choose the\n> > transactions to evict could make this idea less attractive. Even if we\n> > defragment transactions, rb->size and txn->size don't change.\n> > Therefore, it doesn't mean we can avoid evicting transactions due to\n> > full of logical_decoding_work_mem, but just mean the amount of\n> > allocated memory might have been reduced.\n>\n> I had roughly imagined that you'd add an extra field to store\n> txn->size when the last fragmentation was done and only defrag the\n> transaction when the ReorderBufferTXN->size is, say, double the last\n> size when the changes were last defragmented. Of course, if the first\n> defragmentation was enough to drop MemoryContextMemAllocated() below\n> the concerning threshold above logical_decoding_work_mem then further\n> defrags wouldn't be done, at least, until such times as the\n> MemoryContextMemAllocated() became concerning again. If you didn't\n> want to a full Size variable for the defragmentation size, you could\n> just store a uint8 to store which power of 2 ReorderBufferTXN->size\n> was when it was last defragmented. There are plenty of holds in that\n> struct that could be filled without enlarging the struct.\n>\n> In general, it's a bit annoying to have to code around this\n> GenerationContext fragmentation issue.\n>\n\nRight, and I am also slightly afraid that this may not cause some\nregression in other cases where defrag wouldn't help.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 Sep 2024 11:16:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\n> Thank you for your interest in this patch. I've just shared some\r\n> benchmark results (with a patch) that could be different depending on\r\n> the environment[1]. I would be appreciated if you also do similar\r\n> tests and share the results.\r\n\r\nOkay, I did similar tests, the attached script is the test runner. rb_mem_block_size\r\nwas changed from 8kB to 8MB. Below table show the result (millisecond unit).\r\nEach cell is the average of 5 runs.\r\n\r\n==========\r\n8kB\t12877.4\r\n16kB\t12829.1\r\n32kB\t11793.3\r\n64kB\t13134.4\r\n128kB\t13353.1\r\n256kB\t11664.0\r\n512kB\t12603.4\r\n1MB\t13443.8\r\n2MB\t12469.0\r\n4MB\t12651.4\r\n8MB\t12381.4\r\n==========\r\n\r\nThe standard deviation of measurements was 100-500 ms, there were no noticeable\r\ndifferences on my env as well.\r\n\r\nAlso, I've checked the statistics of the generation context, and confirmed the\r\nnumber of allocated blocks is x1000 higher if the block size is changed 8kB->8MB.\r\n[1] shows the output from MemoryContextStats(), just in case. IIUC, the difference\r\nof actual used space comes from the header of each block. Each block has attributes\r\nfor management so that the total usage becomes larger based on the number.\r\n\r\n[1]\r\n8kB\r\nTuples: 724959232 total in 88496 blocks (10000000 chunks); 3328 free (0 chunks); 724955904 used\r\nGrand total: 724959232 bytes in 88496 blocks; 3328 free (0 chunks); 724955904 used\r\n\r\n8MB\r\nTuples: 721420288 total in 86 blocks (10000000 chunks); 1415344 free (0 chunks); 720004944 used\r\nGrand total: 721420288 bytes in 86 blocks; 1415344 free (0 chunks); 720004944 use\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 20 Sep 2024 09:36:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Using per-transaction memory contexts for storing decoded tuples"
},
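The two MemoryContextStats() reports above also let one back out the apparent per-block overhead. The small standalone program below (plain C, not PostgreSQL code) just redoes that arithmetic from the posted numbers; the roughly 40 bytes it prints is in the range of a per-block header on a 64-bit build, though the exact GenerationBlock layout is not checked here.

```c
/* Back-of-the-envelope check of the numbers posted above: the extra space
 * in the 8kB run divided by the extra number of blocks gives the apparent
 * per-block bookkeeping overhead. */
#include <stdio.h>

int
main(void)
{
    long long   total_8kb = 724959232LL, blocks_8kb = 88496;
    long long   total_8mb = 721420288LL, blocks_8mb = 86;

    double      per_block =
        (double) (total_8kb - total_8mb) / (double) (blocks_8kb - blocks_8mb);

    printf("extra bytes per additional block: %.1f\n", per_block);
    return 0;
}
```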
{
"msg_contents": "On Thu, Sep 19, 2024 at 10:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > I've done other benchmarking tests while changing the memory block\n> > > sizes from 8kB to 8MB. I measured the execution time of logical\n> > > decoding of one transaction that inserted 10M rows. I set\n> > > logical_decoding_work_mem large enough to avoid spilling behavior. In\n> > > this scenario, we allocate many memory chunks while decoding the\n> > > transaction and resulting in calling more malloc() in smaller memory\n> > > block sizes. Here are results (an average of 3 executions):\n> >\n> > I was interested in seeing the memory consumption with the test that\n> > was causing an OOM due to the GenerationBlock fragmentation.\n> >\n>\n> +1. That test will be helpful.\n\nSure. Here are results of peak memory usage in bytes reported by\nMemoryContextMemAllocated() (when rb->size shows 43MB):\n\n8kB: 52,371,328\n16kB: 52,887,424\n32kB: 53,428,096\n64kB: 55,099,264\n128kB: 86,163,328\n256kB: 149,340,032\n512kB: 273,334,144\n1MB: 523,419,520\n2MB: 1,021,493,120\n4MB: 1,984,085,888\n8MB: 2,130,886,528\n\nProbably we can increase the size to 64kB?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 20 Sep 2024 10:22:50 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
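For anyone wanting to reproduce peak figures like the ones above, a simple approach is to track the high-water mark of MemoryContextMemAllocated() on the tuple context from inside the reorder buffer. The helper below is a debugging sketch with assumed names, not proposed code; where exactly it gets called (for example, from the memory accounting update path) is left open.

```c
/*
 * Debugging sketch: record the peak allocated size of the reorder
 * buffer's tuple context alongside the accounted rb->size.
 */
static Size peak_tup_allocated = 0;

static void
ReorderBufferRecordPeak(ReorderBuffer *rb)
{
    Size    allocated = MemoryContextMemAllocated(rb->tup_context, true);

    if (allocated > peak_tup_allocated)
    {
        peak_tup_allocated = allocated;
        elog(DEBUG1, "reorderbuffer tuple context peak: %zu bytes (rb->size %zu)",
             peak_tup_allocated, rb->size);
    }
}
```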
{
"msg_contents": "On Fri, 20 Sept 2024 at 17:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > In general, it's a bit annoying to have to code around this\n> > GenerationContext fragmentation issue.\n>\n> Right, and I am also slightly afraid that this may not cause some\n> regression in other cases where defrag wouldn't help.\n\nYeah, that's certainly a possibility. I was hoping that\nMemoryContextMemAllocated() being much larger than logical_work_mem\ncould only happen when there is fragmentation, but certainly, you\ncould be wasting effort trying to defrag transactions where the\nchanges all arrive in WAL consecutively and there is no\ndefragmentation. It might be some other large transaction that's\ncausing the context's allocations to be fragmented. I don't have any\ngood ideas on how to avoid wasting effort on non-problematic\ntransactions. Maybe there's something that could be done if we knew\nthe LSN of the first and last change and the gap between the LSNs was\nmuch larger than the WAL space used for this transaction. That would\nlikely require tracking way more stuff than we do now, however.\n\nWith the smaller blocks idea, I'm a bit concerned that using smaller\nblocks could cause regressions on systems that are better at releasing\nmemory back to the OS after free() as no doubt malloc() would often be\nslower on those systems. There have been some complaints recently\nabout glibc being a bit too happy to keep hold of memory after free()\nand I wondered if that was the reason why the small block test does\nnot cause much of a performance regression. I wonder how the small\nblock test would look on Mac, FreeBSD or Windows. I think it would be\nrisky to assume that all is well with reducing the block size after\ntesting on a single platform.\n\nDavid\n\n\n",
"msg_date": "Sun, 22 Sep 2024 17:56:59 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
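The platform question above can be probed outside PostgreSQL with a small glibc-only experiment: allocate and free many 8kB blocks and see how much memory the allocator still holds afterwards. malloc_stats() and malloc_trim() are glibc extensions, so this sketch deliberately will not build on macOS, FreeBSD or Windows, which is part of the point being made.

```c
/* Standalone, glibc-specific experiment: how much of the freed memory
 * does the allocator keep cached, and what does an explicit trim do? */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

#define NBLOCKS 100000
#define BLKSZ   (8 * 1024)

int
main(void)
{
    static void *blocks[NBLOCKS];

    for (int i = 0; i < NBLOCKS; i++)
        blocks[i] = malloc(BLKSZ);

    for (int i = 0; i < NBLOCKS; i++)
        free(blocks[i]);

    puts("after free():");
    malloc_stats();             /* reports memory still held by the allocator */

    malloc_trim(0);
    puts("after malloc_trim(0):");
    malloc_stats();

    return 0;
}
```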
{
"msg_contents": "On Sun, Sep 22, 2024 at 11:27 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 20 Sept 2024 at 17:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > In general, it's a bit annoying to have to code around this\n> > > GenerationContext fragmentation issue.\n> >\n> > Right, and I am also slightly afraid that this may not cause some\n> > regression in other cases where defrag wouldn't help.\n>\n> Yeah, that's certainly a possibility. I was hoping that\n> MemoryContextMemAllocated() being much larger than logical_work_mem\n> could only happen when there is fragmentation, but certainly, you\n> could be wasting effort trying to defrag transactions where the\n> changes all arrive in WAL consecutively and there is no\n> defragmentation. It might be some other large transaction that's\n> causing the context's allocations to be fragmented. I don't have any\n> good ideas on how to avoid wasting effort on non-problematic\n> transactions. Maybe there's something that could be done if we knew\n> the LSN of the first and last change and the gap between the LSNs was\n> much larger than the WAL space used for this transaction. That would\n> likely require tracking way more stuff than we do now, however.\n>\n\nWith more information tracking, we could avoid some non-problematic\ntransactions but still, it would be difficult to predict that we\ndidn't harm many cases because to make the memory non-contiguous, we\nonly need a few interleaving small transactions. We can try to think\nof ideas for implementing defragmentation in our code if we first can\nprove that smaller block sizes cause problems.\n\n> With the smaller blocks idea, I'm a bit concerned that using smaller\n> blocks could cause regressions on systems that are better at releasing\n> memory back to the OS after free() as no doubt malloc() would often be\n> slower on those systems. There have been some complaints recently\n> about glibc being a bit too happy to keep hold of memory after free()\n> and I wondered if that was the reason why the small block test does\n> not cause much of a performance regression. I wonder how the small\n> block test would look on Mac, FreeBSD or Windows. I think it would be\n> risky to assume that all is well with reducing the block size after\n> testing on a single platform.\n>\n\nGood point. We need extensive testing on different platforms, as you\nsuggest, to verify if smaller block sizes caused any regressions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 23 Sep 2024 09:58:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Fri, Sep 20, 2024 at 10:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2024 at 10:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >\n> > > On Fri, 20 Sept 2024 at 05:03, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > I've done other benchmarking tests while changing the memory block\n> > > > sizes from 8kB to 8MB. I measured the execution time of logical\n> > > > decoding of one transaction that inserted 10M rows. I set\n> > > > logical_decoding_work_mem large enough to avoid spilling behavior. In\n> > > > this scenario, we allocate many memory chunks while decoding the\n> > > > transaction and resulting in calling more malloc() in smaller memory\n> > > > block sizes. Here are results (an average of 3 executions):\n> > >\n> > > I was interested in seeing the memory consumption with the test that\n> > > was causing an OOM due to the GenerationBlock fragmentation.\n> > >\n> >\n> > +1. That test will be helpful.\n>\n> Sure. Here are results of peak memory usage in bytes reported by\n> MemoryContextMemAllocated() (when rb->size shows 43MB):\n>\n> 8kB: 52,371,328\n> 16kB: 52,887,424\n> 32kB: 53,428,096\n> 64kB: 55,099,264\n> 128kB: 86,163,328\n> 256kB: 149,340,032\n> 512kB: 273,334,144\n> 1MB: 523,419,520\n> 2MB: 1,021,493,120\n> 4MB: 1,984,085,888\n> 8MB: 2,130,886,528\n>\n> Probably we can increase the size to 64kB?\n>\n\nYeah, but before deciding on a particular size, we need more testing\non different platforms as suggested by David.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 23 Sep 2024 10:00:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Thu, Sep 19, 2024 at 10:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Sep 19, 2024 at 10:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Sep 18, 2024 at 8:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Sep 19, 2024 at 6:46 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > >\n> > > > On Thu, 19 Sept 2024 at 11:54, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > I've done some benchmark tests for three different code bases with\n> > > > > different test cases. In short, reducing the generation memory context\n> > > > > block size to 8kB seems to be promising; it mitigates the problem\n> > > > > while keeping a similar performance.\n> > > >\n> > > > Did you try any sizes between 8KB and 8MB? 1000x reduction seems\n> > > > quite large a jump. There is additional overhead from having more\n> > > > blocks. It means more malloc() work and more free() work when deleting\n> > > > a context. It would be nice to see some numbers with all powers of 2\n> > > > between 8KB and 8MB. I imagine the returns are diminishing as the\n> > > > block size is reduced further.\n> > > >\n> > >\n> > > Good idea.\n> >\n> > Agreed.\n> >\n> > I've done other benchmarking tests while changing the memory block\n> > sizes from 8kB to 8MB. I measured the execution time of logical\n> > decoding of one transaction that inserted 10M rows. I set\n> > logical_decoding_work_mem large enough to avoid spilling behavior. In\n> > this scenario, we allocate many memory chunks while decoding the\n> > transaction and resulting in calling more malloc() in smaller memory\n> > block sizes. Here are results (an average of 3 executions):\n> >\n> > 8kB: 19747.870 ms\n> > 16kB: 19780.025 ms\n> > 32kB: 19760.575 ms\n> > 64kB: 19772.387 ms\n> > 128kB: 19825.385 ms\n> > 256kB: 19781.118 ms\n> > 512kB: 19808.138 ms\n> > 1MB: 19757.640 ms\n> > 2MB: 19801.429 ms\n> > 4MB: 19673.996 ms\n> > 8MB: 19643.547 ms\n> >\n> > Interestingly, there were no noticeable differences in the execution\n> > time. I've checked the number of allocated memory blocks in each case\n> > and more blocks are allocated in smaller block size cases. For\n> > example, when the logical decoding used the maximum memory (about\n> > 1.5GB), we allocated about 80k blocks in 8kb memory block size case\n> > and 80 blocks in 8MB memory block cases.\n> >\n>\n> What exactly do these test results mean? Do you want to prove that\n> there is no regression by using smaller block sizes?\n\nYes, there was no noticeable performance regression at least in this\ntest scenario.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 Sep 2024 14:36:09 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Sun, Sep 22, 2024 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Sep 22, 2024 at 11:27 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Fri, 20 Sept 2024 at 17:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > > In general, it's a bit annoying to have to code around this\n> > > > GenerationContext fragmentation issue.\n> > >\n> > > Right, and I am also slightly afraid that this may not cause some\n> > > regression in other cases where defrag wouldn't help.\n> >\n> > Yeah, that's certainly a possibility. I was hoping that\n> > MemoryContextMemAllocated() being much larger than logical_work_mem\n> > could only happen when there is fragmentation, but certainly, you\n> > could be wasting effort trying to defrag transactions where the\n> > changes all arrive in WAL consecutively and there is no\n> > defragmentation. It might be some other large transaction that's\n> > causing the context's allocations to be fragmented. I don't have any\n> > good ideas on how to avoid wasting effort on non-problematic\n> > transactions. Maybe there's something that could be done if we knew\n> > the LSN of the first and last change and the gap between the LSNs was\n> > much larger than the WAL space used for this transaction. That would\n> > likely require tracking way more stuff than we do now, however.\n> >\n>\n> With more information tracking, we could avoid some non-problematic\n> transactions but still, it would be difficult to predict that we\n> didn't harm many cases because to make the memory non-contiguous, we\n> only need a few interleaving small transactions. We can try to think\n> of ideas for implementing defragmentation in our code if we first can\n> prove that smaller block sizes cause problems.\n>\n> > With the smaller blocks idea, I'm a bit concerned that using smaller\n> > blocks could cause regressions on systems that are better at releasing\n> > memory back to the OS after free() as no doubt malloc() would often be\n> > slower on those systems. There have been some complaints recently\n> > about glibc being a bit too happy to keep hold of memory after free()\n> > and I wondered if that was the reason why the small block test does\n> > not cause much of a performance regression. I wonder how the small\n> > block test would look on Mac, FreeBSD or Windows. I think it would be\n> > risky to assume that all is well with reducing the block size after\n> > testing on a single platform.\n> >\n>\n> Good point. We need extensive testing on different platforms, as you\n> suggest, to verify if smaller block sizes caused any regressions.\n\n+1. I'll do the same test on my Mac as well.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 25 Sep 2024 22:57:17 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Mon, 23 Sept 2024 at 09:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Sep 22, 2024 at 11:27 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Fri, 20 Sept 2024 at 17:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > > In general, it's a bit annoying to have to code around this\n> > > > GenerationContext fragmentation issue.\n> > >\n> > > Right, and I am also slightly afraid that this may not cause some\n> > > regression in other cases where defrag wouldn't help.\n> >\n> > Yeah, that's certainly a possibility. I was hoping that\n> > MemoryContextMemAllocated() being much larger than logical_work_mem\n> > could only happen when there is fragmentation, but certainly, you\n> > could be wasting effort trying to defrag transactions where the\n> > changes all arrive in WAL consecutively and there is no\n> > defragmentation. It might be some other large transaction that's\n> > causing the context's allocations to be fragmented. I don't have any\n> > good ideas on how to avoid wasting effort on non-problematic\n> > transactions. Maybe there's something that could be done if we knew\n> > the LSN of the first and last change and the gap between the LSNs was\n> > much larger than the WAL space used for this transaction. That would\n> > likely require tracking way more stuff than we do now, however.\n> >\n>\n> With more information tracking, we could avoid some non-problematic\n> transactions but still, it would be difficult to predict that we\n> didn't harm many cases because to make the memory non-contiguous, we\n> only need a few interleaving small transactions. We can try to think\n> of ideas for implementing defragmentation in our code if we first can\n> prove that smaller block sizes cause problems.\n>\n> > With the smaller blocks idea, I'm a bit concerned that using smaller\n> > blocks could cause regressions on systems that are better at releasing\n> > memory back to the OS after free() as no doubt malloc() would often be\n> > slower on those systems. There have been some complaints recently\n> > about glibc being a bit too happy to keep hold of memory after free()\n> > and I wondered if that was the reason why the small block test does\n> > not cause much of a performance regression. I wonder how the small\n> > block test would look on Mac, FreeBSD or Windows. I think it would be\n> > risky to assume that all is well with reducing the block size after\n> > testing on a single platform.\n> >\n>\n> Good point. We need extensive testing on different platforms, as you\n> suggest, to verify if smaller block sizes caused any regressions.\n\nI did similar tests on Windows. rb_mem_block_size was changed from 8kB\nto 8MB. 
Below table shows the result (average of 5 runs) and Standard\nDeviation (of 5 runs) for each block-size.\n\n===============================================\nblock-size | Average time (ms) | Standard Deviation (ms)\n-------------------------------------------------------------------------------------\n8kb | 12580.879 ms | 144.6923467\n16kb | 12442.7256 ms | 94.02799006\n32kb | 12370.7292 ms | 97.7958552\n64kb | 11877.4888 ms | 222.2419142\n128kb | 11828.8568 ms | 129.732941\n256kb | 11801.086 ms | 20.60030913\n512kb | 12361.4172 ms | 65.27390105\n1MB | 12343.3732 ms | 80.84427202\n2MB | 12357.675 ms | 79.40017604\n4MB | 12395.8364 ms | 76.78273689\n8MB | 11712.8862 ms | 50.74323039\n==============================================\n\n From the results, I think there is a small regression for small block size.\n\nI ran the tests in git bash. I have also attached the test script.\n\nThanks and Regards,\nShlok Kyal",
"msg_date": "Fri, 27 Sep 2024 13:09:13 +0530",
"msg_from": "Shlok Kyal <shlok.kyal.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
},
{
"msg_contents": "On Fri, Sep 27, 2024 at 12:39 AM Shlok Kyal <shlok.kyal.oss@gmail.com> wrote:\n>\n> On Mon, 23 Sept 2024 at 09:59, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Sep 22, 2024 at 11:27 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > >\n> > > On Fri, 20 Sept 2024 at 17:46, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Fri, Sep 20, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > > > In general, it's a bit annoying to have to code around this\n> > > > > GenerationContext fragmentation issue.\n> > > >\n> > > > Right, and I am also slightly afraid that this may not cause some\n> > > > regression in other cases where defrag wouldn't help.\n> > >\n> > > Yeah, that's certainly a possibility. I was hoping that\n> > > MemoryContextMemAllocated() being much larger than logical_work_mem\n> > > could only happen when there is fragmentation, but certainly, you\n> > > could be wasting effort trying to defrag transactions where the\n> > > changes all arrive in WAL consecutively and there is no\n> > > defragmentation. It might be some other large transaction that's\n> > > causing the context's allocations to be fragmented. I don't have any\n> > > good ideas on how to avoid wasting effort on non-problematic\n> > > transactions. Maybe there's something that could be done if we knew\n> > > the LSN of the first and last change and the gap between the LSNs was\n> > > much larger than the WAL space used for this transaction. That would\n> > > likely require tracking way more stuff than we do now, however.\n> > >\n> >\n> > With more information tracking, we could avoid some non-problematic\n> > transactions but still, it would be difficult to predict that we\n> > didn't harm many cases because to make the memory non-contiguous, we\n> > only need a few interleaving small transactions. We can try to think\n> > of ideas for implementing defragmentation in our code if we first can\n> > prove that smaller block sizes cause problems.\n> >\n> > > With the smaller blocks idea, I'm a bit concerned that using smaller\n> > > blocks could cause regressions on systems that are better at releasing\n> > > memory back to the OS after free() as no doubt malloc() would often be\n> > > slower on those systems. There have been some complaints recently\n> > > about glibc being a bit too happy to keep hold of memory after free()\n> > > and I wondered if that was the reason why the small block test does\n> > > not cause much of a performance regression. I wonder how the small\n> > > block test would look on Mac, FreeBSD or Windows. I think it would be\n> > > risky to assume that all is well with reducing the block size after\n> > > testing on a single platform.\n> > >\n> >\n> > Good point. We need extensive testing on different platforms, as you\n> > suggest, to verify if smaller block sizes caused any regressions.\n>\n> I did similar tests on Windows. rb_mem_block_size was changed from 8kB\n> to 8MB. 
Below table shows the result (average of 5 runs) and Standard\n> Deviation (of 5 runs) for each block-size.\n>\n> ===============================================\n> block-size | Average time (ms) | Standard Deviation (ms)\n> -------------------------------------------------------------------------------------\n> 8kb | 12580.879 ms | 144.6923467\n> 16kb | 12442.7256 ms | 94.02799006\n> 32kb | 12370.7292 ms | 97.7958552\n> 64kb | 11877.4888 ms | 222.2419142\n> 128kb | 11828.8568 ms | 129.732941\n> 256kb | 11801.086 ms | 20.60030913\n> 512kb | 12361.4172 ms | 65.27390105\n> 1MB | 12343.3732 ms | 80.84427202\n> 2MB | 12357.675 ms | 79.40017604\n> 4MB | 12395.8364 ms | 76.78273689\n> 8MB | 11712.8862 ms | 50.74323039\n> ==============================================\n>\n> From the results, I think there is a small regression for small block size.\n>\n> I ran the tests in git bash. I have also attached the test script.\n\nThank you for testing on Windows! I've run the same benchmark on Mac\n(Sonoma 14.7, M1 Pro):\n\n8kB: 4852.198 ms\n16kB: 4822.733 ms\n32kB: 4776.776 ms\n64kB: 4851.433 ms\n128kB: 4804.821 ms\n256kB: 4781.778 ms\n512kB: 4776.486 ms\n1MB: 4783.456 ms\n2MB: 4770.671 ms\n4MB: 4785.800 ms\n8MB: 4747.447 ms\n\nI can see there is a small regression for small block sizes.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Sep 2024 09:53:48 -0700",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using per-transaction memory contexts for storing decoded tuples"
}
]